“Hardly worth the effort”? Medical journals’ policies and their editors’ and publishers’ views on trial registration and publication bias: quantitative and qualitative study
BMJ 2013; 347 doi: https://doi.org/10.1136/bmj.f5248 (Published 06 September 2013) Cite this as: BMJ 2013;347:f5248
All rapid responses
We read with interest the paper by Wager and Williams.(1) The authors acknowledge some limitations of their work, but several more should be considered.
The present study itself has a language bias because it assessed only journals with instructions in English. It may therefore underestimate requirements for trial registration, because less well known journals (more of which publish in languages other than English) are less likely than others to require registration of trials.
The truncation symbol "*" that the authors used is not recognised on all sites; some sites use "?", "$", or "#" instead of "*".
It is not completely clear why the word "Helsinki" was combined with "regist*" using the operator AND; Helsinki is not directly related to trial registration. Moreover, AND limits the search to sites that mention both "regist*" and "Helsinki". Sites that mention registration but say nothing about Helsinki would therefore be missed, which is not in line with the aims of this study.
The paper also has a sampling bias, a form of selection bias akin to publication bias. The authors selected certain journals that had recently changed their policy, which may overestimate the chance that a journal's policy requires trial registration. Whether the effect is an overestimate or an underestimate, such a subsample is not representative of the 200 journals and has a high potential for bias.
We also do not know how the authors reconciled purposeful sampling (based on 31 journals with a recent change in their trial registration policy) with representative sampling for characteristics such as publication frequency, type of publisher, and status of the editor. These two approaches are not consistent with each other, although not completely in conflict.
Moreover, the data show that the journals that agreed to participate in interviews were not sufficiently varied (for example, there was no journal at all from South America, Asia, the Middle East, Africa, or New Zealand) and may not be representative of the 200 journals originally selected, as the authors mention.
In addition, the authors' account of the journals that cooperated indicates a very low response rate, which adds further uncertainty through non-response bias.
Publication bias is a systematic tendency in the publication of particular types of results, whether positive, negative, or neutral.(2) Perhaps publication in journals that are not indexed in major databases, or that are otherwise hard to trace, is another facet of publication bias.
Some editorials have suggested that journals dedicated to publishing negative findings are another way of fighting publication bias. We believe that journals publishing negative or statistically non-significant findings will see their impact factors rise, because such evidence is scarce in every field and meta-analyses actively seek it.
References
1. Wager E, Williams P. “Hardly worth the effort”? Medical journals’ policies and their editors’ and publishers’ views on trial registration and publication bias: quantitative and qualitative study. BMJ. 2013;347(7923):f5248.
2. Szklo M. Issues in publication and interpretation of research findings. J Clin Epidemiol 1991;44(suppl 1):109S-13S.
Competing interests: No competing interests
Ms. Wager,
I agree with your assessment that reprints may provide an even greater source of revenues for medical journals. The reluctance that you mention of medical journals to reveal their sources of income might be construed as circumstantial evidence in this regard. On the basis of Peter C. Gotzsche's new book, Deadly Medicines and Organised Crime (2013)--particularly Chapter 5, "Conflicts of Interest at Medical Journals"--I'm inclined to view the bulk order of reprints and use by pharmaceutical companies as within their budget for product advertising. Thus, bulk reprint orders are probably for medical journals simply the advertising rose by another name.
Competing interests: No competing interests
Response to Ms Lin:
Thank you for letting us know this encouraging news about physiotherapy journals. I hope other journals follow their lead.
Competing interests: No competing interests
Response to Professor Noble:
Thank you for your response to our study. You raise an interesting and important point. The few studies looking at journal financing have, ironically, often revealed just how opaque it is, since many journals refused to share information about their income from reprints (which I'd argue may be as important a source of revenue, if not more so, than advertisements). (See for example Hopewell & Clark Int J Technol Assess Health Care 2003;19:711-14 and Lundh et al PLoS Medicine 2010;7:e1000354)
I would certainly back calls for greater transparency in journal funding. One limitation of our study is that editors may have been unwilling to admit to fears that demanding registration might reduce their journals' revenues -- although, counterbalancing this is the fact that, since the FDA Amendments Act in 2008, pharmaceutical companies have been forced to register many of their studies.
Competing interests: No competing interests
The Williams et al. report (1) extends the debate about biomedical research transparency in publications but doesn't address one important factor relating to economics. How much do the surveyed journals rely on advertising from various sources to make a go of it? Could it be that the smaller journals that do not require trial registration and data access stand to benefit from the advertising of pharmaceutical and medical device companies if these journals provide an alternative outlet for publications favorable to their interests?
Peter Gotzsche's latest expose, Deadly Medicines and Organised Crime, addresses the extent to which the industry has turned medical journals into marketing tools. (2) He concludes (p. 283): "Medical journals have major conflicts of interest and should publish the amount they get from sales of reprints, supplements and advertising, and should check manuscripts about drugs or devices particularly carefully to ensure that they don't contribute to illegal marketing or ghost authorship."
1. Wager E, Williams P. "Hardly worth the effort"? Medical journals' policies and their editors' and publishers' views on trial registration and publication bias: quantitative and qualitative study. BMJ 2013;347:f5248.
2. Gotzsche PC. Deadly Medicines and Organised Crime: How Big Pharma Has Corrupted Healthcare. London: Radcliffe Publishing, 2013.
Competing interests: No competing interests
We read with interest the article published by Wager and Williams[1] outlining the percentage of medical journals requiring clinical trial registration, the reasons for journal editors' and publishers' reluctance to mandate trial registration, and the importance of trial registration in reducing publication bias and minimising selective reporting. The International Society of Physiotherapy Journal Editors (ISPJE, www.wcpt.org/ispje) supports prospective trial registration; in September 2012 we conducted a webinar promoting mandatory prospective clinical trial registration to physiotherapy journal editors. This was followed in late 2012/early 2013 by a joint editorial, published by 13 leading physiotherapy journals, announcing a policy of mandatory prospective clinical trial registration.[2] Since then more physiotherapy journals have followed suit in adopting this policy.[3]
A recent study found that only about 6% of randomised trials investigating the effects of physiotherapy interventions published in 2009 had been registered prospectively, although trials published in high impact journals were more likely to be registered than trials published in other journals.[4] Physiotherapy trials appear to lag behind trials in the wider biomedical field: one study showed that up to 61% of randomised trials published in Medline in 2010 were registered,[5] although it is unclear how many of these were registered prospectively. Before the ISPJE's push for mandatory prospective trial registration, only one physiotherapy journal had such a policy. Among the reasons for their reluctance, physiotherapy journal editors cited the fear of turning down a good quality trial for lack of prospective registration and of biasing against small trials. There was agreement among physiotherapy journal editors, however, that prospective trial registration was beneficial; this was to be achieved by first announcing the policy on each journal's website and then rolling it out by a future date. Since 1 January 2013, four more physiotherapy journals have implemented mandatory prospective registration. By 1 January 2015 all 13 physiotherapy journals that are signatories to the joint editorial will have adopted this policy, representing 100% of the Medline indexed physiotherapy journals on the ISPJE's database. In addition, the ISPJE, ISPJE members, and the World Confederation for Physical Therapy have signed a recent petition on trial registration led by organisations including the BMJ (http://www.alltrials.net). We hope that this will contribute to more transparent reporting of evidence and ultimately to improved patient care.
References
1. Wager E, Williams P. "Hardly worth the effort"? Medical journals' policies and their editors' and publishers' views on trial registration and publication bias: quantitative and qualitative study. BMJ 2013;347:f5248.
2. Costa LO, Lin CW, Grossi DB, Mancini MC, Swisher AK, Cook C, et al. Clinical trial registration in physiotherapy journals: recommendations from the International Society of Physiotherapy Journal Editors. Journal of Physiotherapy 2012;58(4):211-3.
3. Costa LOP, Lin C-WC, Grossi DB, Mancini MC, Swisher AK, Cook C, et al. Clinical trial registration in physiotherapy journals: recommendations from the International Society of Physiotherapy Journal Editors. European Journal of Physiotherapy 2013;15:42-45.
4. Pinto RZ, Elkins MR, Moseley AM, Sherrington C, Herbert RD, Maher CG, et al. Many randomized trials of physical therapy interventions are not adequately registered: a survey of 200 published trials. Physical Therapy 2012.
5. van de Wetering FT, Scholten RJPM, Haring T, Clarke M, Hooft L. Trial registration numbers are underreported in biomedical publications. PLoS One 2012;7(11):e49599.
Competing interests: No competing interests
Further response to Dr Goh
(with apologies if your name is Dr Shyan -- it's not clear from your posts):
You are correct that trial registration alone is not a panacea. If trials are merely registered but results are never made available, the problem of publication bias persists. However, in such cases the non-publication becomes visible, the possible publication bias can be quantified, and the trial sponsors or investigators can be held accountable or asked to explain the non-publication. That, to me, seems worth the relatively trivial burden of registering studies.
Registration (even without results posting) brings a further benefit for journal editors, peer reviewers, and readers, in that it highlights selective reporting (when outcomes are omitted) and misleading reporting (e.g. when primary and secondary outcomes are changed or switched). There is clear evidence (e.g. the work of An-Wen Chan) that these problems are worryingly common. Even if journals simply require a trial registration number and do no further checking themselves, this enables peer reviewers (before publication) and readers (after publication) to identify these problems. The cost to the journal is trivial, but the benefits are considerable.
While I sympathise with concerns about adding bureaucracy or costs, the amount of effort required to register a clinical trial is negligible compared with all the other work involved in securing funding, obtaining ethical approval, running the trial, and analysing and reporting it.
Competing interests: No competing interests
Response to Thomas Lemon:
I'm glad you found our article interesting and also that you raise the important point about training in research techniques and publication ethics. I admit a big conflict of interest, because this is something I do quite a bit of, for both drug companies and universities. When I teach doctors I am often struck by their lack of awareness of reporting guidelines (such as CONSORT) and of the need to register clinical trials. When reviewing papers I also come across problems in both the reporting of research (which is relatively easy to fix) but, more worryingly, also in trial design. Good research is essential for evidence-based medicine, as is 'evidence-literacy' and understanding of research designs among doctors (even if they aren't doing research). Good luck in your efforts to promote these!
Competing interests: I run training courses on writing and publishing and have written a book on publication strategy.
I would like to respond to Elizabeth Wager's statements.
As I stated, there is no robust evidence that clinical trial registration reduces publication bias.
Requiring biomedical journals to enforce clinical trial registration puts an unfair onus on individual journals' resources, particularly for smaller publishing companies.
While research on trial publication highlights the potential for selective reporting of results, there is as yet no evidence that having journals erect another submission barrier will lead to better outcomes.
A clinical analogy is this:
Cardiac arrhythmia post myocardial infarction is associated with higher mortality.
Drug A is a proven antiarrhythmic agent.
So giving Drug A to patients with cardiac arrhythmia should reduce mortality.
The CAST study by the National Heart, Lung, and Blood Institute (NHLBI) proved that this kind of logic is not always true. (Ironically, the CAST trial, which ran from 1986 to 1998, was registered with ClinicalTrials.gov retrospectively in 1999.)
Sure, there is a problem with publication bias, but having individual journals spend much more time and money vetting submitted papers may be akin to the Y2K syndrome, where the problem mattered to a limited few but everyone seemed obliged to get their systems checked anyway.
Besides, a big part of the problem is that some registered trials never lead to published results: there is no legal obligation to report results in the registry or in other publications, and nothing compels authors to publish. Paper submission rules certainly do not affect those trials or authors!
Competing interests: No competing interests
Re: “Hardly worth the effort”? Medical journals’ policies and their editors’ and publishers’ views on trial registration and publication bias: quantitative and qualitative study
Thank you for your detailed critique of our article. I’ve tried to respond to all your points below.
1 We are well aware of the limitation of only assessing journals with instructions in English and agree that this may underestimate requirements for trial registration. We hope other researchers will examine journals in other languages.
2 Truncation in some sites is insensitive to "*" which they have used. Some sites work with "?", "$" and "#" and not with "*".
We searched only within PDFs using this truncation, we did not use individual website search facilities. We also scanned website content visually, so the search term was a back-up in case we had missed something.
3 It is not completely clear why the word "Helsinki" was searched with word “AND” in addition to “regist”?
Our search did not include the word 'and'; as stated in our article, we searched for these terms separately. We included Helsinki because, as explained later in the article, the most recent versions of the Declaration call for trial registration.
4 You wrote that we selected journals with a recent change in their policy. This is correct but ONLY applies to the qualitative interviews. It did NOT apply to the random sample of 200 journals used in the quantitative part of our study.
5 We maintain that purposeful sampling is the most accurate description of our sampling method. Obviously, with the relatively small sample size it was not possible to account for every variable, but the consistency of our findings suggests that our sample was adequate.
6 You comment on the apparent low response rate, but, as explained in the paper, we purposely invited more journals than we needed so that sufficient numbers were available for interview during the study period. Again, the fact that responses reached 'saturation' and were consistent suggests that our sample was adequate for this type of qualitative research, which should not be confused with quantitative surveys, where response rates are more important.
7 You comment that “publishing in journals which are not indexed in prestigious sites or are not traceable can be another feature of publication bias”. This is the reason we chose journals from the Cochrane database and not from sites such as Medline. Cochrane systematic reviews include very thorough searches, contact with experts and hand searches of non-indexed journals, so this database is likely to be more representative of the total than more selective databases such as Medline or Scopus.
8 We agree that journals that publish negative or statistically non-significant findings are important and this was mentioned in our article. However, we believe that trial registration is essential and therefore hope that more journals (whatever they publish) will make this a requirement for publication.
Competing interests: I am an author on this paper.