Small study effects in meta-analyses of osteoarthritis trials: meta-epidemiological study
BMJ 2010; 341 doi: https://doi.org/10.1136/bmj.c3515 (Published 16 July 2010) Cite this as: BMJ 2010;341:c3515
All rapid responses
Dear Editor,
We were pleased to see that Nuesch and colleagues examined the effect
of small studies in meta-analyses of osteoarthritis trials [1]. The
conclusion, that small study effects can distort results of meta-analyses,
is not particularly new [2]. For example, in a large analysis of mortality
in patients experiencing a gastrointestinal bleed or perforation, the
event rate in smaller studies with fewer than 200 cases each was often
markedly different from the event rate in larger studies with many more
cases [3].
More important, though, than the potential for distortion by small trials in a
meta-analysis that also contains larger trials is the potential for getting the
answer completely wrong in a meta-analysis that contains ONLY small trials. The
potential for the random play of chance to have big effects when the number of
events is fewer than 200 has also been pointed out before [4-6], and the
propensity of meta-analyses of small trials to produce misleading results has
been demonstrated in a much-cited recent paper [7].
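The scale of that random play of chance is easy to illustrate with a short simulation. The sketch below (in Python, using entirely hypothetical trial sizes and an assumed 10% event rate; these are not data from any of the cited studies) repeatedly simulates two-arm trials in which both arms share the SAME true event rate, so any departure of the observed risk ratio from 1.0 is pure chance:

```python
import random

random.seed(1)

def simulated_risk_ratio(n_per_arm, p=0.1):
    """One two-arm trial with the SAME true event rate p in both arms;
    returns the observed risk ratio, or None if it is undefined."""
    events_t = sum(random.random() < p for _ in range(n_per_arm))
    events_c = sum(random.random() < p for _ in range(n_per_arm))
    if events_t == 0 or events_c == 0:
        return None  # zero-event arms are common in very small trials
    return (events_t / n_per_arm) / (events_c / n_per_arm)

def rr_spread(n_per_arm, n_trials=2000):
    """Approximate 2.5th and 97.5th centiles of the observed risk ratios."""
    rrs = sorted(r for r in (simulated_risk_ratio(n_per_arm)
                             for _ in range(n_trials)) if r is not None)
    k = len(rrs)
    return rrs[k // 40], rrs[k - 1 - k // 40]

for n in (25, 100, 1000):  # roughly 5, 20 and 200 expected events per trial
    lo, hi = rr_spread(n)
    print(f"{n:4d} patients per arm: 95% of observed RRs in {lo:.2f}-{hi:.2f}")
```

With only a handful of events per trial, chance alone produces observed risk ratios spanning several-fold in either direction; the spread only narrows to something clinically interpretable as the event count approaches a few hundred.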
The problem, of course, is that many, if not most, meta-analyses are
composed of small trials, and even in aggregate these do not amount to
sufficient numbers from which to draw conclusions, even if everything else
were perfect. Our experience is that whether one looks at complementary
therapies like acupuncture [8], or conventional therapies in difficult
situations like palliative care [9], too much is often made of too little.
The time is long past when meta-analyses of small studies can be
allowed to reach conclusions without pointing out the hole at the heart of
their analysis - too little information to be sure of a result. One
approach would be to agree a minimum number of events - beneficial and
harmful - below which a result cannot be trusted. Two hundred events is a
useful rule of thumb for believability. Size is an important source of
bias that needs to be considered alongside study quality and validity, as
we have recently suggested [10]. Size is not routinely covered in the
Cochrane risk of bias table; perhaps it should be.
The focus on events has one further benefit in pain studies: it
concentrates on clinically useful outcomes, which is particularly important
where the distribution of results is anything but Gaussian, where the average
result is obtained by few, and where substantial duration bias is known
[10].
References
[1] Nuesch E, Trelle S, Reichenbach S, Rutjes AW, Tschannen B, Altman
DG, Egger M, Juni P. Small study effects in meta-analyses of
osteoarthritis trials: meta-epidemiological study. BMJ 2010;341:c3515.
[2] Moore RA, Tramer MR, Carroll D, Wiffen PJ, McQuay HJ.
Quantitative systematic review of topically applied non-steroidal
anti-inflammatory drugs. BMJ 1998;316:333-8.
[3] Straube S, Tramer MR, Moore RA, Derry S, McQuay HJ. Mortality
with upper gastrointestinal bleeding and perforation: effects of time and
NSAID use. BMC Gastroenterol 2009;9:41.
[4] Shuster JJ. Fixing the number of events in large comparative
trials with low event rates: a binomial approach. Control Clin Trials
1993;14:198-208.
[5] Flather MD, Farkouh ME, Pogue JM, Yusuf S. Strengths and
limitations of meta-analysis: larger studies may be more reliable. Control
Clin Trials 1997;18:568-79.
[6] Moore RA, Gavaghan D, Tramer MR, Collins SL, McQuay HJ. Size is
everything - large amounts of information are needed to overcome random
effects in estimating direction. Pain 1998;78:209-16.
[7] Ioannidis JP. Why most published research findings are false.
PLoS Med 2005;2:e124.
[8] Derry CJ, Derry S, McQuay HJ, Moore RA. Systematic review of
systematic reviews of acupuncture published 1996-2005. Clin Med
2006;6:381-6.
[9] Wee B, Hadley G, Derry S. How useful are systematic reviews for
informing palliative care practice? Survey of 25 Cochrane systematic
reviews. BMC Palliat Care 2008;7:13.
[10] Moore RA, Eccleston C, Derry S, Wiffen P, Bell RF, Straube S,
McQuay H; for the ACTINPAIN writing group of the IASP Special Interest
Group (SIG) on Systematic Reviews in Pain Relief and the Cochrane Pain,
Palliative and Supportive Care Systematic Review Group editors. "Evidence"
in chronic pain - establishing best practice in the reporting of
systematic reviews. Pain 2010 Jun 1 [Epub ahead of print].
Competing interests: None declared
Excluding small trials from meta-analyses: the effect on heterogeneity
We read with great interest the paper published in the BMJ by Nuesch et
al on the distorting effect of small trials on the overall results of a
meta-analysis [1]. Although in a different scenario, and with different
outcomes, we 'empirically' adopted the very same approach in a
literature-based meta-analysis exploring the benefit of adjuvant
cisplatin-based chemotherapy for non-small-cell lung cancer [2]. In order to
screen for consistency of effect (in this case, the survival benefit provided
by chemotherapy), to increase the power of the analysis, and to decrease the
statistical and methodological heterogeneity between trials, underpowered
trials with fewer than 100 patients per arm were excluded in a sensitivity
analysis. Interestingly, this approach made the heterogeneity disappear:
whereas the relative risk (RR) for overall survival in favour of chemotherapy
was 0.91 (p=0.011) in the overall population, with significant heterogeneity
(p=0.048), in the large trial sample the RR was 0.91 (p=0.001) and the
heterogeneity test was no longer statistically significant (p=0.45) [2].
The same phenomenon was observed for disease-free survival, with
statistically significant heterogeneity in the overall sample (p=0.001) but
none in the large trial subset (p=0.53). Although no interaction in the
treatment effect between the overall sample and the large trial subset was
detected (for either outcome), the sensitivity analysis performed empirically
according to trial sample size substantially decreased heterogeneity across
the trials, making interpretation of the overall meta-analysis results more
useful for clinical practice. The paper by Nuesch et al is, to our knowledge,
the first demonstration of this effect in the literature and should be cited
as a reference paper for all upcoming meta-analyses.
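The sensitivity check described above (pool all trials, test heterogeneity, exclude trials below a size threshold, retest) can be sketched with Cochran's Q statistic. The figures below are made up for illustration only and are not the data from the lung cancer meta-analysis; four large trials are given closely agreeing log risk ratios, and two small trials are given widely scattered ones:

```python
def cochran_q(log_effects, variances):
    """Cochran's Q: weighted sum of squared deviations of each trial's
    log effect from the fixed-effect pooled estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, log_effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, log_effects))
    return q, len(log_effects) - 1  # Q and its degrees of freedom

# Hypothetical trials: (log risk ratio, variance of log RR, patients per arm)
trials = [
    (-0.10, 0.004, 400), (-0.08, 0.005, 350),
    (-0.12, 0.006, 300), (-0.09, 0.004, 500),  # large trials agree closely
    (0.60, 0.04, 40), (-0.90, 0.05, 30),       # small trials scatter widely
]

q_all, df_all = cochran_q([e for e, _, _ in trials], [v for _, v, _ in trials])
large = [t for t in trials if t[2] >= 100]     # the 100-per-arm threshold
q_large, df_large = cochran_q([e for e, _, _ in large],
                              [v for _, v, _ in large])

print(f"All trials : Q = {q_all:5.1f} on {df_all} df")
print(f"Large only : Q = {q_large:5.1f} on {df_large} df")
```

Here Q far exceeds its degrees of freedom when the small trials are included, but collapses to well below them once they are excluded, mirroring the disappearance of significant heterogeneity reported above.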
References.
1. Nuesch E, Trelle S, Reichenbach S, et al: Small study effects in
meta-analyses of osteoarthritis trials: meta-epidemiological study. BMJ
341:c3515, 2010
2. Bria E, Gralla RJ, Raftopoulos H, et al: Magnitude of benefit of
adjuvant chemotherapy for non-small cell lung cancer: meta-analysis of
randomized clinical trials. Lung Cancer 63:50-7, 2009
Competing interests: No competing interests