Open access to research
BMJ 2008; 337 doi: https://doi.org/10.1136/bmj.a1051 (Published 31 July 2008) Cite this as: BMJ 2008;337:a1051
All rapid responses
Rapid responses are electronic comments to the editor. They enable our users to debate issues raised in articles published on bmj.com. A proportion of responses will, after editing, be published online and in the print journal as letters, which are indexed in PubMed. Rapid responses are not indexed in PubMed and they are not journal articles. The BMJ reserves the right to remove responses which are being wilfully misrepresented as published articles or when it is brought to our attention that a response spreads misinformation.
I am curious what the final result of this RCT will be, but I wish the BMJ had thought twice before publishing the results of this trial prematurely (which is also unfair to other researchers who are running similar RCTs but bite their tongues and withhold their data until the meaningful study endpoints defined in their protocols are reached). I also wish the BMJ had done a better job of peer reviewing this manuscript and helped readers to understand some of the oddities and contradictions in it (or to make sense of missing data, e.g. the actual citation counts, which are nowhere reported!).
Davis's analysis fails not only to show an effect of OA status on citations, but also to show any effect of other variables expected to be predictors, e.g. cover page placement or press release coverage (see his Table 3 - and according to his supplementary file, being on the cover page even significantly REDUCES the odds of being cited). Ironically, according to his data (Table 3), even "self-archiving" is not an independent predictor of higher citations after 9-12 months, although - according to Davis's own argumentation (with which I agree) - these are self-selected, better quality articles! What does this say about the validity of his other conclusions? Doesn't it suggest that the observation period might be much too short? The three variables "cover page", "press release", and "self-archiving" should be seen as internal controls. I would believe his results (in any follow-up analysis) if these variables, which are clearly associated with quality/citation differences, emerged as independent significant predictors while the randomized OA status remained insignificant. But as long as Davis fails to show that these other variables behave as expected (i.e. as predictors of citations), I am unsurprised that OA status also fails to behave in the way most people would expect, and we have to assume that either his data collection or his analysis is flawed.
Questions for the authors that I would have asked as a reviewer/editor:
1. What were the mean citation counts in both groups (crude, unadjusted - e.g. "0.5 versus 0.6 citations")? Interestingly, these crucial data were omitted. What was the absolute difference in citations between the groups? How predictive are these very early citation counts of total citation counts at a later stage in these journals? (Historical data could have been used to check this.)
2. When exactly was the test (or were the tests) for self-archiving conducted? (It would need to be conducted continuously, because authors can self-archive their manuscripts at any time.) How do you explain the low self-archiving rate? What was the self-archiving rate in the control group, i.e. the contamination rate? (Self-archiving in the control group reduces the power to detect differences; a back-of-envelope sketch of this dilution appears after question 8.)
3. How do you explain the finding that being on the cover page has no statistically significant impact on the number of citations (your Table 3), and even significantly REDUCES the odds of being cited (your supplementary file)? Wouldn't one expect editors to select higher impact/quality papers for the title page, and those papers to be noticed and cited more often?
4. How do you reconcile your result that papers covered in a press release have no citation advantage (Table 3) with previous research stating the opposite? What is the odds ratio for that variable? (It is missing from the supplementary file.) And why didn't you stress that your paper appears to challenge not only the "dogma" that open access articles are cited more often than non-open access articles, but also some other "dogmas", for example that papers featured on the title page and covered in press releases are cited more often?
5. How do you explain that you cannot reproduce, in your own data set, the very bias you discuss in the introduction and discussion? I think we all agree that self-archived articles are biased towards higher quality, for the reasons discussed in your introduction and previously also in my PLoS paper [1]. But if we accept this, then why do you fail to show an increased citation effect for self-archived articles in your sample (see Table 3 and supplementary file)? According to your data, self-archived articles are even LESS cited than non-self-archived articles (though not statistically significantly so). Doesn't this finding directly contradict your conclusion that previous studies found a citation advantage because of a quality differential/self-selection bias? If self-archived studies are the "better" studies, which have been shown to be cited more often (any discussion about the "cause" aside), wouldn't "self-archiving" status have to emerge as a strong independent predictor of citations?
6. You write: "Previous studies have relied on retrospective and uncontrolled methods to study the effects of open access." What do you call the 2006 PLoS study [1]? (It's a rhetorical question, of course: the PLoS study was in fact a prospective cohort study. In any cohort study, you have cases ("exposed") and controls. In fact, an RCT is also a cohort study, with the only difference that exposure status is assigned randomly, so that unobservable confounders are distributed equally. But a prospective cohort study is neither "retrospective" nor "uncontrolled"; it is simply "observational" as opposed to "experimental". I also take exception to the remark that previous studies have confused cause and effect. There is no discussion of "cause" in the PLoS study. What was said is that OA status remained an independent predictor even when we controlled for known confounders. I acknowledged in that discussion that we can only adjust for known confounders, and suggested a randomized trial.)
7. What is your trial registration number / where is your trial
protocol? What timeframe was originally (a priori) defined as the primary
endpoint? Why did you decide to publish negative results before the end of
the trial?
8. You argue that "our time-frame is more than sufficient to detect a citation advantage, if one exists", citing the 2006 PLoS study [1], which in fact found an early citation differential between OA and non-OA PNAS articles (mean 1.5 versus 1.2 citations, crude, after 4-10 months). However, PNAS has a very high impact factor of >10 (i.e. a very high citation rate). What are the impact factors of the journals you included, and how would their different citation rates affect your type II error and the timeframe needed to detect differences if they were present? (This goes back to question 1: what were your mean citation rates in both groups, and are they comparable to the PNAS data? See also the power sketch below.)
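To make the type II error point concrete, here is a minimal power sketch in Python (purely illustrative: it assumes Poisson-distributed early citation counts, and the rates and sample sizes are hypothetical, not taken from Davis's paper). The same 25% relative OA advantage becomes much harder to detect as the baseline citation rate falls from a PNAS-like level to that of a lower-impact journal:

    import numpy as np

    rng = np.random.default_rng(0)

    def power(mean_control, relative_advantage, n_per_arm, sims=2000):
        # Fraction of simulated trials in which a one-sided z test on the
        # difference in mean citation counts is significant at alpha = 0.05.
        mean_oa = mean_control * (1 + relative_advantage)
        hits = 0
        for _ in range(sims):
            oa = rng.poisson(mean_oa, n_per_arm)         # citation counts, OA arm
            ctrl = rng.poisson(mean_control, n_per_arm)  # citation counts, control arm
            diff = oa.mean() - ctrl.mean()
            se = np.sqrt(oa.mean() / n_per_arm + ctrl.mean() / n_per_arm)
            if se > 0 and diff / se > 1.645:
                hits += 1
        return hits / sims

    # PNAS-like early rate (1.2 citations/article) versus a hypothetical
    # lower-impact journal (0.3), both with the same 25% relative advantage:
    for base in (1.2, 0.3):
        print(f"base rate {base}: power ~ {power(base, 0.25, 250):.2f}")

Under these invented numbers, power is high (around 0.9) in the high-rate scenario but well under 0.5 in the low-rate one, which is exactly why the crude citation counts requested in question 1 are indispensable.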
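And to illustrate the contamination point from question 2, a back-of-envelope sketch (again with entirely hypothetical numbers): self-archiving in the control arm pulls the control mean toward the OA mean and shrinks the detectable difference, so the trial needs a larger sample or longer follow-up for the same power.

    # Hypothetical means; none of these figures come from Davis's data.
    true_oa_mean = 1.5    # mean citations with open access
    true_noa_mean = 1.2   # mean citations without open access
    contamination = 0.15  # fraction of control articles self-archived by authors

    # The observed control mean is a mixture of archived and non-archived articles:
    observed_control = (1 - contamination) * true_noa_mean + contamination * true_oa_mean
    print(f"true difference:     {true_oa_mean - true_noa_mean:.3f}")    # 0.300
    print(f"observed difference: {true_oa_mean - observed_control:.3f}")  # 0.255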
References
1. Eysenbach G (2006) Citation Advantage of Open Access Articles. PLoS Biology 4(5): e157. DOI: 10.1371/journal.pbio.0040157
2. Eysenbach G (2008) Phil Davis: Open access publishing, article downloads, and citations: The word is still out. Gunther Eysenbach Random Research Rants Blog. 2008-08-01. URL: http://gunther-eysenbach.blogspot.com/2008/07/phil-davis-open-access-publishing.html. Accessed: 2008-08-01. (Archived by WebCite at http://www.webcitation.org/5ZkxlLQGp)
Competing interests: I am also Editor of the Journal of Medical Internet Research, an open access journal, and I self-archive all my papers!
"The study suggests that previous findings of a citation
advantage from open access may have been the result of self selection, with
more highly citable articles being more likely to be published in open access
journals."
Actually, the authors suggest this, but the study itself does not and cannot support it: to show that previous findings of a citation advantage were the result of self-selection, you would first have to replicate the citation advantage, and then show that eliminating self-selection eliminates the citation advantage. But the authors simply failed to find the citation advantage in a year-long sample.
The most likely explanation is that there is no citation advantage that early.
At the very least, a control comparison would have to have been made,
likewise on the same 1-year sample, comparing self-selected OA self-
archiving with randomized OA, to show that the citation advantage is present
with the former yet absent in the latter. The study merely showed it was
absent in the latter.
This study, which had been announced as a 4-year study, was published prematurely. It should have included the self-selection control and extended the study duration into the 3-4 year time window that most of the studies it was trying to refute had analyzed.
"the higher the impact factor of the journal in which an
article was published, the more likely it was that the article would be
available on a non-publisher website."
That correlation is prima facie evidence for self-selection (better articles
being more likely to be self-selected to self-archive), but since the citation
advantage is based on within-journal comparisons, the correlation alone
does not give any estimate of how big an influence (if any) self-selection has
on the size of the citation advantage. It has also been found that the citation
advantage is bigger for higher-impact journals, suggesting that better
articles also garner more citations if self-archived (i.e., a quality advantage,
alongside any possible quality self-selection bias).
"Davis and colleagues’ finding that open access provided
no citation advantage, despite increased readership, may be explained by the
fact that journal readers who generate citations already have subscription
access to journals."
But an explanation that is at least equally likely is that, as other studies
have found, one year is simply too short a time to detect the citation
advantage. And, as other studies have likewise found, an earlier download
advantage is predictive of a
later citation advantage. But to test that, the study would have had to run its
announced 4-year duration, rather than reporting uncontrolled negative
results after a year.
"We know that press coverage increases
citations."
We do, and there is prior evidence for it. But this study failed to find that effect either. Perhaps this is another indication that the report was premature.
Competing interests: Research Progress
Abstract Humour
Whilst our American colleagues are frequently derided for their perceived lack of irony, they have been far quicker to rectify one of the major pitfalls that awaits all those clinicians who attempt to practise evidence based medicine…finding the perfect research paper that matches every single keyword and phrase of your hypothesis. Unfortunately, no sooner have you read its word-perfect title than you are redirected to a helpful web page outlining just how much it will cost to see the document.
The NHS is the third largest employer in the world,1 spending over £90 billion annually,2 but drugs aren't the only thing being rationed in the 21st century welfare state. Perversely, access to the most up-to-date information on how best to manage patients is withheld from the very clinicians who need it most. Throughout medical school the importance of practising cost effective, evidence based medicine was continually reinforced, and information was available through university subscriptions to almost every journal, with an ensuing wealth of knowledge and expertise; a situation strikingly juxtaposed with the one in which every F1 finds themselves once entrusted to prescribe drugs (under supervision) and care for patients. While the American health care model is structured differently to the NHS, we should also strive for the best-available-care platitudes upon which it is built.
Those fortunate enough to access research will recognise the citation advantage proffered by open access publications, and even the study carried out by Davis and colleagues confirmed that open access publication increased readership, even if there was no corresponding increase in citations.3 Surely, however, citations are a false marker of utility. Sir Isaac Newton famously said, "If I have seen further it is by standing on the shoulders of giants". Time spent reading is rarely wasted. Even if your only conclusion is to avoid a particular journal, it has still shaped your clinical judgment.
Anyone heard the joke about the doctor who was kept in the dark…the
Americans have, and I’m sure the irony is not wasted on them.
Competing interests: None declared