Letter to the editor
Try as we might to make the manuscript selection process as objective as
possible, a crapshoot element undeniably remains. Prospective papers
are being submitted more frequently than ever, which has expanded the
pool of reviewers. Medical students and senior faculty alike are being
tasked with assessing manuscripts. Different levels of experience,
knowledge and variable personal research interests introduce undeniable
biases in how papers are ultimately critiqued. We’ve become keenly aware
of the importance of evaluating research techniques and the studies
themselves for risks of biases; PRISMA, MINORS, MOOSE and ROBINS tools
lead a growing list of objective protocols and
assessments.1,2,3,4 Have we ever thought of addressing
potential biases in how we actually select articles for publication?
Obviously, this would be no simple task, but that shouldn’t be a
deterrent to making improvements in the process where possible; personal
connections come to mind in this regard. Generally speaking, very little
is being done to prevent reviewers from being aware of who the authors
are and where they’re coming from. Additionally, many submission
platforms allow for the selection of preferred reviewers as well as the
ability to decline undesired reviewers. While these tendencies are
understandable for multiple reasons, their potential to introduce
personal biases is noteworthy. For the sake of argument, let’s assign a
very simple “risk of personal bias reduction score” for a journal’s
manuscript submission platform: One point is given for a) maintaining
author confidentiality, b) maintaining institution/location
confidentiality and c) avoiding the option to select or decline
particular reviewers. As such, the scores range from 0 to 3, with 3
being the most favorable.
So how are we doing? Table 1 shows a list of the top 20
otolaryngology journals to date as determined by the h-index, an
increasingly popular measure of journal quality based on the number of
publication citations.5 Ten of the 19 eligible
journals did not take any measures to reduce the potential for personal
biases, thus scoring 0. Eight journals earned one point for avoiding the
opportunity to select or decline reviewers. Of note, several journals
cite this feature as a means of reducing bias, encouraging the
submitting author to steer the manuscript toward "unbiased" reviewers. The
value of this is debatable, as the feature can just as easily be used to
the opposite effect. Lastly, one
journal scored two points for blinding the reviewers to both the author
names and locations.
It may seem trivial at first glance, as we’ve grown so accustomed to
these aspects of the submission process, but it really isn't. It would
be extremely shortsighted to assume that editors and reviewers are
immune to biases stemming from prior personal connections and experiences. Do
we really think a given reviewer can assess a submission from a beloved
former trainee in a reliably unbiased fashion? How about a manuscript
from an institution with which there was a falling out of some kind?
These themes are increasingly acknowledged in academic publishing, with
growing numbers of journals implementing safeguarding measures. Within
otolaryngology, however, interest in addressing these topics appears
nascent at best. With rejection rates at
all-time highs, it behooves us to reflect upon what can be done to
ensure that the best manuscript wins: Who the authors are, who they
know, and where they’re from shouldn’t be significant factors. As it
stands currently, our submission platforms leave open avenues for
personal connections to have a considerable influence. Addressing these
potential biases, or at the very least acknowledging them, is in order.