Peer review is frustrating and flawed – here’s how we can fix it

By Eliza Compton, 13 July 2022
What would peer review 2.0 look like? Mark Humphries offers ways to optimise the process for better efficiency and research outcomes

Fancy a torrent of war stories? Just ask an experienced researcher about having their papers reviewed. Sucking air through their teeth, they’ll tell you about that time the reviewer said their paper was “simply, manure” or complained that “the writing and data presentation are so bad that I had to leave work and go home early and then spend time to wonder what life is about”.

In the cold light of day, peer review can seem an absurd way to decide whether a paper is worth publishing. The sample size is tiny, just two or three reviewers, each bringing their own ideas of what questions are important, and what a good study looks like. Consistency among reviewers can be poor, as seen in the scoring of conference papers and grants. Cloaked in anonymity, the occasional unprofessionalism of reviewers is both fodder for endless memes and evidence of their bias against minority groups.

For many journals, reviewers are also gatekeepers, asked to judge the novelty and importance of the work, when novelty is arbitrary, and importance is often clear only in retrospect.

Peer review is also a thankless task. It’s done pro bono, and there’s an expectation that you’ll do your share of reviewing when the time comes. Invitations to review a paper are increasing year-on-year, but the proportion of acceptances is falling, and editors frequently complain of having to send 10 or more invitations to find one or two reviewers. Squeezed in around other commitments, reviews often betray the speed at which they are written, and can be incoherent in their argument and unclear on advice for revisions.

Here are a few proposals for how to fix this mess, and get to version 2.0:

Journals should publish all reviews

Knowing that your words will appear in public, attributed or not, can only serve to focus your attention more closely on what you say and how you say it. Publishing reviews requires little overhead, yet few journals publish them all (and voluntary publication only provides an escape clause for critical reviewers). Sure, for some reviewers, the promise – or threat – of their words being published will make no difference. But every little helps.

Reach a consensus

Journals could send authors a single consensus review, based on the editor's and reviewers' individual reviews, that states clearly what the authors must do for the paper to be accepted. Done right, this would enforce consistency, temper extreme or unprofessional comments, and improve the chances of producing a readable, coherent text.

That it might also reduce the likelihood of being asked for more pointless experiments is a bonus.

Doing the consensus right, though, relies on editors committing to it. The process at eLife, for example, has been exemplary. But another journal once sent us a "consensus" letter that was simply two reviews copied and pasted together.

One objection is that a consensus review slows things down a little – but it’s hard to see why that’s a bad thing. Indeed, research fields with slower review processes rate their quality more highly.

Review only after publication

With post-publication review, you get the expert opinion and a chance to improve the paper, but not the gatekeeping or judgements on novelty or importance.

But it raises questions: who will do the review? Without the need to review before publication, the motivation to review goes away.

And what about prestige? Some journals, notably F1000Research, have been doing post-publication review for years, but with little impact on mainstream publishing, in part because they have gained little prestige. Arguably, the prestige of a journal stems from the fierceness of its pre-publication peer review; we often assume that a paper that has survived the editors and reviewers at a highly selective journal is worth reading.

There is a halfway house: overlay journals. These act as portals to preprints hosted on a well-established platform, typically arXiv. The idea is simple: you publish the preprint, submit a link to an overlay journal, and the journal arranges reviews. You then revise the preprint and, once it is accepted, the journal formally publishes it, assigning a DOI and listing it in subject indexes. The whole time, anyone can read and cite the preprint and make up their own mind about its worth, without the gatekeeping of peer review.

Decouple reviewing from journals

What if a paper were reviewed once for a group of journals, and the authors then revised and submitted it to the journal of their choice?

The ambitious ReviewCommons platform is testing this idea with 17 well-respected biology journals, including those from EMBO and PLoS. In this process, authors submit to the platform, and ReviewCommons handles the reviews and journal submission. This solves many problems at a stroke: gatekeeping goes away, as the reviewers aren’t assessing the fit or quality for a specific journal; time is saved by not duplicating the review at many journals independently.

Ethics, checklists and other nudges

Small tweaks can improve the process, such as prompting reviewers to think ethically about their review and its language. The Committee on Publication Ethics (COPE) has long had a code of ethics for peer reviewers. Requiring reviewers to confirm that they’ve read that code, or something similar, before submitting their review would be a start.

Submission checklists could work harder to catch simple errors. One simple addition: have you finished the damn paper before submitting it? I've reviewed papers with unfinished sentences, missing paragraphs, missing figure panels and even missing figures, and figure contents that don't match the caption or text. Such carelessness erodes trust.

This is important because peer review depends on trust between reviewers and authors – that the experiments were done as described, that the data are real, that the analyses were done and accurately reported. The less the reviewers trust, the more they will question a paper, the more they will seek to find fault – and the more adversarial and flawed the process becomes.

Peer review 2.0 can solve many problems, but not the most basic one, which no process can enforce: do unto others as you would have them do unto you.

Mark D Humphries is a professor of computational neuroscience at the University of Nottingham.



Comments (3)

Fiorina Umbarto, 1 year 9 months ago:
I agree with the suggestion that "Journals should publish all reviews". Meanwhile, on the idea of "Review only after publication": the problem is that, most of the time, the journal editors and/or the authors' institutions ask for the real identity of the whistle-blowers, which is truly annoying and nonsensical! They use it as a trick to get released from their responsibilities... Any comments on this, please?

medscicom, 1 year 9 months ago:
First, as background, you may wish to read the Joint Position Statement on Peer Review, which was published in 2019 as a consensus from AMWA, EMWA, and ISMPP: https://www.tandfonline.com/doi/full/10.1080/03007995.2021.1900365. I concur with the aspirational suggestions from Mark Humphries; however, in the case of post-publication peer review, we are faced with the reality that, once published (particularly online), these manuscripts assume a life of their own and are resistant to retraction, sanctioning, and other methods of debunking. Sadly, once the "toothpaste is out of the tube", it cannot be put back in. RE: decoupling the review process from the journals, I would support this - perhaps convince the ICMJE journals to adopt this process. RE: identification of reviewers, this is a two-edged sword. Yes - one would not like to "suppress" critical assessments; however, it is important that the journals identify credible and appropriate reviewers with the background and expertise to perform an objective review. Credibility is key to the principle of peer review.

berndpulverer, 1 year 9 months ago:
Nice article. Note that many of these suggestions have already been in place for a decade at progressive journals such as those at EMBO Press. Regarding the "consensus" section: we run a variant of this that preserves the original referee opinions in individual reports, but a consultative process between the editors and all the referees allows them to nuance their views. The editor then adds a synthesis of all this; it does not have to represent referee consensus, but it is a single set of detailed journal requirements for publication. Our referees don't know each other's identities, so we stay focused on the science rather than on social networking. Preprint overlay journals don't change the process; rather, the refereed preprint concept does: referee reports from in-revision or rejected papers at EMBO Press or ReviewCommons are posted on bioRxiv or medRxiv. Funders such as EMBO and cOAlition S affiliates consider them in eligibility criteria and research assessment.