
Would a shop window for reviewers improve the inclusivity and efficiency of peer review?
Every year the volume of published scientific literature increases. With preprints having achieved mainstream acceptance, the publicly available scientific literature now encompasses both editorially approved journal articles and self-published material.
All those studies, whether first existing as preprints or not, need to undergo peer review before they gain general acceptance by the scientific community. Whatever its flaws, peer review remains an integral part of the scientific process, essential for evaluating the quality of manuscripts and for forecasting their possible impact. And it’s not just papers that get subjected to peer review – research grants do too. The two pillars of the scientific research enterprise – publishing papers and securing third-party funding from grants – both intrinsically depend on high-quality peer review.
But scientists get very little actual credit for carrying out peer review – it’s viewed as an altruistic activity, a moral obligation, a professional duty.
In today’s high-pressure environment (and let’s be honest, it’s gone from being merely “intense” to being a surface-of-Venus pulverising cauldron) this creates a paradox. On one side, funders and publishers directly depend on high-quality peer review from scientists in order to make decisions about funding and publishing, which has major effects on those same scientists’ careers and prospects. But conversely, peer review as an activity is only of indirect value to scientists, who are incentivised to expend their energy on publishing papers and writing grants and must sacrifice their time to do peer review.
Put another way, scientists’ actual careers depend on getting fair and high-quality peer review, but they themselves are under so much pressure to publish and get grants that there’s little incentive, besides a sense of honour, for doing peer review work properly. Peer review is also a shadowy activity: usually carried out anonymously, rarely with direct training, and with no formal accreditation. Unsurprisingly, then, referees are getting harder and harder to find, and both journals and funders are spending more and more time tracking down reviewers. This gums up the wheels of the system even more.
And there’s another group that depends heavily on peer review and is often overlooked: authors. Authors need to nominate reviewers whenever they send a paper off for review, and it can be extremely difficult to think of names, especially if you’re trying to be inclusive and not send out a list of old white men.
Something’s got to give, but what will it be? Fewer papers? Not going to happen. Fewer grants? Not going to happen either. More reviewers? Yes please.
There is a widespread sense that involving ECRs (early career researchers) more in peer review would be a good thing, and several initiatives are already underway (with organisations such as ASAPbio taking a lead). Mobilising more ECRs increases the reviewer pool and makes sense because ECRs are generally closer to the bench and have the time to do peer review (probably more time than senior academics, whose timetables are less flexible). But while ECRs may have fewer timetabled hours than senior academics, they’re more of an untested quantity, they don’t have the visibility of more established academics, and they’re arguably under more career pressure because they don’t have permanent positions.
So why should ECRs commit time to peer review activities? ECRs seeking tenure will be evaluated based on their teaching and their research (papers, impact, grants), and right now, peer review is never going to be as important as these metrics from an institutional or a funding perspective.
But the truth – I think – is that most ECRs would welcome the opportunity to get more involved in the peer review process, because of the professional esteem (no matter how indirect) it signals. And peer review is undeniably an intellectually beneficial activity; the problem is just that our current reward system doesn’t acknowledge it.
So what kind of system could be established to communicate the intangible benefits that peer review brings, and perhaps demonstrate to institutions and funders that this is an activity they should celebrate and promote amongst their employees and awardees?
Call it a shop window, call it a marketplace, call it a database, but if there were some kind of portal in which ECRs could present themselves as peer reviewer candidates, this might be a workable solution. The ECRs could use either their real names or a pseudonym, depending on their attitudes to anonymisation, and list their specialisations; journals could then require authors to choose at least one reviewer from this pool when they nominate reviewers. After the review process was completed, the authors, editors, and reviewers could rate each reviewer’s contribution (e.g. having first indicated whether the review was positive or negative, they could then grade the reviewers for rigour, insight, constructive feedback, and cordiality – basically according to the FAST principles advocated by Iborra et al. for preprint review). Reviewers could gain “swimming badges” for the number of times they have reviewed for different journals. This would create a fine-grained readout of individual scientists’ peer review activities, requiring only some digital sleight of hand to shield the identities of those who would prefer to remain anonymous.
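To make the idea concrete, here is a minimal sketch in Python of what a reviewer record in such a portal might look like. All the names here are hypothetical illustrations of my own (ReviewerProfile, ReviewRating, the four rating dimensions loosely echoing the FAST-style criteria above) – this is not the design of any existing platform:

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical post-review rating: each dimension scored 1-5 by the
# authors, editors, or co-reviewers once a review is completed.
@dataclass
class ReviewRating:
    rigour: int
    insight: int
    constructiveness: int
    cordiality: int

@dataclass
class ReviewerProfile:
    display_name: str                # real name or pseudonym, the reviewer's choice
    specialisations: list[str]       # fields the reviewer offers to cover
    badges: dict[str, int] = field(default_factory=dict)   # journal -> completed reviews
    ratings: list[ReviewRating] = field(default_factory=list)

    def record_review(self, journal: str, rating: ReviewRating) -> None:
        """Log a completed review: bump the journal's badge count, store the rating."""
        self.badges[journal] = self.badges.get(journal, 0) + 1
        self.ratings.append(rating)

    def average_scores(self) -> dict[str, float]:
        """Aggregate feedback across all reviews (empty dict if none yet)."""
        if not self.ratings:
            return {}
        return {
            dim: mean(getattr(r, dim) for r in self.ratings)
            for dim in ("rigour", "insight", "constructiveness", "cordiality")
        }
```

The pseudonym option lives entirely in `display_name`, so the aggregated scores and badges can be public even when the identity is not – which is the “digital sleight of hand” the text gestures at.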
As an opt-in system, there would be no coercion. The feedback and swimming badges would provide a perspective on the range and quality of the peer review conducted – probably a good thing in a profession where there is little certification and licensing compared to others. The feedback from authors, editors, and other reviewers would allow ECRs to improve as reviewers, and would additionally help make authors more active participants in a system in which they’re currently more supplicant than client. Referee rating would hopefully mean that high-quality reviews from less senior scientists would still carry weight. And it would almost certainly help minoritised scientists boost their profile by demonstrating their capability and by reducing the chance of their being overlooked – bringing them into the room instead.
Yes, of course there would be teething troubles with implementation. Yes, there would be the risk that this would not shake up inclusivity as much as we’d like. But it would at least help codify, document, and quantify an activity that is extremely nebulous at present. It would boost inclusion. It would help provide a readout of accumulated experience and in time, who knows, maybe even be of value for research assessment as a means of signalling expertise and aggregated contributions. Journals already keep informal stables of reviewers, so this is only a more public version of what they’re doing already – keeping tabs on reliable reviewers.
Of course, Goodhart’s Law applies: the problem with any metrics-based system is that the metrics invariably become the target, but we could at least use metrics that would facilitate a reform of the peer review system and an elevation of the level of dialogue (promoting cordiality, rigour, timeliness, fairness, and constructive input instead of venom, sloppiness, unreliability, bias, and negativity). It might be naive, but improving the dialogue between reviewers and authors might help reform what is currently a rather antagonistic system into something that’s more geared towards benefitting science. We’ve already seen how Review Commons spearheading the process of preprint review led to similar initiatives from eLife and other publishers, so who wants to be first to launch a reviewer accreditation system?
Acknowledgement/Disclosure
This posting draws directly from discussions at an expert elicitation workshop on referee credit mechanisms in research assessment, organised by EMBO.