Praise, censure, and the dream of open science

Marcello Bacciarelli, “Allegory of Justice” (~1792)

Highlighting good-quality work post-publication might be more feasible and of more value than attempting to police data integrity. 

It’s the dream of open science: people post their work online, the community reviews it and provides constructive feedback, and then the authors correct their findings based on that feedback. Any new knowledge produced is rapidly and expertly assessed by the community as a whole, thereby maximising the input that the authors receive as they continue to pursue their lines of enquiry. Everyone participates, everyone benefits.

It sounds great, right? The problem is that almost nobody in the community voluntarily reviews others’ work. We’re all too busy. Peer review is a community service that lacks the cold hard reputational currency of grants and publications, and while appreciated, it is nonetheless undervalued in career terms. Preprints have belatedly and wonderfully achieved mainstream recognition in the biological sciences, but the majority of chatter around the majority of preprints is publicity-based. The comments section of most preprints is empty, with only around 8% accruing any public input.

It’s a nice reminder that one of the most important functions of journals within the scientific ecosystem is the simple act of commissioning peer review. It’s probably only thanks to the efforts of journal editors that many papers manage to get two or three expert opinions at all.

Of course, the peer review process is imperfect. With such a small number of experts commenting on any given paper, they’re bound to miss things. Authors can exclude some of the people best positioned to provide feedback. Some reviewers prefer to employ peer review as a weapon to slow down or handicap others, approaching the whole process with an unscientific mindset. And right now, people are so pushed for time that editors are often stretched just to find two reviewers who have the expertise and the time to assess a paper properly.

In addition, research itself has become so specialised that it can be hard to assess the merit of work even when it’s conceptually not that far removed from our own interests. As a result, journals have increasingly evolved another function – that of assigning value by proxy. Rather than a fairly flat hierarchy of quality journals offering affordable homes for papers based on methodology or topic, there’s an increasingly unseemly scrabble of journals all jostling to publish the same papers, with exorbitant rates charged for the most desirable locations. This has led to a concomitant warping of the peer review process: reviewers are often asked, or feel obliged, to comment not only on whether the work is of good quality (a relatively objective judgement) but also on whether it deserves to be published in that particular journal (a highly subjective judgement).

The consequences of these scenarios have been documented at length. Because publishing in specific journals is perceived to correlate strongly with scientists’ future funding and promotion chances, there is a strong incentive to game the system at the expense of scientific rigour. Overhyped conclusions abound. There are mad stampedes to publish in whatever area is currently fashionable. And, regrettably, many scientists succumb to the temptation to do sloppy or even outright fraudulent work in the rush to publish.

While we’re not yet at the level of the Augean Stables, the growing volume of scientific publications, allied to these problems in formal and community-level peer review, means that there are an awful lot of problematic findings out there. The cancer field has been particularly damaged by unreproducible findings in so-called “landmark” papers, with an eye-popping number failing to hold up in follow-up studies and leading to an acute appreciation of the Reproducibility Crisis.

This in turn has produced a cottage industry of data sleuths and watchdogs – Elisabeth Bik, Smut Clyde, Ivan Oransky’s Retraction Watch, and many more – who have been diligently scanning the published literature and calling out or documenting instances where papers’ claims are not supported. It’s a Sisyphean task at the best of times, especially when there is little incentive to actually implement corrections and little censure when correction fails to happen, or happens only at a glacial rate.

It all adds up to a deeply unsatisfactory and rather depressing situation: a Big Brother-style scrabble for attention and money, with good science taking second place to sales tactics. And it won’t end. The numbers of scientists are increasing, the numbers of papers are increasing, and in the absence of some Utopian community-led solution (despite the best efforts of eLife and other trailblazers), the numbers – maybe not the proportion, but the numbers – of flawed papers will continue to rise. Policing data integrity has in this respect become little different from combating and moderating hate speech online.

Ironically, one possible alternative approach has been hiding in plain sight. There may be many problematic papers, but the flip side is that truly good ones are comparatively rare. Instead of trying to police the problem papers, would it not be easier to promote the good ones?

Faculty Opinions (formerly Faculty of 1000 and then F1000Prime) has been around for 20 years now and has long followed this approach. It’s unstructured in that members are free to highlight whichever articles they want, but that should mean – as long as the member pool is large enough – that the best articles naturally rise to the top of the recommendations. It’s therefore a kind of reverse quality control: there’s no censure of substandard work, but good work across a wide portfolio of fields gets promoted.

By collating the considered feedback of scientists who are ideally highlighting good rather than merely fashionable work, it provides a really valuable service to the community and could counteract the obsession with prestige publishing that continues to blight the publishing landscape, often to the detriment of good-quality work.

Prestige publications (which predominantly focus on areas currently in vogue) need little assistance with recognition, but high-quality work, especially that conducted in less mainstream areas, would benefit from highlighting. It would offer a genuine counterbalance to the hype merchants who peddle fashionable work with sloppy standards. And this constitutes genuine community-driven evaluation of papers, albeit at the post-publication stage rather than pre-publication. Perhaps that’s for the best. If we started focusing on whether papers’ findings hold up over time rather than on whether they get accepted in a particular journal, then we might get to a better place. The obsession with immediacy in modern culture too easily makes us forget that real impact can be determined only after the passage of time. Good research is like wine, not grape juice.

It’s also a reminder of the importance of reading as a part of scientists’ job description. If we’re not regularly reading the literature, then we’re delegating oversight of science’s published output to others. 

It’s essential that scientists read in order to engage with the work going on around them, and not just focus on what they themselves are doing in the lab right now. And helpfully, reading and recommending is different from formally reviewing. You don’t have to be as critical, you don’t have to be an expert, you don’t even have to be posting to Faculty Opinions – you just have to be looking for good-quality work that’s well presented and clearly communicated and deserves highlighting, and then letting others know about it. There’s no need for imposter syndrome here. Really good papers remain a rarity, so if you come across one that stands out, it’s worth shouting about.

Disclosure and acknowledgement: I am an associate member of Faculty Opinions. Thanks to Richard Sever for the ballpark figure on preprint comment frequency (important to note that there is, unsurprisingly, a wide degree of variation between fields and subjects).
