The first results of eLife’s pioneering peer review system might point to the future of scientific publication.
What’s the point of publishing scientific data?
Objectively, the goal of publication is the dissemination of new research data. The payoff to both science and society from publication is increased knowledge about the natural world; the payoff to the individuals associated with the work is through reputation.
Publishing is not the only way to earn a good reputation. A scientist can also acquire one through teaching, training, commentary, and public outreach. A problem, though, is that the payoff associated with research has become disproportionately large – not in terms of status, as scientists will always ultimately be judged by their competence at generating new knowledge – but in terms of career prospects. We rank scientists solely by their research output. Being a good teacher, producing a stream of well-adjusted and capable PhDs and junior faculty, shaping opinions within the community, or conveying the excitement and importance of science to the lay public are all valued, but what counts primarily (in job applications, for instance) is someone’s published research record.
Consequently, two pressures have emerged: a desperate need to publish new findings before others do, and a desperate need to publish them in a small cabal of journals thought to contain better-quality work than others. Publishing first establishes that the discovery is yours. Publishing in a “reputable” journal is, regrettably, often used as a shorthand metric for the work’s value.
Neither of these pressures serves science, or the scientific community. Rushing to publish first encourages sloppy work, if not outright fraud; seeking to publish only in a small clique of journals encourages overhyping, overinterpretation, and sensationalism – and probably also a diminished diversity of research effort, as popular areas will be deemed more valuable simply by virtue of having a larger audience (and thereby a higher probability of citation). In addition, the varying speed at which peer review is conducted can mean that a paper is rejected before publication simply because a competitor has completed peer review more quickly and the journal no longer deems the story of value. These problems have contributed to the reproducibility crisis and to worrying levels of disillusionment – if not actual mental health problems – among young scientists. They have also left journal editors inadvertently wielding a huge amount of power over the careers of research scientists.
The physics community has pointed the way to a possible solution in its early adoption of preprints as part of the publishing paradigm. Preprint servers enable the publication of non-peer-reviewed work online for community viewing and scrutiny. As noted in an outstanding Opinion piece by Vale & Hyman (HERE), this creates a system in which preprints provide disclosure and establish priority (but no explicit quality control), while publication provides formal validation (through peer review). The indications are that this is catching on, and preprints are even beginning to be cited in research papers before their final publication.
What then is the actual point of peer review? Quality control is clearly important to remedy errors of fact, attribution, and interpretation. But 2-3 peers (reviewers) are not the whole community of peers, and potentially not even an unbiased and representative sample. There thus exists a substantial role in science for a post-publication process, i.e. acceptance or not of the work’s findings through reproduction, elaboration, and citation. It is this post-publication process that ultimately determines a work’s real value – value which can sometimes lurk undiscovered for long periods of time.
The rather delicious and potentially explosive implication of the preprint model is that if community-level evaluation of preprints became widespread, then what’s the point of journals? This is much like the crisis that faces mainstream journalism in the age of blogs and social media. In that instance, the riposte is “quality” – professional journalists provide a high-quality, fact-checked, trusted service that non-mainstream media cannot always rival. But in science, the principal function of journals – via editors and the unpaid reviewers who are effectively adjunct staff – is to provide a seal of peer-reviewed validation. In the digital age, owning a printing press is no longer an asset when scientists can upload their manuscripts for public viewing. And if community-led feedback became widespread, the opinions of 2-3 peers would become less valuable.
Consequently, it’s possible to ask what would happen if everybody just started publishing preprints. Or, taking it a step further, just started putting their results on their personal or departmental websites and publicising them through social media? In practice, having a central repository of uploaded work like bioRxiv is probably preferable to a decentralised system. But either way, the preprint would then become a kind of living document, annotated or updated or refined on the basis of reader feedback. The difference between a preprint and a paper would begin to melt away – or in fact, these living documents would then surpass printed papers, as they would be more up to date and accurate. Exactly like Wikipedia usurping printed encyclopaedias.
Key to this vision, and probably the stumbling block to its realisation, is the need for democratisation of peer review. While in principle community-led research assessment is desirable, in practice what holds it back is that reviewing papers is seen as a duty – people generally do it only when asked, because it is time-consuming, cognitively intensive, and brings little direct benefit or acknowledgement to the reviewer. It is an activation energy barrier.
It’s in this context and against this background that eLife’s recent experiment becomes genuinely revolutionary, and points to a future paradigm for scientific journals and publishing. In time, it may even be recognised as a farsighted attempt to get ahead of this exact scenario.
The basis of the eLife trial was simple – if a paper goes out for review, it gets published. Reviewed manuscripts are published with the editor’s letter, referee reports, and author responses. In the first tranche of submissions (see HERE), 22% of papers were sent out for review – lower than the 29% seen in the journal’s conventional process, but since the conventional process results in only 15% of submissions being published, the trial represents an increase of roughly seven percentage points in the proportion of articles published.
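The arithmetic behind that comparison can be sketched as follows, using the figures quoted above (a minimal illustration, assuming – as the trial design states – that every paper sent out for review under the trial is ultimately published):

```python
# Publication-rate arithmetic for the eLife trial (illustrative sketch).

# Trial process: review implies publication.
trial_review_rate = 0.22          # 22% of submissions sent out for review
trial_publication_rate = trial_review_rate

# Conventional process: review does not guarantee publication.
conventional_review_rate = 0.29        # 29% sent out for review
conventional_publication_rate = 0.15   # 15% of submissions published

# Absolute difference, in percentage points of all submissions.
pp_increase = trial_publication_rate - conventional_publication_rate
print(f"{pp_increase:.2f}")  # prints 0.07, i.e. seven percentage points
```

Note that relative to the conventional baseline of 15%, seven percentage points is a sizeable change in output – which is why the absolute/relative distinction matters when quoting the figure.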
The eLife trial thereby points to a publishing model where the role of journals is to organise peer review (perhaps both small-scale and community-directed), handle publication (formatting and presentation), oversee long-term curation, and provide publicity. They are not there to determine whether or not work has “value” or “impact”. eLife is exemplary for insisting on work quality as the sole criterion for publication, not likely future citations or nebulous and transient notions of impact. The journal is thus acting as a gateway for dissemination and evaluation, not an arbiter of fashion.
Too many people forget that journals exist to provide a service, and if the service is not good (for example by allowing abusive reviews, being slow, imposing impractical limits on data volume through restrictive figure counts, or sheer snobbery) then customers should avoid them. Scientific publishing houses will need to offer something that justifies the choice to publish with them.
Should journals be worried? Not especially. The ones doing a good job, i.e. providing a good service, should prosper in this system. The ones that do not will fall by the wayside. eLife, not for the first time, deserves enormous credit for thinking outside the box and attempting to change things in a way that serves both science and scientists better.