It’s here at last: hard evidence that small research groups do more original work.
Most scientists feel intuitively that small groups are somehow better for research, but it’s so damn hard finding any evidence to support that idea. Big groups publish more papers, attract more money, and hog the headlines. Never mind that the per capita productivity of big groups is often appalling; they’re the big kids on the block and they get the attention befitting that status.
Consequently, as a scientist gains success and reputation, she or he will naturally tend to grow their group, which behaves rather like a gas – expanding to fill whatever container (defined not by space, but by money) it finds itself in. More money means more people, and the more people you have, the more data you can generate. The more data, the more papers; and the more papers, the more money.
As such, the affection that many scientists have for small groups seems a romantic abstraction, an impractical ideal that successful scientists – running big, well-funded groups – regard in a rather misty-eyed and unfocused way. It’s like the urbanite’s fantasy of a cottage in the countryside with roses growing up the walls, beehives in the garden, and chickens in the yard – attractive, but an idyll best left unspoiled and unexplored. Then those bigshot profs shake their heads, snap out of it, and get back to the business of running their big successful group.
Or should they?
A genuinely fascinating recent article from Wu, Wang & Evans finally offers empirical support for the intuition that small groups are better for science, and an essential feature of the scientific ecosystem. To test this, they adapted a metric previously used in the assessment of patent applications, an area where determining the novelty of a contribution is critical to its rating.
Wu et al. call their metric disruptiveness, but what they really mean (and are maybe too polite to say) is “originality”. This metric – disruptiveness – is based on whether a paper tends to be cited on its own, or whether it tends to be cited alongside the same papers that it itself cites.
It’s a brilliant insight. Work that consolidates and extends existing knowledge will naturally tend to cite all the work that precedes it. In this way, the paper’s contribution is firmly situated within a corpus of prior work, a rhetorical device that not only provides context but also gives appropriate recognition to the other people working in the field (who, let’s not forget, are also the ones likely to be reviewing it). Subsequent publications repay the favour, citing both the paper itself and pretty much its entire reference list as well.
Conversely, highly original work – i.e., a kind of fresh start – doesn’t have to cite volumes and volumes of other work, because there is hardly any prior work out there to acknowledge. And any subsequent papers that capitalise on this paper’s original contribution will tend to cite just that paper – or at the very least, a much smaller proportion of its reference list than is the case for a conventional consolidating contribution. The paper is ground zero for its question.
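For the quantitatively minded, the measure is simple enough to sketch out. In essence (following Funk & Owen-Smith’s original patent metric), you look at every later paper that cites either the focal paper or its references, sort those citers into three camps, and take a normalised difference. The Python below is a minimal sketch of that idea, not Wu et al.’s actual pipeline; the dictionary-of-sets citation graph and the toy papers are invented purely for illustration.

```python
from typing import Dict, Set

def disruption_index(focal: str, cites: Dict[str, Set[str]]) -> float:
    """Disruption index in the spirit of Funk & Owen-Smith / Wu et al.

    `cites` maps each paper ID to the set of paper IDs it cites.
    Later papers fall into three camps relative to the focal paper:
      n_i: cite the focal paper but none of its references (the "fresh start" signal)
      n_j: cite the focal paper AND at least one of its references (consolidating)
      n_k: cite at least one of its references while ignoring the focal paper
    D = (n_i - n_j) / (n_i + n_j + n_k), from -1 (consolidating) to +1 (disruptive).
    """
    refs = cites[focal]  # the focal paper's own reference list
    n_i = n_j = n_k = 0
    for paper, reflist in cites.items():
        if paper == focal:
            continue
        cites_focal = focal in reflist
        cites_refs = bool(refs & reflist)  # shares any reference with the focal paper
        if cites_focal and not cites_refs:
            n_i += 1
        elif cites_focal and cites_refs:
            n_j += 1
        elif cites_refs:
            n_k += 1
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

# A toy citation graph, entirely made up for illustration:
graph = {
    "focal":  {"a", "b"},           # the focal paper cites a and b
    "later1": {"focal"},            # cites focal alone: disruptive signal
    "later2": {"focal", "a", "b"},  # cites focal plus its references: consolidating
    "later3": {"a"},                # cites the references but not focal
    "a": set(), "b": set(),
}
print(disruption_index("focal", graph))  # (1 - 1) / 3 = 0.0
```

A paper that is always cited on its own scores towards +1; one that is only ever cited in a bundle with its own reference list scores towards −1.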
So Wu et al. take their shiny new metric, run it through several tens of millions (no kidding) of scientific articles, patents, and software projects, and what do they find? They find that small groups are far more “disruptive” (again, let’s keep with the polite language) than large teams, across all fields and eras. These small groups also tend to reach further back into the past in their citations, while the citation lists of large groups are broad but shallow in time. Not surprisingly, work from large teams also accumulates raw citation counts faster, as it has a more immediate audience to reach and satisfy.
Let’s not forget that this rapid accumulation of citations by publications from large groups (thereby imparting the “impact” so beloved of those who, well, try to publish papers with high citation counts in the two-year window, nod nod wink wink) is not just a feature, it’s an imperative, a NEED. Large groups need “impact” in the JIF sense because they need to justify their payrolls; they can’t wait around for something to catch fire – they need those citations now. Conversely, publications from small groups tend to accumulate citations more slowly but have a greater ripple effect as time goes on.
Even interdisciplinarity – a loathsome buzzword that tends to mean “collaborators with different vested interests” – decreases as group size goes up, contrary to dogma. And even more remarkably, the type of work done by individual scientists varies depending on whether they are affiliated with a large or a small research group. The same people who do original work in small teams will do consolidation work in large ones.
There are naturally some questions and caveats. How accurate is the determination of group size? Why shouldn’t large teams also do disruptive work (after all, they have people to spare)? What proportion of small-group publications go on to be genuinely “disruptive” rather than just ignored? And never mind disruptiveness, how many small groups actually last long enough to see their work bear fruit? Science may benefit from their input, but that’s scant consolation to the former junior group leader who’s now working as a barista.
What’s beautiful, though, is the way the study finally validates so many intuitions that we have about research. Big groups tend to do boring (often high-quality, but boring) work, while smaller groups are in a more precarious situation but somehow alive in a more vital way. Big groups are mainstream, small groups are underground. Big groups work in fashionable areas and cite the dozens of papers that come out in that field every month; small groups go chasing after the obscure ideas that never got followed up and lie buried in the literature. Big groups are Jack Vettriano, small groups are Vincent van Gogh.
A further, inescapable conclusion is that a healthy science ecosystem must resist the consolidating urge, must keep biodiversity high and maintain habitats for small teams as well as large ones. For scientists themselves, there’s a rather delicious implication: if you care about the originality of your science, you shouldn’t keep building your group simply for the sake of it.
To be clear, this is absolutely not an argument in favour of austerity – Wu et al.’s work concerns team size, not team resources. It’s also worth noting that plenty of important work gets done that’s not thrilling. There is clearly a requirement for medically relevant but conceptually dull research, and Wu et al. are concerned solely with originality, the stuff that tends to catalyse research and win prizes. Arguably, then, the best environment for originality would be one that insisted on small groups, but lavishly funded ones – interestingly, almost exactly the philosophy pursued by the MRC’s Laboratory of Molecular Biology and HHMI’s Janelia Farm campus.
The key point is that if you let success change you – if you embrace the path of the big group and allow success to remodel the environment in which you conduct your research, by growing – then the type of work you do will change too.
There’s a lesson too for universities and institutes that are always chasing after the latest trends – you will get success, you will get citations galore and high-impact (yawn) papers aplenty, but you may not get the Nobels. The smart money looks to the future, not to the now.