The Sad World of Uncited Papers


A Nature News feature examines academic papers that have never been cited.

According to author Richard Van Noorden, by some estimates, up to half of all papers remain uncited five years after publication, and even 10% of Nobel Prize winners’ papers go uncited.

However, Van Noorden reports that these estimates are far too high. For recent papers indexed on Web of Science (WoS), “records suggest that fewer than 10%” remain uncited, and even this is probably an overestimate, because WoS doesn’t track citations from journals outside its index, not to mention books, patents, etc. As for Nobelists’ papers, it seems that just 0.3% are uncited.

The proportion of uncited papers does seem to have been higher in the past, however, with 20% of WoS papers published in 1980 remaining without a single citation today. So uncited papers do exist, but they’re uncommon, and getting rarer – probably because the number of references cited in each paper is growing, so there are more citations to go around.

Van Noorden notes that we shouldn’t assume that an uncited paper is worthless – they may still influence researchers and practitioners.

The article contains a particularly interesting example of uncited impact: a paper that wasn’t cited because it “closed off an unproductive avenue of research”.

In 2003, Niklaas Buurma and colleagues published a paper about ‘the isochoric controversy’ – an argument about whether it would be useful to stop a solvent from contracting or expanding during a reaction, as usually occurs when the temperature changes.

In theory, this technically challenging experiment might offer insight into how solvents influence chemical reaction rates. But Buurma’s tests showed that chemists don’t learn new information from this type of experiment. “We set out to show that something was not worth doing — and we showed it,” he says. “I am quite proud of this as a fully uncitable paper,” he adds.

In my view, “false path closing” is a key part of the scientific enterprise, and papers like Buurma et al.’s are very important. Such papers don’t always get zero citations – they may even be highly cited – but I suspect that they rarely attract the kind of citations that greets the introduction of a new method (even if that method later turns out to be flawed).

Once a method is shown to be flawed, few people will talk about it – even to acknowledge whoever showed that it doesn’t work.

A scientist who sets out to criticize a certain technique will often be told ‘well, you come up with a better approach’ – and in terms of getting citations, proposing a better approach certainly beats writing a purely negative paper. But sometimes there is no better method available (at least not with current technology).

A scientist with doubts about the methods used in their field therefore faces a dilemma: continue using the flawed methods and publish citable if questionable results, or criticize the methods and risk cutting off the branch they stand on.
