Discover Magazine
The discussion of scientific papers on Twitter is largely dominated by spam bots, paid content promoters, and "monomaniacs" obsessed with a single issue. That's according to researchers Nicolas Robinson-Garcia and colleagues in a new paper called "The unbearable emptiness of tweeting—About journal articles".
To reach their bleak conclusion, Robinson-Garcia et al. read 8,206 tweets. Tweets were included if they contained a link to a peer-reviewed paper in the field of dentistry and dental science, and if they originated from the USA between June 2011 and June 2016.
The authors’ discussion of the tweets is wide-ranging, but I think their points can be reduced to two main arguments.
First off, Robinson-Garcia et al. criticize the idea that the amount of tweets mentioning a given paper is a valid measure of how many people have engaged with it. They specifically criticize the Altmetric score, a measure of scientific ‘impact’ which includes tweeter count as one of its components:
A multi-year campaign has sought to convince us that counting the number of tweets about papers has value. Yet, reading tweets about dental journal articles suggested the opposite. This analysis found: obsessive single issue tweeting, duplicate tweeting from many accounts presumably under centralized professional management, bots, and much presumably human tweeting duplicative, almost entirely mechanical and devoid of original thought.
Advocates of tweeting about the research literature and of altmetrics have extolled an ideal of curating and informing about the literature. Some accounts exemplified this, but they posted less than 10% of tweets about dental papers. Finding these accounts or seeing their influence on Twitter data about dental papers would be like looking for a needle in a haystack.
I agree that raw tweet counts are not very useful, but I think the people behind Altmetric would also agree. According to Altmetric, the Twitter component of their impact score is weighted based on the influence of the tweeting accounts, whether an account selectively tweets about particular journals, and other factors. In other words, Altmetric already incorporates measures intended to filter out low-quality tweets and journals' social media teams.
Whether Altmetric's quality control measures are effective is another question, but Robinson-Garcia et al. didn't evaluate this. They didn't study Altmetric scores, but rather looked at raw tweet counts (although they did draw their sample from Altmetric's dataset).
Robinson-Garcia et al.’s second argument in their paper is more of a philosophical objection to Twitter as a medium for scientific discussion. They come close to suggesting that the majority of human Twitter users who tweet about papers are simply ‘mechanically’ clicking on things, making them little better than bots:
Chu and colleagues [45] offered several criteria distinguishing humans from bots on Twitter: originality, evidence of intelligence and specificity. Most accounts’ tweets about dental papers violated two of Chu et al.’s criteria for identifying a human because they lack original and intelligent content… The title and URL of a paper can be tweeted by clicking on an icon on a paper’s page. Again, this is easily automated and excessive automation was another of Chu et al.’s criteria for being a bot.
Then again, this rather misses the point that it is people who decide which share buttons to click on, i.e. which papers they find most interesting.
The authors go on to suggest that Twitter is fundamentally not a good forum for serious discussion:
In forums or blogs, the concept of a thread applies; a focus is expected. On Twitter, rich conversation invariably signals a veering away from the original tweet with reactions based on how tweeters feel at the time, not exactly stream of consciousness because that implies pure thought, but stream of judgement, attitude or feeling… Twitter's thinned out, 140 character missives contrast with the thick and rich texts of research and scholarship. Twitter's valuing of volume contrasts with the thoughtfulness of high quality scholarship.
Robinson-Garcia et al. say plenty more in this vein, but in my view their critique of Twitter discourse is misplaced, because Twitter is not supposed to be a replacement for ‘high quality scholarship’ but is rather a medium of conversation.
One could take Robinson-Garcia et al.'s points and apply them equally well to the conversational medium of human speech. In fact, if you recorded the spoken words of all the attendees at a scientific conference (say), you would probably conclude that science was doomed.
I suspect that the vast majority of speech within the conference venue would be a kind of ‘stream of judgement, attitude or feeling’, if it was related to science at all. This doesn’t mean that talking about science is pointless.
So I disagree with much of this article, but that’s not to say it’s a bad paper. In taking social media seriously and evaluating it critically, Robinson-Garcia et al. are doing important work.