Sometimes, scientific misconduct is so blatant as to be comical. I recently came across an example of this on Twitter. The following is an image from a paper published in the Journal of Materials Chemistry C:
As pointed out on PubPeer, this image – which is supposed to be an electron microscope image of some carbon dot (CD) nanoparticles – is an obvious fake. The “dots” are identical, and have clearly been cut-and-pasted. Where one copy has been placed over the top of another, the overlap is quite visible.
It would be charitable even to call this 'scientific' fraud, given how crude it is.
The Journal of Materials Chemistry editors said on Twitter that they are “urgently” looking into this paper; I’ve no doubt it will be retracted soon, although the fact that it was published at all raises questions about the peer-review standards of this journal.
As a neuroscientist, cases like this from chemistry worry me. In a field like materials chemistry – or any field in which results take the form of images or photographs (such as Western blots) – low-effort fraud is easy to spot, because the manipulation of an image can, at least in unsubtle cases, be proven from the image itself.
But what about fields like psychology or neuroscience, where data don't take the form of images? Perhaps low-effort frauds occur in these fields as well, but they are much harder to detect when the results are statistical rather than pictorial in nature.
(An aside: neuroscience has plenty of images, e.g. fMRI maps, but I've never heard of a case of someone manipulating neuroimages. I've heard researchers joke about being tempted to "add some blobs in MS Paint", but I'm not aware of this actually happening. A fraudster wouldn't need to manipulate neuroimages directly, however, because these images are usually depictions of statistics, not actual pictures, so fiddling the underlying data would be enough.)
There are many statistical tools for detecting fraud, and I've blogged about some of them before. But these methods don't directly show fraud. They can warn us that a certain set of data is extremely unlikely to be real, but they usually can't show how the fraud was performed – unlike images, which show us that e.g. copy-pasting was done (although see). And these tools are only applied once suspicion has been raised about a dataset, whereas image manipulation can be immediately visible in the picture itself.
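To give a concrete sense of what such a tool looks like: one well-known check of this kind is the GRIM test (Brown & Heathers), which asks whether a reported mean is even arithmetically possible given the sample size, on the assumption that the raw scores are integers. Here is a minimal sketch in Python – the function name and the example numbers are mine, for illustration only, not any particular published implementation:

```python
def grim_consistent(mean, n, decimals=2):
    """Check whether a reported mean could arise from n integer scores.

    A mean reported to `decimals` places is consistent if some integer
    total, divided by n and rounded the same way, reproduces it.
    """
    total = round(mean * n)  # nearest candidate integer total
    # Check the neighbouring totals too, to allow for rounding in the report.
    return any(
        round(t / n, decimals) == round(mean, decimals)
        for t in (total - 1, total, total + 1)
    )

# A mean of 5.19 from n = 28 integer responses is arithmetically impossible:
print(grim_consistent(5.19, 28))  # False -> worth a closer look
# Whereas 5.18 is fine (145 / 28 = 5.1786, which rounds to 5.18):
print(grim_consistent(5.18, 28))  # True
```

Note that a failed check like this flags an impossible statistic, but – as with the other tools – it can't tell you how the number came to be wrong.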
So, my worry is that psychology and neuroscience might have their own share of "copy-pasted carbon dots", and no-one would ever know.
I’m not suggesting that this is a common problem; I think the vast majority of researchers are not frauds, and I think there are bigger systemic problems in science. It’s more a matter of pride. If I’m going to get fooled by a fraud, I’d want them to at least be a good fraud.