Scientists Behaving Badly

You may think that science is a noble activity carried out by dedicated souls thinking only of the search for understanding and of improving the lot of society. Wrong! According to an item published in Nature, there is rot in the core. A survey was sent to 64,000 researchers at 22 universities in the Netherlands; 6,813 actually filled out the form and returned it, and an estimated 8% of those who did, responding anonymously, confessed to falsifying or fabricating data at least once between 2017 and 2020. Given that a fraudster is less likely to confess, that figure is probably an underestimate.
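As a quick sanity check on those figures, the percentages come from the article itself, but the absolute counts below are my own back-of-envelope arithmetic:

```python
# Back-of-envelope arithmetic on the Nature survey figures quoted above.
surveyed = 64_000       # researchers contacted
responded = 6_813       # forms actually returned
confession_rate = 0.08  # 8% of respondents admitted falsifying/fabricating data

response_rate = responded / surveyed
confessions = confession_rate * responded

print(f"Response rate: {response_rate:.1%}")            # roughly 10.6%
print(f"Implied confessions: about {confessions:.0f}")  # roughly 545 respondents
```

So even taking the survey at face value, several hundred researchers admitted fabrication, out of a self-selected tenth of those asked.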

There is worse. More than half of respondents also reported frequently engaging in “questionable research practices”. These include using inadequate research designs, which can be due to poor funding and hence is more understandable; frankly, that one could be a matter of opinion, although if you confess to doing it you are at best slothful. Much worse, in my opinion, was deliberately judging manuscripts or funding applications unfairly while peer reviewing. Questionable research practices are “considered lesser evils” than outright research misconduct, which includes plagiarism and data fabrication. I am not so sure of that. Dismissing someone else’s work or funding application hurts their career.

There was then the question of “sloppy work”, which included failing to “preregister experimental protocols (43%), make underlying data available (47%) or keep comprehensive research records (56%)”. I might be in danger here. I had never heard of “preregistering protocols”; I suspect that is more common in medical research than in the physical sciences. My research has always been of the sort where you plan the next step based on the last step you have taken. As for “comprehensive records”, I must admit my lab books have always been cryptic. My plan was to write things down, and as long as I could understand them, that was fine. Of course, I have worked independently, and records were kept so that I could report more fully and, to some extent, for legal reasons.

If you think that is bad, there is worse in medicine. On July 5 an item appeared in the British Medical Journal with the title “Time to assume that health research is fraudulent until proven otherwise?” One example: a professor of epidemiology apparently published a review paper that included a paper showing mannitol halved the death rate from comparable injuries. It was pointed out to him that the paper he reviewed was based on clinical trials that never happened! All the trials came from a lead author who “came from an institution” that never existed! There were a number of co-authors, but none had ever contributed patients, and many did not even know they were co-authors. Interestingly, none of the trials has been retracted, so the fake stuff is still out there.

Another person who carried out systematic reviews eventually realized that only too many related to “zombie trials”. This is serious because it is only by reviewing a lot of different work that more important over-arching conclusions can be drawn, and if a reasonable percentage of the data is just plain rubbish, everyone can jump to the wrong conclusions. Another medical expert, attached to the journal Anaesthesia, found that of 526 trials, 14% had false data and 8% were categorised as zombie trials. Remember, if you are ever operated on, anaesthetics are your first hurdle! One expert has guessed that 20% of clinical trials as reported are false.
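The percentages above come from the article; turning them into rough trial counts is my own arithmetic:

```python
# Rough counts behind the Anaesthesia audit quoted above.
trials = 526
false_data_rate = 0.14  # 14% of trials contained false data
zombie_rate = 0.08      # 8% were "zombie" trials (appeared never to have run)

false_data_trials = false_data_rate * trials  # ~74 trials
zombie_trials = zombie_rate * trials          # ~42 trials

print(f"False data: ~{false_data_trials:.0f} trials; zombies: ~{zombie_trials:.0f} trials")
```

In other words, on those figures roughly one trial in seven from that sample could not be trusted.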

So why doesn’t peer review catch this? The problem for a reviewer such as myself is that when someone reports numbers representing measurements, you naturally assume they were the results of measurement. I look to see that they “make sense”, and if they do, there is no reason to suspect them. Further, to reject a paper by accusing its authors of fraud is very serious for their careers, so who will do this without some sort of evidence?

And why do they do it? That is easier to understand: money and reputation. You need papers to get research funding and to keep your position as a scientist. Fraud is very hard to detect unless someone repeats your work, and even then there is the question: did they truly repeat it? We tend to trust each other, as we should be able to. Published results get rewards, publishers make money, and universities get glamour (unless they get caught out). Proving fraud (as opposed to suspecting it) is a skilled, complicated and time-consuming process, and since it reflects badly on institutions and publishers, they are hardly enthusiastic. Evil peer review, i.e. dumping someone’s work to promote your own, is simply strategic, and nobody will do anything about it.

It is, apparently, not a case of “bad apples”, but as the BMJ article states, a case of rotten forests and orchards. As usual, as to why, follow the money.
