In a recent article in the Guardian, Brian Deer poses the question of whether regulation needs to be applied to scientific research. In the article, “Scientific fraud in the UK: The time has come for regulation”, Mr. Deer states:
Fellows of the Royal Society aren’t supposed to shriek. But that’s what one did at a public meeting recently when I leapt onto my hobbyhorse: fraud in science. The establishment don’t want to know. An FRS in the audience – a professor of structural biology – practically vaulted across the room in full cry. What got this guy’s goat was my suggestion that scientists are no more trustworthy than restaurant managers or athletes.
Restaurant kitchens are checked because some of them are dirty. Athletes are drug-tested because some of them cheat. Old people’s homes, hospitals and centres for the disabled are subjected to random inspections. But oh-so-lofty scientists plough on unperturbed by the darker suspicions of our time.
Mr. Deer’s article mentions a just-released UK Government report, “Peer review in scientific publications”. Here is the final paragraph from the summary of that report:
Finally, we found that the integrity of the peer-review process can only ever be as robust as the integrity of the people involved. Ethical and scientific misconduct—such as in the Wakefield case—damages peer review and science as a whole. Although it is not the role of peer review to police research integrity and identify fraud or misconduct, it does, on occasion, identify suspicious cases. While there is guidance in place for journal editors when ethical misconduct is suspected, we found the general oversight of research integrity in the UK to be unsatisfactory. We note that the UK Research Integrity Futures Working Group report recently made sensible recommendations about the way forward for research integrity in the UK, which have not been adopted. We recommend that the Government revisit the recommendation that the UK should have an oversight body for research integrity that provides “advice and support to research employers and assurance to research funders”, across all disciplines. Furthermore, while employers must take responsibility for the integrity of their employees’ research, we recommend that there be an external regulator overseeing research integrity. We also recommend that all UK research institutions have a specific member of staff leading on research integrity.
I find it odd that they are focusing on peer review, which seems to me to be only a narrow aspect of research integrity. That said, the report does recommend that “an oversight body for research integrity” be formed.
Back to Mr. Deer’s article. He quotes a Dr. David Taylor on why such oversight might not be needed:
“It is important to recognise that in the long term it matters little if published material is inaccurate, incompetent or even fraudulent, since the advance of the scientific canon only uses that material which turns out to fit the gradually emerging jigsaw,” is how Dr David Taylor, a former executive at AstraZeneca, expressed this tenet in a recent submission to the House of Commons science and technology committee, which publishes a report today.
I think that Dr. Taylor is taking a rather idealistic view of research. Yes, research has a self-correcting nature: results which are wrong will not be replicated and will, over time, fade.
But how much time does that self-correction take? Autism research provides, unfortunately, a prime example of the persistence of poor-quality, even fraudulent, research. For example, the concept of an epidemic caused by vaccines, whether through the MMR vaccine or through thimerosal, was promoted by research which ran the gamut from reasonable speculation to outright fraud. One of the chief examples of research fraud which the committee cited in its report concerns the MMR/autism hypothesis.
The problem is that while we wait for this “self-correction”, real people suffer the consequences. Aside from the mental anguish it has caused, the vaccine/autism epidemic idea has spawned an industry of alternative medicine practitioners and treatments. These treatments run the gamut from worthless but harmless to powerful and potentially dangerous.
Researchers, especially those who are publicly funded and/or publish, hold a public trust. Certainly, researchers hold a trust to use public funds wisely. Unfortunately, published research, even bad published research, is used to promote non-science agendas. The term “tobacco science” gets thrown around a lot, but the fact is that journal publications are sometimes less about reporting results than about making a political or business statement. This happens for both “big pharma” and “little pharma”. The harm from research fraud, or even just heavily biased research, is not limited to medicine. But I would posit that the most harm is done in the area of medical research.
As an American, I will be only an observer of whether and how the UK pursues regulation of research integrity. However, the damage from research fraud knows no boundaries. I don’t know if there is an optimal solution which reduces the damage of research fraud through regulation while still preserving researchers’ freedom of self-direction. Is there a need? I’d say yes. Taking the Wakefield affair as an example, there may be few instances of truly damaging fraudulent research, but the damage from even these few can be very great.