Non-Peer Review of Medical Studies
The downside to online archives of pre-print articles.
WaPo (“Online archives where scientists post their research spark information revolution”):
News coverage of recent medical research often comes with a caveat that, before the pandemic, would have baffled many physicians — let alone other readers:
“This study was posted on a preprint server and has not yet undergone peer-review.”
Preprint servers — free online archives where scientists can post their research before formal publication — are a relatively new phenomenon in medicine (although popular for decades in other fields).
The traditional method of sharing new findings with the medical community is through a confidential process known as peer review, in which study authors submit their research — including details on study design, results, conclusions and limitations of their findings — to a journal.
If the journal editor deems it worthy of further consideration, the research is usually sent to external experts in the field (the “peers” of peer review) for comments. Based on their feedback, a study can be accepted for publication, rejected or given the opportunity for resubmission with revisions.
In the past, this back-and-forth process could drag out for months — sometimes for more than a year.
So when the coronavirus erupted in spring 2020, preprint servers were poised to lead a revolution.
MedRxiv (pronounced “Med-archive”) is one such server geared for the health-care community. It saw a five-fold jump in submissions in spring 2020 as researchers rushed to share knowledge of the deadly pandemic. More than 30,000 of the coronavirus articles published in 2020 were preprints — a trend that continued, albeit tapering, in 2021 even as major journals hastened their editorial review processes to accommodate the surge of studies.
The paradigm shift has forced scientists, journalists and the general public to change how they approach new studies.
In many cases, using a preprint server can lead to rapid dissemination of valid, much-needed data — such as this preprint study from June 2020 demonstrating that dexamethasone reduced deaths from severe covid-19 — a lifesaving treatment that changed medical practice weeks before the data was preliminarily published in the New England Journal of Medicine.
In December, data from a preprint study revealing the critical immune benefits of a booster shot against the omicron variant bolstered widespread recommendations by the Centers for Disease Control and Prevention and the Food and Drug Administration leading up to the holidays. Those findings were published online more than a week later by Cell and will be published in print in a February issue of the journal.
Some preprint studies, however, never make it through the peer-review process — or worse, report inaccurate findings that are spread by the media and public.
These rarer cases, such as this summer’s notorious, now-debunked preprint study on ivermectin, can have dangerous consequences for public trust. Although the study was later retracted from the server, the damage was difficult to contain: It had been viewed more than 150,000 times and covered widely by the news media — not to mention contributing to a host of serious side effects among those who ingested ivermectin with no proven benefit against covid-19.
“People should realize that preprint data should be considered a preliminary version of the report,” said Douglas Jabs, director of the Center for Clinical Trials and Evidence Synthesis at the Johns Hopkins Bloomberg School of Public Health. “The final interpretation could be subject to some change.”
“Most articles that are ultimately accepted by peer-review are revised prior to publication, indicating there is usually potential for improvement,” he said.
There’s a whole lot more, but you get the point.
Sharing preliminary research with other medical professionals through open servers strikes me as good practice, for reasons both obvious and less so. Obviously, getting information out faster is vital in emergencies like a pandemic. Less obviously, it helps those building reputations as scholars establish the provenance of their ideas, since others may have similar ideas and reach print first because of the vagaries of competing journals’ editorial practices.
At the same time, it borders on criminal to make these studies available to every Tom, Dick, and Tucker. In an era when every idiot is doing his own research, putting unverified findings out in the wild simply adds to the confusion and misinformation. “Why, I have a study from Harvard Medical School proving my insane theory!”
I’ll once again share an old joke attributed to a Dr. EE Peacock:
One day when I was a junior medical student, a very important Boston surgeon visited the school and delivered a great treatise on a large number of patients who had undergone successful operations for vascular reconstruction. At the end of a lecture, a young student at the back of the room timidly asked, “Do you have any controls?” Well, the great surgeon drew himself up to his full height, hit the desk, and said, “Do you mean did I not operate on half of the patients?” The hall grew very quiet then. The voice at the back of the room very hesitatingly replied, “Yes, that’s what I had in mind.” Then the visitor’s fist really came down as he thundered, “Of course not. That would have doomed half of them to their death.” G·d, it was quiet then, and one could scarcely hear the small voice ask, “Which half?”
That the Internet has made it much easier to do research is an almost unalloyed good. Academia of all sorts, and certainly medical academia, should do more to remove barriers to accessing vetted research. But unreviewed findings on the topic du jour are dangerous in the wrong hands.