Non-Peer Review of Medical Studies
The downside to online archives of pre-print articles.
WaPo (“Online archives where scientists post their research spark information revolution”):
News coverage of recent medical research often comes with a caveat that, before the pandemic, would have baffled many physicians — let alone other readers:
“This study was posted on a preprint server and has not yet undergone peer-review.”
Preprint servers — free online archives where scientists can post their research before formal publication — are a relatively new phenomenon in medicine (although popular for decades in other fields).
The traditional method of sharing new findings with the medical community is through a confidential process known as peer-review where study authors submit their research — including details on study design, results, conclusions and limitations of their findings — to a journal.
If the journal editor deems it worthy of further consideration, the research is usually sent to external experts in the field (the “peers” of peer-review) for comments. Based on their feedback, a study can be accepted for publication, rejected or given the opportunity for resubmission with revisions.
In the past, this back-and-forth process could drag out for months — sometimes for more than a year.
So when the coronavirus erupted in spring 2020, preprint servers were poised to lead a revolution.
MedRxiv (pronounced “Med-archive”) is one such server geared for the health-care community. It saw a five-fold jump in submissions in spring 2020 as researchers rushed to share knowledge of the deadly pandemic. More than 30,000 of the coronavirus articles published in 2020 were preprints — a trend that continued, albeit tapering, in 2021 even as major journals hastened their editorial review processes to accommodate the surge of studies.
The paradigm shift has forced scientists, journalists and the general public to change how they approach new studies.
In many cases, using a preprint server can lead to rapid dissemination of valid, much-needed data — such as this preprint study from June 2020 demonstrating that dexamethasone reduced deaths from severe covid-19 — a lifesaving treatment that changed medical practice weeks before the data was preliminarily published in the New England Journal of Medicine.
In December, data from a preprint study revealing the critical immune benefits of a booster shot against the omicron variant bolstered widespread recommendations by the Centers for Disease Control and Prevention and the Food and Drug Administration leading up to the holidays. Those findings were published online more than a week later by Cell and will be published in print in a February issue of the journal.
Some preprint studies, however, never make it through the peer-review process — or worse, report inaccurate findings that are spread by the media and public.
These rarer cases such as this summer’s notorious, now debunked preprint study on ivermectin can have dangerous consequences for public trust. Although the study was later retracted from the server, the damage was difficult to contain: It had been viewed more than 150,000 times and covered widely by the news media — not to mention contributed to a host of serious side effects among those who ingested ivermectin with no proven benefit against covid-19.
“People should realize that preprint data should be considered a preliminary version of the report,” said Douglas Jabs, director of the Center for Clinical Trials and Evidence Synthesis at the Johns Hopkins Bloomberg School of Public Health. “The final interpretation could be subject to some change.”
“Most articles that are ultimately accepted by peer-review are revised prior to publication, indicating there is usually potential for improvement,” he said.
There’s a whole lot more but you get the point.
Sharing preliminary research with other medical professionals through open servers strikes me as good practice for reasons that are obvious—getting information out faster is vital in emergencies like a pandemic—and less so—it helps those building a reputation as scholars establish the provenance of their ideas, since others may have similar ideas and reach print first because of the vagaries of the editorial practices of competing journals.
At the same time, it borders on criminal to make these studies available to every Tom, Dick, and Tucker. In an era where every idiot is doing his own research, putting unverified findings out in the wild simply adds to the confusion and misinformation. “Why, I have a study from Harvard Medical School proving my insane theory!”
I’ll once again share an old joke attributed to a Dr. EE Peacock:
One day when I was a junior medical student, a very important Boston surgeon visited the school and delivered a great treatise on a large number of patients who had undergone successful operations for vascular reconstruction. At the end of a lecture, a young student at the back of the room timidly asked, “Do you have any controls?” Well, the great surgeon drew himself up to his full height, hit the desk, and said, “Do you mean did I not operate on half of the patients?” The hall grew very quiet then. The voice at the back of the room very hesitatingly replied, “Yes, that’s what I had in mind.” Then the visitor’s fist really came down as he thundered, “Of course not. That would have doomed half of them to their death.” G·d, it was quiet then, and one could scarcely hear the small voice ask, “Which half?”
That the Internet has made it much easier to do research is almost an unalloyed good. Academia of all sorts, and certainly medical academia, should do more to remove barriers to accessing vetted research. But unreviewed findings on the topic du jour are dangerous in the wrong hands.
I can tell you that in surgery, at least, a lot of papers boil down to “I tried this on six patients and here is what I thought.” Not really very useful, and potentially harmful.
I have mixed feelings about this. When used as intended it is beneficial. It lets authors put up papers and get free and often helpful commentary. They can revise studies before submitting them for real publication. As a reader you can get ideas about what might be happening in the near future. So for someone like me who has been reading the medical literature for about 40 years it is moderately helpful. I can usually spot obviously bad papers right away and can often tell which are suspect, especially just from being severely underpowered. Most of all, even if you think the findings in one of these papers are probably correct, you understand that you need to wait for peer review and you probably need further studies for confirmation.
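For readers unfamiliar with the term, “underpowered” has a precise statistical meaning: the study enrolled too few patients to have a reasonable chance of detecting the effect it was looking for. A minimal sketch of the standard normal-approximation power calculation, with purely illustrative numbers (not drawn from any study discussed here):

```python
from math import sqrt
from statistics import NormalDist

def two_sample_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test for a
    standardized effect size (Cohen's d), assuming equal group sizes."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)             # critical value, ~1.96 at alpha = 0.05
    noncentrality = effect_size * sqrt(n_per_group / 2)
    return nd.cdf(noncentrality - z_crit)          # probability of detecting the effect

# A medium effect (d = 0.5) with only 6 patients per arm is detected
# well under 20% of the time; roughly 64 per arm is needed for 80% power.
print(round(two_sample_power(0.5, 6), 2))
print(round(two_sample_power(0.5, 64), 2))
```

A six-patients-per-arm paper of the “I tried this and here is what I thought” variety, in other words, will miss even a real, clinically meaningful effect the large majority of the time, which is why sample size alone is often enough to flag a suspect preprint.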
On the downside, you have a ton of people with no understanding of statistics, no knowledge of the area they are reading about, and no understanding of the history of publications on the topic.
Heck, everyone on the internet is an instant expert. You find a preprint that confirms what you want to believe and it becomes gospel truth. What is especially sad, at least to me, is that if you interact with people it is painfully obvious that most of the time the people who cite these awful studies have not even read them. Since this is what I do for a living I have a real interest so I actually do read them and then when you try to interact on the article you find it was just recommended to them by either some family member or someone they have decided is an authority, again because that authority is telling them what they want to hear.
I have ended up coming down on the side that we should keep these. What is not mentioned here is that there are now also a lot of journals where you can pay to have your research published. They all have legitimate-sounding names. Then you have the quack organizations like Front Line Doctors that put out bad studies. If you do away with the pre-prints I think you will still get the bad advocacy papers published and lose the positive parts of pre-prints. So let’s keep them.
(Slightly OT but VAERS causes a similar problem. If you have longer-term experience with VAERS you know what it is and what it is and is not useful for. You now have people misusing VAERS data to achieve their political/ideological goals.)
I recently read someone characterize these folks as “intellectually curious, but lacking in any intellectual rigor.” One can’t go without the other. And I’m not even sure how intellectually curious they are, as opposed to just wanting to be told what they want to hear.
Anyone who cites The Conservative Tree House as an informed source is, at best, seeking confirmation bias.
Very informative. And shocking.
This seems so obvious. What is the counterargument? I’m shocked this is a thing. Shouldn’t preprint servers be available only to peers in a specific domain or cohort?
In certain arenas, gatekeeping serves a very good purpose.
In medicine, you might kill someone or get a less than optimal outcome because you were leaning on bullshit to guide you.
With tongue only partially in cheek… Given the number of published peer-reviewed medical papers with results that can’t be reproduced by other researchers that we read about, that “unreviewed” may be unnecessary.
It’s complex. Journals as gatekeepers isn’t ideal – if for no other reason than the glacial pace of review and publication. Pre-prints allow ethical researchers to get good work out faster and speed the turnover of information and ideas. Unfortunately, they’re 1) useful for unethical researchers to get attention and money and 2) used to spread misinformation (see early ivermectin COVID studies). Ideally, researchers would use preprints to improve ideas and speed science and reporters would avoid them because they are, by definition, the unverified first draft of science…
It’s also important to point out that peer review is primarily a spam filter – peer review does not validate results or methodology, it just looks for gross mistakes and fabrications and, like any process, is subject to gaming.
Regardless, it’s a difficult problem. Doing science properly and validating results takes time, and often society and political leaders have to make decisions and take actions before that can happen. The best course of action is to be open and transparent about the uncertainties, about what is actually known and unknown, and the reasoning behind decisions. And then make corrections as better information becomes available.
Sadly that’s very rarely the reality — social psychology drives incentives to downplay uncertainties and project more confidence in a decision than is actually the case. Then this gets filtered through the business of journalism and the attendant incentives before it reaches the public.
People make fun of “doing your own research” but I’ve found that’s the only way I can reliably know what is actually going on given the poor state of most journalism and it’s made all the more difficult when journalists, experts and interested parties don’t transparently show their work.
….and sometimes the preprints never make it to publication because of totally other reasons (like your co-author never getting around to rewrite his half of the paper grumble grumble hiss.)
I’m of the opinion that access to preprints should be limited to those who show they understand the scientific method, demonstrate that if data which contradicts their theories shows up the theory will be thrown out, not the data, and that they understand how error bars work. (Dudes, you can’t argue about the curve going through data when the error bars run the length of the page.)