By Olavo Amaral
Academic science's quality sieve exudes authority but means little
You can bet that, sooner or later, in any discussion of scientific data, someone will invoke the "peer-reviewed article" argument, either to lend credibility to a claim or to discredit one that lacks that seal.
Peer review – the approval of independent researchers before an article is published – has been considered the bastion of scientific research for decades (or more than a century, depending on the field) and, for many, delimits what counts as "science" and what does not.
In an iconic image from the March for Science in Washington in 2017, a poster in front of the Capitol reads "In Peer Review we trust", an allusion to "In God we trust". The substitution, however, amounts to exchanging one dogmatic belief for another.
After all, "peer-reviewed" just means that a few people – usually two or three – have read an article and seen no reason to deny its publication. Since the process usually takes place behind closed doors, we don't know who these people are, what opinions they expressed, or what they actually took the trouble to review.
Moreover, reviewers usually receive no instructions on what to examine and are neither paid nor rewarded for their work, leaving them with little incentive to review thoroughly. Unsurprisingly, the agreement between different reviewers is minimal and sometimes almost random.
As if that weren't enough, reviewers only act at the end of the scientific process, when problems with data collection can no longer be fixed. Worse, they work from the authors' reports and generally have no access to the original data, which prevents them from identifying most of the errors and omissions that can occur during a project.
If this doesn't strike you as a problem, imagine applying the same logic in other areas. If an airline told you it delegated its quality control to two or three experts reviewing a multi-page report on the construction of a finished aircraft, would you get on board?
The scientific community's reliance on peer review is even more worrying given the limited evidence in the literature on the procedure's effectiveness. Comparisons between preprints – articles posted prior to peer review – and their revised versions show that differences in quality are small and that both the text and the key conclusions rarely change.
When it comes to the filtering function, the system's failure is even more noticeable. Nonsensical articles with blatant errors or absurd conclusions, written as hoaxes, will always be accepted somewhere. The problem is exacerbated by so-called "predatory journals" – journals that charge publication fees and whose profit is maximized through a lack of rigor.
The Covid-19 pandemic is fertile in examples of the system's fragility. Theoretically peer-reviewed journals have published things as bizarre as the idea that 5G technology could produce SARS-CoV-2. Meanwhile, journals whose editors are affiliated with Didier Raoult's Institut Hospitalo-Universitaire Méditerranée Infection have become a skewed showcase of studies advocating the use of hydroxychloroquine.
It would be easy to attribute the problem to poor-quality publications, but the pandemic's most notorious scandal struck the Lancet and the New England Journal of Medicine, the world's most respected medical journals, which were forced to retract articles over suspected data fabrication by the company Surgisphere.
This is not surprising: although traditional journals tend to be more selective in accepting articles, their peer review processes do not differ significantly. In addition, the pressure to publish in these journals can lead scientists to embellish their results to make them more attractive. Thus, publication in a high-impact journal does not solve the problem as a quality criterion: visibility and reliability are ultimately different things.
In the Surgisphere episode, critics were quick to point to culprits such as editorial bias or rushed reviewers. In truth, however, the fault lies with the review system itself, which cannot detect well-executed fraud without access to the data or to the process by which they were collected.
If peer review is not the yardstick, what can we call "scientifically proven"? Perhaps the best answer, somewhat tautological, is "the scientific consensus". But identifying that consensus is not always straightforward. The positions of scientific institutions and societies serve as an approximation, but they have their political side – which, in cases like the Brazilian medical associations, often flirts with corporatism – and are far from impartial.
The truth is that we lack efficient institutional means of delimiting reliable science, and that absence is sorely felt in public debate. It is evident in fact-checking agencies racking their brains over the dozens of articles for and against early treatment for Covid, a matter too complicated to be reduced to "#fact" or "#fake".
So there is much to be done to build a seal of trust that goes beyond "peer-reviewed". This will only be achieved, however, if we abandon the belief that having two or three reviewers examine a PDF is enough to judge the quality of a process as complex as scientific research.
There is no shortage of successful examples: audits, certifications and standard operating procedures are routine in airports, buildings and hospitals, and one wonders why they are so rare in academic institutions. Even initiatives like Wikipedia have more sophisticated and robust review and correction processes than the anemic and opaque peer review of scientific articles.
Without better controls, academic research remains vulnerable to fraud, error and bias, and fuels quackery bearing the "scientifically proven" seal. This is simply the natural consequence of believing in a process in which no one sees what is being done. As in the children's story, the emperor is naked beneath the invisible clothing of peer review, and sometimes it takes a child, or a pandemic, to force us to admit it.
Olavo Amaral is a professor at the Leopoldo de Meis Institute of Medical Biochemistry at UFRJ and coordinator of the Brazilian Reproducibility Initiative.
Subscribe to Serrapilheira's newsletter for more news from the Institute and from the Ciência Fundamental blog. Do you have a story suggestion? Here is how to collaborate.