By Olavo Amaral
Scientist rankings are popular, but who rates the contribution of science?
* *
Researchers at Stanford University, together with SciTech and Elsevier, have published an updated ranking of the 100,000 scientists who received the most citations to their articles, both in 2019 and over the past 22 years. The ranking was celebrated by universities, which were quick to announce how many of their professors had made the list: citations are a popular measure of influence, or academic “impact,” and both institutions and academics are often obsessed with this kind of metric. Attentive observers also saw good news for Brazilian science – after all, the country went from 600 scientists on the list covering the past 22 years to 853 on the 2019 list.
Counting citations, however, raises a number of controversies. On the one hand, it is a better measure of academic relevance than the number of articles published or the prestige of the journals they appear in – metrics that, unfortunately, still dominate the evaluation of researchers in Brazil. On the other hand, it has important limitations: it varies with research area (scientists in popular fields are cited more often) and with collaboration (those who work with renowned colleagues are cited more), and it is open to manipulation (such as self-citation and mutual-citation cartels).
Even this criticism, however, does not touch the main problem. Citations can be a reasonable way to measure academic impact, but even when used judiciously, they measure nothing more than that. A ranking of scientists by citations thus shows who holds the greatest internal prestige in a given field, but says little about the quality and importance of their contributions to society.
While we can celebrate the growing number of Brazilians on the list, the figure says little about the real significance of national science. It gives an idea of our relative position in the academic race – where we remain in the middle of the pack, well behind the scientific powers of the developed world. But it does little to help us understand where the race is headed.
Academic science is known to have numerous structural problems, many of them driven by hypercompetitiveness. Surveys in various fields show that most results published in scientific articles cannot be reproduced by other researchers – and yet they go on being cited. Counting a researcher’s citations without checking the reliability of their publications can therefore amount to shooting ourselves in the foot: when a result is unreliable, impact becomes a cancer rather than a virtue.
Beyond that, citations measure influence within the academic world, which says little about the contribution of a line of research – or of an entire scientific field – to society. The culture of self-regulation by the academic community comes from far back: according to the model proposed by the American Vannevar Bush in the 1940s, investing in basic research driven by the interests of scientists themselves would be, in the long run, the smartest way to reap the fruits of science.
Still, it is not hard to imagine that self-regulation could leave an entire scientific field lost chasing unicorns. There are scientific journals devoted to creationism or homeopathy whose articles are cited by researchers in those areas, even though such theories rest on premises that appear physically impossible. Might there not be academic blind alleys within so-called “serious” science that are equally dead-ended, but harder to identify?
The American Daniel Sarewitz describes academic research as “a masturbatory endeavor worthy of Swift or Kafka”: scientists produce data to publish articles, which generate citations, which bring funding to produce more data. He illustrates the picture with the myriad animal models of Alzheimer’s disease, in which hundreds of therapies effective at treating mice have been developed, generating articles, citations, and reputations for their authors. Virtually none of this has translated into benefits for humans, however, raising the suspicion that we have been studying alterations that may have little to do with the real disease.
Of course, Sarewitz may yet have to eat his words: nothing prevents these models from bearing fruit in another 10 or 20 years, not least because the consensus of the academic community is right more often than it is wrong. Citation rankings are thus not useless, and having academic influence, even if it does not guarantee the applicability of one’s research, is usually better than not having it. But shouldn’t we also assess the quality and practical results of science more directly?
Our contribution in this direction is the Brazilian Reproducibility Initiative, which is attempting to reproduce a random sample of results published by scientists in Brazil over the past 20 years, through a network of more than 60 laboratories. The project, however, is restricted to a small number of biomedical research methods, and barely scratches the surface of the reproducibility problem. And it leaves aside larger and perhaps more important questions, such as the social impact of the research being done, for which we lack good indicators in most areas.
Perhaps we should worry less about climbing the rankings of an academic science that is self-centered, hypercompetitive, and not very transparent – not least because it is a train that often heads in the wrong direction. Perhaps it is better to use our scientific building blocks to construct a better train, one bound for healthier values.
A more meaningful reason to celebrate might be the fact that Brazil is the world champion in the percentage of articles published in open access, according to a 2018 survey. The title is largely due to the SciELO platform, which has allowed us to build a scientific publishing infrastructure that is cheaper and more rational than the academic publishing market, in which scientists hand over their work for free to companies that profit by placing it behind paywalls. The Brazilian government itself, however, sabotages this effort by requiring researchers to publish in prestigious foreign journals – and they often pay dearly to do so.
The search for more relevant ways of evaluating science, however, should not be expected to come from within the academic world, dominated by researchers obsessed with the metrics that allow them to survive in the system and keep the wheel turning. On the contrary: building a healthier science requires bringing the subject into public debate, open to two-way communication with other actors. How to do this in a mature way in times of science denialism is a delicate balancing act: it is not hard to imagine that criticism of science’s self-regulation could backfire under governments thirsty for excuses to do away with it.
Even so, we cannot wait for the darkness to lift before starting the discussion. Academic rankings deserve a passing glance, but no more than that, since citations do not make anyone’s word true – as their clumsy use as an ad hominem argument in recent pro-hydroxychloroquine manifestos shows. Fortunately, to make a difference in the real world, we will need to earn the public’s trust with more than that.
* *
Olavo Amaral is a professor at the Leopoldo de Meis Institute of Medical Biochemistry at UFRJ and coordinator of the Brazilian Reproducibility Initiative.
Sign up for the Serrapilheira newsletter for more news from the institute and the Fundamental Science blog.