Should the H-Index Even Be a Criterion To Measure Scientific Excellence?

Research performance needs to be evaluated in a much more holistic manner, rather than through just one parameter.

This year’s Nobel Prize in Physics was awarded to Alain Aspect, John Clauser and Anton Zeilinger, who separately conducted groundbreaking experiments with entangled quantum states. Their results have paved the way for new technology based on quantum information.

Interestingly, a few people online have pointed out that Clauser has a relatively low h-index. He is not the only one: Harry Kroto, winner of the 1996 Nobel Prize in Chemistry, did not have a high h-index either; in fact, a single publication earned him the prestigious award.

The h-index is often considered a metric of a scientist’s research quality, but the cases of Clauser and Kroto show that it may not be an accurate indicator of scientific excellence.

How the h-index came into being

In 2005, JE Hirsch, a professor of physics at the University of California, San Diego, devised the h-index and introduced it in a paper titled ‘An index to quantify an individual’s scientific research output’. In the paper, he argued that quantifying the cumulative impact and relevance of a researcher’s output is a challenge, and that a standardised metric helps in evaluating and comparing that output, which could, in turn, inform university faculty recruitment, the awarding of grants, and more. Measuring quality is important, particularly when scientists are competing for a limited number of opportunities.


He then proposed the h-index, which, in his own words, provides “a useful yardstick with which to compare, in an unbiased way, different individuals competing for the same resource when an important evaluation criterion is scientific achievement”.

This index measures the number of highly cited papers a scientist has written. The value of h is the largest number of papers that have each been cited at least h times. For example, a scientist who has authored five papers must have at least five citations on each of them to reach an h-index of five. Close to two decades on, the h-index has been accepted as something of an industry standard for gauging researchers applying for grants, jobs, and awards. It is sometimes also used to measure the scientific output of research groups, scientific facilities, and even countries.
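To make the definition concrete, here is a minimal Python sketch (an illustration only, not tied to any particular citation database) that computes an h-index from a list of per-paper citation counts:

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    # Rank papers from most to least cited; the h-index is the largest
    # rank h at which the h-th paper still has at least h citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers, each cited at least five times -> h-index of 5
print(h_index([9, 7, 6, 5, 5]))   # 5
# One landmark paper, however heavily cited, yields an h-index of only 1
print(h_index([12000]))           # 1
```

The second call mirrors the Kroto example above: a single publication, no matter how influential, can never push the index past 1.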

Criticism against h-index

Hirsch’s proposal of a metric that could mathematically indicate a scientist’s research achievements has fascinated many. While it has become a sort of ‘industry standard’, it has also drawn much criticism from various quarters and is often deemed too generalised, simplistic, and even misleading.

In a 2008 paper, authors Lehmann and team wrote, “The problem is that Hirsch assumes equality between incommensurable quantities. An author’s papers are listed in order of decreasing citations with paper i having C(i) citations. Hirsch’s index is determined by the equality, h = C(h), which posits equality between two quantities with no evident logical connection”. 

The h-index also varies with the research field in question. Even in his seminal paper, Hirsch reported that researchers in the life sciences tend to have higher h-indices than those in a field like physics. He further noted that scientists working in non-mainstream areas will not achieve the same very high h values as the top researchers in highly topical areas. This is indicative of how the h-index imposes a ‘one-size-fits-all’ approach on scientific impact.

Another drawback of the h-index is that an author can artificially inflate their own index. A case in point is the eminent French microbiologist Didier Raoult, who claimed that the antimalarial drug hydroxychloroquine could also treat COVID-19. This ‘discovery’ earned him global fame, and even the then US President Donald Trump championed the treatment. However, it was quickly shown that the drug does not work against the virus, and Raoult was subsequently slapped with criminal charges.


A separate report highlights another facet of Raoult’s research record. He published 2,053 articles between 1979 and 2018, received 72,847 citations in total, and had an h-index of 120. However, the Web of Science database shows that 18,145 of those citations, roughly 25% of the total, came from articles Raoult himself co-authored. Deducting these self-citations brings his h-index down to 104. The h-index likewise fails to differentiate the relative contributions of individual authors on multi-author papers.
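To illustrate how self-citations can move the index, here is a small sketch using entirely hypothetical per-paper counts (not Raoult’s actual record), comparing the h-index before and after self-citations are deducted:

```python
def h_index(citations):
    """h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    return max((rank for rank, cites in enumerate(ranked, start=1) if cites >= rank),
               default=0)

# Hypothetical (total citations, self-citations) per paper -- illustrative only.
papers = [(30, 12), (25, 10), (20, 9), (18, 8), (8, 4), (6, 4)]

with_self = [total for total, _ in papers]
without_self = [total - self_cites for total, self_cites in papers]

print(h_index(with_self))     # 6 -- index inflated by self-citations
print(h_index(without_self))  # 4 -- index once self-citations are removed
```

Even in this toy example, stripping out self-citations knocks two points off the index.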

Another major criticism of the h-index is that it puts young scientists at a disadvantage. It is almost impossible to compare researchers at different stages of their careers, even those working in the same field, because articles accumulate citations over time and the count only grows. In such comparisons, it largely becomes a question of whose work has been around longer.

Not an accurate metric

Analytics India Magazine spoke to Prof. Rajat Agarwal, Associate Dean of Innovation and Incubation and Faculty at the Department of Management Studies, IIT Roorkee, regarding the impact of the h-index. He said, “Exceptions are always possible where the h-index may not measure the impact of your research output. It is just one of the many parameters which can be used for measuring academic impact. It is more useful to track the consistency in your research output and ensure you are continuously active in the field. For an active researcher tracking the growth of the h-index means that you are continuously producing impactful research”.

He further added that research performance needs to be evaluated in a much more holistic manner rather than through a single parameter, and that the most important factor in measuring research performance is its application to solving real-life problems. “Therefore, patents and bringing products/solutions based on these patents to the market is one important dimension of research performance. Next is the publication, where publication in the top-ranking journal is very important, and citation of research in a similar-level journal is an important indicator,” he concluded.

Shraddha Goled