Hey everyone! I was thinking of writing a piece on the h-index and sharing some of the community's ideas on what we could change it to. As much as many researchers understand that it's not a great summary of an individual's contributions, it's often difficult to judge the merit of someone who publishes in a sub-area dissimilar to your own, and, particularly in cases like that, a neutral-seeming metric like the h-index is often referenced (whether we like it or not).

I’d love to know what you personally would change the h-index to if you had unilateral power for a day and could do such things! With your permission (and with credit), I’d share some of the more interesting/telling contributions in the article and talk about them a bit!

For example, a cruder version of what I might change the h-index to would be something like the following: 


I’d maybe find a way to upweight publications a bit. I’d also consider loads of other ways to calculate impact that don’t rely on just publications and citations. This is just an example.

Reasoning: I think this metric rewards massive-hit research much more fairly while still incentivizing researchers to be productive to some extent. Under a calculation like this, someone like John Nash, who has an h-index of only around 10, would still have a massive score (as he should). IMHO, the h-indexes of 50+ required for tenured positions at some great departments are absurd. What is wrong with 15 really, really good papers mixed with some downtime in publishing and some duds?


Would love to know what you’d do! It can be extremely practical or a little more risky and fun! You can use more than publications and citations, of course! 

The vibe of the piece will hopefully be fun but also informative. Looking forward to seeing your responses!


1 comment

I would like to understand what the biggest advantages of the h-index are. It seems to me its advantage is that it balances quantity and quality. Let's try the opposite for a decade or two: a measurement strategy that gives high weight to either quantity or quality.

Here are some ideas, likely bizarre for reasons others will eagerly point out.

S-index = |N-log(C)|^log(C)

N = the number of topics written on, as measured by Milojević (2015), or, more simply, the number of unique keywords used by journals to describe one's articles. C = total citations.
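A quick sketch of what this would look like in code (assumptions: natural log, and a score of 0 when C ≤ 1, since the commenter doesn't specify a log base or how to handle very low citation counts):

```python
import math

def s_index(n_topics: int, citations: int) -> float:
    """S-index = |N - log(C)|^log(C), where N is the number of topics
    and C is total citations. Natural log is an assumption here."""
    if citations <= 1:
        return 0.0  # log(C) <= 0 makes the exponent degenerate
    log_c = math.log(citations)
    return abs(n_topics - log_c) ** log_c

# Hypothetical researchers, each with 5,000 total citations:
print(s_index(3, 5000))   # narrow focus: few topics
print(s_index(8, 5000))   # middling breadth, near log(5000) ~ 8.5
print(s_index(20, 5000))  # very broad
```

Note how the formula punishes the middle: because the base is the *distance* between N and log(C), the score is largest when you work on either very few or very many topics, which matches the stated incentive to work either intently on a few topics or on unique ones.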

That formula is a response to Matt Clancy's paper on innovation getting harder. He points out that growth in the number of topics is slowing down. So the incentive under this paradigm is to work either on unique topics or intently on a few topics.

Another idea starts from the thought that, as science slows down, groups and coalitions are becoming more important.

So here is a second idea, almost certainly terrible.

Average h-index of oneself and all of one's co-authors. This would be something like a measure of the h-strength of one's network, but it gives no reward for the size of the network. One effect might be that it encourages strong research networks, which would have pros and cons: granting more freedom to the more productive clusters, but making social life more important.
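A minimal sketch of this second idea (the names and data structures are hypothetical; real h-indexes and co-authorship lists would come from a bibliographic database):

```python
def network_h(h_indices: dict, author: str, coauthors: dict) -> float:
    """Average h-index of an author and all of their co-authors."""
    group = [author] + coauthors.get(author, [])
    return sum(h_indices[a] for a in group) / len(group)

# Toy data: adding a co-author only helps if their h-index is
# above the group's current average, so network size alone isn't rewarded.
h = {"alice": 30, "bob": 10, "carol": 20}
coauth = {"alice": ["bob", "carol"]}
print(network_h(h, "alice", coauth))  # (30 + 10 + 20) / 3 = 20.0
```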