Hi everyone. I'm Mark Khurana, a medical doctor and epidemiologist currently working as a research fellow at the University of Copenhagen, primarily in infectious disease epidemiology. I am the creator of the 'Untold Health' Substack and podcast, which explores topics within the field of health that are significant but overlooked. I am also the author of the upcoming book, The Trajectory of Discovery: What Determines the Rate and Direction of Medical Progress?, due to be released in the US in May 2023:
Medical research works in trajectories. Scientists and researchers must choose to pursue certain scientific pathways and omit others, limited by resources, attention, and time. The trajectory of medical progress is therefore characterized by two crucial components: rate and direction. These two components form the foundation of this book - what are the forces that determine the rate and direction of progress in medicine? This book brings together the worlds of science policy, economics, sociology, philosophy, and innovation to describe why the world of medical research looks the way it does. The book also addresses fundamental contemporary issues in medicine, how they influence progress, and how we might improve medical research going forward. The contemporary issues discussed include: flawed incentive structures, a concentration of power and resources among a few actors and disease groups, the potential distortionary effects of lobbying by different scientific actors, and missing novelty in drug development.
Ask me anything! I will be here Monday, March 27. Use the comments below to add questions, and upvote any questions you'd like to see me answer.
What is the problem of missing novelty in drug development, and what can be done to fix it?
A few different issues! I’ll preface my answer by saying that there is certainly some evidence that ‘good ideas are becoming harder to find’, meaning that the marginal effort required to discover a new drug is increasing. This isn’t an excuse for the pharmaceutical industry, but it is worth noting.
Structurally, large pharmaceutical companies take too few risks during drug development, meaning that the onus is on smaller companies and universities to develop novel products. Why so? Well, in recent times, large pharma companies have essentially offloaded much of their R&D in favor of simply acquiring smaller companies or the intellectual property rights to discoveries made at universities. This has been enabled by a variety of legislative changes, the most obvious being the Bayh-Dole Act in the United States, which allowed universities and institutions to retain the rights to intellectual property generated with federal funding and then sell those rights on to companies. Of course, this seems like a logical strategy if it saves money, but from a broader drug discovery perspective, it slows the rate of progress.
A big issue with this set-up is that smaller companies and universities don’t necessarily have the capital for risk-taking, either. Universities are shaped by a ‘publish or perish’ culture, where researchers are pressured to publish often, and they aren’t funded to the extent that they can try out a wide range of potential drug candidates. Smaller biotech firms are also relatively cash-constrained, meaning that they might only be able to focus on one (or maybe two) products simply because their cashflow is too small.
The result of this is that larger pharmaceutical companies have more liquidity and cashflow than smaller firms but aren’t willing to take risks (because they can simply acquire externally). On the flip side, smaller firms and academia are (relatively) more willing to try to develop novel products, but they are cash-constrained. The overall consequence is stasis.
Other reasons could be over-regulation in certain settings (increasing the cost of getting drugs to market), broken drug markets (such as antibiotics, which I talk a lot more about in the book), and that the lowest hanging fruits have been picked (as mentioned at the start).
Some potential solutions? Different drug payment models are currently being tested (such as subscription models for antibiotics, designed to make antibiotic production more lucrative), there are examples of early-stage incentives (such as Operation Warp Speed, which incentivized vaccine production during COVID-19) that might be effective, and there are different financing options for companies (i.e., pooling large sums together and constructing a diverse research portfolio of 50 drugs in the R&D pipeline, where only one or two need to succeed to make a profit overall). Some of the other solutions I’ll leave to be read in the book!
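As a back-of-the-envelope illustration of that pooled-portfolio logic: if each candidate in the pipeline has some modest, roughly independent chance of reaching the market, a portfolio of 50 is very likely to produce at least one winner. The per-drug success probability below is my own assumption for illustration, not a figure from the book.

```python
# Hypothetical numbers, purely for illustration: the per-drug success
# probability and portfolio size are assumptions, not figures from the book.
p_success = 0.05   # assumed chance that a single candidate reaches market
n_drugs = 50       # size of the pooled R&D portfolio

# Probability that at least one candidate in the portfolio succeeds
p_at_least_one = 1 - (1 - p_success) ** n_drugs

# Expected number of successes across the portfolio
expected_hits = n_drugs * p_success

print(f"P(at least one success): {p_at_least_one:.3f}")  # → 0.923
print(f"Expected successes: {expected_hits:.1f}")        # → 2.5
```

The independence assumption is generous (drug candidates in related classes tend to fail together), but the basic point stands: diversification makes the portfolio's overall return far less risky than any single bet.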
Why are there so many medical studies using sloppy research methods, and how big a problem do you think this is?
I noticed this when trying to figure out how common Long Covid is - most of the studies being reported in the media, at least early on, did not have a control group. On the basis of these studies, the media was saying that Long Covid affects 30, 50, or even 60% of people who get Covid.
Many of the studies also use methods that suffer from responder bias, like surveying online support groups. Studies that track cohorts over time and have a good control group find more modest figures, like 10-15% of patients experiencing greater-than-expected symptoms at 3 months. However, nearly all of these are retrospective studies, which, as I understand it, are not as good as prospective studies. More recently, a study came out that does what should have been done all along - it compares outcomes of Covid patients with patients who got a symptomatic non-Covid upper respiratory infection. They found more symptoms in the control group than in the Covid group at 3 months. This calls into question whether Long Covid is actually a phenomenon in its own right or just another iteration of post-viral illness / post-viral chronic fatigue syndrome (see Vinay Prasad's video).
I wonder, if low quality studies can be so misleading, is it worth doing them at all? It seems to me we should be pooling resources to do more high quality studies rather than many low quality ones.
Definitely an enormous issue! I’m not as familiar with the Long-COVID data, but the issue applies to a lot of other fields/areas.
I argue in the book that one of the most prevalent issues is linked to how researchers are incentivized to publish quickly and often. This means that studies will often tend to be under-powered (i.e., too few participants to reach the ‘right’ level of statistical certainty) because including more patients is time-consuming and often more expensive. The result is, as you point out, that we’re inundated with studies that can’t demonstrate much of an effect, or at least not with enough certainty to conclude anything meaningful. Ultimately, this wastes research resources on a systems level, because one big study would have sufficed for fewer resources overall. But because it’s a publishing game, researchers aren’t incentivized to collaborate as much as we’d like from a progress perspective. In the book, I call this ‘artificial progress’, where we think we’ve learnt something new about the world (through the publishing of these studies), but ultimately we’re just misleading ourselves and need to use even more resources to clarify questions that should have been clear from the outset.
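To make ‘under-powered’ concrete, here is a minimal sketch (my own illustration, not from the book) using the standard normal-approximation power formula for comparing two group means. The effect size, standard deviation, and sample sizes are all assumed numbers chosen to show how quickly power falls off when studies are small.

```python
import math

def normal_cdf(x):
    """CDF of the standard normal, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_sample(effect, sd, n_per_group):
    """Approximate power of a two-sided, two-sample z-test at alpha = 0.05.

    Normal approximation: power ~= 1 - Phi(z_crit - delta), where delta is
    the true mean difference in standard-error units.
    """
    z_crit = 1.959964  # two-sided critical value for alpha = 0.05
    delta = effect / (sd * math.sqrt(2 / n_per_group))
    return 1 - normal_cdf(z_crit - delta)

# Assumed scenario: a true mean difference of 0.3 with sd = 1
# (a 'small-to-medium' standardized effect).
for n in (20, 50, 100, 200):
    print(n, round(power_two_sample(0.3, 1.0, n), 2))
# → roughly 0.16, 0.32, 0.56, 0.85
```

With 20 patients per arm, a study of this effect succeeds less than one time in five; only around 200 per arm reaches the conventional 80% power target. Ten small studies of 20 each mostly produce noise, while one pooled study of the same total size would likely settle the question - which is the systems-level waste described above.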
One could argue that it should ‘cost’ more for authors to submit under-powered studies to journals, since journals often accept their research despite the methodological flaws, and therefore authors aren’t penalized for this type of behavior. The journals might also prioritize interesting results over the study size being adequate – meaning that too many of these articles get published. Authors and journals ultimately both ‘win’ from this behavior.
I think this issue of sloppy research methods is probably MUCH more prevalent than we think, but I haven’t been able to find reliable sources. In the book I talk about research misconduct and fraud, where some “studies suggest that the true rate of fraud among published studies lies somewhere between 0.01% and 0.4%.” I’d suspect the rate of sloppy research methods to be many times higher than this.
That makes sense, thank you.
"studies suggest that the true rate of fraud among published studies lies somewhere between 0.01% and 0.4%". Even 0.4% seems drastically too low - perhaps 10 times too low. I'd be curious to see the source for this claim. An analysis by Elisabeth Bik and others found problematic image duplication in 3.8% of studies. Some of that may have been accidental, but I suspect most were intentional fraud. If ~3.8% of papers have this one specific type of fraud, that suggests an even larger percentage contain fraud in general. It's extremely hard to know, though. I doubt it's over 10% but I could easily see it being 5%, which is obviously still a massive problem.
What has gone wrong in the fight against Alzheimer's? Did a “cabal” prevent funding for anything other than the amyloid plaque hypothesis?
What do you think is the cause of Eroom's Law? Why has it (fortunately) stalled in the last decade? Do we have any hope of reversing it?
How do you think the role of physicians would be affected by AI?
how far we are from bio engineering to treat diseases/conditions?
Are we winning the war on cancer? Is it reasonably fast/steady progress, or has something gone wrong?
Good question! I think we seem to be going at a steady pace, but that depends on who you ask. Ultimately, it probably depends on what your expectations are of progress; my hunch is that people have higher expectations for cancer than for other diseases, particularly since it’s received so much attention historically, and that sets us up for inevitable failure when these expectations fail to materialize - much like the ‘war on cancer’ in the latter part of the 1900s.
Broadly speaking, one can approach this from a treatment perspective and a prevention perspective. From a treatment perspective, there is definitely progress: “Since 1971, the cancer death rate is down more than 25 percent. Between 1975 and 2016, the five-year survival rate increased 36 percent. The arsenal of anticancer therapies has expanded more than tenfold.” We’re also in a position now where immunotherapies are becoming commonplace, and the drugs are becoming highly sophisticated. I think the next big treatment frontier is figuring out how best to use the arsenal of drugs we have, i.e., can we combine therapies in such a way that our treatments become more effective? We obviously hope to keep developing breakthrough drugs, but there’s a lot of untapped potential in lower-cost solutions and re-combining cancer drugs in new ways. This would also certainly save money, although pharmaceutical companies are understandably less interested in pursuing it. To sum up, I think the treatment frontier involves greater experimentation with the implementation of drugs we currently have.
I’m not as convinced that our cancer prevention progress has been as impressive, however. Obviously, we’ve gotten a lot better at identifying environmental contaminants that might increase the likelihood of developing cancer, but a lot of the lifestyle diseases (e.g., obesity) that increase the risk of cancer haven’t been solved by any means. Ultimately, preventing cancer in the first place is a lot more efficient than having to treat it later.
As the saying goes - cancers are such a heterogeneous group of diseases that it might not be prudent to lump them all together. They’re so distinct that the ‘war on cancer’ is really a ‘war on many, many fronts’. We're definitely making progress, but we shouldn't expect a one-size-fits-all solution anytime soon.
Which chapter of the book was the most interesting to write?
Two chapters spring to mind - for different reasons!
The very first chapter, “Citations as Currency”, was probably the most fun to write, mostly because friends and colleagues who have read it can identify with the themes. The chapter is concerned with how researchers attempt to accumulate ‘scientific capital’ by publishing papers and getting citations, but this ends up distorting the types of research projects we choose to pursue. I enjoyed hearing colleagues tell me: “yes, this is exactly how I feel!” - validating that this really is an issue.
Chapter 15, “Death of a Star // New Kids on the Block”, was also fun to write! It’s concerned with how intergenerational dynamics in scientific teams influence progress. More concretely, I look at what happens when prestigious research leaders pass away, who takes over, and the difference between experienced and younger researchers in their research habits. I didn’t know anything about this literature prior to researching for the book, so it was an eye-opener!
What is the impact of social media on medical discovery?
Speed of information transfer: There’s good reason to believe that social media increases the speed at which science can be disseminated. Ideally, this increases the rate of medical discovery by a) making us aware of what others are doing, so that we can build on it, and b) exposing us to alternative approaches and methods from other disciplines that we can integrate into our own work. I’ve certainly benefitted greatly from being exposed to ‘random’ articles from other fields.
Epistemic disorientation: In contrast to the first point, there are potentially negative effects of social media on both the rate and direction of discovery. For example, one of the main issues with social media is that it can induce a type of epistemic disorientation, where there is simply too much information to make sense of anything. We experienced some of this during COVID-19, where the amount of (contrasting) information being published ultimately confused us more than it provided clarity. A downstream consequence is that we end up having to conduct research to disprove the opinions of others, rather than for scientific reasons. Various conspiracy theories circulating online, such as the supposed link between vaccination and autism, can also waste research resources.
Hype: Social media can also overhype certain treatments (e.g., Wegovy at the moment). This could result in disproportionate amounts of funding going towards ‘trendy’ research areas, meaning that resources are diverted from potentially more pressing health areas. In the book I call these ‘scientific bubbles’, where too much capital is concentrated in a small research area; the fear is, of course, that our expectations fail to materialize - resulting in a burst bubble of confidence and a loss of public trust in science.
Definitely many other ways - but these three come to mind most immediately!