Good question! I think we seem to be going at a steady pace, but that depends on who you ask. Ultimately, it probably depends on what your expectations of progress are; my hunch is that people have higher expectations for cancer than for other diseases, particularly since it’s received so much attention historically, and that sets us up for inevitable disappointment when those expectations fail to materialize – much like the ‘war on cancer’ in the latter part of the 1900s.
Broadly speaking, one can approach this from a treatment perspective and a prevention perspective. From a treatment perspective, there is definitely progress: “Since 1971, the cancer death rate is down more than 25 percent. Between 1975 and 2016, the five-year survival rate increased 36 percent. The arsenal of anticancer therapies has expanded more than tenfold.” We’re also in a position now where immunotherapies are becoming commonplace, and the drugs are becoming highly sophisticated. I think the next big treatment frontier is figuring out how best to use the arsenal of drugs we have, i.e., can we combine therapies in such a way that our treatments become more effective? We obviously hope to keep developing breakthrough drugs, but there’s a lot of untapped potential in lower-cost solutions and re-combining cancer drugs in new ways. This would also save money, but pharmaceutical companies are understandably less interested in pursuing it. To sum up, I think the treatment frontier involves greater experimentation with the implementation of drugs we currently have.
I’m not as convinced that our cancer prevention progress has been as impressive, however. Obviously, we’ve gotten a lot better at identifying environmental contaminants that might increase the likelihood of developing cancer, but a lot of the lifestyle diseases (e.g., obesity) that increase the risk of cancer haven’t been solved by any means. Ultimately, preventing cancer in the first place is a lot more efficient than having to treat it later.
As the saying goes – cancer is such a heterogeneous phenomenon that it might not be prudent to lump all cancers together. They’re so distinct that the ‘war on cancer’ is really a ‘war on many, many fronts’. We're definitely making progress, but we shouldn't expect a one-size-fits-all solution anytime soon.
Speed of information transfer: There’s good reason to believe that social media rapidly increases the speed at which science can be disseminated. Ideally, this increases the rate of medical discovery by a) making us aware of what others are doing and that we can build on and b) exposing us to alternative approaches and methods from other disciplines that we can integrate into our own work. I’ve certainly benefitted greatly from being exposed to ‘random’ articles from other fields.
Epistemic disorientation: In contrast to the first point, there are potentially negative effects of social media on both the rate and direction of discovery. For example, one of the main issues with social media is that we can end up inducing a type of epistemic disorientation, where there is simply too much information to make sense of anything. We experienced some of this during COVID-19, where the amount of (contrasting) information being published ultimately confused us more than it provided clarity. A downstream consequence is that we end up having to conduct research to disprove the opinions of others, rather than for genuinely scientific reasons. Various conspiracy theories circulating online, such as the claimed link between vaccination and autism, can also waste research resources.
Hype: Social media could also overhype certain treatments (e.g., Wegovy at the moment). This could result in disproportionate amounts of funding going towards ‘trendy’ research areas, meaning that resources are diverted from potentially more pressing health areas. In the book I call this ‘scientific bubbles’, where too much capital is concentrated in a small research area; the fear is, of course, that our expectations fail to materialize – resulting in a bubble burst of confidence and a loss of public trust in science.
Definitely many other ways - but these three come to mind most immediately!
Two chapters spring to mind - for different reasons!
The very first chapter, “Citations as Currency”, was probably the most fun to write, mostly because friends and colleagues who have read it can identify with the themes. The chapter is concerned with how researchers attempt to accumulate ‘scientific capital’ by publishing papers and getting citations, and how this ends up distorting the types of research projects we choose to pursue. I enjoyed hearing colleagues tell me: “yes, this is exactly how I feel!” – validating that this really is an issue.
Chapter 15, “Death of a Star // New Kids on the Block”, was also fun to write! It’s concerned with how intergenerational dynamics in scientific teams influence progress. More concretely, I look at what happens when prestigious research leaders pass away, who takes over, and the difference between experienced and younger researchers in their research habits. I didn’t know anything about this literature prior to researching for the book, so it was an eye-opener!
A few different issues! I’ll preface my answer by saying that there is certainly some evidence that ‘good ideas are becoming harder to find’, meaning that the marginal effort required to discover a new drug is increasing. This isn’t an excuse for the pharmaceutical industry, but it is worth noting.
Structurally, large pharmaceutical companies take too few risks during drug development, meaning that the onus is on smaller companies and universities to develop novel products. Why so? Well, in recent times, large pharma companies have essentially offloaded a lot of their R&D in favor of simply acquiring smaller companies or the intellectual property rights to discoveries made at universities. This has been enabled by a variety of legislative changes, the most obvious being the Bayh-Dole Act in the United States (which allowed universities and institutions to acquire the rights to intellectual property generated from federal funding, which they could then sell on to companies). Of course, this seems like a logical strategy if it saves money, but from a broader drug discovery perspective, it slows the rate of progress.
A big issue with this set-up is that smaller companies and universities don’t necessarily have the capital to take risks, either. Universities are shaped by a ‘publish or perish’ culture, where researchers are pressured to publish often, and aren’t funded to the extent that they can try out a wide range of potential drug candidates. Smaller biotech firms are also relatively cash-constrained, meaning that they might only be able to focus on one (or maybe two) products simply because their cashflow is too small.
The result of this is that larger pharmaceutical companies have more liquidity and cashflow than smaller firms but aren’t willing to take risks (because they can simply acquire externally). On the flip side, smaller firms and academia are (relatively) more willing to try and develop novel products, but they are cash constrained. The overall consequence is stasis.
Other reasons could be over-regulation in certain settings (increasing the cost of getting drugs to market), broken drug markets (such as antibiotics, which I talk a lot more about in the book), and that the lowest-hanging fruit has been picked (as mentioned at the start).
Some potential solutions? There are different drug payment models currently being tested (such as subscription models for antibiotics that are designed to make antibiotic production more lucrative), there are examples of early-stage incentives (such as Operation Warp Speed, which incentivized vaccine production during COVID-19) that might be effective, and there are different financing options for companies (i.e., pooling large sums together and constructing a diverse research portfolio of, say, 50 drugs in the R&D pipeline, where only one or two need to succeed to make a profit overall). Some of the other solutions I’ll leave to be read in the book!
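The portfolio-financing logic above is easy to see with a little arithmetic. A minimal sketch (not from the book, and the 5% per-drug success probability and independence assumption are purely illustrative):

```python
# Illustrative sketch: why pooling many drug candidates de-risks R&D.
# Assumes a hypothetical 5% chance of success per candidate and
# independent outcomes -- both simplifying assumptions.
p_success = 0.05      # assumed per-drug probability of reaching market
n_candidates = 50     # size of the pooled R&D portfolio

# Probability that at least one candidate in the portfolio succeeds:
p_at_least_one = 1 - (1 - p_success) ** n_candidates
print(f"P(>=1 success out of {n_candidates}): {p_at_least_one:.1%}")

# Compare with a small biotech betting everything on one candidate:
print(f"P(success with 1 candidate):  {p_success:.1%}")
```

Even with long odds per drug, a 50-drug portfolio is very likely to produce at least one winner, which is what makes the pooled-financing structure attractive to investors.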
If you were given a million-dollar grant, which question in innovation policy would you want to answer, and what might that dataset look like?
Definitely an enormous issue! I’m not as familiar with the Long-COVID data, but the issue applies to a lot of other fields/areas.
I argue in the book that one of the most prevalent issues is linked to how researchers are incentivized to publish quickly and often. This means that studies will often tend to be under-powered (i.e., too few participants to detect an effect with adequate statistical certainty), because recruiting more patients is time-consuming and often more expensive. The result is, as you point out, that we’re inundated with studies whose effects are too small or too uncertain to conclude anything meaningful. Ultimately, this wastes research resources at a systems level, because one big study would have sufficed for fewer resources overall. But because it’s a publishing game, researchers aren’t incentivized to collaborate as much as we’d like from a progress perspective. In the book, I call this ‘artificial progress’, where we think we’ve learnt something new about the world (through the publishing of these studies), but ultimately we’re just misleading ourselves and need to use even more resources to clarify studies that should have been clear from the outset.
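The cost of under-powering can be made concrete with the standard normal-approximation sample-size formula for comparing two group means. A rough sketch using only the Python standard library (the function name and the specific effect sizes are my own illustration, not from the book):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate participants needed per arm for a two-sample
    comparison of means, given a standardized effect size (Cohen's d),
    using the normal approximation."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for a two-sided test
    z_beta = z(power)            # critical value for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A 'medium' effect already demands dozens of participants per arm,
# and halving the effect size roughly quadruples the requirement:
print(n_per_group(0.5))   # ~63 per group
print(n_per_group(0.25))  # ~252 per group
```

This is why small, quick studies so often come up inconclusive: detecting modest effects simply requires far more participants than a rushed publication timeline allows.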
One could argue that it should ‘cost’ more for authors to submit under-powered studies to journals; since journals often accept such research despite the methodological flaws, authors aren’t penalized for this type of behavior. Journals may also prioritize interesting results over adequate study sizes – meaning that too many of these articles get published. Authors and journals ultimately both ‘win’ from this behavior.
I think this issue of sloppy research methods is probably MUCH more prevalent than we think, but I haven’t been able to find reliable sources. In the book I talk about research misconduct and fraud, where some “studies suggest that the true rate of fraud among published studies lies somewhere between 0.01% and 0.4%.” I’d suspect the rate of sloppy research methods to be many times higher than this.