Interesting commentary. As a clinical researcher myself, with experience from the CRO side, working with many sites, and from the pharma/Sponsor side, I agree with many of these points, but I think some additional context could clarify others.
1. One of the biggest reasons we don't use electronic source in oncology is that sites don't/won't use it. Much of the data is standard of care (SoC) and therefore BILLED as SoC. If we require the site to use our processes and electronic data capture system, we (Pharma) pay for it. And since source is defined as the first recording of information, if the site first records the data into an electronic source system, they either have to write everything down twice so they can also capture it in their EMR (which doesn't reduce the double-recording problem or the source verification burden, since you still need to check the duplicate information), or they need to find a way to import the trial data into their EMR. In a global, multi-center trial we can often make this work for a majority of the sites, but we can't do it for all sites, so we revert to one global system for data entry. It's not so much a first-mover problem as a last-mover-has-a-veto problem.
"Why not only use sites that are willing to use electronic source?" Nice in theory, not in practice. We were talking today about a list of sites where one site's feasibility responses suggest they're sub-optimal for an upcoming study. We're selecting them anyway because of who the PI is and their relationships. Very often, you don't get to exclude certain big names or well-connected PIs. For example, what if my drug gets approved, but without significant backing from insiders in the field, who are uninterested in discussing it? Versus: I get approval 6 months later, and one of the biggest names in the industry mentions the new approval as "a promising new therapeutic option" in their keynote address to a thousand physicians at the annual conference. Worse, what if my competitor gets that PI on their trial, and she generates buzz for the competing product (more on that below) that helps accelerate funding/resources for my competitor, all because I passed on including her in the study? If the hot shot from Prestigious University joins the study, we don't tell them what to do; they tell us (and the rest of the study) what they're willing to tolerate.
2. Yes, there's a concern about using <100% SDV. Part of it is that lots of people don't really understand risk-based monitoring and the statistics behind when you should intervene. (I've personally seen this system accidentally abused to the detriment of study performance.) Part of it is that there really are lots of problems that arise from the dumb system of having overworked coordinator nurses copy all source information and expecting them to get even just the important stuff right. (This is also the fault of study operations managers who make everything both 'urgent' and 'critical'; when everything is important, nothing is.) There is a benefit to sending CRAs out to walk the site through the "why" of each specific study-related requirement, and why they can't just carry forward their standard process. While 100% SDV isn't necessary for the CRAs to catch all those differences, the trend since 2020 has been to do more SDV remotely, so CRAs do the extra work without the benefit of in-person interactions with study nurses to explain that, no, this study is subtly different, and since you did these procedures out of sequence we can't use the data from those 4 patients in our study results.
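To illustrate the kind of statistics behind an intervention decision, here's a hypothetical sketch (not any real monitoring SOP; the function names, acceptable error rate, and alpha are all illustrative): sample a subset of a site's data points and escalate to full SDV only when the observed error count would be surprising under an acceptable error rate.

```python
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing at least
    k errors in n sampled data points if the true error rate is p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def escalate_to_full_sdv(errors: int, sampled: int,
                         acceptable_rate: float = 0.02,
                         alpha: float = 0.05) -> bool:
    """Trigger full SDV at a site when this many errors would be
    improbable (p < alpha) under the acceptable error rate."""
    return binom_sf(errors, sampled, acceptable_rate) < alpha

# 1 error in 100 sampled fields is consistent with a 2% error rate...
print(escalate_to_full_sdv(1, 100))   # False
# ...but 7 errors in 100 is strong evidence the site's rate is higher.
print(escalate_to_full_sdv(7, 100))   # True
```

The point of a test like this is exactly what the comment describes: knowing *when* the numbers justify intervening, rather than escalating on every stray discrepancy.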
3. Yes, we get patent protection, but new drug development is not a monopoly system, free of competition, where we can take our time to market. Few projects I have worked on could be described as free of competition. Usually we have at least one competitor, even for very specific/niche drugs. For example, I worked on Larotrectinib, which was very effective and eventually FDA approved. At the time, we worried because there was another TRK inhibitor ahead of us in clinical development. But their trial stalled, which allowed us to leapfrog them and get to market first. I'm currently working on a drug with at least 2 competitors. And indeed, part of deciding which drug(s) to bring to IND involves looking at the competitive landscape and asking whether there's a realistic possibility of entering the market, or whether you're just too far behind. Meanwhile, everyone is ultra-secretive and concerned about giving away details that might become useful to competitors. Confidentiality is a big deal: if we spend a million dollars figuring out how a trial in our indication should be done, we don't want to hand that information to a competitor for free while they're designing competing trials.
4. I welcome a platform system, and hope it's wildly successful. However, I can't think of many trials I've worked on that would be able to interface with such a system. Most of my trials are one-offs because we're doing something differently, which requires a lot of specific changes to standard procedures to ensure we're capturing the data correctly. I suspect a successful platform system will need good seed funding to get off the ground, since lots of Sponsors won't want to entrust their drug to a platform they have little control over.
5. "Your problem is you think every trial is special and different." Lots of trials are NOT special, but nearly every one is unique/different in unpredictable ways. Usually this comes from some stupid requirement we're forced to face because of the underlying biology of the drug, some anomalous pre-clinical signal we have to follow up on (this liver study will randomly require mandatory eye exams!), or something else. I work hard to reduce the number of 'special' requirements in my trials, because I know that complexity breeds risk. My mindset is literally the opposite of what we're accused of; I'm trying to make my study LESS special. Trust me, the problem isn't management being unable to standardize the study.
6. I agree that the large, all-eggs-in-one-basket Ph3 trial system is terrible. It creates perverse incentives (cf. Vinay Prasad's book "Malignant" for more on pharma sins in trial design), and doesn't reflect what I would consider "good science": an iterative process of progressing toward the truth, rather than One Big Beautiful Study designed to definitively "answer the question". But you can't blame pharma for doing it this way. This is how the FDA - and really the laws the FDA is bound by - structured the system. If I conduct 4 mid-sized phase 2 studies that iteratively improve the design each time and all demonstrate efficacy, I might be generating better data, cumulatively, than if I bet everything on one large phase 3 trial. But the FDA will still ask for the large Ph3 trial in the NDA, so I have no choice but to run it.
What about learning from successful examples? The RECOVERY trial for Covid-19 was set up in six weeks, recruited >40k patients over 185 sites and had an initial cost of £2 million.
It is possible!
Agree that RECOVERY is a powerful example. The circumstances around it were obviously unique, and I don't think we should use it as a cost benchmark. But the approach that Martin Landray and his team took - eliminating waste and extraneous data collection, focusing on the things that matter, and integrating trials into care - ought to be applied more broadly. Today's system seems to incentivize against that.
(Obviously this trial was of repurposed meds, but I think some of the same principles will apply.)
From memory, the trial only had a couple of primary endpoints; it was designed to answer a specific question with a binary, statistically significant result (and obviously kudos to those who delivered it).
How does China run trials?
Or Germany, for that matter?
The diagnosis here is accurate. The tools are not the problem; the incentive structure and fragmentation are, and no demonstration project changes that without structural intervention.
What digital trials and AI actually offer is a fundamentally different evidence architecture. Real-time safety monitoring, direct EHR data integration, and decentralized models that eliminate the eight-month site activation bottleneck change the operational math entirely.
Adoption has been slow because regulatory uncertainty drives risk aversion that no efficiency argument overcomes on its own. That calculus shifts once the FDA, IRBs, and ethics committees establish clear and consistent governance frameworks for AI-assisted trials. When sponsors know with certainty what the agency will accept, the first-mover problem dissolves and studies move faster. The regulatory clarity around electronic source data in the 2010s proved exactly this point. Governance rules are the accelerant that makes the technology consequential.
Trials are absolutely unique and different. But that's not the main problem.
If anything, from the clinical side, there's a plague of trials that are barely positive/poorly designed/arranged to force a positive result that are then marketed aggressively and pushed to patients.
This is especially common in oncology, where informative censoring or (most egregiously) subpar care for the control group is so common that people have created whole online personas pointing it out.
Last year, two people I know and I participated in a trial of the weight-loss drug Orforglipron by Eli Lilly. (Btw, the drug works.) The site was not a hospital but a small outpatient office specialised in clinical trials only; they don't do anything else. The data collection was fully electronic. On site, they had tablets where we filled out the surveys by checking boxes. For home, we received cell phones to log in and record whether we had taken a pill. We could still see inefficiencies. The onsite surveys were unreasonably long - imagine three different surveys per session with partially overlapping questions. As for the at-home cell phones, they were a disaster - 5 steps to communicate the simple fact that, yes, I took my pill this morning, with each step taking a minute or so to load. No way to swallow a pill, click, and go during the morning rush. I eventually tracked my compliance on paper and fed the phone every 5 days or so.
Thanks for this article.
I've had an answer sitting on Quora about how a libertarian would propose to reform health care in America. My item 5 was "allow new solutions to bypass phase 3 clinical trials" - roughly equivalent to your "lean trials" solution. I added 5a: "just make them cheaper", and added a link to this article.
I'm no big Quora mover, but that answer gets about 200 views a year, and hopefully some of those are the right pairs of eyes.
Thanks for sharing. When approaching regulation of trials, the question we ought to ask is: is the benefit of the information we gain from the trial greater than its costs? If not, it's obviously tempting to skip the trial, or perhaps make the trial smaller (trading off some certainty for lower cost). But the best approach is to make the trial cheaper in the first place!
The problem with skipping something like phase 3 altogether is that a lot of treatments with very sound theoretical reasons to work have completely failed in real humans.
That's not as damning of skipping trials as you might think, though. If a phase 3 trial can verify that a specific treatment is effective in humans but costs $100M to run, the price per treatment has to go up by, say, $100 to $10,000 to pay for the trial (and that's without counting the trials that fail).
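A quick back-of-the-envelope sketch of that amortization (the dollar figures and sales volumes are the hypotheticals above, not real pricing data):

```python
def per_treatment_markup(trial_cost: float, treatments_sold: int) -> float:
    """Extra cost each treatment must carry to recoup a fixed trial cost."""
    return trial_cost / treatments_sold

# A $100M phase 3 recovered over 1M treatments adds $100 per treatment;
# recovered over only 10k treatments, it adds $10,000 per treatment.
print(per_treatment_markup(100e6, 1_000_000))  # 100.0
print(per_treatment_markup(100e6, 10_000))     # 10000.0
```

The implicit assumption in the $100-$10,000 range is therefore a sales volume somewhere between ten thousand and a million treatments.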
If, OTOH, we skip the phase 3 trial, then a patient could get something that's been shown not to hurt him (it passed phase 1), might help (phase 2), but might not (no phase 3), but it only costs $15 to give it a try.
I think a lot of US patients would welcome that tradeoff. And if enough of them turn out to improve after a while, that's practically a phase 3 right there, assuming no false positives.
Absolutely not how this works.
Phase 1 trials only catch the most obvious harms, in the healthiest patients. Once deployed, the drug won't be used for such a short time span or in such healthy patients.
A great example is the recently approved (and soon to be un-approved) Alzheimer's drugs, which passed not only Phase 1 and 2 trials but Phase 3 trials as well. But when they hit the real world, they ended up causing more brain bleeding and swelling than they helped, because even Phase 3 patients are HIGHLY selected and don't reflect the real world.
An enlightening article. I was not aware that old-school data gathering is the bottleneck in clinical trials. When needed during COVID, vaccines were approved fast, right? How was that made possible?