Discussion about this post

sclmlw:

Interesting commentary. As a clinical researcher myself, with experience on the CRO side working with many sites, and on the pharma/Sponsor side, I agree with many of these points, but I think some additional context could clarify others.

1. One of the biggest reasons we don't use electronic source in oncology is that sites don't/won't use it. Much of the data is standard of care (SoC) and therefore BILLED as SoC. If we require the site to use our processes and electronic data capture system, we (Pharma) pay for it. And since source is defined as the first recording of information, if the site first records the data into an electronic source model, they either have to write everything down twice so they can also capture it in their eMR system (which doesn't reduce the double-recording problem or the source verification burden, since you still need to check the duplicate information), or they need to find a way to import the trial data into their eMR. In a global, multi-center trial we can often make this work for a majority of sites, but we can't do it for all sites, so we revert to one global system for data entry. It's not so much a first-mover problem as a last-mover-has-the-veto problem.

"Why not only use sites that are willing to use electronic source?" This is nice in theory, but not in practice. We were talking today about a list of sites, where one site's feasibility responses suggest they're sub-optimal for an upcoming study. We're selecting them anyway because of who the PI is and their relationships. Very often, you don't get to choose to exclude certain big names or well-connected PIs. For example, what if my drug gets approval, but without significant backing from insiders within the field, who are uninterested in discussing my drug vs. if I get approval 6 months later, and one of the biggest names in the industry mentions the new approval as "a promising new therapeutic option" in their keynote address to a thousand physicians at the annual conference? Worse, what if my competitor gets the PI on their trial, and they generate buzz for the competing product (more on that below) that helps accelerate funding/resources for my competitor because I passed on including her in the study? If the hot shot from Prestigious University joins the study, we don't tell them what to do, they tell us (and the rest of the study) what they're willing to tolerate.

2. Yes, there's a concern about using <100% SDV. Part of the concern is that lots of people don't really understand risk-based monitoring and the statistics behind when you should intervene. (I've personally seen this system accidentally abused to the detriment of study performance.) Part of it is because there really are lots of problems that arise from the dumb system of copying all source information from overworked coordinator nurses and expecting them to get even just the important stuff right. (This is also the fault of study operations managers who make everything both 'urgent' and 'critical'; when everything is important, nothing is.) There is a benefit to sending CRAs out to sites to walk the site through the "why" of each specific study-related requirement, and why they can't just carry forward their standard process. While 100% SDV isn't necessary for the CRAs to catch all those differences, the trend since 2020 has been to do more SDV remotely, so CRAs do the extra work without the benefit of in-person interactions with study nurses to explain that, no, this study is subtly different, and since you did these procedures out of sequence we can't use the data from those 4 patients in our study results.
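The "statistics behind when you should intervene" can be made concrete with a toy example. This is a hypothetical sketch, not any sponsor's actual risk-based monitoring algorithm: the function names, the 2% acceptable error rate, and the significance threshold are all my own illustration. The idea is that a monitor source-verifies only a sample of data fields, and escalates to fuller SDV when the observed error count is statistically incompatible with the acceptable error rate.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing
    at least k errors in n checked fields if the true error
    rate is p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def should_escalate(errors_found, fields_sampled,
                    acceptable_rate=0.02, alpha=0.05):
    """Trigger expanded SDV at a site when the sampled error
    count would be very unlikely (p < alpha) under the
    acceptable error rate.  Rates/thresholds are illustrative."""
    return binom_sf(errors_found, fields_sampled, acceptable_rate) < alpha

# With a 2% acceptable rate, 200 sampled fields imply ~4 expected
# errors; 10 observed errors is strong evidence the site's true
# error rate is higher, so monitoring escalates.
print(should_escalate(10, 200))  # escalate
print(should_escalate(4, 200))   # within expectation, don't escalate
```

The judgment calls live in the parameters, not the math: what counts as an "acceptable" error rate for critical versus non-critical fields, and how aggressively to escalate, are exactly the things the comment says people misunderstand and misuse.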

3. Yes, we get patent protection, but new drug development is not a monopoly system, free of competition, where we can take our time to market. Few projects I have worked on could be described as free of competition. Usually we have at least one competitor, even for very specific/niche drugs. For example, I worked on larotrectinib, which was very effective and eventually FDA approved. At the time, we worried because there was another TRK inhibitor that was ahead of us in clinical development. But their trial stalled, which allowed us to leapfrog them and get to market first. I'm currently working on a drug with at least 2 competitors. And indeed part of the process of deciding which drug(s) to bring to IND involves looking at the competitive landscape and asking whether there's a possibility of entering the market, or if you're just too far behind competitors. Meanwhile, everyone is uber secretive and concerned about giving away details that might become useful to competitors. Confidentiality is a big deal, because if we spend a million dollars figuring out specifics about how a trial in our indication should be done, we don't want to give that information to our competitor for free when they're developing competing trials.

4. I welcome a platform system, and hope it's wildly successful. However, I can't think of many trials I've worked on that would be able to interface with such a system. Most of my trials are one-off because we're doing something differently, which requires a lot of specific changes to standard procedures to ensure we're capturing the data correctly. I suspect a successful platform system will need some good seed funding to get it off the ground, since lots of Sponsors won't want to entrust their drug to a platform they have little control over.

5. "Your problem is you think every trial is special and different." Lots of trials are NOT special, but nearly every one is unique/different in unpredictable ways. Usually this comes from some stupid requirement we're forced to face due to something about the underlying biology of the drug, some anomalous pre-clinical signal we have to follow up on (this liver study will randomly require mandatory eye exams!), or something else. I work hard to reduce the amount of 'special' requirements in my trials, because I know that complexity breeds risk. My mindset is literally the opposite of what we're accused of; I'm looking to make my study LESS special. Trust me, the problem isn't coming from management unable to standardize the study.

6. I agree that the large Ph3 all-eggs-in-one-basket trial system is terrible. It leads to perverse incentives (cf. Vinay Prasad's book "Malignant" for more on pharma sins in trial design), and doesn't reflect what I would consider "good science" in the sense of an iterative process of progressing toward the truth, instead of the One-Big-Beautiful-Study designed to definitively "answer the question". But you can't blame pharma for doing it this way. This is how the FDA - and really the laws FDA is bound by - structured this system. If I conduct 4 mid-sized phase 2 studies that iteratively improve my design each time and all demonstrate efficacy, I might be generating better data, cumulatively, than if I bet everything on a large phase 3 trial. But the FDA will still ask for the large Ph3 trial in the NDA, so I have no choice but to run that study.

Thomas Reilly:

What about learning from successful examples? The RECOVERY trial for Covid-19 was set up in six weeks, recruited >40k patients across 185 sites, and had an initial cost of £2 million.

It is possible!

