Why clinical trials are inefficient. And why it matters.
If we want to accelerate biomedical progress, we need to understand the root causes of clinical trial inefficiency.
Clinical trials occupy a strange place in biomedical innovation.
Modern medicine is built upon what we have learned from clinical trials. They are the final arbiter of whether treatments work, and the most important step in getting a new treatment approved. But the clinical trials industry is deeply inefficient, and trials are growing even more expensive. If trials are the engines of biomedical progress, we appear to be stuck in a very slow gear.
We need clinical trial abundance. When trials are slow and costly, it doesn’t just hurt the pharmaceutical industry that pays for the trials - it limits how many treatments reach patients and how quickly they arrive. Less expensive, more abundant trials would lead to more treatments and cures.
Experts across the field - from industry leaders to FDA commissioners - agree that clinical trials need reform. And many have offered solutions. But while proposed solutions are easy to find, we have not yet seen a clear explanation of exactly why clinical trials are so inefficient in the first place. Why does an industry that depends so heavily on trials allow them to become so slow and expensive? Without a diagnosis, we don’t know whether we are providing the right treatment.
So in this post, I’d like to take a deep dive: what are the root causes of inefficient trials? And how do we fix them?
The symptoms of inefficiency
To diagnose the cause of clinical trial inefficiency, we can start by examining the symptoms. There are many examples of waste and inefficiency in clinical trials, but one practice stands out: 100% source data verification. It is both distinctly wasteful and can tell us a lot about what does - and what doesn’t - drive the inefficiencies in today’s trials.
To help understand this practice - and why it is so important to the broader story of clinical trial inefficiency - we first need to familiarize ourselves with how clinical trials are organized. Trials are paid for and overseen by a drug or biotech company, called the sponsor of the trial. But the day-to-day work of the trial - seeing patients, administering the study treatment, and collecting data - is done by clinical trial sites like academic hospitals and research centers. In a typical trial, the site collects data from patients on paper, then enters and submits it to the sponsor using electronic case report forms.
Sponsors and sites must work hand-in-hand, but their working relationship is one of deep institutional mistrust. Sponsors do not trust sites; they verify them. This is the environment that gave rise to the practice of 100% source data verification: After the site sends the data they collected to the sponsor, the sponsor sends a consultant to the site to pore through their paper records to make sure that every single data element the site collected matches the data the sponsor received.
100% source data verification is a huge task. The typical phase 3 clinical trial collects 3 million data points, so it is perhaps not surprising that checking them all is expensive: 100% source data verification accounts for 25%-40% of clinical trial costs. And it’s essentially pointless: it doesn’t meaningfully improve clinical trial quality, and the FDA has publicly and repeatedly recommended against it for over ten years.
Nobody has a very good explanation for why the industry continues this practice. Some have noted that pharmaceutical companies are uncertain whether FDA will accept alternatives to 100% source data verification, but plenty of trials have used less expensive methods. Others have blamed immature technology or the challenges of implementing alternatives, but plenty of companies have navigated those challenges. Given how much companies could save, neither of these barriers seems convincing.
But if it’s neither technology nor regulation that’s perpetuating this practice, what is? To answer that, we need to take a closer look at the institutions and incentives that drive the clinical trials industry.
Trials are too big to fail
Drug development is one of the most complex and capital-intensive endeavors on Earth. From the standpoint of technological complexity, risk, and expense, it’s medicine’s equivalent of sending humans into space. Imagine if, each time we wanted to send people into space, we had to do everything from scratch and all at once. In other words, each time we wanted to launch, we had to first design and build a new rocket, and then we only had one shot at getting the launch right. How much would it cost? How long would it take to make sure it was safe enough for people to ride?
We actually have a good answer to that, because this is how NASA used to build. Projects like the space shuttle and the James Webb Space Telescope were designed to get it right the very first time. This required extensive time and engineering effort. Both projects took about ten years to complete and came in massively over budget. This expensive approach to building even has a name: Big Space. Big Space means big one-off projects with big ambitions, big budgets, and big timelines. Thanks largely to SpaceX, we have moved away from Big Space: space missions are now handled like other complex engineering projects; they make progress through iterative testing and improvement.
Unfortunately, today’s approach to clinical trials looks much more like NASA’s “Big Space” approach than SpaceX’s iterative one. Each trial is created from scratch. Researchers develop a protocol custom-designed to study a specific drug. Then, infrastructure, researchers, and contractors are all onboarded for that specific trial. Finally, that one-of-a-kind trial is executed, carrying its precious cargo: the pharmaceutical company’s treasured asset, with billions of dollars of revenue on the line.
By the time the drug reaches late-stage clinical trials, the company has already invested huge sums in its research and development. They now have just one chance to get the trials right. As they run their trials, the company’s patents, marketing exclusivities, and funding are all running out. If they don’t succeed, it’s unlikely they’ll have the resources to try again.
Under the circumstances, it’s no wonder drug companies are risk-averse, especially when it comes to seeking regulatory approval, a process they can’t completely control or predict. If you have only one chance to get your rocket up in the air, you’re not going to try any new or untested technology. You’re not going to remove components to make your rocket more efficient. You’re going to stick with what works, no matter how much it costs. With billions on the line, drug companies would prefer to leave their trial designs alone and stick with what they’ve done before, including 100% source data verification.
Trials resist efficiency improvements
From the outside, this one-off approach to running trials seems odd. Why doesn’t the industry gradually improve, test, and iterate its trial designs? Why doesn’t it build reusable trial infrastructure so that it can deploy those improvements at scale? It’s not because the concept is unfamiliar. Reusable components are common across drug development and the life sciences; they even have a name - they’re called platforms. You see platforms everywhere: in manufacturing, in product testing, in drug discovery, and even, every once in a while, in trials. For instance, there are large-scale platform trials for specific diseases like COVID and breast cancer. But those are the exceptions.
Below, I’ll walk through a few reasons why the industry remains so inefficient.
Uncertainty leads to excessive risk aversion
We’ve already discussed how today’s too-big-to-fail trials lead to risk aversion. This is driven in large part by uncertainty over regulation. As the source data verification example shows, the problem isn’t the regulations themselves - which are often reasonable on paper - but uncertainty over what the FDA will accept in practice. The FDA does not approve trial platforms or approaches - it approves drugs. So if a company seeks approval for a drug based on a trial that uses an innovative or unusual approach (anything other than 100% source data verification, for example), it fears that FDA reviewers may question the approach and delay or reject the drug application, notwithstanding the FDA’s own policy statements to the contrary. And getting an innovative trial approach past one FDA review team doesn’t mean the next drug application won’t hit barriers when a different FDA reviewer looks at it. The case-by-case nature of regulation forces companies to adopt bespoke designs for each trial, limiting innovation.
But that’s not the only way regulation breeds excess risk aversion. In a recent post, I discussed the regulatory cascade, a phenomenon where vague, firm-based regulations can lead to bureaucracy, excessive compliance, and risk aversion at the firm level. Pharmaceutical companies, fearing rejection from the FDA, IRBs, and other regulators, build trials to minimize legal and regulatory risk rather than to minimize costs.
Uncertainty and risk aversion create a culture of fear inside pharmaceutical companies, and individual managers fear failure most of all. While risk aversion is a reasonable response to the uncertainties and expenses of drug development, managers’ fear of failure pushes it into overdrive. In industries like tech, failure is common and nothing to be ashamed of; it’s a learning opportunity. Failure is not treated so kindly in the pharmaceutical industry. Failing to launch a product is seriously career-damaging, especially if someone can point to something you “should have” done to mitigate the risk. Managers will happily add millions of dollars to a development budget if it buys them the slightest reduction in the likelihood of failure - even if the spending doesn’t pass a basic cost-benefit test. It’s perfectly rational for pharmaceutical companies to be risk-averse, but principal-agent problems like these push the industry’s risk aversion beyond what anyone would consider reasonable.
The structure of the trial industry resists efficiency improvements
Even if there were no risk aversion, it would be difficult to improve how trials operate, because the structure of the clinical trials industry makes it resistant to efficiency improvements.
The trials industry is deeply fragmented. Today’s infrastructure for running trials evolved from an archaic system that dates back to the first half of the 20th century, when clinical trials were designed and run by independent “clinician-investigators” who were paid by drug companies to test their products. Nowadays, there are still independent investigators, but their job is to execute the protocol developed by the drug company and then document everything they did.
Being an investigator at a trial site is not much fun: on a day-to-day basis, the work resembles that of a glorified data entry clerk - the investigator’s primary responsibility is gathering the data the drug company needs and sending it along. This arrangement also creates an awkward division of labor that hampers improvement efforts. The drug company designs the protocol, but it doesn’t decide how the protocol is operationalized; the clinician-investigator figures out how to actually collect the data. And there’s usually yet another party - a contract research organization - that oversees the conduct of the trial. Across these players, there is limited standardization of approaches, tools, and software, and plenty of coordination problems. Like the health care system itself, the clinical trials system is fragmented; there’s no simple leverage point from which to improve the system as a whole.
The fragmentation of the trial industry makes even simple tasks difficult to achieve. For example, each time a new clinical trial is launched, the trial’s sponsor must identify and recruit sites that are willing and able to run the trial and “activate” those sites. Sites review the protocol, assess whether they have enough patients who could participate, negotiate payment and contractual details with the drug sponsor, and train and prepare their staff to execute the trial. In big academic medical centers, this process takes over 8 months, on average.
For sites, fragmentation makes efficiency improvements far more difficult. Most sites contract with multiple sponsors, each with its own idiosyncratic demands: operating procedures, compliance needs, and software requirements. Sites cannot streamline their own approach to running trials when they are beholden to sponsors. Moreover, most sites run clinical trials as a supplementary source of income and a way to better serve their patients; it’s not their primary business. Even without competing pressures from multiple sponsors, they would have limited ability to implement cost-saving technologies and approaches.
Companies lack the incentive and ability to improve the situation
Could one of the incumbents in this industry break through this morass? It’s unlikely, because no player in the industry has the combination of ability, resources, and incentive needed to drive improvement.
It starts with a lack of motivation. Pharmaceutical companies don’t face competitive pressure over trial costs. In other industries, competition and creative destruction drive cost reduction. Firms face pressure to cut costs so that they do not lose market share to competitors, and firms that fail to contain costs are replaced by firms that do. Pharma faces little pressure to make trials less costly. Yes, lower costs would increase profits and benefit investors, but for the most part, pharmaceutical companies have deep competitive moats. They often face little or no competition for the drugs they make, and the complexity of the industry presents barriers to new entrants and disruptors.
In the absence of competitive pressure, companies fall under the sway of Olson-style distributional coalitions and risk-averse managers. You can see this in the bloated clinical trial protocols they develop, which collect too many data elements, include too many endpoints, and impose overly strict eligibility criteria - and then, unsurprisingly, produce trials that struggle to recruit patients and contain costs. Former FDA commissioner Robert Califf calls these “Christmas Tree” protocols - “you start out with a beautiful green tree that should be admired and then everybody in the family wants to put an ornament on it.”
But let’s say a company were aware of these problems and genuinely wanted to do something about them. They still might lack the internal capacity to do so. Pharmaceutical companies rely on trials to get their products approved, but trial operations are actually not their core competency. Rather, their businesses are built primarily to oversee drug development and investment, get drugs through the FDA approval process, and market them. Even if a company could simplify its own protocols, it must still work with the same fragmented and inefficient trials ecosystem as everyone else. Fixing that is a difficult, expensive undertaking that would distract companies from their core mission.
Beyond the competency issue, pharmaceutical companies are simply too small to take on a project as large as clinical trial reform. Improving trials will require many iterative improvements and the ability to amortize trial infrastructure investments across multiple trials. Most companies don’t produce the volume of trials necessary to drive this kind of improvement. Compared to industries like tech, where there are massive, trillion-dollar companies, the drug industry tends to produce smaller companies that can manage a portfolio of specialized products. There are no companies running trials in very large numbers; even the biggest companies launch only a handful of major clinical trials in a given year.
If pharma is too fragmented to improve trials, what about the contract research organizations (CROs) that run trials for the pharmaceutical industry? In theory, they’re in the perfect position to iterate on and improve trials, since each CRO runs far more trials than any individual pharmaceutical company does. But for both economic and cultural reasons, they adopt the same risk-averse stance as their pharmaceutical sponsors. They compete on minimizing risk, not on minimizing costs. Efficiency improvements also run counter to their business model: they bill for labor hours, not end results, and so have little incentive to cut costs.
The trials industry lacks an engineering culture
The clinical trials industry has a culture problem. In short, it doesn’t sufficiently acknowledge that it is an industry in the first place.
The problem is one of emphasis. Clinical trials are both a scientific enterprise and a large-scale industrial system that needs to be engineered and optimized. Today’s industry leaders overemphasize the former at the expense of the latter. The result: leaders and policymakers don’t apply an engineering mindset to trials. If you look at convenings of industry leadership, like a recent National Academies conference on the topic of clinical trial modernization, you will see little discussion of the approaches that other industries have used to reduce costs and engineer efficient systems. You will see almost no mention of standardization, modularization, iteration, automation, and process optimization across trials. To the extent that engineering principles are applied in trials, they’re applied one trial at a time, not to the clinical trials system as a whole.
The framing of clinical trials as a primarily scientific enterprise - rather than an industrial process - also prevents clear-eyed discussions of their costs. Philip Tetlock calls this “the taboo trade-off.” In the clinical trials and research community, science is not just a means to an end, it is sacred: a value “worthy of boundless reverence, commitment, and protection.” We seek to “protect sacred values from secular encroachments by increasingly powerful societal trends toward market capitalism.” If clinical trials are science, then talking about making trials cheaper - applying the logic of capitalism and cost reduction - is taboo.
In real life, this taboo doesn’t mean that nobody talks about costs or efficiency in trials; in fact, rising trial costs are universally acknowledged as a problem. But when it comes time to talk about solutions, the taboo trade-off limits the conversation. If you look at leading proposals for trial reform, cost reduction is rarely at the center. Instead, you’ll see lots of discussions of “pragmatism,” “inclusiveness”, “access”, and “patient-centricity”. That muddy rhetoric transforms into muddy policies: trial reforms become everything bagels (or Christmas trees) with too many goals and too little focus.
The absence of an engineering mindset also affects how the industry approaches trial standardization and iteration. The industry perceives every trial as unique and therefore unsuitable for standardization. If you work in the pharmaceutical industry or academia, you might have taken issue with the analogy between rocket launches and clinical trials. After all, clinical trials don’t carry standard “payloads”. Rather, each trial is designed to answer distinct scientific questions using distinct data from distinct patient populations. How can trials be standardized and streamlined when each one is so distinct?
I think this view - trials as bespoke enterprises - has some truth to it, but it also betrays a lack of imagination and a failure to approach the problem with an engineering mindset. There are real differences between trials that limit standardization. Yet trials have an enormous amount in common, and there are clear opportunities for shared learning and “modular” approaches to trial execution that could cut costs: from standardized recruitment tools and pipelines, to electronic data collection, to risk-based monitoring. Companies often seem unaware of these opportunities; one insider account notes that companies are still copy-and-pasting Microsoft Word documents to design their case report forms. Opportunities for even basic process improvement are under-emphasized for the very same reason that industry hesitates to discuss costs: a widespread view that trials are strictly scientific endeavors, not an industrial process.
A case study in inefficiency
We now have a clearer picture of how clinical trials have become so inefficient. First, there is little desire for change: Companies are deeply risk-averse - in part because of uncertainty over regulation - and reluctant to try new approaches. They also face little incentive to improve: Companies are not existentially threatened by high trial costs. But even if companies did want to improve how trials are run, the industry is simply too fragmented and the individual pharmaceutical companies are too small to drive system-wide improvement. Plus, the culture of the trial industry, which celebrates the individual clinician-investigator, resists the forces of standardization and iterative improvement that other industries have applied to improve their productivity.
Does this theory explain the kinds of inefficiencies we see in the real world? To find out, let’s look at another example of clinical trial inefficiency: the continued use of paper at trial sites. The typical phase 3 clinical trial collects 3 million data points, and most of that data collection is still done on paper. Why don’t trials collect data electronically? Plenty of commentators have noted that switching away from paper is expensive and difficult, but somehow nearly every other industry has managed it. Why are clinical trials lagging behind?
It started with regulatory uncertainty and risk aversion. For years, sites and sponsors worried whether regulators, who were used to seeing trial data collected on paper, would accept the electronic data. What exactly would a digital paper trail look like? Thankfully, regulators spent much of the 2010s issuing guidelines around electronic data collection - a process referred to as “electronic source” or eSource. Regulatory uncertainty may not be a problem anymore, but it delayed adoption of electronic data collection in clinical trials for years.
Yet the resolution of regulatory uncertainty did not lead to widespread adoption of electronic source data. That’s because of the industry’s fragmented structure. As we discussed earlier, trials have two stages of data collection: first, data is collected from study participants at trial sites; then, the sponsor of the trial gathers the data from the sites, analyzes it, and presents it to FDA for review. Sites and sponsors have competing desires. Sponsors, as stewards of trial data, want to control how the data is collected. Sites want a consistent approach to data collection that fits within their workflows. But industry hasn’t provided one; each drug company demands that sites use different software to submit their data and different approaches to collect it. Facing these competing demands, sites find it easier to just stick with paper.
Without a clear standard to follow for electronic data collection, trial sites lack a clear path down the technology learning curve. Anyone who is familiar with technology rollouts can imagine what will happen the very first time a site uses electronic data collection in one of its trials: there are guaranteed to be hiccups and errors. In all likelihood, the first few trials a site runs using electronic source will take more time to prepare and will be costlier to complete. After all, trial sites need time to learn how to manage a new digital workflow. But who is going to help sites make that upfront investment in learning - or even pay for the software? And what if a site goes through all of the effort to install and use an electronic data collection system, only to be asked by a drug company to use a different piece of software? It’s easy to see why the effort to go electronic doesn’t seem worthwhile.
If inconsistency is the problem, why can’t industry agree on a standard approach to electronic source data? Here we run into the next barrier: there are no players with the ability to mandate change. When Apple or Google chooses to adopt a new technology or standard, an entire industry might follow. No drug company is remotely big enough to have this influence. Companies face cultural barriers to change too: few drug companies have the awareness, technical capacity, or desire to solve this problem in a comprehensive way. If every trial is different, why should the technology be standardized?
Current efforts to improve trials are insufficient
In a moment, I’ll share some ideas on how we can begin to tackle these problems. But first, I’d like to review what is being done today to make trials more efficient, and why it has been insufficient to counter ballooning trial costs.
The clinical trials and pharmaceutical industries are well aware that there is room for substantial improvement in how trials are run, and multiple organizations are trying to fix the problem. The most prominent among them are Duke’s Clinical Trials Transformation Initiative, funded largely by the FDA, and TransCelerate BioPharma, funded by the pharmaceutical industry. These organizations follow a standard four-step playbook for developing and promoting trial innovations: First, they bring together leaders and experts to identify better approaches to running trials. Second, they develop tools and technical standards to help the pharmaceutical industry put those improved approaches into practice. Third, they pilot those tools and standards in real trials. And fourth, they educate the pharmaceutical industry on their findings and promote the adoption of their tools.
Many of the best ideas in clinical trials have been popularized through the efforts of these organizations: they have advocated for risk-based monitoring instead of 100% source data verification; for the use of electronic data instead of paper at trial sites; and for standard protocol templates that can easily be exchanged with study sites and FDA. These and other groups are now working on efforts that could advance the state of the art even further: a “digital protocol” that can fuel downstream automation at sites, techniques to simplify and automate the matching of patients to trials, and approaches to integrating trials into community practices using the practice’s electronic health records as study data sources.
These organizations do high-quality work. But adoption of their innovations remains excruciatingly slow. That’s because their approach - developing and proving new methodologies - is poorly suited to the problem we face: clinical trials are not slow and expensive because we lack the tools to make them cheaper; they are expensive because the industry lacks sufficient incentive to adopt those tools. The efforts of these organizations are not enough to overcome the industry’s fragmentation, entrenched risk aversion, and limited desire and ability to change. In a sense, these improvement efforts mirror the pathologies of the trial industry itself: instead of pursuing systemic structural reform, trial innovators typically focus on large one-off demonstration projects that generate impressive-looking journal articles but create little lasting change.
Tackling the root causes of trial inefficiency
Trial reform proposals are unlikely to be successful if they don’t directly address the fragmentation and incentive problems that limit adoption of innovative and cost-saving trial approaches. Fortunately, organizations like the Institute for Progress’ Clinical Trials Abundance initiative and the UK’s Lord O’Shaughnessy review have explored incentives and policy levers that could create meaningful change. In the spirit of those efforts, I present a few ideas for how we might address the root causes of clinical trial inefficiency.
Create a market for “lean trials”
The clinical trials market is ripe for disruption. Today, the clinical trials industry exists to provide a very specific kind of product: big, bloated, risk-averse trials designed to help drug companies approve billion-dollar blockbusters. Yet, in theory, low-cost upstarts could build trials that meet the same scientific standards as today’s expensive trials, but at far lower cost. I like to call these less-bloated trials “lean trials.”
But, with a few small exceptions, the market for lean trials does not exist today. The problem is a classic one in economics: coordinating a two-sided market. On the supply side, we need to build the capacity to run lean trials. This requires lots of up-front investment in low-cost trial infrastructure and experiential learning. Yet that investment won’t be made without a clear demand signal: a demonstrated willingness to buy what a lean trial industry is selling.
Fortunately, there are plenty of places to go looking for that demand. Not everybody who wants to run a clinical trial is a pharma company with a billion-dollar product; many smaller players would run more trials if they could afford to. To name just a few: companies seeking to validate clinical algorithms in pursuit of medical device approval; drug rescue operations that seek to rehabilitate abandoned or marginal pharmaceutical assets and make them profitable; biomarker qualification consortia; companies running confirmatory trials for drugs that received accelerated approval; drug repurposing efforts; and drug lifecycle studies for new dosage forms and routes of administration. These low-cost uses can form a base of demand for a lean trials industry. Eventually, like any disruptor, lean trial providers could climb up the value chain. If they’re successful, even large pharmaceutical companies will come to perceive them as a safe choice.
An even better source of demand for lean trials could come from the public sector, which can offer the kind of scale and stability needed to launch a new industry. The government already funds many trials, particularly in cancer, orphan drugs, and medical countermeasures. There’s no reason these and other trials couldn’t be a source of demand for lean trials; agencies might even wish to run them on a custom-built lean trials network focused specifically on cost efficiency.
There is also one government program worthy of special mention: the ARPA-H Accelerating Clinical Trial Readiness (ACTR) program. ACTR was launched in late 2023 with the goal of building a national capacity for faster, streamlined, less costly trials - motivated in part by a desire to improve the nation’s ability to run trials quickly in the event of future pandemics. Sadly, the program, after a promising start, appears to have stalled. I would love to see ARPA-H revive ACTR and focus it on building a US-based market for lean trials. ACTR could both kickstart the lean trials industry and send a powerful signal that the government stands behind the goal of less costly trials and is willing to marshal federal resources to support it.
Clarify regulations
To support simpler clinical trials, we should also simplify the regulations that govern them. I have already highlighted the role that regulatory uncertainty plays in driving clinical trial inefficiency. Companies are not certain how FDA will respond to their innovative trial approaches, so they choose tried-and-true approaches instead. But the uncertainty stems, at least in part, from unclear guidance and regulation.
Each time FDA reviews a drug, they closely examine how the clinical trials were conducted, including whether the data in the trial are reliable and whether the trial provides adequate evidence on a drug’s safety and efficacy. They also look into whether there are sources of bias or methodological flaws that might affect their ability to interpret the results. If they do have a concern, the consequences for the drug company can be dire: they might deny approval of the drug or, if the trial is still ongoing, they can pause it. They even have the right to bar researchers from participating in FDA-overseen trials if they flagrantly violate FDA standards.
To help make sure that companies stay on the right side of FDA’s regulations, FDA and other regulators publish guidelines - most notably, the Good Clinical Practice guideline. These guidelines are eminently reasonable. There is nothing in the guidelines themselves that prevents a company from running a lean, efficient trial, and they contain many useful recommendations that, if followed, would make a trial more reliable, safer, and more likely to generate useful clinical evidence. But no matter what FDA does to provide clarity on what it expects, there will be areas of ambiguity. And it is those areas where risk aversion can seep in and lead to bloated trials.
There is no magic bullet to resolve this ambiguity, but there are lots of small changes that could help. Some of these changes are already happening. FDA recently created a program devoted to helping drug companies implement innovative clinical trial practices. They hold meetings with drug companies before they even run the trials to help advise them on how to design their protocols and avoid downstream surprises. And they also have the ability to devote extra management attention to drug reviews to ensure that the recommendations made to the sponsor are consistent with agency-wide policy.
These FDA actions are helpful, but what’s missing is clarification on what “good” looks like, and - even more importantly - what “good enough” looks like. For example, consent forms in trials are so long and detailed as to be incomprehensible. The current guidelines from FDA recommend that consent forms be “concise,” a recommendation that is largely ignored by industry. Perhaps more specifics would help. Martin Landray, one of the world’s preeminent trials experts, has suggested a hard cap on the number of words in consent forms.
The FDA could also provide more specifics on alternatives to 100% source data verification. Their current guidance recommends “risk-based monitoring” but lacks specific metrics or benchmarks companies can follow to know that they have done enough to be compliant. FDA could provide clearer real-world examples of successful approaches, or, even better, specific numeric targets companies could hit.
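To make the idea of a numeric benchmark concrete, here is a minimal sketch of what a risk-based monitoring rule could look like in code: a site is flagged for escalation only when its observed error rate is statistically above an agreed tolerance. The 2% acceptable error rate and 5% significance level are invented for illustration - they do not come from any actual FDA guidance.

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    errors in n checked fields if the true error rate is p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def flag_site(errors_found, fields_checked, acceptable_rate=0.02, alpha=0.05):
    """Flag a site for extra monitoring only if its error count would be
    surprising (p < alpha) under the acceptable error rate."""
    return binom_tail(errors_found, fields_checked, acceptable_rate) < alpha

# 1 error in 200 sampled fields is consistent with a 2% error rate...
print(flag_site(1, 200))   # prints False
# ...while 10 errors in 200 fields is not, and triggers escalation.
print(flag_site(10, 200))  # prints True
```

A rule like this replaces the binary question "did you verify 100% of the data?" with a quantitative one - "is this site's sampled error rate within tolerance?" - which is the kind of target that published guidance currently leaves unspecified.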
Apply carrots and sticks
Better regulations and real competition in clinical trials can make a big difference. But it may also be time for the government and industry leaders to step in and create incentives for sponsors and trial sites to actually adopt these best practices.
This could happen in two phases. First, trial experts, like the ones at the Clinical Trials Transformation Initiative, should define exactly what a lean trial ought to look like - including the best practices drug makers and sites should follow and the technical standards they should use.
Next, the government could roll out a series of incentives: carrots and sticks that push trials towards greater efficiency. A good place to start might be government-funded trials. Only a small portion of clinical trials are government-funded (NIH focuses more on early stage research and rarely funds late-stage trials), but where government funding does exist, there are opportunities to drive greater efficiency and make wiser use of taxpayer dollars. NIH and other funders of clinical trials could require the trials they fund to follow certain best practices that are known to reduce trial costs, such as the use of risk-based monitoring and electronic source data. They could also require both trial sponsors and trial sites that accept federal funds to adopt common electronic standards that promote lean trials, such as automated protocol execution and trial matching. Such requirements are not unprecedented: NIH already requires the trials it funds to use central IRBs, for example.
The FDA can help too. If a trial is deemed “lean”, the FDA could give the trial white glove treatment, making it easier for its sponsor to meet with agency staff and providing management oversight to ensure that review teams are actually granting the sponsor the flexibility that the regulation permits. Again, this is not unprecedented; the FDA worked with NCI to co-develop a protocol for a “lean” drug trial in lung cancer, part of a broader effort to promote pragmatic (and less burdensome) trials in cancer care.
Outside of government, a sufficiently motivated group of drug companies could also help. While a single pharmaceutical company can’t drive industry-wide change, if even 5-10 big pharmaceutical companies agreed to change their procurement practices to favor lean trials, it could be transformative. Frankly, I don’t expect this to happen, but I mention it to emphasize that it would not necessarily take an industry-wide push to drive meaningful change. Just a few forward-thinking leaders could make a big difference.
To find new cures, we need better trials
Clinical trials are the critical engine of biomedical progress. When trials are inefficient, it hurts our pharmaceutical industry, and renders it vulnerable to competition from China’s faster-paced clinical trials industry. But it hurts patients too. When trials are too slow and expensive, we miss the chance to benefit from the treatments and cures that cheaper, faster, and more abundant trials could provide.
Clinical trials are poised to become even more important if the pace of drug discovery accelerates due to AI, as some are predicting. Dean Ball vividly depicted what this future might look like when he predicted that “the speed of drug development will increase within a few years, and we will see headlines along the lines of ‘10 New Computationally Validated Drugs Discovered by One Company This Week.’” But patients won’t get the opportunity to benefit from those new drugs if we can’t test them in clinical trials.
The solutions I’ve provided might seem incomplete and unsatisfying. In particular, it is awkward to contemplate enlisting government to make an industry more efficient. Shouldn’t this industry be able to fix its own problems? But I hope I have made it clear that the barriers to clinical trial efficiency are difficult for the private sector to surmount on its own, and that these problems affect all of us. In such cases, government intervention can act as a kind of axe: cutting (indiscriminately) through thorny coordination problems by setting standards and rules of the road that everyone can follow.
That said, I hope we don’t sit and wait for the government to come to the rescue. We also need fresh people and fresh ideas; in particular, the trials industry needs private sector leaders to take the problem of trial efficiency more seriously and develop creative solutions. There are lots of ideas waiting to be surfaced, including modular, streamlined approaches to trials and innovative business models that might solve or bypass the coordination problems that make reform so difficult.
Inefficiency in this industry is deeply entrenched, and I don’t expect these leaders and ideas to necessarily come from industry incumbents. Rather, I suspect they will come from new organizations and new entrants bringing in new ideas. If you made it all the way to the end of this article, then I’d encourage you to consider joining the effort to make trials leaner and more efficient. Biomedical progress depends on it.
This article was written as part of the Roots of Progress writing fellowship. Thanks to Abby ShalekBriski, Elizabeth van Nostrand, and Mike Riggs for their comments and feedback.




Interesting commentary. As a clinical researcher myself, with experience on the CRO side (working with many sites) and on the pharma/Sponsor side, I agree with many of these points, but I think some additional context could clarify others.
1. One of the biggest reasons we don't use electronic source in oncology is that sites don't/won't use it. Much of the data is standard of care (SoC) and therefore BILLED as SoC. If we require the site to use our processes and electronic data capture system, we (Pharma) pay for it. And since source is defined as the first recording of information, if the site first records the data into an electronic source system, they either have to write everything down twice so they can capture it in their EMR system as well (which doesn't reduce the double-recording problem or the source verification burden, since you still need to check the duplicate information), or they need to find a way to import the trial data into their EMR. In a global, multi-center trial we can often make this work for a majority of the sites, but we can't do it for all sites, so we revert to one global system for data entry. It's not so much a first-mover problem as a last-mover-has-a-veto problem.
"Why not only use sites that are willing to use electronic source?" This is nice in theory, but not in practice. We were talking today about a list of sites, where one site's feasibility responses suggest they're sub-optimal for an upcoming study. We're selecting them anyway because of who the PI is and their relationships. Very often, you don't get to choose to exclude certain big names or well-connected PIs. For example, what if my drug gets approval, but without significant backing from insiders within the field, who are uninterested in discussing my drug vs. if I get approval 6 months later, and one of the biggest names in the industry mentions the new approval as "a promising new therapeutic option" in their keynote address to a thousand physicians at the annual conference? Worse, what if my competitor gets the PI on their trial, and they generate buzz for the competing product (more on that below) that helps accelerate funding/resources for my competitor because I passed on including her in the study? If the hot shot from Prestigious University joins the study, we don't tell them what to do, they tell us (and the rest of the study) what they're willing to tolerate.
2. Yes, there's a concern about using <100% SDV. Part of the concern is that lots of people don't really understand risk-based monitoring and the statistics behind when you should intervene. (I've personally seen this system accidentally abused to the detriment of study performance.) Part of it is that there really are lots of problems arising from the dumb system of copying all source information from overworked coordinator nurses and expecting them to get even just the important stuff right. (This is also the fault of study operations managers who make everything both 'urgent' and 'critical'; when everything is important, nothing is.) There is a benefit to sending CRAs out to sites to walk the site through the "why" of each specific study-related requirement, and why they can't just carry forward their standard process. While 100% SDV isn't necessary for the CRAs to catch all those differences, the trend since 2020 has been to do more SDV remotely, so CRAs do the extra work without the benefit of in-person interactions with study nurses - interactions where they can explain that, no, this study is subtly different, and since you did these procedures out of sequence, we can't use the data from those 4 patients in our study results.
3. Yes, we get patent protection, but new drug development is not a monopoly system, free of competition, where we can take our time getting to market. Few projects I have worked on could be described as free of competition. Usually we have at least one competitor, even for very specific/niche drugs. For example, I worked on Larotrectinib, which was very effective and eventually FDA approved. At the time, we worried because there was another TRK inhibitor that was ahead of us in clinical development. But their trial stalled, which allowed us to leapfrog them and get to market first. I'm currently working on a drug with at least 2 competitors. And indeed part of the process of deciding which drug(s) to bring to IND involves looking at the competitive landscape and asking whether there's a possibility of entering the market, or if you're just too far behind competitors. Meanwhile, everyone is uber secretive and concerned about giving away details that might become useful to competitors. Confidentiality is a big deal, because if we spend a million dollars figuring out specifics about how a trial in our indication should be done, we don't want to give that information to our competitor for free when they're developing competing trials.
4. I welcome a platform system, and hope it's wildly successful. However, I can't think of many trials I've worked on that would be able to interface with such a system. Most of my trials are one-off because we're doing something differently, which requires a lot of specific changes to standard procedures to ensure we're capturing the data correctly. I suspect a successful platform system will need some good seed funding to get it off the ground, since lots of Sponsors won't want to entrust their drug to a platform they have little control of.
5. "Your problem is you think every trial is special and different." Lots of trials are NOT special, but nearly every one is unique/different in unpredictable ways. Usually this comes from some stupid requirement we're forced to face due to something about the underlying biology of the drug, some anomalous pre-clinical signal we have to follow up on (this liver study will randomly require mandatory eye exams!), or something else. I work hard to reduce the amount of 'special' requirements in my trials, because I know that complexity breeds risk. My mindset is literally the opposite of what we're accused of; I'm looking to make my study LESS special. Trust me, the problem isn't coming from management unable to standardize the study.
6. I agree that the large Ph3 all-eggs-in-one-basket trial system is terrible. It leads to perverse incentives (cf. Vinay Prasad's book "Malignant" for more on pharma sins in trial design), and doesn't reflect what I would consider "good science" in the sense of an iterative process of progressing toward the truth, instead of the One-Big-Beautiful-Study designed to definitively "answer the question". But you can't blame pharma for doing it this way. This is how the FDA - and really the laws FDA is bound by - structured this system. If I conduct 4 mid-sized phase 2 studies that iteratively improve my design each time and all demonstrate efficacy, I might be generating better data, cumulatively, than if I bet everything on a large phase 3 trial. But the FDA will still ask for the large Ph3 trial in the NDA, so I have no choice but to run that study.
What about learning from successful examples? The RECOVERY trial for Covid-19 was set up in six weeks, recruited >40k patients over 185 sites and had an initial cost of £2 million.
It is possible!