FDA Deploying AI to Accelerate Drug Approvals

In a sweeping internal reform, the U.S. Food and Drug Administration (FDA) has launched its first agency-wide deployment of artificial intelligence for drug review. Last Friday, Commissioner Dr. Martin Makary announced that all FDA centers must “aggressively” and “immediately” integrate generative AI by June 30, following the agency’s first pilot review using the technology.

“I was blown away by the success of our first AI-assisted scientific review pilot,” Makary said in a statement. He added,

We need to value our scientists’ time and reduce the amount of non-productive busywork that has historically consumed much of the review process. The agency-wide deployment of these capabilities holds tremendous promise in accelerating the review time for new therapies.

The pilot, designed to support FDA scientists by automating repetitive document review tasks, reportedly reduced a three-day task to mere minutes.

“This is a game-changer technology,” said Jinzhong Liu, deputy director at the FDA’s Office of Drug Evaluation Sciences.

The rollout will be led by Jeremy Walsh, the FDA’s newly appointed chief AI officer. Walsh previously served as chief technologist at Booz Allen Hamilton. Joining him is Sridhar Mantha, who previously led data modernization efforts within the FDA.

More Drugs, Faster

Announcing the rollout, Makary spoke of the new “incredible” drugs waiting in the pipeline. Those include, he said, “treatments for [amyotrophic lateral sclerosis], stage-four cancers, neurodegenerative conditions, diabetes…. These need to come to market once we can establish that they are safe and effective.”

The push to speed up approvals comes at a time when Americans already consume more medications than any other population on Earth, and the trend continues upward. An estimated 70 percent of American adults take at least one prescription medication daily, and 24 percent take four or more. The pattern doesn’t stop with adults. Between 25 and 43 percent of American children rely on prescription medications to manage chronic health conditions such as asthma, ADHD, diabetes, and behavioral disorders.

Despite a tidal wave of pharmaceuticals, Americans remain among the sickest people in the developed world. Life expectancy continues to decline. The system isn’t curing chronic illnesses such as diabetes, obesity, or heart disease; it stabilizes them just enough to sell another refill.

The U.S. healthcare model — engineered for treatment, not prevention — has long stood as a monument to institutionalized dysfunction. Now, an aggressive AI rollout threatens to mechanize that failure, transforming it into a seamless, self-optimizing production line.

Eroding Standards

The FDA has faced intensifying criticism for relaxing its safety and efficacy standards in recent years. Nowhere was that more evident than during Operation Warp Speed (OWS), which rushed Covid vaccines to market with minimal long-term data.

Yet multiple studies suggest this erosion of standards isn’t new — it’s systemic. Since the 1990s, programs such as the Prescription Drug User Fee Act and the Accelerated Approval pathway have cut median drug-review times nearly in half. A 2024 review of oncology approvals found that nearly half of cancer drugs fast-tracked by the FDA failed to demonstrate clinical benefit even five years after release.

Now, with generative AI being deployed across the agency, that collapse in scrutiny is poised to accelerate. What once took 27 months now takes 10. Soon, it may take 10 minutes.

But speed is not progress. The FDA already struggles — routinely — to ensure new drugs are meaningfully tested before they hit the market. And AI, for all its hype, is far from reliable. It mimics language, not understanding. It fabricates, it bluffs, and it inherits the blind spots of the data it consumes.

Whether safety can keep pace with speed is no longer a theoretical concern — it’s a structural flaw. And while the FDA insists that AI will “support — not replace — human expertise,” the pace and scale of deployment suggest otherwise. With automation in the driver’s seat, it’s no longer clear whether the experts are steering — or just along for the ride.

Broken by Design

The FDA promises more details in June — performance metrics, updated features, expansion plans. For now, the AI rollout barrels forward at full speed.

But the deeper the agency dives into automation, the more urgent the questions become. Who really benefits from faster approvals? Is this about patient safety — or pipeline “efficiency”? Can AI salvage a regulatory body that is fundamentally compromised?

As with so much of modern Washington, the answer may lie not in what the agency says, but in what it refuses to examine.

Because behind the dashboards and data models sits an uncomfortable truth: The U.S. Constitution grants the federal government no power to regulate drugs. The FDA’s sweeping authority didn’t arise from constitutional mandate, but from a century of legal improvisation, industry pressure, and congressional surrender.

And now, the FDA is seizing a shiny new tool to speed up work it never designed for transparency, restraint, or even legality. The agency calls it a “game changer.” If it is, it’s a change for the worse — mission creep wrapped in silicon and sold as progress.

AI Governance

The FDA’s shift is part of a broader government push to digitize and automate. Under the Trump administration’s “AI First” agenda, federal agencies are adopting AI, ostensibly to cut costs and increase speed.

Agencies including the IRS, the U.S. Treasury, the Department of Defense (DOD), and the General Services Administration (GSA) are implementing AI in auditing, surveillance, hiring, contract evaluations, and policy modeling. The Department of Government Efficiency (DOGE), led by Elon Musk, tracks AI implementation across all federal operations.

The Department of Health and Human Services (HHS) is also integrating AI into its broader operations, beyond the FDA’s latest push. For instance, in April, the National Institutes of Health announced plans to build a centralized digital platform to track various health metrics. The stated goal is to study autism and other chronic conditions — using AI to comb through vast, disparate data sources.