The Moral Imperative of AI Adoption
Fabrizio Maniglio
November 15, 2025
Is it ethical NOT to use AI if it demonstrably improves patient outcomes?
The life sciences industry frames AI adoption as a risk question. Is AI safe enough? Can we trust it? What if it makes a mistake? These are reasonable questions. They are also the wrong starting point.
The Real Question
The question should never be “Is AI perfect?” or “Can it be made foolproof?” Chasing perfection is a fool’s errand. The real question is: “How much better am I with the technology than without it?”
Consider a straightforward comparison:
- Humans performing the task are roughly 85% accurate; reviewers must catch the remaining 15%.
- AI is roughly 95% accurate, leaving reviewers only 5% to catch.
- If you can demonstrate that improvement, why would you NOT adopt?
The gap between 85% and 95% isn’t an abstraction. In pharmaceutical manufacturing, in clinical data review, in quality oversight, that gap represents real outcomes. Real patients.
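To make that gap concrete, here is a back-of-the-envelope sketch in Python. The 85% and 95% figures come from the comparison above; the 100,000 annual review decisions are a hypothetical volume, chosen purely for illustration.

```python
# Back-of-the-envelope sketch of the 85% vs. 95% gap described above.
# The volume figure is hypothetical, chosen purely for illustration.

human_accuracy = 0.85        # first-pass accuracy of a manual process (from the text)
ai_accuracy = 0.95           # first-pass accuracy of an AI-assisted process (from the text)
annual_decisions = 100_000   # hypothetical yearly review decisions at one site

human_errors = annual_decisions * (1 - human_accuracy)  # 15,000 items reach reviewers
ai_errors = annual_decisions * (1 - ai_accuracy)        # 5,000 items reach reviewers

relative_reduction = (human_errors - ai_errors) / human_errors
print(f"Errors reaching downstream review: {human_errors:,.0f} -> {ai_errors:,.0f}")
print(f"Relative reduction in residual error: {relative_reduction:.0%}")  # ~67%
```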
The Conservatism Trap
The proposed EU GMP Annex 22 excludes probabilistic models from GxP-critical applications. This covers the vast majority of useful AI, including large language models. The regulation takes an already conservative industry and gives it official permission to stay conservative.
That might sound prudent. But prudence has a cost. If a probabilistic AI model demonstrably outperforms the current manual process in validation accuracy, defect detection, or quality output, the decision to exclude it isn’t neutral. It’s a choice to accept worse outcomes because the better tool doesn’t fit a regulatory category.
The “wait and see” posture that dominates the industry is itself a choice with consequences. Every quarter spent waiting for perfect regulatory clarity is a quarter spent accepting the error rates we’ve normalized.
The A/B Test Argument
Nobody is suggesting full autonomy overnight. The path forward is measured and evidence-based:
- Run A/B tests. Side-by-side comparison of AI-assisted vs. manual processes.
- Gather data. Measure accuracy, consistency, time, and error rates (a minimal sketch of this step follows below).
- Use the data to justify progressive removal of human oversight, not because you’re hoping for the best, but because the evidence supports it.
This isn’t experimentation on patients. It’s measurement. And the status quo has its own error rate that we’ve normalized into invisibility.
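For the skeptics: the measurement step is ordinary statistics. Here is a minimal Python sketch, assuming hypothetical counts (150 errors in 1,000 manual reviews versus 50 in 1,000 AI-assisted ones, echoing the 85%/95% figures above). It illustrates the comparison logic, not a validated study protocol.

```python
# A minimal sketch of the A/B measurement step, using hypothetical counts.
# It runs a two-proportion z-test by hand (standard formula, no external
# libraries) to ask whether the AI-assisted arm's error rate is genuinely lower.

from math import sqrt, erf

def two_proportion_z(errors_a, n_a, errors_b, n_b):
    """Z statistic and two-sided p-value for H0: the two error rates are equal."""
    p_a, p_b = errors_a / n_a, errors_b / n_b
    pooled = (errors_a + errors_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Hypothetical results: 150 errors in 1,000 manual reviews vs. 50 in 1,000 AI-assisted.
z, p = two_proportion_z(errors_a=150, n_a=1_000, errors_b=50, n_b=1_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value supports acting on the difference
```

With counts like these, the gap is far beyond what chance explains, which is exactly the kind of evidence the next step calls for.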
The Inversion
Patient safety is the ultimate trump card in pharmaceutical regulation. It’s the reason every requirement exists, every validation is performed, every audit trail is maintained.
Using that same principle to argue FOR AI adoption is a powerful inversion, but it’s not a rhetorical trick. If the evidence shows AI improves outcomes, then the patient safety argument doesn’t just permit adoption. It demands it.
The conversation is shifting from “can we trust AI?” to “can we justify not using it?” That shift isn’t comfortable. But comfort was never the point. Patient outcomes are.
What This Means for Leaders
This is not an argument against regulation. Risk-based frameworks, validation requirements, and human oversight all have their place. The argument is against blanket exclusion of an entire category of technology when the evidence suggests it performs better than what we’re doing now.
The organizations that engage with this question honestly, that run the tests, gather the data, and make evidence-based decisions, will be the ones that earn both regulatory confidence and better outcomes.
The ones that hide behind conservatism will eventually have to explain why they chose the less effective option when a better one was available and demonstrable.
That’s not a technology question. It’s an ethical one.
Fabrizio Maniglio
Keynote speaker & thought leader helping life sciences organizations navigate AI, quality, and the humans caught between the two.