New ISPE Framework Targets Uncertainty In Pharma's AI Deployment
A conversation with Brandi Stockton and Martin Heitmann

As artificial intelligence continues to find its way into regulated pharmaceutical operations, industry stakeholders have faced a persistent lack of consistent, practical guidance for deploying AI in GxP environments. This void creates uncertainty across diverse functions, including visual inspection, data analysis, and pharmacovigilance.
To address that gap, the International Society for Pharmaceutical Engineering (ISPE) published the ISPE GAMP Guide: Artificial Intelligence, released in 2025 to build on the principles of GAMP 5 and adapt them to the unique characteristics of AI technologies.
The guide offers a risk-based framework for evaluating, implementing, and maintaining AI systems across the pharmaceutical life cycle, including development, manufacturing, and distribution, and it also covers AI-related aspects of medical devices.
It aims to support harmonized implementation across regions while acknowledging local regulatory nuances. It covers a wide range of AI approaches — from rule-based systems and traditional machine learning to generative AI — and emphasizes life cycle management, data integrity, and ongoing performance monitoring.
Brandi Stockton, CEO of The Triality Group, and Martin Heitmann, a consultant also with The Triality Group, were co-leads of the new guide, along with Eric Staib, vice president of corporate quality at Syneos Health. Stockton and Heitmann offered to help us understand how the guide addresses regulatory alignment and terminology challenges, and how it supports responsible use of AI through quality by design and continuous oversight.
Stockton and Heitmann collaborated on the responses below to provide more thoughtful and comprehensive answers.
Can we begin with a quick overview of the scope of this new guide? It has a broad reach covering just about every area of regulated pharmaceutical development, manufacturing, and distribution — and in most regions of the world, too.
Indeed, the AI guide covers not only pharmaceutical development, manufacturing, and distribution, but effectively all GxP areas, including aspects of AI-enabled medical devices such as software as a medical device. Its concepts are grounded in the hands-on practical experience of our author team, representing use cases across several areas of GxP, including medical devices.
Regarding coverage, we aimed to include concepts and ideas from across the globe. This is reflected in our international team of experts and reviewers, and in our use of regulatory guidance available as of the date of publication. This mirrors the global nature of many organizations in life sciences, with the aim of supporting scalability and harmonized implementation across sites to gain maximum benefit from AI.
On that note, what kinds of AI technology are we talking about here? LLMs to help operators find SOPs faster? Advanced process controllers for optimized equipment performance?
A broad view of the use of AI is included in the guide, from rule-based systems through traditional machine learning and deep learning approaches to newer forms such as generative AI, including the use of large language models. The guide promotes a flexible approach to technology, which may consist of combinations of various AI approaches and technologies.
The guide is grounded in several concepts and accompanying case studies, many previously published within the ISPE ecosystem. Such cases include visual inspection of finished pharmaceutical products, chromatography data analysis for optimization of biomanufacturing, the use of image analysis to detect tooth decay (clinically known as caries), and the use of large language models to summarize safety-relevant information in a pharmacovigilance context.
An important goal of the guide is to position AI as one tool among other approaches to digitalization and automation. AI will not serve a business need on its own; hence the emphasis on process and product understanding, data understanding, and AI literacy to identify those use cases where AI can bring competitive value.
The guide seems to aim at harmonizing — or at least reconciling — global regulatory expectations. In compiling it, did you encounter any major points of regulatory dissonance you had to resolve? Terminology, for example, seems like it would be problematic.
Establishing a clear, well-understood, and harmonized terminology is a common challenge. We therefore carefully evaluated various terminology sources to reach consensus on a set of key terms around AI and ML, in alignment with the ISPE GAMP 5 Guide (Second Edition). We are aware that compromises are required; a prominent example is the term “validation data set” used in the guide. Data scientists commonly use it to describe a data set used for evaluation during iterative experimentation, and it appears in terminology published by EMA, yet it may be confused with the formal validation activities described in U.S. FDA publications.
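To make the data-science usage of the term concrete, here is a minimal, purely illustrative sketch of the three-way split data scientists typically have in mind: a training set for fitting, a “validation” set that guides iterative experimentation, and a held-out test set for the final, independent evaluation. The function name and split fractions are our own illustrative choices, not terminology from the guide.

```python
import random

def three_way_split(records, val_frac=0.15, test_frac=0.15, seed=42):
    """Split records into training, validation, and test sets.

    In data-science usage, the 'validation' set guides iterative
    experimentation (e.g., model selection and tuning), while the
    test set stays untouched for a final, independent evaluation.
    """
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)  # reproducible shuffle
    n = len(shuffled)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, val, test

train, val, test = three_way_split(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```

The point of the terminology caution above is that none of this “validation” activity constitutes formal validation in the regulatory sense; only the held-out test set supports the independent performance claims regulators expect.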
While general principles such as the need for transparency, the independence of test data sets, and the relevance of ongoing monitoring are becoming an agreed standard, we observe different approaches to the scope of applicable use cases. In some jurisdictions, clearer lines are drawn on what use of AI is seen as generally acceptable, as in the recent examples of the EMA Reflection Paper and the draft EU GMP Annex 22. In this regard, the guide promotes a universally applicable risk-based approach while highlighting relevant aspects of local or regional regulations at the time of writing.
How does the guide address life cycle management of AI models — particularly ongoing quality risk management to mitigate issues like model drift and bias?
The AI guide follows the ISPE GAMP 5 Guide (Second Edition) in promoting a quality by design approach, specifically interpreted and tailored to the use of AI. It aims for comprehensive, evidence-based decisions throughout the life cycle: beginning at the concept phase with prototyping, continuing through iterative development and performance evaluation, and proceeding via formal verification to ongoing monitoring. All these activities should be guided by performance indicators evaluated on suitable data to determine the model's fitness for purpose throughout the life cycle.
Specifically, regarding model drift and bias, data understanding within the context of use of a model is of prime importance. Therefore, the guide highlights the importance of ongoing monitoring of input data to detect changes in data distributions that may have an impact on model performance and trigger change management activities, in addition to ongoing monitoring of model performance. The guide also covers the use of dynamic systems, i.e., systems that exhibit adaptive learning behavior during operation and may deploy new model versions automatically. Here, thorough control of model changes and stop criteria in case of unexpected behavior along the model’s evolutionary path are needed.
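The input-distribution monitoring described above can be sketched in a few lines. The following is an illustrative example only, not a method prescribed by the guide: it compares a reference sample of model inputs against a recent production window using the two-sample Kolmogorov-Smirnov statistic, and flags potential drift above a threshold. The threshold value here is arbitrary; in practice it would be derived from a risk assessment, and an alert would trigger the change management activities the guide describes.

```python
import bisect

def ks_statistic(reference, current):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of the two samples."""
    ref = sorted(reference)
    cur = sorted(current)

    def ecdf(sample, x):
        # Fraction of sample values <= x
        return bisect.bisect_right(sample, x) / len(sample)

    points = sorted(set(ref + cur))
    return max(abs(ecdf(ref, x) - ecdf(cur, x)) for x in points)

def drift_alert(reference, current, threshold=0.2):
    """Flag potential input drift; the threshold is illustrative and
    would in practice come from a documented risk assessment."""
    return ks_statistic(reference, current) > threshold
```

For example, comparing a window of inputs against itself yields no alert, while a window whose values have shifted noticeably relative to the reference does. The same pattern extends to monitoring model output distributions alongside the performance indicators mentioned above.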
Chapter 4 touches on a lesser-discussed aspect of life cycle management — model retirement. Can you offer an example of what might trigger model retirement, and what factors should be considered when planning for it?
Model retirement may occur for several reasons: for example, as part of the overall retirement of the AI-enabled computerized system, or when a model shows insufficient performance or a superior approach becomes available.
Factors to consider when retiring a model include traceability of model input, the model itself, and model output, as well as its integration into the AI-enabled computerized system to allow for ex-post assessment. Further considerations are any interdependencies of data and models, as they may be used in more than one computerized system, and the use of auxiliary functionality such as explainable AI methods.
Further Reading:
- International Society for Pharmaceutical Engineering, ISPE GAMP Guide: Artificial Intelligence, 2025, https://ispe.org/publications/guidance-documents/gamp-guide-artificial-intelligence
- International Society for Pharmaceutical Engineering, GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems (Second Edition), 2022, https://ispe.org/publications/guidance-documents/gamp-5-guide-2nd-edition
- Eric Staib, Tomos Williams, Siôn Wyn, “Applying GAMP Concepts to Machine Learning,” ISPE Pharmaceutical Engineering, 2023, https://ispe.org/pharmaceutical-engineering/january-february-2023/applying-gampr-concepts-machine-learning
- Rolf Blumenthal, Nico Erdmann, Martin Heitmann, Anna-Liisa Lemettinen, Brandi Stockton, “Machine Learning Risk and Control Framework,” ISPE Pharmaceutical Engineering, 2024, https://ispe.org/pharmaceutical-engineering/january-february-2024/machine-learning-risk-and-control-framework
- Martin Heitmann, Stefan Münch, Brandi Stockton, Frederick Blumenthal, “ChatGPT, BARD, and Other Large Language Models Meet Regulated Pharma,” ISPE Pharmaceutical Engineering, 2023, https://ispe.org/pharmaceutical-engineering/july-august-2023/chatgpt-bard-and-other-large-language-models-meet
- European Medicines Agency, “Reflection paper on the use of Artificial Intelligence (AI) in the medicinal product lifecycle,” 2024, https://www.ema.europa.eu/en/documents/scientific-guideline/reflection-paper-use-artificial-intelligence-ai-medicinal-product-lifecycle_en.pdf
- European Commission, “Stakeholders’ Consultation on EudraLex Volume 4 - Good Manufacturing Practice Guidelines: Chapter 4, Annex 11 and New Annex 22,” 2025, https://health.ec.europa.eu/consultations/stakeholders-consultation-eudralex-volume-4-good-manufacturing-practice-guidelines-chapter-4-annex_en
About The Experts:
Brandi Stockton is founder and CEO of The Triality Group. She has over 25 years of pharmaceutical industry experience across several areas of GxP. Brandi serves as secretary of GAMP Global, is the immediate past chair of GAMP Americas, co-leads the GAMP Global Software Automation and AI Special Interest Group, and is a member of the AI CoP Steering Committee. She was the visionary originator and principal strategist of the AI Guide Initiative.
Martin Heitmann is a consultant with The Triality Group, holding a decade of experience focusing on technology, innovation, and transformation. He serves as secretary of the GAMP Global Software Automation and AI Special Interest Group and co-led the AI Guide Initiative.