Should We Refocus On The Product, Then Engineer The Process?
By Arnaud Deladeriere, Ph.D., Cell&Gene Consulting Inc.

There’s an inherent problem with a phrase that has worked its way into the advanced therapy development lexicon. We can’t say “process is the product” because the starting material is intrinsically variable, especially in autologous therapies. Process alone cannot guarantee quality.
The Cell&Gene Foundry, a group of scientific and industry leaders I assembled to debate deep-seated issues, revisited this idea and tested it against reality.
We converged on a simple but demanding sequence: define the product first and measure it well, then engineer a process capable of repeatedly delivering that product despite biological variation. Characterization is the cornerstone of that shift. It links biological mechanisms to clear critical quality attributes (CQAs), ties patient context to culture behavior, and turns “product” into something observable, measurable, and controllable along the entire path from apheresis to release.
About the Cell&Gene Foundry
These ideas are shared in collaboration with the Cell&Gene Foundry, an industry group assembled to discuss important topics in cell and gene therapy development, led by Arnaud Deladeriere. This conversation included insights from: J. Kelly Ganjei, an independent consultant; Nina Bauer, Ph.D., of Fujifilm Cellular Dynamics; and Marty Giedlin, a cell and gene therapy consultant.
To learn more about the Foundry, visit www.cellgeneconsulting.com.
The discussion emphasized three levers that make the shift operational.
- Analytics and characterization must be designed into development, not wrapped around it. A cultural reluctance to collect data is giving way to a more pragmatic stance as tools — from applied biostatistics to ML-assisted image inference — help teams make sense of complex data quickly.
- Automation needs to get smarter. Closed, hands-off steps reduce variance only if the systems are instrumented to sense, decide, and document; otherwise, automation can simply hide failure modes and de-skill teams.
- Manufacturing platforms work best when they are standardized in architecture and flexible in execution. Predefined data-driven branches — the “if-then” logic that responds to starting-material and in-process data — allow platforms to control processes despite variable starting materials.
Process Vs. Product Debate
The roundtable started by interrogating where the old mantra came from and why it persists. Early in the field, equating process with product was as much about protecting IP and “secret sauce” as it was about science. If the method was the moat, it followed that replicating the method would replicate the medicine.
In hindsight, that logic collapses when the material entering the process is non-uniform by design. Human donors are not vials of well-characterized intermediates, and the panel’s collective experience reflects the consequences. Teams have executed SOPs perfectly and still produced divergent outcomes, only to find, on the hard look back, that the differences traced to the patients rather than the procedures: a fever two weeks before apheresis, a chemotherapy window in the prior month, an age-linked shift in naïve versus memory T cell compartments. The biology of the incoming material was a critical factor.
That distinction matters because it reshapes what “product” must mean in practice. A process can be compliant, closed, and consistent and still not deliver the same product if the inputs vary in ways the team does not measure or understand. This is why the panel repeatedly returned to characterization as the cornerstone. If the field expects consistent outputs from inconsistent inputs, it must first specify what “consistent” means in terms of critical quality attributes aligned to mechanism and clinical context. Those CQAs then need to be mapped onto the journey from apheresis through activation, transduction, and hold, not simply checked at release. When teams treat those signals as the definition of the product, they can engineer processes that are coherent with the biology of the product they are handling.
The lived examples were concrete. Operators followed the recipe to the letter, but a heavily pretreated donor’s cells behaved nothing like those from a healthier donor. Activation kinetics shifted, cell size distributions told a different story about synchronicity, and culture performance lagged without any obvious stepwise error. In that setting, insisting that the process is the product becomes a way to avoid the harder question: what did we fail to measure about our product and starting material that would have predicted this divergence? The answer is not to abandon process discipline — it is to put product definition ahead of process fidelity. That sequence prevents a familiar trap: tweaking steps to make the numbers look right while never truly knowing whether the biology that matters remained constant.
The panel also entertained a provocative reframing. If characterization becomes exquisite and controls become tight, could the process eventually be indistinguishable from the product — so well-specified that output is functionally invariant across realistic inputs?
It is an attractive vision, but the group treated it as a possible destination, not a current premise. Only when product understanding drives control should anyone speak of equivalence between process and product. Until then, conflating the two invites brittle programs that break when biology refuses to conform.
Analytics And Characterization
The panel described a blocker that is more cultural than technical. For years, many sponsors hesitated to collect deep data, worried that new signals might complicate filings or force uncomfortable disclosures. That posture kept variation hidden in plain sight and left processes fragile. What is changing now is not only the availability of tools but also the willingness to use them. Participants noted a palpable shift at recent meetings: data are increasingly treated as an asset because teams finally have practical ways to interrogate messy, multidimensional sets that span production parameters, product characterization, and patient context.
At the center of this shift is a rebalancing of expertise. No one argued that AI or statistics replaces judgment. The argument was that judgment scales when paired with the right analytical capabilities.
That stance captures what is possible when teams stop trying to wring meaning from spreadsheets alone and instead couple domain knowledge to people and tools built for the job. The timeline changes. Patterns that once took weeks of manual slicing and arguing can surface in hours.
A modern measurement system begins upstream of manufacturing. It includes patient selection policies and apheresis characterization that capture context aligned to known drivers of variability. It builds in time-resolved, in-line or at-line sensing during culture — glucose and lactate in closed systems are common and useful examples — because such signals provide continuous context to feed into decision algorithms. And it continues through final release and longitudinal follow-up, so the model of what matters is constantly tested against outcomes.
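To make that end-to-end view concrete, here is a minimal sketch of how such a linked record could be structured, assuming hypothetical field names and Python dataclasses; the specific attributes are illustrative, but the point is that patient context, time-resolved in-process signals, and release results live in one object rather than in separate silos.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical record structure: one linked object per batch, spanning
# patient context, time-resolved in-process signals, and release CQAs.

@dataclass
class ApheresisContext:
    donor_age: int
    days_since_last_chemo: int | None   # None if chemotherapy-naive
    recent_fever: bool                  # e.g., fever within 14 days of collection
    naive_to_memory_ratio: float        # T cell compartment shift noted by the panel

@dataclass
class InProcessSample:
    timestamp: datetime
    glucose_g_per_l: float
    lactate_g_per_l: float
    viable_cell_density_e6_per_ml: float

@dataclass
class BatchRecord:
    batch_id: str
    context: ApheresisContext
    in_process: list[InProcessSample] = field(default_factory=list)
    release_cqas: dict[str, float] = field(default_factory=dict)  # e.g., {"viability_pct": 92.0}
```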
The group also stressed fundamentals. Visual intuition is not a relic; it is critical for training the algorithms. Morphology, synchronicity, and early signs of distress should be recognized by people and captured by systems. When those worlds meet, teams gain a double safeguard against blind spots: a human who knows when something “looks wrong” and a system that can spot and quantify what “wrong” is and when it began.
Turning analytics into capability requires codification. The panel advocated replacing ad hoc “eye-test” knowledge with explicit logic. If a lactate trajectory crosses a defined threshold at a specific timepoint, the process does not rely on endless meetings to figure out what the next step is: it executes a predefined if-then logic adjustment and records it. None of this works without data hygiene and thoughtful study design. Bringing biostatistics into the room early ensures that questions are framed explicitly, that data are collected in ways that enable comparisons, and that conclusions are reported with the right confidence language. Equally, none of it works if the organization continues to fear its own data. Several participants observed a change here too: teams are increasingly willing to collect what they can act on because the path from data to decision is clearer and faster than it used to be.
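As a rough illustration of what codifying that “eye-test” knowledge can look like, the sketch below uses a hypothetical lactate rule table and invented action names; in a real platform the timepoints, thresholds, and responses would be prespecified and validated, and the audit entries would flow into the batch record rather than an in-memory list.

```python
from __future__ import annotations
from datetime import datetime, timezone

# Hypothetical, illustrative rule table: timepoints, thresholds, and actions
# are placeholders, not validated values.
LACTATE_RULES = [
    # (culture day, lactate threshold in g/L, predefined action)
    (3, 1.5, "increase_perfusion_rate_20pct"),
    (5, 2.0, "perform_full_media_exchange"),
]

def evaluate_lactate(day: int, lactate_g_per_l: float, audit_log: list) -> str | None:
    """Apply the prespecified if-then logic and record whatever was decided."""
    for rule_day, threshold, action in LACTATE_RULES:
        if day == rule_day and lactate_g_per_l > threshold:
            audit_log.append({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "signal": f"lactate {lactate_g_per_l} g/L on day {day}",
                "rule": f"> {threshold} g/L at day {rule_day}",
                "action": action,
            })
            return action
    return None  # no branch triggered; stay on the nominal path

# A day-3 reading of 1.8 g/L triggers the predefined perfusion adjustment and logs it.
log: list = []
print(evaluate_lactate(3, 1.8, log))  # -> "increase_perfusion_rate_20pct"
```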
The practical consequence of all this is not a dashboard for its own sake. It is comparability of the right kind: a clear line from mechanism and patient context to CQAs to in-process data to release and, ultimately, to clinical behavior.
Automation And Standardization
Automation earned praise in the roundtable for what it reliably delivers — fewer hands, reduced variance, and the promise of reproducibility at scale — and scrutiny for what it sometimes erodes: skill, observability, and the ability to explain what happened when something goes wrong. Closing steps inside automated systems is essential to reduce contamination risk and human variability. But a closed box without eyes and judgment is a risk in its own right.
To improve automation, we need to instrument it. Systems must be built to see, decide, and document. Optical turbidity checks can halt supernatant removal when opacity suggests cell carryover. Interlocks can prevent proceeding past known risk thresholds. eQMS software should write decisions back into the batch record automatically: if perfusion was increased to bring a parameter back within limits, the log should show what changed, when, and with what effect on growth rate or viability. Human-machine interfaces matter, too. Clear, explanatory screens and alarm logic turn a set of hidden rules into a visible conversation with the operator. When a system behaves like this, it does more than remove hands; it becomes a valuable extension of the team.
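Here is a minimal sketch of that sense-decide-document loop, using an assumed turbidity limit and an in-memory stand-in for the batch record; a real system would route the same annotation through the eQMS and surface the halt on the operator’s HMI.

```python
from __future__ import annotations

# Hypothetical interlock sketch: a turbidity reading gates supernatant removal,
# and the decision is written back to a batch-record stand-in either way.
TURBIDITY_LIMIT_NTU = 40.0  # assumed limit; a real value would come from validation

def attempt_supernatant_removal(turbidity_ntu: float, batch_record: list) -> bool:
    """Proceed only if opacity does not suggest cell carryover; document the decision."""
    within_limit = turbidity_ntu <= TURBIDITY_LIMIT_NTU
    batch_record.append({
        "step": "supernatant_removal",
        "decision": "executed" if within_limit else "halted_by_interlock",
        "reason": f"turbidity {turbidity_ntu} NTU vs. limit {TURBIDITY_LIMIT_NTU} NTU",
    })
    return within_limit  # a False return would raise an operator alert in a real system

record: list = []
print(attempt_supernatant_removal(55.0, record), record)  # halted, and the log says why
```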
Training must evolve alongside the machines. Operators should be involved in training the algorithms, so that cell-state recognition is woven into the use of automated platforms and people can catch the unexpected while the machines flag it. In the same vein, maintenance of automation needs to be treated as a scientific discipline, not a back-of-house chore. Many deviations trace to instrument issues as much as to human ones; the capacity to diagnose and correct those issues is part of making automation safe.
The conversation’s most important contribution to standardization was its insistence that “platform” and “flexible” are not contradictory. A strong platform codifies what is common and stable: closed architecture, validated unit operations, known ranges for key parameters, data pathways that move signals from instruments to decision logic without friction. Within that scaffold, the execution must adapt to biology. The branching logic discussed across the hour — if-then adjustments tied to starting material and in-process signals — is how a platform honors variability where it matters without degenerating into one-off recipes. The simplest example is conditional feeding or perfusion based on a time-resolved metabolite profile. More advanced branches include altering activation dwell times or modulating transduction exposure based on image-inferred phenotype mixes. In each case, the choice to branch is prespecified and validated, not decided in a panic; the system executes and records; the operator oversees and understands.
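To show the shape of such prespecified branching, here is a toy sketch under stated assumptions: the cutoffs, dwell times, and feed options are invented for illustration, while in a validated platform each branch and its trigger would be defined and qualified before the first patient lot.

```python
from __future__ import annotations

# Toy branch selection: each branch is prespecified and validated up front;
# only the choice between them is data-driven at run time. All numbers are
# illustrative assumptions, not recommended values.

def select_activation_dwell_hours(naive_fraction: float) -> int:
    """Pick a prespecified activation dwell time from an image-inferred phenotype mix."""
    return 24 if naive_fraction >= 0.4 else 48

def select_feed_branch(lactate_slope_g_per_l_per_day: float) -> str:
    """Choose between prespecified feeding branches from a time-resolved metabolite trend."""
    return "continuous_perfusion" if lactate_slope_g_per_l_per_day > 0.5 else "daily_bolus_feed"

# A heavily shifted phenotype mix routes to the longer activation branch,
# and a steep lactate rise routes to the perfusion branch.
print(select_activation_dwell_hours(naive_fraction=0.25))      # -> 48
print(select_feed_branch(lactate_slope_g_per_l_per_day=0.8))   # -> "continuous_perfusion"
```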
This approach reframes what success looks like. It is not “automation for automation’s sake” and not “platform because everyone else has one.” It is a coherent system in which architecture is standardized so that knowledge accumulates, while execution is variable so that biology has room to be itself. The result is not only better science; it is better economics. Each avoided failure, each prevented deviation, and each reduction in rework is a down payment on affordability. The panel returned to that point often. Costs will fall when variation is designed out by platform plus sensors plus analytics, when first-time-right becomes a norm rather than a hope, and when the knowledge to keep those gains is embedded in both people and systems.
Key Takeaways
1. Move to a data-first culture
Collect more and better data and actually use it. Embed biostatistics/ML to ask the right questions and extract patterns quickly. Pair newer, less-invasive, in-line analytics with operator expertise to remove the “secret sauce” rhetoric.
2. Define and target your product beyond the results of consistent processing
For living medicines — especially autologous — human input variability means the process is only a means to an end (the product). Start by rigorously defining CQAs tied to mechanism and measure them from apheresis through release; then engineer the process to hit those targets consistently.
3. Roll out responsive, intuitive automation
Closed, automated steps reduce variance only if they’re instrumented: integrate sensors, error-proofing, alarms, and automatic batch annotations. Otherwise, automation can de-skill teams and hide failure modes.
4. Drive down COGS by engineering out variation using platform processes with integrated sensors and analytics
This optimizes patient selection and scheduling (ideally pre-chemo) and builds first-time-right operations that cut labor, rework, and footprint. The winning systems will collapse labor, failures, and footprint via platform plus sensors plus analytics, driving down COGS without compromising product understanding.
5. Reframe the “procedure”
Ex vivo steps could increasingly resemble a dialysis-like procedure: collect cells, execute a characterized process, give cells back — with the true differentiating IP being the targeting construct rather than the unit-operation choreography.
About The Author:
Arnaud Deladeriere, Ph.D., is principal consultant at Cell&Gene Consulting Inc. Previously, he was head of MSAT and Manufacturing at Triumvira Immunologics and, before that, manufacturing manager at C3i. He received his Ph.D. in biochemistry from the University of Cambridge.