Guest Column | September 26, 2025

Derisking AI Means First Asking: Who Does It Serve?

A conversation with Kat Kozyrytska


Imagine a hypothetical but plausible scenario — a process development scientist in your biotech company’s development lab wants to know about transitioning to single-use bioreactors for early-phase bispecific antibody production. Using a browser plug-in on a company network, they pop open ChatGPT and ask which bioreactor is best to use. Two things happen:

  1. OpenAI now knows that their employer is working on bispecifics and is looking for bioreactors. (You have more legal protections if you have a business account, but with a free or personal account, this information has left your organization and can be used by OpenAI).
  2. The suggestion ChatGPT is about to give your process development scientist may or may not be in your best interest. At best, it may not have the competence to answer the question because it was trained on a single Reddit thread by a process development scientist who tried only one or two bioreactors, and with a different cell line. Or ChatGPT may give an answer that is in the best interest of a bioreactor manufacturer, or in your competitor's best interest. You simply do not know, because ChatGPT has no obligation to give you the answer that is best for you. But it is very persuasive, so your process development scientist is likely to believe the answer served to them.

Artificial intelligence, including off-the-shelf consumer-grade products like ChatGPT, promises productivity boosts and accelerated innovation. But using them invites risk, including accidentally exposing intellectual property.

AI implementation consultant Kat Kozyrytska offered ideas about managing these types of risks during Cambridge Healthtech Institute's Bioprocessing Summit in August, where she spoke about balancing the unquenchable allure of artificial intelligence with smart practices and due diligence. We caught up with her after the talk. The transcript below is edited for clarity.

 

You started by comparing ethics — human-based ethics vs. the ethics of artificial intelligence. Can you summarize that?

Kozyrytska: In addition to human organizational structures, companies are implementing agentic AI organizational structures. Within life sciences, pharma, biotech, and healthcare, we have strict requirements for what we could call the “goodness level” of humans — essentially the ethics. At its core, it’s about: “In whose best interest are these employees acting?”

When you think of a doctor, you want the doctor to be acting in the best interest of the patient, and in the end there are different — competing — parties playing into that. Maybe you have somebody selling to the doctor, or the doctor is working with the local hospital, but whatever the other factors are, you still want the doctor to be acting in the best interest of the patient. The law is clear on this.

This is true across different industries where you have a fiduciary duty. You can think about how in law, real estate, finance, transportation, and urban planning, there are many other implementations of the same framework. Essentially, that's a core concept.

An additional construct for us in the life sciences is confidentiality — keeping our own data confidential and keeping the patient's data confidential. Accountability is a big topic in discussions today.

How do you rate the risk of bad outputs from, for example, large language models? These things are based on probability and in this industry, probability kind of just doesn't cut it.

Kozyrytska: It’s consistent with the current regulatory view that we want deterministic outputs rather than probabilistic outputs. We want to make sure that every time we ask the same question with the same inputs, we get the same answer.

A way of thinking about this is that humans typically have a certain way of thinking about a problem. If you ask them to solve similar problems, they will give you similar solutions. They have an embedded way of thinking about it.
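To make that requirement concrete, here is a minimal sketch in Python of a reproducibility check, assuming a hypothetical `ask_model` callable that wraps whatever model is under evaluation with fixed settings (for example, temperature pinned to zero and a fixed seed where the provider supports one). The same prompt is issued several times and the answers are compared.

```python
from typing import Callable

def check_determinism(ask_model: Callable[[str], str], prompt: str, runs: int = 5) -> bool:
    """Return True only if every run on the same prompt yields the same answer."""
    answers = [ask_model(prompt) for _ in range(runs)]
    return all(answer == answers[0] for answer in answers)

if __name__ == "__main__":
    # Stand-in for a real model call pinned to fixed settings; replace with
    # the system actually under test.
    def ask_model(prompt: str) -> str:
        return "stub answer"

    print("Deterministic across runs:",
          check_determinism(ask_model, "Which bioreactor fits our process?"))
```

A check like this only demonstrates run-to-run consistency; it says nothing about whether the consistent answer is correct, which is where the risk assessment Kozyrytska describes next comes in.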

This is not where we are with AI today, to the degree that we understand it. Every company implementing AI has to assess those risks on their own, which is a lot of work. And here we're only talking about some of the risks, but there are also infrastructural aspects. It’s a lot of effort for companies to bring on AI in a meaningful, secure, private, confidential, productive way.

It's a wonderful space for us to collaborate on, and it’s already happening to some degree at the large companies. But you can make an even better case for smaller companies. They have arguably fewer resources to evaluate these risks. So, can we give them some sort of framework to implement, or maybe from the regulatory side can we certify the technologies so that the companies have to do less evaluation and spend less before they unlock the benefits of AI?

We've implemented other novel technologies. This is not the first new tech that we're bringing in, so we already know how things tend to go wrong. To me, it's about applying past learnings to this new tech in order to avoid incidents that we might not want to see.

You talked about the need for a human in the loop. Other experts have suggested there’s inevitably going to be a person involved with every task.

Kozyrytska: The current regulatory framework, and also the implementation framework within companies, is certainly to have a human in the loop. And I agree that's absolutely the best we can do today.

That’s because our legal framework is set up so you can punish a human if something goes wrong. You must have an accountable human individual somewhere in the decision-making process.

Moving forward, we can imagine that the jobs are going to get really boring. For example, if your job is QC of the work of an algorithm, what you're doing is literally clicking on the link that it's referencing to confirm what it says. These are not going to be very engaging jobs.
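As an illustration of that review step, here is a minimal sketch (with a hypothetical answer and an illustrative example.com link) that pulls the URLs an answer cites into a checklist and confirms each one at least resolves. The human in the loop still has to read each source and confirm it actually supports the claim.

```python
import re
import urllib.request

def cited_urls(answer: str) -> list[str]:
    """Extract the http(s) links an answer references, trimming trailing punctuation."""
    return [u.rstrip(".,;") for u in re.findall(r"https?://[^\s)\]]+", answer)]

def resolves(url: str, timeout: float = 5.0) -> bool:
    """Check that a cited link is at least reachable; a human still reads the content."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status < 400
    except Exception:
        return False

if __name__ == "__main__":
    answer = "Single-use systems cut turnaround time (see https://example.com/bioreactor-study)."
    for url in cited_urls(answer):
        print(url, "->", "resolves" if resolves(url) else "unreachable")
```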

I think we're going to have some challenges on the employee retention and motivation side. So, I hope we can solve some of these problems and build out some certification programs to alleviate the load on the human in the loop.

In this futuristic view, maybe we have a way of assigning legal responsibility to an algorithm. I don't know what that would look like. That's still in development.

But maybe something to note here is that a Massachusetts senator has brought a proposal to the Senate for legal responsibility for the deployment of AI. The proposal is to share that responsibility between the developer of the algorithm and the deployer of the algorithm. It's not specific to life sciences, but it's a way of thinking about it that's consistent with some of the other products that we've deployed, where, if something goes wrong, we hold both parties responsible.

The overarching theme of your talk was the nonnegotiable factors of AI implementation. Can you give us a rundown of those?

Kozyrytska: I think we all want productivity. There has to be some benefit. Most of the time we're willing to take on some risk to get the benefit.

We can ask a lot of questions in order to minimize the risk. But, of course, the biggest one that companies are mostly thinking about is IP. That is, maintaining the privacy and confidentiality of your information and your data.

We have been doing this for years now because we are becoming a data-driven sector, but there are novel ways that data can escape your organization. Additional layers of thinking go a long way, for example, asking your technology providers: What happens with the data? Where does it go? Where does it get used?

We're at the stage where there's an opportunity for therapy developers to drive and guide technology developers in terms of those specific requirements.

Speaking of confidentiality, you have a progressive take on just what should be considered protected data. You described rethinking some types of data that could help partners move more nimbly and collaborate more effectively.

Kozyrytska: Importantly, in this concept of confidential collaboration, it's key to keep your data private and confidential. We now have the great technology of decentralized AI and decentralized machine learning. The movement of decentralized science is here.

The benefit of collaborating is also very clear. We have been doing this since prehistoric times and obviously it's served us well. We can get to innovation faster, at a lower cost, and serve more patients sooner.

A way of thinking about this is that already in the industry, even in the novel space of cell therapies, we know that companies are most likely looking at CD4 or CD8 if they're in the immune space. Most of the time we can't really talk about a lot of the profiling that we do because it’s tied into IP, but for the shared factors that all, or most, are looking at, we can develop industry best practices to help us speed up development and reduce manufacturing costs.

We're now in a place where, with some minor amendments to make the widespread infrastructures for these types of collaborations more confidential than they are today, we can share information about the parameters that everybody's looking at and keep the other ones confidential.
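As one hedged illustration of that idea (the site data, parameter names, and "proprietary_marker" field here are all hypothetical), each participant could compute aggregates only for the agreed shared parameters and contribute those, so raw records and proprietary measurements never leave the organization.

```python
from statistics import mean

# Agreed, non-proprietary parameters every participant already measures.
SHARED_PARAMETERS = ["CD4_frequency", "CD8_frequency"]

def site_summary(records: list[dict]) -> dict:
    """Aggregate only the shared parameters; raw records and proprietary fields stay local."""
    return {p: mean(r[p] for r in records) for p in SHARED_PARAMETERS}

def pooled_benchmark(summaries: list[dict]) -> dict:
    """Combine per-site aggregates into a shared industry reference value."""
    return {p: mean(s[p] for s in summaries) for p in SHARED_PARAMETERS}

if __name__ == "__main__":
    site_a = [{"CD4_frequency": 0.42, "CD8_frequency": 0.31, "proprietary_marker": 0.90}]
    site_b = [{"CD4_frequency": 0.38, "CD8_frequency": 0.29, "proprietary_marker": 0.70}]
    print(pooled_benchmark([site_summary(site_a), site_summary(site_b)]))
```

Only the per-site summaries cross organizational boundaries in this sketch; decentralized machine learning approaches extend the same principle to model training rather than simple averages.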

About The Expert:

Kat Kozyrytska is an industry consultant who helps biotech and pharma companies evaluate and implement artificial intelligence. She holds an S.B. in biology from MIT and an M.Sc. in neuroscience from Stanford University.