From The Editor | June 12, 2019

FDA's "Show Me The Data" A Recipe For Tailored Biosimilar Development?


By Anna Rose Welch, Editorial & Community Director, Advancing RNA


Over the past four whirlwind years, I’ve written thousands of words discussing the many (occasionally dramatic) commercial and regulatory updates occurring in biosimilar markets around the globe. Though I have focused quite a bit on regulatory policies and the science behind them, I recently set a goal for myself to become more familiar with the technical side of biosimilar development — more specifically, the processes behind making a biosimilar and some of the pitfalls that can arise, especially in the wild world of analytical characterization. The timing of my burgeoning interest in these scientific territories could not have been more perfect, seeing as the FDA just recently released its Comparative Analytical Assessment and Other Quality-Related Considerations draft guidance.

In order to better understand the agency’s recommendations and what the industry may face in addressing them, I reached out to Fouad Atouf, VP of global biologics, science and standards at the U.S. Pharmacopeia (USP). Throughout our conversation, Atouf shared his thoughts on the potential impact of the FDA’s latest guidance. In this first of a two-part article, Atouf highlights the challenges presented by the FDA’s newest guidance while remaining optimistic that the large amount of data recommended today will open doors to more efficient development in the (hopefully) near future.

The Opportunities And Challenges Of The FDA’s Comparative Analytics Guidance

Those of you who have read my recap of the FDA’s comparability draft guidance will know that its recommendations are much improved compared to the withdrawn statistical analysis guidance. In fact, as Atouf continued to emphasize throughout our call, he really admires this newest iteration of the guidance. While the former statistical analysis guidance was, as the title implies, much more focused on statistics, this newest draft recognizes what can be done today in biosimilar analytical development, thanks to improving methodologies and technology. The science behind manufacturing and production, along with the methods used to develop medicines, is regularly advancing, and this is truly something to celebrate.

However, this also presents biosimilar manufacturers with the challenge of keeping up with these innovations. And, as we know, reverse engineering and characterizing a biosimilar molecule is not an overnight procedure. As Atouf explained to me, it can take a biosimilar manufacturer a couple of years to analytically characterize a reference product and develop the appropriate analytical methods for its biosimilar. During that time, as analytical methods become more precise, the company will likely find that the data gleaned later in a development program will differ from the data collected at the program’s launch.

There are bridging studies that can compare the new and old methods of analysis. But as Atouf acknowledged, that’s extra work in an already complicated, resource-intensive process. There is often little room in a development program to go back and completely recraft an analytical method to incorporate these newer technologies. So, one of the biggest challenges for all biologic manufacturers is keeping abreast of ongoing technological advances and being able to capture critical changes in methodologies.
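To make the bridging idea a bit more concrete, here is a minimal sketch (in Python) of the kind of paired comparison a developer might run when an analytical method is updated mid-program: the same lots are measured with the old and new methods, and the average bias between the two is estimated. The lot values, the attribute, and the acceptance threshold are hypothetical assumptions for illustration only, not anything prescribed by the FDA or described by Atouf.

```python
# Illustrative sketch only: a simple paired comparison between an original
# analytical method and an updated one, run on the same lots. All values,
# the attribute, and the acceptance threshold below are hypothetical.

import statistics

# Hypothetical % main-peak purity readings for the same six lots
old_method = [97.8, 98.1, 97.9, 98.3, 98.0, 98.2]
new_method = [98.0, 98.4, 98.1, 98.5, 98.2, 98.4]

# Per-lot differences (new minus old) and their summary statistics
diffs = [n - o for n, o in zip(new_method, old_method)]
mean_bias = statistics.mean(diffs)
sd_bias = statistics.stdev(diffs)

print(f"Mean bias (new - old): {mean_bias:+.2f} percentage points")
print(f"SD of differences:     {sd_bias:.2f}")

# Hypothetical acceptance criterion: average bias within +/- 0.5 points
if abs(mean_bias) <= 0.5:
    print("Methods appear comparable for this attribute (illustrative criterion only).")
else:
    print("Bias exceeds the illustrative criterion; further bridging work needed.")
```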

Atouf also emphasized just how much of an opportunity this guidance presents for the industry — though seizing that opportunity will inevitably come with its own hurdles. It’s clear from the list of information outlined in the guidance that the FDA has not been messing around with the phrase “totality of the evidence.” Though the recommended data and the methods for evaluating them are standard and well understood by industry, this guidance is asking for a lot of data. I guarantee many of you would argue the FDA is asking for too much data. Not only does the agency specify nine factors to evaluate in order to determine any key differences between the biosimilar and the reference product, it also recommends biosimilar makers look closely at the nature of each moderately to highly ranked critical quality attribute (CQA), each attribute’s distribution and abundance, and the type (quantitative vs. qualitative) and sensitivity of the assays used to assess those attributes. Altogether, this creates a rich and complex tableau of data, which, currently, is then followed by preclinical and clinical studies (aka even more time, effort, and data).
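For readers who, like me, are newer to this territory, here is a minimal sketch of what one small piece of that tableau might look like in practice: a quality-range style check of a single quantitative CQA, in which biosimilar lots are compared against a range derived from reference product lots (mean plus or minus a multiple of the standard deviation). The attribute, the lot values, and the 3-SD multiplier are assumptions for illustration, not figures taken from the guidance.

```python
# Illustrative sketch only: a simplified quality-range check of the kind often
# discussed for quantitative CQAs in comparative analytical assessments.
# The attribute, lot values, and the k = 3 multiplier are hypothetical.

import statistics

def quality_range(reference_lots, k=3.0):
    """Return (lower, upper) bounds as reference mean +/- k * SD."""
    mean = statistics.mean(reference_lots)
    sd = statistics.stdev(reference_lots)
    return mean - k * sd, mean + k * sd

def lots_within_range(biosimilar_lots, bounds):
    """Count how many biosimilar lots fall inside the reference quality range."""
    lower, upper = bounds
    return sum(lower <= x <= upper for x in biosimilar_lots)

# Hypothetical % monomer purity values for reference and biosimilar lots
reference_purity = [98.1, 98.4, 98.0, 98.3, 98.2, 98.5, 97.9, 98.2]
biosimilar_purity = [98.0, 98.3, 98.1, 98.4, 97.8, 98.2]

bounds = quality_range(reference_purity)
n_in = lots_within_range(biosimilar_purity, bounds)
print(f"Quality range: {bounds[0]:.2f}% to {bounds[1]:.2f}%")
print(f"{n_in}/{len(biosimilar_purity)} biosimilar lots fall within the range")
```

Keep in mind this covers only one attribute and one style of comparison; the guidance’s nine factors and the qualitative assays would each call for their own assessment.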

Given this ongoing recommendation to pursue preclinical studies (for the FDA) and clinical equivalence trials (for all regulators in developed nations), I’m sure some of you may argue the information requested in the FDA’s guidance is overkill, especially given the 13 years of experience we’ve had developing and launching biosimilars. However, Atouf is optimistic that this guidance — regardless of the high bar it sets for data quantity — is actually the key to more streamlined development in the future.

As he pointed out, in this guidance, as well as in many before it, the FDA has always expressed the possibility that a solid analytical program may allow the regulator to waive the need for preclinical or even clinical testing. I’ve been privy to many conversations over the past few years about just how far we still are from seeing this tailored development implemented on a case-by-case basis, let alone widely across molecules. But from where Atouf stands, this recent guidance represents the FDA’s effort to establish a framework for more tailored development protocols in the future — perhaps as soon as within the next five years. To help the agency arrive at this point, it will be critical for companies not only to provide data but also to thoroughly discuss the rationale behind their analytical programs, homing in on what the data means and why certain methods were chosen.

By requesting large amounts of data now, Atouf anticipates, the agency is ensuring it gets all the data it needs to increase regulators’ knowledge and, in turn, is laying the groundwork for eliminating certain data recommendations or requirements (e.g., nonclinical data, for starters). While the notion of tailored development in the future is alluring, to say the least, many of you may still be frustrated by the amount of data being asked for in this guidance and in reviews with the agency. But as Atouf reiterated, the more thorough the efforts of developers today, the closer the FDA can get to eliminating certain clinical recommendations, which will only benefit all players in the future.

Stay tuned for part two, in which Atouf outlines several of the biggest challenges in analytical comparability, including protocol standardization in assay selection (or the general lack thereof) and knowing how much data is “enough data.”