From The Editor | February 27, 2017

IQPC Biosimilars Event Emphasizes Need For Regulatory Alignment

By Anna Rose Welch, Editorial & Community Director, Advancing RNA

Early last week, I found myself in Philadelphia for the second annual IQPC Biosimilars Analytical Similarity, Clinical Studies, and Market Entry conference. This was my first time at this event, and because of its small size — roughly 40 people over the course of two days — it was an especially tight-knit gathering of experts from Teva, Pfizer, Samsung Bioepis, ICON, Merck, and Momenta, to name a few. Though the conference included panels on overarching regulatory and commercialization topics, such as Medicare and Medicaid biosimilar policy and the newly negotiated Biosimilar User Fee Act (BsUFA II), this event (as the name promises) primarily took me on a journey into the heart of biosimilar development: the analytical and statistical work that happens behind the scenes.

As a non-scientist and non-statistician, I’ll admit this was a complicated show for me to grasp at times, though it certainly heightened my respect for bioanalytics and those who work in this realm. One challenge stood out to me across many of the discussions — even those boasting illustrations of pharmacokinetic (PK) curves and peptide mapping. (My more artistic sensibilities often interpreted the latter as drawings of stalagmites.) That challenge was navigating the scientific differences that can arise when working with different regulators.

Balancing Scientific And Regulatory Demands

While the European Medicines Agency (EMA), the FDA, and Health Canada (HC) are aligned on many of the broad, overarching concepts of biosimilars, there are slight differences in how each agency approaches the science. Indeed, as the number of biosimilar development programs increases, the FDA, EMA, HC, and Japan’s Pharmaceuticals and Medical Devices Agency (PMDA) have established a biosimilar cluster to work toward what the FDA has termed “scientific alignment.” This conference was a good place to learn where these differences can crop up and to hear about some of the challenges companies have faced with the more nitty-gritty statistical work.

As ICON’s clinical development consultant Dr. Tim Clark described, “Statistically, you cannot prove two things are the same; you can only determine they are similar within the bounds of a certain margin.” This margin is often referred to as an acceptance or biosimilarity margin — the largest difference that can be clinically acceptable (even though what is acceptable is often subjective). The margin drives the sample size of a trial: the narrower the margin, the larger the sample size, while a broader margin permits a smaller trial. This is why five different companies developing the same biosimilar will often have varying Phase 3 clinical trial sizes.
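
For the statistically inclined, here is a minimal sketch of that relationship in Python. It assumes a binary endpoint, a normal approximation, a true difference of zero between the two products, 90 percent power, and a one-sided alpha of 0.05 — illustrative assumptions on my part, not the design choices behind any trial discussed at the conference.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(margin, p=0.5, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-arm equivalence (TOST) trial with a
    binary endpoint, assuming the true difference between products is zero.

    Normal approximation: n = 2*p*(1-p)*(z_alpha + z_beta_half)^2 / margin^2
    All parameter defaults here are illustrative assumptions.
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha)                # level of each one-sided test
    z_beta_half = z(1 - (1 - power) / 2)  # beta split across the two tests
    return ceil(2 * p * (1 - p) * (z_alpha + z_beta_half) ** 2 / margin ** 2)

# Identical assumptions, two different margins:
print(n_per_arm(0.15))  # ~241 patients per arm
print(n_per_arm(0.12))  # ~376 patients per arm — narrower margin, bigger trial
```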

Both the margin and the sample size are determined by a systematic review of historical data, which culminates in a meta-analysis and, from there, an overall estimate of the sample size. (For more in-depth information on the methodologies behind deriving the specific margins for your trial, Clark suggested turning to the FDA’s non-inferiority clinical trials guidance.)
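
The simplest version of that pooling step is a fixed-effect, inverse-variance meta-analysis, sketched below with made-up numbers rather than data from any presentation. The point to notice: change which historical studies are included, and the pooled effect — and with it the margin and the sample size — changes too.

```python
from math import sqrt

def fixed_effect_meta(estimates, std_errors):
    """Fixed-effect, inverse-variance meta-analysis.

    Pools per-study effect estimates (e.g., reference product vs. placebo)
    into one overall effect with a 95% confidence interval; a fraction of
    that pooled effect is then often proposed as the similarity margin.
    """
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical historical effects of a reference product vs. placebo:
effect, (lo, hi) = fixed_effect_meta([0.25, 0.32, 0.28], [0.05, 0.08, 0.06])
print(f"pooled effect {effect:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```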

But what was particularly interesting was Clark’s discussion on how the FDA and EMA have approached companies’ biosimilarity margins and, in turn, sample sizes.

For example, Remsima’s (Inflectra in the U.S.) clinical program was built around a 15 percent margin. Though the EMA had some concerns about this margin being too broad in certain indications, the totality of the analytical, non-clinical, and PK data supporting the molecule ultimately secured the EMA’s final approval.

However, when the same drug appeared before the FDA, the agency did not agree with the studies included in the historical meta-analysis and, as such, didn’t agree with the 15 percent margin. Rather, the agency believed a 12 percent margin would have been more appropriate. As Clark outlined, the trial sized for a 15 percent margin had 300 patients per arm. Had the study been sized for a 12 percent margin from the beginning, the company would’ve needed 373 patients per arm. (And to account for dropouts, that number really would’ve been closer to 400 per arm.)
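
Two bits of back-of-the-envelope arithmetic are illustrative here, though the actual design’s endpoint, power, and response-rate assumptions weren’t presented, so none of this reconstructs the real calculation.

```python
from math import ceil

# Rule of thumb: sample size scales with the inverse square of the margin,
# so tightening from 15 to 12 percent inflates a trial by roughly 1.56x.
# (The exact 300 vs. 373 figures rest on design details not presented.)
print(f"{(0.15 / 0.12) ** 2:.2f}x inflation")  # 1.56x

# Grossing the 373 evaluable patients up for dropouts: a ~7 percent rate
# (my assumption, back-solved from the talk's figures) lands near 400.
print(ceil(373 / (1 - 0.07)))                  # 402 enrolled per arm
```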

And Remsima/Inflectra wasn’t the only example of this; the FDA also disagreed with the studies included in the meta-analysis and the margin established for Amgen’s Amjevita. Had the trial originally been designed according to the FDA’s analysis, the sample size would’ve been roughly 330 patients per arm, as opposed to 250.

Now, both Inflectra’s and Amjevita’s data held up to the FDA’s specifications and the products were approved. But, obviously, these differences between the EMA’s and the FDA’s interpretation and acceptance of the chosen margin are somewhat concerning. “At the end of the day, these statistical analyses are what drive the size of a trial,” Clark explained. “As a statistician, it’s concerning to see meta-analyses being questioned and that the means by which a study is designed are being changed at the end,” he said. “A study isn’t powered to suddenly operate at more minimal margins.”

Linglong Zou, director of experimental immunology for Teva, also highlighted some differences between the FDA’s and EMA’s assay requirements. The FDA emphasizes a one-assay approach for comparative immunogenicity testing, while the EMA’s guidance recommends two. Remsima’s original marketing application to the EMA included only one assay; when the EMA requested a two-assay approach, the additional assay revealed no differences in the data (outside of the extra work required to perform two assays). As Zou outlined, a single-assay approach is four times faster than a two-assay approach, cutting sample analysis time three-fold and data analysis time two-fold.

Though there are instances in which scientific differences arise between the regulatory agencies, there is perhaps some good news for those of you working on biosimilar development in the U.S. In her presentation, BIO’s SVP of Science Policy, Kay Holcombe, highlighted the BsUFA II goals letter published by the FDA late last year. In this letter, the FDA outlined its plans to establish a new review program that would allow companies more interaction with FDA reviewers during the 10-month biosimilar application review period. In two additional meetings, companies would learn of any red flags or missing data in their applications, so they aren’t surprised by last-minute changes required to secure approval.

While the show had a specific scientific focus that was, at times, difficult for me personally to grasp, it was still clear that the ultimate goal behind these complicated methods is to improve biosimilar quality and shorten development timelines. As the aforementioned examples show, one of the biggest challenges can be working with regulators and balancing the differences that may arise in justifying trial design.

As Pantelis Vlachos, director of strategic consulting for Cytel, advised, one best practice is to bring statisticians to meetings with the FDA (and any other regulator, for that matter) to ensure your review is well balanced. (For instance, Vlachos said one review he attended included himself and five FDA statisticians.) “In discussion with agencies, it will be key to explain and, perhaps, defend why you’ve chosen to structure your study in a specific way,” he offered.