
Why European and United States drug regulators are not speaking with one voice on anti-influenza drugs: regulatory review methodologies and the importance of ‘deep’ product reviews

Overview of attention for article published in Health Research Policy and Systems, November 2017

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (85th percentile)
  • Good Attention Score compared to outputs of the same age and source (66th percentile)

Mentioned by

  • 1 blog
  • 6 X users

Citations

  • 9 citations (Dimensions)

Readers on

  • 25 readers (Mendeley)
Published in
Health Research Policy and Systems, November 2017
DOI 10.1186/s12961-017-0259-8
Authors

Shai Mulinari, Courtney Davis

Abstract

Relenza was the first neuraminidase inhibitor (NI), a class of drugs that also includes Tamiflu. Although heralded as breakthrough treatments for influenza, the efficacy of NIs has remained highly controversial. A key unsettled question is why the United States Food and Drug Administration (FDA) approved more cautious efficacy statements in labelling for both drugs than European regulators did. We conducted a qualitative analysis of United States and European Union regulatory appraisals of Relenza to investigate the reasons for divergent regulatory interpretations of Relenza's capacity to alleviate symptoms and reduce the frequency of complications of influenza. In Europe, Relenza was evaluated via the so-called national procedure, with Sweden as the reference country. We show that FDA reviewers, unlike their European (i.e. Swedish) counterpart, (1) rejected the manufacturer's insistence on pooling efficacy data, (2) remained wary of subgroup analyses, and (3) insisted on stringent statistical analyses. These differences meant that the FDA was less likely to depart from prevailing regulatory and scientific standards when interpreting trial results. We argue that the differences are explained largely by divergent institutionalised review methodologies, i.e. the European regulator's reliance on manufacturer-compiled summaries versus the FDA's examination of original data and documentation from trials. The FDA's more probing and meticulous evaluative methodology allowed its reviewers to develop 'deep' knowledge of the clinical and statistical facets of the trials, and more informed opinions about suitable methods for analysing trial results. These findings challenge the current emphasis on evaluating regulatory performance mainly in terms of speed of review. We propose that persistent uncertainty and knowledge deficits regarding NIs could have been ameliorated had regulators engaged in the public debates over the drugs' efficacy and explained their contrasting methodologies and judgments. Regulators devote major resources to evaluating drugs, but those resources are spent inefficiently if regulators' assessments are not effectively disseminated and used.

X Demographics

The data shown below were collected from the profiles of 6 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 25 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    25       100%

Demographic breakdown

Readers by professional status    Count    As %
Student > Master                  6        24%
Student > Doctoral Student        3        12%
Other                             2        8%
Researcher                        2        8%
Student > Ph. D. Student          2        8%
Other                             4        16%
Unknown                           6        24%

Readers by discipline                  Count    As %
Social Sciences                        6        24%
Medicine and Dentistry                 4        16%
Sports and Recreations                 4        16%
Nursing and Health Professions         2        8%
Economics, Econometrics and Finance    1        4%
Other                                  2        8%
Unknown                                6        24%
Attention Score in Context

This research output has an Altmetric Attention Score of 14. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 08 December 2017.
  • All research outputs: #2,270,634 of 23,007,887 outputs
  • Outputs from Health Research Policy and Systems: #327 of 1,226 outputs
  • Outputs of similar age: #47,321 of 331,173 outputs
  • Outputs of similar age from Health Research Policy and Systems: #8 of 24 outputs
Altmetric has tracked 23,007,887 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 90th percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,226 research outputs from this source. They typically receive much more attention than average, with a mean Attention Score of 13.0. This one has received more attention than average, scoring higher than 73% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 331,173 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 85% of its contemporaries.
We're also able to compare this research output to 24 others from the same source published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 66% of its contemporaries.
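
The percentiles quoted above follow directly from the rank-of-total figures in the list: for example, rank #2,270,634 out of 23,007,887 tracked outputs places this article above roughly 90% of them. The short Python sketch below reproduces that arithmetic; the function name is illustrative, and truncating (rather than rounding) the percentage is an assumption about how the page presents the numbers, not Altmetric's published method.

    import math

    # Illustrative sketch: derive the "scored higher than X%" figures from
    # the rank-of-total values shown in the comparison list above.
    # Truncation is assumed so that all four quoted figures match.
    def percent_scored_higher_than(rank: int, total: int) -> int:
        """Percentage of tracked outputs that this output outscored."""
        return math.floor((1 - rank / total) * 100)

    comparisons = [
        ("All research outputs", 2_270_634, 23_007_887),                  # -> 90
        ("Outputs from Health Research Policy and Systems", 327, 1_226),  # -> 73
        ("Outputs of similar age", 47_321, 331_173),                      # -> 85
        ("Outputs of similar age from the same source", 8, 24),           # -> 66
    ]

    for label, rank, total in comparisons:
        print(f"{label}: higher than {percent_scored_higher_than(rank, total)}% of outputs")

Run as-is, this prints the 90, 73, 85 and 66 percent figures cited in the paragraphs above.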