
Patient Reported Outcome (PRO) assessment in epilepsy: a review of epilepsy-specific PROs according to the Food and Drug Administration (FDA) regulatory requirements

Overview of attention for article published in Health and Quality of Life Outcomes, March 2013

Mentioned by

2 X users

Citations

21 Dimensions

Readers on Mendeley

74 readers
DOI 10.1186/1477-7525-11-38
Pubmed ID
Authors

Annabel Nixon, Cicely Kerr, Katie Breheny, Diane Wild

Abstract

Despite collection of patient reported outcome (PRO) data in clinical trials of antiepileptic drugs (AEDs), PRO results are not being routinely reported on European Medicines Agency (EMA) and Food and Drug Administration (FDA) product labels. This review aimed to evaluate epilepsy-specific PRO instruments against FDA regulatory standards for supporting label claims. Structured literature searches were conducted in the Embase and Medline databases to identify epilepsy-specific PRO instruments. Only instruments that could potentially be impacted by pharmacological treatment, were completed by adults, and had evidence of some validation work were selected for review. A total of 26 PROs were reviewed based on criteria developed from the FDA regulatory standards. The ability to meet these criteria was classified as full, partial or no evidence, whereby partial reflected some evidence but not enough to comprehensively address the FDA regulatory standards. Most instruments provided partial evidence of content validity. Input from clinicians and the literature was common, although few involved patients in both item generation and cognitive debriefing. Construct validity was predominantly compromised by the absence of a priori hypotheses about expected relationships. Evidence for test-retest reliability and internal consistency was available for most PROs, although few included complete results for all subscales and some failed to reach recommended thresholds. The ability to detect change, and the interpretation of change, were not investigated for most instruments, and no PRO had published evidence of a conceptual framework. The study concludes that none of the 26 instruments has the full evidence required by the FDA to support a label claim, and all require further research to support their use as an endpoint.
The Subjective Handicap of Epilepsy (SHE) and the Neurological Disorders Depression Inventory for Epilepsy (NDDI-E) have the fewest gaps that would need to be addressed through additional research prior to any FDA regulatory submission. However, the NDDI-E was designed as a screening tool and is therefore unlikely to be suitable for capturing change in a clinical trial, and the SHE lacks the conceptual focus on signs and symptoms favoured by the FDA.

X Demographics

The data shown below were collected from the profiles of 2 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 74 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Spain 1 1%
Unknown 73 99%

Demographic breakdown

Readers by professional status Count As %
Student > Ph.D. Student 11 15%
Student > Master 11 15%
Researcher 8 11%
Student > Doctoral Student 5 7%
Student > Bachelor 5 7%
Other 13 18%
Unknown 21 28%
Readers by discipline Count As %
Medicine and Dentistry 23 31%
Psychology 7 9%
Nursing and Health Professions 7 9%
Pharmacology, Toxicology and Pharmaceutical Science 5 7%
Agricultural and Biological Sciences 2 3%
Other 7 9%
Unknown 23 31%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 16 April 2013.
All research outputs: #17,286,379 of 25,374,647 outputs
Outputs from Health and Quality of Life Outcomes: #1,449 of 2,297 outputs
Outputs of similar age: #134,594 of 208,569 outputs
Outputs of similar age from Health and Quality of Life Outcomes: #29 of 45 outputs
Altmetric has tracked 25,374,647 research outputs across all sources so far. This one is in the 21st percentile – i.e., 21% of other outputs scored the same or lower than it.
So far Altmetric has tracked 2,297 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 6.5. This one is in the 29th percentile – i.e., 29% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 208,569 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 26th percentile – i.e., 26% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 45 others from the same source and published within six weeks on either side of this one. This one is in the 24th percentile – i.e., 24% of its contemporaries scored the same or lower than it.
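The percentile figures above all follow the definition stated in the text: the percentage of peer outputs that scored the same or lower than this one. A minimal sketch of that calculation (the function name and sample scores are illustrative assumptions, not Altmetric's actual data or implementation):

```python
def percentile_rank(score, peer_scores):
    """Percentage of peers whose score is less than or equal to `score`."""
    if not peer_scores:
        return 0.0
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return 100.0 * at_or_below / len(peer_scores)

# Hypothetical peer Attention Scores for illustration only.
peers = [0, 0, 1, 1, 2, 3, 5, 8, 13, 21]
print(percentile_rank(1, peers))  # → 40.0
```

Because ties count as "the same or lower", an output with a very common score (such as 1) can land in a lower percentile than its raw rank alone would suggest.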