
Requirements for benefit assessment in Germany and England – overview and comparison

Overview of attention for article published in Health Economics Review, August 2014

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

2 X users
1 Facebook page

Citations

17 Dimensions

Readers on

61 Mendeley
DOI 10.1186/s13561-014-0012-8
Authors

Victor Ivandic

Abstract

This study compared the methodological requirements for early health technology appraisal (HTA) by the Federal Joint Committee/Institute for Quality and Efficiency in Health Care (G-BA/IQWiG; Germany) and the National Institute for Health and Care Excellence (NICE; England). The following aspects were examined: guidance texts on methodology and information sources for the assessment; clinical study design and methodology; statistical analysis, quality of the evidence base, extrapolation of results (modeling), and generalisability of study results; and categorisation of outcomes.

There is some similarity in basic methodological elements, such as the selection of information sources (e.g. preference for randomised controlled trials, RCTs) and quality assessment of the available evidence. In general, however, the approach taken by NICE appears more open and less restrictive than that of G-BA/IQWiG. NICE requests any kind of potentially relevant evidence, including data from non-RCTs, and accepts surrogate endpoints more readily, provided they are reasonably likely to predict clinical benefit. Modeling is expected wherever possible and appropriate, e.g. for study duration, patient population, choice of comparator, and type of outcomes; the resulting uncertainty is quantified through sensitivity analyses before a reimbursement recommendation is made. By contrast, G-BA/IQWiG bases its assessment and quantification of the additional benefit largely, if not exclusively, on evidence of the highest level and quality and on measurements of "hard" clinical endpoints. This more conservative approach firmly dismisses evidence from non-RCTs and measurements of surrogate endpoints that have not been validated, or have been only partly validated; moreover, neither qualitative extrapolation nor quantitative modeling of data is performed.

Methodological requirements thus differed mainly in the acceptance of low-level evidence, surrogate endpoints, and data modeling. Some of these discrepancies may be explained, at least in part, by differences in the health care systems and procedural aspects (e.g. timing of assessment).

X Demographics

The data shown below were collected from the profiles of 2 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 61 Mendeley readers of this research output.

Geographical breakdown

Country   Count    %
Ghana       1      2%
Unknown    60     98%

Demographic breakdown

Readers by professional status   Count    %
Other                              9     15%
Researcher                         9     15%
Student > Master                   8     13%
Student > Ph. D. Student           5      8%
Librarian                          4      7%
Other                              6     10%
Unknown                           20     33%

Readers by discipline                                 Count    %
Medicine and Dentistry                                 18     30%
Pharmacology, Toxicology and Pharmaceutical Science     7     11%
Economics, Econometrics and Finance                     4      7%
Agricultural and Biological Sciences                    3      5%
Nursing and Health Professions                          2      3%
Other                                                   8     13%
Unknown                                                19     31%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 28 April 2021.
All research outputs: #17,530,642 of 25,701,027
Outputs from Health Economics Review: #300 of 511
Outputs of similar age: #149,308 of 248,401
Outputs of similar age from Health Economics Review: #7 of 10
Altmetric has tracked 25,701,027 research outputs across all sources so far. This one is in the 21st percentile – i.e., 21% of other outputs scored the same or lower than it.
So far Altmetric has tracked 511 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 6.6. This one is in the 33rd percentile – i.e., 33% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 248,401 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 30th percentile – i.e., 30% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 10 others from the same source and published within six weeks on either side of this one. This one has scored higher than 3 of them.
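The age-adjusted comparison described above (peers published within six weeks on either side, percentile as the share of peers scoring the same or lower) can be sketched in a few lines. This is an illustration with hypothetical data and function names, not Altmetric's actual implementation.

```python
from datetime import date, timedelta

def percentile(score, peer_scores):
    """Percent of peers whose score is the same or lower (the convention used above)."""
    same_or_lower = sum(1 for s in peer_scores if s <= score)
    return 100 * same_or_lower / len(peer_scores)

def similar_age_peers(outputs, published, window_weeks=6):
    """Scores of outputs published within six weeks on either side of a given date."""
    window = timedelta(weeks=window_weeks)
    return [o["score"] for o in outputs
            if abs(o["published"] - published) <= window]

# Hypothetical tracked outputs with Attention Scores and publication dates.
outputs = [
    {"score": 1, "published": date(2014, 8, 10)},   # the output being compared
    {"score": 4, "published": date(2014, 8, 1)},
    {"score": 0, "published": date(2014, 7, 20)},
    {"score": 6, "published": date(2014, 9, 5)},
    {"score": 2, "published": date(2015, 3, 1)},    # outside the window: excluded
]

peers = similar_age_peers(outputs, date(2014, 8, 10))
print(percentile(1, peers))  # → 50.0 (two of the four contemporaries scored <= 1)
```

Note that the real ranking is run against hundreds of thousands of contemporaries rather than a handful, and is recalculated whenever the output is mentioned again.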