
Measuring patients’ priorities using the Analytic Hierarchy Process in comparison with Best-Worst-Scaling and rating cards: methodological aspects and ranking tasks

Overview of attention for article published in Health Economics Review, November 2016

About this Attention Score

  • Average Attention Score compared to outputs of the same age and source

Mentioned by

Twitter: 1 tweeter
Facebook: 1 Facebook page
Reddit: 1 Redditor

Citations

Dimensions: 11 citations

Readers on

Mendeley: 39 readers
DOI 10.1186/s13561-016-0130-6
Authors

Katharina Schmidt, Ana Babac, Frédéric Pauer, Kathrin Damm, J-Matthias von der Schulenburg

Abstract

Identifying patient priorities and measuring patient preferences have gained importance as patients claim a more active role in health care decision making. Given the variety of existing methods, it is challenging to define an appropriate method for each decision problem. This study demonstrates the impact of the non-standardized Analytic Hierarchy Process (AHP) method on priorities, and compares it with the Best-Worst-Scaling (BWS) and ranking card methods. We investigated AHP results for different Consistency Ratio (CR) thresholds, aggregation methods, and sensitivity analyses. We also compared the criteria rankings of AHP with the BWS and ranking card results using Kendall's tau-b. The sample for our decision analysis consisted of 39 patients with rare diseases and a mean age of 53.82 years. The mean weights of the two groups with CR ≤ 0.1 and CR ≤ 0.2 did not differ significantly. For the aggregation of individual priorities (AIP) method, the CR was higher than for the aggregation of individual judgments (AIJ). The weights from AIJ were similar to those from AIP, but the rankings of some criteria differed. Weights aggregated by geometric mean, median, and mean showed deviating results and rank reversals. Sensitivity analyses showed unstable rankings. The rankings resulting from AHP and BWS showed moderate to high correlations. Limitations were the small sample size and the heterogeneity of the patients, who had different rare diseases. In the AHP method, the number of included patients is associated with the threshold of the CR and the choice of the aggregation method, and both directions of influence could be demonstrated. It is therefore important to implement standards for the AHP method. The choice of method should depend on the trade-off between the burden for participants and the possibilities for analysis.
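The AHP mechanics the abstract refers to can be sketched in a few lines: priority weights come from the principal eigenvector of a pairwise comparison matrix, the Consistency Ratio (CR) is the measure the study thresholds at 0.1 versus 0.2, and AIJ aggregates respondents by element-wise geometric mean. This is a minimal illustrative sketch using Saaty's standard method; the matrices and random-index values are textbook examples, not data from the study.

```python
# Sketch of AHP: deriving priority weights, computing the Consistency
# Ratio (CR), and aggregating two respondents' judgments by element-wise
# geometric mean (AIJ). Matrices are illustrative, not study data.

def ahp_weights(matrix, iters=200):
    """Approximate the principal eigenvector by power iteration."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        w = [x / total for x in v]
    return w

def consistency_ratio(matrix):
    """CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    n = len(matrix)
    w = ahp_weights(matrix)
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lambda_max = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lambda_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random indices
    return ci / ri

def aij(m1, m2):
    """Aggregate two judgment matrices by element-wise geometric mean (AIJ)."""
    n = len(m1)
    return [[(m1[i][j] * m2[i][j]) ** 0.5 for j in range(n)] for i in range(n)]

# Two hypothetical respondents comparing three criteria on Saaty's 1-9 scale.
a = [[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]]
b = [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]]

print(ahp_weights(a))          # individual priorities (the input to AIP)
print(consistency_ratio(a))    # below both CR thresholds used in the study
print(ahp_weights(aij(a, b)))  # group priorities from AIJ aggregation
```

Under AIP, by contrast, `ahp_weights` would be run per respondent and the resulting weight vectors averaged, which is why the two aggregation routes can produce the rank reversals the study reports.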

Twitter Demographics

The data shown below were collected from the profile of 1 tweeter who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for 39 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 39 100%

Demographic breakdown

Readers by professional status Count As %
Student > Ph. D. Student 8 21%
Student > Bachelor 5 13%
Lecturer 4 10%
Student > Master 4 10%
Other 3 8%
Other 9 23%
Unknown 6 15%
Readers by discipline Count As %
Medicine and Dentistry 8 21%
Engineering 7 18%
Business, Management and Accounting 3 8%
Economics, Econometrics and Finance 3 8%
Agricultural and Biological Sciences 2 5%
Other 9 23%
Unknown 7 18%

Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 14 November 2016.
All research outputs
#11,778,639
of 15,442,255 outputs
Outputs from Health Economics Review
#196
of 294 outputs
Outputs of similar age
#194,161
of 289,751 outputs
Outputs of similar age from Health Economics Review
#32
of 55 outputs
Altmetric has tracked 15,442,255 research outputs across all sources so far. This one is in the 20th percentile – i.e., 20% of other outputs scored the same or lower than it.
So far Altmetric has tracked 294 research outputs from this source. They receive a mean Attention Score of 4.0. This one is in the 24th percentile – i.e., 24% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 289,751 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 27th percentile – i.e., 27% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 55 others from the same source and published within six weeks on either side of this one. This one is in the 36th percentile – i.e., 36% of its contemporaries scored the same or lower than it.