
The paradox of verbal autopsy in cause of death assignment: symptom question unreliability but predictive accuracy

Overview of attention for article published in Population Health Metrics, October 2016

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

  • Policy: 1 policy source

Readers on

  • Mendeley: 30 readers
Title: The paradox of verbal autopsy in cause of death assignment: symptom question unreliability but predictive accuracy
Published in: Population Health Metrics, October 2016
DOI: 10.1186/s12963-016-0104-2
Authors

Peter Serina, Ian Riley, Bernardo Hernandez, Abraham D. Flaxman, Devarsetty Praveen, Veronica Tallo, Rohina Joshi, Diozele Sanvictores, Andrea Stewart, Meghan D. Mooney, Christopher J. L. Murray, Alan D. Lopez

Abstract

We believe it is important that governments understand the reliability of the mortality data they have at their disposal to guide policy debates. In many instances, verbal autopsy (VA) will be the only source of mortality data for populations, yet little is known about how the accuracy of VA diagnoses is affected by the reliability of the symptom responses. We previously described the effect of the duration of time between death and VA administration on VA validity. In this paper, using the same dataset, we assess the relationship between the reliability and completeness of symptom responses and the reliability and accuracy of cause of death (COD) prediction.

The study was based on VAs in the Population Health Metrics Research Consortium (PHMRC) VA Validation Dataset from study sites in Bohol and Manila, Philippines, and Andhra Pradesh, India. The initial interview was repeated within 3-52 months of death. Question responses were assessed for reliability and completeness between the two survey rounds, and COD was predicted by the Tariff Method.

A sample of 4226 VAs was collected for 2113 decedents, including 1394 adults, 349 children, and 370 neonates. Mean question reliability was unexpectedly low (kappa = 0.447): 42.5% of responses positive at the first interview were negative at the second, and 47.9% of responses positive at the second had been negative at the first. Question reliability was greater for the short form of the PHMRC instrument (kappa = 0.497) and when analyzed at the level of the individual decedent (kappa = 0.610). Reliability at the level of the individual decedent was associated with COD predictive reliability and predictive accuracy.

Families give coherent accounts of the events leading to death, but the details vary from interview to interview for the same case. Accounts are accurate but inconsistent; different subsets of symptoms are identified on each occasion. However, there are sufficient accurate and consistent subsets of symptoms to enable the Tariff Method to assign a COD. The questions that contributed most to COD prediction were also the most reliable and consistent across repeat interviews; these have been included in the short-form VA questionnaire. Accuracy and reliability of diagnosis for an individual death depend on the quality of the interview. This has considerable implications for the progressive rollout of VAs into civil registration and vital statistics (CRVS) systems.

Mendeley readers

The data shown below were compiled from readership statistics for 30 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown       30    100%

Demographic breakdown

Readers by professional status    Count    As %
Researcher                            5     17%
Student > Master                      4     13%
Student > Bachelor                    3     10%
Student > Ph.D. Student               3     10%
Student > Doctoral Student            2      7%
Other                                 6     20%
Unknown                               7     23%
Readers by discipline                  Count    As %
Medicine and Dentistry                    10     33%
Nursing and Health Professions             2      7%
Social Sciences                            2      7%
Psychology                                 2      7%
Business, Management and Accounting        1      3%
Other                                      4     13%
Unknown                                    9     30%
Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 03 September 2023.
All research outputs: #8,473,662 of 25,286,324 outputs
Outputs from Population Health Metrics: #229 of 412 outputs
Outputs of similar age: #119,029 of 324,390 outputs
Outputs of similar age from Population Health Metrics: #8 of 13 outputs
Altmetric has tracked 25,286,324 research outputs across all sources so far. This one is in the 43rd percentile – i.e., 43% of other outputs scored the same or lower than it.
So far Altmetric has tracked 412 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 14.6. This one is in the 35th percentile – i.e., 35% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 324,390 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 50% of its contemporaries.
We're also able to compare this research output to 13 others from the same source and published within six weeks on either side of this one. This one is in the 46th percentile – i.e., 46% of its contemporaries scored the same or lower than it.