
Citizen crowds and experts: observer variability in image-based plant phenotyping

Overview of attention for article published in Plant Methods, February 2018

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • Good Attention Score compared to outputs of the same age (79th percentile)
  • Above-average Attention Score compared to outputs of the same age and source (60th percentile)

Mentioned by

12 tweeters


Cited by

25 Dimensions

Readers on

54 Mendeley
Published in
Plant Methods, February 2018
DOI 10.1186/s13007-018-0278-7
Pubmed ID

M. Valerio Giuffrida, Feng Chen, Hanno Scharr, Sotirios A. Tsaftaris


Image-based plant phenotyping has become a powerful tool in unravelling genotype-environment interactions. The utilization of image analysis and machine learning has become paramount in extracting data from phenotyping experiments. Yet we rely on observer (a human expert) input to perform the phenotyping process. We assume such input to be a 'gold standard' and use it to evaluate software and algorithms and to train learning-based algorithms. However, we should consider whether any variability exists among experienced and non-experienced observers, including plain citizens. Here we design a study that measures such variability in an annotation task of an integer-quantifiable phenotype: the leaf count. We compare several experienced and non-experienced observers annotating leaf counts in images of Arabidopsis thaliana, measuring intra- and inter-observer variability in a controlled study using specially designed annotation tools, as well as citizens using a distributed citizen-powered web-based platform. In the controlled study, observers counted leaves by looking at top-view images taken with low- and high-resolution optics. We assessed whether tools specifically designed for this task can help to reduce such variability. We found that the presence of tools helps to reduce intra-observer variability, and that although intra- and inter-observer variability is present, it does not affect statistical assessments of longitudinal leaf count trends. We compared the variability of citizen-provided annotations (from the web-based platform) and found that plain citizens can provide statistically accurate leaf counts. We also compared a recent machine-learning based leaf counting algorithm and found that, while close in performance, it is still not within inter-observer variability.
While the expertise of the observer plays a role, if sufficient statistical power is present, a collection of non-experienced users and even citizens can be included in image-based phenotyping annotation tasks, as long as the tasks are suitably designed. With these findings we hope to re-evaluate the expectations we have of automated algorithms: as long as they perform within observer variability, they can be considered a suitable alternative. In addition, we hope to invigorate interest in introducing suitably designed tasks on citizen-powered platforms, not only to obtain useful information (for research) but to help engage the public in this societally important problem.

Twitter Demographics

The data shown below were collected from the profiles of 12 tweeters who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for 54 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 54 100%

Demographic breakdown

Readers by professional status Count As %
Student > Ph.D. Student 14 26%
Researcher 8 15%
Other 5 9%
Student > Master 4 7%
Student > Bachelor 3 6%
Other 10 19%
Unknown 10 19%
Readers by discipline Count As %
Computer Science 12 22%
Agricultural and Biological Sciences 10 19%
Engineering 7 13%
Biochemistry, Genetics and Molecular Biology 3 6%
Social Sciences 2 4%
Other 4 7%
Unknown 16 30%

Attention Score in Context

This research output has an Altmetric Attention Score of 9. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 02 August 2018.
Altmetric has tracked 13,322,622 research outputs across all sources so far. Compared to these, this one has done well and is in the 85th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 570 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 5.4. This one has done well, scoring higher than 83% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 348,370 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 79% of its contemporaries.
We're also able to compare this research output to 5 others from the same source and published within six weeks on either side of this one. This one has scored higher than 3 of them.