
Development and validation of a classification and scoring system for the diagnosis of oral squamous cell carcinomas through confocal laser endomicroscopy

Overview of attention for article published in Journal of Translational Medicine, June 2016

Mentioned by: 1 X user
Citations: 35 (Dimensions)
Readers: 35 (Mendeley)

Published in
Journal of Translational Medicine, June 2016
DOI 10.1186/s12967-016-0919-4
Authors

Nicolai Oetter, Christian Knipfer, Maximilian Rohde, Cornelius von Wilmowsky, Andreas Maier, Kathrin Brunner, Werner Adler, Friedrich-Wilhelm Neukam, Helmut Neumann, Florian Stelzle

Abstract

Confocal laser endomicroscopy (CLE) is an optical biopsy method allowing in vivo microscopic imaging at 1000-fold magnification. The aim was to evaluate CLE in the human oral cavity for the differentiation of physiological and carcinomatous mucosa and to establish and validate, for the first time, a scoring system to facilitate CLE assessment. The study consisted of 4 phases: (1) CLE imaging (in vivo) was performed after the intravenous injection of fluorescein in patients with histologically confirmed carcinomatous oral mucosa; (2) CLE experts (n = 3) verified the applicability of CLE in the oral cavity for the differentiation between physiological and cancerous tissue compared to the gold standard of histopathological assessment; (3) based on specific patterns of tissue changes, CLE experts (n = 3) developed a classification and scoring system (DOC-Score) to simplify the diagnosis of oral squamous cell carcinomas; (4) validation of the newly developed DOC-Score by non-CLE experts (n = 3), with final statistical evaluation of their classification performance (comparison to the results of the CLE experts and the histopathological analyses). Experts acquired and edited 45 sequences (260 s) of physiological and 50 sequences (518 s) of carcinomatous mucosa (total: 95 sequences/778 s). All sequences were evaluated independently by experts and non-experts (based on the newly proposed classification system). Sensitivity (0.953) and specificity (0.889) of the diagnoses by experts as well as sensitivity (0.973) and specificity (0.881) of the non-expert ratings correlated well with the results of the present gold standard of tissue histopathology. Experts had a positive predictive value (PPV) of 0.905 and a negative predictive value (NPV) of 0.945. Non-experts reached a PPV of 0.901 and an NPV of 0.967 with the help of the DOC-Score. Inter-rater reliability (Fleiss' kappa) was 0.73 for experts and 0.814 for non-experts. Intra-rater reliability (Cronbach's alpha) was 0.989 for experts and 0.884 for non-experts. CLE is a suitable and valid method for experts to diagnose oral cancer. Using the DOC-Score system, an accurate chair-side diagnosis of oral cancer is feasible with results comparable to the gold standard of histopathology, even in daily clinical practice for non-expert raters.
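
The sensitivity, specificity, PPV and NPV figures reported in the abstract all derive from a 2x2 confusion matrix of rater calls against the histopathological gold standard. The following Python sketch is purely illustrative of those relationships, assuming hypothetical counts (the function name and the numbers are placeholders, not the study's data):

# Illustrative sketch only: hypothetical counts, not taken from the study.
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic accuracy measures for binary calls
    (positive = carcinomatous, negative = physiological)."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical example counts (placeholders):
print(diagnostic_metrics(tp=90, fp=10, tn=80, fn=5))
# -> sensitivity ~0.947, specificity ~0.889, ppv = 0.9, npv ~0.941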

X Demographics

The data shown below were collected from the profile of the 1 X user who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for 35 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    35       100%

Demographic breakdown

Readers by professional status                 Count    As %
Researcher                                     7        20%
Student > Ph. D. Student                       6        17%
Student > Master                               4        11%
Student > Bachelor                             3        9%
Other                                          2        6%
Other                                          3        9%
Unknown                                        10       29%

Readers by discipline                          Count    As %
Medicine and Dentistry                         12       34%
Biochemistry, Genetics and Molecular Biology   4        11%
Linguistics                                    1        3%
Sports and Recreations                         1        3%
Agricultural and Biological Sciences           1        3%
Other                                          2        6%
Unknown                                        14       40%

Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 03 June 2016.
All research outputs: #15,377,214 of 22,876,619 outputs
Outputs from Journal of Translational Medicine: #2,238 of 4,004 outputs
Outputs of similar age: #211,837 of 339,345 outputs
Outputs of similar age from Journal of Translational Medicine: #76 of 113 outputs
Altmetric has tracked 22,876,619 research outputs across all sources so far. This one is in the 22nd percentile – i.e., 22% of other outputs scored the same or lower than it.
So far Altmetric has tracked 4,004 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 10.5. This one is in the 31st percentile – i.e., 31% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 339,345 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 29th percentile – i.e., 29% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 113 others from the same source and published within six weeks on either side of this one. This one is in the 4th percentile – i.e., 4% of its contemporaries scored the same or lower than it.