
Development and validation of an endoscopic images-based deep learning model for detection with nasopharyngeal malignancies

Overview of attention for article published in Cancer Communications, September 2018
Altmetric Badge

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • Among the highest-scoring outputs from this source (#19 of 286)
  • High Attention Score compared to outputs of the same age (83rd percentile)
  • High Attention Score compared to outputs of the same age and source (88th percentile)

Mentioned by

news
1 news outlet
twitter
5 tweeters

Citations

dimensions
33 Dimensions

Readers on

mendeley
36 Mendeley
Title
Development and validation of an endoscopic images-based deep learning model for detection with nasopharyngeal malignancies
Published in
Cancer Communications, September 2018
DOI 10.1186/s40880-018-0325-9
Pubmed ID
Authors

Chaofeng Li, Bingzhong Jing, Liangru Ke, Bin Li, Weixiong Xia, Caisheng He, Chaonan Qian, Chong Zhao, Haiqiang Mai, Mingyuan Chen, Kajia Cao, Haoyuan Mo, Ling Guo, Qiuyan Chen, Linquan Tang, Wenze Qiu, Yahui Yu, Hu Liang, Xinjun Huang, Guoying Liu, Wangzhong Li, Lin Wang, Rui Sun, Xiong Zou, Shanshan Guo, Peiyu Huang, Donghua Luo, Fang Qiu, Yishan Wu, Yijun Hua, Kuiyuan Liu, Shuhui Lv, Jingjing Miao, Yanqun Xiang, Ying Sun, Xiang Guo, Xing Lv

Abstract

Due to the occult anatomic location of the nasopharynx and the frequent presence of adenoid hyperplasia, the positive rate of malignancy identification at biopsy is low, leading to delayed or missed diagnosis of nasopharyngeal malignancies on the initial attempt. Here, we aimed to develop a deep learning-based artificial intelligence tool to detect nasopharyngeal malignancies under endoscopic examination. An endoscopic images-based nasopharyngeal malignancy detection model (eNPM-DM), consisting of a fully convolutional network built on the Inception architecture, was developed and fine-tuned using separate training and validation sets for both classification and segmentation. In total, 28,966 qualified images were collected. Of these, 27,536 biopsy-proven images from 7951 individuals, obtained from January 1st, 2008, to December 31st, 2016, were split into training, validation and test sets at a ratio of 7:1:2 using simple randomization. An additional 1430 images, obtained from January 1st, 2017, to March 31st, 2017, served as a prospective test set for comparing the performance of the established model against oncologist evaluation. The dice similarity coefficient (DSC) was used to evaluate the efficiency of the eNPM-DM in automatically segmenting malignant areas from the background of nasopharyngeal endoscopic images, by comparing automatic segmentation against manual segmentation performed by experts. All images were histopathologically confirmed and comprised 5713 (19.7%) normal controls, 19,107 (66.0%) nasopharyngeal carcinomas (NPC), 335 (1.2%) malignancies other than NPC and 3811 (13.2%) benign diseases. The eNPM-DM attained an overall accuracy of 88.7% (95% confidence interval (CI) 87.8%-89.5%) in detecting malignancies in the test set. In the prospective comparison phase, the eNPM-DM outperformed the experts, with an overall accuracy of 88.0% (95% CI 86.1%-89.6%) vs. 80.5% (95% CI 77.0%-84.0%). The eNPM-DM also required less time (40 s vs. 110.0 ± 5.8 min) and exhibited encouraging performance in automatically segmenting malignant areas from the background, with an average DSC of 0.78 ± 0.24 in the test set and 0.75 ± 0.26 in the prospective test set. In summary, the eNPM-DM outperformed oncologist evaluation in classifying nasopharyngeal masses as benign versus malignant and achieved automatic segmentation of malignant areas from the background of nasopharyngeal endoscopic images.
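The dice similarity coefficient (DSC) reported above is a standard overlap measure between two segmentation masks. The following is a minimal sketch of how a DSC might be computed between an automatic and a manual binary mask; it is an illustration of the metric, not the authors' implementation, and the toy masks are invented for demonstration.

```python
import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap,
    0.0 means no overlap at all.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 4x4 example: automatic vs. manual segmentation of a small lesion
auto = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
manual = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_similarity(auto, manual), 3))  # → 0.857 (2*3 / (4+3))
```

A DSC of 0.78, as reported for the test set, means the model's malignant-area mask and the expert's mask shared roughly 78% of their combined extent on average.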

Twitter Demographics

The data shown below were collected from the profiles of 5 tweeters who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for 36 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 36 100%

Demographic breakdown

Readers by professional status Count As %
Student > Master 5 14%
Student > Bachelor 3 8%
Student > Postgraduate 3 8%
Professor > Associate Professor 3 8%
Lecturer 2 6%
Other 6 17%
Unknown 14 39%
Readers by discipline Count As %
Unspecified 3 8%
Nursing and Health Professions 3 8%
Engineering 3 8%
Computer Science 3 8%
Medicine and Dentistry 2 6%
Other 3 8%
Unknown 19 53%

Attention Score in Context

This research output has an Altmetric Attention Score of 12. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 16 November 2022.
All research outputs
#2,449,758
of 22,708,120 outputs
Outputs from Cancer Communications
#19
of 286 outputs
Outputs of similar age
#54,344
of 340,079 outputs
Outputs of similar age from Cancer Communications
#1
of 9 outputs
Altmetric has tracked 22,708,120 research outputs across all sources so far. Compared to these, this one has done well and is in the 89th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 286 research outputs from this source. They receive a mean Attention Score of 4.9. This one has done particularly well, scoring higher than 93% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 340,079 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 83% of its contemporaries.
We're also able to compare this research output to 9 others from the same source that were published within six weeks on either side of this one. This one has scored higher than all of them.