
A heuristic approach to determine an appropriate number of topics in topic modeling

Overview of attention for article published in BMC Bioinformatics, December 2015

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (91st percentile)
  • High Attention Score compared to outputs of the same age and source (93rd percentile)

Mentioned by

  • 1 blog
  • 1 policy source
  • 4 X users

Citations

  • 246 Dimensions

Readers on

  • 299 Mendeley
Title
A heuristic approach to determine an appropriate number of topics in topic modeling
Published in
BMC Bioinformatics, December 2015
DOI 10.1186/1471-2105-16-s13-s8
Authors

Weizhong Zhao, James J Chen, Roger Perkins, Zhichao Liu, Weigong Ge, Yijun Ding, Wen Zou

Abstract

Topic modelling is an active research field in machine learning. While mainly used to build models from unstructured textual data, it offers an effective means of data mining where samples represent documents and different biological endpoints or omics data represent words. Latent Dirichlet Allocation (LDA) is the most commonly used topic modelling method across a wide range of technical fields. However, model development can be arduous and tedious, requiring burdensome and systematic sensitivity studies to find the best set of model parameters, and time-consuming subjective evaluations are often needed to compare models. Research has yet to yield an easy way to choose the proper number of topics in a model short of exhaustive iterative search. Based on analysis of the variation of statistical perplexity during topic modelling, a heuristic approach is proposed in this study to estimate the most appropriate number of topics. Specifically, the rate of perplexity change (RPC) as a function of the number of topics is proposed as a suitable selector. We test the stability and effectiveness of the proposed method on three markedly different types of ground-truth datasets: Salmonella next-generation sequencing, pharmacological side effects, and textual abstracts on computational biology and bioinformatics (TCBB) from PubMed. The proposed RPC-based method is demonstrated to choose the best number of topics in three numerical experiments of widely different data types, and for databases of very different sizes. The work required was markedly less arduous than if full systematic sensitivity studies had been carried out with the number of topics as a parameter. Additional investigation is needed to substantiate the method's theoretical basis and to establish its generalizability in terms of dataset characteristics.
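The RPC heuristic described in the abstract can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: the function names are invented here, the perplexity values would in practice come from LDA models trained at each candidate topic count, and the decision rule used below (pick the first candidate at which RPC stops decreasing) is an assumption drawn from the abstract's description of RPC as a selector.

```python
def rate_of_perplexity_change(topics, perplexities):
    """RPC(i) = |(P_i - P_{i-1}) / (t_i - t_{i-1})|, where t_i is the i-th
    candidate number of topics and P_i its model perplexity."""
    return [
        abs((perplexities[i] - perplexities[i - 1]) / (topics[i] - topics[i - 1]))
        for i in range(1, len(topics))
    ]

def select_num_topics(topics, perplexities):
    """Return the first candidate topic count where RPC stops decreasing
    (perplexity gains begin to level off); fall back to the last candidate
    if RPC decreases monotonically over the whole grid."""
    rpc = rate_of_perplexity_change(topics, perplexities)
    for i in range(len(rpc) - 1):
        if rpc[i] < rpc[i + 1]:
            # rpc[i] corresponds to the step ending at topics[i + 1]
            return topics[i + 1]
    return topics[-1]

# Hypothetical grid: perplexity drops steeply, then flattens after 30 topics.
candidates = [10, 20, 30, 40, 50]
perplexity = [1000, 800, 760, 700, 690]
print(select_num_topics(candidates, perplexity))  # → 30
```

Only a handful of LDA models (one per candidate topic count) need to be fitted to apply the rule, which is the source of the labor savings the abstract claims over a full sensitivity study.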

X Demographics

The data shown below were collected from the profiles of 4 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 299 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Spain 1 <1%
Australia 1 <1%
South Africa 1 <1%
Unknown 296 99%

Demographic breakdown

Readers by professional status Count As %
Student > Ph.D. Student 66 22%
Student > Master 46 15%
Researcher 25 8%
Student > Bachelor 20 7%
Student > Doctoral Student 14 5%
Other 38 13%
Unknown 90 30%
Readers by discipline Count As %
Computer Science 71 24%
Business, Management and Accounting 32 11%
Engineering 18 6%
Social Sciences 15 5%
Agricultural and Biological Sciences 11 4%
Other 52 17%
Unknown 100 33%
Attention Score in Context

This research output has an Altmetric Attention Score of 17. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 23 August 2023.
All research outputs: #2,126,518 of 25,837,817 outputs
Outputs from BMC Bioinformatics: #471 of 7,763 outputs
Outputs of similar age: #34,228 of 398,860 outputs
Outputs of similar age from BMC Bioinformatics: #9 of 147 outputs
Altmetric has tracked 25,837,817 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 91st percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 7,763 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 5.6. This one has done particularly well, scoring higher than 93% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 398,860 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 91% of its contemporaries.
We're also able to compare this research output to 147 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 93% of its contemporaries.