
A framework for biomedical figure segmentation towards image-based document retrieval

Overview of attention for article published in BMC Systems Biology, October 2013

Mentioned by: 1 X user

Citations: 15 Dimensions

Readers on: 27 Mendeley, 1 CiteULike
Title
A framework for biomedical figure segmentation towards image-based document retrieval
Published in
BMC Systems Biology, October 2013
DOI 10.1186/1752-0509-7-s4-s8
Authors

Luis D Lopez, Jingyi Yu, Cecilia Arighi, Catalina O Tudor, Manabu Torii, Hongzhan Huang, K Vijay-Shanker, Cathy Wu

Abstract

The figures included in many biomedical publications play an important role in understanding the biological experiments and facts described within. Recent studies have shown that the information extracted from figures can be integrated into classical document classification and retrieval tasks to improve their accuracy. One important observation about the figures in biomedical publications is that they are often composed of multiple subfigures or panels, each describing a different methodology or result. The use of such multimodal figures is common practice in bioscience, as experimental results are graphically validated via multiple methodologies or procedures. Thus, for better use of multimodal figures in document classification or retrieval tasks, as well as for providing the evidence source for derived assertions, it is important to automatically segment multimodal figures into subfigures and panels. This is a challenging task, however, as different panels can contain similar objects (e.g., bar charts and line charts) in multiple layouts. Moreover, certain types of biomedical figures are text-heavy (e.g., DNA and protein sequence images) and differ from traditional images. As a result, classical image segmentation techniques based on low-level image features, such as edges or color, are not directly applicable to robustly partition multimodal figures into single-modal panels. In this paper, we describe a robust solution for automatically identifying and segmenting unimodal panels from a multimodal figure. Our framework starts by robustly harvesting figure-caption pairs from biomedical articles. We base our approach on the observation that the document layout can be used to identify encoded figures and figure boundaries within PDF files. Taking the document layout into consideration allows us to correctly extract figures from the PDF document and associate each with its corresponding caption.
We combine pixel-level representations of the extracted images with information gathered from their corresponding captions to estimate the number of panels in the figure. Thus, our approach simultaneously identifies the number of panels and the layout of figures. To evaluate the approach described here, we applied our system to documents containing protein-protein interactions (PPIs) and compared the results against a gold standard annotated by biologists. Experimental results showed that our automatic figure segmentation approach surpasses purely caption-based and image-based approaches, achieving 96.64% accuracy. To allow for efficient retrieval of information, and to provide the basis for integration into document classification and retrieval systems, among others, we further developed a web-based interface that lets users easily retrieve panels containing the terms specified in their queries.
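The caption-based half of the panel estimate can be illustrated with a minimal sketch. The abstract does not describe the authors' actual parsing rules, so the function name, the regular expression, and the sample caption below are all illustrative assumptions: panel labels such as "(A)", "(B)" are matched in the caption, and the highest letter seen gives a lower bound on the panel count.

```python
import re

def estimate_panel_count(caption):
    """Estimate panel count from labels like '(A)', '(B)' in a caption.

    Hypothetical heuristic, not the authors' implementation: find
    single-capital-letter labels in parentheses and use the highest
    letter as a lower bound on the number of panels.
    """
    labels = re.findall(r'\(([A-Z])\)', caption)
    if not labels:
        # No panel labels found: assume a single-panel figure.
        return 1
    return max(ord(label) - ord('A') + 1 for label in set(labels))

print(estimate_panel_count("(A) Western blot. (B) Quantification. (C) PPI network."))  # 3
print(estimate_panel_count("A single micrograph."))  # 1
```

In the paper's framework, an estimate like this would be reconciled with pixel-level evidence from the image itself, so that caption and image jointly determine both the panel count and the layout.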

X Demographics


The data shown below were collected from the profile of 1 X user who shared this research output.
Mendeley readers


The data shown below were compiled from readership statistics for 27 Mendeley readers of this research output.

Geographical breakdown

Country              Count   As %
Germany                  1     4%
Korea, Republic of       1     4%
Unknown                 25    93%

Demographic breakdown

Readers by professional status    Count   As %
Researcher                            6    22%
Student > Ph. D. Student              4    15%
Student > Bachelor                    3    11%
Student > Doctoral Student            3    11%
Student > Master                      3    11%
Other                                 4    15%
Unknown                               4    15%

Readers by discipline                  Count   As %
Computer Science                          13    48%
Engineering                                4    15%
Arts and Humanities                        2     7%
Agricultural and Biological Sciences       1     4%
Linguistics                                1     4%
Other                                      1     4%
Unknown                                    5    19%
Attention Score in Context


This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 10 September 2014.
All research outputs                              #17,285,668 of 25,373,627 outputs
Outputs from BMC Systems Biology                  #651 of 1,132 outputs
Outputs of similar age                            #140,832 of 224,697 outputs
Outputs of similar age from BMC Systems Biology   #20 of 35 outputs
Altmetric has tracked 25,373,627 research outputs across all sources so far. This one is in the 21st percentile – i.e., 21% of other outputs scored the same or lower than it.
So far Altmetric has tracked 1,132 research outputs from this source. They receive a mean Attention Score of 3.7. This one is in the 31st percentile – i.e., 31% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 224,697 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 28th percentile – i.e., 28% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 35 others from the same source and published within six weeks on either side of this one. This one is in the 22nd percentile – i.e., 22% of its contemporaries scored the same or lower than it.
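The percentile figures quoted above all follow the same "same or lower" rule: an output's percentile is the share of its comparison group whose score does not exceed its own. A minimal sketch of that rule (the function name and the sample peer scores are hypothetical, not Altmetric's data or code):

```python
def attention_percentile(score, peer_scores):
    """Percent of peers whose score is the same as or lower than `score`.

    Mirrors the 'X% of other outputs scored the same or lower' phrasing;
    this is an illustrative reimplementation, not Altmetric's code.
    """
    same_or_lower = sum(1 for s in peer_scores if s <= score)
    return 100 * same_or_lower / len(peer_scores)

# Hypothetical comparison group of ten peer scores:
peers = [0, 0, 1, 1, 2, 3, 5, 8, 13, 21]
print(round(attention_percentile(1, peers)))  # 40
```

Under this rule, an output with a score of 1 can land in different percentiles depending on the comparison group, which is why the same score yields the 21st, 31st, 28th, and 22nd percentiles in the four contexts above.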