
Author:

Bai, Yunkun | Sun, Guangmin | Li, Yu | Shen, Le | Zhang, Li

Indexed by:

CPCI-S, EI, Scopus

Abstract:

Deep learning-based segmentation algorithms for medical images require massive training datasets with accurate annotations, which are costly to produce because manual labeling from scratch demands considerable human effort. Interactive image segmentation is therefore important and may greatly improve the efficiency and accuracy of medical image labeling. Some interactive segmentation methods (e.g., Deep Extreme Cut and Deepgrow) can improve labeling with minimal interactive input. However, these methods only use the initial manual input; existing segmentation results (such as annotations produced by non-professionals or by conventional segmentation algorithms) cannot be exploited. In this paper, an interactive segmentation method is proposed that uses both existing segmentation results and human interactive information to refine the segmentation progressively. In this framework, the user only needs to click on the foreground or background of the target in the medical image; the algorithm adaptively learns the correlation between the clicks and the target and automatically completes the segmentation. The main contributions of this paper are: (1) We adapted and applied a convolutional neural network that takes medical image data and the user's click information as input to achieve more accurate segmentation of medical images. (2) We designed an iterative training strategy so that the model can handle different numbers of input clicks. (3) We designed an algorithm based on false-positive and false-negative regions to simulate user clicks, providing sufficient training data. With the proposed method, users can easily extract the region of interest or modify the segmentation results with multiple clicks. Experimental results on 6 medical image segmentation tasks show that the proposed method achieves more accurate segmentation results with at most five clicks. © 2021 SPIE.
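
The abstract outlines two mechanisms that are easy to make concrete: simulating user clicks from the false-positive and false-negative regions of an existing segmentation, and feeding those clicks to the network as extra input channels alongside the image and the current mask. The Python sketch below illustrates both ideas under stated assumptions; it is not the authors' code, and the function names, the Gaussian click encoding, the one-click-per-round policy, and the 2D grayscale input are assumptions made only for illustration.

import numpy as np

def simulate_click(pred_mask, gt_mask):
    """Pick one simulated click from the larger error region.

    A positive click (label 1) is placed inside a false-negative region
    (foreground missed by the current prediction); a negative click
    (label 0) inside a false-positive region.
    """
    false_neg = np.logical_and(gt_mask == 1, pred_mask == 0)
    false_pos = np.logical_and(gt_mask == 0, pred_mask == 1)
    if false_neg.sum() >= false_pos.sum() and false_neg.any():
        region, label = false_neg, 1
    elif false_pos.any():
        region, label = false_pos, 0
    else:
        return None  # prediction already matches the ground truth
    ys, xs = np.nonzero(region)
    i = np.random.randint(len(ys))   # a real user would likely click near the region center;
    return (ys[i], xs[i]), label     # uniform sampling keeps the sketch simple

def clicks_to_maps(clicks, shape, sigma=10.0):
    """Encode positive/negative clicks as two Gaussian heat-map channels."""
    pos_map = np.zeros(shape, dtype=np.float32)
    neg_map = np.zeros(shape, dtype=np.float32)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    for (cy, cx), label in clicks:
        g = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        if label == 1:
            pos_map = np.maximum(pos_map, g)
        else:
            neg_map = np.maximum(neg_map, g)
    return pos_map, neg_map

def refine(image, init_pred, gt_mask, segment_fn, max_clicks=5):
    """Iterative refinement loop: stack image, current mask, and click maps,
    add one simulated click per round (the paper reports convergence within
    about five clicks), and re-run the segmentation network."""
    clicks, pred = [], init_pred
    for _ in range(max_clicks):
        click = simulate_click(pred, gt_mask)
        if click is None:
            break
        clicks.append(click)
        pos_map, neg_map = clicks_to_maps(clicks, image.shape)
        net_input = np.stack([image.astype(np.float32),
                              pred.astype(np.float32), pos_map, neg_map])
        pred = segment_fn(net_input)  # segment_fn stands in for the CNN forward pass
    return pred

In such a loop, refine() would be called with an existing segmentation (for example one produced by a non-professional annotator or a conventional algorithm) as init_pred, matching the paper's stated goal of reusing existing segmentation results rather than starting from scratch.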

Keyword:

Convolution; Convolutional neural networks; Image annotation; Image segmentation; Deep learning; Iterative methods; Image enhancement; Medical image processing

Author Community:

  • [ 1 ] [Bai, Yunkun] Faculty of Information Technology, Beijing University of Technology, No. 100 PingLeYuan, Chaoyang District, Beijing 100124, China
  • [ 2 ] [Sun, Guangmin] Faculty of Information Technology, Beijing University of Technology, No. 100 PingLeYuan, Chaoyang District, Beijing 100124, China
  • [ 3 ] [Li, Yu] Faculty of Information Technology, Beijing University of Technology, No. 100 PingLeYuan, Chaoyang District, Beijing 100124, China
  • [ 4 ] [Shen, Le] Key Laboratory of Particle and Radiation Imaging, Tsinghua University, Ministry of Education, China
  • [ 5 ] [Shen, Le] Department of Engineering Physics, Tsinghua University, Beijing 100084, China
  • [ 6 ] [Zhang, Li] Key Laboratory of Particle and Radiation Imaging, Tsinghua University, Ministry of Education, China
  • [ 7 ] [Zhang, Li] Department of Engineering Physics, Tsinghua University, Beijing 100084, China

Reprint Author's Address:

  • [Shen, Le] Department of Engineering Physics, Tsinghua University, Beijing 100084, China; [Shen, Le] Key Laboratory of Particle and Radiation Imaging, Tsinghua University, Ministry of Education, China

Source:

ISSN: 1605-7422

Year: 2021

Volume: 11596

Language: English

Cited Count:

SCOPUS Cited Count: 2

ESI Highly Cited Papers on the List: 0
