Abstract:
Due to the diversity among tumor lesions and the low contrast with surrounding tissue, extracting discriminative features from a medical image is still a challenging task. To improve the representation of these complex objects, encoder-decoder architecture models have been proposed for biomedical segmentation. However, most of them fuse coarse-grained and fine-grained features directly, which causes a semantic gap. To bridge this semantic gap and fuse features better, we propose a style consistency loss that constrains semantic similarity when combining the encoder and decoder features. Comparison experiments are conducted between our proposed U-Net with the style consistency loss constraint and state-of-the-art segmentation deep networks, including FCN, the original U-Net, and U-Net with residual blocks. Experimental results on LiTS-2017 show that our method achieves a liver Dice gain of 1.7% and a tumor Dice gain of 3.11 percentage points over U-Net. © Springer Nature Switzerland AG 2019.
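The abstract does not specify the exact form of the style consistency loss. A minimal sketch is given below, assuming a Gram-matrix style term (as in neural style transfer) applied between matching encoder skip features and decoder features before fusion; the function names, the Gram-matrix choice, and the weighting are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch: a style-consistency term between encoder (skip)
# and decoder feature maps of a U-Net, assuming a Gram-matrix style loss.
# The paper's exact formulation may differ.
import torch
import torch.nn.functional as F


def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Channel-wise Gram matrix of a (N, C, H, W) feature map."""
    n, c, h, w = feat.shape
    f = feat.view(n, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)


def style_consistency_loss(enc_feats, dec_feats):
    """Mean MSE between Gram matrices of matching encoder/decoder levels.

    enc_feats / dec_feats: lists of same-shaped activations collected at
    each resolution level during the U-Net forward pass (assumption).
    """
    loss = 0.0
    for e, d in zip(enc_feats, dec_feats):
        loss = loss + F.mse_loss(gram_matrix(e), gram_matrix(d))
    return loss / len(enc_feats)


# Assumed usage: total loss = segmentation loss (e.g. Dice or
# cross-entropy) + lambda * style_consistency_loss(enc_feats, dec_feats),
# with lambda a tunable weight.
```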
ISSN: 0302-9743
Year: 2019
Volume: 11859 LNCS
Page: 390-396
Language: English
Cited Count:
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0