Abstract:
Magnetic resonance imaging (MRI) is currently the main non-invasive method for detecting focal liver lesions (FLLs), as it provides rich information across multiple modalities. Although deep learning has made significant progress in medical image diagnosis, medical image datasets rarely contain large-scale labelled data, which often leads to overfitting and poor model generalization. To make full use of multimodal MRI under few-shot scenarios, we propose an Efficient Multimodal-Contribution-Aware N-pair (EMCAN) network, which constructs a lightweight and efficient feature extractor to enhance feature representation. To improve the separability of the features learned by this network, we propose a multi-class N-pair loss. Experimental results show that our method outperforms conventional deep learning models in terms of diagnostic accuracy and provides a more accurate reference for clinical diagnosis. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
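(The abstract does not give the exact form of the paper's multi-class N-pair loss; the following is a minimal NumPy sketch of the standard multi-class N-pair loss from Sohn (2016), on which such a loss is typically based. The function name and inputs are illustrative, not from the paper.)

```python
import numpy as np

def n_pair_loss(anchor, positive, negatives):
    """Multi-class N-pair loss (Sohn, 2016), a common basis for such losses.

    Pulls the anchor embedding toward its positive while pushing it away
    from N-1 negatives in a single softmax-style term:
        L = log(1 + sum_i exp(a.n_i - a.p))

    anchor, positive: (d,) embedding vectors; negatives: (N-1, d) array.
    """
    pos_sim = anchor @ positive        # similarity to the positive example
    neg_sims = negatives @ anchor      # similarity to each negative example
    # log1p keeps the computation numerically stable for small arguments
    return np.log1p(np.sum(np.exp(neg_sims - pos_sim)))
```

Compared with a plain triplet loss, a single N-pair term contrasts the anchor against all N-1 negatives at once, which is what makes the features more separable across classes.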
ISSN: 1865-0929
Year: 2023
Volume: 1910 CCIS
Page: 373-387
Language: English
Cited Count:
WoS CC Cited Count: 0
ESI Highly Cited Papers on the List: 0