Abstract:
Architectural Distortion (AD) is a common abnormality in digital mammograms, alongside masses and microcalcifications. Detecting AD in dense breast tissue is particularly challenging because of its heterogeneous asymmetries and subtle presentation; variability in location, size, shape, texture, and pattern further reduces detection sensitivity. To address these challenges, we propose a novel feature fusion-based Vision Transformer (ViT) attention network, combined with VGG-16, to improve the accuracy and efficiency of AD detection. Our approach mitigates issues related to texture fixation, background boundaries, and the limitations of deep neural networks, enhancing the robustness of AD classification in mammograms. Experimental results show that the proposed model achieves state-of-the-art performance, outperforming eight existing deep learning models. On the PINUM dataset it attains 0.97 sensitivity, 0.92 F1-score, 0.93 precision, 0.94 specificity, and 0.96 accuracy; on the DDSM dataset it records 0.93 sensitivity, 0.91 F1-score, 0.94 precision, 0.92 specificity, and 0.95 accuracy. These results highlight the potential of our method for computer-aided breast cancer diagnosis, particularly in low-resource settings where access to high-end imaging technology is limited. By enabling more accurate and timely AD detection, our approach could significantly improve breast cancer screening and early intervention worldwide.
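For readers who want a concrete picture of the kind of architecture the abstract describes, the following is a minimal PyTorch sketch of a VGG-16 + ViT feature-fusion classifier. The class name, layer sizes, and fusion head are illustrative assumptions made here for clarity, not the authors' published implementation.

```python
# Hypothetical sketch of a VGG-16 + ViT feature-fusion classifier (details assumed,
# not taken from the paper): both backbones encode the mammogram patch, the two
# feature vectors are concatenated, and a small head predicts AD vs. normal.
import torch
import torch.nn as nn
from torchvision.models import vgg16, vit_b_16

class FusionADClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # CNN branch: VGG-16 convolutional features pooled to a 512-d vector
        self.cnn = vgg16(weights=None).features
        self.cnn_pool = nn.AdaptiveAvgPool2d(1)
        # Transformer branch: ViT-B/16 with its classification head removed (768-d CLS token)
        vit = vit_b_16(weights=None)
        vit.heads = nn.Identity()
        self.vit = vit
        # Fusion head over the concatenated 512 + 768 = 1280-d feature vector
        self.head = nn.Sequential(
            nn.Linear(512 + 768, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f_cnn = self.cnn_pool(self.cnn(x)).flatten(1)  # (B, 512)
        f_vit = self.vit(x)                            # (B, 768)
        return self.head(torch.cat([f_cnn, f_vit], dim=1))

# Example forward pass on a batch of 224x224 RGB mammogram patches
model = FusionADClassifier()
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```

Late fusion by concatenation, as sketched here, is only one way to combine the two branches; the paper's attention-based fusion may weight the CNN and transformer features differently before classification.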
Source: IEEE Journal of Biomedical and Health Informatics
ISSN: 2168-2194
Year: 2025
Impact Factor: 7.700 (JCR@2022)