Abstract:
In recent years, the convolutional neural network (CNN), a representative architecture in deep learning, has been widely used in machine vision tasks such as object recognition and image processing. In this paper, we introduce the recurrent convolutional neural network (RCNN), which is more flexible and more accurate than the traditional CNN for object recognition. However, the RCNN is restricted to local receptive fields and lacks an understanding of the global context. Inspired by research in cognitive science and neuroscience, we propose a recurrent convolutional neural network integrated with a self-attention mechanism (A-RCNN) to address this issue. Unlike most methods, which combine self-attention with a CNN as an enhancement of the convolutional layers, our method introduces the self-attention mechanism into the RCNN as an independent layer. We evaluate the constructed A-RCNN on the publicly available CIFAR-10, CIFAR-100 and MNIST datasets. The experimental results show that the proposed A-RCNN achieves significantly improved accuracy on object recognition tasks compared with the original RCNN and CNN. © 2022 IEEE.
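The abstract does not give the exact layer design, but the general idea it describes — a recurrent convolutional layer that iteratively refines local features, followed by a standalone self-attention layer that lets every spatial position attend to all others for global context — can be sketched in plain NumPy. All shapes, weight names, and the number of recurrent steps below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    # naive single-channel "same" 2-D convolution (cross-correlation)
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * w)
    return out

def recurrent_conv(x, w_ff, w_rec, steps=3):
    # recurrent convolutional layer: the feed-forward response is refined
    # over `steps` iterations with a shared recurrent kernel (RCNN idea)
    ff = conv2d(x, w_ff)
    h = np.maximum(ff, 0)
    for _ in range(steps):
        h = np.maximum(ff + conv2d(h, w_rec), 0)
    return h

def self_attention(h, wq, wk, wv):
    # independent self-attention layer over spatial positions: each
    # position attends to all others, supplying the global context that
    # convolution alone lacks
    H, W = h.shape
    seq = h.reshape(-1, 1)                  # (H*W, 1) tokens, 1 feature
    q, k, v = seq @ wq, seq @ wk, seq @ wv  # (H*W, d) projections
    scores = q @ k.T / np.sqrt(q.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # row-wise softmax
    return (attn @ v).reshape(H, W, -1)

# toy 8x8 single-channel input and small random weights
x = rng.standard_normal((8, 8))
w_ff = rng.standard_normal((3, 3)) * 0.1
w_rec = rng.standard_normal((3, 3)) * 0.1
wq, wk, wv = (rng.standard_normal((1, 4)) * 0.1 for _ in range(3))

feat = recurrent_conv(x, w_ff, w_rec)   # local recurrent features, (8, 8)
ctx = self_attention(feat, wq, wk, wv)  # globally contextualized, (8, 8, 4)
print(feat.shape, ctx.shape)
```

In a trained network these weights would of course be learned, and the attention output would feed subsequent layers; the sketch only shows how a self-attention layer can sit after a recurrent convolutional layer as a separate stage rather than being fused into the convolution itself.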
Year: 2022
Volume: 2022-January
Page: 6312-6316
Language: English