Abstract:
This paper investigates a lightweight Vision Transformer (ViT) model optimized with a Support Vector Machine (SVM), introducing the SVM into the ViT model through structural optimization and hyperparameter optimization, with the aim of improving the model's performance in target detection tasks. The study addresses the design of the lightweight ViT model structure, the SVM optimization strategy, and an analysis of the overall optimization process. First, an SVM is introduced into the structural design of the ViT model to optimize the feature extraction process. Second, the hyperparameters of the ViT model are optimized, and the strong classification ability of the SVM accelerates the training process. Finally, the overall optimization process is analyzed and experimentally validated, demonstrating the superior performance of the optimized model in target detection on standard datasets such as COCO. Introducing Support Vector Machines (SVMs) into the Vision Transformer (ViT) model effectively reduces the number of model parameters in resource-constrained scenarios, improves training and inference speed, and significantly improves the overall performance of the model. © 2024 IEEE.
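The abstract does not include code, and the paper's exact method is not detailed here. As a rough, hedged sketch of the general pattern it describes (an SVM serving as the classification stage on top of ViT features), the snippet below trains an `sklearn` SVM on stand-in feature vectors; the feature dimension, class layout, and random features are all hypothetical placeholders for embeddings a real ViT backbone would produce.

```python
# Minimal sketch (NOT the authors' implementation): an SVM classifier
# applied to feature vectors, standing in for embeddings extracted by a
# frozen lightweight ViT backbone.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical 64-dim "ViT features" for two well-separated classes;
# a real pipeline would use actual [CLS]-token embeddings instead.
n_per_class, dim = 50, 64
feats_a = rng.normal(loc=0.0, size=(n_per_class, dim))
feats_b = rng.normal(loc=2.0, size=(n_per_class, dim))
X = np.vstack([feats_a, feats_b])
y = np.array([0] * n_per_class + [1] * n_per_class)

# SVM as the classification head on top of the extracted features.
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
acc = clf.score(X, y)
```

In this pattern the backbone is used only as a feature extractor, so only the SVM is fit, which is one plausible reading of how an SVM could speed up training relative to learning a full classification head end to end.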
Year: 2024
Page: 261-265
Language: English