Abstract:
Hypertension is one of the most common chronic diseases threatening human health, and early warning and intervention are crucial for controlling its progression. In practice, however, the sample data available for training hypertension models is often limited, which degrades model performance. To address this issue, this paper proposes an end-to-end hypertension early warning model based on Generative Adversarial Networks (GANs) and Long Short-Term Memory (LSTM) networks, which can generate a large number of high-quality synthetic electronic health records (EHRs) in a small-sample environment and use them directly to train the hypertension early warning model. Specifically, we take processed data from the public MIMIC-III dataset as input and generate synthetic EHRs through a combined GAN and LSTM architecture: the GAN produces realistic synthetic data through adversarial training of the generator and discriminator, while the LSTM captures time-series features, enhancing the authenticity and diversity of the data. The generated synthetic data is then used directly to train the hypertension early warning model, and a feedback mechanism during EHR generation uses the model's prediction accuracy to continuously refine the quality of the generated data until the best result is achieved. Finally, the optimal hypertension early warning model and the corresponding synthetic data are saved. Experimental results show that the model trained on synthetic data achieves significantly higher prediction accuracy on the test set, with better sensitivity and specificity than traditional methods. This study verifies the effectiveness and superiority of the proposed method in a small-sample environment, offering a new perspective on the utilization of large-scale medical data with broad application prospects. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
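The record contains no code, so as an illustration only, the following is a minimal PyTorch sketch of the kind of LSTM-based generator and discriminator the abstract describes. All layer sizes, sequence dimensions, and training details here are assumptions chosen for clarity, not the authors' implementation.

```python
# Hypothetical sketch of a GAN with LSTM generator/discriminator for
# sequential EHR data. Dimensions and hyperparameters are assumptions.
import torch
import torch.nn as nn

SEQ_LEN, N_FEATURES, NOISE_DIM, HIDDEN = 24, 16, 32, 64  # assumed sizes

class Generator(nn.Module):
    """Maps a noise sequence to a synthetic EHR time series."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(NOISE_DIM, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, N_FEATURES)

    def forward(self, z):                  # z: (batch, SEQ_LEN, NOISE_DIM)
        h, _ = self.lstm(z)
        # Sigmoid assumes features are normalized to [0, 1].
        return torch.sigmoid(self.out(h))  # (batch, SEQ_LEN, N_FEATURES)

class Discriminator(nn.Module):
    """Scores a time series as real (1) or synthetic (0)."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, 1)

    def forward(self, x):                  # x: (batch, SEQ_LEN, N_FEATURES)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])          # logit from the last time step

# One adversarial training step (standard non-saturating GAN loss, assumed):
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(8, SEQ_LEN, N_FEATURES)  # stand-in for MIMIC-III batches
fake = G(torch.randn(8, SEQ_LEN, NOISE_DIM))

# Discriminator step: push real toward 1 and fake toward 0.
loss_d = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to make the discriminator score fake as real.
loss_g = bce(D(fake), torch.ones(8, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In the pipeline the abstract outlines, the feedback mechanism would wrap this step: after each round of generation, the hypertension early warning model is trained on the synthetic EHRs and its prediction accuracy is fed back to steer further generation. That outer loop, and the paper's exact loss formulation, are omitted from this sketch.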
Source: Lecture Notes in Computer Science
ISSN: 0302-9743
Year: 2025
Volume: 15463 LNCS
Page: 341-356
Language: English
ESI Highly Cited Papers on the List: 0