Abstract:
This paper presents an uncertainty-aware deep reinforcement learning (DRL) framework for algorithmic trading agents, aimed at improving both profitability and risk management in volatile market environments. We integrate two uncertainty estimation methods, Monte Carlo Dropout for epistemic uncertainty and Reconstruction Uncertainty Estimate for distributional uncertainty, into the decision-making process of a DRL trading agent. The proposed approach was evaluated on the U.S. stock market using historical data from the S&P 500 index, segmented into multiple sub-periods based on recession phases. Empirical results demonstrate that uncertainty-aware agents consistently outperform traditional models and risk-unaware agents in terms of cumulative returns and risk mitigation, as evidenced by lower maximum drawdowns. The Aggregated Agent, which combines both uncertainty sources, achieved the best overall performance, highlighting the importance of incorporating both risk-awareness and adaptive decision-making in automated trading systems. This work underscores the broader significance of integrating uncertainty estimation into DRL, paving the way for more robust and resilient decision-making across a variety of complex, dynamic environments. © 2024 IEEE.
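The abstract names Monte Carlo Dropout as the epistemic-uncertainty estimator and describes gating trading decisions on it. A minimal sketch of that general idea is below; this is an illustration under stated assumptions, not the paper's implementation. The toy linear "policy", the dropout rate, the uncertainty threshold, and the function names (`mc_dropout_predict`, `gated_action`) are all hypothetical.

```python
import random
import statistics

def mc_dropout_predict(weights, features, p_drop=0.5, n_samples=500, seed=0):
    """Monte Carlo Dropout sketch: keep dropout active at inference time,
    run many stochastic forward passes, and treat the spread of the
    outputs as an epistemic-uncertainty estimate.

    The single linear layer here is a stand-in for the paper's DRL
    policy network (hypothetical simplification)."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        # Inverted-dropout mask: zero an input with prob p_drop,
        # otherwise rescale by 1/(1 - p_drop) to keep the expectation.
        acts = [
            w * x * (0.0 if rng.random() < p_drop else 1.0 / (1.0 - p_drop))
            for w, x in zip(weights, features)
        ]
        outputs.append(sum(acts))
    return statistics.fmean(outputs), statistics.pstdev(outputs)

def gated_action(signal, uncertainty, threshold=1.0):
    """Risk-aware gating: abstain (hold) when epistemic uncertainty is
    above a threshold; otherwise trade on the signal's sign."""
    if uncertainty > threshold:
        return "hold"
    return "buy" if signal > 0 else "sell"

# Example: estimate a trading signal and its uncertainty, then gate.
mean, std = mc_dropout_predict([0.5, -0.2, 0.8], [1.0, 2.0, 0.5])
action = gated_action(mean, std, threshold=2.0)
```

The paper's Aggregated Agent combines this epistemic estimate with a distributional one (Reconstruction Uncertainty Estimate); a simple combination rule would be to hold whenever either estimate exceeds its threshold.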
Year: 2024
Page: 82-87
Language: English