Abstract:
Multi-objective optimization algorithms are essential for real-world problems characterized by conflicting objectives. Although conventional algorithms explore solution spaces effectively and generate non-dominated solutions, their solution quality and their ability to adapt toward the true Pareto front still need improvement. This work proposes a multi-objective algorithm, N-MORL, that integrates the Non-dominated Sorting Genetic Algorithm II (NSGA-II) with Multi-Objective Reinforcement Learning (MORL). N-MORL consists of two parts: an upstream and a downstream component. In the upstream component, this work improves Variance-stabilized Multi-objective Proximal Policy Optimization (VMPPO) by adjusting its iteration mechanism for enhanced convergence stability, and optimizes the variance networks and action sampling to balance exploration and exploitation, which improves experience-sampling efficiency. The high-quality solution set yielded by MORL serves as the initial population for the downstream NSGA-II, guiding its exploration of the search space and increasing the number of solutions. These high-quality initial solutions significantly accelerate the iterative convergence of N-MORL. N-MORL improves both the quality and the number of solutions, better covering or approaching the true Pareto front. Experimental results on five benchmark multi-objective functions demonstrate that N-MORL outperforms three other multi-objective evolutionary algorithms in solution quality under the same number of iterations. © 2024 IEEE.
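The upstream-to-downstream hand-off described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: all function names are hypothetical, minimization of all objectives is assumed, and the upstream MORL stage is represented only by a given list of objective vectors that seeds the downstream NSGA-II-style population.

```python
import random

def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the non-dominated subset (first Pareto front) of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def seed_population(morl_solutions, pop_size, rng):
    """Seed the downstream GA's initial population with upstream MORL solutions,
    topping up with small Gaussian perturbations when fewer than pop_size exist
    (the top-up rule is an assumption for this sketch)."""
    pop = list(morl_solutions)[:pop_size]
    while len(pop) < pop_size:
        base = rng.choice(morl_solutions)
        pop.append(tuple(x + rng.gauss(0, 0.05) for x in base))
    return pop

# Hypothetical usage: two upstream solutions expanded into a population of six,
# then filtered to the current Pareto front.
rng = random.Random(0)
population = seed_population([(0.1, 0.9), (0.5, 0.5)], 6, rng)
front = non_dominated(population)
```

Seeding with near-Pareto solutions rather than a uniformly random population is what the abstract credits for the faster convergence: the genetic search starts inside a region already shaped by the reinforcement-learning stage.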
ISSN: 1062-922X
Year: 2024
Page: 3733-3738
Language: English