Abstract:
The online optimization model was first introduced in the study of machine learning problems (Zinkevich, Proceedings of ICML, 928–936, 2003). It is a powerful framework that combines the principles of optimization with the challenges of online decision-making. Most existing research considers the case where the revealed objective functions are convex or submodular. In this paper, we focus on the online maximization problem under a special objective function Φ(x): [0,1]^n → R_+ that satisfies the inequality (1/2)⟨u^T ∇²Φ(x), u⟩ ≤ σ·(‖u‖₁/‖x‖₁)·⟨u, ∇Φ(x)⟩ for any x, u ∈ [0,1]^n, x ≠ 0. Such a function is called a one-sided σ-smooth (OSS) function. We obtain two results. First, under the assumption that the gradient of the OSS function is L-smooth, we propose a (1 − exp((θ−1)(θ/(1+θ))^{2σ}))-approximation algorithm with an O(√T) regret upper bound, where T is the number of rounds of the online algorithm and θ, σ ∈ R_+ are parameters. Second, if the gradient of the OSS function is not L-smooth, we provide a (1 + ((θ+1)/θ)^{4σ})^{−1}-approximation projected gradient algorithm and prove that its regret upper bound is also O(√T). We believe this research can provide new ideas for online non-convex and non-submodular learning. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
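To make the projected gradient result concrete, below is a minimal, hedged sketch of online projected gradient ascent over the box [0,1]^n, the feasible region used in the abstract. This illustrates the generic technique only, not the authors' algorithm: the oracle interface, the names `project_box` and `online_projected_gradient_ascent`, and the 1/√t step-size schedule (the standard choice behind O(√T) regret bounds in online optimization) are all assumptions made for this sketch.

```python
import numpy as np

def project_box(x):
    # Euclidean projection onto the box [0,1]^n.
    return np.clip(x, 0.0, 1.0)

def online_projected_gradient_ascent(grad_oracles, n, eta=lambda t: 1.0 / np.sqrt(t)):
    # grad_oracles: list of callables; grad_oracles[t-1](x) returns the
    # gradient of the round-t objective at the played point x.
    # Returns the sequence of plays x_1, ..., x_T.
    x = np.full(n, 0.5)                  # arbitrary interior starting point
    plays = []
    for t, grad_f in enumerate(grad_oracles, start=1):
        plays.append(x.copy())           # commit to x_t before round-t feedback
        g = grad_f(x)                    # gradient revealed after the round
        x = project_box(x + eta(t) * g)  # ascent step, then project back
    return plays

# Hypothetical usage: maximize f_t(x) = <a_t, x> - ||x||^2/2 with gradient a_t - x.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    oracles = [(lambda x, a=rng.uniform(size=5): a - x) for _ in range(100)]
    plays = online_projected_gradient_ascent(oracles, n=5)
    print(plays[-1])                     # final iterate stays inside [0,1]^5
```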
Source:
Journal of Combinatorial Optimization
ISSN: 1382-6905
Year: 2024
Issue: 5
Volume: 47
Impact Factor: 1.000 (JCR@2022)