Abstract:
The online optimization model was first introduced in the study of machine learning problems (Zinkevich, Proceedings of ICML, pp. 928-936, 2003). It is a powerful framework that combines the principles of optimization with the challenges of online decision-making. Existing research mainly considers the case in which the revealed objective functions are convex or submodular. In this paper, we focus on the online maximization problem for a special class of objective functions $\Phi(x):[0,1]^{n}\to\mathbb{R}_{+}$ satisfying the inequality $\frac{1}{2}\langle u^{\mathrm{T}}\nabla^{2}\Phi(x),u\rangle \le \sigma\cdot\frac{\|u\|_{1}}{\|x\|_{1}}\langle u,\nabla\Phi(x)\rangle$ for any $x,u\in[0,1]^{n}$, $x\ne 0$. Such a function is called a one-sided $\sigma$-smooth (OSS) function. We obtain two results. First, under the assumption that the gradient of the OSS function is $L$-smooth, we propose a $\bigl(1-\exp\bigl((\theta-1)(\theta/(1+\theta))^{2\sigma}\bigr)\bigr)$-approximation algorithm with an $O(\sqrt{T})$ regret upper bound, where $T$ is the number of rounds of the online algorithm and $\theta,\sigma\in\mathbb{R}_{+}$ are parameters. Second, if the gradient of the OSS function is not $L$-smooth, we provide a $\bigl(1+((\theta+1)/\theta)^{4\sigma}\bigr)^{-1}$-approximation projected gradient algorithm and prove that its regret upper bound is also $O(\sqrt{T})$. We believe this research can provide new ideas for online non-convex and non-submodular learning.
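As context for the second result, the following is a minimal sketch of the generic online projected gradient ascent template over the box $[0,1]^n$ that such algorithms build on. It is not the paper's specific algorithm: the oracle interface, the step size $\eta = O(1/\sqrt{T})$ (the standard choice behind $O(\sqrt{T})$ regret bounds), and all function names are assumptions for illustration only.

    import numpy as np

    def project_to_box(x):
        # Euclidean projection onto [0, 1]^n is coordinate-wise clipping.
        return np.clip(x, 0.0, 1.0)

    def online_projected_gradient(grad_oracles, n, eta):
        # grad_oracles: list of T callables; grad_oracles[t](x) returns the
        # gradient of the round-t objective at x (revealed after playing x).
        # Returns the sequence of iterates x_1, ..., x_T.
        x = np.full(n, 0.5)  # hypothetical interior starting point
        iterates = []
        for grad in grad_oracles:
            iterates.append(x.copy())
            g = grad(x)                       # gradient feedback for the played point
            x = project_to_box(x + eta * g)   # ascent step, then project back to the box
        return iterates

    # Usage with toy quadratic objectives per round (illustration only).
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, T = 5, 50
        targets = rng.uniform(size=(T, n))
        # Gradient of the round-t objective -0.5 * ||x - c_t||^2 is -(x - c_t).
        oracles = [lambda x, c=c: -(x - c) for c in targets]
        xs = online_projected_gradient(oracles, n, eta=1.0 / np.sqrt(T))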
Source: JOURNAL OF COMBINATORIAL OPTIMIZATION
ISSN: 1382-6905
Year: 2024
Issue: 5
Volume: 47
Impact Factor: 1.000 (JCR@2022)