Abstract:
In this paper, we study the problem of maximizing a nonmonotone one-sided-η-smooth (OSS for short) function ψ(x) under a downwards-closed convex polytope constraint. The concept of OSS was first proposed by Mehrdad et al. [1, 2] to capture properties of the multilinear extension of certain set functions; it generalizes the continuous DR-submodular function. The OSS property guarantees an alternative bound based on the Taylor expansion. If the objective function is nonmonotone diminishing-return (DR) submodular, Bian et al. [3] gave a 1/e-approximation algorithm with a regret bound O(LD²/(2K)). On general convex sets, Dürr et al. [4] gave a 1/(3√3)-approximation solution with O(LD²(ln K)²) regret. In this paper, we consider maximizing the more general OSS function and show that, by adjusting the iterative step of the Jump-Start Frank-Wolfe algorithm, a 1/e approximation can still be obtained, at the cost of a larger regret bound O(L(μD)²/(2K)) (where L, μ, and D are parameters; see Table 1). The larger the parameter η we choose, the larger the regret we incur, since μ = (β/(β+1))^(−2η) with β ∈ (0, 1]. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
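To make the Frank-Wolfe-style iteration referenced above concrete, the following is a minimal Python sketch of a generic continuous-greedy loop with step size 1/K over a down-closed feasible region. The gradient oracle grad_psi, the linear-maximization oracle lmo, the toy quadratic objective, and the box constraint are illustrative assumptions; the paper's jump-start initialization and OSS-specific step adjustment are not reproduced here.

```python
import numpy as np

def frank_wolfe_sketch(grad_psi, lmo, dim, K=100):
    """Generic Frank-Wolfe-style loop (a sketch, not the paper's algorithm).

    grad_psi(x): gradient oracle for the objective psi at x (assumed).
    lmo(g): linear-maximization oracle returning argmax over the
            down-closed feasible region P of <g, v> (assumed).
    """
    x = np.zeros(dim)      # the origin is feasible for a down-closed polytope
    eps = 1.0 / K          # uniform step size 1/K; the paper adjusts this
                           # step for OSS objectives, which is what enlarges
                           # the regret bound by the factor mu^2
    for _ in range(K):
        g = grad_psi(x)
        v = lmo(g)         # ascent direction from the linear oracle
        x = x + eps * v    # move a 1/K fraction toward v
    return x

# Toy usage: maximize a concave quadratic over the box [0, 1]^3
# (the box is down-closed; its LMO simply sets v_i = 1 where g_i > 0).
if __name__ == "__main__":
    a = np.array([0.8, 0.3, 0.5])
    grad_psi = lambda x: a - x             # gradient of <a, x> - ||x||^2 / 2
    lmo = lambda g: (g > 0).astype(float)
    print(frank_wolfe_sketch(grad_psi, lmo, dim=3, K=200))
```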
Source: Lecture Notes in Computer Science
ISSN: 0302-9743
Year: 2023
Volume: 13831 LNCS
Page: 259-267
Language: English
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0