Abstract:
Stochastic optimization has experienced significant growth in recent decades, with variance reduction techniques increasingly employed in stochastic optimization algorithms to enhance computational efficiency. In this paper, we introduce two projection-free stochastic approximation algorithms for maximizing diminishing-return (DR) submodular functions over convex constraints, building upon the Stochastic Path Integrated Differential EstimatoR (SPIDER) and its variants. First, we present a SPIDER Continuous Greedy (SPIDER-CG) algorithm for the monotone case that guarantees a $(1 - e^{-1})\mathrm{OPT} - \epsilon$ approximation after $O(\epsilon^{-1})$ iterations and $O(\epsilon^{-2})$ stochastic gradient computations under the mean-squared smoothness assumption. For the non-monotone case, we develop a SPIDER Frank-Wolfe (SPIDER-FW) algorithm that guarantees a $\frac{1}{4}\bigl(1 - \min_{x \in C}\|x\|_{\infty}\bigr)\mathrm{OPT} - \epsilon$ approximation with $O(\epsilon^{-1})$ iterations and $O(\epsilon^{-2})$ stochastic gradient estimates. To address the practical challenge of requiring a large number of samples per iteration, we introduce a modified gradient estimator based on SPIDER, leading to a Hybrid SPIDER-FW (Hybrid SPIDER-CG) algorithm that achieves the same approximation guarantee as the SPIDER-FW (SPIDER-CG) algorithm with only $O(1)$ samples per iteration. Numerical experiments on both simulated and real data demonstrate the efficiency of the proposed methods.
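The abstract describes a recursive variance-reduced (SPIDER-style) gradient estimator combined with a continuous greedy / Frank-Wolfe update. The following Python sketch illustrates that general pattern under assumed oracles; `grad`, `sample`, and `lmo`, as well as the parameter names `big_batch`, `small_batch`, and `epoch_len`, are hypothetical placeholders, and the code is a minimal illustration of the technique rather than the authors' exact algorithm. The key design point is that the same small batch is evaluated at consecutive iterates, so only the gradient difference is re-estimated each step, which is what yields variance reduction under mean-squared smoothness.

```python
import numpy as np

def spider_continuous_greedy(grad, sample, lmo, x0, T, big_batch, small_batch, epoch_len):
    """SPIDER-style continuous greedy sketch (monotone DR-submodular maximization).

    grad(x, batch): averaged stochastic gradient of F at x over `batch` (assumed oracle)
    sample(n):      draws n i.i.d. samples from the data distribution   (assumed oracle)
    lmo(g):         linear maximization oracle, argmax_{v in C} <g, v>  (assumed oracle)
    x0:             starting point in C, typically the origin (numpy array)
    """
    x = np.array(x0, dtype=float)
    x_prev, v = None, None
    for t in range(T):
        if t % epoch_len == 0:
            # Checkpoint step: a large-batch gradient estimate resets the estimator.
            v = grad(x, sample(big_batch))
        else:
            # SPIDER correction: the SAME small batch is evaluated at x and x_prev,
            # so only the gradient difference is re-estimated at this iteration.
            batch = sample(small_batch)
            v = v + grad(x, batch) - grad(x_prev, batch)
        x_prev = x.copy()
        # Continuous greedy update: move a 1/T fraction toward the LMO vertex.
        x = x + lmo(v) / T
    return x
```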
Source: ALGORITHMICA
ISSN: 0178-4617
Year: 2023
Issue: 5
Volume: 86
Page: 1335-1364
Impact Factor: 1.100 (JCR@2022)
Cited Count:
WoS CC Cited Count: 1
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0