TY - CONF
T1 - Diminishing Regret for Online Nonconvex Optimization
AU - Park, Sang Woo
AU - Mulvaney-Kemp, Julie
AU - Jin, Ming
AU - Lavaei, Javad
N1 - Publisher Copyright:
© 2021 American Automatic Control Council.
PY - 2021/5/25
Y1 - 2021/5/25
AB - A single nonconvex optimization problem is NP-hard in the worst case, and so is a sequence of nonconvex problems when each is viewed separately. For online nonconvex optimization (ONO) problems, widely used local search algorithms are guaranteed to track a sequence of local optima but offer no guarantee of global optimality. In this paper, we introduce the concept of nonconvexity regret to measure the performance of a local search method against a global optimization solver for ONO. We define the notion of the depth of a global minimum and show that memory and random exploration drive the nonconvexity regret to zero when the variability of the objective function is low compared to the depth of the global minima. We prove probabilistic guarantees on the regret bound that depend on how the landscapes of the time-varying objective functions evolve. Then, based on the notions of missing mass and 1-occupancy set, we develop a practical algorithm that works even when no such landscape information is available. The theoretical results imply that the existence of a low-complexity optimization problem at any single time instance of ONO can nullify the NP-hardness of the entire ONO problem. The results are verified through numerical simulations.
UR - http://www.scopus.com/inward/record.url?scp=85111933736&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85111933736&partnerID=8YFLogxK
U2 - 10.23919/ACC50511.2021.9482986
DO - 10.23919/ACC50511.2021.9482986
M3 - Conference contribution
AN - SCOPUS:85111933736
T3 - Proceedings of the American Control Conference
SP - 978
EP - 985
BT - 2021 American Control Conference, ACC 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 American Control Conference, ACC 2021
Y2 - 25 May 2021 through 28 May 2021
ER -