A single nonconvex optimization problem is NP-hard in the worst case, and so is a sequence of nonconvex problems viewed separately. For online nonconvex optimization (ONO) problems, widely used local search algorithms are guaranteed to track a sequence of local optima, but offer no guarantees of global optimality. In this paper, we introduce the concept of nonconvexity regret to measure the performance of a local search method against a global optimization solver for ONO. We define the notion of depth of a global minimum, and show that memory and random exploration drive the nonconvexity regret to zero when the variability of the objective function is low compared to the depth of the global minima. We prove probabilistic guarantees on the regret bound that depend on the evolution of the landscapes of the time-varying objective functions. Then, based on the notions of missing mass and 1-occupancy set, we develop a practical algorithm that works even when no such information about the landscapes is available. The theoretical results imply that the existence of a low-complexity optimization problem at an arbitrary single time instance of ONO can nullify the NP-hardness of the entire ONO problem. The results are verified through numerical simulations.