This article is cited in 7 scientific papers
Accelerated and Unaccelerated Stochastic Gradient Descent in Model Generality
D. M. Dvinskikh (a, b, c), A. I. Tyurin (d), A. V. Gasnikov (b, c, d), S. S. Omelchenko (b)
a Weierstrass Institute, Berlin
b Moscow Institute of Physics and Technology (National Research University), Dolgoprudny, Moscow Region
c Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute), Moscow
d National Research University "Higher School of Economics", Moscow
Abstract:
A new method for deriving convergence rate estimates for optimal methods that solve smooth (strongly) convex stochastic optimization problems is described. The method obtains results for stochastic optimization from results on the convergence of optimal methods under inexact gradients with small noise of nonrandom nature. In contrast to earlier results, all estimates in the present paper are obtained in model generality.
Keywords:
stochastic optimization, accelerated gradient descent, model generality, composite optimization.
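The central idea can be illustrated with a short sketch: a mini-batched stochastic gradient plays the role of an inexact gradient with small, controllable noise, and this inexact gradient is then fed into both an unaccelerated and an accelerated gradient scheme. The toy problem, step-size choices, and batch-size rule below are illustrative assumptions, not the construction from the paper.

```python
# A minimal, illustrative sketch (not the authors' algorithm): a mini-batched
# stochastic gradient is treated as an inexact gradient and plugged into plain
# and accelerated (linear-coupling style) gradient descent.  The toy problem,
# step sizes, and batch size are assumptions chosen for illustration only.
import numpy as np

rng = np.random.default_rng(0)
d = 20
x_star = rng.normal(size=d)        # minimizer of the expected loss in this toy model

def sample_batch(m):
    """Draw m i.i.d. samples (a_i, b_i) with b_i = <a_i, x_star> + noise."""
    A = rng.normal(size=(m, d))
    b = A @ x_star + 0.1 * rng.normal(size=m)
    return A, b

def batch_grad(x, m):
    """Mini-batch gradient of f(x) = E[(<a, x> - b)^2 / 2]; larger m => smaller inexactness."""
    A, b = sample_batch(m)
    return A.T @ (A @ x - b) / m

def f_true(x):
    """Exact expected loss, available in closed form for this toy model."""
    return 0.5 * (np.sum((x - x_star) ** 2) + 0.1 ** 2)

L = 1.0        # smoothness constant of the expected loss (E[a a^T] = I)
batch = 200    # batch size controlling the gradient inexactness (assumed rule)
iters = 300

# Unaccelerated method: x_{k+1} = x_k - (1/L) * inexact_grad(x_k).
x = np.zeros(d)
for _ in range(iters):
    x = x - (1.0 / L) * batch_grad(x, batch)

# Accelerated variant (linear-coupling form) using the same inexact gradients.
y = np.zeros(d)
z = np.zeros(d)
for k in range(iters):
    alpha = (k + 2) / (2.0 * L)
    tau = 2.0 / (k + 2)
    x_k = tau * z + (1 - tau) * y
    g = batch_grad(x_k, batch)
    y = x_k - (1.0 / L) * g     # gradient step
    z = z - alpha * g           # mirror/dual step

print("plain SGD gap:       ", f_true(x) - f_true(x_star))
print("accelerated gap:     ", f_true(y) - f_true(x_star))
```

Running the sketch shows both schemes driving the expected loss toward its minimum, with the accelerated variant typically converging faster, consistent with the deterministic inexact-gradient rates that the paper's technique transfers to the stochastic setting.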
Received: 11.04.2020 Revised: 20.05.2020
Citation:
D. M. Dvinskikh, A. I. Tyurin, A. V. Gasnikov, S. S. Omelchenko, “Accelerated and Unaccelerated Stochastic Gradient Descent in Model Generality”, Mat. Zametki, 108:4 (2020), 515–528; Math. Notes, 108:4 (2020), 511–522
Linking options:
https://www.mathnet.ru/eng/mzm12751
https://doi.org/10.4213/mzm12751
https://www.mathnet.ru/eng/mzm/v108/i4/p515
Statistics & downloads:
Abstract page: 326; Full-text PDF: 123; References: 45; First page: 10