40 citations to https://www.mathnet.ru/rus/at14682
  1. Yuanhanqing Huang, Jianghai Hu, “Zeroth-Order Learning in Continuous Games via Residual Pseudogradient Estimates”, IEEE Trans. Automat. Contr., 70:4 (2025), 2258
  2. Aleksandr Beznosikov, Ivan Stepanov, Artyom Voronov, Alexander Gasnikov, “One-point feedback for composite optimization with applications to distributed and federated learning”, Optimization Methods and Software, 2025, 1
  3. Yaowen Wang, Lipo Mo, Min Zuo, Yuanshi Zheng, “One-Point Residual Feedback Algorithms for Distributed Online Convex and Non-Convex Optimization”, International Journal of Robust and Nonlinear Control, 2025
  4. Raghu Bollapragada, Cem Karamanli, Stefan M. Wild, “Derivative-free stochastic optimization via adaptive sampling strategies”, Optimization Methods and Software, 2025, 1
  5. Yan Zhang, Michael M. Zavlanos, “Cooperative Multiagent Reinforcement Learning With Partial Observations”, IEEE Trans. Automat. Contr., 69:2 (2024), 968
  6. A. V. Gasnikov, A. V. Lobanov, F. S. Stonyakin, “Highly Smooth Zeroth-Order Methods for Solving Optimization Problems under the PL Condition”, Comput. Math. and Math. Phys., 64:4 (2024), 739
  7. Wouter Jongeneel, Man-Chung Yue, Daniel Kuhn, “Small Errors in Random Zeroth-Order Optimization Are Imaginary”, SIAM J. Optim., 34:3 (2024), 2638
  8. Alexander Gasnikov, Darina Dvinskikh, Pavel Dvurechensky, Eduard Gorbunov, Aleksandr Beznosikov, Alexander Lobanov, Encyclopedia of Optimization, 2024, 1
  9. Yan Zhang, Yi Zhou, Kaiyi Ji, Yi Shen, Michael M. Zavlanos, “Boosting One-Point Derivative-Free Online Optimization via Residual Feedback”, IEEE Trans. Automat. Contr., 69:9 (2024), 6309
  10. Andrey Veprikov, Alexander Bogdanov, Vladislav Minashkin, Aleksandr Beznosikov, “New aspects of black box conditional gradient: Variance reduction and one point feedback”, Chaos, Solitons & Fractals, 189 (2024), 115654