Konstantin Mishchenko
Verified email at kaust.edu.sa - Homepage
Cited by
Tighter Theory for Local SGD on Identical and Heterogeneous Data
A Khaled, K Mishchenko, P Richtárik
International Conference on Artificial Intelligence and Statistics, 4519-4529, 2020
First Analysis of Local GD on Heterogeneous Data
A Khaled, K Mishchenko, P Richtárik
NeurIPS FL Workshop, arXiv preprint arXiv:1909.04715, 2019
Distributed Learning with Compressed Gradient Differences
K Mishchenko, E Gorbunov, M Takáč, P Richtárik
arXiv preprint arXiv:1901.09269, 2019
Stochastic Distributed Learning with Gradient Quantization and Variance Reduction
S Horváth, D Kovalev, K Mishchenko, S Stich, P Richtárik
arXiv preprint arXiv:1904.05115, 2019
SEGA: Variance Reduction via Gradient Sketching
F Hanzely, K Mishchenko, P Richtárik
Advances in Neural Information Processing Systems, 2082-2093, 2018
A Delay-tolerant Proximal-Gradient Algorithm for Distributed Learning
K Mishchenko, F Iutzeler, J Malick, MR Amini
International Conference on Machine Learning, 3584-3592, 2018
Revisiting Stochastic Extragradient
K Mishchenko, D Kovalev, E Shulgin, P Richtárik, Y Malitsky
AISTATS 2020, 2019
A distributed flexible delay-tolerant proximal gradient algorithm
K Mishchenko, F Iutzeler, J Malick
SIAM Journal on Optimization 30 (1), 933-959, 2020
99% of worker-master communication in distributed optimization is not needed
K Mishchenko, F Hanzely, P Richtárik
Conference on Uncertainty in Artificial Intelligence, 979-988, 2020
Stochastic Newton and Cubic Newton Methods with Simple Local Linear-Quadratic Rates
D Kovalev, K Mishchenko, P Richtárik
arXiv preprint arXiv:1912.01597, 2019
MISO is Making a Comeback With Better Proofs and Rates
X Qian, A Sailanbayev, K Mishchenko, P Richtárik
arXiv preprint arXiv:1906.01474, 2019
A Stochastic Decoupling Method for Minimizing the Sum of Smooth and Non-Smooth Functions
K Mishchenko, P Richtárik
arXiv preprint arXiv:1905.11535, 2019
Adaptive Gradient Descent without Descent
K Mishchenko, Y Malitsky
37th International Conference on Machine Learning (ICML 2020), 2020
A Stochastic Penalty Model for Convex and Nonconvex Optimization with Big Constraints
K Mishchenko, P Richtárik
arXiv preprint arXiv:1810.13387, 2018
DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate
S Soori, K Mishchenko, A Mokhtari, MM Dehnavi, M Gurbuzbalaban
AISTATS 2020, 2019
Dualize, Split, Randomize: Fast Nonsmooth Optimization Algorithms
A Salim, L Condat, K Mishchenko, P Richtárik
arXiv preprint arXiv:2004.02635, 2020
Sinkhorn Algorithm as a Special Case of Stochastic Mirror Descent
K Mishchenko
NeurIPS OTML Workshop, arXiv preprint arXiv:1909.06918, 2019
Random Reshuffling: Simple Analysis with Vast Improvements
K Mishchenko, A Khaled, P Richtárik
arXiv preprint arXiv:2006.05988, 2020
A Self-supervised Approach to Hierarchical Forecasting with Applications to Groupwise Synthetic Controls
K Mishchenko, M Montgomery, F Vaggi
Time Series Workshop at International Conference on Machine Learning, 2019