Reza Babanezhad
Samsung AI Lab
Title | Cited by | Year
Stop wasting my gradients: Practical SVRG
R Babanezhad, MO Ahmed, A Virani, M Schmidt, J Konečný, S Sallinen
arXiv preprint arXiv:1511.01942, 2015
Cited by 128, 2015
Non-uniform stochastic average gradient method for training conditional random fields
M Schmidt, R Babanezhad, M Ahmed, A Defazio, A Clifton, A Sarkar
Artificial Intelligence and Statistics, 819-828, 2015
Cited by 77, 2015
Faster stochastic variational inference using proximal-gradient methods with general divergence functions
ME Khan, R Babanezhad, W Lin, M Schmidt, M Sugiyama
arXiv preprint arXiv:1511.00146, 2015
Cited by 30, 2015
M-ADDA: Unsupervised domain adaptation with deep metric learning
IH Laradji, R Babanezhad
Domain adaptation for visual understanding, 17-31, 2020
Cited by 20, 2020
A generic top-n recommendation framework for trading-off accuracy, novelty, and coverage
Z Zolaktaf, R Babanezhad, R Pottinger
2018 IEEE 34th International Conference on Data Engineering (ICDE), 149-160, 2018
Cited by 12, 2018
Reducing the variance in online optimization by transporting past gradients
SMR Arnold, PA Manzagol, R Babanezhad, I Mitliagkas, NL Roux
arXiv preprint arXiv:1906.03532, 2019
Cited by 8, 2019
Convergence of proximal-gradient stochastic variational inference under non-decreasing step-size sequence
ME Khan, R Babanezhad, W Lin, M Schmidt, M Sugiyama
arXiv preprint arXiv:1511.00146, 2015
Cited by 8, 2015
Process patterns for web engineering
R Babanezhad, YM Bibalan, R Ramsin
2010 IEEE 34th Annual Computer Software and Applications Conference, 477-486, 2010
Cited by 6, 2010
MASAGA: A linearly-convergent stochastic first-order method for optimization on manifolds
R Babanezhad, IH Laradji, A Shafaei, M Schmidt
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2018
Cited by 4, 2018
An analysis of the adaptation speed of causal models
R Le Priol, R Babanezhad, Y Bengio, S Lacoste-Julien
International Conference on Artificial Intelligence and Statistics, 775-783, 2021
Cited by 3, 2021
To each optimizer a norm, to each norm its generalization
S Vaswani, R Babanezhad, J Gallego, A Mishkin, S Lacoste-Julien, ...
arXiv preprint arXiv:2006.06821, 2020
Cited by 2, 2020
Semantics Preserving Adversarial Learning
OA Dia, E Barshan, R Babanezhad
arXiv preprint arXiv:1903.03905, 2019
Cited by 1, 2019
SVRG Meets AdaGrad: Painless Variance Reduction
B Dubois-Taine, S Vaswani, R Babanezhad, M Schmidt, S Lacoste-Julien
arXiv preprint arXiv:2102.09645, 2021
2021
Geometry-Aware Universal Mirror-Prox
R Babanezhad, S Lacoste-Julien
arXiv preprint arXiv:2011.11203, 2020
2020
Semantics Preserving Adversarial Attacks
OA Dia, E Barshan, R Babanezhad
2019
Semantics Preserving Adversarial Learning
O Amadou Dia, E Barshan, R Babanezhad
arXiv e-prints, arXiv: 1903.03905, 2019
2019
Practical optimization methods for machine learning models
R Babanezhad Harikandeh
University of British Columbia, 2019
2019
Manifold Preserving Adversarial Learning.
OA Dia, E Barshan, R Babanezhad
CoRR, 2019
2019
Online variance-reducing optimization
N Le Roux, R Babanezhad, PA Manzagol
2018
Faster Stochastic Variational Inference using Proximal-Gradient Methods with General Divergence Functions
M Emtiyaz Khan, R Babanezhad, W Lin, M Schmidt, M Sugiyama
arXiv e-prints, arXiv: 1511.00146, 2015
2015