Albert S. Berahas
Assistant Professor, University of Michigan
Verified email at umich.edu
Title
Cited by
Year
A multi-batch L-BFGS method for machine learning
AS Berahas, J Nocedal, M Takáč
Advances in Neural Information Processing Systems, 1063-1071, 2016
Cited by 82 · 2016
An Investigation of Newton-Sketch and Subsampled Newton Methods
AS Berahas, R Bollapragada, J Nocedal
Optimization Methods and Software 35 (4), 661-680, 2020
Cited by 76 · 2020
Balancing communication and computation in distributed optimization
AS Berahas, R Bollapragada, NS Keskar, E Wei
IEEE Transactions on Automatic Control 64 (8), 3141-3155, 2018
Cited by 54 · 2018
Quasi-Newton methods for deep learning: Forget the past, just sample
AS Berahas, M Jahani, P Richtárik, M Takáč
arXiv preprint arXiv:1901.09997, 2019
Cited by 38 · 2019
A theoretical and empirical comparison of gradient approximations in derivative-free optimization
AS Berahas, L Cao, K Choromanski, K Scheinberg
Foundations of Computational Mathematics, 1-54, 2021
Cited by 36 · 2021
Derivative-free optimization of noisy functions via quasi-Newton methods
AS Berahas, RH Byrd, J Nocedal
SIAM Journal on Optimization 29 (2), 965-993, 2019
Cited by 35 · 2019
adaQN: An Adaptive Quasi-Newton Algorithm for Training RNNs
NS Keskar, AS Berahas
Joint European Conference on Machine Learning and Knowledge Discovery in …, 2016
Cited by 35 · 2016
A robust multi-batch L-BFGS method for machine learning
AS Berahas, M Takáč
Optimization Methods and Software 35 (1), 191-219, 2020
Cited by 27 · 2020
Global convergence rate analysis of a generic line search algorithm with noise
AS Berahas, L Cao, K Scheinberg
SIAM Journal on Optimization 31 (2), 1489-1518, 2021
Cited by 12 · 2021
Sparse representation and least squares-based classification in face recognition
M Iliadis, L Spinoulas, AS Berahas, H Wang, AK Katsaggelos
2014 22nd European Signal Processing Conference (EUSIPCO), 526-530, 2014
Cited by 12 · 2014
Scaling Up Quasi-Newton Algorithms: Communication Efficient Distributed SR1
M Jahani, M Nazari, S Rusakov, AS Berahas, M Takáč
6th International Conference on Machine Learning, Optimization, and Data …, 2020
Cited by 10 · 2020
Linear interpolation gives better gradients than Gaussian smoothing in derivative-free optimization
AS Berahas, L Cao, K Choromanski, K Scheinberg
arXiv preprint arXiv:1905.13043, 2019
Cited by 9 · 2019
SONIA: A symmetric blockwise truncated optimization algorithm
M Jahani, M Nazari, R Tappenden, AS Berahas, M Takáč
International Conference on Artificial Intelligence and Statistics, 487-495, 2021
Cited by 5 · 2021
Sequential Quadratic Optimization for Nonlinear Equality Constrained Stochastic Optimization
AS Berahas, FE Curtis, D Robinson, B Zhou
SIAM Journal on Optimization 31 (2), 1352-1379, 2021
Cited by 5 · 2021
Nested Distributed Gradient Methods with Adaptive Quantized Communication
AS Berahas, C Iakovidou, E Wei
58th IEEE Conference on Decision and Control (CDC), 1519-1525, 2019
Cited by 4 · 2019
A stochastic sequential quadratic optimization algorithm for nonlinear equality constrained optimization with rank-deficient Jacobians
AS Berahas, FE Curtis, MJ O'Neill, DP Robinson
arXiv preprint arXiv:2106.13015, 2021
Cited by 3 · 2021
On the convergence of nested decentralized gradient methods with multiple consensus and gradient steps
AS Berahas, R Bollapragada, E Wei
IEEE Transactions on Signal Processing 69, 4192-4203, 2021
Cited by 2 · 2021
Multi-model robust error correction for face recognition
M Iliadis, L Spinoulas, AS Berahas, H Wang, AK Katsaggelos
2016 IEEE International Conference on Image Processing (ICIP), 3229-3233, 2016
Cited by 2 · 2016
Limited-memory BFGS with displacement aggregation
AS Berahas, FE Curtis, B Zhou
Mathematical Programming, 1-37, 2021
Cited by 1 · 2021
Quasi-Newton methods for machine learning: forget the past, just sample
AS Berahas, M Jahani, P Richtárik, M Takáč
Optimization Methods and Software, 1-37, 2021
Year 2021