Qihang Lin
Cited by
Smoothing proximal gradient method for general structured sparse learning
X Chen, Q Lin, S Kim, JG Carbonell, EP Xing
Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial …, 2011
An accelerated proximal coordinate gradient method
Q Lin, Z Lu, L Xiao
Advances in Neural Information Processing Systems, 3059-3067, 2014
Optimistic knowledge gradient policy for optimal budget allocation in crowdsourcing
X Chen, Q Lin, D Zhou
International Conference on Machine Learning, 64-72, 2013
Distributed stochastic variance reduced gradient methods by sampling extra data with replacement
JD Lee, Q Lin, T Ma, T Yang
The Journal of Machine Learning Research 18 (1), 4404-4446, 2017
An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
Q Lin, Z Lu, L Xiao
SIAM Journal on Optimization 25 (4), 2244-2273, 2015
An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization
Q Lin, L Xiao
Computational Optimization and Applications 60 (3), 633-674, 2015
Sparse latent semantic analysis
X Chen, Y Qi, B Bai, Q Lin, JG Carbonell
Proceedings of the 2011 SIAM International Conference on Data Mining, 474-485, 2011
RSG: Beating subgradient method without smoothness and strong convexity
T Yang, Q Lin
arXiv preprint arXiv:1512.03107, 2015
Optimal regularized dual averaging methods for stochastic optimization
X Chen, Q Lin, J Pena
Advances in Neural Information Processing Systems, 395-403, 2012
Non-convex min-max optimization: Provable algorithms and applications in machine learning
H Rafique, M Liu, Q Lin, T Yang
arXiv preprint arXiv:1810.02060, 2018
Stochastic convex optimization: Faster local growth implies faster global convergence
Y Xu, Q Lin, T Yang
Proceedings of the 34th International Conference on Machine Learning-Volume …, 2017
A Unified Analysis of Stochastic Momentum Methods for Deep Learning
Y Yan, T Yang, Z Li, Q Lin, Y Yang
IJCAI, 2955-2961, 2018
A sparsity preserving stochastic gradient methods for sparse regression
Q Lin, X Chen, J Peña
Computational Optimization and Applications 58 (2), 455-482, 2014
DSCOVR: Randomized Primal-Dual Block Coordinate Algorithms for Asynchronous Distributed Optimization
L Xiao, AW Yu, Q Lin, W Chen
Journal of Machine Learning Research 20 (43), 1-58, 2019
Solving weakly-convex-weakly-concave saddle-point problems as weakly-monotone variational inequality
Q Lin, M Liu, H Rafique, T Yang
arXiv preprint arXiv:1810.10207, 2018
Statistical decision making for optimal budget allocation in crowd labeling
X Chen, Q Lin, D Zhou
The Journal of Machine Learning Research 16 (1), 1-46, 2015
Homotopy Smoothing for Non-Smooth Problems with Lower Complexity than O(1/ε)
Y Xu, Y Yan, Q Lin, T Yang
Advances In Neural Information Processing Systems, 1208-1216, 2016
ADMM without a fixed penalty parameter: Faster convergence with new adaptive penalization
Y Xu, M Liu, Q Lin, T Yang
Advances in Neural Information Processing Systems, 1267-1277, 2017
A smoothing stochastic gradient method for composite optimization
Q Lin, X Chen, J Peña
Optimization Methods and Software 29 (6), 1281-1301, 2014
Block-normalized gradient method: An empirical study for training deep neural network
AW Yu, L Huang, Q Lin, R Salakhutdinov, J Carbonell
arXiv preprint arXiv:1707.04822, 2017