Simon Shaolei Du
Assistant Professor, Paul G. Allen School of Computer Science & Engineering, University of Washington
Verified email at cs.washington.edu
Title · Cited by · Year
Gradient descent provably optimizes over-parameterized neural networks
SS Du, X Zhai, B Poczos, A Singh
International Conference on Learning Representations 2019, 2018
Cited by 452 · 2018
Gradient descent finds global minima of deep neural networks
SS Du, JD Lee, H Li, L Wang, X Zhai
International Conference on Machine Learning 2019, 2018
Cited by 421 · 2018
Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks
S Arora, SS Du, W Hu, Z Li, R Wang
International Conference on Machine Learning 2019, 2019
Cited by 315 · 2019
On exact computation with an infinitely wide neural net
S Arora, SS Du, W Hu, Z Li, R Salakhutdinov, R Wang
arXiv preprint arXiv:1904.11955, 2019
Cited by 251 · 2019
Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima
SS Du, JD Lee, Y Tian, B Poczos, A Singh
International Conference on Machine Learning 2018, 2017
Cited by 152 · 2017
Gradient descent can take exponential time to escape saddle points
SS Du, C Jin, JD Lee, MI Jordan, B Poczos, A Singh
arXiv preprint arXiv:1705.10412, 2017
Cited by 137 · 2017
On the power of over-parametrization in neural networks with quadratic activation
SS Du, JD Lee
International Conference on Machine Learning 2018, 2018
Cited by 134 · 2018
When is a convolutional filter easy to learn?
SS Du, JD Lee, Y Tian
International Conference on Learning Representations 2018, 2017
Cited by 96 · 2017
Computationally efficient robust estimation of sparse functionals
SS Du, S Balakrishnan, A Singh
Conference on Learning Theory, 2017, 2017
Cited by 92* · 2017
Stochastic variance reduction methods for policy evaluation
SS Du, J Chen, L Li, L Xiao, D Zhou
International Conference on Machine Learning 2017, 2017
Cited by 91 · 2017
Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced
SS Du, W Hu, JD Lee
arXiv preprint arXiv:1806.00900, 2018
Cited by 66 · 2018
Understanding the acceleration phenomenon via high-resolution differential equations
B Shi, SS Du, MI Jordan, WJ Su
arXiv preprint arXiv:1810.08907, 2018
Cited by 60 · 2018
Linear convergence of the primal-dual gradient method for convex-concave saddle point problems without strong convexity
SS Du, W Hu
International Conference on Artificial Intelligence and Statistics 2019, 2018
Cited by 57 · 2018
What Can Neural Networks Reason About?
K Xu, J Li, M Zhang, SS Du, K Kawarabayashi, S Jegelka
International Conference on Learning Representations 2020, 2019
Cited by 55 · 2019
Stochastic zeroth-order optimization in high dimensions
Y Wang, S Du, S Balakrishnan, A Singh
International Conference on Artificial Intelligence and Statistics 2018, 2017
Cited by 52 · 2017
Provably efficient RL with rich observations via latent state decoding
SS Du, A Krishnamurthy, N Jiang, A Agarwal, M Dudík, J Langford
International Conference on Machine Learning 2019, 2019
Cited by 51 · 2019
Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning?
SS Du, SM Kakade, R Wang, LF Yang
International Conference on Learning Representations 2020, 2019
Cited by 49 · 2019
Graph Neural Tangent Kernel: Fusing Graph Neural Networks with Graph Kernels
SS Du, K Hou, B Póczos, R Salakhutdinov, R Wang, K Xu
Advances in Neural Information Processing Systems 2019, 2019
Cited by 47 · 2019
Provably Efficient Q-learning with Function Approximation via Distribution Shift Error Checking Oracle
SS Du, Y Luo, R Wang, H Zhang
arXiv preprint arXiv:1906.06321, 2019
Cited by 40 · 2019
Harnessing the power of infinitely wide deep nets on small-data tasks
S Arora, SS Du, Z Li, R Salakhutdinov, R Wang, D Yu
International Conference on Learning Representations 2020, 2019
Cited by 39 · 2019