Soufiane Hayou
Simons Institute for the Theory of Computing, UC Berkeley | PhD @ University of Oxford
Verified email at berkeley.edu · Homepage
Title | Cited by | Year
On the impact of the activation function on deep neural networks training
S Hayou, A Doucet, J Rousseau
36th International Conference on Machine Learning (ICML 2019), 2019
255 | 2019
On the selection of initialization and activation function for deep neural networks
S Hayou, A Doucet, J Rousseau
arXiv preprint arXiv:1805.08266, 2018
92 | 2018
Stable ResNet
S Hayou, E Clerico, B He, G Deligiannidis, A Doucet, J Rousseau
24th International Conference on Artificial Intelligence and Statistics …, 2021
54 | 2021
Robust Pruning at Initialization
S Hayou, JF Ton, A Doucet, YW Teh
International Conference on Learning Representations (ICLR 2021), 2021
50 | 2021
Mean-field Behaviour of Neural Tangent Kernel for Deep Neural Networks
S Hayou, A Doucet, J Rousseau
arXiv preprint arXiv:1905.13654, 2019
30 | 2019
Efficient low rank adaptation of large models
S Hayou, N Ghosh, B Yu
arXiv preprint arXiv:2402.12354, 2024
25 | 2024
LoRA+: Efficient Low Rank Adaptation of Large Models
S Hayou, N Ghosh, B Yu
ICML 2024 (arXiv:2402.12354), 2024
21 | 2024
On the infinite-depth limit of finite-width neural networks
S Hayou
Transactions on Machine Learning Research (arXiv:2210.00688), 2022
21 | 2022
Tensor Programs VI: Feature Learning in Infinite-Depth Neural Networks
G Yang, D Yu, C Zhu, S Hayou
ICLR 2024, 2023
15 | 2023
Pruning untrained neural networks: Principles and analysis
S Hayou, JF Ton, A Doucet, YW Teh
arXiv preprint arXiv:2002.08797, 2020
14 | 2020
Feature learning and signal propagation in deep neural networks
Y Lou, CE Mingard, S Hayou
International Conference on Machine Learning, 14248-14282, 2022
13 | 2022
On the impact of the activation function on deep neural networks training
S Hayou, A Doucet, J Rousseau
arXiv preprint arXiv:1902.06853, 2019
13* | 2019
Width and Depth Limits Commute in Residual Networks
S Hayou, G Yang
ICML 2023 (arXiv preprint arXiv:2302.00453), 2023
12 | 2023
Training dynamics of deep networks using stochastic gradient descent via neural tangent kernel
S Hayou, A Doucet, J Rousseau
arXiv preprint arXiv:1905.13654, 2019
12 | 2019
Regularization in ResNet with Stochastic Depth
S Hayou, F Ayed
NeurIPS 2021, arXiv:2106.03091, 2021
10 | 2021
Feature Learning in Infinite Depth Neural Networks
G Yang, D Yu, C Zhu, S Hayou
The Twelfth International Conference on Learning Representations, 2023
7 | 2023
Leave-one-out distinguishability in machine learning
J Ye, A Borovykh, S Hayou, R Shokri
ICLR 2024, arXiv preprint arXiv:2309.17310, 2023
6 | 2023
The curse of depth in kernel regime
S Hayou, A Doucet, J Rousseau
I (Still) Can't Believe It's Not Better! Workshop at NeurIPS 2021, 41-47, 2022
6 | 2022
How Bad is Training on Synthetic Data? A Statistical Analysis of Language Model Collapse
MEA Seddik, SW Chen, S Hayou, P Youssef, M Debbah
arXiv preprint arXiv:2404.05090, 2024
5 | 2024
Data pruning and neural scaling laws: fundamental limitations of score-based algorithms
F Ayed, S Hayou
Transactions on Machine Learning Research, 2023 (arXiv preprint arXiv:2302…)
4 | 2023
Articles 1–20