Ravid Shwartz-Ziv
Title
Cited by
Year
Opening the black box of deep neural networks via information
R Shwartz-Ziv, N Tishby
Entropy, 21(219), 3390, 2019
1545 · 2019
Tabular Data: Deep Learning is Not All You Need
R Shwartz-Ziv, A Armon
Information Fusion 81, 84-90, 2022
888 · 2022
Information Flow in Deep Neural Networks
R Shwartz Ziv
arXiv preprint arXiv:2202.06749, 2022
185* · 2022
Information in Infinite Ensembles of Infinitely-Wide Neural Networks
R Shwartz-Ziv, AA Alemi
Proceedings of the Symposium on Advances in Approximate Bayesian Inference, 2019
56* · 2019
To Compress or Not to Compress--Self-Supervised Learning and Information Theory: A Review
R Shwartz-Ziv, Y LeCun
arXiv preprint arXiv:2304.09355, 2023
55* · 2023
The dual information bottleneck
Z Piran, R Shwartz-Ziv, N Tishby
https://arxiv.org/abs/2006.04641, 2020
42* · 2020
Representation compression and generalization in deep neural networks
R Shwartz-Ziv, A Painsky, N Tishby
https://openreview.net/pdf?id=SkeL6sCqK7, 2019
40* · 2019
Neural correlates of learning pure tones or natural sounds in the auditory cortex
I Maor, R Shwartz-Ziv, L Feigin, Y Elyada, H Sompolinsky, A Mizrahi
Frontiers in neural circuits 13, 82, 2020
35* · 2020
What Do We Maximize in Self-Supervised Learning?
R Shwartz-Ziv, R Balestriero, Y LeCun
ICML 2022: Pre-training: Perspectives, Pitfalls, and Paths Forward workshop, 2022
30* · 2022
Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors
R Shwartz-Ziv, M Goldblum, H Souri, S Kapoor, C Zhu, Y LeCun, ...
NeurIPS 2022, 2022
29 · 2022
How much data are augmentations worth? An investigation into scaling laws, invariance, and implicit regularization
J Geiping, M Goldblum, G Somepalli, R Shwartz-Ziv, T Goldstein, ...
arXiv preprint arXiv:2210.06441, 2022
22* · 2022
Reverse engineering self-supervised learning
I Ben-Shaul, R Shwartz-Ziv, T Galanti, S Dekel, Y LeCun
Advances in Neural Information Processing Systems 36, 58324-58345, 2023
14 · 2023
Attentioned convolutional LSTM inpainting network for anomaly detection in videos
R Shwartz-Ziv, I Ben-Ari
NIPS 2018 Workshop on Systems for ML, 2018
13* · 2018
Opening the black box of deep neural networks via information. arXiv 2017
R Schwartz-Ziv, N Tishby
arXiv preprint arXiv:1703.00810
11*
Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and Simplicity Bias in MLMs
A Chen, R Shwartz-Ziv, K Cho, ML Leavitt, N Saphra
arXiv preprint arXiv:2309.07311, 2023
7 · 2023
Opening the black box of deep neural networks via information. 2017
R Schwartz-Ziv, N Tishby
arXiv preprint arXiv:1703.00810, 2019
5 · 2019
Variance-Covariance Regularization Improves Representation Learning
J Zhu, R Shwartz-Ziv, Y Chen, Y LeCun
arXiv preprint arXiv:2306.13292, 2023
3 · 2023
Sequence modeling using a memory controller extension for LSTM
R Shwartz-Ziv, I Ben-Ari
NIPS 2017 Time Series Workshop, 2017
3 · 2017
Opening the Black Box of Deep Neural Networks via Information. 2017. eprint
R Shwartz-Ziv, N Tishby
arXiv preprint arXiv:1703.00810
3
Compression of Deep Neural Networks via Information
R Shwartz-Ziv, N Tishby
arXiv preprint arXiv:1703.00810, 2018
2 · 2018
Articles 1–20