Jamie Hayes
Google DeepMind
Verified email at google.com
LOGAN: evaluating privacy leakage of generative models using generative adversarial networks
J Hayes, L Melis, G Danezis, E De Cristofaro
arXiv preprint arXiv:1705.07663, 506-519, 2017
Cited by 601* · 2017
k-fingerprinting: A robust scalable website fingerprinting technique
J Hayes, G Danezis
25th USENIX Security Symposium (USENIX Security 16), 1187-1203, 2016
Cited by 416 · 2016
Generating steganographic images via adversarial training
J Hayes, G Danezis
Advances in neural information processing systems 30, 2017
Cited by 307 · 2017
Extracting training data from diffusion models
N Carlini, J Hayes, M Nasr, M Jagielski, V Sehwag, F Tramer, B Balle, ...
32nd USENIX Security Symposium (USENIX Security 23), 5253-5270, 2023
Cited by 266 · 2023
The Loopix anonymity system
AM Piotrowska, J Hayes, T Elahi, S Meiser, G Danezis
26th USENIX Security Symposium (USENIX Security 17), 1199-1216, 2017
Cited by 208 · 2017
Learning universal adversarial perturbations with generative models
J Hayes, G Danezis
2018 IEEE Security and Privacy Workshops (SPW), 43-49, 2018
Cited by 158 · 2018
Unlocking high-accuracy differentially private image classification through scale
S De, L Berrada, J Hayes, SL Smith, B Balle
arXiv preprint arXiv:2204.13650, 2022
Cited by 132 · 2022
On visible adversarial perturbations & digital watermarking
J Hayes
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018
Cited by 110 · 2018
Local and central differential privacy for robustness and privacy in federated learning
M Naseri, J Hayes, E De Cristofaro
arXiv preprint arXiv:2009.03561, 2020
Cited by 109 · 2020
Website fingerprinting defenses at the application layer
G Cherubin, J Hayes, M Juárez
Proceedings on Privacy Enhancing Technologies 2017 (2), 168-185, 2017
Cited by 100 · 2017
Reconstructing training data with informed adversaries
B Balle, G Cherubin, J Hayes
2022 IEEE Symposium on Security and Privacy (SP), 1138-1156, 2022
Cited by 99 · 2022
Contamination attacks and mitigation in multi-party machine learning
J Hayes, O Ohrimenko
Advances in neural information processing systems 31, 2018
Cited by 95 · 2018
Toward robustness and privacy in federated learning: Experimenting with local and central differential privacy
M Naseri, J Hayes, E De Cristofaro
arXiv preprint arXiv:2009.03561, 2020
Cited by 81 · 2020
A framework for robustness certification of smoothed classifiers using f-divergences
KD Dvijotham, J Hayes, B Balle, Z Kolter, C Qin, A Gyorgy, K Xiao, ...
Cited by 43 · 2020
Guard Sets for Onion Routing
J Hayes, G Danezis
Proceedings on Privacy Enhancing Technologies 1 (2), 65-80, 2015
Cited by 37* · 2015
Tight auditing of differentially private machine learning
M Nasr, J Hayes, T Steinke, B Balle, F Tramèr, M Jagielski, N Carlini, ...
32nd USENIX Security Symposium (USENIX Security 23), 1631-1648, 2023
Cited by 30 · 2023
Towards unbounded machine unlearning
M Kurmanji, P Triantafillou, J Hayes, E Triantafillou
Advances in Neural Information Processing Systems 36, 2024
Cited by 28 · 2024
Differentially private diffusion models generate useful synthetic images
S Ghalebikesabi, L Berrada, S Gowal, I Ktena, R Stanforth, J Hayes, S De, ...
arXiv preprint arXiv:2302.13861, 2023
Cited by 28 · 2023
Extensions and limitations of randomized smoothing for robustness guarantees
J Hayes
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Cited by 25 · 2020
Evading classifiers in discrete domains with provable optimality guarantees
B Kulynych, J Hayes, N Samarin, C Troncoso
arXiv preprint arXiv:1810.10939, 2018
Cited by 24 · 2018
Articles 1–20