Machel Reid
Research Scientist, Google DeepMind
Verified email at google.com
Title · Cited by · Year
Large Language Models are Zero-Shot Reasoners
T Kojima, SS Gu, M Reid, Y Matsuo, Y Iwasawa
NeurIPS 2022, 2022
3571 · 2022
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
2192 · 2023
Gemma: Open models based on Gemini research and technology
G Team, T Mesnard, C Hardin, R Dadashi, S Bhupatiraju, S Pathak, ...
arXiv preprint arXiv:2403.08295, 2024
724 · 2024
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
G Team, P Georgiev, VI Lei, R Burnell, L Bai, A Gulati, G Tanzer, ...
arXiv preprint arXiv:2403.05530, 2024
689 · 2024
Gemma 2: Improving open language models at a practical size
G Team, M Riviere, S Pathak, PG Sessa, C Hardin, S Bhupatiraju, ...
arXiv preprint arXiv:2408.00118, 2024
193 · 2024
Can Wikipedia Help Offline Reinforcement Learning?
M Reid, Y Yamada, SS Gu
arXiv preprint arXiv:2201.12122, 2022
97 · 2022
LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer
M Reid, V Zhong
Findings of the Annual Meeting of the Association for Computational …, 2021
71 · 2021
DiffusER: Diffusion via Edit-based Reconstruction
M Reid, VJ Hellendoorn, G Neubig
The Eleventh International Conference on Learning Representations, 2023
54* · 2023
A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for African News Translation
DI Adelani, JO Alabi, A Fan, J Kreutzer, X Shen, M Reid, D Ruiter, ...
NAACL 2022, 2022
53* · 2022
Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers
M Reid, E Marrese-Taylor, Y Matsuo
Findings of Empirical Methods in Natural Language Processing (EMNLP), 2021
47 · 2021
Learning to Model Editing Processes
M Reid, G Neubig
Findings of Empirical Methods in Natural Language Processing (EMNLP), 2022
34 · 2022
AfroMT: Pretraining Strategies and Reproducible Benchmarks for Translation of 8 African Languages
M Reid, J Hu, G Neubig, Y Matsuo
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
29 · 2021
VCDM: Leveraging Variational Bi-encoding and Deep Contextualized Word Representations for Improved Definition Modeling
M Reid, E Marrese-Taylor, Y Matsuo
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
23 · 2020
M2D2: A Massively Multi-domain Language Modeling Dataset
M Reid, V Zhong, S Gururangan, L Zettlemoyer
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
20 · 2022
PARADISE: Exploiting Parallel Data for Multilingual Sequence-to-Sequence Pretraining
M Reid, M Artetxe
Conference of the North American Chapter of the Association for …, 2021
20 · 2021
mmT5: Modular Multilingual Pre-training Solves Source Language Hallucinations
J Pfeiffer, F Piccinno, M Nicosia, X Wang, M Reid, S Ruder
arXiv preprint arXiv:2305.14224, 2023
17 · 2023
Low-Resource Machine Translation Using Cross-Lingual Language Model Pretraining
F Zheng, M Reid, E Marrese-Taylor, Y Matsuo
AmericasNLP Workshop, NAACL 2021, 2021
14 · 2021
On the Impact of Data Augmentation on Downstream Performance in Natural Language Processing
I Okimura, M Reid, M Kawano, Y Matsuo
Proceedings of the Third Workshop on Insights from Negative Results in NLP, 2022
13 · 2022
BUFFET: Benchmarking Large Language Models for Few-shot Cross-lingual Transfer
A Asai, S Kudugunta, XV Yu, T Blevins, H Gonen, M Reid, Y Tsvetkov, ...
arXiv preprint arXiv:2305.14857, 2023
12 · 2023
Variational Inference for Learning Representations of Natural Language Edits
E Marrese-Taylor, M Reid, Y Matsuo
AAAI 2021, 2020
8 · 2020