Matthew E Peters
Spiffy AI, Allen Institute for Artificial Intelligence
Verified email at allenai.org
Cited by
Deep contextualized word representations
ME Peters, M Neumann, M Iyyer, M Gardner, C Clark, K Lee, ...
NAACL, Best Paper, 2018
Longformer: The Long-Document Transformer
I Beltagy, ME Peters, A Cohan
arXiv preprint arXiv:2004.05150, 2020
AllenNLP: A Deep Semantic Natural Language Processing Platform
M Gardner, J Grus, M Neumann, O Tafjord, P Dasigi, N Liu, M Peters, ...
arXiv preprint arXiv:1803.07640, 2018
Semi-supervised sequence tagging with bidirectional language models
ME Peters, W Ammar, C Bhagavatula, R Power
arXiv preprint arXiv:1705.00108, 2017
Relationships between water vapor path and precipitation over the tropical oceans
CS Bretherton, ME Peters, LE Back
Journal of climate 17 (7), 1517-1528, 2004
Knowledge enhanced contextual word representations
ME Peters, M Neumann, RL Logan IV, R Schwartz, V Joshi, S Singh, ...
arXiv preprint arXiv:1909.04164, 2019
Linguistic Knowledge and Transferability of Contextual Representations
NF Liu, M Gardner, Y Belinkov, M Peters, NA Smith
arXiv preprint arXiv:1903.08855, 2019
Transfer Learning in Natural Language Processing
S Ruder, ME Peters, S Swayamdipta, T Wolf
Proceedings of the 2019 Conference of the North American Chapter of the …, 2019
To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks
M Peters, S Ruder, NA Smith
arXiv preprint arXiv:1903.05987, 2019
Construction of the Literature Graph in Semantic Scholar
W Ammar, D Groeneveld, C Bhagavatula, I Beltagy, M Crawford, ...
arXiv preprint arXiv:1805.02262, 2018
Dissecting Contextual Word Embeddings: Architecture and Representation
ME Peters, M Neumann, L Zettlemoyer, W Yih
arXiv preprint arXiv:1808.08949, 2018
Understanding the origin and analysis of sediment-charcoal records with a simulation model
PE Higuera, ME Peters, LB Brubaker, DG Gavin
Quaternary Science Reviews 26 (13-14), 1790-1809, 2007
Barack’s Wife Hillary: Using Knowledge Graphs for Fact-Aware Language Modeling
RL Logan IV, NF Liu, ME Peters, M Gardner, S Singh
Adversarial filters of dataset biases
R Le Bras, S Swayamdipta, C Bhagavatula, R Zellers, M Peters, ...
International Conference on Machine Learning, 1078-1088, 2020
Quantifying the source area of macroscopic charcoal with a particle dispersal model
ME Peters, PE Higuera
Quaternary Research 67 (2), 304-310, 2007
Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples
V Joshi, M Peters, M Hopkins
arXiv preprint arXiv:1805.06556, 2018
Explaining NLP Models via Minimal Contrastive Editing (MiCE)
A Ross, A Marasović, ME Peters
arXiv preprint arXiv:2012.13985, 2020
ATTEMPT: Parameter-efficient multi-task tuning via attentional mixtures of soft prompts
A Asai, M Salehi, ME Peters, H Hajishirzi
arXiv preprint arXiv:2205.11961, 2022
Competency Problems: On Finding and Removing Artifacts in Language Data
M Gardner, W Merrill, J Dodge, ME Peters, A Ross, S Singh, N Smith
arXiv preprint arXiv:2104.08646, 2021
A simplified model of the Walker circulation with an interactive ocean mixed layer and cloud-radiative feedbacks
ME Peters, CS Bretherton
Journal of climate 18 (20), 4216-4234, 2005