Graham Neubig
Title
Cited by
Year
Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing
P Liu, W Yuan, J Fu, Z Jiang, H Hayashi, G Neubig
ACM Computing Surveys, 2021
Cited by 3429 · 2021
How can we know what language models know?
Z Jiang, FF Xu, J Araki, G Neubig
TACL 8, 423-438, 2020
Cited by 1152 · 2020
NusaCrowd: Open source initiative for Indonesian NLP resources
S Cahyawijaya, H Lovenia, AF Aji, G Winata, B Wilie, F Koto, R Mahendra, ...
Findings of the Association for Computational Linguistics: ACL 2023, 13745-13818, 2023
Cited by 1033 · 2023
Are Sixteen Heads Really Better than One?
P Michel, O Levy, G Neubig
NeurIPS 2019, 2019
Cited by 971 · 2019
XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization
J Hu, S Ruder, A Siddhant, G Neubig, O Firat, M Johnson
ICML 2020, 2020
Cited by 832 · 2020
A Syntactic Neural Model for General-Purpose Code Generation
P Yin, G Neubig
ACL 2017, 2017
Cited by 758 · 2017
Towards a unified view of parameter-efficient transfer learning
J He, C Zhou, X Ma, T Berg-Kirkpatrick, G Neubig
ICLR 2022, 2022
Cited by 640 · 2022
BARTScore: Evaluating Generated Text as Text Generation
W Yuan, G Neubig, P Liu
NeurIPS 2021, 2021
Cited by 568 · 2021
TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data
P Yin, G Neubig, W Yih, S Riedel
ACL 2020, 2020
Cited by 477 · 2020
Dynet: The dynamic neural network toolkit
G Neubig, C Dyer, Y Goldberg, A Matthews, W Ammar, A Anastasopoulos, ...
arXiv preprint arXiv:1701.03980, 2017
Cited by 448* · 2017
A Systematic Evaluation of Large Language Models of Code
FF Xu, U Alon, G Neubig, VJ Hellendoorn
ICLR DL4C Workshop, 2022
Cited by 431 · 2022
PAL: Program-aided Language Models
L Gao, A Madaan, S Zhou, U Alon, P Liu, Y Yang, J Callan, G Neubig
ICML 2023, 2022
Cited by 426 · 2022
When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation?
Y Qi, DS Sachan, M Felix, SJ Padmanabhan, G Neubig
NAACL 2018, 2018
Cited by 394 · 2018
Pointwise prediction for robust, adaptable Japanese morphological analysis
G Neubig, Y Nakata, S Mori
ACL 2011, 529-533, 2011
Cited by 354 · 2011
Learning to generate pseudo-code from source code using statistical machine translation (t)
Y Oda, H Fudaba, G Neubig, H Hata, S Sakti, T Toda, S Nakamura
ASE 2015, 574-584, 2015
Cited by 353 · 2015
Stress Test Evaluation for Natural Language Inference
A Naik, A Ravichander, N Sadeh, C Rose, G Neubig
COLING 2018, 2018
Cited by 352 · 2018
Weight Poisoning Attacks on Pre-trained Models
K Kurita, P Michel, G Neubig
ACL 2020, 2020
Cited by 349 · 2020
Lagging Inference Networks and Posterior Collapse in Variational Autoencoders
J He, D Spokoyny, G Neubig, T Berg-Kirkpatrick
ICLR 2019, 2019
Cited by 329 · 2019
Competence-based Curriculum Learning for Neural Machine Translation
EA Platanios, O Stretcu, G Neubig, B Poczos, TM Mitchell
NAACL 2019, 2019
Cited by 321 · 2019
Controllable Invariance through Adversarial Feature Learning
Q Xie, Z Dai, Y Du, E Hovy, G Neubig
NIPS 2017, 2017
Cited by 298 · 2017
Articles 1–20