| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| REPLUG: Retrieval-Augmented Black-Box Language Models | W Shi, S Min, M Yasunaga, M Seo, R James, M Lewis, L Zettlemoyer, ... | arXiv preprint arXiv:2301.12652 | 363* | 2023 |
| One Embedder, Any Task: Instruction-Finetuned Text Embeddings | H Su, W Shi, J Kasai, Y Wang, Y Hu, M Ostendorf, W Yih, NA Smith, ... | arXiv preprint arXiv:2212.09741 | 181 | 2022 |
| Fine-grained human feedback gives better rewards for language model training | Z Wu, Y Hu, W Shi, N Dziri, A Suhr, P Ammanabrolu, NA Smith, ... | Advances in Neural Information Processing Systems 36 | 176* | 2024 |
| Selective annotation makes language models better few-shot learners | H Su, J Kasai, CH Wu, W Shi, T Wang, J Xin, R Zhang, M Ostendorf, ... | arXiv preprint arXiv:2209.01975 | 170* | 2022 |
| Examining gender bias in languages with grammatical gender | P Zhou, W Shi, J Zhao, KH Huang, M Chen, R Cotterell, KW Chang | arXiv preprint arXiv:1909.02224 | 147* | 2019 |
| Embedding uncertain knowledge graphs | X Chen, M Chen, W Shi, Y Sun, C Zaniolo | Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 3363-3370 | 144 | 2019 |
| Detecting pretraining data from large language models | W Shi, A Ajith, M Xia, Y Huang, D Liu, T Blevins, D Chen, L Zettlemoyer | arXiv preprint arXiv:2310.16789 | 121 | 2023 |
| PromptCap: Prompt-guided task-aware image captioning | Y Hu, H Hua, Z Yang, W Shi, NA Smith, J Luo | arXiv preprint arXiv:2211.09699 | 98* | 2022 |
| On tractable representations of binary neural networks | W Shi, A Shih, A Darwiche, A Choi | arXiv preprint arXiv:2004.02082 | 94* | 2020 |
| Retrieval-augmented multimodal language modeling | M Yasunaga, A Aghajanyan, W Shi, R James, J Leskovec, P Liang, ... | arXiv preprint arXiv:2211.12561 | 89 | 2022 |
| Trusting your evidence: Hallucinate less with context-aware decoding | W Shi, X Han, M Lewis, Y Tsvetkov, L Zettlemoyer, SW Yih | arXiv preprint arXiv:2305.14739 | 75 | 2023 |
| Retrofitting contextualized word embeddings with paraphrases | W Shi, M Chen, P Zhou, KW Chang | arXiv preprint arXiv:1909.09700 | 72* | 2019 |
| RA-DIT: Retrieval-augmented dual instruction tuning | XV Lin, X Chen, M Chen, W Shi, M Lomeli, R James, P Rodriguez, J Kahn, ... | arXiv preprint arXiv:2310.01352 | 63 | 2023 |
| RECOMP: Improving retrieval-augmented LMs with compression and selective augmentation | F Xu, W Shi, E Choi | arXiv preprint arXiv:2310.04408 | 61* | 2023 |
| Nonparametric masked language modeling | S Min, W Shi, M Lewis, X Chen, W Yih, H Hajishirzi, L Zettlemoyer | arXiv preprint arXiv:2212.01349 | 56 | 2022 |
| kNN-Prompt: Nearest Neighbor Zero-Shot Inference | W Shi, J Michael, S Gururangan, L Zettlemoyer | arXiv preprint arXiv:2205.13792 | 56* | 2022 |
| SILO Language Models: Isolating Legal Risk in a Nonparametric Datastore | S Min, S Gururangan, E Wallace, W Shi, H Hajishirzi, NA Smith, ... | arXiv preprint arXiv:2308.04430 | 42 | 2023 |
| Scaling expert language models with unsupervised domain discovery | S Gururangan, M Li, M Lewis, W Shi, T Althoff, NA Smith, L Zettlemoyer | arXiv preprint arXiv:2303.14177 | 39* | 2023 |
| Lemur: Harmonizing natural language and code for language agents | Y Xu, H Su, C Xing, B Mi, Q Liu, W Shi, B Hui, F Zhou, Y Liu, T Xie, ... | arXiv preprint arXiv:2310.06830 | 38 | 2023 |
| Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too? | W Shi, X Han, H Gonen, A Holtzman, Y Tsvetkov, L Zettlemoyer | arXiv preprint arXiv:2212.10539 | 33 | 2022 |