Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. A. Srivastava, A. Rastogi, A. Rao, A. A. M. Shoeb, A. Abid, A. Fisch, A. R. Brown, et al. arXiv preprint arXiv:2206.04615, 2022. Cited by 727.
NL-Augmenter: A framework for task-sensitive natural language augmentation. K. D. Dhole, V. Gangal, S. Gehrmann, A. Gupta, Z. Li, S. Mahamood, et al. arXiv preprint arXiv:2112.02721, 2021. Cited by 64.
Explaining relationships between scientific documents. K. Luu, X. Wu, R. Koncel-Kedziorski, K. Lo, I. Cachola, N. A. Smith. arXiv preprint arXiv:2002.00317, 2020. Cited by 54*.
What makes a good counselor? Learning to distinguish between high-quality and low-quality counseling conversations. V. Pérez-Rosas, X. Wu, K. Resnicow, R. Mihalcea. Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019. Cited by 53.
Linguistically-informed transformations (LIT): A method for automatically generating contrast sets. C. Li, L. Shengshuo, L. Z. Liu, X. Wu, X. Zhou, S. Steinert-Threlkeld. arXiv preprint arXiv:2010.08580, 2020. Cited by 29.