| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| GPT understands, too | X Liu, Y Zheng, Z Du, M Ding, Y Qian, Z Yang, J Tang | AI Open 5, 208-215, 2024 | 1545* | 2024 |
| GLM: General language model pretraining with autoregressive blank infilling | Z Du, Y Qian, X Liu, M Ding, J Qiu, Z Yang, J Tang | arXiv preprint arXiv:2103.10360, 2021 | 1411 | 2021 |
| P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks | X Liu, K Ji, Y Fu, WL Tam, Z Du, Z Yang, J Tang | arXiv preprint arXiv:2110.07602, 2021 | 1278 | 2021 |
| GLM-130B: An open bilingual pre-trained model | A Zeng, X Liu, Z Du, Z Wang, H Lai, M Ding, Z Yang, Y Xu, W Zheng, X Xia, ... | arXiv preprint arXiv:2210.02414, 2022 | 1003 | 2022 |
| AgentBench: Evaluating LLMs as agents | X Liu, H Yu, H Zhang, Y Xu, X Lei, H Lai, Y Gu, H Ding, K Men, K Yang, ... | arXiv preprint arXiv:2308.03688, 2023 | 244 | 2023 |
| LongBench: A bilingual, multitask benchmark for long context understanding | Y Bai, X Lv, J Zhang, H Lyu, J Tang, Z Huang, Z Du, X Liu, A Zeng, L Hou, ... | arXiv preprint arXiv:2308.14508, 2023 | 216* | 2023 |
| Sequential scenario-specific meta learner for online recommendation | Z Du, X Wang, H Yang, J Zhou, J Tang | Proceedings of the 25th ACM SIGKDD International Conference on Knowledge …, 2019 | 135 | 2019 |
| ChatGLM: A family of large language models from GLM-130B to GLM-4 All Tools | Team GLM, A Zeng, B Xu, B Wang, C Zhang, D Yin, D Zhang, D Rojas, G Feng, ... | arXiv preprint arXiv:2406.12793, 2024 | 132 | 2024 |
| WuDaoCorpora: A super large-scale Chinese corpora for pre-training language models | S Yuan, H Zhao, Z Du, M Ding, X Liu, Y Cen, X Zou, Z Yang, J Tang | AI Open 2, 65-68, 2021 | 108* | 2021 |
| Policy-gradient training of fair and unbiased ranking functions | H Yadav, Z Du, T Joachims | Proceedings of the 44th International ACM SIGIR Conference on Research and …, 2021 | 88* | 2021 |
| WebGLM: Towards an efficient web-enhanced question answering system with human preferences | X Liu, H Lai, H Yu, Y Xu, A Zeng, Z Du, P Zhang, Y Dong, J Tang | Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and …, 2023 | 70 | 2023 |
| CogKR: Cognitive graph for multi-hop knowledge reasoning | Z Du, C Zhou, J Yao, T Tu, L Cheng, H Yang, J Zhou, J Tang | IEEE Transactions on Knowledge and Data Engineering 35 (2), 1283-1295, 2021 | 63* | 2021 |
| Understanding emergent abilities of language models from the loss perspective | Z Du, A Zeng, Y Dong, J Tang | arXiv preprint arXiv:2403.15796, 2024 | 20 | 2024 |
| POLAR: Attention-based CNN for one-shot personalized article recommendation | Z Du, J Tang, Y Ding | Machine Learning and Knowledge Discovery in Databases: European Conference …, 2019 | 18 | 2019 |
| POLAR++: Active one-shot personalized article recommendation | Z Du, J Tang, Y Ding | IEEE Transactions on Knowledge and Data Engineering 33 (6), 2709-2722, 2019 | 13 | 2019 |
| SciGLM: Training scientific language models with self-reflective instruction annotation and tuning | D Zhang, Z Hu, S Zhoubian, Z Du, K Yang, Z Wang, Y Yue, Y Dong, ... | arXiv preprint arXiv:2401.07950, 2024 | 9 | 2024 |
| EFCNN: A restricted convolutional neural network for expert finding | Y Zhao, J Tang, Z Du | Advances in Knowledge Discovery and Data Mining: 23rd Pacific-Asia …, 2019 | 7 | 2019 |
| VisualAgentBench: Towards large multimodal models as visual foundation agents | X Liu, T Zhang, Y Gu, IL Iong, Y Xu, X Song, S Zhang, H Lai, X Liu, H Zhao, ... | arXiv preprint arXiv:2408.06327, 2024 | 6 | 2024 |
| ChatGLM-Math: Improving math problem-solving in large language models with a self-critique pipeline | Y Xu, X Liu, X Liu, Z Hou, Y Li, X Zhang, Z Wang, A Zeng, Z Du, W Zhao, ... | arXiv preprint arXiv:2404.02893, 2024 | 4 | 2024 |
| ChatGLM-RLHF: Practices of aligning large language models with human feedback | Z Hou, Y Niu, Z Du, X Zhang, X Liu, A Zeng, Q Zheng, M Huang, H Wang, ... | arXiv preprint arXiv:2404.00934, 2024 | 3 | 2024 |