Seungone Kim
Verified email at kaist.ac.kr - Homepage
Title
Cited by
Year
Exploring the benefits of training expert language models over instruction tuning
J Jang, S Kim, S Ye, D Kim, L Logeswaran, M Lee, K Lee, M Seo
ICML 2023, 2023
34 · 2023
Prometheus: Inducing fine-grained evaluation capability in language models
S Kim, J Shin, Y Cho, J Jang, S Longpre, H Lee, S Yun, S Shin, S Kim, ...
ICLR 2024, 2023
30* · 2023
Flask: Fine-grained language model evaluation based on alignment skill sets
S Ye, D Kim, S Kim, H Hwang, S Kim, Y Jo, J Thorne, J Kim, M Seo
ICLR 2024, 2023
25 · 2023
The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning
S Kim, SJ Joo, D Kim, J Jang, S Ye, J Shin, M Seo
EMNLP 2023, 2023
22 · 2023
Personalized soups: Personalized large language model alignment via post-hoc parameter merging
J Jang, S Kim, BY Lin, Y Wang, J Hessel, L Zettlemoyer, H Hajishirzi, ...
arXiv preprint arXiv:2310.11564, 2023
18 · 2023
Mind the Gap! Injecting Commonsense Knowledge for Abstractive Dialogue Summarization
S Kim, SJ Joo, H Chae, C Kim, S Hwang, J Yeo
COLING 2022, 2022
12 · 2022
CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification
S Kim, SJ Joo, Y Jang, H Chae, J Yeo
EACL 2023, 2023
5 · 2023
KMMLU: Measuring Massive Multitask Language Understanding in Korean
G Son, H Lee, S Kim, S Kim, N Muennighoff, T Choi, C Park, KM Yoo, ...
arXiv preprint arXiv:2402.11548, 2024
4 · 2024
LangBridge: Multilingual Reasoning Without Multilingual Supervision
D Yoon, J Jang, S Kim, S Kim, S Shafayat, M Seo
ICLR 2024 ME-FoMo Workshop, 2024
2 · 2024
Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards
H Hwang, D Kim, S Kim, S Ye, M Seo
arXiv preprint arXiv:2404.10346, 2024
2024
Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models
H Chae, Y Kim, S Kim, KT Ong, B Kwak, M Kim, S Kim, T Kwon, J Chung, ...
arXiv preprint arXiv:2404.02575, 2024
2024
Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once?
G Son, S Baek, S Nam, I Jeong, S Kim
ICLR 2024 ME-FoMo Workshop, 2024
2024
Prometheus-Vision: Vision-Language Model as a Judge for Fine-Grained Evaluation
S Lee, S Kim, SH Park, G Kim, M Seo
ICLR 2024 ME-FoMo Workshop, 2024
2024
Articles 1–13