Multilingual LLM Evaluation in Practical Settings - Sebastian Ruder (Meta)
Author: HiTZ zentroa
Uploaded: 2025-02-10
Views: 308
Large language models (LLMs) are increasingly used in a variety of applications across the globe but do not provide equal utility across languages. In this talk, I will discuss multilingual evaluation of LLMs in two practical settings: conversational instruction-following and usage of quantized models. For the first part, I will focus on a specific aspect of multilingual conversational ability where errors result in a jarring user experience: generating text in the user’s desired language. I will describe a new benchmark and evaluation of a range of LLMs. We find that even the strongest models exhibit language confusion, i.e., they fail to consistently respond in the correct language. I will discuss what affects language confusion, how to mitigate it, and potential extensions. In the second part, I will discuss the first evaluation study of quantized multilingual LLMs across languages. We find that automatic metrics severely underestimate the negative impact of quantization and that human evaluation—which has been neglected by prior studies—is key to revealing harmful effects. Overall, I highlight limitations of multilingual LLMs and challenges of real-world multilingual evaluation.
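As a rough illustration of the first setting, the sketch below computes a simple line-level language-confusion rate: the fraction of model responses not written in the user's desired language. The helper name, the example data, and the use of the langdetect library are assumptions for illustration; the talk's actual benchmark and metrics may differ.

```python
from langdetect import detect, DetectorFactory

DetectorFactory.seed = 0  # make langdetect's detection deterministic

def language_confusion_rate(examples: list[dict]) -> float:
    """Fraction of responses NOT in the user's desired language (hypothetical helper)."""
    confused = 0
    for ex in examples:
        try:
            detected = detect(ex["response"])
        except Exception:
            detected = "unknown"  # empty or undetectable output counts as confused
        if detected != ex["expected_lang"]:
            confused += 1
    return confused / len(examples)

# Hypothetical model outputs: the second response drifts into English
# even though the user asked for German.
examples = [
    {"expected_lang": "de", "response": "Die Hauptstadt von Frankreich ist Paris."},
    {"expected_lang": "de", "response": "The capital of France is Paris."},
]
print(f"Language confusion rate: {language_confusion_rate(examples):.2f}")  # 0.50
```

Aggregating such a rate per language makes it easy to see which languages a model drifts away from most often.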
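For the second setting, a minimal sketch of how one might compare a full-precision model against its 4-bit quantized counterpart on a non-English prompt, using the transformers and bitsandbytes libraries. The model name is only an example, and the talk's actual evaluation setup may differ; printing the two generations side by side is what enables the human evaluation the abstract argues is essential.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "meta-llama/Llama-2-7b-chat-hf"  # example checkpoint, an assumption
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Full-precision (fp16) baseline.
model_fp16 = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# 4-bit quantized variant of the same checkpoint.
model_4bit = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

# A German prompt; quality degradation from quantization is often
# easier to spot in non-English output.
prompt = "Erkläre in zwei Sätzen, was Quantisierung bei Sprachmodellen bedeutet."
inputs = tokenizer(prompt, return_tensors="pt").to(model_fp16.device)

for name, model in [("fp16", model_fp16), ("4-bit", model_4bit)]:
    output = model.generate(**inputs, max_new_tokens=100, do_sample=False)
    text = tokenizer.decode(output[0], skip_special_tokens=True)
    print(f"--- {name} ---\n{text}\n")
```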