Published: 2026/1/7 5:07:22

LLMs Smash Through the Language Barrier! 🌟

Super-short summary: Research on getting LLMs (large language models) to properly understand which language the speaker is using and to answer in that language! ✨

Gal-Style Sparkle Points ✨

● Even when you code-switch (mix languages mid-sentence), the model learns to understand which language you want and answers in it. That's seriously amazing, right? 😳

● They built a new benchmark (basically a performance test) called "OLA" and uncovered the LLMs' weak spots! 🔍


OLA: Output Language Alignment in Code-Switched LLM Interactions

Juhyun Oh / Haneul Yoo / Faiz Ghifari Haznitrama / Alice Oh

Code-switching, alternating between languages within a conversation, is natural for multilingual users, yet poses fundamental challenges for large language models (LLMs). When a user code-switches in their prompt to an LLM, they typically do not specify the expected language of the LLM response, and thus LLMs must infer the output language from contextual and pragmatic cues. We find that current LLMs systematically fail to align with this expectation, responding in undesired languages even when cues are clear to humans. We introduce OLA, a benchmark to evaluate LLMs' Output Language Alignment in code-switched interactions. OLA focuses on Korean--English code-switching and spans simple intra-sentential mixing to instruction-content mismatches. Even frontier models frequently misinterpret implicit language expectations, exhibiting a bias toward non-English responses. We further show this bias generalizes beyond Korean to Chinese and Indonesian pairs. Models also show instability through mid-response switching and language intrusions. Chain-of-Thought prompting fails to resolve these errors, indicating weak pragmatic reasoning about output language. However, Code-Switching Aware DPO with minimal data (about 1K examples) substantially reduces misalignment, suggesting these failures stem from insufficient alignment rather than fundamental limitations. Our results highlight the need to align multilingual LLMs with users' implicit expectations in real-world code-switched interactions.
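To make the evaluation idea concrete, here is a minimal Python sketch (not the paper's actual OLA scoring code) of how one might check output-language alignment for a Korean-English pair. It uses a simple script-count heuristic to identify a response's dominant language and compares it with the language the user implicitly expects; the prompt, expected label, and responses are hypothetical examples.

```python
# Minimal sketch of an output-language alignment check (not the OLA benchmark's
# actual scoring code). It classifies a response as Korean or English by
# counting Hangul vs. Latin letters and compares that with the expected language.
import unicodedata


def dominant_language(text: str) -> str:
    """Return 'ko' if Hangul characters outnumber Latin letters, else 'en'."""
    hangul = latin = 0
    for ch in text:
        if "HANGUL" in unicodedata.name(ch, ""):
            hangul += 1
        elif ch.isascii() and ch.isalpha():
            latin += 1
    return "ko" if hangul > latin else "en"


def is_aligned(response: str, expected_lang: str) -> bool:
    """True if the response's dominant language matches the user's expectation."""
    return dominant_language(response) == expected_lang


# Hypothetical code-switched prompt: mostly Korean, so a Korean reply is expected.
prompt = "이 transformer 논문 요약해줘"  # "Summarize this transformer paper for me"
expected = "ko"

aligned_reply = "이 논문은 어텐션 기반 모델을 제안합니다."
misaligned_reply = "This paper proposes an attention-based model."

print(is_aligned(aligned_reply, expected))     # True  -> aligned
print(is_aligned(misaligned_reply, expected))  # False -> misaligned
```

A real benchmark would need a proper language-identification model and would also have to detect mid-response switching and language intrusions, which this character-level heuristic cannot capture.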

cs / cs.CL