Published: 2026/1/7 2:43:00

Let's catch LLM biases! ✨ Check them with mixed question forms!

**Quick summary:** This study checks the biases of LLMs (Large Language Models) by mixing different question forms together!

● Classic gyaru truth: dataset quality is everything! 💖
● The way you phrase your instructions totally changes the results! 😲
● This could help make AI more trustworthy! 😉

Detailed explanation

Background


Quantifying LLM Biases Across Instruction Boundary in Mixed Question Forms

Zipeng Ling / Shuliang Liu / Yuehao Tang / Chen Huang / Gaoyang Jiang / Shenghong Fu / Junqi Yang / Yao Wan / Jiawan Zhang / Kejia Huang / Xuming Hu

Large Language Model (LLM)-annotated datasets are widely used nowadays; however, large-scale annotation often introduces biases into low-quality datasets. For example, Multiple-Choice Question (MCQ) datasets with a single correct option are common, yet some questions may have no correct option or multiple correct options; similarly, true-or-false questions are supposed to be labeled either True or False, but the text can include unsolvable elements that should instead be labeled Unknown. Problems arise when low-quality datasets with mixed question forms cannot be identified. We refer to these exceptional label forms as Sparse Labels, and LLMs' ability to distinguish datasets containing a Sparse Labels mixture is important. Since users may not know the situation of a dataset, their instructions can be biased. To study how different instruction settings affect LLMs' identification of Sparse Labels mixtures, we introduce the concept of Instruction Boundary, which systematically evaluates the instruction settings that lead to biases. We propose BiasDetector, a diagnostic benchmark for systematically evaluating LLMs on datasets with mixed question forms under Instruction Boundary settings. Experiments show that users' instructions induce large biases on our benchmark, highlighting the need for LLM developers to recognize not only the risk that biased LLM annotation produces Sparse Labels mixtures, but also the problems that arise from users' instructions for identifying them. Code, datasets, and detailed implementations are available at https://github.com/ZpLing/Instruction-Boundary.
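The Sparse Labels idea in the abstract can be sketched as a toy checker: given the set of options marked correct for an MCQ item (or the label of a true-or-false item), classify it as a standard question or as one of the exceptional label forms the paper describes. This is a minimal illustration of the concept only; the function names (`label_form`, `tf_label_form`) are hypothetical and not taken from the paper's codebase.

```python
def label_form(correct_options):
    """Classify an MCQ item by how many options are marked correct.

    A standard MCQ has exactly one correct option; items with zero or
    several correct options are the "Sparse Labels" exceptions that
    annotators (and LLMs) should be able to recognize.
    """
    n = len(set(correct_options))
    if n == 0:
        return "none-correct"       # no option is right -> Sparse Label
    if n == 1:
        return "single-correct"     # ordinary MCQ
    return "multiple-correct"       # several options right -> Sparse Label


def tf_label_form(label):
    """True-or-false items: anything besides True/False (e.g. an
    unsolvable statement) should be labeled Unknown."""
    return label if label in ("True", "False") else "Unknown"


if __name__ == "__main__":
    print(label_form(["B"]))            # single-correct
    print(label_form([]))               # none-correct
    print(label_form(["A", "C"]))       # multiple-correct
    print(tf_label_form("Unsolvable"))  # Unknown
```

Under this toy view, a dataset "with Sparse Labels mixture" is simply one where not every item comes out as `single-correct` (or True/False), and the paper's Instruction Boundary question is whether an LLM still spots those exceptions when the user's instructions assume they don't exist.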

cs / cs.CL