Title & ultra-summary: An LM that handles ambiguity is born ✨ A new business opportunity has arrived!
🌟 Gal-style sparkle points ✨ ● Apparently LM (language model) output can get even better 🎵 ● The new method called "Sigmoid Head" is amazing! ● It's a chance to dramatically improve IT services 💖
Detailed explanation ● Background: LLMs (large language models) have been bad at judging the quality of text 😭 Even when multiple answers are valid, they assume only one can be correct, so the results were underwhelming…
● Method: Adding the "Sigmoid Head" to the LM lets it account for the ambiguity of text! Because it can score the quality of each token independently, it can give every valid interpretation a "like!" 🌟
● Results: Thanks to this technique, the accuracy of translation, chatbots, and text generation looks set to skyrocket 😍 User satisfaction might go up too 🎵
Read the rest in the 「らくらく論文」 app
Language model (LM) probability is not a reliable quality estimator, because natural language is ambiguous. When multiple output options are valid, the model's probability mass is spread across them, which can misleadingly indicate low output quality. This issue has two causes: (1) the LM's final output activation is a softmax, which does not allow multiple correct options to receive high probabilities simultaneously, and (2) the LM's training data consists of single, one-hot encoded references, implying that there is only one correct option at each output step. We propose training a module for Quality Estimation (QE) on top of pre-trained LMs to address these limitations. The module, called the Sigmoid Head, is an extra unembedding head with sigmoid activation that tackles the first limitation. To tackle the second limitation, during the negative sampling used to train the Sigmoid Head, we apply a heuristic that avoids selecting potentially correct alternative tokens. The Sigmoid Head is computationally efficient during both training and inference. Its probability is a notably better quality signal than that of the original softmax head, and because it does not rely on human-annotated quality data, it is more robust in out-of-domain settings than supervised QE.
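To make the two limitations concrete, here is a minimal NumPy sketch of the idea, not the paper's implementation: all sizes, weights, and the top-k cutoff in the negative-sampling heuristic are illustrative assumptions. It contrasts a softmax head (whose probabilities must sum to 1, so valid alternatives compete) with an extra sigmoid head (which scores each token independently), and shows one plausible form of the heuristic: excluding tokens the frozen LM already ranks highly from the negative pool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the paper's actual dimensions or weights):
# one hidden state from a pretrained LM and two unembedding heads.
d_model, vocab = 8, 10
h = rng.normal(size=d_model)                   # last-layer hidden state for one step
W_softmax = rng.normal(size=(vocab, d_model))  # frozen LM unembedding head
W_sigmoid = rng.normal(size=(vocab, d_model))  # extra "Sigmoid Head" weights

def softmax(z):
    z = z - z.max()                 # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Limitation (1): softmax probabilities compete; they sum to 1, so two
# equally valid continuations can each receive at most ~0.5.
p_softmax = softmax(W_softmax @ h)

# The Sigmoid Head scores each token independently in [0, 1]; several
# valid alternatives can all be close to 1 at the same time.
p_sigmoid = sigmoid(W_sigmoid @ h)

# Limitation (2): one-hot references mark a single token as correct. A
# sketch of the negative-sampling heuristic: when drawing negatives for
# the Sigmoid Head, skip tokens the frozen LM ranks highly, since they
# may be alternative correct continuations rather than true negatives.
reference_token = 3
k = 3  # assumed cutoff; the paper's actual heuristic may differ
likely_alternatives = set(np.argsort(p_softmax)[-k:].tolist())
candidates = [t for t in range(vocab)
              if t != reference_token and t not in likely_alternatives]
negatives = rng.choice(candidates, size=2, replace=False)
```

Here the sigmoid scores need not sum to 1, which is exactly what lets them act as per-token quality signals rather than a single competitive distribution.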