Super-Short Summary: A new way to evaluate the reliability of LLMs (large language models) by how much conviction they have!
✨ Gal-Style Sparkle Points ✨
● Measuring a "commitment to truth" that keeps LLMs from lying? Kinda cool, right? 😎✨
● Stripping out the model's bias by "inverting the confusion matrix"... even the jargon is stylish, right? 🥰
● Feels like future-hyping research for IT companies looking to get an edge with AI 💖
Detailed Explanation
On the journey toward Artificial General Intelligence (AGI), current evaluation paradigms face an epistemological crisis. Static benchmarks measure knowledge breadth but fail to quantify the depth of belief. While Simhi et al. (2025) defined the CHOKE phenomenon in standard QA, we extend this framework to quantify "Cognitive Conviction" in System 2 reasoning models. We propose Project Aletheia, a cognitive physics framework that employs Tikhonov regularization to invert the judge's confusion matrix. To validate this methodology without relying on opaque private data, we implement a Synthetic Proxy Protocol. Our preliminary pilot study on 2025 baselines (e.g., DeepSeek-R1, OpenAI o1) suggests that while reasoning models act as a "cognitive buffer," they may exhibit "Defensive OverThinking" under adversarial pressure. Furthermore, we introduce the Aligned Conviction Score (S_aligned) to verify that conviction does not compromise safety. This work serves as a blueprint for measuring AI scientific integrity.
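The abstract names Tikhonov regularization for inverting the judge's confusion matrix but gives no formulas or implementation details. Below is a minimal sketch of that standard technique, assuming the usual column-stochastic convention C[i, j] = P(judge outputs i | true label j); the function name `debias_judge`, the three example labels, and the ridge weight `lam` are illustrative assumptions, not details from the paper.

```python
import numpy as np

def debias_judge(p_obs: np.ndarray, C: np.ndarray, lam: float = 1e-2) -> np.ndarray:
    """Estimate the true label distribution from judge-observed frequencies.

    Assumes C[i, j] = P(judge outputs label i | true label j), so that
    p_obs ≈ C @ p_true. Rather than the unstable direct inverse C^{-1},
    solve the Tikhonov-regularized least squares problem
        argmin_x ||C x - p_obs||^2 + lam * ||x||^2,
    whose closed form is x = (C^T C + lam * I)^{-1} C^T p_obs.
    """
    k = C.shape[1]
    x = np.linalg.solve(C.T @ C + lam * np.eye(k), C.T @ p_obs)
    x = np.clip(x, 0.0, None)      # project back onto the simplex:
    return x / x.sum()             # clip negatives, then renormalize

# Toy check: a judge that leaks 20% of "correct" answers into "hedged".
C = np.array([[0.8, 0.1, 0.0],    # row: judge says "correct"
              [0.2, 0.8, 0.1],    # row: judge says "hedged"
              [0.0, 0.1, 0.9]])   # row: judge says "wrong"
p_true = np.array([0.6, 0.3, 0.1])
p_obs = C @ p_true                # what the biased judge would report
print(debias_judge(p_obs, C))     # recovers approximately [0.6, 0.3, 0.1]
```

The regularizer trades a small bias for numerical stability when C is ill-conditioned, which is the usual motivation for Tikhonov regularization over a plain matrix inverse.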