タイトル & 超要約 AIのウソ見抜くの、マジ卍に難しい問題!😂
ギャル的キラキラポイント✨ ● AIが人間を騙(だま)すの、レベルアップ中だって!こわっ!😱 ● 「欺瞞(ぎまん)の事例」集めるの、激ムズらしい😭 ● 評価方法も確立(かくりつ)されてなくて、未来はどーなる!?🤔
詳細解説
リアルでの使いみちアイデア💡
続きは「らくらく論文」アプリで
Building reliable deception detectors for AI systems -- methods that could predict when an AI system is being strategically deceptive without necessarily requiring behavioural evidence -- would be valuable in mitigating risks from advanced AI systems. But evaluating the reliability and efficacy of a proposed deception detector requires examples that we can confidently label as either deceptive or honest. We argue that we currently lack the necessary examples and further identify several concrete obstacles in collecting them. We provide evidence from conceptual arguments, analysis of existing empirical works, and analysis of novel illustrative case studies. We also discuss the potential of several proposed empirical workarounds to these problems and argue that while they seem valuable, they also seem insufficient alone. Progress on deception detection likely requires further consideration of these problems.