Ultra-short summary: Putting a number on AI's "drive"! Hyping up the IT industry 🚀
🌟 Gal-style sparkle points ✨
● Scoring an AI's "agency" with actual numbers is such a fresh idea ✨
● Apparently it also helps with ethical questions and with evaluating AI performance! Amazing 😳
● IT companies could put AI to work way more effectively, so the future looks exciting 🫶
Detailed explanation
● Background: Isn't recent AI amazing? It's getting as smart as humans, right? But there hasn't been a proper way to evaluate things like its "drive" or its "ability to think for itself" 😥. The IT industry has also been stuck on questions like AI ethics and how to evaluate AI performance.
● Method: This study looks at how an AI processes information! It sorts information processing into three classes (I, II, III), and the claim seems to be that Class III-style processing is a "necessary condition" for a system to have "agency" (though not sufficient on its own). Sounds hard, but it's like peeking inside an AI's head, so exciting 💖
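To make it concrete, here's a tiny Python sketch of my own (not code from the paper, and the class names and update rules are made up 🙏) of what the three orders could look like: Class I just maps input to output, Class II keeps a memory but its rule never changes, and Class III rewrites its own rule based on what it has already processed.

```python
# Illustrative sketch only: the class names and update rules below are my own,
# not definitions taken from the paper.

class ClassI:
    """Reactive and memoryless: output depends only on the current input."""
    def step(self, x):
        return 1.0 if x > 0.5 else 0.0  # fixed input-to-output mapping


class ClassII:
    """Has an internal state (memory), but the transformation rule is fixed."""
    def __init__(self):
        self.state = 0.0

    def step(self, x):
        self.state = 0.9 * self.state + 0.1 * x  # same rule applied every time
        return self.state


class ClassIII:
    """Adaptive: the transformation rule itself changes with prior activity."""
    def __init__(self):
        self.state = 0.0
        self.gain = 1.0  # a parameter of the rule, not just a memory of past inputs

    def step(self, x):
        y = self.gain * (0.9 * self.state + 0.1 * x)
        self.state = y
        self.gain += 0.01 * abs(y)  # prior activity rewrites the rule itself
        return y
```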
As intelligent systems are developed across diverse substrates - from machine learning models and neuromorphic hardware to in vitro neural cultures - understanding what gives a system agency has become increasingly important. Existing definitions, however, tend to rely on top-down descriptions that are difficult to quantify. We propose a bottom-up framework grounded in a system's information-processing order: the extent to which its transformation of input evolves over time. We identify three orders of information processing. Class I systems are reactive and memoryless, mapping inputs directly to outputs. Class II systems incorporate internal states that provide memory but follow fixed transformation rules. Class III systems are adaptive; their transformation rules themselves change as a function of prior activity. While not sufficient on their own, these dynamics represent necessary informational conditions for genuine agency. This hierarchy offers a measurable, substrate-independent way to identify the informational precursors of agency. We illustrate the framework with neurophysiological and computational examples, including thermostats and receptor-like memristors, and discuss its implications for the ethical and functional evaluation of systems that may exhibit agency.
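As a rough computational illustration of the two named examples (a minimal sketch under my own assumptions, not the authors' formalization), a bang-bang thermostat applies a fixed, memoryless input-to-output mapping, whereas a receptor-like memristive element updates its own conductance, i.e. the rule mapping voltage to current, as a function of its activation history:

```python
def thermostat(temperature, setpoint=21.0):
    """Memoryless reactive mapping (Class I flavor): heater on iff below setpoint."""
    return temperature < setpoint


class ReceptorLikeMemristor:
    """Adaptive element (Class III flavor): its conductance -- the rule mapping
    voltage to current -- is weakened by prior activity, loosely mimicking
    receptor desensitization. Parameter values here are arbitrary."""

    def __init__(self, conductance=1.0, decay=0.01):
        self.g = conductance
        self.decay = decay

    def step(self, voltage):
        current = self.g * voltage                             # apply the current rule
        self.g = max(0.0, self.g - self.decay * abs(current))  # activity reshapes the rule
        return current


# Identical repeated inputs: the thermostat's response never changes, while the
# memristor-like element's response drifts as its rule is rewritten.
m = ReceptorLikeMemristor()
print([thermostat(19.5) for _ in range(3)])       # [True, True, True]
print([round(m.step(1.0), 3) for _ in range(5)])  # [1.0, 0.99, 0.98, 0.97, 0.961]
```

The drifting response under identical inputs is the kind of signature, in this sketch, that would distinguish an adaptive (Class III-like) system from a reactive one.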