Super-short summary: A new way to make LLM (AI) behavior way more human-like! Tech companies are paying big attention 👀✨
✨ Sparkly highlights ✨
● Properly teaching the LLM its "context" (the situation) is the key 💖
● With CoT, you can see exactly how the LLM is thinking 👀✨
● Tested across multiple LLMs, with the effect proven! Unstoppable 🚀
Now for the detailed breakdown~!
● Background: LLMs are great at generating text, but they used to struggle with acting in complex situations. Behaving as cleverly as a human was hard for them 🤔
● Method: Teach the LLM in two steps! First, "context formation" gets it to understand the situation, then "context navigation" nudges it toward human-like reasoning! With CoT (chain-of-thought) analysis, you can lay bare exactly what the LLM is thinking 💖
Large language models (LLMs) are increasingly used to simulate human behavior in experimental settings, but they systematically diverge from human decisions in complex decision-making environments, where participants must anticipate others' actions and form beliefs based on observed behavior. We propose a two-stage framework for improving behavioral alignment. The first stage, context formation, explicitly specifies the experimental design to establish an accurate representation of the decision task and its context. The second stage, context navigation, guides the reasoning process within that representation to make decisions. We validate this framework through a focal replication of a sequential purchasing game with quality signaling (Kremer and Debo, 2016), extending to a crowdfunding game with costly signaling (Cason et al., 2025) and a demand-estimation task (Gui and Toubia, 2025) to test generalizability across decision environments. Across four state-of-the-art (SOTA) models (GPT-4o, GPT-5, Claude-4.0-Sonnet-Thinking, DeepSeek-R1), we find that complex decision-making environments require both stages to achieve behavioral alignment with human benchmarks, whereas the simpler demand-estimation task requires only context formation. Our findings clarify when each stage is necessary and provide a systematic approach for designing and diagnosing LLM social simulations as complements to human subjects in behavioral research.
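The abstract describes the two stages only at a high level, so here is a minimal sketch of how the two-stage prompting could look in practice, assuming the OpenAI Python client. The prompt wording, the payoff details of the purchasing game, and the helper `simulate_round` are illustrative assumptions for exposition, not the authors' actual protocol or released code.

```python
# Hypothetical sketch of the two-stage framework (context formation +
# context navigation) applied to a sequential purchasing game with quality
# signaling. Prompts and parameters below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stage 1: context formation -- explicitly specify the experimental design
# so the model holds an accurate representation of the decision task.
CONTEXT_FORMATION = (
    "You are a participant in a sequential purchasing game. "
    "The product's quality (high or low) is unknown to you. "
    "You observe earlier buyers' purchase decisions, which carry "
    "information about quality, and the posted price may itself act "
    "as a quality signal chosen by the seller."
)

# Stage 2: context navigation -- guide the reasoning process within that
# representation, eliciting a chain of thought before the final decision.
CONTEXT_NAVIGATION = (
    "Before deciding, reason step by step: "
    "(1) What does the observed purchase history suggest about quality? "
    "(2) What does the posted price signal? "
    "(3) Given your beliefs, is buying worthwhile? "
    "End with exactly one line: DECISION: BUY or DECISION: PASS."
)

def simulate_round(history: str, price: float, model: str = "gpt-4o") -> str:
    """Run one simulated participant decision with both stages applied."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": CONTEXT_FORMATION},
            {
                "role": "user",
                "content": (
                    f"Observed purchase history: {history}\n"
                    f"Posted price: {price}\n\n{CONTEXT_NAVIGATION}"
                ),
            },
        ],
    )
    # The returned text contains the CoT trace, which can then be analyzed
    # for behavioral alignment against human benchmark data.
    return response.choices[0].message.content

if __name__ == "__main__":
    print(simulate_round(history="buy, buy, pass", price=8.0))
```

For a simpler task such as demand estimation, the abstract's finding suggests the second message block (the navigation instructions) could be dropped and only the context-formation system prompt retained.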