Published: 2025/11/8 2:07:15

Evaluating LLMs with Infinite Games! ✨ (GVGAI-LLM)

Super-short summary: Games used to size up LLMs! Find their weak spots and open up the future 🚀

Gal-style sparkle points ✨
● Checking how smart an LLM (large language model) is by having it play games? So fresh! 🎮
● You can try all kinds of games, so the LLM's weak points get found for sure! 🧐
● It might even draw out the LLM's strengths in things like spatial awareness and planning! 😎

Detailed explanation
● Background: Recent LLMs are amazing at writing text and answering questions, right? But how "smart" they really are in complex situations like games was still unclear 🤔 So this research showed up: let's evaluate LLMs by having them play games!

● Method: Using a framework called GVGAI-LLM, the LLM plays games 🎮 The game state is described to it as text, and we check how the LLM acts! They looked especially at spatial reasoning and planning 👀 They also dug into what the remaining challenges are! (A rough sketch of this loop is shown right below.)
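To picture what "the game state described as text, LLM picks an action" could look like, here is a tiny hypothetical sketch. The names (render_ascii, query_llm, parse_action) and the tile symbols are made up for illustration, not the framework's actual API; the model call is stubbed so the snippet runs offline.

```python
# Hypothetical GVGAI-LLM-style evaluation step (illustrative names, not the real API):
# render the grid as ASCII, embed it in a prompt, parse the model's reply into an action.

from typing import List

ACTIONS = ["UP", "DOWN", "LEFT", "RIGHT", "USE", "NIL"]

def render_ascii(grid: List[List[str]]) -> str:
    """Turn a 2D tile grid into a compact ASCII scene for the model."""
    return "\n".join("".join(row) for row in grid)

def build_prompt(scene: str, goal: str) -> str:
    """Wrap the ASCII scene and a goal description into a single prompt."""
    return (
        f"You control the avatar 'A' in this grid:\n{scene}\n"
        f"Goal: {goal}\n"
        f"Reply with exactly one action from {ACTIONS}."
    )

def query_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns a fixed action so the sketch runs offline."""
    return "RIGHT"

def parse_action(reply: str) -> str:
    """Map the model's free-form reply onto a legal action, defaulting to NIL."""
    for action in ACTIONS:
        if action in reply.upper():
            return action
    return "NIL"

if __name__ == "__main__":
    grid = [
        list("wwwww"),
        list("wA.gw"),   # 'A' = avatar, 'g' = goal, 'w' = wall, '.' = floor (assumed symbols)
        list("wwwww"),
    ]
    prompt = build_prompt(render_ascii(grid), "reach the goal tile 'g'")
    action = parse_action(query_llm(prompt))
    print("Chosen action:", action)
```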

Read the rest in the 「らくらく論文」 app

GVGAI-LLM: Evaluating Large Language Model Agents with Infinite Games

Yuchen Li / Cong Lin / Muhammad Umair Nasir / Philip Bontrager / Jialin Liu / Julian Togelius

We introduce GVGAI-LLM, a video game benchmark for evaluating the reasoning and problem-solving capabilities of large language models (LLMs). Built on the General Video Game AI framework, it features a diverse collection of arcade-style games designed to test a model's ability to handle tasks that differ from most existing LLM benchmarks. The benchmark leverages a game description language that enables rapid creation of new games and levels, helping to prevent overfitting over time. Each game scene is represented by a compact set of ASCII characters, allowing for efficient processing by language models. GVGAI-LLM defines interpretable metrics, including the meaningful step ratio, step efficiency, and overall score, to assess model behavior. Through zero-shot evaluations across a broad set of games and levels with diverse challenges and skill depth, we reveal persistent limitations of LLMs in spatial reasoning and basic planning. Current models consistently exhibit spatial and logical errors, motivating structured prompting and spatial grounding techniques. While these interventions lead to partial improvements, the benchmark remains very far from solved. GVGAI-LLM provides a reproducible testbed for advancing research on language model capabilities, with a particular emphasis on agentic behavior and contextual reasoning.
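The abstract names three interpretable metrics (meaningful step ratio, step efficiency, overall score) without defining them, so the snippet below is only one plausible reading of how such quantities might be computed from an episode log. The Step record and both formulas are assumptions for illustration, not the paper's definitions.

```python
# Illustrative (assumed) metric calculations over a logged episode;
# the benchmark's actual definitions may differ.

from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    action: str
    state_changed: bool   # did this action actually alter the game state?
    reward: float         # in-game score gained on this step

def meaningful_step_ratio(steps: List[Step]) -> float:
    """Assumed reading: fraction of steps that changed the game state."""
    if not steps:
        return 0.0
    return sum(s.state_changed for s in steps) / len(steps)

def step_efficiency(steps: List[Step], reference_steps: int) -> float:
    """Assumed reading: episode length relative to a reference solution
    (1.0 = matched the reference, lower = took more steps than needed)."""
    if not steps:
        return 0.0
    return min(1.0, reference_steps / len(steps))

def overall_score(steps: List[Step]) -> float:
    """Total in-game score accumulated over the episode."""
    return sum(s.reward for s in steps)

if __name__ == "__main__":
    episode = [
        Step("RIGHT", True, 0.0),
        Step("RIGHT", False, 0.0),   # e.g. bumped into a wall: no state change
        Step("UP", True, 1.0),
    ]
    print(meaningful_step_ratio(episode))               # ~0.67
    print(step_efficiency(episode, reference_steps=2))  # ~0.67
    print(overall_score(episode))                       # 1.0
```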

cs / cs.AI