Published: 2025/12/25 15:21:49

Smart Storage for Robots Too! An AI That Finds Hidden Items ✨

Ultra-Short Summary: Tech that lets a robot cleverly guess where stuff is! ✨

Gal-Style Sparkle Points ✨
● AI pinpoints items it can't even see! Total Detective Conan vibes 🕵️‍♀️
● Home robots are about to get way more convenient! 🏠💕
● Someday your appliances might get as smart as your phone!? 📱✨

Detailed Explanation

Background: Home robots are still pretty bad at commands like "get me a plate" 💦 They can only recognize things they can actually see, so they have no idea where stuff is stored 😩 This research was born to solve exactly that problem!

Method: They built a new benchmark called the "Stored Household Item Challenge". You give the AI a view of the home and the name of an item, and it has to guess where that item is put away! It combines visual info with commonsense knowledge (forks go in drawers, that kind of thing) to cleverly pinpoint the spot! 🧐
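Here's a minimal Python sketch of what one benchmark instance and its accuracy check might look like. The field names and the exact-match scoring are illustrative assumptions, not the paper's actual data format:

```python
# Hypothetical shape of one Stored Household Item Challenge instance.
# Field names are illustrative guesses, not taken from the released datasets.
from dataclasses import dataclass

@dataclass
class StorageQuery:
    image_path: str        # photo of the kitchen scene
    item: str              # queried item, e.g. "fork"
    candidates: list[str]  # visible storage locations in the scene
    ground_truth: str      # human-annotated correct storage location

def is_correct(query: StorageQuery, prediction: str) -> bool:
    """Toy exact-match check against the annotated storage location."""
    return prediction.strip().lower() == query.ground_truth.lower()

example = StorageQuery(
    image_path="kitchen_042.jpg",
    item="fork",
    candidates=["top drawer", "cabinet above the sink", "lower cabinet"],
    ground_truth="top drawer",
)
print(is_correct(example, "Top Drawer"))  # True
```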


Break Out the Silverware -- Semantic Understanding of Stored Household Items

Michaela Levi-Richter / Reuth Mirsky / Oren Glickman

"Bring me a plate." For domestic service robots, this simple command reveals a complex challenge: inferring where everyday items are stored, often out of sight in drawers, cabinets, or closets. Despite advances in vision and manipulation, robots still lack the commonsense reasoning needed to complete this task. We introduce the Stored Household Item Challenge, a benchmark task for evaluating service robots' cognitive capabilities: given a household scene and a queried item, predict its most likely storage location. Our benchmark includes two datasets: (1) a real-world evaluation set of 100 item-image pairs with human-annotated ground truth from participants' kitchens, and (2) a development set of 6,500 item-image pairs annotated with storage polygons over public kitchen images. These datasets support realistic modeling of household organization and enable comparative evaluation across agent architectures. To begin tackling this challenge, we introduce NOAM (Non-visible Object Allocation Model), a hybrid agent pipeline that combines structured scene understanding with large language model inference. NOAM converts visual input into natural language descriptions of spatial context and visible containers, then prompts a language model (e.g., GPT-4) to infer the most likely hidden storage location. This integrated vision-language agent exhibits emergent commonsense reasoning and is designed for modular deployment within broader robotic systems. We evaluate NOAM against baselines including random selection, vision-language pipelines (Grounding-DINO + SAM), leading multimodal models (e.g., Gemini, GPT-4o, Kosmos-2, LLaMA, Qwen), and human performance. NOAM significantly improves prediction accuracy and approaches human-level results, highlighting best practices for deploying cognitively capable agents in domestic environments.
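As a rough illustration of the pipeline the abstract describes, here is a minimal Python sketch of NOAM's second stage: the language-model inference step, assuming a separate vision module has already turned the image into a text description of visible containers. The function name, prompt wording, and model choice are assumptions, not the authors' actual implementation.

```python
# Minimal sketch of a NOAM-style inference step (hypothetical; the paper's
# actual prompts and interfaces are not reproduced here).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def infer_storage_location(scene_description: str, item: str) -> str:
    """Ask a language model which visible container most likely hides `item`,
    given a natural-language description of the kitchen scene."""
    prompt = (
        f"Kitchen scene: {scene_description}\n"
        f"Question: In which of the visible containers is a '{item}' most "
        f"likely stored? Answer with exactly one container from the scene."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in for the GPT-4-class model the paper mentions
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

print(infer_storage_location(
    "A kitchen with an upper cabinet above the sink, three drawers under "
    "the counter, and a lower cabinet next to the oven.",
    "fork",
))
```

Converting the scene to text before reasoning keeps the vision and language modules swappable, which matches the abstract's point about modular deployment within broader robotic systems.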

cs / cs.CL / cs.AI / cs.CV / cs.RO