Title & Super Summary: We evaluated how novel the research plans made by LLMs (those super-smart AIs) actually are! 💖
Gal-Style Sparkle Points ✨ ● We're living in an era where LLMs can even write research plans! ● Tackling the "smart plagiarism" problem with agentic workflows 💪 ● AI research-support services are seriously hot, right? 🔥
Detailed Explanation
Real-World Use-Case Ideas 💡
Read the rest in the 「らくらく論文」 app
The integration of Large Language Models (LLMs) into the scientific ecosystem raises fundamental questions about the creativity and originality of AI-generated research. Recent work has identified "smart plagiarism" as a concern in single-step prompting approaches, where models reproduce existing ideas with only terminological shifts. This paper investigates whether agentic workflows (multi-step systems employing iterative reasoning, evolutionary search, and recursive decomposition) can generate more novel and feasible research plans. We benchmark five reasoning architectures: Reflection-based iterative refinement, Sakana AI v2 evolutionary algorithms, the Google Co-Scientist multi-agent framework, GPT Deep Research (GPT-5.1) recursive decomposition, and the Gemini 3 Pro multimodal long-context pipeline. Using evaluations of thirty proposals, each scored on novelty, feasibility, and impact, we find that decomposition-based and long-context workflows achieve a mean novelty of 4.17/5, while reflection-based approaches score significantly lower (2.33/5). Results reveal varied performance across research domains, with high-performing workflows maintaining feasibility without sacrificing creativity. These findings support the view that carefully designed multi-stage agentic workflows can advance AI-assisted research ideation.
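The evaluation described above (proposals scored on novelty, feasibility, and impact, then averaged per workflow) can be sketched as a simple rubric aggregation. This is a minimal illustration only: the workflow names and score values below are placeholders, not the paper's actual data or code.

```python
from statistics import mean

# Placeholder rubric scores (1-5) per workflow and axis.
# These values are illustrative, NOT the paper's data.
scores = {
    "reflection": {
        "novelty": [2, 3, 2], "feasibility": [4, 4, 3], "impact": [3, 2, 3],
    },
    "decomposition": {
        "novelty": [4, 5, 4], "feasibility": [4, 3, 4], "impact": [4, 4, 5],
    },
}

def summarize(scores):
    """Return the mean score per workflow and axis, rounded to 2 decimals."""
    return {
        workflow: {axis: round(mean(vals), 2) for axis, vals in axes.items()}
        for workflow, axes in scores.items()
    }

summary = summarize(scores)
print(summary["decomposition"]["novelty"])  # mean novelty for this workflow
```

A per-axis mean like this is the simplest way to compare workflows; the paper's headline numbers (e.g. mean novelty of 4.17/5 vs. 2.33/5) are aggregates of exactly this kind.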