Ultra-short summary: find a smart contract weakness (reentrancy) with an LLM!
🌟 Gyaru-style sparkle points ✨
● They use an LLM (large language model) to make smart contract security super strong 💖
● Even without expert knowledge, and with only a little data, it can find vulnerabilities with high accuracy ✨
● Sure to be a big hit in the IT industry! It makes blockchain technology safer 💎
Large language models (LLMs) demonstrate remarkable capabilities in natural language understanding and generation. Yet despite being trained on large-scale, high-quality data, LLMs still fail to outperform traditional static analysis tools in specialized domains such as smart contract vulnerability detection. To address this gap, this paper proposes a post-training algorithm based on atomic task decomposition and fusion, which aims to achieve combinatorial generalization under limited data by decomposing complex reasoning tasks. Specifically, we decompose reentrancy vulnerability detection into four linearly independent atomic tasks: identifying external calls, identifying state updates, identifying data dependencies between external calls and state updates, and determining their data-flow order. These atomic tasks form the core components of our approach. We synthesize three compiler-verified training datasets, and we employ the Slither tool to extract structural information from the control-flow graph and data-flow graph, which is used to fine-tune the LLM's LoRA adapter. Experimental results demonstrate that low-rank normalization fusion with the LoRA adapter improves the LLM's reentrancy vulnerability detection accuracy to 98.2%, surpassing state-of-the-art methods. On 31 real-world contracts, the algorithm achieves 20% higher recall than traditional analysis tools.
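The four atomic tasks described in the abstract can be sketched in plain Python. This is an illustrative toy, not the paper's implementation: the `Op` record and all function names are assumptions standing in for the structural facts the paper extracts with Slither, and the final check fuses the four atomic predicates into the classic reentrancy pattern (an external call whose dependent state update happens only after the call).

```python
# Toy sketch of the four atomic tasks for reentrancy detection.
# Data model and names are illustrative assumptions, not the paper's API.
from dataclasses import dataclass

@dataclass
class Op:
    kind: str   # "external_call" or "state_update"
    index: int  # position in the function's execution order
    slot: str   # storage slot the operation touches

def external_calls(ops):            # atomic task 1: identify external calls
    return [o for o in ops if o.kind == "external_call"]

def state_updates(ops):             # atomic task 2: identify state updates
    return [o for o in ops if o.kind == "state_update"]

def data_dependent(call, update):   # atomic task 3: shared storage slot
    return call.slot == update.slot

def call_precedes_update(call, update):  # atomic task 4: data-flow order
    return call.index < update.index

def has_reentrancy(ops):
    # Fusion of the four atomic tasks: flag functions where a state update
    # that an external call depends on occurs only AFTER the call
    # (a checks-effects-interactions violation).
    return any(
        data_dependent(c, u) and call_precedes_update(c, u)
        for c in external_calls(ops)
        for u in state_updates(ops)
    )

# Classic vulnerable withdraw(): send ether first, then zero the balance.
vulnerable = [Op("external_call", 0, "balances"),
              Op("state_update", 1, "balances")]
# Safe variant: update state before the external call.
safe = [Op("state_update", 0, "balances"),
        Op("external_call", 1, "balances")]
print(has_reentrancy(vulnerable), has_reentrancy(safe))  # True False
```

In the paper's actual pipeline, each atomic judgment is learned by the LLM from synthetic, compiler-verified examples rather than computed symbolically; this sketch only shows why the four tasks, composed, suffice to express the reentrancy pattern.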