Published: 2026/1/11 10:08:22

Lemmanaid: Easy Math Proofs with AI 💖

I. Research Overview

  1. Research Goals

    • Problem: Math proofs are hard, right? So can AI help out with them? That's the idea!
    • Proposal: Develop Lemmanaid, a neuro-symbolic lemma conjecturing tool!
    • Results: It automatically generates new lemmas to make proving more efficient ✨
    • Impact: Could broaden the use of AI in the IT field!
  2. Background

    • Current state: Proof assistants are gaining importance in math research!
    • Challenge: The formalization work involved is a real grind…
    • Research trends: Work using LLMs is moving fast!
    • Relevance to the IT industry: Formalization techniques are useful in IT too!

II. Research Details

The rest is available in the 「らくらく論文」 app

Lemmanaid: Neuro-Symbolic Lemma Conjecturing

Yousef Alhessi / Sólrún Halla Einarsdóttir / George Granberry / Emily First / Moa Johansson / Sorin Lerner / Nicholas Smallbone

Mathematicians and computer scientists are increasingly using proof assistants to formalize and check correctness of complex proofs. This is a non-trivial task in itself, however, with high demands on human expertise. Can we lower the bar by introducing automation for conjecturing helpful, interesting and novel lemmas? We present the first neuro-symbolic lemma conjecturing tool, LEMMANAID, designed to discover conjectures by drawing analogies between mathematical theories. LEMMANAID uses a fine-tuned LLM to generate lemma templates that describe the shape of a lemma, and symbolic methods to fill in the details. We compare LEMMANAID against the same LLM fine-tuned to generate complete lemma statements (a purely neural method), as well as a fully symbolic conjecturing method. LEMMANAID consistently outperforms both neural and symbolic methods on test sets from Isabelle's HOL library and from its Archive of Formal Proofs (AFP). Using DeepSeek-coder-6.7B as a backend, LEMMANAID discovers 50% (HOL) and 28% (AFP) of the gold standard reference lemmas, 8-13% more than the corresponding neural-only method. Ensembling two LEMMANAID versions with different prompting strategies further increases performance to 55% and 34% respectively. In a case study on the formalization of Octonions, LEMMANAID discovers 79% of the gold standard lemmas, compared to 62% for neural-only and 23% for the state of the art symbolic tool. Our results show that LEMMANAID is able to conjecture a significant number of interesting lemmas across a wide range of domains, covering formalizations of complex concepts in both mathematics and computer science, going far beyond the basic concepts of standard benchmarks such as miniF2F, PutnamBench and ProofNet.
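To make the two-stage idea concrete, here is a minimal toy sketch of the "template, then symbolic fill-in" pipeline the abstract describes. Everything below is an assumption for illustration: the `?H1`/`?H2` hole syntax, the `signature` list, and the brute-force enumeration are hypothetical stand-ins, not Lemmanaid's actual template language or its symbolic instantiation machinery (and the real tool works over Isabelle/HOL terms, not strings).

```python
from itertools import product
import re

def holes(template: str) -> list[str]:
    """Collect the distinct holes (written ?Name) in a template string."""
    return sorted(set(re.findall(r"\?\w+", template)))

def instantiate(template: str, signature: list[str]):
    """Enumerate candidate lemmas by filling every hole with every
    symbol from the theory's signature (a toy stand-in for the
    symbolic fill-in step; the LLM's job is to propose the template)."""
    hs = holes(template)
    for combo in product(signature, repeat=len(hs)):
        lemma = template
        for hole, symbol in zip(hs, combo):
            lemma = lemma.replace(hole, symbol)
        yield lemma

# Hypothetical signature from a small list theory.
signature = ["rev", "map f", "length"]

# A template the LLM might emit: "two functions commute" shape.
template = "?H1 (?H2 xs) = ?H2 (?H1 xs)"

candidates = set(instantiate(template, signature))
# The familiar lemma rev (map f xs) = map f (rev xs) is among the
# 3 * 3 = 9 instantiations; a real system would next filter the
# candidates by type-checking, counterexample testing, and proving.
print(len(candidates))
```

In the real tool the template captures the lemma's shape while the symbolic stage supplies concrete operators consistent with the theory, which is why it can transfer shapes by analogy across domains; this sketch only shows the division of labor, not the filtering that makes it practical.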

cs / cs.AI / cs.LO