Ultra-summary: Small language models (SLMs) are amazingly good at fixing program bugs! And they're cheap to run, so anyone can use them~💖
Detailed explanation
Background: Large language models (LLMs) have greatly improved the accuracy of automated program repair (APR). However, LLMs are constrained by their high computational resource requirements.
Aims: We focus on small language models (SLMs), which, unlike LLMs, perform well even with limited computational resources. We aim to evaluate whether SLMs can achieve competitive performance on APR tasks.
Method: We conducted experiments on the QuixBugs benchmark to compare the bug-fixing accuracy of SLMs and LLMs. We also analyzed the impact of int8 quantization on APR performance.
Results: The latest SLMs can fix bugs as accurately as, or even more accurately than, LLMs. Moreover, int8 quantization had minimal effect on APR accuracy while significantly reducing memory requirements.
Conclusions: SLMs are a viable alternative to LLMs for APR, offering competitive accuracy at lower computational cost, and quantization can further improve their efficiency without compromising effectiveness.
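To make the setup concrete, here is a minimal sketch of the kind of pipeline the abstract describes: load an SLM with int8 weight quantization and prompt it to repair a buggy QuixBugs program. It assumes the Hugging Face transformers + bitsandbytes stack; the model name (Qwen/Qwen2.5-Coder-1.5B-Instruct), prompt wording, and generation settings are illustrative assumptions, not the authors' exact configuration. The buggy gcd function is the GCD task from QuixBugs.

```python
# Sketch: int8-quantized SLM for automated program repair (APR).
# Assumes transformers + bitsandbytes are installed and a GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumed SLM for illustration; the paper may evaluate different models.
MODEL_NAME = "Qwen/Qwen2.5-Coder-1.5B-Instruct"

# int8 quantization via bitsandbytes: weights are stored in 8 bits,
# roughly halving memory versus fp16 at (per the paper) minimal accuracy cost.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    quantization_config=quant_config,
    device_map="auto",
)

# The GCD program from QuixBugs: the recursive call passes its
# arguments in the wrong order.
buggy_code = '''def gcd(a, b):
    if b == 0:
        return a
    else:
        return gcd(a % b, b)  # bug: should be gcd(b, a % b)
'''

# Illustrative prompt; the paper's exact prompt format is not shown here.
prompt = (
    "The following Python function is buggy. "
    "Return only the corrected function.\n\n" + buggy_code
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens (the proposed patch).
patch = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(patch)
```

The same harness doubles as the comparison rig: swapping MODEL_NAME for a larger model gives an LLM baseline, and dropping quantization_config gives the unquantized memory baseline, which is how the SLM-vs-LLM and int8-vs-fp16 contrasts in the abstract can be measured.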