Published: 2026/1/5 11:52:55

Tricks to Boost LLM Code Accuracy! For IT Companies 💅✨

1. Make LLM Code More Accurate!

Gal-Style Sparkle Points ✨

  • Discovered a way to find the "amazing info" hidden inside LLMs 🌟
  • Not just probabilities! It can evaluate how "good" the code is 💖
  • Development efficiency skyrockets! Fewer bugs, best ever 🙌

Detailed Explanation

Read the rest in the 「らくらく論文」 app

On LLMs' Internal Representation of Code Correctness

Francisco Ribeiro / Claudio Spiess / Prem Devanbu / Sarah Nadi

Despite the effectiveness of large language models (LLMs) for code generation, they often output incorrect code. One reason is that model output probabilities are often not well-correlated with correctness, and reflect only the final output of the generation process. Inspired by findings that LLMs internally encode concepts like truthfulness, this paper explores if LLMs similarly represent code correctness. Specifically, we identify a correctness representation inside LLMs by contrasting the hidden states between pairs of correct and incorrect code for the same programming tasks. By experimenting on four LLMs, we show that exploiting this extracted correctness representation outperforms standard log-likelihood ranking, as well as verbalized model confidence. Furthermore, we explore how this internal correctness signal can be used to select higher-quality code samples, without requiring test execution. Ultimately, this work demonstrates how leveraging internal representations can enhance code generation systems and make LLMs more reliable, thus improving confidence in automatically generated code.
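The core idea of the abstract — contrasting hidden states of correct and incorrect code for the same task to extract a correctness direction, then scoring candidates by that internal signal instead of log-likelihood — can be sketched with a simple difference-of-means probe. This is a hedged illustration on synthetic data: the array shapes, the probe construction, and the `correctness_score` helper are assumptions for demonstration, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hidden-state dimensionality (illustrative assumption)

# Toy stand-ins for LLM hidden states: paired activations for
# correct vs. incorrect code solutions to the same tasks, separated
# along a latent "correctness" axis plus noise.
true_dir = rng.normal(size=d)
correct = rng.normal(size=(100, d)) + 0.5 * true_dir
incorrect = rng.normal(size=(100, d)) - 0.5 * true_dir

# Difference-of-means probe: the extracted correctness direction.
direction = correct.mean(axis=0) - incorrect.mean(axis=0)
direction /= np.linalg.norm(direction)

def correctness_score(h: np.ndarray) -> float:
    """Project a hidden state onto the correctness direction."""
    return float(h @ direction)

# Rank candidate code samples by the internal signal — no test
# execution and no reliance on output log-likelihood.
candidates = np.vstack([correct[:5], incorrect[:5]])
ranking = np.argsort([-correctness_score(h) for h in candidates])
```

In practice the hidden states would come from a real model's intermediate layers (e.g. via a forward hook or `output_hidden_states`), and the probe would be fit on held-out correct/incorrect pairs; the ranking step then selects the highest-scoring generation.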

cs / cs.SE / cs.AI / cs.LG