Published: 2025/8/22 20:08:09

AI That "Sees" Code Is Here! 🤖✨ (TL;DR: Operation Supercharge AI's Coding Power)

1. Gal-Style Sparkle Points ✨

  • They had the AI copy how humans "look" at code! (Using eye-gaze data.) So clever! 😎
  • Code-generating AI evolves to be more human-like 💖 Writing code might get a whole lot easier!
  • It's just a small add-on to existing AI models, so the cost-effectiveness is off the charts! 💰

2. Detailed Explanation

  • Background: AI models for code (CodeLLMs) are great at writing code, but they had no idea how humans actually look at it. Human programmers know which parts of the code to focus on, right? 👀 So the idea is to teach that to the AI!
  • Method: They trained the AI on human gaze data (records of where people look)! Using a technique called EyeMulator, the model's "looking skills" get a boost ⤴️
  • Results: The AI's scores jumped on code translation and a bunch of other tasks! ✨ It really does seem to look where it should!
  • Significance (the "this is wild ♡" point): If AI can read code the way human programmers do, it can do so much more! 💕 Faster development, fewer bugs (program errors), absolutely amazing!


EyeMulator: Improving Code Language Models by Mimicking Human Visual Attention

Yifan Zhang / Chen Huang / Yueke Zhang / Jiahao Zhang / Toby Jia-Jun Li / Collin McMillan / Kevin Leach / Yu Huang

Code language models (so-called CodeLLMs) are now commonplace in software development. As a general rule, CodeLLMs are trained by dividing training examples into input tokens and then learning the importance of those tokens in a process called machine attention. Machine attention is based solely on the salience of input tokens with respect to output token examples during training. Human software developers are different, as humans intuitively know that some tokens are more salient than others. While intuition itself is ineffable and a subject of philosophy, clues about salience are present in human visual attention, since people tend to look at more salient words more often. In this paper, we present EyeMulator, a technique for training CodeLLMs to mimic human visual attention while training for various software development tasks. We add special weights for each token in each input example to the loss function used during LLM fine-tuning. We draw these weights from observations of human visual attention derived from a previously-collected, publicly-available dataset of eye-tracking experiments in software engineering tasks. These new weights ultimately induce changes in the attention of the subject LLM during training, resulting in a model that does not need eye-tracking data during inference. Our evaluation shows that EyeMulator outperforms strong LLM baselines on several tasks such as code translation, completion, and summarization. We further present an ablation study demonstrating that the improvement is due to the subject models learning to mimic human attention.
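To make the abstract's key mechanism concrete, here is a minimal sketch of what a per-token, gaze-weighted fine-tuning loss could look like in PyTorch. This is illustrative only, not the authors' implementation: the function name eye_weighted_loss, the tensor shapes, and the normalization scheme are assumptions.

import torch
import torch.nn.functional as F

def eye_weighted_loss(logits, labels, gaze_weights, ignore_index=-100):
    """Per-token cross-entropy scaled by human-gaze-derived weights.

    logits:       (batch, seq_len, vocab) model outputs
    labels:       (batch, seq_len) target token ids (ignore_index marks padding)
    gaze_weights: (batch, seq_len) salience weights, e.g. normalized fixation
                  durations; 1.0 where no gaze signal is available
    """
    vocab = logits.size(-1)
    # Unreduced cross-entropy so each token's contribution can be reweighted.
    per_token = F.cross_entropy(
        logits.view(-1, vocab),
        labels.view(-1),
        ignore_index=ignore_index,
        reduction="none",
    ).view_as(labels)
    mask = (labels != ignore_index).float()
    weighted = per_token * gaze_weights * mask
    # Normalize by total weight so the loss magnitude stays comparable
    # to an unweighted fine-tuning loss.
    return weighted.sum() / (gaze_weights * mask).sum().clamp(min=1e-8)

# Toy usage with hypothetical fixation-based weights:
logits = torch.randn(1, 4, 10, requires_grad=True)  # (batch, seq, vocab)
labels = torch.randint(0, 10, (1, 4))
gaze = torch.tensor([[1.0, 2.5, 1.0, 0.8]])
eye_weighted_loss(logits, labels, gaze).backward()

Because the gaze-derived weights only scale the training loss, a model fine-tuned this way needs no eye-tracking input at inference time, which matches the claim in the abstract.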

cs / cs.SE / cs.AI / cs.HC