A technique that makes AI operate GUIs (screens) blazingly fast and accurately!
● Removes background noise (distracting info) so only what matters stands out! ● Pinpoints the target spot, drastically cutting click misses! ● Works across all kinds of apps, so the possibilities are endless💖
Background: Smart AIs called LLMs and VLMs have arrived!💻✨ But having AI operate the screen was still pretty hit-or-miss😥
Method: They developed a new method called "V2P (Valley-to-Peak)"!👀 First it suppresses the noise, then it concentrates attention on the single most important spot!
Precise localization of GUI elements is crucial for the development of GUI agents. Traditional methods rely on bounding-box or center-point regression, neglecting spatial interaction uncertainty and visual-semantic hierarchies. Recent methods incorporate attention mechanisms but still face two key issues: (1) failing to suppress background regions causes attention to drift away from the desired area, and (2) modeling the target UI element uniformly fails to distinguish its center from its edges, leading to imprecise clicks. Inspired by how humans visually process and interact with GUI elements, we propose the Valley-to-Peak (V2P) method to address these issues. To mitigate background distractions, V2P introduces a suppression attention mechanism that minimizes the model's focus on irrelevant regions, highlighting the intended region. To distinguish center from edges, V2P applies a Fitts' Law-inspired approach, modeling GUI interactions as 2D Gaussian heatmaps whose weight gradually decreases from the center toward the edges. The weight distribution follows a Gaussian function, with the variance determined by the target's size. Consequently, V2P effectively isolates the target area and teaches the model to concentrate on the most essential point of the UI element. The model trained with V2P achieves 92.4\% and 52.5\% on the ScreenSpot-v2 and ScreenSpot-Pro benchmarks, respectively (see Fig.~\ref{fig:main_results_charts}). Ablations further confirm each component's contribution, underscoring V2P's generalizability in precise GUI grounding tasks and its potential for real-world deployment in future GUI agents.
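The Gaussian target modeling described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the size-to-variance ratio (here, standard deviation set to a quarter of each side length), and the unnormalized heatmap are all assumptions for demonstration.

```python
import numpy as np

def gaussian_heatmap(width, height, box):
    """Sketch of a Fitts' Law-inspired 2D Gaussian target heatmap.

    box = (x0, y0, x1, y1) is the UI element's bounding box. The peak
    sits at the box center, and the weight decays toward the edges,
    with variance scaled by the element's size.
    """
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    # Assumed choice (not from the paper): sigma = side length / 4.
    sx = max((x1 - x0) / 4.0, 1e-6)
    sy = max((y1 - y0) / 4.0, 1e-6)
    ys, xs = np.mgrid[0:height, 0:width]
    return np.exp(-((xs - cx) ** 2 / (2 * sx ** 2)
                    + (ys - cy) ** 2 / (2 * sy ** 2)))

hm = gaussian_heatmap(100, 60, (20, 10, 60, 40))
print(hm.shape)                                  # (60, 100)
print(np.unravel_index(hm.argmax(), hm.shape))   # (25, 40), the box center
```

Training against such a heatmap (rather than a uniform box mask) rewards predictions near the element's center more than near its edges, which is the center-edge distinction the abstract argues for.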