Published: 2026/1/11 11:52:40

PADE is super amazing! Blazing-fast sparse attention with no predictor ✨ (for IT companies)

Super-short summary: A technique that speeds up LLM (large language model) computation! No predictor needed, and performance shoots way up! 🚀

🌟 Gal-style sparkle points ✨
● No predictor! All that extra baggage (overhead) is cut right out! So clever 💖
● bit-serial (bit-by-bit) computation keeps the processing efficient! Amazing down to the fine details ✨ (see the sketch right after this list)
● I/O efficiency goes up too! That means data transfers get smoother 😉
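Here's the bit-serial idea in a minimal Python sketch: accumulate a quantized query-key dot product one bit-plane at a time (most significant bit first), and track an upper bound on what the unprocessed bits could still contribute, so a clearly low-relevance token can be dropped early. This is only an illustration under our own assumptions (unsigned quantized values, hypothetical function names), not PADE's actual BUI-GF hardware.

```python
import numpy as np

def bit_serial_score_bounds(q, k, total_bits=8):
    """Accumulate dot(q, k) one bit-plane of q at a time (MSB first).
    After each round, yield the partial score plus an upper bound on
    what the still-unprocessed lower bit-planes could add.
    q, k: unsigned integer vectors (quantized query/key rows)."""
    partial = 0
    for b in range(total_bits - 1, -1, -1):
        plane = (q >> b) & 1                      # b-th bit-plane of q
        partial += int((plane * k).sum()) << b    # contribution at place value 2^b
        # Remaining planes (b-1 .. 0) can add at most (2^b - 1) * sum(k).
        slack = ((1 << b) - 1) * int(k.sum())
        yield partial, partial + slack

def can_prune(q, k, threshold, total_bits=8):
    """True if dot(q, k) provably stays below threshold, decided as
    early as the bit-serial bounds allow (no separate predictor)."""
    for partial, upper in bit_serial_score_bounds(q, k, total_bits):
        if upper < threshold:
            return True    # even the best case misses the bar: prune
        if partial >= threshold:
            return False   # already over the bar: keep this token
    return False

# Tiny demo with random 8-bit data (hypothetical threshold)
rng = np.random.default_rng(1)
q = rng.integers(0, 256, size=64, dtype=np.uint32)
k = rng.integers(0, 256, size=64, dtype=np.uint32)
print(can_prune(q, k, threshold=1_200_000))
```

The nice part: the pruning decision falls out of the main computation itself, which is exactly why no separate predictor (and its overhead) is needed.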

Now for the detailed explanation~!

● Background: In LLMs, the attention mechanism is the important part, but the computation gets heavy fast (the cost grows with the square of the input length) 😂 PADE is a technique that makes that computation way faster! People in the IT industry have been stuck wanting to speed up Transformer model computation, so this is great news!
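To see what "skipping low-relevance token pairs" means, here's a toy Python sketch of sparse attention where each query keeps only its top-scoring keys and masks out the rest before the softmax. The top-k rule is purely illustrative (PADE makes its sparsity decisions dynamically in hardware, not like this), and all names here are ours, not the paper's.

```python
import numpy as np

def toy_sparse_attention(Q, K, V, keep_ratio=0.25):
    """Toy sparse attention: per query row, keep only the top-scoring
    keys and mask the rest to -inf before the softmax, so most of the
    quadratic score matrix contributes nothing."""
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)          # (n, n): the quadratic part
    keep = max(1, int(keep_ratio * n))
    # Per-row threshold at the keep-th largest score; mask everything below.
    thresh = np.partition(scores, -keep, axis=-1)[:, -keep][:, None]
    masked = np.where(scores >= thresh, scores, -np.inf)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Tiny demo with random data
rng = np.random.default_rng(0)
Q = rng.standard_normal((6, 8))
K = rng.standard_normal((6, 8))
V = rng.standard_normal((6, 8))
out = toy_sparse_attention(Q, K, V)        # same shape as V
```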

Read the rest in the 「らくらく論文」 app

PADE: A Predictor-Free Sparse Attention Accelerator via Unified Execution and Stage Fusion

Huizheng Wang / Hongbin Wang / Zichuan Wang / Zhiheng Yue / Yang Wang / Chao Li / Yang Hu / Shouyi Yin

Attention-based models have revolutionized AI, but the quadratic cost of self-attention incurs severe computational and memory overhead. Sparse attention methods alleviate this by skipping low-relevance token pairs. However, current approaches lack practicality due to the heavy expense of the added sparsity predictor, which severely degrades their hardware efficiency. This paper advances the state-of-the-art (SOTA) by proposing a bit-serial-enabled stage-fusion (BSF) mechanism, which eliminates the need for a separate predictor. However, it faces key challenges: 1) inaccurate bit-sliced sparsity speculation leads to incorrect pruning; 2) fine-grained and imbalanced bit-level workloads cause hardware under-utilization; 3) the row-wise dependency in sparsity pruning criteria makes tiling difficult. We propose PADE, a predictor-free algorithm-hardware co-design for dynamic sparse attention acceleration. PADE features three key innovations: 1) a bit-wise uncertainty interval-enabled guard filtering (BUI-GF) strategy to accurately identify trivial tokens during each bit round; 2) bidirectional sparsity-based out-of-order execution (BS-OOE) to improve hardware utilization; 3) interleaving-based sparsity-tiled attention (ISTA) to reduce both I/O and computational complexity. These techniques, combined with custom accelerator designs, enable practical sparsity acceleration without relying on an added sparsity predictor. Extensive experiments on 22 benchmarks show that PADE achieves a 7.43x speedup and 31.1x higher energy efficiency than the Nvidia H100 GPU. Compared to SOTA accelerators, PADE achieves 5.1x, 4.3x, and 3.4x energy savings over Sanger, DOTA, and SOFA, respectively.
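For intuition on challenge 2 above (hardware under-utilization from imbalanced bit-level workloads), the sketch below compares naive in-order lane assignment with the classic longest-processing-time-first list-scheduling heuristic. It is a stand-in illustration of why reordering work across lanes raises utilization, not the paper's actual BS-OOE mechanism; the workload numbers are made up.

```python
import heapq

def makespan_in_order(costs, num_lanes):
    """In-order issue: item i goes to lane i % num_lanes, ignoring load."""
    lanes = [0] * num_lanes
    for i, c in enumerate(costs):
        lanes[i % num_lanes] += c
    return max(lanes)

def makespan_reordered(costs, num_lanes):
    """Out-of-order flavor: longest job first onto the currently
    least-loaded lane (classic LPT list scheduling)."""
    lanes = [0] * num_lanes
    heapq.heapify(lanes)
    for c in sorted(costs, reverse=True):
        heapq.heappush(lanes, heapq.heappop(lanes) + c)
    return max(lanes)

# Imbalanced workloads: a few heavy tokens, many light ones.
costs = [13, 1, 1, 12, 2, 1, 11, 2, 1, 1, 10, 1]
# Reordering shrinks the makespan, i.e. lanes sit idle less often.
print(makespan_in_order(costs, 4), makespan_reordered(costs, 4))
```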

cs / cs.AR / eess.SP