Published: 2026/1/11 12:20:48

The Ultimate Gal Takes On Cutting-Edge HPC! A Performance Model for Tensor Product Factorization Is Born ☆ (For New-Business Planners)

Ultra-Summary: They built a model that predicts the performance of tensor product factorization! Basically, it makes HPC computations faster 💖

✨ Gal-Style Sparkle Points ✨
● Where existing models fell short, AI picks up the slack. Way too smart!
● Shorter compute times could accelerate R&D even further. The future looks bright ✨
● A chance to build a killer new HPC service as a new business!

Detailed Explanation

Background: Existing performance-prediction models couldn't fully exploit the CPU's compute capability 😭 So the authors built a model that uses AI to predict the performance of tensor product factorization (a technique for making computation more efficient)!


Learning-Augmented Performance Model for Tensor Product Factorization in High-Order FEM

Xuanzhengbo Ren / Yuta Kawai / Tetsuya Hoshino / Hirofumi Tomita / Takahiro Katagiri / Daichi Mukunoki / Seiya Nishizawa

Accurate performance prediction is essential for optimizing scientific applications on modern high-performance computing (HPC) architectures. Widely used performance models focus primarily on cache and memory bandwidth, an assumption that suits many memory-bound workloads but breaks down for highly arithmetic-intensive cases such as sum-factorization with tensor $n$-mode product kernels, an optimization technique for high-order finite element methods (FEM). On processors with relatively high single instruction multiple data (SIMD) instruction latency, such as the Fujitsu A64FX, the performance of these kernels is strongly influenced by loop-body splitting strategies. Memory-bandwidth-oriented models are therefore not appropriate for evaluating these splitting configurations, and a model that directly reflects instruction-level efficiency is required. To address this need, we develop a dependency-chain-based analytical formulation that links loop-splitting configurations to instruction dependencies in the tensor $n$-mode product kernel. We further use XGBoost to estimate key parameters in the analytical model that are difficult to model explicitly. Evaluations show that the learning-augmented model outperforms the widely used standard Roofline and Execution-Cache-Memory (ECM) models. On the Fujitsu A64FX processor, the learning-augmented model achieves mean absolute percentage errors (MAPE) between 1% and 24% for polynomial orders ($P$) from 1 to 15. In comparison, the standard Roofline and ECM models yield errors of 42%-256% and 5%-117%, respectively. On the Intel Xeon Gold 6230 processor, the learning-augmented model achieves MAPE values from 1% to 13% for $P$=1 to $P$=14, and 24% at $P$=15. In contrast, the standard Roofline and ECM models produce errors of 1%-73% and 8%-112% for $P$=1 to $P$=15, respectively.
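To make the two key ingredients of the abstract concrete, here is a minimal, illustrative sketch (not the paper's actual kernel or model) of (1) a tensor $n$-mode product, the operation at the heart of sum-factorization, shown here as a mode-2 product of a 3-way tensor with a matrix, and (2) the MAPE metric used to score the performance models. All function names are my own for illustration.

```python
def mode_2_product(X, U):
    """Mode-2 product of a 3-way tensor X (I x J x K, nested lists)
    with a matrix U (R x J): Y[i][r][k] = sum_j U[r][j] * X[i][j][k].
    This contraction pattern is what sum-factorization applies
    repeatedly along each tensor mode in high-order FEM kernels."""
    I, J, K = len(X), len(X[0]), len(X[0][0])
    R = len(U)
    return [[[sum(U[r][j] * X[i][j][k] for j in range(J))
              for k in range(K)]
             for r in range(R)]
            for i in range(I)]

def mape(predicted, measured):
    """Mean absolute percentage error, in percent, between model
    predictions and measured runtimes."""
    return 100.0 * sum(abs(p - m) / abs(m)
                       for p, m in zip(predicted, measured)) / len(measured)
```

For example, contracting a 1x2x2 tensor with a 1x2 matrix of ones sums along mode 2, and a model that over- and under-predicts two runtimes by 10% each has a MAPE of 10%.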

cs / cs.DC / cs.PF