TL;DR: The story of digging deep into HPC (high-performance computing) I/O (input/output) to make machine learning (ML) faster 💖
🌟 Gal-style sparkle points ✨
● ML's data access turned out to be what was dragging HPC down! 😩
● Turns out existing techniques can't solve ML's I/O problems! 🤔
● New I/O optimization techniques could make AI even smarter! 😎
Now for the detailed breakdown~!
Background: The recent AI boom is wild, right? 😍 But AI training needs tons of data, and shuffling that data in and out (I/O) had become a bottleneck on HPC systems! Conventional techniques apparently couldn't keep up with ML's distinctive I/O patterns 💦
Growing interest in Artificial Intelligence (AI) has resulted in a surge in demand for faster methods of Machine Learning (ML) model training and inference. This demand for speed has prompted the use of high-performance computing (HPC) systems that excel at managing distributed workloads. Because data is the main fuel for AI applications, the performance of the storage and I/O subsystems of HPC systems is critical. In the past, HPC applications accessed large portions of data written by simulations or experiments, or ingested data for visualization or analysis tasks. In contrast, ML workloads perform small reads spread across a large number of random files. This shift in I/O access patterns poses several challenges to modern parallel storage systems. In this paper, we survey I/O in ML applications on HPC systems, targeting literature within a six-year window from 2019 to 2024. We define the scope of the survey, provide an overview of the common phases of ML, review available profilers and benchmarks, examine the I/O patterns encountered during offline data preparation, training, and inference, and explore I/O optimizations utilized in modern ML frameworks and proposed in recent literature. Lastly, we seek to expose research gaps that could spawn further research and development.
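The access-pattern shift described above can be sketched with a small, hypothetical simulation (all names and sizes here are illustrative assumptions, not from the paper): an ML dataset often consists of many small sample files, and each training epoch reads them one at a time in a freshly shuffled order, which is exactly the "small reads spread across a large number of random files" pattern.

```python
# Illustrative sketch (not the paper's code): many small sample files,
# read in random order one sample at a time, as a typical ML data
# loader would during a training epoch.
import os
import random
import tempfile


def make_dataset(dirname, num_files=100, size=4096):
    """Create many small files, as an ML dataset shard might look."""
    paths = []
    for i in range(num_files):
        p = os.path.join(dirname, f"sample_{i}.bin")
        with open(p, "wb") as f:
            f.write(os.urandom(size))
        paths.append(p)
    return paths


def ml_style_epoch(paths, seed=0):
    """One 'epoch': one small read per sample, in shuffled file order."""
    order = paths[:]
    random.Random(seed).shuffle(order)  # random order each epoch
    total = 0
    for p in order:
        with open(p, "rb") as f:
            total += len(f.read())  # small read, scattered across files
    return total


with tempfile.TemporaryDirectory() as d:
    paths = make_dataset(d, num_files=10, size=1024)
    print(ml_style_epoch(paths))  # 10 samples x 1024 bytes = 10240
```

Contrast this with the traditional HPC pattern the abstract mentions, where an application would stream a few large contiguous files sequentially; parallel file systems were tuned for the latter, which is why the shuffled small-file pattern stresses them.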