A super-amazing NN (neural network) that totally levels up embedded AI — let's go~!
🌟 Gal-style sparkle points
● Magic that transforms deep NNs into shallow NNs at will 🪄
● Eco-friendly spec: keeps the AI's accuracy while cutting power consumption 💚
● Can be customized to fit all kinds of machines (hardware)!
Now for the detailed explanation!
Background
With the recent AI boom, lots of smart NNs have come out, but they weren't a good fit for embedded systems (like smartphones) 😥 High-performance NNs eat tons of power, and embedded systems only have limited resources! To solve that problem, this research built an amazing NN that can freely switch between **deep NNs (super smart) and shallow NNs (power-saving)**!
Thanks to the evolving network depth, convolutional neural networks (CNNs) have achieved remarkable success across various embedded scenarios, paving the way for ubiquitous embedded intelligence. Despite its promise, the evolving network depth comes at the cost of degraded hardware efficiency. In contrast to deep networks, shallow networks can deliver superior hardware efficiency but often suffer from inferior accuracy. To address this dilemma, we propose Double-Win NAS, a novel deep-to-shallow transformable neural architecture search (NAS) paradigm tailored for resource-constrained intelligent embedded systems. Specifically, Double-Win NAS strives to automatically explore deep networks to first win strong accuracy, which are then equivalently transformed into their shallow counterparts to further win strong hardware efficiency. In addition to search, we also propose two enhanced training techniques, including hybrid transformable training towards better training accuracy and arbitrary-resolution elastic training towards enabling natural network elasticity across arbitrary input resolutions. Extensive experimental results on two popular intelligent embedded systems (i.e., NVIDIA Jetson AGX Xavier and NVIDIA Jetson Nano) and two representative large-scale datasets (i.e., ImageNet and ImageNet-100) clearly demonstrate the superiority of Double-Win NAS over previous state-of-the-art NAS approaches.
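The abstract's key idea is that a deep network can be "equivalently transformed" into a shallow counterpart without changing its output. As a hedged sketch of what such an equivalence can look like (this is an illustrative toy, not the paper's actual CNN transformation rules), two stacked linear layers with no nonlinearity between them collapse into a single layer with identical behavior:

```python
import numpy as np

def merge_linear(W1, b1, W2, b2):
    """Merge two consecutive linear layers (no nonlinearity in between)
    into one equivalent layer: y = W2 @ (W1 @ x + b1) + b2."""
    W = W2 @ W1            # composed weight
    b = W2 @ b1 + b2       # composed bias
    return W, b

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((3, 8)), rng.standard_normal(3)
x = rng.standard_normal(4)

deep = W2 @ (W1 @ x + b1) + b2   # two-layer "deep" path
W, b = merge_linear(W1, b1, W2, b2)
shallow = W @ x + b              # single-layer "shallow" path
print(np.allclose(deep, shallow))  # → True
```

The shallow version does one matrix multiply instead of two, which is the flavor of hardware-efficiency win the paper targets; the actual method searches deep architectures first for accuracy and then applies such equivalence-preserving transformations.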