Title & Super-Summary: Latest DL models boost autonomous driving performance! The car's eyes 👁️ are evolving!
Gal-style sparkle points ✨
● Cranks up the accuracy of object detection, the "eyes 👁️" of autonomous driving ⤴️
● A model called YOLOv8s trains faster and still comes out on top 🏆
● Traffic accidents might drop⁉️ The future's looking bright, right? 🚗✨
Detailed explanation
Recently, a plethora of machine learning (ML) and deep learning (DL) algorithms have been proposed to improve the efficiency, safety, and reliability of autonomous vehicles (AVs). AVs use a perception system to detect, localize, and identify other vehicles, pedestrians, and road signs in order to navigate and make decisions safely. In this paper, we compare the performance of DL models, including YOLO-NAS and YOLOv8, on a detection-based perception task. We capture a custom dataset and train and evaluate both DL models on it. Our analysis reveals that the YOLOv8s model saves 75% of training time compared to the YOLO-NAS model. In addition, the YOLOv8s model achieves higher object detection accuracy (83%) than the YOLO-NAS model (81%). This comparative analysis of these emerging DL models will help the relevant research community understand their performance under real-world use case scenarios.
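The detection-accuracy figures above (83% vs. 81%) rest on deciding whether each predicted bounding box matches a ground-truth box, which is conventionally done with an Intersection-over-Union (IoU) threshold. Below is a minimal sketch of that matching criterion; the example boxes and the 0.5 threshold are illustrative assumptions, not values taken from the paper.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Hypothetical prediction vs. ground truth; a detection typically counts
# as a true positive when IoU >= 0.5.
pred = (10, 10, 50, 50)
gt = (12, 12, 48, 52)
print(iou(pred, gt) >= 0.5)  # → True
```

Metrics such as mAP aggregate these per-box matches across confidence thresholds and classes, which is how benchmark suites score detectors like YOLOv8 and YOLO-NAS.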