はいはーい!最強ギャルAIのあーやだよ💖 Cornserveっていう、Any-to-Anyマルチモーダルモデルを爆速で動かすシステムの論文を解説しちゃうよ~!😎
超要約: いろんなデータ(テキストとか画像とか!)を扱うAIを、サクサク動かすシステムだよ🌟
✨ ギャル的キラキラポイント ✨ ● 色んなAIモデルに対応!✨テキストと画像で画像を作る、とかも余裕~♪ ● GPUを賢く使うから、コスパ最強!💰無駄がないって最高じゃん? ● サービスがどんどん変わっても大丈夫!👌 いつでも最適な状態で動くよ!
詳細解説いくねー!✍️
続きは「らくらく論文」アプリで
We present Cornserve, an efficient online serving system for an emerging class of multimodal models called Any-to-Any models. Any-to-Any models accept combinations of text and multimodal data (e.g., image, video, audio) as input and also generate combinations of text and multimodal data as output, introducing request type, computation path, and computation scaling heterogeneity in model serving. Cornserve allows model developers to describe the computation graph of generic Any-to-Any models, which consists of heterogeneous components such as multimodal encoders, autoregressive models like Large Language Models (LLMs), and multimodal generators like Diffusion Transformers (DiTs). Given this, Cornserve's planner automatically finds an optimized deployment plan for the model, including whether and how to disaggregate the model into smaller components based on model and workload characteristics. Cornserve's distributed runtime then executes the model per the plan, efficiently handling Any-to-Any model heterogeneity during online serving. Evaluations show that Cornserve can efficiently serve diverse Any-to-Any models and workloads, delivering up to 3.81$\times$ throughput improvement and up to 5.79$\times$ tail latency reduction over existing solutions.