Published: 2026/1/7 1:36:39

The strongest LLM! A clever strategy for winning the battle of wits ✨ (TL;DR: a smart AI 🧠)

1. Faster! 2. Lower cost! 3. Smarter!

Model merging (combining models) lets it switch reasoning styles smartly 💖 ● It changes between Long-CoT (long-form reasoning) and Short-CoT (short-form reasoning) depending on the query 😉 ● No additional training, yet high performance and top cost-efficiency. What's not to love? 😍



Reasoning Pattern Alignment Merging for Adaptive Reasoning

Zhaofeng Zhong / Wei Yuan / Tong Chen / Xiangyu Zhao / Quoc Viet Hung Nguyen / Hongzhi Yin

Recent large reasoning models (LRMs) have made substantial progress in complex reasoning tasks, yet they often generate lengthy reasoning paths for every query, incurring unnecessary computation and latency. Existing speed-up approaches typically rely on retraining the model or designing sophisticated prompting, which are either prohibitively expensive or highly sensitive to the input and prompt formulation. In this work, we study model merging as a lightweight alternative for efficient reasoning: by combining a long chain-of-thought (Long-CoT) reasoning model with a Short-CoT instruction model, we obtain an adaptive reasoner without training from scratch or requiring large-scale additional data. Building on this idea, we propose Reasoning Pattern Alignment Merging (RPAM), a layer-wise model merging framework based on feature alignment to facilitate query-adaptive reasoning. RPAM first constructs a small pattern-labeled calibration set that assigns each query an appropriate reasoning pattern. It then optimizes layer-wise merging coefficients by aligning the merged model's intermediate representations with those of the selected model, while a contrastive objective explicitly pushes them away from the non-selected model. Experiments on seven widely used reasoning benchmarks show that RPAM substantially reduces inference cost while maintaining strong performance. Upon article acceptance, we will provide open-source code to reproduce experiments for RPAM.
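The abstract's core mechanism can be illustrated with a toy sketch: per-layer linear merging of two models' weights, plus an alignment objective that pulls the merged model's intermediate features toward the selected reasoning pattern's model and pushes them away from the non-selected one via a hinge term. This is a minimal illustration under assumed toy linear layers; the function names (`merge_layers`, `alignment_loss`), the tanh activation, and the hinge margin are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def merge_layers(long_layers, short_layers, alphas):
    """Layer-wise linear merge: W_l = alpha_l * W_long_l + (1 - alpha_l) * W_short_l."""
    return [a * wl + (1.0 - a) * ws
            for wl, ws, a in zip(long_layers, short_layers, alphas)]

def hidden_states(layers, x):
    """Forward pass through toy tanh layers, collecting per-layer features."""
    feats, h = [], x
    for w in layers:
        h = np.tanh(h @ w)
        feats.append(h)
    return feats

def alignment_loss(merged, selected, rejected, x, margin=1.0):
    """Pull merged features toward the selected model's features;
    push them at least `margin` away from the rejected model's (hinge)."""
    loss = 0.0
    for m, s, r in zip(hidden_states(merged, x),
                       hidden_states(selected, x),
                       hidden_states(rejected, x)):
        pull = np.mean((m - s) ** 2)
        push = np.mean((m - r) ** 2)
        loss += pull + max(0.0, margin - push)
    return loss

# Toy demo: two 2-layer "models" with random weights.
rng = np.random.default_rng(0)
long_layers = [rng.normal(size=(4, 4)) for _ in range(2)]
short_layers = [rng.normal(size=(4, 4)) for _ in range(2)]
x = rng.normal(size=(3, 4))

# With alpha_l = 1 everywhere, the merged model equals the Long-CoT model,
# so the pull terms vanish and only the hinge terms remain.
merged = merge_layers(long_layers, short_layers, [1.0, 1.0])
loss = alignment_loss(merged, long_layers, short_layers, x)
```

In RPAM the coefficients `alpha_l` are optimized on a small pattern-labeled calibration set; here they are fixed by hand only to show the merge's behavior at the extremes.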

cs / cs.CL / cs.AI