Published: 2026/1/7 1:35:22

Heyyy! Your ultimate gyaru AI is here! 😎 Let's make this paper super sparkly together 💖

Time series (TS) analysis, leveled up with LLMs! ✨

Super summary: the secret to putting LLMs (large language models) to work on time series analysis! It's all about getting the data and the LLM to be besties 💕

✨ Gyaru Sparkle Points ✨

  • Using LLMs' language sense on TS data? Super fresh idea, right? ✨
  • DSCA-GNNs (what a long name!) align the structure and logic between the TS data and the prompt, which is amazing! 👀
  • You can totally picture this helping out in all kinds of fields, like finance and healthcare! 🌈


Context-Alignment: Activating and Enhancing LLM Capabilities in Time Series

Yuxiao Hu / Qian Li / Dongxiao Zhang / Jinyue Yan / Yuntian Chen

Recently, leveraging pre-trained Large Language Models (LLMs) for time series (TS) tasks has gained increasing attention, which involves activating and enhancing LLMs' capabilities. Many methods aim to activate LLMs' capabilities based on token-level alignment, but overlook LLMs' inherent strength in natural language processing: their deep understanding of linguistic logic and structure rather than superficial embedding processing. We propose Context-Alignment (CA), a new paradigm that aligns TS with a linguistic component in the language environments familiar to LLMs to enable LLMs to contextualize and comprehend TS data, thereby activating their capabilities. Specifically, such context-level alignment comprises structural alignment and logical alignment, which are achieved by Dual-Scale Context-Alignment GNNs (DSCA-GNNs) applied to TS-language multimodal inputs. Structural alignment utilizes dual-scale nodes to describe the hierarchical structure in TS-language, enabling LLMs to treat long TS data as a whole linguistic component while preserving intrinsic token features. Logical alignment uses directed edges to guide logical relationships, ensuring coherence in the contextual semantics. Following the DSCA-GNNs framework, we propose an instantiation method of CA, termed Few-Shot prompting Context-Alignment (FSCA), to enhance the capabilities of pre-trained LLMs in handling TS tasks. FSCA can be flexibly and repeatedly integrated into various layers of pre-trained LLMs to improve awareness of logic and structure, thereby enhancing performance. Extensive experiments show the effectiveness of FSCA and the importance of Context-Alignment across tasks, particularly in few-shot and zero-shot forecasting, confirming that Context-Alignment provides powerful prior knowledge on context. The code is open-sourced at https://github.com/tokaka22/ICLR25-FSCA.
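To make the dual-scale idea concrete, here is a minimal toy sketch of the kind of graph the abstract describes: fine-scale nodes for individual TS patch and prompt tokens, one coarse-scale node treating the whole TS as a single linguistic component, and directed edges enforcing reading-order logic. All names and the exact wiring here are illustrative assumptions, not the paper's actual code (see the linked repo for that).

```python
# Toy sketch of a dual-scale, directed Context-Alignment graph.
# Hypothetical structure for illustration only; not the DSCA-GNNs implementation.

def build_dual_scale_graph(ts_tokens, prompt_tokens):
    """Return (nodes, directed edges) for a toy TS-language graph.

    Fine scale : one node per TS patch token and per prompt token.
    Coarse scale: a single summary node treating the whole TS
                  as one linguistic component (structural alignment).
    Directed edges follow reading order through the prompt, with the
    summary node standing in for the TS span (logical alignment).
    """
    fine_nodes = [("ts", i) for i in range(len(ts_tokens))] + \
                 [("lang", i) for i in range(len(prompt_tokens))]
    coarse_node = ("ts_summary", 0)

    edges = []
    # Structural alignment: every TS token feeds the coarse summary node.
    for i in range(len(ts_tokens)):
        edges.append((("ts", i), coarse_node))
    # Logical alignment: summary node precedes the prompt tokens,
    # and prompt tokens chain in reading order.
    if prompt_tokens:
        edges.append((coarse_node, ("lang", 0)))
    for i in range(len(prompt_tokens) - 1):
        edges.append((("lang", i), ("lang", i + 1)))

    return fine_nodes + [coarse_node], edges

# Tiny example: 3 TS patches and a 3-token prompt.
nodes, edges = build_dual_scale_graph([0.1, 0.4, 0.3],
                                      ["forecast", "next", "value"])
```

In the paper's framework, a GNN would then pass messages over such a graph inside the LLM's layers; this sketch only shows the graph-building step the abstract's "dual-scale nodes" and "directed edges" refer to.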

cs / cs.LG / cs.CL / stat.AP