Multivariate time series forecasting requires simultaneously modeling temporal patterns and cross-variate dependencies. Channel-independent methods such as PatchTST excel at temporal modeling but ignore variable correlations, while pure variate-attention approaches such as iTransformer sacrifice temporal encoding. We propose InvDec (Inverted Decoder), a hybrid architecture that achieves a principled separation between temporal encoding and variate-level decoding. InvDec combines a patch-based temporal encoder with an inverted decoder operating on the variate dimension through variate-wise self-attention. We introduce delayed variate embeddings that enrich variable-specific representations only after temporal encoding, preserving temporal feature integrity. An adaptive residual fusion mechanism dynamically balances temporal and variate information across datasets of varying dimensions. Instantiating InvDec with PatchTST yields InvDec-PatchTST. Extensive experiments on seven benchmarks demonstrate significant gains on high-dimensional datasets: a 20.9% MSE reduction on Electricity (321 variables), a 4.3% improvement on Weather, and a 2.7% gain on Traffic compared to PatchTST, while maintaining competitive performance on low-dimensional ETT datasets. Ablation studies validate each component, and analysis reveals that InvDec's advantage grows with dataset dimensionality, confirming that cross-variate modeling becomes critical as the number of variables increases.
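To make the described architecture concrete, below is a minimal PyTorch sketch of one plausible InvDec-style forward pass: a channel-independent patch encoder over time, a variate-wise self-attention decoder applied after temporal encoding, delayed variate embeddings, and a gated residual fusion. All module names, layer sizes, the mean pooling, and the scalar-gate fusion are illustrative assumptions, not the authors' InvDec-PatchTST implementation.

```python
# Illustrative sketch only; shapes and components are assumptions, not the paper's code.
import torch
import torch.nn as nn


class InvDecSketch(nn.Module):
    def __init__(self, n_vars, patch_len, stride, d_model, pred_len, n_heads=4):
        super().__init__()
        self.patch_len, self.stride = patch_len, stride
        # Patch-based temporal encoder (PatchTST-style, channel-independent):
        # each variable's patch sequence is encoded separately along time.
        self.patch_embed = nn.Linear(patch_len, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.temporal_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.temporal_head = nn.Linear(d_model, pred_len)  # per-variable forecast

        # Delayed variate embedding: added only AFTER temporal encoding,
        # so variate-specific information does not disturb temporal features.
        self.variate_embed = nn.Embedding(n_vars, pred_len)

        # Inverted decoder: self-attention over the variate dimension
        # (tokens = variables), capturing cross-variate dependencies.
        self.variate_proj = nn.Linear(pred_len, d_model)
        self.variate_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.variate_head = nn.Linear(d_model, pred_len)

        # Adaptive residual fusion: a learnable gate balancing temporal vs.
        # variate-level information (one plausible instantiation).
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x):                        # x: [batch, seq_len, n_vars]
        b, _, n_vars = x.shape
        # --- temporal encoding, one channel at a time ---
        xc = x.permute(0, 2, 1)                  # [b, n_vars, seq_len]
        patches = xc.unfold(-1, self.patch_len, self.stride)   # [b, n_vars, n_patch, patch_len]
        tok = self.patch_embed(patches)                          # [b, n_vars, n_patch, d_model]
        tok = tok.reshape(b * n_vars, -1, tok.size(-1))
        enc = self.temporal_encoder(tok).mean(dim=1)             # pooled temporal feature
        temporal_out = self.temporal_head(enc).view(b, n_vars, -1)  # [b, n_vars, pred_len]

        # --- inverted decoding over variates ---
        var_ids = torch.arange(n_vars, device=x.device)
        variate_tok = temporal_out + self.variate_embed(var_ids)    # delayed embedding
        v = self.variate_proj(variate_tok)                           # [b, n_vars, d_model]
        v, _ = self.variate_attn(v, v, v)                            # variate-wise attention
        variate_out = self.variate_head(v)                           # [b, n_vars, pred_len]

        # --- adaptive residual fusion ---
        alpha = torch.sigmoid(self.gate)
        fused = alpha * variate_out + (1 - alpha) * temporal_out
        return fused.permute(0, 2, 1)            # [b, pred_len, n_vars]
```

In this sketch the temporal branch alone reduces to a channel-independent forecaster, and the gate lets the variate-attention branch contribute more on high-dimensional datasets, which is consistent with the abstract's observation that InvDec's advantage grows with the number of variables.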