Published: 2025/11/8 1:53:01

RoMAE bursts onto the scene! Time-series data analysis made easy enough even for gyaru 💖

  1. A data-analysis revolution with RoMAE!
  2. An ultra-powerful AI that handles both time-series data and images
  3. An evolved Transformer, and easy to use too ◎

Here comes the detailed rundown~!

● Background: Analyzing time-series data (data that unfolds over time) is hard, right? 💦 Transformers (a type of model) are excellent, but some parts just don't fit time-series data... 🤔 And that's where RoMAE comes in!

Read the rest in the らくらく論文 app

Rotary Masked Autoencoders are Versatile Learners

Uros Zivanovic / Serafina Di Gioia / Andre Scaffidi / Martín de los Rios / Gabriella Contardo / Roberto Trotta

Applying Transformers to irregular time-series typically requires specializations to their baseline architecture, which can result in additional computational overhead and increased method complexity. We present the Rotary Masked Autoencoder (RoMAE), which utilizes the popular Rotary Positional Embedding (RoPE) method for continuous positions. RoMAE is an extension to the Masked Autoencoder (MAE) that enables interpolation and representation learning with multidimensional continuous positional information while avoiding any time-series-specific architectural specializations. We showcase RoMAE's performance on a variety of modalities including irregular and multivariate time-series, images, and audio, demonstrating that RoMAE surpasses specialized time-series architectures on difficult datasets such as the DESC ELAsTiCC Challenge while maintaining MAE's usual performance across other modalities. In addition, we investigate RoMAE's ability to reconstruct the embedded continuous positions, demonstrating that including learned embeddings in the input sequence breaks RoPE's relative position property.
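To make the core idea concrete, here is a minimal sketch (not the authors' code) of Rotary Positional Embedding applied with continuous positions, the mechanism RoMAE builds on: because RoPE rotates feature pairs by an angle proportional to position, the position can be any real-valued timestamp rather than an integer token index. The function name `rope_rotate`, its signature, and the toy timestamps below are illustrative assumptions, not the paper's API.

```python
# Minimal RoPE sketch with *continuous* positions (illustrative, not RoMAE's code).
import numpy as np

def rope_rotate(x, positions, base=10000.0):
    """Rotate consecutive feature pairs of x by angles theta_i * t.

    x:         (seq_len, d) query or key vectors, d even
    positions: (seq_len,) continuous positions, e.g. irregular timestamps
    """
    seq_len, d = x.shape
    # Standard RoPE frequencies: theta_i = base^(-2i/d)
    freqs = base ** (-np.arange(0, d, 2) / d)      # (d/2,)
    angles = positions[:, None] * freqs[None, :]   # (seq_len, d/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x_even, x_odd = x[:, 0::2], x[:, 1::2]
    # 2D rotation of each (even, odd) feature pair
    out = np.empty_like(x)
    out[:, 0::2] = x_even * cos - x_odd * sin
    out[:, 1::2] = x_even * sin + x_odd * cos
    return out

# Irregularly sampled timestamps work just like integer indices:
t = np.array([0.0, 0.7, 1.3, 4.2])          # irregular observation times
q = rope_rotate(np.random.randn(4, 8), t)   # rotated queries
k = rope_rotate(np.random.randn(4, 8), t)   # rotated keys
# The attention score q_i . k_j depends on positions only through the
# difference t_i - t_j: RoPE's relative-position property, which the
# paper shows is broken when learned embeddings enter the input sequence.
```

This is why the abstract can claim no time-series-specific architectural changes are needed: the irregular sampling is absorbed entirely into the rotation angles, leaving the Transformer and MAE machinery untouched.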

cs / cs.LG