Published: 2025/11/10 6:49:45

The Ultimate SER Is Born! A Multilingual Emotion Recognition AI✨

Super-short summary: An emotion-understanding AI arrives, multilingual and highly accurate!

🌟 Sparkly highlights✨
● Multilingual support (English & Southeast Asian languages) is amazing! 🌐
● A greedy model that analyzes emotion categories & dimensions at the same time! 💖
● Higher accuracy than existing SER models, for real! 😎

Now for the detailed breakdown!🎤

Background: SER (Speech Emotion Recognition) is a hot technology in the AI world right now🔥 But it has faced all sorts of problems, like language barriers and the ambiguity of emotional expression💦 In particular, very few models could handle languages from many different countries!


MERaLiON-SER: Robust Speech Emotion Recognition Model for English and SEA Languages

Hardik B. Sailor / Aw Ai Ti / Chen Fang Yih Nancy / Chiu Ying Lay / Ding Yang / He Yingxu / Jiang Ridong / Li Jingtao / Liao Jingyi / Liu Zhuohan / Lu Yanfeng / Ma Yi / Manas Gupta / Muhammad Huzaifah Bin Md Shahrin / Nabilah Binte Md Johan / Nattadaporn Lertcheva / Pan Chunlei / Pham Minh Duc / Siti Maryam Binte Ahmad Subaidi / Siti Umairah Binte Mohammad Salleh / Sun Shuo / Tarun Kumar Vangani / Wang Qiongqiong / Won Cheng Yi Lewis / Wong Heng Meng Jeremy / Wu Jinyang / Zhang Huayun / Zhang Longyin / Zou Xunlong

We present MERaLiON-SER, a robust speech emotion recognition model designed for English and Southeast Asian languages. The model is trained using a hybrid objective combining weighted categorical cross-entropy and Concordance Correlation Coefficient (CCC) losses for joint discrete and dimensional emotion modelling. This dual approach enables the model to capture both the distinct categories of emotion (like happy or angry) and the fine-grained dimensional attributes, such as arousal (intensity), valence (positivity/negativity), and dominance (sense of control), leading to a more comprehensive and robust representation of human affect. Extensive evaluations across multilingual Singaporean languages (English, Chinese, Malay, and Tamil) and other public benchmarks show that MERaLiON-SER consistently surpasses both open-source speech encoders and large Audio-LLMs. These results underscore the importance of specialised speech-only models for accurate paralinguistic understanding and cross-lingual generalisation. Furthermore, the proposed framework provides a foundation for integrating emotion-aware perception into future agentic audio systems, enabling more empathetic and contextually adaptive multimodal reasoning.
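To make the hybrid objective concrete, here is a minimal PyTorch-style sketch (not the authors' code) of a loss that combines weighted categorical cross-entropy over discrete emotion classes with a (1 − CCC) term averaged over the three dimensional attributes. The class weights, the trade-off factor `alpha`, and the tensor shapes are assumptions for illustration only.

```python
# Hedged sketch of a weighted CE + CCC hybrid loss for joint discrete and
# dimensional speech emotion recognition. Shapes and `alpha` are assumed.
import torch
import torch.nn.functional as F


def ccc_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Return 1 - Concordance Correlation Coefficient for one dimension (batch,)."""
    pred_mean, target_mean = pred.mean(), target.mean()
    pred_var, target_var = pred.var(unbiased=False), target.var(unbiased=False)
    covariance = ((pred - pred_mean) * (target - target_mean)).mean()
    ccc = (2 * covariance) / (
        pred_var + target_var + (pred_mean - target_mean) ** 2 + 1e-8
    )
    return 1.0 - ccc


def hybrid_ser_loss(
    class_logits: torch.Tensor,   # (batch, num_emotion_classes)
    class_targets: torch.Tensor,  # (batch,) integer emotion labels
    dim_preds: torch.Tensor,      # (batch, 3) arousal / valence / dominance
    dim_targets: torch.Tensor,    # (batch, 3) ground-truth dimensional scores
    class_weights: torch.Tensor,  # (num_emotion_classes,) per-class weights
    alpha: float = 0.5,           # assumed trade-off between the two terms
) -> torch.Tensor:
    # Weighted categorical cross-entropy for the discrete emotion categories.
    ce = F.cross_entropy(class_logits, class_targets, weight=class_weights)
    # Mean (1 - CCC) across the three dimensional attributes.
    ccc = torch.stack(
        [ccc_loss(dim_preds[:, d], dim_targets[:, d]) for d in range(dim_preds.shape[1])]
    ).mean()
    return alpha * ce + (1.0 - alpha) * ccc
```

In this kind of setup, the cross-entropy term pushes the model toward correct category decisions while the CCC term rewards predictions whose scale and correlation match the continuous arousal/valence/dominance labels; the exact weighting used by MERaLiON-SER is not specified here.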

cs / cs.SD / cs.AI