Super-short summary: This is about an AI that learns sarcasm so it can spot hidden hate speech!
✨ Gal-Style Sparkle Points ✨
● An AI that learns sarcasm? Isn't that kind of hilarious? 🤣 There's something sneakily clever about it that I love!
● If it can catch hidden hate speech, the internet gets way safer 💖 Let's all have fun on social media together!
● Big points for being tech that IT companies can actually use! Useful for business too? Unbeatable, right? 😎
Detecting hate speech in non-direct forms, such as irony, sarcasm, and innuendos, remains a persistent challenge for social networks. Although sarcasm and hate speech are regarded as distinct expressions, our work explores whether integrating sarcasm as a pre-training step improves implicit hate speech detection and, by extension, explicit hate speech detection. Incorporating samples from ETHOS, Sarcasm on Reddit, and the Implicit Hate Corpus, we devised two training strategies to compare the effectiveness of sarcasm pre-training on CNN+LSTM and BERT+BiLSTM models. The first strategy is a single-step training approach, where a model trained only on sarcasm is then tested on hate speech. The second strategy uses sequential transfer learning to fine-tune models for sarcasm, implicit hate, and explicit hate in turn. Our results show that sarcasm pre-training improved the BERT+BiLSTM's recall by 9.7%, AUC by 7.8%, and F1-score by 6% on ETHOS. On the Implicit Hate Corpus, precision increased by 7.8% when tested only on implicit samples. By incorporating sarcasm into the training process, we show that models can more effectively detect both implicit and explicit hate.
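To make the two training strategies concrete, here is a minimal PyTorch sketch of a BERT+BiLSTM classifier and the sequential transfer-learning schedule (sarcasm → implicit hate → explicit hate). This is an illustration under stated assumptions, not the authors' released code: the `bert-base-uncased` checkpoint, hidden sizes, optimizer settings, and the `*_loader` DataLoaders (which would be built from Sarcasm on Reddit, the Implicit Hate Corpus, and ETHOS) are all hypothetical choices.

```python
# Hypothetical sketch of a BERT+BiLSTM classifier and the sequential
# transfer-learning schedule described in the abstract. All hyperparameters
# and data loaders are assumptions, not the authors' exact configuration.
import torch
import torch.nn as nn
from transformers import AutoModel


class BertBiLSTMClassifier(nn.Module):
    """BERT encoder, a BiLSTM over the token embeddings, and a linear head."""

    def __init__(self, model_name: str = "bert-base-uncased",
                 lstm_hidden: int = 128, num_labels: int = 2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.lstm = nn.LSTM(
            input_size=self.bert.config.hidden_size,
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        # 2 * lstm_hidden because forward and backward states are concatenated.
        self.head = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from BERT: (batch, seq_len, hidden).
        token_states = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Final forward (h_n[-2]) and backward (h_n[-1]) hidden states
        # of the single-layer BiLSTM form the sentence representation.
        _, (h_n, _) = self.lstm(token_states)
        sentence_repr = torch.cat([h_n[-2], h_n[-1]], dim=-1)
        return self.head(sentence_repr)


def fine_tune(model, loader, epochs: int = 3, lr: float = 2e-5):
    """One fine-tuning stage; called once per task in the sequential schedule."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, labels in loader:
            optimizer.zero_grad()
            logits = model(input_ids, attention_mask)
            loss_fn(logits, labels).backward()
            optimizer.step()


# Sequential transfer learning: the same weights are fine-tuned stage by stage.
# sarcasm_loader / implicit_hate_loader / explicit_hate_loader are hypothetical
# DataLoaders over Sarcasm on Reddit, the Implicit Hate Corpus, and ETHOS.
# model = BertBiLSTMClassifier()
# for loader in (sarcasm_loader, implicit_hate_loader, explicit_hate_loader):
#     fine_tune(model, loader)
```

Under this reading, the single-step strategy corresponds to running only the sarcasm stage of `fine_tune` before evaluating on hate speech, while the sequential strategy carries the same weights through all three stages.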