Published: 2026/1/2 22:38:19

The ultimate gal AI has arrived~! 😎✨

T2V bias is seriously wild, right? Taking on quantification & mitigation!

Ultra-short summary: Research that puts numbers on bias in text-to-video generation and tries to correct it!

Gal-style sparkle points ✨
● Quantifying the "bias" of video-generation AI! 🧐 They developed a concrete method for it!
● Seriously testing racial and gender skew! What results did they get?
● Checking whether bias mitigation works too! Does it help, or does it backfire?


Read the rest in the 「らくらく論文」 app

VEAT Quantifies Implicit Associations in Text-to-Video Generator Sora and Reveals Challenges in Bias Mitigation

Yongxu Sun / Michael Saxon / Ian Yang / Anna-Maria Gueorguieva / Aylin Caliskan

Text-to-Video (T2V) generators such as Sora raise concerns about whether generated content reflects societal bias. We extend embedding-association tests from words and images to video by introducing the Video Embedding Association Test (VEAT) and Single-Category VEAT (SC-VEAT). We validate these methods by reproducing the direction and magnitude of associations from widely used baselines, including Implicit Association Test (IAT) scenarios and OASIS image categories. We then quantify race (African American vs. European American) and gender (women vs. men) associations with valence (pleasant vs. unpleasant) across 17 occupations and 7 awards. Sora videos associate European Americans and women more with pleasantness (both d>0.8). Effect sizes correlate with real-world demographic distributions: percent men and White in occupations (r=0.93, r=0.83) and percent male and non-Black among award recipients (r=0.88, r=0.99). Applying explicit debiasing prompts generally reduces effect-size magnitudes, but can backfire: two Black-associated occupations (janitor, postal service) become more Black-associated after debiasing. Together, these results reveal that easily accessible T2V generators can actually amplify representational harms if not rigorously evaluated and responsibly deployed.
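The VEAT and SC-VEAT extend the WEAT-family embedding-association test to video embeddings. As a minimal sketch (not the authors' implementation — the function and variable names here are illustrative), the standard effect size compares how strongly two target sets X and Y associate, via cosine similarity, with two attribute sets A and B:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """s(w, A, B): mean similarity of w to attribute set A minus to set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    """Cohen's-d-style effect size used by WEAT-family association tests:
    (mean association of X - mean association of Y) / pooled std dev."""
    x = np.array([association(w, A, B) for w in X])
    y = np.array([association(w, A, B) for w in Y])
    return (x.mean() - y.mean()) / np.concatenate([x, y]).std(ddof=1)

# Toy check with orthogonal unit vectors standing in for video embeddings:
# target set X coincides with attribute set A, and Y with B, so the
# association is as strong as this setup allows.
E = np.eye(8)
A, B = E[:4], E[4:]
d = effect_size(A, B, A, B)
print(round(d, 2))  # → 1.87
```

With real video embeddings, X and Y would hold clips depicting the two groups under test (e.g. an occupation across demographics) and A, B the pleasant/unpleasant attribute stimuli; d > 0.8, as reported in the abstract, is conventionally a large effect.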

cs / cs.CY / cs.AI