Ultra-short summary: this is about value-centered AI development where AI and humans grow together 💖
✨ Sparkle points ✨ ● The AI wants to get along with humans! 🤝 ● Bye-bye, bias 👋, transparency up ⤴️ ● A future where everyone is happy with AI 🙌
Here come the details!
Background: AI is amazing, but sometimes you're like, "Wait, is that really OK?" 🤔 This research studies "alignment," a method for AI and humans to grow together so AI doesn't drift away from human values!
The rapid integration of generative AI into everyday life underscores the need to move beyond unidirectional alignment models that only adapt AI to human values. This workshop focuses on bidirectional human-AI alignment: a dynamic, reciprocal process in which humans and AI co-adapt through interaction, evaluation, and value-centered design. Building on our past CHI 2025 BiAlign SIG and ICLR 2025 Workshop, this workshop will bring together interdisciplinary researchers from HCI, AI, the social sciences, and other domains to advance value-centered AI and reciprocal human-AI collaboration. We focus on embedding human and societal values into alignment research, emphasizing not only steering AI toward human values but also enabling humans to critically engage with and evolve alongside AI systems. Through talks, interdisciplinary discussions, and collaborative activities, participants will explore methods for interactive alignment, frameworks for evaluating societal impact, and strategies for alignment in dynamic contexts. This workshop aims to bridge disciplinary gaps and establish a shared agenda for responsible, reciprocal human-AI futures.