Published: 2026/1/5 2:39:01

The ultimate gal AI has arrived~! 😎✨

VisionReward: image and video generation with a human touch, basically 💖

Super summary: AI learns human preferences and cranks out god-tier images and videos 💋

✨ Gal-style sparkle points ✨
● It analyzes human "likes" in fine detail! Almost like romance, right? 🥺
● You can see why the AI judged something "good"! So transparent ✨
● Videos work too! It can even rate how natural the motion is. Unbeatable ❣

Detailed explanation coming up!

Read the rest in the "Raku Raku Ronbun" app

VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning for Image and Video Generation

Jiazheng Xu / Yu Huang / Jiale Cheng / Yuanming Yang / Jiajun Xu / Yuan Wang / Wenbo Duan / Shen Yang / Qunlin Jin / Shurun Li / Jiayan Teng / Zhuoyi Yang / Wendi Zheng / Xiao Liu / Dan Zhang / Ming Ding / Xiaohan Zhang / Xiaotao Gu / Shiyu Huang / Minlie Huang / Jie Tang / Yuxiao Dong

Visual generative models have achieved remarkable progress in synthesizing photorealistic images and videos, yet aligning their outputs with human preferences across critical dimensions remains a persistent challenge. Though reinforcement learning from human feedback offers promise for preference alignment, existing reward models for visual generation face limitations, including black-box scoring without interpretability and the unexpected biases that can result from it. We present VisionReward, a general framework for learning human visual preferences in both image and video generation. Specifically, we employ a hierarchical visual assessment framework to capture fine-grained human preferences, and leverage linear weighting to enable interpretable preference learning. Furthermore, we propose a multi-dimensionally consistent strategy for using VisionReward as a reward model during preference optimization for visual generation. Experiments show that VisionReward significantly outperforms existing image and video reward models on both machine metrics and human evaluation. Notably, VisionReward surpasses VideoScore by 17.2% in preference prediction accuracy, and text-to-video models optimized with VisionReward achieve a 31.6% higher pairwise win rate than the same models using VideoScore. All code and datasets are provided at https://github.com/THUDM/VisionReward.
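To make the "interpretable linear weighting" idea concrete, here is a minimal sketch of how a reward could be computed as a weighted sum of fine-grained checklist judgments, so each weight shows how much a given check contributed to the score. The checklist questions, weights, and function names below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of VisionReward-style linear preference scoring.
# Assumption: each generation is judged on a fine-grained checklist of
# binary questions (1 = yes, 0 = no); the overall reward is a linear
# weighting of those answers. Questions and weights are made up here.

CHECKLIST = ["subject_clear", "motion_smooth", "no_artifacts", "prompt_aligned"]
WEIGHTS = {"subject_clear": 0.8, "motion_smooth": 1.2,
           "no_artifacts": 1.5, "prompt_aligned": 2.0}

def reward(answers: dict) -> float:
    """Weighted sum of binary checklist answers; each term is inspectable."""
    return sum(WEIGHTS[q] * answers[q] for q in CHECKLIST)

def prefer(answers_a: dict, answers_b: dict) -> str:
    """Predict which of two generations a human would prefer."""
    ra, rb = reward(answers_a), reward(answers_b)
    return "A" if ra > rb else "B" if rb > ra else "tie"

# Video A moves smoothly but has artifacts; video B is clean but jerky.
video_a = {"subject_clear": 1, "motion_smooth": 1, "no_artifacts": 0, "prompt_aligned": 1}
video_b = {"subject_clear": 1, "motion_smooth": 0, "no_artifacts": 1, "prompt_aligned": 1}
print(prefer(video_a, video_b))  # B (4.3 vs 4.0): artifacts weigh more than motion here
```

Because the score is a plain linear combination, you can read off exactly which checks drove a preference, which is the transparency property the abstract highlights over black-box scalar reward models.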

cs / cs.CV