Published: 2025/12/26 2:24:17

Paper Evaluation Isn't Just About Content! The Author's "Network" Matters Too 💅💕 (Super Summary: When Evaluating AI Papers, Factor in the Network!)

🌟 Gyaru-Style Sparkle Points ✨

● A paper's citation count (how many times it gets cited) reflects not just the paper's content but also the authors' "connections"!
● IT companies can use these findings to build fairer evaluation criteria, which could make it easier to spot real talent ✨
● It's relevant to AI R&D and talent development too, so this is a super important story for the IT industry 💖

Detailed Explanation

Background: A paper's citation count (how often an academic paper, i.e. a write-up of research findings, gets cited) is treated as a key measure of how "great" the research is, right? 🤔 But in reality, it's influenced not just by the paper's content but also by how well-connected the authors are and by the name recognition of their organization (company or university) 💦 In other words, paper evaluation may well be biased!

Method: This study looked at papers in the AI field and examined the relationship between an author's "network centrality" (their position in the web of connections) and the paper's citation count! The authors also created a new metric called "HCTCD" (roughly, a measure of how strong people's ties are) 👩‍🔬 And they analyzed citation trends using a technique called beta regression! Sounds hard, but amazing 😎
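The exact HCTCD formula lives in the full paper, but the core idea, a harmonic closeness computed over edge costs that grow with a collaboration's age and shrink with how many papers the pair co-wrote, can be sketched in plain Python. The edge record format, the field names, and the `decay` parameter below are all hypothetical illustrations, not the authors' implementation.

```python
import heapq

def hctcd_sketch(edges, node, current_year, decay=0.9):
    """Hypothetical sketch of 'harmonic closeness with temporal and
    collaboration-count decay' (HCTCD). The paper defines the real
    formula; the record layout (u, v, year, count) is an assumption.

    edges: list of (author_u, author_v, last_collab_year, paper_count).
    """
    # Build an adjacency map with decayed edge costs: older and
    # weaker (fewer joint papers) collaborations cost more to cross.
    adj = {}
    for u, v, year, count in edges:
        age = max(current_year - year, 0)
        cost = 1.0 / (decay ** age * max(count, 1))
        adj.setdefault(u, []).append((v, cost))
        adj.setdefault(v, []).append((u, cost))

    # Dijkstra shortest paths from `node` over the decayed costs.
    dist = {node: 0.0}
    heap = [(0.0, node)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, c in adj.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))

    # Harmonic closeness: sum of inverse distances to reachable others.
    return sum(1.0 / d for n, d in dist.items() if n != node)
```

With this cost function, a recent, frequent collaboration is a "short" edge, so an author bridging fresh, strong ties scores higher than one whose links are old and thin.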

Read the rest in the「らくらく論文」app

Beyond Content: How Author Network Centrality Drives Citation Disparities in Top AI Conferences

Renlong Jie / Longfeng Zhao / Chen Chu / Danyang Jia / Zhen Wang

While scholarly citations are pivotal for assessing academic impact, they often reflect systemic biases beyond research quality. This study examines a critical yet underexplored driver of citation disparities: authors' structural positions within scientific collaboration networks. Through a large-scale analysis of 17,942 papers from three top-tier machine learning conferences (NeurIPS, ICML, ICLR) published between 2005 and 2024, we quantify the influence of author centrality on citations. Methodologically, we advance the field by employing beta regression to model citation percentiles, which appropriately accounts for the bounded nature of citation data. We also propose a novel centrality metric, Harmonic Closeness with Temporal and Collaboration Count Decay (HCTCD), which incorporates temporal decay and collaboration intensity. Our results robustly demonstrate that long-term centrality exerts a significantly stronger effect on citation percentiles than short-term metrics, with closeness centrality and HCTCD emerging as the most potent predictors. Importantly, team-level centrality aggregation, particularly through exponentially weighted summation, explains citation variance more effectively than conventional rank-based approaches, underscoring the primacy of collective network connectivity over individual prominence. Integrating centrality features into machine learning models yields a 2.4% to 4.8% reduction in prediction error (MSE), confirming their value beyond content-based benchmarks. These findings challenge entrenched evaluation paradigms and advocate for network-aware assessment frameworks to mitigate structural inequities in scientific recognition.
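The abstract's "exponentially weighted summation" for team-level centrality aggregation can be illustrated with a tiny sketch. The paper's actual weighting scheme is not reproduced here, so the decay base `alpha` and the descending-rank ordering are assumptions.

```python
def team_centrality(author_scores, alpha=0.5):
    """Hedged sketch of exponentially weighted team aggregation:
    sort authors by centrality and weight the i-th ranked score
    by alpha**i, so the whole team contributes but top-ranked
    authors dominate less sharply than a pure max would."""
    scores = sorted(author_scores, reverse=True)
    return sum(c * alpha ** i for i, c in enumerate(scores))
```

Unlike a rank-based approach that keeps only the best-connected author, every co-author's connectivity adds to the team score, which matches the abstract's point that collective network connectivity outweighs individual prominence.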

cs / cs.DL / cs.SI