Ultra-short summary: boost your regression model with pairwise comparisons! Works even with little data 🙆♀️
🌟 Gyaru-style sparkle points ✨ ● Big accuracy gains even with little data ⤴️ because expert knowledge comes in as pairwise rankings ✨ ● Just bolt it onto your existing model! No retraining needed, so it's super easy to start, right? 😎 ● Pairwise comparisons can be auto-generated by an LLM (large language model)! Saves time too~ 🎉
Now for the detailed explanation~!
Background: Deep-learning regression models are unbeatable when you have tons of data, but they fall flat when data is scarce 🥺 And in the IT industry, data shortages are a constant headache, right? Evaluating a new product's performance, predicting customer behavior… 💦
Accurate prediction of continuous properties is essential to many scientific and engineering tasks. Although deep-learning regressors excel with abundant labels, their accuracy deteriorates in data-scarce regimes. We introduce RankRefine, a model-agnostic, plug-and-play post hoc method that refines regression with expert knowledge in the form of pairwise rankings. Given a query item and a small reference set with known properties, RankRefine combines the base regressor's output with a rank-based estimate via inverse-variance weighting, requiring no retraining. In a molecular property prediction task, RankRefine achieves up to a 10% relative reduction in mean absolute error using only 20 pairwise comparisons obtained through a general-purpose large language model (LLM) with no finetuning. As rankings provided by human experts or general-purpose LLMs are sufficient to improve regression across diverse domains, RankRefine offers practicality and broad applicability, especially in low-data settings.
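To make the abstract's mechanism concrete, here is a minimal sketch of the two ingredients it names: a rank-based estimate built from pairwise comparisons against a reference set, and an inverse-variance-weighted combination with the base regressor's output. This is not the paper's actual implementation; the placement rule in `rank_based_estimate` (slotting the query between sorted reference values by win count) and all function names are illustrative assumptions.

```python
import numpy as np

def rank_based_estimate(comparisons, ref_values):
    """Estimate the query's property value from pairwise rankings.

    comparisons[i] is True if the query is ranked above reference i.
    Hypothetical simplification: place the query between the sorted
    reference values according to how many references it beats.
    """
    ref_sorted = np.sort(np.asarray(ref_values, dtype=float))
    k = int(np.sum(comparisons))  # number of references the query exceeds
    if k == 0:
        return ref_sorted[0]
    if k == len(ref_sorted):
        return ref_sorted[-1]
    # midpoint between the two neighbouring reference values
    return 0.5 * (ref_sorted[k - 1] + ref_sorted[k])

def rank_refine(y_reg, var_reg, y_rank, var_rank):
    """Inverse-variance weighting of regressor output and rank estimate."""
    w_reg, w_rank = 1.0 / var_reg, 1.0 / var_rank
    return (w_reg * y_reg + w_rank * y_rank) / (w_reg + w_rank)
```

With equal variances the refined prediction is simply the average of the two estimates; a more confident rank estimate (smaller `var_rank`) pulls the result toward it, which is how only ~20 LLM-provided comparisons can shift a weak regressor's output.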