Published: 2025/12/3 12:34:47

AR-Med: Supercharging Medical Search with LLMs 🚀

TL;DR: LLMs massively boost medical search accuracy, and reliability is on point too ◎


Hey gals! The ultimate AI is here to break down the latest papers all cute-like 💖

● LLMs (large language models) make search smarter! 🤖✨
● Hooked up to a medical knowledge database, so accuracy is on point 👌
● Keeps costs down while staying reliable, too!

Read the rest in the 「らくらく論文」 app

AR-Med: Automated Relevance Enhancement in Medical Search via LLM-Driven Information Augmentation

Chuyue Wang / Jie Feng / Yuxi Wu / Hang Zhang / Zhiguo Fan / Bing Cheng / Wei Lin

Accurate and reliable search on online healthcare platforms is critical for user safety and service efficacy. Traditional methods, however, often fail to comprehend complex and nuanced user queries, limiting their effectiveness. Large language models (LLMs) present a promising solution, offering powerful semantic understanding to bridge this gap. Despite their potential, deploying LLMs in this high-stakes domain is fraught with challenges, including factual hallucinations, specialized knowledge gaps, and high operational costs. To overcome these barriers, we introduce AR-Med, a novel framework for Automated Relevance assessment for Medical search that has been successfully deployed at scale on online medical delivery platforms. AR-Med grounds LLM reasoning in verified medical knowledge through a retrieval-augmented approach, ensuring high accuracy and reliability. To enable efficient online service, we design a practical knowledge distillation scheme that compresses large teacher models into compact yet powerful student models. We also introduce LocalQSMed, a multi-expert annotated benchmark developed to guide model iteration and ensure strong alignment between offline and online performance. Extensive experiments show AR-Med achieves an offline accuracy of over 93%, a 24% absolute improvement over the original online system, and delivers significant gains in online relevance and user satisfaction. Our work presents a practical and scalable blueprint for developing trustworthy, LLM-powered systems in real-world healthcare applications.
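
The abstract describes grounding the LLM's relevance judgments in retrieved, verified medical knowledge before serving a distilled student model online. Below is a minimal Python sketch of that retrieval-augmented prompting idea; the toy knowledge base, the term-overlap retriever, and the names `retrieve_medical_facts` and `build_relevance_prompt` are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of retrieval-augmented relevance assessment (hypothetical names,
# not AR-Med's real code). A query/candidate pair is judged by an LLM whose prompt
# is grounded in retrieved medical knowledge snippets.

from dataclasses import dataclass


@dataclass
class KnowledgeSnippet:
    title: str
    text: str


# Tiny in-memory stand-in for the verified medical knowledge base.
KNOWLEDGE_BASE = [
    KnowledgeSnippet("Ibuprofen", "Ibuprofen is an NSAID used to relieve pain and fever."),
    KnowledgeSnippet("Amoxicillin", "Amoxicillin is an antibiotic for bacterial infections."),
]


def retrieve_medical_facts(query: str, k: int = 2) -> list:
    """Rank snippets by simple term overlap with the query (placeholder for a real retriever)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda s: len(q_terms & set(s.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_relevance_prompt(query: str, item_title: str) -> str:
    """Compose a knowledge-grounded prompt; in deployment this would go to a compact student LLM."""
    facts = "\n".join(f"- {s.title}: {s.text}" for s in retrieve_medical_facts(query))
    return (
        "You are a medical search relevance judge.\n"
        f"Verified knowledge:\n{facts}\n\n"
        f"Query: {query}\n"
        f"Candidate item: {item_title}\n"
        "Answer with one label: relevant / partially relevant / irrelevant."
    )


if __name__ == "__main__":
    print(build_relevance_prompt("medicine for fever and headache", "Ibuprofen 200mg tablets"))
```

In a setup like this, the large teacher LLM would label query/item pairs with such grounded prompts, and those labels would then train the compact student model that actually serves online traffic, which is how the abstract's accuracy and cost goals can coexist.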

cs / cs.CL / cs.IR