Published: 2025/12/24 5:34:05

Trap scammers with an LLM! Say hello to the chatbot LURE ☆

TL;DR: Use an LLM (large language model) to flush out chat scams!

✨ Sparkly Gal Points ✨
● Uses an LLM to chat just like a human and turn the tables on scammers 💖
● Might even capture scammers' inside info (like their behavior patterns) 🎵
● Companies get stronger security, customers stay safe, total win-win, right? 😊

Here come the details~!

Background
Chat apps (messaging tools) are super handy, but they've also become breeding grounds for scams! They're hard to defend against, which is a real headache 😭

Method
The authors used an LLM to build a chatbot that gets scammers to "take the bait"! It poses as a human in conversation, so scammers let their guard down 😏

Results
It succeeded in gathering intel on scammers and blocking crimes ✨ Truly an Imitation Game!

Significance (the killer ♡ point)
Security defenses level up, and both companies and users come out happy! A groundbreaking idea 👏

Read the rest in the "Raku-Raku Ronbun" app

The Imitation Game: Using Large Language Models as Chatbots to Combat Chat-Based Cybercrimes

Yifan Yao / Baojuan Wang / Jinhao Duan / Kaidi Xu / ChuanKai Guo / Zhibo Eric Sun / Yue Zhang

Chat-based cybercrime has emerged as a pervasive threat, with attackers leveraging real-time messaging platforms to conduct scams that rely on trust-building, deception, and psychological manipulation. Traditional defense mechanisms, which operate on static rules or shallow content filters, struggle to identify these conversational threats, especially when attackers use multimedia obfuscation and context-aware dialogue. In this work, we ask a provocative question inspired by the classic Imitation Game: Can machines convincingly pose as human victims to turn deception against cybercriminals? We present LURE (LLM-based User Response Engagement), the first system to deploy Large Language Models (LLMs) as active agents, not as passive classifiers, embedded within adversarial chat environments. LURE combines automated discovery, adversarial interaction, and OCR-based analysis of image-embedded payment data. Applied to the setting of illicit video chat scams on Telegram, our system engaged 53 actors across 98 groups. In over 56 percent of interactions, the LLM maintained multi-round conversations without being noticed as a bot, effectively "winning" the imitation game. Our findings reveal key behavioral patterns in scam operations, such as payment flows, upselling strategies, and platform migration tactics.
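The pipeline the abstract describes, an LLM posing as a human victim in multi-round chats plus extraction of payment details from image text, can be sketched roughly as below. This is a minimal illustrative mock, not the paper's implementation: the LLM call is stubbed with a canned-response table, the OCR step is replaced by plain text already extracted from a screenshot, and all class, function, and handle names here are assumptions.

```python
import re
from dataclasses import dataclass, field

@dataclass
class LureAgent:
    """Toy stand-in for an LLM 'victim' agent (illustrative only).

    A real deployment would call an LLM API to generate each reply in
    persona; a keyword-keyed canned-response table keeps this sketch
    self-contained and runnable.
    """
    history: list = field(default_factory=list)
    # Hypothetical victim-persona replies that stall the scammer and
    # elicit more information (keys are scam-message keywords).
    canned: dict = field(default_factory=lambda: {
        "pay": "Hmm, which payment app should I use?",
        "link": "The link won't open on my phone, can you resend it?",
    })

    def reply(self, scammer_msg: str) -> str:
        self.history.append(("scammer", scammer_msg))
        for key, resp in self.canned.items():
            if key in scammer_msg.lower():
                out = resp
                break
        else:
            out = "Sorry, could you explain that again?"
        self.history.append(("agent", out))
        return out

def extract_payment_handles(ocr_text: str) -> list:
    """Stand-in for the OCR-analysis step: pull payment handles out of
    text assumed to have already been OCR'd from a screenshot."""
    return re.findall(r"@[A-Za-z0-9_]+", ocr_text)

# Minimal usage: keep the conversation going, then mine the payment data.
agent = LureAgent()
agent.reply("Click this link to pay for the private stream")
handles = extract_payment_handles("Send $50 to @scam_wallet via CashApp")
```

The design point is the role reversal the paper highlights: the model is an active conversational agent whose replies are crafted to prolong engagement and surface operational details (payment flows, platform moves), rather than a passive classifier scoring messages.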

cs / cs.CR