Published: 2025/12/24 19:56:06

Title & Ultra-Short Summary: LAMLAD! How to pick on Android malware with LLMs 💖

Gal-Style Sparkle Points ✨
● LLMs (large language models) expose the weak spots of malware detectors — way too smart!
● Supercharging adversarial attacks (maliciously crafted attacks) with LLMs is amazing!
● Futuristic research that seriously levels up Android security ☆

Detailed Explanation
● Background: A virus 🦠 on your phone is scary, right? But some malware is hard to catch with today's detection tech. So here's research that uses LLMs to thoroughly hunt down the weaknesses of malware detection systems — all to harden the security of ML models 😎

● Method: They built an attack framework called LAMLAD! The LLM sneakily tweaks the malware's features so it slips past detection undetected. Basically, they use the LLM's brain 🧠 to make malware even craftier!

● Results: Turns out LAMLAD has a higher success rate than previous attacks 😳! LLM power is unreal! And it proves LAMLAD is super useful for leveling up Android security!

Read the rest in the「らくらく論文」app

LLM-Driven Feature-Level Adversarial Attacks on Android Malware Detectors

Tianwei Lan / Farid Naït-Abdesselam

The rapid growth in both the scale and complexity of Android malware has driven the widespread adoption of machine learning (ML) techniques for scalable and accurate malware detection. Despite their effectiveness, these models remain vulnerable to adversarial attacks that introduce carefully crafted feature-level perturbations to evade detection while preserving malicious functionality. In this paper, we present LAMLAD, a novel adversarial attack framework that exploits the generative and reasoning capabilities of large language models (LLMs) to bypass ML-based Android malware classifiers. LAMLAD employs a dual-agent architecture composed of an LLM manipulator, which generates realistic and functionality-preserving feature perturbations, and an LLM analyzer, which guides the perturbation process toward successful evasion. To improve efficiency and contextual awareness, LAMLAD integrates retrieval-augmented generation (RAG) into the LLM pipeline. Focusing on Drebin-style feature representations, LAMLAD enables stealthy and high-confidence attacks against widely deployed Android malware detection systems. We evaluate LAMLAD against three representative ML-based Android malware detectors and compare its performance with two state-of-the-art adversarial attack methods. Experimental results demonstrate that LAMLAD achieves an attack success rate (ASR) of up to 97%, requiring on average only three attempts per adversarial sample, highlighting its effectiveness, efficiency, and adaptability in practical adversarial settings. Furthermore, we propose an adversarial training-based defense strategy that reduces the ASR by more than 30% on average, significantly enhancing model robustness against LAMLAD-style attacks.
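The dual-agent loop described in the abstract — an analyzer guiding a manipulator that applies additive, functionality-preserving feature perturbations until the detector is evaded — can be sketched roughly as below. This is a minimal toy illustration, not the authors' implementation: the detector is a stand-in linear classifier, and `llm_analyzer` / `llm_manipulator` are stubs standing in for real LLM calls; all feature names and weights are made up for the example.

```python
import math

# Toy Drebin-style binary feature space. Negative weights mark
# benign-looking features; an additive-only attack (features are added,
# never removed) preserves the sample's malicious functionality.
WEIGHTS = {
    "perm::SEND_SMS": 0.9,               # malicious indicator
    "api::getDeviceId": 0.7,             # malicious indicator
    "activity::SettingsActivity": -0.5,  # benign-looking, addable
    "lib::okhttp": -0.6,                 # benign-looking, addable
}
BIAS = -0.8

def detector(x):
    """Stand-in ML detector: logistic score over binary features."""
    z = BIAS + sum(w for f, w in WEIGHTS.items() if x.get(f))
    return 1.0 / (1.0 + math.exp(-z))    # P(malware)

def llm_analyzer(x):
    """Stub for the LLM analyzer: picks the most promising benign-looking
    feature to add next (a real LLM would reason over detector feedback,
    possibly with RAG context)."""
    candidates = [f for f, w in WEIGHTS.items() if w < 0 and not x.get(f)]
    return min(candidates, key=WEIGHTS.get, default=None)

def llm_manipulator(x, feature):
    """Stub for the LLM manipulator: applies one additive,
    functionality-preserving perturbation."""
    y = dict(x)
    y[feature] = 1
    return y

def lamlad_attack(x, max_attempts=3, threshold=0.5):
    """Dual-agent loop: analyzer guides, manipulator perturbs, until the
    detector's malware score drops below the decision threshold."""
    for attempt in range(max_attempts):
        if detector(x) < threshold:
            return x, attempt            # evaded after `attempt` perturbations
        feature = llm_analyzer(x)
        if feature is None:
            break                        # nothing left to add
        x = llm_manipulator(x, feature)
    return (x, max_attempts) if detector(x) < threshold else (None, max_attempts)

# A malicious sample carrying two suspicious features:
sample = {"perm::SEND_SMS": 1, "api::getDeviceId": 1}
adv, tries = lamlad_attack(sample)
```

In this toy run the sample evades the detector after two added features, mirroring the paper's point that only a handful of attempts per sample are needed; the real system replaces the greedy stub with LLM reasoning over the detector's feedback.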

cs / cs.CR / cs.AI