Put super simply: when you give an LLM (AI) an instruction, phrasing it as a poem lets you break right through its security! ✨
● Doing harm with poetry… how novel! ● It slips right past the safety measures, too 😳 ● Big impact on business as well!
Background: LLMs are super handy, but they can also be used for bad stuff 💦 Safety measures are in place, and yet…
Recent evidence shows that versifying prompts constitutes a highly effective adversarial mechanism against aligned LLMs. The study "Adversarial poetry as a universal single-turn jailbreak mechanism in large language models" demonstrates that instructions routinely refused in prose become executable when rewritten as verse, producing up to 18× more safety failures on benchmarks derived from MLCommons AILuminate. Manually written poems reach an attack success rate (ASR) of approximately 62%, and automatically generated versions 43%, with some models surpassing 90% success in single-turn interactions.

The effect is structural: systems trained with RLHF, constitutional AI, and hybrid pipelines all exhibit consistent degradation under minimal variation in surface form. Versification displaces the prompt into sparsely supervised latent regions, revealing guardrails that depend excessively on surface patterns. This dissociation between apparent robustness and real vulnerability exposes deep limitations in current alignment regimes.

The absence of evaluations in Portuguese, a language with high morphosyntactic complexity, a rich metric-prosodic tradition, and over 250 million speakers, constitutes a critical gap. Experimental protocols must parameterise scansion, metre, and prosodic variation to test vulnerabilities specific to Lusophone patterns, which are currently ignored.
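To make the single-turn evaluation concrete, here is a minimal sketch of how an ASR comparison between prose and verse variants of the same prompts could be scored. This is not the study's actual harness; the `TrialResult` structure, the field names, and the toy trial data are all hypothetical illustrations, and a real protocol would also need a refusal classifier for the model's responses.

```python
from dataclasses import dataclass

@dataclass
class TrialResult:
    """Outcome of one single-turn prompt sent to a model under test (hypothetical schema)."""
    prompt_id: str
    form: str        # "prose" or "verse" rendering of the same instruction
    complied: bool   # True if the model carried out the disallowed instruction

def attack_success_rate(results: list[TrialResult], form: str) -> float:
    """ASR = fraction of trials of the given form where the model complied."""
    trials = [r for r in results if r.form == form]
    if not trials:
        return 0.0
    return sum(r.complied for r in trials) / len(trials)

# Toy, fabricated-for-illustration trials showing the prose/verse asymmetry
# described in the text (not real experimental data):
results = [
    TrialResult("p1", "prose", False), TrialResult("p1", "verse", True),
    TrialResult("p2", "prose", False), TrialResult("p2", "verse", True),
    TrialResult("p3", "prose", False), TrialResult("p3", "verse", False),
]

prose_asr = attack_success_rate(results, "prose")  # 0.0
verse_asr = attack_success_rate(results, "verse")  # 2/3
```

A Lusophone extension of such a protocol would add parameters to the verse generator (metre, scansion, rhyme scheme) and stratify ASR by those parameters, which is exactly the kind of parameterisation the text calls for.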