1. Gal-Style Sparkle Points✨
● It can now judge content that might be fake (authenticity unknown): anything suspicious gets settled once and for all!💖
● It's tough against attacks from adversaries (malicious actors): an AI with maxed-out defense is unbeatable, right?😎
● It handles video, not just images: works across all kinds of media, seriously awesome!
2. Detailed Explanation
● Background: Today's AI is really good at producing fakes that look just like the real thing (deepfakes)!😱 That's a problem, so techniques that can tell real from fake are in demand!
● Method: The AI tries to regenerate the image or video "one more time"!🎨 Then it scores how similar the result is to the original with something called the "A-index"! Similar means genuine; totally different means suspicious🤔
● Results: It can now separate real from fake with impressive accuracy!✨ And even when adversaries attack trying to fool it, they apparently have a very hard time succeeding!
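The resynthesis idea above can be sketched in a few lines. This is a toy illustration, not the paper's actual pipeline: `resynthesize`, the MSE-based `a_index`, and the 0.9 threshold are all hypothetical stand-ins, assuming (as the summary says) that a high similarity between a sample and its regenerated version counts as evidence of authenticity.

```python
import numpy as np

def resynthesize(image: np.ndarray) -> np.ndarray:
    # Stand-in for a generative model's inversion + regeneration step.
    # A real system would invert the image into the model's latent space
    # and decode it back; here we just add small noise for illustration.
    rng = np.random.default_rng(0)
    return image + rng.normal(scale=0.01, size=image.shape)

def a_index(original: np.ndarray, resynthesis: np.ndarray) -> float:
    # Hypothetical similarity score in (0, 1]: 1.0 = perfectly reconstructed.
    mse = float(np.mean((original - resynthesis) ** 2))
    return 1.0 / (1.0 + mse)

def is_plausibly_authentic(image: np.ndarray, threshold: float = 0.9) -> bool:
    # Per the summary: a resynthesis close to the input suggests the sample
    # is genuine; a very different one flags it as suspicious.
    return a_index(image, resynthesize(image)) >= threshold
```

The threshold is what makes the decision tunable: raising it trades recall for precision, which matches the high-precision, low-recall setting described in the abstract below.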
Generative models can synthesize highly realistic content, so-called deepfakes, that are already being misused at scale to undermine digital media authenticity. Current deepfake detection methods are unreliable for two reasons: (i) distinguishing inauthentic content post-hoc is often impossible (e.g., with memorized samples), leading to an unbounded false positive rate (FPR); and (ii) detection lacks robustness, as adversaries can adapt to known detectors with near-perfect accuracy using minimal computational resources. To address these limitations, we propose a resynthesis framework to determine if a sample is authentic or if its authenticity can be plausibly denied. We make two key contributions focusing on the high-precision, low-recall setting against efficient (i.e., compute-restricted) adversaries. First, we demonstrate that our calibrated resynthesis method is the most reliable approach for verifying authentic samples while maintaining controllable, low FPRs. Second, we show that our method achieves adversarial robustness against efficient adversaries, whereas prior methods are easily evaded under identical compute budgets. Our approach supports multiple modalities and leverages state-of-the-art inversion techniques.
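The abstract's claim of "controllable, low FPRs" amounts to calibrating a decision threshold before deployment. A minimal sketch of quantile-based calibration, assuming a synthetic uniform score distribution stands in for held-out null (non-authentic) calibration scores; `calibrate_threshold` is an illustrative helper, not the paper's procedure:

```python
import numpy as np

def calibrate_threshold(null_scores: np.ndarray, target_fpr: float) -> float:
    # Choose the threshold so that at most `target_fpr` of the null
    # calibration scores exceed it; a sample is then verified as
    # authentic only if its score clears this threshold.
    return float(np.quantile(null_scores, 1.0 - target_fpr))

# Synthetic calibration set standing in for scores of non-authentic samples.
rng = np.random.default_rng(42)
null_scores = rng.uniform(0.0, 1.0, size=10_000)

tau = calibrate_threshold(null_scores, target_fpr=0.01)
empirical_fpr = float(np.mean(null_scores > tau))
```

Because the threshold is set from the null-score quantile, the false positive rate stays bounded by the chosen target regardless of how adversarial samples are distributed below it.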