Published: 2025/12/25 18:54:16

Spotting the risky spots in generative music models! A brand-new business chance for IT companies ✨

Ultra-short summary: Research on protecting copyright and privacy for AI-generated music! It could even be a business ☆

Gyaru-style sparkle points ✨

● They test how well a sneaky attack (membership inference, MIA) works on a generative music model (MuseGAN)! Scary 😱

● It can check for risks of copyright infringement (like rip-offs) and personal-data leaks (a user's music getting exposed!)

Read the rest in the「らくらく論文」app

Assessing the Effectiveness of Membership Inference on Generative Music

Kurtis Chow / Omar Samiullah / Vinesh Sridhar / Hewen Zhang

Generative AI systems are quickly improving, now able to produce believable output in several modalities including images, text, and audio. However, this fast development has prompted increased scrutiny concerning user privacy and the use of copyrighted works in training. A recent attack on machine-learning models called membership inference lies at the crossroads of these two concerns. The attack is given as input a set of records and a trained model and seeks to identify which of those records may have been used to train the model. On one hand, this attack can be used to identify user data used to train a model, which may violate user privacy, especially in sensitive applications such as models trained on medical data. On the other hand, this attack can be used by rights-holders as evidence that a company used their works without permission to train a model. Remarkably, it appears that no work has studied the effect of membership inference attacks (MIAs) on generative music. Given that the music industry is worth billions of dollars and artists would stand to gain from being able to determine if their works were being used without permission, we believe this is a pressing issue to study. As such, in this work we begin a preliminary study into whether MIAs are effective on generative music. We study the effect of several existing attacks on MuseGAN, a popular and influential generative music model. Similar to prior work on generative audio MIAs, our findings suggest that music data is fairly resilient to known membership inference techniques.
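The core idea the abstract describes (given a trained model and a set of candidate records, guess which records were in the training set) can be sketched with a toy loss-threshold attack. Everything below is a hypothetical stand-in for illustration only: the "model" that memorizes its training records, the nearest-record distance used as a loss proxy, and the threshold value are all assumptions, not the paper's method and not how MuseGAN or the studied attacks actually work.

```python
import random

random.seed(0)

# Toy "model": it simply memorizes its training records. A query record's
# "loss" is its distance to the nearest training record, a crude stand-in
# for the low loss an overfit model assigns to data it has seen.
def train(records):
    return list(records)

def loss(model, record):
    return min(abs(record - r) for r in model)

def membership_inference(model, record, threshold=0.01):
    """Loss-threshold MIA: a sufficiently low loss is taken as evidence
    that the record was part of the training set."""
    return loss(model, record) < threshold

# Members and non-members come from the same distribution; only the
# model's memorization separates them.
members = [random.random() for _ in range(50)]
non_members = [random.random() for _ in range(50)]
model = train(members)

tpr = sum(membership_inference(model, r) for r in members) / len(members)
fpr = sum(membership_inference(model, r) for r in non_members) / len(non_members)
print(f"true positive rate: {tpr:.2f}, false positive rate: {fpr:.2f}")
```

In this toy setup every member scores loss 0 and is flagged, while most non-members land above the threshold; the gap between the two rates is exactly the signal real MIAs exploit, and the paper's finding is that for generative music that gap is small.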

cs / cs.CR / cs.LG