Published:2026/1/4 22:22:59

Image classification with Deep LDA: a plan to massively boost interpretability 🚀

Super summary: Put a clever geometric constraint on Deep LDA to make image classification easier to interpret, with accuracy to match! ✨

✨ Gal-Style Sparkle Points ✨

● The AI's decisions are in plain sight! 👀 You can see the basis for its predictions, so you can use it with confidence!
● Training becomes stable, like magic 🪄! The model trains properly, so you can expect solid accuracy!
● A big chance for AI to shine even in serious industries like medicine and finance!

Detailed Explanation

Read the rest in the 「らくらく論文」 app

Simplex Deep Linear Discriminant Analysis

Maxat Tezekbayev / Arman Bolatov / Zhenisbek Assylbekov

We revisit Deep Linear Discriminant Analysis (Deep LDA) from a likelihood-based perspective. While classical LDA is a simple Gaussian model with linear decision boundaries, attaching an LDA head to a neural encoder raises the question of how to train the resulting deep classifier by maximum likelihood estimation (MLE). We first show that end-to-end MLE training of an unconstrained Deep LDA model ignores discrimination: when both the LDA parameters and the encoder parameters are learned jointly, the likelihood admits a degenerate solution in which some of the class clusters may heavily overlap or even collapse, and classification performance deteriorates. Batchwise moment re-estimation of the LDA parameters does not remove this failure mode. We then propose a constrained Deep LDA formulation that fixes the class means to the vertices of a regular simplex in the latent space and restricts the shared covariance to be spherical, leaving only the priors and a single variance parameter to be learned along with the encoder. Under these geometric constraints, MLE becomes stable and yields well-separated class clusters in the latent space. On images (Fashion-MNIST, CIFAR-10, CIFAR-100), the resulting Deep LDA models achieve accuracy competitive with softmax baselines while offering a simple, interpretable latent geometry that is clearly visible in two-dimensional projections.
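The constrained head the abstract describes (class means fixed at the vertices of a regular simplex, a single shared spherical covariance, classification by Gaussian posterior) can be sketched in a few lines of NumPy. This is an illustrative reconstruction under my own assumptions, not the authors' code: the particular simplex construction, the `sigma2` value, and the toy encoder outputs are all placeholders.

```python
import numpy as np

def simplex_vertices(num_classes: int) -> np.ndarray:
    """K vertices of a regular simplex, embedded in R^K.

    Centering the standard basis vectors yields K equidistant points
    (pairwise distance sqrt(2)) in a (K-1)-dimensional affine subspace.
    """
    eye = np.eye(num_classes)
    return eye - eye.mean(axis=0, keepdims=True)

def lda_log_posterior(z, means, log_priors, sigma2):
    """Unnormalized log p(c | z) for a spherical-covariance LDA head:
    log pi_c - ||z - mu_c||^2 / (2 * sigma^2)."""
    sq_dist = ((z[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
    return log_priors[None, :] - sq_dist / (2.0 * sigma2)

# Toy usage: stand-in latent codes near two of the fixed class means,
# in place of a real neural encoder's output.
K = 4
means = simplex_vertices(K)                    # fixed, not learned
log_priors = np.log(np.full(K, 1.0 / K))       # learnable in the paper
z = means[[2, 0]] + 0.05 * np.random.default_rng(0).normal(size=(2, K))
pred = lda_log_posterior(z, means, log_priors, sigma2=0.1).argmax(axis=1)
```

In this setup only the priors, the scalar variance, and the encoder would be trained; freezing the means at the simplex vertices is what rules out the collapsed-cluster MLE solution the paper warns about.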

cs / stat.ML / cs.LG