Published: 2026/1/11 3:25:38

The Ultimate Gyaru AI Is Born! How to Seize the Future with Backpropagation-Free Learning 💖

Super summary: This research reveals the secret of AI that evolves to be smart like a brain! It can keep learning new things without forgetting past information. Seriously amazing ✨

✨ Gyaru-Style Sparkle Points ✨
● It's an AI that copies how the brain works 🧠✨
● It can keep growing without forgetting past data! Literally divine ✨
● It gets smarter while keeping compute costs (money) down. How great is that? 💰

Here comes the detailed explanation~!

Background: Conventional AI (neural networks) had a troublesome habit: try to teach it something new and it forgets the old stuff! 😢 This paper is the result of studying how the brain learns in order to solve that!


A Backpropagation-Free Feedback-Hebbian Network for Continual Learning Dynamics

Josh Li

Feedback-rich neural architectures can regenerate earlier representations and inject temporal context, making them a natural setting for strictly local synaptic plasticity. We ask whether a minimal, backpropagation-free feedback-Hebbian system can already express interpretable continual-learning-relevant behaviors under controlled training schedules. We introduce a compact prediction-reconstruction architecture with two feedforward layers for supervised association learning and two dedicated feedback layers trained to reconstruct earlier activity and re-inject it as additive temporal context. All synapses are updated by a unified local rule combining centered Hebbian covariance, Oja-style stabilization, and a local supervised drive where targets are available, requiring no weight transport or global error backpropagation. On a small two-pair association task, we characterize learning through layer-wise activity snapshots, connectivity trajectories (row/column means of learned weights), and a normalized retention index across phases. Under sequential A→B training, forward output connectivity exhibits a long-term depression (LTD)-like suppression of the earlier association, while feedback connectivity preserves an A-related trace during acquisition of B. Under deterministic interleaving A, B, A, B, …, both associations are concurrently maintained rather than sequentially suppressed. Architectural controls and rule-term ablations isolate the role of dedicated feedback in regeneration and co-maintenance, and the role of the local supervised term in output selectivity and unlearning. Together, the results show that a compact feedback pathway trained with local plasticity can support regeneration and continual-learning-relevant dynamics in a minimal, mechanistically transparent setting.
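The unified local rule described in the abstract lends itself to a short sketch. The snippet below is a minimal illustration, assuming standard textbook forms for each of the three terms (centered Hebbian covariance, Oja-style normalization, and a delta-rule-like supervised drive); the function name `local_update`, the learning rates, and the exact formulation are hypothetical and not taken from the paper.

```python
import numpy as np

def local_update(W, pre, post, target=None,
                 lr=0.01, eta_oja=0.01, eta_sup=0.1):
    """One local plasticity step for a single layer (hypothetical form).

    Combines three strictly local terms:
      1. centered Hebbian covariance: outer(post - mean, pre - mean)
      2. Oja-style stabilization: post^2-weighted decay of W
      3. local supervised drive: push post toward target when available
    No weight transport or global error signal is used.
    """
    pre_c = pre - pre.mean()        # centered presynaptic activity
    post_c = post - post.mean()     # centered postsynaptic activity

    # 1. Centered Hebbian covariance term.
    dW = lr * np.outer(post_c, pre_c)

    # 2. Oja-style stabilization: subtract post^2 * W to keep
    #    weights bounded (classic Oja normalization).
    dW -= eta_oja * (post ** 2)[:, None] * W

    # 3. Local supervised drive: delta-rule-like term using only the
    #    layer's own target, as at the output layer of the model.
    if target is not None:
        dW += eta_sup * np.outer(target - post, pre)

    return W + dW

# Toy usage: learn a single association a -> b with one layer
# (an illustrative demo, not the paper's two-pair experiment).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))
a = np.array([1.0, 0.0, 1.0, 0.0])
b = np.array([0.0, 1.0, 0.0, 1.0])
for _ in range(300):
    post = np.tanh(W @ a)
    W = local_update(W, a, post, target=b)
print(np.round(np.tanh(W @ a), 2))  # output should roughly approach b
```

The point this illustrates is locality: every quantity in the update (pre, post, target) is available at the synapse's own layer, so the rule needs no weight transport or backpropagated error, consistent with the abstract's claim.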

cs / cs.NE / cs.LG