Lattice Neural Networks for Incremental Learning

pp. 107-120

Abstract

In incremental learning, it is necessary to overcome the dilemma of plasticity versus stability. Because neural networks usually employ a continuously distributed representation of the state space, learning newly added data disturbs existing memories. We apply to incremental learning a neural network with an algebraic (lattice) structure, which has been proposed as a mathematical model of information processing in the dendrites of neurons. Because the 'maximum' operation of lattice algebra weakens the continuously distributed representation, our proposed model succeeds at incremental learning.
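
The contrast at issue can be sketched as follows (an illustrative example, not the paper's exact model): lattice neural networks in the dendritic-model tradition replace the weighted sum of a standard neuron with a lattice (max-plus) operation, y = max_j (w_j + x_j). The short Python sketch below, with made-up values, suggests why the maximum weakens the continuously distributed representation: the max-plus output is decided by a single best-matching component, so updating one weight leaves responses stored in the other components untouched, whereas in a weighted sum every weight contributes to every output.

    import numpy as np

    def lattice_neuron(x, w):
        # Max-plus ("dilation") neuron: y = max_j (w_j + x_j).
        # Only the single best-matching component determines the output.
        return np.max(w + x)

    def linear_neuron(x, w):
        # Standard weighted-sum neuron, for comparison: y = sum_j w_j * x_j.
        return np.dot(w, x)

    x = np.array([0.2, 0.9, 0.1])
    w = np.array([0.5, 1.0, 0.3])
    print(lattice_neuron(x, w))  # 1.9  -- determined by component j = 1 alone
    print(linear_neuron(x, w))   # 1.03 -- every component contributes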

Text

Facsimile version [PDF, 3.1M]

Cite this article

Print reference

Daisuke Uragami, Hiroyuki Ohta and Tatsuji Takahashi, « Lattice Neural Networks for Incremental Learning », CASYS, 24 | 2010, 107-120.

Electronic reference

Daisuke Uragami, Hiroyuki Ohta and Tatsuji Takahashi, « Lattice Neural Networks for Incremental Learning », CASYS [Online], 24 | 2010, published online on 06 September 2024, accessed on 20 September 2024. URL: http://popups.lib.uliege.be/1373-5411/index.php?id=3065

Authors

Daisuke Uragami

School of Computer Science, Tokyo University of Technology, 1404-1 Katakuramachi, Hachioji City, Tokyo 192-0982, Japan

Hiroyuki Ohta

Department of Physiology, National Defense Medical College, 3-2 Namiki, Tokorozawa, Saitama 359-8513, Japan

Tatsuji Takahashi

Division of Information System Design, School of Science and Technology, Tokyo Denki University, Hatoyama, Hiki, Saitama 350-0394, Japan

Copyright

CC BY-SA 4.0 Deed