Lattice Neural Networks for Incremental Learning


Abstract

In incremental learning, the dilemma of plasticity versus stability must be overcome. Because neural networks usually employ a continuously distributed representation of the state space, learning newly added data disturbs existing memories. We apply to incremental learning a neural network with an algebraic (lattice) structure that has been proposed as a mathematical model of information processing in the dendrites of neurons. Because the maximum operation of lattice algebra weakens the continuously distributed representation, the proposed model succeeds at incremental learning.
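To make the role of the maximum operation concrete, the following is a minimal sketch in Python of a dilation-style lattice (morphological) neuron, which replaces the usual sum of products with a maximum over input-plus-weight terms. The function names and the toy numbers are illustrative assumptions, not the authors' actual network; the sketch only shows why a max-based response is less distributed than a sum-based one.

    import numpy as np

    def lattice_neuron(x, w):
        # (max, +) dilation: the output is the single largest x_i + w_i
        # term, so most weights do not influence the response at all.
        return np.max(x + w)

    def sum_neuron(x, w):
        # Conventional inner product: every weight contributes to the output.
        return float(np.dot(x, w))

    x = np.array([0.2, 0.9, 0.1])
    w = np.array([0.5, -0.3, 0.4])

    print(lattice_neuron(x, w))   # 0.7, determined by x[0] + w[0] alone
    print(sum_neuron(x, w))       # -0.13, every term contributes

    # Perturbing a non-winning weight (e.g. while learning new data)
    # leaves the lattice output unchanged but shifts the sum output:
    w[1] += 0.1
    print(lattice_neuron(x, w))   # still 0.7
    print(sum_neuron(x, w))       # approximately -0.04

Because at most one input-weight pair determines each output, weight updates made for newly added patterns stay localized rather than propagating through every memory, which is the intuition behind using lattice operations against catastrophic interference.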


References

Bibliographical reference

Daisuke Uragami, Hiroyuki Ohta and Tatsuji Takahashi, "Lattice Neural Networks for Incremental Learning", CASYS, 24 | 2010, 107-120.

Electronic reference

Daisuke Uragami, Hiroyuki Ohta and Tatsuji Takahashi, "Lattice Neural Networks for Incremental Learning", CASYS [Online], 24 | 2010, online since 06 September 2024, accessed on 20 September 2024. URL: http://popups.lib.uliege.be/1373-5411/index.php?id=3065

Authors

Daisuke Uragami

School of Computer Science, Tokyo University of Technology, 1404-1 Katakuramachi, Hachioji City, Tokyo 192-0982, Japan


Hiroyuki Ohta

Department of Physiology, National Defense Medical College, 3-2 Namiki, Tokorozawa, Saitama 359-8513, Japan

Tatsuji Takahashi

Division of Information System Design, School of Science and Technology, Tokyo Denki University, Hatoyama, Hiki, Saitama 350-0394, Japan


Copyright

CC BY-SA 4.0