<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>conditioning</title>
    <link>http://popups.lib.uliege.be/1373-5411/index.php?id=1769</link>
    <description>Index terms</description>
    <language>fr</language>
    <ttl>0</ttl>
    <item>
      <title>AIM Networks: AutoIncursive Memory Networks for Anticipation Toward Learned Goals</title>
      <link>http://popups.lib.uliege.be/1373-5411/index.php?id=2622</link>
      <description>The ability to anticipate future states is a key adaptive property of living systems (Glenberg, 1997). Robert Rosen (1985) suggested that an anticipatory system is characterized by finality, and &quot;is a system containing a predictive model of itself and/or of its environment, which allows it to change state at an instant in accord with the model's predictions pertaining to a later instant&quot;. Daniel Dubois (Dubois &amp; Resconi, 1992; Dubois, 1998a, 2000) defined the concept of incursive and hyperincursive anticipatory systems, able to generate respectively one or several anticipations influencing the computation of the next state of the system. In this article, the concept of autoincursion is proposed as the ability of a system to compute its successive internal states as a function of its past, present, and anticipated states, to select among several anticipated states, and to autonomously change its own equation parameters by learning. Some fundamental properties of a neural network architecture and dynamics are proposed to define AutoIncursive Memory Networks. AIM Networks can learn and activate multiple attractors simultaneously, exhibiting synergic dynamics of attractors encoding external inputs. This allows them (1) to compute their successive states as a function of past, present, and multiple anticipated states, (2) to change the way they compute their successive states through symmetric or asymmetric modification of the synaptic structure during autonomous learning, and (3) to select sequences of anticipations oriented toward learned goals.</description>
      <pubDate>Thu, 29 Aug 2024 15:16:38 +0200</pubDate>
      <lastBuildDate>Thu, 29 Aug 2024 15:16:50 +0200</lastBuildDate>
      <guid isPermaLink="true">http://popups.lib.uliege.be/1373-5411/index.php?id=2622</guid>
    </item>
    <item>
      <title>Neural Network Modeling of Learning of Contextual Constraints on Adaptive Anticipations</title>
      <link>http://popups.lib.uliege.be/1373-5411/index.php?id=1767</link>
      <description>Anticipatory processes take into account the contextual events occurring in the environment to anticipate probable upcoming events and to select the best behavioral responses. The knowledge necessary for context-adapted prediction of events can be learned by classical associative conditioning, which allows associations between events occurring close together in a sequence. Context can then correspond to events perceived in the environment as well as to the reinforcing valence of the event eliciting emotional states in the system, both orienting anticipations in memory. Knowledge for the anticipation of behaviors adapted to context can be learned by operant reinforced conditioning, which allows associations between behaviors and reinforcing events in the environment, as a function of the reinforcing valence of the event (positive or negative). In this case, the processing of a contextual event can select behavioral responses orienting the system toward positive reinforcers rather than negative ones. An attractor neural network model is proposed to account for the different types of anticipatory processes presented, as well as for the learning principles of conditioning that allow adapted anticipations.</description>
      <pubDate>Tue, 16 Jul 2024 15:30:43 +0200</pubDate>
      <lastBuildDate>Tue, 16 Jul 2024 15:30:58 +0200</lastBuildDate>
      <guid isPermaLink="true">http://popups.lib.uliege.be/1373-5411/index.php?id=1767</guid>
    </item>
  </channel>
</rss>