<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Do Artificial Systems Need Randomness for Learning Strategies?</title>
    <link>http://popups.lib.uliege.be/1373-5411/index.php?id=643</link>
    <description>Many analogies found in natural systems suggest that noise in a complex system can lead to further organization; noise therefore seems a good way to create novelty or to test the robustness of algorithms. In this paper, we analyse artificial learning mechanisms such as genetic algorithms and neural networks, which can generally be formulated as optimization problems: a performance criterion is specified and then optimized using the simple but powerful technique of stochastic hill-climbing along the gradient. In these algorithms, injecting randomness is a good way to maintain exploration during the search, which is useful for avoiding local optima or for coping with a dynamic environment. We claim that artificial learning mechanisms must overcome their limitations by means of random search, because attractors are always present inside search procedures. We then discuss how to create order without any presupposed attractors. This is also a central question for anticipatory systems, which must learn about themselves and their environment.</description>
    <category domain="http://popups.lib.uliege.be/1373-5411/index.php?id=65">Full text issues</category>
    <category domain="http://popups.lib.uliege.be/1373-5411/index.php?id=66">Volume 1</category>
    <category domain="http://popups.lib.uliege.be/1373-5411/index.php?id=70">Neural Networks</category>
    <language>en</language>
    <pubDate>Fri, 28 Jun 2024 16:01:34 +0200</pubDate>
    <lastBuildDate>Tue, 08 Oct 2024 14:07:42 +0200</lastBuildDate>
    <guid isPermaLink="true">http://popups.lib.uliege.be/1373-5411/index.php?id=643</guid>
    <ttl>0</ttl>
  </channel>
</rss>