Sunday, January 28, 2007

Entropy engine, again

Let's recap: the entropy engine is the core of the decision making: it should be something able to do simple I/O operations towards the world, and then "evolve". Think of the behaviour of a baby. He starts by just experimenting with the world, touching things and accumulating information. Then he becomes able to correlate simple events, and then, again, more complex ones. Of course, he is human, and from this point of view he has big "computational" capabilities. Let's simplify this process, just to take a look at it: the baby starts making associations from the generic information he receives from the world. Some inputs generate positive reactions, some others end with him crying. The baby gets information, applies some sort of rules, and produces a reaction.
An example: if the baby hears his mother's voice, he smiles. If he is hungry, he starts crying. More importantly: after the baby has touched a very hot object, he will never do it again. This means:

  1. Persistence of knowledge
  2. Events correlation
  3. Rules for "good" and "bad"
  4. Abstraction, combination... in other terms: information composition

This last one is maybe the most important: information composition, a key concept in my simplistic view of intelligence.
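The four points above can be sketched as a toy agent. This is only a minimal illustration, not the entropy engine itself; all names (`ToyAgent`, `experience`, `react`, `compose`) are hypothetical, and the "memory" is just a dictionary of stimulus scores:

```python
# Toy sketch of the four points: persistent memory, event correlation,
# good/bad rules, and naive information composition.
# All names here are made up for illustration.

class ToyAgent:
    def __init__(self):
        # stimulus -> accumulated score (persistence of knowledge)
        self.memory = {}

    def experience(self, stimulus, outcome):
        # outcome: +1 for "good", -1 for "bad" (rules for good and bad)
        self.memory[stimulus] = self.memory.get(stimulus, 0) + outcome

    def react(self, stimulus):
        # correlate a new event with stored experience
        score = self.memory.get(stimulus, 0)
        if score > 0:
            return "smile"
        if score < 0:
            return "avoid"
        return "explore"  # unknown stimuli get experimented with

    def compose(self, a, b):
        # naive information composition: judge a combination of two
        # known stimuli from what is known about each one separately
        combined = self.memory.get(a, 0) + self.memory.get(b, 0)
        if combined > 0:
            return "good"
        if combined < 0:
            return "bad"
        return "unknown"

agent = ToyAgent()
agent.experience("mother's voice", +1)  # positive reaction
agent.experience("hot object", -1)      # touched once, never again
print(agent.react("mother's voice"))    # -> smile
print(agent.react("hot object"))        # -> avoid
print(agent.react("red ball"))          # -> explore
```

Even this trivial version shows the flow described above: information comes in, persists, and later inputs are judged against it rather than experienced from scratch.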
