- To be useful, a system must exist --> the system must take care of itself --> its basic needs must be outlined --> the system must guarantee that the rules enabling survival are satisfied.
- The system must save experience so it does not keep repeating the same errors. Otherwise it could die in a loop on the same error, and the first rule would not be satisfied.
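A minimal sketch of this "save experience" rule, under my own assumptions: the system remembers which (state, action) pairs failed and never retries them, which is exactly what breaks the deadly error loop. The class and function names here are hypothetical, just for illustration.

```python
import random

class ExperienceMemory:
    """Remembers failed (state, action) pairs so they are never retried."""

    def __init__(self):
        self.failures = set()

    def record_failure(self, state, action):
        self.failures.add((state, action))

    def viable_actions(self, state, actions):
        # Only actions not known to fail in this state remain candidates.
        return [a for a in actions if (state, a) not in self.failures]


def step(memory, state, actions, try_action):
    """Pick an action not known to fail; returns None when the system is stuck."""
    options = memory.viable_actions(state, actions)
    if not options:
        return None  # no untried action left: the survival rule is violated
    action = random.choice(options)
    if not try_action(state, action):
        memory.record_failure(state, action)
    return action
```

With this memory the system can still get stuck (every action fails), but it can no longer die retrying the same error forever.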
Hmmm... assuming the system is able to achieve these two intents, what would happen? Would they be sufficient to make a "good" system? (I need to define a way to measure how good this type of system is.)
No, it is not sufficient: the system would map the environment where it moves, do nothing else, and stop. Stopping, for this type of system, in my opinion means the system died with no success. The knowledge base of the system in this situation would be useless, as the system would know just walls, no other data, no other needs, just inactivity --> limited knowledge increase.
3. The target of the system is a continuously increasing knowledge/experience base.
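One hedged way to turn this target into the measure asked for above: score the system by how fast its knowledge base keeps growing. The function names and the zero-growth stagnation criterion are my own assumptions, not something the notes prescribe.

```python
def knowledge_growth(sizes):
    """Given knowledge-base sizes sampled over time, return per-step growth."""
    return [b - a for a, b in zip(sizes, sizes[1:])]


def is_stagnant(sizes, window=3):
    """True when the last `window` growth steps are all zero.

    Under this definition, a "good" system keeps growth above zero;
    a system that only maps walls and then stops flatlines at zero.
    """
    growth = knowledge_growth(sizes)
    return len(growth) >= window and all(g == 0 for g in growth[-window:])
```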
The question, then, has now changed: we still don't know whether it is better to replicate a human-like approach to reality or to start from a set of new ideas. Let's choose the second and integrate human ideas where the gap is too big to jump over.
Target: how to make it possible for the system to always find a new trigger to continue the learning process?
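A sketch of one possible answer, assuming a novelty-based design of my own: the system counts how often it has seen each observation and always pursues the least-familiar candidate. As long as the environment contains anything unseen or rarely seen, a new trigger always exists and the learning process never stops by itself.

```python
from collections import Counter

class NoveltyTrigger:
    """Always proposes the least-visited candidate as the next learning trigger."""

    def __init__(self):
        self.seen = Counter()

    def observe(self, observation):
        self.seen[observation] += 1

    def next_target(self, candidates):
        # The least-visited candidate is the most novel one.
        return min(candidates, key=lambda c: self.seen[c])
```

Usage: after observing "wall" twice and "door" once, `next_target(["wall", "door", "window"])` picks "window", the never-seen option.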