Sunday, January 28, 2007

Compose-able information unit

We need to create a compose-able information unit, an evolved way to represent information. With it, it becomes possible to proceed with generalized information composition.

Reiterative incremental (pattern based ?) information composition process

Interesting: it is not necessary to "understand" the information to make it part of an "information composition" process !!! :) Veeery interesting. Moreover, it is not necessary to always compose the same pieces of information. Already built patterns and compositions can be stored in the knowledge base, and pattern recognition techniques can help. This makes information composition something more evolved: a reiterative incremental (pattern based ?) information composition process.
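Just to fix the idea for myself, a minimal Java sketch of what a compose-able information unit could be; the class name InformationUnit, the compose method and the list-based knowledge base are only assumptions for illustration, not a defined design:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of a compose-able information unit: two units can be
// combined into a new one without the system "understanding" their content,
// and already-built compositions can be stored for later pattern matching.
public class InformationUnit {
    private final String label;                // opaque content, not interpreted
    private final List<InformationUnit> parts; // what this unit was composed from

    public InformationUnit(String label, List<InformationUnit> parts) {
        this.label = label;
        this.parts = new ArrayList<>(parts);
    }

    public InformationUnit(String label) {
        this(label, new ArrayList<>());
    }

    // Generalized composition: the result is just another unit, so the
    // process can be reiterated on its own output (incremental composition).
    public InformationUnit compose(InformationUnit other) {
        List<InformationUnit> parts = new ArrayList<>();
        parts.add(this);
        parts.add(other);
        return new InformationUnit("(" + label + "+" + other.label + ")", parts);
    }

    @Override
    public String toString() {
        return label;
    }

    public static void main(String[] args) {
        List<InformationUnit> knowledgeBase = new ArrayList<>();
        InformationUnit a = new InformationUnit("hot");
        InformationUnit b = new InformationUnit("stove");
        InformationUnit c = a.compose(b);  // composed without "understanding"
        knowledgeBase.add(c);              // stored, reusable as a pattern
        System.out.println(knowledgeBase); // prints [(hot+stove)]
    }
}
```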

Information composition and data input load mediation layer

To create entropy in the system, we can assume that an information composition agent can be useful.

Note: this is a concept, not a solution. To achieve the generalized composition strategy, many nested strategies can be used.

Note: by composition I mean not only analogy, but also other composition "algorithms" like negation, data patterns and data models, etc.

Note: an artificial system that aims to do some "reasoning" on data can't hope to replicate the whole wide range of information and composition "algorithms" humans have. Of course, to build something simple and possible, it is necessary to identify a subset of reality, a subset of composition "algorithms". For this purpose it is again useful to have this "agent" living in a virtualized world where the coder can define the rules, adding a layer between the very complicated real world and the abilities of the system. At this point I have another reason to insert the virtualized world between the system core and the massive input production of the real world. This element would now also play the role of an "information load" mediator.
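A tiny Java sketch of the "many nested strategies" idea; the names CompositionStrategy, AnalogyComposition and NegationComposition are placeholders I'm assuming here, and plain strings stand in for real information units:

```java
import java.util.List;

// Sketch: composition is not a single algorithm but a family of strategies
// (analogy, negation, data patterns, ...) that can also be nested.
interface CompositionStrategy {
    String compose(String a, String b);
}

class AnalogyComposition implements CompositionStrategy {
    public String compose(String a, String b) {
        return a + " is-like " + b;
    }
}

class NegationComposition implements CompositionStrategy {
    public String compose(String a, String b) {
        return a + " is-not " + b;
    }
}

// A strategy can itself be built from other strategies (nested strategies).
class NestedComposition implements CompositionStrategy {
    private final List<CompositionStrategy> steps;
    NestedComposition(List<CompositionStrategy> steps) { this.steps = steps; }
    public String compose(String a, String b) {
        String result = a;
        for (CompositionStrategy s : steps) {
            result = s.compose(result, b);
        }
        return result;
    }
}

public class CompositionDemo {
    public static void main(String[] args) {
        CompositionStrategy nested =
            new NestedComposition(List.of(new AnalogyComposition(),
                                          new NegationComposition()));
        System.out.println(nested.compose("fire", "ice"));
        // prints: fire is-like ice is-not ice
    }
}
```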

Entropy engine, again

Let's recap: the entropy engine is the core of the decision making: it should be something able to do simple I/O operations towards the world, and then "evolve". Let's think about the behaviour of a baby. He starts by just experimenting with the world, touching things and accumulating information. Then he is able to correlate simple events. Then, again, more complex events. Of course, he is human, and from this point of view he has big "computational" capabilities. Let's simplify this process, just to take a look at it: the baby starts making associations from the generic information he receives from the world. Some inputs generate positive reactions, some others end up making him cry. The baby gets information, applies some sort of rules, produces a reaction.
An example: if the baby hears his mother's voice, he smiles. If he is hungry he starts crying. More important: after the baby has touched a very hot object, he will never do it again. This means:

  1. Persistence of knowledge
  2. Event correlation
  3. Rules for "good" and "bad"
  4. Abstraction, combination... in other terms: information composition

This is maybe very important: information composition, a key concept in my simplistic view of intelligence
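A minimal Java sketch of these four points, under my own simplifying assumptions (the BabyAgent name, the string events and the +1/-1 outcomes are only illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "baby" loop: events arrive, a reaction is produced by simple
// rules, and the outcome is persisted so a "bad" event is never repeated.
public class BabyAgent {
    // 1. persistence of knowledge: event -> last outcome (+1 good, -1 bad)
    private final Map<String, Integer> experience = new HashMap<>();

    // 2./3. correlate the event with stored experience and good/bad rules
    public String react(String event) {
        int known = experience.getOrDefault(event, 0);
        if (known < 0) {
            return "avoid";   // touched the hot object once: never again
        }
        return event.equals("mother-voice") ? "smile" : "explore";
    }

    // 4. outcome feedback is combined with the event into a new piece of
    // knowledge: the simplest possible information composition
    public void learn(String event, int outcome) {
        experience.put(event, outcome);
    }

    public static void main(String[] args) {
        BabyAgent baby = new BabyAgent();
        System.out.println(baby.react("hot-object"));   // explore (not known yet)
        baby.learn("hot-object", -1);                   // it hurt
        System.out.println(baby.react("hot-object"));   // avoid
        System.out.println(baby.react("mother-voice")); // smile
    }
}
```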

The entropy engine

Let's assign this task of creating entropy in the system's life routine to something, in order to further understand the nature of this agent. Let's call it the entropy engine. It will live in the emulated world created by the whole software.

Wednesday, January 10, 2007

Does an "open source"-like research approach exist on these themes ?

Does an "open source"-like research approach exist on these themes ? Active on the web, one that everyone can join, contribute to and get something from ? More or less it is like (maybe it just is) software, with some more discussion.

Monday, January 08, 2007

What to replicate ? How to create "incoherence" in system behaviour ?

So, it is obvious to me that the system for the decision maker, as I have designed it now, is still absolutely linear and not, let's say, "creative". It is missing something very important that can generate new schemas, new rules for the rules engine foreseen by the system. At this point a model is needed: a human intelligence model, or something completely new with a different "mechanism" ? Both of them are maybe too difficult, let's try to make things easier... let's define something theoretical and then dig into it:

  1. To be useful, a system must exist --> the system must take care of itself --> simple needs must be outlined --> the system must guarantee that the rules enabling survival are accomplished.
  2. The system must save experience so as not to repeat the same errors forever. It could die in a loop on the same error, and the first rule wouldn't be satisfied.

Hmmm... assuming that the system is able to achieve these two intents, what would happen ? Would they be sufficient to have a "good" system ? (I need to define a way to measure how good this type of system is)

No, it is not sufficient: the system would map the environment where it moves and do nothing else, stopping. Stopping, for this type of system, in my opinion, means the system died with no success. The knowledge base of the system, in this situation, would be useless, as the system would know just walls, but no other data, no other need, just inactivity --> limited knowledge increase.

3. The target of the system is a continuously increasing knowledge/experience base.

So the question has now changed: we still don't know whether it is better to replicate a human-like approach to reality, or a set of new ideas. Let's choose the second and integrate it with human ideas where the gap is too big to jump over.

Target: how to make it possible for the system to always find a new trigger to continue the learning process ?
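A sketch of what point 3 and this target could mean in code, under the assumption (mine) that stagnation of the knowledge base is what triggers the search for something new; all the names here are only illustrative:

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Sketch: if the knowledge base stops growing (the system only "knows walls"),
// force a new exploration trigger so the learning process does not stop.
public class LearningLoop {
    public static void main(String[] args) {
        Set<String> knowledgeBase = new HashSet<>();
        Random random = new Random(42);
        int lastSize = -1;

        for (int step = 0; step < 10; step++) {
            // normal observation: here always the same thing, so the base stagnates
            knowledgeBase.add("wall");

            if (knowledgeBase.size() == lastSize) {
                // stagnation detected: invent a new exploration trigger
                String trigger = "explore-direction-" + random.nextInt(4);
                knowledgeBase.add(trigger);
                System.out.println("step " + step + ": new trigger " + trigger);
            }
            lastSize = knowledgeBase.size();
        }
        System.out.println("knowledge base: " + knowledgeBase);
    }
}
```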

Sunday, January 07, 2007

More details on the decision maker agent

Attempts to better define the decision maker's internal logic.


It is important for this software to save the experience it gains. This builds non-linear behaviour from data, not from the algorithm; the software should have this ability as something genetic, built into it. What would happen if most software could gain experience ? Could it be useful ?

I'm not yet happy with this drawing... I feel something is missing. I need to investigate.

Saturday, January 06, 2007

IMPORTANT, ESSENTIAL REQUIREMENT (idea from the night :] )

THE SOFTWARE MUST REMEMBER/SAVE THE EVOLUTION AND THE EXPERIENCE ACHIEVED IN THE PREVIOUS RUNNING SESSIONS OR IN THE PREVIOUS DECISION MAKER INSTANCE LIVES, OTHERWISE IT IS USELESS. THE MAIN OUTCOME OF THE SYSTEM IS AN ENRICHED KNOWLEDGE BASE THAT CAN BE REUSED EVEN IF THE SYSTEM CHANGES (FOR EXAMPLE ITS PERIPHERALS), AND VERSIONED. OTHERWISE IT IS AS IF HUMANS HAD NO SCHOOL, AND EVERY TIME THEY HAD TO REINVENT THE WHEEL.

:)

THIS IS IMPORTANT. VERY IMPORTANT. OTHERWISE NO SYSTEM EVOLUTION IS POSSIBLE.
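A minimal sketch of this requirement using plain Java serialization; the file name, the map-based knowledge base and the "-v1" versioning suffix are only examples I'm assuming, not the real design:

```java
import java.io.*;
import java.util.HashMap;

// Sketch: the knowledge base survives between running sessions (and between
// decision maker lives) by being saved to disk under a versioned file name.
public class KnowledgePersistence {
    private static final String FILE = "knowledge-base-v1.ser"; // example name

    @SuppressWarnings("unchecked")
    public static HashMap<String, Integer> load() {
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream(FILE))) {
            return (HashMap<String, Integer>) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            return new HashMap<>(); // only the very first run starts empty
        }
    }

    public static void save(HashMap<String, Integer> kb) throws IOException {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream(FILE))) {
            out.writeObject(kb);
        }
    }

    public static void main(String[] args) throws IOException {
        HashMap<String, Integer> kb = load();  // experience from previous lives
        kb.merge("sessions", 1, Integer::sum); // enrich it in this session
        save(kb);
        System.out.println("sessions so far: " + kb.get("sessions"));
    }
}
```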

Thursday, January 04, 2007

Pause prototyping and move on with lower level analysis

It is important to detail when the network's numeric computation is used: some more detailed analysis is needed on the architecture of the decision maker. The virtualized world is in some ways simpler to understand and carries fewer risks.

A test on the network prototype takes too much time

It takes too long to compute ten input values on ten layers, 10 neurons per layer, using BigDecimal values with decimals; maybe it is better to test with integers and then perform some post-processing on the result values. Maybe.
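To get a feeling for where the time goes, a rough micro-benchmark (my own assumption, naive timing, no JIT warm-up, only good for orders of magnitude) comparing a plain double sigmoid against a BigDecimal-wrapped one:

```java
import java.math.BigDecimal;
import java.math.MathContext;

// Rough check of the cost of BigDecimal versus plain double for the sigmoid.
public class SigmoidTiming {
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // BigDecimal has no exp(), so the exponential still goes through double;
    // only the surrounding arithmetic pays the BigDecimal price here.
    static BigDecimal sigmoidBig(BigDecimal x) {
        BigDecimal e = BigDecimal.valueOf(Math.exp(-x.doubleValue()));
        return BigDecimal.ONE.divide(BigDecimal.ONE.add(e), MathContext.DECIMAL64);
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        long t0 = System.nanoTime();
        double acc = 0;
        for (int i = 0; i < n; i++) acc += sigmoid(i % 10 - 5);
        long t1 = System.nanoTime();
        BigDecimal accBig = BigDecimal.ZERO;
        for (int i = 0; i < n; i++) accBig = accBig.add(sigmoidBig(BigDecimal.valueOf(i % 10 - 5)));
        long t2 = System.nanoTime();
        System.out.println("double:     " + (t1 - t0) / 1_000_000 + " ms (acc=" + acc + ")");
        System.out.println("BigDecimal: " + (t2 - t1) / 1_000_000 + " ms");
    }
}
```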

Software evolutionary prototypes, feasibility

It is now time to proceed with some evolutionary prototypes :)
I'm starting to write some code to assess feasibility for the hottest part: the sigmoid with back propagation. After some tuning of the weight values, we have (code will be available soon):

These drawings are the result of the network processing, given a data model and real data. The small red boxes are the points produced by a test with some ten neurons, working only on Y.
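Since the prototype code is not posted yet, here is my own minimal reconstruction of the hottest part: one sigmoid neuron trained with a back-propagation style (delta rule) weight update, using doubles. It is only a sketch under my assumptions, not the prototype itself:

```java
// Minimal sketch: one sigmoid neuron trained with the delta rule
// (the single-neuron case of back propagation), using doubles.
public class SigmoidNeuron {
    static double[] weights = {0.1, -0.2};  // small initial values, to be tuned
    static double bias = 0.0;
    static final double LEARNING_RATE = 0.5;

    static double forward(double[] x) {
        double sum = bias;
        for (int i = 0; i < weights.length; i++) sum += weights[i] * x[i];
        return 1.0 / (1.0 + Math.exp(-sum));  // sigmoid activation
    }

    static void backward(double[] x, double target) {
        double out = forward(x);
        // gradient of the squared error through the sigmoid: (out-target)*out*(1-out)
        double delta = (out - target) * out * (1.0 - out);
        for (int i = 0; i < weights.length; i++) {
            weights[i] -= LEARNING_RATE * delta * x[i];
        }
        bias -= LEARNING_RATE * delta;
    }

    public static void main(String[] args) {
        double[][] inputs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[] targets = {0, 0, 0, 1};     // learn a simple AND
        for (int epoch = 0; epoch < 5000; epoch++) {
            for (int i = 0; i < inputs.length; i++) backward(inputs[i], targets[i]);
        }
        for (double[] in : inputs) {
            System.out.printf("%s -> %.2f%n", java.util.Arrays.toString(in), forward(in));
        }
    }
}
```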
An idea, not yet defined in detail: these should be the architectural components:

Another note on my ideas

I ask myself:

when the first of a new kind of decision maker is created, it needs to find the devices in its world. Curious question: when a baby discovers his/her hands for the first time, does he/she know that it is a hand ? Or is it known to the brain just as an "endpoint" ? Does the prisoner, the decision maker, need to do an auto mapping of all the terminations (or endpoints) that are present in the prison ?
Terminations can be grouped into two classes of elements:

1 capable of reading input from the world --> TERMINATIONS_INPUT
2 capable of modifying - doing output to - the virtualized world --> TERMINATIONS_OUTPUT

Hmmm, a termination capable of doing BOTH INPUT and OUTPUT from the world is not one, it is two :)

and it could be discovered two times. No one needs to be aware that a termination capable of input is the same as one capable of output.

I can change the way I say it now, in accordance with this new idea: the auto mapping will be done on all the input interfaces and on all the output interfaces, where the distinguishing element is the input or output ability.

!!!! A piece of code implementing a termination capable of providing 3 input terminations is one thing for the author of the code: for the system they are 3 input terminations :)
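A tiny Java sketch of this termination idea; the names InputTermination, OutputTermination and ThreeAxisSensor are placeholders I'm assuming, not defined components:

```java
import java.util.List;

// Sketch: input and output abilities are separate terminations, so a device
// that does both is discovered as two endpoints, and one piece of code can
// expose several input terminations.
interface InputTermination {
    double read();              // read something from the (virtual) world
}

interface OutputTermination {
    void write(double value);   // modify the (virtual) world
}

// One device for its author, three input terminations for the system.
class ThreeAxisSensor {
    List<InputTermination> terminations() {
        return List.of(() -> 0.0,   // x axis
                       () -> 0.0,   // y axis
                       () -> 0.0);  // z axis
    }
}

public class AutoMapping {
    public static void main(String[] args) {
        List<InputTermination> discovered = new ThreeAxisSensor().terminations();
        System.out.println("input terminations found: " + discovered.size()); // 3
    }
}
```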

Approach to analysis

A strange element to analyze with an OO approach: maybe I can try to use a reiterative incremental approach, to have continuous checks and feedback on my ideas. In this case tiers and layers are not the usual ones; the layers will be just a few, maybe; tiers: I need to think about this. I can't use a J2EE or SunTone (r) architectural model, not suitable in my opinion for the tiers I can see in the fog I still have in my plans :)

Need some thinking on this.

Wednesday, January 03, 2007

Ideas on "soft" architecture

The system will keep inside it, living, a nucleus free to behave, with a sigmoid back propagation engine capable of some computation. Available to this numeric computational engine will be a knowledge storage, an index of the models available for processing by the numeric processor part of the code. Looking at this:

http://www.youtube.com/watch?v=TFnsjDDnLck

I had a funny idea: a software agent capable of taking pseudo-intelligent decisions can't be directly interfaced to the real world in any way. It must be left free to behave, "sure" to live in a world more suitable for its nature. Following this funny idea, the "intelligent" agent will be like a prisoner in a virtualized code world, where it can do whatever it wants in a way that is easy for the coder and the tester. This virtual world will instead be a quite stupid software shell that - without telling the truth to the intelligent prisoner - will translate real world data coming from sensors into changes in the virtualized code world. Vice versa: every modification introduced by the intelligent prisoner (the computational intelligent decision maker) will be transformed into "impacts" on the virtualized world, and this virtualized code world will translate these modifications into output for actuators, for example. The virtualized world of code, the prison ( :] ), is just a shell simulating a convenient environment for my system kernel.

To recap: an intelligent decision maker using a network with a back propagation numeric engine will live, behave and evolve as it likes, in a world that undergoes the actions taken by the prisoner. I will call this component the "prison". It will also change the interface exposed to the prisoner, in accordance with what it reads via sensors from the real world:

1 prisoner-decision-maker
2 prisoner-decision-maker world (transponder from/to reality)
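A minimal sketch of these two components; the Prisoner and Prison names and the trivial numeric "translation" are only assumptions of mine to illustrate the transponder idea:

```java
// Sketch: the prisoner only sees the virtual world, and the prison
// translates between that world and the real sensors/actuators.
interface Prisoner {
    // decides an action looking only at the virtualized state
    double decide(double virtualState);
}

class Prison {
    private final Prisoner prisoner;
    Prison(Prisoner prisoner) { this.prisoner = prisoner; }

    // one "tick" of the transponder: real sensor reading in, actuator value out
    double tick(double sensorReading) {
        double virtualState = sensorReading / 100.0; // reality -> prison world
        double virtualAction = prisoner.decide(virtualState);
        return virtualAction * 100.0;                // prison world -> actuators
    }
}

public class PrisonDemo {
    public static void main(String[] args) {
        // a dumb prisoner: it just pushes back against what it perceives
        Prison prison = new Prison(state -> -state);
        System.out.println(prison.tick(42.0));       // actuator command: -42.0
    }
}
```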

at this point a note:

the prisoner-decision-maker has rules and priorities, has needs, has life conditions, and can die if it is not successful. In this case a new generation of decision maker can be created automatically by an entity, a sort of "providence", to keep the system living. The prisoner's death (a big system failure) event will be kept in the experience of the whole system, to try to avoid the same mistake again.
Software will be "non linear": this means that it will have an evolutionary routine and a sort of knowledge base. As time passes, the experience of the system need to evolve and cause the response of the software to be different. The system (made up in its scope of software only, at this stage) will keep experience also when switched off. It is expected that the software will be able to be better as "time goes by".
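A sketch of the "providence" idea, with placeholder names and a fake decision maker life, just to illustrate how a death could be recorded in the shared experience and a new generation started:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: when a prisoner dies, its failure is recorded in the shared
// experience and a new generation is started automatically.
public class Providence {
    public static void main(String[] args) {
        List<String> systemExperience = new ArrayList<>(); // survives the prisoners

        for (int generation = 1; generation <= 3; generation++) {
            boolean survived = runDecisionMaker(generation, systemExperience);
            if (!survived) {
                systemExperience.add("generation " + generation + " died");
            }
        }
        System.out.println(systemExperience);
    }

    // placeholder for a decision maker life: here the first two generations fail
    static boolean runDecisionMaker(int generation, List<String> experience) {
        System.out.println("starting generation " + generation
                + " with " + experience.size() + " past failures known");
        return generation >= 3;
    }
}
```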

Software will use mainly java :)

Software will use Linux :)

The sources, even if they have no value, will be open source and available to everyone who wants to lose some time.

Software will have lots of bugs :]

Hmmm... are bugs some spontaneous native code intelligence to subdue ? Bugs... what a mystery they are.

Steps to proceed

Here follows a list of possible steps to go through (as this is an experiment, it can turn out to be NON FEASIBLE at a certain point):

Main steps:

1 Create software as I want and can
2 Choose the hardware that is suitable for the software's needs and costs
3 Assemble 1 and 2
4 Tune
5 Find capabilities and hard limits

Nested steps:

1a Define requirements, what does this software need to do ?
1b Refine dreams with reality (skill, costs, time)
1c Increase the chances of success: build a prototype, then new versions, each one possibly richer than the previous. This means obtaining results, and continuing...

2a Choose hardware that will fit software, keeping in mind that this will be a mobile platform: software decides hardware.
2b Evaluate the whole cost: do/do not continue (the software will be the only deliverable: this is an experiment, I can do this)

What is this ?

Hello world :)

This is the diary of my experiment to play with code and robots.
I don't know the theory, I have bad English, but at the same time I love toys going around autonomously. I'm not a student, I didn't graduate in electronics or physics, I have neither theory nor experience with this type of stuff, but I like it, and being so ignorant about it allows everyone who wants to, to give me advice.
This is only a game I play because I just like it.

Take it easy :)

caramelleas