Monday, December 24, 2007

Recap to go on

OK, to go on it is now really necessary to recap and put some order into all the stuff I have written about up to now.

Composable information theory is just my idea for representing evolutionary and relational information derived from interaction with the abstracted virtual world. It is something like applying collaborative information collations (information unit chains) with an intelligence working on them.

It is now understood that some kind of logic able to work with these information unit chains is needed. It is necessary to define this "logic" before proceeding.

Data and business logic, just for fun

If I remember correctly, two of the 5 tiers in a J2EE architecture are the resource tier and the business logic tier. In the resource tier we normally have information stored in different types of repository. The other tier contains only the logic to manipulate data, and in fact it is called the business logic tier. Then there is the SOA architecture, where all business logic functionalities are kept connected and available as services, but that is another story. OK now, as here we have no client tier for the moment, no presentation tier, no integration tier, let's remap the structure organized by the composable information units onto the resource tier, while the logic that manipulates this structured tree of data maps onto the business logic tier. This second remapping action is the second ingredient of the two in the mix I talked about before.
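Just to make the remapping concrete, a tiny Java sketch (all class and method names are my own assumptions, just for fun): a repository holding the composable information units is the resource tier, and a separate class holding only the manipulation logic is the business logic tier:

```java
import java.util.ArrayList;
import java.util.List;

public class TwoTiers {
    // Resource tier: only stores the structured data, no logic.
    static class CiuRepository {
        final List<String> units = new ArrayList<>();
        void store(String unit) { units.add(unit); }
        List<String> loadAll() { return units; }
    }

    // Business logic tier: only manipulates data, owns no storage.
    static class CiuLogic {
        final CiuRepository repository;
        CiuLogic(CiuRepository repository) { this.repository = repository; }

        // A toy manipulation: chain all stored units together.
        String composeChain() { return String.join("-", repository.loadAll()); }
    }

    public static void main(String[] args) {
        CiuRepository repo = new CiuRepository();
        repo.store("A");
        repo.store("B");
        System.out.println(new CiuLogic(repo).composeChain()); // A-B
    }
}
```

The point of the split is that the chain-building logic never owns the data, so either tier can be swapped out later.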

Sunday, December 23, 2007

Composition is one ingredient in a mix of two.

Let's google for some docs and see what comes out: many documents say that the most interesting approach is "collaborative programming", whose meaning I'd like to adapt to my intent. Let's redefine collaborative programming as something that lets me create a software bee, or ant, or whatever swarm unit, to put into a software world where evolutionary behaviour can reward the unit whose behaviour is the best in terms of accomplishing its target tasks. OK, now let's put this into a wide, high-level scenario and see what the common trends are:

1 Neural network
2 Virtual, collaborative, swarm programming
3 My simple game, composable information paradigm programming; that's what I've been playing with here up to now.

The intent of this absurd and shy comparison is just to find an element to add to my composable information paradigm game, in order to have a new approach that is not just a clone of the first two. In fact it has the idea of "information changing and assembling", but in my opinion it misses an active side that is something more than linking similar pieces of information. Let's look again at the list of features I added some posts ago:

  1. Persistence of knowledge
  2. Events correlation
  3. Rules for "good" and "bad"
  4. Abstraction, combination... in other terms: information composition
I can say I have an idea for points 1 and 4, but a component for point 2 is missing (3 doesn't matter for now)

Thursday, December 20, 2007

I have a question: what's the difference between swarm programming (where the swarm population is not made up of devices, of electromechanical units) and neural networks? Is there a virtual swarm programming model, where behavioural composite programming can benefit from a logical model? In other terms, a swarm of software units where every unit's contribution is the same as that of an "ant" or a "bee" in a real swarm. In those cases the trend of the swarm takes advantage of the successful trend of a unit: units strive to "accumulate" around the correctly behaving one, where correctly means successful in terms of the unit's life intents. Is a neural network still a cooperative system? Maybe not, as at the end of the day I see a neural network as an information unit weight router and switcher: every unit, every neuron, has NO behavioural intelligence (a threshold is not intelligence). On the other side, a neural network is a good computation pattern, more abstracted and recyclable. Swarm programming seems to me another way to obtain neural network results, with a less generalized engine algorithm. Is this last one more "modern" and less computation-power consuming? I need advice.
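Just to show what I mean by "a threshold is not intelligence", here is everything a single neuron does in this reading (a sketch of mine, no framework involved): weight the inputs, sum, compare against a threshold.

```java
public class ThresholdNeuron {
    // A neuron as a pure "weight router and switcher": no behaviour,
    // no intent, just a weighted sum compared against a threshold.
    static int fire(double[] inputs, double[] weights, double threshold) {
        double sum = 0.0;
        for (int i = 0; i < inputs.length; i++) sum += inputs[i] * weights[i];
        return sum >= threshold ? 1 : 0;
    }

    public static void main(String[] args) {
        double[] in = {1.0, 1.0};
        double[] w = {0.6, 0.6};
        System.out.println(fire(in, w, 1.0)); // 1: weighted sum 1.2 >= 1.0
        System.out.println(fire(in, w, 1.5)); // 0: weighted sum 1.2 < 1.5
    }
}
```

A swarm unit, by contrast, would carry its own behaviour and state; that is exactly the difference I am asking about.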

Monday, May 07, 2007

Defining the model to study the compos-able :] information unit problem

Let's find a model to drive the definition of the technological architecture. To have an idea of what a composable information unit will be, we first need to have a look at one. As this would be a chicken-and-egg problem (to study the composable information unit's nature we need the unit, and to obtain the unit we need to know its nature), maybe it is better to dive into the world the composable information unit is an essential part of: let's create a virtual information unit generator, the simplest one, and derive the CIU from it (CIU is, for me, a Composable Information Unit). The simplest CIU generator is a virtual device able to do I/O and to generate, according to its nature, some CIUs. This will shed some light on the problem. Looking back at the virtualized world that was defined at the beginning of this blog, in the picture we see the virtual INPUT and OUTPUT devices. To use a model-driven architecture to build the first CIU, let's use the virtualized world model proposed there (again, the one in the picture). This model can be seen as a physical one, where pseudo-physical rules rule. So, the first step to dig into a CIU's nature is to define the simplest of its generators. This is possible, and not an unsolvable recursive problem :]

I will first build a CIU generator: a virtual I/O device. This will drive the next steps and give hints about the CIU features.
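A first guess, in Java, of what this simplest CIU generator could look like (all names are my assumptions): a virtual device whose reads produce CIUs and whose writes consume them.

```java
// Sketch of the simplest CIU generator: a virtual I/O device that,
// according to its nature, wraps every input it reads into a
// Composable Information Unit (names are my own invention).
public class VirtualIoDevice {
    // The simplest possible CIU: a value plus the nature of its producer.
    static class Ciu {
        final String producerNature;
        final String value;
        Ciu(String producerNature, String value) {
            this.producerNature = producerNature;
            this.value = value;
        }
    }

    final String nature;
    VirtualIoDevice(String nature) { this.nature = nature; }

    // INPUT side: reading from the virtual world generates a CIU.
    Ciu read(String rawInput) { return new Ciu(nature, rawInput); }

    // OUTPUT side: a CIU is turned back into an action on the virtual world.
    String write(Ciu unit) { return nature + " -> " + unit.value; }

    public static void main(String[] args) {
        VirtualIoDevice device = new VirtualIoDevice("light-sensor");
        Ciu ciu = device.read("bright");
        System.out.println(ciu.producerNature);  // light-sensor
        System.out.println(device.write(ciu));   // light-sensor -> bright
    }
}
```

Even this toy already suggests one CIU feature: a unit carries at least its value and the nature of the device that generated it.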

Eureka for me! (I spent so much time in traffic finding a solution)

Thursday, April 26, 2007

Attempt to define the surface for a composable information unit

Trying to approach a simplified version of the problem: what should this composable information unit surface be like? These features will define, in other terms, how this unit will be able to link to other units, and then build unit chains. Features of the surface should be "stuff" like:


Monday, April 16, 2007

Sizing the effort

The next part of this work, and all that has already been done, is intended for very simple systems, NEITHER TRYING to REPLICATE COMPLEX INTELLIGENT SYSTEMS, nor the human one of course. The purpose of such a system is to be able to learn from experience and increase the quality of its reactions for a very simple set of inputs.

Sunday, April 15, 2007

Reshaping of the basic information unit composition problem

Information units can compose and link to each other. To enable this idea it would be necessary to build a suitable information unit surface. The problem of "software experience" now reshapes itself. Some days ago the problem was to build a composable information unit; now it is to build this information unit surface, so as to make the unit able to do linkage. This linkage surface is the repository of the linking-logic criteria. Maybe the information unit (the IU) is empty inside, as nothing is more important than the ability to create links and therefore "chains" of information.

It is now relevant to proceed by defining the features of this surface. Digging into these "details", it becomes necessary to use some technical terms, maybe close to code language.
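Getting closer to code language then, this is how I imagine the surface for now (a sketch under my own assumptions): the unit is almost empty inside, and the linking criteria live entirely on the surface.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiPredicate;

public class InformationUnit {
    final String label; // the unit is almost empty inside
    // The surface: the repository of the logic linking criteria.
    final BiPredicate<InformationUnit, InformationUnit> surface;
    final List<InformationUnit> links = new ArrayList<>();

    InformationUnit(String label,
                    BiPredicate<InformationUnit, InformationUnit> surface) {
        this.label = label;
        this.surface = surface;
    }

    // Linkage happens only if the surface criteria allow it.
    boolean tryLink(InformationUnit other) {
        if (surface.test(this, other)) { links.add(other); return true; }
        return false;
    }

    // A toy linking criterion, by analogy: units link when labels
    // share their initial character.
    static boolean analogy(InformationUnit a, InformationUnit b) {
        return a.label.charAt(0) == b.label.charAt(0);
    }

    public static void main(String[] args) {
        InformationUnit sun = new InformationUnit("sun", InformationUnit::analogy);
        InformationUnit sea = new InformationUnit("sea", InformationUnit::analogy);
        InformationUnit moon = new InformationUnit("moon", InformationUnit::analogy);
        System.out.println(sun.tryLink(sea));  // true: both start with 's'
        System.out.println(sun.tryLink(moon)); // false
        System.out.println(sun.links.size());  // 1
    }
}
```

Changing the surface predicate changes the whole linking behaviour without touching the unit: that is the feature I care about.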

Thursday, February 22, 2007

Simple composable information unit


The problem has now changed: the issue has moved to the need for a new element, a self-sufficient, very simple object whose shape is like a "ball" with many hooks. These information balls :) flow in the environment and by their nature are able to link to each other according to basic logic operations, like analogy, completion... and a few others. This object can be a single thread, trying to build long chains of information units, like the components of DNA. Just to play with this: a "threaded hashtable", serializable, that links other information units. These chains can be cut and composed as well, and of course can be read and saved to keep memory of the knowledge acquired. I suppose that after these chains have been built, it is simple to continue with a reiterative process to make the system more and more complex. It is then a software problem to read the chains and execute the logic kept in them. Hmmm... looking at this, they seem an evolution of neural networks and synapses, not from a computational point of view but from an information composition point of view. Am I wrong? World! What do you think?
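Playing with the "threaded hashtable" idea in Java (a sketch with names of my own; the single-thread part is left out for now): a serializable information ball that hooks to the next one, building a DNA-like chain that can be cut and recomposed.

```java
import java.io.Serializable;
import java.util.Hashtable;

// An information ball: a serializable hashtable with a hook to the next
// unit in the chain (names and shape are my own assumptions).
public class InfoBall extends Hashtable<String, String> implements Serializable {
    InfoBall next; // the hook to the following unit in the chain

    // Link by a basic logic operation, e.g. completion: attach the ball
    // that carries the key this one is asking for.
    boolean hook(InfoBall candidate, String missingKey) {
        if (candidate.containsKey(missingKey)) { next = candidate; return true; }
        return false;
    }

    // Cut the chain here and return the severed tail, to recompose later.
    InfoBall cut() {
        InfoBall tail = next;
        next = null;
        return tail;
    }

    int chainLength() {
        int n = 1;
        for (InfoBall b = next; b != null; b = b.next) n++;
        return n;
    }

    public static void main(String[] args) {
        InfoBall a = new InfoBall(); a.put("question", "colour?");
        InfoBall b = new InfoBall(); b.put("colour", "red");
        System.out.println(a.hook(b, "colour")); // true: b completes a
        System.out.println(a.chainLength());     // 2
        a.cut();
        System.out.println(a.chainLength());     // 1
    }
}
```

Since Hashtable is already serializable, a whole chain can be saved to disk as-is: that is the "keep memory of the knowledge acquired" part.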

Sunday, January 28, 2007

Compose-able information unit

It is necessary to create a compose-able information unit, an evolved way to represent information. Thus it becomes possible to proceed with generalized information composition.

Reiterative incremental (pattern based ?) information composition process

Interesting: it is not necessary to "understand" the information in order to make it part of an "information composition" process!!! :) Veeery interesting. Moreover, it is not necessary to always compose the same information. Already-built patterns and compositions can be stored in the knowledge base, and pattern recognition techniques can help. This makes information composition something more evolved: a reiterative, incremental (pattern based?) information composition process.

Information composition and data input load mediation layer

To create entropy in the system, we can assume that an information composition agent can be useful.

Note: this is a concept, not a solution. To achieve the generalized composition strategy, many nested strategies can be used.

Note: by composition I mean not only analogy, but also other composition "algorithms" like negation, data patterns, data models, etc.

Note: an artificial system that aims at "reasoning" on data can't hope to replicate the whole wide range of information and composition "algorithms" humans have. Of course, to build something simple and possible, it is necessary to identify a subset of reality, a subset of composition "algorithms". For this purpose it is again useful to have this "agent" living in a virtualized world where the coder can define rules, adding a layer between the very complicated real world and the abilities of the system. At this point I have another reason to insert the virtualized world between the system core and the massive input production of the real world. This element would now also play the role of an "information load" mediator.

Entropy engine, again

Let's recap: the entropy engine is the core of the decision making. It should be something able to do simple I/O operations towards the world, and then "evolve". Let's think of the behaviour of a baby. He starts just by experimenting with the world, touching things and accumulating information. Then he is able to correlate simple events. Then, again, more complex ones. Of course, he is human, and from this point of view he has big "computational" capabilities. Let's simplify this process, just to take a look at it: the baby starts making associations from the generic information he receives from the world. Some inputs generate positive reactions, some others end with him crying. The baby gets information, applies some sort of rules, produces a reaction.
An example: if the baby hears his mother's voice, he smiles. If he is hungry, he starts crying. More important: after the baby has touched a very hot object, he will never do it again. This means:

  1. Persistence of knowledge
  2. Events correlation
  3. Rules for "good" and "bad"
  4. Abstraction, combination... in other terms: information composition

This is maybe very important: information composition, a key concept in my simplistic view of intelligence.
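Playing with the baby example in code (a toy entirely of my own invention, covering points 1-3; point 4, composition, is the hard one left out): knowledge persists, inputs are correlated to reactions by rules, and the "bad" outcome of the hot object is never repeated.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class Baby {
    // 1. Persistence of knowledge: inputs already learned as "bad".
    final Set<String> knownBad = new HashSet<>();
    // 2. Events correlation: input -> reaction rules.
    final Map<String, String> rules = new HashMap<>();

    Baby() {
        rules.put("mother-voice", "smile");
        rules.put("hungry", "cry");
    }

    // 3. Rules for "good" and "bad": a bad experience is stored forever.
    String experience(String input, boolean hurts) {
        if (knownBad.contains(input)) return "avoid";
        if (hurts) { knownBad.add(input); return "cry"; }
        return rules.getOrDefault(input, "ignore");
    }

    public static void main(String[] args) {
        Baby baby = new Baby();
        System.out.println(baby.experience("mother-voice", false)); // smile
        System.out.println(baby.experience("hot-object", true));    // cry
        System.out.println(baby.experience("hot-object", false));   // avoid: never again
    }
}
```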

The entropy engine

Let's assign this task of creating entropy in the system's life routine to something, in order to further understand the nature of this agent. Let's call it the entropy engine. It will live in the emulated world created by the whole software.

Wednesday, January 10, 2007

Does an "open source"-like research approach exist on these themes?

Does an "open source"-like research approach exist on these themes? Active on the web, one that everyone can join, contribute to and take from? More or less it is like software (maybe it just is software), with some more discussion.

Monday, January 08, 2007

What to replicate? How to create "incoherence" in system behaviour?

So, it is obvious to me that the system for the decision maker, as designed by me now, is still absolutely linear and not, let's say, "creative". Something very important is missing that can generate new schemas, new rules for the rules engine foreseen by the system. At this point a model is needed: a human intelligence model, or something completely new, with different "gears"? Both of them are maybe too difficult; let's try to make things easier... let's define something theoretical and then dig into it:

  1. A system, to be useful, must exist --> the system must take care of itself --> simple needs must be outlined --> the system must guarantee that the rules enabling survival are fulfilled.
  2. The system must save experience so as not to always repeat the same errors. It could die in a loop with the same error, and the first rule wouldn't be satisfied.

Hmmm... assuming the system is able to achieve these two intents, what would happen? Would they be sufficient to have a "good" system? (I need to define a way to measure how good this type of system is.)

No, it is not sufficient: the system would map the environment where it moves and not do anything else, then stop. Stopping, for this type of system, in my opinion means the system died with no success. The knowledge base of the system, in this situation, would be useless, as the system would know just walls: no other data, no other need, just inactivity --> limited knowledge increase.

  3. The target of the system is a continuously increasing knowledge/experience base.

The question has now changed: we still don't know whether it is better to replicate a human-like approach to reality, or a set of new ideas. Let's choose the second, and integrate human ideas where the gap is too big to jump over.

Target: how to make sure the system is always capable of finding a new trigger to continue the learning process?

Sunday, January 07, 2007

More details on the decision maker agent

Attempts to better define the decision maker's internal logic.

It is important for this software to save the experience gained. This builds non-linear behaviour from data, not from algorithms; the software should be genetic to do this. What would happen if most software could gain experience? Could it be useful?

I'm not yet happy with this drawing... I feel something is missing. Need to investigate.

Saturday, January 06, 2007

IMPORTANT, ESSENTIAL REQUIREMENT (idea from the night :] )




Thursday, January 04, 2007

Pause prototyping and move on with lower level analysis

It is important to detail when the network numeric computation is used: some more detailed analysis is needed on the architecture of the decision maker. The virtualized world is in some ways simpler to understand and carries fewer risks.

A test on the network prototype takes too much time

It takes too long to compute ten input values over ten layers, 10 neurons per layer, using BigDecimal values with decimals; maybe it is better to test with integers and then perform some post-processing on the result values. Maybe.
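Just to sketch the integer idea (my own guess at an implementation, not the real prototype): precompute the sigmoid once into an integer lookup table with a fixed scale, do the heavy loop with integers only, and post-process back to decimals at the end.

```java
import java.math.BigDecimal;

// Fixed-point sigmoid: the real value times SCALE is stored as an int,
// so the inner network loop never touches BigDecimal.
public class FixedPointSigmoid {
    static final int SCALE = 10_000;  // 4 decimal digits of precision
    static final int RANGE = 8;       // table covers x in [-8, 8]
    static final int[] TABLE = new int[2 * RANGE * SCALE + 1];

    static {
        // Build the table once with doubles; runtime lookups are integer only.
        for (int i = 0; i < TABLE.length; i++) {
            double x = (double) (i - RANGE * SCALE) / SCALE;
            TABLE[i] = (int) Math.round(SCALE / (1.0 + Math.exp(-x)));
        }
    }

    // x is fixed-point: the real value times SCALE.
    static int sigmoid(int x) {
        if (x <= -RANGE * SCALE) return 0;
        if (x >= RANGE * SCALE) return SCALE;
        return TABLE[x + RANGE * SCALE];
    }

    // Post-processing step: convert a fixed-point result back to a decimal.
    static BigDecimal toDecimal(int fixedPoint) {
        return BigDecimal.valueOf(fixedPoint, 4); // SCALE = 10^4
    }

    public static void main(String[] args) {
        System.out.println(toDecimal(sigmoid(0)));         // 0.5000
        System.out.println(toDecimal(sigmoid(2 * SCALE))); // sigmoid(2) ~ 0.8808
    }
}
```

The trade-off is precision (4 decimal digits here) against speed; whether that is enough for the back propagation to converge is exactly what the test should tell.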

Software evolutionary prototypes, feasibility

It is now time to proceed with some evolutionary prototypes :)
Starting to write some code to check feasibility of the hottest part: the sigmoid with back propagation. After some tuning of the weight values, we have (code will be available soon):

These drawings are the result of the network processing, given a data model and real data. The small red boxes are the points produced by a ten-neuron test, working only on Y.
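While waiting for the real code, here is a minimal sketch of my own (an assumption, not the promised prototype) of a sigmoid neuron with one back propagation step, the delta rule:

```java
public class SigmoidNeuron {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    double[] weights;
    double learningRate;

    SigmoidNeuron(int inputs, double learningRate) {
        this.weights = new double[inputs];
        this.learningRate = learningRate;
        for (int i = 0; i < inputs; i++) weights[i] = 0.1; // tuned starting weights
    }

    double forward(double[] x) {
        double sum = 0.0;
        for (int i = 0; i < weights.length; i++) sum += weights[i] * x[i];
        return sigmoid(sum);
    }

    // One back propagation step: push weights in the direction that
    // reduces the error, using the sigmoid derivative out * (1 - out).
    double train(double[] x, double target) {
        double out = forward(x);
        double delta = (target - out) * out * (1.0 - out);
        for (int i = 0; i < weights.length; i++)
            weights[i] += learningRate * delta * x[i];
        return Math.abs(target - out);
    }

    public static void main(String[] args) {
        SigmoidNeuron n = new SigmoidNeuron(2, 0.5);
        double[] x = {1.0, 1.0};
        double before = n.train(x, 1.0);
        for (int i = 0; i < 100; i++) n.train(x, 1.0);
        double after = Math.abs(1.0 - n.forward(x));
        System.out.println(before > after); // error shrinks with training
    }
}
```

The full prototype chains many of these into layers and propagates the delta backwards; this is just the single-neuron core.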
An idea, not yet defined in detail; these should be the architectural components:

Another note on my ideas

I ask myself:

when the first decision maker of a new generation is created, it needs to find the devices in its world. A curious question: when a baby discovers his/her hands for the first time, does he/she know that that is a hand? Or is it known to the brain just as an "endpoint"? Does the prisoner, the decision maker, need to do an auto-mapping of all the terminations (or endpoints) that are present in the prison?
Terminations can be grouped into two classes of elements:

1 capable of reading input from the world --> TERMINATIONS_INPUT
2 capable of modifying - doing output to - the virtualized world --> TERMINATIONS_OUTPUT

Hmmm, a termination capable of doing BOTH INPUT and OUTPUT to the world is not one termination, it is two :) and it could be discovered twice. No one needs to be aware that a termination capable of input is the same as one capable of output.

I can now change the way I say it, in accordance with this new idea: the auto-mapping will be done on all the input interfaces and on all the output interfaces, where the distinguishing element is the input or output ability.

!!!! A piece of code providing 3 input terminations is one device for the author of the code: it is 3 input terminations for the system :)
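A small Java sketch of this discovery-by-ability idea (names are mine): one device implementing both abilities is discovered twice, and the system never learns the two terminations are the same object.

```java
import java.util.ArrayList;
import java.util.List;

public class TerminationMapping {
    interface InputTermination { String read(); }
    interface OutputTermination { void write(String data); }

    // A single virtual device that can do BOTH input and output.
    static class EchoDevice implements InputTermination, OutputTermination {
        String last = "";
        public String read() { return last; }
        public void write(String data) { last = data; }
    }

    // Auto-mapping: the system only sees abilities, never devices.
    static List<String> autoMap(Object... devices) {
        List<String> discovered = new ArrayList<>();
        for (Object d : devices) {
            if (d instanceof InputTermination) discovered.add("TERMINATIONS_INPUT");
            if (d instanceof OutputTermination) discovered.add("TERMINATIONS_OUTPUT");
        }
        return discovered;
    }

    public static void main(String[] args) {
        // One device for the author of the code, two terminations for the system.
        System.out.println(autoMap(new EchoDevice()).size()); // 2
    }
}
```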

Approach to analysis

A strange element to analyze with an OO approach: maybe I can try to use a reiterative, incremental approach, so I get continuous checks and feedback on my ideas. In this case tiers and layers are not the usual ones: there will be just a few layers, maybe; as for the tiers, I need to think about it. I can't use a J2EE or SunTone (r) architectural model; in my opinion they are not suitable for the tiers I can see through the fog I still have in my plans :)

Need some thinking on this.

Wednesday, January 03, 2007

Ideas on "soft" architecture

The system will keep inside it a living nucleus, free to behave, with a sigmoid back-propagation engine capable of some computation. Available to this numeric computational engine will be a knowledge storage, an index of the models available for processing by the numeric-processor part of the code. Looking at this:

I had a funny idea: a software agent capable of taking pseudo-intelligent decisions can't in any way be directly interfaced to the real world. It must be left free to behave, "sure" that it lives in a world more suitable for its nature. Following this funny idea, the "intelligent" agent will be like a prisoner in a virtualized code world, where it can do whatever it wants, in a way that is easy for the coder and the tester. This virtual world will instead be a quite stupid software shell that - without telling the truth to the intelligent prisoner - will translate real-world data coming from sensors into changes in the virtualized code world. Vice versa: every modification introduced by the intelligent prisoner (the computational intelligent decision maker) will be transformed into "impacts" on the virtualized world, and this virtualized code world will translate these modifications into output for actuators, for example. The virtualized world of code, the prison ( :] ), is just a shell simulating a convenient environment for my system kernel.

To recap: an intelligent decision maker using a network with a back-propagation numeric engine will live, behave and evolve as it likes, in a world that undergoes the actions taken by the prisoner. I will call this component the "prison". It will also change the interface exposed to the prisoner in accordance with what it reads via sensors from the real world:

1 prisoner-decision-maker
2 prisoner-decision-maker world (transponder from/to reality)

at this point a note:

The prisoner-decision-maker has rules, priorities, needs and life conditions, and it can die if it is not successful. In this case a new generation of decision maker can be created automatically by an entity, a sort of "providence", to keep the system alive. The prisoner-death event (a big system failure) will be kept in the experience of the whole system, to try to avoid the same mistake again.
The software will be "non linear": this means it will have an evolutionary routine and a sort of knowledge base. As time passes, the experience of the system needs to evolve and cause the responses of the software to be different. The system (made up, in its current scope, of software only) will keep its experience even when switched off. It is expected that the software will get better as "time goes by".
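A minimal sketch in Java of the prison as a transponder (all class and method names are my invention): sensors change only the virtual world, and the prisoner's actions are translated into actuator commands.

```java
import java.util.HashMap;
import java.util.Map;

public class Prison {
    // The virtualized code world: a simple map of named properties.
    private final Map<String, Double> virtualWorld = new HashMap<>();

    // Real world -> virtual world: a sensor reading becomes a world change.
    public void onSensorReading(String sensor, double value) {
        virtualWorld.put("virtual." + sensor, value);
    }

    // The prisoner reads only the virtual world, never the sensors.
    public double perceive(String property) {
        return virtualWorld.getOrDefault(property, 0.0);
    }

    // Virtual world -> real world: a prisoner action becomes actuator output.
    public String onPrisonerAction(String action, double intensity) {
        return "actuator:" + action + "=" + intensity;
    }

    public static void main(String[] args) {
        Prison prison = new Prison();
        prison.onSensorReading("temperature", 21.5);
        System.out.println(prison.perceive("virtual.temperature")); // 21.5
        System.out.println(prison.onPrisonerAction("move", 0.8));   // actuator:move=0.8
    }
}
```

The prisoner code would depend only on perceive(...) and actions, never on sensors or actuators: that is the whole point of the prison.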

Software will use mainly java :)

Software will use Linux :)

The sources, even if of no value, will be open source and available to everyone who wants to waste some time.

Software will have lot of bugs :]

Hmmm... are bugs some spontaneous native code intelligence to subdue? Bugs... what a mystery they are.

Steps to proceed

Here follows a list of possible steps to go through (as this is an experiment, it may turn out to be NON-FEASIBLE at a certain point):

Main steps:

1 Create software as I want and can
2 Choose the hardware that is suitable for the software's needs and costs
3 Assemble 1 and 2
4 Tune
5 Find capabilities and hard limits

Nested steps:

1a Define requirements: what does this software need to do?
1b Refine dreams with reality (skills, costs, time)
1c Increase the chances of success: build a prototype, then new versions, each one possibly richer than the previous. This means obtaining results, and continuing...

2a Choose hardware that will fit the software, keeping in mind that this will be a mobile platform: software decides hardware.
2b Evaluate the whole cost: continue / do not continue (the software will be the only deliverable: this is an experiment, I can afford this)

What is this ?

Hello world :)

This is the diary of my experiment playing with code and robots.
I don't know the theory and my English is bad, but at the same time I love toys that go around autonomously. I'm not a student, I didn't graduate in electronics or physics, and I have neither theory nor experience with this type of stuff, but I like it, and being so ignorant about it allows everyone who wants to give me advice.
This is only a game I play because I just like it.

Take it easy :)