Monday, February 25, 2008

New hypothesis

Doc Mud,

This peripatetic way of approaching a discussion about a theory is easy, and it allows ideas to be pinned down before they slip away. Listen to this approach.

Let's assume we have shots of the system at times t0, t1, t2, ..., tn. All the information units captured at times t0, t1, t2, ..., tn are chained by a common factor, the capture time. So we would have:

chain of units captured at t0, let's call it Ct0
chain of units captured at t1, let's call it Ct1
chain of units captured at t2, let's call it Ct2
chain of units captured at tn, let's call it Ctn

Every chain is an entity. Now, beyond the many experiences collected by every individual, what makes an actual "behaviour" for an entity is a "trend" in acting. Let's assume that a system collects pieces of experience as our theory says, scanning the system at times t0, t1, t2, ..., tn; but until there is a correlation between these small experiences, there is no behaviour. Now: what happens if the system tries to link together Ct0, Ct1, Ct2, ..., Ctn in several empirical ways, where only successful chains of chains survive? Assume that a chain is not made only of values read from virtualized sensors but also of values written to virtualized actuators (why should a shot be made up only of read values?). Let's assume that the system core has a population of relations between t-chains. If this problem is migrated into a competitive genetic context, only the links that are successful will survive.
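
To fix the idea before it goes away, here is a minimal sketch in Java (all names, like ChainLinkEvolution and the toy fitness function, are illustrative assumptions, not taken from the real code): a population of candidate orderings of the t-chains competes, and only the most successful half survives each generation.

    import java.util.*;

    // Hypothetical sketch: candidate linkings between t-chains Ct0..Ctn
    // compete; only the successful ones survive into the next generation.
    public class ChainLinkEvolution {

        // Placeholder fitness: in the real system this would be the empirical
        // success of the chain-of-chains, measured from direct system feedback.
        static double fitness(List<Integer> linking) {
            double score = 0;
            for (int i = 0; i < linking.size(); i++) {
                score -= Math.abs(linking.get(i) - i); // toy target: temporal order
            }
            return score;
        }

        public static void main(String[] args) {
            Random rnd = new Random();
            int nChains = 10, popSize = 50;
            List<List<Integer>> population = new ArrayList<List<Integer>>();

            // Seed random candidate orderings of the chains Ct0..Ct9.
            for (int p = 0; p < popSize; p++) {
                List<Integer> l = new ArrayList<Integer>();
                for (int c = 0; c < nChains; c++) l.add(c);
                Collections.shuffle(l, rnd);
                population.add(l);
            }

            for (int gen = 0; gen < 200; gen++) {
                // Selection: sort by empirical fitness, keep the best half.
                Collections.sort(population, new Comparator<List<Integer>>() {
                    public int compare(List<Integer> a, List<Integer> b) {
                        return Double.compare(fitness(b), fitness(a));
                    }
                });
                population = new ArrayList<List<Integer>>(population.subList(0, popSize / 2));

                // Reproduction: mutate survivors by swapping two chain positions.
                int survivors = population.size();
                for (int p = 0; p < survivors; p++) {
                    List<Integer> child = new ArrayList<Integer>(population.get(p));
                    Collections.swap(child, rnd.nextInt(nChains), rnd.nextInt(nChains));
                    population.add(child);
                }
            }
            System.out.println("Best linking: " + population.get(0)
                    + " fitness " + fitness(population.get(0)));
        }
    }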



Note: an info unit is an abstraction of an INPUT- or OUTPUT-agnostic piece of information, part of ONE experience the system had at time t. The knowledge shape (as we imagine the knowledge base as a multi-dimensional space) is made up of relations. Relations constitute the population of a genetic computational model. The fitness function is empirical, as it is just a calculation of the success of Ct0, Ct1, Ct2, ..., Ctn linked in different ways. Direct feedback from the system will help in correcting the links between micro chains (Ct0, Ct1, Ct2, Ctn). This will add information composition, which is relevant in terms of system response quality. There is a way to experiment with this approach and create a perfect model to train and measure the system against (take this as a reference model). It could also be built in code. I'm going to describe my idea of a perfect-model success, where the linking of t-chains has been completed successfully:
Take a ball, a virtual ball. Put it inside a box, a square box. The ball is the system, covered on its surface with sensors, virtualized abstracted sensors. Move the ball-system by applying a small force. The ball will bounce on a surface inside the box where it is contained at time t0. After the bounce at t0, the ball will move in the box and bounce somewhere else, at time t1. This happens again and again at t2, ..., tn. The ball can read its bumpers and also regulate and measure its velocity and direction. Well, if the ball-system is able to correlate these info unit chains captured via shots at t0, t1, t2, ..., tn, at a certain point the ball will stop itself far from every box wall, maybe at the centre of the box volume. Crazy idea, but what happens if the system is applied to this experiment and the difference (in a metric system, on every coordinate axis) between the perfect centre of the virtual volume and the position of the ball driven by the real system is the measure of the quality of the response?
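
A minimal sketch of how this quality measure could be computed, assuming a hypothetical BallBoxQuality class and illustrative coordinates (nothing here is from the real code):

    // Hypothetical sketch: quality of the ball-system response measured as
    // the per-axis distance between the box centre and the ball position.
    public class BallBoxQuality {

        // Box side length is an illustrative value.
        static double[] boxCentre(double side) {
            return new double[] { side / 2, side / 2, side / 2 };
        }

        // Per-axis error plus an aggregate Euclidean distance: the smaller
        // the distance, the better the correlation of the t0..tn shot chains.
        static double quality(double[] ball, double side) {
            double[] centre = boxCentre(side);
            double sum = 0;
            for (int axis = 0; axis < 3; axis++) {
                double d = ball[axis] - centre[axis];
                System.out.println("axis " + axis + " error: " + d);
                sum += d * d;
            }
            return Math.sqrt(sum);
        }

        public static void main(String[] args) {
            double[] ballPosition = { 4.2, 5.1, 4.8 }; // illustrative reading
            System.out.println("distance from centre: " + quality(ballPosition, 10.0));
        }
    }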

An idea...

Please, Doc Mud, from today on don't post comments: enter the blog and post directly, as it is too difficult to read and write by playing with comments.

Welcome doc Mud.


Godzilla :]

Info linking architecture design slowing down

There is a design problem concerning the info units and the structured space of the knowledge base. Because of this it is necessary to review the design of the patterns that build the knowledge base via info units.

In the meanwhile, the low-level tiers of the code are moving forward: new classes are being added, and development will pause once the code is able to provide the simulation labs. This is possible because the abstraction is decoupled from any other component.

Remember, code is growing at:

code on google svn



Thursday, February 21, 2008

Logical system architecture nightly build. Please, read me again tomorrow morning.

Thanks to the contribution of Doc Mud it is now assumed that it is possible to use genetic algorithms to translate info unit chains into "generations". It may also be possible to decouple the virtualized swarm made of info units from the visitor that will explore the units to apply the composition logic. In this way it will be possible to do something very interesting: change the translation analysis context while keeping the same info unit (bees and bee chains) producer. Let me try to explain:

The information producer is always the virtual swarm, made up of info units produced by virtual devices.

The information consumer is the logic using the swarm of information units to compose the system knowledge base shape (called a shape as we like to think of this code object as a 3D space, reshaping every time it is enriched with experience).

By delegating to a proxy placed between the info producer and the info consumer, the logic used to manipulate the available info units to enrich the knowledge base can be changed without redesigning the info producer. Fewer system parts to rewrite in case of error, more time saved. Hmm, more than one computing model can be applied to the swarm, in order to make comparisons and change the system computing model. It will be necessary to add some more drawings.
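
A minimal sketch of the proxy idea, with hypothetical interface names (InfoProducer, CompositionLogic, CompositionProxy are assumptions for illustration): the computing model can be swapped at runtime while the producer stays untouched.

    import java.util.List;

    interface InfoUnit { }                        // generalized I/O-agnostic unit

    interface InfoProducer {                      // the virtual swarm side
        List<InfoUnit> nextShot();
    }

    interface CompositionLogic {                  // a pluggable computing model
        void compose(List<InfoUnit> shot);        // reshapes the knowledge base
    }

    // The proxy hides which computing model is currently applied to the swarm.
    class CompositionProxy {
        private final InfoProducer producer;
        private CompositionLogic logic;           // swappable at runtime

        CompositionProxy(InfoProducer producer, CompositionLogic initial) {
            this.producer = producer;
            this.logic = initial;
        }

        void setLogic(CompositionLogic newLogic) { // change model, keep producer
            this.logic = newLogic;
        }

        void step() {
            logic.compose(producer.nextShot());
        }
    }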

Recap on the fly:

Physical devices are abstracted by the virtual world, via a HAL implementation.
Every real device is forwarded inside the reality of the virtual world via a virtual device alias, to decouple, generalize, and allow testing and data mutation/injection.

Every device alias has n channels, one for each port it has on the physical device it is mapping. Every device alias is registered in a suitable logic repository.

Every channel is registered in a repository, almost the same as happens for virtual devices.

At a time t, the system reads all the channels that are INPUT capable, as all the channels "observe" the same observer-notifier.

This, via the channel repository, will cause all the virtual device channels to wake up, read (if they can) and communicate a generalized, channel- and device-independent DTO, carrying the value found and the read metadata.
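
A minimal sketch of this read cycle, with hypothetical names (InfoUnitDto, Channel, ChannelRepository are illustrative assumptions): all INPUT-capable channels are sampled at the same time t and emit generalized DTOs.

    import java.util.*;

    // Hypothetical sketch: at time t every registered INPUT-capable channel
    // wakes up, reads, and emits a device- and channel-independent DTO.
    class InfoUnitDto {
        final String deviceAlias; final String channel;
        final String direction;   final Object value; final long captureTime;
        InfoUnitDto(String d, String c, String dir, Object v, long t) {
            deviceAlias = d; channel = c; direction = dir; value = v; captureTime = t;
        }
    }

    interface Channel {
        boolean isInputCapable();
        InfoUnitDto read(long t);  // generalized DTO with value and metadata
    }

    class ChannelRepository {
        private final List<Channel> channels = new ArrayList<Channel>();
        void register(Channel c) { channels.add(c); }

        // A "shot": all readable channels sampled at the same time t.
        List<InfoUnitDto> shot() {
            long t = System.currentTimeMillis();
            List<InfoUnitDto> shot = new ArrayList<InfoUnitDto>();
            for (Channel c : channels)
                if (c.isInputCapable()) shot.add(c.read(t));
            return shot;
        }
    }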

All these info DTOs together will form a shot of the system.

Every information unit generated as part of a shot will be activated, becoming a bee of a virtual swarm in a virtual beehive.

When an information unit DTO is activated (let's say because it implements a Runnable paradigm as well), it becomes a bee too.

Polymorphism on the info unit makes it possible for an info unit to also be a thread worker and, when activated, a bee.
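
A minimal sketch of this polymorphism, assuming a hypothetical BeeInfoUnit class: the same object is a DTO carrying the captured value and a Runnable worker that becomes a bee when scheduled on a thread.

    // Hypothetical sketch: the info unit is a DTO and also a Runnable worker;
    // once scheduled on a thread it "flies" as a bee of the virtual swarm.
    public class BeeInfoUnit implements Runnable {
        private final Object value;     // value read or written
        private final long captureTime; // the shot time t

        public BeeInfoUnit(Object value, long captureTime) {
            this.value = value;
            this.captureTime = captureTime;
        }

        // Activated as a thread worker: the bee autonomously looks for
        // other units/chains to link with (linking logic omitted here).
        public void run() {
            // e.g. scan the beehive for compatible units and propose links
        }

        public static void main(String[] args) {
            BeeInfoUnit bee = new BeeInfoUnit(Integer.valueOf(42), System.currentTimeMillis());
            new Thread(bee).start(); // activation: the DTO becomes a bee
        }
    }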

This is the information producer subsystem.

A proxy for the bee-swarm computing logic works with the bees, creating info units/bees and hiding the algorithm that builds a polymer of information units/bees, which causes the system knowledge base to reshape. The system knowledge base is enriched by a hidden logic working with the virtualized swarm.

Good night.

Tuesday, February 19, 2008

Analysis via context migration, to define the way composable information units build a structured information space

Ok Doc Mud, let's adopt a problem-context-migration approach to problem solving. Let's adopt a competitive space, for genetic evolution. It can be important to define some minor points.

Information units are bees of a virtual swarm: [info unit] -> [DTO for data linking and association] -> [runnable DTO ?] -> [bee]

All the information units created in a shot make a chain, and this is a generation.

The fact that in a shot all the information units (-> bees) have been created at the same shot time "t" is extremely relevant. It says that the approach of capturing timed shots is coherent with the rest of the theory.
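
A minimal sketch of this grouping, with hypothetical names (ShotGenerations, Unit are illustrative): units sharing the same capture time t fall into the same generation.

    import java.util.*;

    // Hypothetical sketch: info units grouped by their common capture time t;
    // each group (one shot) is treated as one generation of the genetic model.
    public class ShotGenerations {
        static class Unit { final long t; Unit(long t) { this.t = t; } }

        static SortedMap<Long, List<Unit>> generations(List<Unit> units) {
            SortedMap<Long, List<Unit>> byTime = new TreeMap<Long, List<Unit>>();
            for (Unit u : units) {
                List<Unit> gen = byTime.get(u.t);
                if (gen == null) { gen = new ArrayList<Unit>(); byTime.put(u.t, gen); }
                gen.add(u); // all units sharing t belong to the same generation
            }
            return byTime;
        }
    }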

Let's go and see your post, Doc Mud, but the fusion of genetic programming with a virtualized swarm computing model needs more care.

DTO domains separation



Change the problem context, moving it (the problem) into a more comfortable one (the context)

The challenge is to try to imagine the software problem of information unit composition/chaining as "chemical bonds", as links between composable info units. It may or may not be a way to find a solution, but it is such a strange and interesting analysis approach, why not try? Can a person extract a problem from its original context, forget that context and translate it into another, different context, to get a different and more familiar (for our brains) point of view?

Composable info unit composition: a message from Doctor Mud needs a response

I saw that comment, Doctor Mud :] (Vincenzo), and that's the most interesting issue about phase 2 of info processing. Let's assume it is feasible to abstract virtual device aliases, and to let them produce generalized information units. Then, let's consider this type of DTO, where pieces of info move through the pattern of INFO THREE STATES. To solve this very "delicious" problem, I'd like to suggest a new approach to designing software; this is a game, so it is possible to say silly things. This approach is based on the idea that every software design problem, and every software entity participating in a component model, can be seen as a physical virtualized world where it is possible, for software agents and software design problems, to apply physical laws and a 3D spatial model. I hope this will make it easier to view the entities and the problems of the software. In our case: let's imagine the information units as spheres, floating in a 3D space, where a sort of Coulomb law is valid for these "balls". I want to suggest not proceeding with a static rule-based linking approach. Let's suppose these balls are able to manifest spontaneous attraction, according to their intrinsic nature, a nature defined by the features of the information itself. It is like having some magnets with more than the usual two poles, in a gel. Is it possible to imagine a law that defines spontaneous chaining by these information units? Doctor Mud? Do you know that strange vegetable that produces green balls with small hooks? Look here at the Bardana plant.
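
A minimal sketch of such a law, purely illustrative (the affinity field and the inverse-square form are assumptions of this game, not a settled design):

    // Hypothetical sketch: info units as spheres in 3D, attracting each other
    // with a Coulomb-like law driven by an intrinsic "affinity charge".
    public class SpontaneousAttraction {

        static class UnitSphere {
            double[] pos;    // position in the 3D information space
            double affinity; // intrinsic nature of the information
            UnitSphere(double[] pos, double affinity) {
                this.pos = pos; this.affinity = affinity;
            }
        }

        // Attraction ~ (affinityA * affinityB) / distance^2, like Coulomb's
        // law; units whose natures match pull together and chain spontaneously.
        static double attraction(UnitSphere a, UnitSphere b) {
            double d2 = 0;
            for (int i = 0; i < 3; i++) {
                double d = a.pos[i] - b.pos[i];
                d2 += d * d;
            }
            return (a.affinity * b.affinity) / Math.max(d2, 1e-9);
        }
    }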




As I hate cold numbers, let's proceed with a purely qualitative approach, and let's extract the numbers at the end, from the running prototype, just to evaluate the prototype's success and quality.
It is relaxing to do software analysis this way; it seems to be the "take it easy development model" :)

Sunday, February 17, 2008

Code is here, growing slowly with theory

A simple shopping list to build this theory-compliant non-linear system:

  • The system will be made up of a virtual system able to generalize and reduce the order of magnitude of the information coming from the environment where the system has been put. The virtual system is important because the target of this blog and game is to obtain a system (even a simple one) running as a proof of concept. This could also become an evolutionary prototype in a reiterative incremental development model, if the system turns out to be interesting.
  • Actuators will be abstracted in the same way, via virtual devices, speaking from a virtualized simplified reality towards the external real world. The intent of the virtual world is to generalize and simplify, abstracting and decoupling the system tiers.
  • A hardware abstraction layer able to generalize a generic device interface and the I/O calls to the underlying hardware layer (see the sketch after this list). An inventory object will implement a visitor pattern and load all the interfaces/device abstractors.
  • Every virtualized sensor/actuator will produce an abstracted, basic, simple unit of information. This will be a sort of DTO able to carry the value received/transmitted, the type of operation (I or O), and the name of the virtualized device that generated or processed the data. Every device has one or more communication channels, because every virtual device (or device mapper, or device alias) can have more than one access to hardware resources.
  • The system tries to use swarm computing, but the swarm will be made up of software bees, in a virtual beehive. Every bee of this virtual swarm will be the previously described basic unit of information. This DTO will be an anomalous DTO pattern implementation, because it will also be a worker for a dedicated thread. The DTO of the piece of information built by a virtual device will run autonomously to link with other information units, building chains of information units. Chains can be linked together, to obtain longer chains. This will be the virtual swarm computing model for this system theory.
  • With a given frequency, the state of all the virtual devices mapped onto the virtual world will be captured in a sort of virtual devices shot.
  • This will be a collection of units of information, each running as a stand-alone thread and building link-correlations with other information units and with other chains of information units.
  • Let's call the group of information units captured in one cycle of reading the virtual devices' status simply a "shot".
This first shape of the state of the system is the status of the system at a moment "t", and it is also the current system status to which the system itself must provide a response/resolution. This system status will then be digested by the virtual swarm and made part of the knowledge base of the system.
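
Here is a minimal sketch of the HAL tier named in the list above, with hypothetical names (GenericDevice, DeviceChannel, DeviceInventory are illustrative; the real inventory would use a visitor to load the abstractors):

    import java.util.*;

    // Hypothetical sketch of the HAL tier: a generic device interface,
    // channels per device, and an inventory loading the device abstractors.
    interface GenericDevice {
        String alias();                  // the virtual device alias name
        List<DeviceChannel> channels();  // one channel per physical port
    }

    interface DeviceChannel {
        boolean canRead();
        boolean canWrite();
        Object read();                   // I call to the underlying hardware
        void write(Object value);        // O call to the underlying hardware
    }

    class DeviceInventory {
        private final Map<String, GenericDevice> devices =
                new HashMap<String, GenericDevice>();

        void load(GenericDevice d) {     // the real design uses a visitor here
            devices.put(d.alias(), d);
        }

        Collection<GenericDevice> all() { return devices.values(); }
    }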

The system has three states and three different names:

1 Shot of the system. At a certain time, all the virtual devices' statuses are captured and some information units are created. This is the system shot. This is the question to which the system must provide an answer, before it becomes part of the system experience.

2 The system shot is digested and the "n" DTOs composing the shot (DTOs are composable information units) are animated as threads. These threads are all bees of the virtual swarm that lives in the virtual beehive.

3 The shot has been digested and more and more links are built, in terms of associations with other information units and chains of information units. This will cause longer chains of information units to be built.
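
A minimal sketch of these three states as code, assuming a hypothetical InfoThreeStates class (names are illustrative):

    // Hypothetical sketch: the three states an info unit passes through,
    // from captured shot to digested, chained experience.
    public class InfoThreeStates {
        enum State {
            SHOT,      // 1: captured at time t, the open question to answer
            ANIMATED,  // 2: DTO running as a thread, a bee in the beehive
            DIGESTED   // 3: linked into chains, part of the knowledge base
        }

        private State state = State.SHOT;

        void activate() { if (state == State.SHOT) state = State.ANIMATED; }
        void digest()   { if (state == State.ANIMATED) state = State.DIGESTED; }
    }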

Code is growing (slowly) here:

http://code.google.com/p/swarm-i/

Non functional requirement notes

Before proceeding to the code, let's define some non-functional requirement notes. Usually intelligent systems need to perform real-time processing in a very fast "pseudo real time" way. It can be interesting to change this approach, but first of all it is fundamental to define the parameters to measure the quality of a non-linear system's response. If the non-linear pseudo-intelligent system is seen as a black box, let's try to point out the quality parameters for the results produced by this box. Some eligible, not "exotic" parameters for system response quality can be:

1 Time requested to produce an output, after the system has successfully received, validated and accepted the input data.

2 Assuming an IDEAL perfect answer for the input given at point 1, it is possible to define the difference between the perfect IDEAL answer and the REAL answer produced by the real system. The difference can be evaluated for simple systems whose values can be obtained in any metric reference system.

Let's assume that there is a functional dependency between the time taken to produce the answer and the answer quality:

Response Quality = f(time taken to produce the response)

As the target of this experiment is to obtain a sort of answer, let's slow down the system.

As there are two parameters defining the system response, it could be interesting to choose parameter 1 as the quality parameter to relax. Let's state: it is possible to have a system that takes some time to produce a response.
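
A minimal sketch of what "taking some time" could look like, assuming a hypothetical BudgetedCorrelation class: the correlation loop runs under a configurable time budget, and a larger budget yields more (deeper) linking passes. The sleep stands in for real linking work.

    // Hypothetical sketch: correlation runs under a configurable time budget;
    // a larger budget allows more passes and therefore deeper links.
    public class BudgetedCorrelation {

        static int correlate(long budgetMillis) {
            long deadline = System.currentTimeMillis() + budgetMillis;
            int depth = 0;
            while (System.currentTimeMillis() < deadline) {
                depth++;                  // each pass refines links one level deeper
                try { Thread.sleep(5); }  // stand-in for real linking work
                catch (InterruptedException e) { break; }
            }
            return depth;                 // deeper links <-> more time spent
        }

        public static void main(String[] args) {
            System.out.println("shallow: depth " + correlate(10));
            System.out.println("deep:    depth " + correlate(100));
        }
    }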

Because the system works with an information correlation technology (building a virtual swarm of info elements that need correlation), let's relax the response time and give the info correlation phase time to build deeper links. Let's now analyze, for a while, what happens when limiting the time used to perform information correlation. With less time to scan the composable information unit chains, the links built will be mainly "surface", immediate links. Let's say simpler links. Giving more time to the correlation of information units/information chains, the links produced will be "deeper", more numerous, more complex. This is interesting as it gives a view of the meaning of the generally valid function:

Response quality = f(Time taken to produce the response)

expressed in the context of this real system engaged in the experiment:

if we state that:

(Time taken to produce the response) = (Number and depth of information unit links and associations)

then, accepting an increase in response time, we should observe an increase in response quality. This is a very simple and logical assumption that doesn't need a blog to be accepted. It is now easy to catch sight of another important parameter, which could be more relevant than the other two. If this "f" correlation (or assumption) is correct, this means that obtaining a good system response in a short time is not only not expected (at least for the system drawn up to now), it is not possible at all. Let's introduce a new quality parameter to enrich the metrics of system quality:

3 Number of guaranteed, quality-fixed responses per time unit

This is because, according to the needs of the consumer of the data produced by the system, it could be better to have a constant number of responses per time unit, OR a constant response quality, no matter the response time. This, of course, is a matter of the intended usage of the designed system. If the assumed relation is correct, it should not be difficult for a system provided with a quality feedback agent to auto-regulate the time spent linking information units, and the depth of the links built, in order to handle the oscillations of the two main parameters. Let's assume, then, another function (if hardware capacity is a fixed constant):

Number of guaranteed and quality-fixed responses per time unit = f[(Response quality), (Time taken to produce a response)]


A sort of master quality controller can handle the info unit processing depth and the response time, acting as a retro-active feedback agent, to keep parameter 3 (how constant the way the system answers is) stable. Well, in a certain way, let's keep in mind that sometimes the system could need a sort of "nap" to have a complete, full information linking phase for a quality best-effort answer, no matter the time :)
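
A minimal sketch of such a master quality controller, purely illustrative (the step size and the quality scale are assumptions): a retro-active agent nudging the linking time budget after every response.

    // Hypothetical sketch: a master quality controller as a feedback agent,
    // nudging the linking time budget to keep parameter 3 stable.
    public class MasterQualityController {
        private long linkBudgetMillis = 50; // time allowed for info unit linking

        // Called after every response with the measured quality (0..1).
        void onResponse(double measuredQuality, double targetQuality) {
            if (measuredQuality < targetQuality) {
                linkBudgetMillis += 10;      // allow a "nap": deeper links
            } else if (linkBudgetMillis > 10) {
                linkBudgetMillis -= 10;      // speed up: shallower links suffice
            }
        }

        long budget() { return linkBudgetMillis; }
    }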