Sunday, February 17, 2008

Code is here, growing slowly with theory

A simple shopping list for building this theory-compliant, non-linear system:

  • The system will be made up of a virtual system able to generalize, and to reduce the amount of, information coming from the environment in which the system has been placed. The virtual system is important because the target of this blog and game is to obtain a system (even a simple one) running as a proof of concept. If the system turns out to be interesting, this could also become an evolutionary prototype in an iterative, incremental development model.
  • Actuators will be abstracted the same way, via virtual devices that speak from a virtualized, simplified reality towards the external real world. The intent of the virtual world is to generalize and simplify, abstracting and decoupling the system tiers.
  • A hardware abstraction layer able to generalize a generic device interface and the I/O calls to the underlying hardware layer. An inventory object will implement a visitor pattern and load all the interfaces/device abstractors.
  • Every virtualized sensor/actuator will produce an abstracted, basic, simple unit of information. This will be a sort of DTO able to carry the value received/transmitted, the type of operation (I or O), and the name of the virtualized device that generated or processed the data. Every device has one or more communication channels, because every virtual device (or device mapper, or device alias) can have more than one access to hardware resources.
  • The system tries to use swarm computing, but the swarm will be made up of software bees in a virtual beehive. Every bee of this virtual swarm will be the previously described basic unit of information. This DTO will be an anomalous implementation of the DTO pattern, because it will also be a worker for a dedicated thread. The DTO holding the piece of information built by a virtual device will run autonomously, linking with other information units and building chains of information units. Chains can be linked together to obtain longer chains. This will be the virtual swarm computing model for this system theory.
  • With a given frequency, the state of all the virtual devices mapped onto the virtual world will be captured in a sort of virtual devices shot.
  • This will be a collection of information units, each running as a stand-alone thread and building link-correlations with other information units and with other chains of information units.
  • Let's call the group of information units captured in one cycle of reading the virtual devices' status simply a "shot".
This first shape of the system state is the status of the system at a moment "t", and it is also the current system status to which the system itself must provide a response/resolution. This system status will then be digested by the virtual swarm and made part of the knowledge base of the system.
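To make the idea concrete, here is a minimal Python sketch of the information unit DTO and of capturing one "shot". It is only an illustration under assumptions: the device names, the tuple shape of a reading, and the `capture_shot` helper are all hypothetical, and the actual project may well use a different language.

```python
import time
from dataclasses import dataclass

@dataclass
class InformationUnit:
    """Basic unit of information produced by a virtual device (the DTO)."""
    device_name: str   # name of the virtual device that produced the value
    operation: str     # "I" for input (sensor) or "O" for output (actuator)
    channel: int       # a device can expose more than one channel
    value: float       # the value received or transmitted

def capture_shot(readings):
    """Wrap every (device, operation, channel, value) reading in a DTO.

    The resulting list is one 'shot': the system state at time t."""
    timestamp = time.time()
    units = [InformationUnit(name, op, ch, val)
             for (name, op, ch, val) in readings]
    return timestamp, units

# Example: two channels of the same virtual sensor, plus one actuator,
# read in the same capture cycle.
readings = [("thermo-1", "I", 0, 21.5),
            ("thermo-1", "I", 1, 22.0),
            ("motor-1", "O", 0, 0.0)]
t, shot = capture_shot(readings)
```

Note how the second reading reuses the device name "thermo-1" with a different channel, matching the point above that one virtual device can own several communication channels.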

The system passes through three states, with three different names:

1 Shot of the system. At a certain time, the status of all the virtual devices is captured and some information units are created. This is the system shot. It is the question the system must provide an answer to, before it becomes part of the system experience.

2 The system shot is digested and the "n" DTOs composing this shot (DTOs are composable information units) are animated as threads. These threads are the bees of the virtual swarm living in the virtual beehive.

3 The shot has been digested and more and more links are built, in terms of associations with other information units and chains of information units. This causes longer and longer chains of information units to be built.
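State 2 above — DTOs animated as threads — can be sketched as follows. This is a toy model under assumptions: the `Bee` class, the closeness-of-values linking criterion, and the tolerance value are all invented here for illustration; the real linking rules of the project are not defined yet.

```python
import threading

class Bee(threading.Thread):
    """An information unit animated as a thread (a 'bee') that links
    itself to other units in the hive whose values are close to its own."""
    def __init__(self, name, value, hive, tolerance=1.0):
        super().__init__()
        self.name, self.value = name, value
        self.hive = hive            # shared list of all bees (the beehive)
        self.tolerance = tolerance  # hypothetical closeness criterion
        self.links = []             # names of the units this bee linked to
        self._lock = threading.Lock()

    def run(self):
        # Scan the hive and record a link for every "close enough" unit.
        for other in self.hive:
            if other is not self and abs(other.value - self.value) <= self.tolerance:
                with self._lock:
                    self.links.append(other.name)

# Build the hive first, then animate every bee as its own thread.
hive = []
hive.extend(Bee(n, v, hive) for n, v in
            [("thermo-1", 21.5), ("thermo-2", 21.8), ("motor-1", 0.0)])
for bee in hive:
    bee.start()
for bee in hive:
    bee.join()
```

After joining, the two thermometer bees have linked to each other, while the actuator bee (value far away) stays unlinked: a first, one-hop step towards the chains described in state 3.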

Code is growing (slowly) here:

http://code.google.com/p/swarm-i/

Non functional requirement notes

Before proceeding to the code, let's define some non-functional requirement notes. Intelligent systems usually need to perform real-time processing in a very fast, "pseudo real-time" way. It can be interesting to change this approach, but first of all it is fundamental to define the parameters that measure the quality of a non-linear system's response. If the non-linear, pseudo-intelligent system is seen as a black box, let's try to point out the quality parameters for the results produced by this box. Some eligible, not "exotic" parameters for system response quality can be:

1 Time required to produce an output, after the system has successfully received, validated and accepted the input data.

2 Assuming an IDEAL, perfect answer for the input given at point 1, it is possible to define the difference between the perfect IDEAL answer and the REAL answer produced by the real system. The difference can be evaluated for simple systems whose values can be expressed in some metric reference system.
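Parameter 2 can be made concrete with a small sketch, assuming (hypothetically) that answers are vectors of numbers and that the distance between IDEAL and REAL is Euclidean; any other metric would do as well.

```python
import math

def response_quality(ideal, real):
    """Quality as the inverse of the Euclidean distance between the
    IDEAL answer and the REAL answer (1.0 means a perfect match)."""
    distance = math.sqrt(sum((i - r) ** 2 for i, r in zip(ideal, real)))
    return 1.0 / (1.0 + distance)

perfect = response_quality([1.0, 2.0], [1.0, 2.0])   # identical answers
degraded = response_quality([1.0, 2.0], [3.0, 2.0])  # one value is off
```

The exact mapping from distance to quality is an arbitrary choice here; what matters for the discussion below is only that quality decreases as the REAL answer moves away from the IDEAL one.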

Let's assume that there is a functional dependency between the time taken to produce the answer and the answer quality:

Response Quality = f(time taken to produce the response)

As the target of this experiment is to obtain some sort of answer, let's slow the system down.

As there are two parameters defining the system response, it could be interesting to choose parameter 1 as the quality parameter to relax. Let's state it plainly: it is acceptable to have a system that takes some time to produce a response.

Because the system works with an information-correlation technique (building a virtual swarm of info elements that need correlation), let's relax the response-time constraint and give the info-correlation phase time to build deeper links. Let's now analyze, for a while, what happens when the time used to perform information correlation is limited. With less time to scan the composable information unit chains, the links built will be mainly "surface", immediate links; let's say simpler links. Giving more time to the correlation of information units and information chains, the links produced will be "deeper", greater in number, and more complex. This is interesting because it gives a view of the meaning of the generally valid function:
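The trade-off just described — a short time budget yields only surface links, a longer one lets the scan reach deeper combinations — can be sketched like this. The pairwise scan and the closeness criterion are illustrative assumptions, not the project's actual correlation algorithm.

```python
import itertools
import time

def build_links(units, time_budget):
    """Correlate units pairwise until the time budget runs out.

    With a tiny budget the scan stops almost immediately (only
    'surface' links, or none); a generous budget lets it visit
    every pair combination and build the full set of links."""
    deadline = time.monotonic() + time_budget
    links = []
    for a, b in itertools.combinations(units, 2):
        if time.monotonic() >= deadline:
            break                      # out of time: stop linking
        if abs(a - b) <= 1.0:          # hypothetical closeness criterion
            links.append((a, b))
    return links

units = [float(i) for i in range(50)]
rushed = build_links(units, 0.0)       # no time at all: no links
relaxed = build_links(units, 5.0)      # ample time: all 49 adjacent pairs
```

Here the "depth" of correlation is just how far the pairwise scan gets, but it already exhibits the shape of the function: more time, more (and eventually deeper) links.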

Response quality = f(Time taken to produce the response)

expressed in the context of this real system engaged in the experiment:

if we state that:

(Time taken to produce the response) = (Number and depth of information unit links and associations)

then, accepting a response time increase, we should observe an increase in the response quality. This is a very simple and logical assumption that doesn't need a blog to be accepted. It is now easy to catch sight of another important parameter, which could be more relevant than the other two. If this "f" correlation (or assumption) is correct, it means that a good system response in a short time is not only something we can give up (at least for the system drawn up so far): it is not possible at all. Let's introduce a new quality parameter to enrich the metrics of system quality:

3 Number of guaranteed, fixed-quality responses per time unit

This is because, according to the needs of the consumer of the data the system produces, it could be better to have a constant number of responses per time unit, OR a constant response quality, no matter the response time. This, of course, is a matter of how the designed system is used. If the assumed relation is correct, it should not be difficult for a system provided with a quality-feedback agent to auto-regulate the time spent linking information units, and the depth of the links built, in order to handle the oscillations of the two main parameters. Let's assume, then, another function (if hardware capacity is a fixed constant):

Number of guaranteed, fixed-quality responses per time unit = f[(Response quality), (Time taken to produce a response)]


A sort of master quality controller can handle the info unit processing depth and the response time, acting as a retro-active feedback agent to keep parameter 3 (how constant the system's way of answering is) stable. Well, in a certain way, let's keep in mind that sometimes the system could need a sort of "nap", to have a complete, full information-linking phase for a best-effort quality answer, no matter the time :)
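A minimal sketch of such a master quality controller, assuming (as stated above) that quality grows with the time spent linking: when measured quality falls below the target, grant more correlation time (the "nap"); when it overshoots, answer faster. The function name, step size, and targets are all hypothetical.

```python
def regulate(correlation_time, measured_quality, target_quality, step=0.1):
    """Feedback agent: nudge the time spent linking information units
    so the response quality stays near the target, keeping parameter 3
    (guaranteed fixed-quality responses per time unit) stable.

    Assumes the relation quality = f(correlation time) is increasing."""
    if measured_quality < target_quality:
        return correlation_time + step          # allow deeper links ("nap")
    return max(step, correlation_time - step)   # good enough: answer faster

# Quality too low: the controller grants more correlation time.
slower = regulate(1.0, measured_quality=0.6, target_quality=0.8)
# Quality above target: the controller trims the correlation time.
faster = regulate(1.0, measured_quality=0.9, target_quality=0.8)
```

A real controller would of course smooth these adjustments (e.g. with a PID-style loop), but even this bang-bang version shows the retro-action: correlation depth becomes the knob, response quality the measured variable.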