Sunday, November 30, 2008

A program is like a car: can I first think of what the car could be, according to the program's features ? (FR, NFR)

It often happens that, working on a piece of code, its design has to be resilient to stress, fast, flexible enough to react to changes, able to change the environment where it lives. What else ? It needs to resist in time, for its whole lifecycle. Well, enumerating this way the needs of a program, it seems we are speaking about a physical, 3D structure made of iron, with its own materials, like a car. It must be fast, resilient, resistant, enduring, reactive to stress and strain, elastic enough to return to its previous shape without permanent deformations or fractures. They seem the same, don't they ?

Is it possible, this is my question, to use the inverse procedure: starting from a physical model, and only then proceeding to think about the software design ? Could choosing a model prototype suited to the features the design will have to have, before starting the design itself, actually help the design phase ?

Can be:

NON functional requirements --> real world physical structure satisfying the requirements (like a prototype) --> design ?

I suppose this could be applied only to very high level non functional requirements, those that state HOW the system should behave and that, at the same time, are responsible for actually deciding the architecture design.

Design first, design archetype first ?

Tuesday, September 02, 2008

Cooperative intelligence, cooperative knowledge and social abstractions of swarm patterns

Crossing a street by car, coming back home yesterday, I went through a crossroads with a lot of glass pieces on the ground. Terrible to say, but almost every month several cars crash there into one another, with the usual common problems. Obviously it is a problem for the traffic department, as that crossroads is error prone for humans driving vehicles. Driving there, I was thinking about a new post here, and suddenly a strange idea came to my mind. Every driver that crashed his car there now KNOWS that there is a particular situation to take into account, and whoever saw the crash happening knows it and is aware of it too. Now... all the drivers that for one reason or another had the crash experience will obviously have the benefit of avoiding it. The rest of the drivers will potentially be subject to the same risk. What's the difference ? Simple, of course: the difference is in having the experience of that crossroads or not.

Let's imagine now that such experience could be shared by drivers instantly, in a sort of common knowledge base, populated by everyone and accessible to everyone. Let's say this would be a sort of Big Internet, based on drivers' personal experiences, using a knowledge base produced by the community of its knowledge contributors. Different types of experience, different knowledge contexts: cooperative shared knowledge. This fantastic landscape really seems a social abstraction of a swarm computing model.

Moreover, let's think about the world wide web. May we consider that the WWW is just the first manifestation of a need for a real documentary social knowledge base, made feasible by technology ? What about the so called internet "social networking" ? Again, are humans "swarm-ing" their behaviour, at least for information sources ? This would imply that the swarm approach is not unknown to humans as well, and that maybe it plays a relevant role in the "rotism" that builds human social intelligence and knowledge.
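The shared experience base imagined above can be sketched in a few lines of Java. A minimal sketch, assuming hypothetical names (`SharedExperienceBase` and its methods are mine, not part of any project code): drivers contribute experiences keyed by location, and any driver can query them before reaching that location.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the common knowledge base: populated by everyone,
// accessible to everyone, keyed by the place the experience refers to.
final class SharedExperienceBase {
    private final Map<String, List<String>> byLocation = new HashMap<>();

    // A driver contributes an experience for a location.
    void contribute(String location, String experience) {
        byLocation.computeIfAbsent(location, k -> new ArrayList<>()).add(experience);
    }

    // Any driver can read the community's experiences for a location.
    List<String> experiencesFor(String location) {
        return byLocation.getOrDefault(location, Collections.emptyList());
    }
}
```

A driver approaching the dangerous crossroads would simply query `experiencesFor("that crossroads")` and benefit from crashes he never had himself.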

Correction: intelligence and knowledge, what is the relationship between these two "pivots" ? What is the interaction among swarm model, knowledge and intelligence ?

Wednesday, August 13, 2008

Introducing parameters' factory and registry support

Introducing support classes to create objects and register dynamic parameters:

import java.util.Observable;
import java.util.Observer;

public class Test extends Observable implements Observer {

	private void init() {
		long hearBeatYeldPause = 1000;
		long heartBeatsNumber = 100; // same heart beat count used in the Prova sample below
		String parameterName = "fuel"; // the parameter nature
		DynamicParameter dynamicParameterDecresingImpl = null;
		dynamicParameterDecresingImpl = new DynamicParameterDecreasingImpl(this);
		dynamicParameterDecresingImpl.setHeartBeat(heartBeatsNumber, hearBeatYeldPause);
		new Thread(dynamicParameterDecresingImpl).start();
		for (int i = 0; i < 102; i++) {
			try {
				Thread.sleep(hearBeatYeldPause);
			} catch (InterruptedException e) {
				e.printStackTrace();
			}
		}
	}

	/**
	 * @param args
	 */
	public static void main(String[] args) {
		Test test = new Test();
		test.init();
	}

	public void update(Observable observable, Object arg1) {
		System.out.println("UPDATE RECEIVED by " + observable.toString());
	}
}

Tuesday, August 12, 2008

Associating objects in the virtualization lab, with effects on the simulation vehicle

Let's state on my game journal that:

Boxes (square boxes, but by now these are the only ones present) are OBSTACLES

Capsules are FUEL REFILLS

Spheres... hmmm... could be "unknown effect", to be checked on impact ? I don't know yet. Let's wait until they are needed.
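As a toy encoding of the associations in this game journal, something like the following could do; the enum and class names are my own assumptions, just fixing the legend in code:

```java
// Toy legend for the virtualization lab: shape -> meaning, as stated above.
enum ShapeKind { BOX, CAPSULE, SPHERE }

enum Meaning { OBSTACLE, FUEL_REFILL, UNKNOWN_EFFECT }

final class LabLegend {
    static Meaning meaningOf(ShapeKind shape) {
        switch (shape) {
            case BOX:
                return Meaning.OBSTACLE;      // square boxes are obstacles
            case CAPSULE:
                return Meaning.FUEL_REFILL;   // capsules refill the tank
            default:
                return Meaning.UNKNOWN_EFFECT; // spheres: effect not decided yet
        }
    }
}
```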

Monday, August 11, 2008

How fuel changes

Fuel decreasing sample starts to run:

Fuel tank collisions, refuelling sample:

Fuel is blinking RED sample:

Links (some in Italian)


Why dynamic parameters ?

It is worth explaining why.

A thread running an algorithm to change a value can represent whatever we like in the context of a mechanical (only ?) system simulation. It is feasible to imagine fuel decreasing in a "threaded", autonomous way while time passes, with a defined model of variation. With the code done, and the interface foreseeing the capability to receive a "set" of the parameter name (that is, the parameter's nature, eg: fuel), it can be reused. Via the double mutual observer/observable pattern, collisions with fuel tanks can be notified to the parameter thread, and the fuel increased, thus establishing a relation between the value, changing by itself, and what happens in the virtual environment. Every new parameter, with its own meaning, adds more virtual devices and multiplicity to the simulation scenario, increasing the number of data types.

A virtual parameter with a thread in its soul, to simulate a new range of virtual sensors, based on the changing of a value, like fuel

It is actually interesting: two observer/observable patterns, one against the other, to monitor the ongoing value of a... value ! All that simply thanks to a thread and to the common pattern of the heart beat, to decrease or increase with time, in its own process, with different color levels, a named floating value. This could be fuel, for example, or engine oil. It is self sufficient, needing nothing more than the jdk runtime, and it is so easy that the few classes involved in this game are posted here to let everyone who likes play:

Parameters threshold for colors:

DynamicParameter factory (actually only the DEcreasing impl is available):

Paradigm interface for this entity:

The decreasing impl (and the only one written up to now):
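Since the original class listings were lost with the post's formatting, here is a hedged sketch of what the DynamicParameter paradigm interface and the decreasing impl could look like, consistent with the test classes posted in this blog. All signatures are my reconstruction, not the original code:

```java
import java.util.Observable;
import java.util.Observer;

// Assumed paradigm interface: a named parameter whose value changes by
// itself, beat after beat, in its own thread.
interface DynamicParameter extends Runnable {
    void setParameterName(String name);
    void setHeartBeat(long heartBeatsNumber, long yieldPause);
    double getValue();
}

// Assumed decreasing implementation: at every heart beat the value goes
// down by one unit and the registered observer is notified.
class DynamicParameterDecreasingImpl extends Observable implements DynamicParameter {
    private String parameterName = "parameter";
    private long heartBeatsNumber = 100;
    private long yieldPause = 1000;
    private double value = 100.0;

    DynamicParameterDecreasingImpl(Observer observer) {
        addObserver(observer);
    }

    public void setParameterName(String name) { this.parameterName = name; }

    public void setHeartBeat(long beats, long pause) {
        this.heartBeatsNumber = beats;
        this.yieldPause = pause;
    }

    public double getValue() { return value; }

    public void run() {
        for (long beat = 0; beat < heartBeatsNumber; beat++) {
            value = value - 1.0; // the decreasing model of variation
            setChanged();
            notifyObservers(parameterName + "=" + value);
            try {
                Thread.sleep(yieldPause);
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}
```

The increasing counterpart (refuelling on collision) would follow the same shape, with the value going up when a collision is notified back to it.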

To try just use these lines of test class:

import java.util.Observable;
import java.util.Observer;

public class Prova extends Observable implements Observer {

	public Prova() {
		init();
	}

	private void init() {
		long hearBeatYeldPause = 1000;
		DynamicParameter dynamicParameterDecresingImpl = new DynamicParameterDecreasingImpl(this);
		dynamicParameterDecresingImpl.setHeartBeat(100, hearBeatYeldPause);
		Thread daemonThread = new Thread(dynamicParameterDecresingImpl);
		daemonThread.setDaemon(true);
		daemonThread.start();
		for (int i = 0; i < 102; i++) {
			try {
				Thread.sleep(hearBeatYeldPause);
			} catch (InterruptedException e) {
				e.printStackTrace();
			}
		}
	}

	/**
	 * @param args
	 */
	public static void main(String[] args) {
		Prova prova = new Prova();
	}

	public void update(Observable observable, Object arg1) {
		System.out.println("UPDATE RECEIVED by " + observable.toString());
	}
}

Monday, July 07, 2008

Composable information unit 3D shape definition goes through single dimension abstraction definition

Let's assume the code on Google is ready to produce information to be transformed into a new java object, standardized for the application domain of the decision maker, via some ad hoc defined interfaces. This information will be de-boxed and re-boxed into what has previously been defined as compose-able information units. It is important to create a spatial 3D conceptual shape for the compose-able information units. This means that these units will be shapes, whose shape changes according to their data content. Let's say a pojo whose value is the integer 3 will be a box, and a pojo whose value is the integer 5 will be a sphere. This is very simplified, but could help in understanding, as it really seems to me that the human brain is more confident in approaching spatial problems. So, for these conceptual 3D shapes: how could a single DIMENSION be defined, in terms of software data representation ? Every compose-able information unit will be threaded to become a bee belonging to a virtual swarm, able to link other bees. The behaviour of this swarm will be translated into a system "expert" behaviour.
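The value-to-shape mapping of this simplified example could be sketched like this (the names are mine, just to fix the idea in code):

```java
// Toy sketch of the simplified mapping above: the data content of a
// compose-able information unit decides its 3D conceptual shape.
enum ConceptualShape { BOX, SPHERE, UNDEFINED }

final class ComposableUnit {
    final int value;

    ComposableUnit(int value) { this.value = value; }

    // Simplified rule from the post: 3 -> box, 5 -> sphere.
    ConceptualShape shape() {
        switch (value) {
            case 3:
                return ConceptualShape.BOX;
            case 5:
                return ConceptualShape.SPHERE;
            default:
                return ConceptualShape.UNDEFINED;
        }
    }
}
```

A real mapping would of course be a function over the whole data content, not a lookup of two magic values; the open question above is exactly what a single dimension of that function should be.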

Last built hardware testing platform

These are a few shots of the hardware assembled to perform tests for a previous design of the same system. The motherboard is a pcm4823 by Advantech (c). CCD, frame grabbers, bump sensors, inclination sensors, two big legs.

side view

front view

legs not mounted on body: front view

legs not mounted on body: side view

rear view

Linux Debian is on board; a GSM modem is also installed, but it is not used yet. To provide minimum perimeter penetration event notifications, there are Sharp GPD2D12 infrared barriers. Legs, when the system is used, are connected to the body, one per side. The motherboard (a POS-like one), based on the industrial PC104 standard, is used for high level computation. Another board, equipped with a Motorola 68HC11, performs the readings from all sensors and all the writes to all actuators. This board is equipped with a customized jvm created for this type of device. This board by Grifo implements the "man in the middle" design pattern.
It is possible to see details about the board and the java runtime at: The motherboard is instead visible here: The problem is always the same: buy hardware only after you know exactly the software requirements. Even when hardware is very interesting and an excuse is needed to have it, just because it has on board a 16MB disk on chip storage (DOC2000). A note on the "vision" system: the CCD was mounted inside a thermally insulated box, with a Peltier cell just behind the bottom of the camera box. This was because the intent was to keep the CCD cold and perceive infrared frequencies in poorly enlightened environments, making a bigger difference between the internal camera box temperature and the external world temperature. Simple pattern recognition was performed using PNG image files captured by a digital parallel port converter, connected to the analogue camera output.

This prototype's name was "virgola", "comma" in English

(Every trademark belongs to its own proprietor)

Thursday, July 03, 2008

Title for the previous post

Name for the previous taken note on my game journal:

Intelligence extraction and internal-to-external driven mutation, in a virtualized swarm equipped system.

About the importance of social relationship between virtual swarm entities, in the economy of the intelligence extraction from whole swarm behaviour

What I'm asking myself is what happens when manipulating the social relations between virtual swarm entities. First, it is important to recall that the virtual swarm is a swarm computing like pattern implementation, where the swarm is made up of software agents (for this reason it is called virtual). The behaviour of the whole swarm is captured as the "intelligent" behaviour of the system equipped with this implementation. I find this extremely interesting. The code on Google is not yet ready to receive such an elegant manipulation so, by now, this is just a note taken not to forget the idea.

Assuming that it could be possible to change the flavour of the social relation manipulations, these could be many. Some samples that come to my mind on the fly go from the "basic instincts" level up to "ethic" and more noble instincts of cooperation, established for a social common intent between the entities being part of the virtual swarm. It is easy to note that the more evolved schemas have as a main intent the safeguard, the protection, not of the single entity, but of the whole swarm, or social community. Very interesting: evolving social behaviours, the virtual swarm changes into a social community. Evolving the level of relations inside the virtual swarm does not only cause a mutation of social intents, but maybe can also cause a mutation of the very nature of the swarm. A swarm is just a group without social conscience or perception of itself, but it can mutate into a social group of evolved entities. From a virtual swarm to a social group, in a driven evolutive mutation.

Question: how will the intelligent behaviour extracted from the system mutate, as a consequence of internal mutations ? Will mutations in the "virtual swarm brain" be propagated to the system equipped with that "virtual swarm brain", and how ?

Wednesday, June 11, 2008

Collision detection

Reading and analyzing ConcaveDemo, still from jbullet:

I changed the vehicle demo, adding a callback for the collisions. For sure, collisions detected by the jbullet API layer will have to cause an unsolicited message post from the virtual sensor port to the decision maker layer, inside the system. Position is instead read on demand, with solicited port read commands, from the system decision maker to the virtual sensor port.
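The two interaction styles can be sketched library-free like this; class and method names are assumptions for illustration, not the jbullet or swarm-i API:

```java
import java.util.function.Consumer;

// Sketch of the virtual sensor port: collisions are PUSHED unsolicited to
// the decision maker, while position is PULLED on demand with a solicited
// read command.
final class VirtualSensorPort {
    private final double[] position = {0.0, 0.0, 0.0};
    private final Consumer<String> decisionMaker; // unsolicited message sink

    VirtualSensorPort(Consumer<String> decisionMaker) {
        this.decisionMaker = decisionMaker;
    }

    // Called by the physics layer's collision callback: push, unsolicited.
    void onCollision(String detail) {
        decisionMaker.accept("COLLISION: " + detail);
    }

    // Called by the decision maker: pull, on demand.
    double[] readPosition() {
        return position.clone();
    }
}
```

The real wiring would hook `onCollision` into the jbullet collision callback and keep `position` updated from the simulated vehicle.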

Tuesday, June 10, 2008

I'm using jbullet from

I'm using jbullet with some simple modifications, in order to create a test world to link virtual sensors to the swarm-i system. The idea is to let swarm-i read from the VehicleDemo as if it was reading from real sensors, and to let the same swarm-i system drive the vehicle of the VehicleDemo sample. I added some more shapes, like spheres and capsules, in order to associate some shapes with obstacles, some others with "things to search for", some others with dangers (capsules for example).

So, what could be better than a physics simulation API ? It also allows a wireframe view, in order to see the relative reference systems:

Sunday, May 04, 2008

A question about intelligence models


If intelligence is the way of solving problems using the best patterns, is intelligence itself a pattern to analyze and solve problems ? This is not meant as a pun but, if it is true, then intelligence could be "taught" like every other "mechanism": design the mechanism, and intelligence will no longer be something special. Yes, we are still in the artificial intelligence toys context, and this is still a game, but this is maybe what happens when trying to design an intelligent system: the designer copies his own algorithm, or the algorithm he thinks intelligence to be, inside the system that is the object of the design phase he is facing. Does this mean that the only possible "intelligence model" is the human one ? If so, success in producing a non linear (which for me means somehow intelligent) algorithm is "just" producing the most human-like one possible. What a pity: it could be so interesting to face a model of intelligence that is completely unknown.

Tuesday, April 08, 2008

Simple class from jbullet modified just for a quick and dirty feasibility test.

The code following has been decompiled from the JBullet basic demo class, changed to have one cube, and to trace the linear velocity as well as the x, y, z coordinates of the box. The numbers come from a stupid System.out.println(); later this GUI will be integrated in swarm-i to send these numbers to the Lab connected dummy devices. In a correct way.

// Decompiled by Jad v1.5.8g. Copyright 2001 Pavel Kouznetsov.
// Jad home page:
// Decompiler options: packimports(3)
// Source File Name:

package javabullet.demos.basic;

import java.util.ArrayList;
import java.util.List;
import javabullet.collision.broadphase.BroadphaseInterface;
import javabullet.collision.broadphase.SimpleBroadphase;
import javabullet.collision.dispatch.CollisionDispatcher;
import javabullet.collision.dispatch.DefaultCollisionConfiguration;
import javabullet.collision.shapes.*;
import javabullet.demos.opengl.*;
import javabullet.dynamics.*;
import javabullet.dynamics.constraintsolver.ConstraintSolver;
import javabullet.dynamics.constraintsolver.SequentialImpulseConstraintSolver;
import javabullet.linearmath.*;

import javax.vecmath.Vector3f;
import org.lwjgl.LWJGLException;

public class Basic2Demo extends DemoApplication
{
	RigidBody body2 = null;

	private static final int MAX_SHAPE_SIZE = 1;

	public Basic2Demo(IGL gl)
	{
		super(gl);
		collisionShapes = new ArrayList();
	}

	public void clientMoveAndDisplay()
	{
		// trace position and linear velocity of the single box
		if(body2 != null)
		{
			System.out.println("# box: x=" + body2.getCenterOfMassPosition().x + " y=" + body2.getCenterOfMassPosition().y + " z=" + body2.getCenterOfMassPosition().z + " linear velocity=" + body2.getLinearVelocity());
		}

		float ms = clock.getTimeMicroseconds();
		if(dynamicsWorld != null)
			dynamicsWorld.stepSimulation(ms / 1000000F);
	}

	public void displayCallback()
	{
		// body partially lost in the original post; the stock demo redraws the world here
		if(dynamicsWorld != null)
			dynamicsWorld.debugDrawWorld();
	}

	public void initPhysics()
	{
		collisionConfiguration = new DefaultCollisionConfiguration();
		dispatcher = new CollisionDispatcher(collisionConfiguration);
		//Vector3f worldAabbMin = new Vector3f(-10000F, -10000F, -10000F);
		//Vector3f worldAabbMax = new Vector3f(10000F, 10000F, 10000F);
		overlappingPairCache = new SimpleBroadphase(MAX_PROXIES);
		SequentialImpulseConstraintSolver sol = new SequentialImpulseConstraintSolver();
		solver = sol;
		dynamicsWorld = new DiscreteDynamicsWorld(dispatcher, overlappingPairCache, solver, collisionConfiguration);
		dynamicsWorld.setGravity(new Vector3f(0.0F, -10F, 0.0F));
		CollisionShape groundShape = new StaticPlaneShape(new Vector3f(0.0F, 1.0F, 0.0F), 50F);
		Transform groundTransform = new Transform();
		groundTransform.setIdentity();
		groundTransform.origin.set(0.0F, -56F, 0.0F);
		float mass = 0.0F;
		boolean isDynamic = mass != 0.0F;
		Vector3f localInertia = new Vector3f(0.0F, 0.0F, 0.0F);
		if(isDynamic)
			groundShape.calculateLocalInertia(mass, localInertia);
		DefaultMotionState myMotionState = new DefaultMotionState(groundTransform);
		RigidBodyConstructionInfo rbInfo = new RigidBodyConstructionInfo(mass, myMotionState, groundShape, localInertia);
		RigidBody body = new RigidBody(rbInfo);
		dynamicsWorld.addRigidBody(body);
		CollisionShape colShape = new BoxShape(new Vector3f(1.0F, 1.0F, 1.0F));
		//CollisionShape colShape = new SphereShape(30F);
		//CollisionShape colShape = new CylinderShape(new Vector3f(1.0F, 1.0F, 1.0F));
		Transform startTransform = new Transform();
		startTransform.setIdentity();
		float mass2 = 1.0F;
		boolean isDynamic2 = mass2 != 0.0F;
		Vector3f localInertia2 = new Vector3f(0.0F, 0.0F, 0.0F);
		if(isDynamic2)
			colShape.calculateLocalInertia(mass2, localInertia2);
		float start_x = -7F;
		float start_y = -5F;
		float start_z = -5F;
		// the loop bounds below were lost to HTML escaping in the post; with
		// MAX_SHAPE_SIZE = 1 the nested loops create the single cube
		for(int k = 0; k < MAX_SHAPE_SIZE; k++)
		{
			for(int i = 0; i < MAX_SHAPE_SIZE; i++)
			{
				for(int j = 0; j < MAX_SHAPE_SIZE; j++)
				{
					startTransform.origin.set(2.0F * (float)i + start_x, 2.0F * (float)k + start_y, 2.0F * (float)j + start_z);
					DefaultMotionState myMotionState2 = new DefaultMotionState(startTransform);
					RigidBodyConstructionInfo rbInf2 = new RigidBodyConstructionInfo(mass2, myMotionState2, colShape, localInertia2);
					body2 = new RigidBody(rbInf2);
					dynamicsWorld.addRigidBody(body2);
				}
			}
		}
	}

	public static void main(String args[])
		throws LWJGLException
	{
		Basic2Demo ccdDemo = new Basic2Demo(LWJGL.getGL());
		ccdDemo.initPhysics(); // restored from the stock demo: the world must exist before use
		ccdDemo.getDynamicsWorld().setDebugDrawer(new GLDebugDrawer(LWJGL.getGL()));
		LWJGL.main(args, 800, 600, "Bullet Physics Demo.", ccdDemo);
	}

	private static final int ARRAY_SIZE_X = 5;
	private static final int ARRAY_SIZE_Y = 5;
	private static final int ARRAY_SIZE_Z = 5;
	private static final int MAX_PROXIES = 1149;
	private static final int START_POS_X = -5;
	private static final int START_POS_Y = -5;
	private static final int START_POS_Z = -3;
	private List collisionShapes;
	private BroadphaseInterface overlappingPairCache;
	private CollisionDispatcher dispatcher;
	private ConstraintSolver solver;
	private DefaultCollisionConfiguration collisionConfiguration;
}

OK, a HORRIBLE NIGHTLY BUILT FIX, but doing so Doc Mud has his numbers in the meanwhile:

numbers are like these:

# box: x=-5.2613173 y=-5.000006 z=-5.6992044 linear velocity=(7.113341, 2.5081635E-4, -1.8705462)

# box: x=-5.1365914 y=-5.0000033 z=-5.7320027 linear velocity=(7.483558, 1.6021729E-4, -1.9678993)

# box: x=-5.0183263 y=-5.0000024 z=-5.7619495 linear velocity=(7.095896, 6.771088E-5, -1.7968078)

# box: x=-4.905269 y=-5.0000014 z=-5.78896 linear velocity=(6.783436, 6.9618225E-5, -1.6206144)

# box: x=-4.726717 y=-5.211472 z=-5.7739735 linear velocity=(10.713142, -12.6882305, 0.8991995)

# box: x=-4.6235833 y=-5.126884 z=-5.765317 linear velocity=(6.1880307, 5.0752907, 0.5193877)

# box: x=-4.511769 y=-5.0761304 z=-5.755932 linear velocity=(6.7088833, 3.04521, 0.5631053)

# box: x=-4.3964 y=-5.045679 z=-5.7462482 linear velocity=(6.9221334, 1.8270884, 0.5810045)

# box: x=-4.3964 y=-5.045679 z=-5.7462482 linear velocity=(6.9221334, 1.8270884, 0.5810045)

This is an easy, funny and fast way to produce numbers to experiment with a new intelligence prototype for the swarm-i high level logic.

Sunday, April 06, 2008

Graphical generation of test data for the system

Doc Mud (thanks for the link to the great jbullet toy),

to generate test numbers to let you play with the high level (this.)system intelligence design, can we use povray ? No, he suggested blender3d and jbullet. This allows us to use a physics simulation library, going through a software design of the sample. The sample will run, producing the numbers for the tests. The result should be: we will design the cube and add its physics features, for example a cube falling down, then capture the numbers we need. Ah ? :)

I don't know yet if it is feasible (too late tonight to test it, and tomorrow is another bloody monday) but, doing so, while you move your cube here, running in the same jvm of the "system", the position of the cube is sent to the dummy devices in (real) time; these will post back to the virtual world the values read, and transform what you do here on the GUI into composable information units !! Then it will be possible to apply different algorithms to collate composable information units and have some experiment scenarios. A good lab to play in.

PS: try to run the demos. They are very interesting.

Sunday, March 30, 2008

Data samples to infer high level abstraction logic ?

Doc Mud, who is a great reader of intelligence design patterns (neural networks, expert systems... this, that...), asked me for some data samples to test one of his ideas. My first answer was: as we now have the first level of hardware abstraction, let's use the written part of the system to get data, using the dummy devices added in the "lab" subpackage. Well, I tried, and I found a reason to smile. Nature, physics, has its logic: it is as if it had a rule to produce numbers according to hidden perfect rules, able to take all the aspects into consideration. In order to have the system produce data samples that are valid for any game (DocMud's one is today's), the data must have a rule behind them (dummy devices should contain this logic), just like nature likes to do. Now, either I try to calculate some numbers using a simple principle each time, to give data to DocMud (for example some numbers that are coordinates belonging to a geometric locus), or I can put a number generator inside the system, that allows generating different numbers every time, but always with a sense behind them, changing the rule. The sense is the rule (all numbers are coordinates in a bidimensional space, for example). In this way it could be possible to use the system to read data generated by the dummy devices. Dummy devices will have to generate numbers with a rule (the coordinates) defined by a virtual space, that has constraints and principles like physics has in the real world. Of course, in this virtual space abstractor, rules will be very simple and easy to change. This could be a part of the laboratory.

Example: with the virtual space simulation it should be possible to define a cube, or a face of a cube, and ask for some coordinates that fit the rule of belonging to the cube surface, or to one cube face, or... or...

This could complete the lab environment.
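A rule-driven dummy device of this kind fits in a handful of lines; a minimal sketch, with hypothetical names, where the hidden rule is "all points lie on the y = 0 face of a unit cube":

```java
import java.util.Random;

// Sketch of a dummy device whose readings always obey a hidden rule, so a
// consumer can try to infer the rule from the number stream. The rule here:
// every sample is a point on the y = 0 face of the unit cube.
final class CubeFaceDummyDevice {
    private final Random random = new Random();

    // Returns a coordinate lying on the cube face y = 0, with 0 <= x, z <= 1.
    double[] read() {
        return new double[] { random.nextDouble(), 0.0, random.nextDouble() };
    }
}
```

Swapping the rule (another face, a sphere surface, a line) would mean swapping only the body of `read()`, which is exactly the "changing the rule" idea above.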

Doc Mud, your opinion please :)

Monday, March 24, 2008

Boxing infos and what will be "swarm-ed"

Propositions and review:

what if the unit of the swarm is a behaviour of the system at a "t" instant ? What is a system behaviour ? A system behaviour is the collection of all the inputs and outputs performed by the system at that "t" time. Swarming should help in deciding what behaviours can be associated together to bring the system to success. What is system success ? System success is bigger when the system is closer to the fulfillment of its needs. How could we define needs ? They are just simple master-rules, or "perfect pre-built" behavioural "trends" for the system itself. Now, for the code, it is time to go towards data boxing, to bring to the upper logic subsystem the information to compute in a non linear way.

Saturday, March 15, 2008

Info unit flattening: heterogeneity and encapsulated specificity ?

Is it a good idea to flatten the complexity of info units generated by different sources (device aliases and device alias ports) into the common paradigm of a single representation model ? Some info units could hide encapsulated complexity:

a camera sensor could produce numbers, as pixels, or the whole image could be an info unit. In this case the info unit could be, for example, the context, the meaning of the whole image, while pixels as info units (1 pixel as 1 info unit) could be useless. This is what I mean when I use the term "encapsulated complexity".

New implementation diagrams

Jconsole view of the jmx implementation

Activity diagram

Class diagram

Collaboration diagram

Component diagram

Friday, March 14, 2008

Recap and status update

Up to now, a small result is the comprehension of the path followed to perform the first analysis part: it is a small gain, but interesting.

First: define the distance between reality and the intelligence that has to understand reality

Second: provide abstraction for the level of complexity the system will digest

Third: design the "gnoseologia" (gnoseology, that is, the theory of knowledge)

Fourth: define subsystems; the low level layers abstracting the world and delivering infos to the upper intelligence subsystem are essential, and must be suitable for the target of the project in terms of functionalities and non functional features.

Note: the system design tries to normalize the data received through the hardware abstraction, acting also like a normalized message router. This is because common units can be manipulated when they are all flattened down to the same type of object (for the code, it will be the same interface).
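That flattening to one interface can be sketched like this; the interface and class names are my assumptions, not the project's real API:

```java
// Sketch of the "flatten everything to one type" idea: heterogeneous device
// readings normalized to a single info-unit interface.
interface InfoUnit {
    String sourceAlias();   // which device alias produced the reading
    long captureTime();     // the "t" instant of the shot
    Object value();         // the normalized payload
}

final class SensorReading implements InfoUnit {
    private final String alias;
    private final long time;
    private final Object value;

    SensorReading(String alias, long time, Object value) {
        this.alias = alias;
        this.time = time;
        this.value = value;
    }

    public String sourceAlias() { return alias; }
    public long captureTime() { return time; }
    public Object value() { return value; }
}
```

A router working only against `InfoUnit` never needs to know whether the payload came from a bumper, a camera or a dummy lab device.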

Status of coding: as suggested by doc mud, the two observer/observable patterns have been replaced by a jmx implementation. The parts still missing are:

From the virtual device alias ports, data must be rendered as generalized abstracted information units.

The intelligent agent, that will act like the class provided for this purpose, in order to have a sample:

Sunday, March 02, 2008

LDAP as a jndi provider: this allows an easy view and management of the configuration, in a centralized way. Easy to view, easy to understand

I'm going to post now two views of the jndi repository tree, keeping the software identities that are part of the system hardware abstraction layer. The code on the google site is now mature up to this point.

UML diagram of the hardware abstraction layer, up to now and nightly built

Uml for the hal subsystem

Monday, February 25, 2008

New hypothesis

Doc Mud,

this peripatetic way of approaching a discussion about a theory is easy, and allows fixing ideas before they go away. Listen to this approach.

Let's assume we have shots of the system at times t0, t1, t2, tn. All the information units captured at times t0, t1, t2, tn are chained by a common factor, the capture time. So we would have:

chain of units captured at t0, let's call it Ct0
chain of units captured at t1, let's call it Ct1
chain of units captured at t2, let's call it Ct2
chain of units captured at tn, let's call it Ctn

every chain is an entity. Now, apart from the lots of experiences collected by every individual, what makes an actual "behaviour" for an entity is a "trend" in acting. Let's assume that a system collects pieces of experience as our theory says, scanning the system at times t0, t1, t2, tn: until there is no correlation between these small experiences, there is no behaviour. Now: what happens if the system tries to link together Ct0, Ct1, Ct2, Ctn in several empiric ways, where only successful chains of chains are kept ? Assume that a chain is not only made of values read from virtualized sensors, but also of values written on virtualized actuators (why should a shot be made up of read values only ?). Let's assume that the system core has a population of relations between t-chains. If this problem is migrated into a competitive genetic context, only the links that are successful will survive.

Note: an info unit is an abstraction of an INPUT or OUTPUT agnostic piece of information, part of ONE experience the system had at a t time. The knowledge shape (as we imagine a knowledge base as a multi dimensional space) is made up of relations. Relations constitute the population of a genetic computational model. The fitness function is empiric, as it is just a calculation of the success of Ct0,1,2,n linked in different ways. Direct feedback from the system will help in correcting the links between micro chains (Ct0, Ct1, Ct2, Ctn). This will add information composition, which is relevant in terms of system response quality. There is a way to experiment with this approach and create a perfect model to train and measure the system (take this as a reference model). It could also be built in code. I'm going to describe my idea of the perfect model success, where T-chains linking has been completed successfully:
take a ball, a virtual ball. Put it inside a box, a square box. The ball is the system, covered on its surface with sensors, virtualized abstracted sensors. Move the ball-system applying a small force. The ball will bounce on a surface inside the box where it is contained at time t0. After the bounce at t0, the ball will move in the box and bounce somewhere else in the box, at time t1. This again and again at t2, tn. The ball can read its bumpers and regulate, and also measure, its velocity and direction. Well, if the ball-system is able to correlate the info unit chains captured via shots at t0, t1, t2, tn, at a certain point the ball will stop itself far from every box wall, maybe at the centre of the box volume. A crazy idea but, what happens if the system is applied to this experiment, and the difference (in a metric system, on every coordinate axis) between the perfect centre of the virtual volume and the position of the ball driven by the real system is the measure of the quality of the response ?
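The proposed quality measure is simple to put in code; a sketch, with the class and method names being mine:

```java
// Sketch of the quality measure proposed above: the distance between the
// perfect centre of the virtual box and the ball position driven by the
// system. Zero means a perfect response: the ball at rest at the centre.
final class ResponseQuality {

    static double quality(double[] ballPosition, double[] boxCentre) {
        double sum = 0.0;
        for (int i = 0; i < 3; i++) {
            double d = ballPosition[i] - boxCentre[i]; // per-axis difference
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}
```

This would give the genetic context its empiric fitness: chains of chains whose linking drives the ball closer to the centre score better and survive.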

An idea...

Please doc Mud, since today don't post comments but enter the blog and post directly as it is too difficult to read and write playing with comments.

Welcome doc Mud.

Godzilla :]

Info linking architecture design slowing down

There is a design problem with the info units and the knowledge base structured space. Because of this, it is necessary to review the design of the patterns that build the knowledge base via info units.

In the meanwhile, the low level tiers of the code are going on, new classes are going to be added, and development will stop once the code reaches the point of being able to provide the simulation labs. This is because the abstraction is decoupled from any other component.

Remember, code is growing at:

code on google svn

Thursday, February 21, 2008

Logical system architecture nightly build. Please, read me again tomorrow morning.

Thanks to the contribution of Doc Mud, it is now assumed that it is possible to use genetic algorithms to translate info unit chains into "generations". Maybe it will also be possible to decouple the virtualized swarm made of info units from the visitor that will explore the units to apply the composition logic. In this way it will be possible to do something very interesting: change the translation analysis context while keeping the same info unit (bees and bee chains) producer. I'll try to explain:

The information producer is always the virtual swarm made up of info units produced by virtual devices.

The information consumer is the logic that uses the swarm of information units to compose the shape of the system knowledge base (called a shape because we like to think of this code object as a 3D space that reshapes every time it is enriched with experience).

By delegating to a proxy between the info producer and the info consumer, the logic used to manipulate the available info units to enrich the knowledge base can be changed without redesigning the info producer. Fewer system parts to rewrite in case of error, more time earned. Moreover, more than one computing model can be applied to the swarm, in order to make comparisons and change the system computing model. It will be necessary to add some more drawings.
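The proxy idea could be sketched like this. All class and method names here are my own invention, not existing project code: the composition logic is a pluggable strategy, so the computing model can be swapped at runtime while the producer stays untouched.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch: a proxy sits between the info-unit producer and the
// knowledge-base consumer, so the linking/composition logic can be swapped
// without touching the producer.
public class SwarmProxy {

    // The composition logic is a pluggable strategy: it turns the raw
    // list of info units into chains for the knowledge base.
    private Function<List<String>, List<String>> compositionLogic;

    SwarmProxy(Function<List<String>, List<String>> logic) {
        this.compositionLogic = logic;
    }

    // Swap the computing model at runtime; the producer is untouched.
    void setCompositionLogic(Function<List<String>, List<String>> logic) {
        this.compositionLogic = logic;
    }

    List<String> digest(List<String> infoUnits) {
        return compositionLogic.apply(infoUnits);
    }

    public static void main(String[] args) {
        List<String> units = List.of("a", "b", "c");
        // First model: pass the units through unchanged.
        SwarmProxy proxy = new SwarmProxy(u -> new ArrayList<>(u));
        System.out.println(proxy.digest(units)); // [a, b, c]
        // Second model: chain all units into one composite, same producer.
        proxy.setCompositionLogic(u -> List.of(String.join("-", u)));
        System.out.println(proxy.digest(units)); // [a-b-c]
    }
}
```

This is essentially a strategy pattern behind a proxy: comparing two computing models then amounts to calling `digest` twice with different logics on the same shot of info units.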

Recap on the fly:

Physical devices are abstracted by the virtual world, via an HAL implementation.
Every real device is forwarded inside the reality of the virtual world via a virtual device alias, to decouple, generalize, and allow testing and data mutation/injection.

Every device alias has n channels, one for each port it has on the physical device it is mapping. Every device alias is registered in a suitable logic repository.

Every channel is registered in a repository, much as happens for virtual devices.

At a time t, the system reads all the channels that are INPUT capable, as all the channels observe the same observer-notifier.

This, via the channel repository, will cause all the virtual device channels to wake up, read (if they can) and communicate a generalized, channel- and device-independent DTO, carrying the value found and the read metadata.

All these info-DTOs together form a shot of the system.

Every information unit generated as part of a shot will be activated, becoming a bee of a virtual swarm in a virtual beehive.

When an information unit DTO is activated (let's say because it implements a Runnable paradigm as well), it becomes a bee too.

Polymorphism on the info unit makes it possible for an info unit to also be a thread worker, and when activated it is a bee.

This is the information producer subsystem.

A proxy for the bee swarm computing logic acts on the bees, creating info units/bees and hiding the algorithm that builds a polymer of information units/bees, which causes the system knowledge base to reshape. The system knowledge base is enriched by a hidden logic working with the virtualized swarm.
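The producer side of the recap above can be sketched in a few lines. Everything here is an illustrative assumption (class names, the string format of the hive entries): a channel read produces an InfoUnit DTO carrying value and metadata, and because the DTO also implements Runnable, activating it turns it into a "bee" that enters the virtual beehive.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Minimal sketch of the information producer subsystem described in the
// recap (names are my own): every channel read yields an InfoUnit DTO;
// the DTO also implements Runnable, so when activated it becomes a bee.
public class ShotDemo {

    // Generalized, device-independent DTO carrying value + read metadata.
    static class InfoUnit implements Runnable {
        final String device;
        final int channel;
        final double value;
        final long shotTime;

        InfoUnit(String device, int channel, double value, long shotTime) {
            this.device = device;
            this.channel = channel;
            this.value = value;
            this.shotTime = shotTime;
        }

        // Thread-safe beehive shared by all bees.
        static final List<String> HIVE = new CopyOnWriteArrayList<>();

        // When run, the DTO "wakes up" as a bee and enters the beehive.
        public void run() {
            HIVE.add(device + "#" + channel + "=" + value);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long t0 = 0L; // one shot: all units share the same shot time
        List<InfoUnit> shot = List.of(
            new InfoUnit("thermo", 0, 21.5, t0),
            new InfoUnit("light", 1, 0.8, t0));

        // Activate every unit of the shot as a bee thread.
        List<Thread> bees = new ArrayList<>();
        for (InfoUnit u : shot) { Thread t = new Thread(u); t.start(); bees.add(t); }
        for (Thread t : bees) t.join();

        System.out.println(InfoUnit.HIVE.size()); // prints 2
    }
}
```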

Good night.

Tuesday, February 19, 2008

Analysis via context migration, to define the way composable information units build a structured information space

Ok doc Mud, let's adopt a problem-context-migration approach to problem solving. Let's adopt a competitive space for genetic evolution. It can be important to define some minor points.

Information units are bees of a virtual swarm: [info unit] -> [DTO for data linking and association] -> [runnable DTO ?] -> [bee]

All the information units created in a shot make a chain, and this is a generation.

The fact that in a shot all the information units (-> bees) have been created at the same shot time "t" is extremely relevant. This says that the approach of capturing timed shots is coherent with the rest of the theory.

Let's go and see your post, doc Mud, but the fusion of genetic programming with a virtualized swarm computing model needs more care.

DTO domains separation

Change problem context, moving it (the problem) in a more comfortable one (context)

The challenge is to try to imagine the software problem of information unit composition/chaining as "chemical bonds", as links between composable info units. It may or may not be a way to find a solution, but it is such a strange and interesting analysis approach, why not try? Can a person extract a problem from its original context, forget that context and translate it into another, different context, to have a different and more familiar (for our brains) point of view?

Composable info unit composition: a message from doctor Mud needs a response

I saw that comment, doctor Mud :] (Vincenzo), and that's the most interesting issue about phase 2 of info processing. Let's assume it is feasible to abstract virtual device aliases, and to let them produce generalized information units. Then, let's consider this type of DTO, where pieces of info move through the pattern of the INFO THREE STATES. To solve this very "delicious" problem, I'd like to suggest a new approach to designing software; this is a game, so it is possible to say silly things. This approach is based on the idea that every software design problem, and every software entity participating in a component model, can be seen as a physical virtualized world, where it is possible, for software agents and software design problems, to apply physical laws and a 3D spatial model. I hope this will make it easier to view the entities and the problems. In our case: let's imagine the information units as spheres, floating in a 3D space, where a sort of Coulomb law is valid for these "balls". I want to suggest not proceeding with a static, rule-based linking approach. Let's suppose these balls are able to manifest spontaneous attraction, according to their intrinsic nature, a nature defined by the information features themselves. It is like having magnets with more than the usual two poles, suspended in a gel. Is it possible to imagine a law that defines spontaneous chaining by these information units? Doctor Mud? Do you know that strange vegetable that produces green balls with small hooks? Look at the burdock (Bardana) plant.
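One possible shape for such a law, as a playful sketch: the affinity formula and the threshold below are invented for illustration, not part of the theory. Each info unit is a ball with a small feature vector; attraction falls off with feature distance, and two balls link spontaneously when their attraction passes a threshold.

```java
// Playful sketch of the "spontaneous attraction" idea: a Coulomb-like law
// where attraction = 1 / (1 + squared feature distance), so identical units
// attract with strength 1 and unrelated ones with strength near 0.
public class AffinityDemo {

    static double attraction(double[] a, double[] b) {
        double d2 = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            d2 += d * d;
        }
        return 1.0 / (1.0 + d2);
    }

    // Two info units chain spontaneously when attraction is strong enough.
    static boolean spontaneousLink(double[] a, double[] b, double threshold) {
        return attraction(a, b) >= threshold;
    }

    public static void main(String[] args) {
        double[] u1 = {1.0, 0.0};
        double[] u2 = {1.0, 0.5};  // close in feature space
        double[] u3 = {9.0, 9.0};  // far away
        System.out.println(spontaneousLink(u1, u2, 0.5)); // true
        System.out.println(spontaneousLink(u1, u3, 0.5)); // false
    }
}
```

The point of the sketch is exactly the one made above: no static rule table decides the links; the chaining emerges from the intrinsic features of the units themselves.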

As I hate cold numbers, let's proceed with a pure qualitative approach, and extract the numbers at the end, from the running prototype, just to evaluate the prototype's success and quality.
It is relaxing to do software analysis this way; it seems to be the "take it easy development model" :)

Sunday, February 17, 2008

Code is here, growing slowly with theory

A simple shopping list to build this theory-compliant non-linear system:

  • The system will be made up of a virtual system able to generalize and reduce the order of magnitude of the information coming from the environment where the system has been put. The virtual system is important because the target of this blog and game is to obtain a system (even a simple one) running as a proof of concept. This could also become an evolutionary prototype in a reiterative incremental development model, if the system turns out to be interesting.
  • Actuators will be abstracted in the same way, via virtual devices, speaking from a virtualized simplified reality towards the external real world. The intent of the virtual world is to generalize and simplify, abstracting and decoupling system tiers.
  • A hardware abstraction layer able to generalize a generic device interface and the I/O calls to the underlying hardware layer. An inventory object will implement a visitor pattern and load all the interfaces/device abstractors.
  • Every virtualized sensor/actuator will produce an abstracted, basic, simple unit of information. This will be a sort of DTO able to carry the value received/transmitted, the type of operation (I or O), and the name of the virtualized device that generated or processed the data. Every device has one or more communication channels, because every virtual device (or device mapper, or device alias) can have more than one access to hardware resources.
  • The system tries to use swarm computing, but the swarm will be made up of software bees, in a virtual beehive. Every bee of this virtual swarm will be the previously described basic unit of information. This DTO will be an anomalous DTO pattern implementation, since it will also be a worker for a dedicated thread. The DTO of the piece of information built by a virtual device will run autonomously to link with other information units, building chains of information units. Chains can be linked together, to obtain longer chains. This will be the virtual swarm computing model for this system theory.
  • With a given frequency, the state of all the virtual devices mapped on the virtual world will be captured in a sort of virtual devices shot.
  • This will be a collection of units of information, each running as a stand-alone thread and building link-correlations with other information units and with other chains of information units.
  • Let's call the group of information units captured in a cycle of reading the virtual devices' status simply a "shot".
This first shape of the state of the system is the status of the system at a moment "t", and it is also the current system status to which the system itself must provide a response/resolution. This system status will then be digested by the virtual swarm and made part of the knowledge base of the system.

The system has three states with three different names:

1 Shot of the system. At a certain time, the status of all the virtual devices is captured and some information units are created. This is the system shot. This is the question the system must provide an answer to, before it becomes part of the system experience.

2 The system shot is digested and the "n" DTOs composing this shot (DTOs are composable information units) are animated as threads. These threads are all bees of the virtual swarm that live in the virtual beehive.

3 The shot has been digested and more and more links are built, in terms of associations with other information units and chains of information units. This will cause longer information unit chains to be built.
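The three states above can be rendered as a tiny state machine. The enum and the transition rule are my own sketch of the post, not existing project code: an info unit only ever advances along shot, then bee, then chained.

```java
// The INFO THREE STATES sketched as a minimal state machine (illustrative).
public class InfoStates {

    enum State { SHOT, SWARM_BEE, CHAINED }

    // An info unit may only advance along SHOT -> SWARM_BEE -> CHAINED.
    static State next(State s) {
        switch (s) {
            case SHOT:      return State.SWARM_BEE; // DTOs animated as bee threads
            case SWARM_BEE: return State.CHAINED;   // links/associations built
            default:        return State.CHAINED;   // chains can only grow longer
        }
    }

    public static void main(String[] args) {
        State s = State.SHOT;
        s = next(s);          // shot digested: the DTO is now a bee
        s = next(s);          // links built: the bee is part of a chain
        System.out.println(s); // prints CHAINED
    }
}
```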

Code is growing (slowly) here:

Non functional requirement notes

Before proceeding to the code, let's define some non-functional requirement notes. Usually intelligent systems need to perform real-time processing in a very fast "pseudo real time" way. It can be interesting to change this approach, but first of all it is fundamental to define the parameters to measure the quality of a non-linear system's response. If the non-linear pseudo-intelligent system is seen as a black box, let's try to point out the quality parameters for the results produced by this box. Some eligible, not "exotic" parameters for system response quality can be:

1 Time requested to produce an output, after the system has successfully received, validated and accepted the input data.

2 Assuming an IDEAL perfect answer for the input given at point 1, it is possible to define the difference between the perfect IDEAL answer and the REAL answer produced by the real system. The difference can be evaluated for simple systems whose values can be obtained in any metric reference system.

Let's assume that there is a functional dependency between the time taken to produce the answer and the answer quality:

Response Quality = f(time taken to produce the response)

As the target of this experiment is to obtain a sort of answer, let's slow down the system.

As there are two parameters defining the system response, it could be interesting to choose parameter 1 as the quality parameter to relax. Let's state: it is possible to have a system that takes some time to produce a response.

Because the system works with an information correlation technology (building a virtual swarm of info elements that need correlation), let's relax the response time and give the info correlation phase time to build deeper links. Let's analyse now, for a while, what happens when the time used to perform information correlation is limited. With less time to scan composable information unit chains, the links built will be mainly "surface", immediate links; let's say simpler links. Giving more time to information unit/information chain correlation, the links produced will be "deeper", more numerous, more complex. This is interesting as it gives a view of the meaning of the generally valid function:

Response quality = f(Time taken to produce the response)

expressed in the context of this real system engaged in the experiment:

if we state that:

(Time taken to produce the response) = (Number and depth of information unit links and associations)

then, accepting a response time increase, we should observe an increase in the response quality. This is a very simple and logical assumption that doesn't need a blog to be accepted. It is now simple to catch sight of another important parameter, which could be more relevant than the other two. If this "f" correlation (or assumption) is correct, it means that demanding good system responses in short times is not only unreasonable (at least for the system drawn up to now), but actually not possible at all. Let's introduce a new quality parameter to enrich the metrics of system quality:

3 Number of guaranteed, quality-fixed responses per time unit

This is because, according to the needs of the consumer of the data produced by the system, it could be better to have a constant number of responses per time unit, OR a constant response quality, no matter the response time. This, of course, is a matter of the usage of the system designed. If the assumed relation is correct, it should not be difficult for a system provided with a quality feedback agent to auto-regulate the time spent linking information units and the depth of the links built, in order to handle the oscillations of the two main parameters. Let's assume, so, another function (if hardware capacity is a fixed constant):

Number of guaranteed, quality-fixed responses per time unit = f[(Response quality),(Time taken to produce a response)]

A sort of master quality controller can handle info unit processing depth and response time, acting as a feedback agent to keep parameter 3 (how constant the system's way of answering is) stable. Well, in a certain way, let's keep in mind that sometimes the system could need a sort of "nap" to have a complete, full information linking phase for a quality best-effort answer, no matter the time :)
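A rough sketch of that master quality controller, under invented assumptions (the proportional gain, the update rule, and all names are mine): a feedback loop nudges the time budget granted to the linking phase so that parameter 3, responses per time unit, stays near a target.

```java
// Illustrative sketch of the "master quality controller": a proportional
// feedback loop on the linking-time budget, to stabilize responses/time unit.
public class ThroughputController {

    double linkingBudgetMs;   // time granted to the info-unit linking phase
    final double gain;        // proportional gain of the feedback loop

    ThroughputController(double initialBudgetMs, double gain) {
        this.linkingBudgetMs = initialBudgetMs;
        this.gain = gain;
    }

    // Called after each batch: if we answered fewer times than the target,
    // shrink the linking budget (shallower links, faster answers); if we
    // answered more, grow it (deeper links, better quality).
    double adjust(double responsesPerUnit, double targetPerUnit) {
        double error = responsesPerUnit - targetPerUnit;
        linkingBudgetMs = Math.max(1.0, linkingBudgetMs + gain * error);
        return linkingBudgetMs;
    }

    public static void main(String[] args) {
        ThroughputController c = new ThroughputController(100.0, 10.0);
        System.out.println(c.adjust(8.0, 10.0));  // too slow: prints 80.0
        System.out.println(c.adjust(12.0, 10.0)); // too fast: prints 100.0
    }
}
```

Letting the budget grow without bound is exactly the "nap": when nobody is asking for throughput, the controller hands the linking phase all the time it wants.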

Tuesday, January 29, 2008

System architecture proposal

Here follows my latest updated idea about the system architecture; this is the fruit of my night-time meditations: