Tuesday, January 20, 2009

Answer is not a secret

As stated before, the answer to that question is in:

http://en.wikipedia.org/wiki/Natural_selection

Can this be applied to a population forming a swarm of composable information units? Ok, but then what about the linking between units: since they are composable, how does that fit with the natural selection algorithm?

The problem is here again, in the two main features:

  1. composability
  2. a population subject to natural selection

Is one of these two wrong ?
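One possible reconciliation (a sketch of my own; the unit functions, the fitness rule and all numbers are invented purely for illustration) is that natural selection acts not on single units but on their compositions, so the linking itself is what gets selected:

```python
import random

# A pool of tiny composable units: each maps an int to an int.
UNITS = [
    lambda x: x + 1,
    lambda x: x * 2,
    lambda x: x - 3,
    lambda x: x,
]

def compose(chain, x):
    """Run a value through a chain of linked units."""
    for unit in chain:
        x = unit(x)
    return x

def fitness(chain):
    """Toy fitness: how close the chain maps 1 to the target 10."""
    return -abs(compose(chain, 1) - 10)

random.seed(0)
# Start with a random population of compositions (the "swarm").
population = [[random.choice(UNITS) for _ in range(4)] for _ in range(30)]

for generation in range(20):
    # Natural selection: the weakest compositions are abandoned...
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # ...and the strongest are recombined: the links between units change,
    # while the units themselves stay untouched.
    population = survivors + [
        random.choice(survivors)[:2] + random.choice(survivors)[2:]
        for _ in range(20)
    ]

best = max(population, key=fitness)
print(fitness(best))
```

In this reading the two features do not conflict: composability defines the individuals, and the population of linked chains is what natural selection works on.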

Monday, January 19, 2009

"Rationale" of the swarm common consciousness

At http://en.wikipedia.org/wiki/Natural_selection I can read about the rules of natural selection, and it is clear that, looking for a rational cause of the swarm consciousness, one finds that most of the units composing the swarm will act in a certain way. This builds "the trend" in the swarm behaviour. Now, let's change the previous question into this one: why do most of the units act in that correct way, the way that is positive in the balance of the swarm, whose intent is to survive and perfect the life of the swarm itself? Why do most of the units behave in "that" way? Back to the unit: if it is the unit that somehow chooses how to behave, and most of them choose the correct way, how does it choose?

Sunday, January 18, 2009

Natural selection of better software units in the composable information unit theory

It is not so common to have to think about natural selection theory, unless you are stuck in traffic. Yes. It is not something complex, but it is interesting for me, so I take note here in my composable information study journal, to avoid forgetting. In a community the weakest unit dies to give the community a better chance to survive (the community makes it die). The selection is performed by the swarm itself, which is like saying that the swarm makes choices. The swarm makes choices? So the swarm has a conscience? Let me correct that: so the swarm has an intelligence of its own? Who gave it to the swarm? If no one gave the swarm any intelligence, is this a sort of side effect of the single intelligences of the units that compose the group? Is there then a trend of all the units to go in one "direction", in that way instead of a different one? Moreover, according to natural selection, are the weakest units selected to be abandoned because they are not productive for the whole swarm? So the swarm knows which unit is weak, and therefore knows about every unit? It really seems to me that the swarm projects over itself a super entity that is an emanation of all the units. It is as if the behaviour of all the units creates its own GOD, in a trend put in place by the fact that every unit's main intent is to preserve its life, through the existence of the species it belongs to.
I see incredible scenarios in the possible social aspects of natural selection theory applied to composable information units, a derivation of swarm computing theory. Is the super entity the result of driving the chaos of the units' behaviour behind the common, single-unit intent of survival? What can a swarm of much more complex entities produce (for example, units having social and ethical rules) when it projects the common super entity that will perform the natural selection of the strongest units? Yes, this is already part of genetic programming, but what happens if the code tries to distil the creation of a swarm conscience in the context of the composable information unit theory? The intent in this context, then, is to use this collateral, derived intelligence to drive the system equipped with this software. I need to go read about natural selection and Darwin, about topics like the random mutations created by "nature" to produce better units.
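The idea that the "super entity" is only a side effect of local behaviour can be made concrete with a toy simulation. This is a sketch under rules I invented for illustration (strength values, the 0.05 decrement, the death threshold are all arbitrary): no line of the code holds a swarm-level intelligence, and no part of it sees all the units at once, yet from outside the population seems to "choose" to abandon its weakest members:

```python
import random

random.seed(1)

# Each unit only knows its own strength; nothing in the program ranks them all.
units = [random.random() for _ in range(100)]

for step in range(500):
    if len(units) < 2:
        break
    # Local rule: a unit meets one random neighbour; the weaker of the two
    # loses a little strength (resources flow to the stronger).
    a, b = random.sample(range(len(units)), 2)
    weak = a if units[a] < units[b] else b
    units[weak] -= 0.05
    # A unit at or below zero "dies"; again a purely local event.
    units = [u for u in units if u > 0]

# The "trend" is only visible from outside: survivors and their mean strength.
print(len(units), sum(units) / len(units))
```

No unit ever intends to purge the weak, yet the swarm-level statistic behaves as if a selecting entity were at work.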

Monday, January 05, 2009

A particular learning curve: using inputs and outputs while being unaware of what they are intended for

Let's play a game: a person is closed in the cockpit of an unknown vehicle type, with no labels on the leds, no labels on the displays, and no labels on the buttons and control levers either. The person inside can only see that when values change on the displays, and leds turn on/off, "good" or "bad" things happen. The person in the cockpit has no driving expertise for the mysterious vehicle, but starts associating things to do, and values to keep under control, with the correct actions to keep the vehicle running, preserving the vehicle. Maybe the driver will never associate the correct meaning to each display or button. Simply, the driver can drive the vehicle without being aware of what the controls are, just by associating commands with useful or useless consequences. This seems important: the driver associates values read with commands given on buttons and levers, creating command/value-read associations in "paths" refined with experience. More experience will lead the driver to create "named paths", that is, specific paths that allow the driver to do "macro" operations on the vehicle. More macros combined together give "higher" macros. Here there is a new result: until the driver learns all the paths (combinations of control usage and values to read from the displays), the driver goes through an important learning curve that is essential. It is a sort of consciousness raising for the driver; without it the driver would lead the vehicle to crash somewhere.
What happens if the vehicle is compared to a software application, where displays and leds are virtual input devices and buttons and levers are virtual output devices? Now new elements are needed to actually compare the system with the driver of the mysterious vehicle:

1 The concepts of "good" and "bad" = "advantageous" and "disadvantageous" for the system: the rules
2 Rules to associate "advantageous" and "disadvantageous" with display reads and button/lever activations
3 Rules for associating reads and command activations together in "paths"
4 Rules to update the paths, refining them as system experience increases, to improve the associations

We fall again into the concept of the composable info unit but, this time, there is big news: the system can absorb the controls available to it, without the need to define what the displays and the buttons to push are. There is an initial learning curve that transforms an unaware system into an aware one. Aware of unnamed inputs and actuators.
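The four rules above can be sketched as a toy program. Everything in it is invented for illustration (the hidden vehicle dynamics, the "good" band around the value 5, the exploration rate): the point is only that the driver builds a table associating unnamed readings with unnamed buttons through "good"/"bad" outcomes, without ever knowing what the controls mean:

```python
import random

random.seed(0)

ACTIONS = [0, 1]   # two unlabeled buttons
value_of = {}      # rules 2+3: association table (reading, button) -> score

def hidden_vehicle(state, action):
    """Unknown dynamics: button 0 pushes the value up, button 1 down.
    "Good" (reward +1) means keeping the value between 3 and 7 (rule 1)."""
    state = state + (1 if action == 0 else -1)
    reward = 1 if 3 <= state <= 7 else -1
    return state, reward

state = 5
for step in range(5000):
    reading = state    # what the driver sees, with no label attached
    # Mostly follow the best-known association, sometimes explore.
    if random.random() < 0.1 or (reading, 0) not in value_of:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: value_of.get((reading, a), 0.0))
    state, reward = hidden_vehicle(state, action)
    # Rule 4: refine the association with experience (running average).
    key = (reading, action)
    value_of[key] = value_of.get(key, 0.0) + 0.1 * (reward - value_of.get(key, 0.0))

print(state)
```

After the learning curve, the table keeps the vehicle near the "good" band: the driver stays alive without ever being told what the displays or buttons are for, which is exactly the unaware-to-aware transition described above.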