This peripatetic way of approaching a discussion about a theory is easy and allows us to fix ideas before they slip away. Consider the following approach.
Let's assume we have shots of the system at times t0, t1, t2, tn. All the information units captured at a given time are chained by a common factor: the capture time. So we would have:
chain of units captured at t0, let's call it Ct0
chain of units captured at t1, let's call it Ct1
chain of units captured at t2, let's call it Ct2
chain of units captured at tn, let's call it Ctn
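The chain idea above can be sketched in code. This is a minimal, hypothetical data model (all names are my own assumptions, not part of the theory): an info unit is an I/O-agnostic value, and a chain simply groups every unit captured in the same shot at time t.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InfoUnit:
    kind: str    # "sensor" (value read) or "actuator" (value written)
    name: str
    value: float

@dataclass
class Chain:
    t: int                                  # capture time index (t0, t1, ...)
    units: List[InfoUnit] = field(default_factory=list)

def capture_shot(t, sensor_reads, actuator_writes):
    """Bundle every value observed at time t into one chain Ct."""
    units = [InfoUnit("sensor", n, v) for n, v in sensor_reads.items()]
    units += [InfoUnit("actuator", n, v) for n, v in actuator_writes.items()]
    return Chain(t, units)

# One shot at t0: one sensor read and one actuator write become chain Ct0.
c0 = capture_shot(0, {"bumper": 1.0}, {"motor": 0.5})
print(len(c0.units))
```

Note that the shot deliberately mixes read and written values, anticipating the point made below that a chain should not consist of sensor reads alone.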
Every chain is an entity. Now, apart from the mass of experiences collected by every individual, what makes an actual "behaviour" for an entity is a "trend" in acting. Let's assume that a system collects pieces of experience as our theory says, by scanning the system at times t0, t1, t2, tn; but until there is some correlation between these small experiences, there is no behaviour. Now: what happens if the system tries to link Ct0, Ct1, Ct2, Ctn together in several empirical ways, where only successful chains of chains survive? Assume that a chain is made up not only of values read from virtualized sensors but also of values written to virtualized actuators (why should a shot consist only of read values?). Let's assume that the system core holds a population of relations between t-chains. If this problem is migrated into a competitive genetic context, only the links that are successful will survive.
Note: an info unit is an abstraction of an INPUT- or OUTPUT-agnostic piece of information, part of ONE experience the system had at time t. The knowledge shape (as we imagine a knowledge base as a multi-dimensional space) is made up of relations. Relations constitute the population of a genetic computational model. The fitness function is empirical: it is just a calculation of the success of Ct0, Ct1, Ct2, Ctn linked in different ways. Direct feedback from the system will help in correcting the links between the micro-chains (Ct0, Ct1, Ct2, Ctn). This will add information composition, which is relevant in terms of system response quality. There is a way to experiment with this approach and create a perfect model to train and measure the system against (take this as a reference model). It could also be built in code. I'm going to describe my idea of a perfect-model success, where t-chain linking has been completed successfully:
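The competitive genetic context can be sketched as follows. This is only an illustration under strong simplifying assumptions: here a "link" between t-chains is reduced to an ordering of the chain indices, and the empirical fitness is a stand-in that counts how many positions agree with a direct feedback signal from the system (the chronological order). Only the more successful links survive each generation.

```python
import random

def fitness(link, feedback=tuple(range(5))):
    # Empirical stand-in fitness: positions agreeing with system feedback.
    return sum(a == b for a, b in zip(link, feedback))

def evolve(pop_size=20, generations=200, n=5, seed=0):
    rng = random.Random(seed)
    # Initial population: random orderings of the chains Ct0..Ctn.
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # only successful links survive
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(n), 2)    # mutate: swap two positions
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best)
```

A real implementation would replace the stand-in feedback with the actual system response, and the flat orderings with richer relations between chains, but the selection loop would keep the same shape.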
Take a ball, a virtual ball. Put it inside a box, a square box. The ball is the system, its surface covered with sensors, virtualized abstracted sensors. Move the ball-system by applying a small force. The ball will bounce off a surface inside the box that contains it at time t0. After the bounce at t0, the ball will move through the box and bounce somewhere else, at time t1, and again and again at t2, tn. The ball can read its bumpers and can also measure and regulate its velocity and direction. Well, if the ball-system is able to correlate the info-unit chains captured via shots at t0, t1, t2, tn, at a certain point the ball will stop itself far from every box wall, maybe at the centre of the box volume. Crazy idea, but what happens if the system is applied to this experiment, and the difference (in a metric system, on every coordinate axis) between the perfect centre of the virtual volume and the position of the ball driven by the real system is the measure of the quality of the response?
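The reference model and its quality measure could be built in code roughly like this. Everything here is an assumption for the sake of the sketch: the box is a unit cube, the bounces are ideal reflections, and the response quality is simply the per-axis distance between the ball's position and the exact centre of the volume (zero on every axis would be a perfect response).

```python
BOX = 1.0                                  # assumed: unit cube side length
CENTRE = (BOX / 2, BOX / 2, BOX / 2)

def step(pos, vel, dt=0.01):
    """Advance the ball one tick, bouncing off the six walls."""
    new_pos, new_vel = [], []
    for p, v in zip(pos, vel):
        p += v * dt
        if p < 0.0 or p > BOX:             # hit a wall: reflect the velocity
            v = -v
            p = min(max(p, 0.0), BOX)
        new_pos.append(p)
        new_vel.append(v)
    return new_pos, new_vel

def response_quality(pos):
    """Per-axis error from the centre of the box volume."""
    return [abs(p - c) for p, c in zip(pos, CENTRE)]

# Let the uncontrolled ball bounce for a while, then measure the error.
pos, vel = [0.1, 0.2, 0.3], [1.0, -0.7, 0.4]
for _ in range(1000):
    pos, vel = step(pos, vel)
print(response_quality(pos))
```

In the actual experiment the velocity would not be fixed: it would be driven by the system under test through its virtualized actuators, and `response_quality` would be the training and measurement signal described above.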