AUTONOMOUS SYSTEMS:
Description and Construction

E. von Goldammer, C. Kennedy, J. Paul, H. Lerchner and R. Swik
Institut für Kybernetik & Systemtheorie - ICS
Harpener Hellweg 532, D-44388 Dortmund

and FB Informatik, FH Dortmund

 

Abstract

Adaptive and learning systems with high degrees of autonomy will be discussed from both a mono-contextural and a poly-contextural point of view. Statistical learning algorithms, as well as the new class of adaptive computational models such as neural networks, genetic algorithms, or fuzzy logic, which have recently been termed "soft" logical computation, are categorized in this paper as mono-contextural conceptions. Mono-contextural descriptions are always hierarchically structured, i.e., the triangle inequality as a defining relationship of metricity strictly holds. All input/output systems with (or without) implemented feedback algorithms belong to this category. On the other hand, all models of self-referential (cognitive) processes belong to the class of heterarchically structured descriptions, which lead to logical antinomies and ambiguities if described in a mono-contextural logical framework.

 

1. Introduction

In a conventional sense, control is the capability of a system to present inputs to processes, plants or machines such that only the desired outputs or actions are observed. To illustrate the use of simple classification algorithms, consider the reception of pulse signals against a noisy background: an adaptive receiver must generate a decision rule that classifies the received signal (in which signal and noise are mixed) into two classes in which signal and noise are separated.
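As a concrete illustration (a minimal sketch assumed here, not part of any cited work), such a two-class decision rule can be realized as a matched-filter threshold detector for a known pulse template in additive Gaussian noise; an adaptive receiver would, in addition, estimate the template and/or the threshold from the received data:

    import numpy as np

    def pulse_present(received, template, threshold):
        # Matched-filter decision rule: correlate the received frame with
        # the known pulse template and compare against a threshold. The
        # threshold trades false alarms against missed detections.
        statistic = np.dot(received, template)
        return statistic > threshold   # True -> class 'signal', False -> class 'noise'

    # Toy usage: a rectangular pulse buried in Gaussian noise.
    rng = np.random.default_rng(0)
    template = np.concatenate([np.ones(20), np.zeros(80)])
    frame_signal = template + rng.normal(0.0, 0.5, 100)   # pulse + noise
    frame_noise = rng.normal(0.0, 0.5, 100)               # noise only
    threshold = 0.5 * np.dot(template, template)          # half the pulse energy
    print(pulse_present(frame_signal, template, threshold))  # True (with high probability)
    print(pulse_present(frame_noise, template, threshold))   # False (with high probability)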
The first question that must be asked for a control system of any kind is: what is it for - for what purpose will it be designed? It is usually a dangerous oversimplification to imagine that the answer is a single easily evaluated function which remains constant over time. A real-world control system should be able to respond to changing demands and situations and should be oriented towards satisfying many requirements at once. It should be able to handle complex tasks containing unpredictable events and changing environments within a given context of control; not only must the plant output be maintained within specified limits, but it must also be done cheaply, quickly, and efficiently.

Adaptive control is a branch of control theory in which a controlled system is modeled, typically by means of a set of linear difference or differential equations, some of whose parameters are unknown and must be estimated. For the latter task, fuzzy logic and/or neural nets have become an attractive alternative in modern control theory.
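As a minimal sketch of this parameter-estimation step (an illustration under assumptions made here, not taken from the cited literature), the unknown coefficients of a first-order linear difference equation can be identified on-line by recursive least squares:

    import numpy as np

    def rls_step(theta, P, phi, y, lam=0.99):
        # One recursive-least-squares update for the model y_t = phi_t . theta.
        K = P @ phi / (lam + phi @ P @ phi)    # gain vector
        theta = theta + K * (y - phi @ theta)  # correct the parameter estimate
        P = (P - np.outer(K, phi @ P)) / lam   # update the covariance matrix
        return theta, P

    # Toy plant y_t = a*y_{t-1} + b*u_{t-1}; a and b are unknown to the estimator.
    a_true, b_true = 0.8, 0.5
    rng = np.random.default_rng(1)
    theta, P = np.zeros(2), np.eye(2) * 100.0   # initial guess, large uncertainty
    y_prev, u_prev = 0.0, 0.0
    for t in range(200):
        u = rng.normal()                        # excitation input
        y = a_true * y_prev + b_true * u_prev + 0.01 * rng.normal()
        phi = np.array([y_prev, u_prev])        # regressor of past values
        theta, P = rls_step(theta, P, phi, y)
        y_prev, u_prev = y, u
    print(theta)   # approaches [0.8, 0.5]

The estimated parameters can then be used by a model-reference controller; the estimator itself is the 'adaptive' part of the loop.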

The application of the classical techniques of adaptive control theory can be extremely powerful. However, its range of applicability is limited to tasks with very restrictive characteristics, namely characteristics of the system model and of the ways in which it may be adapted. Although the assumptions of linearity are far broader than is commonly thought, many real-world tasks of scientific and/or industrial interest are excluded by such model-dependent approaches.

For many industrial control situations the classification of data or datasets is unnecessary. However, it becomes indispensable for some processes, for example automated quality control and the construction of goal-seeking robots which are required to act in an unstructured (dynamically changing) environment. If automated quality control is required within an industrial production line, deviations from various standard situations or standard patterns must be detected and classified. It is then necessary to relate the measured sensor data to datasets corresponding to a given class of templates or standard objects. The function of the control system thereby changes into that of a decision-making device whose input corresponds to the recognized class and whose output is an optimum control strategy relating the desired actuator reaction to the particular situation.

If control processes are roughly divided into two classes:

- CLASS_A_PROCESSES characterized by constantly recurring situations,
and
- CLASS_B_PROCESSES characterized by new and unpredicted situations,

it is the second class which demands a context-dependent classifier-control system including image and/or object interpretations as well as an adaptive decision-making capability.

The purpose of sensors and sensory processing is to detect the state of the environment, i.e. the position, orientation, and spatial-temporal relationships of objects in the world, so that control signals appropriate to the task goal can be generated. This implies among other things that the processing of sensory data (including their classification) must be done not only in real-time but also in the context of the control problem.

However, what is the meaning of 'CONTEXT_OF_CONTROL' in relation to the context of classification?

 

2. Adaptive Learning and Control

The answer to this question differs completely for the two classes of processes and should be discussed within the context of adaptive learning, adaptive control, or - using the connectionist parlance - in the context of supervised and unsupervised learning. For clarification, a short cybernetic definition of 'learning' will be given [Bateson 1972; Goldammer and Kaehr, 1988]:

 

· LEARNING_0 (ZERO_LEARNING)

Phenomena which approach this degree of simplicity occur in various contexts. Technical examples are given by:

- electronic circuits where the circuit structure itself is not subject to changes resulting from the passage of electronic signals within the circuit, as is the case in conventional control;

- look-up tables where the input-output relations of the control and/or classifier system have been collected in some way and stored as an invariable dataset.

Conceptually the connectionist models of 'supervised learning' belong to this category.

 

· LEARNING_I (1st_ORDER_LEARNING)

While ZERO_LEARNING by definition is characterized by specificity of response which - right or wrong - is not subject to correction, LEARNING_I is the change in specificity of response by corrections of the datasets within the control system. Such processes correspond to Hebb's principle of 'self-organization', i.e., the internal organization of datasets changes by adaptation to new situations within a given 'CONTEXT_OF_CONTROL' and/or classification. In connectionist parlance, 'unsupervised learning' corresponds to LEARNING_I.
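A minimal sketch of such Hebbian self-organization (Oja's stabilized variant of Hebb's rule is used here, since the plain rule grows without bound; all names and data in this fragment are illustrative assumptions):

    import numpy as np

    def oja_update(w, x, eta=0.01):
        # Unsupervised Hebbian update with Oja's decay term: the weight
        # vector adapts to the statistics of the inputs - no teacher is
        # involved - and converges to the leading principal direction.
        y = w @ x                        # neuron output
        return w + eta * y * (x - y * w)

    # Toy data: 2-D inputs correlated along the direction (1, 1).
    rng = np.random.default_rng(2)
    w = rng.normal(size=2)
    for _ in range(5000):
        x = rng.normal() * np.array([1.0, 1.0]) + 0.1 * rng.normal(size=2)
        w = oja_update(w, x)
    print(w / np.linalg.norm(w))   # approximately +/- [0.707, 0.707]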

 

· LEARNING_II (2nd_ORDER_LEARNING)

LEARNING_I is a process characterized by corrections of errors within a set of alternatives. LEARNING_II is defined (in the present terminology) as learning to find a label for all changes in the process of LEARNING_I. For phenomena of this order, various terms have been proposed in the literature, e.g., 'learning to learn' or 'set learning'. Effectively LEARNING_II changes sets of alternatives, while LEARNING_I makes corrections within one (single) set. Technically this means that not only the (internal) datasets but also the algorithm, which defines the structure of the (formal) system, changes simultaneously during such an (autonomous) learning process.

For LEARNING_I, it is the variation of the internal organization of the data structure which is adapting or self-organizing. In LEARNING_II, which is interwoven with LEARNING_I, it is the relationship between the system (e.g., a robot) and the 'environment' which is of self-organizing nature. The relationship represents a cognitive process as the basic requirement for any technical or living system acting in an unstructured (dynamically changing) environment. It follows that it is necessary to model the process of distinction which occurs between an autonomous system and its environment.

In his text on 'Adaptive Control: The Model Reference Approach', Landau [Landau, 1979] states that

"while a feedback control system is oriented towards the elimination of the effect of state perturbations, the adaptive control system is oriented towards the elimination of the effect of structural perturbations upon the performance of the control system".

Therefore any adaptive control system necessarily requires adaptive classification algorithms either in the sense of ZERO_LEARNING, LEARNING_I, or LEARNING_II depending on the purpose for which the control system is designed.

Although the tools and capabilities of conventional classification and control theory should not be discarded, their limitations should be recognized.

As stated above, their application is restricted to certain well-defined problems.

All statistical machine learning algorithms discussed in the literature, including the connectionist learning models (neural nets), belong to the category of ZERO_LEARNING or LEARNING_I. They are characterized by

- a causally determined input/output relationship with (or without) implemented feedback algorithms;

- an (ultra-)metrically organized structure [Rammal et al., 1986; Parisi, 1987] corresponding to indexed hierarchies [Benzécri, 1984] (the two defining inequalities are written out after this list);

and

- operational openness (see below).
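For reference, the two metric conditions alluded to above can be written out explicitly (standard definitions, given here in LaTeX notation):

    d(x,z) \le d(x,y) + d(y,z)                % triangle inequality (metric)
    d(x,z) \le \max\{ d(x,y),\, d(y,z) \}     % strong triangle inequality (ultrametric)

A distance function satisfying the second, stronger condition induces exactly the nested clusters of an indexed hierarchy; this is the sense in which hierarchically structured, mono-contextural systems are (ultra-)metrically organized.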

Open systems do not have an environment of their own: the only "environment" they have is specified by an outside observer and does not exist relative to the system itself. This means that cognition cannot exist in such systems [Goldammer and Kaehr, 1988; 1990; 1992]; open systems such as connectionist models of neural nets are NON-COGNITIVE. From a more technically oriented point of view, these models can be regarded conceptually as (non-linear) adaptive signal or data filters. Their usefulness for solving special problems of classification, control and prediction in the sense of LEARNING_I is generally accepted.

Systems or processes which can be described and modeled on the basis of ZERO_LEARNING or LEARNING_I algorithms are characterized by a hierarchically structured organization. Needless to say, all CLASS_A_PROCESSES belong to this category. From CLASS_B, only those processes associated with a narrow or slowly varying context belong to the category of LEARNING_I. The following simple example of a control problem for a model car may illustrate the situation:

a) A model car should drive into a garage as a human driver does. The garage is surrounded by two walls. The space in front of the garage is too narrow to put the car directly into the garage, so some parking technique is needed. The walls enable the car to measure its direction and position using its sensing device. For the position there are three variables: x (front wall distance), y (side wall distance), θ (heading angle). The model car (shown on a video tape) has the following configuration:

- body length 46 cm, width 22 cm, weight approx. 5 kg;
- drive force on the front wheel, DC precision motor;
- one IR sensor, driven by a stepping motor, for the measurement of distance and direction;
- on-board high-end 16-bit microcontroller SAB 80C167 (features: register-oriented context switch, 8-channel 10-bit AD converter, capture/compare registers/counters used as high-resolution switching DA converters, clock frequency 40 MHz);
- external memory: 256 k static RAM;
- lead-acid accumulator, 12 V (9 Ah).

The car microprocessor can be connected alternatively to a human teacher via radio telecontrol or to a host computer via a bidirectional telemetric system. For further detail see ref. [Swik et al., 1996].

Step 1 (ZERO_LEARNING or supervised learning):
In order to gather data, the model car is driven by a teacher (via radio telecontrol) from different positions of a given area into the garage. During the teaching process the car microprocessor collects all sensor and corresponding actuator data. The result of the teaching process is a look-up table that is stored in the on-board memory.
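A minimal sketch of this teach-in phase (the data layout and the coarse quantization are assumptions made here for illustration; the actual implementation is described in [Swik et al., 1996]):

    def record_teach_in(trajectories):
        # Build a look-up table from teacher-driven trajectories. Each
        # trajectory is a sequence of (sensors, actuators) pairs sampled
        # while the human teacher steers the car into the garage; sensors
        # is assumed to be the state (x, y, theta).
        table = {}
        for trajectory in trajectories:
            for sensors, actuators in trajectory:
                key = tuple(round(v, 1) for v in sensors)  # coarse quantization
                table[key] = actuators                     # last teacher action wins
        return table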

Step 2 (LEARNING_I or unsupervised learning):
The teaching process includes only a finite number (typically five to ten) of trajectories. Following the teach-in phase (step 1), the car must drive into the garage from arbitrary positions within a given area. During this period, the look-up table is complemented by new input/output (i.e., sensor/actuator) data. In addition to this, the car must optimize the trajectories (or more generally the task) according to some pre-defined or pre-programmed criteria, without any help from the external human teacher.
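One plausible mechanism for this phase is sketched below (nearest-neighbour retrieval is an assumption made here; the optimization of the trajectories against the pre-programmed criteria is omitted):

    import math

    def nearest_action(table, sensors):
        # For a state never seen during teach-in, reuse the actuator
        # command of the closest stored state (Euclidean distance over
        # the state variables x, y, theta).
        key = min(table, key=lambda k: math.dist(k, sensors))
        return table[key]

    def drive_and_extend(table, sensors):
        action = nearest_action(table, sensors)
        table[tuple(sensors)] = action   # complement the table with the new pair
        return action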

Step 3 (LEARNING_II or learning to learn): In order to reach its goal (the garage) the car must drive around some obstacles which did not exist during the teach-in process. If the car meets such an obstacle several times, it must be able to decide (without the help of an external human teacher) whether or not this new object should be added to the data in the look-up table.
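The fragment below caricatures this decision as a fixed-threshold rule. It is deliberately mono-contextural: the threshold and the context 'persistent obstacle vs. transient perturbation' are supplied by the programmer, not chosen by the system itself, which is exactly the limitation discussed in the next section:

    from collections import Counter

    encounter_counts = Counter()
    PERSISTENCE_THRESHOLD = 3   # assumed: three sightings => treat as permanent

    def register_obstacle(table, position, detour_action):
        # Count how often an obstacle is met at (roughly) the same place;
        # after enough encounters, promote the detour to permanent
        # knowledge by adding it to the look-up table.
        key = tuple(round(v, 1) for v in position)
        encounter_counts[key] += 1
        if encounter_counts[key] >= PERSISTENCE_THRESHOLD:
            table[key] = detour_action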

From the trajectories five variables can be measured:
- the position of the car given by x, y, θ (these are the sensor or input data);
- the angle of the front wheel in moving forward (actuator or output data);
- the angle of the front wheel in moving backward (actuator or output data);
- the speed control (actuator or output data);
- the angle between the sensor's pointing direction and the main axis of the car.
The input (or sensor) data are labeled x_{1k}, x_{2k}, ..., x_{ik} and the corresponding output (or actuator) data y_k, with k = 1, 2, ..., 5 for the present example.

Between the input variables x_{1k}, x_{2k}, ... and the output or actuator data y_m a functional relation exists. The question is whether or not it can be given in an explicit way, viz.

y_m = g(x_{1k}, x_{2k}, ..., x_{ik})       [1]

The problem to be solved in the design of any technical system with the capability of ZERO_LEARNING and LEARNING_I (such as the model car described above) is to find an efficient algorithm for the relation in eq. [1]. This is not a trivial problem, but it can be solved (at least in principle) for many technical applications (cf. [Swik et al., 1996]).
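One of many possible algorithms for eq. [1] is sketched below: a radial-basis-function interpolation of the teach-in data. This choice and the data shapes are assumptions for illustration only; the actual car uses a fuzzy controller [Swik et al., 1996]:

    import numpy as np

    def fit_rbf(X, y, width=1.0):
        # Fit y ~ g(x) from n teach-in samples: X has shape (n, d) with the
        # sensor data, y has shape (n,) with one actuator channel. Returns
        # a callable approximating g in eq. [1].
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        Phi = np.exp(-d2 / (2 * width ** 2))                 # Gaussian kernel matrix
        w = np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)  # regularized weights
        def g_hat(x):
            phi = np.exp(-((X - x) ** 2).sum(-1) / (2 * width ** 2))
            return phi @ w
        return g_hat

    # Toy usage with stand-in data (not the car's real datasets):
    rng = np.random.default_rng(3)
    X = rng.uniform(0.0, 1.0, (50, 3))            # stand-in for (x, y, theta)
    y = np.sin(X @ np.array([1.0, 2.0, 0.5]))     # stand-in actuator channel
    g_hat = fit_rbf(X, y)
    print(g_hat(X[0]), y[0])   # the fit interpolates the teach-in points closely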

For processes such as the one described by steps 1-3, it is important to realize that their 'CONTEXT_OF_CONTROL' is constant in time. This also holds for the obstacles introduced in step 3, which can be considered as perturbations or "noise" within the global context of control, namely 'parking_in_a_predefined_position'.

 

3. Classification and Interpretation - Polycontextural Control

While ZERO_LEARNING and LEARNING_I can be modeled by statistical algorithms or by the new class of adaptive computational models such as neural networks, genetic algorithms or fuzzy logic, which have recently been subsumed under the term "soft logical" computation, it is not possible to model processes such as LEARNING_II on the basis of these algorithms; here such algorithms will therefore be categorized as 'mono-contextural' computational conceptions rather than "soft logical" computation. Mono-contextural descriptions are always hierarchically structured, i.e., the triangle inequality as a defining relationship of metricity strictly holds. All input/output systems with (or without) implemented feedback algorithms belong to this category.

However, if self-referential processes such as cognition and volition are included, then any mono-contextural description necessarily leads to logical antinomies and ambiguities. This is an immediate result of the requirement that any self-referential system must be able to distinguish between itself and its environment. Models of cognitive processes belong to the class of LEARNING_II, which is characterized by an interplay of heterarchically and hierarchically structured processes. In a poly-contextural framework 'heterarchy' is established inter-contexturally (by transitions between different contextures), whereas hierarchical structures are defined intra-contexturally, i.e., within a contexture [Goldammer and Kaehr, 1988]. Thus a polycontextural calculus offers the possibility of modeling systems in which different contexts are simultaneously active and allowed to interfere with each other.

Poly-contextural logic extends the idea behind the term 'context-dependence' by introducing the concept of the 'contexture'. It also represents a theory which provides the basis for modeling and simulating changes of contextures (contexts) on logical machines in a formal mathematical sense, thereby opening new possibilities for any theory of cognition as well as communication, classification, control and decision.

A contexture is a logical domain where all classical logical rules hold rigorously. Poly-contexturality results from the mediation of different contextures by order and exchange relations, i.e., logical domains or contextures do not exist in isolation but are mediated with each other by non-classical logical operators. An example is the 'transjunction', which allows the modeling of parallel and simultaneously existing processes [Goldammer and Kaehr, 1992].

In the sense of Russell's theory of logical types, LEARNING_II as defined above is of higher logical type than LEARNING_I. This corresponds to the well-known logical relation between the name (or image) of an object and the object itself. The name/image (or operator) of an object always belongs to a different logical type from that of the corresponding object (or operand). The problem with any mono-contextural formal representation of 2nd-order phenomena such as LEARNING_II (or, more generally, of all cognitive processes) arises from the necessity of modeling transitions between different logical types (or domains). A logically unambiguous description of transitions between different logical domains is essential for the modeling and simulation of heterarchically organized processes.

The theory of polycontexturality is a conceptual foundation for the modeling of those transitions [Kaehr, 1981; 1996]. It allows the modeling of a bifurcation from one logical domain into at least two parallel, simultaneously existing contextures. In this way, a non-classical form of parallelism results from the distributed circularity of operator (image of an object) and operand (object) between different logical domains. The whole system then forms a totality which cannot be reduced to a collection of sequential processes without describing a completely different system, i.e. its essential features would be lost in such a reduction. The theory provides the necessary complexity for a formal description of parallel simultaneity.

In a broader context the problem of scene analysis includes image interpretation. Any kind of automatic interpretation necessarily leads to the problem of modeling 'context-dependencies', a problem which belongs (in the present terminology) to the category of 2nd_ORDER_LEARNING. Artificial Intelligence is one of the youngest fields of research concerned with 'induction', a scientific concept characterized by its pronounced dependence on particular contexts. This, however, results in profound difficulties if the apparatus of formal logic is applied with the same rigor to inductive inference as it can be to deduction. The Carnapian approach to inductive reasoning, for example, has given rise to numerous paradoxes. In other words, all classical logical systems, as well as the so-called (linguistic) context-logics such as modal logics, non-monotonic logics, or fuzzy logic, are too weak to put the problem of context-dependencies in strictly scientific (formal) terms.

Although there are many examples discussed in the literature on advanced robotics and autonomous control which require 2nd_ORDER_LEARNING [Brady et al., 1988; Miller et al., 1990; Antsaklis and Passino, 1993], the model car will be used to point out the differences between first and second order conceptions:

b) If the model car reaches its parking position and this position is blocked by an obstacle, the normal technical approach within the given global context of control, i.e. 'parking_in_a_predefined_position', would be to stop the vehicle and give an error signal. Nobody would expect the car to make a decision of its own in a new and unpredicted situation, e.g. to define a new parking place, unless the system had been prepared for such a decision by its programmer. However, if we insist that the system itself must find a solution for the situation which is new and unpredicted from the system's point of view (autonomy), then the new situation (object) must not only be detected but also classified as a different situation (in comparison with the object/obstacle of step 3). A classification, however, is only possible within a certain context, i.e., the (autonomous) system has to choose a context by itself, which is a volitive process. It is the choice of a context which corresponds to an interpretation of the classified object by the system itself. If the system is a vehicle, it may decide either to ignore the new object, to stop, or to look for a new parking place. The decision depends on the chosen context, i.e. on the interpretation of the new situation.

If we consider this example conceptually as a first step towards a domestic robot, it should be clear that the context chosen by the system itself in the first approach may vary during the operation of the learning system. These variations correspond to changes within its knowledge-base (resulting from the knowledge acquisition) during the operation of the system.

 

4. Conclusion

To summarize, learning in an unstructured (dynamically changing) environment as required, for example, by 'Advanced Robotics' comprises at least two simultaneously interacting processes:

i) a volitive (decision-making) process structuring the environment by a determination of relevances and a corresponding context of significance within the semantic domain produced by (ii);

ii) a classification and abstraction of the data by cognitive processes, producing a representational structure of content and meaning within the context chosen in (i).

Both processes are complementary to each other, which means that neither of the two can be considered or described separately - although from a physical, i.e. a 1st_ORDER, point of view they are usually assumed to be separable. As a consequence, cognitive processes (such as learning) are very often considered without volition (the decision-making processes) and vice versa. Both processes - cognition and volition - involve parallel and simultaneously interacting events such as classification and control within an autonomous system.

 

References

P.J. Antsaklis and K.M. Passino, An Introduction to Intelligent and Autonomous Control, Kluwer Academic Publ., Dordrecht, 1993.

G. Bateson, Steps to an Ecology of Mind, Int. Textbook Publ., London, 1972.

J.P. Benzécri, L'analyse des données 1: La taxinomie, Dunod, Paris, 1984.

M. Brady et al. (eds.), Robotics and Artificial Intelligence, NATO ASI Series F: Computer and Systems Sciences, Vol. 11, Springer Verlag, Berlin, 1994.

E. von Goldammer and R. Kaehr, Problems of Autonomy & Discontexturality in the Theory of Living Systems, in: Analyse dynamischer Systeme in Medizin, Biologie & Oekologie (D.P.F. Moeller and O. Richter, eds.), Springer Verlag, Berlin, pp. 3-17, 1990.

E. von Goldammer and R. Kaehr, Das Immunsystem als kognitives System, in: Berichte des Instituts fuer Informatik, TU Clausthal, 1992.

R. Kaehr, Materialien zur Formalisierung der dialektischen Logik und der Morphogrammatik, in: G. Guenther, Idee und Grundriss einer nicht-Aristotelischen Logik, Felix Meiner Verlag, Hamburg, 1981.

R. Kaehr and Th. Mahler, Introducing and Modeling Polycontextural Logics, this volume, 1996.

E. von Goldammer and R. Kaehr, Poly-contextural modeling of heterarchy in brain function, in: Models of Brain Function (R.M.J. Cotterill, ed.), Cambridge Univ. Press, Cambridge, pp. 463-497, 1988.

R. Kaehr and E. von Goldammer, Again Computers and the Brain, J. Molecular Electronics 4, pp. 31-37, 1988.

Y.D. Landau, Adaptive Control: The Model Reference Approach, Vol. 8, Marcel Dekker Publ., New York, 1979.

W.T. Miller et al. (eds.), Neural Networks for Control, The MIT Press, Cambridge, 1990.

G. Parisi, Facing Complexity, Physica Scripta 36, pp. 123-124, 1987.

R. Rammal, G. Toulouse and M.A. Virasoro, Ultrametricity for physicists, Rev. Mod. Phys. 58, pp. 765-788, 1986.

R. Swik, H. Lerchner, R. Wehner, A fuzzy controlled model car, to be published, 1996.

