1302 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, VOL. 23, NO. 5, SEPTEMBER/OCTOBER 1993

Acquisition of Knowledge for Autonomous Cooperating Agents

Edward Szczerbicki

Abstract-In an organizational context, autonomous agents consist of groups of people, machines, robots, and/or guided vehicles tied by the flow of information between an agent and its external environment as well as within an agent. Mathematical modeling is used to evaluate such an information flow.

The evaluation of an information flow is performed for different types of external and internal environments. Two major cases are taken into account, i.e., static and dynamic processes describing the external environment. Such issues as the role of correlation and interaction and the losses caused by incomplete and delayed information are addressed. Only actions that are described by real numbers and utility functions that are twice differentiable are considered.

The results of the model-based evaluation of an information flow in different decision situations are formulated as IF . . . AND . . . THEN rules that provide some useful knowledge about the functioning of autonomous agents. Generation of such knowledge is an important factor in the successful implementation of ideas related to atomized organizations and provides a bridge into the distributed systems and artificial intelligence (AI) communities. To support the development of such a bridge, an approach is suggested that combines knowledge expressed by traditional IF . . . THEN rules with a machine learning technique based on the training of a neural network. A three-layer neural configuration is used. It is demonstrated that the use of networks shows promise as a tool for the development of information structures for autonomous agents. The concepts included in the paper are illustrated with examples providing interpretation and relation to real situations.

I. INTRODUCTION: THE PROBLEM AND THE MODELING APPROACH

A. The Problem

ATOMIZED organizations consist of a limited number of autonomous agents, each of which decides about its own information input and output requirements [5], [39]. Autonomous agents can still be interrelated and embedded in larger systems, as autonomy and independence are not equivalent concepts [19], [29]. To maintain this autonomy, control over information flow and processing should be distributed among the agents. This paper addresses the problem of formal modeling of an information flow in autonomous systems formed by groups of people making decisions and working together. The formal (mathematical) model is then used to extract some knowledge about the functioning of autonomous groups.

Manuscript received August 30, 1991; revised April 10, 1992, October 3, 1992, and February 11, 1993.

The author was with the Department of Industrial and Management Engineering, University of Iowa, Iowa City, IA 52242. He is now with the Technical University of Gdansk, 80-952 Gdansk, Poland.

IEEE Log Number 9209694.

Although few management functions have been formalized and automated, advances in information retrieval, processing, display, computer-aided management (CAM), and decision technologies have certainly led to significant modeling applications that help people perform management functions [4], [6], [11], [13], [45], [46], [47]. Mathematical models in management and behavioral science are designed to describe, understand, and finally support processes and activities that are primarily intellectual, such as drawing conclusions from evidence, making predictions from past performance, and deciding on an appropriate course of action to follow. In other words, mathematical models representing industrial, social, and behavioral systems are developed mainly to create knowledge.

The creation of knowledge and generalization in decision science is a difficult task [21]. Empirical knowledge representation for management information systems and decision support has been discussed in [2], [8]. Various practical systems capable of extracting descriptive decision-making knowledge from data are presented in [27]. The role of formal models in knowledge creation has been discussed in [12]. In this paper, the formal model is used to show how some knowledge connected with the evaluation of an information flow in autonomous systems can be created. This knowledge is codified in IF . . . AND . . . THEN . . . rules. Such rules describe information flow in certain decision situations in static and dynamic external environments. They are also essential in the development of rule-based group decision support systems.

The modeling problem with which this paper is concerned could be linked with any technologically based organization (especially an organization with the type of technology known as long-linked [42]) in which an engineer, a manager, and a member of a group of workers are, in their professional capacity, required to make decisions. The most troublesome feature of most industrial systems is that while their results may be satisfactory in a perfect situation, they are unsatisfactory in real life. Real life is, of course, somewhat imperfect. Materials and parts are not delivered on time no matter how carefully they are scheduled and progressed; operators are absent; machines break down; specifications are altered; additional labor cannot be recruited. The list is long, indeed, but basically it covers two major problems:

0018-9472/93$03.00 © 1993 IEEE


1) information about the events in the environment is uncertain and changes with time;

2) the time required to accumulate the information needed in decision making delays it, and the age of information must be treated as one of its attributes.

Thus, important real-world problems faced by groups invariably involve dynamics and uncertainties about events or variables describing the environment in which groups are functioning. Inferences and decisions must be made in the face of such factors, and information that helps to reduce the uncertainty can play a crucial role in the inferential and decision-making processes.

After obtaining additional information, either through observation or exchange, a decision maker might be able to make more precise estimates of random variables and might be able to reduce the risk associated with potential actions that are being considered for implementation. The notion that more information leads to better inferences and decisions motivates the consideration of all available information. The decision to obtain full information is usually motivated by a desire to reduce uncertainty about the environment and thus to increase precision.

In decision-making problems under uncertainty, the desire to increase precision is related to the expectation that greater precision will generally lead to improved decisions. Information acquisition, however, is a time-consuming process, and we would often expect full information to be somewhat delayed. This delay is caused by the dynamic character of the environment in which the organization is functioning, and, intuitively, it seems that this phenomenon should reduce the overall content of full information. This potential reduction, which is mainly a reduction in precision, could decrease the expected value of information.

In [35] it is demonstrated that the changes in value of full and delayed information resulting from a dynamic environment do in fact generally move in opposite directions and that the magnitude of such changes can be substantial. It is believed that the above has important implications for the acquisition and use of information in decision-making problems. Information delay represents one of the significant theoretical problems that are encountered once we begin to give advice to real decision makers about the best ways to organize for the performance of real tasks.

An autonomous group has been chosen as the subject for further analysis. Such a group usually functions in an external environment that determines the decision-making process of the group members. This decision-making process depends mainly on the realization of certain variables describing the state in which the external environment is placed. Information about the realizations of those variables, together with their interrelationship, represents the knowledge of a group member. At the same time, however, the group functions in a so-called internal environment [37], which represents, for example through the manufacturing process on which the group is based, the concept of a linkage and network inside the group. Information about the character of this linkage also represents knowledge of a group member. Thus, such knowledge could be described by the following:

1) the characteristics of the external environment (the relationship between variables describing the environment and its dynamics);

2) the characteristics of the internal environment, i.e., the relationship between the actions of the members of a group;

3) the range of information about variables describing the external environment.

The formal representation of the above knowledge is presented in the next section. It is assumed that all group members possess the "common knowledge" about the characteristics listed above as 1) and 2) (all other possible categories of decision situations faced by a group are discussed in [36]). Their knowledge about the realizations of variables describing the external environment may differ, and it depends on the flow of information (observation and communication). Information in this context means the message about realizations of variables in the external environment. Such information can be exchanged between the members of a group, adding to their common knowledge.

A general discussion of some principles of the functioning of autonomous groups and the role of information flow in an "information society" is included in [24] and in [29]. The role of information flow in the achievement of organizational goals is analyzed in [28]. Earlier works on team functioning, modeling, and simulation include [18], [15], and [34]. A mathematical representation of autonomous group functioning in static environments is developed in [37]. The process of structuring an information flow in dynamic environments is discussed in [38]. In this paper the approach developed earlier is used as the basis for extracting rules that can govern the flow of information in various decision situations.

B. The Approach

In an organizational context, autonomous systems can consist of groups of people working together. Generally, the functioning of such groups depends to a large extent on information structures that connect the members of a group both with one another and with the environment in which the group is functioning. Such an environment has at least two different attributes. The first is connected with the "outside world" (external environment) and the second with the interaction between the members of a group (internal environment). Thus, one has three components that are the key to understanding the functioning of an autonomous system: information structure, external environment, and internal environment. If one is able to formally describe these components, the connections between them, and the ways they influence each other, it will be a big step toward understanding the functioning of autonomous systems that will eventually enable one to improve their performance. The growth of interest in and the demand for such ability in all kinds of organizations is apparent [50].


Formal representation of decision-making processes forms the core of the generalized model of autonomous group functioning. Since most autonomous systems involve decision analysis with incomplete knowledge under uncertainty, the best alternative is very often defined, by assuming that the description of the environment is known statistically, as the one that maximizes the expected utility [7], [14], [15], [40]. This utility ranges from a monetary value to a subjective, qualitative feeling and usually is designed for use in certain application areas. For knowledge extraction purposes, a general approach is needed that captures the whole of the behavior of a group. Such an approach, based on the correlation between information and energy, is outlined next. In this approach it is assumed that actions are represented by real numbers and consequence functions are twice differentiable. Certain features implemented in previous research presented in [36] are included for the sake of completeness.

Let A represent the set of possible actions that can be undertaken by the members of a group, Z the set of corresponding consequences, and X the random variables describing the actual state of the external environment. It can be assumed that z = f(a, x), as the particular consequence (z) usually depends on an action (a) undertaken in the particular state of the environment (x). On the other hand, the decision about a particular action depends on the information that is available about the state of the environment. If P stands for the decision function, we have a = P(d), where d represents information.

For a general description of the function f(a, x), let us consider a certain correlation between information, action, and energy. Its theory is relatively recent, but it has already been pointed out that energy can be replaced by information and vice versa. This replacement is of statistical character and can be expressed graphically as in Fig. 1 [36]. The curve in Fig. 1 shows the hypothetical hyperbolic regression line of energy on information.

Biological brains that control living organisms are examples of systems that function according to the replacement depicted in Fig. 1. As stated in the emerging theory of intelligence [1], the performance of a task (a task is defined as a piece of work to be done by the system [1]) needs both task knowledge (information that is processed by the brain) and the expenditure of energy. The more information available as to what tools, time, resources, materials, and conditions are required to perform a named task, the smaller the expenditure of energy needed for this performance. With less information supplied, more energy is spent on the trial-and-error process. The above relation between the amount of information and the expenditure of energy is also the basis for the energy function approach to modeling artificial neural network dynamics [17].

Knowledge-based manufacturing systems perform given tasks with smaller energy expenditures if they contain more information on the manufacturing process [44]. Also, fault diagnosis systems need less energy to detect a fault if they use more information on the process that is diagnosed [33].

Fig. 1. The general character of energy/information replacement.

Thus, the replacement suggested in Fig. 1 seems to be general enough to assume it as the framework for the subsequent development of the approach presented in this section.

According to the curve in Fig. 1, the more information one has, the less energy he/she needs to perform a given task. This is only true for a limited amount of information (there are problems of complexity for large volumes of information), so the curve in Fig. 1 asymptotically approaches a certain energy level and never goes below it. For the amount of information described in Fig. 1 as C1, a certain task can be done using E1 energy; let us assume that E1 = Emin. Then, for given C1 there exists the best way (action Aopt) to fulfill the job, i.e., the action that uses E1 energy. Actions different from Aopt result in more energy consumption. The above concept is shown in Fig. 2.
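The hyperbolic replacement curve of Fig. 1 can be sketched numerically. The functional form E(C) = Emin + k/C below is an assumed stand-in for illustration only; the paper specifies just that the regression line is hyperbolic and asymptotic to a minimum energy level.

```python
# Illustrative sketch of the energy/information replacement curve of Fig. 1.
# The hyperbolic form E(C) = E_min + k / C is an assumption; the paper states
# only that the curve is hyperbolic and never goes below the asymptote E_min.

def energy_needed(c, e_min=1.0, k=5.0):
    """Energy required to perform a task given an amount of information c > 0."""
    return e_min + k / c

# More information -> less energy, approaching but never reaching e_min.
levels = [1.0, 2.0, 10.0, 100.0]
energies = [energy_needed(c) for c in levels]
assert all(e1 > e2 for e1, e2 in zip(energies, energies[1:]))  # decreasing
assert all(e > 1.0 for e in energies)  # never below the asymptote E_min
```

With less information supplied (small c), the returned energy grows, mirroring the trial-and-error expenditure described above.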

For the preference evaluation we have $L = e = f(a, x)$, where $e$ stands for energy. Generally, $f(a, x)$ is a certain function specified in $n$-dimensional space and its minimum can be found using the second-order Taylor series representation [16]. This is the basis for the approximation of $f(a, x)$ in quadratic form:

$$f(a, x) = B_0 - 2B^T A + A^T Q A \tag{1}$$

where $B_0 = b_0(x)$, $A = [a_i]$, $B = [b_i(x)]$, $Q = [q_{ij}]$ ($i, j = 1, 2, \ldots, n$) is a symmetric matrix, and $n$ represents the number of members of a group. A minimum of (1) exists if $Q$ is a positive definite matrix.

To arrive at the best decision functions $P_i$ ($i = 1, 2, \ldots, n$), let $n = 4$. The external environment of the group is described by random variables $X = \{X_1, X_2, X_3, X_4\}$ with realizations $x = \{x_1, x_2, x_3, x_4\}$. Thus we have

$$f(a, x) = b_0(x) - 2\begin{bmatrix} b_1(x_1) & b_2(x_2) & b_3(x_3) & b_4(x_4) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \end{bmatrix} + \begin{bmatrix} a_1 & a_2 & a_3 & a_4 \end{bmatrix} \begin{bmatrix} q_{11} & q_{12} & q_{13} & q_{14} \\ q_{21} & q_{22} & q_{23} & q_{24} \\ q_{31} & q_{32} & q_{33} & q_{34} \\ q_{41} & q_{42} & q_{43} & q_{44} \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \end{bmatrix} \tag{2}$$
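The quadratic consequence function (1) and its positive-definiteness condition can be checked numerically. The sketch below uses invented numbers; only the structure follows the model (Q symmetric, minimum exists if Q is positive definite).

```python
import numpy as np

# Sketch of the quadratic consequence function (1): f(a, x) = b0 - 2 b.a + a'Qa.
# The numeric values are invented for illustration; the structure follows the
# paper's model (Q symmetric, a unique minimum exists iff Q is positive definite).

def f(a, b0, b, Q):
    a = np.asarray(a, dtype=float)
    return b0 - 2.0 * b @ a + a @ Q @ a

q = 0.5  # interaction coefficient, as in (4): q_ii = 1, q_ij = q for i != j
n = 4
Q = np.full((n, n), q) + (1.0 - q) * np.eye(n)

# Q is positive definite here, so (1) has a unique minimum at a* = Q^{-1} b.
assert np.all(np.linalg.eigvalsh(Q) > 0)

b = np.array([1.0, 2.0, 0.5, 1.5])
a_star = np.linalg.solve(Q, b)

# The gradient of f is -2b + 2Qa, which vanishes at a*; nearby points score higher.
assert np.allclose(Q @ a_star, b)
assert f(a_star, 0.0, b, Q) < f(a_star + 0.1, 0.0, b, Q)
```

The stationarity condition Qa = b(x) used here is exactly the full-knowledge analogue of the expected-value conditions (5) derived below.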


Fig. 2. Action/energy dependence for a given amount of information.

Matrix $Q$ is symmetric and thus

$$f(a, x) = b_0(x) - 2b_1(x_1)a_1 - 2b_2(x_2)a_2 - 2b_3(x_3)a_3 - 2b_4(x_4)a_4 + q_{11}a_1^2 + q_{22}a_2^2 + q_{33}a_3^2 + q_{44}a_4^2 + 2q_{12}a_1a_2 + 2q_{13}a_1a_3 + 2q_{14}a_1a_4 + 2q_{23}a_2a_3 + 2q_{24}a_2a_4 + 2q_{34}a_3a_4. \tag{3}$$

The second derivative $\partial^2 f/(\partial a_i \partial a_j)$ describes the relation between actions $a_i$ and $a_j$ because, for a certain $x$, it characterizes the way in which a change in $a_j$ affects $f(a, x)$ depending on $a_i$. Thus, $q_{ij}$ can formalize the internal environment of a group, i.e., the interaction between the members of a group. For the sake of simplicity, let the interaction $q_{ij}$ be constant (during the group's work on a certain task this is usually true):

$$q_{ij} = \begin{cases} 1 & \text{for } i = j \\ q & \text{for } i \neq j \end{cases} \qquad (i, j = 1, 2, 3, 4) \tag{4}$$

and $q$ is called the coefficient of interaction. If $q = 0$, the group members' actions are independent. They are dependent for $q \neq 0$ (the above relationship in an internal environment was introduced in [37]). For given information $d = \{d_1, d_2, d_3, d_4\}$, where $d_i$ stands for the $i$th member's information ($i = 1, 2, 3, 4$), the action $a_{opt} = \{a_1, a_2, a_3, a_4\}$ is chosen as the one for which we have $\min E[f(a, x) \mid d]$. Because $f(a, x)$ is a convex function, the best decisions $a_1 = P_1(d_1)$, $a_2 = P_2(d_2)$, $a_3 = P_3(d_3)$, $a_4 = P_4(d_4)$ will be obtained from

$$E[f_i' \mid d_i] = 0, \qquad i = 1, 2, 3, 4 \tag{5}$$

where $f_i' = \partial f/\partial a_i$ ($i = 1, 2, 3, 4$). We have

$$P_i(d_i) + q \sum_{j \neq i} E[P_j(d_j) \mid d_i] = E[b_i(X_i) \mid d_i] \tag{10}$$

where $i, j = 1, 2, \ldots, n$.

Formalization of the group decision-making process expressed by (10) is a tool necessary for the modeling and evaluation of information flow in an autonomous system. Information flow connects group members with the external environment described by random variables $X$. The connection is represented by the information structure. This structure is modeled by matrix $C$ in which $c_{ij} = 1$ if the $i$th member has obtained (either by observation or communication) information about the $j$th variable's realization (if $c_{ij} = 0$, he/she has not). The $i$th variable's realization can be observed only by the $i$th member of the group. He/she can be informed about other realizations only when communication (information exchange) inside the group is organized. Information exchange is all or none, and partial communication is excluded from the discussion (for more details on the observation and communication processes and the resulting group information structure see [37], [38]). The value of the information structure defined above is given, as in [38], by the following:

$$VC = \min E[f(a, X) \mid C_0] - \min E[f(a, X) \mid C] \tag{11}$$

where $\min E[f(a, X) \mid C_0]$ represents the utility of information structure $C_0$ in which $c_{ij} = 0$ for each $i$ and $j$. Using (10), $VC$ can be represented, as in [38], by

$$VC = E[b^T P]. \tag{12}$$

With the modeling tools given by (10) and (12) one can easily extract knowledge about the functioning of autonomous systems in various decision situations. In the next two sections some samples of such knowledge are specified for static and dynamic environments. This knowledge is easily codified and can be used in control, command, and management of autonomous systems.
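As a sketch of this machinery in the simplest case, consider a two-person group with full information (all c_ij = 1) and zero-mean jointly normal variables with b_i = X_i, i.e., unprocessed information. All numeric values below are invented for illustration; the closed form at the end, 2s²(1-qr)/(1-q²), follows from E[bᵀQ⁻¹b] = tr(Q⁻¹Σ) for this two-person case.

```python
import numpy as np

# Monte-Carlo sketch of the value of a full-information structure using (10)
# and (12): with every realization known, the best action solves Q a = b(x),
# and VC = E[b^T P]. Parameters are invented for illustration; b_i = x_i
# (information is not processed on its way to the group members).

rng = np.random.default_rng(0)
n, q, s, r = 2, 0.5, 1.0, 0.3
Q = np.full((n, n), q) + (1 - q) * np.eye(n)                 # interaction matrix (4)
cov = s**2 * (np.full((n, n), r) + (1 - r) * np.eye(n))      # external environment

x = rng.multivariate_normal(np.zeros(n), cov, size=200_000)  # realizations of X
a_opt = np.linalg.solve(Q, x.T).T                            # best actions under full info
vc_full = np.mean(np.sum(x * a_opt, axis=1))                 # E[b^T P], eq. (12)

# Closed form E[b^T Q^{-1} b] = tr(Q^{-1} Sigma) for the two-person case:
vc_theory = 2 * s**2 * (1 - q * r) / (1 - q**2)
assert abs(vc_full - vc_theory) < 0.05
```

The same simulation loop can be rerun for any (q, r, s) to explore how interaction and correlation shift the value of information, which is exactly what the rules extracted below codify.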

II. STATIC ENVIRONMENT

A. The Role of Correlation and Interaction

Let us consider the following information structures for a two-person group:

$$C1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \quad C2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \quad C3 = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \quad C4 = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \tag{13}$$

Information structures C1 and C2 are created only by observation. In C3 and C4 both observation and communication are involved. Before further analysis is carried out, a simple introductory example of a two-person group is presented to illustrate the relation of the model to real situations. In Section III-D an n-person group model is also illustrated with an example to provide further interpretation.

Two parts (PA and PB) are manufactured by two group members (A and B). The external environment of the group is described by the amount of material available for this production (MA and MB). The assembly process of PA and PB determines the character of the internal environment of the group. For example, if PA and PB take part in the assembly of two different end products, or are the end products themselves, the group members' actions are independent. They are dependent if PA and PB are assembled into the same end product.

For information structure C2 [see (13)] the external environment of the group is described by MA and MB ($X_1 = MA$ and $X_2 = MB$). Information that is available to group member A is given by $d_1 = [MA]$ ($c_{11} = 1$). For group member B we have $d_2 = [MB]$ ($c_{22} = 1$). As there is no communication between members A and B, the values of $c_{12}$ and $c_{21}$ are equal to zero. The elements of vector $b$ are given as $b_1 = [MA]$ and $b_2 = [MB]$. Group member A decides about the amount of material that is ordered for production of part PA. Member B decides the same about the amount of material for production of part PB.

To illustrate the best decision functions and the value of information structure C2 for the above example, we start with

$$b_1 = X_1 = MA = d_1, \qquad b_2 = X_2 = MB = d_2. \tag{14}$$

(Note that (14) means that information is not processed on its way from the external environment to the group members.) Let us then suppose that

$$P_1 = MA \cdot m_1, \qquad P_2 = MB \cdot m_2 \tag{15}$$

as, for the quadratic form of $f(a, x)$, the functions $P_1$ and $P_2$ are linear. From (10) we have

$$m_1 MA + qE[m_2 MB \mid MA] = E[MA \mid MA]$$
$$m_2 MB + qE[m_1 MA \mid MB] = E[MB \mid MB] \tag{16}$$

where $q$ is the coefficient of interaction as introduced in Section I-B. It is obvious that in (16) $E[MA \mid MA] = MA$ and $E[MB \mid MB] = MB$. For the sake of simplicity let us assume that the two-dimensional random variable (MA, MB) has a normal distribution. Thus the regression function $E[MA \mid MB]$ is given as

$$E[MA \mid MB] = MB \cdot r \cdot s_{MA}/s_{MB} \tag{17}$$

where $r$ is the correlation coefficient between the random variables MA and MB, $s_{MA}^2 = \mathrm{Var}[MA]$, and $s_{MB}^2 = \mathrm{Var}[MB]$. For $s_{MA} = s_{MB} = s$, (16) can be rewritten as

$$m_1 MA + q m_2 MA \cdot r = MA$$
$$m_2 MB + q m_1 MB \cdot r = MB. \tag{18}$$

From (18) we have

$$m_1 = m_2 = 1/(1 + qr) \tag{19}$$

which gives the following decision functions:

$$P_1 = MA/(1 + qr), \qquad P_2 = MB/(1 + qr). \tag{20}$$

Finally we have from (12):

$$VC2 = 2s^2/(1 + qr). \tag{21}$$
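The derivation (19)-(21) can be checked by simulation. Under structure C2 each member applies P_i = d_i/(1 + qr), and the structure's value E[bᵀP] from (12) should match the closed form 2s²/(1 + qr). The parameter values below are invented, and zero-mean normal variables are assumed.

```python
import numpy as np

# Numerical sketch of (19)-(21): under information structure C2 each member
# applies the decision function P_i = d_i / (1 + q r), and the value E[b^T P]
# from (12) should match VC2 = 2 s^2 / (1 + q r). Parameters are invented,
# and (MA, MB) are zero-mean jointly normal with common variance s^2.

rng = np.random.default_rng(1)
q, r, s = 0.4, 0.6, 2.0
cov = s**2 * np.array([[1.0, r], [r, 1.0]])

ma, mb = rng.multivariate_normal([0.0, 0.0], cov, size=500_000).T
p1, p2 = ma / (1 + q * r), mb / (1 + q * r)      # decision functions (20)
vc2_mc = np.mean(ma * p1 + mb * p2)              # E[b^T P], eq. (12)

vc2_formula = 2 * s**2 / (1 + q * r)             # eq. (21)
assert abs(vc2_mc - vc2_formula) / vc2_formula < 0.02
```

Varying q and r in this script reproduces the shapes of Figs. 3-6: positive qr shrinks the value of C2, negative qr inflates it.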

The interpretation for other information structures is similar to the one presented above. For example, for information structure C4 (see (13)) we have $X_1 = MA$, $X_2 = MB$, $d_1 = [MA, MB]$, $d_2 = [MA, MB]$, $c_{ij} = 1$ for all $i$ and $j$, $b_1 = [MA]$, and $b_2 = [MB]$.

Following the line of reasoning presented for information structure C2, it can be shown that

$$VC4 = 2s^2(1 - qr)/(1 - q^2). \tag{22}$$

Figs. 3 through 6 describe two-person group functioning and stress the role of correlation in the external environment and interaction in the internal one.

The following conclusions can be extracted from the dependences given by (21), (22), and Figs. 3-6.

1) In a static environment the value of each information structure can be expressed as a function of q, r, and s². This important fact allows one to simulate the flow of information for different external and internal environments.

2) The value of a single piece of information depends on the variance of X; the bigger the variance, the more valuable the information about the realization of the random variable X.

3) The value of an information structure depends on its completeness; the more information it contains, the bigger its value. In fact, we have VC4 ≥ VC3 ≥ VC2 > VC1.

4) For q = 0 there is no interaction in the internal environment and the actions of the members of a group are independent. In such a case, information exchange inside the group does not affect the value of the information structure. For example, though information structure C4 contains a larger amount of information than C2, their values are the same for q = 0.

5) For q ≠ 0 there is an interaction in the internal environment of a group. In such a case, each new piece of information improves the value of the resulting information structure if the relationship between variables describing the external environment is not given by function dependence (r ≠ 1 and r ≠ -1).

6) For r = 1 or r = -1 it is possible to predict the exact X1 realization if the X2 realization is known. In such cases we have VC2 = VC3 = VC4.

7) Interaction in the internal environment can be of substitute (q > 0) or complementary (q < 0) character. For q < 0 the actions of group members change in the same direction. Thus an external environment with the same character of change (r → 1) improves the functioning of a group. Conversely, it is easy to notice that for q > 0 the value of each information structure C increases as r → -1.

The conclusions are summed up as Rules 1 through 8 in Table I. The rules were developed by applying the model solution for a two-person group. The same rules hold for any value of n (the number of group members). It is easy to recognize that the rules in Table I are true for an n-person group as well.

Fig. 3. The value of a single piece of information modeled by information structure C1.

Fig. 4. The value of an information structure that represents observation (information structure C2).

Fig. 5. The value of an information structure that represents observation and partial communication (information structure C3).

1308 IEEE TRANSACTIONS ON SYSTEMS. MAN. AND CYBERNETICS. VOL. 23. NO. 5. SEPTEMBERIOCTOBER 1993

TABLE 1 (Continued.)

1 Rule 8 (Conrinued.) THEN

Rule 9 IF AND

AND THEN

Rule 10 IF AND

AND THEN

Rule 1 IF AND THEN

Rule 2 IF AND THEN

Rule 3 IF AND THEN

Rule 4 IF AND THEN

Rule 5 IF AND AND

THEN

Rule 6 IF AND

THEN

Rule I IF AND

THEN

Rule 8 IF AND

I I I > Rule 1 1 -1 0 1 r IF

Fig. 6 . The value of an information structure that represents full infor- mation (information structure C4).

AND

AND AND THEN

TABLE 1 IF . . . THEN RULES DESCRIBING GROUPS FUNCTIONIVC 13 A ST4TIC ~~l~ 12

ENVIRONMENT IF AND

an external environment of an autonomous group is static it is described by random variables the value of an information structure that represents the flow of information between the group and it5 environment de- pends on interaction between group members, correlation between random variables, and their variance.

an external environment of an autonomous group is static

AND AND THEN

~~l~ 13 IF AND THEN

it is described by a random variable the value of information about this variable realization is proportional to the value of its variance.

an external environment of an autonomous group is static it is described by random variables full information has the value that is always greater or equal to the value of any other information structure.

an external environment of an autonomous group is static there is no interaction in the internal environment it is enough to restrict the information flow only to obser- vation; organizing an information exchange does not im- prove the value of a resulting information structure.

an external environment of an autonomous group is static there is an interaction in the internal environment the relationship between variables describing the external environment is of statistical character information structure should include observation and com- munication.

an external environment of an autonomous group is static the relationship between variables describing the external environment is given by function dependence communication between group members does not affect the value of information structure; information flow should be restricted to observation.

an external environment of an autonomous group is static interaction in the internal environment is of substitute char- acter positive correlation in the external environment is pre- ferred.

an external environment of an autonomous group is static interaction in the internal environment is of complementary character

negative correlation in the external environment is pre- ferred.

an external environment of an autonomous group is static the relationship between variables describing the external environment is given by function dependence there is an interaction in the internal environment it is easier to improve the value of information flow for small groups than for larger groups.

an external environment of an autonomous group is static the relationship between variables describing the external environment is of statistical character there is no interaction in the internal environment efficiency of an information flow increases with increasing n .

an external environment of an autonomous group is static the relationship between variables describing the external environment is of statistical character there is an interaction in the internal environment there is no communication between group members the increase in the value of the information structure de- creases with increasing n.

an external environment of an autonomous group is static the relationship between variables describing the external environment is of statistical character there is an interaction in the internal environment there is communication between group members efficiency of an information flow increases with increasing n .

an external environment of an autonomous group is static there is an interaction in the internal environment the losses caused by incomplete information increase with decreasing correlation in the external environment.

group and the following information structures:

c 5 = i; : :::: 0 0 0 * . -

for which the corresponding values VC are given as

VC5 = s 2 , VC6 = ns’/[l + (n - l )qr]

VC7 = s’{r,[l + (n - 2)ql - (n - l ) n q r } /

((1 - 4) 11 + ( n - l)q13. (24)
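The values in (24) can be spot-checked numerically. The sketch below is an illustrative Python check, not part of the original analysis; the helper names are mine. It evaluates VC6 and VC7 and confirms that they coincide for q = 0 and for r = 1, while VC7 exceeds VC6 for 0 < qr < 1.

```python
# Illustrative check of the values VC6 and VC7 in (24); helper names are assumed.

def vc6(n, s2, q, r):
    # VC6 = n s^2 / [1 + (n - 1) q r]   (observation only)
    return n * s2 / (1 + (n - 1) * q * r)

def vc7(n, s2, q, r):
    # VC7 = s^2 {n[1 + (n - 2)q] - (n - 1)n q r} / {(1 - q)[1 + (n - 1)q]}
    num = n * (1 + (n - 2) * q) - (n - 1) * n * q * r
    den = (1 - q) * (1 + (n - 1) * q)
    return s2 * num / den

n, s2 = 3, 1.0
print(vc6(n, s2, 0.0, 0.5), vc7(n, s2, 0.0, 0.5))  # q = 0: values coincide
print(vc6(n, s2, 0.5, 1.0), vc7(n, s2, 0.5, 1.0))  # r = 1: values coincide
print(vc6(n, s2, 0.5, 0.5), vc7(n, s2, 0.5, 0.5))  # 0 < qr < 1: VC7 > VC6
```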


For example, for the above information structures we have: IF r = 1 THEN VC6 = VC7 (Rule 6 in Table I), IF q = 0 THEN VC6 = VC7 (Rule 4 in Table I), and IF 0 < qr < 1 THEN VC7 > VC6 (Rule 5 in Table I).

To show the best decision functions for an n-person group let us consider, for illustrative purposes, information structure C6. Note that the line of reasoning is similar to the one presented earlier for information structure C2.

We start with

d1 = [X1], d2 = [X2], ..., dn = [Xn] (25)

and

β1 = m1X1, β2 = m2X2, ..., βn = mnXn. (26)

With the assumptions similar to the ones introduced for C2 we have from (10):

m1X1 + qm2X1r + qm3X1r + ... + qmnX1r = X1
. . .
mnXn + qm1Xnr + qm2Xnr + ... + qm(n-1)Xnr = Xn. (27)

Fig. 7. Relationship between VC and n for q = 0 and r = 1.

From (27) we have

mi = 1/[1 + (n - 1)qr] for i = 1, 2, ..., n (28)

which gives the following β:

βi = Xi/[1 + (n - 1)qr] for i = 1, 2, ..., n. (29)

Finally we have from (12):

VC6 = ns^2/[1 + (n - 1)qr]. (30)
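The closed form (28) can be verified by solving the linear system (27) directly. The sketch below is my own illustration (not from the paper): after dividing the ith equation of (27) by Xi, the system reduces to mi + qr·Σ(j ≠ i) mj = 1, which NumPy solves in one call.

```python
import numpy as np

# Solve the reduced form of system (27): m_i + q*r * sum_{j != i} m_j = 1,
# and compare with the closed form m_i = 1 / [1 + (n - 1) q r] in (28).
n, q, r = 5, 0.4, 0.6

A = (1 - q * r) * np.eye(n) + q * r * np.ones((n, n))  # coefficient matrix
m = np.linalg.solve(A, np.ones(n))

m_closed = 1.0 / (1 + (n - 1) * q * r)
print(m)  # every component equals m_closed
assert np.allclose(m, m_closed)
```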

B. The Role of the Number of Group Members

In [38] the following three decision situations were discussed for VC6 and VC7 from the perspective of the role of n in an information flow:

i) r = 1, q ≠ 0

ii) q = 0, r ≠ 1

iii) 0 < qr < 1.

In Table I the rules describing the impact of n on the value of information flow are numbered 9 through 12. Figs. 7 and 8 depict the relationship between VC and n in a static external environment.

Fig. 8. Relationship between VC and n for 0 < qr < 1.

C. Losses Caused by Incompleteness

The impact of incomplete information on the value of an information structure can be calculated as

LVC' = VCfull - VCinc (31)

where LVC' stands for the losses caused by incompleteness, VCfull stands for the value of the full information structure (in such a structure cij = 1 for all i and j), and VCinc stands for the value of an information structure that is not complete. For information structures C1, C2, C3, and C4 we have

LVC3' = VC4 - VC3 = s^2 q^2 (1 - r^2)/(1 - q^2)

LVC2' = VC4 - VC2 = 2s^2 q^2 (1 - r^2)/[(1 - q^2)(1 + qr)]

LVC1' = VC4 - VC1 = s^2 (1 - 2qr + q^2)/(1 - q^2). (32)

The character of the above losses, for a given value of q, can be generally expressed as in Fig. 9.

As can be seen in Fig. 9, the smaller the correlation in the external environment, the more desirable is the communication (information exchange) between the members of a group (Rule 13 in Table I).
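The monotonic character asserted by Rule 13 can also be illustrated numerically. The sketch below is illustrative (the function name is assumed, s^2 = 1): it evaluates the loss LVC2' from (32) for a fixed q and decreasing positive correlation r.

```python
# Losses caused by incompleteness, per (32), with s^2 = 1; illustrative only.
# LVC2' = 2 s^2 q^2 (1 - r^2) / [(1 - q^2)(1 + q r)]

def lvc2(q, r, s2=1.0):
    return 2 * s2 * q**2 * (1 - r**2) / ((1 - q**2) * (1 + q * r))

q = 0.6
for r in (0.9, 0.5, 0.1):
    print(f"r = {r}: LVC2' = {lvc2(q, r):.4f}")
# The loss grows as the correlation r decreases (Rule 13).
```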


Fig. 9. The character of the losses caused by incompleteness (plotted against the amount of information) for a given q and different values of r (r1 < r2 < r3).

D. N-Person Group Functioning: An Example

The following simplistic example is used to provide the interpretation of the n-person group model and its relation to real situations. Decision functions and the values of some information structures for the n-person group were introduced at the end of Section II-A.

N parts (PART-1, PART-2, ..., PART-N) are manufactured by N group members (MEMBER-1, MEMBER-2, ..., MEMBER-N). The external environment of this N-person group is described by the amount of material available for the production (MATERIAL-1, MATERIAL-2, ..., MATERIAL-N), and the relationship between these amounts is of statistical character. The ith group member MEMBER-i manufactures PART-i using MATERIAL-i. The assembly process of PART-1 through PART-N determines the character of the internal environment of the group. For example, if each part is assembled into a different end product, or each is the end product itself, the group members' actions are independent (the coefficient of interaction q = 0). They are dependent if all parts are assembled into the same end product. Suppose, to avoid unnecessary complications, that in the case of all parts being assembled into the same end product the number of PART-1 must be equal to the number of PART-i for i = 2, 3, ..., N. The model interpretation is presented for two information structures introduced earlier in this section.

1) An information structure is given by C6. The external environment of the group is described by the set {MATERIAL-1, MATERIAL-2, ..., MATERIAL-N} and we have

MATERIAL-i = Xi, i = 1, 2, ..., N. (33)

Information that is available for the group member MEMBER-i is given by

di = [MATERIAL-i], i = 1, 2, 3, ..., N (34)

which means that cii = 1 for each i. As there is no communication between group members, the values of cij are equal to zero for i ≠ j. The group member MEMBER-i decides about the amount of material MATERIAL-i that should be ordered for the production of PART-i. Information structure C6 is evaluated for this decision situation by the formula

VC6 = ns^2/[1 + (n - 1)qr]. (35)

2) An information structure is given by C7. For C7 we have

MATERIAL-i = Xi, i = 1, 2, 3, ..., N. (36)

Information that is available for the group member MEMBER-i is given by

di = [MATERIAL-1, MATERIAL-2, ..., MATERIAL-N]. (37)

The decisions of the group members are the same as in 1) and they concern the amount of material that should be ordered for production. Information structure C7 is evaluated for this decision situation by the formula

VC7 = s^2{n[1 + (n - 2)q] - (n - 1)nqr}/{(1 - q)[1 + (n - 1)q]}. (38)

If in both cases 1) and 2) the manufactured parts are the end products, the value of q is equal to zero (independence). The group member MEMBER-i is interested only in the amount of MATERIAL-i that is available for production. Information about MATERIAL-i is the only information MEMBER-i needs to make decisions. The values of MATERIAL-j (i ≠ j) do not matter to MEMBER-i. This means that MEMBER-i does not need information about MATERIAL-j; in other words, such information should not add any value to the information structure. The same may be said about all group members. Thus we have:

For q = 0 the value of information structure C6 should be the same as the value of information structure C7 (VC6 = VC7).

If the manufactured parts PART-1, PART-2, ..., PART-N are all assembled into the same end product, the value of q is not equal to zero (dependence). There is a strong interaction between the group members and their actions change in the same direction. To assemble all parts into the same end product, MEMBER-i, manufacturing PART-i, should follow the number of parts manufactured by the other members of the group. This means that the interaction within the internal environment is of complementary character and q → -1. Thus the group member MEMBER-i is interested not only in the amount of MATERIAL-i available for the production of PART-i, but also in the amount of MATERIAL-j (i ≠ j) (to achieve PART-i = PART-j for each i and j). We have:

For q ≠ 0 the value of information structure C7 should be larger than the value of information structure C6 (VC7 > VC6).

These conclusions are in conformity with equations (35) and (38). The equations show clearly that VC6 = VC7 for q = 0 and that VC7 > VC6 for q ≠ 0. The conclusions are also in conformity with Rule 4 and Rule 5 (see Table I).

III. DYNAMIC ENVIRONMENT

Let the external environment be described, as in [38], by an autoregressive process of the first order. The first-order autoregressive process is given as [18], [41]

X(t) = wX(t - 1) + p(t) (39)

where p(t) are uncorrelated random variables with zero mean and constant variance Var[p] = s^2, and w is the equation coefficient describing the dynamics of the process (the process is stable for w < 1, explosive for w > 1, and Brownian for w = 1). The autoregressive process given by (39) can be used for modeling purposes if the dependence X(t) = f[X(0)] is known. This dependence is as follows [41]:

X(t) = w^t X(0) + Σ_{m=0}^{t-1} w^m p(t - m) (40)

or more generally:

X(t) = w^(z+1) X(t - z - 1) + Σ_{m=0}^{z} w^m p(t - m) (41)

where 1 ≤ z ≤ t - 1.
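The closed form (40) can be checked against the recursion (39) by direct simulation. The following sketch is my own illustration (fixed noise sequence for reproducibility; not part of the original paper):

```python
import random

# Simulate the first-order autoregressive process (39) and verify the
# closed form (40): X(t) = w^t X(0) + sum_{m=0}^{t-1} w^m p(t - m).
random.seed(1)
t_max, w, x0 = 20, 0.8, 1.0
p = [random.gauss(0.0, 1.0) for _ in range(t_max + 1)]  # p[1..t_max] used

# recursion (39)
x = [x0]
for t in range(1, t_max + 1):
    x.append(w * x[-1] + p[t])

# closed form (40)
t = t_max
x_closed = w**t * x0 + sum(w**m * p[t - m] for m in range(t))
assert abs(x[t] - x_closed) < 1e-9
```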

A. Nondelayed Information

In [38] it is shown that the values of the information structures modeled by (13), i.e., C1, C2, C3, and C4, are given, for information that is not delayed, as

VC1 = s^2 M, VC2 = 2s^2 M,

VC3 = s^2 M(2 - q^2)/(1 - q^2),

VC4 = 2s^2 M/(1 - q^2) (42)

where M = Σ_{m=0}^{t} w^(2m), and the values of the information structures modeled by (23), i.e., C5, C6, and C7, are given as

VC5 = s^2 M, VC6 = ns^2 M,

VC7 = ns^2 M[1 + (n - 2)q]/[1 + (n - 2)q - (n - 1)q^2] (43)

with the same value of M. General rules that can be formulated for a dynamic environment using (42) and (43) are included in Table II, placed at the end of this section (Rules 14 through 16).

Equations (42) and (43) also offer the opportunity to analyze the flow of information in various dynamic situations described by w. The details of such an analysis are given in [38], and they allow us to formulate the rules for the stable (w < 1), Brownian (w = 1), and explosive (w > 1) character of the external environment (see Rules 17 through 19 in Table II).
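The three regimes behind Rules 17 through 19 can be seen directly in the factor M from (42). A short illustrative sketch (the function name is assumed): M stabilizes for w < 1, grows linearly for w = 1, and grows roughly exponentially for w > 1.

```python
# M(t) = sum_{m=0}^{t} w^(2m), the common factor in (42) and (43).

def M(t, w):
    return sum(w**(2 * m) for m in range(t + 1))

for w in (0.9, 1.0, 1.1):
    print(w, [round(M(t, w), 2) for t in (5, 10, 20, 40)])
```

For w = 0.9 the values approach the limit 1/(1 - w^2); for w = 1 they equal t + 1; for w = 1.1 they blow up, mirroring the stable, Brownian, and explosive characters of the environment.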

Fig. 10 shows the character of the relationship between t and VC for different values of w.

TABLE II
IF . . . THEN RULES DESCRIBING GROUPS FUNCTIONING IN A DYNAMIC ENVIRONMENT

Rule 14
IF an external environment of an autonomous group is described by a stochastic process
AND information is not delayed
THEN the value of full information is, for each decision situation, greater than the values of the other possible information structures.

Rule 15
IF an external environment of an autonomous group is described by a stochastic process
AND information is not delayed
AND there is no interaction in the internal environment
THEN there is no need for communication inside the group.

Rule 16
IF an external environment of an autonomous group is described by a stochastic process
AND information is not delayed
THEN the more uncertain the realizations in the external environment, the bigger the value of the information about these realizations.

Rule 17
IF an external environment of an autonomous group is described by a stochastic process
AND information is not delayed
AND the external environment is stable
THEN the value of information structures stabilizes with time.

Rule 18
IF an external environment of an autonomous group is described by a stochastic process
AND information is not delayed
AND the external environment is described by Brownian movement
THEN the value of information structures increases proportionally with the increase of time.

Rule 19
IF an external environment of an autonomous group is described by a stochastic process
AND information is not delayed
AND the external environment is of explosive character
THEN the value of information structures increases exponentially with time.

Rule 20
IF an external environment of an autonomous group is described by a stochastic process
AND information is delayed
AND the stochastic process is independent
THEN the value of any information structure is equal to zero.

Rule 21
IF an external environment of an autonomous group is described by a stochastic process
AND information is delayed
AND the stochastic process is dependent
AND the external environment is stable
THEN the losses caused by delayed information stabilize with increasing value of delay.

Rule 22
IF an external environment of an autonomous group is described by a stochastic process
AND information is delayed
AND the stochastic process is dependent
AND the external environment is described by Brownian movement
THEN the losses caused by delayed information increase proportionally with the increase of delay.

Rule 23
IF an external environment of an autonomous group is described by a stochastic process
AND information is delayed
AND the stochastic process is dependent
AND the external environment is of explosive character
THEN the losses caused by delayed information increase exponentially with delay.


Fig. 10. Character of the relationship between t and VC (value of information structure) for different values of w.

Fig. 11. Character of the losses in the value of information structure (LVC'') caused by delayed information, plotted against the delay of information (θ).

B. Delayed Information

For each epoch T the state of the dynamic environment is described by random variables X(T). Information about the realizations of these variables is said to be delayed if, for epoch T, it reaches the group member in the form of X(T - θ) (it is not the current one but describes the state of affairs in the environment in one of the former epochs) [35].

In [35] it is shown that the values of the information structures modeled by (23), i.e., C5, C6, and C7, are given for delayed information as

VC5θ = s^2 w^(2θ) Σ_{N=0}^{t-θ} w^(2N) (44)

VC6θ = ns^2 w^(2θ) Σ_{N=0}^{t-θ} w^(2N) (45)

VC7θ = {n[1 + (n - 2)q]/[1 + (n - 2)q - (n - 1)q^2]} s^2 w^(2θ) Σ_{N=0}^{t-θ} w^(2N) (46)

where θ stands for the delay. It is easy to show that for θ = 0 (information is not delayed) the values of the information structures given by (44)-(46) become identical with the ones given by (43).

An independent stochastic process can be modeled by (40) for w = 0. If θ ≠ 0 (information is delayed) and the stochastic process describing the external environment of a group is independent, the value of each information structure C5, C6, and C7 is equal to zero. Delayed information in such a case cannot be used for inferences about the actual state of the environment, making it useless (see Rule 20 in Table II).

For a dependent stochastic process in the external environment delayed information causes some losses in its value. Such losses are given as

LVC'' = VC(θ=0) - VC(θ≠0) (47)

where VC(θ=0) stands for the value of the information structure without delay and VC(θ≠0) represents the value of the information structure with delayed information. The losses LVC'' caused by delayed information can be calculated for different values of θ and w, and they are expressed as follows:

LVC5'' = s^2[(1 - w^(2θ))/(1 - w^2)] (48)

LVC6'' = ns^2[(1 - w^(2θ))/(1 - w^2)] (49)

LVC7'' = {n[1 + (n - 2)q]s^2/[1 + (n - 2)q - (n - 1)q^2]}[(1 - w^(2θ))/(1 - w^2)]. (50)

The rules concerning delayed information are presented in Table II (Rules 21 through 23).

Fig. 11 shows the character of the relationship between LVC'' and θ for different values of w. As expected, the character of the losses in Fig. 11 is similar to the relationship presented in Fig. 10.

IF . . . THEN rules, such as those presented in Sections II (Table I) and III (Table II), are helpful in the development of traditional rule-based expert systems [20], [25], [48]. In this section an approach is discussed that allows one to develop a connectionist expert system, i.e., a neural network-based expert system [9], [49].

Adaptation, or the ability to learn, is the most important property of neural networks. A neural network can be trained to map a set of input patterns onto a corresponding set of output patterns simply by means of exposure to examples of the mapping. This training is performed by gradually adapting the internal weights of the network so as to reduce the differences between the actual network outputs (for a given set of inputs) and the desired network outputs. Neural networks that learn mappings between sets of patterns are called mapping neural networks [3]. A key property of mapping networks is their ability to produce reasonable output vectors for input patterns outside of the set of training examples [26], [30]. The above is especially important in areas such as those discussed in this paper, i.e., areas for which it is possible to develop only a very limited number of IF . . . THEN rules and thus also to make inferences for only a very limited number of decision situations.

The proposed development procedure includes three steps. The first step involves the model-based generation of knowledge for autonomous groups (Sections II and III of this paper). This knowledge is expressed in the form of IF . . . THEN rules. In the second step, the rules are used to train the neural network. The third step involves the use of the trained neural network in situations that are not covered by the rules used in the second step.

Problem-solving tasks, such as information structure development for an autonomous group, may be considered pattern classification tasks. The system analyst learns mappings between input patterns, consisting of characteristics of the group's external as well as internal environment, and output patterns, consisting of information structures to apply to these characteristics. Thus, neural networks (neural-based expert systems) offer a promising solution for automating the learning process of the analyst.

TABLE III
THE USE OF THE TRAINED NETWORK

No. 1:
  R = 0.95  Strong relationship between variables describing external environment
  T = 0     External environment is static
  Q = 0.01  There is no interaction in internal environment
  Θ = 0     Information is not delayed
  W = 0     Process is independent
  Observation (sensoring): Yes; Exchange: No

No. 2:
  R = 0.2   Weak relationship between variables describing external environment
  T = 0     External environment is static
  Q = 0.90  There is interaction in internal environment
  Θ = 0     Information is not delayed
  W = 0     Process is independent
  Observation (sensoring): Yes; Exchange: Yes

No. 3:
  R = 0     There is no relationship between variables describing external environment
  T = 1     External environment is dynamic
  Q = 1     There is interaction in internal environment
  Θ = 1     Information is delayed
  W = 0     Process is independent
  Observation (sensoring): No; Exchange: No

A. Mapping Formulation, Training, and Use of the Network

A systems analyst, while developing an information structure for a group, transforms certain characteristics of a group into recommendations concerning the flow of information. These characteristics represent the input for the system and they include five parameters: correlation in the external environment (r), dynamics (t), interaction in the internal environment (q), delay (θ), and the type of process describing the external environment (w). Output consists of two recommendations: 1) observation (or sensoring) should be present and 2) an exchange of information should be present (the importance of the above parameters was discussed in Sections II and III). An input portion together with an output portion of the data represents a training pair. The training pairs were used to train a 5-10-2 neural network. The number of input nodes (5) was chosen to match the number of relevant characteristics of a group, and the number of output nodes (2) was chosen to match the number of information flow recommendations. The number of hidden-layer nodes is not constrained in a definite way. If it is too small, the backpropagation algorithm will not converge upon a set of network weights and thresholds. If it is too large, it will take an unreasonably long time to converge. Since the domain IF . . . THEN rules were known in advance, it was possible to determine an approximate lower bound on the number of hidden units needed using the guidelines set forth by [22].

The target values for each output node were normalized in such a way that the maximum target for each node received a value of 0.75 and the minimum target for each node received a value of 0.25. This was done to bring the target values within the output range of the sigmoid output function. The training values for each input node were identically normalized. Prior to training, the network weights were initialized to values from the interval [-1, 1] and the thresholds to values from the interval [-0.25, 0.25]. A learning rate and momentum term of 0.9 were used in the network. These values were chosen on the basis of suggestions contained in the neural network literature [31], [43]. The network was trained with a training tolerance of 5 percent. Ten training pairs were used that were developed according to the IF . . . THEN rules presented earlier. The network was considered trained if, for all training pairs and output nodes, |(desired output - actual output)/(desired output)| < tolerance.

The backpropagation (also called error backpropagation) procedure [26] was used for training purposes. The output error was determined by performing the forward computations in the network and comparing the results with the desired output. The sigmoid output function was assumed. The computed output error was propagated back through the network, and the weights associated with the links were changed in order to reduce the local error fraction. The momentum term was included to decrease the tendency toward oscillation during the training process.
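The training scheme described above can be sketched in a few dozen lines. The code below is my reconstruction under stated assumptions, not the author's original implementation: a 5-10-2 sigmoid network, targets scaled to [0.25, 0.75], a learning rate and momentum term of 0.9, and a handful of rule-derived training pairs patterned after Table III (the specific pairs are illustrative).

```python
import math, random

random.seed(0)

# Rule-derived training pairs (inputs: r, t, q, theta, w), patterned after
# Table III; outputs: observation, exchange (0.75 = yes, 0.25 = no).
pairs = [
    ([0.95, 0.0, 0.01, 0.0, 0.0], [0.75, 0.25]),  # static, no interaction
    ([0.20, 0.0, 0.90, 0.0, 0.0], [0.75, 0.75]),  # static, interaction
    ([0.00, 1.0, 1.00, 1.0, 0.0], [0.25, 0.25]),  # dynamic, delayed
    ([0.50, 0.0, 0.00, 0.0, 0.0], [0.75, 0.25]),  # static, no interaction
]
# Scale inputs into [0.25, 0.75], as done for the targets.
pairs = [([0.25 + 0.5 * v for v in x], d) for x, d in pairs]

N_IN, N_HID, N_OUT = 5, 10, 2
LR, MOM = 0.9, 0.9

def sig(a):
    return 1.0 / (1.0 + math.exp(-a))

w1 = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
w2 = [[random.uniform(-1, 1) for _ in range(N_HID)] for _ in range(N_OUT)]
b1 = [random.uniform(-0.25, 0.25) for _ in range(N_HID)]
b2 = [random.uniform(-0.25, 0.25) for _ in range(N_OUT)]
dw1 = [[0.0] * N_IN for _ in range(N_HID)]
dw2 = [[0.0] * N_HID for _ in range(N_OUT)]

def forward(x):
    h = [sig(sum(w * xi for w, xi in zip(row, x)) + b) for row, b in zip(w1, b1)]
    y = [sig(sum(w * hi for w, hi in zip(row, h)) + b) for row, b in zip(w2, b2)]
    return h, y

for epoch in range(3000):
    for x, d in pairs:
        h, y = forward(x)
        # output deltas, then backpropagate to the hidden layer
        do = [(d[k] - y[k]) * y[k] * (1 - y[k]) for k in range(N_OUT)]
        dh = [h[j] * (1 - h[j]) * sum(do[k] * w2[k][j] for k in range(N_OUT))
              for j in range(N_HID)]
        for k in range(N_OUT):
            for j in range(N_HID):
                dw2[k][j] = LR * do[k] * h[j] + MOM * dw2[k][j]
                w2[k][j] += dw2[k][j]
            b2[k] += LR * do[k]
        for j in range(N_HID):
            for i in range(N_IN):
                dw1[j][i] = LR * dh[j] * x[i] + MOM * dw1[j][i]
                w1[j][i] += dw1[j][i]
            b1[j] += LR * dh[j]

# Thresholded recommendations (output > 0.5 means "yes") match the rules.
for x, d in pairs:
    _, y = forward(x)
    assert [v > 0.5 for v in y] == [v > 0.5 for v in d]
```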

After training, additional characteristics of an agent were generated for use by the network. Testing the network's trained performance was done by comparing the network's recommendations concerning the flow of information with the rules that served as the basis for generating the additional characteristics. Five sets of characteristics were submitted to the network. In response, the network suggested five information flow recommendations (information structures). As an example, Table III presents three sets of submitted characteristics and the recommendations obtained from the trained network. For the first input set (no. 1 in Table III) the network recommends a decentralized information structure. For the second, a full information structure is recommended. In the third case, the network recommends routine actions without observation or an exchange of information. In each case the recommendations agreed with the IF . . . THEN rules from which the group's characteristics were derived.

V. CONCLUSION

There are 23 IF . . . AND . . . THEN rules developed in the present paper. They provide general knowledge about groups functioning in static and dynamic external environments. In static environments such factors as correlation, interaction, number of group members, and incompleteness are considered. In dynamic environments the corresponding rules consider the character of the dynamics (stable, Brownian, and explosive) as well as delay. The losses caused by incompleteness and the delay of information are also included in the analysis.

In this paper the preliminary results of a procedure for the development of an information structure for autonomous groups, employing neural networks in conjunction with traditional IF . . . THEN rules, have also been presented. Although the training sets were limited by the number of IF . . . THEN rules generated so far, the procedure itself has been successfully applied and demonstrated. The approach presented shows potential for use in real-world problems that are not intuitively straightforward. The neural network approach uses a single methodology for generating useful inferences, rather than using explicit generalization rules. Because the network only generates inferences as needed for a problem, there is no need to generate and store all possible inferences ahead of time. Further research is needed to provide a basis for the selection of a particular neural network for use in the procedure. Also, the effectiveness of the procedure for large problems, in which many information structure parameters have to be determined, remains to be investigated.

The usefulness of any traditional expert system depends on the completeness of the expert knowledge it contains. Knowledge acquisition has been identified as a major bottleneck to the implementation of expert system technology in many areas of engineering [10], [23], [32]. Another limitation is that traditional expert systems are unable to handle situations even slightly different from known prototype conditions. On the other hand, however, one can see a good opportunity for blending traditional expert systems with neural networks. Arguments that either expert systems or neural networks should replace one another should be ignored. The potential is great for both fields, which can be pursued simultaneously to achieve the goal of intelligent behavior.

ACKNOWLEDGMENT

The author would like to thank the anonymous referees for their very helpful comments, suggestions, and editorial assistance.

REFERENCES

REFERENCES

[1] J. S. Albus, "Outline for a theory of intelligence," IEEE Trans. Syst., Man, Cybern., vol. 21, pp. 473-509, 1991.
[2] A. Bosman and H. G. Sol, "Knowledge representation and information systems design," in Knowledge Representation for DSS, L. B. Methlie and R. H. Sprague, Eds. Amsterdam: North Holland, 1985, pp. 81-91.
[3] G. Chryssolouris, M. Lee, J. Pierce, and M. Domroese, "Use of neural networks for the design of manufacturing systems," Manufactur. Rev., vol. 3, no. 3, pp. 187-194, 1990.
[4] C.-H. Chu, "Three blueprints for intelligent PC-based decision support systems," Expert Syst., pp. 41-48, Winter 1990.
[5] T. Deal and A. Kennedy, Corporate Cultures. London: Penguin Business, 1988.
[6] H. B. Eom and S. M. Lee, "A survey of decision support system applications," Interfaces, vol. 20, pp. 65-79, 1990.
[7] P. C. Fishburn, Decision and Value Theory. New York: Wiley, 1964.
[8] M. S. Fox, "Knowledge representation for decision support," in Knowledge Representation for DSS, L. B. Methlie and R. H. Sprague, Eds. Amsterdam: North Holland, 1985, pp. 3-26.
[9] S. I. Gallant, "Connectionist expert systems," Commun. Assoc. Comput. Mach., vol. 31, no. 2, pp. 152-169, 1988.
[10] F. Hayes-Roth, D. Waterman, and D. B. Lenat, Building Expert Systems. Reading, MA: Addison-Wesley, 1983.
[11] J. C. Higgins, Computer-Based Planning Systems. London: Edward Arnold, 1985.
[12] R. M. Hogarth, "Generalization in decision research: The role of formal models," IEEE Trans. Syst., Man, Cybern., vol. SMC-16, pp. 439-449, 1986.
[13] C. Hsu and L. Rattner, "Information modeling for computerized manufacturing," IEEE Trans. Syst., Man, Cybern., vol. 20, pp. 758-776, 1990.
[14] R. L. Keeney and H. Raiffa, Decisions with Multiple Objectives. New York: Wiley, 1976.
[15] J. Kozielecki, Psychological Decision Theory. Dordrecht: Reidel, 1982.
[16] E. Kreyszig, Advanced Engineering Mathematics. New York: Wiley, 1983.
[17] A. J. Maren, C. T. Harston, and R. M. Pap, Handbook of Neural Computing Applications. New York: Academic.
[18] J. Marschak and R. Radner, Economic Theory of Teams. New York: Wiley, 1976.
[19] J. Marti, "Cooperative autonomous behavior of aggregate units over large scale terrain," in AI, Simulation and Planning in High Autonomy Systems, B. Zeigler and J. Rozenblit, Eds. Washington, DC: IEEE Comput. Soc. Press, 1990.
[20] D. Michie, "Current developments in artificial intelligence and expert systems," Zygon, vol. 20, pp. 375-389, 1985.
[21] T. C. Miller, "Two views of generality," IEEE Trans. Syst., Man, Cybern., vol. SMC-16, pp. 450-453, 1986.
[22] G. Mirchandani and W. Cao, "On hidden nodes for neural nets," IEEE Trans. Circuits Syst., vol. 36, no. 5, pp. 661-664, 1989.
[23] A. R. Mirzai, Artificial Intelligence: Concepts and Applications in Engineering. London: Chapman and Hall Computing, 1990.
[24] J. Naisbitt, Megatrends. New York: Warner, 1982.
[25] D. S. Nau, "Expert computer systems," Computer, pp. 63-85, February 1983.
[26] R. Hecht-Nielsen, "Counterpropagation networks," Proc. IEEE 1st Int. Conf. Neural Networks, pp. 19-22, 1987.


[27] J. R. Quinlan, "Decision trees and decisionmaking," IEEE Trans. Syst., Man, Cybern., vol. 20, pp. 339-346, 1990.
[28] R. H. Rasch, "Effect of information systems on the achievement of organizational goals: A modern control theory approach," IEEE Trans. Syst., Man, Cybern., vol. 20, pp. 507-518, 1990.
[29] L. D. Richards and S. K. Gupta, "The systems approach in an information society: A reconsideration," J. Opl. Res. Soc., vol. 36, pp. 833-843, 1985.
[30] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by error propagation," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, D. E. Rumelhart and J. L. McClelland, Eds. Cambridge, MA: MIT Press, 1986, pp. 318-362.
[31] D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge, MA: MIT Press, 1986.
[32] J. N. Siddall, Expert Systems for Engineers. New York: Marcel Dekker, 1990.
[33] T. Sorsa, H. N. Koivo, and H. Koivisto, "Neural networks in process fault diagnosis," IEEE Trans. Syst., Man, Cybern., vol. 21, pp. 815-825, 1991.
[34] E. Szczerbicki, "Technologically based organizations: Mathematical modelling and simulation," Syst. Anal. Model. Simul., vol. 4, pp. 295-307, 1987.
[35] —, "Information delay importance in group's model base for DSS design," Syst. Anal. Model. Simul., vol. 6, pp. 61-71, 1989.
[36] —, "Generalized group functioning evaluation and control system," Syst. Anal. Model. Simul., vol. 6, pp. 853-865, 1989.
[37] —, "Autonomous groups functioning: The role of correlation and interaction," Int. J. Syst. Sci., vol. 21, pp. 2037-2047, 1990.
[38] —, "Information flow evaluation in autonomous groups functioning," IEEE Trans. Syst., Man, Cybern., vol. 21, pp. 402-408, 1991.
[39] —, "Design of atomized organization structure: A graph-theoretic approach," Int. J. Syst. Sci., vol. 23, pp. 109-118, 1992.
[40] T. Tekeguchi and H. Akashi, "Analysis of decisions under risk with incomplete knowledge," IEEE Trans. Syst., Man, Cybern., vol. SMC-14, pp. 618-625, 1984.
[41] H. Theil, Principles of Econometrics. New York: Wiley, 1971.
[42] J. D. Thompson, Organizations in Action. New York: McGraw-Hill, 1977.
[43] V. Vemuri, Artificial Neural Networks: Theoretical Concepts. Cambridge, MA: MIT Press, 1988.
[44] Y. Wei, G. W. Fischer, and J. L. Santos, "A concurrent engineering design environment for generative process planning using knowledge-based decisions," in Concurrent Engineering of Mechanical Systems, E. J. Haug, Ed. Iowa City, IA: Univ. Iowa, 1989.
[45] J. Weinroth, "Model-based decision support and user modifiability," IEEE Trans. Syst., Man, Cybern., vol. 20, pp. 513-518, 1990.
[46] T. Whalen, "Decisionmaking under uncertainty with various assumptions about available information," IEEE Trans. Syst., Man, Cybern., vol. SMC-14, pp. 888-900, 1984.
[47] C. C. White III, "A survey on the integration of decision analysis and expert systems for decision support," IEEE Trans. Syst., Man, Cybern., vol. 20, pp. 358-364, 1990.
[48] P. H. Winston, Artificial Intelligence. Reading, MA: Addison-Wesley, 1984.
[49] Y. O. Yoon, R. W. Brobst, P. R. Bergstresser, and L. L. Peterson, "A connectionist expert system for dermatology diagnosis," Expert Syst., vol. 1, no. 4, pp. 22-31, 1990.
[50] B. P. Zeigler, "High autonomy systems: Concepts and models," in AI, Simulation and Planning in High Autonomy Systems, B. Zeigler and J. Rozenblit, Eds. Washington, DC: IEEE Comput. Soc. Press, 1990.

Edward Szczerbicki received the M.Sc. degree from the Technical University of Gdansk in 1977, the Ph.D. degree in mechanical engineering from the same university in 1983, and the D.Sc. degree in management science from the University of Szczecin.

In 1978, he finished postgraduate study in CAD/CAM and joined the Department of Technology and Organization at the Ship Research Institute, Gdansk, as a junior lecturer. In 1982 he joined the Institute of Organization, Production Systems, and Management, Gdansk, as a lecturer. He is now with the Department of Management and Economics as an Associate Professor.

In 1985 Dr. Szczerbicki was awarded a one-year Postdoctoral Fellowship by The British Council, London, and spent the 1985-1986 academic year at Strathclyde University, Glasgow, Scotland. In 1989 he was awarded a Research Grant by the Kosciuszko Foundation, New York, for his research at the University of Iowa, Iowa City. From 1989 until 1992 he was with the Department of Industrial Engineering, University of Iowa, as a Visiting Professor. In 1993 he was awarded a Senior Research Grant by the German Academic Exchange Service (DAAD, Bonn) for his research at the GMD Research Institute for Innovative Computer Systems and Technologies, Berlin, Germany. He is now with the Technical University of Gdansk. His research interests include the application of artificial intelligence in conceptual design; autonomous systems modeling, design, and integration; and cybernetic design and evaluation of information flow. He is the author or coauthor of more than 70 research papers published in design, manufacturing, cybernetics, and systems journals and conference proceedings.

Dr. Szczerbicki is a member of the American Mathematical Society, Providence, RI; the International Association for the Advancement of Modeling and Simulation Techniques in Enterprises, Tassin-la-Demi-Lune, France; the Polish Cybernetical Association, Warsaw, Poland; and the Polish Operational and Systems Research Association, Warsaw. In 1991 he was listed in Who's Who in the Midwest and was elected a member of The New York Academy of Sciences. He is a reviewer for Mathematical Reviews, IEEE Transactions on Systems, Man, and Cybernetics, and Progress in Cybernetics, Poland.