

520 IEEE SYSTEMS JOURNAL, VOL. 6, NO. 3, SEPTEMBER 2012

A Modeling Framework for Engineered Complex Adaptive Systems

Moeed Haghnevis and Ronald G. Askin

Abstract—The objective of this paper is to develop an integrated method to study emergent behavior and consequences of evolution and adaptation in a certain engineered complex adaptive system. A conceptual framework is provided to describe the structure of a class of engineered complex systems and predict their future adaptive patterns. The proposed modeling approach allows examining complexity in the structure and the behavior of components as a result of their connections and in relation to their environment. Electrical power demand is used to illustrate the applicability of the modeling approach. We describe and use the major differences of natural complex adaptive systems (CASs) with artificial/engineered CASs to build our framework. The framework allows focus on the critical factors of an engineered system, but also enables one to synthetically employ engineering and mathematical models to analyze and measure complexity in such systems without complex modeling. This paper adapts concepts of complex systems science to management science and system-of-systems engineering.

Index Terms—Complex adaptive systems (CASs), decentralization, emergence, engineered complexity, evolution, system of systems.

I. Introduction

TRADITIONALLY, we analyze a system by reductionism. In other words, we study behaviors of large systems by decomposing the system into components, analyzing the components, and then inferring system behavior by aggregation of component behaviors. However, this bottom-up method of describing systems often fails to analyze complex levels and fully describe behavior. Holism reveals that the sum of components is less than the whole system [1]. This idea becomes important in studies of complex systems.

Complex systems have been widely studied; however, there is not yet a comprehensive and widely accepted mathematical model for engineered systems. Defense Research and Development Canada-Valcartier, Valcartier, QC, Canada, distributed four comprehensive reports dedicated to the study of complex systems. The first document provides 471 references and 713 related Internet addresses in a list of projects, organizations, journals, and conferences [2]. The second one provides different formulations and measures of complexity [3]. Their glossary defined 335 related keywords [4]. An overview of theoretical concepts of complexity theory is presented in the fourth

Manuscript received October 29, 2010; revised June 28, 2011; accepted January 12, 2012. Date of publication April 18, 2012; date of current version August 21, 2012.

The authors are with the School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ 85287 USA (e-mail: [email protected]; [email protected]).

Digital Object Identifier 10.1109/JSYST.2012.2190696

document [5]. Magee and Weck [6] classified complex systems and presented several examples for each group. While these surveys show the extent of prior research, they also indicate the lack of a comprehensive engineering model and motivate us to consider engineered complex adaptive systems (ECASs).

Current research (mentioned in the surveys) usually considers natural systems (biological, physical, and chemical systems) where the emergence and evolutionary behaviors can be studied by thermodynamic laws, biological rules, and their intrinsic dynamics that are innate parts of these systems. However, in engineered systems, decision makers or system designers develop or define rules and procedures to engineer the outcomes and control the possibilities as needed. In ECASs, objectives are artificially defined and interoperabilities between components can be manipulated to achieve desired goals; however, objectives and interoperabilities of natural systems are naturally embedded. These facts motivate us to propose a new framework for modeling this class of complex adaptive systems (CASs). Our framework does not design CASs; rather, it enables us to control, or at least predict, mutation issues in ECASs.

This paper considers the hallmarks of ECASs to be emergence, evolution, and adaptation. We define emergence as the capability of components of a system to do something or present a new behavior, in interaction with and dependence on other components, that they are unable to do or present individually. Also, we define evolution as a process of change and agility for the whole system. Adaptation is the ability of systems to learn and adjust to a new environment to promote their survival. Similar definitions can be found in [4]. We will explain how to study these hallmarks in our framework for ECASs in detail.

Study of CASs is challenging because of abstract theoretical concepts, the lack of an applicable complete framework, and difficulty in understanding emergence [1]. The main barrier to analyzing ECASs by traditional methods stems from the theory of complex systems, which focuses on emergence at the lower level and evolution at the upper system level, whereas engineering focuses on purposes and outcomes. Some research has considered complex system science in engineering environments. Scope and Scale [7] studied properties of the structure of complex systems and interdependence of components. The complexity profile [8] helps measure the amount of information needed to describe each level of detail. These methods are not mature enough to analyze and predict ECASs completely.

While electricity consumption profiles will be utilized for illustration and validation, we will discuss how this framework could likewise be applied in other ECASs, such as traffic and

1932-8184/$31.00 © 2012 IEEE

HAGHNEVIS AND ASKIN: MODELING FRAMEWORK FOR ENGINEERED COMPLEX ADAPTIVE SYSTEMS 521

crowd behaviors, wholesale marketing, health care systems, urban design, robotics and AI, supply chain management, modern defense sectors, and other meta-systems. The Electric Power Research Institute, Palo Alto, CA, estimates 26% growth of electricity consumption by 2030 in the U.S. (1.7% annually from 1996 to 2006) [9]. Electric power grids are ECASs with high economic impact driven by the maximum consumption rate and uniformity of aggregate regional demand. Applying our integrated model allows reduction of disuniformity in electricity consumption. Economic incentives motivate local consumers to adjust behavior to limit maximum system usage.

One of the most engineered and mathematically modeled complex systems is complex networks. Previous studies quantify dynamics of small-world networks [10] and model evolutionary structure of population and components in social networks [11]. For example, structural properties of the power grids of Southern California [12] and New York [13] have been analyzed. We will apply some of the concepts of complex network science at the last step of our framework.

In this paper, we focus on human decision making. Humans can adjust their structural artifacts and actions to respond to the challenges and opportunities of their environment. This ability usually increases complexity. Three developed approaches to mimic human decision behaviors are classified by [14]. Most of the research on human networks assumes some kind of hierarchy in the system. These studies are useful in organizational systems that have different levels of authority, such as military and education systems that have leaders and followers. However, complexities in heterarchical systems (in which components share the same authority) have not been studied.

The remainder of this paper is organized as follows. Section II presents our framework. Hallmarks and theoretical concepts of complexity are considered in building this framework. Other sections are mapped to the profiles of the framework. Sections III and IV detail the mathematical mechanisms of features and relationships of components (step 1 of the framework). These lead to analyzing the interoperabilities that induce emergence in Section V (step 2). Evolution of traits as the process of system adaptation and their response to the changes is covered in Section VI (steps 3, 4). Various examples demonstrate the validity of our method in each section.

II. Framework for Engineered Complex Adaptive Systems

Couture and Charpentier [1] and Mostashari and Sussman [15] presented a framework to study complex systems. Prokopenko et al. [16] depicted complex system science concepts. Also, Sheard and Mostashari [17] visualized characteristics of complex systems. Frameworks for ECASs are still incomplete and fragmented. In this paper, we propose a more detailed framework for ECASs (Fig. 1). The framework can help us focus on critical factors that change the states of an ECAS, and enables us to synthetically employ engineering and mathematical models to analyze and measure complexity in an adaptive system without complex modeling. Four profiles of ECASs and their characteristics are presented in component

Fig. 1. Framework for engineered complex adaptive systems.

and system levels to show behavior of the three hallmarks.

In our proposed approach, a preparatory step identifies adaptive complexity in an engineered system. This step is necessary to make sure we do not spend unnecessary resources to analyze a normal system as a complex system. To identify a complex engineered system, we check [18] the following.

1) System structure:
a) displays no or incomplete central organizing for the system organization (prescriptive hierarchically controlled systems are assumed to not be complex systems);
b) behavioral interactions among components at lower levels are revealed by observing behavior of the system at a higher level.

2) Analysis of system behavior:
a) analyzing components fails to explain higher level behavior;
b) a reductionist approach does not satisfactorily describe the whole system.

Total electricity consumption grows every year, affecting the topology of power grids. Some researchers believe this huge growth supports the idea of transformation from a centralized network to a less centralized one (from producer-controlled to consumer-interactive). This decentralization results in complexity in this system by decreasing central organization. Moreover, the interaction of physics with the design of the transmission links increases its complexity, as do the diversity of people, their interdependences, and their willingness to cooperate. Time dependence of the network [19], the scale-free or single-scale feature of these networks (their node degree distribution follows a power-law or Gaussian distribution in the long run) [20], and human decisions based on other consumers all justify considering the electric power grid as an ECAS. These factors have placed the U.S. power grid beyond the capability of mathematical modeling to date [13].

To take advantage of the fundamental theories of complex systems, we study and analyze complex systems based


on the framework in Fig. 1. Systems are composed of components. Components possess individual features and interoperable behaviors. Systems then have traits and learning behaviors. Together, these form the system profile comprised of the following aspects (we define the state of each profile in parentheses).

1) Features (components readjust themselves continuously): Here, dissection of features leads to decomposability (e.g., number of each component type and patterns of individual behaviors) and willingness (e.g., fitness rate of each component and behavioral/decision rules). The environment of the system may also affect component actions. A measurable property of this profile is self-information (entropy) of components. Entropy is increased with the diversity of components and is decreased with their compatibility. Sections III and IV mathematically model and analyze the dissection of features and show how self-organization appears.

2) Interoperabilities (components update their interdependences): In this profile, emergence as the hallmark of interoperability shows what components can do, in interaction with and dependence on other components, that they would not do individually. Components have exchangeability and synchronization. Autonomy increases and dependence decreases the interrelationship of components. This profile helps us to infer the behavior of the components. Section V models this profile.

3) Traits (system tries to improve its efficiency and effectiveness): In this profile, systems may evolve. The whole system applies its resilience and agile abilities to perform more effectively and efficiently. Categories of trait structures or behaviors will be considered here. The threshold for changing the nature or perceived characteristic of the system is the measurable property of this profile. It is discussed in Section VI.

4) Learning (system has flexibility to perform in unforeseen situations): After evolving, the system must adapt to the new situation. Systems need to be adaptive to survive; otherwise, they may collapse in dynamic conditions. Flexibility and robustness allow systems to adapt and show the performance of the system. In some studies, adaptation is one kind of evolution, while other researchers delineate a difference between evolution and adaptation (modeled in Section VI).

We define complexity of a system with the measurable properties of the profiles: entropy (E), interoperabilities (I), and evolution thresholds (τ). E measures diversity versus compatibility of component features (Sections III, IV). Is define sensitivity (autonomy versus dependence) to other related components and their effects (Section V). τs are milestones for changes and adjustments in the system performance that can differentiate trait categories (Section VI). In addition, a system may have a goal. In our case, this is to minimize disuniformity of electricity demand, D, to be formally defined later in this paper.

The framework starts with dissection of features. First, we study dynamics of components similar to noncomplex systems (Sections III-A, III-B). Then, we define a new measure to depict the relationships (Section III-C). These relationships are the initial source of emergence and are defined based on ECAS goals (in natural CASs, unlike ECASs, this measure is embedded in the system and should be found by analyzing the system behavior).

Then, we focus on the emergence phenomena of ECASs as the core concept of complex adaptive behaviors and the source of dynamic evolution. We present a comprehensive section on dissection of features and propose four detailed theorems to show controllability and predictability of the framework at the emergence level of a system. Then, we generalize the theorems in the comprehensive theory of mechanisms of components for ECASs (Section IV).

To distinguish an ECAS from a pure multiagent system (MAS), we define interoperability as the behavioral changes that are caused by interactions (Section V). In MASs, components have relationships; however, in CASs the interactions and behaviors evolve. Interoperability shows how components cooperate/compete based on other components and interactions to evolve and adapt to new environments (see new measures in Section V). While either would suffice, we use the term interoperability instead of interaction to indicate information sharing and beneficial behavior coordination. Finally, the framework shows the adaptability and learning behavior of a system in Section VI.

III. Dissection of Features

Various studies apply the concept of information theory to study system complexities. The key point is that the required length to describe a system is related to its complexity [21]. Yu and Efstathiou defined a complexity measure based on entropy and a quantitative method to evaluate the performance of manufacturing networks [22]. These studies applied the concept of entropy in their research; however, they did not discuss other hallmarks of CASs. Here, we start with the same idea and then extend it to the other hallmarks.

A. Exponential Fitness

Consider a system of components with n different patterns of behavior. For example, there may be n daily electricity usage profiles for the different classes of consumers. If the population of pattern i ($X_i$, $i = 1, \ldots, n$) changes exponentially with fitness rate $b_i$

$X_i(t + 1) = b_i \cdot X_i(t) + X_i(t)$ or $\frac{\Delta X_i}{\Delta t} = b_i X_i$. (1)

To increase the readability of the formulation, in the following sections all ts are suppressed from the expressions except when necessary to compare different times. The probabilities of the patterns can be measured by the percentage of each pattern

$P_i = \frac{X_i}{\sum X_i}$. (2)

We obtain the growth equation for the percentage of each group

$\frac{dP_i}{dt} = \frac{b_i X_i \sum X_i - X_i \sum b_i X_i}{(\sum X_i)^2} = b_i P_i - P_i \sum b_i P_i$. (3)
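The discrete update (1) and the share dynamics (3) can be checked numerically. Below is a minimal sketch; the two pattern populations and fitness rates are illustrative values, not data from the paper:

```python
# Discrete exponential fitness, eq. (1): X_i(t+1) = X_i(t) + b_i * X_i(t),
# and pattern shares, eq. (2): P_i = X_i / sum(X_i).
def step(X, b):
    """Advance every pattern population one period."""
    return [x + bi * x for x, bi in zip(X, b)]

def shares(X):
    total = sum(X)
    return [x / total for x in X]

X = [100.0, 400.0]   # illustrative populations of two behavior patterns
b = [0.10, 0.02]     # pattern 1 (the minority) has the larger fitness rate

P0 = shares(X)
for _ in range(50):
    X = step(X, b)
P50 = shares(X)

# Consistent with eq. (3), the share of pattern 1 grows because its
# fitness b_1 exceeds the share-weighted mean fitness sum(b_j * P_j).
print(P0[0], P50[0])
```

Running the loop shows the faster-growing minority pattern overtaking the other in share, which is the mechanism the entropy analysis of Section IV builds on.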


In the long run, we may assume small periods of t as continuous intervals. In continuous time, the exponential function (4) replaces (1)

$X_i = \alpha_i e^{\beta_i t}$ or $\frac{dX_i}{dt} = \alpha_i \beta_i e^{\beta_i t}$ (4)

$\frac{dP_i}{dt} = \beta_i P_i - P_i \sum \beta_i P_i$ (5)

where $P_i = \frac{\alpha_i e^{\beta_i t}}{\sum_i \alpha_i e^{\beta_i t}}$. To find the self-information of components, we can measure the entropy of the population by

$E = -\sum P_i \log_2 P_i$. (6)

So the growth of entropy is

$\frac{dE}{dt} = -\sum \left[ \frac{dP_i}{dt} \left( \frac{1}{\ln 2} + \log_2 P_i \right) \right]$. (7)

From (3) and (7)

$\frac{dE}{dt} = \sum b_i P_i \left( \sum P_i \log_2 P_i - \log_2 P_i \right)$. (8)
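Equation (8) gives the entropy growth rate directly from the current shares and fitness rates. A small numerical check (shares and rates are illustrative):

```python
import math

def entropy(P):
    """E = -sum P_i log2 P_i, eq. (6)."""
    return -sum(p * math.log2(p) for p in P if p > 0)

def entropy_growth(P, b):
    """Eq. (8): dE/dt = sum_i b_i P_i (sum_j P_j log2 P_j - log2 P_i)."""
    s = sum(p * math.log2(p) for p in P)
    return sum(bi * p * (s - math.log2(p)) for p, bi in zip(P, b))

P = [0.2, 0.8]     # illustrative shares: pattern 1 is the minority
b = [0.10, 0.02]   # and has the larger fitness rate

rate = entropy_growth(P, b)
print(rate)        # positive: the shares are moving toward uniform
```

With the minority pattern growing faster, the shares move toward uniform and (8) evaluates positive, matching the E ↑ cases analyzed in Section IV.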

B. Logistic Fitness

If population $X_i$ has limit $L_i$, its growth follows a logistic function, and (1) changes to

$\frac{dX_i}{dt} = b_i X_i \left(1 - \frac{X_i}{L_i}\right)$. (9)

Thus, (3) becomes

$\frac{dP_i}{dt} = b_i P_i \left(1 - \frac{X_i}{L_i}\right) - P_i \left[\sum b_i P_i \left(1 - \frac{X_i}{L_i}\right)\right]$. (10)

Define the growth potential $\mu_i = 1 - \frac{X_i}{L_i}$; then

$\frac{dP_i}{dt} = P_i \left(b_i \mu_i - \sum b_i \mu_i P_i\right)$. (11)

From (11) and (7), (8) can be rewritten as follows:

$\frac{dE}{dt} = \sum \mu_i b_i P_i \left(\sum P_i \log_2 P_i - \log_2 P_i\right)$. (12)
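The logistic dynamics (9) can be simulated with a simple Euler integration; as each $X_i$ approaches its limit $L_i$, the growth potential $\mu_i$ in (11) vanishes. A sketch with illustrative parameters:

```python
# Logistic fitness, eq. (9): dX_i/dt = b_i X_i (1 - X_i / L_i),
# integrated with a plain Euler scheme (step size and parameters
# are illustrative).
def logistic_step(X, b, L, dt=0.01):
    return [x + bi * x * (1 - x / li) * dt for x, bi, li in zip(X, b, L)]

X = [10.0, 50.0]
b = [0.5, 0.3]
L = [200.0, 100.0]      # logistic limits (carrying capacities)
for _ in range(5000):   # integrate to t = 50
    X = logistic_step(X, b, L)

# Each population saturates near its limit L_i, so the growth
# potential mu_i = 1 - X_i / L_i of eq. (11) tends to zero.
mu = [1 - x / li for x, li in zip(X, L)]
print(X, mu)
```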

Growth of entropy shows how the population changes in time by the exponential or logistic function (entropy is self-information). However, it is not sufficient for interpreting the combination of components, as any combination of three components with probabilities 0.3, 0.3, and 0.4 leads to the same entropy. In addition, engineered systems have a defined goal that is not shown in the entropy (we call it disuniformity).

C. Disuniformity

Let $C_i^t(w)$ be the average consumption of electricity at time w for pattern i in period t. The disuniformity of pattern i in time t is as follows:

$D_i(t) = \frac{\int_0^{w_0} (C_i^t(w) - \bar{C}_i^t)^2 \, dw}{w_0} \quad \forall t$ (13)

where $\bar{C}_i^t = \frac{\int_0^{w_0} C_i^t(w) \, dw}{w_0}$ and w is a cyclic time in period t. For example, if we want to show patterns of consumption in each quarterly season for the next 20 years, $w_0$ covers the 24 h of consumption each day while t ($t = 1, \ldots, 80$) shows each season. The pattern of consumption in the first season $C_i^1(w)$ may be different from the second one $C_i^2(w)$. We will illustrate how disuniformity can be extended to other ECASs. At first glance, the disuniformity of an individual component, (13), looks similar to variance. We do not use this term because the consumption is not considered as a random variable. Furthermore, it is customary to refer to the variance as the range/noise of consumption at a specific time w.
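A discretized version of (13) makes the measure concrete; the 24 h profiles below are illustrative, not consumption data from the paper:

```python
# Discretized eq. (13): normalized squared deviation of a cyclic
# consumption profile C(w) from its average over the cycle w0.
def disuniformity(C, w0=24.0):
    dw = w0 / len(C)
    mean = sum(c * dw for c in C) / w0
    return sum((c - mean) ** 2 * dw for c in C) / w0

flat = [1.0] * 24                  # perfectly uniform hourly consumption
peaky = [0.5] * 18 + [3.0] * 6     # evening-peak pattern

D_flat = disuniformity(flat)       # zero: no deviation from the mean
D_peaky = disuniformity(peaky)
print(D_flat, D_peaky)
```

A perfectly uniform profile has zero disuniformity, while a peaked pattern scores high; this is exactly the quantity the control objective (14) penalizes in the aggregate.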

The control objective is to minimize the disuniformity (consumers cooperate to have uniform aggregate consumption at each time). Thus, we seek to minimize D

$D = \int_0^{w_0} \left( \frac{\sum_{i \in S} C_i(w) X_i - \frac{1}{w_0} \int_0^{w_0} \sum_{i \in S} C_i(w) X_i \, dw}{\sum_{i \in S} X_i} \right)^2 dw$ (14)

where population S is a connected graph of components to show their interactions. These interactions are the source of interoperabilities in Section V. Note that we remove ts in our formulas to increase readability; however, D, C, and X are functions of t.
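A discretized version of (14) shows why complementary patterns help: deviations of opposite sign cancel in the aggregate. The two profiles and populations below are illustrative:

```python
# Discretized eq. (14): deviation of population-weighted aggregate
# consumption, per component, from its cycle average.
def aggregate_disuniformity(profiles, X, w0=24.0):
    n_w = len(profiles[0])
    dw = w0 / n_w
    total = sum(X)
    agg = [sum(C[k] * x for C, x in zip(profiles, X)) for k in range(n_w)]
    mean = sum(a * dw for a in agg) / w0
    return sum(((a - mean) / total) ** 2 * dw for a in agg)

day = [0.5] * 12 + [2.0] * 12     # day-peaking pattern
night = [2.0] * 12 + [0.5] * 12   # complementary night-peaking pattern

D_mixed = aggregate_disuniformity([day, night], [100.0, 100.0])
D_single = aggregate_disuniformity([day, night], [200.0, 0.0])
print(D_mixed, D_single)   # the 50/50 mix is perfectly uniform
```

Rearranging class populations toward complementary patterns drives D to zero, which is the self-organization incentive described below.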

Generally, we define disuniformity as a normalized measure of difference between the current state of components and the goal state. Disuniformity could be reduced by incentives that change one or more profiles or rearrange class probabilities (a source of self-organization).

Here, we use disuniformity to show how the system behaves as an ECAS (we will show how it causes dependences between behaviors later). Concepts from information theory are adapted to describe complexity, self-organization, and emergence in the context of our ECASs [16]. Controlling disuniformity is a source of self-organization in ECASs (see Section IV). Shalizi [23] and Shalizi et al. [24] defined a quantification of self-organization for discrete random fields (e.g., cellular automata). We reinterpret these concepts to apply them in ECASs that may have continuous states and, unlike natural physical systems, may not have a natural embedded energy dynamic or self-directing law. Self-organization and adaptive agents are analyzed by [25]. We will extend these concepts to all hallmarks of ECASs. Bashkirov [26] described self-organization in a complex system by using Renyi and Gibbs-Shannon entropy. These studies are applicable in natural and physical systems. For example, a biological application, gene-gene and gene-environment interactions, is identified by interaction information and generalization of mutual information in [27].

IV. Entropy Versus Disuniformity, Source of Self-Organization

In this section, we connect the concepts of entropy and disuniformity for component patterns. We prove lemmas for a system with two components that interact in a basic dominance scenario. Then, we generalize our lemmas to more complicated


structures of patterns and behaviors for the n-component case. These theorems allow control and prediction of the behaviors of features and their relationships, and enable us to study emergence by modeling interoperability in the next section.

Definition I:

1) Dominance: behavior i dominates behavior j if $D_i \leq D_j$.

2) Strict positive dominance: behavior i strictly positively dominates behavior j if $D_i < D_j$, $|C_i(w) - \bar{C}_i| \leq |C_j(w) - \bar{C}_j|$ for all w, and $\mathrm{sgn}(C_i(w) - \bar{C}_i) = \mathrm{sgn}(C_j(w) - \bar{C}_j)$ for all w.

3) Positive dominance: behavior i positively dominates behavior j if $D_i < D_j$, $|C_i(w) - \bar{C}_i| > |C_j(w) - \bar{C}_j|$ for some w, and $\mathrm{sgn}(C_i(w) - \bar{C}_i) = \mathrm{sgn}(C_j(w) - \bar{C}_j)$ for all w.

4) Strict negative dominance: behavior i strictly negatively dominates behavior j if $D_i < D_j$, $|C_i(w) - \bar{C}_i| \leq |C_j(w) - \bar{C}_j|$ for all w, and $\mathrm{sgn}(C_i(w) - \bar{C}_i) \neq \mathrm{sgn}(C_j(w) - \bar{C}_j)$ for all w.

5) Negative dominance: behavior i negatively dominates behavior j if $D_i < D_j$, $|C_i(w) - \bar{C}_i| > |C_j(w) - \bar{C}_j|$ for some w, and $\mathrm{sgn}(C_i(w) - \bar{C}_i) \neq \mathrm{sgn}(C_j(w) - \bar{C}_j)$ for all w.
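Definition I can be checked mechanically on discretized profiles. The sketch below tests strict positive dominance; the two profiles are illustrative, and the discrete disuniformity is a simplified stand-in for (13):

```python
# Mechanical check of Definition I.2): i strictly positively dominates j
# when D_i < D_j, i's centered deviation is never larger in magnitude
# than j's, and the two deviations always share a sign.
def centered(C):
    mean = sum(C) / len(C)
    return [c - mean for c in C]

def disuniformity(C):
    dev = centered(C)
    return sum(d * d for d in dev) / len(dev)

def strictly_positively_dominates(Ci, Cj):
    di, dj = centered(Ci), centered(Cj)
    same_sign = all(a * b >= 0 for a, b in zip(di, dj))
    never_larger = all(abs(a) <= abs(b) for a, b in zip(di, dj))
    return disuniformity(Ci) < disuniformity(Cj) and same_sign and never_larger

mild = [0.9, 1.1, 0.9, 1.1]   # small swings around its mean
wild = [0.5, 1.5, 0.5, 1.5]   # same shape, larger swings

print(strictly_positively_dominates(mild, wild))   # True
print(strictly_positively_dominates(wild, mild))   # False
```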

Note that E is increasing in time (E ↑) means $E(t + 1) > E(t)$, and (E ↓) means $E(t + 1) < E(t)$. We use the same definitions for (D ↑) and (D ↓). Here, $P_i$ refers to $P_i(t)$.

Lemma I: Given two different patterns of behavior (i and j) in the population, where i strictly positively dominates j:

I.1) $P_i < P_j$ ($X_i < X_j$) and $b_i > b_j$ iff E is increasing in time (E ↑) and D decreases in time (D ↓);

I.2) $P_i > P_j$ ($X_i > X_j$) and $b_i > b_j$ iff E is decreasing in time (E ↓) and D decreases in time (D ↓);

I.3) $P_i < P_j$ ($X_i < X_j$) and $b_i < b_j$ iff E is decreasing in time (E ↓) and D increases in time (D ↑);

I.4) $P_i > P_j$ ($X_i > X_j$) and $b_i < b_j$ iff E is increasing in time (E ↑) and D increases in time (D ↑).

Proof (Sufficiency of Lemma I): We are given n = 2, $P_i(t) + P_j(t) = 1$, and $P_i(t) < P_j(t)$; so $P_i(t) < 1/2$ and $P_j(t) > 1/2$. Also, $b_i > b_j$ results in

$\frac{X_i(t)}{X_i(t) + X_j(t)} < \frac{b_i X_i(t) + X_i(t)}{b_i X_i(t) + X_i(t) + b_j X_j(t) + X_j(t)}$ (15)

thus $P_i(t) < P_i(t + 1)$ and, similarly, $P_j(t) > P_j(t + 1)$. So the probabilities are closer to a uniform distribution ($P_i$ is closer to $P_j$) in t + 1.

Recall that the uniform distribution of $X_i$s (frequency of patterns) gives the maximum entropy of the system (see [28] for proof). Suppose $P_i = \frac{1}{n}$ is the uniform probability mass function for $X_i$, $i = 1, \ldots, n$; then the maximum entropy of the system is $\log_2 n$.

From the recall, max(E) = 1 when n = 2 and $P_i(t) = P_j(t) = 1/2$ at time t; hence, E is an increasing function of time t, i.e., $E(t + 1) > E(t)$ while $P_i(t) < P_j(t)$.

Furthermore, because i strictly dominates j for all time intervals w and has similar sign with j, increasing the proportion $\frac{X_i}{X_j}$ decreases disuniformity in (14) because here

$|C_i(w) - \bar{C}_i| \leq |C_j(w) - \bar{C}_j|$ (16)

$\frac{X_i(t)}{X_j(t)} < \frac{X_i(t + 1)}{X_j(t + 1)}$ (17)

and $D_i < D_j$; therefore, $D(t + 1) < D(t)$, i.e., D decreases. It is easy to show that in Lemma I.2) E is a decreasing function of t, and the same argument applies for I.3) and I.4).

Necessity of Lemma I (Proof by Contradiction): Suppose E increases and D decreases but one or both conditions of Lemma I.1) do not hold. In this case, the necessary conditions for one of I.2), I.3), or I.4) hold. For example, if $b_i > b_j$ but $X_i > X_j$ instead of $X_i < X_j$, this is Lemma I.2) and E decreases, which contradicts our assumption of I.1). Note that we do not consider $b_i = b_j$ or $P_i = P_j$, because they are neutral cases and do not have any effect. So all four combinations of bs and Ps are generated in this lemma.
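Lemma I.1) can also be illustrated by simulation: start with the dominant pattern in the minority but growing faster, and track E and D over time. All profiles, populations, and rates below are illustrative:

```python
import math

# Simulation of Lemma I.1): pattern i is strictly positively dominant
# (flatter profile), starts in the minority (P_i < P_j), and grows
# faster (b_i > b_j); entropy E should rise while disuniformity D falls.
def entropy(P):
    return -sum(p * math.log2(p) for p in P if p > 0)

def aggregate_D(profiles, X):
    """Discretized stand-in for eq. (14)."""
    n_w = len(profiles[0])
    total = sum(X)
    agg = [sum(C[k] * x for C, x in zip(profiles, X)) for k in range(n_w)]
    mean = sum(agg) / n_w
    return sum(((a - mean) / total) ** 2 for a in agg) / n_w

flat = [0.9, 1.1, 0.9, 1.1]    # pattern i: lower disuniformity
spiky = [0.5, 1.5, 0.5, 1.5]   # pattern j: same shape, larger swings
X = [100.0, 900.0]             # P_i < P_j
b = [0.10, 0.02]               # b_i > b_j

E0 = entropy([x / sum(X) for x in X])
D0 = aggregate_D([flat, spiky], X)
for _ in range(20):
    X = [x + bi * x for x, bi in zip(X, b)]
E1 = entropy([x / sum(X) for x in X])
D1 = aggregate_D([flat, spiky], X)
print(E0, E1, D0, D1)   # E rises, D falls
```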

Corollary I: When conditions of Lemma I hold and $t \to \infty$:

I.1) in exponential growth, $D_i$ is a lower bound for D and $E \in (0, 1)$ when D decreases [Lemma I.1), I.2)]; also, $D_j$ is an upper bound for D and $E \in (0, 1)$ when D increases [Lemma I.3), I.4)];

I.2) consider logistic growth where f, f′, g, and g′ are functions of the logistic limits $L_i$; then, $\max\{D_i, f(L_i)\}$ is a lower bound for D and $E \in (0, g(L_i))$ when D decreases [Lemma I.1), I.2)]; also, $\min\{D_j, f'(L_j)\}$ is an upper bound for D and $E \in (0, g'(L_j))$ when D increases [Lemma I.3), I.4)].

Proof: In Corollary I.1), D decreases when the proportion $\frac{X_i}{X_j}$ increases (due to the dominance condition), so $\min(D) = D_i$ when all components are i ($\frac{X_i}{X_j} \to \infty$ and E = 0); and D increases when the proportion $\frac{X_i}{X_j}$ decreases, so $\max(D) = D_j$ when all components are j ($\frac{X_i}{X_j} \to 0$ and E = 0). However, $\max(E) = \log_2 n$ and n = 2, so max(E) = 1 and E is nonnegative.

When the fitness follows a logistic function [Corollary I.2)], we have limits for the numbers of is and js: $\frac{X_i}{X_j} < \infty$ if $X_j \neq 0$ and $\frac{X_i}{X_j} > 0$ if $X_i \neq 0$. So min(D) is a function of the limit of i when $\frac{X_i}{X_j}$ increases, and max(D) is a function of the limit of j when $\frac{X_i}{X_j}$ decreases. Clearly, $\min(D) = D_i$ when $X_j = 0$ and $\max(D) = D_j$ when $X_i = 0$. Using the same argument, we can find the range of E, which is a function of the limits.

Theorem I: Given n different patterns of behavior (i = 1, ..., n) in population S, bk ≥ 0, ∀k ∈ S and i ≻ j, for i ∈ S′ and j ∈ S − S′:
I.1) E < − log2 Pi (∑i∈S Pi log2 Pi > log2 Pi) and bi > bj for i ∈ S′ and j ∈ S − S′ iff E is increasing in time (E ↑) and D decreases in time (D ↓);
I.2) E > − log2 Pi (∑i∈S Pi log2 Pi < log2 Pi) and bi > bj for i ∈ S′ and j ∈ S − S′ iff E is decreasing in time (E ↓) and D decreases in time (D ↓);
I.3) E < − log2 Pi (∑i∈S Pi log2 Pi > log2 Pi) and bi < bj for i ∈ S′ and j ∈ S − S′ iff E is decreasing in time (E ↓) and D increases in time (D ↑);
I.4) E > − log2 Pi (∑i∈S Pi log2 Pi < log2 Pi) and bi < bj for i ∈ S′ and j ∈ S − S′ iff E is increasing in time (E ↑) and D increases in time (D ↑).

Proof (Sufficiency of Theorem I): This theorem generalizes Lemma I to n components. As in Lemma I, the entropy of the system increases when the probability distribution of the components moves closer to the uniform distribution. This happens when, for exponential growth in (8) or for logistic growth in (12), ∑i∈S Pi log2 Pi = log2 Pi. To reach this point, E increases if there is a larger fitness rate for components whose probability is less than under the uniform distribution. In general, larger fitness rates increase the entropy if − log2 Pi > E [Theorem I.1)] for the cases where we cannot reach the uniform distribution, or when we compare components that all have smaller or larger probabilities than uniform.

As in Lemma I, increasing the number of dominant components decreases the total disuniformity (14). The same argument proves Theorem I.2), I.3), and I.4). The necessity of Theorem I can be proved by contradiction.

Corollary II: When conditions of Theorem I hold and t → ∞:
II.1) Corollary I.1) can be generalized to n components in Theorem I with E ∈ (0, log2 n);
II.2) Corollary I.2) can be generalized to n components in Theorem I with different f, f′, g, and g′ functions.
Note that bk > 0, ∀k ∈ S means all Xis are growing over time; however, some Pis may decrease.

Lemma II: Given two different patterns of behavior (i and j) in the population and i ⪰ j, Lemma I.1), I.2), I.3), and I.4) and Corollary I.1) and I.2) remain valid.

Proof: This is a generalization of Lemma I to the positive dominance case. This case allows j to dominate i in some time interval w; however, the proof remains valid because D is the total disuniformity.

Theorem II: Given n different patterns of behavior (i = 1, ..., n) in population S, bk ≥ 0, ∀k ∈ S and i ⪰ j, for i ∈ S′ and j ∈ S − S′, Theorem I.1), I.2), I.3), and I.4) and Corollary II.1) and II.2) remain valid.

Proof: This theorem is a generalization of Lemma II to n components; the argument used to generalize Lemma I to Theorem I applies equally to generalize Lemma II to Theorem II.

Example 1 (Features in Fig. 1): Assume there are 100 components in a complex system which follow only three patterns i, j, and k. At time t = 1, 15% of the components follow pattern i, 65% follow j, and 20% follow k. Let bi = 0.2, bj = 0.1, and bk = 0.3. Fig. 2(a) shows the patterns of electricity consumption over 24 h. The objective is to simulate and analyze the complex system for the next 20 years (80 seasons).

At t = 1 the system follows Theorem II.1): Pi(t = 1) = 0.15, Pj(t = 1) = 0.65, Pk(t = 1) = 0.2, E(t = 1) = 1.28, D(t = 1) = 65.08.
At t = 9 we have max(i) [follows Theorem II.2)]: Pi(t = 9) = 0.18, Pj(t = 9) = 0.38, Pk(t = 9) = 0.44, E(t = 9) = 1.49, D(t = 9) = 59.08.
At t = 19 disuniformity starts increasing again: Pi(t = 19) = 0.13, Pj(t = 19) = 0.12, Pk(t = 19) = 0.75, E(t = 19) = 1.07, D(t = 19) = 57.45 (D(t = 18) = 57.36).

Fig. 2. Example for Theorem II. (a) Patterns. (b) Fitness. (c) D versus E.

Fig. 2(b) shows the probability changes, and Fig. 2(c) presents the behavior of the components, simulating the entropy and disuniformity of the system for 80 seasons. Fig. 2(c) shows the three different possible areas for Theorem II.
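The population probabilities and entropies quoted above can be reproduced with a short script. This is a sketch: the discrete growth rule X(t + 1) = X(t)(1 + b) is an assumption (it reproduces the published counts, e.g., ≈156 components at t = 4 as used later in Example 4), and the disuniformity D is omitted because (14) additionally requires the consumption profiles Ci(w) of Fig. 2(a).

```python
import math

def entropy(probs):
    """Shannon entropy (base 2) of a probability vector, as in (6)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Example 1 data: components following each pattern at t = 1, and fitness rates.
X = {'i': 15.0, 'j': 65.0, 'k': 20.0}
b = {'i': 0.2, 'j': 0.1, 'k': 0.3}

history = []
for t in range(1, 81):                  # 80 seasons
    total = sum(X.values())
    P = {g: X[g] / total for g in X}
    history.append((t, P, entropy(P.values())))
    for g in X:                         # assumed growth rule X(t+1) = X(t)(1 + b)
        X[g] *= 1 + b[g]

_, P1, E1 = history[0]                  # t = 1
_, P9, E9 = history[8]                  # t = 9
print(round(E1, 2), round(E9, 2))       # → 1.28 1.49
```

The probabilities at t = 9 round to 0.18, 0.38, and 0.44, matching the values stated for Theorem II.2) above.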

Lemma III: Given two different patterns of behavior (i and j) in the population and i ≻ j:
III.1) Pi < Pj (Xi < Xj) and bi > bj iff E is increasing in time (E ↑) and D decreases in time (D ↓) until D = 0 (Xi∫(Ci(w) − Ci)dw = Xj∫(Cj(w) − Cj)dw); afterward D increases in time (D ↑);
III.2) Pi > Pj (Xi > Xj) and bi > bj iff E is decreasing in time (E ↓) and D decreases in time (D ↓) until D = 0 (Xi∫(Ci(w) − Ci)dw = Xj∫(Cj(w) − Cj)dw); afterward D increases in time (D ↑);
III.3) Pi < Pj (Xi < Xj) and bi < bj iff E is decreasing in time (E ↓) and D increases in time (D ↑);
III.4) Pi > Pj (Xi > Xj) and bi < bj iff E is increasing in time (E ↑) and D increases in time (D ↑).

Proof: To prove this lemma, we must consider the different sgn(Ci(w) − Ci) between the disuniformity of i and j, for all w. The total disuniformity therefore decreases until it reaches 0 and increases after that [because of the power of 2 in (14)]. D = 0 when the weighted disuniformity of all components i equals the weighted disuniformity of all components j. When the total disuniformity increases [Lemma III.3), III.4)] we need not consider any minimum point, because the function is nondecreasing.


Corollary III: When conditions of Lemma III hold and t → ∞:
III.1) in exponential growth, ∃ε > 0 where D < ε (ε is a lower bound for D) and E ∈ (0, 1) when D decreases [Lemma III.1), III.2)]; also, Dj is an upper bound for D and E ∈ (0, 1) when D increases [Lemma III.3), III.4)];
III.2) in logistic growth, max{0, f(Li)} is a lower bound for D and E ∈ (0, g(Li)) when D decreases [Lemma III.1), III.2)]; also, min{Dj, f′(Lj)} is an upper bound for D and E ∈ (0, g′(Lj)) when D increases [Lemma III.3), III.4)].

Proof: The proof is similar to Corollary I; however, for a specific w = w0 where Xi∫(Ci(w0) − Ci)dw0 ≈ Xj∫(Cj(w0) − Cj)dw0, we have D ≈ 0. This point may occur before all components become similar to is, so min(D) = 0 where E ≠ 0, and E = 0 where D ≠ 0.

Theorem III: Given n different patterns of behavior (i = 1, ..., n) in population S, bk ≥ 0, ∀k ∈ S and i ≻ j for i ∈ S′ and j ∈ S − S′:
III.1) E < − log2 Pi (∑i∈S Pi log2 Pi > log2 Pi) and bi > bj for i ∈ S′ and j ∈ S − S′ iff E is increasing in time (E ↑) and D decreases in time (D ↓) until D = 0 (∑Xi∫(Ci(w) − Ci)dw = ∑Xj∫(Cj(w) − Cj)dw); afterward D increases in time (D ↑);
III.2) E > − log2 Pi (∑i∈S Pi log2 Pi < log2 Pi) and bi > bj for i ∈ S′ and j ∈ S − S′ iff E is decreasing in time (E ↓) and D decreases in time (D ↓) until D = 0 (∑Xi∫(Ci(w) − Ci)dw = ∑Xj∫(Cj(w) − Cj)dw); afterward D increases in time (D ↑);
III.3) E < − log2 Pi (∑i∈S Pi log2 Pi > log2 Pi) and bi < bj for i ∈ S′ and j ∈ S − S′ iff E is decreasing in time (E ↓) and D increases in time (D ↑);
III.4) E > − log2 Pi (∑i∈S Pi log2 Pi < log2 Pi) and bi < bj for i ∈ S′ and j ∈ S − S′ iff E is increasing in time (E ↑) and D increases in time (D ↑).

Corollary IV: When conditions of Theorem III hold and t → ∞:
IV.1) Corollary III.1) can be generalized to n components in Theorem III with E ∈ (0, log2 n);
IV.2) Corollary III.2) can be generalized to n components in Theorem III with different f, f′, g, and g′ functions.

Lemma IV: Given two different patterns of behavior (i and j) in the population and i ⪰ j, Lemma III.1), III.2), III.3), and III.4) and Corollary III.1) and III.2) apply.

Theorem IV: Given n different patterns of behavior (i = 1, ..., n) in population S, bk ≥ 0, ∀k ∈ S and i ⪰ j, for i ∈ S′ and j ∈ S − S′, Theorem III.1), III.2), III.3), and III.4) and Corollary IV.1) and IV.2) apply.

Example 2 (Features in Fig. 1): Modify Example 1 to three components with negative dominance, Theorem IV [Fig. 3(a)]. Fig. 3(b) shows the behavior of the complex system, and Fig. 3(c) shows the different possible cases of Theorem IV for the scenario of Fig. 3(b).

Fig. 3. Example for Theorem IV. (a) Pattern. (b) Fitness. (c) D versus E.

Summary: We can summarize the results of Theorems I, II, III, and IV in Table I and conclude Theorem V as a general theorem to control decomposability and willingness of components of a complex system in all dominance cases.

Theorem V (Mechanisms of Components): If i ≻ j, i.e., the is dominate the js, the disuniformity of the system is decreasing in time if the entropy increases in time when − log2 Pi > E, or if the entropy decreases in time when − log2 Pi < E, while ∑Xi∫(Ci(w) − Ci)dw < ∑Xj∫(Cj(w) − Cj)dw for both conditions.

We can apply this theorem to control, or at least predict, the complex behaviors in large ECASs. Here, we provide incentives that motivate the components to decrease the disuniformity by adjusting their patterns (this adjustment changes the fitness rates bi dynamically). This heterarchical rearrangement, with external changes to the environment but without central organization, is a source of self-organization in the components. As an illustration, assume n patterns of consumption in a system. When n is large (e.g., patterns of consumers in a large metropolitan area), it is impossible to control and predict all behaviors and their relationships. We can instead focus on a few groups (patterns i where − log2 Pi > E) and increase the entropy by motivating other consumers to adjust to these patterns (migrate to the pattern or increase its fitness portion). This phenomenon makes the fitness rates nonlinear, complex, and dynamic, i.e., bi = K(R(D); E), where K is a function of R(D) and of the population of the other patterns (i.e., E), and R(D) represents the motivations based on D (e.g., rewards that consumers receive for cooperating to reduce the disuniformity). These changes in the bis make the Xis dependent on each other. To predict the behaviors at each time, we can map the system conditions (dominance, entropy, and fitness rates)


TABLE I
Summary of Emergence (Rows Correspond to the Dominance Cases of Theorems I–IV)

                      − log2 Pi > E          |       − log2 Pi < E
               bi > bj       | bi < bj       | bi > bj       | bi < bj
Theorem I    E ↑ ∧ D ↓      | E ↓ ∧ D ↑     | E ↓ ∧ D ↓    | E ↑ ∧ D ↑
Theorem II   E ↑ ∧ D ↓      | E ↓ ∧ D ↑     | E ↓ ∧ D ↓    | E ↑ ∧ D ↑
Theorem III  E ↑ ∧ D ↓ → D ↑ | E ↓ ∧ D ↑    | E ↓ ∧ D ↓ → D ↑ | E ↑ ∧ D ↑
Theorem IV   E ↑ ∧ D ↓ → D ↑ | E ↓ ∧ D ↑    | E ↓ ∧ D ↓ → D ↑ | E ↑ ∧ D ↑

*Note: → means ∑Xi∫(Ci(w) − Ci)dw > ∑Xj∫(Cj(w) − Cj)dw changes to ∑Xi∫(Ci(w) − Ci)dw < ∑Xj∫(Cj(w) − Cj)dw in time, or vice versa.

to an appropriate theorem. In the next section, we will show how we can control the interoperability between patterns by using a third pattern (a catalyst), i.e., indirectly utilize Theorem V to decrease the disuniformity.

V. Emergence as the Effect of Interoperability

In this step of the framework, we study the engineering concept of emergence in ECASs. Bar-Yam [29] conceptually and mathematically showed the possibility of defining a notion of emergence and described four concepts of emergence. A conceptual classification for emergence is proposed by Halley and Winkler [30]. Prokopenko et al. [16] interpreted concepts of emergence and self-organization through information theory and compared them in CASs. We borrow some concepts of information theory to analyze and predict emergent behaviors of ECASs and show the applicability of Theorem V.

Emergence cannot be defined by the properties and relationships of the lower component level [23]. Assume there is an interaction between patterns i and j at their current level. Then (6) becomes

E(i, j) = −∑(mi=1 to Mi) ∑(mj=1 to Mj) Pmimj log2 Pmimj (18)

where Pmimj is the joint probability of finding pattern i and pattern j simultaneously in states mi and mj. The interaction information (mutual information) of i and j,

I(i; j) = ∑(mi=1 to Mi) ∑(mj=1 to Mj) Pmimj log2 [Pmimj/(Pmi Pmj)] (19)

measures the interoperability between i and j, which is the amount of information that i and j share and by which each reduces the uncertainty of the other; here Pmi is the marginal probability of state mi. We can obtain (see [28])

E = E(i, j) = E(i) + E(j) − I(i; j) (20)

where E(i) = I(i; i) is the self-information of i. From (20), when I(i; j) increases (I ↑), E decreases (E ↓).

For the case of only two groups of patterns in the system, the mutual information is nonnegative with a maximum of one, 0 ≤ I ≤ 1 [from (19)]. E is minimal when i and j are identical, I = 1 (one group follows the other one), and E is maximal when i and j are independent, I = 0 (the groups are completely autonomous). We can use this property to control the entropy in Lemmas I–IV.

The generalization of (20) to the three-pattern case is

E = E(i, j, k) = −[E(i) + E(j) + E(k)] − I(i; j; k) + E(i, j) + E(i, k) + E(k, j) (21)

where the interoperability I can be negative and

I(i; j; k) = I(i; j) − I(i; j|k). (22)

Positive I means k supports and increases the interoperability between i and j, whereas negative I shows that k inhibits and decreases the interoperability.

Definition II:
1) Catalyst: Pattern k is a positive catalyst for other patterns in the system if k supports their interoperability, and a negative catalyst if it inhibits their interoperability.

It is possible to generalize (21) and (22) to n patterns [27]

E(μ) = ∑(ν⊆μ, ν≠μ) (−1)^(|μ|−|ν|−1) E(ν) − I(μ), μ = {im | m = 1, ..., n} (23)

I(i1; ...; in) = I(i1; ...; in−1) − I(i1; ...; in−1|in). (24)

Generally, for multiple catalysts (k catalysts),

I(i1; ...; in) = I(i1; ...; in−k) − I(i1; ...; in−k | i(n−k+1); ...; in). (25)

In Theorem V, instead of increasing or decreasing the entropy, we can change the interoperability. We add catalyst(s) to control (inhibit or support) the interoperability.

Definition III:
1) Catalyst-associated interoperability (CAI):

CAI = I(μ|k) − I(μ). (26)

2) Effect of catalyst (EOC):

EOC = (E′t − Et)/CAI (27)

where E′t and Et are the entropy at time t after and before applying the catalyst(s), respectively.

Example 3 (Interoperability in Fig. 1): Assume Table II gives the joint probabilities for i and j in Example 1, where i can be 0.2, 0.4, or 0.6 and j can be 0.1, 0.15, or 0.2 of the total consumers. The population of other patterns and their effects are negligible.


TABLE II
Prior Probabilities for k ≈ 0

P(mi, mj)   mj = 0.10   0.15   0.20
mi = 0.20        0.20   0.05   0.02
mi = 0.40        0.15   0.15   0.15
mi = 0.60        0.05   0.05   0.18

TABLE III
Posterior Probabilities for k > 0

P(mi, mj|k) mj = 0.10   0.15   0.20
mi = 0.20        0.23   0.03   0.02
mi = 0.40        0.15   0.19   0.13
mi = 0.60        0.02   0.03   0.20

From (18), (19), and (20): E(i) = 1.56, E(j) = 1.54, E(i, j) = 2.90, and I(i; j) = 0.20. If adding catalyst k updates Table II to Table III (users k affect the interrelationships between the is and js), then E(i) = 1.56, E(j) = 1.53, E(i, j) = 2.73, and I(i; j) = 0.36.

So we increase the entropy by increasing the interoperability, which decreases the disuniformity in Example 1:

CAI = 0.36 − 0.20 = 0.16
EOC = (2.73 − 2.90)/0.16 = −1.06.
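The quantities in Example 3 follow directly from (18)–(20). The sketch below recomputes them from the entries of Tables II and III; as in the text, CAI and EOC are formed from the rounded values.

```python
import math

def joint_entropy(P):
    """Joint entropy (18) of a joint probability table."""
    return -sum(p * math.log2(p) for row in P for p in row if p > 0)

def mutual_info(P):
    """Mutual information (19) computed from a joint probability table."""
    rows = [sum(r) for r in P]            # marginal probabilities of one pattern
    cols = [sum(c) for c in zip(*P)]      # marginal probabilities of the other
    return sum(p * math.log2(p / (rows[a] * cols[b]))
               for a, row in enumerate(P)
               for b, p in enumerate(row) if p > 0)

prior = [[0.20, 0.05, 0.02],              # Table II: joint P(mi, mj), k ≈ 0
         [0.15, 0.15, 0.15],
         [0.05, 0.05, 0.18]]
posterior = [[0.23, 0.03, 0.02],          # Table III: P(mi, mj | k), catalyst added
             [0.15, 0.19, 0.13],
             [0.02, 0.03, 0.20]]

E1, I1 = joint_entropy(prior), mutual_info(prior)        # E(i,j) = 2.90, I = 0.20
E2, I2 = joint_entropy(posterior), mutual_info(posterior)  # E(i,j) = 2.73, I = 0.36
CAI = round(I2, 2) - round(I1, 2)                        # (26): 0.36 - 0.20 = 0.16
EOC = (round(E2, 2) - round(E1, 2)) / CAI                # (27): (2.73 - 2.90)/0.16
print(round(E1, 2), round(I1, 2), round(E2, 2), round(I2, 2), round(EOC, 2))
```

Note that the marginal distribution over the columns is unchanged between the two tables; the catalyst only reshapes the joint dependence, which is exactly what raises I(i; j) from 0.20 to 0.36.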

We can use the concept of EOC to select an appropriate catalyst. For example, assume n patterns of consumption in a social population, where i1 and i2 hold the majority of the population and thus have the largest effect on the disuniformity of consumption. We plan to decrease the disuniformity with a limited amount of resources (e.g., rewards given to cooperative consumers). Instead of distributing the reward among a large group (say i1) to induce cooperation with the other group, which is not very effective (because the portion of each individual is too small), we can reward a small group of catalysts (say i3) to improve the interoperability between i1 and i2. This idea is similar to finding and investing in hubs in a social network (based on the power law, the number of components with more relationships decreases exponentially [12]). The next step is to show how these emergence phenomena cause evolution in the system.

VI. Evolution Because of Updates in the Traits

Here, we analyze the evolution process; then, in the last step of the framework, we depict adaptation and learning in the system. Measures for the complexity threshold parameter of physical complex systems have been developed in previous studies [31]. Erdős and Rényi [32] studied the probability threshold function and evolution in random graphs. We borrow the concept of a threshold [32].

Let Mλ(t), λ = 1, ..., λ0, be the number of components in patterns which possess trait λ at time t. Here, φ(λ, t) is a binary variable that indicates whether the system possesses trait λ at time t

φ(λ, t) = 1 if Mλ(t)/∑i Xi(t) ≥ τλ, and 0 if Mλ(t)/∑i Xi(t) < τλ (28)

where τλ is the threshold for trait λ.

Fig. 4. Complicated example. (a) Nonstationary adaptation. (b) Fast adaptation.

Let Φ(t) = (φ(λ, t); λ = 1, ..., λ0) be a vector of 0s and 1s whose λth position is 1 if φ(λ, t) = 1, and let Ψ(t) be a predefined finite set of such vectors Φ at time t. Based on this definition, the system evolves when ∃t′ > t such that Φ(t) ∈ Ψ and Φ(t′) ∉ Ψ (or Φ(t) ∉ Ψ and Φ(t′) ∈ Ψ).

Definition IV:
1) Stagnation: systems are stagnant when they are not evolvable, i.e., Φ(t) ∈ Ψ (or Φ(t) ∉ Ψ) ∀t.

Example 4 (Traits in Fig. 1): Assume τi = 0.2, τj = 0.4, τk = 0.3, and Ψ = {[0 1 1], [1 1 1]} in Example 1:

t = 4: Mi(t)/∑i Xi(t) = 26/156 ≤ 0.2, Mj(t)/∑i Xi(t) = 87/156 ≥ 0.4, Mk(t)/∑i Xi(t) = 44/156 ≤ 0.3 ⇒ Φ(4) = [0 1 0]

t = 5: Mi(t)/∑i Xi(t) = 31/183 ≤ 0.2, Mj(t)/∑i Xi(t) = 95/183 ≥ 0.4, Mk(t)/∑i Xi(t) = 57/183 ≥ 0.3 ⇒ Φ(5) = [0 1 1]

t = 9: Mi(t)/∑i Xi(t) = 55/367 ≤ 0.2, Mj(t)/∑i Xi(t) = 139/367 ≤ 0.4, Mk(t)/∑i Xi(t) = 163/367 ≥ 0.3 ⇒ Φ(9) = [0 0 1].

So the system evolves at t = 5 and at t = 9. If we assume the system evolves only when it possesses all traits (i.e., Ψ = {[1 1 1]}), this system is stagnant.
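Example 4 can be reproduced from the same growth model used for Example 1. This is a sketch: it assumes the discrete growth rule X(t + 1) = X(t)(1 + b), takes each Mλ(t) to equal the population of the corresponding pattern (which matches the counts above up to rounding), and registers an evolution whenever membership of the trait vector in the predefined set changes.

```python
# Trait indicator (28) and evolution detection, assuming traits correspond
# one-to-one with the patterns i, j, k of Example 1.
X = {'i': 15.0, 'j': 65.0, 'k': 20.0}   # components per pattern at t = 1
b = {'i': 0.2, 'j': 0.1, 'k': 0.3}      # fitness rates
tau = {'i': 0.2, 'j': 0.4, 'k': 0.3}    # trait thresholds
Psi = {(0, 1, 1), (1, 1, 1)}            # predefined set of trait vectors

events, prev_in_Psi = [], None
for t in range(1, 21):
    total = sum(X.values())
    # phi is the trait vector: position 1 iff the trait's share meets its threshold.
    phi = tuple(int(X[g] / total >= tau[g]) for g in ('i', 'j', 'k'))
    in_Psi = phi in Psi
    if prev_in_Psi is not None and in_Psi != prev_in_Psi:
        events.append(t)                # membership changed: the system evolves
    prev_in_Psi = in_Psi
    for g in X:                         # assumed growth rule X(t+1) = X(t)(1 + b)
        X[g] *= 1 + b[g]

print(events)                           # → [5, 9]
```

The two detected events match the example: the trait vector enters the set at t = 5 ([0 1 1]) and leaves it at t = 9 ([0 0 1]); with the set restricted to {(1, 1, 1)} no event ever fires, i.e., the system is stagnant.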

In this example, the system is adjusted by two evolutions. This adapting situation can be nonstationary. Fig. 4(a) simulates a case where components adjust their behaviors several times to increase their objectives. Here, i, j, and k compete for more rewards by reducing the disuniformity. However, to reduce the disuniformity they must cooperate by adjusting their behaviors (changing the fitness rates bi). Adding a learning procedure (do not readjust to previously tried states) eliminates the nonstationary evolution and yields the faster adaptation of Fig. 4(b).

To extend this framework, the concept of dissection of features extends easily to other ECASs. Entropy of components is a general concept for all systems, and disuniformity can be interpreted differently in different ECASs. For example, reducing demand fluctuations in wholesale marketing, resource allocation in supply chain management, and synergism of commands to reduce the distances to a target in AI or defense sectors are other types of disuniformity. Decision makers may assign different objectives to ECASs based on their requirements and are not limited to disuniformity. However, any ECAS of the class being addressed needs at least one minimizing/maximizing measure other than component entropy to study dissection of features. The other hallmarks (evolution and adaptation) are driven by the emergence concept (dissection of features and the interactions), and their mathematical treatment is not limited to electricity usage.

VII. Conclusion and Future Work

In this paper, we presented a framework that enables the use of engineering and mathematical models to analyze certain ECASs. We can apply this framework to study and predict the hallmarks of complex heterarchical engineered systems. The proposed method was used to engineer the emergence of human decisions in an ECAS, the evolution of behaviors, and adaptation to new environments. We also illustrated how the concept of our measures extends to other ECASs.

We employed information theory in our mathematical model. All possible dominance cases in complex systems were defined, and four theorems were presented to calibrate the current situation and predict the future behaviors of each case. Theorem V (mechanisms of components) can be employed to study self-organization in ECASs.

Catalyst-associated interoperability and stagnation of the system are new concepts that can help measure or scale emergence and evolution behaviors without complex modeling. Researchers may control the interoperability of components with CAI; they can also measure the evolvability or stagnation of a complex system by a threshold function.

Varying the fitness rates over time, bi(t), may lead to a new formulation in future research. We can consider statistical or dynamical functions for fitness rates. Agent-based modeling and simulation can support and extend the mathematical basis of this research for investigating real cases.

Acknowledgment

The authors would like to thank Prof. D. Armbruster, School of Mathematical and Statistical Sciences, Arizona State University, Tempe, for his constructive comments that improved the quality of this paper.

References

[1] M. Couture and R. Charpentier, "Elements of a framework for studying complex systems," in Proc. 12th Int. Command Control Res. Technol. Symp., Jun. 2007, pp. 1–17.

[2] M. Couture, "Complexity and chaos: State-of-the-art; list of works, experts, organizations, projects, journals, conferences and tools," Defense Res. Develop. Canada-Valcartier, Valcartier, QC, Canada, Tech. Rep. TN 2006-450, 2006.

[3] M. Couture, "Complexity and chaos: State-of-the-art; formulations and measures of complexity," Defense Res. Develop. Canada-Valcartier, Valcartier, QC, Canada, Tech. Rep. TN 2006-451, 2006.

[4] M. Couture, "Complexity and chaos: State-of-the-art; glossary," Defense Res. Develop. Canada-Valcartier, Valcartier, QC, Canada, Tech. Rep. TN 2006-452, 2006.

[5] M. Couture, "Complexity and chaos: State-of-the-art; overview of theoretical concepts," Defense Res. Develop. Canada-Valcartier, Valcartier, QC, Canada, Tech. Rep. TN 2006-453, 2007.

[6] C. L. Magee and O. L. de Weck, "Complex system classification," in Proc. 14th Annu. Int. Symp. INCOSE, Jun. 2004, pp. 1–18.

[7] Y. Bar-Yam, "Multiscale complexity/entropy," Adv. Complex Syst., vol. 7, no. 1, pp. 47–63, 2004.

[8] Y. Bar-Yam. (2000). Complexity Rising: From Human Beings to Human Civilization, a Complexity Profile [Online]. Available: http://necsi.org/Civilization.html

[9] N. Parks, "Energy efficiency and the smart grid," Environ. Sci. Technol., vol. 43, no. 9, pp. 2999–3000, May 2009.

[10] D. J. Watts and S. H. Strogatz, "Collective dynamics of 'small-world' networks," Nature, vol. 393, pp. 440–442, Jun. 1998.

[11] C. Avin and D. Dayan-Rosenman, "Evolutionary reputation games on social networks," Complex Syst., vol. 17, no. 3, pp. 259–277, 2007.

[12] L. A. N. Amaral, A. Scala, M. Barthelemy, and H. E. Stanley, "Classes of small-world networks," Proc. Nat. Acad. Sci., vol. 97, no. 21, pp. 11149–11152, Oct. 2000.

[13] S. H. Strogatz, "Exploring complex networks," Nature, vol. 410, pp. 268–276, Mar. 2001.

[14] S. Lee, Y. Son, and J. Jin, "Decision field theory extensions for behavior modeling in dynamic environment using Bayesian belief network," Inform. Sci., vol. 178, no. 10, pp. 2297–2314, 2008.

[15] A. Mostashari and J. M. Sussman, "A framework for analysis, design and management of complex large-scale interconnected open sociotechnological systems," Int. J. Decision Support Syst. Technol., vol. 1, no. 2, pp. 53–68, 2009.

[16] M. Prokopenko, F. Boschietti, and A. J. Ryan, "An information-theoretic primer on complexity, self-organization, and emergence," Complexity, vol. 15, no. 1, pp. 11–28, 2009.

[17] S. Sheard and A. Mostashari, "Principles of complex systems for systems engineering," Syst. Eng., vol. 12, no. 4, pp. 295–311, 2009.

[18] J. Ottino, "Engineering complex systems," Nature, vol. 427, p. 399, Jan. 2004.

[19] D. Braha and Y. Bar-Yam, "The statistical mechanics of complex product development: Empirical and analytical results," Manage. Sci., vol. 53, no. 7, pp. 1127–1145, Jul. 2007.

[20] B. Shargel, H. Sayama, I. Epstein, and Y. Bar-Yam, "Optimization of robustness and connectivity in complex networks," Phys. Rev. Lett., vol. 90, no. 6, pp. 068701-1–068701-4, 2003.

[21] K. Kaneko and I. Tsuda, Complex Systems: Chaos and Beyond: A Constructive Approach With Applications in Life Sciences. Berlin, Germany: Springer, 2001.

[22] S. B. Yu and J. Efstathiou, "An introduction to network complexity," in Proc. Manuf. Complexity Netw. Conf., Apr. 2002, pp. 1–10.

[23] C. R. Shalizi, "Causal architecture, complexity and self-organization in time series and cellular automata," Ph.D. dissertation, Center Study Complex Syst., Univ. Michigan, Ann Arbor, May 2001.

[24] C. R. Shalizi, K. L. Shalizi, and R. Haslinger, "Quantifying self-organization with optimal predictors," Phys. Rev. Lett., vol. 93, no. 14, pp. 118701-1–118701-4, 2004.

[25] S. E. Page, "Self organization and coordination," Comput. Econ., vol. 18, no. 1, pp. 25–48, Aug. 2001.

[26] A. G. Bashkirov, "Renyi entropy as a statistical entropy for complex systems," Theor. Math. Phys., vol. 149, no. 2, pp. 1559–1573, 2006.

[27] P. Chanda, L. Sucheston, A. Zhang, D. Brazeau, J. L. Freudenheim, C. Ambrosone, and M. Ramanathan, "Ambience: A novel approach and efficient algorithm for identifying informative genetic and environmental associations with complex phenotypes," Genetics, vol. 180, no. 2, pp. 1191–1210, 2008.

[28] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. Hoboken, NJ: Wiley-Interscience, 2006.

[29] Y. Bar-Yam, "A mathematical theory of strong emergence using multiscale variety," Complexity, vol. 9, no. 4, pp. 15–24, 2004.

[30] J. D. Halley and D. A. Winkler, "Classification of emergence and its relation to self-organization," Complexity, vol. 13, no. 5, pp. 10–15, 2008.

[31] C. Langton, "Computation at the edge of chaos: Phase transitions and emergent computation," Physica D, vol. 42, nos. 1–3, pp. 12–37, 1990.

[32] P. Erdős and A. Rényi, "On the evolution of random graphs," Publ. Math. Inst. Hungarian Acad. Sci., vol. 5, pp. 17–61, 1960.


Moeed Haghnevis received the B.Sc. and M.Sc. degrees in industrial and system engineering from the Amirkabir University of Technology, Tehran, Iran, and the University of Tehran, Tehran, respectively. He is currently pursuing the Ph.D. degree in industrial engineering with the School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe.

Before pursuing the Ph.D. degree, he served as an Adjunct Instructor with two universities. He has published a book and several papers. His current research interests include engineered complex systems, agent-based modeling, and simulation.

Ronald G. Askin received the Ph.D. degree from the Georgia Institute of Technology, Atlanta.

He is currently the Director of the School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe. He has 30 years of experience in systems modeling and analysis.

Dr. Askin is a Fellow of the IIE. He has received the National Science Foundation Presidential Young Investigator Award, the Shingo Prize for Excellence in Manufacturing Research, the IIE Joint Publishers Book of the Year Award, and the IIE Transactions Development and Applications Award.