
Artificial Intelligence 60 (1993) 171-183
Elsevier

ARTINT 1041

Book Review

Stephanie Forrest, ed., Emergent Computation: Self-Organizing, Collective, and Cooperative Phenomena in Natural and Artificial Computing Networks*

Peter M. Todd
The Rowland Institute for Science, 100 Edwin H. Land Boulevard, Cambridge, MA 02142, USA

Received November 1992

* (MIT Press, Cambridge, MA, 1991); 452 pages, $32.50.

Correspondence to: P.M. Todd, The Rowland Institute for Science, 100 Edwin H. Land Boulevard, Cambridge, MA 02142, USA. E-mail: ptodd@spo.rowland.org.

Emergent Computation is a collection of 31 papers from the Ninth Annual Interdisciplinary Conference of the Center for Nonlinear Studies, held at Los Alamos National Laboratory in 1989. As the subtitle indicates, it presents a broad look at the ways that emergent behavior can be employed to process information in natural and artificial systems. This proceedings volume does a better job than most at conveying a coherent picture of a dynamic field. While there are certainly a few papers that will be of interest mainly to specialists, there are also several clear threads that wind through the book and tie together individual papers. Happily, these threads form a web of concepts in a mutually-supporting network. In reading several papers together, emergent phenomena themselves thus come into play: ideas are linked, connections and analogies made, and greater understanding is afforded than in reading these papers in isolation. This is a mark of good editing and selection, for which Forrest is to be commended. In this review, we will cover two of the main threads in this book. The first concerns the nature of systems in which emergent computation is possible, and how to get such systems to perform more efficiently. The second thread considers the multiple levels of adaptation necessary to produce adaptive, responsive agents. Each thread has tendrils leading off, further afield, into other papers throughout the volume, as will be indicated.

Forrest begins the volume by introducing the concept of emergent computation, defined as an emergent pattern of behavior that is interpretable as processing information. Such behavior can emerge when a number of agents designed to behave in a pre-determined way engage in myriad local interactions with each other, forming global information-processing patterns at a macroscopically higher level. From the collective low-level explicitly defined behavior of individuals, the higher-level implicit behavior of the system emerges. Parallel computation processes that take advantage of such emergence can be more efficient, flexible, and natural than those that struggle to impose a top-down hierarchical organization on the individual processing agents and their communications. Furthermore, emergent computation may be the only feasible way to achieve certain goals, such as modeling intelligent adaptive behavior. But because the components and interactions of emergent computation systems are typically nonlinear, their behavior can be very difficult to control and predict. It is these problems that the papers in this volume set out to address.

Central themes in the realm of emergent computation include:

self-organization, with no central authority to control the overall flow of computation;

collective phenomena emerging from the interactions of locally-communicating autonomous agents;

global cooperation among agents, to solve a common goal or share a common resource, being balanced against competition between them to create a more efficient overall system;

learning and adaptation replacing direct programming for building working systems; and

dynamic system behavior taking precedence over traditional AI static data structures.

In all of these ideas, emergent computation dovetails with the "animat path" to simulating adaptive behavior (Wilson [14]) and the behavior-based approach to AI (Maes [8]). The papers in this volume, though, are not solely addressed toward simulating intelligent behavior for autonomous agents. Many of the authors are striving for greater understanding of natural parallel-processing systems such as the brain or the immune system, or for better designs for computer networks. (Forrest divides the papers into the categories of artificial networks, learning, and biological networks.)


1. The emergence of computation on the edge of chaos

Langton asks a more fundamental question: What are the necessary foundations for the emergence of computational abilities themselves? That is, what characteristics does a system need in order to support information transmission, storage, and modification? Langton investigates this problem within the context of cellular automata (CAs), discrete deterministic spatial collections of cells that update their states over time based on the states of their neighbors at the previous point in time. Conway's game of Life (see Gardner [5]) is the archetypal example of a CA system. Langton believes that as abstractions of physical systems, CAs can provide a medium for characterizing the requirements of computation in any system. In the context of CAs, such computational abilities take the form of very long chains of CA states. This is because CA patterns that extend over a long range in time and space can store and transmit information, and the complex interactions these transients exhibit can modify that information. These three abilities make up the necessary components of computation.
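To make the CA mechanics concrete, here is a minimal sketch (an aside of this review, not code from the book; the function name, grid size, and glider seed are arbitrary illustrative choices) of one synchronous update of Conway's Life on a small toroidal grid:

```python
def life_step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal grid.
    `grid` is a list of lists of 0/1 values; a new grid is returned."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbors, wrapping around the edges.
            live = sum(grid[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            # A live cell survives with 2 or 3 live neighbors; a dead cell
            # becomes live with exactly 3.
            new[r][c] = 1 if live == 3 or (grid[r][c] == 1 and live == 2) else 0
    return new

# A glider on a 6x6 torus, advanced a few steps.
grid = [[0] * 6 for _ in range(6)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(4):
    grid = life_step(grid)
```

Long-lived, propagating structures such as the glider above are exactly the kind of long-range patterns that, in Langton's terms, store and transmit information.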

After giving a brief clear description of CA systems, Langton presents a method for describing a wide range of CAs with a single parameter that represents the bias of the CA's rules toward a particular (arbitrary) state. This parameter can range from a maximum amount of single-state bias (causing the most homogeneous set of CA rules) to a minimum level (causing the most heterogeneous rules). In his studies, Langton varied the parameter over this range and recorded the dynamics of CAs randomly generated with these values. The results were very interesting. Langton discovered an important relationship between this single parameter and the length of transients (and hence support of computation) in the corresponding CAs. Very homogeneous rules created static behavior (the CA settled quickly into a fixed state), while very heterogeneous rules generated random behavior (the CA changed states chaotically). But at a crucial point in between these two extremes, a small range of parameter values yielded CAs with very long transients. Langton likens this crucial parameter range to a phase-transition between the solid (static) CA phase and the fluid (chaotic) phase, and concludes that computation is best supported at just such a transitional point. (Conway's Life turns out to be poised at just this point, which accounts for the long dynamic behavior that makes it so interesting.)
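The experiment can be mimicked in miniature. The sketch below is an illustration of mine rather than Langton's actual setup (he used richer rule spaces, where the transition shows up far more sharply), and the bias parameter here only approximates his rule-bias measure: generate rule tables with a given bias away from a quiescent state, run randomly initialized one-dimensional CAs, and record how long they wander before repeating a configuration.

```python
import random

def biased_rule_table(bias, n_states=2, radius=1, quiescent=0, rng=None):
    """Random rule table for a 1D CA in which roughly a fraction `bias` of
    neighborhood patterns map to a non-quiescent state (a rough stand-in
    for Langton's rule-bias parameter)."""
    rng = rng or random.Random(0)
    n_patterns = n_states ** (2 * radius + 1)
    return [rng.randrange(1, n_states) if rng.random() < bias else quiescent
            for _ in range(n_patterns)]

def step(cells, table, n_states=2, radius=1):
    """One synchronous update on a ring of cells."""
    n = len(cells)
    new = []
    for i in range(n):
        code = 0
        for j in range(i - radius, i + radius + 1):
            code = code * n_states + cells[j % n]
        new.append(table[code])
    return new

def transient_length(cells, table, max_steps=5000):
    """Steps until some earlier configuration recurs (the transient plus one
    pass around the eventual state cycle)."""
    seen = {tuple(cells)}
    for t in range(1, max_steps + 1):
        cells = step(cells, table)
        key = tuple(cells)
        if key in seen:
            return t
        seen.add(key)
    return max_steps  # no recurrence found within the budget

rng = random.Random(1)
for bias in (0.1, 0.3, 0.5):
    table = biased_rule_table(bias, rng=rng)
    start = [rng.randrange(2) for _ in range(64)]
    print(bias, transient_length(start, table))
```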

This is illustrated very well in the qualitative figures Langton includes in his paper, showing the evolution of states for the different types of CAs. The quantitative analysis of this phase-transition effect is a bit harder to follow (and some terms are left undefined, such as site-percolation). But this analysis, adapted in part from information theory and thermodynamics, presents some useful measures of the complexity and information content inherent in these systems. This exciting work provides strong support for the notion that "[c]omplex behavior involves a mix of order and disorder" (p. 32), the type of mix that happens when systems are poised "at the edge of chaos" between the solid and fluid phases. It has wide implications for the nature of computation--for instance, simulated annealing techniques, as discussed in Greene's paper in this volume, can be seen as a way of achieving useful computations by keeping the system "near freezing". However, the important work of developing ways to control (and communicate with) these long complex transients, so that they can be harnessed to do the computing that users want rather than just computing something, remains to be addressed. Finally, even more generally, this work speaks perhaps to the evolution of life itself, via a self-selecting process maintaining itself near a phase-transition point.

2. Fitness landscapes and the requirements for evolvability

In his paper later in this volume, Kauffman addresses a very similar issue from a different perspective, exploring the characteristics necessary for a system to evolve. The process of evolution can create complex structures in a quest for greater and greater fitness in some environment. But in order for evolution to work in a given system, that system must have the property of evolvability, which Kauffman defines as the ability to accumulate successive small improvements in the changing structures. One of the main criteria of evolvability is that the fitness landscape occupied by the evolving structures must be at least partially correlated. This partial correlation makes for a fairly smooth landscape, so that small changes in the structures will typically result in small changes in their fitness. In other words, nearby points in the fitness landscape will have similar fitness values. If this is not the case--if the fitness landscape is completely uncorrelated and rugged--then the small mutations that evolution typically capitalizes on will cause wildly varying changes in the fitness of structures, and no successive set of mutually beneficial mutations will be able to accumulate: evolution will be impossible.
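A toy illustration of the difference (mine, not Kauffman's model; the two landscapes and all names below are invented for the example) compares the average fitness change caused by a single-bit mutation on a smooth landscape versus a rugged one:

```python
import hashlib
import random

def smooth_fitness(bits):
    """Highly correlated toy landscape: fitness is the number of 1s, so one
    bit flip changes fitness by exactly 1."""
    return float(sum(bits))

def rugged_fitness(bits):
    """Uncorrelated toy landscape: a pseudo-random value derived from the
    whole genotype, so one flip can change fitness arbitrarily."""
    digest = hashlib.md5(bytes(bits)).digest()
    return int.from_bytes(digest[:4], "big") / 2**32 * len(bits)

def mean_mutation_jump(fitness, n_bits=20, samples=500, seed=0):
    """Average |fitness change| over random one-bit mutations; small values
    indicate a smooth, evolvable landscape, large values a rugged one."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        bits = [rng.randrange(2) for _ in range(n_bits)]
        mutant = bits[:]
        mutant[rng.randrange(n_bits)] ^= 1
        total += abs(fitness(mutant) - fitness(bits))
    return total / samples

print(mean_mutation_jump(smooth_fitness))   # always 1.0
print(mean_mutation_jump(rugged_fitness))   # typically several units
```

On the smooth landscape, selection can accumulate single-bit improvements; on the rugged one, a mutant's fitness says almost nothing about its parent's, which is exactly the situation described above as making evolution impossible.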

Most systems in nature that exhibit this sort of evolutionary adaptability (including learning and immune responses) are networks of simple processing agents acting in parallel. Following the lead of this observation, Kauffman investigates the property of evolvability in the artificial setting of Boolean networks. These networks consist of a set of interconnected binary elements, each of which computes a Boolean function of all its inputs from other elements. When these networks are connected randomly and use random Boolean functions, their activity over time consists of a set of deterministic trajectories through binary-vector state space. These trajectories, since they are finite and deterministic, end up either going to a fixed point, or looping endlessly around a state cycle, with the pattern of unit activities (on or off) of the system as a whole recurring after some fixed length. Evolvability in such Boolean networks corresponds to small changes in the network (either changes in connectivity, or in the Boolean functions the units use) causing small changes in the behavior of the state cycles. That is, if changing one link in the network results in changing one or two states in one of the state cycles, the network exhibits high evolvability. If one different link causes the complete destruction of several state cycles, the network has low evolvability.
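A random Boolean network along these lines can be simulated directly, with the state cycle found by following the trajectory until it revisits a configuration. This is a sketch under my own naming, with each unit reading a fixed number of inputs (two, the connectivity Kauffman singles out below):

```python
import random

def random_boolean_network(n_units=12, k=2, seed=0):
    """Each unit reads k randomly chosen units and applies a random Boolean
    function, stored as a truth table with 2**k entries."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n_units), k) for _ in range(n_units)]
    tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n_units)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: every unit recomputes from its inputs' old states."""
    new = []
    for srcs, table in zip(inputs, tables):
        idx = 0
        for s in srcs:
            idx = (idx << 1) | state[s]
        new.append(table[idx])
    return tuple(new)

def find_state_cycle(state, inputs, tables, max_steps=100000):
    """Follow the deterministic trajectory until it revisits a configuration;
    return (transient_length, cycle_length)."""
    seen = {state: 0}
    for t in range(1, max_steps + 1):
        state = step(state, inputs, tables)
        if state in seen:
            return seen[state], t - seen[state]
        seen[state] = t
    raise RuntimeError("no recurrence within the step budget")

inputs, tables = random_boolean_network()
start = tuple(random.Random(1).randrange(2) for _ in range(12))
print(find_state_cycle(start, inputs, tables))
```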

Kauffman has explored several ways of parameterizing the evolvability of Boolean networks. First of all, the connectivity of the network appears to be critical. Networks in which every unit has only two inputs from other units show high evolvability, while those with greater or lesser levels of connectivity show drastically lower evolvability. Another critical parameter is the bias of the Boolean rules used, that is, the percentage of states that map to 1 (or 0), analogous to Langton's rule-bias parameter. As in Langton's CAs, Kauffman has found that there is a critical level of this bias parameter that leads to high evolvability of the Boolean network. This parameter setting too seems to be associated with a phase transition, which shows up quite dramatically in the networks' behavior. For low bias levels, the networks exhibit extremely long state cycles, with each unit's activity essentially changing randomly from one time-step to the next. When visualized as lights (or pixels), the units all seem to "twinkle" on and off. For high bias levels, the majority of the units get stuck, "frozen" either on or off, so that very little activity is seen in the network. But right at the critical value, a web of frozen elements percolates through the network, separating a number of isolated still-active ("melted") areas. In this part-solid, part-liquid state, relatively few, short state cycles are created, each with large basins of attraction. This is what gives these networks their high evolvability: changes (mutations) made to the network can only have a limited local effect, and are stopped from spreading further at the frozen boundaries. All of these properties--small attractor cycles, homeostasis (large basins of attraction and hence resistance to state-changes), and evolvability (highly correlated landscapes)--are linked: "Self-organization for one, bootstraps all" (p. 148).
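Continuing the sketch above (again my own illustration, not Kauffman's procedure), a single mutation can be modeled by flipping one entry in one unit's truth table and comparing the state cycle reached from the same starting configuration; on an evolvable network the transient and cycle change only slightly.

```python
import copy

def mutate_one_rule(tables, unit, entry):
    """Return a copy of the rule tables with one truth-table entry flipped
    (a minimal structural mutation of the network)."""
    new_tables = copy.deepcopy(tables)
    new_tables[unit][entry] ^= 1
    return new_tables

# Compare (transient_length, cycle_length) before and after one mutation,
# reusing `inputs`, `tables`, `start`, and `find_state_cycle` from the
# sketch above.
mutant_tables = mutate_one_rule(tables, unit=0, entry=1)
print(find_state_cycle(start, inputs, tables))
print(find_state_cycle(start, inputs, mutant_tables))
```

Averaging such comparisons over many single-rule mutations gives a crude empirical handle on the evolvability being described.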

3. The emergence of symbols in classifier systems

Both Langton's and Kauffman's papers are very elegant, the sort of papers that fill one with excitement and further ideas while reading. The type of work to which these over-arching ideas about the nature of computation and adaptability can be put is well-illustrated in the paper by Forrest and Miller, on emergent behavior in classifier systems. Classifier systems are designed to model intelligent behavior through the action of a large set of condition/action rules. The actions of these rules post messages to a common list, and their conditions register the presence in that list of messages from other rules (see the papers by Holland and Farmer for a fuller description). The rules are constructed by a process akin to evolution (the genetic algorithm), and their strengths are modified by a credit-assigning learning algorithm (the bucket brigade). Forrest and Miller are interested in the emergence of symbol-processing behavior in classifier systems. What could symbols look like in the context of a classifier system? As the authors see it, symbols in a classifier system could take the form of co-adapted sets of rules, chained together by their messages to each other. Such linked rules can implement default hierarchies, where categorizations of inputs can activate other higher-level categories of greater abstraction. For instance, an input that signifies a "moving object" may turn on a general (default) rule whose output indicates "food", while two other inputs for "red" and "round" may trigger a more specific rule whose output also indicates "food". This second rule overrides the default of the first rule with the inclusion of more specific information (an exception to the default rule). But taken together, the two rules form a higher-level concept (a compound categorization) of "food", and the system should react in the same way no matter how the judgment of "...
