Emergent computation: Self-organizing, collective, and cooperative phenomena in natural and artificial computing networks: Stephanie Forrest, ed
Artificial Intelligence 60 (1993) 171-183. Elsevier.
Stephanie Forrest, ed., Emergent Computation: Self-Organizing, Collective, and Cooperative Phenomena in Natural and Artificial Computing Networks * Peter M. Todd, The Rowland Institute for Science, 100 Edwin H. Land Boulevard, Cambridge, MA 02142, USA
Received November 1992
Correspondence to: P.M. Todd, The Rowland Institute for Science, 100 Edwin H. Land Boulevard, Cambridge, MA 02142, USA. E-mail: firstname.lastname@example.org.

* (MIT Press, Cambridge, MA, 1991); 452 pages, $32.50.

0004-3702/93/$06.00 © 1993 Elsevier Science Publishers B.V. All rights reserved.

Emergent Computation is a collection of 31 papers from the Ninth Annual Interdisciplinary Conference of the Center for Nonlinear Studies, held at Los Alamos National Laboratory in 1989. As the subtitle indicates, it presents a broad look at the ways that emergent behavior can be employed to process information in natural and artificial systems. This proceedings volume does a better job than most at conveying a coherent picture of a dynamic field. While there are certainly a few papers that will be of interest mainly to specialists, there are also several clear threads that wind through the book and tie together individual papers. Happily, these threads form a web of concepts in a mutually-supporting network. In reading several papers together, emergent phenomena themselves thus come into play: ideas are linked, connections and analogies made, and greater understanding is afforded than in reading these papers in isolation. This is a mark of good editing and selection, for which Forrest is to be commended. In this review, we will cover two of the main threads in this book. The first concerns the nature of systems in which emergent computation is possible, and how to get such systems to perform more efficiently. The second thread considers the multiple levels of adaptation necessary to produce adaptive, responsive agents. Each thread has tendrils leading off, further afield, into other papers throughout the volume, as will be indicated.
Forrest begins the volume by introducing the concept of emergent computation, defined as an emergent pattern of behavior that is interpretable as processing information. Such behavior can emerge when a number of agents designed to behave in a pre-determined way engage in myriad local interactions with each other, forming global information-processing patterns at a macroscopically higher level. From the collective low-level explicitly defined behavior of individuals, the higher-level implicit behavior of the system emerges. Parallel computation processes that take advantage of such emergence can be more efficient, flexible, and natural than those that struggle to impose a top-down hierarchical organization on the individual processing agents and their communications. Furthermore, emergent computation may be the only feasible way to achieve certain goals, such as modeling intelligent adaptive behavior. But because the components and interactions of emergent computation systems are typically nonlinear, their behavior can be very difficult to control and predict. It is these problems that the papers in this volume set out to address.
Central themes in the realm of emergent computation include:
self-organization, with no central authority to control the overall flow of computation;
collective phenomena emerging from the interactions of locally-communicating autonomous agents;

global cooperation among agents, to solve a common goal or share a common resource, being balanced against competition between them to create a more efficient overall system;

learning and adaptation replacing direct programming for building working systems; and
dynamic system behavior taking precedence over traditional AI static data structures.
In all of these ideas, emergent computation dovetails with the "animat path" to simulating adaptive behavior (Wilson) and the behavior-based approach to AI (Maes). The papers in this volume, though, are not solely addressed toward simulating intelligent behavior for autonomous agents. Many of the authors are striving for greater understanding of natural parallel-processing systems such as the brain or the immune system, or for better designs for computer networks. (Forrest divides the papers into the categories of artificial networks, learning, and biological networks.)
1. The emergence of computation on the edge of chaos
Langton asks a more fundamental question: What are the necessary foundations for the emergence of computational abilities themselves? That is, what characteristics does a system need in order to support information transmission, storage, and modification? Langton investigates this problem within the context of cellular automata (CAs), discrete deterministic spatial collections of cells that update their states over time based on the states of their neighbors at the previous point in time. Conway's game of Life (see Gardner) is the archetypal example of a CA system. Langton believes that as abstractions of physical systems, CAs can provide a medium for characterizing the requirements of computation in any system. In the context of CAs, such computational abilities take the form of very long chains of CA states. This is because CA patterns that extend over a long range in time and space can store and transmit information, and the complex interactions these transients exhibit can modify that information. These three abilities make up the necessary components of computation.
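The local update scheme described above is easy to make concrete with Life itself. The sketch below is a minimal Python rendering of one Life generation; the grid representation and the toroidal (wrap-around) boundary are choices of this sketch, not details from the paper:

```python
def life_step(grid):
    """Return the next generation of a 2-D Life grid (lists of 0/1 cells).

    Each cell's next state depends only on its own state and the states
    of its eight neighbors at the previous time step, exactly the kind
    of purely local rule a cellular automaton uses.
    """
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count live neighbors, wrapping at the edges (toroidal grid).
            live = sum(grid[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            if grid[r][c] == 1:
                nxt[r][c] = 1 if live in (2, 3) else 0  # survival rule
            else:
                nxt[r][c] = 1 if live == 3 else 0       # birth rule
    return nxt

# A "blinker": three live cells in a row oscillate with period 2,
# a tiny example of persistent dynamic structure in a CA.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
after_one = life_step(blinker)
after_two = life_step(after_one)
```

Propagating, interacting structures like Life's gliders are exactly the long-range patterns Langton identifies as carriers of stored and transmitted information.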
After giving a brief, clear description of CA systems, Langton presents a method for describing a wide range of CAs with a single parameter that represents the bias of the CA's rules toward a particular (arbitrary) state. This parameter can range from a maximum amount of single-state bias (causing the most homogeneous set of CA rules) to a minimum level (causing the most heterogeneous rules). In his studies, Langton varied the parameter over this range and recorded the dynamics of CAs randomly generated with these values. The results were very interesting. Langton discovered an important relationship between this single parameter and the length of transients (and hence support of computation) in the corresponding CAs. Very homogeneous rules created static behavior (the CA settled quickly into a fixed state), while very heterogeneous rules generated random behavior (the CA changed states chaotically). But at a crucial point in between these two extremes, a small range of parameter values yielded CAs with very long transients. Langton likens this crucial parameter range to a phase-transition between the solid (static) CA phase and the fluid (chaotic) phase, and concludes that computation is best supported at just such a transitional point. (Conway's Life turns out to be poised at just this point, which accounts for the long dynamic behavior that makes it so interesting.)
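The shape of Langton's experiment can be sketched in a few lines of Python. Everything here is a simplification for illustration: `lam` stands in for his rule-bias parameter, and transient length is crudely estimated as time until a configuration repeats, rather than by his entropy-based statistics:

```python
import random

def random_rule(lam, k=4, seed=0):
    """Random rule table for a 1-D, k-state, radius-1 CA.

    Each of the k**3 (left, center, right) neighborhoods maps to a
    nonzero state with probability lam, else to the quiescent state 0.
    lam = 0 gives the most homogeneous rules, lam = 1 the most
    heterogeneous.
    """
    rng = random.Random(seed)
    return [rng.randrange(1, k) if rng.random() < lam else 0
            for _ in range(k ** 3)]

def transient_length(rule, k=4, width=32, max_steps=2000, seed=1):
    """Iterate the CA from a random configuration until some earlier
    configuration recurs; return the number of steps taken (capped)."""
    rng = random.Random(seed)
    cells = [rng.randrange(k) for _ in range(width)]
    seen = {tuple(cells)}
    for step in range(1, max_steps + 1):
        cells = [rule[cells[i - 1] * k * k + cells[i] * k
                      + cells[(i + 1) % width]]
                 for i in range(width)]
        if tuple(cells) in seen:
            return step
        seen.add(tuple(cells))
    return max_steps

# Sweep the bias from homogeneous to heterogeneous rules, averaging
# over a few random rule tables at each setting.
averages = {lam: sum(transient_length(random_rule(lam, seed=s))
                     for s in range(5)) / 5
            for lam in (0.0, 0.25, 0.5, 0.75, 1.0)}
```

One caveat: this time-to-repeat measure cannot distinguish a long structured transient from outright chaos (both take long to recur in a large state space), which is precisely why Langton needed information-theoretic measures; the sketch only shows how quickly the homogeneous end freezes.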
This is illustrated very well in the qualitative figures Langton includes in his paper, showing the evolution of states for the different types of CAs. The quantitative analysis of this phase-transition effect is a bit harder to follow (and some terms are left undefined, such as site-percolation). But this analysis, adapted in part from information theory and thermodynamics, presents some useful measures of the complexity and information content inherent in these systems. This exciting work provides strong support for
the notion that "[c]omplex behavior involves a mix of order and disorder" (p. 32), the type of mix that happens when systems are poised "at the edge of chaos" between the solid and fluid phases. It has wide implications for the nature of computation--for instance, simulated annealing techniques, as discussed in Greene's paper in this volume, can be seen as a way of achieving useful computations by keeping the system "near freezing". However, the important work of developing ways to control (and communicate with) these long complex transients, so that they can be harnessed to do the computing that users want rather than just computing something, remains to be addressed. Finally, even more generally, this work speaks perhaps to the evolution of life itself, via a self-selecting process maintaining itself near a phase-transition point.
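The "near freezing" idea can be made concrete with a toy Metropolis-style sampler held at a fixed low temperature, so that the system accepts mostly downhill moves but retains just enough thermal noise to escape shallow traps. This is a loose illustration only; the problem, parameters, and function names below are invented for the sketch and are not from Greene's paper:

```python
import math
import random

def near_freezing(energy, neighbor, state, temp=0.1, steps=1000, seed=0):
    """Metropolis acceptance at one fixed, low temperature.

    Downhill moves (de <= 0) are always accepted; uphill moves are
    accepted with probability exp(-de / temp), which is tiny when the
    system is 'near freezing' but still nonzero.
    """
    rng = random.Random(seed)
    e = energy(state)
    for _ in range(steps):
        cand = neighbor(state, rng)
        de = energy(cand) - e
        if de <= 0 or rng.random() < math.exp(-de / temp):
            state, e = cand, e + de
    return state, e

# Toy problem: minimize (x - 3)^2 over the integers via +/-1 moves.
best, best_e = near_freezing(lambda x: (x - 3) ** 2,
                             lambda x, rng: x + rng.choice((-1, 1)),
                             state=0)
```

Full simulated annealing sweeps the temperature downward; holding it just above zero, as here, is the "mix of order and disorder" reading of the technique.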
2. Fitness landscapes and the requirements for evolvability
In his paper later in this volume, Kauffman addresses a very similar issue from a different perspective, exploring the characteristics necessary for a system to evolve. The process of evolution can create complex structures in a quest for greater and greater fitness in some environment. But in order for evolution to work in a given system, that system must have the property of evolvability, which Kauffman defines as the ability to accumulate successive small improvements in the changing structures. One of the main criteria of evolvability is that the fitness landscape occupied by the evolving structures must be at least partially correlated. This partial correlation makes for a fairly smooth landscape, so that small changes in the structures will typically result in small changes in their fitness. In other words, nearby points in the fitness landscape will have similar fitness values. If this is not the case--if the fitness landscape is completely uncorrelated and rugged--then the small mutations that evolution typically capitalizes on will cause wildly varying changes in the fitness of structures, and no successive set of mutually beneficial mutations will be able to accumulate: evolution will be impossible.
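The contrast between correlated and uncorrelated landscapes is easy to demonstrate with a one-bit-mutation hill climber. The two fitness functions below are stand-ins chosen for this sketch (Kauffman's own examples are NK landscapes), but they exhibit the property at issue:

```python
import random

def hill_climb(fitness, n_bits=20, steps=500, seed=0):
    """Greedy evolution in miniature: flip one random bit, keep the
    change only if fitness does not decrease; return the best fitness."""
    rng = random.Random(seed)
    genome = [rng.randrange(2) for _ in range(n_bits)]
    best = fitness(genome)
    for _ in range(steps):
        i = rng.randrange(n_bits)
        genome[i] ^= 1                     # small mutation
        f = fitness(genome)
        if f >= best:
            best = f                       # accumulate the improvement
        else:
            genome[i] ^= 1                 # revert the harmful mutation
    return best

def smooth(genome):
    # Correlated landscape: one-bit neighbors differ in fitness by
    # exactly 1, so successive small improvements can accumulate.
    return sum(genome)

def rugged(genome):
    # Uncorrelated landscape: fitness is an arbitrary function of the
    # whole genotype, so a one-bit change scrambles fitness completely.
    return hash(tuple(genome)) % 1000
```

On the smooth landscape the climber reliably accumulates mutations all the way to the global optimum (all ones); on the rugged one it stalls at whatever genotype happens to look locally good, which is Kauffman's point that evolvability requires at least partial correlation.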
Most systems in nature that exhibit this sort of evolutionary ad