Coupled Dynamic Systems: From Structure Towards
Stability And Stabilizability
by
Zhiyun Lin
A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy
Graduate Department of Electrical and Computer Engineering
University of Toronto
Copyright © 2006 by Zhiyun Lin
Abstract
Coupled Dynamic Systems: From Structure Towards Stability And Stabilizability
Zhiyun Lin
Doctor of Philosophy
Graduate Department of Electrical and Computer Engineering
University of Toronto
2006
In this thesis, we study stability and stabilizability problems in the framework of
coupled dynamic systems. Particular attention is given to the class of coupled dynamic
systems whose equilibrium set is described by all states having identical state components.
Central to the stability and stabilizability issues of such systems is the graph describing
the interaction structure—that is, who is coupled to whom. The key question is: what
properties of the interaction graphs lead to stability and stabilizability? The thesis
initiates a systematic inquiry into this question and provides rigorous justifications.
Firstly, coupled linear systems and coupled nonlinear systems are investigated. Nec-
essary and sufficient conditions in terms of the connectivity of the interaction directed
graphs are derived to ensure that the equilibrium subspace is (globally uniformly) at-
tractive for systems with both fixed and dynamic interaction structures. We apply the
results to several analysis and control synthesis problems, including problems in synchro-
nization of coupled Kuramoto oscillators, biochemical reaction networks, and the synthesis
of rendezvous controllers for multi-agent systems. Secondly, the stabilizability problem
of coupled kinematic unicycles is investigated when only local information is available.
Necessary and sufficient graphical conditions are obtained to determine the feasibility of
certain formations (point formations and line formations). Furthermore, we show that
under a certain graphical condition, stabilization of the vehicles to any geometric formation
is also feasible provided the vehicles have a common sense of direction.
Acknowledgements
During the last four years as a PhD student in the Systems Control Group, University
of Toronto, I have had the chance to meet and interact with many intelligent and inspiring
people. This journey would have been much harder and a lot less fun without them. For
this, I must sincerely thank all the wonderful people I have met.
First of all, my deepest gratitude must go to my PhD advisors, Professor Bruce Fran-
cis and Professor Manfredi Maggiore, who are, by all means, devoted teachers, caring
mentors, and role models to me. Their knowledge, insight, vision, inspiration, and en-
couragement have guided me through the wonderland of science and have taken me to
the frontier of scientific inquiry. Their pleasant personalities and enthusiasm for re-
search have certainly made this journey extremely enjoyable. I will always be indebted
to them for everything they have taught and given to me.
Secondly, it is with great pleasure that I acknowledge Professor Mireille E. Broucke
and express my gratitude for her help and critical comments on my thesis. I would
like to thank Professor Edward J. Davison and Professor Gabriele M. T. D’Eleuterio for
serving on my committee. Also, I would like to thank Professor P. S. Krishnaprasad of
the Department of Electrical and Computer Engineering at the University of Maryland,
for serving as an external appraiser.
In addition, thanks are also due to my friends in and around the Systems Control
Group for lots of good conversation and for helping to provide an exciting research
environment. It is always a blessing to be surrounded by a big group of intelligent and
pleasant people, for there is never a lack of excellent research partners and wonderful friends.
Finally, I am forever indebted to my loving wife, Diyang, and our lovely little girl,
Ivy. It is they who make everything worthwhile.
Contents
1 Introduction 1
1.1 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Thesis Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2 Digraphs and Matrices 9
2.1 Introductory Graph Theory . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.1 Digraphs, Neighbors, Degrees . . . . . . . . . . . . . . . . . . . . 10
2.1.2 Walks, Paths, Cycles . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1.3 Connectedness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.4 Operations on Digraphs . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.5 Dynamic Digraphs . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.6 Undirected Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.7 A Result on Digraphs . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2 Nonnegative Matrices and Graph Theory . . . . . . . . . . . . . . . . . . 22
2.2.1 Nonnegative Matrices, Adjacency Matrices, and Digraphs . . . . . 23
2.2.2 Irreducible Matrices . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.2.3 Primitive Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.2.4 Stochastic Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.2.5 SIA Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.2.6 Wolfowitz Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.3 Generator Matrices and Graph Theory . . . . . . . . . . . . . . . . . . . 44
2.3.1 Metzler Matrices and M-Matrices . . . . . . . . . . . . . . . . . . 44
2.3.2 Generator Matrices, Graph Laplacians, and Digraphs . . . . . . . 45
2.3.3 The Transition Matrix of a Generator Matrix . . . . . . . . . . . 48
2.3.4 The Zero Eigenvalue of Generator Matrix . . . . . . . . . . . . . . 49
2.3.5 H(α,m) Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3 Coupled Linear Systems 59
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.2 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.3 Coupled Linear Systems with Fixed Topology . . . . . . . . . . . . . . . 66
3.3.1 Cyclic Coupling Structure . . . . . . . . . . . . . . . . . . . . . . 67
3.3.2 Generalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.4 Coupled Linear Systems with Dynamic Topology . . . . . . . . . . . . . 71
3.4.1 Symmetric Coupling Structure . . . . . . . . . . . . . . . . . . . 72
3.4.2 Generalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.5 Examples and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.5.1 Time Dependent Switching . . . . . . . . . . . . . . . . . . . . . . 83
3.5.2 Time and State Dependent Switching . . . . . . . . . . . . . . . . 84
3.5.3 State Dependent Switching . . . . . . . . . . . . . . . . . . . . . 87
3.5.4 Switched Positive Systems . . . . . . . . . . . . . . . . . . . . . . 89
4 Coupled Nonlinear Systems 94
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.2 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.3 Mathematical Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.3.1 Convex Set and Tangent Cone . . . . . . . . . . . . . . . . . . . . 101
4.3.2 The Dini Derivative . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4.3.3 The Invariance Principle . . . . . . . . . . . . . . . . . . . . . . . 106
4.4 Coupled Nonlinear Systems: Fixed Topology . . . . . . . . . . . . . . . . 106
4.5 Coupled Nonlinear Systems: Dynamic Topology . . . . . . . . . . . . . . 114
4.5.1 Set Invariance and Uniform Stability . . . . . . . . . . . . . . . . 116
4.5.2 Uniform Attractivity . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.5.3 Examples and Further Remarks . . . . . . . . . . . . . . . . . . . 134
4.6 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
4.6.1 Synchronization of Coupled Oscillators . . . . . . . . . . . . . . . 140
4.6.2 Biochemical Reaction Network . . . . . . . . . . . . . . . . . . . . 141
4.6.3 Water Tank Network . . . . . . . . . . . . . . . . . . . . . . . . . 143
4.6.4 Synthesis of Rendezvous Controllers . . . . . . . . . . . . . . . . . 146
5 Coupled Kinematic Unicycles 151
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
5.2 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
5.2.1 Kinematic Models . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
5.2.2 Information Flow Digraph . . . . . . . . . . . . . . . . . . . . . . 156
5.2.3 Stabilizability of Vehicle Formations . . . . . . . . . . . . . . . . 157
5.3 Stabilization of Point Formations . . . . . . . . . . . . . . . . . . . . . . 158
5.3.1 Main Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
5.3.2 Simulation Example . . . . . . . . . . . . . . . . . . . . . . . . . 164
5.4 Stabilization of Line Formations . . . . . . . . . . . . . . . . . . . . . . . 164
5.4.1 Main Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
5.4.2 Simulation Example . . . . . . . . . . . . . . . . . . . . . . . . . 172
5.5 Stabilization of Any Geometric Formations . . . . . . . . . . . . . . . . . 172
5.5.1 Main Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5.5.2 Simulation Example . . . . . . . . . . . . . . . . . . . . . . . . . 176
6 Conclusions and Future Work 178
6.1 Thesis Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Appendix 181
A Supplementary Material 181
A.1 Set Stability and Attractivity . . . . . . . . . . . . . . . . . . . . . . . . 181
A.2 Averaging Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Bibliography 185
List of Figures
2.1 Digraphs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Walk, semiwalk, path, and cycle. . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Aperiodic and d-periodic digraphs. . . . . . . . . . . . . . . . . . . . . . 13
2.4 Digraphs with different connectivity. . . . . . . . . . . . . . . . . . . . . 14
2.5 Digraph and its opposite digraph. . . . . . . . . . . . . . . . . . . . . . 15
2.6 Digraphs and their union. . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.7 Induced subdigraph, strong component, and closed strong component. . 16
2.8 A switching signal σ(t). . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.9 Digraphs G1 and G2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.10 Undirected graph and bidirectional digraph. . . . . . . . . . . . . . . . . 18
2.11 Associated digraph and adjacency matrix. . . . . . . . . . . . . . . . . . 24
2.12 Gershgorin discs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.13 Associated digraphs (renumbering the nodes). . . . . . . . . . . . . . . . 26
2.14 Associated digraphs of (irreducible and reducible matrices). . . . . . . . 27
2.15 A walk from vi to vi is either a cycle or is generated by a number of cycles. 29
2.16 Associated digraphs of (non-primitive and primitive matrices). . . . . . 31
2.17 An associated digraph having two closed strong components. . . . . . . 34
2.18 Associated digraphs of SIA matrices. . . . . . . . . . . . . . . . . . . . . 40
2.19 Associated digraphs of a generator matrix. . . . . . . . . . . . . . . . . 47
2.20 A digraph. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.21 Gershgorin discs of a generator matrix. . . . . . . . . . . . . . . . . . . 50
2.22 Shifting s units. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.1 Interaction digraphs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.2 A switching signal σ(t). . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.3 Interaction digraphs representing two communication links. . . . . . . . 83
3.4 Cone-like fields of view. . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.5 The initial condition of three robots. . . . . . . . . . . . . . . . . . . . . 86
3.6 Trajectories of five agents and the interaction digraphs. . . . . . . . . . 88
3.7 The disk-like field of view. . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.8 Initial locations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3.9 Range is 30. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3.10 Range is 25. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3.11 Range is 50. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3.12 The interaction digraphs G1 and G2. . . . . . . . . . . . . . . . . . . . . 92
3.13 Time responses and switching signal. . . . . . . . . . . . . . . . . . . . . 92
3.14 Asymptotically stable trajectory. . . . . . . . . . . . . . . . . . . . . . . 93
4.1 A switching signal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.2 The interaction digraphs. . . . . . . . . . . . . . . . . . . . . . . . . . . 100
4.3 The set S, lin(S), ri(S), and rb(S). . . . . . . . . . . . . . . . . . . . . 102
4.4 Tangent cones T (x1,S) and T (x2,S) are obtained by translation of “T (x1,S)”
and “T (x2,S)” to the origin. . . . . . . . . . . . . . . . . . . . . . . . . 103
4.5 Properties of tangent cones to convex sets. . . . . . . . . . . . . . . . . 104
4.6 Two examples of vector fields f i satisfying assumption A2′. . . . . . . . 107
4.7 Illustrations for Ba(x) and f i(x). . . . . . . . . . . . . . . . . . . . . . . 109
4.8 A point q in M ′ but not in Ω. . . . . . . . . . . . . . . . . . . . . . . . . 111
4.9 Some examples of vector fields f ip satisfying assumption A2. . . . . . . . 115
4.10 Illustration for the equaled point. . . . . . . . . . . . . . . . . . . . . . . 120
4.11 Illustration for notations. . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.12 Illustration for Lemma 4.9. . . . . . . . . . . . . . . . . . . . . . . . . . 123
4.13 Illustration for Lemma 4.10. . . . . . . . . . . . . . . . . . . . . . . . . 126
4.14 A possible element in Oς(i, p, δ). . . . . . . . . . . . . . . . . . . . . . . 127
4.15 The time interval [t′, t′ + T ]. . . . . . . . . . . . . . . . . . . . . . . . . 132
4.16 A distribution of agents at time t′. . . . . . . . . . . . . . . . . . . . . . 132
4.17 The interaction digraphs Gp, p = 1, 2, 3. . . . . . . . . . . . . . . . . . . 135
4.18 Time evolution of three coordinates not tending to a common value. . . 136
4.19 A smooth function g(y). . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
4.20 Time evolution of two coordinates not tending to a common value. . . . 139
4.21 Three interaction digraphs Gp, p = 1, 2, 3. . . . . . . . . . . . . . . . . . 141
4.22 Synchronization of three oscillators with a dynamic interaction structure. 142
4.23 A tank of water. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
4.24 Two identical coupled tanks. . . . . . . . . . . . . . . . . . . . . . . . . 144
4.25 A network of water tanks. . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4.26 The smallest enclosing circle. . . . . . . . . . . . . . . . . . . . . . . . . 149
5.1 Wheeled vehicle. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
5.2 Frenet-Serret frame. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
5.3 Local information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
5.4 The information flow digraph. . . . . . . . . . . . . . . . . . . . . . . . 165
5.5 Trajectories of ten unicycles in the plane. . . . . . . . . . . . . . . . . . . 165
5.6 The information flow digraph. . . . . . . . . . . . . . . . . . . . . . . . 169
5.7 A uniform distribution on a line. . . . . . . . . . . . . . . . . . . . . . . 172
5.8 Vehicles in formation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
5.9 A common sense of direction. . . . . . . . . . . . . . . . . . . . . . . . . 174
5.10 A circle formation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
A.1 Stability, attractivity, and global attractivity with respect to Ω. . . . . . 184
Chapter 1
Introduction
Suppose that a large group of soldiers are scattered in a foggy battlefield,
where visibility is limited to only, say 20 meters. For instance, a soldier may
(faintly) see three other soldiers, but he might lose sight of them if he moves
even slightly. Under such a circumstance, is it possible for the soldiers to
gather, silently, at a single location? [5]
This thesis answers this and related questions using a formal model of coupled dynamic
systems. A coupled dynamical system is one composed of subsystems (or agents) with
coupling, that is, the states of certain agents affect the time-evolution of others. In this
thesis, special attention is given to the class of coupled dynamic systems for which the
equilibrium set is described by all states having identical state components. Stability
and stabilizability problems of these systems with respect to the equilibrium set are the
main issues. Central to these problems is the interaction structure among them—that is,
who is coupled to whom. The main goal of the thesis is to determine how the interaction
structure affects stability and stabilizability of the systems in a continuous time setup.
For instance, the problem of gathering a swarm of robots in a small region or a point
on the plane is within this framework involving the structure of the couplings among
them. Suppose ideally that each robot has infinite range visibility and each one takes an
action to move towards certain neighbors at all times. This gives rise to a fixed (static)
interaction topology. However, the more interesting and realistic situation is when each
robot has a limited field of view. A dynamic interaction topology then arises, since
robots may come into and go out of each other's view. Our primary interest is in
the connectivity properties of interaction structures that permit the solvability of the
gathering problem. This problem also arises in the more general notion of consensus or
agreement: A group of autonomous and distributed automata should come to agree on a
piece of information. In addition, it is linked to the formation problem, i.e., the problem
of arranging multiple robots in a certain spatial configuration. In a different context, a
system of coupled oscillators is another example in this framework where the coupling
structure plays a major role in synchronization.
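As a concrete (and deliberately simplified) illustration of the dynamics underlying these problems, the following sketch Euler-integrates the coupled linear system x' = -Lx, where L is the Laplacian of a fixed interaction digraph. The particular digraph, step size, and initial states are illustrative assumptions, not the thesis's formal model.

```python
import numpy as np

# Hypothetical interaction digraph on 4 agents arranged in a directed
# cycle: arc (j, i) means agent i is influenced by agent j's state.
arcs = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

# Graph Laplacian L = D - A of the interaction digraph.
A = np.zeros((n, n))
for j, i in arcs:
    A[i, j] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Forward-Euler integration of the consensus dynamics x' = -L x.
x = np.array([1.0, -2.0, 0.5, 3.0])
dt = 0.01
for _ in range(5000):
    x = x - dt * (L @ x)

print(x)  # all components approach the initial average, 0.625
```

For this cyclic (hence strongly connected) digraph all states converge to a common value; the thesis characterizes exactly which connectivity properties make this happen, including for time-varying digraphs.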
1.1 Literature Review
The model we study encompasses, or is closely related to, several models reported in the
literature.
A prominent and well-studied example concerns synchronization, a phenomenon found every-
where in nature, with several applications in physics and engineering. Synchronous
motion was probably first reported by Huygens ([46], 1673). The subject of synchro-
nization has received huge attention in recent decades. For example, arrays of chaotic
systems are studied in [14, 15, 85, 121, 122]. For coupled nonlinear oscillators, a seminal
study to understand synchronization was done by Kuramoto in [59]; the work is reviewed
in [104, 105]; more recently, the problem has been reinvestigated from the viewpoint of
system control in [49, 101]. In addition, synchronization of mechanical systems is dealt
with in [79].
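For reference, the classical all-to-all Kuramoto model mentioned above takes the form theta_i' = omega_i + (K/N) sum_j sin(theta_j - theta_i). A minimal simulation (the frequencies, coupling gain, and step size below are illustrative assumptions; the thesis treats the harder case of time-varying coupling in Chapter 4) shows the onset of synchronization:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, dt = 10, 2.0, 0.01
omega = rng.normal(0.0, 0.1, N)          # natural frequencies (illustrative)
theta = rng.uniform(0, 2 * np.pi, N)     # random initial phases

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r near 1 means the phases are synchronized."""
    return abs(np.exp(1j * theta).mean())

for _ in range(10000):
    # All-to-all coupling: each oscillator is pulled toward the phases of the others.
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta = theta + dt * (omega + (K / N) * coupling)

print(order_parameter(theta))  # close to 1 once K exceeds the spread of omega
```

With the coupling gain well above the spread of the natural frequencies, the oscillators frequency-lock and the order parameter approaches 1.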
Another interesting example concerns swarming, an emergent collective behavior ob-
served for a variety of organisms in nature such as ants, fishes, birds, and bacteria.
Through simple local agent interactions, desired cooperative behaviors emerge. Biolo-
gists have been working on understanding and modelling of group behavior for a long
time. See for example [21], [83], and references therein (some of which date back to the
1920’s). The work by Breder [21] is one of the early efforts to develop mathematical
models of the schooling behavior in fish. He suggests a simple model composed of at-
traction and repulsion components. Recently, in [113], Vicsek et al. proposed a simple
but compelling discrete-time model of n autonomous agents (i.e., points or particles that
they usually call self-driven or self-propelled particles), and studied the collective behavior
arising from their interaction. They assume that particles move with constant ab-
solute velocity and that at each time step each one travels in the average direction of motion
of the particles in its neighborhood with some random perturbation. In their paper,
Vicsek et al. provide a variety of interesting simulation results which demonstrate that
the nearest neighbor rule they are studying can cause all agents to eventually move in
the same direction despite the absence of centralized coordination and despite the fact
that each agent’s set of nearest neighbors changes with time as the system evolves. An
earlier model was introduced by Reynolds [88] in 1987. Reynolds wrote a program called
boids [87] that simulates the motion of a flock of birds; they fly as a flock, with a common
average heading, and they avoid colliding with each other. Each bird has a local control
strategy—there’s no leader broadcasting instructions—yet a desirable overall group be-
havior is achieved. The local strategy of each bird has three components: 1) separation,
steer to avoid crowding local flock mates; 2) alignment, steer towards the average heading
of local flock mates; 3) cohesion, steer to move toward the average position of local flock
mates. Recently, Jadbabaie et al. [48] studied the second of these strategies and proved
mathematically, under some conditions, that all agents will eventually move in a com-
mon direction. Thus, a local strategy produces a global objective. In addition, in [38, 39, 68],
stability of synchronous and asynchronous swarms with a fixed communication topology
is studied, where stability is used to characterize the cohesiveness of a swarm.
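The nearest neighbor rule sketched above can be reproduced in a few lines. In the sketch below, all parameter values are illustrative assumptions, not those of [113], and distances are measured without the periodic wrap-around, a further simplification.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, v, eta, box = 50, 1.0, 0.1, 0.1, 5.0  # agents, radius, speed, noise, box size

pos = rng.uniform(0, box, (n, 2))
theta = rng.uniform(-np.pi, np.pi, n)

for _ in range(800):
    # Each agent adopts the average heading of all agents within radius r
    # (itself included), plus a small random perturbation.
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    nbr = d < r
    avg = np.arctan2((nbr * np.sin(theta)).sum(axis=1),
                     (nbr * np.cos(theta)).sum(axis=1))
    theta = avg + rng.uniform(-eta / 2, eta / 2, n)
    pos = (pos + v * np.column_stack((np.cos(theta), np.sin(theta)))) % box

# Polar order parameter: equals 1 when all agents share one heading.
order = abs(np.exp(1j * theta).mean())
print(order)
```

For low noise the order parameter typically climbs toward 1, matching the flocking behavior reported in [113], even though each agent's neighbor set changes as the system evolves.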
More recently, coordination and control of multi-agent/multi-vehicle systems in the
framework of coupled dynamic systems have attracted increasing interest in the field of
system control and robotics. Several researchers began investigating distributed algo-
rithms for multi-agent systems in the early 1990s. In [106] a group of simulated robots
forms approximations to circles and simple polygons using the scenario that each robot
orients itself to the furthest or nearest robot. In [4, 5, 80, 108], some distributed algo-
rithms are proposed with the objective of getting the robots to congregate at a common
location (rendezvous). These algorithms were extended to various synchronous and asyn-
chronous stop-and-go strategies in [24, 64, 65]. In addition to these modelling and simu-
lation studies, research papers focusing on the detailed mathematical analysis of coupled
dynamic systems began to appear. Theoretical development of information consensus
and agreement among a network of agents is made in discrete time [48,75], in continuous
time [12,43,66,76,82,86,95], and in a quantized data communication setup [51,100]. Sta-
bilization of vehicle formations with linear dynamics is studied in [33,34,62,92–94] using
potential functions. For vehicles with nonholonomic constraints, achievable equilibrium
formations are explored in [53, 67, 70–72, 109, 110, 123, 124]. Other relevant references
on formation control are [27–31, 35, 74, 81, 116, 117]. More detailed discussion of these
references is postponed to appropriate chapters.
1.2 Thesis Outline
The main body of the thesis consists of four chapters.
• Digraphs and matrices. (Chapter 2)
• Coupled linear systems. (Chapter 3)
• Coupled nonlinear systems. (Chapter 4)
• Coupled kinematic unicycles. (Chapter 5)
Chapter 2 essentially covers the necessary preliminary work on graph theory and matrix
theory. It is broken into three main sections. Section 2.1 (especially subsections 2.1.3,
2.1.4, 2.1.5, and 2.1.7) is necessary and important for the whole thesis. Sections 2.2 and
2.3 (especially subsections 2.2.5, 2.2.6, 2.3.3, and 2.3.4) are very important in the
presentation of Chapter 3. Section 2.3 (especially subsection 2.3.5) is useful in Chapter
5. Since all the material in Chapter 2 is tightly related and can be treated as a subject
independent of the remaining chapters, contributing to the research areas of graph theory
and nonnegative matrix theory, we keep it in one chapter.
Chapter 3 formulates the problem for coupled linear systems and studies their stability
and attractivity properties in terms of interaction structures.
Chapter 4 generalizes the problem of Chapter 3 to coupled nonlinear systems and
studies their stability and attractivity properties in terms of interaction structures.
Chapter 5 formulates the stabilizability problem for coupled kinematic unicycles and
derives necessary and/or sufficient conditions for stabilization of vehicle formations in
terms of information flow structures.
1.3 Thesis Contributions
Coupled dynamic systems pose many interesting research problems that have remained
open. This thesis concentrates on a particular class of continuous-time coupled dy-
namic systems. The objective is to ensure the asymptotic coincidence of all states of
the subsystems. These systems are often encountered in physics, biology, and engineer-
ing applications such as synchronization, swarming, and multi-vehicle cooperation. By
formulating these problems mathematically, we explore two issues from a structural point of
view: the stability and stabilizability problems of coupled dynamic systems with respect
to an invariant set. Rather than focusing on a single application area, we consider a
general formalism for such problems and perform a systematic inquiry into it. We not
only investigate coupled dynamic systems with static (time-invariant) coupling structure,
but also study coupled dynamic systems with dynamic (time-varying) coupling structure.
The reason is that many real systems from different fields give rise to a
dynamic interaction structure.
The contribution of each chapter can be summarized as follows.
• Chapter 2: In this chapter we collect relevant common notions in graph theory and
define several new terms in order to better suit our development. Concentrating
on the deeper connections between nonnegative matrices and directed graphs, and
between generator matrices and directed graphs, we develop and present several
new results for digraphs, nonnegative matrices, and especially generator matrices,
which not only are important in the presentation of the remaining chapters, but
also may be of independent interest in the areas of graph theory and nonnegative
matrix theory.
• Chapter 3: In this chapter we study coupled linear systems with static and dynamic
interaction structure. Precursors for this chapter are [48, 75, 88, 113]. In [113],
Vicsek et al. propose a simple but compelling discrete-time model of a group of
autonomous agents and provide a variety of interesting simulation results showing
that if each agent updates its heading based on its neighbors’ headings, then all
agents will eventually move in the same direction. The Vicsek model turns out
to be a special version of an earlier model introduced by Reynolds in [88]. Later,
Jadbabaie et al. [48] studied this model, assuming that the interaction structure can
be modelled as an undirected graph, and provided a sufficient graphical condition to
explain the observed behaviors. With different motivations, several investigations
[12, 82, 86, 95] have addressed the consensus-seeking problem in a continuous
time setup. The two setups are related, but there is a major difference.
That is, the discretization of a continuous-time system at the switching times gives
rise to infinitely many transition matrices governing the trajectory evolution, while
in a discrete time setup, it is assumed that the system matrix switches among a
finite family of matrices. In this chapter, we investigate coupled linear systems in
continuous time with different coupling structures. This chapter presents the most
general result in continuous time to date, namely, a necessary and sufficient condition
for the coupled linear system to be globally attractive with respect to an equilibrium
subspace, where the interaction structure is time-varying and directed. The proof
technique is borrowed from [48] and redeveloped in the thesis in order to better suit
our application (dealing with directed graphs and infinitely many different matrices).
This generalization is one of the important insights contributed by the thesis.
• Chapter 4: In this chapter we study coupled nonlinear systems which are the gen-
eralization of the linear systems studied in Chapter 3. A key previous relevant
paper is [77], where a nonlinear discrete-time interconnected system is studied. A
central assumption in that paper is that each subsystem’s updated state at next
step is constrained to be a strict convex combination of the current states of itself
and its neighbors. Necessary and sufficient conditions on the interconnection topol-
ogy guaranteeing convergence of the individual agents’ states to a common value
are thereby obtained. We develop in this chapter a continuous-time counterpart of
the results in [77] on discrete-time coupled systems. However, continuous time is
more challenging than discrete time; for example, existence and/or uniqueness of
solutions may fail, Lyapunov-like functions may not be differentiable, and switching
may cause Zeno phenomena. These and related technical difficulties leave many
questions open in coupled nonlinear continuous-time systems. Adapting the
assumption in [77] for discrete-time systems, we impose some reasonable
assumptions on the vector fields of the individual systems, which are satisfied in
a variety of interesting real systems. Under these assumptions, we show that the
coupled nonlinear continuous-time system with dynamic coupling structure is uniformly
attractive with respect to an equilibrium subspace if and only if a certain graphical
condition holds, which is the same as the one in the discrete-time counterpart.
For the coupled system with fixed coupling structure, under some less
conservative assumptions, we also obtain a necessary and sufficient condition guar-
anteeing its attractivity. Our proof uses entirely different tools from those in [77];
nonsmooth analysis serves as the main technique. Our main results
are applied to the analysis of synchronization of coupled Kuramoto oscillators
with time-varying coupling, to the analysis of a biochemical reaction network and a
water tank network, and to the synthesis of rendezvous controllers for multi-agent
systems, which provides the first solution to the rendezvous problem in
continuous time.
• Chapter 5: In this chapter we investigate the stabilizability problem of group vehicle
formations. Unlike the coupled dynamic systems studied in Chapters 3 and 4, the
subsystems (vehicles) are indeed dynamically decoupled. However, they are coupled
through information flow and output feedback controllers in order to achieve certain
desired objectives. This naturally gives rise to the question: Under what conditions
does there exist a controller based on locally available information to solve certain
problems, such as stabilizing a formation? This chapter is the first work to solve
this problem and establishes necessary and sufficient conditions for stabilization of
point formations and line formations. In addition, under the assumption that the
group of vehicles have a common sense of direction, stabilization of vehicles to any
geometric formation is also feasible provided the condition for a point formation
holds.
• Chapter 6: In this chapter we review the results presented in this thesis and discuss
future research.
Chapter 2
Digraphs and Matrices
The main purpose of this chapter is to provide a mathematical foundation, based on the
theory of graphs and nonnegative matrices. We shall strive for rigor in presentation and
shall not discuss the applicability of the concepts to the real world; this is postponed
to later chapters, where we apply the results developed here to various aspects of
system analysis.
In this chapter we begin by surveying some basic notions from graph theory [10, 36]
and develop a very important result about connectivity of digraphs. Next, we explore
the theory of nonnegative matrices with emphasis on the deeper connections between
nonnegative matrices and directed graphs. An excellent reference on nonnegative matri-
ces is [17]. We develop several new results on this topic, which will become useful in the
proof of a key result in Chapter 3. These results can be viewed as a complement, of
independent interest, to the results in [17]. Finally, we study in detail the exponential,
zero eigenvalues, and stability issues of generator matrices. Several new results are
derived that will be important in the presentation of Chapters 3 and 5.
2.1 Introductory Graph Theory
2.1.1 Digraphs, Neighbors, Degrees
A directed graph (or just digraph) G consists of a non-empty finite set V of elements called
nodes and a finite set E of ordered pairs of nodes called arcs (see Fig. 2.1). We call V
the node set and E the arc set of G. We will often write G = (V , E), which means that V
and E are the node set and arc set of G, respectively.
Figure 2.1: Digraphs.
For an arc (u, v) the first node u is its tail and the second node v is its head. We
also say that the arc (u, v) leaves u and enters v. The head and tail of an arc are its
end-nodes. A loop is an arc whose end-nodes are the same node. An arc is multiple if
there is another arc with the same end-nodes. A digraph is simple if it has no multiple
arcs or loops.
For example, consider the digraphs represented in Fig. 2.1. Here, digraph (a) is
simple; digraph (b) has multiple arcs, namely, (v3, v1); and digraph (c) has a loop, namely,
(v2, v2).
In what follows, unless otherwise specified, a digraph G = (V , E) is always assumed
to be simple.
The local structure of a digraph is described by the neighborhoods and the degrees
of its nodes. For a digraph G = (V , E) and a node v in V , we use the following notation:
N+_v = { u ∈ V − {v} : (v, u) ∈ E },    N−_v = { u ∈ V − {v} : (u, v) ∈ E }.
The sets N+_v and N−_v are called the out-neighborhood and in-neighborhood of v,
respectively. We call the nodes in N+_v and N−_v the out-neighbors and in-neighbors
of v. The out-degree, d+_v, of a node v is the cardinality of N+_v. Correspondingly,
the in-degree, d−_v, of a node v is the cardinality of N−_v. In symbols, d+_v = |N+_v|
and d−_v = |N−_v|.
As an illustration, consider digraph (a) in Fig. 2.1, in which we have, for the node v1,
N+_{v1} = {v2, v3}, N−_{v1} = {v4}, and d+_{v1} = 2, d−_{v1} = 1.
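These definitions translate directly into code. The sketch below computes the neighborhoods and degrees from an arc set; the arc set used is hypothetical, chosen only to be consistent with the values stated above for v1 in digraph (a) of Fig. 2.1 (the figure itself is not reproduced here).

```python
def neighborhoods(nodes, arcs):
    """Out- and in-neighborhoods of every node of a simple digraph.

    `arcs` is a set of ordered pairs (u, v), matching the definition
    of the arc set E above.
    """
    n_out = {v: set() for v in nodes}
    n_in = {v: set() for v in nodes}
    for u, v in arcs:
        n_out[u].add(v)   # the arc (u, v) leaves u ...
        n_in[v].add(u)    # ... and enters v
    return n_out, n_in

# Hypothetical arc set, consistent with the stated example for v1.
nodes = ["v1", "v2", "v3", "v4"]
arcs = {("v1", "v2"), ("v1", "v3"), ("v4", "v1"), ("v2", "v3")}
n_out, n_in = neighborhoods(nodes, arcs)
d_out = {v: len(n_out[v]) for v in nodes}   # out-degrees d+_v
d_in = {v: len(n_in[v]) for v in nodes}     # in-degrees d-_v
```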
2.1.2 Walks, Paths, Cycles
A walk in a digraph G is an alternating sequence
W : v1e1v2e2 · · · ek−1vk
of nodes vi and arcs ei such that ei = (vi, vi+1) for every i = 1, 2, . . . , k − 1. We say that
W is a walk from v1 to vk. The length of a walk is the number of its arcs. Hence the
walk W above has length k − 1. A semiwalk in a digraph G is an alternating sequence
v1e1v2e2 · · · ek−1vk
of nodes and arcs such that ei = (vi, vi+1) or ei = (vi+1, vi) for every i = 1, 2, . . . , k − 1.
If the nodes of a walk W are distinct, W is a path. If the nodes v1, . . . , vk−1 are
distinct and v1 = vk, W is a cycle. Since paths and cycles are special cases of walks, the
length of a path and a cycle is already defined. Cycles of length 1 are loops. A digraph
without cycles is said to be acyclic.
These concepts are now illustrated. For the digraph in Fig. 2.2,
v1(v1, v3)v3(v3, v4)v4(v4, v5)v5
is not only a walk but also a path from v1 to v5, and
v1(v1, v3)v3(v3, v2)v2(v2, v1)v1
Chapter 2. Digraphs and Matrices 12
PSfrag replacements
v1
v2
v3 v4
v5
v6
Figure 2.2: Walk, semiwalk, path, and cycle.
is not only a walk from v1 to v1 but also a cycle. However,
v1(v1, v3)v3(v3, v2)v2(v2, v1)v1(v1, v3)v3
is just a walk from v1 to v3 which is neither a path nor a cycle. In addition,
v1(v1, v3)v3(v3, v4)v4(v6, v4)v6
is not a walk but a semiwalk from v1 to v6. Nevertheless, every walk is a semiwalk.
When a digraph G is not acyclic, the period d of G is defined as the greatest common
divisor of the lengths of all cycles in G. We call the digraph d-periodic if d > 1 and
aperiodic if d = 1. For each node vi in G, let Si be the set of lengths of all walks
from vi to vi and define di = gcd{m : m ∈ Si}, the greatest common divisor of these
lengths, called the period of the node vi. We call the node vi di-periodic if di > 1 and
aperiodic if di = 1. When we speak of the period of a node vi, it is implicitly assumed
that there is a cycle through vi.
As an example, three digraphs are given in Fig. 2.3. Clearly, the digraph (a) is aperiodic
since it has a loop. For the digraph (b), there are two cycles: one of them is of length 3
and the other is of length 4, so it is also aperiodic. However, for the digraph (c), there
are still two cycles but the lengths are 3 and 6, respectively, so it is 3-periodic. Now we
look at the node v1 in the digraph (a). The lengths of walks from v1 to v1 are 2, 4, 6,
. . . : a walk starting and ending at v1 may traverse v2 and return any number of
times, so the node v1 has period 2, which is different
Figure 2.3: Aperiodic and d-periodic digraphs.
from the period of the digraph. However, if the digraph is strongly connected, then the
period of every node is the same as the period of the digraph, which we will show later
on. For example, in the digraph (c), we choose any node, say v4. The lengths of walks
from v4 to v4 are 6, 9, 12, 15, . . . , which are actually nonnegative combinations of the
lengths of two cycles, 3 and 6. Hence, the node v4 is of period 3, equaling the period of
the digraph.
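The period of a node can be computed mechanically, using the fact (Theorem 2.4 below, stated for nonnegative matrices) that a closed walk of length k at node i exists exactly when the (i, i) entry of the kth boolean power of the adjacency matrix is nonzero. A minimal sketch; the digraph below is a hypothetical strongly connected one with cycles of lengths 3 and 6, mimicking the situation of digraph (c).

```python
from math import gcd

def node_period(adj, i, max_len=50):
    """gcd of the lengths of closed walks at node i, scanned up to max_len.

    adj is a 0/1 adjacency matrix; a closed walk of length k at i exists
    iff the (i, i) entry of the kth boolean power of adj is nonzero.
    Scanning up to max_len is adequate for small digraphs.
    """
    n = len(adj)
    power = [row[:] for row in adj]          # boolean adj^1
    d = 0
    for k in range(1, max_len + 1):
        if power[i][i]:
            d = gcd(d, k)                    # found a closed walk of length k
        power = [[int(any(power[r][m] and adj[m][c] for m in range(n)))
                  for c in range(n)] for r in range(n)]   # adj^(k+1)
    return d

# Hypothetical digraph with cycles of lengths 3 and 6 through node 0.
arcs = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 0)]
adj = [[0] * 6 for _ in range(6)]
for u, v in arcs:
    adj[u][v] = 1
two_cycle = [[0, 1], [1, 0]]   # digraph (a)-style pair: node period 2
```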
2.1.3 Connectedness
One of the most important graph theoretic concepts is that of connectedness. We now
introduce some of the ideas concerned with this aspect of digraph structure.
For a digraph G, if there is a walk from one node u to another node v, then v is said
to be reachable from u, written u → v. If not, then v is said to be not reachable from
u, written u ↛ v. In particular, a node v is reachable from itself, recalling that the
sequence v is a trivial walk of length 0.
A node v which is reachable from every node of the digraph G is called a globally
reachable node of the digraph. A node v from which every node of the digraph G is
reachable is called a centre node of the digraph.
A digraph G is fully connected if for every two nodes u and v there are an arc from
u to v and an arc from v to u; G is strongly connected if every two nodes u and v are
mutually reachable; G is unilaterally connected if for every two nodes u and v at least one
is reachable from the other; G is quasi strongly connected (QSC) if for every two nodes
u and v there is a node w from which u and v are reachable; G is weakly connected if
every two nodes u and v are joined by a semiwalk (disregarding the orientation of each
arc). A digraph G is disconnected if it is not even weakly connected. It is easy to see
that G is strongly connected if and only if every node of G is a globally reachable node,
or equivalently every node of G is a centre node. Clearly, a digraph consisting of only
one node is always strongly connected since the node is reachable from itself.
Figure 2.4: Digraphs with different connectivity.
Fig. 2.4 shows: (a) a fully connected digraph, (b) a strongly connected digraph, (c)
a unilaterally connected digraph, (d) a quasi strongly connected digraph, (e) a weakly
connected digraph, (f) a disconnected digraph.
Clearly, every fully connected digraph is strongly connected, every strongly connected
digraph is unilaterally connected, every unilaterally connected digraph is QSC, and every
QSC digraph is weakly connected, but the converses of these statements are not true in
general. Hence these kinds of connectedness for digraphs form a nested hierarchy.
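The chain of implications can be checked computationally from reachability sets. The sketch below classifies a digraph as strongly, unilaterally, and/or quasi strongly connected; the two example digraphs are hypothetical stand-ins in the styles of Fig. 2.4 (b) and (d), since the figure's arc sets are not reproduced here.

```python
def reachable_from(adj, u):
    """Set of nodes reachable from u (every node reaches itself)."""
    seen, stack = {u}, [u]
    while stack:
        x = stack.pop()
        for y in adj.get(x, []):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def classify(nodes, adj):
    """(strongly, unilaterally, quasi strongly) connected flags."""
    reach = {v: reachable_from(adj, v) for v in nodes}
    strong = all(u in reach[v] for u in nodes for v in nodes)
    unilateral = all(u in reach[v] or v in reach[u]
                     for u in nodes for v in nodes)
    qsc = all(any(u in reach[w] and v in reach[w] for w in nodes)
              for u in nodes for v in nodes)
    return strong, unilateral, qsc

nodes = [1, 2, 3]
cycle = {1: [2], 2: [3], 3: [1]}   # strongly connected, style of Fig. 2.4(b)
join = {1: [2, 3], 2: [], 3: []}   # QSC but not unilaterally connected, style of (d)
```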
2.1.4 Operations on Digraphs
We first study the operation of taking the “converse” of any digraph. We shall see that
this operation, which involves reversing the direction of every arc of a given digraph, sets
the stage for a powerful principle called “directional duality.” This principle will enable
us to establish certain theorems without effort once we have proved other corresponding
theorems. Note that the digraphs of Fig. 2.5 (a) and (b) are related to each other in
a particular way: either one can be obtained from the other simply by reversing the
directions of all arcs. Given a digraph G, its opposite digraph G∗ is the digraph with the
same node set obtained by reversing the orientation of every arc of G.
Figure 2.5: Digraph and its opposite digraph.
Next we study the operation of taking the “union” of any two or more digraphs which
have the same node set. If G = (V , E) and G ′ = (V , E ′) are digraphs with the same node
set V , then their union G ∪ G ′ is the digraph with arc set E ∪ E ′. That is,
G ∪ G ′ = (V , E ∪ E ′) .
Fig. 2.6 provides an example of the union operation for two digraphs, G and G′, with the
same node set {v1, v2, v3}.
Figure 2.6: Digraphs and their union.
Finally, it is sometimes appropriate to examine just part of a digraph. This can be
done in the following way. For a digraph G = (V , E), if U is a nonempty subset of V , then
the digraph (U, E ∩ (U × U)) is termed the induced subdigraph by U. A strong component
of a digraph G = (V, E) is a maximal induced subdigraph of G which is strongly connected
(a maximal such subdigraph is not unique in general). If G1 = (V1, E1), . . . , Gk = (Vk, Ek)
are the strong components of G = (V, E), then clearly V1 ∪ · · · ∪ Vk = V (recall that a
digraph with only one node is strongly connected). Moreover, we must have Vi ∩ Vj = ∅
for every i ≠ j, as otherwise all the nodes in Vi ∪ Vj would be reachable from each other,
implying that the nodes of Vi ∪ Vj belong to the same strong component of G. In other
words, every node belongs to exactly one strong component of G.
On the other hand, for a digraph G = (V , E), a nonempty node set U ⊆ V is closed if
the node v is not reachable from u for all u ∈ U and v ∈ V − U . In other words, there is
no arc leaving from the node set U . In particular, U = V is closed. A strong component
G1 = (V1, E1) of a digraph G is closed if V1 is closed in G. In fact, the subdigraph
induced by a minimal closed node set of G is a closed strong component of G.
Fig. 2.7 provides examples of induced subdigraphs, G1, G2, and G3, of the digraph
G, where G1 is not a strong component, G2 is a strong component but is not closed,
and G3 is a closed strong component.
Figure 2.7: Induced subdigraph, strong component, and closed strong component.
2.1.5 Dynamic Digraphs
We introduce the notion of a dynamic digraph, which is a digraph whose connectivity
changes over time. Consider a set of n nodes, V, and a collection of possible arc sets
on these nodes, {Ep : p ∈ Q}, where Q is a set of indices. A dynamic digraph
Gσ(t) = (V, Eσ(t)) is a digraph together with a piecewise constant function σ : R → Q,
which is called a switching signal.
Given a dynamic digraph Gσ(t) = (V, Eσ(t)), we denote by G([t1, t2]) the union digraph
whose arcs are obtained from the union of the arcs of Gσ(t) over the time interval [t1, t2],
that is,
G([t1, t2]) = ( V, ⋃_{t ∈ [t1, t2]} Eσ(t) ).
It is also convenient to introduce here an important connectedness property for dynamic
digraphs. A dynamic digraph Gσ(t) is uniformly quasi strongly connected (UQSC)
(respectively, uniformly strongly connected) if there exists T > 0 such that for all t, the
union digraph G([t, t + T]) is quasi strongly connected (respectively, strongly connected).
Example 2.1 Consider a dynamic digraph Gσ(t), where the switching signal σ(t) is a
periodic piecewise constant signal σ : R → Q = {1, 2} depicted in Fig. 2.8, and G1, G2
are shown in Fig. 2.9. It can be easily verified that there is a T = 3 such that for all
t, the union digraph G([t, t + T]) = G1 ∪ G2 is QSC. Hence the dynamic digraph Gσ(t) is
UQSC.
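Forming a union digraph is a simple set union over a time window. A minimal sketch; the switching pattern and the single-arc digraphs below are hypothetical stand-ins for Fig. 2.8 and Fig. 2.9, which are not reproduced here.

```python
def union_digraph(arc_sets, sigma, t_grid):
    """Arc set of the union digraph G([t1, t2]), sampled on a time grid.

    arc_sets maps each index p in Q to the arc set E_p; sigma is the
    switching signal. Sampling on a grid is adequate for a piecewise
    constant sigma as long as the grid hits every constancy interval.
    """
    arcs = set()
    for t in t_grid:
        arcs |= arc_sets[sigma(t)]
    return arcs

# Hypothetical sigma of period 3 and single-arc digraphs on {v1, v2, v3}.
arc_sets = {1: {("v1", "v2")}, 2: {("v2", "v3")}}
sigma = lambda t: 1 if (t % 3) < 1 else 2
union = union_digraph(arc_sets, sigma, [0.5 * k for k in range(7)])  # [0, 3]
```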
2.1.6 Undirected Graphs
For completeness, we review some concepts for undirected graphs. Undirected graphs
form in a sense a special class of directed graphs (symmetric digraphs) and hence problems
that can be formulated for both directed and undirected graphs are often easier for the
latter.
An undirected graph G = (V , E) consists of a non-empty finite set V of elements called
nodes and a finite set E of unordered pairs of nodes called edges.
We can simply treat an undirected graph G as a bidirectional digraph by replacing
each edge (u, v) of G with the pair of arcs (u, v) and (v, u). Thus, they are completely
Figure 2.8: A switching signal σ(t).
Figure 2.9: Digraphs G1 and G2.
the same in the sense of connectedness. An example is given in Fig. 2.10. Throughout
the thesis, we will use bidirectional digraphs instead of undirected graphs when such a
structural description is necessary.
Figure 2.10: Undirected graph and bidirectional digraph.
Furthermore, it is worth pointing out that for bidirectional digraphs, the four kinds
of connectedness we introduced in the previous section, namely, strongly connected,
unilaterally connected, quasi strongly connected, and weakly connected, are equivalent,
and they are all referred to as connected in the context of undirected graphs.
2.1.7 A Result on Digraphs
In this subsection we present and prove a fundamental result on connectedness of di-
graphs, which will become extremely important in the necessity proofs of the main results
presented in Chapters 3, 4, and 5.
Theorem 2.1 For a digraph G = (V , E), the following statements are equivalent:
(a) The digraph G is QSC;
(b) The digraph G has a centre node;
(c) The opposite digraph G∗ has a globally reachable node;
(d) The opposite digraph G∗ has only one closed strong component.
In the above theorem, the conditions (a), (b), and (c) are equivalent because they state
the same property in different terminologies. The condition (d) provides a new and
useful characterization of this property. Lacking an appropriate term in graph theory
for this property, we first introduced the notion of a globally reachable node and then
proved the equivalence of conditions (c) and (d), which is also proved independently
in [77] in logically contrapositive form. Later on, we found the notion of quasi strong
connectedness in [16] and became aware that the equivalent conditions (a) and (b)
presented in [16] are just the directional duals of (c) and (d). The proof of the
equivalence of (a) and (b) can be found in [16], page 133, and so is omitted. In order
to prove the remainder, the next preliminary result is needed, which shows the existence
of a closed strong component for any digraph. The proof of the lemma also provides an
algorithm to find a closed strong component.
Lemma 2.1 A digraph G = (V , E) has at least one closed strong component. Further-
more, if a nonempty set U ⊂ V is closed in G, then G has a closed strong component
Gc = (Vc, Ec) satisfying Vc ⊆ U .
Proof: We prove the first assertion by means of a constructive algorithm.
Select any node, say v1, in V. Let V1 be the set of nodes from which v1 is reachable
and let V′1 be the set of nodes which are reachable from v1. Recall that every node is
reachable from itself, so both V1 and V′1 contain the element v1.
Check whether V′1 ⊆ V1.
If so, then the subdigraph G1 induced by V′1 is a closed strong component of G. To
see this, firstly notice that every two nodes u, v ∈ V′1 ⊆ V1 are mutually reachable, since
u → v1 → v and v → v1 → u. So the induced subdigraph G1 is strongly connected.
On the other hand, for all v ∈ V′1 and u ∈ V − V′1, v ↛ u, since otherwise v1 → v → u
and u ∈ V′1. Hence, V′1 is closed, and the subdigraph induced by V′1 ∪ {u} is not strongly
connected, since u is not reachable from any node in V′1. Therefore, G1 is a maximal
induced subdigraph which is strongly connected. In conclusion, G1 is a closed strong
component.
If instead the condition above is false, select any node, say v2, in V − V1. Let V2
be the set of nodes from which v2 is reachable and let V′2 be the set of nodes which are
reachable from v2. Then V′2 must be a subset of V − V1, since otherwise v1 would be
reachable from v2. Check whether V′2 ⊆ V2. If so, the subdigraph induced by V′2 is a
closed strong component of G by the same argument as above. If not, repeat this
procedure until the condition holds. The digraph G has a finite number of nodes and
V′k shrinks at each step, noting that
V′k ⊆ V − V1 − · · · − Vk−1.
So eventually the condition must hold. Indeed, V′m has only one element, vm, at some
step m if the condition is not satisfied before step m. Thus V′m = {vm} ⊆ Vm, recalling
that vm is also an element of Vm. Therefore a closed strong component of G will have
been constructed.
If a nonempty node set U ⊂ V is closed in G, let the subdigraph induced by U
be Gu = (U, Eu). By the first assertion, the digraph Gu has at least one closed strong
component, say Gc = (Vc, Ec). Obviously Vc ⊆ U. It remains to show that Gc is also a
closed strong component of G. Clearly, Gc is an induced subdigraph of G. Moreover,
Vc is closed in G, since no arc leaves Vc within Gu and no arc leaves U in G. Therefore,
Gc is a maximal induced subdigraph of G which is strongly connected. Combining this
with the fact that Vc is closed in G, it follows that Gc is a closed strong component
of G. □
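The constructive procedure in the proof of Lemma 2.1 can be coded directly. A minimal sketch, with the arbitrary node selection made deterministic by taking the smallest remaining node, and the digraph represented as an adjacency dict:

```python
def reachable_from(adj, u):
    """Nodes reachable from u, including u itself."""
    seen, stack = {u}, [u]
    while stack:
        x = stack.pop()
        for y in adj.get(x, []):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def closed_strong_component(nodes, adj):
    """Node set of a closed strong component, per the proof of Lemma 2.1."""
    remaining = set(nodes)
    while True:
        v = min(remaining)                 # select a node not yet excluded
        v_in = {u for u in nodes if v in reachable_from(adj, u)}   # V_k
        v_out = reachable_from(adj, v)                             # V'_k
        if v_out <= v_in:
            return v_out      # V'_k induces a closed strong component
        remaining -= v_in     # next node is chosen outside V_1, ..., V_k

# Node 3 has no outgoing arcs, so {3} is the closed strong component.
example = {1: [2], 2: [1, 3], 3: []}
```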
Proof of Theorem 2.1: (b)⇐⇒ (c) By the definition of opposite digraph, immediately
we know that a centre node of the digraph G is a globally reachable node of the opposite
digraph G∗.
(c) =⇒ (d) If the opposite digraph G∗ has a globally reachable node, let V1 be the
subset of V consisting of all the globally reachable nodes and let G1 be the subdigraph
induced by V1; we claim that G1 is the only closed strong component of G∗. The set
V1 may equal V or be a strict subset of V. In the first case, V1 = V, clearly G1 = G∗ is
strongly connected and so it is the unique closed strong component of G∗. In the second
case, V1 ⊂ V, we have that v is not reachable from u for all u ∈ V1 and v ∈ V − V1,
implying that V1 is closed. (To see this, suppose by contradiction that there are u ∈ V1
and v ∈ V − V1 such that v is reachable from u. Notice that u is a globally reachable
node, so v is also globally reachable, which contradicts the fact that v ∉ V1 is not a
globally reachable node.)
For any two distinct nodes u, v ∈ V1, there is a walk from u to v in G∗ since both nodes
are globally reachable. Furthermore, since V1 is closed, this walk cannot go through any
node not in V1 and must be in the induced subdigraph G1. That means G1 is strongly
connected.
Moreover, since V1 is closed and no node in V −V1 is reachable from any node in V1,
no node can be added to make the induced subdigraph strongly connected. This implies
G1 is the maximal induced subdigraph which is strongly connected. Hence it is a closed
strong component of G∗.
Finally we show it is the unique one in G∗. Suppose by contradiction that there is
another closed strong component in G∗, say G2 = (V2, E2). Recall that V1 ∩ V2 = ∅. Since
V2 is closed by assumption, for any node v ∈ V1 and any node u ∈ V2, v is not reachable
from u, which contradicts the fact that v is a globally reachable node.
(c) ⇐= (d) If the opposite digraph G∗ has only one closed strong component, say G1 =
(V1, E1), we claim that every node in V1 is globally reachable. Suppose by contradiction
that there is a node v ∈ V1 which is not globally reachable. Let V2 be the set of nodes
from which v is reachable and let V3 be the set of nodes from which v is not reachable.
Then for any node u ∈ V2 and any node w ∈ V3, w ↛ u, since otherwise w → u → v.
Notice that V2 ∪ V3 = V. So it follows that V3 is closed. Let G3 = (V3, E3) be the
subdigraph induced by V3. Then G3 has a closed strong component by Lemma 2.1, which
is also a closed strong component of G∗ since V3 is closed. Furthermore, this closed
strong component is not the same as G1, since it does not contain the node v while G1
does. Therefore, G∗ has two closed strong components, a contradiction. □
2.2 Nonnegative Matrices and Graph Theory
We shall deal in this section with square nonnegative matrices E = (eij), i, j = 1, . . . , n;
i.e., eij ≥ 0 for all i, j, in which case we write E ⪰ 0. If, in fact, eij > 0 for all i, j, we
shall write E ≻ 0 and call E positive.
This definition and notation apply to row vectors xT and column vectors x. We
shall use the notation E1 ⪰ E2 to mean E1 − E2 ⪰ 0 for square matrices E1 and E2 of
compatible dimensions.
The approach in the main body of this section is combinatorial, using the element-wise
structure in which the zero-nonzero pattern plays an important role. The zero-nonzero
pattern of a nonnegative matrix is completely determined by an associated digraph. So
in this section we shall study the structural properties of nonnegative matrices, which have
beautiful graph theoretic interpretations.
Finally, we shall use the notation E^k = (e^(k)_ij) for the kth power of E and the
notation ρ(E) for the spectral radius of E, which is the maximum modulus of the
eigenvalues of E.
2.2.1 Nonnegative Matrices, Adjacency Matrices, and Digraphs
We start by defining the associated digraph of a nonnegative matrix.
For an n × n nonnegative matrix E, the associated digraph G(E) consists of n nodes
v1, . . . , vn, with an arc from vi to vj if and only if eij ≠ 0.
It is clear that if another nonnegative matrix Ē has the same dimensions as E,
and has positive entries and zero entries in the same positions as E, then the two
matrices have the same associated digraph. If two nonnegative matrices E and Ē have
the same associated digraph, we say that they are of the same structure, written E ∼ Ē.
From the viewpoint of graph theory, there is a canonical nonnegative matrix associated
with a digraph, called its adjacency matrix. The adjacency matrix Ē = (ēij) ∈ Rn×n of
a digraph G with n nodes is the matrix in which ēij = 1 if there is an arc from the
node vi to the node vj in G and ēij = 0 otherwise.
Thus, the adjacency matrix corresponding to a given nonnegative matrix E replaces
all the positive entries of E by ones, and of course the two matrices are of the same
structure.
We can make a number of observations about the adjacency matrix of a digraph:
(a) The adjacency matrix is not necessarily symmetric.
(b) The sum of the entries in row i is equal to the out-degree of vi.
(c) The sum of the entries in column j is equal to the in-degree of vj.
Example 2.2 Let
E = [ 0    5    0    0
      2    0    0    6
      0    3    0    0
      1.5  0    0.5  0 ].      (2.1)
Then the associated digraph and its adjacency matrix,
[ 0 1 0 0
  1 0 0 1
  0 1 0 0
  1 0 1 0 ],
are shown in Fig. 2.11.
Figure 2.11: Associated digraph and adjacency matrix.
Finally, we bring in the Gershgorin disc theorem, which will be quite useful for exploring
the eigenvalue distribution of nonnegative matrices.
For a square matrix E = (eij), around every diagonal entry eii draw a closed disc of
radius ∑_{j≠i} |eij|. These discs are called Gershgorin discs.
Theorem 2.2 ( [18], Gershgorin, 1931) Every eigenvalue of E lies in a Gershgorin
disc.
As an example, for the matrix E in (2.1), the Gershgorin discs are drawn in Fig. 2.12
and the eigenvalues lie in the union of these discs, i.e., the largest disc.
Figure 2.12: Gershgorin discs.
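The disc computation is straightforward. A minimal sketch using the matrix E of (2.1):

```python
def gershgorin_discs(E):
    """(center, radius) of each Gershgorin disc of a square matrix."""
    n = len(E)
    return [(E[i][i], sum(abs(E[i][j]) for j in range(n) if j != i))
            for i in range(n)]

E = [[0, 5, 0, 0],
     [2, 0, 0, 6],
     [0, 3, 0, 0],
     [1.5, 0, 0.5, 0]]   # the matrix of (2.1)
discs = gershgorin_discs(E)
# All diagonal entries are zero, so every disc is centered at the origin and
# their union is the largest disc, of radius 2 + 6 = 8; by Theorem 2.2 every
# eigenvalue of E then has modulus at most 8.
```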
2.2.2 Irreducible Matrices
A matrix E is cogredient [17] to a matrix Ē if there is a permutation matrix P such that
PEP^T = Ē. An n × n matrix E is reducible if n = 1 and E = 0 or, when n > 1, if it is
cogredient to
Ē = [ B 0
      C D ],      (2.2)
where B and D are nonempty square matrices. Otherwise, E is irreducible.
Under a permutation operation, the associated digraph of a nonnegative matrix changes
only by a renumbering of the nodes. For example, given a nonnegative matrix E and
a permutation matrix P as follows:
E = [ 0 2 1
      2 0 0
      0 3 0 ],    P = [ 0 1 0
                        0 0 1
                        1 0 0 ],
then
Ē = PEP^T = [ 0 0 2
              3 0 0
              2 1 0 ].
The associated digraphs G(E) and G(Ē) are given in Fig. 2.13, and we can see that
either one can be obtained from the other by renumbering the nodes.
Figure 2.13: Associated digraphs (renumbering the nodes).
Theorem 2.3 ( [17], page 30) An n × n nonnegative matrix E is irreducible if and
only if G(E) is strongly connected.
Example 2.3 Let
E1 = [ 0 2 0 0
       0 0 3 0
       0 0 0 4
       5 0 0 0 ]   and   E2 = [ 0 2 0 0
                                1 0 3 0
                                0 0 0 4
                                0 0 5 0 ].
Then the associated digraphs, G(E1) and G(E2), are given in Fig. 2.14. It can be easily
seen that G(E1) is strongly connected, so E1 is irreducible, while G(E2) is not strongly
connected, so E2 is reducible. Indeed, we can choose a permutation
P = [ 0 0 1 0
      0 0 0 1
      1 0 0 0
      0 1 0 0 ]
such that
PE2P^T = [ 0 4 0 0
           5 0 0 0
           0 0 0 2
           3 0 1 0 ],
which is of the form in (2.2).
Figure 2.14: Associated digraphs (irreducible and reducible matrices).
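Theorem 2.3 turns irreducibility into a reachability computation. A minimal sketch, applied to E1 and E2 of Example 2.3:

```python
def is_irreducible(E):
    """Theorem 2.3: E is irreducible iff G(E) is strongly connected."""
    n = len(E)
    # associated digraph G(E): arc i -> j iff e_ij != 0
    adj = {i: [j for j in range(n) if E[i][j] != 0] for i in range(n)}

    def reach(u):
        seen, stack = {u}, [u]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen

    return all(len(reach(u)) == n for u in range(n))

E1 = [[0, 2, 0, 0], [0, 0, 3, 0], [0, 0, 0, 4], [5, 0, 0, 0]]
E2 = [[0, 2, 0, 0], [1, 0, 3, 0], [0, 0, 0, 4], [0, 0, 5, 0]]
```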
We end this subsection by invoking the following theorem, which states the relationship
between the structure of the kth power of a nonnegative matrix, i.e., E^k = (e^(k)_ij),
and the walks in the associated digraph. The theorem is easily proved by induction on k.
Theorem 2.4 ( [13], page 87) Let E be a nonnegative matrix. Then e^(k)_ij > 0 if and
only if G(E) has a walk from the node vi to the node vj of length k.
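Theorem 2.4 is easy to check numerically. A minimal sketch with E1 from Example 2.3, whose associated digraph is the single cycle v1 → v2 → v3 → v4 → v1, so walks from v1 to v3 exist exactly at lengths 2, 6, 10, . . . :

```python
def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(E, k):
    """kth power of a square matrix (k >= 1)."""
    P = E
    for _ in range(k - 1):
        P = mat_mul(P, E)
    return P

# E1 from Example 2.3: its digraph is the 4-cycle v1 -> v2 -> v3 -> v4 -> v1.
E1 = [[0, 2, 0, 0], [0, 0, 3, 0], [0, 0, 0, 4], [5, 0, 0, 0]]
```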
2.2.3 Primitive Matrices
A nonnegative matrix E is said to be primitive [17] if there exists a positive integer k
such that E^k ≻ 0.
A primitive matrix is irreducible, but the converse is not true in general. This fact
can be easily seen from the following graph characterization that we developed.
Theorem 2.5 An n×n nonnegative matrix E is primitive if and only if G(E) is strongly
connected and aperiodic.
The proof requires the following lemmas.
Lemma 2.2 Let m1, m2 ≥ 1 be integers. If gcd{m1, m2} = 1, then there is an integer
k̄ ≥ 0 such that every integer k ≥ k̄ can be written as
k = αm1 + βm2, where α, β are suitable nonnegative integers.
Proof: Since gcd{m1, m2} = 1, the number 1 is an integer combination of m1 and m2.
Without loss of generality, say
1 = α1m1 − β1m2, where α1, β1 are nonnegative integers.      (2.3)
Let k̄ = β1m2². Thus k̄ ≥ 0 and, for all k ≥ k̄,
k = β1m2² + i·m2 + j,      (2.4)
for some integers i, j satisfying i ≥ 0 and 0 ≤ j < m2. Substituting (2.3) into (2.4) leads
to
k = β1m2² + i·m2 + j·(α1m1 − β1m2) = (jα1)·m1 + (β1(m2 − j) + i)·m2.
Let α = jα1 and β = β1(m2 − j) + i. Clearly, α and β are nonnegative integers, noticing
that j < m2. Therefore, the conclusion follows. □
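The bound k̄ = β1·m2² from the proof can be checked numerically. A minimal sketch with the hypothetical pair m1 = 3, m2 = 5, for which 1 = 2·3 − 1·5, i.e., α1 = 2, β1 = 1, and so k̄ = 25:

```python
def representable(k, m1, m2):
    """Can k be written as a*m1 + b*m2 with nonnegative integers a, b?"""
    return any((k - a * m1) % m2 == 0 for a in range(k // m1 + 1))

m1, m2 = 3, 5        # gcd(3, 5) = 1
k_bar = 1 * m2 * m2  # beta1 * m2^2 with beta1 = 1, as in the proof
```

Note that k̄ from the proof need not be tight: for 3 and 5, every integer from 8 on is already representable, while 7 is not.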
The next result shows the relationship between the period of a digraph and the period
of each node in the digraph when it is strongly connected. This lemma is adapted from
Theorem 2.2.30 in nonnegative matrix theory [17] but we present a different statement
and a different proof in the context of graph theory.
Lemma 2.3 Let d be the period of a digraph G and let di be the period of node vi,
i = 1, . . . , n, in G. If G is strongly connected, then d = d1 = · · · = dn.
Proof: Let S = {m1, . . . , mp} be the set of the lengths of all cycles in G. It is a finite
set by the definition of a cycle, and d is the greatest common divisor of S. For any
node vi, let Si be the set of the lengths of all walks from vi to vi. Then di is the
greatest common divisor of Si. Any walk from vi to vi is either a cycle or is generated by a
Figure 2.15: A walk from vi to vi is either a cycle or is generated by a number of cycles.
number of cycles (see Fig. 2.15). So the length of any walk, i.e., any element of Si, is a
combination of the mj, j = 1, . . . , p, with nonnegative integer coefficients. Since d
divides every mj, d divides every element of Si, and hence d divides their greatest
common divisor di.
On the other hand, consider any cycle in the digraph, say of length mj. If it
goes through v1, then d1 divides mj. If not, then it goes through some other node,
say v2. Since the digraph is strongly connected, there must be a cycle going through
both v1 and v2; say it has length mk. Thus d1 divides mk. Notice that these
two cycles generate a walk of length mk + mj from v1 to v1. So d1 divides mk + mj and
therefore d1 divides mj. Hence, d1 divides every mj in S, which means d1 divides d.
Hence, d = d1. By the same argument, d = di for all i = 1, . . . , n when G is
strongly connected. □
Lemma 2.4 Let E be an n × n nonnegative matrix. If G(E) is strongly connected and
d-periodic, then e^(k)_ii = 0 for every i = 1, . . . , n and every k that is not a multiple of d.
Proof: Let di, i = 1, . . . , n, be the periods of the nodes in G(E). Then d = d1 = · · · = dn
by Lemma 2.3, since G(E) is strongly connected. Hence for any node vi in G(E), the
length of any walk from vi to vi is a multiple of d, and there is no walk from vi to vi
whose length k is not a multiple of d. So it follows from Theorem 2.4 that e^(k)_ii = 0
for every i = 1, . . . , n and every k that is not a multiple of d. □
Proof of Theorem 2.5: (⇐=) If G(E) is strongly connected and aperiodic, then by
Lemma 2.3 the period of G and the period of each node vi are all equal to 1. For any
node vi, let m^1_i, m^2_i (m^1_i ≠ m^2_i) be the lengths of two walks from vi to vi
whose greatest common divisor is 1 (such walks exist since vi is aperiodic). By
Lemma 2.2 there is a sufficiently large k̄i such that every k ≥ k̄i can be expressed as
a nonnegative integer combination of m^1_i and m^2_i, which means there is a walk of
length k from vi to vi. Let vj be another node. Since there is a path from vi to vj,
let its length be lij. Thus for every k ≥ qij := k̄i + lij there is a walk of length k
from vi to vj. It follows from Theorem 2.4 that e^(k)_ij > 0 for all k ≥ qij. Let
q = max{qij : i, j = 1, . . . , n}. Then e^(k)_ij > 0 for all i, j = 1, . . . , n and k ≥ q.
By the definition of a primitive matrix, E is primitive.
(=⇒) We prove the contrapositive. Assume that G(E) is not strongly connected,
or that it is strongly connected but not aperiodic. In the first case, that G(E) is not
strongly connected, there is a pair of nodes vi and vj such that vj is not reachable from
vi. So by Theorem 2.4, e^{(k)}_{ij} = 0 for all k > 0. Hence there is no natural number k such
that E^k is positive, and so E is not primitive.
In the second case, G(E) is strongly connected but not aperiodic; that is, it is
d-periodic, where d > 1. So it follows from Lemma 2.4 that e^{(k′)}_{ii} = 0 for any positive
integer k′ that is not a multiple of d. Hence there is no natural number k such that E^k
is positive: if E^{k∗} were positive for some natural number k∗, then E^k would be
positive for every k ≥ k∗, which contradicts e^{(k′)}_{ii} = 0 for any positive integer k′ that
is not a multiple of d. Therefore, E is not primitive. ∎
Example 2.4 Let

E1 =
[ 0 2 0 ]
[ 1 0 0 ]
[ 0 3 0 ],

E2 =
[ 0 1 0 ]
[ 0 0 1 ]
[ 1 0 0 ],

and E3 =
[ 0 1 0 ]
[ 0 0 1 ]
[ 1 1 0 ].
Then the associated digraphs, G(E1), G(E2), and G(E3), are given in Fig. 2.16. As we
Figure 2.16: Associated digraphs of non-primitive and primitive matrices.
can see, G(E1) is not strongly connected, so E1 is reducible and hence not primitive. The
digraph G(E2) is strongly connected but periodic with period 3, so E2 is irreducible but
not primitive. Indeed, for any positive integer k,
E2^{3k−2} =
[ 0 1 0 ]
[ 0 0 1 ]
[ 1 0 0 ],

E2^{3k−1} =
[ 0 0 1 ]
[ 1 0 0 ]
[ 0 1 0 ],

and E2^{3k} =
[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ].
Obviously, there is no positive integer k such that the k-th power of E2 is positive. The
associated digraph G(E3) is strongly connected and aperiodic (d = gcd{2, 3} = 1), so E3
is primitive. Indeed, when k = 5,
E3^k =
[ 1 1 1 ]
[ 1 2 1 ]
[ 1 2 2 ]

is positive.
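The characterization above can be checked numerically. The following sketch is our own illustration (not part of the thesis): it tests primitivity by brute force, using Wielandt's bound that an n × n primitive matrix E satisfies E^k > 0 for some k ≤ (n − 1)^2 + 1.

```python
# Illustrative sketch (not from the thesis): test primitivity of a
# nonnegative matrix by powering it up to Wielandt's bound (n-1)^2 + 1.

def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def is_primitive(E):
    """True iff some power of E is entrywise positive."""
    n = len(E)
    P = E
    for _ in range((n - 1) ** 2 + 1):
        if all(p > 0 for row in P for p in row):
            return True
        P = matmul(P, E)
    return False

# The three matrices of Example 2.4.
E1 = [[0, 2, 0], [1, 0, 0], [0, 3, 0]]   # G(E1) not strongly connected
E2 = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]   # strongly connected, 3-periodic
E3 = [[0, 1, 0], [0, 0, 1], [1, 1, 0]]   # strongly connected, aperiodic
```

Here `is_primitive(E1)` and `is_primitive(E2)` return False while `is_primitive(E3)` returns True, matching the example.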
Finally, we recall the well-known Perron–Frobenius theorem, which is largely what
makes the theory of nonnegative matrices so attractive.
Theorem 2.6 ([17], Perron–Frobenius, 1907) Let E be a nonnegative, irreducible
matrix. The following are true:
(a) ρ(E) is a simple eigenvalue, and any eigenvalue of E of the same modulus is also
simple.

(b) The matrix E has a positive eigenvector x corresponding to ρ(E).

If, in addition, E is primitive, then all eigenvalues of E other than ρ(E) have modulus
less than ρ(E).
2.2.4 Stochastic Matrices
Stochastic matrices form a particularly interesting and useful class of nonnegative
matrices. We first review some properties of stochastic matrices from [17] and then
present a fundamental result on the reduced form of stochastic matrices, which is vital
in the remainder of this chapter.
A square matrix E is row stochastic (stochastic for short) if it is nonnegative and
every row sum equals 1.
Let 1 denote the vector, of appropriate dimension, all of whose components equal 1.
We now present several well-known properties of stochastic matrices.
• If a matrix E is stochastic, then ρ(E) = 1.
• A nonnegative matrix E is stochastic if and only if 1 is an eigenvector of E corre-
sponding to the eigenvalue λ = 1.
• If matrices E1 and E2 are stochastic, then the product E1E2 is also stochastic.
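The closure property in the last bullet is easy to verify numerically. The sketch below is our own illustration, with arbitrary example matrices of our choosing:

```python
# Sanity check (illustrative): the product of two row-stochastic
# matrices is again row-stochastic.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def is_stochastic(E, tol=1e-12):
    """Nonnegative with every row summing to 1 (up to rounding)."""
    return (all(e >= 0 for row in E for e in row)
            and all(abs(sum(row) - 1.0) < tol for row in E))

# Two arbitrary stochastic matrices (our examples, not the thesis').
E1 = [[0.5, 0.5, 0.0], [0.1, 0.2, 0.7], [0.0, 1.0, 0.0]]
E2 = [[0.3, 0.3, 0.4], [1.0, 0.0, 0.0], [0.2, 0.5, 0.3]]
```

`is_stochastic(matmul(E1, E2))` returns True, as the bullet asserts.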
Theorem 2.7 Let E be an n × n nonnegative matrix. If G(E) has exactly k closed strong
components, then E is cogredient to

Ē =
[ Ek   0    0    0  ]
[ 0    ...  0    0  ]
[ 0    0    E1   0  ]
[ Bk   · · ·  B1   E0 ],   (2.5)

where Ei, i = 0, 1, . . . , k, are ri × ri matrices and the ri are suitable integers satisfying
0 ≤ r0 < n and 0 < ri < n (i = 1, . . . , k). If, in addition, E is stochastic, then ρ(Ei) = 1
is a simple eigenvalue of Ei for i = 1, . . . , k, and ρ(E0) < 1 when r0 ≠ 0.
Proof: If G(E) has exactly k closed strong components, denote them by G1 = (V1, E1), . . . ,
Gk = (Vk, Ek). If necessary, renumber the nodes and correspondingly permute the rows
and columns of E to obtain Ē, such that Vk = {1, . . . , rk}, V_{k−1} = {rk + 1, . . . , rk +
r_{k−1}}, . . . , V1 = {rk + · · · + r2 + 1, . . . , rk + · · · + r1}. Notice that each strong component
Gi is closed by assumption. This means that there are no outgoing arcs leaving Vi. So
for any l ∈ Vi and any j ∉ Vi, the (l, j)-th entry of Ē is 0. That is, E is cogredient to the
matrix in (2.5). When V = Vk ∪ · · · ∪ V1, r0 = 0; otherwise r0 = n − r1 − · · · − rk > 0.

If, in addition, E is stochastic, then for i = 1, . . . , k, Ei is stochastic and ρ(Ei) = 1.
Since Gi is a strong component, it follows from Theorem 2.3 that Ei is irreducible. Then
by Theorem 2.6 (Perron–Frobenius theorem) we obtain that ρ(Ei) = 1 is a simple eigenvalue
of Ei for i = 1, . . . , k.
Finally, when r0 ≠ 0, denote V0 = V − V1 − · · · − Vk, which is not empty. We claim
that V0 is not closed in G(E), since otherwise by Lemma 2.1 G(E) would have another
closed strong component Gc = (Vc, Ec) satisfying Vc ⊆ V0, contradicting that G(E) has
exactly k closed strong components. Consequently, there is an arc from some node
vo1 ∈ V0 to some node vi1 ∈ Vi, which is a walk of length 1.
Next we show that for any integer m there is a walk of length m from vo1 to some
node vij ∈ Vi. If Vi has only one node vi1, then recalling that Ei is stochastic, it follows
that Ei = 1, which in turn implies there is a loop at vi1. Hence there is a walk
of length m from vo1 to vi1, obtained by repeatedly passing through the loop. Otherwise, if Vi has
more than one node, then there is a walk of any length from vi1 to some other node
vij ∈ Vi, since Gi is strongly connected. Hence for any integer m ≥ 1 there is a walk of
length m from vo1 to some node vij ∈ Vi. Again, let Vo1 = V0 − {vo1}. If Vo1 is not
empty then it is also not closed in G(E), for the same reason that V0 is not closed. So
there is an arc from some node vo2 ∈ Vo1 to some node in V1 ∪ · · · ∪ Vk ∪ {vo1}.
Recalling that some node in Vi is reachable from vo1, it follows from the same argument
above that for any integer m ≥ 2 there is a walk of length m from vo2 to some node
vjl ∈ Vj. Notice that V0 has at most n − k nodes. Repeating this argument eventually
leads to the result that for any integer m ≥ n − k there is a walk of length m from
any node in V0 to some node in V1 ∪ · · · ∪ Vk. Notice that Ē^m (the m-th power of Ē)
will be of the form
Ē^m =
[ Ẽ^m    0    ]
[ B(m)   E0^m ],

where Ẽ = diag{Ek, . . . , E1} and B(m) depends on Ẽ, E0, and the Bi, i = 1, . . . , k. Thus by
Theorem 2.4 for any m ≥ (n − k) there is at least one entry in each row of B(m) that
is positive. On the other hand, by the properties of stochastic matrices we know that Ē^m
is stochastic. Hence each row sum of E0^m is less than 1, and therefore ρ(E0^m) < 1 by
Theorem 2.2 (Gershgorin disk theorem). Thus the conclusion ρ(E0) < 1 follows. ∎
Example 2.5 Let

E =
[ 0 0 1   0   ]
[ 0 0 0.3 0.7 ]
[ 1 0 0   0   ]
[ 0 0 0   1   ].
Then the associated digraph G(E) is shown in Fig. 2.17 (a). As we can see, this digraph
Figure 2.17: An associated digraph having two closed strong components.
has two closed strong components, which are circled with dotted lines. By appropriately
renumbering the nodes as shown in Fig. 2.17 (b), the nonnegative matrix obtained from
E by the corresponding permutation has the form (2.5), given below:
Ē = PEP^T =
[ E2 0  0  ]
[ 0  E1 0  ]
[ B2 B1 E0 ]
=
[ 0 1   0   0 ]
[ 1 0   0   0 ]
[ 0 0   1   0 ]
[ 0 0.3 0.7 0 ].
Clearly, ρ(E2) = 1, ρ(E1) = 1, and ρ(E0) = 0 < 1.
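The permutation step can be reproduced in a few lines. In the sketch below (helper names are ours, not the thesis'), P renumbers the old nodes (v1, v3, v4, v2) as the new nodes (v1, v2, v3, v4), and P E P^T recovers the reduced form above.

```python
# Sketch of the cogredience operation of Example 2.5: conjugating E by
# a permutation matrix P reorders rows and columns simultaneously.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def perm_matrix(order):
    """Permutation matrix P with P[i][order[i]] = 1 (0-based indices)."""
    n = len(order)
    P = [[0.0] * n for _ in range(n)]
    for i, j in enumerate(order):
        P[i][j] = 1.0
    return P

def transpose(A):
    return [list(col) for col in zip(*A)]

E = [[0, 0, 1, 0],
     [0, 0, 0.3, 0.7],
     [1, 0, 0, 0],
     [0, 0, 0, 1]]

# New node i is old node order[i]: (v1, v3, v4, v2) -> (v1, v2, v3, v4).
P = perm_matrix([0, 2, 3, 1])
E_bar = matmul(matmul(P, E), transpose(P))
```

The result `E_bar` equals the permuted matrix displayed above.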
2.2.5 SIA Matrices
Matrix multiplication is associative, and the product of two stochastic matrices is again
a stochastic matrix. Powers of stochastic matrices are studied here in detail, with
emphasis on convergence. The definition of SIA matrices was introduced by Wolfowitz
in [118]. Here we derive a necessary and sufficient condition for a stochastic matrix to
be SIA, which will be important in the proof of a main result presented in Chapter 3.
A square matrix E is called stochastic, indecomposable, and aperiodic (SIA) if E is
stochastic, the limit Q = lim_{k→∞} E^k exists, and all the rows of Q are the same.
The following theorem gives a graphical characterization of SIA matrices.
Theorem 2.8 A stochastic matrix E is SIA if and only if G(E) has a globally reachable
node which is aperiodic.
Proof: (⇐=) If G(E) has a globally reachable node, then by Theorem 2.1 it has only
one closed strong component, say G1 = (V1, E1). We renumber the nodes if necessary
such that V1 = {v1, v2, . . . , vr}. In other words, there is a permutation matrix P such
that

Ē = PEP^T =
[ E1 0  ]
[ B1 E0 ],

where E1 and E0 are r × r and (n − r) × (n − r) matrices, respectively. In particular,
when r = n, Ē = E1. Immediately, we know that E1 is stochastic and so ρ(E1) = 1.
Moreover, by the condition that the globally reachable node is aperiodic, it follows from
Lemma 2.3 that G1 is aperiodic, since the globally reachable node in G(E) is also a node in
G1 and G1 is strongly connected. Then from Theorem 2.5 the matrix E1, whose associated
digraph is G1, is primitive. By the Perron–Frobenius theorem (Theorem 2.6), ρ(E1) = 1 is a
simple eigenvalue of E1 and all other eigenvalues of E1 have modulus less than ρ(E1) = 1.
Thus the Jordan form of E1 is

E1JF =
[ 1 0 ]
[ 0 D ],
where ρ(D) < 1. Clearly, E1 = U E1JF U^{−1}, and the first column of U is 1r since 1 is an
eigenvector of E1 corresponding to the eigenvalue 1. Denote

U = ( 1r   U′ )
and let the first row of U^{−1} be x^T. Then since ρ(D) < 1, we obtain

lim_{k→∞} E1^k = U ( lim_{k→∞} E1JF^k ) U^{−1} = U [ 1 0 ; 0 0_{(r−1)×(r−1)} ] U^{−1} = 1r · x^T.   (2.6)
On the one hand, when r = n, it follows from the definition of an SIA matrix that
Ē = E1 is SIA, and so is E.

On the other hand, when r < n, it follows from Theorem 2.7 that ρ(E0) < 1 and so

lim_{k→∞} E0^k = 0.   (2.7)
Denote the k-th power of Ē as follows:

Ē^k =
[ E1^k      0    ]
[ B1^{(k)}  E0^k ]

for some matrix B1^{(k)} depending upon the matrices E1, E0 and B1. Notice that Ē^k is still
a stochastic matrix for every k. So

( B1^{(k)}  E0^k ) [ 1r ; 1_{(n−r)} ] = B1^{(k)} · 1r + E0^k · 1_{(n−r)} = 1_{(n−r)},
and then we have

lim_{k→∞} ( B1^{(k)} · 1r ) = lim_{k→∞} ( 1_{(n−r)} − E0^k · 1_{(n−r)} ) = 1_{(n−r)} − ( lim_{k→∞} E0^k ) · 1_{(n−r)} = 1_{(n−r)}.   (2.8)
Express Ē^{2k} and Ē^{2k+1} in terms of Ē^k. Then

Ē^{2k} = Ē^k · Ē^k =
[ E1^k      0    ] [ E1^k      0    ]
[ B1^{(k)}  E0^k ] [ B1^{(k)}  E0^k ]
=
[ E1^{2k}                            0       ]
[ B1^{(k)} E1^k + E0^k B1^{(k)}      E0^{2k} ],   (2.9)

Ē^{2k+1} = Ē · Ē^{2k} =
[ E1^{2k+1}                                             0         ]
[ B1 E1^{2k} + E0 ( B1^{(k)} E1^k + E0^k B1^{(k)} )     E0^{2k+1} ].   (2.10)
Considering (2.7) and the fact that each entry of B1^{(k)} is lower-bounded by zero and
upper-bounded by one, we have

lim_{k→∞} E0^k B1^{(k)} = 0.   (2.11)
On the other hand, we have

B1^{(k)} E1^k = B1^{(k)} U E1JF^k U^{−1} = B1^{(k)} ( 1r   U′ ) [ 1 0 ; 0 D^k ] U^{−1} = ( B1^{(k)} 1r   B1^{(k)} U′ D^k ) U^{−1}.   (2.12)
Notice that the entries of B1^{(k)} U′ are also bounded and lim_{k→∞} D^k = 0. So

lim_{k→∞} B1^{(k)} U′ D^k = 0.   (2.13)
Combining (2.8) and (2.13), we obtain

lim_{k→∞} B1^{(k)} E1^k = ( lim_{k→∞} B1^{(k)} 1r   lim_{k→∞} B1^{(k)} U′ D^k ) U^{−1} = ( 1_{(n−r)}   0 ) U^{−1} = 1_{(n−r)} x^T.
Recalling (2.11), it then follows that

lim_{k→∞} ( B1^{(k)} E1^k + E0^k B1^{(k)} ) = lim_{k→∞} B1^{(k)} E1^k + lim_{k→∞} E0^k B1^{(k)} = 1_{(n−r)} x^T + 0 = 1_{(n−r)} x^T,   (2.14)
and therefore from (2.6), (2.7), and (2.14), we have

lim_{k→∞} Ē^{2k} =
[ lim_{k→∞} E1^{2k}                               0                  ]
[ lim_{k→∞} ( B1^{(k)} E1^k + E0^k B1^{(k)} )     lim_{k→∞} E0^{2k}  ]
=
[ 1r x^T         0 ]
[ 1_{(n−r)} x^T  0 ]
= ( 1n x^T   0 ),

lim_{k→∞} Ē^{2k+1} =
[ lim_{k→∞} E1^{2k+1}                                                   0                    ]
[ lim_{k→∞} ( B1 E1^{2k} + E0 ( B1^{(k)} E1^k + E0^k B1^{(k)} ) )       lim_{k→∞} E0^{2k+1}  ]
=
[ 1r x^T                                                                          0 ]
[ B1 ( lim_{k→∞} E1^{2k} ) + E0 ( lim_{k→∞} ( B1^{(k)} E1^k + E0^k B1^{(k)} ) )   0 ]
=
[ 1r x^T                        0 ]
[ B1 1r x^T + E0 1_{(n−r)} x^T  0 ]
=
[ 1r x^T         0 ]
[ 1_{(n−r)} x^T  0 ]
= ( 1n x^T   0 ).
Hence,

lim_{k→∞} Ē^k = ( 1n x^T   0 ),

and so

lim_{k→∞} E^k = P^T ( 1n x^T   0 ) P

exists and has identical rows. It then follows from the definition of an SIA matrix that the
stochastic matrix E is SIA.
(=⇒) If a stochastic matrix E is SIA, we prove by contradiction that G(E) has a
globally reachable node which is aperiodic. Firstly suppose that G(E) does not have a
globally reachable node. Then by Theorem 2.1 it has at least two closed strong compo-
nents. Thus E is cogredient to

Ē =
[ E2 0  0  ]
[ 0  E1 0  ]
[ B2 B1 E0 ].
Recall that E is SIA and so is Ē. That is, lim_{k→∞} Ē^k exists and has identical rows. However,
notice that Ē^k is of the form

Ē^k =
[ E2^k      0          0    ]
[ 0         E1^k       0    ]
[ B2^{(k)}  B1^{(k)}   E0^k ].

Since lim_{k→∞} Ē^k has identical rows, lim_{k→∞} E1^k has to equal 0, which contradicts that E1^k
is stochastic.
Secondly, suppose G(E) has a globally reachable node but this node is d-periodic with
d > 1. Then by Theorem 2.1 it has only one closed strong component, say G1, which is
d-periodic. Thus E is cogredient to

Ē =
[ E1 0  ]
[ B1 E0 ],

where G(E1) = G1. Since G(E1) is d-periodic, by Lemma 2.4 we know that the diagonal
entries of E1^k are 0 for any k which is not a multiple of d. Hence lim_{k→∞} E1^k has to equal
0, by recalling that E is SIA and noticing the form of Ē^k. This contradicts that E1^k is
stochastic. ∎
Example 2.6 Let four stochastic matrices be given as follows:

E1 =
[ 0 1   0 0   ]
[ 1 0   0 0   ]
[ 0 0.3 0 0.7 ]
[ 0 0   0 1   ],

E2 =
[ 0   1 0   0 ]
[ 0.4 0 0.6 0 ]
[ 0   1 0   0 ]
[ 0   0 1   0 ],

E3 =
[ 0   0.5 0.5 0 ]
[ 0.2 0   0.8 0 ]
[ 0   1   0   0 ]
[ 0   0   1   0 ],

and E4 =
[ 0   0.9 0.1 0   ]
[ 0.5 0   0.5 0   ]
[ 0   0.4 0   0.6 ]
[ 0   0   1   0   ].
Then the associated digraphs are given in Fig. 2.18. The digraph G(E1) does not have a
Figure 2.18: Associated digraphs of SIA matrices.
globally reachable node, so E1 is not SIA. The digraph G(E2) has globally reachable nodes
(v1, v2, and v3 are all globally reachable), but these nodes are periodic with
period 2, so E2 is not SIA either. However, G(E3) has a globally reachable node v1 which is
aperiodic (d1 = gcd{2, 3} = 1), so E3 is SIA. Furthermore, G(E4) is strongly connected
and aperiodic, and of course it has a globally reachable node that is aperiodic, so E4 is
SIA. Actually, E4 is a primitive matrix. In fact, all primitive stochastic matrices
are SIA.
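The SIA property can also be seen numerically. The sketch below (our own illustration, not from the thesis) powers the matrix E3 of this example and checks that the rows of E3^k become identical in the limit.

```python
# Numerical illustration: for the SIA matrix E3 of Example 2.6 the
# powers E3^k converge to a matrix with identical rows.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def row_spread(P):
    """Largest gap between two entries in the same column of P."""
    n = len(P)
    return max(max(P[i][j] for i in range(n)) - min(P[i][j] for i in range(n))
               for j in range(n))

E3 = [[0, 0.5, 0.5, 0],
      [0.2, 0, 0.8, 0],
      [0, 1, 0, 0],
      [0, 0, 1, 0]]

Pk = E3
for _ in range(399):          # Pk = E3^400
    Pk = matmul(Pk, E3)
```

After 400 steps `row_spread(Pk)` is numerically zero, i.e. all rows of the power agree, while each row still sums to one.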
2.2.6 Wolfowitz Theorem
In a number of important applications, one is interested in the asymptotic behavior of a
product of k stochastic matrices as k → ∞ and in its dependence on the structure of these
matrices. The theorem stated below, which is due to J. Wolfowitz in 1963, addresses
this problem. The version we use here is modified from the original version in [118]. The
original theorem of Wolfowitz deals with the case of finitely many matrices, whereas
here we present a version with possibly infinitely many matrices. Actually, Wolfowitz
states this generalization without proof in his paper [118]. For completeness, we provide
a proof of this generalization.
For a nonnegative matrix P = (pij) ∈ R^{n×n}, we define δ(P) by

δ(P) := max_j max_{i1,i2} |p_{i1 j} − p_{i2 j}|.

Thus δ(P) measures, in a certain sense, how different the rows of P are. The rows of
P are identical if and only if δ(P) = 0.
On the other hand, we define λ(P) by

λ(P) := 1 − min_{i1,i2, i1≠i2} Σ_j min(p_{i1 j}, p_{i2 j}).   (2.15)
If λ(P ) < 1 we will call P a scrambling matrix. The condition λ(P ) < 1 implies that,
for every pair of rows i1 and i2, there exists a column j (which may depend on i1 and i2)
such that pi1j > 0 and pi2j > 0, and conversely. Clearly, whether or not a nonnegative
matrix is a scrambling matrix depends solely on its associated digraph. Suppose two
nonnegative matrices P1 and P2 have the same associated digraph, i.e., P1 ∼ P2. Then
if P1 is a scrambling matrix, so is P2.
If the matrix P is stochastic, clearly 0 ≤ λ(P ) ≤ 1. Furthermore, λ(P ) = 0 if and
only if δ(P ) = 0. This is illustrated next.
Example 2.7 Let

E1 =
[ 0.3 0   0   0.7 ]
[ 0.2 0.2 0.5 0.1 ]
[ 0   0.2 0.8 0   ]
[ 0.6 0   0.1 0.3 ]

and E2 =
[ 0.3 0   0   0.7 ]
[ 0.2 0   0.5 0.3 ]
[ 0   0.8 0   0.2 ]
[ 0.4 0.2 0.4 0   ].
Then,

δ(E1) = max{0.6, 0.2, 0.8, 0.7} = 0.8,   λ(E1) = 1 − min{0.3, 0, 0.6, 0.7, 0.4, 0.1} = 1,

and

δ(E2) = max{0.4, 0.8, 0.5, 0.7} = 0.8,   λ(E2) = 1 − min{0.5, 0.2, 0.3, 0.2, 0.6, 0.2} = 0.8.

Clearly, by the definition, E1 is not a scrambling matrix but E2 is.
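Both functionals are straightforward to compute directly from their definitions. The sketch below (function names are ours) reproduces the numbers of this example.

```python
# Computing delta(P) and lambda(P) of this section directly from
# their definitions (illustrative sketch).

def delta(P):
    """Maximum over columns of the spread of that column's entries."""
    n = len(P)
    return max(max(P[i][j] for i in range(n)) - min(P[i][j] for i in range(n))
               for j in range(n))

def lam(P):
    """1 minus the minimal overlap between any two distinct rows."""
    n = len(P)
    return 1 - min(sum(min(P[i1][j], P[i2][j]) for j in range(n))
                   for i1 in range(n) for i2 in range(n) if i1 != i2)

# The two matrices of Example 2.7.
E1 = [[0.3, 0, 0, 0.7],
      [0.2, 0.2, 0.5, 0.1],
      [0, 0.2, 0.8, 0],
      [0.6, 0, 0.1, 0.3]]
E2 = [[0.3, 0, 0, 0.7],
      [0.2, 0, 0.5, 0.3],
      [0, 0.8, 0, 0.2],
      [0.4, 0.2, 0.4, 0]]
```

These give δ(E1) = δ(E2) = 0.8, λ(E1) = 1, and λ(E2) = 0.8, so E2 is scrambling while E1 is not, matching the example.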
Before stating Wolfowitz's theorem, we give several useful lemmas for future reference.
Let Ξ = {E1, E2, . . .} be a finite or infinite set of stochastic matrices of order n. By a
product in Ξ of length k we mean the product of k matrices from Ξ (repetitions are
permitted). Let m be the number of distinct associated digraphs of all SIA matrices of
order n.
Lemma 2.5 ( [118], Lemma 4) If any product in Ξ is SIA, then all products in Ξ of
length ≥ m+ 1 are scrambling matrices.
Lemma 2.6 ([41], Theorem 2) For any j,

δ( E_{k1} E_{k2} · · · E_{kj} ) ≤ Π_{i=1}^{j} λ(E_{ki}).
Theorem 2.9 ([118], Wolfowitz, 1963) If there exists a constant d, 0 ≤ d < 1, such
that λ(P) ≤ d for any product P in Ξ of length m + 1, then for each infinite sequence
E_{k1}, E_{k2}, . . . , there exists a vector x such that

lim_{j→∞} E_{kj} E_{kj−1} · · · E_{k1} = 1 x^T.
Proof: For any small ε > 0, there exists an integer p large enough so that

d^p < ε,

since 0 ≤ d < 1. Let j∗ = (m + 1)p. Then by Lemma 2.6 and the condition that λ(P) ≤ d
for any product P in Ξ of length m + 1, it follows that, for any integer j ≥ j∗,

δ( E_{kj} E_{kj−1} · · · E_{k1} ) ≤ d^p < ε.
Hence, we obtain

lim_{j→∞} δ( E_{kj} E_{kj−1} · · · E_{k1} ) = 0.   (2.16)

Let

C1 = E_{k1},
C2 = E_{k2} E_{k1},
...
Cj = E_{kj} E_{kj−1} · · · E_{k1},
...
Denote by c^j_{rs} the (r, s)-th entry of the matrix Cj. From the definition of δ(·), we obtain
that, for any j,

0 ≤ max_{r1,r2} |c^j_{r1 1} − c^j_{r2 1}| ≤ δ(Cj).

Combining the inequality above and equation (2.16) leads to

lim_{j→∞} ( max_{r1,r2} |c^j_{r1 1} − c^j_{r2 1}| ) = 0.   (2.17)
On the other hand, since each E_{kj} is stochastic and Cj = E_{kj} C_{j−1}, each entry in the
first column of Cj is a convex combination of the entries in the first column of C_{j−1}. So

0 ≤ max_r c^j_{r1} ≤ max_r c^{j−1}_{r1}   and   min_r c^{j−1}_{r1} ≤ min_r c^j_{r1} ≤ 1.
Therefore, both max_r c^j_{r1} and min_r c^j_{r1} have limits as j tends to ∞, say

lim_{j→∞} max_r c^j_{r1} = a,   lim_{j→∞} min_r c^j_{r1} = b.
Noticing that

max_r c^j_{r1} − min_r c^j_{r1} = max_{r1,r2} |c^j_{r1 1} − c^j_{r2 1}|,

it follows from (2.17) that a = b. Furthermore, notice that, for any j,

min_r c^j_{r1} ≤ c^j_{i1} ≤ max_r c^j_{r1}, ∀ i.
Hence, for all i = 1, . . . , n,

lim_{j→∞} c^j_{i1} = a = b.

This means that every entry in the same column of Cj converges to the same constant as
j tends to ∞. Therefore, the conclusion follows; that is, there exists a vector x such that

lim_{j→∞} E_{kj} E_{kj−1} · · · E_{k1} = 1 x^T. ∎
2.3 Generator Matrices and Graph Theory
Very often problems in biological, physical, and social systems can be reduced to problems
involving matrices which, due to certain constraints, have some special structure. One
of the most common situations is where the matrix A in question has nonnegative off-
diagonal and nonpositive diagonal entries, that is, A is a finite matrix of the type
A =
[ −a11  a12   a13   · · · ]
[ a21   −a22  a23   · · · ]
[ a31   a32   −a33  · · · ]
[ ...   ...   ...   ...  ]   (2.18)
where the aij are nonnegative. It should come as no surprise that the theory of nonneg-
ative matrices plays a dominant role in the study of these matrices.
2.3.1 Metzler Matrices and M-Matrices
A square real matrix A whose off-diagonal entries are nonnegative is called a Metzler
matrix [69]. These matrices are studied quite often in positive systems. For a linear
dynamic system
ẋ(t) = Ax(t),
if A is Metzler, then the transition matrix Φ(t) = exp(At) is nonnegative as we will show.
This implies that the nonnegative orthant of the state space is positively invariant.
Closely related properties are clearly possessed by matrices which have the form −A,
where A is a Metzler matrix.
Matrices of the form B = −A, where A is a Metzler matrix, may be written as

B = sI − E,

where s ≥ 0 is a real number large enough to make E nonnegative. If in fact this may be
done so that also s ≥ ρ(E), then B is called an M-matrix. If, further, one can do this so
that s > ρ(E), then B is said to be a nonsingular M-matrix.
Examples of a Metzler matrix, an M-matrix, and a nonsingular M-matrix are given,
respectively, by

A1 =
[ −2 1 0  ]
[ 2  0 2  ]
[ 3  2 −1 ],

A2 =
[ 2  −1 −1 ]
[ −2 4  −2 ]
[ −3 −2 5  ],

and A3 =
[ 3  −1 −1 ]
[ −1 4  −2 ]
[ −2 −2 5  ].
We now state a useful result on nonsingular M-matrices.
Lemma 2.7 ( [17], Theorem 6.2.3) Let A be a Metzler matrix. Then there exists a
positive diagonal matrix P such that
AP + PAT
is negative definite if and only if −A is a nonsingular M-matrix.
2.3.2 Generator Matrices, Graph Laplacians, and Digraphs
In this section we confine ourselves to a special class of Metzler matrices where their
row-sums are equal to zero.
A square matrix A is called a generator matrix if it is a Metzler matrix with row sums
equal to zero. This notion originates from continuous-time Markov chains (CTMCs).
On the one hand, a generator matrix A can be written as

A = −D + E1,   (2.19)

where D is a nonnegative diagonal matrix whose diagonal entries are the negatives of the
diagonal entries of A, and E1 is a nonnegative matrix with all diagonal entries zero and
off-diagonal entries copied from A. Thus, a simple digraph G can completely capture the
structure of a generator matrix, namely G(E1), the associated digraph of the nonnegative
matrix E1.
We call this digraph the associated digraph of the generator matrix A. To avoid
confusion with the associated digraph of a nonnegative matrix E, we use GA to denote
the associated digraph of a generator matrix A, whereas we use G(E), as in the preceding
section, to denote the associated digraph of a nonnegative matrix E.
On the other hand, a generator matrix A could be also written as
A = −sI + E2, (2.20)
where s is a large enough nonnegative scalar so that E2 is nonnegative. Sometimes we
may use this alternative expression for A.
It is worth pointing out that G(E2) may be different from GA, but they surely have
the same connectivity properties. For instance, GA is strongly connected, quasi strongly
connected, or has k closed strong components if and only if G(E1) is strongly connected,
quasi strongly connected, or has k closed strong components. The only difference between
these two digraphs is that G(E2) may have loops but GA = G(E1) does not.
Example 2.8 Let

A =
[ −2 1  1  ]
[ 2  −3 1  ]
[ 2  2  −4 ].
The matrix A above is a generator matrix since it is a Metzler matrix and its row sums
equal zero. By the decomposition (2.19), A is rewritten as
A = −D + E1 = −
[ 2 0 0 ]
[ 0 3 0 ]
[ 0 0 4 ]
+
[ 0 1 1 ]
[ 2 0 1 ]
[ 2 2 0 ].
By the decomposition (2.20), A is rewritten as

A = −sI + E2 = −4I +
[ 2 1 1 ]
[ 2 1 1 ]
[ 2 2 0 ].
The associated digraphs GA = G(E1) and G(E2) are shown in Fig. 2.19 (a) and (b),
respectively, from which we can see that the only difference between them is that GA is a
simple digraph while G(E2) has several loops.
Figure 2.19: Associated digraphs of a generator matrix.
From the graph theory point of view, a very important matrix associated with a
digraph, which is closely related to generator matrices, is the graph Laplacian. Given
a digraph G, let D be the diagonal matrix with the out-degree of each node along the
diagonal; it is called the degree matrix of G. The Laplacian of the digraph G is defined
as L = D−E, where E is the adjacency matrix that is defined in the preceding section.
It is clear that −L is a generator matrix.
Example 2.9 Let a digraph G be given in Fig. 2.20. Then the out-degree matrix, the
Figure 2.20: A digraph.
adjacency matrix, and the Laplacian are
D =
[ 1 0 0 ]
[ 0 2 0 ]
[ 0 0 1 ],

E =
[ 0 0 1 ]
[ 1 0 1 ]
[ 0 1 0 ],

and L =
[ 1  0  −1 ]
[ −1 2  −1 ]
[ 0  −1 1  ],

respectively.
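The construction L = D − E is mechanical. The following sketch (the helper function is our own) builds the Laplacian of the digraph in Fig. 2.20 from its arc list, which we read off the adjacency matrix above.

```python
# Building the out-degree Laplacian L = D - E of Example 2.9 from an
# arc list (illustrative sketch).

def laplacian(n, arcs):
    """arcs are 0-based pairs (i, j) meaning an arc from v_{i+1} to v_{j+1}."""
    L = [[0] * n for _ in range(n)]
    for i, j in arcs:
        L[i][j] -= 1   # subtract the adjacency matrix E
        L[i][i] += 1   # accumulate the out-degree matrix D
    return L

# Arcs of the digraph in Fig. 2.20: v1->v3, v2->v1, v2->v3, v3->v2.
L = laplacian(3, [(0, 2), (1, 0), (1, 2), (2, 1)])
```

Every row of L sums to zero, so −L is a generator matrix, consistent with the remark above.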
2.3.3 The Transition Matrix of a Generator Matrix
Consider the system
ẋ = Ax.
The well-known solution is
x(t) = exp(At)x(0).
The matrix exp(At) is usually called the transition matrix.
Theorem 2.10 If A is a generator matrix, then exp(At) is a stochastic matrix for each
t > 0.
Proof: First, we show that, for each t > 0, exp(At) is a nonnegative matrix. From
(2.20), it follows that for each t > 0,

exp(At) = exp(−stI + E2 t) = e^{−st} exp(E2 t) = e^{−st} · ( I + E2 t + (E2 t)^2 / 2! + · · · ).
It can be easily verified that any integer power of the nonnegative matrix E2 is also
nonnegative. Hence, looking at the right-hand side of the equality above, we see that
exp(At) is nonnegative for each t > 0.
For the exponential to be stochastic, its rows must in addition sum to one.
The matrix A has row sums equal to zero; that is, A · 1 = 0. Hence we obtain

exp(At) 1 = ( I + At + (At)^2 / 2! + · · · ) 1 = 1 + 0 + 0 + · · · = 1.

Therefore, from the power expansion above we see that exp(At) is stochastic for each
t > 0. ∎
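Theorem 2.10 is easy to check numerically. The sketch below is our own truncated-series implementation of the matrix exponential (adequate for this small example, not a production method); it verifies that exp(A) is row-stochastic and positive for the generator matrix A of Example 2.8.

```python
# Numerical check of Theorem 2.10 on the generator matrix of Example
# 2.8, using a truncated power series for exp(A).

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=60):
    """Truncated series I + A + A^2/2! + ... (fine for small ||A||)."""
    n = len(A)
    S = [[float(i == j) for j in range(n)] for i in range(n)]  # partial sum
    T = [row[:] for row in S]                                  # current term
    for k in range(1, terms):
        T = matmul(T, A)                      # T <- T A
        T = [[t / k for t in row] for row in T]   # T <- A^k / k!
        S = [[S[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return S

A = [[-2, 1, 1], [2, -3, 1], [2, 2, -4]]   # generator matrix, Example 2.8
P1 = expm(A)                               # approximates exp(A t) at t = 1
```

Each row of `P1` sums to 1 (up to rounding), and since GA is strongly connected all entries are positive.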
2.3.4 The Zero Eigenvalue of Generator Matrix
We now turn to the eigenvalues of a generator matrix, for which there is a very nice
graphical explanation.
Throughout this subsection, we let A be a generator matrix of order n and let GA
denote the associated digraph corresponding to A.
Theorem 2.11 Zero is an eigenvalue of A and 1 is an associated right-eigenvector, while
all other eigenvalues have negative real part.
Proof: By the definition of a generator matrix, it follows that A1 = 0. So 0 is an
eigenvalue of A and 1 is an associated right-eigenvector. Furthermore, it follows from Theorem
2.2 (Gershgorin disk theorem) that all other eigenvalues have negative real part (see Fig.
2.21). ∎
Theorem 2.12 If GA is strongly connected, the zero eigenvalue of A is simple. Further-
more, associated with the zero eigenvalue, a right-eigenvector is 1 and a left-eigenvector
is a positive vector.
Proof: The generator matrix A may be written as
A = −sI + E,
Figure 2.21: Gershgorin discs of a generator matrix.
where s is a positive scalar large enough so that E is nonnegative. Since A is a generator
matrix, all rows of A sum to zero. So all rows of E sum to s and the matrix E/s is
stochastic. Thus by the properties of stochastic matrices, ρ (E/s) = 1 and therefore
ρ(E) = s.
On the other hand, if GA is strongly connected then G(E) is also strongly connected.
Therefore E is irreducible by Theorem 2.3 and ρ(E) = s is a simple eigenvalue of
E by Theorem 2.6 (Perron-Frobenius theorem). Furthermore, E has a positive right-
eigenvector and a positive left-eigenvector corresponding to the eigenvalue ρ(E) = s.
Clearly this right-eigenvector is 1. In addition, we denote the positive left-eigenvector
by xT . Notice that A = −sI + E, so 0 is its simple eigenvalue and its associated right-
eigenvector and left-eigenvector are 1 and xT , respectively. ¥
Theorem 2.13 For the zero eigenvalue of A, algebraic multiplicity equals geometric mul-
tiplicity and equals the number of closed strong components in GA.
Proof: Write the generator matrix A as
A = −sI + E,
where s is a positive scalar and E is a nonnegative matrix. Furthermore, E/s is stochastic.
Without loss of generality, say GA has exactly k closed strong components. Then so does
G(E). By Theorem 2.7, the nonnegative matrix E is cogredient to

Ē =
[ Ek   0    0    0  ]
[ 0    ...  0    0  ]
[ 0    0    E1   0  ]
[ Fk   · · ·  F1   E0 ],
where each block in the form above has a suitable dimension and only E0 may have
zero dimension. Notice that Ē is obtained from E only by a permutation operation,
so Ē/s is also stochastic. Then by Theorem 2.7 again, ρ(Ei/s) = 1 is a simple
eigenvalue of Ei/s for i = 1, . . . , k, and ρ(E0/s) < 1 if the dimension of E0 is not zero.
Moreover, 1 of suitable dimension is an eigenvector of each block Ei/s, i = 1, . . . , k,
corresponding to the eigenvalue ρ(Ei/s) = 1. Hence Ē has the property that s is an
eigenvalue of algebraic and geometric multiplicity k. This in turn implies that E has
the same property. By shifting the spectrum s units to the left, we obtain that A has a
zero eigenvalue of algebraic and geometric multiplicity k (see Fig. 2.22). ∎

Figure 2.22: Shifting s units.
2.3.5 H(α,m) Stability
Now we relate the structure of generator matrices to a special type of stability that we
call H(α,m) stability, which is modified from [44] in order to better suit our applications.
Given the matrices A ∈ R^{n×m} (with entries A = (aij)) and B ∈ R^{p×q}, the Kronecker
product of A and B, denoted A ⊗ B, is the np × mq matrix

A ⊗ B =
[ a11 B  · · ·  a1m B ]
[  ...          ...   ]
[ an1 B  · · ·  anm B ]
∈ R^{np×mq}.
The Kronecker product has several useful properties. If the orders of the matrices involved
are such that all the operations below are defined, then
(a) (A+B)⊗ C = A⊗ C +B ⊗ C;
(b) (A⊗B)T = AT ⊗BT ;
(c) If A, C ∈ R^{m×m} and B, D ∈ R^{n×n}, then (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD);

(d) If λ1, . . . , λm are the eigenvalues of A ∈ R^{m×m} and µ1, . . . , µn are the eigenvalues
of B ∈ R^{n×n}, then the eigenvalues of A ⊗ B are the mn numbers λr µs, r =
1, . . . , m, s = 1, . . . , n.
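Property (c) is the one used most heavily below. A minimal sketch (our own helper functions and integer test matrices) checks it on small square matrices:

```python
# A minimal Kronecker product and a check of property (c):
# (A kron B)(C kron D) = (AC) kron (BD) for square A, B, C, D.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def kron(A, B):
    """Kronecker product of square matrices A (m x m) and B (p x p)."""
    m, p = len(A), len(B)
    return [[A[i // p][j // p] * B[i % p][j % p] for j in range(m * p)]
            for i in range(m * p)]

# Arbitrary 2x2 integer examples of ours.
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [1, 1]]
D = [[1, 1], [0, 1]]

lhs = matmul(kron(A, B), kron(C, D))
rhs = kron(matmul(A, C), matmul(B, D))
```

`lhs == rhs` holds, illustrating the mixed-product property (c).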
Let α = {α1, α2, . . . , αp} be a partition of {1, 2, . . . , n}. A block diagonal matrix with
diagonal blocks indexed by α1, α2, . . . , αp is said to be α-diagonal. A scalar multiple of
the identity matrix is said to be a scalar matrix. An α-diagonal matrix D is said to be
an α-scalar matrix if each diagonal block of D is a scalar matrix.
For α = {{1, 2}, {3}}, simple examples of an α-diagonal matrix and an α-scalar matrix are,
respectively,

D1 =
[ 1 3 0 ]
[ 2 2 0 ]
[ 0 0 5 ]

and D2 =
[ 2 0 0 ]
[ 0 2 0 ]
[ 0 0 5 ].
Definition 2.1 Let α = {α1, α2, . . . , αp} be a partition of {1, 2, . . . , n} and m ≥ 0 be an
integer. An n × n matrix A is said to be H(α,m)-stable if, for every α-diagonal sym-
metric positive definite matrix S, zero is an eigenvalue of SA of algebraic and geometric
multiplicity m, while all other eigenvalues have negative real part.
Obviously, an H(α, 0)-stable matrix is asymptotically stable (Hurwitz), but the con-
verse does not hold in general. A simple counterexample is

A =
[ −5 −4 ]
[ 4  3  ]
and α = {{1}, {2}}.

Then A is asymptotically stable with eigenvalues {−1, −1}, but SA is unstable with the
choice of α-diagonal symmetric positive definite matrix

S =
[ 1 0 ]
[ 0 2 ].
In what follows, we let A be a generator matrix of order n and let GA denote the
associated digraph corresponding to A. Moreover, we let A(m) be the Kronecker product
of A and Im, the m × m identity matrix, i.e., A(m) = A ⊗ Im, and let

α = {{1, . . . , m}, {m + 1, . . . , 2m}, . . . , {(n − 1)m + 1, . . . , nm}}

be a partition of {1, 2, . . . , nm}.
Theorem 2.14 If GA is strongly connected then A(m) is H(α,m) stable.
Proof: If GA is strongly connected, then by Theorem 2.12, zero is a simple eigenvalue
of A with an associated right-eigenvector 1 and a positive left-eigenvector x^T =
( x1 · · · xn ). That is,

A1 = 0 and x^T A = 0.

Furthermore,

ker(A) = span{1}.

Let P = diag(x1, x2, . . . , xn). Thus, P is positive definite and P1 = x. Let

Q = A^T P + PA.

Then we have

Q1 = A^T P 1 + PA 1 = A^T x = 0.   (2.21)
Notice that a generator matrix A may be written as
A = −D + E,
where D is a nonnegative diagonal matrix whose diagonal entries are the negatives of the
diagonal entries of A, and E is a nonnegative matrix whose diagonal entries are zero and
whose off-diagonal entries are the same as those of A. Thus,

Q = A^T P + PA = (−D + E)^T P + P(−D + E) = −2PD + (E^T P + PE).
Recalling that P is a positive diagonal matrix, we know that PD is also a positive diagonal
matrix and ETP +PE is nonnegative. This implies that Q has nonnegative off-diagonal
entries. Combining with (2.21), it follows that Q is also a generator matrix. Moreover,
recall that GA is just the associated digraph G(E). So G(E) is strongly connected. Since
P is a positive diagonal matrix, we obtain
G(PE) = G(E).
In addition,
G(ETP + PE) = G(PE) ∪ G(ETP ) = G(E) ∪ G(ETP ).
Hence, G(ETP + PE
)is strongly connected and so is GQ. Applying Theorem 2.12 again,
we obtain
ker(Q) = span1.
Also, by Theorem 2.11, except for the zero eigenvalue, the other eigenvalues of Q are in
the open left half-plane, implying that Q is negative semi-definite.
We now define a positive definite α-scalar matrix P(m) = P ⊗ Im. Then by properties
(a), (b) and (c) of the Kronecker product, we have

A(m)^T P(m) + P(m) A(m) = (A ⊗ Im)^T (P ⊗ Im) + (P ⊗ Im)(A ⊗ Im)
                        = (A^T P) ⊗ Im + (PA) ⊗ Im
                        = (A^T P + PA) ⊗ Im
                        = Q ⊗ Im
                        = Q(m).
Using property (d) of the Kronecker product, we conclude that both A(m) and Q(m) have
a zero eigenvalue of algebraic and geometric multiplicity m, while their other eigenvalues
are in the open left half-plane. Hence,

ker(Q(m)) = ker(A(m))

and Q(m) is negative semi-definite.
Given any α-diagonal symmetric positive definite matrix S, its inverse S−1 is also α-
diagonal, symmetric, and positive definite. Recall that P(m) is a positive definite α-scalar
matrix, so we obtain that
P(m)S−1 = S−1P(m),
which is positive definite. It now follows that
(SA(m))T(S−1P(m)) + (P(m)S−1)(SA(m)) = AT(m)P(m) + P(m)A(m) = Q(m).
Thus, by the well-known Lyapunov theory, the eigenvalues of SA(m) are in the open left
half-plane or on the imaginary axis. Further, from the spectral properties of A(m) we
can infer that zero is an eigenvalue of SA(m) of algebraic and geometric multiplicity m.
Therefore, to show A(m) is H(α,m) stable, it remains to show that no other eigenvalues
of SA(m), except these m zero eigenvalues, are on the imaginary axis. Suppose by way
of contradiction that λ = jω (ω ≠ 0) is one of these eigenvalues, with corresponding
eigenvector y. Then

0 = y∗((SA(m))∗(S−1P(m)) + (P(m)S−1)(SA(m)) − Q(m))y
  = −jωy∗(S−1P(m))y + jωy∗(P(m)S−1)y − y∗Q(m)y
  = −y∗Q(m)y.
It follows that y ∈ ker(Q(m)) and therefore ω = 0, a contradiction. ■
Theorem 2.15 The graph GA has a globally reachable node if and only if A(m) is H(α,m)
stable.
Proof: (=⇒) If GA has a globally reachable node then by Theorem 2.1 it has only
one closed strong component, say G1 = (V1, E1). If G1 = GA, it means GA is strongly
connected and then by Theorem 2.14 A(m) is H(α,m) stable. Otherwise, we let
V1 = {1, 2, . . . , r}
(without loss of generality since if necessary we could renumber the nodes) where r < n.
Recall that A can be written as
A = −sI + E,
where s and E are positive scalar and nonnegative matrix, respectively. The graph GA
has only one closed strong component and so does G(E). Then by Theorem 2.7, E is of
the form

E = [E1, 0; B1, E0],

where E1 is an r × r matrix and ρ(E0) < s. Similarly, A has the block form

A = [A1, 0; B1, A0] = [−sI + E1, 0; B1, −sI + E0].
Notice that GA1 is just G1, so it is strongly connected. Then it follows from Theorem
2.14 that A1 is H(α1,m) stable, where α1 = {1, . . . , m, . . . , (r − 1)m + 1, . . . , rm}.
On the other hand, notice that A0 = −sI + E0 and that ρ(E0) < s. So it follows
from the definition of nonsingular M -matrix that −A0 is a nonsingular M -matrix. Thus
by Lemma 2.7 there exists a positive diagonal matrix P such that
Q = A0TP + PA0
is negative definite. Applying the Kronecker product with Im to both sides of the above
equation and using properties (a), (b) and (c) of the Kronecker product yield
Q(m) = A0(m)TP(m) + P(m)A0(m).
Clearly, P(m) is an α-scalar positive definite matrix and Q(m) is negative definite.
Let α2 = {1, . . . , m, . . . , (n − r − 1)m + 1, . . . , (n − r)m}. Then for any α2-
diagonal symmetric positive definite matrix S2,

P(m)S2−1 = S2−1P(m),
which is positive definite. It can be easily verified that
(S2A0(m))T(S2−1P(m)) + (P(m)S2−1)(S2A0(m)) = Q(m).
Thus, by the well-known Lyapunov theorem, all the eigenvalues of S2A0(m) have negative
real parts, and therefore A0(m) is H(α2, 0) stable.
Finally, notice that any α-diagonal positive definite symmetric matrix S can be
written as

S = [S1, 0; 0, S2]

with suitable dimensions for each block. Furthermore,

SA(m) = [S1A1(m), 0; S2B1(m), S2A0(m)].
Hence, it follows from the argument above and the definition of H(α,m) stability that
A(m) is H(α,m) stable.
(⇐=) On the other hand, if A(m) is H(α,m) stable, then by definition A(m) has a zero
eigenvalue of algebraic multiplicity m. Applying property (d) of the Kronecker product,
A has a simple eigenvalue at zero. By Theorem 2.13 and Theorem 2.1, the digraph GA
has a globally reachable node. ■
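The equivalence of Theorem 2.15 can be probed numerically. In the sketch below (hypothetical matrices, not from the text), a generator matrix whose digraph has a globally reachable node has a simple zero eigenvalue, while a block-diagonal one with two closed strong components has a double zero eigenvalue:

```python
import numpy as np

def zero_mult(A, tol=1e-9):
    # numerical count of zero eigenvalues = multiplicity of eigenvalue 0
    return int(np.sum(np.abs(np.linalg.eigvals(A)) < tol))

# Node 1 is globally reachable (arcs 2->1, 3->2): simple zero eigenvalue.
A_reach = np.array([[ 0.0,  0.0,  0.0],
                    [ 1.0, -1.0,  0.0],
                    [ 0.0,  1.0, -1.0]])
assert zero_mult(A_reach) == 1

# Two disconnected pairs: two closed strong components, so the zero
# eigenvalue has multiplicity 2 and no node is globally reachable.
B = np.array([[-1.0, 1.0], [1.0, -1.0]])
A_split = np.block([[B, np.zeros((2, 2))], [np.zeros((2, 2)), B]])
assert zero_mult(A_split) == 2
```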
Chapter 3
Coupled Linear Systems
3.1 Introduction
We study in this chapter the stability properties of coupled linear systems with fixed and
dynamic topologies. Such systems are represented by a switched continuous time linear
system whose system matrix is a generator matrix. The equilibrium set contains all states
with identical state components. The stability and attractivity properties with respect
to the equilibrium set imply that all state components converge to a common value. This
class of problems arises naturally in the context of distributed decision making,
coordination and consensus seeking in multi-agent systems, and synchronization.
The study of interaction among subsystems (agents) plays an important role in un-
derstanding these problems. The first effort in this direction is reported in [26]. Consider
a group of individuals who must act together as a team or committee, and suppose
that each individual in the group has his own subjective probability distribution for
the unknown value of some parameter. A coupled linear system model in discrete time
is presented in [26], which describes how the group might reach agreement on a common
subjective probability distribution for the parameter by pooling individuals’ opinions.
This is also related to the Delphi method which was originally developed at the RAND
Corporation by Olaf Helmer and Norman Dalkey.
In many applications involving multi-agent/multi-vehicle systems, the interaction
topology between agents may change dynamically. For instance, links may be dropped
and formed due to communication disturbances and/or subject to sensor range limita-
tions. Reference [113] provides an example. In that work, Vicsek et al. propose a simple
but compelling discrete-time model of n autonomous agents (i.e., particles) all moving
in the plane with the same speed but with different headings. Each agent updates its
heading using a nearest neighbor rule based on the average of the headings of itself and
its neighbors. Each agent’s neighbors are defined to be those agents in the disk of a pre-
specified radius centered at its current location. Thus, the nearest neighbor rule leads
to a dynamically changing topology among them. In their paper, the authors provide a
variety of interesting simulation results which demonstrate that this rule can cause all
agents to eventually move in the same direction. Then [48] provides a theoretical expla-
nation for this observed behavior. The authors study the Vicsek discrete-time model in
the framework of switched systems (that is, the system matrix switches among a finite
family of stochastic matrices), where the interaction structure of each mode is modeled
by an undirected graph. To analyze such models, the authors adapt Wolfowitz's theorem
(a convergence property of infinite products of a certain type of stochastic matrices)
from ergodic theory to prove that all n agents' headings converge to a common steady
state provided that the union of undirected graphs is connected with sufficient frequency.
Later on, an extension is made in [75] to allow for time-dependent, bidirectional and
unidirectional interactions.
In contrast to the discrete-time setup, much recent research addresses coupled linear
systems in continuous time with fixed and/or dynamic topologies. In [82, 95], the
common Lyapunov function technique is used to show that strongly connected and bal-
anced digraphs play a key role in addressing the average-consensus problem for coupled
linear systems. In [12, 86], the problem of information consensus among multiple agents
under fixed communication links is studied. Two other works on this problem are [43],
which deals with the agreement problem over random information networks, and [76],
which addresses the deterministic time-varying case where a sufficient condition for con-
vergence is presented and robustness with respect to an arbitrary delay is also taken into
account. In addition, [100] and [51] study the agreement problem of a group of agents,
agreement of speed, position, etc., when the data is quantized in amplitude.
In this chapter, a general abstract model for coupled linear systems is established in
continuous time, which encompasses the models studied in [12, 82, 86, 95]. A complete
and systematic analysis is developed for the stability properties of the equilibrium sub-
space. Depending on the coupling structure, we analyze four cases: i) cyclic
coupling structure, ii) unidirectional interaction graph with fixed topology, iii) bidirec-
tional interaction graph with dynamic topology, iv) unidirectional interaction graph with
dynamic topology. Necessary and sufficient conditions guaranteeing global attractivity
with respect to the equilibrium subspace are obtained in each case. These are the most
general results in continuous time so far. As a result, the convergence conditions and
update schemes developed in parallel in [12, 82, 86, 95] are shown to be special cases of these
more general results. Our work is not only a generalization of this existing continuous-time
work but is also relevant to the discrete-time work. Indeed, our continuous-time
model is related to the discrete-time model studied in [48] to some extent. Taking a
close look at the discretized model x(k + 1) = e^{AT}x(k) of our continuous-time
system ẋ(t) = Ax(t), we find that the discretized model belongs to the family of
discrete-time systems which generalize the discrete-time system studied in [48] to the
more general directed graph case. However, there is a major difference between them
when there is switching. That is, the discretization of our continuous-time system at the
switching times gives rise to infinitely many transition matrices governing the trajectory
evolution, while in a discrete time setup, it is assumed that the system matrix switches
among a finite family of matrices. In this chapter, a technique for the sufficiency proof
of one of our main results is borrowed from [48], namely, the use of Wolfowitz's theorem.
However, Wolfowitz's theorem can be used directly in [48] because there are only a finite
number of system matrices in the family; as the authors say, "The finiteness of the set of
matrices is crucial". We instead need a version that can deal with infinitely many different
matrices. Hence, we redevelop it in this thesis to better suit our application concerning
directed graphs and infinitely many different matrices. This generalization is one of the
important insights contributed by the thesis.
For reasons of clarity, we deliberately restrict attention to coupled systems with linear
dynamics throughout the present chapter. Indeed, all the linear results presented in this
chapter can be obtained from the more general nonlinear results in the next chapter. But
we use completely different techniques to prove them, mainly based on the matrix theory
developed in the preceding chapter. Also, this was our first step in studying such coupled
dynamic systems, so we keep it in the thesis as a self-contained chapter.
3.2 Problem Formulation
Very often problems in biological, physical, social systems, and in coordination control
of multi-agent systems can be reduced to problems involving generator matrices. This
important type of matrix was originally studied in stochastic processes, in particular
continuous-time Markov chains. It is of recent interest in the field of linear
systems, especially in coordination and control for multiple agents. Some examples and
discussion are given later on. We now introduce the abstract linear model. In order for
{x ∈ Rmn : x1 = · · · = xn} to be an equilibrium set of a linear system, it is not hard to
show that the system must have the following structure:

ẋi = ∑_{j=1}^n aij(p)(xj − xi),   i = 1, 2, . . . , n   (3.1)
where xi ∈ Rm is the state of subsystem or agent i, the index p lives in a finite set P .
Additionally, we assume that aij(p) ≥ 0 for all i, j and p. When aij(p) > 0 for some
i, j, p, it means the dynamics of xi is influenced by the state xj.
Introducing the aggregate state x = (x1, . . . , xn) ∈ Rmn, we have the matrix form

ẋ = (Ap ⊗ Im)x,   p ∈ P   (3.2)

where Im is the identity matrix of order m, and for each p ∈ P , the matrix Ap is
defined in the following way: the ijth off-diagonal entry is aij(p); the ith diagonal entry
is −∑_{j=1}^n aij(p). Clearly, by its definition, all the matrices Ap, p ∈ P , are generator
matrices. Each individual component model ẋ = (Ap ⊗ Im)x for p ∈ P is called a mode
of the family (3.2).
To define a switched interconnected linear system generated by the above family, we
need the notion of a switching signal. This is a piecewise constant function σ : R →
P . Such a function σ has a finite number of discontinuities at the switching times on
every bounded time interval and takes a constant value on every interval between two
consecutive switching times. The role of σ is to specify, at each time instant t, the index
σ(t) ∈ P of the active mode, i.e., the system from the family (3.2) that is currently being
followed. In some applications, the switching signal σ(t) may be known a priori, but in
some other applications, it may be unknown and to be specified only when we come to
the specific example. Thus, a switched interconnected linear system is described by the
equation
ẋ(t) = (Aσ(t) ⊗ Im)x(t).   (3.3)
Next we introduce an interaction digraph for each mode and a dynamic interaction
digraph for the switched interconnected linear system to capture the interconnection
structure of the n subsystems.
Definition 3.1 The interaction digraph Gp = (V , Ep) consists of
• a finite set V of n nodes, each node i modeling agent i;
• an arc set Ep representing the links between agents. An arc from node j to node i
indicates that agent j is a neighbor of agent i in the sense that aij(p) ≠ 0. The set
of neighbors of agent i is denoted Ni(p).
Definition 3.2 Given a switching signal σ(t), σ : R → P, the dynamic interaction
digraph Gσ(t) is the pair (V , Eσ(t)).
By this definition, the interaction digraph Gp is the opposite digraph of the associated
digraph GAp of the generator matrix Ap.
The next example combines several of the concepts presented thus far.
Example 3.1 Consider an interconnected system of three agents labelled 1, 2, and 3.
Suppose that there are three possible modes in the family (i.e., P = {1, 2, 3}):

p = 1:  ẋ1 = 3(x2 − x1),  ẋ2 = 2(x3 − x2) + (x1 − x2),  ẋ3 = 4(x1 − x3);
p = 2:  ẋ1 = 3(x3 − x1),  ẋ2 = 0,  ẋ3 = 4(x2 − x3);
p = 3:  ẋ1 = x3 − x1,  ẋ2 = 0,  ẋ3 = x2 − x3.
When x1, x2, x3 ∈ R2, the corresponding matrix form for p = 1 is

[ẋ1; ẋ2; ẋ3] = [−3I2, 3I2, 0; I2, −3I2, 2I2; 4I2, 0, −4I2] [x1; x2; x3]
             = ([−3, 3, 0; 1, −3, 2; 4, 0, −4] ⊗ I2) [x1; x2; x3].
Corresponding to the three modes, the interaction digraphs G1, G2, and G3, are de-
picted in Fig. 3.1.
Further, suppose we are given a switching signal σ(t) as shown in Fig. 3.2. This gives rise
to a switched interconnected linear system of the form (3.3) and an associated dynamic
interaction digraph Gσ(t).
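As a quick sanity check on the example (a sketch, not part of the thesis), one can verify in NumPy that the mode-1 matrix A1 is a generator matrix and that every state with identical components is an equilibrium of (A1 ⊗ I2)x:

```python
import numpy as np

# Mode p = 1 of Example 3.1.
A1 = np.array([[-3.0,  3.0,  0.0],
               [ 1.0, -3.0,  2.0],
               [ 4.0,  0.0, -4.0]])
assert np.allclose(A1.sum(axis=1), 0)      # zero row sums: generator matrix

A1_2 = np.kron(A1, np.eye(2))              # coupled system matrix, m = 2
# any state with x1 = x2 = x3 in R^2 is an equilibrium
xstar = np.tile([1.7, -0.4], 3)            # (c, c, c) for an arbitrary c
assert np.allclose(A1_2 @ xstar, 0)
```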
Figure 3.1: Interaction digraphs.
Figure 3.2: A switching signal σ(t).
Having introduced the switched interconnected system model and the associated dy-
namic interaction digraph for coupled linear systems, we are now ready to state the
problem.
Let us first define the subspace Ω as

Ω = {x ∈ Rmn : x1 = x2 = · · · = xn}.
Notice that for every p ∈ P , the matrix Ap is a generator matrix. This implies that for
every switching signal σ(t),

(Aσ(t) ⊗ Im)x∗ = 0   for all x∗ ∈ Ω.
Hence, the subspace Ω is not only an invariant subspace but also a set of equilibria for
the system (3.3).
We are interested in the convergence of the trajectory to the set Ω. For the switched
interconnected system, one major problem is to find conditions on σ(t) that guarantee
the stability and attractivity with respect to Ω (see Appendix A.1 for the definitions
of set stability and attractivity). As we will see later on, the system we study has a
unique characteristic, that is, the attractivity of the system is uniquely determined by the
coupling structure. Hence we shall work on the problem of how the dynamic interaction
digraph Gσ(t) plays the role in asymptotic stability of the system. The problem can be
stated formally as
Problem 3.1 What are the conditions on the dynamic interaction digraph Gσ(t) under
which the switched interconnected system (3.3) is uniformly stable and/or uniformly at-
tractive with respect to Ω?
When σ(t) is a constant, that is, σ(t) ≡ p for some p ∈ P , the switched system
becomes time-invariant. We say the system has fixed topology, and we refer to the
previous case as having dynamic topology. For simplicity, we drop the subscript. Thus,
we have the following
interconnected linear system with fixed coupling structure:
ẋ = (A ⊗ Im)x.   (3.4)
Meanwhile, we denote the associated interaction digraph by G. In this case Problem 3.1
reduces to
Problem 3.2 What are the conditions on the interaction digraph G under which the
interconnected system (3.4) is stable and/or attractive with respect to Ω?
3.3 Coupled Linear Systems with Fixed Topology
In this section we deal with the coupled linear system with fixed topology. First, analysis
will focus on the case of cyclic coupling structure. Next, generalized results for any
connection will be presented.
3.3.1 Cyclic Coupling Structure
Consider a system of n agents with cyclic coupling structure,

ẋ1 = x2 − x1
ẋ2 = x3 − x2
...
ẋn = x1 − xn       (3.5)
where xi ∈ Rm. As we will see later, the case of m > 1 is just a trivial extension of the
case of m = 1. So we now assume m = 1 for simplicity. Thus the system above in matrix
form is given by

ẋ = Ax,
where A has the form A = P − I and P is the permutation matrix obtained by taking I
and putting its first row at the bottom:
P = [ 0 1 0 · · · 0
      0 0 1 · · · 0
      . . .
      1 0 0 · · · 0 ].   (3.6)
Theorem 3.1 The interconnected system (3.5) is globally attractive with respect to Ω.
Furthermore,

lim_{t→∞} x(t) = ((x01 + x02 + · · · + x0n)/n) 1.
Proof: It is easy to see that the characteristic polynomial of P is s^n − 1. So the
eigenvalues of P are the nth roots of unity, and therefore the eigenvalues of A are these
roots of unity shifted left by 1. That is, A has an eigenvalue at the origin and n − 1
distinct eigenvalues strictly in the left half-plane. Moreover, it can be easily seen that
ker(A) = Ω. Hence, the system (3.5) is GA with respect to Ω.
In addition, one sees

(d/dt)((x1 + · · · + xn)/n) = (1TAx)/n = 0.
So

lim_{t→∞} x(t) = ((x01 + x02 + · · · + x0n)/n) 1.   ■
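Theorem 3.1 is easy to confirm numerically. The sketch below (hypothetical data; a forward-Euler integration stands in for the exact flow) simulates the cyclic system (3.5) for n = 5 and checks convergence to the average of the initial states:

```python
import numpy as np

n = 5
# P as in (3.6): identity with its first row moved to the bottom.
P = np.roll(np.eye(n), -1, axis=0)
A = P - np.eye(n)                          # cyclic coupling: x_i' = x_{i+1} - x_i

x0 = np.array([3.0, -1.0, 4.0, 1.0, 5.0])
x = x0.copy()
h = 0.1
for _ in range(2000):                      # forward-Euler integration of x' = A x
    x = x + h * (A @ x)

# all components converge to the centroid of the initial states
assert np.allclose(x, np.full(n, x0.mean()), atol=1e-6)
```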
3.3.2 Generalization
Now we turn to the general case of n subsystems with any type of coupling structure.
That is, the system is represented by equations of the form

ẋ1 = ∑_{j=1}^n a1j(xj − x1)
...
ẋn = ∑_{j=1}^n anj(xj − xn)       (3.7)
where xi ∈ Rm and aij ≥ 0.
Again, we assume m = 1, so the system above in matrix form is given by

ẋ = Ax,   (3.8)
where A is a generator matrix.
Theorem 3.2 The system (3.8) is stable with respect to every equilibrium x ∈ Ω.
The proof is similar to the proof of Theorem 3.5 to be given later. So we omit it here.
Theorem 3.3 The system (3.8) is globally attractive with respect to Ω if and only if the
interaction digraph G is QSC. Moreover,

lim_{t→∞} x(t) = ((cTx0)/(cT1)) 1,

where cT is a left eigenvector of A corresponding to the zero eigenvalue.
Proof: (⇐=) If G is QSC, it follows from Theorem 2.1 and Theorem 2.13 that A has
one zero eigenvalue and all other eigenvalues have negative real part. By inspection, an
associated eigenvector is 1 and so ker(A) = Ω. This implies that the system (3.8) is
globally attractive with respect to Ω.
Let cT be a left eigenvector of A corresponding to the zero eigenvalue. Define a new
variable y = cTx. Then we have

ẏ = cTẋ = cTAx = 0,

which implies

y(t) = y(0) for all t ≥ 0,

and

lim_{t→∞} y(t) = y(0) = cTx0.

On the other hand, global attractivity gives x(t) → a1 for some a ∈ R, so

lim_{t→∞} y(t) = lim_{t→∞} cTx(t) = cT(a1) = a(cT1).

Hence, we obtain

a(cT1) = cTx0

and so

a = (cTx0)/(cT1).

Now we have

lim_{t→∞} x(t) = ((cTx0)/(cT1)) 1.
(=⇒) To prove the contrapositive, assume that the interaction digraph G is not
QSC. Then it follows from Theorem 2.1 that GA has k ≥ 2 closed strong components.
This means that A has a zero eigenvalue of algebraic and geometric multiplicity k ≥ 2 by
Theorem 2.13. So we can find an initial state x0 in ker(A) but not in Ω; such an x0
always exists since the dimension of ker(A) is greater than 1. Further, for this initial
state x0, the solution remains at x0 for all t, i.e., x(t) = x(0). Hence, the system (3.8)
is not GA with respect to Ω. ■
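The consensus value predicted by Theorem 3.3 can be checked numerically. The sketch below uses a hypothetical QSC (but not strongly connected) interaction digraph in which agent 1 listens to nobody, agent 2 follows agent 1, and agent 3 follows agent 2; the states should agree on (cTx0)/(cT1), where cT is a left eigenvector of A at the zero eigenvalue:

```python
import numpy as np

A = np.array([[ 0.0,  0.0,  0.0],    # agent 1: no neighbors (root of the QSC digraph)
              [ 2.0, -2.0,  0.0],    # agent 2 follows agent 1
              [ 0.0,  1.0, -1.0]])   # agent 3 follows agent 2

# left eigenvector of A at the zero eigenvalue: c^T A = 0
w, V = np.linalg.eig(A.T)
c = np.real(V[:, np.argmin(np.abs(w))])

x0 = np.array([5.0, -2.0, 1.0])
x = x0.copy()
h = 0.05
for _ in range(5000):                # forward-Euler integration of x' = A x
    x = x + h * (A @ x)

target = (c @ x0) / c.sum()          # predicted consensus value
assert np.allclose(x, np.full(3, target), atol=1e-6)
```

Here c is supported on the root agent only, so the consensus value is the root's initial state.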
Now we come to the case when m > 1. The system (3.7) is written in matrix form as
follows:
ẋ(t) = (A ⊗ Im)x(t).   (3.9)
Corollary 3.1 The system (3.9) is globally attractive with respect to Ω if and only if the
interaction digraph G is QSC. Moreover,

lim_{t→∞} x(t) = 1 ⊗ (((cT ⊗ Im)x0)/(cT1)),

where cT is a left eigenvector of A corresponding to the zero eigenvalue.
Proof: Consider the mn × mn identity matrix Imn, partitioned into n diagonal blocks Im.
Construct a permutation matrix P by selecting the rows of Imn in the following order:
1, 1 + m, 1 + 2m, . . . ; 2, 2 + m, 2 + 2m, . . . ; and so on, up to m, 2m, 3m, . . . .
For example, for m = 2, n = 3,

P = [ 1 0 0 0 0 0
      0 0 1 0 0 0
      0 0 0 0 1 0
      0 1 0 0 0 0
      0 0 0 1 0 0
      0 0 0 0 0 1 ].
Observe that the matrix P performs the transformation

P(A ⊗ Im)PT = Im ⊗ A = diag(A, . . . , A).
Now applying the coordinate transformation y = Px leads to the following new system

ẏ(t) = (Im ⊗ A)y(t).
Noticing the block-diagonal form of Im ⊗ A, we can directly apply Theorem 3.3 to each
block to obtain

lim_{t→∞} y(t) = a ⊗ 1

for some a ∈ Rm. Thus,

lim_{t→∞} x(t) = lim_{t→∞} PTy(t) = PT(a ⊗ 1) = 1 ⊗ a.
Hence, the system (3.9) is globally attractive with respect to Ω if and only if the inter-
action digraph G is quasi strongly connected.
Furthermore,

a = ((cT ⊗ Im)x0)/(cT1),

which can be easily obtained from Theorem 3.3. ■
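The permutation used in this proof is the "perfect shuffle". A minimal sketch (hypothetical matrix A, not from the text) builds P by selecting rows in the stated order and verifies P(A ⊗ Im)PT = Im ⊗ A:

```python
import numpy as np

m, n = 2, 3
# row order 1, 1+m, 1+2m, ..., 2, 2+m, ..., m, 2m, ..., nm (0-based below)
order = [k + j * m for k in range(m) for j in range(n)]
P = np.eye(m * n)[order, :]          # permutation matrix from selected rows

A = np.array([[-3.0,  3.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 4.0,  0.0, -4.0]])
lhs = P @ np.kron(A, np.eye(m)) @ P.T
rhs = np.kron(np.eye(m), A)          # block-diagonal diag(A, A)
assert np.allclose(lhs, rhs)
```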
3.4 Coupled Linear Systems with Dynamic Topology
We now turn our focus to the harder problem: coupled linear systems with dynamic
topology. First we shall address the case where all the system matrices are symmetric.
In this case there is a common Lyapunov function. Then we prove a general result using
a different technique.
3.4.1 Symmetric Coupling Structure
Consider n agents and assume each agent's state is of dimension 1; this is without loss
of generality, since the higher-dimensional case can be treated as in the preceding section.
section. Thus consider a family of dynamic systems represented by the matrix form
ẋ = Apx,   p ∈ P
where x ∈ Rn is the aggregate state of n agents, P is a finite set, Ap, p ∈ P , are symmetric
generator matrices (see Subsection 3.5.3 for a physical example).
Given a piecewise constant switching signal σ : R → P , we have the following switched
system
ẋ(t) = Aσ(t)x(t).   (3.10)
We do not deal with fast chattering switching. So we assume all the switching signals
are regular enough. As a matter of fact, we shall show in Subsection 4.5.3 by means
of a counterexample that even piecewise constant switching signals σ(t) do not have
sufficient regularity for attractivity of the switched systems. Let Sdwell(τD) denote the
set of switching signals with dwell time τD > 0, that is, two switching times differ by
at least τD, a fixed constant. We shall show that the switched system above is uniformly
globally attractive with respect to Ω if and only if a certain graphical condition holds.
Indeed, the n agents' states will eventually converge to the centroid of their initial
states.
Theorem 3.4 Suppose σ(t) ∈ Sdwell(τD). Then the switched system (3.10) is UGA with
respect to Ω if and only if the dynamic interaction digraph Gσ(t) is UQSC. Furthermore,

lim_{t→∞} x(t) = ((x01 + · · · + x0n)/n) 1.
The proof requires a lemma.
Lemma 3.1 Suppose that A ∈ Rn×n is a real symmetric matrix with eigenvalues λi
satisfying
λn ≤ λn−1 ≤ · · · ≤ λk < λk−1 = · · · = λ1 = 0.
Let X0 denote the zero eigenspace and let X1 denote the orthogonal complement of X0.
Then for every x ∈ X1,
xTAx ≤ λkxTx.
Proof: Let c1, c2, . . . , cn be normalized eigenvectors of A corresponding to the
eigenvalues λ1, λ2, . . . , λn, respectively. Then

A = λ1c1c1T + λ2c2c2T + · · · + λncncnT = λkckckT + · · · + λncncnT.
So, for x ∈ X1,

xTAx = λkxTckckTx + · · · + λnxTcncnTx
     ≤ λkxT(ckckT + · · · + cncnT)x
     = λkxT(c1c1T + c2c2T + · · · + cncnT)x
     = λkxTx,

where the terms c1c1T, . . . , ck−1ck−1T may be inserted in the last step because
ciTx = 0 for i ≤ k − 1 (x is orthogonal to X0). ■
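Lemma 3.1 is easy to check numerically. The sketch below (a hypothetical symmetric generator matrix, not from the text) verifies xTAx ≤ λk xTx for random vectors x in the orthogonal complement of the zero eigenspace:

```python
import numpy as np

# Symmetric generator matrix (negated Laplacian of a connected 3-node path).
A = np.array([[-1.0,  1.0,  0.0],
              [ 1.0, -2.0,  1.0],
              [ 0.0,  1.0, -1.0]])
lam, C = np.linalg.eigh(A)               # eigenvalues in ascending order
lam_k = lam[-2]                          # largest nonzero eigenvalue (< 0)
assert lam_k < 0 and abs(lam[-1]) < 1e-12

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(3)
    x -= x.mean()                        # project onto X1 = span{1}^perp
    assert x @ A @ x <= lam_k * (x @ x) + 1e-12
```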
Proof of Theorem 3.4: (⇐=) Let X0 = span{1} and let X1 be the orthogonal
complement of X0. Notice that for every p ∈ P , Ap is a generator matrix and is symmetric.
So it follows that
Ap1 = 0 and 1TAp = 0, for all p ∈ P .
The trajectory of (3.10) looks like
x(t) = a(t)1+ w(t),
where a(t) ∈ R and w(t) ∈ X1. Pre-multiplying the equation above by 1T , we get that
1Tx(t) = na(t).
On the other hand, we have

1Tẋ(t) = 1TAσ(t)x(t) = 0.

So a(t) is a constant real number, say a(t) ≡ a. Setting t = t0, the starting time, we get
a = (x01 + · · · + x0n)/n.
Now we show that w(t) → 0 uniformly in t0.
Since w(t) = x(t)− a1 and Aσ(t)1 = 0,
ẇ(t) = ẋ(t) = Aσ(t)x(t) = Aσ(t)w(t).   (3.11)
We know that for any w(t0) ∈ X1, the solution w(t) ∈ X1 for all t ≥ t0. In other words,
X1 is a positively invariant set for the system (3.11).
Choose the positive definite function

V(w) = (1/2)wTw.
Taking the derivative of V(w(t)) along the solution of (3.11) gives

V̇(w(t)) = wT(t)Aσ(t)w(t).

By Theorem 2.11 we know Aσ(t) is negative semi-definite. Hence, V(w(t)) is a non-
increasing function of t. Also it is lower-bounded by 0, so it has a limit as t → ∞. Let

lim_{t→∞} V(w(t)) = ε.
Now it remains to show that ε = 0. Suppose by contradiction that ε > 0. This implies
‖w(t)‖2 ≥ 2ε for all t ≥ t0. (3.12)
If the interaction digraph Gσ(t) is UQSC, it follows that there is a T > 0 such that for
every t ≥ 0 the union digraph G([t, t + T]) is QSC.
Suppose σ(t) switches at time instants
t0, t1, t2, . . . .
Now we generate a new subsequence {Tk} from the sequence above as follows:
1. Set T0 = t0;
2. If t0 + T ∈ (ti−1, ti], set T1 = ti;
3. If t1 + T ∈ (ti−1, ti], set T2 = ti;
4. And so on.
By this construction, for every k ≥ 0, we have Tk+1 − Tk ≥ T. This implies that the
union digraph G([Tk, Tk+1]) is QSC. Let the interaction digraphs during the interval
[Tk, Tk+1] be Gk1, . . . , Gkν and let the corresponding matrices be Ak1, . . . , Akν. Thus, the
union digraph Gk1 ∪ · · · ∪ Gkν = G([Tk, Tk+1]).
Now we claim that, on each interval [Tk, Tk+1], w(t) ≠ 0 cannot remain in the null
space of Aσ(t) for all t. To see this, suppose by contradiction that w(t) ≠ 0 is always in the null
space of Aσ(t) during [Tk, Tk+1]. That is,
Aσ(t)w(t) = 0.
Hence, w(t) = w(Tk) for all t ∈ [Tk, Tk+1] and so
Ak1w(Tk) = 0, . . . , Akνw(Tk) = 0.
This leads to
(Ak1 + · · ·+ Akν)w(Tk) = 0. (3.13)
Recall that each associated digraph GAkj is the opposite digraph of Gkj. So the associated
digraph G(Ak1+···+Akν) is just the opposite digraph of Gk1 ∪ · · · ∪ Gkν. Then we know
G(Ak1+···+Akν) has one and only one closed strong component since its opposite digraph is
QSC. Applying Theorem 2.13, we obtain that the null space of Ak1 + · · · + Akν is span{1} = X0.
Combining with (3.13) we obtain w(Tk) ∈ X0, which contradicts w(Tk) ∈ X1. Therefore,
there is ti ∈ [Tk, Tk+1] such that w(ti) is not in the null space of Aσ(ti). Recall that
Aσ(t) = Aσ(ti) for all t ∈ [ti, ti+1] since ti is a switching time. Clearly, by the construction
of the sequence {Ti} we know ti+1 is also in [Tk, Tk+1]. Furthermore, the assumption
σ(t) ∈ Sdwell(τD) implies that

[ti, ti + τD] ⊂ [ti, ti+1] ⊂ [Tk, Tk+1].
Say σ(ti) = kj for some kj ∈ P . Thus, applying Lemma 3.1 gives that, during
[ti, ti + τD],

V̇(w(t)) = wT(t)Aσ(t)w(t) = wT(t)Akjw(t) ≤ λkj‖w(t)‖2,
where λkj < 0 is the largest nonzero eigenvalue of Akj. Considering (3.12), we obtain

V̇(w(t)) ≤ 2ελkj

and so, using V̇ ≤ 0 elsewhere on [Tk, Tk+1],

V(w(Tk+1)) − V(w(Tk)) = ∫_{Tk}^{Tk+1} V̇(w(t)) dt ≤ ∫_{ti}^{ti+τD} V̇(w(t)) dt ≤ 2τDελkj.
Let λ be the maximum of the largest nonzero eigenvalues of Ap, p ∈ P . Then
V (w(Tk)) ≤ V (w(T0)) + 2kτDελ
and

lim_{k→∞} V(w(Tk)) = −∞,

which contradicts the fact that V(w(Tk)) is nonnegative. Hence, ε = 0 and

lim_{t→∞} w(t) = 0 and lim_{t→∞} x(t) = a1.
Furthermore, notice that the convergence rate does not depend on the initial time t0 but
only on λ. Hence, the switched system (3.10) is UGA with respect to Ω.

(=⇒) The necessity proof is the same as that of Theorem 3.6. ■
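The behavior established by Theorem 3.4 can be observed in simulation. The sketch below (hypothetical data) alternates between two symmetric modes, neither of whose digraphs is connected on its own but whose union is, with dwell time τD = 1; the states converge to the centroid of the initial states:

```python
import numpy as np

L12 = np.array([[-1.0,  1.0,  0.0],     # mode 1: couples agents 1 and 2
                [ 1.0, -1.0,  0.0],
                [ 0.0,  0.0,  0.0]])
L23 = np.array([[ 0.0,  0.0,  0.0],     # mode 2: couples agents 2 and 3
                [ 0.0, -1.0,  1.0],
                [ 0.0,  1.0, -1.0]])
modes = [L12, L23]

x0 = np.array([9.0, 0.0, 3.0])
x = x0.copy()
h, steps_per_dwell = 0.01, 100          # forward-Euler; dwell time = 1
for k in range(200):                    # alternate the two modes
    A = modes[k % 2]
    for _ in range(steps_per_dwell):
        x = x + h * (A @ x)

assert np.allclose(x, np.full(3, x0.mean()), atol=1e-6)
```

Each symmetric mode preserves the sum of the states exactly, so the only possible limit is the centroid.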
3.4.2 Generalization
In the previous subsection, we considered the switched system where every system matrix
is symmetric. Now we consider the general case without this assumption. A common
Lyapunov function is not available any more for the general case. We have to find a new
technique to prove the result although the statement of the theorem is the same.
Suppose we are given a family of dynamic systems represented by the matrix form
ẋ = Apx,   p ∈ P
where x ∈ Rn is the aggregate state of n agents and P is a finite set. Here the matrices
Ap are not necessarily symmetric, but of course they are generator matrices.
For a piecewise constant switching signal σ : R → P , the switched system is given by
ẋ(t) = Aσ(t)x(t).   (3.14)
Note in the stability result next that no assumption is needed about the digraph.
Theorem 3.5 The switched system (3.14) is US with respect to every equilibrium x ∈ Ω.
Proof: Suppose the switching signal σ(t) switches at
t0, t1, t2, . . . .
Notice that for t ∈ [ti, ti+1], Aσ(t) = Aσ(ti). So we let the transition matrix be
Φ(t, ti) = exp(Aσ(ti)(t − ti))   for all t ∈ [ti, ti+1].
Hence, the solution can be expressed by
x(t) = Φ(t, ti)Φ(ti, ti−1) · · ·Φ(t1, t0)x0. (3.15)
Since for each p ∈ P , the matrix Ap is a generator matrix, by Theorem 2.10, all the
transition matrices are stochastic and so is their product. Hence, it directly follows from
(3.15) that, for any t ≥ t0 and any i = 1, . . . , n, xi(t) is a convex combination of
x01, . . . , x0n and therefore lies in the convex hull co{x01, . . . , x0n}.
In addition, notice that any equilibrium x ∈ Ω is of the form x = a1. Let ε > 0 be
arbitrary and choose δ = ε. Thus, for any t0,

‖x0 − x‖∞ ≤ δ ⇐⇒ (∀i) |x0i − a| ≤ δ.
Clearly, for any x0 satisfying ‖x0 − x‖∞ ≤ δ,

co{x01, . . . , x0n} ⊂ {s ∈ R : |s − a| ≤ δ}.
On the other hand, we know that for every t ≥ t0,

co{x1(t), . . . , xn(t)} ⊆ co{x01, . . . , x0n}.
It now follows that

‖x(t) − x‖∞ ≤ ε.

Hence, the switched system (3.14) is US with respect to every equilibrium x ∈ Ω. ■
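The fact underlying this proof, that exp(At) of a generator matrix A is a stochastic matrix, can be checked directly. The sketch below (hypothetical matrix; a simple truncated Taylor series stands in for a production-grade matrix exponential) verifies the stochasticity and the convex-hull confinement of the components:

```python
import numpy as np

def expm_series(M, terms=40):
    # truncated Taylor series exp(M) = I + M + M^2/2! + ...; adequate for
    # the small, moderately scaled matrix used here
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  1.0, -1.0]])
Phi = expm_series(A * 0.7)               # transition matrix over t = 0.7
assert np.allclose(Phi.sum(axis=1), 1)   # row sums one
assert np.all(Phi >= -1e-12)             # nonnegative entries: stochastic

x0 = np.array([4.0, -1.0, 2.5])
x = Phi @ x0
# each component stays inside the convex hull of the initial components
assert np.all(x >= x0.min() - 1e-9) and np.all(x <= x0.max() + 1e-9)
```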
The graphical condition guaranteeing uniform global attractivity is the same as the
one for the symmetric case. Nevertheless, we cannot predict what common state the n
agents will eventually converge to as in the symmetric case. It depends not only on the
initial state but also on the switching signal.
Theorem 3.6 Suppose that σ(t) ∈ Sdwell(τD). The switched system (3.14) is UGA with
respect to Ω if and only if the dynamic interaction digraph Gσ(t) is UQSC.
Proof: (⇐=) Suppose the switching signal σ(t) switches at

t0, t1, t2, . . . .

If there are only finitely many switches, the final one at time tm, we artificially define
tm+j = tm + jb, j = 1, 2, . . . , where b is a finite positive value. Since σ(t) ∈ Sdwell(τD), clearly,
ti+1 − ti ≥ τD for all i ≥ 0.
In addition, we can always find a τm > τD large enough that

ti+1 − ti ≤ τm for all i ≥ 0,

since otherwise, if no such τm exists for some interval [ti, ti+1], we can artificially
partition it.
If the dynamic interaction digraph Gσ(t) is UQSC, then by definition there is a positive
T such that for all t ≥ 0 the union digraph G([t, t + T]) is QSC. Now we generate a
subsequence {tmj} of the sequence {ti} as follows:
(1) Set m0 = 0.
(2) If tm0 + T ∈ (ti−1, ti], set m1 = i.
(3) If tm1 + T ∈ (ti−1, ti], set m2 = i.
(4) And so on.
Notice that for t ∈ [ti, ti+1], Aσ(t) = Aσ(ti). So we let the transition matrix be

Φ(t, ti) = exp(Aσ(ti)(t − ti))   for all t ∈ [ti, ti+1].
Then it is clear that
x(t_{m_1}) = Φ(t_{m_1}, t_{m_1−1})Φ(t_{m_1−1}, t_{m_1−2}) · · · Φ(t_1, t_0)x^0 =: Ψ_1 x^0,
x(t_{m_2}) = Φ(t_{m_2}, t_{m_2−1})Φ(t_{m_2−1}, t_{m_2−2}) · · · Φ(t_{m_1+1}, t_{m_1})x(t_{m_1}) =: Ψ_2 x(t_{m_1}) = Ψ_2Ψ_1 x^0,
...
x(t_{m_j}) = Φ(t_{m_j}, t_{m_j−1})Φ(t_{m_j−1}, t_{m_j−2}) · · · Φ(t_{m_{j−1}+1}, t_{m_{j−1}})x(t_{m_{j−1}}) =: Ψ_j x(t_{m_{j−1}}) = Ψ_jΨ_{j−1} · · · Ψ_1 x^0,
...
Firstly, we will show lim_{j→∞} x(t_{m_j}) = a1 for some a ∈ R. It suffices to show that
lim_{j→∞} Ψ_jΨ_{j−1} · · · Ψ_1 = 1c^T,
where c^T is a row vector. (Then a = c^T x^0.)
Recall that for each p ∈ P , Ap is a generator matrix. So it can be written as
Ap = −spI + Ep,
where sp is a positive scalar large enough that Ep is nonnegative. Hence,
Φ(t_{i+1}, t_i) = exp(A_{σ(t_i)}(t_{i+1} − t_i)) = exp((−s_{σ(t_i)}I + E_{σ(t_i)})(t_{i+1} − t_i))
= e^{−s_{σ(t_i)}(t_{i+1}−t_i)} (I + E_{σ(t_i)}(t_{i+1} − t_i) + E_{σ(t_i)}^2 (t_{i+1} − t_i)^2 / 2! + · · ·),
and
Ψ_j = Φ(t_{m_j}, t_{m_j−1})Φ(t_{m_j−1}, t_{m_j−2}) · · · Φ(t_{m_{j−1}+1}, t_{m_{j−1}}) = α_j(I + Θ_j + Γ_j),
where
α_j = e^{−s_{σ(t_{m_j−1})}(t_{m_j} − t_{m_j−1})} · · · e^{−s_{σ(t_{m_{j−1}})}(t_{m_{j−1}+1} − t_{m_{j−1}})},
Θ_j = E_{σ(t_{m_j−1})}(t_{m_j} − t_{m_j−1}) + · · · + E_{σ(t_{m_{j−1}})}(t_{m_{j−1}+1} − t_{m_{j−1}}),
and Γ_j is a nonnegative matrix representing the sum of all the other terms.
First of all, by Theorem 2.10, we know that, for every i, Φ(t_{i+1}, t_i) is stochastic and so
is Ψ_j for every j. Secondly, for each p ∈ P, recalling that G(A_p) is the opposite digraph of
the interaction digraph G_p, it is clear that
G(Θ_j) = G(E_{σ(t_{m_j−1})}) ∪ · · · ∪ G(E_{σ(t_{m_{j−1}})})
has a globally reachable node since G([t_{m_{j−1}}, t_{m_j}]) is QSC by construction. Furthermore,
G(I) is a digraph with a loop on each node. So
G(Ψj) = G(I) ∪ G(Θj) ∪ G(Γj)
has a globally reachable node which is aperiodic. Hence it follows from Theorem 2.8 that
Ψj is SIA for every j.
Let Ξ = {Ψ_1, Ψ_2, . . .}. Thus, Ξ is a set of SIA matrices of order n. Let l
be the number of different associated digraphs of all SIA matrices of the same order n.
Similar to the argument above, we can show that every product of the Ψ_j of the form
Ψ_{k_1}Ψ_{k_2} · · · Ψ_{k_i} = α_{k_1}(I + Θ_{k_1} + Γ_{k_1})α_{k_2}(I + Θ_{k_2} + Γ_{k_2}) · · · α_{k_i}(I + Θ_{k_i} + Γ_{k_i})
= α_{k_1}α_{k_2} · · · α_{k_i}(I + Θ_{k_1} + Θ_{k_2} + · · · + Θ_{k_i} + · · ·)
is still SIA. Then it follows from Lemma 2.5 that all products in Ξ of length l + 1 are
scrambling matrices. Notice that P is a finite set. So the positive entries of all the
matrices Ap, p ∈ P have a uniform lower-bound. This leads to the fact that all positive
entries of Ψj’s have a uniform lower-bound, which again implies that the positive entries
of any product in Ξ of length l + 1 have a uniform lower-bound. Combining the fact above
and the fact that any product in Ξ of length l + 1 is a scrambling matrix, we obtain that
there is a d (0 ≤ d < 1) such that for any product in Ξ of length l + 1,
λ(Ψ_{k_1}Ψ_{k_2} · · · Ψ_{k_{l+1}}) ≤ d,
where λ(·) is a scalar function defined in (2.15). Furthermore, notice that d is independent
of the choice of t0 and depends only on the lower-bound of the positive entries, which
is related to τD, τm, and the matrices Ap, p ∈ P . Thus applying Theorem 2.9 (Wolfowitz
theorem) gives
lim_{j→∞} Ψ_jΨ_{j−1} · · · Ψ_1 = 1c^T,
uniformly in t0, which then immediately leads to
lim_{j→∞} x(t_{m_j}) = 1c^T x^0 = a1
uniformly in t0.
Now consider the times between the instants t_{m_0}, t_{m_1}, t_{m_2}, . . . . For
t ∈ [t_{m_j}, t_{m_{j+1}}],
x(t) = Φ(t, t_{m_j+k})Φ(t_{m_j+k}, t_{m_j+k−1}) · · · Φ(t_{m_j+1}, t_{m_j})x(t_{m_j}) =: Φ̄ x(t_{m_j}),
where t_{m_j+k} is the last switching instant not exceeding t and Φ̄ is also stochastic for the
same reason. This means that each agent's state xi(t), i = 1, 2, . . . , n, is a convex
combination of x1(t_{m_j}), . . . , xn(t_{m_j}). Hence,
lim_{t→∞} x(t) = a1
uniformly in t0. In conclusion, the switched system (3.14) is UGA with respect to Ω.
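The contraction mechanism of this proof can be illustrated numerically: products of row-stochastic matrices whose long-enough subproducts are scrambling collapse to a rank-one matrix 1c^T. The sketch below is our own illustrative Python, not part of the thesis; the two 3 × 3 stochastic matrices are made up to stand in for the Ψ_j, and `lam` is assumed to be the standard Hajnal coefficient of ergodicity (our reading of (2.15)).

```python
# lam(P) = 1 - min_{i,k} sum_j min(P[i][j], P[k][j]); lam(P) < 1 iff P is
# scrambling, and lam is submultiplicative, which drives the products to 1*c^T.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def lam(P):
    n = len(P)
    return 1 - min(sum(min(P[i][j], P[k][j]) for j in range(n))
                   for i in range(n) for k in range(n) if i != k)

# Two row-stochastic matrices; neither alone mixes all three agents,
# but their product is scrambling (every pair of rows overlaps).
P1 = [[0.5, 0.5, 0.0],
      [0.5, 0.5, 0.0],
      [0.0, 0.0, 1.0]]
P2 = [[1.0, 0.0, 0.0],
      [0.0, 0.5, 0.5],
      [0.0, 0.5, 0.5]]

prod = P1
for _ in range(30):                       # left-multiply by P2*P1 repeatedly
    prod = mat_mul(P2, mat_mul(P1, prod))

print(round(lam(mat_mul(P2, P1)), 3))     # scrambling: prints 0.5 < 1
print(all(abs(prod[i][j] - prod[0][j]) < 1e-6
          for i in range(3) for j in range(3)))   # rows agree: prod ≈ 1*c^T
```

Because lam(P2·P1) = 0.5, the row variation of the product shrinks at least geometrically, mirroring the role of d in the proof.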
(=⇒) To prove the contrapositive, assume that Gσ(t) is not UQSC. That is, for
all T > 0 there exists t∗ ≥ 0 such that the digraph G([t∗, t∗ + T ]) is not QSC. During the
interval [t∗, t∗ + T ], let σ(t) take values p1, p2, . . . , pk in P . This implies that
Gp1 ∪ Gp2 ∪ · · · ∪ Gpk
is not QSC. Recall that the interaction digraph Gp is just the opposite digraph of G(Ap).
So the associated digraph
G(A_{p_1} + A_{p_2} + · · · + A_{p_k})
has at least two closed strong components by Theorem 2.1. This implies that
Ap1 , Ap2 , . . . , Apk
share a common null space of dimension at least 2. Thus we can find an initial state x0
in this common null space but not in Ω. Let c = ‖x0‖ and let ε = ‖x0‖Ω. Then for every T
there exists a t0 = t∗ such that, for this x0,
(∃t = t0 + T ) ‖x(t)‖Ω = ε,
which means the switched system (3.14) is not UGA with respect to Ω. ■
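The graphical condition itself is easy to check computationally: a digraph is QSC exactly when it has a globally reachable node, i.e., a node reachable from every node (under the reachability convention used here, which is our assumption). The sketch below is our own Python, not from the thesis; it tests the condition by breadth-first search and shows two digraphs that fail individually while their union passes, which is the situation exploited throughout this chapter.

```python
from collections import deque

def reachable_from(adj, s):
    """Set of nodes reachable from s; adj[u] lists the heads of arcs u -> w."""
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                q.append(w)
    return seen

def is_qsc(n, arcs):
    adj = {u: [] for u in range(n)}
    for u, w in arcs:
        adj[u].append(w)
    # v is globally reachable iff every node's forward reach set contains v
    reach = [reachable_from(adj, s) for s in range(n)]
    return any(all(v in reach[s] for s in range(n)) for v in range(n))

# Two digraphs on 3 nodes, neither QSC on its own...
g1 = [(0, 1)]                          # arc 0 -> 1
g2 = [(1, 2)]                          # arc 1 -> 2
print(is_qsc(3, g1), is_qsc(3, g2))    # prints: False False
# ...but their union is QSC: node 2 is globally reachable.
print(is_qsc(3, g1 + g2))              # prints: True
```

UQSC then asks that every union digraph G([t, t + T ]) pass this test for a single uniform T.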
3.5 Examples and Discussion
For the switched linear system (3.3), suppose that the system matrices Ap, p ∈ P are
given and fixed. Then the performance of the overall system totally relies on the switching
signal. Different switching signals may produce totally different system behaviors. In
general, the switching signal is a piecewise constant function of time, the state/output,
and possibly some external signals. In this section, we first take a close look at the
possible expressions of the switching signals and present a classification for different
types of switching signals. Finally we discuss a possible application of our main results
in studying stability properties of switched positive systems.
3.5.1 Time Dependent Switching
Consider a group of six vehicles that want to agree upon certain quantities of interest,
say the speed xi. Suppose only one-way point to point communication is allowed. In
other words, every vehicle is allowed to either send the data to or receive the data from
another one at any time. So it is impossible to have a fixed communication link so that
the corresponding graph is connected. In order to achieve consensus, an alternative com-
munication strategy is proposed: periodically switch between two communication links
shown in Fig. 3.3, where the direction in the graph is the direction of data transferring.
Furthermore, let the agreement protocol be
ẋi = Σ_{j∈Ni(t)} (xj − xi),
where Ni(t) is the set of neighbors from which vehicle i receives data at time
t. Thus, in this setup, P = {1, 2}, and the corresponding system matrices A1, A2 are
generator matrices but they are not symmetric. So the overall model is
ẋ = Aσ(t)x,
Figure 3.3: Interaction digraphs representing two communication links.
where σ : R → P is a switching signal. This is an example of the system (3.14) we
studied in Subsection 3.4.2. Notice that, for this simple communication strategy, the
dynamic interaction digraph is UQSC. So consensus eventually occurs.
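A minimal simulation sketch of this strategy follows (our own Python, not the thesis' experiment; for brevity it uses three agents with made-up generator matrices whose union interaction digraph is QSC, rather than the six vehicles of Fig. 3.3):

```python
# Euler integration of x' = A_{sigma(t)} x, switching between the two modes
# every second.  Arcs: mode 1 gives 1 -> 2; mode 2 gives 2 -> 3 and 3 -> 1;
# the union digraph is strongly connected, hence QSC, so the periodic
# switching signal makes the dynamic digraph UQSC (Theorem 3.6).

A1 = [[0, 0, 0],        # mode 1: agent 2 receives from agent 1
      [1, -1, 0],
      [0, 0, 0]]
A2 = [[-1, 0, 1],       # mode 2: agent 1 from agent 3, agent 3 from agent 2
      [0, 0, 0],
      [0, 1, -1]]

def step(A, x, dt):
    return [x[i] + dt * sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

x, dt = [0.0, 3.0, 6.0], 0.01
for k in range(40000):                        # 400 s of simulated time
    A = A1 if int(k * dt) % 2 == 0 else A2    # switch every 1 s
    x = step(A, x, dt)

print(max(x) - min(x) < 1e-6)   # prints: True (consensus reached)
```

The states also never leave the convex hull of the initial states, consistent with the US argument of Theorem 3.5.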
3.5.2 Time and State Dependent Switching
Consider n point-like robots moving in the plane, each with the simplest kinematic model
ẋi = ui, (3.16)
where xi ∈ R2 is the position state and ui ∈ R2 is the velocity control input. This model
describes so-called omnidirectional mobile robots. The construction of omnidirectional
mobile robots can be found in [42] and its references. Each robot is free to move equally
well in any direction from standstill, and it can rotate freely, independently of its
translational motion.
Suppose that each robot carries a camera with a cone-like field of view and the camera
can be controlled to rotate independently. Fig. 3.4 shows four robots and the cone-like
fields of view of robot 1 and 4.
Figure 3.4: Cone-like fields of view.
At time t, let Ni(t, x) denote the set of labels of those agents within the cone-like field
of view of robot i. For instance, in Fig. 3.4, N1(t, x) = {2, 3} and N4(t, x) = {1}. We
consider the following vehicle-level control strategy: Robot i steers towards the centroid
direction of its neighbors at time t, i.e.,
ẋi(t) = { 0,                                if Ni(t, x) = ∅,
        { Σ_{j∈Ni(t,x)} (xj(t) − xi(t)),    otherwise.          (3.17)
The neighboring relationship of n robots at time t defines a digraph. Let P be the set
of all possible digraphs with n nodes and let σ be a piecewise constant switching signal
taking values in P and to be determined based on the state x and the field of view of
each camera at time t. Thus, the overall system of n robots is
ẋ = (Aσ(t,x) ⊗ I2)x, (3.18)
where x ∈ R2n is the aggregate state and the n × n matrix Aσ(t,x) is obtained from (3.17)
at time t.
Given a switching signal σ(t, x) and an initial condition x0, the system (3.18)
produces a trajectory x(t, x0), and thus the switching signal can be written as σ(t, x(t, x0)).
Notice that the dynamic interaction digraph Gσ(t,x) is determined by the field of view of
each camera. Since each camera can be controlled to rotate independently, a camera-level
control strategy can be designed to generate a switching signal σ(t, x) for which the
induced dynamic interaction digraph Gσ(t,x(t,x0)) is UQSC for all x0 ∈ R2n; by Theorem 3.6
the n robots then congregate at the same location. For example, each camera may rotate
its view angle clockwise after remaining stationary for a certain time. Clearly, the
switching signal depends on both time and state. For a fixed initial condition x0, the
switching signal is uniquely determined by t, and
in this specific situation, the condition in Theorem 3.6 can be checked to determine whether
the devised vehicle-level and camera-level control strategies make the group of robots
congregate at a common location. The following simple example illustrates this point.
Example 3.2 Consider three point-like robots 1, 2, 3 in the plane. Each camera has a
cone-like field of view with infinite radius and a viewport of 30 degrees. The vehicle-
level control strategy is the one given in (3.17). The camera-level control strategy is
as follows: each camera keeps stationary for 5 minutes and then rotates 30 degrees clockwise
(suppose the rotation is instantaneous); this procedure repeats as time goes on.
For the initial condition of three robots shown in Fig. 3.5, we can see that the
interaction digraph is not connected. From the proof of Theorem 3.5, we know that with
Figure 3.5: The initial condition of three robots.
the control law (3.17), the three robots will remain in the triangle shown in Fig. 3.5 for all
t ≥ 0, whatever the interaction digraphs. On the other hand, with the proposed camera-
level control strategy, each camera scans the whole field every 60 minutes. Clearly, every
camera can sense the other two in every time interval of length 60 minutes. This means
that the union of interaction digraphs G([t, t + 60]) is fully connected (and is of course
QSC). So the dynamic interaction digraph Gσ(t,x(t,x0)) is UQSC. Thus, rendezvous occurs
even though the interaction digraph might be connected at some times and disconnected
at others.
A simulation for five robots to congregate at a common place with this setup is shown
in Fig. 3.6.
3.5.3 State Dependent Switching
Suppose now that n robots carry omnidirectional cameras of identical range r. Fig. 3.7
shows the disk-like field of view of robot 2 and the interaction digraph at that time. The
camera is fixed on the robot. Thus, the field of view cannot be controlled independently.
Instead, it is solely determined by the position state x.
Consider the same control strategy as in (3.17). In this situation, the overall system
of n robots is
x =(Aσ(x) ⊗ I2
)x.
It has almost the same form as (3.18), but the switching signal is state-dependent. Given
an initial condition x0, it produces a trajectory x(t, x0) and thus the switching signal
can be written as σ(x(t, x0)). In this situation, only for some initial conditions, the
dynamic interaction digraph is UQSC, and for some other initial conditions, it is not
UQSC. Furthermore, the condition in Theorem 3.6 becomes non-checkable.
We simulate twenty agents with the above setup. The initial locations of the twenty
robots are generated randomly as shown in Fig. 3.8. To understand the general effect
of the state-dependent switching signal on asymptotic aggregation to a
single location, we carry out the simulation three times with the same initial positions
shown in Fig. 3.8 but different sensing ranges. Specifically, for sensing ranges r =
30, 25, 50, the trajectories of the twenty robots are given in Figs. 3.9, 3.10, and 3.11,
respectively. The simulation results show that the sensing range r = 25 cannot guarantee
that the dynamic interaction digraph is UQSC for the initial condition, and thus the
robots do not converge to a single location but rather to two locations. For the sensing
Figure 3.6: Trajectories of five agents and the interaction digraphs.
ranges r = 30, 50, the dynamic interaction digraph is UQSC for the initial condition, and
convergence to a single location results.
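The effect of the sensing range can be reproduced with a small sketch (our own illustrative Python with made-up positions and ranges, not the twenty-robot simulation of Figs. 3.8–3.11): with a large enough range the initial disk graph is connected and rendezvous occurs; with a small range the group splits into clusters that never meet.

```python
import math

def simulate(pts, r, steps=4000, dt=0.01):
    """Euler integration of the linear law (3.17) with range-r disk sensing."""
    pts = [list(p) for p in pts]
    for _ in range(steps):
        vel = []
        for i, pi in enumerate(pts):
            nbrs = [pj for j, pj in enumerate(pts)
                    if j != i and math.dist(pi, pj) <= r]
            vel.append([sum(pj[k] - pi[k] for pj in nbrs) for k in (0, 1)])
        for pi, v in zip(pts, vel):      # synchronous update
            pi[0] += dt * v[0]
            pi[1] += dt * v[1]
    return pts

def spread(pts):
    return max(math.dist(p, q) for p in pts for q in pts)

init = [(0.0, 0.0), (1.0, 0.0), (5.0, 0.0), (6.0, 0.5)]
print(spread(simulate(init, r=4.5)) < 1e-3)   # connected chain: one cluster
print(spread(simulate(init, r=1.5)) > 4.0)    # two pairs: stays split
```

With r = 4.5 the four robots form a connected path graph that stays connected as they contract; with r = 1.5 only the two pairs interact and each collapses to its own midpoint.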
Figure 3.7: The disk-like field of view.
With this camera setup, it is clear that if some robots are initialized so far away
from the rest that they never enter the others' fields of view, then rendezvous never occurs.
One may then ask what happens if the interaction graph is connected initially.
Unfortunately, this linear control strategy is not good enough to solve the rendezvous
problem even when the initial interaction graph is connected, since it may disconnect
an already connected graph. In the next chapter, we present a nonlinear control
law that solves this problem when the interaction graph is initially connected.
3.5.4 Switched Positive Systems
The results developed in this chapter can be used to check whether a class of switched
positive systems is asymptotically stable with respect to the origin.
Consider a family of system matrices {Ap : p ∈ P} where all the matrices are Metzler
matrices with row sums less than or equal to zero. A switching signal σ : R → P
gives rise to the switched system
ẋ(t) = Aσ(t)x(t). (3.19)
This system is called a positive system and is studied quite often without switching and
Figure 3.8: Initial locations.
Figure 3.9: Range is 30.
Figure 3.10: Range is 25.
Figure 3.11: Range is 50.
has wide applications in social sciences and some other areas [3, 32,56,69].
Add a virtual agent (with state x0) at the origin and keep it stationary; in other words,
ẋ0 = 0, x0(0) = 0.
Then notice that for any p ∈ P , we can write a new system consisting of n+ 1 agents as
follows:
[ ẋ0 ]   [  0     0  ] [ x0 ]
[ ẋ  ] = [ b(p)   Ap ] [ x  ] ,        (3.20)
where b(p) = col(−Σ_{j=1}^n a1j(p), . . . , −Σ_{j=1}^n anj(p)) and aij(p) denotes the (i, j)th
entry of Ap. Let Āp denote the system matrix of the above new system. Notice that
−Σ_{j=1}^n aij(p) is nonnegative. So Āp is a generator matrix
for any p ∈ P . Furthermore, notice that the solution trajectory x(t) of (3.19) is just the
partial solution x(t) of the following switched system
[ ẋ0(t) ]           [ x0(t) ]
[ ẋ(t)  ]  = Āσ(t)  [ x(t)  ]          (3.21)
for any initial condition x0 and x0(0) = 0. Hence, the switched system (3.19) is globally
asymptotically stable with respect to the origin if and only if the new switched system
(3.21) is globally attractive with respect to Ω. Define the dynamic interaction digraph
Gσ(t) for these n + 1 agents based on Āσ(t). Thus we can find out if the switched system
(3.19) is globally asymptotically stable with respect to the origin by checking whether
Gσ(t) is UQSC.
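The augmentation (3.20) is mechanical; the sketch below (our own Python, not from the thesis) builds the augmented matrix from a Metzler matrix with nonpositive row sums and verifies that the result is a generator matrix (nonnegative off-diagonal entries, zero row sums):

```python
def augment(A):
    """Append the stationary virtual agent 0 as in (3.20)."""
    n = len(A)
    bar = [[0.0] * (n + 1)]                    # virtual agent: x0' = 0
    for i in range(n):
        bar.append([-sum(A[i])] + list(A[i]))  # arc weight toward agent 0
    return bar

def is_generator(M):
    n = len(M)
    zero_row_sums = all(abs(sum(row)) < 1e-12 for row in M)
    nonneg_off = all(M[i][j] >= 0 for i in range(n)
                     for j in range(n) if i != j)
    return zero_row_sums and nonneg_off

A1 = [[-3, 2], [0, 0]]    # the matrices of Example 3.3 below
A2 = [[0, 0], [1, -2]]
print(is_generator(augment(A1)), is_generator(augment(A2)))   # prints: True True
```

Row i of the augmented matrix gains the entry −Σ_j aij(p) in the column of the virtual agent, which restores the zero row sum that the Metzler matrix lacked.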
Example 3.3 Take P = {1, 2} and consider the two matrices
A1 = [ −3   2 ]          A2 = [ 0    0 ]
     [  0   0 ]    and        [ 1   −2 ] .
Further, consider the periodic piecewise constant switching signal shown at the bottom
of Fig. 3.13. Thus we have a switched system of the form (3.19).
Now whether this switched system is asymptotically stable or not can be determined
from its structure. Treat the system as an interconnected system composed of two agents
labelled 1 and 2 which correspond to the state components x1 and x2. Add a virtual
agent labelled 0 and get an augmented system of the form (3.20) for p ∈ P . Thus, the
interaction digraphs G1 and G2 describing the interaction structure of three agents for
p = 1, 2 can be easily obtained and are depicted in Fig. 3.12. Notice that neither G1
Figure 3.12: The interaction digraphs G1 and G2.
nor G2 is QSC, but the dynamic interaction digraph Gσ(t) is UQSC for the switching
signal given at the bottom of Fig. 3.13. Hence, by our result, the three agents globally
converge to the same point. In addition, note that the virtual agent is stationary at the
origin. This means that the original switched system is globally asymptotically stable
with respect to the origin. The time responses and phase portrait are depicted in Fig.
3.13 and Fig. 3.14, respectively.
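A quick numerical check of this example (our own Python; the exact switching period is read off Fig. 3.13 in the thesis, so the 1 s dwell time used here is an assumption): neither mode is asymptotically stable on its own, yet the periodic switching drives the state to the origin.

```python
# Euler simulation of x' = A_{sigma(t)} x for the matrices of Example 3.3,
# with each mode assumed active for 1 s.  A1 has eigenvalues {-3, 0} and A2
# has {0, -2}: neither is Hurwitz, but the switched system contracts.

A1 = [[-3, 2], [0, 0]]
A2 = [[0, 0], [1, -2]]

x, dt = [6.0, 5.0], 0.01
for k in range(10000):                        # 100 s of simulated time
    A = A1 if int(k * dt) % 2 == 0 else A2    # mode 1 on even seconds
    x = [x[i] + dt * (A[i][0] * x[0] + A[i][1] * x[1]) for i in (0, 1)]

print(max(abs(v) for v in x) < 1e-6)   # prints: True (origin reached)
```

Mode 1 pulls x1 toward (2/3)x2 while freezing x2; mode 2 pulls x2 toward (1/2)x1 while freezing x1; alternating the two contracts ‖x‖∞ each period.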
Figure 3.13: Time responses and switching signal.
Figure 3.14: Asymptotically stable trajectory.
Chapter 4
Coupled Nonlinear Systems
4.1 Introduction
This chapter studies the stability properties of coupled nonlinear systems with fixed and
dynamic topologies. The vector fields of the individual systems are assumed to satisfy
certain hypotheses. Again, the equilibrium set contains all states with identical state
components. This class of systems generalizes that of coupled linear systems and is
abundant in biology, physics, engineering, ecology, and social science: e.g., a biochemical
reaction network [40], coupled oscillators [49, 104], arrays of chaotic systems [114, 120–
122], and a swarm of organisms [38, 39]. We model such systems by coupled nonlinear
differential equations in state form.
Similar to the linear case, central to the stability issue of such systems is the graph
describing the interaction structure in the interconnected system—that is, who is cou-
pled to whom. And a central question is, what properties of the interaction graph lead
to stability and attractivity with respect to the equilibrium set? Most existing work
has dealt with static graphs with a particular topology, such as rings [15, 85], cyclic
digraphs [89], and fully-connected graphs [38, 39, 101], or with static graphs having an
unspecified topology but a certain connectedness, e.g., coupled cell systems [103] and
coupled oscillators [49, 122]. Of course, a static graph simplifies the problem and allows
one to focus on the difficulties caused by the nonlinear dynamics of the nodes.
The more interesting situation is when the interaction graph is time-varying. Subsystems
may become disconnected from each other and later become connected again for various
natural or technological reasons. A typical example is the motion coordination problem
of networks of mobile robots with a limited field of view. The robots may
come into and go out of each other's sensor range. For this problem, some distributed
algorithms were proposed in [4, 108], with the objective of getting the robots to congregate
at a common location (achieving rendezvous). These algorithms were extended to various
synchronous and asynchronous stop-and-go strategies in [24, 64, 65]. From the theoretical
point of view, new tools are required for coupled nonlinear systems. Remarkable results
are obtained in [77], where a nonlinear discrete-time interconnected system is studied.
A central assumption is imposed: each agent's updated state at the next step is constrained
to be a strict convex combination of the states of itself and its neighbors. This
assumption leads to the development of a set-valued Lyapunov function theory, with the
convex hull of the individual agents’ states playing the role of a nonincreasing set-valued
Lyapunov function in convergence analysis. Necessary and sufficient conditions on the
interconnection topology guaranteeing convergence of the individual agents’ states to a
common value are thereby obtained.
However, most systems are of an inherently continuous nature and there is no counterpart
of the work of [77] for continuous-time systems. Although the discrete-time Euler
approximation may display certain properties for the corresponding continuous system,
it may or may not be useful in fully understanding the behavior of continuous systems.
So it is important to develop the continuous-time results in parallel. However, continuous
time is more challenging than discrete time; for example, existence and/or uniqueness
of solutions may fail, Lyapunov-like functions may not be differentiable, and switching
may cause Zeno phenomena. These and related technical difficulties leave many questions
open in coupled nonlinear continuous-time systems. For example, what
are reasonable assumptions on the vector fields? How do their asymptotic behaviors
depend on the coupling topologies? What new analysis techniques can be used to obtain
stability and asymptotic stability conditions? These are the questions that motivate this
chapter. The purpose of this chapter is to initiate a systematic inquiry and provide a
general formalism encompassing most of the known definitions, applications, and existing
results for the linear cases. Our setup is a general interconnection of nonlinear subsys-
tems, where the vector fields can be fixed or switch within a finite family. By borrowing
the notion from set invariance study in [19], we impose a (strict) sub-tangentiality as-
sumption, analogous to the assumption in [77], on each individual vector field, namely,
the vector field of each agent should point to its neighbors, which is satisfied in various
real systems in nature and engineering and is also satisfied for the coupled linear systems
we study in the previous chapter. In order to establish necessary and sufficient conditions
for stability and attractivity, a natural auxiliary function is introduced. However, it is
not differentiable everywhere; thus nonsmooth analysis, involving the concept of the Dini
derivative, is needed in our study. We associate to each vector field a directed
graph based in a natural way on the interaction structure of the subsystems; this is called
an interaction digraph in the present chapter. For the coupled nonlinear systems with a
static interaction structure, a version of LaSalle’s invariance principle using Dini deriva-
tive plays a central role. We show that the system is globally attractive with respect to
the equilibrium set if and only if the interaction digraph is QSC. For the coupled dynamic
systems with a dynamic interaction structure, our proof of main results relies heavily on
nonsmooth analysis. It is partly inspired by a result of Narendra and Annaswamy [78], who
show that, with V̇ (x, t) ≤ 0, uniform asymptotic stability can be proved if there exists a
positive T such that for all t, V (x(t + T ), t + T ) − V (x(t), t) ≤ −γ(‖x(t)‖) < 0, where γ
is a class K function. The difference here is that we deal with attractivity with respect
to an invariant set rather than an equilibrium point; an additional complication is that
the natural V -function is non-differentiable. As we expect, the coupled nonlinear system
with a dynamic interaction structure is uniformly attractive if and only if the dynamic
interaction digraph is UQSC. It is not a surprise that we obtain the same graphical
condition for continuous time as the one for discrete time in [77].
We apply our main results to the synchronization of coupled Kuramoto oscillators
with time-varying interaction, to the analysis of a biochemical reaction network and a
water tank network, and to the synthesis of a rendezvous controller for a multi-agent
system.
4.2 Problem Formulation
In this section we introduce a general nonlinear model of switched interconnected systems.
This can describe most coupled dynamic systems with either fixed topology or dynamic
topology. The examples and applications are discussed in Section 4.6.
Suppose that we are given a family of systems represented by ordinary differential
equations of the form
ẋ1 = f_p^1(x1, . . . , xn)
...
ẋn = f_p^n(x1, . . . , xn),
(4.1)
where xi ∈ Rm is the state of subsystem or agent i, i = 1, . . . , n, and where the index p
lives in a finite set P . Notice that the subsystems share a common state space, Rm.
Introducing the aggregate state x = (x1, . . . , xn) ∈ Rmn, we have the concise form
ẋ = fp(x), p ∈ P , (4.2)
where fp : Rmn → Rmn, p ∈ P , is a family of sufficiently regular (continuous or locally
Lipschitz) vector fields, parameterized by the index set P . Each individual component
model ẋ = fp(x) for p ∈ P is called a mode of the family (4.2).
Similar to the linear case, for a piecewise constant switching signal σ : R → P , which
specifies, at each time instant t, the index σ(t) ∈ P of the active mode, we have the
following switched interconnected nonlinear system:
ẋ(t) = fσ(t)(x(t)). (4.3)
We assume that the state of the system above does not jump at the switching times, i.e.,
the solution x(·) is everywhere continuous.
We now turn to define interaction digraphs and dynamic interaction digraphs for
coupled nonlinear systems. For each p ∈ P , we associate to each vector field fp an
interaction digraph Gp capturing the interaction structure of the n subsystems (agents).
Definition 4.1 The interaction digraph Gp = (V , Ep) consists of
• a finite set V of n nodes, each node i modeling agent i;
• an arc set Ep representing the links between agents. An arc from node j to node i
indicates that agent j is a neighbor of agent i in the sense that f_p^i depends on xj,
i.e., there exist x_j^1, x_j^2 ∈ Rm such that
f_p^i(x1, . . . , x_j^1, . . . , xn) ≠ f_p^i(x1, . . . , x_j^2, . . . , xn).
The set of neighbors of agent i is denoted Ni(p).
Like the interaction digraph which captures the coupling structure of the n agents
for each mode, a dynamic interaction digraph is adopted to characterize the interaction
structure of the switched interconnected system (4.3). Roughly, a dynamic interaction
digraph results from the evolution of an interaction digraph under the switching signal
σ(t) (see Definition 3.2 for the formal definition of a dynamic interaction digraph Gσ(t)).
The next example combines several of the concepts presented thus far.
Example 4.1 Suppose we are given a family of coupled nonlinear systems for the case
P = {1, 2}:
p = 1 :  ẋ1 = f_1^1(x1, x2, x3),  ẋ2 = f_1^2(x2, x3),  ẋ3 = f_1^3(x1, x3);
p = 2 :  ẋ1 = f_2^1(x1, x2),  ẋ2 = f_2^2(x1, x2),  ẋ3 = f_2^3(x3).
There are two modes of the family. For each mode, it is an interconnected system
composed of three agents sharing the state space Rm.
Further, suppose we are given a switching signal σ : R → P depicted in Fig. 4.1.
Figure 4.1: A switching signal.
Thus a switched interconnected system with switching signal σ(t) can be described
by the equation
ẋ(t) = fσ(t)(x(t)),
where x = (x1, x2, x3) is the aggregate state and fσ(t) = (f_{σ(t)}^1, f_{σ(t)}^2, f_{σ(t)}^3) is the
vector field.
The interaction digraphs G1 and G2 corresponding to the modes p = 1 and p = 2 are
shown in Fig. 4.2. The nodes 1, 2, 3 represent the agents 1, 2, 3 and the arcs in the
digraphs represent the coupling links in terms of Definition 4.1.
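Definition 4.1 can be operationalized by sampling: declare an arc j → i whenever perturbing xj changes f_p^i. The sketch below is our own Python, not part of the thesis; the scalar right-hand sides are made up to match the dependency pattern of mode p = 1 above, self-loops are excluded (reading "neighbor" as another agent), and detection by random sampling is a heuristic, not a proof of dependence.

```python
import random

def interaction_arcs(f_components, n, trials=20):
    """Detected arcs (j, i): perturbing x_j changed component f^i at least once."""
    arcs = set()
    for _ in range(trials):
        x = [random.uniform(-1, 1) for _ in range(n)]
        for j in range(n):
            x2 = list(x)
            x2[j] += random.uniform(0.1, 1.0)
            for i, fi in enumerate(f_components):
                if i != j and fi(x) != fi(x2):
                    arcs.add((j, i))
    return arcs

# Made-up scalar right-hand sides with the signatures of mode p = 1:
f1 = [lambda x: x[1] + x[2] - 2 * x[0],   # f_1^1(x1, x2, x3)
      lambda x: x[2] - x[1],              # f_1^2(x2, x3)
      lambda x: x[0] - x[2]]              # f_1^3(x1, x3)

random.seed(0)
arcs = interaction_arcs(f1, 3)
print(sorted(arcs))   # prints: [(0, 2), (1, 0), (2, 0), (2, 1)]
```

In the thesis' node numbering these are the arcs 1 → 3, 2 → 1, 3 → 1, and 3 → 2, exactly the coupling links of mode p = 1.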
The dynamic interaction digraph Gσ(t) is a digraph of three nodes where the arc set
changes over time depending on σ(t).
Figure 4.2: The interaction digraphs.
Having introduced the switched interconnected system model and the dynamic inter-
action digraph capturing the coupling structure of the system, we are now ready to state
the problem addressed in this chapter.
Assume Ω is an invariant set of the switched interconnected system (4.3). Later,
assumptions will be made for the existence of Ω. Similar to coupled linear systems, we
are interested in finding out how stability and attractivity with respect to Ω of the system
(4.3) are affected by the interaction structure. The problem can be stated formally as
Problem 4.1 Suppose a family of vector fields {fp : p ∈ P} satisfies certain assumptions.
What are the conditions on the dynamic interaction digraph Gσ(t) under which the switched
interconnected system (4.3) is uniformly stable and/or uniformly attractive with respect
to Ω?
In the particular case when σ(t) is a constant signal, that is, σ(t) ≡ p, the switched
interconnected system turns out to be time invariant and the dynamic interaction digraph
Gσ(t) is just the fixed interaction digraph Gp. We will drop the subscript p for simplicity.
Therefore, the interconnected system of interest is given by
ẋ1 = f^1(x1, . . . , xn)
...
ẋn = f^n(x1, . . . , xn),
(4.4)
or ẋ = f(x) in vector form. Correspondingly, the interaction digraph is denoted by G.
This generally represents coupled nonlinear systems with fixed topology (see Subsection
4.6.2 for an example). Although it is a special case of (4.3), we study it in a different
way for better understanding. Thus, for this system, we have the following special case
of Problem 4.1.
Problem 4.2 Suppose a vector field f satisfies certain assumptions. What are the con-
ditions on the interaction digraph G under which the interconnected system (4.4) is stable
and/or attractive with respect to Ω?
4.3 Mathematical Preliminaries
In this section we assemble some known and some novel concepts related to convex sets,
tangent cones, and the Dini derivative, and present a version of LaSalle's invariance
principle using the Dini derivative; these are used in the remainder of this chapter.
4.3.1 Convex Set and Tangent Cone
We introduce basic concepts, notations, and some properties regarding convex sets and
tangent cones. The reader may refer to [7, 8, 90,102] for the details.
Let S ⊂ Rm. The intersection of all convex sets containing S is the convex hull of S,
denoted co(S). The convex hull of a finite set of points x1, . . . , xn ∈ Rm is a polytope,
denoted co{x1, . . . , xn}.
We denote the interior of S by int(S) and the boundary of S by ∂(S), respectively. If
S contains the origin, the smallest subspace containing S is the carrier subspace, denoted
lin(S). The relative interior of S, denoted ri(S), is the interior of S when it is regarded
as a subset of lin(S) and the relative topology is used. Likewise for the relative boundary,
denoted rb(S). If S does not contain the origin, it must be translated by an arbitrary
vector: Let v be any point in S and let lin(S) denote the smallest subspace containing
S−v. Then ri(S) is the interior of S when it is regarded as a subset of the affine subspace
v + lin(S). In the trivial case, when S is just a point, the relative interior ri(S) is itself.
In precise terms,
ri(S) = {y ∈ S : ∃ ε > 0, y + (εBm ∩ lin(S)) ⊂ S},
where Bm is the unit ball in Rm; see Fig. 4.3. Similarly for rb(S).
Figure 4.3: The set S, lin(S), ri(S), and rb(S).
Fix any norm ‖ · ‖ in Rm. For each nonempty subset S of Rm and each y ∈ Rm, we
denote the distance of y from S by
‖y‖S := inf_{z∈S} ‖z − y‖.
A nonempty set K ⊂ Rm is called a cone if λy ∈ K when y ∈ K and λ > 0. Let
S ⊂ Rm be a closed convex set and y ∈ S. The tangent cone (often referred to as the
contingent cone; Bouligand, 1932, [20]) to S at y is the set
T (y, S) = {z ∈ Rm : lim inf_{λ→0+} ‖y + λz‖S / λ = 0}
and the normal cone to S at y is
N (y, S) = {z∗ : ⟨z, z∗⟩ ≤ 0, ∀z ∈ T (y, S)}.
Note that if y ∈ int(S), then T(y, S) = R^m. Thus the set T(y, S) is non-trivial only
Figure 4.4: Tangent cones T(x1, S) and T(x2, S) are obtained by translation of “T(x1, S)” and “T(x2, S)” to the origin.
on ∂S. In particular, if S contains only one point, y, then T(y, S) = {0}. In geometric terms (see Fig. 4.4), the tangent cone for y ∈ ∂S is a cone centered at the origin that contains all vectors whose directions point from y ‘inside’ (or are ‘tangent to’) the set S. If the boundary is smooth at the point y, then T(y, S) is just the tangent halfspace shifted to the origin. For example, in Fig. 4.4, the boundary at x2 is smooth, whereas the boundary at x1 is not, so T(x2, S) is the tangent halfspace shifted to the origin but T(x1, S) is not.
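The lim inf in the definition of T(y, S) can be probed numerically. The sketch below (a hypothetical check, with S taken to be the unit square and the helper names my own) tests whether a direction z is tangent at a nonsmooth corner by evaluating ‖y + λz‖_S/λ at one small λ:

```python
import numpy as np

def dist_to_box(y, lo=0.0, hi=1.0):
    """Distance from y to the unit square S = [0,1]^2 (projection by clipping)."""
    return np.linalg.norm(y - np.clip(y, lo, hi))

def in_tangent_cone(y, z, lam=1e-6):
    """Approximate the liminf test: z is in T(y, S) when
    ||y + lam*z||_S / lam vanishes as lam -> 0+."""
    return dist_to_box(y + lam * np.asarray(z)) / lam < 1e-8

y = np.array([0.0, 0.0])                 # a (nonsmooth) corner of the square
assert in_tangent_cone(y, [1.0, 1.0])    # points into S: tangent
assert not in_tangent_cone(y, [-1.0, 0.0])  # points out of S: not tangent
```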
Before concluding this subsection, we summarize in the following lemma some prop-
erties of tangent cones to convex sets. A graphical interpretation of the lemma is given
in Fig. 4.5. It states that if a convex set S2 contains another convex set S1, then the tangent cone at any point x to the set S2 also contains the tangent cone at x to the set S1 (see Fig. 4.5(a)). In addition, if a convex set is a Cartesian product of n convex sets S1, . . . , Sn, then the tangent cone at any point x = (x1, . . . , xn) to the product set is just the Cartesian product of the tangent cones T(xi, Si), i = 1, . . . , n. As an example, in Fig. 4.5(b), a convex set (square) in the plane is the Cartesian product of two intervals S1 and S2. Notice that the tangent cone at x1 to the interval S1 is the positive horizontal axis and the tangent cone at x2 to the interval S2 is the positive vertical axis. Their Cartesian product is the first quadrant, which is exactly the tangent cone at
the point (x1, x2) to the square set.
Figure 4.5: Properties of tangent cones to convex sets.
Lemma 4.1 ([7], page 164) Let Si, i = 1, . . . , n, be closed convex sets in R^m. The following properties hold:

1. If y ∈ S1 ⊂ S2, then T(y, S1) ⊂ T(y, S2) and N(y, S2) ⊂ N(y, S1);

2. If xi ∈ Si (i = 1, . . . , n), then

T((x1, . . . , xn), ⊗_{i=1}^n Si) = ⊗_{i=1}^n T(xi, Si),

N((x1, . . . , xn), ⊗_{i=1}^n Si) = ⊗_{i=1}^n N(xi, Si).
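Property 2 of Lemma 4.1 can be spot-checked numerically for the square S1 × S2 with S1 = S2 = [0, 1] (an illustrative sketch; the membership test and tolerances are my own choices):

```python
import numpy as np

def dist(y, lo, hi):
    """Distance from y to the box [lo, hi]^2, via projection by clipping."""
    return np.linalg.norm(np.asarray(y) - np.clip(y, lo, hi))

def in_cone(y, z, lam=1e-6, lo=0.0, hi=1.0):
    """Approximate tangent-cone membership test at y for the box."""
    return dist(np.asarray(y) + lam * np.asarray(z), lo, hi) / lam < 1e-8

# x = (0, 0.5): first coordinate at an endpoint of S1 = [0,1], second interior to S2.
# T(0, S1) = [0, inf) and T(0.5, S2) = R, so the product cone is {z : z1 >= 0}.
x = [0.0, 0.5]
assert in_cone(x, [1.0, -1.0])      # z1 >= 0: admissible direction
assert not in_cone(x, [-1.0, 0.0])  # z1 < 0: leaves the square
```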
4.3.2 The Dini Derivative
Stability analysis of nonlinear systems by means of an auxiliary function has been widely studied. However, it may happen that the “natural” auxiliary function is not smooth. Hence, it is of interest to generalize the method to encompass less smooth functions. One such generalization uses the Dini derivative.
Let a, b (a < b) be two real numbers and consider a function h : (a, b) → R, t ↦ h(t), and a point t* ∈ (a, b). The upper Dini derivative of h at t* is defined as

D+h(t*) = lim sup_{τ→0+} [h(t* + τ) − h(t*)] / τ.
Lemma 4.2 ( [91], page 347) Suppose h is continuous on (a, b). Then h is non-
increasing on (a, b) if and only if D+h(t) ≤ 0 for every t ∈ (a, b).
In stability analysis, we are interested in the Dini derivative of a function along the
solution of a differential equation. Consider the nonautonomous system
ẋ = f(t, x) (4.5)

and let x(t) be a solution. Further, let V(t, x) : R × R^n → R be a continuous function satisfying a local Lipschitz condition in x, uniformly with respect to t. The upper Dini derivative of V(t, x(t)) with respect to t is given by

D+V(t, x(t)) = lim sup_{τ→0+} [V(t + τ, x(t + τ)) − V(t, x(t))] / τ.

On the other hand, define

D+_f V(t, x) = lim sup_{τ→0+} [V(t + τ, x + τf(t, x)) − V(t, x)] / τ.
The function D+_f V is called the upper Dini derivative of V along the trajectory of (4.5). It was shown by Yoshizawa in 1966 (see [91]) that

D+V(t*, x(t*)) = D+_f V(t*, x*)

when putting x(t*) = x*.
Lemma 4.3 Let Vi(t, x) : R × R^n → R be of class C1 for each i ∈ I0 = {1, 2, . . . , n} and let V(t, x) = max_{i∈I0} Vi(t, x). If

I(t) = {i ∈ I0 : Vi(t, x(t)) = V(t, x(t))}

is the set of indices where the maximum is reached at t, then

D+V(t, x(t)) = max_{i∈I(t)} V̇i(t, x(t)).
The proof can be obtained from Danskin’s Theorem [23,25].
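Lemma 4.3 can be illustrated numerically with V1 = sin and V2 = cos, whose pointwise maximum has a kink at t* = π/4 where both indices are active. The forward-difference estimator below is an illustrative approximation of the Dini derivative, not part of the thesis:

```python
import numpy as np

V1, V2 = np.sin, np.cos
V = lambda t: max(V1(t), V2(t))

def dini_plus(h, t, tau=1e-7):
    """Forward-difference estimate of the upper Dini derivative D+h(t)."""
    return (h(t + tau) - h(t)) / tau

# At t* = pi/4 both functions attain the max, so I(t*) = {1, 2} and, by the lemma,
# D+V(t*) = max(V1'(t*), V2'(t*)) = max(cos(pi/4), -sin(pi/4)) = cos(pi/4).
t_star = np.pi / 4
assert abs(dini_plus(V, t_star) - np.cos(t_star)) < 1e-4
```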
4.3.3 The Invariance Principle
The following is a brief introduction to LaSalle’s invariance principle [60,91].
For the autonomous system

ẋ = f(x), (4.6)

we assume only that f : D → R^n is continuous, where D is an open subset of R^n. With only continuity, uniqueness of solutions is not assured. Let x0 be a point of D. The initial time will always be chosen equal to 0. A non-continuable solution with x(0) = x0 will be written x : (α, ω) → R^n, where α < 0 < ω.
The positive limit set of a solution x(t) will be designated by Λ+(x0). A fundamental
property of limit sets is stated in the following lemma.
Lemma 4.4 ( [91], page 364) If a solution x(t) is bounded, then Λ+(x0) is nonempty,
compact, and connected. Moreover,
x(t)→ Λ+(x0) as t→ ω and ω =∞.
We are now ready to state the celebrated theorem—LaSalle’s invariance principle.
Theorem 4.1 ([91], LaSalle, 1968) Let x(t) be a solution of (4.6) and let V : D → R be a locally Lipschitz function such that D+V(x(t)) ≤ 0 on [0, ω). Then Λ+(x0) ∩ D is contained in the union of all solutions that remain in Z = {x ∈ D : D+V(x) = 0} on their maximal intervals of definition.
4.4 Coupled Nonlinear Systems: Fixed Topology
We first attempt to solve Problem 4.2. The system under focus is given in (4.4). Now some hypotheses on the vector fields are assumed. Let C^i = co{xi, xj : j ∈ Ni} denote the polytope (convex hull) in R^m formed by the states of agent i and its neighbors (Ni is the set of neighbors of agent i). Then for each i ∈ I0 = {1, . . . , n}, we assume that
A1′: f^i is continuous on R^mn;

A2′: for all x ∈ R^mn, f^i(x) ∈ T(xi, C^i). Moreover, f^i(x) ≠ 0 if C^i is not a singleton and xi is its vertex.
Assumption A1′ is to guarantee the existence of solutions. The assumption f^i(x) ∈ T(xi, C^i), the tangent cone to the set C^i at xi, is sometimes referred to as a sub-tangentiality condition [19]. Fig. 4.6 illustrates two example situations of A2′.

Figure 4.6: Two examples of vector fields f^i satisfying assumption A2′.

In the
left-hand example, agent 1 has only one neighbor, agent 2; the polytope C^1 is the line segment joining x1 and x2; the tangent cone T(x1, C^1) is the closed ray {λ(x2 − x1) : λ ≥ 0} (in the picture it is shown translated to x1); and A2′ means that f^1 is nonzero and points in the direction of x2 − x1. In the right-hand example, agent 1 has two neighbors, agents 2 and 3; the polytope C^1 is the triangle with vertices x1, x2, x3; the tangent cone T(x1, C^1) is

{λ1(x2 − x1) + λ2(x3 − x1) : λ1, λ2 ≥ 0}

(again, it is shown translated to x1); and A2′ means that f^1 is nonzero and points into this closed cone. In general, A2′ requires that f^i(x) have the form

Σ_{j∈Ni} αj(x)(xj − xi), (4.7)

where the αj(x) are non-negative scalar functions, and that f^i(x) is nonzero if C^i is not a singleton and xi is its vertex.
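A vector field of the form (4.7) with constant weights αj ≡ 1 (the linear consensus special case, cf. Remark 4.1) can be written down directly; the data structures below are illustrative choices, not from the thesis:

```python
import numpy as np

def f_i(i, x, neighbors, alpha=1.0):
    """Vector field of the form (4.7):
    f^i(x) = sum_{j in N_i} alpha_j(x) (x_j - x_i), with constant alpha_j = alpha >= 0."""
    return sum(alpha * (x[j] - x[i]) for j in neighbors[i])

x = {0: np.array([0.0, 0.0]), 1: np.array([1.0, 0.0]), 2: np.array([0.0, 1.0])}
nbrs = {0: [1, 2], 1: [], 2: []}

# Agent 0 has neighbors at distinct states, so f^0 is a nonnegative
# combination of (x_j - x_0) and is nonzero.
assert np.allclose(f_i(0, x, nbrs), [1.0, 1.0])

# When all states coincide, C^i is the singleton {x_i} and f^i = 0, as A2' requires.
x_all_equal = {0: np.zeros(2), 1: np.zeros(2), 2: np.zeros(2)}
assert np.allclose(f_i(0, x_all_equal, nbrs), [0.0, 0.0])
```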
For the coupled nonlinear system (4.4) with assumptions A1′ and A2′, the set which we are interested in studying for stability and attractivity is defined as

Ω = {x ∈ R^mn : x1 = · · · = xn}.

Let x be any point in Ω. It is clear that for any i,

C^i = co{xi, xj : j ∈ Ni} = {xi}.

Thus T(xi, C^i) = {0}. It then follows from Assumption A2′ that

f^i(x) = 0, for all i = 1, . . . , n,

which implies that Ω is a set of equilibria and of course is an invariant set.
Now we present our main results for this coupled nonlinear system. The first result
is stability of the interconnected system (4.4) without needing any property of the inter-
action digraph. The proof relies on showing that the maximum distance of any agent to
a fixed point is nonincreasing.
Lemma 4.5 The interconnected system (4.4) has a solution x(t) over [0, ∞). Let

V^a_i(x) = ½‖xi − a‖² and V^a(x) = max_{i∈I0} V^a_i(x), (4.8)

where a ∈ R^m is an arbitrary point. Then along any trajectory x(t), D+V^a(x(t)) ≤ 0.
Proof: Consider an arbitrary x0 ∈ Rmn and let x(t) be a solution of (4.4) defined on
the maximal interval [0, ω) ⊆ [0,∞) with x(0) = x0. Such a solution exists by Peano’s
Theorem.
Due to the maximum function in V^a(x), it is not differentiable everywhere, so we use the Dini derivative here. Define I(x) = {i ∈ I0 : V^a_i(x) = V^a(x)}, the set of indices where the maximum is reached. By Lemma 4.3, it follows that

D+V^a(x(t)) = max_{i∈I(x(t))} V̇^a_i(x(t)). (4.9)
Define a ball in R^m as

B^a(x) = {y ∈ R^m : ‖y − a‖² ≤ 2V^a(x)}.

That is, the ball B^a(x) encloses all the points x1, . . . , xn. Then by convexity, C^i = co{xi, xj : j ∈ Ni} ⊂ B^a(x) (see Fig. 4.7; the dashed lines with arrows represent the arcs of the interaction digraph). By Lemma 4.1 and assumption A2′, we have
Figure 4.7: Illustrations for B^a(x) and f^i(x).
f^i(x) ∈ T(xi, C^i) ⊂ T(xi, B^a(x)).

In addition, i ∈ I(x) means that xi lies on the boundary of the ball B^a(x), and so the vector (xi − a) is a radius of the ball. Hence

(xi − a) ∈ N(xi, B^a(x)),

the normal cone to B^a(x) at xi. Thus, by the definition of the normal cone, it follows that for each i ∈ I(x),

V̇^a_i(x(t)) = (xi − a)^T f^i(x) ≤ 0.

Together with (4.9), this leads to

D+V^a(x(t)) ≤ 0.
In addition, from the above and Lemma 4.2, one sees that V^a(x(t)) ≤ V^a(x(0)) for all t ∈ [0, ω) and so x(t) is bounded. Thus, ω = ∞. □
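The monotonicity asserted by Lemma 4.5 can be observed in a simple Euler simulation of the linear special case f^i(x) = Σ_{j∈Ni}(xj − xi) on a directed ring (an illustrative sketch; the graph, step size, and point a are arbitrary choices, not from the thesis):

```python
import numpy as np

# Euler simulation of x_i' = sum_{j in N_i}(x_j - x_i) on a directed ring,
# checking that V^a(x) = max_i (1/2)||x_i - a||^2 never increases (Lemma 4.5).
rng = np.random.default_rng(0)
n, dt, a = 5, 0.01, np.array([0.3, -0.2])
x = rng.standard_normal((n, 2))
nbrs = {i: [(i + 1) % n] for i in range(n)}   # directed ring

def V_a(x):
    return 0.5 * max(np.sum((xi - a) ** 2) for xi in x)

vals = [V_a(x)]
for _ in range(2000):
    # Each Euler update is a convex combination of current states (dt < 1).
    x = x + dt * np.array([sum(x[j] - x[i] for j in nbrs[i]) for i in range(n)])
    vals.append(V_a(x))

# Nonincreasing up to a tiny floating-point tolerance.
assert all(v2 <= v1 + 1e-9 for v1, v2 in zip(vals, vals[1:]))
```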
Theorem 4.2 The interconnected system (4.4) is stable with respect to every equilibrium
x ∈ Ω.
Proof: Consider any equilibrium x̄ ∈ Ω. It will be of the form x̄ = ζ ⊗ 1n for some ζ ∈ R^m. Let

V^ζ(x) = ½ max_{i∈I0} ‖xi − ζ‖².

It can be easily verified that V^ζ(x) = 0 when x = x̄ and V^ζ(x) > 0 when x ≠ x̄. That is, the function V^ζ(x) is positive definite with respect to the equilibrium x̄.
In addition, by Lemma 4.5, the Dini derivative along any trajectory of (4.4) satisfies

D+V^ζ(x(t)) ≤ 0.
Then by Theorem 6.2 in [91], page 89, it follows that the system (4.4) is stable with respect to x̄. □
The second result shows the relevance of the interaction digraph G to global attrac-
tivity with respect to Ω. A necessary and sufficient condition is obtained via nonsmooth
analysis with the invariance principle playing a central role.
Theorem 4.3 The interconnected system (4.4) is globally attractive with respect to Ω if
and only if the interaction digraph G is QSC.
Proof: (⇐=) Consider an arbitrary x0 ∈ Rmn. By Lemma 4.5 we know the system (4.4)
has a solution x(t) over [0,∞) with x(0) = x0. Let V a(x) be a function of the form (4.8),
where a ∈ Rm is an arbitrary point. By Lemma 4.5 again, we have D+V a(x(t)) ≤ 0.
Let Λ+(x0) be the positive limit set of solutions satisfying x(0) = x0. From the proof of
Lemma 4.5, we know that x(t) is bounded. Then, by Lemma 4.4, the positive limit set
Λ+(x0) is nonempty, compact, and connected. Moreover, x(t)→ Λ+(x0) as t→∞.
On the other hand, it follows from Theorem 4.1 that Λ+(x0) ⊂ M, where M is the union of all solutions that remain in Z^a = {x ∈ R^mn : D+V^a(x) = 0}.
Choose any two arbitrary points b, c ∈ R^m. Then by the same argument and Theorem 4.1, Λ+(x0) ⊂ M′, too, where M′ is the union of all solutions that remain in Z^b ∩ Z^c.

Next, we claim that M′ ⊂ Ω. To prove this, suppose conversely that there exists a point q = (q1, . . . , qn) ∈ M′ but q ∉ Ω. Denote the k vertices (2 ≤ k ≤ n because q ∉ Ω) of co{q1, . . . , qn} by z1, . . . , zk (see Fig. 4.8). Since b, c can be chosen freely, without loss
Figure 4.8: A point q in M′ but not in Ω.
of generality we can assume that there is one and only one vertex, say z1, on the boundary of B^b(q), and one and only one vertex, say z2, on the boundary of B^c(q).
Let I′(q) = {i ∈ I0 : qi = z1} and I″(q) = {i ∈ I0 : qi = z2} be the sets of agents located at z1 and z2, respectively. If the interaction digraph G is QSC, it follows from Theorem 2.1 that there is a centre node, say vc. Since I′(q) and I″(q) are disjoint, the centre node vc cannot be in both sets. Without loss of generality, say it does not belong to I′(q).
Consider x(0) = q and let x(t) be any solution of (4.4) leaving from q. Since q ∈
M′ ⊂ Z^b ∩ Z^c, we have that, for all t ∈ [0, ∞),

D+V^b(x(t)) = max_{i∈I′(x(t))} (xi(t) − b)^T f^i(x(t)) = 0,

where I′(x(t)) = {i ∈ I0 : V^b(x(t)) = V^b_i(x(t))}. Notice that we can choose b outside of a compact set containing all xi(t), t ∈ [0, ∞), so for all i ∈ I′(x(t)),

xi(t) − b ≠ 0, ∀t ∈ [0, ∞).

Further, by Assumption A2′ and the fact that T(xi, C^i) is strictly contained in T(xi, B^b(x)), f^i(x(t)) is not perpendicular to (xi(t) − b). So D+V^b(x(t)) = 0 implies that at each t there exists an i ∈ I′(x(t)) such that f^i(x(t)) = 0, i.e., ẋi(t) = 0.
Next, by the definition of the set I′(x(0)), one has

g(x(0)) := min_{i∈I′(x(0))} V^b_i(x(0)) − max_{j∈I0−I′(x(0))} V^b_j(x(0)) > 0.

Since t ↦ g(x(t)) is a continuous function, it follows that there exists a sufficiently small positive scalar ω1 such that ∀t ∈ [0, ω1], g(x(t)) > 0; that is, for any i ∈ I′(x(0)) and j ∈ I0 − I′(x(0)),

V^b_i(x(t)) > V^b_j(x(t)).

Hence,

(I0 − I′(x(0))) ∩ I′(x(t)) = ∅, ∀t ∈ [0, ω1],

or, what is the same,

I′(x(t)) ⊆ I′(x(0)), ∀t ∈ [0, ω1].
Now partition the set I′(x(t)) as I′(x(t)) = J(x(t)) ∪ J̄(x(t)), where

J(x(t)) = {i ∈ I′(x(t)) : f^i(x(t)) = 0},

J̄(x(t)) = {i ∈ I′(x(t)) : f^i(x(t)) ≠ 0}.

By construction, for all i ∈ I′(x(0)), xi(0) = z1. Thus, i ∈ J̄(x(0)) implies xi(0) = z1 and f^i(x(0)) ≠ 0, which in turn implies that agent i has a neighbor in I0 − I′(x(0)).
Indeed, if all neighbors of i ∈ J̄(x(0)) were in I′(x(0)), then C^i(x(0)) would be a singleton and hence necessarily, from Assumption A2′, f^i(x(0)) = 0, contradicting the fact that i ∈ J̄(x(0)).

Let j ∈ I0 − I′(x(0)) be a neighbor agent of i ∈ J̄(x(0)). Since, for all t ∈ [0, ω1], I′(x(t)) ⊆ I′(x(0)), it follows that I0 − I′(x(0)) ⊆ I0 − I′(x(t)), and hence j ∈ I0 − I′(x(t)) for all t ∈ [0, ω1]. Now, if i ∈ I′(x(t)) at some t ∈ [0, ω1], then xi(t) is on the boundary of the ball B^b(x(t)) and so xi(t) is a vertex of C^i(x(t)). Moreover, since agent i has a neighbor j ∈ I0 − I′(x(t)), C^i(x(t)) is not a singleton. By Assumption A2′, one has that f^i(x(t)) ≠ 0. We have thus shown that

(∀t ∈ [0, ω1]) i ∈ I′(x(t)) =⇒ f^i(x(t)) ≠ 0.

This in particular implies that, during the interval [0, ω1], no agent in J̄(x(0)) can get to J(x(t)) or, equivalently,

(∀t ∈ [0, ω1]) J(x(t)) ⊆ J(x(0)).
Next, we show that J(x(t)) becomes strictly contained in J(x(0)). Suppose, by way of contradiction, that J(x(t)) = J(x(0)) for all t ∈ (0, ω1]. Then

(∀i ∈ J(x(0))) (∀t ∈ [0, ω1]) f^i(x(t)) = 0.

This means that, during [0, ω1], xi(t) = z1 for all i ∈ J(x(0)). Since xi(t) is a vertex of C^i(x(t)) and f^i(x(t)) = 0, Assumption A2′ implies that C^i(x(t)) is a singleton. Hence, for each i ∈ J(x(0)) and all t ∈ [0, ω1], all neighbor agents of agent i are collocated at z1. Since the centre node of G does not belong to J(x(0)) ⊂ I′(x(0)), some agent in J(x(0)) must have a neighbor agent j in J̄(x(0)) ∪ (I0 − I′(x(0))). As shown above, such a neighbor agent j is such that xj(t) = z1 for all t ∈ [0, ω1], which implies that j ∉ I0 − I′(x(0)). On the other hand, the fact that xj(t) = z1 for all t ∈ [0, ω1] also implies that f^j(x(0)) = 0, contradicting the fact that j ∈ J̄(x(0)).
We have thus shown that there exists t1 ∈ (0, ω1] such that J (x(t1)) is a strict subset
of J (x(0)). A repetition of this argument leads to the existence of tk such that J (x(tk)) is
empty, which contradicts the fact that there exists an i ∈ I ′(x(t)) such that f i(x(t)) = 0.
Thus, the solution x(t)→ Ω as t→∞ and global attractivity follows.
(=⇒) To prove the contrapositive, assume that G is not QSC; that is, there are two nodes i* and j* such that for any node k, either i* or j* is not reachable from k. Let V1 be the subset of nodes from which i* is reachable and V2 be the subset of nodes from which j* is reachable. Obviously, V1 and V2 are disjoint. Moreover, for each node i ∈ V1 (resp. V2), the set of neighbors of node i is a subset of V1 (resp. V2).

Choose any z1, z2 ∈ R^m such that z1 ≠ z2, and pick initial conditions

xi(0) = z1 for all i ∈ V1, and xi(0) = z2 for all i ∈ V2.
Then by assumption A2′,

xi(t) = z1 for all i ∈ V1, and xi(t) = z2 for all i ∈ V2, for all t ≥ 0.

This proves that the system is not globally attractive with respect to Ω. □
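Theorem 4.3 can be illustrated on the linear special case: a digraph with a centre node drives all states together, while two isolated "leaders" holding different values prevent attractivity with respect to Ω. The graphs, dynamics, and initial conditions below are illustrative choices, not from the thesis:

```python
import numpy as np

def simulate(nbrs, x0, steps=4000, dt=0.01):
    """Euler simulation of the consensus dynamics x_i' = sum_{j in N_i}(x_j - x_i),
    with N_i the set of agents whose states influence agent i."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * np.array([sum(x[j] - x[i] for j in nbrs[i])
                               for i in range(len(x))])
    return x

x0 = [0.0, 1.0, 5.0]

qsc = {0: [], 1: [0], 2: [1]}        # chain: every node reachable from node 0
x = simulate(qsc, x0)
assert np.max(x) - np.min(x) < 1e-3  # states agree: attractive w.r.t. Omega

not_qsc = {0: [], 1: [0], 2: []}     # two isolated "leaders" at different values
x = simulate(not_qsc, x0)
assert np.max(x) - np.min(x) > 1.0   # disagreement persists: not QSC, no attractivity
```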
4.5 Coupled Nonlinear Systems: Dynamic Topology
In this section we turn our attention to the coupled nonlinear system with dynamic
topology. The mathematical models are given in (4.1)–(4.3). Now some hypotheses are
introduced on the general vector fields of the family (4.1).
Let C^i_p = co{xi, xj : j ∈ Ni(p)} denote the polytope in R^m formed by the states of agent i and its neighbors. Also, it is convenient to introduce a subset X ⊂ R^m of the common state space that plays the role of a region of focus. In this problem, initial states of the agents will be in X^n and attractivity will occur in X^n. Let I0 denote the
index set {1, . . . , n} and assume that, for each i ∈ I0 and each p ∈ P, the vector fields f^i_p : R^mn → R^m satisfy the following two assumptions:

A1: f^i_p is locally Lipschitz on X^n;

A2: for all x ∈ X^n, f^i_p(x) ∈ ri(T(xi, C^i_p)).
Assumption A2 is sometimes referred to as a strict sub-tangentiality condition. Compared with Assumption A2′ introduced in the previous section, Assumption A2 excludes the case where f^i_p(x), viewed as a vector applied at the point xi, is tangent to the relative boundary of the convex set C^i_p. Two example situations of A2 are illustrated in Fig. 4.9.

Figure 4.9: Some examples of vector fields f^i_p satisfying assumption A2.
Remark 4.1 For the coupled linear system (3.1) we studied in Chapter 3, a comparison
with (4.7) makes it clear that for each i ∈ I0 and each p ∈ P, the vector fields in (3.1)
satisfy assumptions A1 and A2 with X = Rm (and of course satisfy A1′ and A2′).
Similar to the linear case, we let Sdwell(τD) denote the class of piecewise constant
switching signals with dwell time τD and make the following assumption:
A3: σ(t) ∈ Sdwell(τD).
In the following, we shall study how the stability and attractivity of the switched
interconnected system (4.3) are affected by the dynamic interaction digraph Gσ(t).
4.5.1 Set Invariance and Uniform Stability
Firstly, we introduce an important positive invariance property of compact convex sets. It is perhaps helpful to give some intuition using a 2D example. For m = 2,
all agents move in the plane. Let A be a compact convex subset of X ⊂ R2 and assume
all agents start in A. Let C(t) denote the convex hull of the agents’ locations at time t.
Because A is convex, clearly C(0) ⊂ A. Now invoke assumption A2. An agent that is
initially in the interior of C(0) can head off in any direction at t = 0, but an agent that is
initially on the boundary of C(0) is constrained to head into its interior. In this way, C(t)
is non-increasing (if t2 > t1, then C(t2) ⊆ C(t1)). Saying that the agents cannot exit the
set A is mathematically equivalent to saying that the Cartesian product An is positively
invariant for the switched system (4.3).
Theorem 4.4 Let A ⊂ X be a compact convex set. Then An is positively invariant for
the switched interconnected system (4.3).
The proof requires Nagumo’s theorem concerning set invariance.
Theorem 4.5 ([7], Nagumo, 1942) Consider the system ẏ = F(y), with F : R^l → R^l, and let Y ⊂ R^l be a closed convex set. Assume that, for each y0 in Y, there exists ε(y0) > 0 such that the system admits a unique solution y(t, y0) defined for all t ∈ [0, ε(y0)). Then,

y0 ∈ Y =⇒ y(t, y0) ∈ Y, ∀t ∈ [0, ε(y0))

if and only if F(y) ∈ T(y, Y) for all y ∈ Y.
Proof of Theorem 4.4: Let A be any compact convex set in X and consider any initial state x0 ∈ A^n and any initial time t0. For any switching signal σ(t) ∈ Sdwell(τD), let x(t, t0, x0) be the solution of the switched interconnected system (4.3) with x(t0) = x0, and let [t0, t0 + ε(t0, x0)) be its maximal interval of existence.
For any point x ∈ A^n, it is obvious that C^i_p ⊂ A for all i ∈ I0 and p ∈ P, by convexity of A. Thus, by property 1 in Lemma 4.1,

f^i_p(x) ∈ ri(T(xi, C^i_p)) ⊂ T(xi, A), ∀i ∈ I0, ∀p ∈ P,

and by property 2 in the same lemma,

g(t, x) := f_{σ(t)}(x) ∈ T(x, A^n) for all t ∈ R and x ∈ A^n.

Set y = (t, x) and construct the augmented system

ẏ = F(y) := (1, g(y)). (4.10)
Since ẋ = g(t, x) admits a unique solution x(t, t0, x0) defined for all t ∈ [t0, t0 + ε(t0, x0)), it follows that for all y0 = (t0, x0) ∈ R × A^n, the augmented system (4.10) has a unique solution y(t, y0) defined on [0, ε(y0)). Moreover,

F(y) ∈ T(t, R) × T(x, A^n) = T(y, R × A^n) for all y ∈ R × A^n.

Since R × A^n is closed and convex, by Theorem 4.5 (Nagumo's Theorem) it follows that

y0 = (t0, x0) ∈ R × A^n =⇒ y(τ) ∈ R × A^n, ∀τ ∈ [0, ε(y0)). (4.11)
The solution y(τ) to (4.10) with initial condition y0 = (t0, x0) is related to the solution x(t) of ẋ = g(t, x) with initial condition x(t0) = x0 as follows:

(t, x(t)) = y(t − t0), ∀t ∈ [t0, t0 + ε(t0, x0)).

We thus rewrite condition (4.11) as

t0 ∈ R and x0 ∈ A^n =⇒ x(t) ∈ A^n, ∀t ∈ [t0, t0 + ε(t0, x0)).

Since the set A^n is compact, it follows by Theorem 2.4 in [58] that, for all x0 ∈ A^n and all t0, ε(t0, x0) = ∞ and the set A^n is positively invariant for the switched interconnected system (4.3). □
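The invariance of A^n can be observed in the linear special case: with A = [0, 1]² and a small step size, each Euler update is a convex combination of current states, so no agent ever leaves A (an illustrative sketch with an arbitrarily chosen graph, not from the thesis):

```python
import numpy as np

# Agents start in the box A = [0,1]^2; under x_i' = sum_{j in N_i}(x_j - x_i),
# each Euler update is a convex combination of current states, so A^n is invariant.
rng = np.random.default_rng(1)
n, dt = 6, 0.02
x = rng.uniform(0.0, 1.0, size=(n, 2))
nbrs = {i: [(i + 1) % n, (i + 2) % n] for i in range(n)}

for _ in range(1000):
    x = x + dt * np.array([sum(x[j] - x[i] for j in nbrs[i]) for i in range(n)])
    # Still in A^n at every step (up to floating-point tolerance).
    assert np.all(x >= -1e-12) and np.all(x <= 1.0 + 1e-12)
```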
The second result establishes uniform stability with respect to Ω, again without need-
ing any property of the dynamic interaction digraph.
Theorem 4.6 The switched interconnected system (4.3) is US with respect to every equi-
librium x ∈ Ω ∩ int(X n).
Proof: Any equilibrium x̄ ∈ Ω ∩ int(X^n) is of the form

x̄ = ζ ⊗ 1n,

where ζ ∈ int(X). Let ε > 0 be arbitrary. We choose δ (0 < δ ≤ ε) small enough so that the box

Aδ(ζ) := {y ∈ R^m : ‖y − ζ‖∞ ≤ δ}

is still within X. Obviously, Aδ(ζ) is a compact convex set. It follows from Theorem 4.4 that the Cartesian product A^n_δ(ζ) is positively invariant for the system (4.3).

Notice that x ∈ A^n_δ(ζ) is equivalent to

‖x − x̄‖∞ ≤ δ,

and also

A^n_δ(ζ) ⊆ A^n_ε(ζ).

We have thus proven that ∀ε > 0, ∃δ > 0 such that ∀t0,

‖x0 − x̄‖ ≤ δ =⇒ (∀t ≥ t0) ‖x(t) − x̄‖ ≤ ε.
Hence, the conclusion follows. □
4.5.2 Uniform Attractivity
Now comes the result concerning uniform attractivity of the switched interconnected
system (4.3).
Theorem 4.7 Suppose X is closed and convex. The switched interconnected system
(4.3) is UA in X n with respect to Ω if and only if the dynamic interaction digraph Gσ(t)
is UQSC.
Remark 4.2 When X = Rm in assumptions A1 and A2, the switched interconnected
system (4.3) is UGA with respect to Ω if and only if Gσ(t) is UQSC.
Remark 4.3 Under assumptions A1 and A2, if Gσ(t) is UQSC, the switched interconnected system (4.3) not only is UA in X^n with respect to Ω but also has the property that

(∀x0 ∈ X^n)(∃x̄ ∈ Ω ∩ X^n) lim_{t→∞} x(t) = x̄.

This fact can be easily seen from the proof.
Now we are going to prove Theorem 4.7. Before proceeding, we need some lemmas.
The first three are basic properties of Lipschitz continuity, continuity, and class KL functions, respectively. The remaining lemmas establish some technical properties that hold for the switched interconnected system (4.3).
Lemma 4.6 Suppose that f : R^l × M → R, where M is a compact subset of R^k, is locally Lipschitz in its first argument. Then g : R^l → R, y ↦ max{f(y, z) : z ∈ M}, is locally Lipschitz in y.
Proof: Since f is locally Lipschitz in y, for each z ∈ M there exists Lz such that

∀y, y′ ∈ Br(y0) := {y ∈ R^l : ‖y − y0‖ ≤ r}, |f(y, z) − f(y′, z)| ≤ Lz‖y − y′‖.

On the other hand, for all y, y′ ∈ Br(y0),

|g(y) − g(y′)| = |max_{z∈M} f(y, z) − max_{z∈M} f(y′, z)|.

Let max_{z∈M} f(y, z) = f(y, zy) and max_{z∈M} f(y′, z) = f(y′, zy′). Then

f(y, zy) ≥ f(y, zy′) and f(y′, zy′) ≥ f(y′, zy).

So there exists at least one λ with 0 ≤ λ ≤ 1 (see Fig. 4.10) such that, with ȳ = (1 − λ)y + λy′,

f(ȳ, zy) = f(ȳ, zy′).
Figure 4.10: Illustration for the point ȳ where f(ȳ, zy) = f(ȳ, zy′).
Thus,

|g(y) − g(y′)| = |f(y, zy) − f(ȳ, zy) + f(ȳ, zy′) − f(y′, zy′)|
≤ |f(y, zy) − f(ȳ, zy)| + |f(ȳ, zy′) − f(y′, zy′)|
≤ Lzy‖y − ȳ‖ + Lzy′‖ȳ − y′‖
= λLzy‖y − y′‖ + (1 − λ)Lzy′‖y − y′‖.

Set L = max_{z∈M} Lz. Then it follows that

∀y, y′ ∈ Br(y0), |g(y) − g(y′)| ≤ L‖y − y′‖,
which shows that the function g(·) is locally Lipschitz in its argument. □
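Lemma 4.6 can be spot-checked numerically by discretizing M and verifying the bound L = max_z Lz on random pairs of points (an illustrative sketch with an arbitrarily chosen f, not from the thesis):

```python
import numpy as np

M = np.linspace(0.0, 1.0, 201)          # compact set M, discretized
f = lambda y, z: np.sin(y * z)          # |df/dy| <= |z| <= 1, so each L_z <= 1
g = lambda y: np.max(f(y, M))           # pointwise max over M

# Check |g(y) - g(y')| <= L |y - y'| with L = max_z L_z = 1 on random pairs.
rng = np.random.default_rng(2)
ys = rng.uniform(-3.0, 3.0, size=200)
L = 1.0
for y1, y2 in zip(ys[::2], ys[1::2]):
    assert abs(g(y1) - g(y2)) <= L * abs(y1 - y2) + 1e-12
```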
Lemma 4.7 Let f : R^m → R be continuous and, given ξ ∈ R^l and A ∈ R^{l×m}, assume that the set {z ∈ R^m : Az ≼ ξ} is compact (the inequality is imposed component-wise). Then

g : R^l → R, ξ ↦ min{f(z) : Az ≼ ξ},

is continuous.
Proof: It is straightforward from Berge's Maximum Theorem below. □
Theorem 4.8 ([107], Berge's Maximum Theorem) Let f : X × Y → R be a continuous function and D : X → Y be a nonempty, compact-valued, continuous correspondence¹. Then:

1. f* : X → R with

f*(x) := max{f(x, y) : y ∈ D(x)}

is a continuous function;

2. D* : X → Y with

D*(x) := argmax{f(x, y) : y ∈ D(x)} = {y ∈ D(x) : f(x, y) = f*(x)}

is a compact-valued, upper-semi-continuous² correspondence.
Remark 4.4 Every upper-semi-continuous single-valued correspondence is a continuous function.
Lemma 4.8 Let g : [0, a) → [0, ∞) be a locally Lipschitz and positive definite function, where a is a positive real number. Then, for all y0 ∈ [0, a), the differential equation

ẏ = −g(y), y(t0) = y0

has a unique solution y(t) = γ(y0, t − t0) defined for all t ≥ t0, where γ : [0, a) × [0, ∞) → [0, ∞) is a class KL function.

The proof of the lemma above employs the same arguments as the proof of Lemma 3.4 in [58].
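Lemma 4.8 is easy to see for the particular choice g(y) = y: the solution is γ(y0, t) = y0 e^{−t}, which exhibits the class-KL properties used later (an illustrative check, not from the thesis):

```python
import numpy as np

# For g(y) = y (locally Lipschitz, positive definite), the solution of
# y' = -g(y), y(t0) = y0 is gamma(y0, t - t0) with gamma(y0, t) = y0 * exp(-t).
gamma = lambda y0, t: y0 * np.exp(-t)

assert gamma(0.7, 0.0) == 0.7                 # gamma(M, 0) = M
assert gamma(0.3, 1.0) < gamma(0.7, 1.0)      # class K in the first argument
assert gamma(0.7, 2.0) < gamma(0.7, 1.0)      # decreasing in t
assert gamma(0.7, 50.0) < 1e-20               # and converging to 0
```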
¹Let Θ and S be subsets of R^l and R^n, respectively. A correspondence Φ from Θ to S is a map that associates with each element θ ∈ Θ a (nonempty) subset Φ(θ) ⊂ S.

²Denote a correspondence Φ from Θ to S by Φ : Θ → P(S), where P(S) denotes the power set of S, i.e., the set of all nonempty subsets of S. A correspondence Φ : Θ → P(S) is said to be upper-semi-continuous at a point θ ∈ Θ if for all open sets V such that Φ(θ) ⊂ V, there exists an open set U containing θ such that θ′ ∈ U ∩ Θ implies Φ(θ′) ⊂ V.
Now we are ready to establish certain technical properties of the switched intercon-
nected system (4.3). First, we introduce some notations that are used in the following
proofs.
Define a hyper-cube in R^m by

Ar(z) = {y ∈ R^m : ‖y − z‖∞ ≤ r}.

Let c > 0 be large enough that Xc := X ∩ Ac(0) is not empty. For any x = (x1, . . . , xn) ∈
Figure 4.11: Illustration for notations.
X^n_c, the Cartesian product of n copies of Xc, we let C(x) = co{x1, . . . , xn}, the polytope of x1, . . . , xn. In addition, for each j = 1, . . . , m, we let

aj(x) = max_{i∈I0} xij and bj(x) = min_{i∈I0} xij, (4.12)

where xij is the jth entry of xi ∈ R^m. In what follows, we call the set

{y ∈ C(x) : y1 = a1(x)}

the first upper boundary of C(x). Finally, for small enough r > 0, we define

Hr(x) = {y ∈ C(x) : y1 ≤ a1(x) − r}.
See Fig. 4.11 for an example.
In the following two lemmas, we assume that the hypotheses of Theorem 4.7 hold.
That is, X is a closed convex set and the dynamic interaction digraph Gσ(t) is UQSC.
Remark 4.5 If the dynamic interaction digraph Gσ(t) is UQSC, it can be easily seen that
there is some p ∈ P such that the interaction digraph Gp has a nonempty arc set Ep.
The first lemma shows that if at time t′ all agents are in Xc, then the agents that at some time t1 ≥ t′ are in the interior of C(x(t′)) cannot reach the first upper boundary of C(x(t′)) in finite time. See Fig. 4.12 for an illustration.

Figure 4.12: Illustration for Lemma 4.9.
Lemma 4.9 Given c > 0 large enough that Xc ≠ ∅, there exists a class KL function γ : [0, 2c] × [0, ∞) → [0, ∞) with the property γ(M, 0) = M such that the following holds. For any (t′, x(t′)) ∈ R × X^n_c, any ε > 0 sufficiently small, and any T > 0, if there exists an i such that xi(t1) ∈ Hε(x(t′)) at some t1 ≥ t′, then xi(t) ∈ Hδ(x(t′)) for all t ∈ [t1, t1 + T] with δ = γ(ε, T).
Proof: For an arbitrarily large c > 0, we consider any (t′, x(t′)) ∈ R × X^n_c and let ς = x(t′). Then we define

fς : [b1(ς), a1(ς)] → R,
w ↦ max_{i∈I0} max{f^{i1}_p(x) : p ∈ P, x ∈ C^n(ς) such that xi1 = w},

where f^{i1}_p and xi1 are the first components of f^i_p and xi, respectively. By assumption A1, f^i_p is locally Lipschitz, so it follows from Lemma 4.6 that fς is locally Lipschitz on its domain.
Next, we are going to show that

fς(w) = 0 if w = a1(ς), and fς(w) > 0 if w ∈ [b1(ς), a1(ς)).

Firstly, for each i ∈ I0 and each p ∈ P, notice that x ∈ C^n(ς) and xi1 = a1(ς) imply that C^i_p ⊂ C(ς) and xi is on the first upper boundary of C(ς). By assumption A2 and Lemma 4.1, f^i_p(x) ∈ ri(T(xi, C^i_p)) ⊂ T(xi, C(ς)), and so f^{i1}_p(x) ≤ 0. Next choose i ∈ I0 and p ∈ P such that Ep is non-empty, that is, agent i has at least one neighbor agent. Such a pair (i, p) exists by Remark 4.5. Pick x ∈ C^n(ς) so that xi1 = a1(ς) and, for all j ∈ Ni(p), xj = xi. Since C^i_p is the singleton {xi}, it follows from assumption A2 that f^i_p(x) = 0, implying fς(a1(ς)) = 0. Secondly, for each w ∈ [b1(ς), a1(ς)), choose (i, p) as before and pick x ∈ C^n(ς) such that xi1 = w and, for all j ∈ Ni(p), xj1 = a1(ς) and xjk = xik, k = 2, . . . , m. Now C^i_p is the line segment with vertices (a1(ς), xi2, . . . , xim) and (w, xi2, . . . , xim), so by assumption A2, f^{i1}_p(x) > 0, implying that fς(w) > 0.
Letting y := a1(ς) − w, we define hς(y) := fς(a1(ς) − y). It easily follows that the function hς(·) is locally Lipschitz and positive definite on [0, a1(ς) − b1(ς)]. Extend its domain to [0, 2c] by the following construction:

h̄ς(y) = hς(y) for y ∈ [0, a1(ς) − b1(ς)], and h̄ς(y) = hς(a1(ς) − b1(ς)) for y ∈ (a1(ς) − b1(ς), 2c].
This function is still locally Lipschitz in y and positive definite. On the other hand, by Lemma 4.7 the function h̄ς(y) is continuous with respect to the parameter ς. Furthermore, notice that X^n_c is a compact set, so the function

h(y) := max{h̄ς(y) : ς ∈ X^n_c} (4.13)

is well-defined, locally Lipschitz by Lemma 4.6, and positive definite. Given any initial condition (t0, y0) ∈ R × [0, 2c], the solution of ẏ = −h(y) is given by γ(y0, t − t0), which is a class KL function by Lemma 4.8 and satisfies the property γ(M, 0) = M.
Consider now any ε sufficiently small and any T > 0. If there exists an i such that
xi(t1) ∈ Hε(x(t′)) at some t1 ≥ t′, by recalling ς = x(t′) it follows from Theorem 4.4 that
x(t) ∈ Cn(ς) for all t ≥ t′ and then clearly it is also true for all t ≥ t1. Hence, from the
definition of fς(·) we know that
xi1(t) = f i1σ(t)(x(t)) ≤ fς(xi1(t)) for all t ≥ t1.
Let w(t) be the solution of w = fς(w) with the initial condition w(t1) = xi1(t1). Thus,
by the Comparison Lemma [58], one obtains xi1(t) ≤ w(t) for all t ≥ t1.
On the other hand, considering the coordinate transformation y = a1(ς) − w, we
know that y(t) = a1(ς) − w(t) is the solution of ẏ = −hς(y); since the initial condition
y(t1) = a1(ς) − w(t1) = a1(ς) − xi1(t1) lies in [0, a1(ς) − b1(ς)], it is also a solution of the
extended equation. Furthermore, from (4.13) we know hς(y) ≤ h(y). Applying the
Comparison Lemma [58] again leads to
y(t) ≥ γ(y(t1), t− t1) for all t ≥ t1.
Hence, the following inequalities hold for all t ≥ t1:

xi1(t) ≤ w(t) = a1(ς) − y(t) ≤ a1(ς) − γ(y(t1), t − t1) = a1(ς) − γ(a1(ς) − xi1(t1), t − t1).

Also, xi(t1) ∈ Hε(x(t′)) implies a1(ς) − xi1(t1) ≥ ε. This, together with the fact that
γ(·, ·) is of class KL, leads to, for all t ∈ [t1, t1 + T],

xi1(t) ≤ a1(ς) − γ(a1(ς) − xi1(t1), t − t1) ≤ a1(ς) − γ(ε, T).
This in turn implies that, for all t ∈ [t1, t1 + T],

xi(t) ∈ Hδ(x(t′)) with δ = γ(ε, T). ∎
The next lemma shows that if at time t′ all agents are in Xc, then any agent that at
some time t1 ≥ t′ has a neighbor in the interior of C(x(t′)) for a certain amount of time
will itself be in the interior of C(x(t′)) at some time t2 ≥ t′. See Fig. 4.13 for an illustration
(the dashed arrow indicates that agent j is a neighbor of agent i in the picture).
Figure 4.13: Illustration for Lemma 4.10.
Lemma 4.10 Given c > 0 large enough that Xc ≠ ∅, there exists a class K function
ϕ : [0, 2c] → [0, ∞) with the property ϕ(M) < M when M ≠ 0 such that the following holds.
For any (t′, x(t′)) ∈ R × X^n_c and any δ > 0 sufficiently small, if there exist a pair (i, j)
and a t1 ≥ t′ such that j ∈ Ni(t) and xj(t) ∈ Hδ(x(t′)) for all t ∈ [t1, t1 + τD], then there
exists a t2 ∈ [t′, t1 + τD] such that xi(t2) ∈ Hε(x(t′)) with ε = ϕ(δ).
Proof: For an arbitrarily large c > 0, we consider any (t′, x(t′)) ∈ R × X^n_c and let
ς = x(t′). For any δ > 0 sufficiently small, and i ∈ I0, p ∈ P, we define the set

Oς(i, p, δ) := {x ∈ Cn(ς) : xi1 = a1(ς) and (∃j ∈ Ni(p)) xj ∈ Hδ(ς)}.
This is the set of states such that xi is on the first upper boundary of C(ς) and at least one
of its neighbors (in the digraph Gp), say agent j, has its state xj in Hδ(ς). See Fig. 4.14
for an illustration (the dashed arrows indicate which agents are neighbors of agent i in
graph Gp).
Figure 4.14: A possible element in Oς(i, p, δ).
Now define the minimum speed of any agent along the first direction when x ∈
Oς(i, p, δ) and the pair (i, p) ranges over I0 × P:

dς(δ) := min{|f^{i1}_p(x)| : i ∈ I0, p ∈ P, x ∈ Oς(i, p, δ)}.
Obviously, dς(δ) ≥ 0. Furthermore, if δ > 0, then by Remark 4.5 there exists a pair
(i, p) ∈ I0 × P such that Oς(i, p, δ) is non-empty. For any such pair (i, p), assumption A2
implies that, for all x ∈ Oς(i, p, δ), |f^{i1}_p(x)| > 0. It readily follows that dς(δ) > 0 when
δ > 0.
Notice that X^n_c is compact and that fp(x) is locally Lipschitz for each p ∈ P. So
there is a Lipschitz constant Lp such that

‖fp(x) − fp(z)‖ ≤ Lp‖x − z‖ for all (x, z) ∈ X^n_c × X^n_c.
Let L = max{Lp : p ∈ P}. Then

‖fp(x) − fp(z)‖ ≤ L‖x − z‖ for all p ∈ P and for all (x, z) ∈ X^n_c × X^n_c.
Define

ϕς(δ) := min{δ, τD dς(δ)/(τD L + 1)}.
We thus know that ϕς(δ) = 0 if δ = 0 and that ϕς(δ) > 0 if δ > 0. In addition, by
Lemma 4.7, ϕς(δ) is continuous in both δ and ς. Extend its domain to [0, 2c] by keeping
ϕς unchanged on [0, a1(ς) − b1(ς)] and setting ϕς(δ) = ϕς(a1(ς) − b1(ς)) for
δ ∈ (a1(ς) − b1(ς), 2c].
Clearly, this function is also continuous in ς. Since X^n_c is compact, the function

ϕ̄(δ) := min{ϕς(δ) : ς ∈ X^n_c such that a1(ς) − b1(ς) ≥ δ}

is well-defined. Furthermore, it is also continuous and positive definite. Hence, there
exists a class K function ϕ : [0, 2c] → [0, ∞) such that ϕ(δ) < ϕ̄(δ) ≤ δ if δ ≠ 0.
Next we show that if there exist a pair (i, j) and a t1 ≥ t′ such that j ∈ Ni(t)
and xj(t) ∈ Hδ(x(t′)) for all t ∈ [t1, t1 + τD], then there exists a t2 ∈ [t′, t1 + τD] such that
xi(t2) ∈ Hε(x(t′)) with ε = ϕ(δ). Suppose by contradiction that

xi(t) ∉ Hε(x(t′)) for all t ∈ [t′, t1 + τD]. (4.14)
On the other hand, from Theorem 4.4 we know xi(t) ∈ C(x(t′)) for all t ≥ t′. This,
together with (4.14), implies a1(ς) − xi1(t) < ε for all t ∈ [t′, t1 + τD]. On this time interval,
we define a new vector x′(t) by replacing only xi1(t) in x(t) with x′i1(t) = a1(ς). Thus, for
all t ∈ [t′, t1 + τD],

‖x(t) − x′(t)‖ = |xi1(t) − a1(ς)| < ε,
and therefore,
|f^{i1}_p(x′(t))| − |f^{i1}_p(x(t))| ≤ ‖fp(x′(t)) − fp(x(t))‖ ≤ L‖x(t) − x′(t)‖ < Lε. (4.15)
From the definition of dς(·), we know that, for all t ∈ [t′, t1 + τD],

|f^{i1}_p(x′(t))| ≥ dς(δ).
Combining the above inequality and (4.15), one has

|f^{i1}_p(x(t))| > dς(δ) − Lε.
Notice that

ε = ϕ(δ) < ϕ̄(δ) ≤ τD dς(δ)/(τD L + 1),

or, what is the same,

τD(dς(δ) − Lε) > ε > 0.
This implies that f^{i1}_p(x(t)) does not change sign on [t1, t1 + τD] and that

|xi1(t1 + τD) − xi1(t1)| = ∫_{t1}^{t1+τD} |f^{i1}_p(x(τ))| dτ > τD(dς(δ) − Lε) > ε,

which contradicts assumption (4.14). ∎
Proof of Theorem 4.7: (=⇒) To prove the contrapositive form, assume that Gσ(t)
is not UQSC, that is, for all T > 0 there exists t∗ ≥ 0 such that the digraph G([t∗, t∗ + T])
is not QSC. Then, by definition, in the digraph G([t∗, t∗ + T]) there are two nodes i∗
and j∗ such that, for any node k, either i∗ or j∗ is not reachable from k. Let V1 be the
subset of nodes from which i∗ is reachable and V2 be the subset of nodes from which j∗
is reachable. Obviously, V1 and V2 are disjoint. Moreover, for each node i ∈ V1 (resp.
V2), the set of neighbors of agent i in the digraph G([t∗, t∗ + T]) is a subset of V1 (resp.
V2). This implies that, for all t ∈ [t∗, t∗ + T] and for all (i, j) ∈ V1 × V2, Ni(σ(t)) ⊂ V1
and Nj(σ(t)) ⊂ V2.
Choose any z1 ∈ X and z2 ∈ X such that z1 ≠ z2, let t0 = t∗, and pick any initial
condition x(t0) such that

xi(t0) = z1 for all i ∈ V1 and xi(t0) = z2 for all i ∈ V2.
Then

xi(t) = z1 for all i ∈ V1 and xi(t) = z2 for all i ∈ V2, for all t ∈ [t0, t0 + T].
Let c = ‖x(t0)‖ and let ε be a positive scalar smaller than ‖z1 − z2‖/2. We have thus
found ε > 0 and c > 0 such that, for all T > 0, there exists t0 such that

(‖x(t0)‖ ≤ c) ∧ (x(t0) ∈ X^n),

but

(∃t = t0 + T) ‖x(t)‖Ω > ε.
This proves that the switched interconnected system (4.3) is not UA in X^n with
respect to Ω.
(⇐=) We show the uniform attractivity in X^n using the ∞-norm for convenience. That
is, ∀ε > 0, ∀c > 0, ∃T∗ > 0 such that, ∀t0 ≥ 0,

x0 ∈ X^n_c =⇒ (∀t ≥ t0 + T∗) ‖x(t)‖Ω ≤ ε.
Let ε > 0, c > 0 be arbitrary. Then, there exist a class KL function γ(·, ·) and a class
K function ϕ(·) satisfying the properties in Lemma 4.9 and Lemma 4.10, respectively.
For any given t0 ≥ 0 and x0 ∈ X^n_c, consider the solution x(t) of (4.3) with x(t0) = x0
and the following nonnegative vector function:

V(x) = [V1(x) · · · Vm(x)],

where Vj(x) = aj(x) − bj(x), j = 1, . . . , m. By Theorem 4.4, for any t ≥ t′ ≥ t0,
xi(t) ∈ C(x(t′)) ⊂ Xc for all i. It then follows that each Vj(x(t)), j = 1, . . . , m, is
non-increasing along the trajectory x(t).
If Gσ(t) is UQSC, it follows that there is a T′ > 0 such that for all t the union digraph
G([t, t + T′]) is QSC. Let T = T′ + 2τD, where τD is the dwell time. We are going
to show that there exists a class K function η(·) such that, for any fixed t′ ≥ t0,

V1(x(t′ + T̂)) − V1(x(t′)) ≤ −η(V1(x(t′))),     (4.16)

where T̂ = 2nT.
The proof relies on constructing a family of parameters ε1, δ2, ε2, . . . , εn−1, δn, εn,
defined recursively as follows:

set εn = V1(x(t′))/2;
for k = n, . . . , 2:
  set δk = γ(εk, T̂) (with T̂ = 2nT);
  set εk−1 = ϕ(δk).

Define γ̄(·) := γ(·, T̂). Then ε1 can be written as

ε1 = η(V1(x(t′))),

where η(·) := ϕ ∘ γ̄ ∘ · · · ∘ ϕ ∘ γ̄(·/2). It is a class K function since γ̄(·) and ϕ(·) are both
class K functions. Since γ(·, ·) is class KL with the property γ(M, 0) = M and T̂ > 0, it
follows that δk < εk. In addition, εk−1 < δk because ϕ(M) < M for nonzero M. Thus,

0 < ε1 < δ2 < · · · < δn < εn.
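The recursive construction above can be mirrored directly in code. The sketch below uses illustrative stand-ins for the comparison functions (γ(M, T) = Me^{−T} as a class KL function with γ(M, 0) = M, and ϕ(M) = M/2 as a class K function with ϕ(M) < M; both are hypothetical choices, not the functions produced by Lemmas 4.8 and 4.10) and checks the interleaved ordering 0 < ε1 < δ2 < ε2 < · · · < δn < εn:

```python
import math

def gamma(M, T):                # hypothetical class-KL stand-in, gamma(M, 0) = M
    return M * math.exp(-T)

def phi(M):                     # hypothetical class-K stand-in, phi(M) < M
    return M / 2.0

def build_chain(V1, n, T):
    """Build eps_n, delta_n, eps_{n-1}, ..., eps_1 exactly as in the proof."""
    eps, delta = {n: V1 / 2.0}, {}
    for k in range(n, 1, -1):
        delta[k] = gamma(eps[k], T)     # delta_k < eps_k since T > 0
        eps[k - 1] = phi(delta[k])      # eps_{k-1} < delta_k
    return eps, delta

n = 4
eps, delta = build_chain(V1=1.0, n=n, T=1.0)
chain = []                      # eps_1, delta_2, eps_2, ..., delta_n, eps_n
for k in range(1, n + 1):
    chain.append(eps[k])
    if k + 1 in delta:
        chain.append(delta[k + 1])
```

Any admissible pair of stand-ins yields the same strictly increasing chain, which is what the proof exploits.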
Recall that Hr(x) = {y ∈ C(x) : y1 ≤ a1(x) − r} for any small enough number r (in
this proof, r is either εi or δi). In what follows, without causing confusion, we use Hr to
denote Hr(x(t′)) for simplicity.
We let

τ1 = t′ + τD,  τ2 = t′ + T + τD,  . . . ,  τ2n = t′ + (2n − 1)T + τD.
See Figure 4.15 for an illustration. Notice that for each k = 1, . . . , 2n, the digraph
G([τk, τk + T′]) is QSC. It therefore has a centre by Theorem 2.1, say ck. Let V1 and V∗1 be
a partition of the node set V such that i ∈ V1 if xi(t′) ∈ Hεn and i ∈ V∗1 otherwise. Thus,
Figure 4.15: The time interval [t′, t′ + T̂].
ck is either in V1 or V∗1, so at least n of the elements c1, . . . , c2n lie in a common one of
the two sets. Assume without loss of generality that they lie in V1, so there exist indices
1 ≤ k1 < · · · < kn ≤ 2n such that cki ∈ V1.
At time t′, by its definition, Hεn contains at least one agent (see Fig. 4.16).

Figure 4.16: A distribution of agents at time t′.

Moreover, by Lemma 4.9, it follows that, for all i,

xi(t′) ∈ Hεn =⇒ xi(t) ∈ Hδn ∀t ∈ [t′, t′ + T̂]. (4.17)
Recall that the digraph G([τk1, τk1 + T′]) has a centre ck1 in V1. Hence, there exists a
pair (i, j) ∈ V∗1 × V1 such that j is a neighbor of i in this digraph, since otherwise there
would be no arc from j to i for any i ∈ V∗1 and j ∈ V1, contradicting the fact that the
digraph has a centre in V1. This further implies that there is a τ ∈ [τk1, τk1 + T′] such
that j ∈ Ni(τ). Since τ ∈ [τk1, τk1 + T′] = [t′ + (k1 − 1)T + τD, t′ + k1T − τD], it follows
that [τ − τD, τ + τD] ⊂ [t′ + (k1 − 1)T, t′ + k1T]. Since σ(t) ∈ Sdwell(τD), there is an interval
[τ̄, τ̄ + τD], which contains τ and is a subinterval of [t′, t′ + k1T], such that j ∈ Ni(t) for all
t ∈ [τ̄, τ̄ + τD]. In addition, since j ∈ V1, or what is the same, xj(t′) ∈ Hεn, from (4.17)
we know that xj(t) ∈ Hδn for all t ∈ [t′, t′ + T̂] (and of course for all t ∈ [τ̄, τ̄ + τD]). Thus,
by Lemma 4.10, there exists t1 ∈ [t′, τ̄ + τD] ⊆ [t′, t′ + k1T] such that xi(t1) ∈ Hεn−1.
On the one hand, we showed that agent i, which was not in Hεn at t′, is in Hεn−1 at t1.
On the other hand, the agents in Hεn at t′ remain in Hδn at t1 by (4.17), and therefore
remain in Hεn−1 at t1 because Hδn ⊂ Hεn−1. Hence, at time t1, Hεn−1(x(t′)) contains at
least two agents.
Let V2 and V∗2 be a partition of the node set V such that i ∈ V2 if xi(t1) ∈ Hεn−1 and
i ∈ V∗2 otherwise. Note that, by (4.17),

k ∈ V1 =⇒ xk(t′) ∈ Hεn =⇒ xk(t1) ∈ Hδn ⊂ Hεn−1 =⇒ k ∈ V2,

so V1 ⊂ V2. In particular ck2, the centre node of G([τk2, τk2 + T′]), is in V2 because it is in
V1. We can then apply the same argument to conclude that there is a t2 ∈ [t1, t′ + k2T]
such that xi(t2) ∈ Hεn−2, and therefore Hεn−2 contains at least three agents at t2.
Repeating this argument n − 1 times leads to the result that there is a tn−1 ∈ [t′, t′ +
kn−1T] ⊂ [t′, t′ + T̂] such that Hε1 contains n agents at tn−1. Hence,

V1(x(tn−1)) ≤ V1(x(t′)) − ε1 = V1(x(t′)) − η(V1(x(t′)))

and (4.16) follows.
Since we now know (4.16) holds, we have

V1(x(t0 + kT̂)) ≤ V1(x(t0)) − η(V1(x(t0))) − · · · − η(V1(x(t0 + (k − 1)T̂))).

Notice that x0 ∈ X^n_c implies V1(x0) ≤ 2c. In addition, considering the facts that η(·)
is a class K function and that V1(x(t)) is non-increasing, one obtains

V1(x(t0 + kT̂)) ≤ 2c − kη(V1(x(t0 + kT̂))).
This means there is a T∗1 = kT̂ > 0 (with k large enough) such that

V1(x(t)) < 2ε for all t ≥ t0 + T∗1.
For each j ∈ {1, . . . , m}, by the same argument, there is a T∗j > 0 such that
Vj(x(t)) < 2ε for all t ≥ t0 + T∗j. Let T∗ = max{T∗j : j = 1, . . . , m}. Thus

Vj(x(t)) < 2ε for all t ≥ t0 + T∗ and for all j ∈ {1, . . . , m}.
Notice that, in the ∞-norm,

‖x(t)‖Ω = (1/2) max{V1(x(t)), . . . , Vm(x(t))}.

So

‖x(t)‖Ω < ε for all t ≥ t0 + T∗.
We have thus proven that the switched interconnected system (4.3) is UA in X^n with
respect to Ω. ∎
4.5.3 Examples and Further Remarks
In this subsection we present some examples to better illustrate the nature of our as-
sumptions.
Concerning Assumption A1
We now present an example showing that Theorem 4.7 may fail to hold when the vector
fields are just continuous instead of locally Lipschitz.
Example 4.2 Consider three agents, 1, 2, and 3, with state space R. There are three
possible vector fields:

p = 1: ẋ1 = g(x3 − x1), ẋ2 = 0, ẋ3 = 0;
p = 2: ẋ1 = g(x2 − x1), ẋ2 = 0, ẋ3 = 0;
p = 3: ẋ1 = 0, ẋ2 = g(x1 − x2), ẋ3 = 0;

where g(y) := sign(y) · |y|^{1/2}, y ∈ R. The function g has the property that each solution of
the differential equation ẏ = −g(y) reaches the origin (an asymptotically stable equilibrium)
in finite time.
For each p ∈ P = {1, 2, 3}, the associated interaction digraphs are depicted in Fig.
4.17. Let X = R. Obviously, the function g(·) is only continuous (not locally Lipschitz
on R), so assumption A1 does not hold, but it can be easily checked that A2 holds.

Figure 4.17: The interaction digraphs Gp, p = 1, 2, 3.

Let us set a switching signal σ(t) that is periodic with period 12 seconds, that is,
σ(t) = 1 for t ∈ [12k, 12k + 4),  σ(t) = 2 for t ∈ [12k + 4, 12k + 8),
σ(t) = 3 for t ∈ [12k + 8, 12k + 12),  k = 0, 1, . . . .
Thus, assumption A3 holds.
For the switched interconnected system corresponding to the switching signal above,
the dynamic interaction digraph Gσ(t) is UQSC. To see this, simply let T = 12 and
notice that, for any t > 0, G([t, t + T]) = G1 ∪ G2 ∪ G3 is QSC. However, this switched
interconnected system is not uniformly attractive in X^3 with respect to Ω, as shown by
a simulation in Fig. 4.18. Intuitively, during the period when σ(t) = 1, agent 1 moves toward
Figure 4.18: Time evolution of three coordinates not tending to a common value.
agent 3 and the others remain stationary, whereas during the period when σ(t) = 2, agent 1
moves toward agent 2 and the others remain stationary. Agent 1 reaches the location
of agent 2 in finite time and stays there for the rest of this period. Then, when the system
switches to p = 3, agent 2 starts to move toward agent 1, but since agents 1 and 2 are
already collocated, agent 2 stays stationary. Hence, only agent 1 moves back and forth
between the locations of agents 2 and 3 while the others remain stationary.
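Example 4.2 is easy to reproduce numerically. The sketch below integrates the three vector fields under the 12-second periodic switching signal; the initial condition (x1, x2, x3) = (0, 1, −1) is our own illustrative choice, since the example does not fix one. Agent 1 keeps bouncing between agents 2 and 3, so the spread of the three states never shrinks:

```python
import math

def g(y):                       # g(y) = sign(y) * |y|^(1/2), only continuous at 0
    return math.copysign(math.sqrt(abs(y)), y) if y != 0 else 0.0

def field(p, x):
    x1, x2, x3 = x
    if p == 1:
        return [g(x3 - x1), 0.0, 0.0]   # agent 1 chases agent 3
    if p == 2:
        return [g(x2 - x1), 0.0, 0.0]   # agent 1 chases agent 2
    return [0.0, g(x1 - x2), 0.0]       # agent 2 chases agent 1

def sigma(t):                   # periodic switching signal, period 12 s
    phase = t % 12.0
    return 1 if phase < 4.0 else (2 if phase < 8.0 else 3)

def simulate(x0=(0.0, 1.0, -1.0), T=48.0, dt=1e-3):
    x, t = list(x0), 0.0
    while t < T:
        f = field(sigma(t), x)
        x = [xi + dt * fi for xi, fi in zip(x, f)]
        t += dt
    return x

x = simulate()
spread = max(x) - min(x)        # stays near 2: no state agreement
```

Agents 2 and 3 essentially never move, while agent 1 oscillates between them, matching the behavior in Fig. 4.18.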
Concerning Assumption A2
Our next example is concerned with the necessity of the strictness in assumption A2:
it cannot be relaxed to just f^i_p(x) ∈ T(xi, C^i_p), as shown next.
Example 4.3 Consider two agents, 1 and 2, with state space R. There is only one vector
field:

p = 1: ẋ1 = f^1_1(x1, x2) = 0,  ẋ2 = f^2_1(x1, x2) = g(x1 − x2),

where the smooth function g : R → R is given in Fig. 4.19.
Figure 4.19: A smooth function g(y).
The interconnected system above has a fixed coupling structure, that is, σ(t) ≡ 1, so
assumption A3 is trivially satisfied. Let X = R. Assumption A1 holds, but A2 does not
hold, since f^2_1(x1, x2) = g(x1 − x2) = 0 ∉ ri(T(x2, C^2_1)) when x1 = x2 + 1, noticing that
C^2_1 = co{x1, x2} is the line segment joining x1 and x2. However, f^1_1(x1, x2) and f^2_1(x1, x2)
are in T(x1, C^1_1) and T(x2, C^2_1), respectively, for all (x1, x2) ∈ X × X.
In the interaction digraph associated with the unique vector field (p = 1), there is an arc
from node 1 to node 2, so the digraph is QSC. Recalling that UQSC is equivalent to QSC
for a fixed digraph, the dynamic interaction digraph Gσ(t) is UQSC. But this interconnected
system is not uniformly globally attractive with respect to Ω when, for example, initially
x1(0) = x2(0) + 1.

However, if we choose X = [a, b], where a, b are real numbers such that b − a < 1,
then assumptions A1, A2, and A3 all hold. It follows that this interconnected system
is UA in X^2 with respect to Ω, since the dynamic interaction digraph Gσ(t) is UQSC as
shown before.
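The figure defining g cannot be reproduced here, but any smooth g with g(y) ≥ 0 on [0, 1] and g(1) = 0 exhibits the phenomenon. The sketch below uses the hypothetical choice g(y) = sign(y)·|y|(1 − |y|)², which vanishes at y = ±1: started at x1 − x2 = 1 the gap is frozen at the spurious equilibrium, and started slightly above 1 it only creeps toward 1, never toward agreement:

```python
def g(y):                                  # hypothetical smooth g with g(1) = 0
    s = 1.0 if y >= 0 else -1.0
    return s * abs(y) * (1.0 - abs(y)) ** 2

def final_gap(gap0, T=200.0, dt=1e-3):
    """Integrate x1' = 0, x2' = g(x1 - x2); return x1 - x2 at time T."""
    gap, t = gap0, 0.0
    while t < T:
        gap -= dt * g(gap)                 # d/dt (x1 - x2) = -g(x1 - x2)
        t += dt
    return gap

frozen = final_gap(1.0)     # exactly at the spurious equilibrium: stays 1
creep = final_gap(1.2)      # decreases toward 1 from above, never reaches 0
```

Restricting the state space to an interval shorter than 1, as in the text, removes the spurious equilibrium and restores attractivity.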
Concerning Assumption A3
Now we turn our attention to assumption A3. In order to guarantee attractivity, some
regularity conditions on the switching signal σ(·) are needed. This is illustrated by the
following very simple linear example.
Example 4.4 Consider just two agents, 1 and 2, with state space R. There are two
possible vector fields:

p = 1: ẋ1 = x2 − x1, ẋ2 = 0;   p = 2: ẋ1 = 0, ẋ2 = 0.

Thus agent 2 has no neighbor and never moves. For p = 1 agent 1 moves toward agent 2,
whereas for p = 2 agent 1 has no neighbor and therefore doesn’t move. Assumptions A1
and A2 hold for X = R. Let us define switching times τk by setting τ0 = 0 and defining
the intervals δk = τk+1 − τk as follows:
k:   0   1   2     3   4     5   6     · · ·
δk:  1   1   1/2   1   1/2²  1   1/2³  · · ·
Then we define σ(t) to be the alternating sequence 1, 2, 1, 2, . . . over the time intervals,
respectively,
[τ0, τ1), [τ1, τ2), [τ2, τ3), [τ3, τ4), . . .
This switching signal is piecewise constant and the dynamic interaction digraph is UQSC.
However, if x1(0) ≠ x2(0), then x1(t) does not converge to x2(t), so attractivity does not occur.
The example suggests that in order to obtain attractivity, one needs to impose some
restrictions on the admissible switching signals. One way to address this problem is to
make sure that the switching signal has a dwell time, that is, there exists τD > 0 such
that
(∀k) (τk+1 − τk) ≥ τD.
This is precisely assumption A3, and it is ubiquitous in the switching control literature
(see, for example, [45, 47, 63]).
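Because both vector fields in Example 4.4 are linear and x2 never moves, the failure can be worked out in closed form: under p = 1 the gap e = x1 − x2 satisfies ė = −e, so after total mode-1 activation time τ the gap is e(0)e^{−τ}. With the interval lengths above, the total time spent in mode 1 is 1 + 1/2 + 1/4 + · · · = 2, so the gap never drops below e(0)e^{−2} > 0. A short sketch of this computation:

```python
import math

def residual_gap(e0=1.0, n_pairs=60):
    """Mode 1 is active on the intervals delta_0, delta_2, delta_4, ...,
    whose lengths are 1, 1/2, 1/4, ...; the gap e = x1 - x2 contracts by
    exp(-tau), where tau is the total mode-1 activation time."""
    tau = sum(1.0 / 2 ** k for k in range(n_pairs))  # -> 2 as n_pairs grows
    return e0 * math.exp(-tau)

gap = residual_gap()            # about e^-2, so strictly positive forever
```

The shrinking dwell times starve mode 1 of activation time, which is exactly what the dwell-time condition of A3 rules out.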
Concerning Nonautonomous Vector Fields
It may seem tempting to conjecture that our main result, Theorem 4.7, still holds for the
switched interconnected system

ẋ(t) = fσ(t)(t, x(t)),

which has nonautonomous vector fields. Again assume that A1, A2, and A3 hold with
f^i_p(x) replaced by f^i_p(t, x). Our next example shows that this is generally not true.
Example 4.5 Consider again two agents, 1 and 2, with state space R. Suppose there is
only one nonautonomous vector field:

p = 1: ẋ1 = e^{αt}(x2 − x1),  ẋ2 = e^{βt}(x1 − x2),  where α, β < 0.

Thus, assumptions A1 and A2 hold for X = R. Here σ(t) ≡ 1, so A3 is trivially
satisfied. Moreover, the dynamic interaction digraph Gσ(t) = G1 is UQSC, as required in
Theorem 4.7. However, the system is not attractive with respect to Ω, as shown in the
simulation in Fig. 4.20, where α = −3, β = −2, x1(0) = 2, and x2(0) = 0.
Figure 4.20: Time evolution of two coordinates not tending to a common value.
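For Example 4.5 the gap d = x1 − x2 obeys ḋ = −(e^{αt} + e^{βt})d, which integrates in closed form to d(t) = d(0) exp(−((1 − e^{αt})/|α| + (1 − e^{βt})/|β|)). Since α, β < 0, the exponent stays bounded, so d(t) → d(0)e^{−(1/|α| + 1/|β|)} > 0: the coupling decays too quickly to drive the states together. A quick check with the values used in the figure:

```python
import math

def gap(t, d0=2.0, alpha=-3.0, beta=-2.0):
    """Closed-form gap x1(t) - x2(t) for Example 4.5."""
    expo = (1.0 - math.exp(alpha * t)) / abs(alpha) \
         + (1.0 - math.exp(beta * t)) / abs(beta)
    return d0 * math.exp(-expo)

limit = 2.0 * math.exp(-(1.0 / 3.0 + 1.0 / 2.0))   # about 0.87
```

The gap decreases monotonically but levels off near 0.87, matching the plateau visible in Fig. 4.20.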
4.6 Applications
In this section we discuss some applications of the theoretical results developed in this
chapter.
4.6.1 Synchronization of Coupled Oscillators
The Kuramoto model describes the dynamics of a set of n phase oscillators θi with natural
frequencies ωi; more details can be found in [49, 104]. The time evolution of the i-th
oscillator is given by

θ̇i = ωi + ki ∑_{j∈Ni(t)} sin(θj − θi),

where ki > 0 is the coupling strength and Ni(t) is the set of neighbors of oscillator i at
time t. The interaction structure can be general so far; that is, Ni(t) can be an arbitrary
set of other nodes and can be dynamic.
The neighbor sets Ni(t) define Gσ(t) and the switched interconnected system

θ̇(t) = fσ(t)(θ(t)),

where θ = [θ1 · · · θn]^T and σ(t) is a suitable switching signal. For identical coupled
oscillators (i.e., ωi = ω for all i), the transformation xi = θi − ωt yields

ẋi = ki ∑_{j∈Ni(t)} sin(xj − xi),   i = 1, . . . , n.   (4.18)
Let a, b be any real numbers such that 0 ≤ b − a < π, and define X = [a, b]. It is
easily seen that A1 and A2 are satisfied. Suppose σ(t) is regular enough to satisfy
A3. Then from Theorem 4.7 it follows that the switched interconnected system (4.18) is
uniformly attractive in X^n with respect to Ω if, and only if, Gσ(t) is UQSC. This implies
that there exists x̄ ∈ R such that

θi(t) → x̄ + ωt,   θ̇i(t) → ω,

and the oscillators synchronize. This is an extension of Theorem 1 in [49], which assumes
that the interaction digraph is bidirectional and static and that the initial states satisfy
θi(0) ∈ (−π/2, π/2) for all i.
As an example, three Kuramoto oscillators with a dynamic interaction structure are
simulated. The initial conditions are θ1 = 0, θ2 = 1, θ3 = −1. The natural frequency
ωi equals 1, and the coupling strength ki is set to 1 for all i. The interaction structure
switches periodically among the three possible interaction structures shown in Fig. 4.21.
It can be checked that Gσ(t) is UQSC, so these three oscillators achieve asymptotic
synchronization, as we conclude by our main theorem.

Figure 4.21: Three interaction digraphs Gp, p = 1, 2, 3.

Fig. 4.22 shows the plots of sin(θi), i = 1, 2, 3, and of the switching signal σ(t).
Synchronization is evident.
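The simulation in Fig. 4.22 is easy to reproduce in outline. The specific arcs of G1, G2, G3 are not recoverable from the figure, so the sketch below makes a hypothetical choice of one arc per digraph (3→1, 1→2, 2→3), whose union over each period is a cycle and hence QSC; the periodic switching then makes Gσ(t) UQSC and the shifted phases of (4.18) reach agreement:

```python
import math

# hypothetical single-arc digraphs: NEIGHBORS[p][i] = neighbors of node i
NEIGHBORS = {1: {0: [2]}, 2: {1: [0]}, 3: {2: [1]}}

def sigma(t, phase_len=0.3):           # periodic switching among G1, G2, G3
    return 1 + int(t / phase_len) % 3

def simulate(x0=(0.0, 1.0, -1.0), T=60.0, dt=1e-3, k=1.0):
    x, t = list(x0), 0.0
    while t < T:
        nbrs = NEIGHBORS[sigma(t)]
        dx = [k * sum(math.sin(x[j] - x[i]) for j in nbrs.get(i, []))
              for i in range(3)]
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        t += dt
    return x

x = simulate()
spread = max(x) - min(x)               # shrinks toward 0: state agreement
```

In the original θ coordinates this means all three phases lock to a common drifting phase x̄ + ωt, i.e., synchronization.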
4.6.2 Biochemical Reaction Network
A biochemical reaction network is a finite set of reactions among a finite set of species.
Consider, for example, two reversible reactions among three compounds C1, C2, and C3,
in which C1 is transformed into C2, C2 is transformed into C3, and vice versa:
C1 ⇌ C2 ⇌ C3.

The constants k1 > 0, k2 > 0 are the forward and reverse rate constants of the reaction
C1 ⇌ C2; similarly, k3 > 0, k4 > 0 are those of C2 ⇌ C3. Denote the concentrations of C1, C2, and C3,
respectively, by x1, x2, and x3. Only nonnegative concentrations are physically possible.
Figure 4.22: Synchronization of three oscillators with a dynamic interaction structure.
Such a reaction network gives rise to a dynamical system, which describes how the state
of the network changes over time.
Suppose the dynamics of both reactions are dictated by the mass action principle.
This leads to the model

ẋ1 = −k1 x1^α + k2 x2^α,
ẋ2 = k1 x1^α − k2 x2^α − k3 x2^α + k4 x3^α,     (4.19)
ẋ3 = k3 x2^α − k4 x3^α,

where α ≥ 1 is an integer. For more on modeling and analysis of biochemical reaction
networks, we refer to [1, 50, 61, 119].
The linear transformation

y1 = (k1/k2)^{1/α} x1,  y2 = x2,  y3 = (k4/k3)^{1/α} x3,
leads to

ẏ1 = h1(y1, y2)(y2 − y1),
ẏ2 = h2(y1, y2)(y1 − y2) + h3(y2, y3)(y3 − y2),     (4.20)
ẏ3 = h4(y2, y3)(y2 − y3),

where h1(y1, y2), h2(y1, y2), h3(y2, y3), and h4(y2, y3) are suitable terms; for example,

h1(y1, y2) = (k1^{1/α} k2 / k2^{1/α}) · (y2^α − y1^α)/(y2 − y1).
It can be easily verified that h1(y1, y2) ≥ 0, with h1(y1, y2) = 0 if and only if y1 = y2 = 0.
The same observations hold for h2(y1, y2), h3(y2, y3), and h4(y2, y3). It thus follows that
each point in the set Ω = {y : y1 = y2 = y3 ≥ 0} is an equilibrium. Physically, when
y ∈ Ω, the reaction network is at a chemical equilibrium.
Consider now the interaction digraph associated with (4.20). Physically, each node
represents a compound and each arc connecting two nodes represents a reaction between
two compounds. This digraph is QSC (actually, it is strongly connected). Since there is
no switching in the system (i.e., σ(t) is constant), assumption A3 is obviously satisfied
and the dynamic interaction digraph is UQSC. In addition, it can be easily checked that,
for X = [0,∞), the vector field in the above system satisfies assumptions A1 and A2.
Hence, Theorem 4.7 can be applied to conclude that system (4.20) is attractive in X 3
with respect to Ω. This result coincides with the analysis using Theorem 5.2 in [1]. Our
analysis can be extended to more complicated biochemical reaction networks containing
a set of compounds and a set of reversible reactions. Their asymptotic state agreement
property is captured by the interaction digraph.
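The convergence claim can be checked numerically on a concrete instance. The rate constants and initial concentrations below are illustrative choices (α = 1, k1 = 1, k2 = 2, k3 = k4 = 1); mass action with α = 1 conserves total mass, and the trajectory converges to the chemical equilibrium where k1x1 = k2x2 and k3x2 = k4x3:

```python
def simulate(x0=(1.0, 2.0, 3.0), k=(1.0, 2.0, 1.0, 1.0), T=50.0, dt=1e-3):
    """Integrate (4.19) with alpha = 1 by forward Euler."""
    x1, x2, x3 = x0
    k1, k2, k3, k4 = k
    t = 0.0
    while t < T:
        d1 = -k1 * x1 + k2 * x2
        d2 = k1 * x1 - k2 * x2 - k3 * x2 + k4 * x3
        d3 = k3 * x2 - k4 * x3
        x1, x2, x3 = x1 + dt * d1, x2 + dt * d2, x3 + dt * d3
        t += dt
    return x1, x2, x3

x1, x2, x3 = simulate()
total = x1 + x2 + x3          # conserved: stays at 6
# equilibrium: x2 = (k1/k2) x1 and x3 = (k3/k4) x2, i.e., (3, 1.5, 1.5)
```

After the transformation (4.20), these equilibrium concentrations correspond exactly to a point on the agreement set Ω, consistent with Theorem 4.7.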
4.6.3 Water Tank Network
Consider a tank of water and suppose the water level is x (see Fig. 4.23). Then the
outflow rate is a function of x (Torricelli's law), which gives rise to the following
continuous-time model:

ẋ = −a√x,
where a > 0.
Figure 4.23: A tank of water.
Now we consider two identical coupled tanks (see Fig. 4.24). The flow rate of tank 1
is a function of x2 − x1. Thus

ẋ1 = a g(x2 − x1),
ẋ2 = a g(x1 − x2),

where g(y) = sign(y) · √|y|.

Figure 4.24: Two identical coupled tanks.
Now consider hundreds of water tanks connected through a network of pipes (see Fig.
4.25). We thus have a general coupled nonlinear system model as follows:

ẋ1 = ∑_{j∈N1} a1j g(xj − x1),
. . .
ẋn = ∑_{j∈Nn} anj g(xj − xn),

where aij > 0 and Ni represents the set of water tanks connected to the i-th tank.
It can be checked that each individual vector field is continuous (satisfying assumption
A1′), though it is not locally Lipschitz, and that it satisfies assumption A2′.

Figure 4.25: A network of water tanks.

The network
topology naturally leads to an interaction digraph that coincides exactly with the one
given by our definition. By Theorem 4.3, the water levels in the network of tanks eventually
equalize if and only if the interaction digraph is QSC (which, since the coupling is
bidirectional, is equivalent to connectedness of the corresponding undirected graph). This is
consistent with our experience.
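A direct simulation confirms the equalization claim on a small instance; the network size, the unit coefficients aij = 1, the path-graph pipe layout, and the initial levels below are all illustrative choices:

```python
import math

def g(y):                           # coupling nonlinearity: sign(y) * sqrt(|y|)
    return math.copysign(math.sqrt(abs(y)), y) if y != 0 else 0.0

def simulate(levels=(0.0, 1.0, 2.0, 3.0), T=30.0, dt=1e-3):
    """Tanks coupled along a path 1-2-3-4 with a_ij = 1 (bidirectional)."""
    x = list(levels)
    n = len(x)
    nbrs = [[j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)]
    t = 0.0
    while t < T:
        dx = [sum(g(x[j] - x[i]) for j in nbrs[i]) for i in range(n)]
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        t += dt
    return x

x = simulate()
spread = max(x) - min(x)     # levels equalize at the average, 1.5
```

Because the coupling is symmetric and g is odd, the total volume is conserved, so all levels settle at the average of the initial levels.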
Now suppose there are valves in some or all of the pipes, that at any time each valve
is either fully open or fully closed, and that the valves may switch between the two states.
This gives rise to a switched system model as in (4.3). Unfortunately, it does not satisfy
the local Lipschitz assumption, so Theorem 4.7 is not applicable. Indeed, we presented
a counterexample (Example 4.2) using this very function g(·) to show that the failure of
local Lipschitz continuity can cause the failure of Theorem 4.7. That example has a
non-bidirectional interaction structure, however, and in our experience with the switched
water tank network the water levels are eventually equalized if and only if the dynamic
interaction digraph is UQSC. It is tempting to conjecture that the theorem may still hold
for bidirectional coupling structures under a continuity assumption alone, a question we
have not studied in depth.
4.6.4 Synthesis of Rendezvous Controllers
Next, we turn to control synthesis for the rendezvous problem of a multi-agent system in
continuous time. Suppose there are n agents, each having the simple kinematic model of
velocity control ẋi = ui, where xi ∈ R^m is the position of agent i. Assume that, due to
the limited field of view of its sensor, each agent can sense only the relative positions of
its neighbor agents within radius r. Letting Ni(x) denote the set of neighbors of agent i,
where x is the aggregate state of the n agents, the information available to agent i is thus
{yij = xj − xi : j ∈ Ni(x)}.
The rendezvous problem is to design local distributed control laws

ui = ui(yij : j ∈ Ni(x))

such that

lim_{t→∞} x1(t) = · · · = lim_{t→∞} xn(t) = x̄ for some x̄ ∈ R^m.
The controller above naturally induces a state-dependent dynamic interaction
digraph, Gσ(x(t)), which models the neighbor relationships as time evolves, and a
state-dependent switched interconnected system of the form

ẋ = fσ(x)(x),     (4.21)

where σ : R^{mn} → P. The natural framework for studying state-dependent switching
rules is that of hybrid systems. However, let us fix an initial condition x0 ∈ R^{mn} and
assume that (4.21) has a solution x(t) defined for all t ≥ 0. Then the state-dependent
switching rule can be reinterpreted as a time-dependent switching rule σ(x(t)). It is then
clear that G depends on the state x(t) at time t; we denote it Gσ(x(t)).
Generally, if some agents are initialized so far away from the rest that they never
acquire information from them, then the rendezvous problem cannot be solved.
Mathematically, this corresponds to the situation where Gσ(x(0)) is not QSC, so it is natural to
assume that Gσ(x(0)) is QSC. Moreover, we wish the controllers ui = ui(yij : j ∈ Ni(x))
to be devised such that Gσ(x(t)) does not lose this property in the future, even though
the controller may cause changes in Gσ(x(t)). Intuitively, ui should make the maximum
distance between agent i and its neighbor agents non-increasing.
Let Ii(x) denote the set of neighbor agents j ∈ Ni(x) that have maximum distance
from agent i.
Proposition 4.1 Suppose the initial state x0 is such that Gσ(x0) is QSC and that a
solution x(t) to (4.21) exists for all t ≥ 0. If, for all i, ui satisfies the condition

(∀x) max_{j∈Ii(x)} (xi − xj)^T ui ≤ 0,     (4.22)

then Gσ(x(t)) is QSC at each time instant t ≥ 0.
Proof: Let

V(x) = max_i max_{j∈Ni(x)} ‖xi − xj‖² = max_{i,j : eji=1} ‖xi − xj‖²,

where eji = 1 means that j is a neighbor of i, and notice that V(x) ≤ r², where r is the
radius of the field of view of each agent. Let

I(x) = {(i, j) : V(x) = ‖xi − xj‖², j ∈ Ni(x)}
be the set of pairs of indices where the maximum is reached. Then, by Lemma 4.3,

D⁺V(x(t)) = max_{(i,j)∈I(x)} [(xi − xj)^T ui + (xj − xi)^T uj]
          ≤ max_{(i,j)∈I(x)} (xi − xj)^T ui + max_{(i,j)∈I(x)} (xj − xi)^T uj.
It follows from condition (4.22) that

max_{(i,j)∈I(x)} (xi − xj)^T ui ≤ 0 and max_{(i,j)∈I(x)} (xj − xi)^T uj ≤ 0.

Hence

D⁺V(x(t)) ≤ 0, ∀t ≥ 0,

which means that arcs already present are never lost, and therefore the conclusion
follows. ∎
We show in the following proposition that if the distributed control law ui satisfies
condition (4.22) as well as assumptions A1′ and A2′, then a solution x(t) to (4.21) exists
for all t ≥ 0 and the agents rendezvous.
Proposition 4.2 Suppose Gσ(x0) is QSC. If ui satisfies condition (4.22) as well as A1′
and A2′, then the agents rendezvous.
Proof: If Gσ(x0) is fully connected, then Gσ(x(t)) is fixed for all time t ≥ 0 since no arc will
be dropped, by Proposition 4.1, and no arc can be added. Then the conclusion follows
from Theorem 4.3.
If instead Gσ(x0) is not fully connected, then Gσ(x(t)) is dynamic and switches a
finite number of times. To prove this, suppose by contradiction that Gσ(x(t)) = Gσ(x0)
for all t ≥ 0. Then by Theorem 4.3, all the agents converge to a common location,
so Gσ(x(t)) becomes fully connected at some time t̄, which contradicts the assumption
that Gσ(x(t)) = Gσ(x0) is not fully connected. Hence, there is a t1 ≥ 0 such that Gσ(x(t1))
has more arcs than Gσ(x0), because no arc is dropped by Proposition 4.1. Repeating
this argument a finite number of times eventually leads to the existence of a time ti such
that Gσ(x(ti)) is fully connected, after which the digraph is fixed. The conclusion then
follows from Theorem 4.3 by treating (ti, x(ti)) as the initial condition. ∎
The control law given next is based on the algorithm first proposed in [4].
Proposition 4.3 A possible choice of ui satisfying condition (4.22) as well as
assumptions A1′ and A2′ is ui = e({0, yij : j ∈ Ni(x)}), the Euclidean center of the set
Z = {0, yij : j ∈ Ni(x)}.
Proof: The Euclidean center of the set Z is the unique point w that minimizes the
function

g(w) := max_{z∈Z} ‖w − z‖.

Interpreted geometrically, e(·) is the center of the smallest m-sphere that contains the
set of points {0, yij : j ∈ Ni(x)}. Furthermore, it can be easily shown that it lies in
the polytope C̄^i_p = co{0, yij : j ∈ Ni(x)} but not at its vertices if the polytope is not a
singleton. Thus,

e({0, yij : j ∈ Ni(x)}) = argmin_{w∈C̄^i_p} max_{z∈Z} ‖w − z‖.
Then, by Theorem 4.8 (Maximum Theorem), the function e(·) is continuous (though in
fact not locally Lipschitz, by other arguments), and hence ui satisfies assumption A1′.
Next, e(·) ∈ C̄^i_p implies e(·) ∈ T(0, C̄^i_p). Also, notice that C^i_p = co{xi, xj : j ∈ Ni(x)}
is the translation of C̄^i_p to the point xi. Hence, e(·) ∈ T(xi, C^i_p). In addition, if C^i_p is not
a singleton and xi is one of its vertices, then C̄^i_p is not a singleton and 0 is one of its
vertices. Then, by the fact that e(·) lies in C̄^i_p but not at its vertices, it follows that
ui = e(·) ≠ 0. Thus ui satisfies assumption A2′.
Finally, ui satisfies condition (4.22). This can be seen geometrically. We show the
case m = 2; the case m > 2 is the same. If ui = 0, then it trivially satisfies (4.22).
If ui ≠ 0, then the picture is as in Fig. 4.26.

Figure 4.26: The smallest enclosing circle.

The solid circle C1 is the smallest circle enclosing the points 0 and yij, j ∈ Ni(x). The
dotted circle C2 is centered at the origin and passes through the intersection points of C1
with its diameter perpendicular to ui. We know that if some yij lie in the closed shaded
area, then one of them achieves the maximal distance from the origin among all yij,
j ∈ Ni(x). On the other hand, there is at least one j ∈ Ni(x) such that yij is in the closed
semicircle of C1, since otherwise C1 would not be the smallest enclosing circle. Hence,
yij lies in the closed shaded area if j ∈ Ii(x). Moreover, the angle between ui and yij
(for yij in the closed shaded area) is less than π/2. This implies that
max_{j∈Ii(x)} (xi − xj)ᵀ ui ≤ 0. ∎
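The Euclidean center e(·) used above is the center of the smallest circle enclosing Z. As an illustration only (not the construction used in the thesis), the sketch below computes it by brute force for a small planar point set, using the fact that the optimal circle is determined by two or three of the points; all function names here are ours.

```python
import itertools, math

def circle_two(p, q):
    """Circle with segment pq as a diameter."""
    c = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return c, math.dist(p, q) / 2

def circle_three(p, q, r):
    """Circumcircle of three points (None if they are collinear)."""
    ax, ay = p; bx, by = q; cx, cy = r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy), math.dist((ux, uy), p)

def euclidean_center(points, eps=1e-9):
    """Center of the smallest circle enclosing `points`; brute force over
    the pairs and triples that can determine the optimal circle."""
    best = None
    for size in (2, 3):
        for combo in itertools.combinations(points, size):
            cr = circle_two(*combo) if size == 2 else circle_three(*combo)
            if cr is None:
                continue
            c, r = cr
            if all(math.dist(c, p) <= r + eps for p in points):
                if best is None or r < best[1]:
                    best = (c, r)
    return best[0]

# Origin together with three neighbor offsets: corners of a square.
Z = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
c = euclidean_center(Z)
print(c)  # → (1.0, 1.0)
```

For larger point sets one would use Welzl's expected-linear-time algorithm instead of this O(n⁴) enumeration.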
Chapter 5
Coupled Kinematic Unicycles
5.1 Introduction
The problem of coordinated control of a group of autonomous wheeled vehicles has attracted
much recent interest in control and robotics; see, for example, [9, 11, 33, 34, 37, 57, 81, 96, 111, 112].
In this problem, the vehicles in the group are dynamically decoupled, meaning that
the motion of one vehicle does not affect another, but they are coupled through the
information flow in order to achieve some desired cooperative tasks. So the problem
of coordinated and cooperative control of multiple vehicles is within the more general
subject—coupled dynamic systems. One interesting aspect of this subject involves the
structure of the information flow among the agents, which plays a very important role in
solving some coordination problems such as formation control.
Over the past decade and a half, many researchers have worked on formation control
problems with differences regarding the types of agent dynamics, the varieties of the
control strategies, and the types of tasks demanded. In 1990, Sugihara and Suzuki [106]
proposed a simple algorithm for a group of point-mass type robots to form approximations
to circles and simple polygons. In the years that followed, distributed algorithms were
presented in [4,5,108] with the objective of getting a group of such robots to congregate
at a common location. Moving synchronously in discrete-time steps, the robots itera-
tively observe neighbors within some visibility range and follow simple rules to update
their positions. Moreover, there have been many results on the mathematical analysis
of formation control. In [68], stability of asynchronous swarms with a fixed communi-
cation topology is studied, where stability is used to characterize the cohesiveness of a
swarm. In [92–94], formation stabilization of a group of agents with linear dynamics
(double-integrator) is studied using structural potential functions. An alternative is to
use artificial potential functions and virtual leaders as in [62]. The approach known as
leader-following has been widely used in maintaining a desired formation while moving,
e.g., [27, 29,30].
In addition to the work mentioned so far, in which the networked vehicles have linear
dynamics, there have been a number of interesting developments in solving the formation
control problem for coupled nonholonomic systems. For example, [53] studies achievable
equilibrium formations of unicycles, each moving at unit speed and subject to steering
control, and presents stabilizing control laws in which each unicycle senses all others (in
graph-theoretic terms, the sensing topology is fully connected); in [70–72], a circular
formation is achieved for a group of unicycles using the strategy of cyclic pursuit, a
particular form of coupling structure. Motivated by Brockett's result [22] that no
memoryless continuous time-invariant state feedback control law can stabilize a
nonholonomic system to the origin, and by the subsequent efforts to find time-varying
stabilizing control laws for nonholonomic systems [73, 97, 98], the authors of [123, 124]
study the problem of forming group formations of Hilare-type mobile robots (which are
kinematically equivalent to unicycles) using time-varying feedback control laws; an
averaging result is applied there to establish the asymptotic behavior. However, it is
assumed that the group of vehicles has an open-chain sensing structure, another
particular form of network topology. All these results are restricted to finding
stabilizing control laws, or achievable equilibrium formations, for a particular coupling
topology. The basic problem, the stabilizability of vehicle formations (i.e., whether it
is possible to devise a controller to achieve a given formation), has remained open.
This chapter solves this problem. As it turns out, the information flow topology among
the group of vehicles is essential to the stabilizability of vehicle formations.
The goal of this chapter is to derive necessary and sufficient conditions for the existence
of a stabilizing controller for certain formations. We consider a group of identical
vehicles modeled as kinematic unicycles. Each one is equipped with an onboard
sensor, by which it can measure relative displacements to certain neighbors. Central to
a discussion of vehicle formations is the nature of the information flow throughout the
formation. This information flow is modeled by an information flow digraph, where a
link from node j to node i indicates that vehicle i can sense the position of vehicle j—
but only with respect to the local coordinate frame of vehicle i. In addition, we assume
the information flow digraph is static—the dynamic case, where ad hoc links can be
established or dropped, is harder and is still an open problem.
We explore three subproblems, namely, stabilization of point formations, line formations,
and arbitrary geometric formations. Our first main result is that stabilization of point
formations is feasible if and only if the information flow digraph is QSC; that is, there
exists at least one vehicle that is viewable, perhaps indirectly by hopping from one vehicle
to another, by all other vehicles. This is precisely the same condition as in Chapters
3 and 4. Our proof of sufficiency is constructive: we present an explicit smooth periodic
feedback controller, inspired by and modified from the time-varying control
law in [123, 124]. Our second main result concerns stabilization of line formations. This
turns out to be feasible if and only if there are at most two disjoint closed sets of nodes
in the opposite information flow digraph. In addition, we introduce a special informa-
tion flow digraph which guarantees that all vehicles converge to a line segment, equally
spaced. This is an extension to unicycles of a line-formation scheme of Wagner and
Bruckstein [115]. Finally, our third result shows that if the information flow digraph is
QSC and, additionally, the group of vehicles has a common sense of direction, then the
group is stabilizable to any geometric formation specified by a vector.
However, collision avoidance is not considered in this chapter. There are some works
in the literature based on other formulations which take collision avoidance into account,
e.g., [52–55,125].
5.2 Problem Formulation
5.2.1 Kinematic Models
A mechanically very simple, and therefore common, wheeled vehicle is schematically
depicted in Fig. 5.1: two independently actuated fixed rear wheels and one small free
moving front castor wheel to keep the balance. The configuration is called a differential
drive, or unicycle.

Figure 5.1: Wheeled vehicle.
Consider n such identical unicycles in the plane. The simplest possible kinematic
model for the ith unicycle is given by

    ẋi = vi cos(θi),
    ẏi = vi sin(θi),
    θ̇i = ωi,                                  (5.1)
where zi = (xi, yi) ∈ R2 is the center point on the wheel axis, θi ∈ R is the orientation
and the inputs vi, ωi are the forward and angular speeds, respectively. The kinematic
model above implies the nonholonomic constraint
    (−sin(θi)  cos(θi)  0) (ẋi, ẏi, θ̇i)ᵀ = 0,
i.e., no sideways motion of a point on the wheel axis.
We can identify the real plane, R2, and the complex plane, C, by identifying a vector,
zi, and a complex number, zi. The kinematic model in complex form is
    żi = vi e^{jθi},
    θ̇i = ωi.                                  (5.2)
Following [53], we construct a moving frame iΣ, the Frenet-Serret frame, that is fixed
on the vehicle (see Fig. 5.2). Let ri be the unit vector tangent to the trajectory in the
direction of motion at the current location of the vehicle (ri is the normalized velocity
vector) and let si be ri rotated by π/2. Since the vehicle is moving at speed vi, vi ri = żi,
and so in complex form

    ri = e^{jθi},   si = j ri.

Thus

    ṙi = d/dt (e^{jθi}) = j e^{jθi} θ̇i = si ωi,   ṡi = j ṙi = j si ωi = −ri ωi.

The kinematic equations using the Frenet-Serret frame are therefore

    żi = vi ri,
    ṙi = ωi si,
    ṡi = −ωi ri.                               (5.3)
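For readers who want to experiment, the model (5.1) can be integrated with a simple forward-Euler scheme. The sketch below is ours, not part of the thesis; with constant unit inputs vi = ωi = 1 the vehicle traces the unit circle, so after time 2π it should return close to its starting point.

```python
import math

def unicycle_step(x, y, theta, v, omega, dt):
    """One forward-Euler step of the kinematic unicycle model (5.1)."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# With v = omega = 1 the exact trajectory is a unit circle through the
# origin; the Euler trajectory should close up to discretization error.
x, y, theta = 0.0, 0.0, 0.0
dt = 1e-3
for _ in range(int(2 * math.pi / dt)):
    x, y, theta = unicycle_step(x, y, theta, v=1.0, omega=1.0, dt=dt)
print(max(abs(x), abs(y)) < 0.05)  # → True
```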
Figure 5.2: Frenet-Serret frame.
5.2.2 Information Flow Digraph
The n vehicles in the plane are dynamically decoupled: the motion of one vehicle does
not directly affect any of the other vehicles if we do not take collisions into account.
However, they are coupled through the information flow in the group in order to achieve
a desired group behavior such as a formation.

We refer to the individual vehicles as nodes and the information flows as links. A
formal definition of an information flow digraph is stated next.

Definition 5.1 An information flow digraph G consists of

• a node set V = {1, 2, . . . , n}, each node i corresponding to vehicle i;

• a set E of arcs: the arc from node j to node i belongs to E just in case vehicle
i can access the information from vehicle j in some way.
By this definition, information flows in the direction of the arcs in G. Let Ni denote
the set of labels of those vehicles whose information flows to vehicle i. Here, we assume
G is fixed, meaning the information flow topology is static. Although a static digraph
may not accurately capture realistic situations in which sensors have a limited field of
view, it is a necessary step toward the more realistic dynamic setting, where ad hoc
links can be established or dropped.
In our setup, it is assumed that no vehicle can access the absolute positions of other
vehicles or its own. Specifically, vehicle i can obtain only the relative positions of its
neighbor vehicles with respect to its own Frenet-Serret frame (see Fig. 5.3), i.e.,

    xij = (zj − zi) · ri,
    yij = (zj − zi) · si,    j ∈ Ni,           (5.4)

where the dot denotes the dot product.

Figure 5.3: Local information.
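The measurement (5.4) is simply the relative displacement expressed in vehicle i's local frame. A small sketch of this change of coordinates (our illustration, with a hypothetical function name):

```python
import math

def relative_measurement(zi, thetai, zj):
    """Measurement (5.4): neighbor j's position expressed in vehicle i's
    Frenet-Serret frame (r_i, s_i)."""
    ri = (math.cos(thetai), math.sin(thetai))
    si = (-ri[1], ri[0])  # s_i is r_i rotated by pi/2
    dx, dy = zj[0] - zi[0], zj[1] - zi[1]
    return dx * ri[0] + dy * ri[1], dx * si[0] + dy * si[1]

# Vehicle i at the origin heading along +y; a neighbor one unit ahead
# of it appears on vehicle i's own forward (x) axis.
xij, yij = relative_measurement((0.0, 0.0), math.pi / 2, (0.0, 1.0))
print(abs(xij - 1.0) < 1e-9 and abs(yij) < 1e-9)  # → True
```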
5.2.3 Stabilizability of Vehicle Formations
Our objective is to study how the information flow structure affects the feasibility of
stabilization of n vehicles to desired formations. For a formation in the plane, we
emphasize that only the positions of the group of n vehicles are required; there are no
constraints on their orientations. Let z = (z1, . . . , zn) ∈ R2n be the aggregate position
state and let θ = (θ1, . . . , θn) ∈ Rn be the aggregate orientation state of n vehicles. For
more convenient mathematical treatment we introduce a formation space
    F = { z ∈ R2n | z satisfies certain formation constraints }
associated with each formation in the plane. For instance, the point formation space Fp
is defined as
    Fp = { z ∈ R2n | z1 = · · · = zn }.
Thus, the n vehicles are said to be in formation F if and only if z ∈ F, and the n vehicles
are said to converge to formation F if and only if z(t) → z̄ for some z̄ ∈ F as t → ∞.
Definition 5.2 Let F be a formation space. The n vehicles are said to be stabilizable
to formation F if there exists a smooth feedback controller

    vi = gi(t, (xij, yij)|j∈Ni),
    ωi = hi(t, (xij, yij)|j∈Ni),    i = 1, . . . , n,    (5.5)

such that

    ∀(z(0), θ(0)), ∃z̄ ∈ F : z(t) → z̄ as t → ∞.
In other words, the n vehicles are stabilizable to formation F if there is a feedback
controller (5.5) so that the n vehicles converge to formation F for all initial conditions.
Now comes our main problem.
Problem 5.1 What are the conditions on the information flow digraph G under which
the n vehicles are stabilizable to a desired formation F?
In the next three sections, we explore three subproblems, namely, stabilization of
point formations, line formations, and any geometric formations. Meanwhile, we present
their solutions.
5.3 Stabilization of Point Formations
In this section we study the stabilization problem of point formations. As we discussed
before, the formation space
    Fp = { z ∈ R2n | z1 = · · · = zn }
is used to describe the point formation of n vehicles in the plane, meaning that they
are collocated but can be anywhere. We want to find out what is the least restrictive
condition on the information flow digraph G such that the n vehicles are stabilizable to
formation Fp.
5.3.1 Main Result
Theorem 5.1 The n vehicles are stabilizable to formation Fp if and only if G is QSC.
Proof: (⇐=) If G is QSC, we prove this by presenting an explicit smooth time-periodic
feedback controller: for each i = 1, 2, . . . , n,

    vi(t) = k Σ_{j∈Ni} a′ij xij(t),   ωi(t) = cos(t),   if Ni ≠ ∅,
    vi(t) = 0,   ωi(t) = 0,   otherwise,    (5.6)

where a′ij > 0 and k > 0.
Firstly, notice that when we set, for each i,
    aij = a′ij if j ∈ Ni, and aij = 0 otherwise,

then vi can be rewritten as

    vi(t) = k Σ_{j≠i} aij xij(t) = k Σ_{j≠i} aij (zj(t) − zi(t)) · ri(t).
Next, using the identity

    (zi · ri) ri = (ri riᵀ) zi,

we obtain

    żi = vi ri = k Σ_{j≠i} aij ((zj − zi) · ri) ri = k (ri riᵀ) Σ_{j≠i} aij (zj − zi).
Define

    M(θi(t)) := ri riᵀ = [ cos²(θi(t))             cos(θi(t)) sin(θi(t))
                           cos(θi(t)) sin(θi(t))   sin²(θi(t)) ]          (5.7)

and

    H(θ(t)) := diag( M(θ1(t)), M(θ2(t)), . . . , M(θn(t)) ).              (5.8)
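Note that M(θi) = ri riᵀ in (5.7) is the rank-one orthogonal projection onto vehicle i's heading direction, so M² = M. A quick numerical sanity check (ours, not part of the proof):

```python
import math

def M(theta):
    """The projection matrix (5.7): r r^T for r = (cos theta, sin theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c * c, c * s], [c * s, s * s]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A projection satisfies M^2 = M (up to floating-point error).
theta = 0.9
P = M(theta)
P2 = matmul(P, P)
err = max(abs(P2[i][j] - P[i][j]) for i in range(2) for j in range(2))
print(err < 1e-12)  # → True
```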
Thus, the overall position dynamics read as

    ż = k H(θ(t)) (A ⊗ I2) z,    (5.9)

where z ∈ R2n is the aggregate state of z1, z2, . . . , zn and A is an n × n matrix whose
off-diagonal entries are aij and whose diagonal entries are −Σ_{j≠i} aij. Hence, the
matrix A is a generator matrix.
Let

    Mi = (1/2π) ∫₀^{2π} M(θi(τ)) dτ.

Thus we obtain

    Mi = [ m1i  m2i
           m2i  m3i ],

where

    m1i = (1/2π) ∫₀^{2π} cos²(θi(τ)) dτ,
    m2i = (1/2π) ∫₀^{2π} cos(θi(τ)) sin(θi(τ)) dτ,
    m3i = (1/2π) ∫₀^{2π} sin²(θi(τ)) dτ.
Let Hav = diag(M1, . . . , Mn). Then the averaged system associated with the system (5.9)
is given as follows:

    ż = Hav (A ⊗ I2) z.    (5.10)

Applying the Cauchy-Schwarz inequality leads to

    m1i m3i ≥ (m2i)².

Furthermore, since θi(t) is not constant, the inequality holds strictly. It now follows that
Mi is positive definite and therefore Hav is positive definite. More exactly, Hav is
α-diagonal positive definite with α = {1, 2, 3, 4, . . . , 2n − 1, 2n}.
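This positive definiteness is easy to confirm numerically. Under the controller (5.6), θi(t) = θi(0) + sin(t); the sketch below (our illustration, not part of the proof) averages M(θi(t)) over one period for several initial headings and checks that the 2 × 2 average is positive definite.

```python
import math

def averaged_M(theta0, n=20000):
    """Numerically average M(theta_i(t)) in (5.7) over one period, with
    theta_i(t) = theta0 + sin(t) (the heading produced by omega_i = cos t)."""
    m1 = m2 = m3 = 0.0
    for j in range(n):
        t = 2 * math.pi * j / n
        th = theta0 + math.sin(t)
        m1 += math.cos(th) ** 2
        m2 += math.cos(th) * math.sin(th)
        m3 += math.sin(th) ** 2
    return m1 / n, m2 / n, m3 / n

# The 2x2 average is positive definite iff m1 > 0 and m1*m3 - m2^2 > 0.
for theta0 in (0.0, 0.7, math.pi / 2, 2.5):
    m1, m2, m3 = averaged_M(theta0)
    print(m1 > 0 and m1 * m3 - m2 ** 2 > 0)  # each line reports True
```

By rotation invariance of the averaging, the eigenvalues of the average do not depend on θi(0), which is why every initial heading passes the check.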
Recall that G is the opposite digraph of GA, the digraph associated with the generator
matrix A. If G is QSC, then GA has a globally reachable node by Theorem 2.1 and the
matrix A ⊗ I2 is H(α, 2) stable by Theorem 2.15. This means that there is a similarity
transformation F such that

    F⁻¹ (A ⊗ I2) F = [ As  0
                       0   0_{2×2} ],

where As is Hurwitz and the last two columns of F must be in the null space of
A ⊗ I2. Without loss of generality, we choose the last two columns of F to be
1 ⊗ I2.
Applying the transformation

    e = (es, eo) = F⁻¹ z,

where es ∈ R^{2n−2} and eo ∈ R², to the system (5.9) yields

    (ės, ėo) = k F⁻¹ H(θ(t)) (A ⊗ I2) F e
             = k F⁻¹ H(θ(t)) F [ As  0
                                 0   0_{2×2} ] (es, eo)
             =: k [ As(t)  0
                    B(t)   0_{2×2} ] (es, eo).
Correspondingly, for the averaged system (5.10), we apply the same coordinate
transformation. Then we have

    (ės, ėo) = F⁻¹ Hav (A ⊗ I2) F e
             = F⁻¹ Hav F [ As  0
                           0   0_{2×2} ] (es, eo)
             =: [ Ās  0
                  B̄   0_{2×2} ] (es, eo).
Since A ⊗ I2 is H(α, 2) stable, the matrix Ās has to be Hurwitz. Hence the reduced
averaged system

    ės = Ās es

is exponentially stable. It then follows from Theorem A.1 (the averaging theorem) that
there exists a positive constant k* such that, for all 0 < k < k*, the reduced original
system

    ės = k As(t) es

is globally exponentially stable.
Moreover, since ėo = k B(t) es and B(t) is uniformly bounded, it follows that ėo → 0
exponentially as t → ∞. This implies that eo tends to some finite constant vector, say
a ∈ R². In conclusion,

    lim_{t→∞} z(t) = lim_{t→∞} F (es(t), eo(t)) = 1 ⊗ a.

Thus, we have proven that the n vehicles are stabilizable to formation Fp by the smooth
time-varying feedback controller (5.6).
(=⇒) If the n vehicles are stabilizable to formation Fp, then there exists a feedback
controller (5.5) so that the n vehicles converge to formation Fp for all initial conditions.
Let this controller be

    vi = gi(t, (xij, yij)|j∈Ni),
    ωi = hi(t, (xij, yij)|j∈Ni),    i = 1, . . . , n.

Thus, we obtain the closed-loop system, for i = 1, . . . , n,

    żi = gi(t, [(zj − zi) · ri, (zj − zi) · si]|j∈Ni) ri,
    θ̇i = hi(t, [(zj − zi) · ri, (zj − zi) · si]|j∈Ni),    (5.11)

and we have that for all initial conditions z(0) and θ(0),

    ∃z̄ ∈ Fp : z(t) → z̄ as t → ∞.
Arbitrarily choosing an initial condition z(0) = ζ and θ(0) = ϑ, where ζ = (ζ1, . . . , ζn) ∈
R2n and ϑ = (ϑ1, . . . , ϑn) ∈ Rⁿ, there is a z̄ ∈ Fp such that z(t) → z̄ as t tends to ∞.
That is,

    ∃a ∈ R², zi(t) → a for all i as t → ∞.

By way of contradiction, suppose that G is not QSC. Then there are two nodes i∗
and j∗ such that for any node k, either i∗ or j∗ is not reachable from k. Let V1 be the
subset of nodes from which i∗ is reachable and V2 the subset of nodes from which j∗
is reachable. Obviously, V1 and V2 are disjoint. Moreover, for each node i ∈ V1 (resp.
V2), Ni ⊆ V1 (resp. V2).
On the one hand, let b ∈ R² with b ≠ 0. For i ∈ V1, let z′i = zi + b and θ′i = θi.
Applying this coordinate transformation and using the fact that Ni ⊆ V1 for all i ∈ V1,
we obtain

    ż′i = gi(t, [(z′j − z′i) · ri, (z′j − z′i) · si]|j∈Ni) ri,
    θ̇′i = hi(t, [(z′j − z′i) · ri, (z′j − z′i) · si]|j∈Ni),    for i ∈ V1.

Notice that the dynamics of z′i and θ′i, i ∈ V1, depend only on themselves and are
exactly the same as the dynamics of zi and θi given in (5.11). So, given the same
initial condition z′i(0) = ζi and θ′i(0) = ϑi for i ∈ V1, we can conclude that z′i(t) → a
for all i ∈ V1 as t → ∞. This in turn implies that, for i ∈ V1, when zi(0) = ζi − b, the
trajectory zi(t) → a − b, recalling that z′i = zi + b.

On the other hand, when zj(0) = ζj and θj(0) = ϑj for all j ∈ V2, the trajectory
zj(t) → a as t → ∞.

Now we have an initial condition satisfying zi(0) = ζi − b for i ∈ V1 and zj(0) = ζj
for j ∈ V2, for which

    zi(t) → a − b, ∀i ∈ V1,   and   zj(t) → a, ∀j ∈ V2,   as t → ∞.

Since b ≠ 0, this contradicts the fact that for all initial conditions there is a z̄ ∈ Fp such
that z(t) → z̄ as t → ∞. ∎
Remark 5.1 An alternative choice of controller to stabilize the n vehicles to formation
Fp is

    vi(t) = Σ_{j∈Ni} aij xij(t),
    ωi(t) = γ cos(γt),    i = 1, 2, . . . , n,    (5.12)

where aij > 0 and γ > 0. By applying the time scaling τ = γt, one can use the same
argument to show that there exists a positive constant γ∗ such that, for all γ∗ < γ < ∞,
the controller (5.12) stabilizes the n vehicles to a point formation Fp.
Remark 5.2 The above results show that the n vehicles can be made to converge
exponentially to a point formation through simple local actions; in other words, they
reach agreement on a common point if the information flow digraph is QSC. Notice that
this graphical condition is equivalent to G having a centre node. So, if we treat a beacon
placed at a proper location as one member of the group of vehicles, and it is the centre
node of the information flow digraph, then the local actions of each individual vehicle
result in the group gathering at the beacon.
5.3.2 Simulation Example
Consider ten unicycles in the plane. The information flow digraph is given in Fig. 5.4.
Clearly, it is QSC: for example, node 5 is a centre node of the graph, while node 1
is not. The ten unicycles are randomly initialized, and the controller (5.6) is used with
k = 1. The simulated trajectories are presented in Fig. 5.5, which shows that the
vehicles eventually gather at a common location (a point formation).
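A minimal re-implementation of the closed loop is easy to write. The sketch below is ours; the graph, gain, horizon, and step size are arbitrary choices, not those of the thesis simulation. It applies controller (5.6) to four unicycles on a chain information flow digraph, which is QSC, and checks that the positions cluster.

```python
import math, random

# Controller (5.6) on a 4-vehicle chain: vehicle 0 has no neighbors and
# stays put; vehicle i (i >= 1) senses vehicle i-1, so G is QSC and
# Theorem 5.1 predicts gathering at a common point.  Forward Euler.
N = {0: [], 1: [0], 2: [1], 3: [2]}       # neighbor sets N_i
k, dt, T = 0.2, 2e-3, 600.0               # small gain, as the proof requires

random.seed(1)
z = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(4)]
th = [random.uniform(0, 2 * math.pi) for _ in range(4)]

t = 0.0
while t < T:
    v = []
    for i in range(4):
        ri = (math.cos(th[i]), math.sin(th[i]))
        xi = sum((z[j][0] - z[i][0]) * ri[0] + (z[j][1] - z[i][1]) * ri[1]
                 for j in N[i])           # x_ij summed over neighbors
        v.append(k * xi)                  # zero for vehicle 0 (empty N_0)
    for i in range(4):
        ri = (math.cos(th[i]), math.sin(th[i]))
        z[i][0] += v[i] * ri[0] * dt
        z[i][1] += v[i] * ri[1] * dt
        th[i] += (math.cos(t) if N[i] else 0.0) * dt
    t += dt

spread = max(math.dist(p, q) for p in z for q in z)
print(spread < 0.5)  # → True: the vehicles have gathered
```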
5.4 Stabilization of Line Formations
Now we study the stabilization problem of line formations. First we define the line
formation space as follows:

    Fl = { z ∈ R2n | (∃a1, a2 ∈ R²)(∀i)(∃µi ∈ [0, 1]) zi = µi a1 + (1 − µi) a2 }.
Figure 5.4: The information flow digraph.
Figure 5.5: Trajectories of ten unicycles in the plane.
Thus, z ∈ Fl implies that the n vehicles are in a line formation. Notice that Fp ⊂ Fl,
so a point formation is a special case of a line formation. In this section we want to
find the least restrictive condition on the information flow digraph G such that the n
vehicles are stabilizable to formation Fl.
5.4.1 Main Result
Let us recall the notation G∗ introduced in Chapter 2: G∗ is the opposite digraph of G.
Now comes our result.
Theorem 5.2 The n vehicles are stabilizable to formation Fl if and only if G∗ has at
most two closed strong components.
Proof: (⇐=) Firstly, suppose that G∗ has one closed strong component. Then by
Theorem 2.1 G is QSC and so by Theorem 5.1 the n vehicles are stabilizable to Fp, which
is a subset of Fl.
Secondly, suppose that G∗ has two closed strong components, say G∗1 = (V1, E∗1 ),
G∗2 = (V2, E∗2 ), and in addition that V1∪V2 = V , the node set of G∗. Then both G1 and G2
are QSC by Theorem 2.1. Thus, it follows from Theorem 5.1 that there exist controllers
such that
∃a1 ∈ R2, zi(t)→ a1 for all i ∈ V1 as t→∞
and
∃a2 ∈ R2, zj(t)→ a2 for all j ∈ V2 as t→∞.
This implies that
    ∃z̄ ∈ Fl, z(t) → z̄ as t → ∞.
Thirdly, suppose that G∗ has two closed strong components, say G∗1 = (V1, E∗1) and
G∗2 = (V2, E∗2), but V1 ∪ V2 ≠ V. Let V3 = V − V1 − V2. Without loss of generality,
assume V1 = {1, . . . , r1}, V2 = {r1 + 1, . . . , r1 + r2}, and V3 = {r1 + r2 + 1, . . . , n}.
Let r3 = n − r1 − r2. Consider the time-varying feedback controller (5.6) again. Then
the overall closed-loop system is given as follows:

    ż = k H(θ(t)) (A ⊗ I2) z,    (5.13)
where H(θ(t)) is the same as in (5.8) and A is a generator matrix. Further, the matrix A
has the form

    A = [ A1  0   0
          0   A2  0
          B1  B2  A3 ],

where A1 ∈ R^{r1×r1}, A2 ∈ R^{r2×r2}, and A3 ∈ R^{r3×r3}. Furthermore, one can easily
verify by Theorem 2.7 that −A3 is a nonsingular M-matrix. Thus the system (5.13) can
be written as

    ż¹ = k H1(θ(t)) (A1 ⊗ I2) z¹,
    ż² = k H2(θ(t)) (A2 ⊗ I2) z²,
    ż³ = k H3(θ(t)) ((A3 ⊗ I2) z³ + (B1 ⊗ I2) z¹ + (B2 ⊗ I2) z²),

where z¹, z², z³ are of compatible dimensions and H1(θ(t)), H2(θ(t)), H3(θ(t)) are
suitable matrices.
Since the associated digraph GA1 is just the digraph G∗1, which is strongly connected,
by the same argument as in the proof of Theorem 5.1 there exists a positive constant
k∗1 such that, for all 0 < k < k∗1,

    lim_{t→∞} z¹(t) = 1 ⊗ a1 for some a1 ∈ R².

For the same reason, there exists k∗2 such that, for all 0 < k < k∗2,

    lim_{t→∞} z²(t) = 1 ⊗ a2 for some a2 ∈ R².

Moreover, the convergence in both cases is exponential.

Next, applying the change of variables ς = (A3 ⊗ I2) z³ + (B1 ⊗ I2) z¹ + (B2 ⊗ I2) z²
yields

    ς̇ = k (A3 ⊗ I2) H3(θ(t)) ς + k (B1 ⊗ I2) H1(θ(t)) (A1 ⊗ I2) z¹
         + k (B2 ⊗ I2) H2(θ(t)) (A2 ⊗ I2) z².    (5.14)
Since −A3 is a nonsingular M-matrix, so is −Aᵀ3. It follows from the same argument as
in the proof of Theorem 2.15 that (A3 ⊗ I2)ᵀ is H(α, 0) stable, where α =
{1, 2, . . . , r3 − 1, r3}. This implies that, for the α block-diagonal symmetric positive
definite matrix H̄3, the average of H3(θ(t)), the matrix H̄3 (A3 ⊗ I2)ᵀ is Hurwitz, and
so is (A3 ⊗ I2) H̄3. Invoking Theorem A.1 (the averaging theorem) then gives that there
exists a positive constant k∗3 such that, for all 0 < k < k∗3, the origin of the nominal
system

    ς̇ = k (A3 ⊗ I2) H3(θ(t)) ς

is globally exponentially stable.

In addition, notice that the other two terms in (5.14) both converge exponentially to
zero. Hence (5.14) can be viewed as an exponentially stable system with an exponentially
vanishing input, and so ς(t) converges to zero.
Let k∗ = min{k∗1, k∗2, k∗3}. Thus, for all 0 < k < k∗,

    lim_{t→∞} z³(t) = −(A3 ⊗ I2)⁻¹ (B1 ⊗ I2) lim_{t→∞} z¹(t) − (A3 ⊗ I2)⁻¹ (B2 ⊗ I2) lim_{t→∞} z²(t)
                    = −(A3 ⊗ I2)⁻¹ (B1 ⊗ I2) (1 ⊗ a1) − (A3 ⊗ I2)⁻¹ (B2 ⊗ I2) (1 ⊗ a2)
                    = −(A3⁻¹ B1 1) ⊗ a1 − (A3⁻¹ B2 1) ⊗ a2.

Since

    (B1  B2  A3) 1 = 0,

we have

    −(A3⁻¹ B1 1) − (A3⁻¹ B2 1) = 1.

Hence, all zi(t) for i ∈ V3 approach a convex combination of a1 and a2, which implies
that

    z(t) → z̄ for some z̄ ∈ Fl as t → ∞.
(=⇒) The proof of necessity follows the same idea as that of Theorem 5.1. In
other words, if the n vehicles are stabilizable to formation Fl, then we choose one of the
stabilizing controllers and obtain a closed-loop system. Thus, for all initial conditions,
the trajectory z(t) tends to some point z̄ ∈ Fl as t tends to ∞. By way of contradiction,
suppose that G∗ has at least three closed strong components. Then, by the same
argument as in the proof of Theorem 5.1, we can find an initial condition for which the
trajectory z(t) converges to some point not in Fl, a contradiction. ∎
Theorem 5.2 has an interesting special case when G∗ has two closed strong components,
each consisting of only one node, say 1 and n. Vehicles 1 and n are called edge leaders.
The edge leaders here are not necessarily wheeled vehicles; they can be virtual beacons
or landmarks, but the vehicles respond to these edge leaders much as they respond to
real neighbor vehicles. The purpose of the edge leaders is to introduce the mission, that
is, to direct the group behavior. We emphasize that the edge leaders are not central
coordinators: they do not broadcast instructions. As for the remaining vehicles i,
i = 2, . . . , n − 1, we assume that each vehicle can access the information from vehicles
i − 1 and i + 1. This gives the information flow digraph G in Fig. 5.6. It is readily seen
that the opposite digraph G∗ has exactly two closed strong components. We now show
that in this special case all vehicles converge to a uniform distribution on the line
segment specified by the two edge leaders.

Figure 5.6: The information flow digraph.
Theorem 5.3 Consider a group of n vehicles with two stationary edge leaders labeled
1 and n. Then there exists a positive constant k∗ such that, for all 0 < k < k∗, the
following smooth time-varying feedback controller

    vi(t) = k Σ_{j∈Ni} xij(t),   Ni = {i − 1, i + 1},
    ωi(t) = cos(t),    i = 2, . . . , n − 1,    (5.15)

guarantees that all the vehicles converge to a uniform distribution on the line segment
specified by the two edge leaders.
Proof: First we observe that for the controller (5.15) the overall closed-loop system is
of the form (5.13), where the matrix A is given as follows:

    A = [ 0  0  0  0  · · ·  0
          1 −2  1  0  · · ·  0
          0  1 −2  1  · · ·  0
          ⋮      ⋱  ⋱  ⋱     ⋮
          0  · · ·  0  1 −2  1
          0  · · ·  0  0  0  0 ].
Since G∗ has two closed strong components, we know from the proof of Theorem 5.2
that there exists k∗ > 0 such that, for all 0 < k < k∗, the n vehicles are stabilizable to
formation Fl. That is,

    ∃z̄ ∈ Fl, z(t) → z̄ as t → ∞.

This in turn implies that

    lim_{t→∞} (A ⊗ I2) z(t) = 0.
Consider the following reordering of {1, 2, . . . , 2n}:

    (m1, m2, . . . , m2n) = (1, 3, 5, . . . , 2n − 1, 2, 4, 6, . . . , 2n).

Then the associated permutation matrix P has the unit coordinate vectors
e_{m1}, e_{m2}, . . . , e_{m2n} for its columns. Now observe that the matrix P performs the
transformation

    Pᵀ (A ⊗ I2) P = I2 ⊗ A = [ A  0
                               0  A ]   and   Pᵀ z = (x, y),

where x = (x1 · · · xn)ᵀ and y = (y1 · · · yn)ᵀ. Thus,

    0 = (A ⊗ I2) z(∞) = P (I2 ⊗ A) Pᵀ z(∞) = P [ A  0
                                                 0  A ] (x(∞), y(∞))
implies

    A x(∞) = 0 and A y(∞) = 0.

Also note that

    Ker(A) = span{ ξ1 = (0, 1, 2, . . . , n − 1)ᵀ, ξ2 = (n − 1, n − 2, n − 3, . . . , 0)ᵀ }.

Hence

    x(∞) = α1 ξ1 + α2 ξ2 and y(∞) = β1 ξ1 + β2 ξ2.
Furthermore, since

    x1(∞) = x1(0), xn(∞) = xn(0), y1(∞) = y1(0), yn(∞) = yn(0),

we solve for

    α1 = xn(0)/(n − 1),  α2 = x1(0)/(n − 1)  and  β1 = yn(0)/(n − 1),  β2 = y1(0)/(n − 1).
In conclusion, we have

    x(∞) = ( x1(0), ((n − 2) x1(0) + xn(0))/(n − 1), . . . , (x1(0) + (n − 2) xn(0))/(n − 1), xn(0) )ᵀ,
    y(∞) = ( y1(0), ((n − 2) y1(0) + yn(0))/(n − 1), . . . , (y1(0) + (n − 2) yn(0))/(n − 1), yn(0) )ᵀ.

This shows that all vehicles asymptotically approach a uniform distribution on the line
with the controller (5.15). ∎
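The steady state derived above can be checked in exact arithmetic. The sketch below (our illustration, with n = 6 and hypothetical leader positions 0 and 10) verifies that x(∞) annihilates the interior rows of A and that the limit positions are equally spaced between the edge leaders.

```python
from fractions import Fraction as F

# Steady state of the chain with edge leaders, using the kernel vectors
# xi1 = (0, 1, ..., n-1), xi2 = (n-1, ..., 0) and the coefficients
# alpha1 = x_n(0)/(n-1), alpha2 = x_1(0)/(n-1) derived above.
n = 6
x1_0, xn_0 = F(0), F(10)                      # hypothetical leader positions
alpha1, alpha2 = xn_0 / (n - 1), x1_0 / (n - 1)
x_inf = [alpha1 * i + alpha2 * (n - 1 - i) for i in range(n)]

# Interior rows of A are (1, -2, 1), so A x(inf) must vanish there,
# and consecutive gaps must all be equal (uniform spacing).
residual = [x_inf[i - 1] - 2 * x_inf[i] + x_inf[i + 1] for i in range(1, n - 1)]
gaps = [x_inf[i + 1] - x_inf[i] for i in range(n - 1)]
print([int(r) for r in residual], [int(g) for g in gaps])
# → [0, 0, 0, 0] [2, 2, 2, 2, 2]
```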
Chapter 5. Coupled Kinematic Unicycles 172
5.4.2 Simulation Example
We simulate six unicycles in the plane, where the two edge leaders are stationary and
the remaining four unicycles move around the plane using the control law (5.15) with
k = 1. The information flow digraph is the same as the one in Fig. 5.6. As shown by
the simulation in Fig. 5.7, the unicycles converge to a line and are uniformly distributed
on the line segment.
Figure 5.7: A uniform distribution on a line.
5.5 Stabilization of Any Geometric Formations
In this section, we turn our attention to the stabilization problem of arbitrary geometric
formations. Represent a geometric formation of n vehicles in the plane by specifying a
vector c = (c1, . . . , cn) ∈ R2n. We introduce the formation space

    Fc = { z ∈ R2n | (∃R, b)(∀i) zi = R ci + b }
to describe the formation specified by c, where

    R(·) = [ cos(·)  −sin(·)
             sin(·)   cos(·) ]   and   b ∈ R²

are a rotation matrix and a translation vector, respectively. Such a formation is defined
up to translation and rotation. In other words, the n vehicles are in formation Fc if
there are a rotation matrix R and a translation vector b such that zi = R ci + b for all i
(see Fig. 5.8 for an example).
Figure 5.8: Vehicles in formation.
In order for a group of vehicles to achieve a formation specified by c, suppose that
they have a common sense of direction, represented by the angle ϑ in Fig. 5.9, and
assume that each individual i can measure its own orientation with respect to the
common sense of direction, i.e., the angle φi in Fig. 5.9. This can be implemented in
practice: for instance, each vehicle may carry a navigation device such as a compass;
alternatively, all vehicles may initially agree on their orientation and use it as the
common direction. The common direction need not coincide with the positive x-axis
of the global frame. Thus φi = θi − ϑ (see Fig. 5.9).
Figure 5.9: A common sense of direction.
5.5.1 Main Result
Let us first define a rotation matrix

    R(α) := [ R1(α)
              R2(α) ] := [ cos(α)  −sin(α)
                           sin(α)   cos(α) ],

where R1(α) and R2(α) denote the first and second rows of R(α).
Our result for stabilization of any geometric formation Fc is stated next.
Theorem 5.4 If G is QSC, then the n vehicles are stabilizable to formation Fc by the
following feedback controller: for i = 1, . . . , n,

    vi = k Σ_{j∈Ni} a′ij (xij − R1(−φi)(cj − ci)),   ωi = cos(t),   if Ni ≠ ∅,
    vi = 0,   ωi = 0,   otherwise,    (5.16)

where a′ij > 0 and 0 < k < k∗ for some k∗ > 0.
Proof: Firstly, notice that when we set, for each i,
aij =
a′ij if j ∈ Ni,
0 otherwise,
then vi can be rewritten as
vi = k∑j 6=i
aij
(xij +R1(−φi)(cj − ci)
)
= k∑j 6=i
aij
((zj − zi) +R(ϑ)(cj − ci)
)· ri.
After substituting vi into the kinematic equation of each vehicle, we get

żi = k M(θi(t)) ∑_{j≠i} aij ( (zj − zi) + R(ϑ)(cj − ci) ).

Hence, the overall system is given by

ż = k H(θ(t)) ( (A ⊗ I2) z + (In ⊗ R(ϑ))(A ⊗ I2) c ),

where H(θ(t)) and A are the same as in (5.9).
By the property of the Kronecker product, we know that

(In ⊗ R(ϑ))(A ⊗ I2) = A ⊗ R(ϑ) = (A ⊗ I2)(In ⊗ R(ϑ)).

Thus, using the equality above we obtain

ż = k H(θ(t)) (A ⊗ I2) ( z − (In ⊗ R(ϑ)) c ).
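The mixed-product identity used in this step can be sanity-checked numerically. The sketch below builds Kronecker products in pure Python and compares both sides on an arbitrary 3-node example (the matrices are illustrative, not from the thesis):

```python
import math

def kron(A, B):
    """Kronecker product A ⊗ B for matrices given as lists of rows."""
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)]
            for i in range(len(A) * p)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

th = 0.7
R = [[math.cos(th), -math.sin(th)], [math.sin(th), math.cos(th)]]
I2 = [[1, 0], [0, 1]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A = [[-2, 1, 1], [1, -1, 0], [0, 1, -1]]   # an arbitrary 3x3 coupling matrix

# Check (I_n ⊗ R)(A ⊗ I_2) = A ⊗ R entrywise.
lhs = matmul(kron(I3, R), kron(A, I2))
rhs = kron(A, R)
diff = max(abs(lhs[i][j] - rhs[i][j]) for i in range(6) for j in range(6))
print(diff)   # ~0, up to floating-point roundoff
```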
Applying the coordinate transformation ς = z − (In ⊗ R(ϑ)) c gives the new system

ς̇ = k H(θ(t)) (A ⊗ I2) ς.

It follows from the proof of Theorem 5.1 that lim_{t→∞} ς(t) = 1 ⊗ a for some a ∈ R2. Hence,

lim_{t→∞} zi(t) = R(ϑ) ci + a,   i = 1, . . . , n,

which means that the n vehicles converge to formation Fc. ∎
Remark 5.3 Notice that θi(t) = θi(t0) + sin(t), so if the n vehicles achieve an agreement on their initial orientation θi(t0) and treat it as their common sense of direction, then φi = sin(t) and the feedback controller (5.16) becomes

[vi; ωi] = [ k ∑_{j∈Ni} a′ij ( xij + R1(−sin(t))(cj − ci) ) ; cos(t) ]   if Ni ≠ ∅,
[vi; ωi] = [0; 0]   otherwise.                                           (5.17)

The agreement on their orientation can be implemented by a linear alignment strategy as shown in Chapter 3. However, the controller (5.17) is not robust in practice.
5.5.2 Simulation Example
We simulate ten unicycles moving in the plane. Suppose the information flow digraph is the one given in Fig. 5.4; as we mentioned before, it is QSC. A desired circle formation specified by c is considered, where the components of c are ci = 75 e^{j2(i−1)π/10}, i = 1, . . . , 10. The controller (5.17) is then used with the parameter k = 1. For the initial condition

x(0) = (10, 10, 10, 0, 0, 0, 0, −10, −10, −10)^T,
y(0) = (−5, 0, 5, 10, 3, −3, −10, −5, 0, 5)^T,
θ(0) = (π/6, π/6, π/6, π/6, π/6, π/6, π/6, π/6, π/6, π/6)^T,

the trajectories of the ten vehicles forming a circle formation are depicted in Fig. 5.10.
Figure 5.10: A circle formation.
Chapter 6
Conclusions and Future Work
6.1 Thesis Summary
We have initiated a research effort to study the stability and stabilizability problems
of coupled dynamic systems. The coupling structure is schematically represented by a
directed graph whose nodes correspond to subsystems (or agents) and whose arcs rep-
resent interactions or information flows among them. This work seeks to address the
stability properties of coupled dynamic systems by studying the coupling structures and
provides rigorous mathematical proofs and justifications. Of particular interest is the
class of coupled dynamic systems where the equilibrium set contains all the states with
identical state components. First, stability and attractivity with respect to the equilib-
rium subspace are studied for both coupled linear systems and coupled nonlinear systems
with static and dynamic interaction structures. Necessary and sufficient graphical con-
ditions are derived to ensure that the equilibrium set is (uniformly globally) attractive.
Our results may explain the observed behaviors of a number of phenomena in nature.
Secondly, the stabilizability problem of vehicle formations is studied with special focus on a network of kinematic unicycles. Again, the coupling structure—the information flow digraph—plays the central role in addressing the stabilizability issue. Necessary and
sufficient graphical conditions are also obtained for global stabilization of vehicle formations (point formations and line formations). Stabilization of vehicles to arbitrary geometric formations is also formulated and investigated.
6.2 Future Work
Here, we outline research directions which spring from this thesis.
• In Chapter 5, the feasibility problem of achieving a specified geometric formation of
a group of unicycles is investigated where the information flow structure is assumed
to be static. However, the more interesting and realistic situation is when the
information flow graph is time-varying or state-dependent. Developing more general results for a network of unicycles with a dynamic information flow graph, where ad hoc links can be established and dropped, remains an open issue. In addition,
collision avoidance is another important issue which needs to be studied in the
future.
• In Chapter 4, we assume the family of vector fields satisfies certain sub-tangentiality
conditions and then obtain a necessary and sufficient condition for (global uniform)
attractivity with respect to an equilibrium subspace. However, not all such systems
satisfy these assumptions. Some examples can be found in [6]. Hence, if we suppose
the dynamic interaction digraph is UQSC, then what conditions of the family of
vector fields can be derived to guarantee stability of the equilibrium set?
• In Chapter 4, the stability properties of switched interconnected systems are studied
where the switching signal is state-independent. In many applications, switching among the family of vector fields {fp : p ∈ P} is state-dependent. In other words, the switched interconnected system is

ẋ1 = f^1_{σ(x)}(x1, . . . , xn),
⋮
ẋn = f^n_{σ(x)}(x1, . . . , xn),

where σ : R^{mn} → P. This type of switched interconnected system, which would
call for another framework and new tools from hybrid systems or non-continuous
systems analysis, needs to be further studied in the framework of coupled dynamic
systems.
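To make the distinction concrete, here is a toy sketch of a state-dependent switching signal σ : R^2 → {0, 1} driving a two-agent interconnection. The example (modes, switching rule, gains) is hypothetical, chosen only to show σ depending on x rather than on t:

```python
# State-dependent switching: sigma picks the active mode from the current state.
def sigma(x):
    return 0 if x[0] >= x[1] else 1

# Two interconnection modes; in each, exactly one agent moves toward the other.
modes = [
    lambda x: [x[1] - x[0], 0.0],   # mode 0: agent 1 chases agent 2
    lambda x: [0.0, x[0] - x[1]],   # mode 1: agent 2 chases agent 1
]

x, dt = [4.0, -1.0], 0.01
for _ in range(2000):                # Euler integration of xdot = f_{sigma(x)}(x)
    f = modes[sigma(x)](x)
    x = [x[i] + dt * f[i] for i in range(2)]
print(abs(x[0] - x[1]))              # disagreement decays under either mode
```

In this particular toy system the disagreement e = x1 − x2 satisfies ė = −e in both modes, so switching cannot destabilize it; the interesting (open) case discussed above is when stability genuinely depends on the state-driven switching pattern.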
• In real applications of multi-agent systems, motion coordination is not the final
objective. A group of agents has to interact with the environment to execute
certain tasks in a cooperative fashion. Consider, for example, group foraging and
swarm robotics for search applications. How do we use autonomous, dynamic, local
interaction (as do insects) to enable collective foraging over challenging terrains?
The foraging of an insect society is a complex process involving large numbers of
individuals collecting food from many different sources. In foraging, each agent
needs not only to coordinate its motion with neighboring agents, but also to act
in response to the environment. To better understand collective foraging behavior,
we will need a model of the coupled agent-environment system and we will need to
develop a comprehensive, rigorous stability theory for it.
Appendix A
Supplementary Material
A.1 Set Stability and Attractivity
In this section, set stability and attractivity are defined. Consider a nonautonomous system

ẋ = f(t, x)                                   (A.1)

where f : R × Rn → Rn is piecewise continuous in t and continuous in x on R × Rn. Let x(t, t0, x0) be a solution of (A.1) corresponding to the initial condition x(t0, t0, x0) = x0.
A set Ω ⊂ Rn is called a positively invariant set of the system (A.1) if for all t0 ∈ R
and x0 ∈ Ω
x(t, t0, x0) ∈ Ω for all t ≥ t0.
In the same fashion, a set Ω ⊂ Rn is called an invariant set of the system (A.1) if for all
t0 ∈ R and x0 ∈ Ω
x(t, t0, x0) ∈ Ω for all t ∈ R.
Invariant sets can be represented by isolated (stationary) points, curves, surfaces and
other closed and open sets.
Let Ω be an invariant set of the system (A.1). Now we introduce the notions of
stability and attractivity with respect to Ω. These are concerned with the behavior of
any solution x(t) = x(t, t0, x0).
Definition A.1 The invariant set Ω, or the system (A.1) with respect to Ω, is called
• stable if for all t0 and for all ε > 0 there exists δ > 0 such that
‖x0‖Ω ≤ δ =⇒ (∀t ≥ t0) ‖x(t)‖Ω ≤ ε;
• uniformly stable (US) if for all ε > 0 there exists δ > 0 such that for all t0
‖x0‖Ω ≤ δ =⇒ (∀t ≥ t0) ‖x(t)‖Ω ≤ ε.
According to the definition above, the system is stable with respect to Ω if, given that we do not want the distance of any trajectory x(t) from Ω to exceed a prespecified positive number ε, we can determine an a priori bound δ such that any solution starting at t0 from an initial state inside Ω's neighborhood of radius δ stays inside Ω's neighborhood of radius ε at all future times t ≥ t0.
Once the notion of stability is understood, uniform stability is easy to grasp. For plain stability, the δ above may depend on both ε and t0; if a δ can be found that depends only on ε and not on t0, then the system is
uniformly stable with respect to Ω. If the system (A.1) is autonomous (f does not depend
explicitly on t), then there is no distinction between stability and uniform stability, since
changing the initial time merely translates the resulting solution in time by a like amount.
Note that when Ω = {x̄}, the invariant set Ω reduces to an equilibrium point x̄, and this definition coincides with the conventional notion of equilibrium stability.
In the notions of attractivity below we define a regional attractivity directly (including
the domain of attraction) instead of local attractivity. Let D be a set in Rn.
Definition A.2 The invariant set Ω, or the system (A.1) with respect to Ω, is called

• attractive in region D if

x0 ∈ D =⇒ lim_{t→∞} ‖x(t)‖Ω = 0;

• uniformly attractive (UA) in region D if for all c > 0

(‖x0‖ ≤ c) ∧ (x0 ∈ D) =⇒ lim_{t→∞} ‖x(t)‖Ω = 0 uniformly in x0, t0;

• globally attractive (GA) if it is attractive in Rn;

• uniformly globally attractive (UGA) if it is uniformly attractive in Rn.
Regional attractivity with respect to the invariant set simply means that every solution starting in the region D approaches Ω as t → ∞; note that no uniformity is required. In contrast, uniform attractivity requires firstly that the convergence rate is independent of t0 and secondly that the solution trajectories starting inside any bounded subset of D all approach Ω at a uniform rate. Note that D may be unbounded in general, and uniformity in x0 over an unbounded set would be too strong a requirement; this is why the boundedness of initial states (namely, ‖x0‖ ≤ c) appears in the notion of uniform attractivity.
Note that uniform attractivity in the definition above is equivalent to the following statement: for all c > 0 and for all ε > 0 there exists a T > 0 such that for all t0,

(‖x0‖ ≤ c) ∧ (x0 ∈ D) =⇒ (∀t ≥ t0 + T) ‖x(t)‖Ω ≤ ε.
As mentioned before, there is no distinction between stability and uniform stability for
autonomous systems. Likewise, there is no distinction between attractivity and uniform
attractivity for autonomous systems.
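For the agreement set Ω = {x : x1 = · · · = xn} studied in this thesis, the set distance ‖x‖Ω is simply the deviation of x from its component average, and attractivity can be observed numerically. A small sketch on an autonomous consensus system ẋ = −Lx over a path graph (the graph, step size, and initial state are illustrative assumptions):

```python
import math

# Distance to the agreement set Omega = { x : x_1 = ... = x_n }:
# ||x||_Omega = || x - xbar * 1 || with xbar the mean of the components.
def dist_to_agreement(x):
    m = sum(x) / len(x)
    return math.sqrt(sum((xi - m) ** 2 for xi in x))

# Euler-integrate the consensus flow xdot = -L x on the path graph 1-2-3-4;
# Omega is invariant and globally attractive, so ||x(t)||_Omega -> 0.
x, dt = [3.0, 0.0, -2.0, 1.0], 0.01
d0 = dist_to_agreement(x)
for _ in range(3000):
    xdot = [0.0] * 4
    for i, j in [(0, 1), (1, 2), (2, 3)]:     # path-graph edges
        xdot[i] += x[j] - x[i]
        xdot[j] += x[i] - x[j]
    x = [x[i] + dt * xdot[i] for i in range(4)]
print(d0, "->", dist_to_agreement(x))         # distance to Omega has decayed
```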
A graphical interpretation of the notions of stability, attractivity in D, and global attractivity with respect to Ω is shown in Fig. A.1.
A.2 Averaging Theory
In this section we introduce an averaging result that is useful for (asymptotic) stability
analysis of slowly time-varying systems. More details can be found in [2, 58, 73, 84, 99].
Figure A.1: Stability, attractivity, and global attractivity with respect to Ω.
The averaging method applies to a system of the form

ẋ = ε f(t, x, ε)                               (A.2)

where ε is a small positive parameter and f(t, x, ε) is T-periodic in t for some T > 0. We associate with (A.2) an autonomous averaged system

ẋ = fav(x),                                    (A.3)

where

fav(x) = (1/T) ∫₀ᵀ f(τ, x, 0) dτ.
Then we arrive at the following theorem.
Theorem A.1 ( [58], page 333) Let f(t, x, ε) be continuous and bounded, and have
continuous and bounded derivatives up to the second order with respect to (x, ε) for
(t, x, ε) ∈ [0, ∞) × D × [0, ε0], where D is a neighborhood of the origin. Suppose f
is T -periodic in t for some T > 0 and that f(t, 0, ε) = 0 for all t ≥ 0 and ε ∈ [0, ε0].
If the origin is an exponentially stable equilibrium point of the averaged system (A.3),
then there is a positive constant ε∗ ≤ ε0 such that for all 0 < ε < ε∗, the origin is an
exponentially stable equilibrium point of (A.2).
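A quick numerical illustration of Theorem A.1 (the example system is ours, not from [58]): take f(t, x, ε) = −(1 + cos t)x, which is 2π-periodic in t and vanishes at x = 0. Its average over a period is fav(x) = −x, whose origin is exponentially stable, so for small ε the origin of (A.2) should attract trajectories as well:

```python
import math

# xdot = eps * f(t, x) with f(t, x) = -(1 + cos t) * x; the averaged system is
# xdot = -x (the mean of 1 + cos t over a period is 1).  Forward Euler.
eps, dt, x, t = 0.1, 0.001, 1.0, 0.0
while t < 100.0:
    x += dt * eps * (-(1 + math.cos(t)) * x)
    t += dt
print(abs(x))   # roughly exp(-eps * 100): the origin attracts the trajectory
```

The exact solution is x(t) = x(0) exp(−ε(t + sin t)), which matches the exponential rate −ε predicted by averaging up to a bounded periodic factor.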
Bibliography
[1] D. Aeyels and P. D. Leenheer, “Extension of the Perron-Frobenius theorem to
homogeneous systems,” SIAM Journal on Control and Optimization, vol. 41, no. 2,
pp. 563–582, 2002.
[2] D. Aeyels and J. Peuteman, “On exponential stability of nonlinear time-varying
differential equations,” Automatica, vol. 35, no. 6, pp. 1091–1100, 1999.
[3] B. D. O. Anderson, “New developments in the theory of positive systems,” in
Systems and Control in the Twenty-First Century, C. I. Byrnes, B. N. Datta, D. S.
Gilliam, and C. F. Martin, Eds. Birkhauser, 1997, pp. 17–36.
[4] H. Ando, Y. Oasa, I. Suzuki, and M. Yamashita, “Distributed memoryless point
convergence algorithm for mobile robots with limited visibility,” IEEE Transactions
on Robotics and Automation, vol. 15, no. 5, pp. 818–828, 1999.
[5] H. Ando, I. Suzuki, and M. Yamashita, “Formation and agreement problems for
synchronous mobile robots with limited visibility,” in Proceedings of the 1995 IEEE
International Symposium on Intelligent Control, Monterey, CA, USA, 1995, pp.
453–460.
[6] D. Angeli and P. A. Bliman, "Stability of leaderless multi-agent systems: extension of a result by Moreau," 2005, submitted for publication.
[7] J. P. Aubin, Viability Theory. Birkhauser, 1991.
[8] J. P. Aubin and A. Cellina, Differential Inclusions, Set-valued Maps and Viability
Theory. Springer-Verlag, 1984.
[9] T. Balch and R. C. Arkin, “Behavior-based formation control for multirobot
teams,” IEEE Transactions on Robotics and Automation, vol. 14, no. 6, pp. 926–
939, 1998.
[10] J. Bang-Jensen and G. Gutin, Digraphs: Theory, Algorithms and Applications.
Springer-Verlag, 2002.
[11] R. W. Beard, J. Lawton, and F. Y. Hadaegh, “A coordination architecture for
spacecraft formation control,” IEEE Transactions on Control Systems Technology,
vol. 9, no. 6, pp. 777–790, 2001.
[12] R. W. Beard and V. Stepanyan, “Information consensus in distributed multiple ve-
hicle coordinated control,” in Proceedings of the 42nd IEEE Conference on Decision
and Control, Maui, Hawaii, USA, 2003, pp. 2029–2034.
[13] L. W. Beineke and R. J. Wilson, Graph Connections: Relationships Between Graph
Theory and Other Areas of Mathematics. Clarendon Press, 1997.
[14] I. V. Belykh, V. N. Belykh, and M. Hasler, “Blinking model and synchronization
in small-world networks with a time-varying coupling,” Physica D, vol. 195, no. 1,
2004.
[15] ——, “Connection graph stability method for synchronized coupled chaotic sys-
tems,” Physica D, vol. 195, no. 1, 2004.
[16] C. Berge and A. Ghouila-Houri, Programming, Games and Transportation Net-
works. John Wiley and Sons, 1965.
[17] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sci-
ences. Classics in Appl. Math. 9, SIAM, 1994.
[18] R. Bhatia, Matrix Analysis. Springer-Verlag, 1996.
[19] F. Blanchini, “Set invariance in control,” Automatica, vol. 35, no. 11, pp. 1747–
1767, 1999.
[20] G. Bouligand, Introduction à la géométrie infinitésimale directe. Gauthier-Villars, 1932.
[21] C. M. Breder, “Equations descriptive of fish schools and other animal aggregations,”
Ecology, vol. 35, no. 3, pp. 361–370, 1954.
[22] R. W. Brockett, “Asymptotic stability and feedback stabilization,” in Differential
Geometric Control Theory: Proceedings of the Conference Held at Michigan Tech-
nological University, R. W. Brockett, R. S. Millman, and H. J. Sussmann, Eds.
Birkhauser, 1983.
[23] F. H. Clarke, “Generalized gradients and applications,” Transactions of the Amer-
ican Mathematical Society, vol. 205, no. 4, pp. 247–262, 1975.
[24] J. Cortes, S. Martinez, and F. Bullo, “Robust rendezvous for mobile autonomous
agents via proximity graphs in d dimensions,” 2004, submitted to IEEE Transac-
tions on Automatic Control.
[25] J. M. Danskin, “The theory of max-min, with applications,” SIAM Journal on
Applied Mathematics, vol. 14, no. 4, pp. 641–664, 1966.
[26] M. H. DeGroot, "Reaching a consensus," Journal of the American Statistical Associ-
ation, vol. 69, no. 345, pp. 118–121, 1974.
[27] J. P. Desai, J. P. Ostrowski, and V. Kumar, “Modelling and control of formation
of nonholonomic mobile robots,” IEEE Transactions on Robotics and Automation,
vol. 17, no. 6, pp. 905–908, 2001.
[28] J. Desai, J. Ostrowski, and V. Kumar, “Controlling formations of multiple mo-
bile robots,” in Proceedings of IEEE International Conference on Robotics and
Automation, Leuven, Belgium, 1998, pp. 2864–2869.
[29] M. Egerstedt and X. Hu, “Formation constrained multi-agent control,” IEEE
Transactions on Robotics and Automation, vol. 17, no. 6, pp. 947–951, 2001.
[30] ——, “A hybrid control approach to action coordination for mobile robots,” Auto-
matica, vol. 38, no. 1, pp. 125–130, 2002.
[31] T. Eren, P. N. Belhumeur, B. D. O. Anderson, and A. S. Morse, "A framework for
maintaining formations based on rigidity,” in Proceedings of the 15th IFAC World
Congress, Barcelona, Spain, 2002.
[32] L. Farina and S. Rinaldi, Positive Linear Systems: Theory and Applications.
Wiley-Interscience, 2000.
[33] J. A. Fax and R. M. Murray, "Graph Laplacians and vehicle formation stabilization,"
in Proceedings of the 15th IFAC World Congress, Barcelona, Spain, 2002.
[34] ——, “Information flow and cooperative control of vehicle formations,” in Proceed-
ings of the 15th IFAC World Congress, Barcelona, Spain, 2002.
[35] R. Fierro, A. Das, V. Kumar, and J. Ostrowski, “Hybrid control of formations
of robots,” in Proceedings of IEEE International Conference on Robotics and Au-
tomation, Seoul, Korea, 2001, pp. 157–162.
[36] L. R. Foulds, Graph Theory Applications. Springer-Verlag, 1992.
[37] J. Fredslund and M. J. Mataric, “A general algorithm for robot formation using
local sensing and minimal communication,” IEEE Transactions on Robotics and
Automation, vol. 18, no. 5, pp. 837–846, 2002.
[38] V. Gazi and K. M. Passino, “Stability analysis of swarms,” IEEE Transactions on
Automatic Control, vol. 48, no. 4, pp. 692–697, 2003.
[39] ——, “Stability analysis of social foraging swarms,” IEEE Transactions on Systems,
Man, and Cybernetics–Part B: Cybernetics, vol. 34, no. 1, pp. 539–557, 2004.
[40] J. Gunawardena, “Chemical reaction network theory for in-silico biologists,” Bauer
Center for Genomics Research, Harvard University, Cambridge, MA, USA, Lecture
Notes, 2003.
[41] J. Hajnal, “Weak ergodicity in non-homogeneous Markov chains,” Proc. Cambridge
Philos. Soc., vol. 54, pp. 233–246, 1958.
[42] F. Han, K. W. T. Yamada, K. Kiguchi, and K. Izumi, “Construction of an omnidi-
rectional mobile robot platform based on active dual-wheel caster mechanisms and
development of a control simulator,” Journal of Intelligent and Robotic Systems,
vol. 29, pp. 257–275, 2000.
[43] Y. Hatano and M. Mesbahi, “Agreement over random networks,” in Proceedings
of the 43rd IEEE Conference on Decision and Control, Atlantis, Paradise Island,
Bahamas, 2004, pp. 2010–2015.
[44] D. Hershkowitz and N. Mashal, “P α-matrices and Lyapunov scalar stability,” The
Electronic Journal of Linear Algebra, vol. 4, pp. 39–47, 1998.
[45] J. P. Hespanha, "Uniform stability of switched linear systems: extensions of
LaSalle’s invariance principle,” IEEE Transactions on Automatic Control, vol. 49,
no. 4, pp. 470–482, 2004.
[46] C. Huygens, Horologium Oscillatorium. Paris, France, 1673.
[47] H. Ishii and B. A. Francis, “Stabilizing a linear system by switching control with
dwell time,” IEEE Transactions on Automatic Control, vol. 47, no. 12, pp. 1962–
1973, 2002.
[48] A. Jadbabaie, J. Lin, and A. S. Morse, “Coordination of groups of mobile au-
tonomous agents using nearest neighbor rules,” IEEE Transactions on Automatic
Control, vol. 48, no. 6, pp. 988–1001, 2003.
[49] A. Jadbabaie, N. Motee, and M. Barahona, "On the stability of the Kuramoto
model of coupled nonlinear oscillators,” in Proceedings of the 2004 American Con-
trol Conference, Boston, USA, 2004, pp. 4296–4301.
[50] F. Jadot, Dynamics and Robust Nonlinear PI Control of Stirred Tank Reactors.
Ph.D. Dissertation, Universite Catholique de Louvain, Louvain, Belgium, 1996.
[51] K. H. Johansson and A. Speranzon, “Graph Laplacians and vehicle formation stabi-
lization,” in Proceedings of the 16th IFAC World Congress, Prague, Czech Republic,
2005.
[52] E. W. Justh and P. S. Krishnaprasad, “A simple control law for UAV formation
flying,” University of Maryland, Tech. Rep., 2003, Technical Report No: TR2002-
38.
[53] ——, “Steering laws and continuum models for planar formations,” in Proceedings
of the 42nd IEEE Conference on Decision and Control, Maui, Hawaii, USA, 2003,
pp. 3609–3614.
[54] ——, “Equilibria and steering laws for planar formations,” Systems and Control
Letters, vol. 52, pp. 25–38, 2004.
[55] ——, “Natural frames and interacting particles in three dimensions,” in Proceedings
of the 44th IEEE Conference on Decision and Control, Seville, Spain, 2005.
[56] T. Kaczorek, Positive 1D and 2D Systems. Springer-Verlag, 2002.
[57] W. Kang, N. Xi, and A. Sparks, “Theory and applications of formation control
in a perceptive referenced frame,” in Proceedings of the 39th IEEE Conference on
Decision and Control, Sydney, Australia, 2000, pp. 352–357.
[58] H. K. Khalil, Nonlinear Systems, 2nd ed. Prentice Hall, 1996.
[59] Y. Kuramoto, Chemical Oscillations, Waves, and Turbulence. Springer-Verlag,
1984.
[60] J. P. LaSalle, “Stability theory for ordinary differential equations,” Journal of Dif-
ferenial Equations, vol. 4, pp. 57–65, 1968.
[61] P. D. Leenheer and D. Aeyels, “Stability properties of equilibria of classes of co-
operative systems,” IEEE Transactions on Automatic Control, vol. 46, no. 12, pp.
1996–2001, 2001.
[62] N. E. Leonard and E. Fiorelli, “Virtual leaders, artificial potentials and coordinated
control of groups,” in Proceedings of the 40th IEEE Conference on Decision and
Control, Orlando, Florida, USA, 2001, pp. 2968–2973.
[63] D. Liberzon and A. S. Morse, “Basic problems in stability and design of switched
systems,” IEEE Control Systems Magazine, vol. 19, no. 5, pp. 59–70, 1999.
[64] J. Lin, A. S. Morse, and B. D. O. Anderson, “The multi-agent rendezvous problem
- part 1 the synchronous case,” 2005, submitted to SIAM Journal on Control and
Optimization.
[65] ——, “The multi-agent rendezvous problem - part 2 the asynchronous case,” 2005,
submitted to SIAM Journal on Control and Optimization.
[66] Z. Lin, M. Broucke, and B. Francis, “Local control strategies for groups of mobile
autonomous agents,” IEEE Transactions on Automatic Control, vol. 49, no. 4, pp.
622–629, 2004.
[67] Z. Lin, B. Francis, and M. Maggiore, “Necessary and sufficient graphical condi-
tions for formation control of unicycles,” IEEE Transactions on Automatic Control,
vol. 50, no. 1, pp. 121–127, 2005.
[68] Y. Liu, K. M. Passino, and M. M. Polycarpou, “Stability analysis of m-dimensional
asynchronous swarms with a fixed communication topology,” IEEE Transactions
on Automatic Control, vol. 48, no. 1, pp. 76–95, 2003.
[69] D. G. Luenberger, Introduction to Dynamic Systems: Theory, Models, and Appli-
cations. John Wiley and Sons, 1979.
[70] J. A. Marshall, Cooperative Autonomy: Pursuit Formations of Multivehicle Sys-
tems. Ph.D. Dissertation, University of Toronto, Toronto, Canada, 2005.
[71] J. A. Marshall, M. E. Broucke, and B. A. Francis, “Formations of vehicles in cyclic
pursuit,” IEEE Transactions on Automatic Control, vol. 49, no. 11, pp. 1963–1974,
2004.
[72] ——, “Pursuit formations of unicycles,” Automatica, vol. 41, no. 12, 2005.
[73] R. T. M’Closkey and R. M. Murray, “Nonholonomic systems and exponential con-
vergence: some analysis tools," in Proceedings of the 32nd IEEE Conference on Decision and Control, San Antonio, Texas, USA, 1993, pp. 943–948.
[74] M. Mesbahi and F. Y. Hadaegh, "Formation flying of multiple spacecraft via graphs, matrix inequalities, and switching," AIAA Journal of Guidance, Control, and Dynamics, vol. 24, no. 2, pp. 369–377, 2000.
[75] L. Moreau, “Leaderless coordination via bidirectional and unidirectional time-
dependent communication,” in Proceedings of the 42nd IEEE Conference on Deci-
sion and Control, Maui, Hawaii, USA, 2003, pp. 3070–3073.
[76] ——, “Stability of continuous-time distributed consensus algorithm,” in Proceed-
ings of the 43rd IEEE Conference on Decision and Control, Atlantis, Paradise
Island, Bahamas, 2004, pp. 3998–4003.
[77] ——, “Stability of multiagent systems with time-dependent communication links,”
IEEE Transactions on Automatic Control, vol. 50, no. 2, pp. 169–182, 2005.
[78] K. S. Narendra and A. M. Annaswamy, "Persistent excitation in adaptive systems,"
International Journal of Control, vol. 45, pp. 127–160, 1987.
[79] H. Nijmeijer and A. Rodriguez-Angeles, Synchronization of Mechanical Systems.
World Scientific, 2003.
[80] Y. Oasa, I. Suzuki, and M. Yamashita, “A robust distributed convergence algo-
rithm for autonomous mobile robots,” in Proceedings of 1997 IEEE International
Conference on Systems, Man, and Cybernetics, Orlando, FL, USA, 1997, pp. 287–
292.
[81] P. Ogren, M. Egerstedt, and X. Hu, "A control Lyapunov function approach
to multi-agent coordination,” IEEE Transactions on Robotics and Automation,
vol. 18, no. 5, pp. 847–851, 2002.
[82] R. Olfati-Saber and R. M. Murray, “Consensus problems in networks of agents with
switching topology and time-delays,” IEEE Transactions on Automatic Control,
vol. 49, no. 9, pp. 101–115, 2004.
[83] J. K. Parrish and W. H. Hammer, Animal Groups in Three Dimensions. Cambridge
University Press, 1997.
[84] J. Peuteman and D. Aeyels, “Averaging results and the study of uniform asymptotic
stability of homogeneous differential equations that are not fast time-varying,”
SIAM Journal on Control and Optimization, vol. 37, no. 4, pp. 997–1010, 1999.
[85] A. Pogromsky, G. Santoboni, and H. Nijmeijer, “Partial synchronization: from
symmetry towards stability,” Physica D, vol. 172, no. 1, pp. 65–87, 2002.
[86] W. Ren, R. W. Beard, and T. W. McLain, “Coordination variables and con-
sensus building in multiple vehicle systems,” in Cooperative Control: A Post-
Workshop Volume 2003 Block Island Workshop on Cooperative Control, V. Kumar,
N. Leonard, and A. S. Morse, Eds. Springer-Verlag, 2004, pp. 171–188.
[87] C. Reynolds, “Boids.” [Online]. Available: www.red3d.com/cwr/boids/
[88] ——, “Flocks, birds, and schools: a distributed behavioral model,” Computer
Graphics, vol. 21, pp. 25–34, 1987.
[89] T. Richardson, “Stable polygons of cyclic pursuit,” Annals of Mathematics and
Artificial Intelligence, vol. 31, pp. 147–172, 2001.
[90] R. T. Rockafellar, Convex Analysis. Princeton University Press, 1970.
[91] N. Rouche, P. Habets, and M. Laloy, Stability Theory by Liapunov’s Direct Method.
Springer-Verlag, 1975.
[92] R. O. Saber and R. M. Murray, “Distributed cooperative control of multiple vehicles
formations using structural potential functions,” in Proceedings of the 15th IFAC
World Congress, Barcelona, Spain, 2002.
[93] ——, “Distributed structural stabilization and tracking for formation of dynamic
multi-agents,” in Proceedings of the 41st IEEE Conference on Decision and Control,
Las Vegas, Nevada, USA, 2002, pp. 209–215.
[94] ——, “Graph rigidity and distributed formation stabilization of multi-vehicle sys-
tems,” in Proceedings of the 41st IEEE Conference on Decision and Control, Las
Vegas, Nevada, USA, 2002, pp. 2965–2971.
[95] ——, “Agreement problems in networks with directed graphs and switching topol-
ogy,” in Proceedings of the 42nd IEEE Conference on Decision and Control, Maui,
Hawaii, USA, 2003, pp. 4126–4132.
[96] ——, “Flocking with obstacles avoidance: cooperation with limited communication
in mobile networks,” in Proceedings of the 42nd IEEE Conference on Decision and
Control, Maui, Hawaii, USA, 2003, pp. 2022–2028.
[97] C. Samson, “Control of chained systems application to path following and time-
varying point-stabilization of mobile roobts,” IEEE Transactions on Automatic
Control, vol. 40, no. 1, pp. 64–77, 1995.
[98] C. Samson and K. Ait-Abderrahim, “Feedback stabilization of a nonholonomic
wheeled mobile robot,” in Proceedings of IEEE/RSJ International Workshop on
Intelligent Robot and Systems IROS’91, Osaka, Japan, 1991, pp. 1242–1247.
[99] J. A. Sanders, Averaging Methods in Nonlinear Systems. Springer-Verlag, 1985.
[100] A. V. Savkin, “Coordinated collective motion of groups of autonomous mobile
robots: analysis of vicsek’s model,” IEEE Transactions on Automatic Control,
vol. 49, no. 6, pp. 981–983, 2004.
[101] R. Sepulchre, D. Paley, and N. Leonard, “Collective motion and oscillator synchro-
nization,” in Cooperative Control: A Post-Workshop Volume 2003 Block Island
Workshop on Cooperative Control, V. Kumar, N. Leonard, and A. S. Morse, Eds.
Springer-Verlag, 2004, pp. 89–205.
[102] G. V. Smirnov, Introduction to the Theory of Differential Inclusions. American
Mathematical Society, 2001.
[103] I. Stewart, M. Golubitsky, and M. Pivato, “Symmetry groupoids and patterns of
synchrony in coupled cell networks," SIAM Journal on Applied Dynamical Systems,
vol. 2, no. 4, pp. 609–646, 2003.
[104] S. H. Strogatz, “From Kuramoto to Crawford: exploring the onset of synchroniza-
tion in populations of coupled oscillators,” Physica D, vol. 143, pp. 1–20, 2000.
[105] ——, “Exploring complex networks,” Nature, vol. 410, no. 8, pp. 268–276, 2001.
[106] K. Sugihara and I. Suzuki, “Distributed motion coordination of multiple mobile
robots,” in Proceedings of the 5th IEEE International Symposium on Intelligent
Control, Philadelphia, PA, USA, 1990, pp. 138–143.
[107] R. K. Sundaram, A First Course in Optimization Theory. Cambridge University
Press, 1996.
[108] I. Suzuki and M. Yamashita, “Distributed anonymous mobile robots: formation of
geometric patterns,” SIAM Journal on Computing, vol. 28, no. 4, pp. 1347–1363,
1999.
[109] P. Tabuada, G. J. Pappas, and P. Lima, “Feasible formation of multi-agent sys-
tems,” in Proceedings of the American Control Conference, Arlington, VA, USA,
2001, pp. 56–61.
[110] ——, “Motion feasibility of multi-agent formations,” 2004, submitted to IEEE
Transactions on Robotics.
[111] H. G. Tanner, A. Jadbabaie, and G. J. Pappas, “Stable flocking of mobile agents,
part i: fixed topology,” in Proceedings of the 42nd IEEE Conference on Decision
and Control, Maui, Hawaii, USA, 2003, pp. 2010–2015.
[112] ——, “Stable flocking of mobile agents, part ii: dynamic topology,” in Proceedings
of the 42nd IEEE Conference on Decision and Control, Maui, Hawaii, USA, 2003,
pp. 2016–2021.
[113] T. Vicsek, A. Czirok, E. Ben-Jacob, I. Cohen, and O. Shochet, “Novel type of phase
transition in a system of self-driven particles,” Physical Review Letters, vol. 75,
no. 6, pp. 1226–1229, 1995.
[114] V. I. Vorotnikov, “On the coordinate synchronization problem for dynamical sys-
tems,” Differential Equations, vol. 40, no. 1, pp. 14–22, 2004.
[115] I. A. Wagner and A. M. Bruckstein, "Row straightening via local interactions,"
Circuits, Systems, and Signal Processing, vol. 16, no. 3, pp. 287–305, 1997.
[116] P. K. C. Wang, “Navigation strategies for multiple autonomous mobile robots mov-
ing in formation,” in Proceedings of IEEE/RSJ International Workshop on Intelli-
gent Robots and Systems, Tsukuba, Japan, 1989, pp. 486–493.
[117] ——, “Navigation strategies for multiple autonomous mobile robots moving in
formation,” Journal of Robotic Systems, vol. 8, no. 2, pp. 177–195, 1991.
[118] J. Wolfowitz, “Products of indecomposable, aperiodic, stochastic matrices,” Pro-
ceedings of the American Mathematical Society, vol. 14, no. 5, pp. 733–737, 1963.
[119] O. Wolkenhauer, Systems Biology: Dynamic Pathway Modeling. Ph.D. Disserta-
tion, University of Rostock, Rostock, Germany, 2004.
[120] C. W. Wu, “Synchronization in arrays of coupled nonlinear systems: passivity,
circle criterion, and observer design,” IEEE Transactions on Circuits and Systems–
I: Fundamental Theory and Applications, vol. 48, no. 10, pp. 1257–1261, 2001.
[121] ——, “Synchronization in coupled arrays of chaotic oscillators with nonreciprocal
coupling," IEEE Transactions on Circuits and Systems–I: Fundamental Theory and
Applications, vol. 50, no. 2, pp. 294–297, 2003.
[122] C. W. Wu and L. O. Chua, “Synchronization in an array of linearly coupled dynam-
ical systems,” IEEE Transactions on Circuits and Systems–I: Fundamental Theory
and Applications, vol. 42, no. 8, pp. 430–447, 1995.
[123] H. Yamaguchi, “A distributed motion coordination strategy for multiple nonholo-
nomic mobile robots in cooperative hunting operations,” Robotics and Autonomous
Systems, vol. 43, no. 4, pp. 257–282, 2003.
[124] H. Yamaguchi and J. W. Burdick, “Time-varying feedback control for nonholo-
nomic mobile robots forming group formations,” in Proceedings of the 37th IEEE
Conference on Decision and Control, Tampa, Florida, USA, 1998, pp. 4156–4163.
[125] F. Zhang, E. W. Justh, and P. S. Krishnaprasad, “Boundary following using gy-
roscopic control,” in Proceedings of the 43rd IEEE Conference on Decision and
Control, Atlantis, Paradise Island, Bahamas, 2004, pp. 5204–5209.
Index
M -matrix, 45
α-diagonal, 52
α-scalar matrix, 52
Acyclic, 11
Adjacency matrix, 23
Agent, 97
Aggregate state, 97
Aperiodic, 12
Arc, 10
loop, 10
multiple, 10
Associated digraph, 23
Attractive in region, 182
Bidirectional digraph, 17
Carrier subspace, 101
Centre node, 13
Closed node subset, 16
Closed strong component, 16
Cogredient, 25
Cone, 102
Connected, 19
Connectedness, 13
disconnected, 14
fully connected, 13
quasi strongly connected, 14
strongly connected, 13
unilaterally connected, 14
weakly connected, 14
Contingent cone, 102
Convex hull, 101
Cycle, 11
d-periodic, 12
Degree matrix, 47
Dini derivative, 105
Directed graph, digraph, 10
Dynamic digraph, 17
Dynamic interaction digraph, 63
Edge, 17
Generator matrix, 45
Gershgorin disk theorem, 24
Globally attractive (GA), 183
Globally reachable node, 13
In-degree, 11
In-neighbor, 11
In-neighborhood, 11
Induced subdigraph, 15
Interaction digraph, 63, 98
Invariant set, 181
Irreducible, 25
Kronecker product, 52
Laplacian, 47
Metzler matrix, 44
Mode, 63, 97
Node, 10, 17
end-node, 10
head, 10
tail, 10
Nonnegative matrix, 22
Nonsingular M -matrix, 45
Normal cone, 102
Opposite digraph, 15
Out-degree, 11
Out-neighbor, 11
Out-neighborhood, 11
Path, 11
Period, 12
Perron-Frobenius Theorem, 31
Polytope, 101
Positive matrix, 22
Positively invariant set, 181
Primitive matrix, 27
Reachable, 13
Reducible, 25
Relative boundary, 101
Relative interior, 101
Scalar matrix, 52
Semiwalk, 11
SIA matrix, 35
Simple, 10
Spectral radius, 23
Stable, 182
Stochastic matrix, 32
Strong component, 15
Sub-tangentiality condition, 107
Subsystem, 97
Switching signal, 63
Switching time, 63
Tangent cone, 102
Undirected graph, 17
Uniformly attractive (UA) in region, 183
Uniformly globally attractive, 183
Uniformly stable (US), 182
Union digraph, 15
Walk, 11
length, 11