DISCRETE EVENT MODELING, SIMULATION AND CONTROL WITH APPLICATION TO SENSOR BASED
INTELLIGENT MOBILE ROBOTICS
BY
Shahab Sheikh-Bahaei
B.S., Electrical Engineering, Isfahan University of Technology, 1999
THESIS
Submitted in Partial Fulfillment of the Requirements for the Degree of
Master of Science
Electrical Engineering
The University of New Mexico Albuquerque, New Mexico
December 2003
To my parents, to whom I owe all that I am today; to my wife Nasim, to whom I owe my happy life;
and to my wonderful in-laws.
ACKNOWLEDGEMENTS
I gratefully acknowledge the advice, comments, corrections, encouragement and support of my advisor, Professor Mo Jamshidi, Director of the Center for Autonomous Control Engineering (ACE) and Regents Professor of Electrical and Computer Engineering at the University of New Mexico.
I am grateful to the members of my committee, Professor Thomas Caudell, Director of the Visualization Laboratory at the High Performance Computing, Education and Research Center and Associate Professor of Electrical and Computer Engineering and Computer Science, and Professor Chris Smith, Assistant Professor of Electrical and Computer Engineering, for serving on my committee and for their helpful comments and suggestions. My special thanks also to the fourth, “unofficial” member of this committee, Dr. Nader Vadiee, Director of the UNM-NASA PURSUE Program, for his advice and supportive comments. My thanks to my uncle, Professor Mansoor Sheik-Bahae, Professor of Physics and Astronomy at the University of New Mexico, to Professor Bernard Zeigler of the University of Arizona, and to the faculty and staff of the University of New Mexico Department of Electrical and Computer Engineering. I am also grateful to my colleagues and friends at UNM, and thankful to the Center for Autonomous Control Engineering (ACE) for financial support of this work.¹
Finally, and most importantly, I gratefully acknowledge the love and spiritual support of my wife, Nasim Sassan. This work could not have been accomplished without her support and patience.
¹ This work is partially supported by NASA-Ames Research Center Grant number NAG 2-1457.
Discrete Event Modeling, Simulation and Control with Application to Sensor Based
Intelligent Mobile Robotics
BY
Shahab Sheikh-Bahaei
ABSTRACT OF THESIS
Submitted in Partial Fulfillment of the Requirements for the Degree of
Master of Science
Electrical Engineering
The University of New Mexico Albuquerque, New Mexico
December 2003
Discrete Event Modeling, Simulation and Control
with Application to Sensor Based Intelligent Mobile Robotics
by
Shahab Sheikh-Bahaei
B.S., Electrical Engineering, Isfahan University of Technology, 1999
M.S., Electrical Engineering, University of New Mexico, 2003
ABSTRACT
This work focuses on the modeling, simulation and control of cooperative mobile robots using the Discrete Event System Specification (DEVS), a relatively recent approach to modeling and control. Discrete event approximation of differential equations is also studied and improved by introducing the DEVS double-integration method.
Fuzzy Logic Control (FLC) and the Stochastic Learning Automaton (SLA) are also designed and implemented under the DEVS framework. In this work we propose an Intelligent Discrete Event (I-DEVS) approach to the control of autonomous agents: we introduce a discrete event version of fuzzy logic control via DEVS, which improves the speed of fuzzy logic control and makes it more suitable for real-time adaptive control.
A discrete event simulation environment for the modeling and simulation of multi-agent systems, called V-LAB®, is also partially developed in this research. Finally, the simulated algorithms are implemented on a real commercial robot.
Contents
LIST OF FIGURES...............................................................................................x
1. Introduction .......................................................................................................... 1
2. DEVS Formalism ................................................................................................. 3
2.1 Introduction to Discrete Event Dynamical Systems (DEDS)........................... 3
2.2 Review of DEVS formalism............................................................................. 5
2.2.1 Classic DEVS Atomic Models .............................................................. 5
2.2.2 Classic DEVS Coupled Models............................................................. 7
2.2.3 DEVS Examples .................................................................................... 9
2.3 DEVS&DESS: Combined Discrete Event and Differential Equation Specified Systems..... 13
2.3.1 DEVS Abstraction for Output-Partitioned DEVS&DESS Models ..... 14
2.3.2 DEVS First Order Numerical Integrator ............................................. 17
2.3.3 DEVS Double Integrator ..................................................................... 22
2.4 Discrete Event Control (DEC)........................................................................ 30
2.4.1 Introduction ......................................................................................... 30
2.4.2 Controllability of DEVS Models......................................................... 30
2.5 Intelligent DEVS ............................................................................................ 32
2.5.1 DEVS-Fuzzy Logic ............................................................................. 32
2.5.2 DEVS-SLA.......................................................................................... 38
3. Mobile robotics .................................................................................................. 43
3.1 Introduction .................................................................................................... 43
3.2 Modeling and Controllability of Mobile Robots ............................................ 44
3.3 Motion Control of Mobile Robots .................................................................. 48
3.3.1 Path Planning Methods........................................................................ 49
3.3.2 Obstacle Avoidance............................................................................. 56
3.3.3 Cooperative Robots ............................................................................. 59
3.3.4 Job Evaluation ..................................................................................... 65
4. V-LAB® Simulations ........................................................................................ 71
4.1 Simulation Components ................................................................................. 73
4.1.1 Dynamics Model.................................................................................. 73
4.1.2 Collision Model ................................................................................... 74
4.1.3 Terrain Model ...................................................................................... 83
4.1.4 Robot Model ........................................................................................ 84
4.2 Simulation Results.......................................................................................... 85
5. Experiment Results ............................................................................................ 88
6. Conclusions and discussions .............................................................................. 91
7. References .......................................................................................................... 95
List of Figures
Figure 2.1 A 3-State Machine............................................................................................. 4
Figure 2.2- a) DEVS Behavior,........................................................................................... 7
Figure 2.3. A simple coupled model, composed of three atomic models .......................... 9
Figure 2.4 Graphical notation: (a) external transition......................................................... 9
Figure 2.5. Pulse Generator, an example for a DEVS atomic model ............................... 10
Figure 2.6. A DEVS coupled model example: an integrator made jointly by a Generator
and a Summer............................................................................................................. 11
Figure 2.7- The 3-State Machine in DEVS notation......................................................... 13
Figure 2.8. DEVS representation of DEVS&DESS ......................................................... 16
Figure 2.9-DEVS first order numerical integrator............................................................ 19
Figure 2.10. A DEVS coupled model for Simulation of a free falling mass. ................... 20
Figure 2.11. DEVS implementation of Example 2.5 ........................................................ 20
Figure 2.12. Simulation of a falling mass using two first order DEVS integrators with
initial velocity v0=20m/s, and initial position x0=100m and Quantum=1. (a) Position
vs. time: DEVS results compared to the exact solution. (b) Position Error vs. time. 21
Figure 2.13 Step responses for the systems of Example2.5 using DEVS integrator. (a)
Step response for system of equation 2.14 (b) Step response for system of equation
2.15 ............................................................................................................................ 22
Figure 2.14. DEVS Double Integrator .............................................................................. 25
Figure 2.15. Determining the time advance value for n-dimensional DEVS double
integrator.................................................................................................................... 26
Figure 2.16- Searching the minimum positive root of equation 2.28. .............................. 28
Figure 2.17- Simulation of a falling mass using one DEVS double integrator with initial
velocity v0=20m/s, and initial position x0=100m and Quantum=1. (a) Position vs.
time: DEVS results compared to the exact solution. ................................................. 29
Figure 2.18- Controllability Definitions: (a) weakly controllable (b) strongly controllable
(c) very strongly controllable path............................................................................. 32
Figure 2.19 A typical consequent membership function (CMF) model ........................... 34
Figure 2.20 A DEVS model for a typical fuzzy rule “IF x is A OR y is B THEN z is C”35
Figure 2.21. Example of using fuzzy-DEVS to control an inverted pendulum................ 37
Figure 2.22- Learning from action: gaining reward while avoiding penalty. ................... 38
Figure 2.23 Schematic of SLA inside the DEVS Environment .......................................... 39
Figure 2.24. Possible results of an example for SLA_DEVS........................................... 41
Figure 2.25- A mobile automaton learns to reach a goal point using SLA. (a) Action
probabilities vs. time (b) trajectory of the automaton (c) x and y vs. time................ 42
Figure 3.1. Robot Functional Block Diagram................................................................... 43
Figure 3.2- A differential-wheeled mobile robot.............................................................. 44
Figure 3.3. Robot Motion Control .................................................................................... 45
Figure 3.4. Kinematic Control .......................................................................................... 47
Figure 3.5. The system of equation 3.7 for a=1 and b=4;................................................. 51
Figure 3.6. Effect of changing b from 4a down to a......................................................... 52
Figure 3.7. Trajectory of the system of equation 3.7 for different initial conditions........ 52
Figure 3.8. Using time-varying a and b as defined in equation 3.8.................................. 53
Figure 3.9. Membership function used in Fuzzy-Logic Obstacle avoidance ................... 58
Figure 3.10. 3D surfaces generated by the proposed fuzzy logic rule base to perform
obstacle avoidance..................................................................................................... 59
Figure 3.11. Robot trying to push round objects to storage area ..................................... 60
Figure 3.12. A DEVS model for cooperative pushing task .............................................. 61
Figure 3.13. Pushing task performed by a single robot .................................................... 63
Figure 3.14. Pushing task by two robots........................................................................... 65
Figure 3.15. Membership functions for Objective Function, Time and Mission Success
Degree........................................................................................................................ 67
Figure 3.16- Cooperative box-pushing task...................................................................... 70
Figure 4.1 Distributing simulation layers using DEVS in V-Lab ................................. 72
Figure 4.2. DEVS coupled model in V-LAB® for multi-agent ....................................... 73
Figure 4.3- Dynamic model contains DEVS-double integrators to simulate moving-
obstacles and robots................................................................................................... 74
Figure 4.4. Examples of colliding polygons: case (a) is more likely to occur than case (b)
................................................................................................................................... 75
Figure 4.5. Evaluation of function P(x) (equation 4.1) at different areas......................... 76
Figure 4.6. Hard Collision ................................................................................................ 78
Figure 4.7- Three masses pushing each other................................................................... 81
Figure 4.8- Terrain Model is responsible for calculating the distance of the nearest
obstacle to each IR-sensor. ........................................................................................ 83
Figure 4.9- A DEVS coupled model for a differential-drive wheel robot with n infra-red
sensors, one compass and one position sensor .......................................................... 84
Figure 4.10. V-Lab® components for autonomous multi-agent simulation implemented in
DEVS......................................................................................................................... 86
Figure 4.11- 3D views from V-Lab simulation ............................................................ 86
Figure 4.12- Path planning among unknown obstacles .................................................... 87
Figure 4.13. Cooperative object pushing .......................................................................... 87
Figure 5.1- Pioneer Robot made by ActivMedia Robotics.............................................. 88
Figure 5.2- Path planning: (a)Left and right velocities (b) Trajectory of the robot (c) State
variables of the robot vs. time (d) View from V-Lab® showing the final position of
the robot and its trajectory ......................................................................................... 90
Figure 5.3- Path planning while avoiding obstacles ......................................................... 90
Figure 5.4- Object pushing task. (a) V-Lab® real-time view (b) Robot pushing a bottle 91
Figure 5.5- Object pushing task by a single robot ............................................................ 91
1. INTRODUCTION
The field of robotics spans several areas, including control engineering, signal processing, planning, mechanical engineering, applied mathematics, real-time systems, software engineering, artificial intelligence and machine learning. Robots perform a wide variety of tasks in the real world. Due to the large degree of flexibility of robotic systems and the complexity of possible task domains, it is important to design control architectures that can handle complex systems in unstructured environments. Much research has been devoted to solving the resulting robotic problems.
The word “robot” refers to two classes of autonomous machines: one is known as the “robotic arm”, the other as the “mobile robot”. Mobile robots move relative to the environment they are in, exhibiting some degree of autonomy. Wheeled robots and walking (legged) robots are the two basic categories of mobile robots. Mobile robots find application in autonomous moving machines such as cars, trains, carts and planes; moreover, tasks like painting, cleaning and inspection can be performed by mobile robots, especially in dangerous, boring or difficult situations where humans cannot work.
Discrete-event modeling and simulation of mobile robots in a virtual environment is one of the main aspects of this research. Discrete event modeling and simulation has been introduced relatively recently as a new tool in modeling, simulation and control.
Work on the mathematical foundation of discrete event dynamic modeling and simulation was initiated in the 1970s by Zeigler, when DEVS (discrete event system specification) was introduced as an abstract formalism for discrete event modeling [1]. Because of its system-theoretical basis, DEVS is a general formalism for discrete event dynamical systems
(DEDS). DEVS can be used to specify systems whose input, state and output trajectories are piecewise constant [2]. Recently, the control of discrete event systems has been receiving increasing attention [3],[4],[5]. The Discrete-Event Systems (DES) control theory developed by Ramadge and Wonham [5] has been used in several different robotic problems, including manufacturing and assembly tasks, the coordination of mobile robotic agents [6],[7], a grasping task performed under the supervision of a vision system [8], the modeling of dexterous manipulation [9], and a hybrid discrete event dynamic systems approach to robot control [10],[11].
On the other hand, soft computing techniques, e.g. fuzzy logic, genetic algorithms and neural networks, have proven useful in the control of autonomous agents. Fuzzy logic control provides a human-like decision-making methodology, which has been used widely in the field of robotics and automation [12]. The design and implementation of fuzzy logic control under the DEVS framework is one of the contributions of this work.
The goal of this thesis is to investigate the power of DEVS in the modeling, simulation and control of sensor-based intelligent mobile robots. First, Section 2 reviews the DEVS formalism; DEVS differential equation solvers and numerical integration methods, as well as Fuzzy-DEVS and SLA-DEVS, are also presented in that section. In Section 3, the modeling, control architecture and controllability of mobile robots, fuzzy logic based obstacle avoidance, and cooperative mobile robots are discussed. Section 4 introduces V-LAB®, a discrete-event environment in which mobile robots can be modeled and simulated; simulation results are also presented there. In Section 5, results of real-world experiments with an ActivMedia™ “Pioneer AT2” mobile robot are presented.
2. DEVS FORMALISM
2.1 Introduction to Discrete Event Dynamical Systems (DEDS)
Classical dynamic system models, i.e. models based on differential equations, were developed to describe natural physical systems. The Differential Equation System Specification (DESS) has been used widely for the modeling, simulation and control of various systems. Since the late 20th century the computer has been employed as a basic tool for simulating physical systems by solving differential equations, and with it the concepts of discrete time, sampling and difference equations were introduced. The timing of discrete-time models is based on equally spaced time intervals, called the step size or sample time. The accuracy of the model depends on the sample time: smaller sample times provide better accuracy, but the tradeoff is that the calculation time rises as the sample time shrinks.
In contrast, the widespread development of man-made systems, such as manufacturing systems of various types, calls for more general models to describe the different states and behaviors of such systems. Discrete Event Dynamic Systems (DEDS) provide an explanatory framework for them: DEDS can be used to model the “event-driven” behavior common to man-made systems.
A discrete event system is a dynamic system whose behavior is guided by the occurrence of discrete events. Although only certain events can occur in any given situation, the times and intervals at which they appear are known in general.
Example: A classical example of a discrete event system is a 3-state machine that can be “Idle”, “Working” or “Broken-Down”. The transition diagram is shown in Figure 2.1.
Figure 2.1 A 3-State Machine
The initial state is “Idle”. The event ON denotes turning the machine on, DONE the completion of a task, BRK a break-down, and FIX the completion of a repair.
In response to industrial needs for DEDS models, several simulation packages have been developed [11]. A discrete event system specification, the DEVS formalism, has been helpful in the development of such software. DEVS is a theoretical formalism for discrete event dynamical systems (DEDS). It can be used to specify systems whose input, state and output trajectories are piecewise constant [2]. Several software tools have been developed for the simulation of DEVS models; the most popular are DEVS-Java [13] and DEVSim++ [14]. In this work we use DEVS-Java to implement our DEVS models.
2.2 Review of DEVS formalism
The DEVS formalism specifies discrete event models in a modular, hierarchical form. In this formalism a discrete event model is decomposed into smaller models, each of which defines a discrete event model independently.
2.2.1 Classic DEVS Atomic Models
A 7-tuple specifies the lowest level of the model hierarchy, called the atomic model. The following is the classic definition of an atomic DEVS [1]:
DEVS = (X, Y, S, δext, δint, λ, ta)
where
X is the set of inputs,
Y is the set of outputs,
S is the set of sequential states,
δext : Q × X → S is the external state transition function,
where Q = {(s,e) | s∈S, 0 ≤ e < ta(s)} is the total state, and
e is the elapsed time since the last event,
δint : S → S is the internal state transition function,
λ : S → Y is the output function,
ta : S → ℜ+ ∪ {0,∞} is the time advance function.
The interpretation of the above elements is as follows: at any given time the system is in some state s. If no external event occurs, it will remain in that state for a period of ta(s) seconds, after which an output with value λ(s) is generated and the system state changes to δint(s). If ta(s) is equal to infinity, the system remains in state s until an external event occurs; in other words, if no external event occurs, the system will remain in the same state forever. The first case (ta(s) < ∞) is called a transitory state, and the second a passive state.
If an external event x∈X occurs before the expiration of the period ta(s), while the system is in the total state (s,e), where e is the elapsed time since the last transition, the state of the system changes to s′ = δext(s,e,x). The new resting time for this state is then ta(s′), and the same story continues.
Hence, if no external event occurs, the internal transition function determines the system’s new state, while the external transition function computes the new state when an external event occurs; this function uses the input x, the current state s, and the time e for which the system has been in this state to determine the new state s′.
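The semantics above can be sketched in plain Python. This is an illustrative skeleton, not the DEVS-Java API used later in this work; the `Delay` example model and its event times are hypothetical.

```python
import math

class AtomicDEVS:
    """Minimal classic-DEVS atomic model: subclass and override the four functions."""
    def __init__(self, s0):
        self.s = s0
    def ta(self, s):               # time advance, 0 <= ta(s) <= infinity
        return math.inf
    def delta_int(self, s):        # internal transition
        return s
    def delta_ext(self, s, e, x):  # external transition after elapsed time e on input x
        return s
    def out(self, s):              # output function lambda(s)
        return None

def simulate(model, inputs, t_end):
    """Drive one atomic model; inputs is a list of (time, value) external events."""
    t_last, t = 0.0, 0.0           # time of last transition, current time
    outputs = []
    inputs = sorted(inputs)
    while t < t_end:
        t_int = t_last + model.ta(model.s)              # next scheduled internal event
        t_ext, x = inputs[0] if inputs else (math.inf, None)
        if t_int <= t_ext:                              # internal event fires first:
            t = t_int
            if t >= t_end:
                break
            outputs.append((t, model.out(model.s)))     # output lambda(s), then ...
            model.s = model.delta_int(model.s)          # ... state change delta_int(s)
        else:                                           # external event fires first:
            t = t_ext
            if t >= t_end:
                break
            inputs.pop(0)
            model.s = model.delta_ext(model.s, t - t_last, x)  # e = t - t_last
        t_last = t
    return outputs

class Delay(AtomicDEVS):
    """Passive until an input arrives; emits it one time unit later (hypothetical example)."""
    def ta(self, s):
        return math.inf if s is None else 1.0
    def delta_ext(self, s, e, x):
        return x
    def delta_int(self, s):
        return None
    def out(self, s):
        return s
```

For instance, `simulate(Delay(None), [(2.0, "m")], 10.0)` stays passive until the input at t = 2, then produces the output ("m") at t = 3 and returns to the passive state.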
The behavior of DEVS is illustrated in Figure 2.2a. The input trajectory is a series of events occurring at times t0 and t2. The state of the system changes whenever an external event (input) or an internal event (output) occurs. The value of e (the elapsed time) resets after each state transition. At time t1 the elapsed time reaches ta(s) (the time advance value), and an output λ(s) is generated. Figure 2.2b shows the graphical notation for an atomic model with one input port and one output port.
As opposed to classic DEVS, in which only one port receives a value during an external event, parallel DEVS allows multiple ports to receive values at the same time [1]; see Figure 2.2c.
DEVS Formalism 7
X
Classic DEVS atomic model
Output Input
t0 t1 t2 S (b) s3s1 s0
s2
e Out1
Out2
Out3
Parallel DEVS atomic model
In1
In2
ta(s1)
t0 t1 t2
Y
(c)
t1
(a)
Figure 2.2- a) DEVS Behavior, Figure 2.2- a) DEVS Behavior,
b) Graphical notation for Classic and Parallel models b) Graphical notation for Classic and Parallel models
2.2.2 Classic DEVS Coupled Models
Basically, a DEVS coupled model is composed of DEVS components, i.e. atomic and coupled models, by defining a coupling relation between them.
Here is the definition of DEVS coupled models [1]:
N = (X, Y, D, {Md | d∈D}, EIC, EOC, IC),
where
• X = {(p,v) | p∈IPorts, v∈Xp} is the set of input ports and values
• Y = {(p,v) | p∈OPorts, v∈Yp} is the set of output ports and values
• D is the set of component names.
• Components are DEVS models: for each d∈D,
Md = (Xd, Yd, S, δext, δint, λ, ta) is a DEVS (2.1)
with Xd = {(p,v) | p∈IPortsd, v∈Xp}
Yd = {(p,v) | p∈OPortsd, v∈Yp}
• External Input Coupling connects external inputs to component inputs,
EIC ⊆ {((N, ipN), (d, ipd)) | ipN∈IPorts, d∈D, ipd∈IPortsd}. (2.2)
• External Output Coupling connects component outputs to external outputs,
EOC ⊆ {((d, opd), (N, opN)) | opN∈OPorts, d∈D, opd∈OPortsd}. (2.3)
• Internal Coupling connects component outputs to component inputs,
IC ⊆ {((a, opa), (b, ipb)) | a,b∈D, opa∈OPortsa, ipb∈IPortsb}. (2.4)
However, no direct feedback loops are allowed, i.e., no output port of a component may be connected to an input port of the same component.
Figure 2.3 shows a simple example of a coupled model.
Figure 2.3. A simple coupled model, composed of three atomic models
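The coupled-model structure N = (X, Y, D, {Md}, EIC, EOC, IC) can be sketched as plain data, here for a chain of three components as in Figure 2.3. The component and port names are illustrative assumptions, and the check encodes the no-direct-feedback restriction stated above; this is a sketch, not the DEVS-Java representation.

```python
# Coupling sets are sets of (source port, destination port) pairs,
# where a port is a (component, port-name) pair.
coupled = {
    "D": {"A1", "A2", "A3"},                    # component names
    "EIC": {(("N", "in"), ("A1", "in"))},       # network input  -> A1 input
    "IC": {(("A1", "out"), ("A2", "in")),       # A1 output -> A2 input
           (("A2", "out"), ("A3", "in"))},      # A2 output -> A3 input
    "EOC": {(("A3", "out"), ("N", "out"))},     # A3 output -> network output
}

def no_direct_feedback(ic):
    """Classic DEVS forbids coupling a component's output port
    to an input port of the same component."""
    return all(src != dst for ((src, _op), (dst, _ip)) in ic)
```

Here `no_direct_feedback(coupled["IC"])` holds, while a coupling such as {(("A1", "out"), ("A1", "in"))} would violate the restriction.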
2.2.3 DEVS Examples
Graphical notation: Our graphical notation for state transitions is shown in Figure 2.4, where (a) represents an external transition. An input event is specified using ‘?’. For instance, the input event in?m means that a message (event) ‘m’ arrives at the input port “in”. Figure 2.4(b) denotes an internal transition. An output event is specified using ‘!’. A dotted line represents an internal state transition specified by the internal transition function. A solid line represents a state transition specified by the external transition function.
Figure 2.4 Graphical notation: (a) external transition, (b) internal transition
Example 2.1. Generator
As an example, consider an atomic model with a single input and a single output. The model generates output events equal to 1 or –1, and the frequency of the output events is proportional to the input value (Figure 2.5). Negative outputs occur when the input is negative.
Figure 2.5. Pulse Generator, an example for a DEVS atomic model
Here is the formal definition of such a model:
GEN = (X, Y, S, δext, δint, λ, ta),
X = ℜ,
Y = {-1, 1},
S = {-1, 1},
δext(s,x,e) = sign(x),
δint(s) = sign(s),
λ(s) = s,
ta(s,x) = ∞ if x = 0; q/|x| if x ≠ 0,
where the constant q is a real number called the quantum.
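The Generator can be sketched directly in Python. This is an illustrative translation, not DEVS-Java; in particular, the pulse period q/|x| is one reading of the time advance function above, under which the pulse frequency |x|/q is proportional to the input magnitude.

```python
import math

class Generator:
    """GEN from Example 2.1: emits +1/-1 pulses; period q/|x|, so the
    pulse frequency is proportional to |x| (assumed reading of ta)."""
    def __init__(self, q=1.0):
        self.q = q        # quantum
        self.s = 1        # state in {-1, 1}
        self.x = 0.0      # last received input (rate)
    def ta(self):
        return math.inf if self.x == 0 else self.q / abs(self.x)
    def delta_ext(self, x):
        self.x = x
        self.s = 1 if x >= 0 else -1   # sign(x)
    def delta_int(self):
        pass                            # sign(s) = s for s in {-1, 1}
    def out(self):                      # lambda(s) = s
        return self.s

# With input x = 4 and q = 1, the period is 0.25 s: four +1 pulses in one second.
gen = Generator(q=1.0)
gen.delta_ext(4.0)
pulses, t = [], 0.0
while t + gen.ta() <= 1.0 + 1e-9:
    t += gen.ta()
    pulses.append(gen.out())
    gen.delta_int()
```

A negative input would set the state to -1, producing -1 pulses at the same rate, as the informal description requires.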
Example 2.2. Summer
As another example, consider an atomic model which sums the input value with the current state and sends the result immediately.
SUM = (X, Y, S, δext, δint, λ, ta),
X = ℜ,
Y = ℜ,
S = ℜ,
δext(s,x,e) = s + x,
δint(s) = s,
λ(s) = s,
ta(s,x) = 0.
Example 2.3. A simple First Order Ordinary Differential Equation solver
Using the atomic models of the two previous examples in a DEVS coupled model, a simple integrator can be made.
Figure 2.6. A DEVS coupled model example: an integrator made jointly by a Generator and a Summer. The Summer state approximates the integral of the piecewise-constant input, s ≈ ∫t0..t1 x0 dt + ∫t1..t2 x1 dt + ∫t2..t3 x2 dt.
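The Generator–Summer coupling of Example 2.3 can be sketched as follows. This is an illustrative Python sketch, not DEVS-Java; the assumption (stated in the comments) is that each ±1 pulse delivered to the Summer is weighted by the quantum q, so that the accumulated state approximates the integral of a piecewise-constant input.

```python
import math

class Generator:
    """Pulse generator of Example 2.1: +/-1 every q/|x| seconds."""
    def __init__(self, q):
        self.q, self.x = q, 0.0
    def ta(self):
        return math.inf if self.x == 0 else self.q / abs(self.x)
    def out(self):
        return 1 if self.x > 0 else -1

def integrate(x_of_t, t_end, q=0.01):
    """Couple Generator -> Summer: every Generator pulse, weighted by the
    quantum q (an assumption), is accumulated in the Summer state s, which
    then approximates the integral of the piecewise-constant input x(t)."""
    gen, s, t = Generator(q), 0.0, 0.0
    while True:
        gen.x = x_of_t(t)            # external event: sample the current input
        dt = gen.ta()
        if dt == math.inf or t + dt > t_end + 1e-12:
            break
        t += dt                      # internal event of the Generator ...
        s += q * gen.out()           # ... routed to the Summer: s <- s + q*pulse
    return s
```

For a constant input x(t) = 2 on [0, 1] the exact integral is 2, and the coupled model approximates it to within a few quanta; a negative input accumulates -1 pulses and yields a negative integral, matching the sign convention of GEN.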
Example 2.4. Consider again the 3-state machine discussed in Section 2.1. This system can be described by a DEVS atomic model as follows:
M = (X, Y, S, δext, δint, λ, ta),
X = {“ON”, “FIX”},
Y = S = {“Idle”, “Working”, “Broken-Down”},
δext(s,x,e) = “Working” if (s = “Idle” and x = “ON”); “Idle” if (s = “Broken-Down” and x = “FIX”),
δint(s) = “Idle” if (s = “Working” and event “DONE” occurs); “Broken-Down” if (s = “Working” and event “BRK” occurs); s otherwise,
ta(s,x) = 0 if s = “Working”; ∞ otherwise,
λ(s) = s.
Figure 2.7 shows the state transition diagram for this atomic model. As shown in the
figure, the model has one input port “in” and one output port “out”.
Figure 2.7- The 3-State Machine in DEVS notation
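Example 2.4 can be made executable as a sketch (illustrative Python, not DEVS-Java). Two points are assumptions: the internal DONE/BRK choice is nondeterministic in the formal model and is modeled here by a random draw with a hypothetical break probability, and the time advance of 0 in “Working” follows the reading that the internal event is scheduled immediately, while “Idle” and “Broken-Down” are passive.

```python
import math
import random

class Machine:
    """The 3-state machine of Example 2.4 as an atomic model sketch."""
    def __init__(self):
        self.s = "Idle"                    # initial state
    def ta(self):
        # internal event scheduled at once in "Working"; passive otherwise
        return 0.0 if self.s == "Working" else math.inf
    def delta_ext(self, x):
        if self.s == "Idle" and x == "ON":           # "in"?ON
            self.s = "Working"
        elif self.s == "Broken-Down" and x == "FIX": # "in"?FIX
            self.s = "Idle"
    def delta_int(self, p_break=0.1):
        # nondeterministic DONE/BRK choice modeled by a random draw (assumption)
        if self.s == "Working":
            self.s = "Broken-Down" if random.random() < p_break else "Idle"
    def out(self):                         # lambda(s) = s
        return self.s

m = Machine()
m.delta_ext("ON")         # Idle -> Working
m.delta_int(p_break=1.0)  # forced "BRK": Working -> Broken-Down
m.delta_ext("FIX")        # Broken-Down -> Idle
```

Events that do not match the guards (e.g. "FIX" while "Idle") leave the state unchanged, mirroring the piecewise definition of δext.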
2.3 DEVS&DESS: Combined Discrete Event and Differential Equation
Specified Systems
2.3 DEVS&DESS: Combined Discrete Event and Differential Equation
Specified Systems
Since the DEVS representation applies to a special subclass of dynamical systems, the output-partitioned DEVS&DESS formalism for hybrid system modeling was introduced by Zeigler et al. [1],[2] as an extension of the DEVS formalism for combined discrete/continuous modeling and simulation. It provides a means of specifying systems with interacting continuous and discrete trajectories.
Definition: An output-partitioned DEVS&DESS is a structure [2]:
Op-DEVS&DESS=(X, Y, Q, Sc, φ, λ, F)
where
• X is the set of input values, which is an arbitrary set.
• Q is the set of states.
• Sc = S1 × S2 × … × Sn = Rn is the continuous state space, which is an n-dimensional vector space.
• φ = {φ1, φ2,…, φn} is a set of rate-of-change functions, with φi : Sc × X → ℜ,
• F = ∪i∈{1,…,n} Fi, with Fi = {fi,1, …, fi,mi}, is the set of threshold values for all dimensions i,
• Y = BF is the output value set, which is the cross-product of the Boolean value sets of the threshold sensors thrfi,bi, fi,bi ∈ F,
• λ : Sc → Y is the output function, defined by

λ(s) = ( thrf1,b1(s1), thrf2,b2(s2), …, thrfn,bn(sn) )

i.e., the output is given by the values of the threshold sensors.
In the above definition, a continuous system with the n-dimensional state space Sc = S1c × S2c × … × Snc = Rn is considered, whose continuous behavior can be modeled by a set of first order differential equations φ = {φ1, φ2,…, φn}. Moreover, the system is furnished with a set of threshold sensors whose states change only at particular fixed threshold values fi,bi of the continuous state variables si; i.e., assuming Boolean-valued sensors, thrfi,bi(si) takes one value for all si < fi,bi and the opposite value for all si > fi,bi.
2.3.1 DEVS Abstraction for Output-Partitioned DEVS&DESS Models
Since an output-partitioned DEVS&DESS is a piecewise constant input/output dynamical system, it has a DEVS representation. The transition functions of such a DEVS are derived from an op-DEVS&DESS in the following way [2]:
For every total state ((s,x), e) and input segment2 ω:<t1,t2>→X:
• The time advance is the time interval to the next output change, i.e., threshold crossing, given a constant input x:

ta((s,x)) = min{ t | λ(s) ≠ λ( s + ∫[t1, t1+t] f(STRAJs,x(τ), x) dτ ) },    (2.5)
• The internal transition function computes the state at the next threshold crossing, occurring at time t1 + ta(s,x):

δint((s,x)) = ( s + ∫[t1, t1+ta(s,x)] f(STRAJs,x(τ), x) dτ , x ),    (2.6)
• The external transition function updates the state at the input event x′, knowing that the elapsed time since the last event is e:

δext((s,x), e, x′) = ( s + ∫[t1, t1+e] f(STRAJs,x(τ), x) dτ , x′ ),    (2.7)
• The output function is inherited from the DEVS&DESS:

λ((s,x)) = λ(s),    (2.8)
where STRAJs,x : <t1,t2> → Q is the state trajectory, and f : Q × X → Sc is the differential equation representing the continuous behavior of the system, similar to the φi's (Q is the total state).
2 ω:<t1,t2>→A describes a motion through the set A that begins at t1 and ends at t2, and at every t∈<t1,t2>, ω(t)
describes where the motion is at time t. See [13] for more details.
Figure 2.8 illustrates how the DEVS representation of DEVS&DESS works. At time ti+1, when the external event occurs, δext updates the current state from si to si+1, knowing that the elapsed time is e. Using the new state si+1, ta predicts when the continuous system crosses the next threshold fj+1 (or fj again). Finally, if no external event occurs, the δint function calculates the new state si+2 using ta(s,x).
Figure 2.8. DEVS representation of DEVS&DESS
2.3.2 DEVS First Order Numerical Integrator
The DEVS representation of op-DEVS&DESS leads to first order discrete event numerical integration [14]. Consider a continuous system with an n-dimensional state space, described by the differential equation ẋ = φ. For simplicity we assume that φ is piecewise constant: φ = {φi | φi is constant on [ti, ti+1), i∈Z}. The set of threshold values is Fi = { fi | fi = q·i, i∈Z, q∈ℜ } (q is called the "quantum").
The following atomic DEVS is defined:
DEVS_INT=(X, Y, S, δext, δint, λ, ta),
X = ℜn,
Y = ℜn,
S = ℜn,
δext(s, e, x) = s + e·x,    (2.9)

ta(s, x) = ∞                    if x = 0
         = (q − e·|x|) / |x|    if x ≠ 0,    (2.10)

δint(s, x) = s + ta(s, x)·x,    (2.11)

λ(s) = s,    (2.12)
Input x∈X is the value of the φ function, which is generated as an external event at the beginning of each segment. The interpretation of the above transition functions is as follows (see Figure 2.9):
According to equation 2.7,
δext(s, e, x) = s + ∫[t1, t1+e] φ(τ) dτ = s + e·x,    (2.13)
where t1 is the moment at which the external event occurs.
The time advance function estimates how long it will take for the system to reach the next threshold value. If the input is zero, the system will obviously never reach the next threshold, so the time advance returns infinity and the atomic model goes into the passive phase.
When the input is non-zero, two cases arise:

1) No external event has happened (e = 0). Assuming the rate of change is constant and equal to x, we have s = t·x + s0, so the time required for the system to reach s0 + q is ta = q/x.

2) An external event has happened (e > 0). This occurs when an unexpected external event arrives before the previously estimated time advance ta has elapsed. In this case the new input value, xnew, is used to recalculate how much time is required to reach the threshold. Again s = t·x + s0,new, but s0,new = s0,old + e·xold. Substituting s = s0,old + q, we get ta = (q − e·xold)/xnew.

Note that this time-advance function works only if at most one external event occurs between internal events. In the case of multiple external events, ta = qremain/|x|, where qremain is the remaining quantum, updated at each external event: qremain ← qremain − e·|x|.
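The bookkeeping above can be sketched as a small Python class (the names and the scalar state are assumptions of the sketch, not DEVSJAVA code):

```python
# Illustrative sketch of DEVS_INT with the q_remain bookkeeping described
# above, so that several external events may occur between internal events.
INF = float("inf")

class DevsInt:
    def __init__(self, s0=0.0, q=0.1):
        self.s = s0          # current state (integral value)
        self.x = 0.0         # current input (rate of change, phi)
        self.q = q           # quantum
        self.q_remain = q    # quantum left until the next threshold crossing

    def ta(self):
        # time advance: time to cross the next threshold at rate |x|
        return INF if self.x == 0.0 else self.q_remain / abs(self.x)

    def delta_ext(self, e, x_new):
        # advance the state over the elapsed time e, shrink the remaining
        # quantum, then switch to the new rate
        self.s += e * self.x
        self.q_remain -= e * abs(self.x)
        self.x = x_new

    def delta_int(self):
        # threshold crossing: jump the remaining quantum toward the threshold
        self.s += self.q_remain * (1.0 if self.x > 0 else -1.0)
        self.q_remain = self.q

    def out(self):
        return self.s
```

With a constant input of 1 and quantum 0.5, the model schedules its first internal event at ta = 0.5 and jumps the state to exactly the first threshold, matching case 1) above.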
Figure 2.9 illustrates how the DEVS numerical integrator works. The system starts with an initial state (zero in this example). At time t0, when φ(t0) = φ0, the input x0 = φ0 is generated. The time advance function calculates t′ as the next internal transition time, but before that happens, φ changes to φ1 at time t1, and input x1 is generated.
Figure 2.9. DEVS first order numerical integrator
This external event causes an external transition by δext from s0 to s1. The time advance function then estimates the next threshold crossing, t2, at which an internal transition from s1 to s2 occurs, and so on. Right before each internal transition, an output is generated by the output function λ. Note that at time t4, where φ = 0, the system goes into the passive phase until t5, at which a new non-zero input is generated.
As opposed to conventional discrete time numerical integration methods, which divide the time axis into equal time-steps, the DEVS integrator divides the state space into equal segments (q, 2q, 3q, …). Each segment is equal to the quantum, and the smaller the segments, the higher the accuracy.
This model can also be used for non-piecewise-constant φ's, at the cost of some error that depends on the value of the quantum, i.e., a smaller quantum produces a smaller error. This is similar to conventional discrete time numerical integrators, whose accuracy depends on the size of the time-step.
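The quantum/accuracy trade-off can be demonstrated with a minimal quantized-integration loop; the integrand f(t) = 2 + t and the interval are arbitrary choices for the demonstration:

```python
# A self-contained sketch of quantized (state-space) integration of
# ds/dt = f(t): the state advances by exactly one quantum q per step,
# and the step *duration* q / f(t) is what varies.
def quantized_integral(f, t_end, q):
    t, s = 0.0, 0.0
    while t < t_end:
        rate = f(t)
        dt = q / rate            # time to the next threshold crossing
        if t + dt > t_end:       # partial last step up to t_end
            s += (t_end - t) * rate
            break
        s += q
        t += dt
    return s

f = lambda t: 2.0 + t
exact = 2.0 * 5 + 5**2 / 2       # integral of (2+t) over [0,5] = 22.5
err_big = abs(quantized_integral(f, 5.0, 0.1) - exact)
err_small = abs(quantized_integral(f, 5.0, 0.01) - exact)
print(err_small < err_big)       # True: smaller quantum, smaller error
```

Shrinking the quantum by a factor of ten shrinks the error by roughly the same factor, mirroring the way a smaller time-step improves a conventional first order integrator.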
Example 2.5. Falling Mass
As an example, consider a falling mass m with initial height h0 and initial velocity v0. A coupled DEVS containing two cascaded integrators is needed, as shown in Figure 2.10.
Figure 2.10. A DEVS coupled model for simulation of a free falling mass.
Figure 2.11 shows the DEVSJAVA Simulation Viewer implementing the above model. Results are presented in Figure 2.12a. Figure 2.12b shows the error generated by these two cascaded numerical integrators (quantum = 1).
Figure 2.11. DEVS implementation of Example 2.5
Figure 2.12. Simulation of a falling mass using two first order DEVS integrators with initial velocity v0 = 20 m/s, initial position x0 = 100 m, and quantum = 1. (a) Position vs. time: DEVS results compared to the exact solution. (b) Position error vs. time.
Example 2.6. Linear Systems
Linear systems can be simulated using a single DEVS integrator. Consider the
following two LTI systems.
[ẋ1; ẋ2] = [ 0  1; −2  −2 ] [x1; x2] + [ 0; 1 ] u    (2.14)

[ẋ1; ẋ2] = [ 0  1; −1  1 ] [x1; x2] + [ 0; 1 ] u    (2.15)
The step responses are shown in Figure 2.13. More accurate results can be obtained by choosing a smaller quantum.
Figure 2.13. Step responses for the systems of Example 2.6 using the DEVS integrator. (a) Step response for the system of equation 2.14. (b) Step response for the system of equation 2.15.
2.3.3 DEVS Double Integrator
2.3.3.1 Introduction and Motivation
By double integrator we mean a DEVS atomic model that is equivalent to two consecutive first order integrators as described in Section 2.3.2, i.e., it integrates the input signal twice in one shot. The motivation behind defining such a model is:
1) The equation of free motion of most natural physical systems is of the same general form [16]:
ẍ + ωn²·x = 0    (2.16)

This is the equation of an undamped, simple oscillator. For a mechanical system of mass m and stiffness k,

ωn = √(k/m).    (2.17)

For a rotational system with a single degree of freedom, moment of inertia J, and torsional stiffness K:

ωn = √(K/J).    (2.18)

For a simple linearized pendulum with length l and gravitational acceleration g:

ωn = √(g/l).    (2.19)

For an electrical LC circuit:

ωn = 1/√(LC).    (2.20)
2) Using one atomic model instead of two is faster (in terms of computational complexity), more accurate, and easier to implement when a large number of these models is needed.
2.3.3.2 Problem Definition
The goal is to define a DEVS atomic model that integrates the input signal twice, i.e., the relation between input and output is:

y(t) = ∫∫ φ(t) dt dt    (2.21)
For simplicity, we assume that the input signal, φ, is one-dimensional and piecewise constant as defined in Section 2.3.2. (Later we will show how this model can be extended to an n-dimensional integrator.) Again, each change in the value of φ generates an input event x = φ. The set of threshold values is assumed to be the same as in Section 2.3.2.
We define the DEVS numerical double integrator as follows:
DEVS_DBL_INT=(X, Y, S, δext, δint, λ, ta),
X=ℜ,
Y=ℜ,
S=SA×SI, where SA=ℜ, is the actual state; and SI=ℜ is the intermediate state.
δext(s, e, x) = ( sA + e·sI + ½·e²·x ,  sI + e·x )    (2.22)

δint(s, x) = ( sA + ta(s,x)·sI + ½·ta(s,x)²·x ,  sI + ta(s,x)·x )    (2.23)

λ(s) = s    (2.24)
ta(s, x) = ∞                  if x = 0 and sI = 0
         = qremain / |sI|     if x = 0 and sI ≠ 0
         = min{ t > 0 | sA + sI·t + ½·x·t² = fk  or  sA + sI·t + ½·x·t² = fk+1 }    if x ≠ 0    (2.25)

where fk and fk+1 are the two thresholds such that fk < sA < fk+1.
If sI and the input x are both equal to zero, then clearly the actual state sA does not change and never crosses a threshold. Consequently the time advance function ta is equal to infinity.
Figure 2.14. DEVS Double Integrator
If the input x is zero but the intermediate state sI is not, then we have a single integrator similar to what was described in Section 2.3.2.
If the input is not zero, then by integrating this signal twice we get:

∫∫ x dt dt = sA + sI·t + ½·x·t²    (2.26)

which could intersect the next threshold fk+1 and/or the previous one fk. In this case ta is equal to the time to the next threshold crossing.
Figure 2.14 is an example to illustrate how this double integrator works.
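A scalar Python sketch of the transition functions 2.22-2.23 follows, with ta(s,x) obtained by solving sA + sI·t + ½·x·t² = fk or fk+1 for its smallest positive root; the helper names and tolerances are assumptions of the sketch.

```python
# Scalar sketch of the DEVS double integrator (not DEVSJAVA code).
import math

INF = float("inf")

def _min_pos_root_quadratic(a, b, c):
    """Smallest positive root of a*t^2 + b*t + c = 0, or INF if none."""
    roots = []
    if abs(a) < 1e-15:
        if abs(b) > 1e-15:
            roots = [-c / b]
    else:
        disc = b * b - 4 * a * c
        if disc >= 0:
            r = math.sqrt(disc)
            roots = [(-b - r) / (2 * a), (-b + r) / (2 * a)]
    pos = [t for t in roots if t > 1e-12]
    return min(pos) if pos else INF

class DevsDblInt:
    def __init__(self, sA=0.0, sI=0.0, q=1.0):
        self.sA, self.sI, self.q = sA, sI, q
        self.x = 0.0

    def ta(self):
        if self.x == 0.0 and self.sI == 0.0:
            return INF
        # thresholds f_k and f_{k+1} bracketing the actual state sA
        fk = math.floor(self.sA / self.q) * self.q
        fk1 = fk + self.q
        # crossing time: sA + sI*t + x*t^2/2 equals fk or fk+1 (eq. 2.25)
        return min(
            _min_pos_root_quadratic(self.x / 2, self.sI, self.sA - fk),
            _min_pos_root_quadratic(self.x / 2, self.sI, self.sA - fk1),
        )

    def delta_ext(self, e, x_new):
        # eq. 2.22: advance over elapsed time e, then accept the new input
        self.sA += e * self.sI + 0.5 * e * e * self.x
        self.sI += e * self.x
        self.x = x_new

    def delta_int(self):
        # eq. 2.23: jump to the threshold crossing
        t = self.ta()
        self.sA += t * self.sI + 0.5 * t * t * self.x
        self.sI += t * self.x

d = DevsDblInt(q=1.0)
d.delta_ext(0.0, 2.0)    # constant acceleration x = 2, starting at rest
print(round(d.ta(), 6))  # 1.0, since 0.5*2*t^2 reaches the quantum at t = 1
```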
n-Dimensional State Space
In this case the time advance function is the time within which the norm of the "change of actual state" equals the quantum, i.e., |ds| = |sA,i+1 − sA,i| = q, where sA,i and sA,i+1 are the ith and (i+1)st actual states, respectively. See Figure 2.15.
Figure 2.15. Determining the time advance value for n-dimensional DEVS double integrator
But,
sA,i+1 = sA,i + ta(s,x)·sI + ½·ta(s,x)²·x

⇔ ds = sA,i+1 − sA,i = ta(s,x)·sI + ½·ta(s,x)²·x    (2.27)
By equating |ds| = q, the time-advance function ta(s,x) can be found. For simplicity we use t instead of ta(s,x) in the following calculations.
|ds| = q

⇔ | sI·t + ½·x·t² | = q

⇒ ( sI·t + ½·x·t² ) · ( sI·t + ½·x·t² ) = q²

⇒ ¼·|x|²·t⁴ + (sI·x)·t³ + |sI|²·t² − q² = 0    (2.28)
In order to find the time-advance value it is required to solve the above incomplete fourth-order polynomial equation. To analyze this equation, consider:

f(t) = a·t⁴ + b·t³ + c·t² − q² = 0    (2.29)

where a = ¼·|x|² > 0, b = sI·x, and c = |sI|² > 0.
We are looking for the minimum positive root of equation 2.29.
This can be done using numerical methods. Looking at the derivative of f(t) helps to find the root faster.
f′(t) = 4a·t³ + 3b·t² + 2c·t = t·(4a·t² + 3b·t + 2c)    (2.30)

f′(t) = 0  ⇒  t′ = 0  or  t′1,2 = (−3b ± √∆) / (8a),  where ∆ = 9b² − 32ac    (2.31)
Figure 2.16 shows the different possible locations of the roots of equation 2.29. With this information we can search for the root in a smaller region.
Figure 2.16. Searching for the minimum positive root of equation 2.29 (sign analysis of f(t) and f′(t) for the cases ∆ < 0 and ∆ ≥ 0, with b < 0 and b > 0).
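A numerical sketch of this search in Python: since f(0) = −q² < 0 and a > 0 forces f(t) → +∞, a positive root always exists. The sketch brackets the first sign change by doubling and then bisects; the step sizes and tolerances are illustrative, and for pathological coefficient combinations the doubling search could in principle skip an early root.

```python
# Minimum-positive-root search for f(t) = a*t^4 + b*t^3 + c*t^2 - q^2.
def min_positive_root(a, b, c, q, tol=1e-12):
    f = lambda t: ((a * t + b) * t + c) * t * t - q * q
    # f(0) = -q^2 < 0; double the upper bound until the sign flips
    hi = 1e-6
    while f(hi) < 0:
        hi *= 2.0
    lo = 0.0 if hi == 1e-6 else hi / 2.0
    # plain bisection on [lo, hi] with f(lo) < 0 <= f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(min_positive_root(1.0, 0.0, 0.0, 1.0), 6))  # 1.0 (t^4 = 1)
```

The sign analysis of Figure 2.16 could be used to seed `lo` and `hi` directly from t′1,2, shrinking the bracketing phase.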
Example 2.7. Falling Mass
Consider the falling mass of Example 2.5 again. There, two cascaded single DEVS integrators were used to simulate the falling mass. Here we simulate the exact same problem using one DEVS double integrator. Figure 2.17a shows the DEVS result compared to the exact solution. Figure 2.17b depicts the error generated (quantum = 1, the same quantum as in Example 2.5). As seen in the figure, this error is much less than the error generated by the two cascaded DEVS integrators.
Figure 2.17. Simulation of a falling mass using one DEVS double integrator with initial velocity v0 = 20 m/s, initial position x0 = 100 m, and quantum = 1. (a) Position vs. time: DEVS results compared to the exact solution. (b) Position error vs. time.
2.4 Discrete Event Control (DEC)
2.4.1 Introduction
Ramadge and Wonham [5] were the first control theorists to develop an approach to designing discrete event controllers. They modeled the plant as an automaton and used language theory to design a controller that forces the plant to perform the desired behavior. In order to connect means of control to the system, they classified events into two categories: uncontrollable events, which can be observed but cannot be prevented from occurring (e.g., obstacle detected), and controllable events, which can be forced to occur or prevented from occurring (e.g., stop or move) [5]. This method has been used by several researchers to control discrete event models [17],[18].
In this framework a system is said to be controllable when any desired state of the system can be reached, given knowledge of the current state. Consequently, a controller (supervisor) exists if the system is controllable.
Zeigler [2] reformulated the Ramadge-Wonham approach to controller design within
the DEVS formalism. He showed that if the plant is strongly controllable (defined
below), then the desired path could be mapped to a DEC using a transformation called
“Inverse-DEVS”.
2.4.2 Controllability of DEVS Models
Controllability in general means being able to reach any desired final state from any initial state in finite time. A discrete event specified system is said to be controllable if there exists a controller that can implement a desired state trajectory [2]. Given a discrete event plant M=(X, Y, S, δext, δint, λ, ta), let ST be a desired state trajectory.
Definition 1 (weak controllability) A path ST is weakly controllable if a series of
internal and external transitions can take each state on the path to the next state on the
path3. See Figure 2.18a.
Definition 2 (strong controllability) A path is strongly controllable if either an
internal or external transition can take each state on the path to the next state on the path
(Figure 2.18b). More formally, a path ST is strongly controllable if:
For all si, si+1 such that si+1 follows si in ST, either:
a) δint(si) = si+1, or
b) there is a pair (e,x), such that δext(si,e,x)=si+1 , where 0 < e < ta(si) and x is an
input.
Definition 3 (very strong controllability) A strongly controllable path ST is very strongly controllable if:
For all si, si+1 such that si+1 follows si in ST, either:
a) δint(si) = si+1, or
b) there is an input x such that δext(si,e,x)=si+1 for all 0 < e < ta(si).
Figure 2.18 illustrates the above three definitions. Filled circles represent states on the
desired path, ST=( s0 , s1 , … , si , si+1 , … , sf ). Thick lines denote internal or external
transitions that eventually lead one state on the path ST to the next.
3 See [2] for a more formal definition of weak controllability.
Figure 2.18. Controllability definitions: (a) weakly controllable, (b) strongly controllable, (c) very strongly controllable path.
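Definition 2 can be checked mechanically for a finite model; the Python sketch below approximates "there is a pair (e,x)" by sampling candidate elapsed times, and the data structures are assumptions of the sketch, not part of the formal definition.

```python
# Toy check of strong controllability (Definition 2) for a finite plant.
def strongly_controllable(path, delta_int, delta_ext, ta, inputs, e_samples):
    """delta_int: dict s -> s'; delta_ext: function (s, e, x) -> s';
    inputs: candidate external inputs; e_samples: candidate elapsed times."""
    for s, s_next in zip(path, path[1:]):
        if delta_int.get(s) == s_next:
            continue  # case (a): an internal transition does it
        ok = any(
            0 < e < ta[s] and delta_ext(s, e, x) == s_next
            for x in inputs
            for e in e_samples
        )
        if not ok:
            return False  # no sampled transition reaches the next path state
    return True

# 3-state machine of Example 2.4 (finite work time assumed):
INF = float("inf")
d_int = {"Working": "Idle"}                       # the "DONE" outcome
def d_ext(s, e, x):
    return {("Idle", "ON"): "Working",
            ("Broken-Down", "FIX"): "Idle"}.get((s, x), s)
ta = {"Idle": INF, "Working": 5.0, "Broken-Down": INF}
path = ["Idle", "Working", "Idle"]
print(strongly_controllable(path, d_int, d_ext, ta, ["ON", "FIX"], [1.0]))  # True
```

Sampling e from a finite set is of course only an approximation of the existential quantifier in the definition; it suffices here because δext does not depend on e.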
2.5 Intelligent DEVS
2.5.1 DEVS-Fuzzy Logic
A fuzzy logic controller consists of three operations: (1) fuzzification, (2) an inference engine, and (3) defuzzification. The input sensory (crisp, or numerical) data are fed into a fuzzy logic rule-based system, where physical quantities are represented as linguistic variables with appropriate membership functions. These linguistic variables are then used in the antecedents (IF-part) of a set of fuzzy "IF-THEN" rules within an inference engine, resulting in a new set of fuzzy linguistic variables, the consequent (THEN-part) [12].
A typical so-called Mamdani-type rule can be composed as follows:
IF x1 is A1i AND x2 is A2i THEN y is Bi,   for i = 1, 2, …, l

where A1i and A2i are the fuzzy sets representing the ith antecedent pairs, Bi is the fuzzy set representing the ith consequent, and l is the number of rules.
We define DEVS-Fuzzifier as follows:
A DEVS-Fuzzifier is a DEVS atomic model:
AMF =(X, Y, S, δext, δint, λ, ta, µ)
where
The input set, X, is usually a set of real numbers, to be fuzzified.
The output set Y = [0,1], since the output of this atomic model is a fuzzy value.
S is a sequence of fuzzified input values: S={si | si = δext(q,si-1,x) }.
δext(q,s,x)=µ(x), where µ is the membership function associated with this fuzzifier.
δint(s) and λ(s) are the identity functions.
ta (s) =0 , since there is no time advance in this model.
µ(x) is the membership function associated with this fuzzifier.
This atomic model represents an antecedent membership function, while a consequent
membership function can be defined as:
CMF=(X, Y, S, δext, δint, λ, ta, µ)
where
X is a set of fuzzy values, X=[0,1].
Y is a set of fuzzy sets, Y = {y | y is a fuzzy set}.
S is a set of discrete fuzzy sets (membership functions) 4.
δext(q, s, x) = Σi min(µ(ai), x) / ai

where µ is the membership function associated with this model. In effect this function cuts those values of the fuzzy set Σ µ(ai)/ai whose membership values are greater than x (see Figure 2.19).
δint(s)=s, λ(s)=s and ta=0.
4 The notation Σµ(ai)/ai denotes a discrete fuzzy set (membership function) [48].
In a typical fuzzy rule, connectives are used to connect the antecedent pairs. An "AND" connective takes the minimum, while an "OR" connective returns the maximum value of the two pairs [12].
Figure 2.19 A typical consequent membership function (CMF) model
Connectives are defined in DEVS as atomic models:
CNTV=(X1, X2, …, Xn, Y, S, δext, δint, λ, ta, µ)
where
X1=…= Xn =[0,1] are input sets. (A connective can have n inputs.), Y = S = [0,1],
δint(s)=s, λ(s) = s, ta = 0,
δext(q,s,x1,…xn) = min(x1,…,xn), in case of “AND” and,
δext(q,s,x1,…xn) = max(x1,…,xn), in case of “OR”.
A fuzzy rule can be composed as a coupled model using the above three atomic models. For instance, consider the following fuzzy rule: "IF x is A OR y is B THEN z is C". Figure 2.20 shows how this fuzzy rule can be composed using two Fuzzifiers, one Connective, and one CMF (consequent membership function).
Figure 2.20. A DEVS model for a typical fuzzy rule "IF x is A OR y is B THEN z is C"
In order to fully implement a fuzzy logic controller, a Defuzzifier is needed, which can be defined in the same way, i.e., it takes fuzzy sets as input and calculates the defuzzified value (see [12] for further details about defuzzification methods).
DeFZ=(X1, X2, …, Xn, Y, S, δext, δint, λ, ta)
where
Inputs of this atomic model are fuzzy sets, so X1 = X2 = … = Xn = {x | x is a fuzzy set}.
Output is a real number: Y = ℜ, S = ℜ, δint(s) = s, λ(s) = s, ta = 0,
δext(q,s,x1,…,xn) = Center_of_Gravity(x1 ∪ x2 ∪ … ∪ xn)
Center_of_Gravity(x) is a function which calculates the defuzzified value of the fuzzy set x using the center of gravity method [12]:

Center_of_Gravity(x) = ∫ µ(x)·x dx / ∫ µ(x) dx    (2.32)
Note that the time advance function (ta) is zero for all the above models, since we do not want any delay in any of the models.
Fuzzy-DEVS, i.e., the discrete event implementation of fuzzy logic control, runs faster than conventional methods. The reason is that the three operations of the fuzzy system in Fuzzy-DEVS are activated only when there is an input event. In other words, the DEVS-Fuzzifiers (see Figure 2.20) are activated only when they receive a new input event. Subsequently, they generate output events only if the new fuzzified value differs from the previous one. As a result the connective models are activated only when there is a change in their inputs, and so on down the chain. This behavior saves a lot of unnecessary calculations in a large system. However, for a small number of rules the performance is not significantly improved, due to the change-detection calculations performed in Fuzzy-DEVS.
In practice, fuzzy parameters can be tuned in two ways: first, by adding extra input ports to the DEVS models and sending the parameters to them via these ports; second, by directly changing the parameters internally, or making the parameters accessible from other models. The first method makes the Fuzzy-DEVS tunable by connecting it to other DEVS models, such as GA-DEVS and NN-DEVS, while the second method can be used to tune the parameters with conventional tuning algorithms such as GA and NN.
Example 2.8. As an example consider the problem of controlling an inverted
pendulum. The linearized model is given by the following equation [19]:
ẋ = [ 0  1  0  0 ; −0.4  0  0  0 ; 0  0  0  1 ; 0  0  7.97  0 ] x + [ 0 ; 0.97 ; 0 ; −0.8 ] u

y = [ 1  0  0  0 ] x    (2.33)
where the four states are θ, θ̇, x, and ẋ. The input to the system is the force applied to the cart. This system can be stabilized using a fuzzy logic controller with the following three rules:

IF θ is POS AND θ̇ is POS THEN Force is NEG
IF θ is NEG AND θ̇ is NEG THEN Force is POS
IF θ is ZERO AND θ̇ is ZERO THEN Force is ZERO
where the membership functions POS, NEG, and ZERO are defined in Figure 2.21a. Figure 2.21b shows the DEVS fuzzy logic controller. The closed-loop system is shown in Figure 2.21c, and the output (θ) of the system is shown in Figure 2.21d.
Figure 2.21. Example of using fuzzy-DEVS to control an inverted pendulum: (a) membership functions for fuzzy-DEVS (ZERO, NEG, POS); (b) the fuzzy-DEVS controller; (c) the closed-loop system; (d) the output of the system (θ).
2.5.2 DEVS-SLA
2.5.2.1 Introduction to Stochastic Learning Automata (SLA)
SLA seeks to model the learning-from-action process of intelligent beings: namely, reducing the probability of performing actions that cause a penalty (pain) to be received, and increasing the probability of performing actions that are rewarded by the environment (pleasure).
According to a famous remark by the philosopher Jeremy Bentham (Bentham's theory of value), "nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do. On the one hand, the standard of right and wrong, on the other the chain of causes and effects, are fastened to their throne" [20]. In other words, all human behavior is motivated by, and only by, a desire to achieve pleasure and avoid pain. This behavior can be mathematically formulated and used in the control of intelligent systems.
Figure 2.22. Learning from action: gaining reward while avoiding penalty.
The learning process is as follows: the system (automaton) has the option of choosing one action from n pre-defined actions, each with a probability of being selected: Pi, i = 1,…,n (ΣPi = 1).
After performing the selected action, the result will be evaluated by one or more
teachers (environment), which will assign penalty or reward to the action based on the
desired objectives of the system.
Next, the system updates the probabilities by increasing (decreasing) the probability of choosing actions that received a reward (penalty). This process is repeated until it converges.
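This loop can be sketched with the standard linear reward-penalty (LR-P) update; the learning rates and the single-teacher environment that always rewards action 2 are assumptions made for the illustration, not taken from the text.

```python
# Linear reward-penalty (L_R-P) sketch of the SLA loop described above.
import random

def sla_update(p, i, rewarded, a=0.1, b=0.1):
    """Raise p[i] on reward, lower it on penalty; keep sum(p) = 1."""
    n = len(p)
    if rewarded:
        return [pj + a * (1 - pj) if j == i else (1 - a) * pj
                for j, pj in enumerate(p)]
    return [(1 - b) * pj if j == i else b / (n - 1) + (1 - b) * pj
            for j, pj in enumerate(p)]

def choose(p, rng):
    """Sample an action index according to the probability vector p."""
    r, acc = rng.random(), 0.0
    for i, pi in enumerate(p):
        acc += pi
        if r <= acc:
            return i
    return len(p) - 1

# assumed environment: action 2 is always rewarded, others are penalized
rng = random.Random(0)
p = [0.25, 0.25, 0.25, 0.25]
for _ in range(200):
    i = choose(p, rng)
    p = sla_update(p, i, rewarded=(i == 2))
print(max(range(4), key=lambda k: p[k]))  # index of the learned best action
```

On reward, probability mass moves toward the chosen action; on penalty, the chosen action's mass is redistributed uniformly over the others, so the update always keeps a valid probability vector.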
This approach can be used to transfer human experience to the rover domain. Human-like reasoning is achieved in SLA through the use of a teacher that penalizes or rewards a particular action taken by the agent or robot (i.e., a human expert acts as the teacher).
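As an illustration (not part of the thesis), the action-selection and probability-update steps can be sketched with a linear reward-penalty scheme; the learning rate `lr` and the particular update rule are assumptions, since the text does not fix a specific scheme:

```python
import random

def choose_action(probs):
    """Pick an action index i with probability Pi (the selection step)."""
    return random.choices(range(len(probs)), weights=probs)[0]

def update_probs(probs, chosen, reward, lr=0.1):
    """Linear reward-penalty update: raise the chosen action's probability
    on reward (+1), lower it on penalty (-1); the sum stays equal to 1."""
    n = len(probs)
    new = list(probs)
    if reward > 0:
        for i in range(n):
            new[i] = probs[i] + lr * (1 - probs[i]) if i == chosen \
                     else (1 - lr) * probs[i]
    else:
        for i in range(n):
            new[i] = (1 - lr) * probs[i] if i == chosen \
                     else lr / (n - 1) + (1 - lr) * probs[i]
    return new
```

Both branches preserve ΣPi = 1 by construction, so repeated updates keep a valid probability vector.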
Figure 2.23 Schematic of SLA inside the DEVS Environment
2.5.2.2 Stochastic Learning Automata by DEVS
The Stochastic Learning Automata (SLA) process can be modeled by DEVS as shown in
Figure 2.23. In this figure, n denotes the number of actions and m the number of
teachers; Pi denotes the probability of the ith action. Here is a brief description of the
atomic models shown in this figure: The “Action-chooser-block” chooses an action i
(i=1,2…n) based on the probability that the action occurs. The “agent” (or “agents” in
multi-agent case) will perform the action and send the result(s) to the “teacher(s)”. Each
“teacher” (denoted by Tj) assigns a reward or penalty to the result(s). The “summer
block” sums up all the rewards and penalties assigned by the teachers. If +1 and -1 represent reward and penalty respectively, then the output of the summer will be an integer K ∈ {-m, -m+1, …, m-1, m}. The next block updates the probabilities (Pi's) based on K.
Example 2.9: As an example, consider a DEVS-SLA model with four actions and
three teachers (i.e. n=4,m=3). Figure 2.24 shows a possible scenario of the simulation.
After receiving the start signal, the “Action chooser” block changes its state to busy and
after a certain amount of time, generates the output (in this example action 3 has been
chosen), and goes back to the passive state. The “Agent” performs the action as soon as it
receives the message from “Action chooser”, changing its state to busy. After the action
is performed, the “Agent” block sends the result to teachers. As soon as the teachers
receive the result from the agent, they go to the busy state, evaluating the result. In this example, Teacher 1 assigned a penalty and the other two teachers assigned rewards. The "Summer" collects the penalties and rewards assigned by the teachers and sends the result (in this case +1) to the "Update" block. This block increases P3 (since the third action received a net reward of +1) and decreases the others.
Figure 2.24. Possible results of an example for SLA_DEVS
Example 2.10: As another example, consider a mobile automaton (robot) with 8
moving actions: 1.North, 2.East, 3.South, 4.West, 5.North-East, 6.South-East, 7.South-
West and 8.North-West. Starting from some initial position (x0, y0), the automaton learns
to reach a goal position (xg, yg). Two teachers are used: one for X direction and one for Y
direction. Figure 2.25 shows the results of SLA-DEVS for an automaton going from
initial position (10,10) to the goal point (60,80).
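The two-teacher scheme of this example can be sketched directly; the coordinate conventions (x increasing eastward, y increasing northward) are assumptions, since the text does not spell them out:

```python
# The 8 moves of Example 2.10 as unit steps (x east, y north assumed):
MOVES = {1: (0, 1), 2: (1, 0), 3: (0, -1), 4: (-1, 0),      # N, E, S, W
         5: (1, 1), 6: (1, -1), 7: (-1, -1), 8: (-1, 1)}    # NE, SE, SW, NW

def teachers(pos, new_pos, goal):
    """Teacher X rewards (+1) a move that reduces the x-error and
    penalizes (-1) otherwise; teacher Y does the same for the y-error."""
    rx = 1 if abs(goal[0] - new_pos[0]) < abs(goal[0] - pos[0]) else -1
    ry = 1 if abs(goal[1] - new_pos[1]) < abs(goal[1] - pos[1]) else -1
    return rx, ry
```

Starting from (10, 10) with goal (60, 80), a north-east step (action 5) earns a reward from both teachers, which is why the automaton in Figure 2.25 first learns action 5.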
Figure 2.25. A mobile automaton learns to reach a goal point using SLA: (a) action probabilities (P1–P8) vs. time, (b) trajectory of the automaton, (c) x and y vs. time.
As the figure shows, the automaton first learns to perform action number 5 (moving toward north-east) until it reaches the desired x = 60; after that, different actions are performed to reach the desired y = 80.
3. MOBILE ROBOTICS
3.1 Introduction
Mobile robotics involves several different areas of research, such as path and motion planning, sensor and action planning, sensor fusion, vision, machine learning, artificial intelligence, pattern and image recognition, environment modeling, autonomous behaviors, control and automation, and multi-agent and cooperative systems. In this work we focus on modeling, simulation and control of mobile robots, using DEVS and V-Lab®5.
Figure 3.1 shows a closed-loop block diagram for sensor-based mobile robots.
Figure 3.1. Robot functional block diagram, with external sensors (vision, infrared, sonar, etc.) and internal sensors (encoders, compasses, etc.) in the feedback path.
5 V-Lab® is discussed in section 4.
3.2 Modeling and Controllability of Mobile Robots
The mobile robot under consideration is a so-called unicycle robot with two differential-drive wheels on the same axle (Figure 3.2). A robot with this wheel configuration is both non-linear and non-holonomic, properties that add to the complexity of the motion control problem.
Figure 3.2. A differential-wheeled mobile robot
The locomotion system of such a mobile robot consists of two parallel driving wheels, the acceleration of each being controlled by an independent motor. Passive castors ensure the stability of the platform. The reference point of the robot is the midpoint of the two wheels; its coordinates with respect to a fixed frame are denoted by
(x, y); the main direction of the vehicle is the direction θ of the driving wheels. With l designating the distance between the driving wheels, the dynamic model is [21]:
$$\begin{bmatrix} \dot x \\ \dot y \\ \dot\theta \\ \dot v_r \\ \dot v_l \end{bmatrix} = \begin{bmatrix} \frac{1}{2}(v_r + v_l)\cos\theta \\ \frac{1}{2}(v_r + v_l)\sin\theta \\ \frac{1}{l}(v_r - v_l) \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} \qquad (3.1)$$
where $u_1$ and $u_2$ are the accelerations of the two independent motors. Figure 3.3 shows the closed-loop feedback control block diagram for such a system.
Figure 3.3. Robot motion control block diagram: desired position and velocity are converted to torque (voltage) commands, with position and current velocity fed back.
Figure 3.4 shows how we go from dynamics to kinematics6, assuming the desired velocity is equal to the actual velocity. This assumption makes the system much easier to control. In real robots, low-level PID controllers usually enforce this assumption by providing the desired velocity. By choosing $v_r$ and $v_l$ as the inputs, the 5-dimensional system reduces to the following 3-dimensional system:
6 Dynamics is "a branch of mechanics that deals with forces and their relation primarily to the motion but sometimes also to the equilibrium of bodies" (Merriam-Webster Dictionary), while kinematics is "a branch of dynamics that deals with aspects of motion apart from considerations of mass and force" (Merriam-Webster Dictionary).
$$\begin{aligned} \dot x &= \tfrac{1}{2}(v_r + v_l)\cos\theta \\ \dot y &= \tfrac{1}{2}(v_r + v_l)\sin\theta \\ \dot\theta &= \tfrac{1}{l}(v_r - v_l) \end{aligned} \qquad (3.2)$$
where $|v_r| \le v_{r,\max}$ and $|v_l| \le v_{l,\max}$. By choosing $v = \tfrac{1}{2}(v_r + v_l)$ and $w = \tfrac{1}{l}(v_r - v_l)$, we get the kinematic model of equation 3.3:
$$\begin{aligned} \dot x &= v\cos\theta \\ \dot y &= v\sin\theta \\ \dot\theta &= w \end{aligned} \qquad (3.3)$$
where $v$ is the forward velocity and $w$ is the angular velocity. Of course, $v$ and $w$ are bounded:

$$v_{\min} \le v \le v_{\max} \quad \text{and} \quad w_{\min} \le w \le w_{\max}$$
“This system is symmetric without drift and controllable from everywhere” [21].
Kinematics is a purely differential-geometric issue, independent of the dynamics; the velocity is assumed to be the system input [29].
Figure 3.4. Kinematic control (assuming velocity = desired velocity)
It is easier to control the linear and angular velocities v, w than the left and right wheel velocities; however, in a real robot the left and right velocities are what is actually required to control the robot.
The relation between $[v_r \;\; v_l]^T$ and $[v \;\; w]^T$ can be expressed by the following matrix equation:

$$\begin{bmatrix} v \\ w \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{l} & -\frac{1}{l} \end{bmatrix} \begin{bmatrix} v_r \\ v_l \end{bmatrix} \;\;\Rightarrow\;\; \begin{bmatrix} v_r \\ v_l \end{bmatrix} = \begin{bmatrix} 1 & \frac{l}{2} \\ 1 & -\frac{l}{2} \end{bmatrix} \begin{bmatrix} v \\ w \end{bmatrix} \qquad (3.4)$$
Controllability
This model is a non-holonomic7 one, which is always controllable; thus the path planning problem is always well posed [22], [21]. Note that "non-holonomic mechanical systems, either in kinematic or dynamic form, cannot be stabilized at a point by smooth feedback" [22]. The alternatives are time-varying or discontinuous stabilizing feedback.
3.3 Motion Control of Mobile Robots
The motion control of mobile robots among obstacles is classified into three possible motion tasks, as follows [21]:
• Point-to-point motion (goal searching): The robot is assigned to reach a
desired goal configuration starting from a given initial configuration, while
avoiding collision with obstacles. This task is sometimes called Path planning
in the literature.
• Path following: The robot must reach and follow a geometric path in the
Cartesian space starting from a given initial configuration (on or off the path).
• Trajectory tracking: The robot must reach and follow a trajectory in the
Cartesian space (i.e., a geometric path with an associated timing law) starting
from a given initial configuration.
7 A nonholonomic control system is a system with a nonholonomic constraint, i.e. a kinematic constraint that cannot be integrated [22]. For a wheeled mobile robot the nonholonomic constraint is $\dot y\cos\theta - \dot x\sin\theta = 0$.
3.3.1 Path Planning Methods
Path planning (goal searching) involves two questions: first, "does there exist a collision-free admissible path?"; second, "how can one compute such a path?". The existence of a path depends on the configuration of the obstacles in the environment and the initial configuration of the robot. The path-computation problem has received more attention from researchers. Several approaches can be used for path planning: regulation of a nonholonomic dynamic wheeled mobile robot with parametric modeling uncertainty using a Lyapunov function [23], a so-called virtual vehicle approach for control of mobile robots [24], game theory approaches [25], [26], a bang-bang control technique [27], a quasi-continuous output feedback method [28], the natural algebraic structure of the chained-form system together with sliding mode theory [29], optimal path planning approaches [30], [31], [32], an evolutionary control architecture [16], visibility graphs [33], [34], and other methods [35], [37], [22], [21].
Problem definition: The problem is simply to determine v and w such that the
system of equation 3.3 steers the robot arbitrarily close to a goal configuration (xg,yg,θg).
Our solution: We propose the following control law to steer the robot to a goal
configuration (xg,yg,θg).
Consider the system of equation 3.3. The following inputs:
$$v = a\cos(\varphi - \theta), \qquad w = b\sin(\varphi - \theta) \qquad (3.5)$$

where $\varphi = \arctan\dfrac{y_g - y}{x_g - x}$, steer the system to the goal point $(x_g, y_g)$ if a and b are chosen properly.
After reaching the goal point, the robot can turn in place to reach the angle $\theta_g$, i.e.

$$v = 0, \qquad w = b'\sin(\theta_g - \theta) \qquad (3.6)$$
Without any loss of generality, the goal can be assumed to be at the origin; any other goal can be moved to the origin by a coordinate transformation.
Using the proposed control law, the system becomes:

$$\begin{aligned} \dot x &= a\cos(\varphi - \theta)\cos\theta \\ \dot y &= a\cos(\varphi - \theta)\sin\theta \\ \dot\theta &= b\sin(\varphi - \theta) \end{aligned} \qquad (3.7)$$
Here is how one can choose a and b to stabilize the system: b is associated with the angular velocity and a with the linear velocity of the system. The term b sin(φ − θ) turns the robot toward the goal direction φ; the bigger b is, the faster the robot turns. On the other hand, a determines how fast the robot moves forward. Intuitively, the robot should first turn toward the goal direction and then move forward to the goal. Consequently, if b is chosen to be larger than a, this control law should work. After reaching the goal position, let v = 0, and the robot turns to reach the goal direction.
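The control law is easy to try numerically. The following sketch (not thesis code) Euler-integrates the closed-loop system of equation 3.7 with the goal at the origin; the step size and horizon are arbitrary choices, and the gains a = 1, b = 4 are the values used in Figure 3.5:

```python
import math

def simulate(x, y, th, a=1.0, b=4.0, dt=0.001, steps=10000):
    """Euler-integrate the closed-loop system of equation 3.7, with phi
    the heading from the robot's position to the goal (the origin)."""
    for _ in range(steps):
        phi = math.atan2(-y, -x)          # direction from (x, y) to the origin
        v = a * math.cos(phi - th)
        w = b * math.sin(phi - th)
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
    return x, y, th
```

Starting from the initial condition (1, 2, π/4), the simulated robot ends up arbitrarily close to the origin, matching the behavior in Figure 3.5 (including the initial backward motion, since v is negative when the goal lies behind the robot).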
Figure 3.5 shows the outcome of this controller, for a = 1 and b = 4 and initial condition (1, 2, π/4).
Figure 3.5. The system of equation 3.7 for a = 1 and b = 4, initial condition (1, 2, π/4): (a) trajectory, (b) state variables vs. time
It can be seen from the trajectory graph that the robot first goes backward and after a while goes forward. By using the absolute value of v, the robot will always go forward.
Figure 3.6 shows the result of changing b for the same system; b varies from 4a down to a, with a equal to 1. In Figure 3.6a the initial direction θ0 is equal to π/4, and in Figure 3.6b, θ0 is equal to -π/4. Figure 3.7 depicts the trajectory of the system for 8 different initial positions.
Figure 3.6. Effect of changing b from 4a down to a (b = 1, 1.5, 2, 3, 4): (a) θ0 = π/4, (b) θ0 = -π/4
Figure 3.7. Trajectory of the system of equation 3.7 for different initial conditions (a = 1, b = 1)
Parameters a and b can be time-varying, such that the robot also slows down as it approaches the goal. For example, they can be proportional to the robot's distance to the goal:
$$a = \frac{r(t)}{2}, \qquad b = r(t), \qquad r(t) = \sqrt{(x - x_g)^2 + (y - y_g)^2} \qquad (3.8)$$
Figure 3.8 shows the result of the above control strategy. Note that the total energy of the system is damped (Figure 3.8b), since both the angular and linear velocities w, v decay to zero.
$$E = \tfrac{1}{2}(Mv^2 + Iw^2)$$

Figure 3.8. Using time-varying a and b as defined in equation 3.8: (a) trajectory, (b) total energy, (c) state variables
Stability8 Analysis

Here we analyze the Lyapunov stability of the system. We find conditions on a and b under which the proposed controller makes the system asymptotically stable. Consider the following positive definite Lyapunov function:
$$V(r, \varphi, \theta, t) = r^2 + 1 + \cos(\theta - \varphi) \qquad (3.9)$$

where r and φ are the polar coordinates of the robot (with the goal at the origin, this φ differs by π from the heading-to-goal angle used in equation 3.5, which reverses the signs required of a and b). Obviously this function is positive definite. We need to show that

$$\dot V(r, \varphi, \theta, t) < 0. \qquad (3.10)$$
From equation 3.9 we have:
$$\dot V = 2r\dot r - (\dot\theta - \dot\varphi)\sin(\theta - \varphi) \qquad (3.11)$$
On the other hand:
$$r^2 = x^2 + y^2 \;\Rightarrow\; r\dot r = x\dot x + y\dot y = (r\cos\varphi)(a\cos(\varphi-\theta)\cos\theta) + (r\sin\varphi)(a\cos(\varphi-\theta)\sin\theta)$$
$$\Rightarrow\; r\dot r = ra\cos^2(\varphi-\theta) \;\;\therefore\;\; \dot r = a\cos^2(\varphi-\theta) \qquad (3.12)$$
And
8 Here, stability means being able to steer the robot to a desired configuration.
$$\varphi = \arctan\frac{y}{x} \;\Rightarrow\; \dot\varphi = \frac{x\dot y - y\dot x}{r^2} = \frac{a\cos(\varphi-\theta)\,(r\cos\varphi\sin\theta - r\sin\varphi\cos\theta)}{r^2}$$
$$\therefore\;\; \dot\varphi = -\frac{a\cos(\varphi-\theta)\sin(\varphi-\theta)}{r} \qquad (3.13)$$
Substituting $\dot r$ and $\dot\varphi$ into equation 3.11:
$$\dot V = 2ra\cos^2(\varphi-\theta) + \Big(b\sin(\varphi-\theta) + \frac{a\cos(\varphi-\theta)\sin(\varphi-\theta)}{r}\Big)\sin(\varphi-\theta)$$
$$\dot V = 2ra\cos^2(\varphi-\theta) + \Big(\frac{a\cos(\varphi-\theta)}{r} + b\Big)\sin^2(\varphi-\theta) \qquad (3.14)$$
The first term is negative if a is chosen to be negative, and the second term is negative if:

$$\frac{a\cos(\varphi-\theta)}{r} + b < 0 \qquad (3.15)$$
Consequently,

$$\cos(\varphi-\theta) > -\frac{br}{a} \qquad (3.16)$$
This condition is satisfied for every value of φ − θ (since cos(φ − θ) ≥ −1) if b is chosen such that:

$$r > \frac{a}{b} \qquad (3.17)$$
Finally,

$$r > \frac{a}{b} > 0, \qquad b < a < 0 \qquad (3.18)$$
But r ≥ rd > 0 (where rd is the desired distance of the robot to the goal, and can be arbitrarily small); as a result, values of a and b satisfying the following inequality guarantee the asymptotic stability of the system:

$$0 < \frac{a}{b} < r_d, \qquad a < 0 \qquad (3.19)$$
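As a quick numerical sanity check of this analysis (a sketch, not from the thesis): in the polar-angle convention used here, the gains a = -1 and b = -4 satisfy the stability condition while r > 1/4, so the Lyapunov function of equation 3.9 should decrease along the closed-loop trajectory:

```python
import math

def lyapunov(x, y, th):
    """Equation 3.9 with the goal at the origin: V = r^2 + 1 + cos(theta - phi)."""
    r = math.hypot(x, y)
    phi = math.atan2(y, x)            # polar angle of the robot
    return r * r + 1.0 + math.cos(th - phi)

def step(x, y, th, a, b, dt):
    """One Euler step of the closed-loop system of equation 3.7,
    with phi = arctan(y/x) as in the stability analysis."""
    phi = math.atan2(y, x)
    v = a * math.cos(phi - th)
    w = b * math.sin(phi - th)
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)
```

Integrating from (1, 2, π/4) with a small step, V is strictly decreasing at every step, as the analysis predicts.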
3.3.2 Obstacle Avoidance
Obstacles can be modeled as unknown disturbances to the system. The robot can detect obstacles using distance sensors such as infrared sensors, sonar sensors, lasers, cameras, etc.
A very simple obstacle avoidance strategy is as follows:
1. If obstacle is on the left hand side, turn right.
2. If obstacle is on right hand side, turn left.
3. If obstacle is in front, randomly choose to turn right or left.
4. If obstacles are on both the left and the right but not in front, try going between them.
5. If obstacles are on the left and right and front, do a U-turn.
This strategy can be easily implemented using ordinary methods or soft computing
techniques such as fuzzy logic, neural networks, etc.
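As a minimal illustration of an "ordinary" implementation (the boolean sensor flags and returned action strings are hypothetical), the five rules can be dispatched directly in code:

```python
import random

def avoid(left, front, right):
    """Dispatch the five obstacle-avoidance rules above; sensor flags
    are True when an obstacle is detected on that side."""
    if left and right and front:
        return "u-turn"                                    # rule 5
    if left and right:
        return "go between"                                # rule 4
    if front:
        return random.choice(["turn left", "turn right"])  # rule 3
    if left:
        return "turn right"                                # rule 1
    if right:
        return "turn left"                                 # rule 2
    return "go straight"
```

Note that the most specific situations must be tested first, so that rule 5 takes precedence over rule 4, and rule 4 over rules 1 and 2.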
The controller proposed in Section 3.3.1 (equation 3.5) can be modified to do obstacle
avoidance as well:
$$v = a\cos(\varphi-\theta) - \gamma^L \sum_{i=1}^{n} s_i^L - \gamma^R \sum_{i=1}^{m} s_i^R$$
$$w = b\sin(\varphi-\theta) + \lambda^L \sum_{i=1}^{n} s_i^L - \lambda^R \sum_{i=1}^{m} s_i^R \qquad (3.20)$$
where n is the number of sensors on the left side and m is the number of sensors on the right side of the robot; $s_i^L$ and $s_i^R$ are the values of the sensors on the left and right sides, respectively. $\lambda^L$, $\lambda^R$, $\gamma^L$ and $\gamma^R$ are positive scalars to be chosen properly such that the robot avoids obstacles without collision. The idea is to turn right when the left sensors detect obstacles and to turn left when the right sensors detect them. However, this controller does not perform strategies number 3, 4 and 5.
3.3.2.1 DEVS-Fuzzy Logic Obstacle Avoidance
Fuzzy logic has also been used for obstacle avoidance by researchers (for example, see [36]). To accomplish this, a fuzzy logic controller should calculate the forward and angular velocities such that the robot avoids obstacles. Figure 3.9 shows the membership functions used for obstacle avoidance: Figures 3.9a and 3.9b are input membership functions, and Figures 3.9c and 3.9d are output membership functions.
Figure 3.9. Membership functions used in fuzzy logic obstacle avoidance: (a) distance measured by sensor (d), (b) angle of sensor (th), (c) forward velocity (V), (d) angular velocity (W)
The following rules, derived from expert intuition, are employed:
1. "IF distance is Close And angle is Pos Then V is VSlow, W is Neg"
2. "IF distance is Close And angle is Zero Then V is Stop"
3. "IF distance is Close And angle is Neg Then V is VSlow, W is Pos"
4. "IF distance is Near And angle is Pos Then V is Slow, W is SNeg"
5. "IF distance is Near And angle is Zero Then V is VSlow, W is SNeg"
6. "IF distance is Near And angle is Neg Then V is Slow, W is SPos"
7. "IF distance is Far And angle is Pos Then V is Fast, W is Zero"
8. "IF distance is Far And angle is Zero Then V is Fast, W is Zero"
9. "IF distance is Far And angle is Neg Then V is Fast, W is Zero"
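To make the mechanics concrete, here is a minimal sketch of how two of these rules could be evaluated and defuzzified; the triangular membership breakpoints and crisp output values are assumptions, since the actual shapes are those of Figure 3.9:

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Assumed breakpoints (the real ones are defined in Figure 3.9):
def close(d):       return tri(d, -0.5, 0.0, 0.5)
def far(d):         return tri(d, 0.5, 1.5, 2.5)
def angle_zero(th): return tri(th, -0.5, 0.0, 0.5)

def forward_velocity(d, th, v_stop=0.0, v_fast=1.0):
    """Weighted-average defuzzification of rules 2 (Close & Zero -> Stop)
    and 8 (Far & Zero -> Fast); returns None if neither rule fires."""
    w2 = min(close(d), angle_zero(th))
    w8 = min(far(d), angle_zero(th))
    if w2 + w8 == 0.0:
        return None
    return (w2 * v_stop + w8 * v_fast) / (w2 + w8)
```

A full controller would evaluate all nine rules for both outputs (V and W) in the same way.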
Figure 3.10 shows the 3D surfaces generated from the above membership functions and rules.
Figure 3.10. 3D surfaces generated by the proposed fuzzy logic rule base for obstacle avoidance: (a) angular velocity (w) as a function of d and th, (b) forward velocity (v) as a function of d and th
This fuzzy logic controller is implemented with DEVS-Fuzzy Logic (described in Section 2.5.1). The results are presented in Section 5.
3.3.3 Cooperative Robots
Cooperative robotics has received a considerable amount of interest in recent years. This area of research lies under Multi Agent Systems (MAS). Various approaches have been used for collaborative control in multi-agent systems, such as: distributed value function techniques [38]; fuzzy reinforcement learning agents in a partially observable Markov decision process that learn which opportunities are most worthy of pursuit [39]; a reinforcement learning methodology for designing multi-agent systems where agents have to coordinate to reach a global goal [40]; cooperative mobile robots that map an unknown area using stereo vision [41]; a probabilistic approach to collaborative multi-robot area mapping [42]; multi-robot collaborative exploration using pairs of robots watching each other's behavior [43]; collaborative control of robot motion, where multiple sources share control of a single robot [44]; coordination of multiple mobile robots in an object-carrying task using implicit communication [45]; cooperative multi-robot box pushing using six-legged robots equipped with object and goal sensing [46]; and a generalized mission planner for multiple mobile robots in unstructured environments, introducing a general-purpose interpreted grammar for task definition [47].
In this section we address the problem of a "cooperative object-pushing task". Objects are assumed to be round-shaped, as shown in Figure 3.11.

Figure 3.11. Robot trying to push round objects to the storage area
In solving this problem, we adopt a simplified hierarchical control structure for suitable distribution of single objectives, allowing us to achieve the final goal in the easiest and fastest way. We formulate a simplified description of the problem by making the following assumptions:
− The supervisory system has knowledge of the initial position of the objects to be collected and can communicate it to each robot; this could be obtained by a previous exploration of the environment.
− Robots are non-holonomic rovers; besides, they know their position and orientation and are supplied with sensors that permit them to determine their relative position with respect to the objects.
− The supervisory station can communicate with the robots, getting information on their position and task accomplishment level; it can eventually receive help requests from robots.
− Objects have circular geometry and the position of the center of gravity is known.
Once the assignment is made, robots approach and then position themselves for pushing objects towards the final goal; finally, they start to push along the direction normal to the object surface.
Figure 3.12. A DEVS model for the cooperative pushing task, with input/output events such as ?command, ?object_reached, ?object_touched, ?object_lost, ?everybody_ready, ?outside_pushing_region, ?object_reached_the_goal, and !goal_reached
If one rover is not able to fulfill its job, due to the object's weight, the supervisory station can detect this condition by evaluating the robots' jobs (see Job Evaluation, Section 3.3.4). If necessary, the supervisor can call another rover to help.
Figure 3.12 shows a DEVS model that performs the pushing task, where the states are defined as:
• Standby: the initial state. The robot is waiting for the supervisor to send an order.
• Path Planning: this state indicates that the robot is trying to go from its initial position to the goal position, avoiding other obstacles and rovers. After reaching the goal, the robot changes its state to either Standby or Placement, depending on the type of command received from the supervisor (i.e., if a pushing command is received it changes to the Placement state; otherwise it goes to the Standby state).
• Placement: this state indicates that the robot is placing itself inside the pushing region
touching the object (see Figure 3.13).
• Ready_to_push: in this state, robot is ready to push the object, waiting for other robots
to touch the object and be ready to push.
• Pushing: after the supervisor has confirmed that all other robots are in the "ready_to_push" state, the rover goes to the pushing state, trying to push the object toward the goal.
A desired path can be considered as {Standby, Path_planning, Ready_to_push, Pushing, …, Standby}, which, according to our assumptions, is weakly controllable9. So it cannot be controlled using an inverse-DEVS10.
9 See Section 2.4.2
First, we solve the problem for one robot and then extend it to more robots. Consider a mobile robot at some initial position $\vec{x}_R$, assigned to push a circular obstacle at initial position $\vec{B}$ to the goal position $\vec{G}$ (Figure 3.13). First, the robot should place itself properly with respect to the obstacle in order to push it toward the goal. This desired position, denoted by $\vec{A}$ in Figure 3.13, can be calculated easily as:
$$\vec{A} = \vec{B} + (r_A + r_B)\,\frac{\vec{B} - \vec{G}}{\|\vec{B} - \vec{G}\|} \qquad (3.21)$$

where $r_A$ and $r_B$ are the radii of the robot and the obstacle, respectively.
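Equation 3.21 transcribes directly into code (a sketch; 2-D points are represented as tuples):

```python
import math

def approach_point(B, G, rA, rB):
    """Equation 3.21: stand-off point A on the far side of the object,
    on the line from the goal G through the object's center B."""
    dx, dy = B[0] - G[0], B[1] - G[1]
    d = math.hypot(dx, dy)            # ||B - G||
    k = (rA + rB) / d
    return (B[0] + k * dx, B[1] + k * dy)
```

For example, with the object at the origin, the goal at (10, 0), rA = 1 and rB = 2, the robot should stand at (-3, 0), i.e. 3 units behind the object on the side opposite the goal.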
Figure 3.13. Pushing task performed by a single robot: the robot (radius rA) at A pushes the obstacle (radius rB, center B) toward the goal G, staying inside the pushing region bounded by the lines with normals wl and wr
After reaching point A, the robot will slowly start to push the obstacle. In order to properly push the obstacle toward the goal, the robot should always remain on the opposite side of the obstacle with respect to the goal. To ensure this, the robot should remain inside the pushing region shown in Figure 3.13. The boundaries of this region are two lines going through the center of the obstacle with normals $\vec{w}_l$ and $\vec{w}_r$.
10 See [14] for details on inverse-DEVS
The boundary equations are as follows:

$$f_l(\vec{x}) = (\vec{x} - \vec{B})\cdot\vec{w}_l, \qquad f_r(\vec{x}) = (\vec{x} - \vec{B})\cdot\vec{w}_r \qquad (3.22)$$
and the normals are:

$$\vec{w}_l = \mathrm{ROT}\!\left(\frac{\vec{B}-\vec{G}}{\|\vec{B}-\vec{G}\|},\; \frac{\pi}{2}\right), \qquad \vec{w}_r = \mathrm{ROT}\!\left(\frac{\vec{B}-\vec{G}}{\|\vec{B}-\vec{G}\|},\; -\frac{\pi}{2}\right) \qquad (3.23)$$
where $\mathrm{ROT}(\vec{v}, \theta)$ is a rotation function which rotates vector $\vec{v}$ by the angle θ.
Having the equations of the boundaries, it is easy for the robot to check whether it is inside the region or not, and if not, in which direction it should turn to go back inside the region. If both fl and fr evaluated at the robot's position are positive, then the robot is inside the region; but if one of them is negative, then the robot is outside, and it should turn toward the normal vector of the negative function to go back inside the pushing region.
For the multi-robot pushing task, each robot pushes the obstacle while remaining inside its own pushing region. The pushing regions are the same as above, but rotated. They should be chosen properly such that the total force applied to the obstacle by the robots pushes the obstacle toward the goal. Figure 3.14 shows the pushing regions for two cooperative robots.
Figure 3.14. Pushing task by two robots
The simulation results of the above task are presented in Section 4. Hardware implementation results are demonstrated in Section 5.
3.3.4 Job Evaluation
When a large number of mobile robots are working together to do a job, it is important to be able to evaluate each robot's job and also the whole mission's progress. By means of job evaluation we can make proper decisions for the future if the mission is not making good progress. Here we try to find a way to evaluate each robot's job as well as the mission progress, defining the following terms:
Degree of success for the mission at time T: a real number between 0 and 1 showing how successful the whole mission has been at time T.
Degree of success for a robot at time T: a real number between 0 and 1 showing how successful the robot has been in its assignment at time T.
Objective Function F(t):
The Objective Function measures how close a robot (or the whole mission) is to an ideal situation. For example, suppose a robot must go from point A to point B. The Objective Function for this robot can be the distance of the robot from point B (the goal); we want this value to be zero, or at least minimized.
As another example, suppose three robots are assigned to place themselves at three goal points: robot 1 at point A, robot 2 at point B and robot 3 at point C. An objective function for the whole mission can be defined as the sum of the three robots' objective functions, i.e., F(t) = d1 + d2 + d3, where di (i = 1, 2, 3) is the distance of the i-th robot from its goal.
How to find the Degree of Success for each robot?
We use fuzzy logic to find the degree of success for each robot. As described before, each robot has its own Objective Function, which we want to minimize within time T. We therefore define the following normalized membership functions for the Objective Function, the time and the mission success degree.
Figure 3.15. Membership functions for Objective Function, Time and Mission Success Degree
Note that the Objective Function and the time are normalized as Fn(t) = F(t)/F(0) and tn = t/T, where T is the desired mission time.
The fuzzy rules are as follows:
1. IF Fn(t) is Close and tn is Early THEN mission is Outstanding.
2. IF Fn(t) is Close and tn is OnTime THEN mission is Successful.
3. IF Fn(t) is Close and tn is Late THEN mission is Fairly Successful.
4. IF Fn(t) is Near and tn is Late THEN mission is OK.
5. IF Fn(t) is Near and tn is OnTime THEN mission is Near to success.
6. IF Fn(t) is Far and tn is OnTime THEN mission is Near to fail.
7. IF Fn(t) is Too Far and tn is Late THEN mission is Failed.
8. IF tn is Too Late THEN mission is Failed.
9. IF Fn(t) is Too Far and tn is Early THEN mission is Failed.
Using the above fuzzy rules, we can calculate a degree of success for a robot. The
same method can be used to calculate “Degree Of Success” for the whole mission, if a
proper Objective Function is defined. The Objective Function must show how close the
mission is to the ideal situation.
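As a concrete illustration, this rule evaluation can be sketched with triangular membership functions and weighted-average defuzzification. The set breakpoints and the numeric success values assigned to the output labels below are illustrative assumptions (the thesis defines the sets only graphically, in Figure 3.15), not the exact values used in V-Lab®:

```python
# Sketch of the fuzzy job evaluation. Set breakpoints and output centers are
# assumed values; the AND of a rule is taken as min, and the result is the
# weighted average of the fired rules' output centers.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Fuzzy sets for the normalized objective Fn(t) and normalized time tn.
F_SETS = {"Close": (-0.5, 0.0, 0.5), "Near": (0.0, 0.5, 1.0),
          "Far": (0.5, 1.0, 1.5), "TooFar": (1.0, 1.5, 3.0)}
T_SETS = {"Early": (-1.0, 0.0, 1.0), "OnTime": (0.5, 1.0, 1.5),
          "Late": (1.0, 1.5, 2.0), "TooLate": (1.5, 2.0, 2.5)}

# Rules 1-9 above as (F label, t label, assumed success-degree center);
# None in the F slot means the rule ignores Fn (rule 8).
RULES = [("Close", "Early", 1.0), ("Close", "OnTime", 0.9),
         ("Close", "Late", 0.75), ("Near", "Late", 0.5),
         ("Near", "OnTime", 0.6), ("Far", "OnTime", 0.3),
         ("TooFar", "Late", 0.0), (None, "TooLate", 0.0),
         ("TooFar", "Early", 0.0)]

def success_degree(F, F0, t, T):
    """Degree of success in [0, 1], with Fn = F/F0 and tn = t/T as in the text."""
    fn, tn = F / F0, t / T
    num = den = 0.0
    for f_label, t_label, center in RULES:
        mu_t = tri(tn, *T_SETS[t_label])
        mu_f = 1.0 if f_label is None else tri(fn, *F_SETS[f_label])
        w = min(mu_f, mu_t)            # fuzzy AND
        num += w * center
        den += w
    return num / den if den > 0 else 0.0
```

With this sketch, a robot that is Close to its goal Early in the mission scores a success degree of 1, while a robot that is still Too Far when time runs out scores 0.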
Here are some examples to show how one can choose the Objective Function:
Example 3.1. There are three robots R1, R2, R3 and three goals G1, G2, G3. The robots must reach the goals within time T, but there is no preference as to which robot goes to which goal; i.e., robot R1 may go to G1, G2 or G3, and the same holds for robots R2 and R3.
An Objective Function for Robot 1 is:
F1(t) = d11(t) × d12(t) × d13(t)
where d11 is the distance between R1 and G1,
d12 the distance between R1 and G2,
d13 the distance between R1 and G3.
And for Robot 2:
F2(t) = d21(t) × d22(t) × d23(t)
where d21 is the distance between R2 and G1,
d22 the distance between R2 and G2,
d23 the distance between R2 and G3.
Finally for Robot 3:
F3(t) = d31(t) × d32(t) × d33(t)
where d31 is the distance between R3 and G1,
d32 the distance between R3 and G2,
d33 the distance between R3 and G3.
An Objective Function for the whole mission can be defined as:
Fm(t) = F1(t) + F2(t) + F3(t) = d11×d12×d13 + d21×d22×d23 + d31×d32×d33
This Objective Function reaches its minimum (zero) exactly when every robot is at some goal.
The success degree for the whole mission can then be calculated using the fuzzy logic rules mentioned before.
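A direct transcription of Example 3.1 is straightforward; the robot and goal coordinates below are made-up sample values:

```python
import math

def dist(p, q):
    """Euclidean distance between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def robot_objective(robot, goals):
    """Fi(t) = product of the robot's distances to all goals; it becomes zero
    as soon as the robot sits on any one of the goals."""
    f = 1.0
    for g in goals:
        f *= dist(robot, g)
    return f

def mission_objective(robots, goals):
    """Fm(t) = F1(t) + F2(t) + F3(t)."""
    return sum(robot_objective(r, goals) for r in robots)

goals = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
robots = [(0.0, 0.0), (5.0, 0.0), (2.0, 2.0)]   # robots 1 and 2 already at goals
remaining = mission_objective(robots, goals)     # only robot 3 still contributes
```

Note how the product form deliberately makes Fi insensitive to which goal a robot reaches, matching the "no preference" statement of the example.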
Example 3.2. Suppose there are 4 robots and an unknown area, and the mission is to map the area within time T. We divide the area into 4 sub-areas. Suppose the numbers of unknown cells for sub-areas 1, 2, 3 and 4 are p1(t), p2(t), p3(t) and p4(t), respectively. The Objective Function for the i-th robot is Fi(t) = pi(t), and the Objective Function for the whole mission can be defined as Fm(t) = F1(t) + F2(t) + F3(t) + F4(t).
Example 3.3. For a box-pushing task by two robots, the Objective Functions for robots 1 and 2 are F1(t) = d1(t) and F2(t) = d2(t), respectively, where d1(t) and d2(t) are shown in Figure 3.16, and the Objective Function for the whole mission is Fm(t) = F1(t) + F2(t) = d1(t) + d2(t).
Figure 3.16. Cooperative box-pushing task
4. V-LAB® SIMULATIONS
In this section we briefly introduce V-Lab® as an environment that allows one to simulate DEVS models in a distributed fashion. The V-Lab® environment consists of
4 distinct software layers, as Figure 4.1 illustrates, and each of these layers fills a specific
role in the simulation. The foundation of the simulation consists of the operating system
and the network code needed to operate the networking hardware, which in turn allows
machines to communicate over a network. Using this functionality, a middleware such as
the Common Object Request Broker Architecture (CORBA) [48] acts to solve the
problem of how to use the network to connect different portions of a simulation together.
DEVS provides the architecture needed to arrange components of a simulation into
discrete structures, and is briefly introduced in Section 2. Using the DEVS environment,
V-Lab® defines an appropriate structure in which to organize the elements of DEVS for a
distributed agent based simulation. It separates the main components into different
categories and defines the logical structure in which they communicate. It also provides
the critical objects needed to control the flow of time (inherited from DEVS), the flow of
messages, and the base class objects designers will need to create their own V-Lab®
modules [49]. The design of V-Lab® was first introduced by El-Osery et al. [51].
Figure 4.1. Distributed simulation layers using DEVS in V-Lab®: V-Lab® sits on top of IDEVS, which runs over a middleware (sockets, CORBA, or other) on the physical network.
Using the V-Lab® structure, we design an environment to simulate sensor-based intelligent mobile robots. Figure 4.2 shows how assorted DEVS models are connected together to perform the simulation. The central block, SimMan, is one of the main models in V-Lab®; it acts as a relay mechanism, relaying event messages to and from agents as they make requests of each other. The agents do not have to know the identity of the other agents; they communicate via the type of the request message, i.e., each agent model holds a registration list of request types that the model recognizes, and when any other agent makes a request of one of these types, SimMan relays it to this agent [49].
The Terrain model stores information such as the environment map, movable obstacles, roughness of the ground in different areas, elevation, friction, etc.
The Dynamics model is responsible for obstacle and robot movement calculations. It uses a DEVS double integrator (described in Section 2.3.3) for each movable object to determine the object's position, given the speed of, or the force applied to, the object.
The Collision model calculates the interaction forces if two or more objects collide.
The Robot model contains several sub-models, such as infrared sensor models, a compass, a position sensor, a dynamic model of the robot and a controller model to control the robot.
Figure 4.2. DEVS coupled model in V-LAB® for multi-agent
sensor-based mobile robot simulation
The Plot model receives plotting information from the other models to update the simulation's graphical interface.
4.1 Simulation Components
In this section we focus on each simulation component shown in Figure 4.2 and
present a detailed discussion for each one.
4.1.1 Dynamics Model
This model contains DEVS-mass models developed in Example 2.4. For every
obstacle and robot in the simulation, there is a mass model inside the dynamics model
(see Figure 4.3).
Figure 4.3. The Dynamics model contains DEVS double integrators (Mass 1 through Mass n) to simulate moving obstacles and robots.
The input to each of the mass models can be the force applied to the mass, the velocity in case of a collision, or the friction coefficient of the terrain. The model calculates the new position of the obstacle according to the input, and it generates an output event at every change of position. These output events are caught by the Plot model and the Terrain model to update the terrain. Obstacles are assumed to be circular in order to avoid rotation, torque and angular velocity calculations and to simplify the calculation of velocities after a collision; geometrically, we approximate them by convex polygons.
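For reference, the computation each Mass block performs (force in, position out) can be written as a conventional fixed-step double integrator. This time-stepped loop is only a stand-in for intuition; the actual V-Lab® Mass model uses the event-driven DEVS double integrator of Section 2.3.3 rather than a loop like this:

```python
# Plain fixed-step double integration of a = F/m (semi-implicit Euler),
# shown only to illustrate what each Mass block computes. V-Lab itself uses
# the discrete event integrator, not this time-stepped scheme.

def simulate_mass(mass, force, x0=0.0, v0=0.0, t_end=1.0, dt=1e-3):
    """Return the final (position, velocity) of a point mass driven by force(t)."""
    x, v = x0, v0
    t = 0.0
    while t < t_end:
        a = force(t) / mass          # Newton's second law
        v += a * dt                  # first integration: acceleration -> velocity
        x += v * dt                  # second integration: velocity -> position
        t += dt
    return x, v

# Constant unit force on a unit mass: x(t) is approximately t^2 / 2.
x, v = simulate_mass(1.0, lambda t: 1.0, t_end=2.0)
```

For the constant-force case above, the numerical result stays close to the analytic solution x(2) = 2, v(2) = 2.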
4.1.2 Collision Model
This model performs two tasks: first, collision detection; second, calculating the velocities and/or forces after a collision.
4.1.2.1 Collision Detection
In a dynamic environment simulation, any collision event between any two obstacles, including robots, must be detected. We define the collision detection problem as follows:
Given two convex polygons in the 2D plane, are they colliding?
Figure 4.4. Examples of colliding polygons: case (a) is more likely to occur than case (b)
Figure 4.4 shows two cases of colliding polygons. Intuitively, case (a) is more likely to occur than case (b): if, for example, object A is moving toward object B at a reasonable speed and collides with it, it will not jump too far through object B; chances are that some vertices of object A will actually end up inside object B, as illustrated in Figure 4.4 [52].
It is much easier and faster to check whether any of the vertices of one polygon lies inside the other. Hence, the collision-checking algorithm can proceed as follows:
COLLISION_CHECK(P1, P2)
  for i = 1 to Length(P1)
    do if IS_POINT_INSIDE_POLY(P1[i], P2, 1, Length(P2)) then
      return true
  for i = 1 to Length(P2)
    do if IS_POINT_INSIDE_POLY(P2[i], P1, 1, Length(P1)) then
      return true
  return false
Now we focus on the problem of checking whether a given point is inside a convex polygon. First we solve this problem for a triangle. Consider the triangle shown in Figure 4.5 with vertices A, B and C.
Figure 4.5. Evaluation of the function P(x) (Equation 4.1) at different areas
We define the following function:

P(x) = sign(fAB(x)·fAB(C)) + sign(fBC(x)·fBC(A)) + sign(fCA(x)·fCA(B))    (4.1)

where fAB(x) is the equation of the line going through points A and B, evaluated at x (and similarly for fBC(x) and fCA(x)), and sign(x) is the sign function, equal to 1 if x > 0, −1 if x < 0 and 0 if x = 0.
Figure 4.5 illustrates the values of the function P(x) evaluated in the different areas created by the triangle. It can be seen that P(x) equals 3 only inside the triangle.
Any convex polygon with n > 3 sides can be partitioned into n − 2 triangles, so a divide-and-conquer approach can be used to determine whether a point is inside an n-sided polygon.
IS_POINT_INSIDE_POLY(X, P, p, q)
  IF (q − p >= 3) THEN
    r ← IS_POINT_INSIDE_POLY(X, P, p, ⌊(p+q)/2⌋)
    IF (not r) THEN
      return IS_POINT_INSIDE_POLY(X, P, ⌊(p+q)/2⌋ + 1, q)
    ELSE
      return r
  ELSE
    A ← P[p]
    B ← P[p+1]
    C ← P[p+2]
    s ← sign(f(A,B,X)·f(A,B,C)) + sign(f(B,C,X)·f(B,C,A)) + sign(f(C,A,X)·f(C,A,B))
    IF (s = 3) THEN
      return true
    ELSE
      return false
This algorithm is O(n) for an n-sided polygon; as a result, the collision-check algorithm is O(nm) for two polygons with n and m sides.
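A compact, non-recursive version of this test is sketched below. It applies the sign test of equation 4.1 to the triangle fan (P[0], P[i], P[i+1]), which for a convex polygon covers the same n − 2 triangles as the recursive routine; boundary points are counted as inside:

```python
def side(a, b, p):
    """Sign of the cross product: which side of the line through a and b point p lies on."""
    s = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return (s > 0) - (s < 0)

def point_in_triangle(p, a, b, c):
    # p is inside iff, for every edge, p lies on the same side as the opposite
    # vertex (the value-3 region of Figure 4.5); zero signs admit the boundary.
    return (side(a, b, p) * side(a, b, c) >= 0 and
            side(b, c, p) * side(b, c, a) >= 0 and
            side(c, a, p) * side(c, a, b) >= 0)

def point_in_convex_poly(p, poly):
    return any(point_in_triangle(p, poly[0], poly[i], poly[i + 1])
               for i in range(1, len(poly) - 1))

def collision_check(poly1, poly2):
    """True if any vertex of one polygon falls inside the other (case (a) of Figure 4.4)."""
    return (any(point_in_convex_poly(v, poly2) for v in poly1) or
            any(point_in_convex_poly(v, poly1) for v in poly2))

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
overlapping = [(3, 3), (6, 3), (6, 6)]     # one vertex lies inside the square
separate = [(5, 5), (7, 5), (6, 7)]
assert collision_check(square, overlapping)
assert not collision_check(square, separate)
```

As the text observes, this vertex test misses the rarer crossing configuration of Figure 4.4b, where two polygons overlap without either containing a vertex of the other.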
4.1.2.2 After-Collision Calculations
After-collision calculations are of two types: one for hard collisions and one for soft collisions. By a hard collision we mean that two rigid objects collide at high speed, such that a very large impulse of force is generated. Figure 4.6 shows two colliding objects m1 and m2 with speeds v1 and v2 (hard collision); v1′ and v2′ are the speeds after the collision. In this kind of collision, the laws of conservation of momentum and conservation of energy can be used to find the final velocities v1′ and v2′, but they are not enough: some assumption must also be made about the directions of the two objects after the collision.
Figure 4.6. Hard collision
Assuming circular objects, it is easy to find the directions of the velocities after the collision. Here is how v1′ and v2′ can be found. As shown in Figure 4.6, we assume the following:

v1′ = v1 + m2·α
v2′ = v2 − m1·α    (4.2)
This assumption satisfies the conservation of momentum:

m1·v1′ + m2·v2′ = m1·v1 + m2·v2    (4.3)
We attempt to find the vector α such that v1′ and v2′ satisfy the conservation of energy:
m1·v1′² + m2·v2′² = m1·v1² + m2·v2²
⇔ m1·(v1 + m2·α)² + m2·(v2 − m1·α)² = m1·v1² + m2·v2²
⇔ m1·v1² + 2·m1·m2·(v1·α) + m1·m2²·α² + m2·v2² − 2·m1·m2·(v2·α) + m2·m1²·α² = m1·v1² + m2·v2²
⇔ 0 = m1·m2·[ (m1 + m2)·α² + 2·α·(v1 − v2) ]
⇔ α·( (m1 + m2)·α + 2·(v1 − v2) ) = 0    (4.4)
i.e., α is perpendicular to (m1 + m2)·α + 2·(v1 − v2). But α is parallel to d = P2 − P1, so we can write α = a·d for some scalar a. Consequently,

d·( (m1 + m2)·α + 2·(v1 − v2) ) = 0
⇔ (m1 + m2)·a·(d·d) = 2·d·(v2 − v1)
⇔ a = 2·d·(v2 − v1) / ( (m1 + m2)·(d·d) )    (4.5)
and if d is normalized:
a = 2·d·(v2 − v1) / (m1 + m2)
∴ α = ( 2·d·(v2 − v1) / (m1 + m2) )·d    (4.6)
Having α, the velocities v1′ and v2′ can be determined using equation set 4.2.
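The result of equations 4.2-4.6 is easy to check numerically. The sketch below takes the masses, velocities and center positions P1, P2 of two circular objects and returns the post-collision velocities; the sample values are arbitrary:

```python
import math

def hard_collision(m1, v1, m2, v2, p1, p2):
    """Post-collision velocities of two circular objects (equations 4.2-4.6)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    n = math.hypot(dx, dy)
    d = (dx / n, dy / n)                            # normalized d = P2 - P1
    rel = (v2[0] - v1[0]) * d[0] + (v2[1] - v1[1]) * d[1]
    a = 2.0 * rel / (m1 + m2)                       # scalar of equation 4.6
    alpha = (a * d[0], a * d[1])                    # alpha is parallel to d
    v1p = (v1[0] + m2 * alpha[0], v1[1] + m2 * alpha[1])   # equation set 4.2
    v2p = (v2[0] - m1 * alpha[0], v2[1] - m1 * alpha[1])
    return v1p, v2p

# Head-on collision of equal masses: the two objects exchange velocities.
v1p, v2p = hard_collision(1.0, (1.0, 0.0), 1.0, (-1.0, 0.0), (0.0, 0.0), (1.0, 0.0))
assert v1p == (-1.0, 0.0) and v2p == (1.0, 0.0)
```

For arbitrary inputs the returned velocities conserve both total momentum and kinetic energy, and only the velocity components along d change, which gives a quick way to unit-test the Collision model.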
4.1.2.3 Soft Collision
A soft collision happens when two or more objects collide slowly, so that no large impulse of force is produced; in other words, their relative velocities are very small. In this case the objects may slowly push each other.
The Collision model has to find all the reaction forces applied to these masses by the other colliding masses, and send these reaction forces to the Dynamics model as inputs to the corresponding mass models.
To analyze this problem, consider three masses m1, m2 and m3 as shown in Figure 4.7. The initial forces applied to the masses before the collision are assumed to be f1,0, f2,0 and f3,0, and the velocities before the collision are assumed to be almost equal. The terms f1, f2 and f3 are the respective total forces applied to the masses. In addition, gij is the normalized (unit-length) vector from mi to mj.
Figure 4.7. Three masses pushing each other
From Newton's second law we have:
f′1,3 / m1 = f′3,1 / m3
f′2,3 / m2 = f′3,2 / m3
f′1,2 / m1 = f′2,1 / m2    (4.7)
where f′i,j is the projection of fi onto the mimj direction. We therefore have the following:
f′1,3 = ( (f1·g31) / (g31·g31) )·g31 = (f1·g31)·g31
f′3,1 = (f3·g31)·g31
⇒ (f1·g31) / m1 = (f3·g31) / m3
⇒ ( f1/m1 − f3/m3 )·g31 = 0    (4.8)
In the same manner we get:
( f1/m1 − f2/m2 )·g12 = 0    (4.9)
( f2/m2 − f3/m3 )·g23 = 0    (4.10)
Since we are trying to find the reaction forces on each of the masses, let

f1,2 = −f2,1 = x3·g12
f3,1 = −f1,3 = x2·g31
f2,3 = −f3,2 = x1·g23    (4.11)
where fi,j is the reaction force applied to mi by mj. Now we can write the following equations for the total forces:
f1 = f1,2 + f1,3 + f1,0
f2 = f2,1 + f2,3 + f2,0
f3 = f3,1 + f3,2 + f3,0    (4.12)
Substituting equation 4.11 in equation 4.12:
f1 = x3·g12 − x2·g31 + f1,0
f2 = −x3·g12 + x1·g23 + f2,0
f3 = x2·g31 − x1·g23 + f3,0    (4.13)
Substituting equation 4.13 into equations 4.8, 4.9 and 4.10, we obtain the following matrix equation (4.14), from which x1, x2 and x3, and subsequently the reaction forces, can be found via equation 4.11 (θ1, θ2 and θ3 are the interior angles of the triangle formed by the three masses, as in Figure 4.7):

| 1/m2 + 1/m3   cos(θ3)/m3    cos(θ2)/m2   | | x1 |   | (f3,0/m3 − f2,0/m2)·g23 |
| cos(θ3)/m3    1/m1 + 1/m3   cos(θ1)/m1   | | x2 | = | (f1,0/m1 − f3,0/m3)·g31 |    (4.14)
| cos(θ2)/m2    cos(θ1)/m1    1/m1 + 1/m2  | | x3 |   | (f2,0/m2 − f1,0/m1)·g12 |
4.1.3 Terrain Model
In this work we assume that the terrain is 2D and flat, with the same friction coefficient everywhere. The main task of the Terrain model is to respond to the IR sensors by calculating their distance d to the nearest obstacle (see Figure 4.8, where an IR sensor is detecting an obstacle). Sensors send their position (xs, ys) and pointing direction θs, as well as their range of detection, to the Terrain model.
Figure 4.8. The Terrain model is responsible for calculating the distance d from each IR sensor to the nearest obstacle within the sensor's maximum range r.
To resolve this problem, one should identify which obstacle, if any, intersects the sensor line (the dashed line in Figure 4.8). To do this, the line is checked for intersection against each side of each obstacle; the distance d is then the distance from the sensor location to the intersection point. The computational complexity of this operation is O(n) for one n-sided polygon, and O(nm) if there are m obstacles in the terrain.
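One way to implement this query is to intersect the sensor ray with every edge of every obstacle and keep the nearest hit within range; a sketch with made-up obstacle data follows. The ray is written parametrically as (xs, ys) + t·(cos θs, sin θs) and each edge as a + w·(b − a):

```python
import math

def ray_edge_distance(xs, ys, theta, a, b):
    """Distance t along the ray to edge ab, or None if there is no hit."""
    ux, uy = math.cos(theta), math.sin(theta)
    ex, ey = b[0] - a[0], b[1] - a[1]
    den = ux * ey - uy * ex
    if abs(den) < 1e-12:                  # ray parallel to the edge
        return None
    dx, dy = a[0] - xs, a[1] - ys
    t = (dx * ey - dy * ex) / den         # parameter along the ray
    w = (dx * uy - dy * ux) / den         # parameter along the edge
    return t if t >= 0 and 0 <= w <= 1 else None

def sensor_distance(xs, ys, theta, r, obstacles):
    """Distance d to the nearest obstacle edge within range r, else r."""
    best = r
    for poly in obstacles:
        for i in range(len(poly)):
            t = ray_edge_distance(xs, ys, theta, poly[i], poly[(i + 1) % len(poly)])
            if t is not None and t < best:
                best = t
    return best

square = [(2.0, -1.0), (4.0, -1.0), (4.0, 1.0), (2.0, 1.0)]
d = sensor_distance(0.0, 0.0, 0.0, 10.0, [square])   # looking along +x, hits x = 2
```

Checking every edge of every polygon is what gives the O(nm) cost quoted above; returning r when nothing is hit mimics a sensor that sees no obstacle in range.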
4.1.4 Robot Model
This model represents a unicycle-type, differential-drive two-wheel robot with circular geometry, whose dynamic properties are studied in Section 3.2.
Figure 4.9. A DEVS coupled model for a differential-drive wheel robot with n infrared sensors, one compass and one position sensor
Figure 4.9 shows the internal components of this model. The RobotDyn model calculates the dynamics (position and direction) of the robot based on equation 3.3. However, for pushing tasks it only calculates the direction and the force applied to the robot (from its two motors) and sends this information to SimMan, so that the Dynamics model takes care of the position calculation.
Each IR-Sensor model models an infrared sensor: it sends its position and direction to the Terrain model, which calculates the distance from the IR sensor to the nearest obstacle and sends it back to the sensor model through SimMan. The IR sensor then sends a voltage associated with that distance to the Controller model. Compass models a digital compass, and Position-Sensor models a position sensor such as a GPS unit or odometer. The Controller model collects the sensory information, calculates the left and right wheel velocities (vl, vr), and sends them to RobotDyn to calculate the new position of the robot.
4.2 Simulation Results
In this section we present the V-Lab® simulation results. Figure 4.10 shows the DEVS atomic and coupled models and the couplings between them, which are fully described earlier in this section.
Figure 4.10. V-Lab® components for autonomous multi-agent simulation implemented in DEVS
Figure 4.11 shows two different 3D views from V-Lab®.
Figure 4.11. 3D views from the V-Lab® simulation
Figure 4.12 shows two robots, one of which is performing path planning to a desired goal while avoiding obstacles.
Figure 4.13 depicts two robots pushing a heavy object cooperatively to an assigned
goal.
Figure 4.12. Path planning among unknown obstacles
Figure 4.13. Cooperative object pushing
5. EXPERIMENT RESULTS
In this section we present the real-world experiment results. The algorithms described in Section 3 are implemented on a Pioneer AT2 robot from ActivMedia Robotics™. The robot has two independent motors and four wheels; the two left wheels are driven by one motor and the two right wheels by the other.
Figure 5.1. Pioneer robot made by ActivMedia Robotics
This robot is equipped with 16 sonar sensors, eight in the front and eight in the rear. Localization is done by odometry; the robot internally calculates its direction from the odometer information. An onboard computer running Windows 2000 is also installed inside the robot.
Using DEVSJAVA and V-Lab®, we implemented the control algorithms proposed in Section 3 on the robot. The following figures show the outcomes of these experiments.
Figure 5.2 shows a very simple path-planning (goal-seeking) task. The robot starts from the initial configuration (X0, Y0, θ0) = (1, 1, π/2) and seeks the goal point (8, 4, π/2) using the control law proposed in Section 3.3.1. Figure 5.2a depicts the left and right velocities produced by the controller vs. time, Figure 5.2b the trajectory of the robot, Figure 5.2c the state variables vs. time, and Figure 5.2d a 2D view from V-Lab® showing the final configuration of the robot and its path.
Figure 5.3 shows the goal-seeking task with three obstacles in the terrain; DEVS-Fuzzy-Logic is used to perform obstacle avoidance. The robot is assigned to go from its initial configuration (10, 9, −π/2) to the desired location (2, 2, −π/2) while avoiding the obstacles. Figure 5.3 shows the same task with two different obstacle configurations.
Figure 5.4 illustrates the pushing task. Using the method described in Section 3.3.3, the robot has to push the object located at point (7, 4) to the location (3, 2), starting from the initial position (10, 9, −π/2); after it is done pushing, it goes back to the initial position. Figure 5.5 shows another pushing scenario: the robot is assigned to push the object located at (7, 4) to the point (4, 8), starting from the initial position (0, 0).
The algorithm assumes that the position of the object is known at all times, but in reality this is not the case. To solve this problem, another DEVS model called “objectLocationEstimator” is used, which estimates the position of the object near the robot using sensory information.
Figure 5.2. Path planning: (a) left and right velocities, (b) trajectory of the robot, (c) state variables of the robot vs. time, (d) view from V-Lab® showing the final position of the robot and its trajectory
Figure 5.3. Path planning while avoiding obstacles
Figure 5.4. Object pushing task: (a) V-Lab® real-time view, (b) robot pushing a bottle
Figure 5.5. Object pushing task by a single robot
6. CONCLUSIONS AND DISCUSSIONS
This thesis explores the ability of DEVS in the modeling, simulation and control of autonomous agents. DEVS provides an abstract formalism to specify any system with piecewise-constant input/output. A discrete event numerical integration method is investigated in this work; DEVS numerical integration is an interesting and fast way of solving ordinary differential equations. Formulating the DEVS-Double-Integrator is one of the contributions of this research: it provides a means of calculating the motion of free masses in a 3D space faster, and it is used in our simulation software, V-Lab®. Fuzzy logic is another powerful control tool investigated in this work. The design and implementation of DEVS-Fuzzy Logic, a discrete event version of fuzzy logic, is another contribution; DEVS-Fuzzy Logic has been successfully used on a real mobile robot to perform obstacle avoidance.
V-Lab®, a virtual laboratory for the simulation of multi-agent systems, is partially developed in this work. V-Lab®, which is built on top of DEVSJAVA, provides a discrete event, modular tool for the simulation of autonomous agents. Using V-Lab®, we simulated and tested our algorithms for mobile robot path planning, obstacle avoidance and a cooperative pushing task by two robots. The exact same V-Lab® controller modules were used on the real robot to successfully perform the path-planning, obstacle-avoidance and pushing assignments.
The main contributions of this thesis are as follows:
1. DEVS Double Integration Method
This method, which provides a discrete event integrator for solving second-order differential equations, is described in Section 2.3.3. We showed that this method reduces the computation error in solving pure second-order differential equations; it is therefore well suited to modeling and simulating free object motion, and it is used in V-Lab® to simulate the motion of obstacles and robots.
2. DEVS-Fuzzy Logic Control
In this thesis we presented a way to enhance fuzzy logic control by means of discrete event tools. In particular, we showed how it is possible to incorporate fuzzy reasoning methods into the DEVS formalism, obtaining what we call Intelligent DEVS (I-DEVS). The resulting discrete-event fuzzy logic is faster than conventional algorithms when a large number of rules and membership functions are employed. To show the ability of Fuzzy-DEVS to control real-time systems, we employed this method in a case study involving a number of robots to be coordinated to achieve the common task of pushing objects toward a storage area, and we explained how to represent and solve this problem in the DEVS formalism. Fuzzy-DEVS was successfully used to perform obstacle avoidance both in simulation and on real-world robots.
One of the limitations of DEVS models is that they can be hard to simulate. In principle, DEVS models can be simulated by a simple program written in any programming language, and the simulation of an atomic DEVS model is not much more complex than that of a discrete time model; however, the simulation of large coupled models can be very complicated. Several free DEVS simulators are available online [13], [14]. In this thesis we used DEVSJAVA [13] to simulate our DEVS models. DEVSJAVA provides Java classes and objects, as well as graphical tools, to easily design and simulate complex DEVS models.
7. REFERENCES
[1] B. P. Zeigler, T. G. Kim and H. Praehofer, "Theory of Modeling and Simulation: Integrating Discrete Event and Continuous Complex Dynamic Systems", 2nd Edition, Academic Press, Boston, 2000.
[2] B. P. Zeigler, H. S. Song, T. G. Kim and H. Praehofer, "DEVS Framework for Modeling, Simulation, Analysis, and Design of Hybrid Systems", Lecture Notes in Computer Science, Vol. 999, pp. 529-551, Springer-Verlag, Berlin, 1995.
[3] X. Hu and B. P. Zeigler, "An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing", Proc. of PERMIS'02, Gaithersburg, Aug. 2002.
[4] F. Lin and H. Ying, "Modeling and Control of Fuzzy Discrete Event Systems", IEEE Transactions on Systems, Man and Cybernetics - Part B, Vol. 32, No. 4, August 2002.
[5] W. M. Wonham, "Supervisory Control of Discrete Event Systems: An Introduction", Systems Control Group, Dept. of Electrical & Computer Engineering, University of Toronto, June 1999.
[6] B. A. Brandin, W. M. Wonham and B. Benhabib, "Manufacturing cell supervisory control - a modular timed discrete-event system approach", in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 846-851, 1993.
[7] B. J. McCarragher and H. Asada, "A discrete event approach to the control of robotic assembly tasks", Proceedings of the IEEE International.
[8] T. M. Sobh and R. Bajcsy, "Autonomous observation under uncertainty", in Proceedings of the IEEE International Conference on Robotics and Automation.
[9] S. L. Ricker, N. Sarkar and K. Rudie, "A Discrete-Event Systems Approach to Modeling Dextrous Manipulation", External Technical Report, Queen's University, 1995.
[10] M. Huber and R. A. Grupen, "A Hybrid Discrete Event Dynamic System Approach to Robot Control", Laboratory for Perceptual Robotics, Department of Computer Science, Univ. of Massachusetts, Technical Report #96-43, October 1996.
[11] H. Darabi, M. A. Jafari and A. L. Buczak, "A Control Switching Theory for Supervisory Control of Discrete Event Systems", IEEE Transactions on Robotics and Automation, Vol. 19, No. 1, Feb. 2003.
[12] M. Jamshidi and A. Zilouchian, "Intelligent Control Systems Using Soft Computing Methodologies", CRC Press, Boca Raton, FL, 2001.
[13] B. Zeigler and H. Sarjoughian, "Introduction to DEVS Modeling and Simulation with JAVA: A Simplified Approach to HLA-Compliant Distributed Simulations", Arizona Center for Integrative Modeling and Simulation. Available at http://www.acims.arizona.edu/.
[14] T. G. Kim, "DEVSim++ User's Manual: C++ Based Simulation with Hierarchical Modular DEVS Models", Korea Advanced Institute of Science and Technology, 1994. Available at http://www.acims.arizona.edu/.
[15] E. Kofman, "Discrete Event Based Simulation and Control of Continuous Systems", Ph.D. Dissertation, Universidad Nacional de Rosario, 2003.
[16] C. Silva, "Vibration: Fundamentals and Practice", CRC Press, 2000.
[17] J. R. Clymer, "System design and evaluation using discrete event simulation with artificial intelligence", Proceedings of the 1993 Winter Simulation Conference.
[18] P. Dadone and H. VanLandingham, "Modeling and control of discrete event dynamic systems: a simulator-based reinforcement-learning paradigm", International Journal of Intelligent Control and Systems, Vol. 2, No. 4, pp. 609-631, 1998.
[19] M. Jamshidi, "Large-Scale Systems: Modeling, Control and Fuzzy Logic", Prentice Hall Publishers, Saddle River, NJ, 1997.
[20] K. Himma, “Bentham’s Pleasure as the Good”, online document:
http://faculty.Washington.edu/himma/phil240/lect7.htm
[21] J. P. Laumond, S. Sekhavat and F. Lamiraux, "Guidelines in Nonholonomic Motion Planning for Mobile Robots", in J.-P. Laumond, editor, "Robot Motion Planning and Control", pp. 1-53, Springer-Verlag, Berlin, 1998.
[22] G. Indiveri, "An Introduction to Wheeled Mobile Robot Kinematics and Dynamics", RoboCamp, Paderborn, April 2002.
[23] A. Aguiar, A. Atassi and A. Pascoal, "Regulation of a Nonholonomic Dynamic Wheeled Mobile Robot with Parametric Modeling Uncertainty Using Lyapunov Functions", Proc. of CDC'2000, the 39th IEEE Conference on Decision and Control, Sydney, Australia, December 2000.
[24] M. Egerstedt, X. Hu and A. Stotsky, "Control of Mobile Platforms Using a Virtual Vehicle Approach", IEEE Transactions on Automatic Control, Vol. 46, No. 11, November 2001.
[25] J. Esposito and V. Kumar, "Closed loop motion plans for mobile robots", Proceedings of the 2000 IEEE International Conference on Robotics & Automation, April 2000.
[26] H. Zhang, V. Kumar and J. Ostrowski, "Motion Planning With Uncertainty", Proceedings of the 1998 IEEE International Conference on Robotics & Automation, May 1998.
[27] H. Xu and S. Yang, "A Novel Tracking Control Method for a Wheeled Mobile Robot", Electronic Journal of Computational Kinematics (EJCK), Vol. 1, No. 1, May 2002.
[28] A. Tayebi, M. Tadjine and A. Rachid, "Quasi-Continuous Output Feedback Control for Nonholonomic Systems in Chained Form", Proc. of the 4th ECC, Brussels, Belgium, July 1997.
[29] L. Jiangzhou, S. Sekhavat, X. Ming and C. Laugier, "Sliding Mode Control for Nonholonomic Mobile Robot", in Proc. of the Int'l Conference on Control, Automation, Robotics and Vision, Singapore, December 2000.
[30] W. Esquivel and L. Chiang, "Nonholonomic path planning among obstacles subject to curvature restrictions", Robotica, Vol. 20, pp. 49-58, Cambridge University Press, 2002.
[31] P. Soueres, A. Balluchi and A. Bicchi, "Optimal Feedback Control for Route Tracking with a Bounded-Curvature Vehicle", Proceedings of the 2000 IEEE International Conference on Robotics & Automation, April 2000.
[32] K. Jiang, L. Seneviratne and S. Earles, "Time-optimal smooth-path motion planning for a mobile robot with kinematic constraints", Robotica, Vol. 15, pp. 547-553, 1997.
[33] B. Graf, J. Hostalet Wandosell and C. Schaeffer, "Flexible Path Planning for Nonholonomic Mobile Robots", Proceedings of the Fourth European Workshop on Advanced Mobile Robots (EUROBOT'01), Lund, Sweden, September 19-21, 2001, pp. 199-206.
[34] V. Isler, "Theoretical Robot Exploration", Technical Report, University of Pennsylvania, Department of Computer Science, Written Preliminary Exam, 2001.
[35] R. Hogg, A. Rankin, S. Roumeliotis, M. McHenry, D. Helmick, C. Bergh and L. Matthies, "Algorithms and Sensors for Small Robot Path Following", Proceedings of the 2002 IEEE International Conference on Robotics & Automation, May 2002.
[36] M. Zhang, S. Peng and Q. Meng, "Neural network and fuzzy logic techniques based collision avoidance for a mobile robot", Robotica, Vol. 15, Cambridge University Press, 1997.
[37] M. Bak, N. Poulsen and O. Ravn, "Path Following Mobile Robot in the Presence of Velocity Constraints", Technical Report IMM-TR-2001-7, Informatics and Mathematical Modelling, Technical University of Denmark (DTU), 2001.
[38] E. Ferreira and P. Khosla, "Multi Agent Collaboration Using Distributed Value Functions", Proceedings of the IEEE Intelligent Vehicle Symposium 2000, October 2000.
[39] H. Berenji and D. Vengerov, "Cooperation and Coordination Between Fuzzy Reinforcement Learning Agents in Continuous State Partially Observable Markov Decision Processes", 1999 IEEE International Fuzzy Systems Conference Proceedings.
[40] A. Dutech, O. Buffet and F. Charpillet, "Multi-Agent Systems by Incremental Gradient Reinforcement Learning", Proc. of the IJCAI'01 Conference, Seattle, USA, 2001.
[41] C. Jennings, D. Murray and J. Little, "Cooperative Robot Localization with Vision-based Mapping", Proceedings of the 1999 IEEE International Conference on Robotics & Automation, May 1999.
[42] W. Burgard, M. Moors, D. Fox, R. Simmons and S. Thrun, "Collaborative Multi-Robot Exploration", Proceedings of the 2000 IEEE International Conference on Robotics & Automation.
[43] I. Rekleitis, G. Dudek and E. Milios, "Multi-Robot Collaboration for Robust Exploration", Proceedings of the 2000 IEEE International Conference on Robotics & Automation, April 2000.
[44] K. Goldberg and B. Chen, "Collaborative Control of Robot Motion: Robustness to Error", Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct. 2001.
[45] G. Pereira et al., "Coordination of multiple mobile robots in an object carrying task using implicit communication", Proceedings of the 2002 IEEE International Conference on Robotics and Automation, pp. 281-286, Washington, DC, USA, May 2002.
[46] M. Mataric, M. Nilsson and K. Simsarian, "Cooperative Multi-Robot Box-Pushing", Proceedings of IROS'95, pp. 556-561, Pittsburgh, PA.
[47] B. Brumitt and A. Stentz, "GRAMMPS: A Generalized Mission Planner for Multiple Mobile Robots in Unstructured Environments", Robotics Institute, Carnegie Mellon University, Pittsburgh, PA.
[48] The Object Management Group, "CORBA Basics", electronic document, http://www.omg.org/gettingstarted/corbafaq.htm
[49] M. Jamshidi, S. Sheikh-Bahaei, J. Kitzinger, P. Sridhar, S. Beatty, S. Xia, Y. Wang, T. Song, J. Liu, E. Tunstel, Jr., M. Akbarzadeh, P. Lino, U. Dole, A. El-Osery, M. Fathi, X. Hu and B. P. Zeigler, "V-Lab - A Distributed Intelligent Discrete-Event Environment for Autonomous Agents Simulation", Intelligent Automation and Soft Computing, Vol. 9, No. 3, pp. 181-214, 2003.
[50] M. Jamshidi, S. Sheikh-Bahaei, J. Kitzinger, P. Sridhar, S. Xia, Y. Wang, J. Liu, E. Tunstel, Jr., M. Akbarzadeh, A. El-Osery, M. Fathi, X. Hu and B. P. Zeigler, "A Distributed Intelligent Discrete-Event Environment for Autonomous Agents Simulation", Chapter 11 in "Applied System Simulation", Kluwer Publications, 2003, pp. 241-274.
[51] A. El-Osery et al., "V-Lab® - A Virtual Laboratory for Autonomous Agents - SLA-Based Controllers", IEEE Transactions on Systems, Man and Cybernetics, Vol. 32, No. 6, pp. 791-803, 2002.
[52] J. Blow, "Practical Collision Detection", Lecture #177, http://www.bolt-action.com/