5 - localization and mapping
TRANSCRIPT
-
8/8/2019 5 - Localization and Mapping
Autonomous Mobile Robots
Autonomous Systems Lab, Zürich
Localization and Mapping
[Diagram: the see-think-act loop. Perception builds a Local Map; Localization matches it against the Global Map to yield the "Position"; Cognition plans a Path using the Environment Model; Motion Control acts on the Real World Environment.]
-
R. Siegwart, ETH Zurich - ASL
2 Localization and Mapping
Noise and aliasing; odometric position estimation
To localize or not to localize
Belief representation
Map representation
Probabilistic map-based localization
Other examples of localization systems
Autonomous map building (SLAM)
-
3 Localization, Where am I?
?
Odometry, Dead Reckoning
Localization based on external sensors,
beacons or landmarks
Probabilistic Map Based Localization
-
4 Challenges of Localization
Knowing the absolute position (e.g. from GPS) is not sufficient
Localization at human scale, in relation to the environment
Planning in the Cognition step requires more than only position as input
Perception and motion play an important role
Sensor noise
Sensor aliasing
Effector noise
Odometric position estimation
-
5 Sensor Noise
Sensor noise is mainly influenced by the environment (e.g. surface, illumination) or by the measurement principle itself (e.g. interference between ultrasonic sensors)
Sensor noise drastically reduces the useful information of sensor readings. The solution is:
to take multiple readings into account
to employ temporal and/or multi-sensor fusion
-
6 Sensor Aliasing
In robots, non-uniqueness of sensor readings is the norm
Even with multiple sensors, there is a many-to-one mapping from environmental states to the robot's perceptual inputs
Therefore the amount of information perceived by the sensors is generally insufficient to identify the robot's position from a single reading
The robot's localization is usually based on a series of readings
Sufficient information is recovered by the robot over time
-
7 Effector Noise: Odometry, Deduced Reckoning
Odometry and dead reckoning: the position update is based on proprioceptive sensors
Odometry: wheel sensors only
Dead reckoning: also heading sensors
The movement of the robot, sensed with wheel encoders and/or heading sensors, is integrated to obtain the position.
Pros: straightforward, easy
Cons: errors are integrated -> unbounded
Using additional heading sensors (e.g. a gyroscope) might help to reduce the accumulated errors, but the main problems remain the same.
-
8 Odometry: Error sources
deterministic (systematic) vs. non-deterministic (non-systematic)
Deterministic errors can be eliminated by proper calibration of the system.
Non-deterministic errors have to be described by error models and will always lead to an uncertain position estimate.
Major error sources:
Limited resolution during integration (time increments, measurement resolution)
Misalignment of the wheels (deterministic)
Unequal wheel diameter (deterministic)
Variation in the contact point of the wheel
Unequal floor contact (slipping, not planar, ...)
-
9 Odometry: Classification of Integration Errors
Range error: integrated path length (distance) of the robot's movement
sum of the wheel movements
Turn error: similar to range error, but for turns
difference of the wheel motions
Drift error: a difference in the error of the wheels leads to an error in the robot's angular orientation
Over long periods of time, turn and drift errors far outweigh range errors!
Consider moving forward on a straight line along the x axis. The error in the y-position introduced by a move of d meters will have a component of d·sin(Δθ), which can be quite large as the angular error Δθ grows.
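To make the last point concrete, a tiny numeric sketch of the lateral error d·sin(Δθ); the distance and heading-error values are illustrative, not from the slides:

```python
import math

def lateral_error(d, dtheta_deg):
    """Lateral (y) error after driving d meters straight with a
    constant heading error of dtheta_deg degrees."""
    return d * math.sin(math.radians(dtheta_deg))

# A heading error of only 2 degrees already turns into roughly
# 35 cm of lateral error over a 10 m straight drive.
err = lateral_error(10.0, 2.0)
```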
-
10 Odometry: The Differential Drive Robot (1)
\[ p = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix}, \qquad p' = p + \Delta p \]
-
11 Odometry: The Differential Drive Robot (2)
Kinematics
-
12 Odometry: The Differential Drive Robot (3)
Error model
-
13 Odometry: Growth of Pose uncertainty for Straight Line Movement
Note: Errors perpendicular to the direction of movement are growing much faster!
-
14 Odometry: Growth of Pose uncertainty for Movement on a Circle
Note: The error ellipse does not remain perpendicular to the direction of movement!
-
15 Odometry: example of non-Gaussian error model
Note: Errors are not shaped like ellipses!
[Fox, Thrun, Burgard, Dellaert, 2000]
Courtesy AI Lab, Stanford
-
16 Odometry: Calibration of Errors I (Borenstein [5])
The unidirectional square path experiment
[Image 1, Borenstein]
-
17 Odometry: Calibration of Errors II (Borenstein [5])
The bi-directional square path experiment
[Images 2/3, Borenstein]
-
18 Odometry: Calibration of Errors III (Borenstein [5])
The deterministic and non-deterministic errors
-
19 To localize or not?
How to navigate between A and B:
navigation without hitting obstacles
detection of the goal location
Possible by always following the left wall
However, how to detect that the goal is reached?
-
20 Behavior Based Navigation
-
21 Model Based Navigation
-
22 Belief Representation
a) Continuous map with single hypothesis
b) Continuous map with multiple hypotheses
c) Discretized map with probability distribution
d) Discretized topological map with probability distribution
-
23 Belief Representation: Characteristics
Continuous
Precision bound by sensor data
Typically single-hypothesis pose estimate
Lost when diverging (for single hypothesis)
Compact representation and typically reasonable in processing power

Discrete
Precision bound by resolution of discretisation
Typically multiple-hypothesis pose estimate
Never lost (when diverging, converges to another cell)
Considerable memory and processing power needed (not the case for topological maps)
-
24 Bayesian Approach: A taxonomy of probabilistic models
[Diagram: taxonomy ordered from more general to more specific: Bayesian Programs > Bayesian Networks > DBNs > Bayesian Filters > {Markov Chains; discrete, semi-continuous and continuous HMMs; Markov Localization; MCML; MDPs; POMDPs; Kalman Filters; Particle Filters}, each annotated with its graphical model over S: state, O: observation, A: action. Courtesy of Julien Diard]
-
25 Single-hypothesis Belief Continuous Line-Map
-
26 Single-hypothesis Belief Grid and Topological Map
-
27 Grid-based Representation - Multi Hypothesis
Grid size around 20 cm².
Courtesy of W. Burgard
-
28 Map Representation
1. Map precision vs. application
2. Feature precision vs. map precision
3. Precision vs. computational complexity
Continuous Representation
Decomposition (Discretisation)
-
29 Representation of the Environment
Environment Representation
Continuous metric: x, y, θ
Discrete metric: metric grid
Discrete topological: topological grid
Environment Modeling
Raw sensor data, e.g. laser range data, grayscale images
large volume of data, low distinctiveness on the level of individual values
makes use of all acquired information
Low-level features, e.g. lines or other geometric features
medium volume of data, average distinctiveness
filters out the useful information, still ambiguities
High-level features, e.g. doors, a car, the Eiffel tower
low volume of data, high distinctiveness
filters out the useful information, few/no ambiguities, but possibly not enough information
-
30 Map Representation: Continuous Line-Based
a) Architecture map
b) Representation with set of infinite lines
-
31 Map Representation: Decomposition (1)
Exact cell decomposition - Polygons
-
32 Map Representation: Decomposition (2)
Fixed cell decomposition
Narrow passages disappear
-
33 Map Representation: Decomposition (3)
Adaptive cell decomposition
-
34 Map Representation: Decomposition (4)
Fixed cell decomposition: example with very small cells
Courtesy of S. Thrun
-
35 Map Representation: Decomposition (5)
Topological Decomposition
-
36 Map Representation: Decomposition (6)
Topological Decomposition
node (location)
edge (connectivity)
-
37 Map Representation: Decomposition (7)
Topological Decomposition
-
38 State-of-the-Art: Current Challenges in Map Representation
Real world is dynamic
Perception is still a major challenge
Error prone
Extraction of useful information difficult
Traversal of open space
How to build up topology (boundaries of nodes)
Sensor fusion
2D...3D
-
39 Sneak Preview: Particle Filter Localization
Alternative discretisation of the space
Monte Carlo method: simulate random errors, one for each sample of the robot state
Propagate samples (particles), add and prune as you go along
Example movies from the Stanford campus
[Fox, Thrun, Burgard, Dellaert, 2000]
Courtesy AI Lab, Stanford
5 - Localization and Mapping5
-
8/8/2019 5 - Localization and Mapping
40/122
R. Siegwart, ETH Zurich - ASL
5
40 Localization and Map Building
Noise and aliasing; odometric position estimation
To localize or not to localize
Belief representation
Map representation
Probabilistic map-based localization
Other examples of localization systems
Autonomous map building
-
41 Localization, Where am I?
?
Odometry, Dead Reckoning
Localization based on external sensors,
beacons or landmarks
Probabilistic Map Based Localization
[Diagram: general schematic for map-based localization. The Encoder drives a Prediction of Position (e.g. odometry); Perception yields an Observation (raw sensor data or extracted features); Matching compares the observation with the Map database at the predicted position; matched observations feed the Position Update (Estimation).]
-
42 Probabilistic, Map-Based Localization (1)
Consider a mobile robot moving in a known environment.
As it starts to move, say from a precisely known location, it can keep track of its position using odometry.
However, after some movement the robot will become very uncertain about its position, so it must update its position estimate using an observation of its environment.
The observation also leads to an estimate of the robot's position, which can then be fused with the odometric estimate to get the best possible update of the robot's actual position.
5 - Localization and Mapping5
-
8/8/2019 5 - Localization and Mapping
43/122
R. Siegwart, ETH Zurich - ASL
5
43 Probabilistic, Map-Based Localization (2)
Action update
action model ACT: s'_t = Act(o_t, s_{t-1})
with o_t: encoder measurement, s_{t-1}: prior belief state
-> increases uncertainty
Perception update
perception model SEE: s_t = See(i_t, s'_t)
with i_t: exteroceptive sensor inputs, s_t: updated belief state
-> decreases uncertainty
5 - Localization and Mapping5
-
8/8/2019 5 - Localization and Mapping
44/122
R. Siegwart, ETH Zurich - ASL
5
44 Improving belief state by moving
-
45 Probabilistic, Map-Based Localization (3)
Given
the position estimate p̂(k|k)
its covariance Σ_p(k|k) for time k,
the current control input u(k)
the current set of observations Z(k+1) and
the map M(k)
Compute the
new (a posteriori) position estimate p̂(k+1|k+1) and
its covariance Σ_p(k+1|k+1)
Such a procedure usually involves five steps:
5 - Localization and Mapping5
-
8/8/2019 5 - Localization and Mapping
46/122
R. Siegwart, ETH Zurich - ASL
46 The Five Steps for Map-Based Localization
1. Prediction based on previous estimate and odometry
2. Observation with on-board sensors
3. Measurement prediction based on prediction and map
4. Matching of observation and map
5. Estimation -> position update (a posteriori position)
[Diagram: the five-steps loop. Encoder -> Prediction of Measurement and Position (odometry); on-board sensors -> Observation (raw sensor data or extracted features); Matching pairs predicted features with observations using the Map database; Estimation (fusion) produces the position estimate.]
-
47 Markov vs. Kalman Filter Localization
Markov localization
localization starting from any unknown position
recovers from ambiguous situations
However, updating the probability of all positions within the whole state space at any time requires a discrete representation of the space (grid). The required memory and computational power can thus become very large if a fine grid is used.
Kalman filter localization
tracks the robot and is inherently very precise and efficient.
However, if the uncertainty of the robot becomes too large (e.g. collision with an object) the Kalman filter will fail and the position is definitively lost.
-
48 Markov Localization (1)
Markov localization uses an explicit, discrete representation for the probability of all positions in the state space.
This is usually done by representing the environment by a grid or a topological graph with a finite number of possible states (positions).
During each update, the probability for each state (element) of the entire space is updated.
-
49 Markov Localization (2): Applying Bayes theory to robot localization
P(A): probability that A is true.
e.g. p(r_t = l): probability that the robot r is at position l at time t
We wish to compute the probability of each individual robot position given actions and sensor measurements.
P(A|B): conditional probability of A given that we know B.
e.g. p(r_t = l | i_t): probability that the robot is at position l given the sensor input i_t
Product rule: p(A ∧ B) = p(A|B) p(B) = p(B|A) p(A)
Bayes rule: p(A|B) = p(B|A) p(A) / p(B)
5 - Localization and Mapping5
-
8/8/2019 5 - Localization and Mapping
50/122
R. Siegwart, ETH Zurich - ASL
50 Markov Localization (3): Applying Bayes theory to robot localization
Bayes rule: p(l|i) = p(i|l) p(l) / p(i)
Map from a belief state and a sensor input to a refined belief state (SEE):
p(l): belief state before the perceptual update process
p(i|l): probability of getting measurement i when being at position l -> MAP
consult the robot's map, identify the probability of a certain sensor reading for each possible position in the map
p(i): normalization factor so that the sum over all l in L equals 1.
-
51 Markov Localization (4): Applying Bayes theory to robot localization
Bayes rule: p(l_t | o_t) = ∫ p(l_t | l_{t-1}, o_t) p(l_{t-1}) dl_{t-1}
Map from a belief state and an action to a new belief state (ACT), summing over all possible ways in which the robot may have reached l.
with l_t and l_{t-1}: positions, o_t: odometric measurement
Markov assumption: the update only depends on the previous state and the most recent actions and perception.
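The SEE and ACT updates can be sketched for a discrete (grid) state space; the toy sensor and motion models below are made up for illustration, not from the lecture:

```python
def see(belief, p_i_given_l):
    """Perception update (SEE): belief'[l] is proportional to
    p(i|l) * belief[l], then normalized so the sum over l equals 1."""
    post = [p * b for p, b in zip(p_i_given_l, belief)]
    s = sum(post)
    return [p / s for p in post]

def act(belief, p_move):
    """Action update (ACT): belief'[l] = sum over l' of
    p(l | l', o) * belief[l'].  p_move[d] is the probability of
    actually moving d cells (toy model, cyclic world)."""
    n = len(belief)
    post = [0.0] * n
    for l_prev, b in enumerate(belief):
        for d, p in p_move.items():
            post[(l_prev + d) % n] += p * b
    return post

# Uniform prior over 4 cells, then one perception update.
belief = see([0.25] * 4, [0.9, 0.1, 0.9, 0.1])  # -> [0.45, 0.05, 0.45, 0.05]
```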
-
52 Markov Localization: Case Study 1 - Topological Map (1)
The Dervish Robot
Topological Localization with Sonar
-
53 Markov Localization: Case Study 1 - Topological Map (2)
Topological map of office-type environment
-
54 Markov Localization: Case Study 1 - Topological Map (3)
Update of the belief state for position n given the percept-pair i: p(n|i) = p(i|n) p(n) / p(i)
p(n|i): new likelihood for being in position n
p(n): current belief state
p(i|n): probability of seeing i in n (see table)
No action update!
However, the robot is moving and therefore we can apply a combination of action and perception update
t-i is used instead of t-1 because the topological distance between n' and n strongly depends on the specific topological map
-
55 Markov Localization: Case Study 1 - Topological Map (4)
The calculation
is calculated by multiplying the probability of generating perceptual event i at position n by the probability of having failed to generate perceptual event s at all nodes between n' and n.
-
56 Markov Localization: Case Study 1 - Topological Map (5)
Example calculation: assume that the robot has two nonzero belief states
p(1-2) = 1.0; p(2-3) = 0.2 *
and that it is facing east with certainty.
Perceptual event: open hallway on its left and open door on its right.
State 2-3 will potentially progress to 3 and 3-4 to 4.
States 3 and 3-4 can be eliminated because the likelihood of detecting an open door there is zero.
The likelihood of reaching state 4 is the product of the initial likelihood p(2-3) = 0.2, (a) the likelihood of detecting nothing at node 3, and (b) the likelihood of detecting a hallway on the left and a door on the right at node 4. (For simplicity we assume that the likelihood of detecting nothing at node 3-4 is 1.0.)
(a) occurs only if Dervish fails to detect the door on its left at node 3 (either closed or open), [0.6·0.4 + (1-0.6)·0.05], and correctly detects nothing on its right, 0.7.
(b) occurs if Dervish correctly identifies the open hallway on its left at node 4, 0.90, and mistakes the right hallway for an open door, 0.10.
This leads to: 0.2 · [0.6·0.4 + 0.4·0.05] · 0.7 · [0.9·0.1] -> p(4) = 0.003.
A similar calculation for progress from 1-2 gives p(2) = 0.3.
* Note that the probabilities do not sum up to one. For simplicity, normalization was avoided in this example.
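The slide's arithmetic can be reproduced directly; all certainty values are the ones quoted in the example:

```python
# Initial (unnormalized) belief of being in state 2-3.
p_2_3 = 0.2

# (a) Dervish fails to detect the door on its left at node 3
#     (closed with prior 0.6, miss probability 0.4; open with prior
#     0.4, miss probability 0.05) and correctly detects nothing on
#     its right (0.7).
p_a = (0.6 * 0.4 + 0.4 * 0.05) * 0.7

# (b) At node 4 it correctly sees the open hallway on its left (0.9)
#     and mistakes the right hallway for an open door (0.1).
p_b = 0.9 * 0.1

p_4 = p_2_3 * p_a * p_b   # unnormalized, as in the slide
```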
-
57 Markov Localization: Case Study 1 - Topological Map (6)
Topological map of office-type environment
-
58 Markov Localization: Case Study 2 Grid Map (3)
The 1D case
1. Start
No knowledge at start, thus we have a uniform probability distribution.
2. Robot perceives the first pillar
Seeing only one pillar, the probabilities of being at pillar 1, 2 or 3 are equal.
3. Robot moves
The action model enables us to estimate the new probability distribution based on the previous one and the motion.
4. Robot perceives the second pillar
Based on all prior knowledge, the probability of being at pillar 2 becomes dominant.
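A minimal runnable version of this 1D scenario; the world layout, the sensor-model values (0.9 hit, 0.05 false alarm) and the noise-free motion are illustrative assumptions, not from the lecture:

```python
def normalize(b):
    s = sum(b)
    return [x / s for x in b]

def sense_pillar(belief, pillars, p_hit=0.9, p_false=0.05):
    """Perception update after the robot reports 'I see a pillar'."""
    return normalize([b * (p_hit if i in pillars else p_false)
                      for i, b in enumerate(belief)])

def move(belief, d):
    """Action update for an exact move of d cells (toy, noise-free);
    mass shifted out of the world is dropped, then renormalized."""
    n = len(belief)
    post = [0.0] * n
    for i, b in enumerate(belief):
        if 0 <= i + d < n:
            post[i + d] += b
    return normalize(post)

n, pillars = 13, {2, 5, 9}          # pillars at irregular spacing
belief = [1.0 / n] * n              # 1. start: uniform
belief = sense_pillar(belief, pillars)  # 2. three equal peaks
belief = move(belief, 3)                # 3. robot moves 3 cells
belief = sense_pillar(belief, pillars)  # 4. second pillar seen
best = belief.index(max(belief))        # cell 5, i.e. pillar 2
```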
-
59 Markov Localization: Case Study 2 Grid Map (1)
Fine fixed-decomposition grid (x, y, θ): 15 cm x 15 cm x 1°
Action and perception update
Action update:
sum over previous possible positions and the motion model
discrete version of eq. 5.22
Perception update:
given perception i, what is the probability of being at location l
Courtesy of W. Burgard
-
60 Markov Localization: Case Study 2 Grid Map (2)
The critical challenge is the calculation of p(i|l)
The number of possible sensor readings and geometric contexts is extremely large
p(i|l) is computed using a model of the robot's sensor behavior, its position l, and the local metric map around l.
Assumptions:
Measurement error can be described by a distribution with a mean
Non-zero chance for any measurement
Courtesy of W. Burgard
-
Markov Localization: Case Study 2 Grid Map (3)
How to calculate p(i|l): examples
Depends largely on the sensor and environment model
E.g. counting the number of sensor readings that lie in an occupied cell for a given pose (x, y, θ).
E.g. learning p(i|l) from experiments -> what is the probability of seeing a door if the robot stands in front of a door at a certain angle.
-
Markov Localization: Case Study 2 Grid Map (4)
Example 1: Office Building (positions 3, 4 and 5)
Courtesy of W. Burgard
-
63 Markov Localization: Case Study 2 Grid Map (5)
Example 2: Museum, laser scan 1
Courtesy of W. Burgard
-
64 Markov Localization: Case Study 2 Grid Map (6)
Example 2: Museum, laser scan 2
Courtesy of W. Burgard
-
65 Markov Localization: Case Study 2 Grid Map (7)
Example 2: Museum, laser scan 3
Courtesy of W. Burgard
-
66 Markov Localization: Case Study 2 Grid Map (8)
Example 2: Museum, laser scan 13
Courtesy of W. Burgard
-
67 Markov Localization: Case Study 2 Grid Map (9)
Example 2: Museum, laser scan 21
Courtesy of W. Burgard
-
68 Markov Localization: Case Study 2 Grid Map (10)
Fine fixed-decomposition grids result in a huge state space
Very large processing power needed
Large memory requirement
Reducing complexity
Various approaches have been proposed for reducing complexity
The main goal is to reduce the number of states that are updated in each step
Randomized Sampling / Particle Filter
Approximate the belief state by representing only a representative subset of all states (possible locations)
E.g. update only 10% of all possible locations
The sampling process is typically weighted, e.g. put more samples around the local peaks in the probability density function
However, you have to ensure that some less likely locations are still tracked, otherwise the robot might get lost
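A sketch of the weighted sampling idea, with a small uniformly scattered fraction so less likely locations stay tracked; the function name, the fractions and the integer state space are assumptions for illustration:

```python
import random

def resample(particles, weights, n, uniform_frac=0.05, n_states=100):
    """Draw n new particles in proportion to the weights, but reserve
    a small fraction of uniformly random states so that unlikely
    locations are still represented (lost-robot recovery)."""
    n_uniform = max(1, int(n * uniform_frac))
    kept = random.choices(particles, weights=weights, k=n - n_uniform)
    scattered = [random.randrange(n_states) for _ in range(n_uniform)]
    return kept + scattered

random.seed(1)
# All probability mass sits on state 42; almost all new particles
# concentrate there, a handful are scattered uniformly.
new = resample(list(range(100)),
               [1.0 if i == 42 else 0.0 for i in range(100)], n=200)
```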
-
69 Kalman Filter Localization
-
70 Introduction to Kalman Filter (1)
Two measurements q̂_1 with variance σ_1² and q̂_2 with variance σ_2²
Weighted least squares: S = Σ_i w_i (q̂ - q_i)², with weights w_i = 1/σ_i²
Finding the minimum error: set ∂S/∂q̂ = 0
After some calculation and rearrangement: q̂ = q̂_1 + [σ_1² / (σ_1² + σ_2²)] (q̂_2 - q̂_1)
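The resulting variance-weighted fusion rule in code (a sketch; variable names are mine):

```python
def fuse(q1, var1, q2, var2):
    """Variance-weighted fusion of two scalar measurements:
    q = q1 + var1/(var1+var2) * (q2 - q1), and the fused variance
    var1*var2/(var1+var2) is smaller than either input variance."""
    k = var1 / (var1 + var2)
    q = q1 + k * (q2 - q1)
    var = var1 - k * var1
    return q, var

# Two equally trusted measurements average; the variance halves.
q, v = fuse(10.0, 1.0, 12.0, 1.0)
```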
-
71 Introduction to Kalman Filter (2)
In Kalman filter notation: x̂_{k+1} = x̂_k + K_{k+1} (z_{k+1} - x̂_k), with gain K_{k+1} = σ_k² / (σ_k² + σ_{z,k+1}²) and updated variance σ_{k+1}² = σ_k² - K_{k+1} σ_k²
-
72 Introduction to Kalman Filter (3)
Dynamic prediction (robot moving)
u = velocity, w = noise
Motion model: x̂'_{k+1} = x̂_k + u·Δt, with predicted variance σ'²_{k+1} = σ_k² + σ_w²
Combining fusion and dynamic prediction yields the full predict-update cycle.
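Combining the two steps gives a scalar predict-update loop; the velocity, noise and measurement numbers below are illustrative:

```python
def kf_predict(x, var, u, var_w):
    """Dynamic prediction: move by u; motion noise var_w is added,
    so uncertainty grows with every step."""
    return x + u, var + var_w

def kf_update(x, var, z, var_z):
    """Estimation: fuse the prediction with a measurement z via the
    Kalman gain; uncertainty shrinks."""
    k = var / (var + var_z)
    return x + k * (z - x), var - k * var

x, var = 0.0, 0.0
for _ in range(5):                        # drive 1 m per step
    x, var = kf_predict(x, var, 1.0, 0.1)  # variance grows to 0.5
x, var = kf_update(x, var, 5.3, 0.05)      # one observation shrinks it
```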
-
73 The Five Steps for Map-Based Localization
1. Prediction based on previous estimate and odometry
2. Observation with on-board sensors
3. Measurement prediction based on prediction and map
4. Matching of observation and map
5. Estimation -> position update (a posteriori position)
[Diagram: the same five-steps schematic as shown on slide 46.]
-
74 Kalman Filter for Mobile Robot Localization: Robot Position Prediction
In a first step, the robot's position at time step k+1 is predicted based on its old location (time step k) and its movement due to the control input u(k):

\[ \hat p(k{+}1|k) = f(\hat p(k|k), u(k)) \]

\[ \Sigma_p(k{+}1|k) = \nabla_p f\, \Sigma_p(k|k)\, \nabla_p f^T + \nabla_u f\, \Sigma_u(k)\, \nabla_u f^T \]

f: odometry function
-
75 Kalman Filter for Mobile Robot Localization: Robot Position Prediction: Example
\[
\hat p(k{+}1|k) = \hat p(k|k) + \hat u(k)
= \begin{bmatrix} x(k) \\ y(k) \\ \theta(k) \end{bmatrix}
+ \begin{bmatrix}
\frac{\Delta s_r + \Delta s_l}{2}\cos\!\big(\theta + \frac{\Delta s_r - \Delta s_l}{2b}\big) \\
\frac{\Delta s_r + \Delta s_l}{2}\sin\!\big(\theta + \frac{\Delta s_r - \Delta s_l}{2b}\big) \\
\frac{\Delta s_r - \Delta s_l}{b}
\end{bmatrix}
\]

\[
\Sigma_p(k{+}1|k) = \nabla_p f\, \Sigma_p(k|k)\, \nabla_p f^T + \nabla_u f\, \Sigma_u(k)\, \nabla_u f^T,
\qquad
\Sigma_u(k) = \begin{bmatrix} k_r|\Delta s_r| & 0 \\ 0 & k_l|\Delta s_l| \end{bmatrix}
\]

Odometry
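A sketch of this prediction step in code, including the covariance propagation; the error-growth constants kr, kl and the test values are illustrative, while the Jacobians follow the odometry equations above:

```python
import math

def odometry_step(pose, cov, ds_r, ds_l, b, kr=0.01, kl=0.01):
    """One odometry prediction for a differential-drive robot.
    pose: (x, y, theta); cov: 3x3 pose covariance (list of lists);
    ds_r, ds_l: wheel travel increments; b: wheelbase;
    kr, kl: error-growth constants (illustrative values)."""
    x, y, th = pose
    ds = (ds_r + ds_l) / 2.0
    dth = (ds_r - ds_l) / b
    a = th + dth / 2.0
    pose_new = (x + ds * math.cos(a), y + ds * math.sin(a), th + dth)

    # Jacobians of f w.r.t. the pose (Fp) and the wheel increments (Fu).
    Fp = [[1.0, 0.0, -ds * math.sin(a)],
          [0.0, 1.0,  ds * math.cos(a)],
          [0.0, 0.0,  1.0]]
    Fu = [[0.5 * math.cos(a) - ds / (2 * b) * math.sin(a),
           0.5 * math.cos(a) + ds / (2 * b) * math.sin(a)],
          [0.5 * math.sin(a) + ds / (2 * b) * math.cos(a),
           0.5 * math.sin(a) - ds / (2 * b) * math.cos(a)],
          [1.0 / b, -1.0 / b]]
    Su = [[kr * abs(ds_r), 0.0], [0.0, kl * abs(ds_l)]]

    def mat_mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def mat_t(A):
        return [list(r) for r in zip(*A)]

    term_p = mat_mul(mat_mul(Fp, cov), mat_t(Fp))
    term_u = mat_mul(mat_mul(Fu, Su), mat_t(Fu))
    cov_new = [[term_p[i][j] + term_u[i][j] for j in range(3)]
               for i in range(3)]
    return pose_new, cov_new

# Straight 1 m drive from the origin: the lateral (y) uncertainty
# grows faster than the longitudinal (x) one, as slide 13 notes.
pose, cov = odometry_step((0.0, 0.0, 0.0),
                          [[0.0] * 3 for _ in range(3)], 1.0, 1.0, 0.5)
```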
5 - Localization and Mapping5
76 Kalman Filter for Mobile Robot Localization: Observation
-
8/8/2019 5 - Localization and Mapping
76/122
R. Siegwart, ETH Zurich - ASL
The second step is to obtain the observation Z(k+1) (measurements) from the robot's sensors at the new location at time k+1.
The observation usually consists of a set of n_0 single observations z_j(k+1) extracted from the different sensors' signals. It can represent raw data scans as well as features like lines, doors or any kind of landmarks.
The parameters of the targets are usually observed in the sensor frame {S}.
Therefore the observations have to be transformed to the world frame {W}, or the measurement prediction has to be transformed to the sensor frame {S}.
This transformation is specified in the function h_i (see later).
-
77 Kalman Filter for Mobile Robot Localization: Observation: Example
Raw data of the laser scanner; extracted lines; extracted lines in model space (sensor/robot frame), each line j parameterized by (α_j, r_j):

\[
z_j(k{+}1) = \begin{bmatrix} \alpha_j \\ r_j \end{bmatrix},
\qquad
\Sigma_{R,j}(k{+}1) = \begin{bmatrix} \sigma_{\alpha\alpha} & \sigma_{\alpha r} \\ \sigma_{r\alpha} & \sigma_{rr} \end{bmatrix}
\]
-
78 Kalman Filter for Mobile Robot Localization: Measurement Prediction
In the next step we use the predicted robot position p̂(k+1|k) and the map M(k) to generate multiple predicted observations ẑ_t.
They have to be transformed into the sensor frame:
ẑ_i(k+1) = h_i(z_t, p̂(k+1|k))
We can now define the measurement prediction as the set containing all n_i predicted observations:
Ẑ(k+1) = { ẑ_i(k+1) | (1 ≤ i ≤ n_i) }
The function h_i is mainly the coordinate transformation between the world frame and the sensor frame.
-
79 Kalman Filter for Mobile Robot Localization: Measurement Prediction: Example
For the prediction, only the walls that are in the field of view of the robot are selected.
This is done by linking the individual lines to the nodes of the path.
-
80 Kalman Filter for Mobile Robot Localization: Measurement Prediction: Example
The generated measurement predictions have to be transformed to the robot frame {R}.
According to the figure on the previous slide the transformation is given by
and its Jacobian by
-
81 Kalman Filter for Mobile Robot Localization: Matching
Assignment from observations z_j(k+1) (gained by the sensors) to the targets z_t (stored in the map).
For each measurement prediction for which a corresponding observation is found we calculate the innovation:
v_ij(k+1) = z_j(k+1) - ẑ_i(k+1)
and its innovation covariance, found by applying the error propagation law:
Σ_{IN,ij}(k+1) = ∇h_i · Σ_p(k+1|k) · ∇h_iᵀ + Σ_{R,j}(k+1)
The validity of the correspondence between measurement and prediction can e.g. be evaluated through the Mahalanobis distance:
v_ij(k+1)ᵀ · Σ_{IN,ij}(k+1)⁻¹ · v_ij(k+1) ≤ g²
-
82 Kalman Filter for Mobile Robot Localization: Matching: Example
5 - Localization and Mapping5
83 Kalman Filter for Mobile Robot Localization: Matching: Example
To find correspondences (pairs) of predicted and observed features we use the Mahalanobis distance

with
84 Kalman Filter for Mobile Robot Localization: Estimation: Applying the Kalman Filter
Kalman filter gain:

K(k+1) = P(k+1|k) ∇h^T Σ_IN^{−1}(k+1)

Update of the robot's position estimate:

p(k+1|k+1) = p(k+1|k) + K(k+1) v(k+1)

The associated variance:

P(k+1|k+1) = P(k+1|k) − K(k+1) Σ_IN(k+1) K^T(k+1)
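The three update equations translate directly into code; this is a minimal sketch with hypothetical names, assuming the innovation and its covariance come from the matching step:

```python
import numpy as np

def kalman_update(p_pred, P_pred, v, S, H):
    """One EKF position update from a matched feature.

    p_pred, P_pred : predicted pose p(k+1|k) and covariance P(k+1|k)
    v, S           : innovation and innovation covariance from matching
    H              : measurement Jacobian (∇h)
    """
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    p_upd = p_pred + K @ v                # updated position estimate
    P_upd = P_pred - K @ S @ K.T          # updated (reduced) covariance
    return p_upd, P_upd
```

With several matched features, the updates are applied sequentially or the innovations are stacked into one larger measurement vector.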
85 Kalman Filter for Mobile Robot Localization: Estimation: 1D Case
For the one-dimensional case we can show that the estimation corresponds to the one-dimensional Kalman filter presented earlier.
86 Kalman Filter for Mobile Robot Localization: Estimation: Example
Kalman filter estimation of the new robot position:

By fusing the prediction of the robot position (magenta) with the innovation gained by the measurements (green), we get the updated estimate p(k|k) of the robot position (red).
87 Autonomous Indoor Navigation (Pygmalion EPFL)
Very robust on-the-fly localization; one of the first systems with probabilistic sensor fusion.

47 steps, 78 meter length, realistic office environment, conducted 16 times: more than 1 km overall distance.

Partially difficult surfaces (laser), partially few vertical edges (vision).
92 Other Localization Methods (not probabilistic): Positioning Beacon Systems: Triangulation
93 Other Localization Methods (not probabilistic): Positioning Beacon Systems: Triangulation
97 Autonomous Map Building (SLAM)
Starting from an arbitrary initial point,

a mobile robot should be able to autonomously explore the environment with its on-board sensors,

gain knowledge about it,

interpret the scene,

build an appropriate map

and localize itself relative to this map.

SLAM: the Simultaneous Localization and Mapping problem.
99 Map Building: The Problems
1. Map maintaining: keeping track of changes in the environment

(e.g. a disappearing cupboard)

- e.g. a measure of belief in each environment feature

2. Representation and reduction of uncertainty

position of robot -> position of wall

position of wall -> position of robot

probability densities for feature positions

additional exploration strategies
100 General Map Building Schematics
101 Map Representation
M is a set of n probabilistic feature locations. Each feature is represented by its covariance matrix Σ_t and an associated credibility factor c_t.

c_t lies between 0 and 1 and quantifies the belief in the existence of the feature in the environment.

a and b define the learning and forgetting rates, and n_s and n_u are the number of matched and unobserved predictions up to time k, respectively.
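The credibility update can be sketched as follows. The slide's exact formula is not reproduced in the transcript, so the exponential form below is an assumption chosen only to illustrate the roles of a, b, n_s and n_u:

```python
import math

def credibility(n_s, n_u, a, b):
    """Credibility factor c_t in [0, 1] for a map feature.

    Sketch under an ASSUMED exponential form
        c_t = 1 - exp(-(n_s/a - n_u/b));
    matched predictions (n_s) raise the belief at learning rate a,
    unobserved predictions (n_u) lower it at forgetting rate b.
    """
    c = 1.0 - math.exp(-(n_s / a - n_u / b))
    return min(max(c, 0.0), 1.0)   # clamp to [0, 1]
```

A feature whose credibility drops to 0 (e.g. the disappearing cupboard) can then be pruned from the map.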
102 Autonomous Map Building: Stochastic Map Technique
Stacked system state vector:
State covariance matrix:
103 Particle Filter SLAM 1
Overview

Simultaneous localization and mapping (SLAM) is a technique used by robots and autonomous vehicles to build up a map within an unknown environment while at the same time keeping track of their current position. This is not as straightforward as it might sound, due to inherent uncertainties in discerning the robot's relative movement from its various sensors. If at the next iteration of map building the measured distance and direction traveled has a slight inaccuracy, then any features being added to the map will contain corresponding errors. If unchecked, these positional errors build up cumulatively, grossly distorting the map and therefore the robot's ability to know its precise location. There are various techniques to compensate for this, such as recognizing features that the robot has come across previously and re-skewing recent parts of the map to make sure the two instances of that feature become one. Some of the statistical techniques used in SLAM include Kalman filters, particle filters and scan matching of range data.
Sensor characteristics

A sensor is characterized principally by:

1. Noise

2. Dimensionality of the input

a. Planar laser range finder (2D points)

b. 3D laser range finder (3D point cloud)

c. Camera features ...

3. Frame of reference

a. Laser/camera in the robot frame

b. GPS in an earth coordinate frame

c. Accelerometers/gyros in an inertial coordinate frame
104 Particle Filter SLAM 2
SLAM problem

The SLAM problem is addressed using probabilities. SLAM is usually stated as the conditional probability:

p(x_{1:t}, m | z_{1:t}, u_{1:t})

Overview

An online SLAM algorithm factorizes that formula to estimate only the robot state at the current time t:

p(x_t, m | z_{1:t}, u_{1:t})
105 Particle Filter SLAM 3
FastSLAM approach

It solves the SLAM problem using particle filters. Particle filters are mathematical models that represent a probability distribution as a set of discrete particles occupying the state space.

Particle filter update

In the update step a new particle distribution is generated, given the motion model and the controls applied.

a) For each particle:

1. Compare the particle's prediction of the measurements with the actual measurements.

2. Particles whose predictions match the measurements are given a high weight.

b) Filter resample:

Resample particles based on their weights.

Filter resample

Assign each particle a weight depending on how well its estimate of the state agrees with the measurements, and randomly draw particles from the previous distribution based on these weights, creating a new distribution.
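The resampling step above can be sketched as follows. The slide does not commit to a particular scheme; the low-variance (systematic) variant below is one common choice, and `resample` is an illustrative name:

```python
import random

def resample(particles, weights):
    """Low-variance (systematic) resampling: draw a new particle set
    of the same size, with probability proportional to the weights."""
    n = len(particles)
    step = sum(weights) / n
    r = random.uniform(0.0, step)          # single random offset
    new_particles, cum, i = [], weights[0], 0
    for m in range(n):
        u = r + m * step                   # evenly spaced sampling points
        while u > cum:                     # advance to the particle covering u
            i += 1
            cum += weights[i]
        new_particles.append(particles[i])
    return new_particles
```

Systematic resampling keeps the particle count fixed while concentrating particles in high-weight regions, which is exactly the "randomly draw based on weights" step described above.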
106 Particle Filter SLAM 4
FastSLAM approach (continuation)

Formulation

Particle filter SLAM (or FastSLAM) decouples the map of features from the pose:

1. Each particle represents a robot pose.

2. Feature measurements are correlated through the robot pose.

If the robot pose were known, all of the features would be uncorrelated, so FastSLAM treats each pose particle as if it were the true pose, processing all of the feature measurements independently.
107 Particle Filter SLAM 5
FastSLAM approach (continuation)
It is possible to factorize the previous formula (Murphy, 1999) as:

p(x_{1:t}, m | z_{1:t}, u_{1:t}) = p(x_{1:t} | z_{1:t}, u_{1:t}) Π_i p(m_i | x_{1:t}, z_{1:t})
108 Particle Filter SLAM 6
FastSLAM approach (continuation)
Particle set

The robot posterior is solved using Rao-Blackwellized particle filtering with landmarks. Each landmark estimate is represented by a 2x2 EKF (Extended Kalman Filter). Each particle is independent of the others (due to the factorization) and maintains estimates of M landmark positions.
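The per-particle data structure can be sketched as below. `Particle` and `update_landmark` are illustrative names; to keep the sketch short the landmark is observed directly in world coordinates (measurement Jacobian H = I), whereas a real FastSLAM implementation would use the range-bearing measurement model:

```python
import numpy as np

class Particle:
    """One FastSLAM particle: a pose hypothesis plus an independent
    2x2 EKF (mean, covariance) per landmark."""
    def __init__(self, pose):
        self.pose = np.asarray(pose, dtype=float)   # (x, y, theta)
        self.landmarks = {}                          # id -> (mean(2,), cov(2,2))

    def update_landmark(self, lid, z, R):
        """EKF update of landmark `lid` with an observed world position z
        and measurement noise covariance R (H = I for this sketch)."""
        if lid not in self.landmarks:                # first sighting: initialize
            self.landmarks[lid] = (np.asarray(z, float), np.asarray(R, float))
            return
        mu, P = self.landmarks[lid]
        S = P + R                                    # innovation covariance
        K = P @ np.linalg.inv(S)                     # Kalman gain
        mu = mu + K @ (z - mu)
        P = (np.eye(2) - K) @ P
        self.landmarks[lid] = (mu, P)
```

Because the landmark filters are conditioned on the particle's pose history, each particle carries its own private map, which is what makes the factorization work.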
110 Autonomous Map Building
Example of Feature Based Mapping (ETH)
111 Probabilistic Feature Based 3D Maps
[Figure: processing pipeline — photo of the scene; the same scene as a raw 3D scan (raw data); space decomposed into grid cells and cells filled with data; a plane found for every cell using RANSAC (one plane per grid cell); similar neighboring planes fused together; final segmentation into planar segments.]
112 Probabilistic Feature Based 3D SLAM
close-up of a reconstructed hallway

close-up of reconstructed bookshelves

The same experiment as before, but this time planar segments are visualized and integrated into the estimation process.
113 Cyclic Environments
Small local errors accumulate to arbitrarily large global errors!
Courtesy of Sebastian Thrun
This is usually irrelevant for navigation
However, when closing loops, global error does matter
114 Dynamic Environments
Dynamic changes require continuous mapping
If extraction of high-level features were possible, mapping in dynamic environments would become significantly more straightforward

e.g. distinguishing between a human and a wall

Environment modeling is a key factor for robustness
115 Map Building: Exploration and Graph Construction
1. Exploration 2. Graph Construction
1. Exploration

- provides correct topology

- must recognize already visited locations

- backtracking for unexplored openings

2. Graph construction

Where to put the nodes?

Topology-based: at distinctive locations

Metric-based: where features disappear or become visible

[Figure legend: explore, on stack, already examined]
Real Time 3D SLAM with a Single Camera
This presentation is based on the following papers:
Andrew J. Davison, Real-Time Simultaneous Localization and Mapping with a Single Camera, ICCV 2003
Nicholas Molton, Andrew J. Davison and Ian Reid, Locally Planar Patch Features for Real-Time Structure from Motion, BMVC 2004
118 Outline
Visual SLAM versus Structure From Motion

Extended Kalman Filter for First-Order Uncertainty Propagation

Camera Motion Model

Matching of Existing Features

Initialization of new features in the map

Improving feature matching
119 Structure From Motion (SFM)
Take some images of the object to reconstruct.

Features (points, lines, ...) are extracted from all frames and matched among them.

All images are processed simultaneously.

Both camera motion and 3D structure can be recovered by optimally fusing the overall information, up to a scale factor.

Robust solution, but far from being real-time!
120 Visual SLAM
(From Davison03)
Can be real-time!
121 SLAM versus SFM
Repeatable Localization
Ability to estimate the location of the camera after 10 minutes of motion with the same accuracy as was possible after 10 seconds.

Features must be stable, long-term landmarks, not transient (as in SFM).
122 State of the System
The state vector and the first-order uncertainty are:

Where:

In the SLAM approach the covariance matrix is not diagonal; that is, the uncertainty of any feature affects the position estimates of all the other features and of the camera.
123 Extended Kalman Filter (EKF)

The state vector X and the covariance matrix P are updated during camera motion using an EKF:
Prediction: a motion model is used to predict where the camera will be at the next time step (P increases).

Observation: a new feature is measured (P decreases).
124 A Motion Model for Smoothly Moving Camera
Attempts to predict where the camera will be in the next time step
In the case where the camera is attached to a person, the unknown intentions of the person can be statistically modeled.

Most Structure from Motion approaches did not use any motion model!
125 Constant Velocity Model
The unknown intentions, and so unknown accelerations, are taken into account by modeling the acceleration as a zero-mean Gaussian process.

By setting the covariance matrix of the noise n to small or large values, we define the smoothness or rapidity of the motion we expect. In practice these values were used: 4 m/s and 6 rad/s.
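A constant-velocity prediction step can be sketched for a single scalar degree of freedom as below. The function name and state layout are illustrative, and the default `sigma_a` simply reuses the linear value (4) quoted on the slide, with the angular components (6) handled analogously:

```python
import numpy as np

def predict_constant_velocity(x, dt, sigma_a=4.0, rng=None):
    """Constant-velocity prediction for one scalar state dimension.

    x = (position, velocity). Unknown intentions are modeled as a
    zero-mean Gaussian acceleration impulse with std dev sigma_a;
    a larger sigma_a expects more rapid, less smooth motion.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    a = rng.normal(0.0, sigma_a)    # random acceleration (process noise)
    pos, vel = x
    vel = vel + a * dt              # velocity perturbed by the impulse
    pos = pos + vel * dt            # position advanced at (new) velocity
    return np.array([pos, vel])
```

In the filter, the same model also propagates the covariance: P grows during prediction by an amount governed by sigma_a, exactly as described on the EKF slide.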
126 Selection of Features already in the Map
By predicting the next camera pose, we can predict where each feature is likely to appear:
127 Selection of Features already in the Map
At each frame, the features occurring at the previous step are searched for in the elliptic region where they are expected to be according to the motion model (Normalized Sum of Squared Differences is used for matching).

Large ellipses mean that the feature is difficult to predict; thus the feature inside will provide more information for the camera motion estimation.

Once the features are matched, the entire state of the system is updated according to the EKF.
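The matching score and the restricted search can be sketched as below; `nssd` and `match_in_ellipse` are illustrative names, and the elliptic region is assumed to be supplied as a list of candidate pixel positions from the motion model:

```python
import numpy as np

def nssd(patch_a, patch_b):
    """Normalized sum of squared differences between two equally sized
    patches; 0 means a perfect match (invariant to gain and offset)."""
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a = (a - a.mean()) / (np.linalg.norm(a - a.mean()) + 1e-12)
    b = (b - b.mean()) / (np.linalg.norm(b - b.mean()) + 1e-12)
    return float(np.sum((a - b) ** 2))

def match_in_ellipse(template, image, candidates):
    """Evaluate the template only at candidate top-left positions (the
    elliptic search region) and return the best position and score."""
    h, w = template.shape
    best, best_score = None, float('inf')
    for (r, c) in candidates:
        window = image[r:r + h, c:c + w]
        if window.shape != template.shape:   # skip positions near the border
            continue
        s = nssd(template, window)
        if s < best_score:
            best, best_score = (r, c), s
    return best, best_score
```

Restricting the search to the predicted ellipse is what keeps the matching cheap enough for real-time operation.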
128 Initialization of New Features in the Database
The Shi-Tomasi feature is first initialized as a 3D line.

Along this line, 100 possible feature positions are set uniformly in the range 0.5-5 m.

At each time step, each hypothesis is tested by projecting it into the image.

Each match produces a likelihood for each hypothesis and their probabilities are recomputed.
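The probability recomputation is a Bayesian re-weighting of the depth hypotheses along the line; this sketch (illustrative names, likelihoods assumed to come from the image matching step) shows that single step:

```python
import numpy as np

def update_depth_hypotheses(probs, likelihoods):
    """Bayes rule over the discrete depth hypotheses: multiply the prior
    probabilities by the per-hypothesis matching likelihoods and
    renormalize so they sum to one."""
    posterior = probs * likelihoods
    return posterior / posterior.sum()

# 100 hypotheses uniformly spaced in the 0.5 m - 5 m range, uniform prior
depths = np.linspace(0.5, 5.0, 100)
probs = np.full(100, 1.0 / 100)
```

After a few frames the distribution typically peaks; once it is well approximated by a Gaussian, the feature can be converted to a standard 3D point in the map.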
129 Map Management
The number of features required to be constantly visible in the image varies (in practice between 6 and 10) according to:

localization accuracy

computing power available

If a feature needs to be added, the detected feature is added only if it is not expected to disappear from the next image.
130 Improved Feature Matching
Up to now, tracked features were treated as 2D templates in image space
Long-term tracking can be improved by approximating the feature as a locally planar region on 3D world surfaces.
131 Locally Planar 3D Patch Features
132 References
A. J. Davison, Real-Time Simultaneous Localization and Mapping with a Single Camera, ICCV 2003
N. Molton, A. J. Davison and I. Reid, Locally Planar Patch Features for Real-Time Structure from Motion, BMVC 2004
A. J. Davison, Y. Gonzalez Cid, N. Kita, Real-Time 3D SLAM with wide-angle vision, IFAC Symposium on Intelligent Autonomous Vehicles, 2004
J. Shi and C. Tomasi, Good features to track, CVPR, 1994.