Lecture 06: Features and Uncertainty


Page 1: Lecture 06: Features and Uncertainty

Introduction to Robotics: Features and Uncertainty

October 4, 2010
Nikolaus Correll

Page 2: Lecture 06: Features and Uncertainty

Review: Vision
• Convolution-based filters
• Thresholds
• Goal: condensing information
• Exercise: classify the eight numbered example images using quadrant statistics q1..q4 (a minimal sketch follows)

[Figure: eight example images, numbered 1-8]

a: (q1 ~ q2)   b: (q3 ~ q4)   c: (q1 > q3)
a & b & c -> Heart
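
A minimal sketch of this condensation step. The slide does not define q1..q4, so the quadrant layout and the tolerance are assumptions; here they are taken to be the mean intensities of the four image quadrants:

```python
import numpy as np

def quadrant_means(img):
    """Mean intensity of each quadrant (layout assumed: q1/q2 top,
    q3/q4 bottom, left before right)."""
    h, w = img.shape
    return (img[:h//2, :w//2].mean(), img[:h//2, w//2:].mean(),
            img[h//2:, :w//2].mean(), img[h//2:, w//2:].mean())

def is_heart(img, tol=10.0):
    q1, q2, q3, q4 = quadrant_means(img)
    a = abs(q1 - q2) < tol   # q1 ~ q2
    b = abs(q3 - q4) < tol   # q3 ~ q4
    c = q1 > q3              # q1 > q3
    return a and b and c
```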

Page 3: Lecture 06: Features and Uncertainty

Midterm

• October 11
• Today: quick review of the CULearn exercises

Page 4: Lecture 06: Features and Uncertainty

Week 2: Reading

• What is a possible reason for the fact that nature did not evolve wheels, except for a few animals that use rolling as a means of locomotion?
– Because rotational actuators are not part of nature's repertoire.
– Because wheeled locomotion is not efficient on soft and/or uneven ground.
– Not true, there are various examples of wheel-based locomotion in nature.

• What is the difference between static and dynamic stability?
– Dynamic stability is when a robot does not fall over even when moving.
– Static stability considers "snapshots" of robot poses, whereas dynamic stability addresses sequences of statically stable poses.
– Dynamic stability requires motion for the system to be stable; static stability does not.

• What is the prime purpose of a suspension system in a mobile robot?
– To prevent damage to equipment on the robot
– To guarantee that the robot base is always parallel to the ground
– To ensure that all wheels have maximum ground contact

Page 5: Lecture 06: Features and Uncertainty

Week 3: Reading

• How do you calculate the forward kinematics of a wheeled robot? (See the differential-drive instance after this list.)
– I calculate the contribution of each wheel to the degrees of freedom of the robot in robot coordinates, then add them up, and finally transform them into world coordinates.
– The world coordinates can be expressed in robot coordinates using a simple rotation matrix.
– I calculate the 1st and 2nd moments of the rotational center of the robot and transform those into world coordinates using a 3x3 rotation matrix.

• What is key when calculating wheel kinematic constraints?
– The angle of the wheel plane needs to be fixed.
– Rolling and sliding constraints should not be zero for the robot to move.
– The rolling speed must add up to the robot motion along the direction of the wheel.

• Which one of the following configurations has steerability degree 1 and mobility degree 2?
– A robot that can translate along two axes and rotate its main body with a single steering wheel.
– A robot that can steer one wheel, which changes translation along one axis and rotation of its main body.
– A robot with two steering wheels that can independently drive the robot AND let it rotate on the spot.

• What is a good recipe to drive a differential-wheel robot to a desired position?
– Calculate the robot speed as a function of the robot's wheel speeds (forward kinematics). Use this information to predict how to change the wheel speeds in order to drive the error (expressed in polar coordinates) using the controller from Section 3.6.2.4.
– Use the control law from Section 3.6.2.4 to calculate the desired robot speed in polar coordinates. Now transform the polar coordinates into robot coordinates (inverse kinematics) and from there into world coordinates (forward kinematic model).
– First calculate the relation between the forward and angular speed at the robot's center and its wheel speeds (forward kinematic model). Determine how to set the wheel speeds in order to achieve a desired robot speed (inverse kinematics). Calculate the error in polar coordinates, and use the control law from Section 3.6.2.4 to calculate the desired robot speed.
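
As a concrete instance of the forward kinematic model referenced above (the standard differential-drive kinematics, not spelled out on the slide): with wheel radius r, half-axle length l, wheel speeds φ̇_r, φ̇_l, and heading θ,

$$\dot{\xi}_R = \begin{pmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{pmatrix}_R = \begin{pmatrix} \tfrac{r(\dot{\varphi}_r + \dot{\varphi}_l)}{2} \\ 0 \\ \tfrac{r(\dot{\varphi}_r - \dot{\varphi}_l)}{2l} \end{pmatrix}, \qquad \dot{\xi}_I = R(\theta)^{-1}\,\dot{\xi}_R$$

i.e., each wheel's contribution is summed in robot coordinates and then rotated into world coordinates, exactly as the first answer describes.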

Page 6: Lecture 06: Features and Uncertainty

Week 4: Reading

• Your robot is facing a wall with its distance sensor, and even though the robot is not moving, its readings appear to be random. This is most likely a problem with the sensor's
– Resolution
– Dynamic range
– Bandwidth
– Precision

• Your robot is equipped with an infrared distance sensor that delivers very accurate readings that reflect even very small changes in distance. Unfortunately, the sensor does not work well when sunlight is penetrating the lab windows. This is a problem of
– Sensitivity of the sensor
– Cross-sensitivity of the sensor
– Accuracy of the sensor

• Why do you require four satellites to establish your position with GPS?
– There are four unknowns: x, y, z and orientation
– There are four unknowns: x, y, z and clock skew
– There are only three unknowns; a compass is required for orientation

• How does a laser range finder work? (The phase-shift relation is spelled out after this list.)
– A laser beam changes its amplitude at high speed. The Doppler effect leads to a phase shift of the amplitude-modulated laser signal. This phase shift can be measured and is proportional to the distance.
– The amplitude of the laser beam changes with a specific frequency whose wavelength is larger than the maximum range of the laser. Upon reflection, the phase of this beam is shifted. This phase shift can be measured and is proportional to the distance.
– A laser beam with a wavelength of 824 nm is reflected from a surface, and its reflection is recorded on a linear camera, which is used to measure the time between the emission of the ray and its arrival.
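
For the amplitude-modulated case (the second answer), the standard distance relation is: with modulation wavelength λ and measured phase shift Δφ,

$$D = \frac{\lambda}{4\pi}\,\Delta\varphi$$

where the factor 4π (rather than 2π) accounts for the round trip of the beam.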

Page 7: Lecture 06: Features and Uncertainty

Week 5: Reading

• What makes color-based object recognition using image sensors difficult?
– Colors are expressed in terms of their red, green and blue components. The associated gains change drastically as a function of the lighting conditions and make even red and green objects hard to distinguish.
– The way the sensor sees the image is different from that of the human eye and therefore requires careful calibration.
– Colors are easy to distinguish, and this is therefore one of the easiest problems in vision.

• What is not a valuable cue to detect depth from a single monocular image?
– Blurring
– Known size of an object
– Disparity

• All of the vision-based range extraction mechanisms suffer from the following problem:
– Depth is difficult to estimate for objects that are far away
– Changes in lighting conditions change the way color is perceived
– Only stereo-vision range estimates fail in the far field

• Range estimates based on stereo vision can be improved by increasing the baseline between the cameras. What are the trade-offs? (See the depth-from-disparity relation after this list.)
– The sensor requires considerably more space, and the range to objects that are close cannot be estimated, as one of the cameras might not see them anymore.
– The sensor requires considerably more space and is more difficult to calibrate.
– The sensor just requires more space.
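
Both the far-field problem and the baseline trade-off follow from standard stereo geometry (not shown on the slide): with focal length f, baseline b and disparity d,

$$z = \frac{f\,b}{d}, \qquad \delta z \approx \frac{z^2}{f\,b}\,\delta d$$

so depth error grows quadratically with distance, and increasing the baseline b reduces it, at the cost of space and near-field overlap.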

Page 8: Lecture 06: Features and Uncertainty

Uncertainty

• All sensors are noisy
• Today:
– How to model uncertainty?
– How does uncertainty propagate from sensors to features?
– Example: line detection

Page 9: Lecture 06: Features and Uncertainty

The Gaussian/Normal Distribution

• Defined by
– Mean
– Variance

• Good approximation for some sensors
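
For reference (the density itself is not reproduced in the transcript), the one-dimensional Gaussian with mean μ and variance σ² is

$$p(x) = \frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$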

Page 10: Lecture 06: Features and Uncertainty

Current Stats: Week 1-4 vs. Spring 2010

• Bi-modal distribution: Undergraduates/Graduates

• Different performance thresholds for U-Grads / Grads

• Spring 2010, 2 different distributions

[Figure: histograms of # students vs. points (N = 27, Max = 48) and # students vs. overall score (%)]

Page 11: Lecture 06: Features and Uncertainty

Week 6: Reading

• Why is a Gaussian distribution the model of choice for representing uncertainty in robotic sensing?
– Sensor readings are subject to uncertainty, and this uncertainty behaves like a Gaussian distribution.
– The true distribution of noise on most sensors is unknown, but the mathematical properties of the Gaussian model make it the model of choice, applicable to most sensors.
– Because the likelihood is very high that all the sensor readings are within 3 standard deviations.

• What is the reasoning behind the derivation of Equations 4.73 and 4.74 (least-squares optimization)?
– The derivative of a function is zero at its extreme values (maximum or minimum), and thus finding the value where the derivative of the least-squares error is zero minimizes it. The value for which the least-squares error is minimal is the best fit for the line.
– Finding the angle of the line that best matches the set of points requires a double integration (double sum).
– Finding the best-fitting line is a complex numerical optimization problem for which no analytical solution can be found.

• In order to detect an edge in the image
– You have to find areas in the image where the frequency, i.e. the change between neighboring pixels, is high
– You have to find areas in the image that are brighter than others
– You have to find areas in the image that are darker than others

• How can you calculate the variance for the detection of a feature that relies on multiple sensors?
– The variance for feature detection corresponds to that of the sensor with the highest variance. This is represented by the Jacobian that encodes the dependencies between all sensors' error models.
– The variance for feature detection is the product of the variances of all sensors involved in its detection. This is represented by the Jacobian that encodes the dependencies between all sensors' error models.
– The variance for feature detection is a weighted sum of the individual variances of each sensor, weighted by the dependency of the sensors on each other.

Page 12: Lecture 06: Features and Uncertainty

Example Feature: Detecting Lines

[Figure: camera image and laser scan of the same scene]

N.B. Every single point is subject to uncertainty!

Page 13: Lecture 06: Features and Uncertainty

Line Fitting

What is the uncertainty associated with each line feature?

Page 14: Lecture 06: Features and Uncertainty

Example: Line Fitting

• Given: range-bearing measurements (ρi, θi)
• Desired: line parameters r, α

[Figure: fitting procedure, steps 1-3]

Page 15: Lecture 06: Features and Uncertainty

Solution (Line fitting)

Additional trick: weight each measurement by the variance expected at this distance. (A minimal implementation sketch follows.)
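
The slide's equations did not survive the transcript. A minimal sketch of the standard solution (total least squares on the points xi = ρi cos θi, yi = ρi sin θi, fitting the line x cos α + y sin α = r, in the spirit of the textbook's Equations 4.73/4.74); the optional weights implement the trick above, e.g. wi = 1/σi²:

```python
import numpy as np

def fit_line_polar(rho, theta, weights=None):
    """Fit the line x*cos(alpha) + y*sin(alpha) = r to range-bearing
    points (rho_i, theta_i); returns (r, alpha)."""
    w = np.ones_like(rho) if weights is None else weights
    x, y = rho * np.cos(theta), rho * np.sin(theta)
    xm, ym = np.average(x, weights=w), np.average(y, weights=w)
    # Weighted second moments about the (weighted) centroid
    sxy = np.sum(w * (x - xm) * (y - ym))
    syy_xx = np.sum(w * ((y - ym) ** 2 - (x - xm) ** 2))
    alpha = 0.5 * np.arctan2(-2.0 * sxy, syy_xx)
    r = xm * np.cos(alpha) + ym * np.sin(alpha)
    return r, alpha
```

With wi = 1/σρi², the closer and less noisy returns dominate the fit.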

Page 16: Lecture 06: Features and Uncertainty

Whiteboard

Page 17: Lecture 06: Features and Uncertainty

Error propagation

• What is the variance of α and r?
• Error propagation law (stated below)
• Y are the output variables, X the input variables
• CX and CY are the covariance matrices
• F is a Jacobian matrix
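
The law itself, restored here since its formula did not survive the transcript (standard first-order error propagation):

$$C_Y = F_X\, C_X\, F_X^{\top}, \qquad (F_X)_{ij} = \frac{\partial f_i}{\partial X_j}$$

i.e., the output covariance is the input covariance pushed through the Jacobian of f.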

Page 18: Lecture 06: Features and Uncertainty

Example: Line fitting

• 17 measurements
• f: (ρi, θi) → (r, α)
• We need: the covariance of the output, i.e. the variances of r and α and their covariance
• We know: the variance σρi², σθi² of each individual measurement
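
Concretely, for this example (an instantiation of the law above, assuming independent measurements): CX is the 34x34 diagonal matrix of the individual variances, and F is the 2x34 Jacobian of f, so

$$C_{r\alpha} = F_{\rho\theta}\;\mathrm{diag}\!\left(\sigma_{\rho_1}^2,\dots,\sigma_{\rho_{17}}^2,\ \sigma_{\theta_1}^2,\dots,\sigma_{\theta_{17}}^2\right)\;F_{\rho\theta}^{\top}$$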

Page 19: Lecture 06: Features and Uncertainty

Summary

• Every sensor has noise and makes reasoning uncertain

• Sensor measurements can be combined into features

• The uncertainty of these features can be calculated using the error propagation law

• Knowing how uncertainty behaves helps you decide

Page 20: Lecture 06: Features and Uncertainty

Line segmentation: Split-and-Merge
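
A minimal sketch of the split step (the merge step then re-joins adjacent segments that are nearly collinear). Here points is an N x 2 array ordered along the scan, and the distance threshold is a free parameter:

```python
import numpy as np

def split(points, threshold):
    """Recursive split: span a line between the first and last point,
    find the most distant point, and split there if it is too far off."""
    if len(points) <= 2:
        return [(points[0], points[-1])]
    p0, p1 = points[0], points[-1]
    d = p1 - p0
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal of the chord
    dist = np.abs((points - p0) @ n)                 # perpendicular distances
    k = int(np.argmax(dist))
    if dist[k] > threshold:
        return (split(points[:k + 1], threshold)
                + split(points[k:], threshold))
    return [(p0, p1)]
```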

Page 21: Lecture 06: Features and Uncertainty

Other features: Segments

• Pyramid, mean-shift, graph-cut
• Here: watershed

Source: Gary Bradski (c) 2008

Page 22: Lecture 06: Features and Uncertainty

Watershed algorithm

http://cmm.ensmp.fr/~beucher/wtshed.html

Demo: OpenCV pyramid_segmentation
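
A minimal marker-based watershed sketch using OpenCV's current Python API (the lecture demo used the older C sample; the file name and thresholds here are assumptions):

```python
import cv2
import numpy as np

img = cv2.imread("scene.png")                       # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Derive sure-foreground and sure-background markers
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, fg = cv2.threshold(dist, 0.7 * dist.max(), 255, 0)
fg = np.uint8(fg)
bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
unknown = cv2.subtract(bg, fg)

# Label the markers and flood; watershed marks boundaries with -1
_, markers = cv2.connectedComponents(fg)
markers += 1                   # reserve label 0 for the unknown region
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)
```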

Page 23: Lecture 06: Features and Uncertainty

Alternative line features: Hough Transform

Demo: OpenCV houghlines
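
A minimal sketch of the demo's idea with OpenCV's Python bindings (the input file is a placeholder). Note that each detected line comes back in exactly the (ρ, θ) normal form used for line fitting earlier:

```python
import cv2
import numpy as np

img = cv2.imread("scene.png")    # hypothetical input image
edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 50, 150)

# Accumulate votes in (rho, theta) space; threshold = minimum votes per line
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)
```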

Page 24: Lecture 06: Features and Uncertainty

Hough Transform

Source: K. Grauman / D. Scaramuzza

Page 25: Lecture 06: Features and Uncertainty

Project Assignments

• 16 undergraduate students
• 10 graduate students
• 5 groups -> 5+5+5+5+6
• ~2 graduate students + ~3 undergrads per group
• Goal: implement a controller for Ratslife
• Grad students: have to submit a controller
• Undergraduates: have to present (final/design reviews)

Page 26: Lecture 06: Features and Uncertainty

Exercise: Introduction to Ratslife

Page 27: Lecture 06: Features and Uncertainty

Exercise 2: Locomotion

• If you were to write a controller, what do you think would be your best bet to generate the joint value ji for joint i at time t? Hint: look at how the dog in ghostdog.wbt is controlled.
– ji(t+1) = a*ji(t) + b
– ji(t+1) = a*sin(t - b) + c
– ji(t+1) = a*(t - b)^2 + c

• Can you implement a new gait in ghostdog.wbt that lets the robot trot? What do you need to do except adding a case TROT to the finite state machine in ghostdog.c? Try this out before answering the question! (A sinusoidal gait sketch follows this list.)
– Calculate the servo speed so that both front and hind pairs are out of phase, but one front leg is in phase with one hind leg.
– Calculate the servo speed so that the phase between front and hind legs is always shifted by 90 degrees.
– Calculate the servo speed so that all legs are phase shifted by 45 degrees.

• Which of the motions in Figure 2.1 is only dynamically stable?
– From 1 -> 2
– From 3 -> 4
– From 5 -> 8
– None

• What is a straightforward way to presumably double the speed of the forward motion? To test this, edit the file with a text editor. If you don't get the desired speed-up, why is this?
– The inertia and limited motor speed and torque hinder the robot from executing the motions twice as fast.
– The motors are simply not fast enough.
– Just changing the timing of the gait does not affect its actual execution.
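
A sketch of the sinusoidal answer applied to a trot, where diagonal leg pairs share a phase. The amplitude, offset and leg names below are assumptions for illustration, not taken from ghostdog.c:

```python
import math

A, C = 0.4, 0.0                    # amplitude/offset in radians (assumed)
PHASE = {"front_left": 0.0, "hind_right": 0.0,       # diagonal pair in phase
         "front_right": math.pi, "hind_left": math.pi}

def joint_target(leg, t):
    """j_i(t) = a * sin(t - b_i) + c, per the sinusoid answer above."""
    return A * math.sin(t - PHASE[leg]) + C
```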

Page 28: Lecture 06: Features and Uncertainty

Exercise 3: Control

• What happens when you increase KR and lower KA in the controller? (The underlying control law is sketched after this list.)
– The robot will drive curves with a larger radius.
– The robot will drive curves with a smaller radius.
– The robot will just drive straight; the values need to be exactly as they are.

• How does your controller deal with the obstacles?
– Collision avoidance subsumes navigation. Obstacles are avoided, and navigation is resumed as soon as the obstacles are cleared.
– The robot plans around the obstacles.
– The robot gets stuck in the obstacles.

• Build an obstacle with a U-shape by shifting the obstacles in the arena (press the shift key and move them with the mouse) and let the robot run into this. What happens?
– The robot follows the inner perimeter of the U-obstacle to get out of it and eventually reaches the goal.
– The robot goes back and forth into the U-obstacle. Some kind of planning would be needed.
– The shape of the obstacle does not matter.
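
The controller referenced here is the polar-coordinate control law of Section 3.6.2.4; a plausible reading of KR and KA (a reconstruction, since the slide does not define them) is the gains kρ and kα in

$$v = k_\rho\,\rho, \qquad \omega = k_\alpha\,\alpha + k_\beta\,\beta$$

Since the turning radius is roughly v/ω, raising kρ while lowering kα yields wider curves, matching the first answer.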

Page 29: Lecture 06: Features and Uncertainty

Exercise 4: Control

• How can you tell that the robot is lying on its front or its back?
– I need to integrate the direction of acceleration in order to determine the direction the robot has fallen.
– I need to identify the direction of the acceleration exerted by the Earth's gravity.
– I use the accelerometer to detect a fall and then use the camera to detect whether the robot is facing down or not.

• Can you use the Nao's accelerometer for integrating the robot's position? If you are not sure, try it! (See the note on double-integration drift after this list.)
– Sure, the acceleration allows me to calculate the position, and small errors around the mean will cancel each other out.
– It is not possible to calculate position simply from acceleration.
– The problems are the Earth's gravity and the fact that even small errors have a fatal impact after the required double integration.

• What is the problem with the resulting map?
– The laser scanner's accuracy is pretty bad, leading to a rather noisy map.
– Accumulating odometry errors render the resulting map useless very fast.
– The environment is hard to map.

• What problem does this robot create?
– It continues to collide with the mapping robot.
– Dynamic obstacles make their way into the map.
– The other robot moves too fast to be mapped accurately.
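
Why double integration fails, as a back-of-the-envelope illustration (not from the slides): a constant accelerometer bias b integrates into a position error

$$e(t) = \iint b \;dt^2 = \tfrac{1}{2}\,b\,t^2$$

so even a small residual of the 9.81 m/s² gravity vector leaking into a horizontal axis (e.g. b = 0.1 m/s² gives 5 m after 10 s) produces meters of drift within seconds.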

Page 30: Lecture 06: Features and Uncertainty

Homework

• Read Chapter 5 through Section 5.5 (pages 181-212)