Posted 28-Feb-2022
TRANSCRIPT
A Viewpoint on Perception, Planning, and
Control
Claire Tomlin
Somil Bansal, Varun Tolani, Aleksandra Faust, Jitendra Malik
How can an autonomous system with a monocular RGB camera efficiently navigate to a goal in an a priori unknown environment?
[Bansal, Tolani, Gupta, Malik, Tomlin, CoRL 2019]
ẋ = v cos(θ)
ẏ = v sin(θ)
v̇ = a
θ̇ = ω

[Bansal, Tolani, Gupta, Malik, Tomlin, CoRL 2019]
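A minimal sketch of these dynamics, assuming a simple Euler discretization with a hypothetical step size of 0.05 s (not a value stated in the talk):

```python
import math

def step(state, control, dt=0.05):
    """One Euler step of the unicycle-like model above.

    state = (x, y, theta, v), control = (a, omega).
    """
    x, y, theta, v = state
    a, omega = control
    return (
        x + v * math.cos(theta) * dt,  # x' = v cos(theta)
        y + v * math.sin(theta) * dt,  # y' = v sin(theta)
        theta + omega * dt,            # theta' = omega
        v + a * dt,                    # v' = a
    )

# Drive straight along +x: start at the origin, heading 0, speed 1 m/s,
# with zero acceleration and zero turn rate for 20 steps (1 s total).
s = (0.0, 0.0, 0.0, 1.0)
for _ in range(20):
    s = step(s, (0.0, 0.0))
```

With constant speed and zero turn rate, the robot ends roughly 1 m down the x-axis, as expected from the model.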
[Results comparing the model-based approach with end-to-end (E2E) learning: success rate in reaching the goal (%), time taken to reach the goal (s), average acceleration along the trajectory (m/s²), and average jerk along the trajectory (m/s³).]
Some lessons learned
• Data representation is important
• Optimal control can be too optimal
• Waypoint representation
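The waypoint representation can be sketched as follows: the learned module outputs only a waypoint (wx, wy, wθ), and a model-based planner connects the robot's current pose to it with a smooth curve. This is an illustrative cubic-spline construction, not the exact spline used in the paper:

```python
import numpy as np

def waypoint_to_path(wx, wy, wtheta, n=50):
    """Connect the robot (at the origin, heading 0) to a predicted
    waypoint (wx, wy, wtheta) with a cubic y(x).

    Boundary conditions: y(0) = 0, y'(0) = 0 (initial heading),
    y(wx) = wy, y'(wx) = tan(wtheta) (final heading).
    """
    A = np.array([
        [0,         0,       0,  1],  # y(0)  = 0
        [0,         0,       1,  0],  # y'(0) = 0
        [wx**3,     wx**2,   wx, 1],  # y(wx) = wy
        [3 * wx**2, 2 * wx,  1,  0],  # y'(wx) = tan(wtheta)
    ], dtype=float)
    b = np.array([0.0, 0.0, wy, np.tan(wtheta)])
    coeffs = np.linalg.solve(A, b)    # cubic coefficients, highest first
    xs = np.linspace(0.0, wx, n)
    ys = np.polyval(coeffs, xs)
    return xs, ys

# Waypoint 2 m ahead, 1 m to the left, final heading aligned with x.
xs, ys = waypoint_to_path(2.0, 1.0, 0.0)
```

The key design point is that the network never outputs raw controls; the spline and a tracking controller keep the executed trajectory smooth and dynamically feasible.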
More lessons learned
• Building on existing NN architectures
• Image and perspective distortions during training
• RL on top of supervised learning
Safety Challenges
• Monitor: Is the image data in the training distribution?
• What is the uncertainty around the output of the perception module?
• How should this uncertainty affect the planning and control?
• More complex environments?
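One simple way to realize the monitoring question above — a hypothetical sketch, not the method from the talk — is to fit a Gaussian to the perception network's training-set features and flag images whose features lie far from that distribution:

```python
import numpy as np

def fit_monitor(train_feats):
    """Fit a Gaussian model to training-set feature vectors.

    train_feats: (N, d) array, e.g. penultimate-layer activations of the
    perception network (an assumed feature choice).
    """
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False)
    cov += 1e-6 * np.eye(train_feats.shape[1])  # regularize for inversion
    return mu, np.linalg.inv(cov)

def ood_score(feat, mu, prec):
    """Mahalanobis distance of one feature vector from the training data."""
    d = feat - mu
    return float(np.sqrt(d @ prec @ d))

# Synthetic demo: in-distribution features vs. an obvious outlier.
rng = np.random.default_rng(0)
train = rng.normal(size=(500, 4))
mu, prec = fit_monitor(train)
in_dist = ood_score(rng.normal(size=4), mu, prec)
far_out = ood_score(np.full(4, 10.0), mu, prec)
```

A large score relative to those seen on held-out training data would signal that the image is out of distribution and the downstream plan should be treated with caution.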
HumANav (built on SURREAL and SD3DIS)
Robot State: [x, y, θ]
Human State & Control: [x, y, θ, v, ω]
Human Identity:
- Gender
- Texture (clothing, skin color, facial features)
- Body Shape (tall, short, etc.)
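The state vectors above can be written down as small containers; the field names here are illustrative, not taken from the talk's code:

```python
from dataclasses import dataclass

@dataclass
class RobotState:
    """Robot pose on the slide: [x, y, theta]."""
    x: float
    y: float
    theta: float

@dataclass
class HumanState:
    """Human state and control: [x, y, theta, v, omega]."""
    x: float
    y: float
    theta: float
    v: float      # forward speed
    omega: float  # turn rate

r = RobotState(x=1.0, y=2.0, theta=0.0)
h = HumanState(x=0.0, y=0.0, theta=0.0, v=1.2, omega=0.1)
```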
Summary
Incorporating perception in the control loop
Supervising learning using optimal control
β’ a perception-planning-control pipeline
β’ comparison with a more traditional SLAM pipeline
β’ applied to a vision-based navigation task
Models of human motion
Challenges for safe control