
Page 1: Inverse Reinforcement Learning for Telepresence Robots

Inverse Reinforcement Learning for Telepresence Robots

Shimon Whiteson Dept. of Computer Science

University of Oxford

joint work with Kyriacos Shiarlis & João Messias

Page 2: Inverse Reinforcement Learning for Telepresence Robots

Acknowledgements

Kyriacos Shiarlis & João Messias

Page 3: Inverse Reinforcement Learning for Telepresence Robots

Outline

• TERESA

• Inverse Reinforcement Learning

• Inverse Reinforcement Learning from Failure

• Results

• Next Steps

Page 4: Inverse Reinforcement Learning for Telepresence Robots
Page 5: Inverse Reinforcement Learning for Telepresence Robots

What Is Telepresence?

“Skype on a stick”

[Diagram: Human Controller / Pilot on one side, Interaction Target on the other, connected through the telepresence robot.]

Page 6: Inverse Reinforcement Learning for Telepresence Robots

Telepresence Benefits

• Greater physical presence: “Your alter ego on wheels”

• Mobility enables spontaneous interaction

Page 7: Inverse Reinforcement Learning for Telepresence Robots

Application: Education Accessibility

Page 8: Inverse Reinforcement Learning for Telepresence Robots

Application: Remote Health Care

Page 9: Inverse Reinforcement Learning for Telepresence Robots

Application: Elderly Accessibility

“Cafe Philo” Discussion Group

Les Arcades Prevention Centre in Troyes, France

Page 10: Inverse Reinforcement Learning for Telepresence Robots

Telepresence Limitations

• Learning curve for human controller

• Otherwise automatic behaviours (e.g., body language) require manual execution

• Human controller must simultaneously make high-level and low-level decisions

• Leads to cognitive overload: mistakes at the low level; less attention for the high level [Tsui et al. 2011]

• Result is poor quality social interaction

Page 11: Inverse Reinforcement Learning for Telepresence Robots

TERESA Solution

• A new telepresence system with partial autonomy: automate low-level decision making

• Free human controller to focus on high-level decisions

• Requires social intelligence:

• Social navigation

• Social conversation

Page 12: Inverse Reinforcement Learning for Telepresence Robots

TERESA Robot

Page 13: Inverse Reinforcement Learning for Telepresence Robots

Experiments with Real Subjects

Les Arcades Prevention Centre in Troyes, France

Page 14: Inverse Reinforcement Learning for Telepresence Robots

Video

Page 15: Inverse Reinforcement Learning for Telepresence Robots

Interface

Page 16: Inverse Reinforcement Learning for Telepresence Robots

Cognitive Architecture

[Block diagram: Human Controller, Audiovisual Processing, State Estimator, Cost Estimator, Navigator, Body Pose Controller, and the Environment, connected by audio & video, controller and people reactions, people locations and behaviour, state estimates, cost estimates, high-level control signals, body poses, and motion.]

Page 17: Inverse Reinforcement Learning for Telepresence Robots

Outline

• TERESA

• Inverse Reinforcement Learning

• Inverse Reinforcement Learning from Failure

• Results

• Next Steps

Page 18: Inverse Reinforcement Learning for Telepresence Robots

Inverse Reinforcement Learning

Page 19: Inverse Reinforcement Learning for Telepresence Robots

Reward & Value

Reward as a linear combination of features (w_k: unknown weight, \phi_k: feature function):

R(s, a) = \sum_{k=1}^{K} w_k \phi_k(s, a)

Value of a policy \pi over horizon h:

V^{\pi}(s) = \mathbb{E}\Big\{ \sum_{t=1}^{h} R(s_t, a_t) \,\Big|\, s_1 = s \Big\}

Substituting the reward gives a weighted sum of feature expectations under s_1:

V^{\pi}(s) = \sum_{k=1}^{K} w_k \left( \mathbb{E}\Big\{ \sum_{t=1}^{h} \phi_k(s_t, a_t) \,\Big|\, s_1 = s \Big\} \right) = \sum_{k=1}^{K} w_k \, \mu_k^{\pi,1:h}|_{s_1 = s}
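To make the linear reward structure concrete, here is a minimal NumPy sketch; the feature names (distance to target, obstacle proximity, a stop action) are illustrative assumptions, not the feature set used in TERESA:

```python
import numpy as np

# Hypothetical feature map with K = 3 features of a (state, action) pair.
def phi(state, action):
    return np.array([
        state["dist_to_target"],        # distance to the interaction target
        float(state["near_obstacle"]),  # 1.0 if an obstacle is close
        float(action == "stop"),        # whether the chosen action is to stop
    ])

w = np.array([-1.0, -5.0, 0.5])         # unknown weights that IRL must recover

def reward(state, action):
    # R(s, a) = sum_k w_k * phi_k(s, a)
    return float(w @ phi(state, action))

s = {"dist_to_target": 2.0, "near_obstacle": False}
print(reward(s, "stop"))                # -1.0*2.0 + -5.0*0.0 + 0.5*1.0 = -1.5
```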

Page 20: Inverse Reinforcement Learning for Telepresence Robots

Data & Feature Expectations

Dataset: \mathcal{D} = \{\tau_1, \tau_2, \ldots, \tau_N\}

Trajectory: \tau_i = \langle (s_1^{\tau_i}, a_1^{\tau_i}), (s_2^{\tau_i}, a_2^{\tau_i}), \ldots, (s_h^{\tau_i}, a_h^{\tau_i}) \rangle

Empirical feature expectation:

\tilde{\mu}_k^{\mathcal{D}} = \frac{1}{N} \sum_{\tau \in \mathcal{D}} \sum_{t=1}^{h} \phi_k(s_t^{\tau}, a_t^{\tau})

Feature expectation of \pi:

\mu_k^{\pi}|_{\mathcal{D}} = \sum_{s \in S} P_{\mathcal{D}}(s_1 = s) \, \mu_k^{\pi}|_{s_1 = s}
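A sketch of the empirical feature expectation, assuming each trajectory is stored as a list of (state, action) pairs and `phi` is a feature map like the one sketched earlier:

```python
import numpy as np

def empirical_feature_expectations(dataset, phi, K):
    """tilde_mu^D: average over trajectories of the per-trajectory summed features."""
    mu = np.zeros(K)
    for trajectory in dataset:          # trajectory = [(s_1, a_1), ..., (s_h, a_h)]
        for s, a in trajectory:
            mu += phi(s, a)
    return mu / len(dataset)
```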

Page 21: Inverse Reinforcement Learning for Telepresence Robots

Maximum Causal Entropy IRL

find: \max_{\pi} H(A^h \| S^h)

subject to: \tilde{\mu}_k^{\mathcal{D}} = \mu_k^{\pi}|_{\mathcal{D}} \quad \forall k

and: \sum_{a \in A} \pi(s, a) = 1 \quad \forall s \in S

and: \pi(s, a) \geq 0 \quad \forall s \in S, a \in A

where H is the causal entropy (each action depends only on previous states & actions):

H(A^h \| S^h) = -\sum_{t=1}^{h} \sum_{\substack{s_{1:t} \in S^t \\ a_{1:t} \in A^t}} P(a_{1:t}, s_{1:t}) \log P(a_t | s_t)

B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In AAAI, pages 1433–1438, 2008.

Page 22: Inverse Reinforcement Learning for Telepresence Robots

Lagrangian

L(\pi, w) = H(A^h \| S^h) + \sum_{k=1}^{K} w_k \left( \mu_k^{\pi}|_{\mathcal{D}} - \tilde{\mu}_k^{\mathcal{D}} \right)

find: \min_{w} \Big\{ \max_{\pi} L(\pi, w) \Big\}

subject to: \sum_{a \in A} \pi(s, a) = 1 \quad \forall s \in S

and: \pi(s, a) \geq 0 \quad \forall s \in S, a \in A

Page 23: Inverse Reinforcement Learning for Telepresence Robots

Solving the Lagrangian

Setting \nabla_{\pi(s_t, a_t)} L(\pi, w) to zero implies:

\pi(s_t, a_t) \propto \exp\left( H(A^{t:h} \| S^{t:h}) + \sum_{k=1}^{K} w_k \, \mu_k^{\pi,t:h}|_{s_t, a_t} \right)

i.e., the exponent plays the role of a soft Q-value Q^{\pi}(s_t, a_t). Finding \pi corresponds to performing soft value iteration:

Q_w^{soft}(s, a) = \sum_{k=1}^{K} w_k \phi_k(s, a) + \sum_{s'} T(s, a, s') V_w^{soft}(s')

V_w^{soft}(s) = \log \sum_{a} \exp(Q_w^{soft}(s, a))

\pi(s, a) = \exp(Q_w^{soft}(s, a) - V_w^{soft}(s))
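A minimal sketch of this soft value iteration for a finite-horizon tabular MDP; the array layouts and names are assumptions for illustration, not the TERESA implementation:

```python
import numpy as np
from scipy.special import logsumexp

def soft_value_iteration(T, Phi, w, horizon):
    """
    T:   transitions, shape (S, A, S), T[s, a, s'] = P(s' | s, a)
    Phi: features,    shape (S, A, K)
    w:   reward weights, shape (K,)
    Returns a list of per-timestep policies pi_t, each of shape (S, A).
    """
    R = Phi @ w                          # R[s, a] = sum_k w_k * phi_k(s, a)
    V = np.zeros(T.shape[0])             # soft value beyond the horizon
    policies = []
    for _ in range(horizon):             # backward pass over time
        Q = R + T @ V                    # Q[s, a] = R[s, a] + sum_s' T[s, a, s'] V[s']
        V = logsumexp(Q, axis=1)         # V[s] = log sum_a exp Q[s, a]
        policies.append(np.exp(Q - V[:, None]))   # pi(a | s) = exp(Q - V)
    return policies[::-1]                # policies[0] is the first timestep
```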

Page 24: Inverse Reinforcement Learning for Telepresence Robots

Outer Loop

Finding \pi for a fixed w corresponds to the inner loop of:

\min_{w} \Big\{ \max_{\pi} L(\pi, w) \Big\}

Gradient descent w.r.t. w in the outer loop:

\nabla_{w} L(\pi, w) = \mu^{\pi}|_{\mathcal{D}} - \tilde{\mu}^{\mathcal{D}}
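A sketch of the outer loop under the same assumptions, reusing `soft_value_iteration` from the previous sketch; the learning rate and iteration count are arbitrary placeholders:

```python
import numpy as np

def policy_feature_expectations(T, Phi, policies, P0, horizon):
    """mu^pi: expected summed features under pi, with initial-state distribution P0."""
    d = P0.copy()                            # state distribution at time t
    mu = np.zeros(Phi.shape[-1])
    for t in range(horizon):
        sa = d[:, None] * policies[t]        # joint P(s_t, a_t)
        mu += np.einsum('sa,sak->k', sa, Phi)
        d = np.einsum('sa,sap->p', sa, T)    # propagate to the next timestep
    return mu

def max_ent_irl(T, Phi, mu_D, P0, horizon, lr=0.1, iters=200):
    """Gradient descent on w; mu_D are the empirical feature expectations."""
    w = np.zeros(Phi.shape[-1])
    for _ in range(iters):
        pi = soft_value_iteration(T, Phi, w, horizon)                 # inner loop
        mu_pi = policy_feature_expectations(T, Phi, pi, P0, horizon)
        w -= lr * (mu_pi - mu_D)             # grad_w L(pi, w) = mu^pi|_D - mu~^D
    return w
```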

Page 25: Inverse Reinforcement Learning for Telepresence Robots

Outline

• TERESA

• Inverse Reinforcement Learning

• Inverse Reinforcement Learning from Failure

• Results

• Next Steps

Page 26: Inverse Reinforcement Learning for Telepresence Robots

Failed Demonstrations

• Existing IRL methods learn only from success

• IRL motivation: specifying a reward function is hard, but demonstrating success is easy

• Failed demonstrations are also often available, since humans typically learn via trial and error

Page 27: Inverse Reinforcement Learning for Telepresence Robots

• We extend IRL to include learning from failure

• Finds a cost function that encourages behaviour that:

• Maximises similarity to successful demos

• Minimises similarity to failed demos

Inverse Reinforcement Learning from Failure

Kyriacos Shiarlis‚ João Messias and Shimon Whiteson. Inverse Reinforcement Learning from Failure. In AAMAS 2016.

Page 28: Inverse Reinforcement Learning for Telepresence Robots

Naive Approach

Add inequality constraints pushing the policy's feature expectations away from the failed demonstrations:

|\tilde{\mu}_k^{\mathcal{F}} - \mu_k^{\pi}|_{\mathcal{F}}| > \alpha_k \quad \forall k

\max_{\pi, \alpha} H(A^h \| S^h) + \sum_{k=1}^{K} \alpha_k

Non-convex!

Convex relaxation:

\max_{\pi, \theta} H(A^h \| S^h) + \sum_{k=1}^{K} \theta_k \left( \mu_k^{\pi}|_{\mathcal{F}} - \tilde{\mu}_k^{\mathcal{F}} \right)

Intractable!

Page 29: Inverse Reinforcement Learning for Telepresence Robots

IRLF

\max_{\pi, \theta, z} \underbrace{H(A^h \| S^h)}_{\text{causal entropy}} + \underbrace{\sum_{k=1}^{K} \theta_k z_k}_{\text{dissimilarity}} - \underbrace{\frac{\lambda}{2} \|\theta\|^2}_{\text{regularisation}}

subject to: \mu_k^{\pi}|_{\mathcal{D}} = \tilde{\mu}_k^{\mathcal{D}} \quad \forall k

and: \mu_k^{\pi}|_{\mathcal{F}} - \tilde{\mu}_k^{\mathcal{F}} = z_k \quad \forall k \quad \text{(new constraint)}

and: \sum_{a \in A} \pi(s, a) = 1 \quad \forall s \in S

and: \pi(s, a) \geq 0 \quad \forall s \in S, a \in A

Page 30: Inverse Reinforcement Learning for Telepresence Robots

Lagrangian

L(\pi, \theta, z, w^D, w^F) = H(A^h \| S^h) - \frac{\lambda}{2} \|\theta\|^2 + \sum_{k=1}^{K} \theta_k z_k + \sum_{k=1}^{K} w_k^D \left( \mu_k^{\pi}|_{\mathcal{D}} - \tilde{\mu}_k^{\mathcal{D}} \right) + \sum_{k=1}^{K} w_k^F \left( \mu_k^{\pi}|_{\mathcal{F}} - \tilde{\mu}_k^{\mathcal{F}} - z_k \right)

find: \min_{w} \Big\{ \max_{\pi, \theta, z} L(\pi, \theta, z, w^D, w^F) \Big\}

subject to: \sum_{a \in A} \pi(s, a) = 1 \quad \forall s \in S

and: \pi(s, a) \geq 0 \quad \forall s \in S, a \in A

Page 31: Inverse Reinforcement Learning for Telepresence Robots

Solving the Lagrangian

First find critical points w.r.t. z and θ, then w.r.t. π, yielding:

\pi(s_t, a_t) \propto \exp\left( H(A^{t:h} \| S^{t:h}) + \sum_{k=1}^{K} (w_k^D + w_k^F) \, \mu_k^{\pi,t:h}|_{s_t, a_t} \right)

Yielding a new soft value iteration:

Q_w^{soft}(s, a) = \sum_{k=1}^{K} (w_k^D + w_k^F) \phi_k(s, a) + \sum_{s'} T(s, a, s') V_w^{soft}(s')

V_w^{soft}(s) = \log \sum_{a} \exp(Q_w^{soft}(s, a))

\pi(s, a) = \exp(Q_w^{soft}(s, a) - V_w^{soft}(s))
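Since the only change to the inner loop is that the reward weights become w^D + w^F, the earlier `soft_value_iteration` sketch could be reused directly (an illustrative assumption, not the authors' code):

```python
def irlf_soft_value_iteration(T, Phi, w_D, w_F, horizon):
    # The IRLF inner loop is the same soft value iteration as before,
    # run with the combined weights w^D + w^F as the effective reward weights.
    return soft_value_iteration(T, Phi, w_D + w_F, horizon)
```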

Page 32: Inverse Reinforcement Learning for Telepresence Robots

Outer Loop

Finding \pi, \theta, z for a fixed w corresponds to the inner loop of:

find: \min_{w} \Big\{ \max_{\pi, \theta, z} L(\pi, \theta, z, w^D, w^F) \Big\}

subject to: \sum_{a \in A} \pi(s, a) = 1 \quad \forall s \in S

and: \pi(s, a) \geq 0 \quad \forall s \in S, a \in A

Gradient descent w.r.t. w^D and w^F in the outer loop:

\nabla_{w_k^D} L^*(w^D, w^F) = \mu_k^{\pi^*}|_{\mathcal{D}} - \tilde{\mu}_k^{\mathcal{D}}

\nabla_{w_k^F} L^*(w^D, w^F) = \mu_k^{\pi^*}|_{\mathcal{F}} - \tilde{\mu}_k^{\mathcal{F}} - \lambda w_k^F
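A sketch of the IRLF outer loop under the same illustrative assumptions as before; note the extra −λ w^F term in the gradient for the failure weights:

```python
import numpy as np

def irlf(T, Phi, mu_D, mu_F, P0_D, P0_F, horizon, lam=1.0, lr=0.1, iters=200):
    """Joint gradient descent on w_D (successes) and w_F (failures)."""
    K = Phi.shape[-1]
    w_D, w_F = np.zeros(K), np.zeros(K)
    for _ in range(iters):
        pi = soft_value_iteration(T, Phi, w_D + w_F, horizon)            # inner loop
        mu_pi_D = policy_feature_expectations(T, Phi, pi, P0_D, horizon)
        mu_pi_F = policy_feature_expectations(T, Phi, pi, P0_F, horizon)
        w_D -= lr * (mu_pi_D - mu_D)                 # grad wrt w^D
        w_F -= lr * (mu_pi_F - mu_F - lam * w_F)     # grad wrt w^F (regularised)
    return w_D, w_F
```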

Page 33: Inverse Reinforcement Learning for Telepresence Robots

Incremental IRLF

• Updates to each set of weights affect feature expectations and thus interact with each other

• Some failed demos may be better suited for fine-tuning

• Solution: anneal λ (an illustrative schedule is sketched below)
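One illustrative way to anneal λ over outer-loop iterations; the starting value, decay rate, and direction below are placeholders, not the schedule used in the paper:

```python
def annealed_lambda(iteration, lam0=10.0, decay=0.97):
    # Placeholder geometric schedule: lambda shrinks as training progresses.
    return lam0 * decay ** iteration
```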

Page 34: Inverse Reinforcement Learning for Telepresence Robots

Outline

• TERESA

• Inverse Reinforcement Learning

• Inverse Reinforcement Learning from Failure

• Results

• Next Steps

Page 35: Inverse Reinforcement Learning for Telepresence Robots

Simulated Robot Domain

Page 36: Inverse Reinforcement Learning for Telepresence Robots

Simulated Robot Results

Three scenarios, each defined by an Expert (E) task and a Taboo (T) task:

• Overlapping — E: reach target, avoid obstacles; T: reach target, hit obstacle

• Complementary — E: reach target; T: hit obstacle

• Contrasting — E: reach target, avoid obstacles; T: hit obstacle

[Plots: performance w.r.t. Expert and w.r.t. Taboo in each scenario.]

Page 37: Inverse Reinforcement Learning for Telepresence Robots

Varying # of Successful Demos

(Contrasting Scenario)

Page 38: Inverse Reinforcement Learning for Telepresence Robots

Learned Reward Functions

(Contrasting Scenario)

[Figure: learned reward functions in the contrasting scenario, labelled Expert / Taboo and IRL / IRLF.]

Page 39: Inverse Reinforcement Learning for Telepresence Robots

Factory Domain Results

[Plots: performance w.r.t. Expert and w.r.t. Taboo.]

R. Dearden & C. Boutilier. Abstraction and Approximate Decision-Theoretic Planning. Artificial Intelligence, 89(1):219–283, 1997.

Page 40: Inverse Reinforcement Learning for Telepresence Robots

Real Robot Data

Page 41: Inverse Reinforcement Learning for Telepresence Robots

Real-World IRL Evaluation

• Missing ground-truth reward functions

• Current practice: use qualitative and/or ad-hoc metrics

• New approach:

• Apply IRL to D and F separately

• Resulting reward functions := ground truth

• Generate new data from corresponding policies

• Apply IRLF to the new datasets (see the sketch below)
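The evaluation protocol could be sketched as follows; every helper name here is an illustrative stand-in rather than an existing API, and scoring the learned policy by its value under each ground-truth reward is an assumption about the metric:

```python
def evaluate_irlf(mdp, D, F):
    # 1. Apply standard IRL to D (successes) and F (failures) separately,
    #    and treat the resulting reward functions as ground truth.
    w_expert = max_ent_irl_from_data(mdp, D)
    w_taboo = max_ent_irl_from_data(mdp, F)

    # 2. Generate new demonstration data from the corresponding policies.
    D_new = rollouts(mdp, soft_optimal_policy(mdp, w_expert))
    F_new = rollouts(mdp, soft_optimal_policy(mdp, w_taboo))

    # 3. Apply IRLF to the new datasets.
    w_D, w_F = irlf_from_data(mdp, D_new, F_new)
    pi_irlf = soft_optimal_policy(mdp, w_D + w_F)

    # 4. Score the learned policy against both known reward functions:
    #    high value w.r.t. the expert reward and low value w.r.t. the taboo
    #    reward indicate success.
    return policy_value(mdp, pi_irlf, w_expert), policy_value(mdp, pi_irlf, w_taboo)
```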

Page 42: Inverse Reinforcement Learning for Telepresence Robots

Real Robot Results

[Plots: performance w.r.t. Expert and w.r.t. Taboo.]

Page 43: Inverse Reinforcement Learning for Telepresence Robots

Outline

• TERESA

• Inverse Reinforcement Learning

• Inverse Reinforcement Learning from Failure

• Results

• Next Steps

Page 44: Inverse Reinforcement Learning for Telepresence Robots

Rapidly Exploring Random Trees

Figure source: Wikipedia

Page 45: Inverse Reinforcement Learning for Telepresence Robots

Next Version of TERESA

Page 46: Inverse Reinforcement Learning for Telepresence Robots

Extra Slides

Page 47: Inverse Reinforcement Learning for Telepresence Robots

Wizard of Oz Methodology

[Diagram: a Human Controller operates the robot "Wizard of Oz"-style, interacting with an Interaction Target while other people act as Human Obstacles.]

Page 48: Inverse Reinforcement Learning for Telepresence Robots

Deep IRLF

Figure source: Wulfmeier et al.

Markus Wulfmeier, Peter Ondruska, and Ingmar Posner. Maximum Entropy Deep Inverse Reinforcement Learning. arXiv:1507.04888.