10703 Deep Reinforcement Learning and Control Russ Salakhutdinov Machine Learning Department [email protected] Policy Gradient II


Page 1:

10703 Deep Reinforcement Learning and Control

Russ Salakhutdinov, Machine Learning Department

[email protected]

Policy Gradient II

Page 2:

Used Materials •  Disclaimer: Much of the material and slides for this lecture were borrowed from Rich Sutton’s RL class and David Silver’s Deep RL tutorial

Page 3:

Policy-Based Reinforcement Learning ‣  So far we approximated the value or action-value function using parameters θ (e.g. neural networks)

‣  A policy was generated directly from the value function e.g. using ε-greedy

‣  In this lecture we will directly parameterize the policy

‣  We will focus again on model-free reinforcement learning
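The formulas on this slide were lost in extraction; as a rough sketch in the standard notation, the contrast between the two approaches is:

```latex
% Value-based: approximate the value or action-value function with parameters \theta
%   V_\theta(s) \approx V^\pi(s), \qquad Q_\theta(s,a) \approx Q^\pi(s,a)
% Policy-based: parameterize the policy itself
\begin{equation*}
\pi_\theta(s, a) = \mathbb{P}\left[ a \mid s;\, \theta \right]
\end{equation*}
```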

Page 4:

Policy Gradient Theorem ‣  The policy gradient theorem generalizes the likelihood ratio approach to multi-step MDPs

‣  Replaces instantaneous reward r with long-term value Qπ(s,a)

‣  Policy gradient theorem applies to start state objective, average reward and average value objective

‣  For any differentiable policy πθ(s, a), the policy gradient is
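The theorem's statement was an image and did not survive extraction; the standard form (as in Sutton & Barto and Silver's lectures) is:

```latex
% Policy gradient theorem: for any differentiable policy \pi_\theta(s,a),
% and for the start-state, average-reward, or average-value objective J,
\begin{equation*}
\nabla_\theta J(\theta)
  = \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(s,a)\, Q^{\pi_\theta}(s,a) \right]
\end{equation*}
```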

Page 5:

Monte-Carlo Policy Gradient (REINFORCE) ‣  Update parameters by stochastic gradient ascent

‣  Using policy gradient theorem

‣  Using the return Gt as an unbiased sample of Qπθ(st, at)
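The slide's update rule and pseudocode are missing from the extraction. Below is a minimal sketch of the Monte-Carlo policy-gradient update for a linear softmax policy, following the Sutton & Barto form (which includes a γ^t factor); the `env.reset()` / `env.step()` interface and the linear features are assumptions for illustration, not the lecture's original code.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

class SoftmaxPolicy:
    """Linear softmax policy pi_theta(a|s) over discrete actions."""
    def __init__(self, n_features, n_actions):
        self.theta = np.zeros((n_actions, n_features))

    def probs(self, s):
        return softmax(self.theta @ s)

    def sample(self, s):
        p = self.probs(s)
        return np.random.choice(len(p), p=p)

    def grad_log_prob(self, s, a):
        # d/dtheta log pi(a|s): feature vector on row a minus expected features under pi
        p = self.probs(s)
        g = -np.outer(p, s)
        g[a] += s
        return g

def reinforce_episode(env, policy, alpha=0.01, gamma=0.99):
    """Run one episode, then apply the Monte-Carlo policy-gradient update
    theta <- theta + alpha * gamma^t * G_t * grad log pi(a_t|s_t).
    Assumes env.reset() -> s and env.step(a) -> (s_next, r, done)."""
    states, actions, rewards = [], [], []
    s, done = env.reset(), False
    while not done:
        a = policy.sample(s)
        s_next, r, done = env.step(a)
        states.append(s); actions.append(a); rewards.append(r)
        s = s_next
    G = 0.0
    # accumulate returns backwards, updating parameters at every step
    for t in reversed(range(len(rewards))):
        G = rewards[t] + gamma * G
        policy.theta += alpha * (gamma ** t) * G * policy.grad_log_prob(states[t], actions[t])
```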

Page 6:

Actor-Critic ‣  Monte-Carlo policy gradient still has high variance

‣  We can use a critic to estimate the action-value function:

‣  Actor-critic algorithms maintain two sets of parameters

-  Critic Updates action-value function parameters w

-  Actor Updates policy parameters θ, in direction suggested by critic

‣  Actor-critic algorithms follow an approximate policy gradient
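The missing formulas are, in the standard notation, a critic Qw approximating Qπθ and an actor update along the resulting approximate gradient:

```latex
% Critic: approximate the action-value function, Q_w(s,a) \approx Q^{\pi_\theta}(s,a)
% Actor: follow the approximate policy gradient suggested by the critic
\begin{align*}
\nabla_\theta J(\theta) &\approx \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(s,a)\, Q_w(s,a) \right] \\
\Delta\theta &= \alpha\, \nabla_\theta \log \pi_\theta(s,a)\, Q_w(s,a)
\end{align*}
```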

Page 7:

Reducing Variance Using a Baseline ‣  We can subtract a baseline function B(s) from the policy gradient

‣  This can reduce variance, without changing expectation: the subtracted term has zero mean because the action probabilities in each state sum to 1

‣  A good baseline is the state value function, which is a function of the state s but not of the action a
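A sketch of the standard argument behind this claim, reconstructing the derivation that was lost in extraction:

```latex
% Subtracting a state-dependent baseline B(s) adds a term with zero expectation,
% because the action probabilities sum to 1 in every state:
\begin{align*}
\mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(s,a)\, B(s) \right]
  &= \sum_{s} d^{\pi_\theta}(s)\, B(s) \sum_{a} \nabla_\theta \pi_\theta(s,a) \\
  &= \sum_{s} d^{\pi_\theta}(s)\, B(s)\, \nabla_\theta \underbrace{\sum_{a} \pi_\theta(s,a)}_{=\,1} \;=\; 0
\end{align*}
% A good choice is the state value function: B(s) = V^{\pi_\theta}(s).
```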

Page 8:

Reducing Variance Using a Baseline ‣  We can subtract a baseline function B(s) from the policy gradient

‣  This can reduce variance, without changing expectation!

‣  A good baseline is the state value function

‣  So we can rewrite the policy gradient using the advantage function:

‣  Note that it is the exact same policy gradient:
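The missing formulas are, in standard notation, the advantage function and the rewritten (but unchanged) policy gradient:

```latex
% Advantage: how much better action a is than average in state s
\begin{align*}
A^{\pi_\theta}(s,a) &= Q^{\pi_\theta}(s,a) - V^{\pi_\theta}(s) \\
\nabla_\theta J(\theta) &= \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(s,a)\, A^{\pi_\theta}(s,a) \right]
\end{align*}
```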

Page 9:

Estimating the Advantage Function ‣  The advantage function can significantly reduce the variance of the policy gradient

‣  So the critic should really estimate the advantage function

‣  Using two function approximators and two parameter vectors:

‣  And updating both value functions by e.g. TD learning

‣  For example, by estimating both the state value function and the action-value function
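The two approximators referred to above would, in the standard notation, be:

```latex
% Two function approximators with parameter vectors v and w:
\begin{align*}
V_v(s) &\approx V^{\pi_\theta}(s), &
Q_w(s,a) &\approx Q^{\pi_\theta}(s,a), &
A(s,a) &= Q_w(s,a) - V_v(s)
\end{align*}
```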

Page 10:

Dueling Networks ‣  Split Q-network into two channels

‣  Action-independent value function V(s,v)

‣  Action-dependent advantage function A(s, a, w)

Wang et al., ICML 2016

‣  Advantage function is defined as:
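The definition the slide refers to is the usual one; the Wang et al. (2016) architecture then recombines the two streams, shown here in their mean-subtracted form as a hedged reconstruction of the missing formula:

```latex
% Advantage definition and dueling aggregation of the two streams:
\begin{align*}
A(s,a) &= Q(s,a) - V(s) \\
Q(s,a;\,v,w) &= V(s;\,v) + \Big( A(s,a;\,w) - \tfrac{1}{|\mathcal{A}|}\textstyle\sum_{a'} A(s,a';\,w) \Big)
\end{align*}
```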

Page 11:

Estimating the Advantage Function ‣  For the true value function Vπθ(s), the TD error δπθ:

is an unbiased estimate of the advantage function:

‣  So we can use the TD error to compute the policy gradient

‣  Remember the policy gradient
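The missing formulas are, in standard notation:

```latex
% TD error under the true value function V^{\pi_\theta}, and why it estimates the advantage:
\begin{align*}
\delta^{\pi_\theta} &= r + \gamma V^{\pi_\theta}(s') - V^{\pi_\theta}(s) \\
\mathbb{E}_{\pi_\theta}\!\left[ \delta^{\pi_\theta} \mid s, a \right]
  &= \mathbb{E}\!\left[ r + \gamma V^{\pi_\theta}(s') \mid s, a \right] - V^{\pi_\theta}(s)
   = Q^{\pi_\theta}(s,a) - V^{\pi_\theta}(s) = A^{\pi_\theta}(s,a) \\
\nabla_\theta J(\theta) &= \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(s,a)\, \delta^{\pi_\theta} \right]
\end{align*}
```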

Page 12:

Estimating the Advantage Function ‣  For the true value function Vπθ(s), the TD error δπθ:

is an unbiased estimate of the advantage function

‣  So we can use the TD error to compute the policy gradient

‣  This approach only requires one set of critic parameters v

‣  In practice we can use an approximate TD error
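With a learned critic Vv, the approximate TD error and the corresponding gradient estimate (the formulas missing above) would read:

```latex
% Approximate TD error using only the critic parameters v:
\begin{align*}
\delta_v &= r + \gamma V_v(s') - V_v(s) \\
\nabla_\theta J(\theta) &\approx \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(s,a)\, \delta_v \right]
\end{align*}
```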

Page 13:

Critic ‣  The critic can estimate the value function Vv(s) from various targets. For example, from previous lectures:

‣  For MC, the target is the return Gt

‣  Vv can be a deep neural network with parameters v.

‣  For TD(0), the target is the TD target

‣  So critic is updated to minimize MSE w.r.t. target, given by MC or TD(0)
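The targets and the corresponding mean-squared-error objectives, which are missing from the extracted slide, are in standard form:

```latex
% Monte-Carlo target: the return G_t;  TD(0) target: r_{t+1} + \gamma V_v(s_{t+1})
\begin{align*}
\text{MC:}\quad & \min_v \; \big( G_t - V_v(s_t) \big)^2 \\
\text{TD(0):}\quad & \min_v \; \big( r_{t+1} + \gamma V_v(s_{t+1}) - V_v(s_t) \big)^2
\end{align*}
```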

Page 14:

Actor ‣  The policy gradient can also be estimated as follows:

‣  Monte-Carlo policy gradient uses error from complete return

‣  Actor-critic policy gradient uses the one-step TD error
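One standard way to write the two estimators mentioned above; the slide's formulas are missing, and the assumption here is that the Monte-Carlo form subtracts the learned baseline Vv, as "error from complete return" suggests:

```latex
% Monte-Carlo policy gradient: error from the complete return
% Actor-critic policy gradient: one-step TD error
\begin{align*}
\Delta\theta &= \alpha \big( G_t - V_v(s_t) \big)\, \nabla_\theta \log \pi_\theta(s_t, a_t) \\
\Delta\theta &= \alpha \big( r_{t+1} + \gamma V_v(s_{t+1}) - V_v(s_t) \big)\, \nabla_\theta \log \pi_\theta(s_t, a_t)
\end{align*}
```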

Page 15:

Advantage Actor-Critic Algorithm
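The algorithm box on this slide did not survive extraction. Below is a minimal one-step advantage actor-critic sketch in the spirit of the lecture, using the TD error as the advantage estimate; the linear softmax actor, linear critic, and the `env.reset()` / `env.step()` interface are assumptions for illustration, not the slide's original pseudocode.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def advantage_actor_critic(env, n_features, n_actions,
                           alpha_theta=0.01, alpha_v=0.1, gamma=0.99, n_episodes=500):
    theta = np.zeros((n_actions, n_features))  # actor parameters
    v = np.zeros(n_features)                   # critic parameters, V_v(s) = v . s

    for _ in range(n_episodes):
        s, done = env.reset(), False
        while not done:
            p = softmax(theta @ s)
            a = np.random.choice(n_actions, p=p)
            s_next, r, done = env.step(a)

            # One-step TD error, used as an estimate of the advantage A(s, a)
            v_next = 0.0 if done else v @ s_next
            delta = r + gamma * v_next - v @ s

            # Critic: semi-gradient TD(0) update toward the TD target
            v += alpha_v * delta * s

            # Actor: grad log pi(a|s) for a linear softmax policy, scaled by the TD error
            grad_log_pi = -np.outer(p, s)
            grad_log_pi[a] += s
            theta += alpha_theta * delta * grad_log_pi

            s = s_next
    return theta, v
```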

Page 16:

So Far: Summary of PG Algorithms ‣  The policy gradient has many equivalent forms

‣  Each leads to a stochastic gradient ascent algorithm

‣  The critic uses policy evaluation (e.g. MC or TD learning) to estimate Qw(s,a), Aw(s,a) or Vv(s)
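The equivalent forms referred to here are, in Silver's notation:

```latex
\begin{align*}
\nabla_\theta J(\theta)
  &= \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(s,a)\, G_t \right]          && \text{REINFORCE} \\
  &= \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(s,a)\, Q_w(s,a) \right]      && \text{Q actor-critic} \\
  &= \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(s,a)\, A_w(s,a) \right]      && \text{advantage actor-critic} \\
  &= \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(s,a)\, \delta \right]        && \text{TD actor-critic}
\end{align*}
```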

Page 17:

Bias in Actor-Critic Algorithms ‣  Approximating the policy gradient introduces bias

‣  A biased policy gradient may not find the right solution

‣  Luckily, if we choose value function approximation carefully

‣  Then we can avoid introducing any bias

‣  i.e. we can still follow the exact policy gradient

Page 18:

Compatible Function Approximation ‣  If the following two conditions are satisfied:

1. The value function approximator is compatible with the policy

2. The value function parameters w minimize the mean-squared error

‣  Then the policy gradient is exact,

‣  Remember:
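The formulas for the two conditions and the resulting exact gradient were lost in extraction; in the standard notation they read:

```latex
% 1. Compatible features: the critic's gradient matches the policy's score function
%      \nabla_w Q_w(s,a) = \nabla_\theta \log \pi_\theta(s,a)
% 2. The critic parameters w minimize the mean-squared error
%      \varepsilon = \mathbb{E}_{\pi_\theta}\!\left[ \big( Q^{\pi_\theta}(s,a) - Q_w(s,a) \big)^2 \right]
% Then the policy gradient is exact:
\begin{equation*}
\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(s,a)\, Q_w(s,a) \right]
\end{equation*}
```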

Page 19:

Proof ‣  If w is chosen to minimize the mean-squared error, the gradient of ε w.r.t. w must be zero,

‣  So Qw(s, a) can be substituted directly into the policy gradient,
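A sketch of the missing algebra, following the standard proof:

```latex
% Setting the gradient of the mean-squared error to zero and using compatibility:
\begin{align*}
\nabla_w \varepsilon = 0
  &\;\Rightarrow\; \mathbb{E}_{\pi_\theta}\!\left[ \big( Q^{\pi_\theta}(s,a) - Q_w(s,a) \big)\, \nabla_w Q_w(s,a) \right] = 0 \\
  &\;\Rightarrow\; \mathbb{E}_{\pi_\theta}\!\left[ \big( Q^{\pi_\theta}(s,a) - Q_w(s,a) \big)\, \nabla_\theta \log \pi_\theta(s,a) \right] = 0 \\
  &\;\Rightarrow\; \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(s,a)\, Q_w(s,a) \right]
      = \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(s,a)\, Q^{\pi_\theta}(s,a) \right]
      = \nabla_\theta J(\theta)
\end{align*}
```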

Page 20:

Alternative Policy Gradient Directions ‣  Gradient ascent algorithms can follow any ascent direction

‣  A good ascent direction can significantly speed convergence

‣  Also, a policy can often be reparametrized without changing action probabilities

‣  For example, increasing the scores of all actions in a softmax policy by the same amount leaves the action probabilities unchanged

‣  The vanilla gradient is sensitive to these reparametrizations

Page 21:

Natural Policy Gradient ‣  The natural policy gradient is parametrization independent

‣  It finds the ascent direction that is closest to the vanilla gradient, when changing the policy by a small, fixed amount

‣  where Gθ is the Fisher information matrix
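The missing formulas: the natural gradient pre-multiplies the vanilla gradient by the inverse Fisher information matrix,

```latex
\begin{align*}
\nabla_\theta^{\text{nat}} \pi_\theta(s,a) &= G_\theta^{-1}\, \nabla_\theta \pi_\theta(s,a), \\
G_\theta &= \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(s,a)\, \nabla_\theta \log \pi_\theta(s,a)^{\!\top} \right]
\end{align*}
```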

Page 22:

Natural Actor-Critic ‣  Using compatible function approximation (a linear model for the advantage),

‣  the natural policy gradient simplifies,

‣  i.e. update actor parameters in the direction of the critic parameters
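The missing derivation, assuming the compatible (linear) advantage Aw(s,a) = ∇θ log πθ(s,a)ᵀ w as in Silver's slides:

```latex
% Compatibility gives \nabla_w A_w(s,a) = \nabla_\theta \log \pi_\theta(s,a), so
\begin{align*}
\nabla_\theta J(\theta)
  &= \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(s,a)\, A_w(s,a) \right] \\
  &= \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(s,a)\, \nabla_\theta \log \pi_\theta(s,a)^{\!\top} \right] w
   \;=\; G_\theta\, w \\
\nabla_\theta^{\text{nat}} J(\theta) &= G_\theta^{-1}\, G_\theta\, w \;=\; w
\end{align*}
```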

Page 23:

Summary of Policy Gradient Algorithms ‣  The policy gradient has many equivalent forms

‣  Each leads to a stochastic gradient ascent algorithm

‣  The critic uses policy evaluation (e.g. MC or TD learning) to estimate Qw(s,a), Aw(s,a) or Vv(s)

Page 24:

Caption Generation with Visual Attention

A man riding a horse in a field.

Xu et al., ICML 2015

Page 25:

Caption Generation with Visual Attention

A man riding a horse in a field.

Xu et al., ICML 2015

Page 26:

Caption Generation with Visual Attention

Xu et al., ICML 2015

Page 27:

Improving Action Recognition •  Consider performing action recognition in a video:

•  Instead of processing each frame, we can process only a small piece of each frame.

Page 28:

Improving Action Recognition

Cycling, Horseback riding, Basketball shooting, Soccer juggling

Page 29:

Recurrent Attention Model

(Figure: a coarse image is fed into a recurrent neural network, which produces a classification and samples an action.)

•  Hard Attention: Sample the action (latent gaze location)

•  Soft Attention: Take an expectation instead of sampling

Page 30:

Model Setup •  We assume that we have a dataset with labels y for the supervised prediction task (e.g. object category).

•  Goal: Learn an attention policy. The best locations to attend to are the ones which lead the model to predict the correct class.

Page 31:

Model Definition •  We aim to maximize the probability of the correct class by marginalizing over the actions (or latent gaze locations):

where
-  W is the set of parameters of the recurrent network.
-  a is a set of actions (latent gaze locations, scale).
-  X is the input (e.g. image, video frame).

For clarity of presentation, I will sometimes omit conditioning on W or X. It should be obvious from the context.
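The objective the slide refers to (missing from the extraction) is, in this notation, the log-probability of the correct class with the gaze locations marginalized out:

```latex
\begin{equation*}
\log p(y \mid X, W) \;=\; \log \sum_{a} p(a \mid X, W)\, p(y \mid a, X, W)
\end{equation*}
```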

Page 32:

Variational Learning •  Previous approaches used a variational lower bound:

•  Here q is some approximation to the posterior over the gaze locations.

Ba et al., ICLR 2015; Mnih et al., NIPS 2014

•  In the case where q is the prior, q(a) = p(a | X), the variational bound becomes:
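A reconstruction of the missing bounds, following the free-energy form used in Ba et al. (2015) and Mnih et al. (2014); conditioning on W is omitted as stated above:

```latex
% General variational lower bound with approximate posterior q(a):
\begin{align*}
\log p(y \mid X)
  &= \log \sum_{a} p(a \mid X)\, p(y \mid a, X)
   \;\ge\; \sum_{a} q(a) \log \frac{p(a \mid X)\, p(y \mid a, X)}{q(a)} \\
% When q is the prior, q(a) = p(a \mid X), the ratio term cancels and the bound becomes:
\log p(y \mid X) &\;\ge\; \sum_{a} p(a \mid X)\, \log p(y \mid a, X)
\end{align*}
```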

Page 33:

REINFORCE

•  Derivatives w.r.t. model parameters:

Very bad term, as it is unbounded. Introduces high variance in the estimator.

•  Need to introduce heuristics (e.g. replacing this term with a 0/1 discrete indicator function, which leads to the REINFORCE algorithm of Williams, 1992).

Ba et al., ICLR 2015; Mnih et al., NIPS 2014
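The derivative itself is missing from the extraction; differentiating the bound above w.r.t. the parameters W gives the expression below, where the log p(y | a, X) factor in the second term appears to be the unbounded quantity the slide warns about:

```latex
\begin{equation*}
\frac{\partial}{\partial W} \sum_{a} p(a \mid X)\, \log p(y \mid a, X)
  = \sum_{a} p(a \mid X) \left[
      \frac{\partial \log p(y \mid a, X)}{\partial W}
      + \log p(y \mid a, X)\, \frac{\partial \log p(a \mid X)}{\partial W}
    \right]
\end{equation*}
```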

Page 34:

REINFORCE

•  Derivatives w.r.t. model parameters:

•  The stochastic estimator of the gradient is given by:

where we draw M actions from the prior:
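A reconstruction of the missing estimator, averaging the bracketed derivative from the previous slide over M samples drawn from the prior:

```latex
% Draw M actions (gaze locations) from the prior, a^m ~ p(a | X), and average:
\begin{equation*}
\frac{\partial}{\partial W} \sum_{a} p(a \mid X)\, \log p(y \mid a, X)
  \;\approx\; \frac{1}{M} \sum_{m=1}^{M} \left[
      \frac{\partial \log p(y \mid a^{m}, X)}{\partial W}
      + \log p(y \mid a^{m}, X)\, \frac{\partial \log p(a^{m} \mid X)}{\partial W}
    \right]
\end{equation*}
```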

Page 35:

MNIST Attention Demo •  Actions contain:

-  Location: 2-d Gaussian latent variable
-  Scale: 3-way softmax over 3 different scales