
Constrained sampling via Langevin dynamics

Volkan Cevher, https://lions.epfl.ch

Laboratory for Information and Inference Systems (LIONS)

École Polytechnique Fédérale de Lausanne (EPFL), Switzerland

École Polytechnique, Paris

[June 2019]


Statistical Inference: A summary-based approach

• M-estimators in statistics/learning: Assume prior p(x), observe data z_1, z_2, ..., z_n.

. Find the best parameter/predictor via optimization:

x^* = arg min_x ∑_{i=1}^{n} f(x; z_i) − log p(x).

. An intuitive summary of the posterior

. Extremely popular (e.g., MLE, ERM) with theoretical backing


Statistical Inference: A more descriptive approach

• Posterior sampling: Assume prior p(x), observe data z1, z2, ..., zn.

. Generate samples from the posterior

p(x | z_1, z_2, ..., z_n) ∝ p(x) · exp( −∑_{i=1}^{n} f(x; z_i) ).

. Ability to obtain different summaries of the posterior (e.g., the posterior mean)

. Reinventing itself


Point estimates vs posterior sampling

• Posterior sampling: Arguably more difficult.

• Upshots

. Flexibility: Generativeness (e.g., Generative Adversarial Networks)...

. Uncertainty: Distribution of parameters/predictors, confidence bounds...

. Information: Multi-modality, other summaries and features...


Summary of this lecture

• Unconstrained sampling via unadjusted Langevin Dynamics

• Constrained sampling via mirrored Langevin dynamics

• Applications / Extensions:

. Sampling for Generative Adversarial Networks

. Sampling for non-convex optimization


Before we begin, let’s recall a key concept

• Markov chain [Kemeny and Snell, 1976]

. sequence of random variables {X1, X2, . . .} satisfying

Pr(X_i | X_1, X_2, ..., X_{i−1}) = Pr(X_i | X_{i−1}) ∀ i > 1

• Example Bayesian network

X_1 → X_2 → X_3 → · · ·

. remove arrows to form a Markov random field

• No memory: Future is independent of the past given the present.
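As a quick, minimal illustration of this memorylessness (not part of the original slides), the sketch below simulates a two-state Markov chain; the transition matrix P and the chain length are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Transition matrix of a two-state Markov chain (each row sums to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def simulate_chain(P, x0, n_steps):
    """Sample X_1, ..., X_n where each X_i depends only on X_{i-1}."""
    states = [x0]
    for _ in range(n_steps):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return np.array(states)

chain = simulate_chain(P, x0=0, n_steps=10_000)
# The empirical state frequencies approach the stationary distribution of P.
print(np.bincount(chain) / len(chain))
```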


Markov Chain Monte Carlo

• Goal: Sample from a distribution µ.

• Idea of MCMC:

. Construct a Markov Chain whose stationary distribution is µ

. Simulate the Markov Chain until mixing time

• Key question: How to design such a Markov Chain?

• Many possibilities:

. Metropolis-Hastings [Metropolis and Ulam, 1949, Hastings, 1970]

. Gibbs sampling [Gelfand et al., 1990]

. Hamiltonian Monte Carlo [Neal et al., 2011]

. Importance sampling [Kahn and Harris, 1951]

. ...


Langevin Monte Carlo

• Task: Given a target distribution dµ = e^{−V(x)} dx, generate samples from µ.

. Most samples should be gathered around the minimum of V

. We do not want convergence to the minimum

• Idea: Use Gradient Descent + noise

X_{k+1} = X_k − β_k ∇V(X_k) + noise

• Question: What amount of noise should we add so that X_∞ ∼ µ?


Unadjusted Langevin Algorithm

• Task: Given a target distribution dµ = e^{−V(x)} dx, generate samples from µ.

. Fundamental in machine learning/statistics/computer science/etc.

• A scalable framework: First-order sampling (assuming access to ∇V ).

Step 1. Langevin Dynamics

dX_t = −∇V(X_t) dt + √2 dB_t ⇒ X_∞ ∼ e^{−V}.

Step 2. Discretize

x_{k+1} = x_k − β_k ∇V(x_k) + √(2β_k) ξ_k

. β_k: step size, ξ_k: standard normal

. strong analogy to gradient descent method
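A minimal sketch of this discretization (not from the slides), for a standard Gaussian target where V(x) = ‖x‖²/2 so ∇V(x) = x; the step size and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_V(x):
    # V(x) = ||x||^2 / 2, so e^{-V} is a standard Gaussian and grad V(x) = x.
    return x

def ula(x0, beta, n_iters):
    """Unadjusted Langevin Algorithm: x_{k+1} = x_k - beta*grad V(x_k) + sqrt(2*beta)*xi_k."""
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        x = x - beta * grad_V(x) + np.sqrt(2 * beta) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

samples = ula(x0=np.zeros(2), beta=0.05, n_iters=20_000)
print(samples.mean(axis=0), samples.var(axis=0))  # roughly (0, 0) and (1, 1), up to discretization bias
```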


Log-concave distribution

Definition
A function V : X → R ∪ {+∞} is called convex on its domain X if, for any x_1, x_2 ∈ X and α ∈ [0, 1], we have:

V(α x_1 + (1 − α) x_2) ≤ α V(x_1) + (1 − α) V(x_2).

Corollary

If V : X → R is twice differentiable, then V is convex if and only if

∇²V(x) ⪰ 0 ∀ x ∈ X,

where ∇²V(x) denotes the Hessian matrix of V at x, and for A, B ∈ R^{d×d}, A ⪰ B means that A − B is positive semi-definite.

• Similarly, a function V is called concave if −V is convex.

Definition
A distribution µ over X is called log-concave if it can be written as dµ = e^{−V(x)} dx with V convex.


Recent progress: Unconstrained distributions are easy

• State-of-the-art: When dom(V) = R^d,

Assumption      | W_2         | d_TV        | KL          | Literature
LI ⪰ ∇²V ⪰ mI   | O(ε^{−2} d) | O(ε^{−2} d) | O(ε^{−1} d) | [Cheng and Bartlett, 2017], [Dalalyan and Karagulyan, 2017], [Durmus et al., 2018]
LI ⪰ ∇²V ⪰ 0    | –           | O(ε^{−4} d) | O(ε^{−2} d) | [Durmus et al., 2018]

Note: W_2(µ_1, µ_2) ≔ √( inf_{X∼µ_1, Y∼µ_2} E‖X − Y‖² ),  d_TV(µ_1, µ_2) ≔ sup_{A Borel} |µ_1(A) − µ_2(A)|.

• Added twists with additional assumptions

. Metropolis-Hastings [Dwivedi et al., 2018]

. Underdamping [Cheng et al., 2017]

. JKO-based schemes [Wibisono, 2018, Bernton, 2018, Jordan et al., 1998]

• What about constrained distributions?

. include many important applications, such as Latent Dirichlet Allocation (LDA)


A challenge: Constrained distributions are hard

• When dom(V ) is compact, convergence rates deteriorate significantly.

Assumption      | W_2 or KL | d_TV            | Literature
LI ⪰ ∇²V ⪰ mI   | ?         | O(ε^{−6} d^{5}) | [Brosse et al., 2017]
LI ⪰ ∇²V ⪰ 0    | ?         | O(ε^{−6} d^{5}) | [Brosse et al., 2017]

. cf. when V is unconstrained: O(ε^{−4} d) convergence in d_TV

. Projection is not a solution: Slow rates [Bubeck et al., 2015], boundary issues


Another challenge: Convergence for non-log-concave distributions

• Log-concave distributions have nice convergence.

• What about non-log-concave distributions?

. The Dirichlet posteriors:

V (x) = C −d+1∑`=1

(n` + α` − 1) log x`.

. Super important in text modeling (via LDA), but no existing rate.


Mirrored Langevin Dynamics [Hsieh et al., 2018a]

• We provide a unified framework for constrained sampling.

. Provides rates matching the unconstrained case: an O(ε^{−4} d^{4}) improvement

• The first non-asymptotic guarantee for Dirichlet posteriors (in first-order sampling).

. Applicable to a wide class of distributions

• Mini-batch or on-line extensions: sampling with stochastic gradients.

. Experiments for the Latent Dirichlet Allocation (LDA), Wikipedia corpus


Approach, algorithm, and theory


A starting point: [Nemirovski-Yudin 83] + [Beck-Teboulle 03]

• Entropic Mirror Descent: Unconstrained optimization within the simplex

x_{k+1} = ∇h^*( ∇h(x_k) − β_k ∇V(x_k) ),  (h: the entropy function)

. Mirror vs. primal image: y = ∇h(x) ⇔ x = ∇h^*(y)

y_{k+1} = y_k − β_k ∇V(x_k) ⇒ no projection in the mirrored space

. Compare to the projected gradient algorithm

x_{k+1} = Proj_Δ( x_k − β_k ∇V(x_k) ).

Q: Can we derive an entropic, unconstrained Langevin dynamics on the simplex?

• More generally, a “mirror descent theory” for Langevin Dynamics? Any benefit?


Mirrored Langevin Dynamics (MLD)

• Approach: Push-forward measure

ν = ∇h_# µ ⇔ ν(A) = µ( (∇h)^{−1}(A) ), A Borel.

• A simple but crucial observation: Let Y_t = ∇h(X_t); then it holds that

h strictly convex ⇔ ∇h one-to-one ⇒ Y_∞ = ∇h(X_∞) ∼ ∇h_# e^{−V}.

• For ease of discretization, simply consider the Langevin Dynamics for ∇h_# e^{−V}:

MLD ≡ { dY_t = −(∇W ∘ ∇h)(X_t) dt + √2 dB_t,  X_t = ∇h^*(Y_t) },  where e^{−W} = ∇h_# e^{−V}.


Implications of MLD I: Preserving the convergence

• Theory: Sampling with or without constraint has the same iteration complexity.

Theorem (Informal)

For discretized MLD with strongly convex h,

1. If y_k is close to e^{−W} in KL/W_1/W_2/d_TV, then x_k is close to e^{−V} in KL/W_1/W_2/d_TV.

2. Let ∇²V ⪰ mI on a bounded convex domain. Then there exists a good mirror map such that the discretized MLD yields convergence

O(ε^{−1} d) in KL,  O(ε^{−2} d) in d_TV,  O(ε^{−2} d) in W_1,  O(ε^{−2} d) in W_2.

• Comparison to state-of-the-art:

Assumption      | W_1             | d_TV            | Literature
LI ⪰ ∇²V ⪰ mI   | O(ε^{−6} d^{5}) | O(ε^{−6} d^{5}) | [Brosse et al., 2017]

Caveat: Existential; no explicit algorithm.


Implications of MLD I: Preserving the convergence

• A key lemma: Let B_h(x, x′) = h(x) − h(x′) − ⟨∇h(x′), x − x′⟩.

Lemma (Duality of Wasserstein Distances)

Let µ_1, µ_2 be probability measures. Then, it holds that

W_{B_h}(µ_1, µ_2) = W_{B_{h^*}}(∇h_# µ_1, ∇h_# µ_2),

where

W_{B_h}(µ_1, µ_2) ≔ inf_{T : T_# µ_1 = µ_2} ∫ B_h(x, T(x)) dµ_1(x),

W_{B_{h^*}}(ν_1, ν_2) ≔ inf_{T : T_# ν_1 = ν_2} ∫ B_{h^*}(T(y), y) dν_1(y).

• Generalizes the classical duality:

B_h(x, x′) = B_{h^*}(∇h(x′), ∇h(x))


Convergence in W2

Theorem (Formal, W2)

For discretized MLD with ρ-strongly convex h,

W_2(x_t, e^{−V}) ≤ (1/ρ) W_2(y_t, e^{−W}).

• Classical mirror descent theory:

. If h is ρ-strongly convex, then

(ρ/2) ‖x − x′‖² ≤ B_h(x, x′) = B_{h^*}(∇h(x′), ∇h(x)) ≤ (1/(2ρ)) ‖∇h(x) − ∇h(x′)‖².

. Take expectations, using y_t ∼ ∇h_# x_t and e^{−W} = ∇h_# e^{−V}

. Our key lemma asserts that the equality also holds in the Wasserstein space. ∎


Implications of MLD II: Practical sampling on simplex

• Practice: Efficient sampling algorithms on the simplex

. Run A-MLD with the entropic mirror map

h(x) = ∑_{i=1}^{d} x_i log x_i + ( 1 − ∑_{i=1}^{d} x_i ) log( 1 − ∑_{i=1}^{d} x_i )

. Explicit formula for ∇h_# e^{−V}:

W(y) = V ∘ ∇h^*(y) − ∑_{i=1}^{d} y_i + (d + 1) h^*(y),   h^*(y) = log( 1 + ∑_{i=1}^{d} e^{y_i} ).

. Surprise: Convex W vs. non-convex V !
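These formulas translate directly into code. A minimal sketch (not from the slides; V is a user-supplied potential on the simplex, expressed in the d free coordinates):

```python
import numpy as np

def h_star(y):
    # h*(y) = log(1 + sum_i exp(y_i)), computed as a stable log-sum-exp with an extra 0 entry.
    return np.logaddexp.reduce(np.append(y, 0.0))

def grad_h_star(y):
    # (grad h*)(y)_i = exp(y_i) / (1 + sum_j exp(y_j)): maps a mirror point back into the simplex.
    return np.exp(y - h_star(y))

def W(y, V):
    # W(y) = V(grad h*(y)) - sum_i y_i + (d + 1) * h*(y): the push-forward potential on R^d.
    d = len(y)
    return V(grad_h_star(y)) - y.sum() + (d + 1) * h_star(y)
```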


The curious case of the Dirichlet Posterior

• Dirichlet posterior: V(x) = C − ∑_{i=1}^{d+1} (n_i + α_i − 1) log x_i

. ni (observations), αi (parameters)

. A simple approach in topic modeling

. A practical (sparse) scenario: n1 = 10000, n2 = 10, n3 = n4 = · · · = 0,

αi = 0.1 for all i

. V is neither convex nor concave ⇒ existing theory does not apply.

• For MLD,

W(y) = C′ − ∑_{i=1}^{d} (n_i + α_i) y_i + ( ∑_{i=1}^{d+1} (n_i + α_i) ) h^*(y)

(i) W is convex with Lipschitz gradient

. O(ε^{−2} d^{2} R_0) convergence in KL, for any setting of the n_i's and α_i's!

(ii) Unconstrained W: Efficient algorithm without projections
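Differentiating the formula for W gives a closed-form gradient. A minimal sketch (not from the slides), reusing grad_h_star and discretized_mld from the sketches above, with hypothetical counts n and parameters alpha:

```python
import numpy as np

# Hypothetical sparse counts and Dirichlet parameters (d + 1 = 11 categories).
n = np.array([10000, 10, 10] + [0] * 8, dtype=float)
alpha = np.full(11, 0.1)
c = n + alpha      # c_i = n_i + alpha_i
S = c.sum()        # sum_{i=1}^{d+1} (n_i + alpha_i)

def grad_W_dirichlet(y):
    # grad W(y)_i = -(n_i + alpha_i) + S * (grad h*)(y)_i, for i = 1, ..., d.
    return -c[:-1] + S * grad_h_star(y)

# Example usage (step size and iteration count are arbitrary):
# samples = discretized_mld(np.zeros(10), grad_W_dirichlet, grad_h_star, beta=1e-4, n_iters=100_000)
```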


MLD for Dirichlet Posterior

• Synthetic setup: 11-dimensional Dirichlet posterior on the simplex

. Observations n_1 = 10000, n_2 = n_3 = 10, n_4 = n_5 = · · · = n_11 = 0.

. Parameters αi = 0.1 for all i

. Comparing against SGRLD [Patterson and Teh, 2013]


Stochastic MLD


Stochastic first-order sampling

• Evaluating ∇V is expensive in contemporary ML

• Simple solution: Use stochastic gradient estimates ∇̂V.

. Already present in the seminal work of [Welling and Teh, 2011].

. Wide empirical success.

. Some theory for unconstrained log-concave distributions.


Stochastic Mirrored Langevin Dynamics

• Assume decomposability: V = ∑_{i=1}^{N} V_i.

. Stochastic gradient ∇̂V = (N/b) ∑_{i∈B} ∇V_i for a random mini-batch B, |B| = b.

Q: An MLD with only stochastic gradients? Convergence?


Stochastic Mirrored Langevin Dynamics

• Assume decomposability: V = ∑_{i=1}^{N} V_i.

. Stochastic gradient ∇̂V = (N/b) ∑_{i∈B} ∇V_i for a random mini-batch B, |B| = b.

Lemma (SMLD)

Assume that h is 1-strongly convex. For i = 1, 2, ..., N, let W_i be such that

e^{−N W_i} = ∇h_# ( e^{−N V_i} / ∫ e^{−N V_i} ).

Define W ≔ ∑_{i=1}^{N} W_i and ∇̂W ≔ (N/b) ∑_{i∈B} ∇W_i. Then:

1. Primal decomposability implies dual decomposability: There is a constant C such that e^{−(W + C)} = ∇h_# e^{−V}.

2. For each i, the gradient ∇W_i depends only on ∇V_i and the mirror map h.

3. The gradient estimate is unbiased: E[∇̂W] = ∇W.

4. The dual stochastic gradient is never less accurate:

E‖∇̂W − ∇W‖² ≤ E‖∇̂V − ∇V‖².
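A minimal sketch of one SMLD step with such a mini-batch estimate (not from the slides; grad_W_i is a user-supplied callable returning ∇W_i, and the batch size and step size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def smld_step(y, grad_W_i, N, b, beta):
    """One SMLD step: a mirror-space Langevin update driven by a mini-batch gradient.

    grad_W_i(i, y) returns grad W_i(y); the estimate (N/b) * sum_{i in B} grad W_i(y) is unbiased."""
    batch = rng.choice(N, size=b, replace=False)
    g_hat = (N / b) * sum(grad_W_i(i, y) for i in batch)
    return y - beta * g_hat + np.sqrt(2 * beta) * rng.standard_normal(y.shape)
```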


Stochastic Mirrored Langevin Dynamics

Theorem (Convergence of SMLD)

Let e^{−V} be the target distribution and h a 1-strongly convex mirror map. Let σ² ≔ E‖∇̂V − ∇V‖² be the variance of the stochastic gradient of V. Suppose that LI ⪰ ∇²W ⪰ 0. Then applying SMLD with constant step size β_t = β yields:

D(x_T ‖ e^{−V}) ≤ √( 2 W_2²(y_0, e^{−W}) (Ld + σ²) / T ) = O( √( (Ld + σ²) / T ) ),   (1)

provided that β ≤ min{ [ 2T W_2²(y_0, e^{−W}) (Ld + σ²) ]^{−1/2}, 1/L }.

• For Dirichlet posteriors,

. W is strictly convex.

. L = ∑_{ℓ=1}^{d+1} (n_ℓ + α_ℓ) = O(N + d).

. simple formula for Wi’s.

. first non-asymptotic rate for first-order sampling.


SMLD for Latent Dirichlet Allocation (LDA)

• LDA: A key framework in topic modeling.

. Involves multiple Dirichlet posteriors (1 for each topic).

. Wikipedia corpus with 100000 documents, 100 topics.

Caveat: Need to use e^y ≈ max{0, 1 + y} in large-scale applications.


Summary and follow-ups

• A new mirror descent theory for first-order sampling

. Take any unconstrained algorithm for e−W to improve efficiency

Metropolis-Hastings adjusted Langevin Dynamics [Dwivedi et al., 2018]

⇒ Linear rate for e^{−V} with ∇²V ⪰ mI, constrained or not.

. Nontrivial extension of ideas & techniques from mirror descent

. Stochastic version works well on real datasets

• We operate with the non-ideal ℓ_2-norm for the entropic mirror map

. Does not really concern practitioners, but unsatisfactory in theory.

. Future work: Provide the ℓ_1 (or ℓ_p) counterpart.

• Need step-size rules that come directly out of the theory

. Working on it!


Application of SGLD to GAN training


Before we begin, let us review a key concept: Minimax problems

• Optimization template:

min_{x∈X} max_{y∈Y} f(x, y) ≔ f(x^*, y^*),   X ⊂ R^m, Y ⊂ R^n.

• Sion’s Minimax Theorem [Sion, 1958]: Under mild conditions,

f convex in x and concave in y ⇒ min_{x∈X} max_{y∈Y} f(x, y) = max_{y∈Y} min_{x∈X} f(x, y).

• Numerous applications.

. Lagrange dual for constrained optimization

. Robust optimization

. Image denoising

. Game theory

. etc.


Saddle-Point Problems (cont.)

• Optimization template:

min_{x∈X} max_{y∈Y} f(x, y) ≔ f(x^*, y^*),   X ⊂ R^m, Y ⊂ R^n.

• Algorithms for finding (x^*, y^*) when f convex in x and concave in y.

. Prox methods: [Nemirovskii and Yudin, 1983, Nemirovski, 2004, Beck and Teboulle, 2003]

. Chambolle-Pock [Chambolle and Pock, 2011]

. Fast rates assuming efficiently solvable iterates.

• In short, for such optimization,

. The problem is well-defined (saddle-points exist).

. Efficient algorithms with rigorous convergence rates.


Open problems: Non-convex/non-concave Saddle-Point Problems

• Optimization template:

min_{x∈X} max_{y∈Y} f(x, y) ≔ f(x^*, y^*),   X ⊂ R^m, Y ⊂ R^n.

• What if f is NOT convex in x and NOT concave in y?

. Does a (local) saddle-point exist? Under what conditions??

. Algorithms with convergence rates???


Main Motivation: Generative Adversarial Networks (GANs)

• Objective of Wasserstein GANs:

min_{θ∈Θ} max_{w∈W} E_{X∼P_real}[D_w(X)] − E_{X∼P_fake^θ}[D_w(X)].

. θ: a generator neural net, generating fake data.

. w: a discriminator neural net.

. Dw: output of discriminator at w, highly non-convex/non-concave.

• Theoretical challenge:

. A saddle point might NOT exist [Dasgupta and Maskin, 1986] (non-existence of pure strategy equilibrium).

. No provably convergent algorithm.

• Practical challenge: An empirically working algorithm.

. SGD does NOT work.

. Adaptive methods (Adam, RMSProp, ...) are highly unstable and require tuning.


Goal of the Lecture: Generative Adversarial Networks (GANs)

• The Langevin-based approach:

. Inspired by game theory, we lift the optimization problem to infinite dimensions:

pure strategy equilibria → mixed strategy equilibria

finite dimension, non-convex/non-concave → infinite dimension, bi-affine

. Provable, infinite-dimensional algorithms on the space of probability measures.

. A principled guideline for obtaining stable but inefficient algorithms.

. A simple heuristic for obtaining efficient, and empirically stable algorithms.


NOT our goal today: Game-Theoretical Learning

• In game-theoretical learning, we care about

. Efficiently finding the NE.

. Cumulative loss/regret for each player; i.e., you cannot let one player suffer large losses just to find the NE.

. Algorithmic challenge to adapt to the opponent (adversarial, collaborative, ...); best known results in [Kangarshahi et al., 2018].

• In training GANs, we only care about finding the mixed NE.

. The players (neural nets) are artificial; we do not care about their loss.

. YOU design the algorithm; the environment is known.


Preliminary:

Bi-affine games with finite strategies


Bi-affine two-player games with finite strategies: The setting

• Bi-affine two-player games:

min_{x∈Δ_m} max_{y∈Δ_n} ⟨y, a⟩ − ⟨x, Ay⟩ ≔ ⟨y^*, a⟩ − ⟨x^*, Ay^*⟩   (⋆)

. ∆d: d-simplex, mixed strategies over d pure strategies.

. a,A: cost vector/matrix.

. Goal: find a mixed Nash Equilibrium (NE) (x∗, y∗).

• Existence of NE: von Neumann’s minimax theorem [Neumann, 1928].

. (⋆) is a well-defined optimization problem.

• Assumptions.

. A is expensive to evaluate.

. (Stochastic) gradients a − A^⊤x and −Ay are cheap.


Bi-affine two-player games with finite strategies: algorithms

• Bi-affine two-player games:

minx∈∆m

maxy∈∆n

〈y, a〉 − 〈x,Ay〉 B 〈y∗, a〉 − 〈x∗, Ay∗〉

. ∆d: d-simplex, mixed strategies over d pure strategies

. a,A: cost vector/matrix.

• Fundamental building blocks: Entropic Mirror Descent [Beck and Teboulle, 2003].

. φ(z) = ∑_{i=1}^{d} z_i log z_i,   φ^*(z) = log ∑_{i=1}^{d} e^{z_i}.

. MD iterates:

z′ = MD_η(z, b) ≡ z′ = ∇φ^*( ∇φ(z) − ηb ) ≡ z′_i = z_i e^{−η b_i} / ∑_{i=1}^{d} z_i e^{−η b_i}.
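The MD update above is one line of code. A minimal sketch (not from the slides; the log-sum-exp shift is just a numerical-stability choice):

```python
import numpy as np

def md_step(z, b, eta):
    # Entropic mirror descent on the simplex: z'_i is proportional to z_i * exp(-eta * b_i).
    logits = np.log(z) - eta * b
    logits -= logits.max()          # stabilize before exponentiating
    z_new = np.exp(logits)
    return z_new / z_new.sum()
```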


Bi-affine two-player games with finite strategies: algorithms

• Algorithm for finding NE: Entropic Mirror Descent and variants

. Mirror descent [Beck and Teboulle, 2003]

x_{t+1} = MD_η(x_t, −A y_t),   y_{t+1} = MD_η(y_t, −a + A^⊤ x_t)

⇒ (x_T, y_T) is an O(T^{−1/2})-NE.

. Mirror-Prox [Nemirovski, 2004]

x̄_t = MD_η(x_t, −A y_t),   x_{t+1} = MD_η(x_t, −A ȳ_t)

ȳ_t = MD_η(y_t, −a + A^⊤ x_t),   y_{t+1} = MD_η(y_t, −a + A^⊤ x̄_t)

⇒ (x_T, y_T) is an O(T^{−1})-NE.
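A minimal sketch of the Mirror-Prox loop for a small random game (not from the slides), reusing md_step above; the game size, step size, and iteration count are arbitrary, and the gradients follow the setting slide (−Ay for the x-player, a − A^⊤x for the y-player):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_dim = 5, 4
A = rng.standard_normal((m, n_dim))
a = rng.standard_normal(n_dim)

x = np.full(m, 1 / m)            # uniform mixed strategies as initialization
y = np.full(n_dim, 1 / n_dim)
x_avg, y_avg = np.zeros(m), np.zeros(n_dim)
eta, T = 0.1, 5000

for _ in range(T):
    # Extrapolate with the current gradients, then update with gradients at the extrapolated point.
    x_bar = md_step(x, -A @ y, eta)
    y_bar = md_step(y, -(a - A.T @ x), eta)
    x = md_step(x, -A @ y_bar, eta)
    y = md_step(y, -(a - A.T @ x_bar), eta)
    x_avg += x_bar / T
    y_avg += y_bar / T

# Duality gap of the averaged iterates: max_y f(x_avg, y) - min_x f(x, y_avg).
gap = (a - A.T @ x_avg).max() - (a @ y_avg - (A @ y_avg).max())
print(gap)   # should be small for the averaged iterates
```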


Mixed NE of GANs:

Bi-affine games with infinite strategies


Wasserstein GANs, mixed Nash Equilibrium

• Objective of Wasserstein GANs, pure strategy equilibrium:

min_{θ∈Θ} max_{w∈W} E_{X∼P_real}[D_w(X)] − E_{X∼P_fake^θ}[D_w(X)].

• Objective of Wasserstein GANs, mixed strategy equilibrium:

min_{ν∈M(Θ)} max_{µ∈M(W)} E_{w∼µ} E_{X∼P_real}[D_w(X)] − E_{w∼µ} E_{θ∼ν} E_{X∼P_fake^θ}[D_w(X)],

where M(Z) ≔ {all (regular) probability measures on Z}.

• Existence of NE (ν^*, µ^*): Glicksberg’s existence theorem [Glicksberg, 1952].

. ν^* ≠ δ_{θ^*}, µ^* ≠ δ_{w^*}

. Optimal strategies are indeed mixed!


Wasserstein GANs, mixed Nash Equilibrium ≡ bi-affine games

• Objective of Wasserstein GANs, mixed strategy equilibrium:

min_{ν∈M(Θ)} max_{µ∈M(W)} E_{w∼µ} E_{X∼P_real}[D_w(X)] − E_{w∼µ} E_{θ∼ν} E_{X∼P_fake^θ}[D_w(X)].   (⋆)

• Define

. (Riesz representation) ⟨µ, h⟩ ≔ ∫ h dµ for a measure µ and function h.

. Function g(w) ≔ E_{X∼P_real}[D_w(X)]

. Linear operator G and its adjoint G^†:

G : M(Θ) → a function on W,   G^† : M(W) → a function on Θ,

(Gν)(w) ≔ E_{θ∼ν} E_{X∼P_fake^θ}[D_w(X)],   (G^†µ)(θ) ≔ E_{w∼µ} E_{X∼P_fake^θ}[D_w(X)]

• Rewrite:

(⋆) ⇕ min_{ν∈M(Θ)} max_{µ∈M(W)} ⟨µ, g⟩ − ⟨µ, Gν⟩

... a bi-affine game... in infinite dimension!!


Wasserstein GANs, mixed Nash Equilibrium ≡ bi-affine games

• Bi-affine two-player game, finite strategies:

min_{x∈Δ_m} max_{y∈Δ_n} ⟨y, a⟩ − ⟨x, Ay⟩.

. φ(z) = ∑_{i=1}^{d} z_i log z_i,   φ^*(z) = log ∑_{i=1}^{d} e^{z_i}.

. Entropic MD iterates:

z′ = MD_η(z, b) ≡ z′ = ∇φ^*( ∇φ(z) − ηb ) ≡ z′_i = z_i e^{−η b_i} / ∑_{i=1}^{d} z_i e^{−η b_i}.

. Entropic MD ⇒ O(T^{−1/2})-NE, MP ⇒ O(T^{−1})-NE.

• Bi-affine two-player game, continuously infinite strategies:

min_{ν∈M(Θ)} max_{µ∈M(W)} ⟨µ, g⟩ − ⟨µ, Gν⟩.

. φ(·) = ??, φ^*(·) = ?? (negative Shannon entropy, maybe?)

. Entropic MD iterates in infinite dimension??

. Entropic MD in infinite dimension ⇒ O(T−1/2)-NE?? MP ⇒ O(T−1)-NE??


Entropic Mirror Descent Iterates in Infinite Dimension

• Negative Shannon entropy and its Fenchel dual: (dz := Lebesgue measure)

. Φ(µ) = ∫ dµ log(dµ/dz).

. Φ*(h) = log ∫ e^h dz.

. dΦ and dΦ*: Fréchet derivatives.¹

Theorem (Infinite-Dimensional Mirror Descent, informal)

For a learning rate η, a probability measure µ, and an arbitrary function h, we can equivalently define

  µ⁺ = MD_η(µ, h)  ≡  µ⁺ = dΦ*(dΦ(µ) − ηh)  ≡  dµ⁺ = e^{−ηh} dµ / ∫ e^{−ηh} dµ.

Moreover, most of the essential ingredients in the analysis of finite-dimensional prox methods generalize to infinite dimension.

In particular, the three-point identity holds: for all µ, µ′, µ′′,

  〈µ′′ − µ, dΦ(µ′) − dΦ(µ)〉 = DΦ(µ, µ′) + DΦ(µ′′, µ) − DΦ(µ′′, µ′),

where DΦ is the Bregman divergence associated with Φ. (A short derivation of the update follows this slide.)

¹Under mild regularity conditions on the measure/function.

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 46/ 74
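For the record, here is a short formal derivation of the closed-form update in the theorem above (a sketch that treats dΦ and dΦ* as the pointwise expressions below and ignores the regularity conditions from the footnote):

  d\Phi(\mu) = \log\frac{d\mu}{dz} + 1,
  \qquad
  d\Phi^{\star}(h) = \frac{e^{h}\,dz}{\int e^{h}\,dz},

  \mu^{+} = d\Phi^{\star}\bigl(d\Phi(\mu) - \eta h\bigr)
  \;\Longrightarrow\;
  d\mu^{+}
  = \frac{\exp\bigl(\log\tfrac{d\mu}{dz} + 1 - \eta h\bigr)\,dz}
         {\int \exp\bigl(\log\tfrac{d\mu}{dz} + 1 - \eta h\bigr)\,dz}
  = \frac{e^{-\eta h}\,d\mu}{\int e^{-\eta h}\,d\mu},

since the constant factor e^{1} cancels between numerator and denominator.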


Entropic Mirror Descent/Mirror-Prox in Infinite Dimension: Rates

• Algorithms:

Theorem (Convergence Rates)

Let Φ(µ) = ∫ dµ log(dµ/dz). Then

1. Entropic MD ⇒ O(T^{−1/2})-NE, MP ⇒ O(T^{−1})-NE.

2. If only stochastic derivatives (−G†µ and g − Gν) are available, then both MD and MP ⇒ O(T^{−1/2})-NE in expectation.

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 47/ 74


Summary so far

• Mixed NE of GANs ≡ bi-affine two-player game with continuously infinite strategies:

  min_{ν∈M(Θ)}  max_{µ∈M(W)}  〈µ, g〉 − 〈µ, Gν〉

• Can find the NE of GANs via entropic MD and MP with rigorous rates, provided...

. We have access to the derivatives −G†µ and g − Gν.

. We can update probability measures: µ⁺ = MD_η(µ, derivative).

. ...but we can do neither.

• Solution to both: Sampling.

. Empirical estimate for stochastic derivatives.

. Draw samples from the updated probability measure via Langevin dynamics.

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 48/ 74


From Theory to Practice:

Sampling via Langevin Dynamics

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 49/ 74


Reduction from Entropic MD to Sampling

• Goal of the section:

. A principled reduction from entropic MD to sampling.²

. A heuristic for reducing computational/memory burdens.

• We set η = 1 since it plays no role in the following derivation.

²Entropic Mirror-Prox can be similarly reduced.

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 50/ 74


Reduction Step 1: Reformulating MD Iterates

Lemma (Iterates of Entropic MD)

Entropic MD (with η = 1) is equivalent to:

  dµ_T = exp{(T−1)g − G ∑_{s=1}^{T−1} ν_s} dw  /  ∫ exp{(T−1)g − G ∑_{s=1}^{T−1} ν_s} dw

and

  dν_T = exp{G† ∑_{s=1}^{T−1} µ_s} dθ  /  ∫ exp{G† ∑_{s=1}^{T−1} µ_s} dθ.

Proof. A well-known result in finite dimension; the proof holds verbatim given our infinite-dimensional framework for MD (an unrolling sketch follows this slide). □

• Implication:

  Wanting to sample from µ_T / ν_T  ≡  needing (stochastic) derivatives of the whole history.

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 51/ 74
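A quick sketch of why the lemma holds for µ_T (the ν_T case is symmetric): with η = 1, the max player's update from the previous theorem is dµ_{t+1} ∝ e^{g − Gν_t} dµ_t, and assuming the initial iterate µ_1 is proportional to the Lebesgue measure dw (an assumption consistent with the lemma's statement), unrolling gives

  d\mu_{T}
  \;\propto\; e^{\,g - G\nu_{T-1}}\, d\mu_{T-1}
  \;\propto\; e^{\,g - G\nu_{T-1}}\, e^{\,g - G\nu_{T-2}}\, d\mu_{T-2}
  \;\propto\; \cdots
  \;\propto\; \exp\Bigl\{(T-1)\,g \;-\; G\!\sum_{s=1}^{T-1}\nu_s\Bigr\}\, dw .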


Reduction Step 2: Stochastic Derivatives based on Samples

• Empirical approximation for stochastic derivatives (a Monte Carlo sketch follows this slide)

. Suppose we can acquire samples {θ_i}_{i=1}^n ∼ ν_1:

  (Gν_1)(w) = E_{θ∼ν_1} E_{X∼P^θ_fake}[D_w(X)]  ≃  (1/n) ∑_{i=1}^n E_{X∼P^{θ_i}_fake}[D_w(X)]  =: (Ĝν_1)(w)

. Similarly, for {w_i}_{i=1}^n ∼ µ_1:

  (G†µ_1)(θ) = E_{w∼µ_1} E_{X∼P^θ_fake}[D_w(X)]  ≃  (1/n) ∑_{i=1}^n E_{X∼P^θ_fake}[D_{w_i}(X)]  =: (Ĝ†µ_1)(θ)

• Key question:

  Given samples from (µ_1, ν_1), how do we draw samples from (µ_2, ν_2)?

. If we can do this, then applying recursively, we can get samples from (µ_T, ν_T).

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 52/ 74
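As a concrete toy illustration of these empirical estimates, the sketch below takes a one-dimensional "generator" P^θ_fake = N(θ, 1) and a discriminator D_w(x) = tanh(w·x); both model choices and the sample sizes are illustrative assumptions rather than anything specified on the slides.

import numpy as np

rng = np.random.default_rng(1)

def discriminator(w, x):
    # Toy discriminator D_w(x); any parametric family would do here.
    return np.tanh(w * x)

def sample_fake(theta, n_x):
    # Toy generator: P^theta_fake = N(theta, 1).
    return rng.normal(loc=theta, scale=1.0, size=n_x)

def G_hat(thetas, w, n_x=256):
    # Empirical estimate of (G nu_1)(w) from samples {theta_i} ~ nu_1.
    inner = [discriminator(w, sample_fake(th, n_x)).mean() for th in thetas]
    return float(np.mean(inner))

def G_adj_hat(ws, theta, n_x=256):
    # Empirical estimate of (G† mu_1)(theta) from samples {w_i} ~ mu_1.
    x = sample_fake(theta, n_x)
    return float(np.mean([discriminator(w, x).mean() for w in ws]))

thetas = rng.normal(size=100)   # pretend these were drawn from nu_1
ws = rng.normal(size=100)       # pretend these were drawn from mu_1
print(G_hat(thetas, w=0.5), G_adj_hat(ws, theta=0.5))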


Reduction Step 3: Stochastic Gradient Langevin Dynamics (SGLD)

• Given any distribution dµ = e^{−h} / ∫ e^{−h}, the SGLD iterates are (a toy sketch follows this slide)

  z_{k+1} = z_k − γ ∇̂h(z_k) + √(2γε) N(0, I)

. γ: step-size, ∇̂: stochastic gradient, ε: thermal noise.

. z_k converges to samples from µ [Welling and Teh, 2011].

. Used in deep learning, empirically successful [Chaudhari et al., 2017, Dziugaite and Roy, 2018, Chaudhari et al., 2018].

• Let h ← −Ĝ†µ_1; then ν_2 ∝ e^{−h}.

. SGLD requires stochastic gradients of h. We again use an empirical approximation:

  ∇_θ h = ∇_θ ( (1/n) ∑_{i=1}^n E_{X∼P^θ_fake}[D_{w_i}(X)] )  ≃  ∇_θ ( (1/(nn′)) ∑_{i=1}^n ∑_{j=1}^{n′} D_{w_i}(X_j) )  =: ∇̂_θ h,

  where X_j ∼ P^θ_fake for j = 1, 2, ..., n′.

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 53/ 74
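Below is a minimal SGLD sketch on a toy target dµ ∝ e^{−h} with h(z) = ‖z‖²/2, i.e. a standard Gaussian when ε = 1; the noisy gradient stands in for ∇̂h, and the step size, noise scale, and burn-in are illustrative choices rather than values from the slides or from [Welling and Teh, 2011].

import numpy as np

rng = np.random.default_rng(2)
d, gamma, eps = 2, 1e-2, 1.0
n_steps, burn_in = 20000, 5000

def grad_h_noisy(z):
    # Target: dmu ∝ exp(-h) with h(z) = ||z||^2 / 2, so ∇h(z) = z.
    # The Gaussian perturbation mimics a stochastic (mini-batch) gradient.
    return z + 0.1 * rng.normal(size=z.shape)

z = np.zeros(d)
samples = []
for k in range(n_steps):
    z = z - gamma * grad_h_noisy(z) + np.sqrt(2 * gamma * eps) * rng.normal(size=d)
    if k >= burn_in:
        samples.append(z.copy())

samples = np.array(samples)
# For small gamma the chain is approximately stationary at N(0, eps * I); check the moments.
print("mean ≈", samples.mean(axis=0), " var ≈", samples.var(axis=0))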


Implementable Entropic Mirror Descent

• Combining Steps 1-3, we get an implementable algorithm [Hsieh et al., 2019].

• Caveat: The algorithm is not practical since

. The algorithm is complicated.

. The algorithm takes O(T) memory and O(T²) per-iteration complexity, since we need to memorize the entire history.

• Key question:

How to reduce the complexity while maintaining the sampling perspective?

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 54/ 74


? Efficient Entropic Mirror Descent

• Our proposal: summarize the samples by their weighted mean (a sketch follows this slide):

  {θ_i}_{i=1}^n  ↦  θ̄ := (1/∑_{i=1}^n α_i) ∑_{i=1}^n α_i θ_i   (≃ E_{θ∼ν}[θ]),

  {w_i}_{i=1}^n  ↦  w̄ := (1/∑_{i=1}^n α′_i) ∑_{i=1}^n α′_i w_i   (≃ E_{w∼µ}[w]).

• Resulting algorithm: Mirror-GAN

. Linear time.

. Constant memory.

. Complexity ≃ SGD.

Remark: In principle, we can use any first-order algorithm to compute the mean, not necessarily SGLD.

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 55/ 74
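A minimal sketch of the summarization step, assuming exponentially decaying weights α_i (the slides leave the weights unspecified, so this particular choice is an assumption) applied to the iterates produced by an SGLD-type sampler:

import numpy as np

def weighted_mean(samples, decay=0.9):
    # Summarize SGLD iterates {theta_i} by a weighted mean with alpha_i = decay**(n-1-i),
    # so that later (better-mixed) samples receive larger weights.
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    alpha = decay ** np.arange(n - 1, -1, -1)
    return (alpha[:, None] * samples).sum(axis=0) / alpha.sum()

# Usage: feed in the iterates produced by an SGLD run (e.g., the sketch a few slides back).
iterates = np.random.default_rng(3).normal(size=(500, 2))
theta_bar = weighted_mean(iterates)
print(theta_bar)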


? Efficient Entropic Mirror Descent: Algorithm [Hsieh et al., 2018b]

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 56/ 74


Experiments

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 57/ 74


Synthetic Data: 8 Gaussian mixtures

• Observations:

. Mirror-/Mirror-Prox-GAN capture the modes and variances more accurately.

. Mirror-/Mirror-Prox-GAN never overfit (no mode collapse), in contrast to Adam.

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 58/ 74


Synthetic Data: 25 Gaussian mixtures

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 59/ 74


Synthetic Data: 25 Gaussian mixtures (cont.)

• Mirror-/Mirror-Prox-GAN are faster.

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 60/ 74


Real Data

• Better image quality.

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 61/ 74


Summary and potential follow-ups

• Re-think the framework for GANs.

. From pure strategy equilibria to mixed Nash Equilibria.

. Mixed Nash Equilibria of GANs are but bi-affine two-player games!!!

. Infinite-dimensional mirror descent/mirror-prox provably find the mixed NE.

• Algorithmic approach.

. Sampling for stochastic derivatives and stochastic gradients.

. Summarizing samples by mean; efficient algorithms.

• Future work.

. We have only considered WGANs, but our framework applies to any GAN.

. More generally, we can apply our framework to any min-max objective (e.g., adversarial training, robust reinforcement learning).

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 62/ 74


Langevin dynamics for non-convex optimization

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 63/ 74


Need for noise in non-convex optimization

• Standard unconstrained optimization problem:

  max_{x∈R^d} V(x)   (2)

• Global optimization of non-convex objective:

. Escape saddle-points

. Escape local maxima

• Proposed solution: add noise to the iterates → Langevin Dynamics!

• Studies of LD for non-convex optimization [Xu et al., 2018, Gao et al., 2018]

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 64/ 74


A sampling perspective of optimization

• Standard unconstrained optimization problem:

  max_{x∈R^d} f(x)   (3)

• As with GANs, (3) can be recast as

  max_{µ∈M(R^d)} E_{x∼µ}[f(x)]   (4)

. The exact solution µ* = δ_{x*} is a Dirac delta δ at the solution x* of (3).

• Add an entropy regularizer:

  max_{µ∈M(R^d)} E_{x∼µ}[f(x)] + αH(µ)   (5)

  where α ∈ R_+ and H(µ) ≡ E_{x∼µ}[− log µ(x)].

. The exact solution becomes µ*_α(x) ∝ e^{f(x)/α} dx → δ_{x*} as α → 0  (a short derivation follows this slide).

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 65/ 74
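A one-line justification of the last claim (a sketch that treats µ as a density and ignores integrability issues): the objective of (5) can be rewritten as a KL divergence,

  \mathbb{E}_{x\sim\mu}[f(x)] + \alpha H(\mu)
  \;=\; -\,\alpha \int \mu(x)\,\log\frac{\mu(x)}{e^{f(x)/\alpha}}\,dx
  \;=\; -\,\alpha\,\mathrm{KL}\!\Bigl(\mu \,\Big\|\, \tfrac{1}{Z_\alpha}\,e^{f/\alpha}\Bigr) + \alpha \log Z_\alpha,
  \qquad Z_\alpha := \int e^{f(x)/\alpha}\,dx,

which is maximized exactly when the KL term vanishes, i.e. when µ = µ*_α ∝ e^{f/α}; as α → 0 this Gibbs measure concentrates on the maximizer x*.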


Preconditioned SGLD

• Sample from µ*_α using SGLD.

• Speed up sampling using a preconditioning matrix G(x) [Patterson and Teh, 2013]:

  x_{k+1} = x_k − β_k ( (1/α) G(x_k) ∇f(x_k) + Γ(x_k) ) + √(2β_k) G(x_k)^{1/2} ξ_k,   (6)

  where Γ(x)_i = ∑_{j=1}^d ∂G(x)_{ij}/∂x_j.

• A good choice of preconditioning matrix: the inverse Fisher Information Matrix.

• Diagonal approximation of the inverse FIM: the RMSProp preconditioner (a sketch follows this slide)

  G(x_{k+1}) = diag( 1 ⊘ (λ·1 + √g(x_{k+1})) ),

  g(x_{k+1}) = γ g(x_k) + ((1−γ)/α²) ∇f(x_k) ⊙ ∇f(x_k),

  where γ ∈ [0, 1], and the operators ⊙ and ⊘ denote the element-wise matrix product and division, respectively.

• When γ is small, the Γ(x_k) term in (6) can be safely ignored.

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 66/ 74
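Here is a minimal sketch of the diagonal (RMSProp-style) preconditioned update on a toy objective. It drops the Γ(x) term as the slide suggests, writes the drift as ascent on f since the target from the previous slide is µ*_α ∝ e^{f/α}, and all constants (α, λ, γ, β) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(4)
d = 10
alpha, lam, decay, beta = 0.1, 1e-5, 0.99, 1e-3

def grad_f(x):
    # Toy objective f(x) = -||x - 1||^2 / 2 to be maximized, so ∇f(x) = 1 - x.
    return 1.0 - x

x = np.zeros(d)
g = np.zeros(d)                        # running average of squared gradients
for k in range(5000):
    df = grad_f(x)
    g = decay * g + (1.0 - decay) / alpha**2 * df * df
    G = 1.0 / (lam + np.sqrt(g))       # diagonal RMSProp preconditioner
    # Target is mu*_alpha ∝ exp(f/alpha): the drift ascends f; the Γ(x) term is dropped.
    x = x + beta * (G * df / alpha) + np.sqrt(2.0 * beta * G) * rng.normal(size=d)

print("iterate mean coordinate ≈", x.mean(), "(the maximizer of f is the all-ones vector)")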


SGLD algorithm for training Deep Neural Networks [Li et al., 2016]

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 67/ 74


Reinforcement Learning: Setup

• Parametrized deterministic policy: π_θ : S → A.

• Goal:

  max_θ J(θ) := E_{s_0, r_1, s_1, ...} [ ∑_{t=1}^∞ γ^{t−1} r_t  |  a_i = π_θ(s_i), i = 1, 2, ... ]

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 68/ 74


SGLD for training a Reinforcement Learning agent

• Policy Gradient Theorem:

  ∇J(θ) = E_{s∼d^{π_θ}} [ ∇_θ π_θ(s) ∇_a Q^{π_θ}(s, a) |_{a=π_θ(s)} ],

. Q^{π_θ}(s, a) = E_{π_θ} [ ∑_{k=0}^∞ γ^k r_{t+k+1} | S_t = s, A_t = a ],

. d^{π_θ} is the stationary state distribution under π_θ.

. Can (almost) be treated as a standard first-order optimization problem.

. Non-convex objective: need for exploration.

• Other approaches [Lillicrap et al., 2015, Plappert et al., 2017]:

. Add noise during the episode-collection phase.

• SGLD: directly adds noise when updating the parameters (a sketch follows this slide).

. Annealing temperature: α_k ∝ k^{−r} (r = 0.25).

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 69/ 74
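A minimal sketch of the SGLD-style parameter update with the annealed temperature α_k ∝ k^{−r}; here policy_gradient is a hypothetical stand-in for whatever policy-gradient estimate the agent computes from collected episodes, and the toy surrogate objective, step size, and horizon are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(5)

def policy_gradient(theta):
    # Hypothetical placeholder for a policy-gradient estimate of ∇J(theta);
    # here a noisy gradient of the toy surrogate J(theta) = -||theta||^2 / 2.
    return -theta + 0.05 * rng.normal(size=theta.shape)

theta = rng.normal(size=64)           # policy parameters
beta, r = 1e-2, 0.25                  # step size and annealing exponent
for k in range(1, 10001):
    alpha_k = k ** (-r)               # annealed temperature, alpha_k ∝ k^{-r}
    grad = policy_gradient(theta)
    # Gradient ascent on J plus Langevin exploration noise that cools over time.
    theta = theta + beta * grad + np.sqrt(2 * beta * alpha_k) * rng.normal(size=theta.shape)

print("||theta|| after annealed SGLD ≈", np.linalg.norm(theta))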


Empirical comparison between various exploration methods

• A comparison of exploration strategies with the TD3 algorithm [Fujimoto et al., 2018]

. Action: adds noise to the actions during the episode-collection phase.

. Parameter: adds noise to the policy parameters during the episode-collection phase.

. SGLD: adds noise during the parameter update.

. SGLD+Action: adds noise both to the actions during episode collection and during the parameter-update phase.

. SGLD+Parameter: adds noise both to the parameters during episode collection and during the parameter-update phase.

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 70/ 74


Experiment on the Walker2d MuJoCo environment

• Expected reward averaged over 10 episodes

• Shaded area: half standard deviation

• For each algorithm, we optimize the hyperparameters, i.e.:

. initial step-size

. noise level

• Improved efficiency of Langevin methods

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 71/ 74


Experiments on several MuJoCo environments

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 72/ 74


Generalization across various environments

• Generalization: Fixed set of hyper-parameters across all environments

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 73/ 74


That’s all folks!!!

• https://lions.epfl.ch/publications

• https://arxiv.org/abs/1802.10174

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 74/ 74


Beck, A. and Teboulle, M. (2003). Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167–175.

Bernton, E. (2018). Langevin Monte Carlo and JKO splitting. arXiv preprint arXiv:1802.08671.

Brosse, N., Durmus, A., Moulines, E., and Pereyra, M. (2017). Sampling from a log-concave distribution with compact support with proximal Langevin Monte Carlo. arXiv preprint arXiv:1705.08964.

Bubeck, S., Eldan, R., and Lehec, J. (2015). Sampling from a log-concave distribution with projected Langevin Monte Carlo. arXiv preprint arXiv:1507.02564.

Chambolle, A. and Pock, T. (2011). A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145.

Chaudhari, P., Choromanska, A., Soatto, S., LeCun, Y., Baldassi, C., Borgs, C., Chayes, J., Sagun, L., and Zecchina, R. (2017). Entropy-SGD: Biasing gradient descent into wide valleys. In International Conference on Learning Representations.

Chaudhari, P., Oberman, A., Osher, S., Soatto, S., and Carlier, G. (2018). Deep relaxation: partial differential equations for optimizing deep neural networks. Research in the Mathematical Sciences, 5(3):30.

Cheng, X. and Bartlett, P. (2017). Convergence of Langevin MCMC in KL-divergence. arXiv preprint arXiv:1705.09048.

Cheng, X., Chatterji, N. S., Bartlett, P. L., and Jordan, M. I. (2017). Underdamped Langevin MCMC: A non-asymptotic analysis. arXiv preprint arXiv:1707.03663.

Dalalyan, A. S. and Karagulyan, A. G. (2017). User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient. arXiv preprint arXiv:1710.00095.

Dasgupta, P. and Maskin, E. (1986). The existence of equilibrium in discontinuous economic games, I: Theory. The Review of Economic Studies, 53(1):1–26.

Durmus, A., Majewski, S., and Miasojedow, B. (2018). Analysis of Langevin Monte Carlo via convex optimization. arXiv preprint arXiv:1802.09188.

Dwivedi, R., Chen, Y., Wainwright, M. J., and Yu, B. (2018). Log-concave sampling: Metropolis-Hastings algorithms are fast! arXiv preprint arXiv:1801.02309.

Dziugaite, G. K. and Roy, D. (2018). Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors. In International Conference on Machine Learning, pages 1376–1385.

Fujimoto, S., van Hoof, H., and Meger, D. (2018). Addressing function approximation error in actor-critic methods. arXiv preprint arXiv:1802.09477.

Gao, X., Gurbuzbalaban, M., and Zhu, L. (2018). Breaking reversibility accelerates Langevin dynamics for global non-convex optimization. arXiv preprint arXiv:1812.07725.

Gelfand, A. E., Hills, S. E., Racine-Poon, A., and Smith, A. F. (1990). Illustration of Bayesian inference in normal data models using Gibbs sampling. Journal of the American Statistical Association, 85(412):972–985.

Glicksberg, I. L. (1952). A further generalization of the Kakutani fixed point theorem, with application to Nash equilibrium points. Proceedings of the American Mathematical Society, 3(1):170–174.

Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications.

Hsieh, Y.-P., Kavis, A., Rolland, P., and Cevher, V. (2018a). Mirrored Langevin dynamics. In Advances in Neural Information Processing Systems.

Hsieh, Y.-P., Liu, C., and Cevher, V. (2018b). Finding mixed Nash equilibria of generative adversarial networks. arXiv preprint arXiv:1811.02002.

Hsieh, Y.-P., Liu, C., and Cevher, V. (2019). Finding mixed Nash equilibria of generative adversarial networks. Submitted to the International Conference on Learning Representations; under review.

Jordan, R., Kinderlehrer, D., and Otto, F. (1998). The variational formulation of the Fokker–Planck equation. SIAM Journal on Mathematical Analysis, 29(1):1–17.

Kahn, H. and Harris, T. E. (1951). Estimation of particle transmission by random sampling. National Bureau of Standards Applied Mathematics Series, 12:27–30.

Kangarshahi*, E. A., Hsieh*, Y.-P., Sahin, M. F., and Cevher, V. (2018). Let's be honest: An optimal no-regret framework for zero-sum games. In Proceedings of the 35th International Conference on Machine Learning, pages 2488–2496.

Kemeny, J. G. and Snell, J. L. (1976). Markov Chains. Springer-Verlag, New York.

Li, C., Chen, C., Carlson, D., and Carin, L. (2016). Preconditioned stochastic gradient Langevin dynamics for deep neural networks. In Thirtieth AAAI Conference on Artificial Intelligence.

Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. (2015). Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.

Metropolis, N. and Ulam, S. (1949). The Monte Carlo method. Journal of the American Statistical Association, 44(247):335–341.

Neal, R. M. et al. (2011). MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2(11):2.

Nemirovski, A. (2004). Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1):229–251.

Nemirovskii, A. and Yudin, D. B. (1983). Problem Complexity and Method Efficiency in Optimization. Wiley.

Neumann, J. v. (1928). Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100(1):295–320.

Patterson, S. and Teh, Y. W. (2013). Stochastic gradient Riemannian Langevin dynamics on the probability simplex. In Advances in Neural Information Processing Systems, pages 3102–3110.

Plappert, M., Houthooft, R., Dhariwal, P., Sidor, S., Chen, R. Y., Chen, X., Asfour, T., Abbeel, P., and Andrychowicz, M. (2017). Parameter space noise for exploration. arXiv preprint arXiv:1706.01905.

Sion, M. (1958). On general minimax theorems. Pacific Journal of Mathematics, 8(1):171–176.

Welling, M. and Teh, Y. W. (2011). Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 681–688.

Wibisono, A. (2018). Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem. arXiv preprint arXiv:1802.08089.

Xu, P., Chen, J., Zou, D., and Gu, Q. (2018). Global convergence of Langevin dynamics based algorithms for nonconvex optimization. In Advances in Neural Information Processing Systems, pages 3122–3133.

Constrained sampling via Langevin dynamics | Volkan Cevher, https://lions.epfl.ch Slide 74/ 74