Numerical Optimization for Physicists and Statisticians (bayesint.github.io/isnet5/wild.pdf)



Numerical Optimization for Physicists and Statisticians

Stefan Wild

Mathematics and Computer Science Division, Argonne National Laboratory

Grateful to many physicist collaborators: A. Ekström, C. Forssén, G. Hagen, M. Hjorth-Jensen, G.R. Jansen, M. Kortelainen, T. Lesinski, A. Lovell, R. Machleidt, J. McDonnell, H. Nam, N. Michel, W. Nazarewicz, F.M. Nunes, E. Olsen, T. Papenbrock, P.-G. Reinhardt, N. Schunck, M. Stoitsov, J. Vary, K. Wendt, and others

November 7, 2017


Mathematical/Numerical Optimization for ISNET

Possible Topics Today

⋄ Optimization Basics

⋄ Optimization for Expensive Model Calibration

fast – limiting the number of expensive simulation evaluations

local – given enough resources, finds a point whose objective cannot be improved in a local neighborhood

derivative-free – useful in situations where derivatives are unavailable

⋄ Beyond χ2 Minimization

⋄ Stochastic Optimization

⋄ Bayesian Optimization


1. Mathematical/Numerical Nonlinear Optimization

Optimization is the “science of better”

Find parameters (controls) x = (x1, . . . , xn) in domain Ω to improve objective f

min f(x) : x ∈ Ω ⊆ Rn

⋄ (Unless Ω is very special) Need to evaluate f at many x to find a good x∗

⋄ Focus on local solutions: f(x∗) ≤ f(x) ∀x ∈ N (x∗) ∩ Ω

[Figure: contour plots of f over [−1, 1]², comparing unconstrained and constrained minimizers]

Implicitly assume that uncertainty is modeled through the constraints and objective(s)
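To make the setup concrete, here is a minimal sketch of posing and locally solving such a problem; the quadratic objective and the bound constraints Ω = [−1, 1]² are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Invented smooth objective; x* = (0.3, -0.1) is the local (here also global) minimizer
    return (x[0] - 0.3)**2 + 2.0 * (x[1] + 0.1)**2

bounds = [(-1.0, 1.0), (-1.0, 1.0)]   # Omega: finite bound constraints

res = minimize(f, x0=np.array([0.9, -0.9]), method="L-BFGS-B", bounds=bounds)
print(res.x, res.fun, res.nfev)       # minimizer, objective value, number of f evaluations
```

Even this smooth two-variable problem costs dozens of f evaluations; that count is the budget that matters when each evaluation is an expensive simulation.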


The Price of Algorithm Choice: Solvers in PETSc/TAO

[Figure: best f value found vs. number of evaluations on the chwirut1 problem (n = 6) for the solvers lmvm, pounders, and nm]

Toolkit for Advanced Optimization

[Munson et al.; mcs.anl.gov/tao]

Increasing level of user input:

nm: assumes ∇_x f unavailable, black box

pounders: assumes ∇_x f unavailable, exploits problem structure

lmvm: uses available ∇_x f

Observe: under a constrained budget on the number of evaluations, the choice of method limits the attainable solution accuracy and problem size


Why Not Global Optimization, min f(x) : x ∈ Ω?

Careful:

⋄ Global convergence: convergence (to a local solution/stationary point) from anywhere in Ω

⋄ Convergence to a global minimizer: obtain x∗ with f(x∗) ≤ f(x) ∀x ∈ Ω

[Figure: 1-D objective f(x) on [−4, 2] with a local minimizer and the global minimizer marked]

Anyone selling you global solutions when derivatives are unavailable:

either assumes more about your problem (e.g., convex f)

or expects you to wait forever

Törn and Žilinskas: An algorithm converges to the global minimum for any continuous f if and only if the sequence of points visited by the algorithm is dense in Ω.

or cannot be trusted

Instead:

⋄ Rapidly find good local solutions and/or be robust to poor solutions

⋄ Find several good local solutions concurrently (APOSMM/LibEnsemble); a naive multistart sketch follows
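A naive multistart sketch (not APOSMM itself) of the second bullet, with an invented multimodal 1-D objective:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Invented multimodal objective with several local minima on Omega = [-4, 2]
    return 0.5 * x[0]**2 + 10.0 * np.cos(2.0 * x[0])

rng = np.random.default_rng(0)
starts = rng.uniform(-4.0, 2.0, size=(20, 1))   # sample starting points in Omega
results = [minimize(f, x0) for x0 in starts]    # one local solve per start
best = min(results, key=lambda r: r.fun)        # keep the best local solution found
print(best.x, best.fun)
```

APOSMM additionally decides which starts merit a local run and coordinates the runs concurrently; this sketch simply runs them all.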


Optimization Tightly Coupled With Derivatives (WRT Parameters)

Typical optimality (no noise, smooth functions)

∇_x f(x∗) + λ^T ∇_x c_E(x∗) = 0,  c_E(x∗) = 0

(sub)gradients ∇_x f, ∇_x c enable:

⋄ Faster feasibility

⋄ Faster convergence: guaranteed descent, approximation of nonlinearities

⋄ Better termination: measure of criticality, ‖∇_x f‖ or ‖P_Ω(∇_x f)‖

But derivatives ∇xS(x) are not always available/do not always exist


Obtain Derivatives ∇xS Whenever Possible

Handcoding (HC)

“Army of students/programmers”

– Prone to errors/conditioning

– Intractable as the number of ops increases

Algorithmic/Automatic Differentiation (AD)

“Exact∗ derivatives!”

– No black boxes allowed

– Not always automatic/cheap/well-conditioned

Finite Differences (FD)

“Nonintrusive”

– Expense grows with n

– Sensitive to stepsize choice/noise

→ [Moré & W.; SISC 2011], [Moré & W.; TOMS 2012]

. . . then apply a derivative-based method (that handles inexact derivatives)


Algorithmic Differentiation

→ [Coleman & Xu; SIAM 2016], [Griewank & Walther; SIAM 2008]

Computational Graph

⋄ y = sin(a ∗ b) ∗ c

⋄ Forward and reverse modes

⋄ AD tool provides code for your derivatives

Write codes and formulate problems with AD in mind!

Many tools (see www.autodiff.org):

Fortran: OpenAD

Fortran/C: Tapenade, Rapsodia

C/C++: ADOL-C, ADIC

Matlab: ADiMat, INTLAB

Python/R: ADOL-C

Also done in AMPL, GAMS, Julia!
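The forward mode can be sketched with dual numbers: each value carries its derivative, and every operation applies the corresponding rule. A minimal, hand-rolled illustration (not one of the tools above) for y = sin(a ∗ b) ∗ c:

```python
import math

class Dual:
    """Forward-mode AD value: v is the primal value, d the derivative."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.v * other.v, self.d * other.v + self.v * other.d)  # product rule
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.v), math.cos(x.v) * x.d)  # chain rule

# Differentiate y = sin(a*b)*c with respect to a by seeding a.d = 1
a, b, c = Dual(1.0, 1.0), Dual(2.0), Dual(3.0)
y = sin(a * b) * c
print(y.v, y.d)   # y.d equals b*cos(a*b)*c, the exact dy/da
```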


Numerical Differentiation

The Problem: finite differences are sensitive to the choice of h

(f(t0 + h) − f(t0))/h ≈ f′_s(t0)

[Figure: noisy f(x) sampled near x = −0.5]

Minimize the mean-squared error: E(h) = E[((f(t0 + h) − f(t0))/h − f′_s(t0))²]


Optimal Forward Difference Parameter h

(1/4) µ_L² h² + 2 ε_f²/h² ≤ E(h) ≤ (1/4) µ_M² h² + 2 ε_f²/h²

[Figure: relative error of the f′ estimate vs. step size h]

h ↓ Variance (noise) dominates

h ↑ Bias (f ′′) dominates

1. The upper bound is minimized by h_M = 8^{1/4} (ε_f / µ_M)^{1/2}, where ε_f² = Var[f(t0)] and µ_M ≥ |f′′|

2. When µ_L > 0, h_M is near-optimal: E(h_M) = √2 µ_M ε_f ≤ (µ_M / µ_L) min_{0 ≤ h ≤ h0} E(h)

[Estimating Noisy Derivatives. Moré & W.; TOMS 2012]
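A small numerical check of this tradeoff, with an invented noisy function f(t) = sin(t) + ε_f·noise, so that µ_M = 1 bounds |f′′_s|:

```python
import numpy as np

rng = np.random.default_rng(1)
eps_f = 1e-6                                  # noise level of the "simulation"
def f(t):
    return np.sin(t) + eps_f * rng.standard_normal()

t0, fp_true = 0.5, np.cos(0.5)                # f_s'(t0) for the smooth part
mu_M = 1.0                                    # |f_s''| = |sin| <= 1
h_M = 8**0.25 * (eps_f / mu_M)**0.5           # near-optimal step from the slide

for h in [1e-8, 1e-6, h_M, 1e-2, 1e-1]:
    est = (f(t0 + h) - f(t0)) / h
    print(f"h = {h:.1e}   relative error = {abs(est - fp_true) / abs(fp_true):.2e}")
```

Steps much smaller than h_M land in the noise-dominated regime, and much larger ones in the bias-dominated regime.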


Simulation-Based Optimization

min_{x ∈ R^n} f(x) = F[S(x)] : c(S(x)) ≤ 0, x ∈ B

Optimize expensive, nonlinear functions arising in science & engineering

“parameter estimation”, “model calibration”, “design optimization”, . . .

⋄ f : Rn → R objective, S : Rn → Rp numerical simulation, Ω constraints

⋄ Evaluating S means running a simulation modeling some (smooth) process. Ex.- S = solving PDEs via finite elements

Here: assume f is from a deterministic computer simulation

⋄ S can contribute to objective and/or constraints, possibly noisy

⋄ Derivatives ∇_x S often unavailable or prohibitively expensive to obtain/approximate directly

⋄ S (which could/must be parallelized) takes seconds/minutes/hours/days for a single x

Evaluation is a bottleneck for optimization

B compact, known region (e.g., finite bound constraints)


Deterministic Algorithms


“Simplest” (=Most Naive) Formulation: Blackbox f

Optimizer gives x, physicist provides f(x)

⋄ f can be a blackbox (executable only orproprietary/legacy codes)

⋄ Only give a single output

no derivatives with respect to x: ∇xS(x),∇2x,xS(x)

no problem structure

Good solutions guaranteed in the limit, but:

⋄ Computational budget limits number of evaluations

Two main styles of local algorithms:

⋄ Direct search methods (pattern search, Nelder-Mead, . . . )

⋄ Model- (“surrogate-”)based methods (quadratics, radial basis functions, . . . )


Algorithms: Direct Search Methods

Pattern Search

[Figure: pattern-search iterates on the contours of f]

Easy to parallelize f evaluations

Nelder-Mead

[Figure: Nelder-Mead simplex on the contours of f]

Popularized by Numerical Recipes

⋄ Rely on indicator functions: [f(xk + s) <? f(xk)]

⋄ Work with black-box f(x), do not exploit structure F [x,S(x)]

→ [Kolda, Lewis, Torczon; SIREV 2003]
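A minimal compass-search sketch (the simplest pattern search), polling ±δ along each coordinate and shrinking δ on failure; the test objective is invented:

```python
import numpy as np

def compass_search(f, x, delta=0.5, tol=1e-6, max_evals=500):
    fx, nevals = f(x), 1
    while delta > tol and nevals < max_evals:
        improved = False
        for s in np.vstack([np.eye(len(x)), -np.eye(len(x))]):  # the 2n poll directions
            xt = x + delta * s
            ft = f(xt); nevals += 1
            if ft < fx:                  # only the indicator f(x_k + s) < f(x_k) is used
                x, fx, improved = xt, ft, True
                break
        if not improved:
            delta /= 2.0                 # shrink the pattern when no direction improves
    return x, fx

print(compass_search(lambda z: (z[0] - 1)**2 + (z[1] + 2)**2, np.array([0.0, 0.0])))
```

The 2n poll points of an iteration are independent, which is what makes the f evaluations easy to parallelize.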


Trust-Region Methods Use Models Instead of f

To reduce the number of expensive f evaluations
→ Replace the difficult optimization problem min f(x) with a much simpler one, min m(x) : x ∈ B

Classic NLP Technique:

f Original function: computationally expensive, no derivatives

m Surrogate model: computationally attractive, analytic derivatives


Basic Trust-Region Idea

Use a surrogate m(x) in place of the unwieldy f(x)

[Figure: surrogate model m and trust region on the contours of f]

Optimize over m to avoid expense of f

⋄ Trust m to approximate f within B = {x ∈ R^n : ‖x − x_k‖ ≤ ∆_k}

⋄ Obtain the next point from min m(x) : x ∈ B

⋄ Evaluate the function and update (x_k, ∆_k) based on how good the model’s prediction was
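A stripped-down sketch of this loop, using a linear model from finite-difference gradients (real model-based methods such as POUNDERS build interpolated quadratics instead); the objective is invented:

```python
import numpy as np

def fd_grad(f, x, h=1e-6):
    return np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(len(x))])

def tr_minimize(f, x, delta=1.0, eta=0.1, max_iter=50):
    for _ in range(max_iter):
        g = fd_grad(f, x)                          # build the (here: linear) model m
        if np.linalg.norm(g) < 1e-8:
            break
        s = -delta * g / np.linalg.norm(g)         # minimize m on the ball ||s|| <= delta
        pred = -g @ s                              # model's predicted decrease
        rho = (f(x) - f(x + s)) / pred             # actual vs. predicted decrease
        if rho > eta:
            x = x + s                              # accept the step
        delta = 2*delta if rho > 0.75 else 0.5*delta   # grow/shrink the trust region
    return x

print(tr_minimize(lambda z: (z[0] - 1)**2 + 5*(z[1] + 2)**2, np.array([0.0, 0.0])))
```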


Interpolation-Based Trust-Region Methods

Iteration k:

⋄ Build a model m_k interpolating f on Y_k

⋄ Trust m_k within region B_k

⋄ Minimize m_k within B_k to obtain the next point for evaluation

⋄ Do the expensive evaluation

⋄ Update m_k and B_k based on how good the model prediction was


Exploit Structure!


Performance of Model-Based Methods

[Figure: best f value found vs. number of evaluations for pounder, pounders, and poundersm]

Optimizing EDF in [Bertolli et al., PRC 2012]


Parameter Estimation is NOT a Blackbox Problem

Generic:

min_x f(x) : x ∈ Ω ⊆ R^n

x: n decision variables

f : R^n → R: objective function

Ω: feasible region, {x : c_E(x) = 0, c_I(x) ≤ 0}

c_E: (vector of) equality constraints

c_I: (vector of) inequality constraints

Typical calibration problem:

f(x) = ‖R(x)‖₂² = ∑_{i=1}^p R_i(x)²

x: n coupling constants

R_i : R^n → R: residual function; Ex.- R_i(x) = (1/w_i)(S(x; θ_i) − d_i)

S(x; θ_i): numerical simulation; Ex.- obtain χ²(x) by (1/(p − n)) f(x)

Ω = {x : l ≤ x ≤ u}: finite bounds (for some x_i), often dictated by dom(S)

[Ekström et al., PRL 2013], [Kortelainen et al., PRC 2014]

⋄ Taking advantage of structure should further reduce # of expensive evaluations


Exploiting Nonlinear Least Squares Structure

Obtain a vector of outputs R_1(x), . . . , R_p(x)

⋄ (Locally) model each R_i by a surrogate q_k^{(i)}:

R_i(x) ≈ q_k^{(i)}(x) = R_i(x_k) + (x − x_k)^⊤ g_k^{(i)} + ½ (x − x_k)^⊤ H_k^{(i)} (x − x_k)

⋄ Employ the models in the approximation

∇f(x) = ∑_i ∇R_i(x) R_i(x) → ∑_i g_k^{(i)}(x) R_i(x)

∇²f(x) = ∑_i ∇R_i(x) ∇R_i(x)^T + R_i(x) ∇²R_i(x) → ∑_i g_k^{(i)}(x) g_k^{(i)}(x)^T + R_i(x) H_k^{(i)}(x)
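A sketch of the resulting Gauss-Newton-type iteration, dropping the R_i ∇²R_i terms and using the exact residual Jacobian in place of the model gradients g_k^{(i)}; the toy fitting problem is invented:

```python
import numpy as np

def gauss_newton(R, J, x, max_iter=20):
    for _ in range(max_iter):
        r, Jx = R(x), J(x)
        if np.linalg.norm(2 * Jx.T @ r) < 1e-10:      # grad f = 2 J^T R
            break
        s, *_ = np.linalg.lstsq(Jx, -r, rcond=None)   # solves (J^T J) s = -J^T r
        x = x + s
    return x

# Invented toy calibration: fit d_i = exp(x1*theta_i) + x2
theta = np.linspace(0.0, 1.0, 11)
d = np.exp(0.7 * theta) + 0.3
R = lambda x: np.exp(x[0] * theta) + x[1] - d
J = lambda x: np.column_stack([theta * np.exp(x[0] * theta), np.ones_like(theta)])
print(gauss_newton(R, J, np.array([0.0, 0.0])))       # approaches (0.7, 0.3)
```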


General Nonlinear Least Squares

min_x f(x) = ‖R(x)‖_W²

R : R^n → R^p: “residual vector” → Think: R_i(x) = S(x; θ_i) − d_i

W-norm: ‖y‖_W = (y^T W y)^{1/2} → W = I_p recovers ‖·‖₂

W symmetric positive definite: W = W^T and y^T W y > 0 for all y ≠ 0

f(x) = ∑_{i=1}^p ∑_{j=1}^p W_{i,j} R_i(x) R_j(x) ≥ 0

W = (diag(σ))⁻¹ yields the familiar

f(x) = ∑_{i=1}^p (S(x; θ_i) − d_i)²/σ_i = ∑_{i=1}^p R_i(x)²/σ_i
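One standard way (not from the slides) to hand such an f to an ordinary sum-of-squares solver, when W is symmetric positive definite, is to whiten the residuals with a Cholesky factor W = L Lᵀ, since ‖R‖_W² = ‖LᵀR‖₂²; this also sidesteps the warning that follows. A minimal check:

```python
import numpy as np

def whitened(Rx, W):
    L = np.linalg.cholesky(W)      # requires W symmetric positive definite
    return L.T @ Rx                # Rtilde with ||Rtilde||_2^2 = R^T W R

W = np.array([[2.0, 0.5],
              [0.5, 1.0]])
Rx = np.array([1.0, -3.0])
Rt = whitened(Rx, W)
print(Rt @ Rt, Rx @ W @ Rx)        # both print 8.0
```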


A Warning

Can I pass this to my favorite min_x χ²(x) = ‖R(x)‖² solver?

∑_{i=1}^p ∑_{j=1}^p (R_{i,j}(x))² = ∑_{i=1}^p ∑_{j=1}^p (|W_{i,j} R_i(x) R_j(x)|^{1/2})² ≠ ∑_{i=1}^p ∑_{j=1}^p W_{i,j} R_i(x) R_j(x)

! Allow for complex-valued residuals

! Disallow W_{i,j} R_i(x) R_j(x) < 0

In any case, you will likely suffer algorithmically


Relationship to Covariance Matrices

Data (θ_1, d_1), · · · , (θ_p, d_p)

⋄ Errors independent and normally distributed: d ∼ N(µ, Σ),

d_i = µ(θ_i; x∗) + ε_i,  ε_i ∼ N(0, σ_i²),  i = 1, . . . , p.

Σ is a p × p diagonal matrix, with ith diagonal entry σ_i²

⋄ Model S(θ; x) with Gaussian errors:

[S(θ_1; x), · · · , S(θ_p; x)]^T ∼ N(µ(·; x), C),

⋄ C a (p × p symmetric positive definite) covariance matrix accounting for correlation between model outputs (i.e., Cov(S(θ_i; x), S(θ_j; x)) = C_{i,j})

⋄ Assuming model errors are independent of data errors,

[m(x; θ_1) − d_1, · · · , m(x; θ_p) − d_p]^T ∼ N(0, C + Σ),

⋄ Joint likelihood: l(x; θ; d) ∝ exp[−½ R(x; θ)^T (C + Σ)⁻¹ R(x; θ)]

Warning: C and Σ can no longer hide behind constants of proportionality
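A sketch of evaluating the resulting correlated-residual objective, −2 log l(x) up to a constant, with invented toy covariances; the Cholesky solve avoids forming (C + Σ)⁻¹ explicitly:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def neg2_loglik(Rx, C, Sigma):
    A = C + Sigma
    cf = cho_factor(A)                       # Cholesky of the total covariance
    quad = Rx @ cho_solve(cf, Rx)            # R^T (C + Sigma)^{-1} R
    return quad + np.linalg.slogdet(A)[1]    # keep log det(A): it cannot be dropped
                                             # once C, Sigma are themselves compared/estimated

p = 4
Sigma = np.diag([0.1] * p)                   # independent data errors
idx = np.arange(p)
C = 0.05 * np.exp(-np.abs(np.subtract.outer(idx, idx)))   # invented model covariance
print(neg2_loglik(np.ones(p), C, Sigma))
```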


Optical Potentials: Incorporating Covariances in W

[Figure: grid of elastic-scattering data panels used in the fits]

→ Monday talk of Lovell


Applications Using the Jacobian [J]_{i,j} = ∂R_i(x)/∂x_j = (1/w_i) ∂S(x; θ_i)/∂x_j

Residual R(x) ∈ R^p undergoes a change by ε ∈ R^p

⋄ Ex.- normalized datum d_i/w_i is changed to d_i/w_i + ε_i

x̄ ∈ argmin_{x ∈ R^n} f_0(x) = ‖R(x)‖₂²,  x̄_ε ∈ argmin_{x ∈ R^n} f_ε(x) = ‖R(x) + ε‖₂²

A second-order expansion of f_ε = ‖R(x) + ε‖₂² about x̄:

f_ε(x̄) + 2 ε^T J (x − x̄) + ½ (x − x̄)^T (∇²f_0(x̄) + 2 ∑_{i=1}^p ε_i ∇²R_i(x̄)) (x − x̄),

When ε is small, this quadratic will be convex and hence minimized at

x̄_ε − x̄ = 2 (∇²f_0(x̄))⁻¹ J^T ε + O(‖ε‖²).

When R(x̄) is small, ∇²f_0(x̄) ≈ 2 J^T J and

x̄_ε ≈ x̄ + (J^T J)⁻¹ J^T ε
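A numeric sketch of the final formula, with an invented Jacobian and perturbation:

```python
import numpy as np

J = np.array([[1.0, 0.0],       # p = 3 residuals, n = 2 parameters
              [1.0, 1.0],
              [1.0, 2.0]])
eps = np.array([0.01, -0.02, 0.005])

# First-order shift of the minimizer: xbar_eps - xbar ~ (J^T J)^{-1} J^T eps
shift = np.linalg.solve(J.T @ J, J.T @ eps)
print(shift)
```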


Stochastic Optimization


Stochastic Optimization

General problem:

min {f(x) = E_ξ[F(x, ξ)] : x ∈ X}   (1)

⋄ x ∈ Rn decision variables

⋄ ξ: vector of random variables independent of x; P(ξ) distribution function for ξ; ξ has support Ξ

⋄ F (x, ·) functional form of uncertainty for decision x

⋄ X ⊆ Rn set defined by deterministic constraints


Approach of Sampling Methods for f(x) = Eξ [F (x, ξ)]

⋄ Let ξ¹, ξ², · · · , ξᴺ ∼ P

⋄ For x ∈ X, define:

f_N(x) = (1/N) ∑_{i=1}^N F(x, ξⁱ)

f_N is a random variable (really, a stochastic process); it depends on (ξ¹, ξ², · · · , ξᴺ)

Motivated by E_ξ[f_N(x)] = f(x)


Bias of Sampling Methods

⋄ Let f∗ = f(x∗) for x∗ ∈ X∗ ⊆ X

⋄ For any N ≥ 1: E_ξ[f*_N] ≤ f* = E_ξ[F(x*, ξ)]

because

E_ξ[f*_1] = E_ξ[min {F(x, ξ) : x ∈ X}] ≤ min {E_ξ[F(x, ξ)] : x ∈ X} = f*

⋄ Sampling problems result in optimal values below f*

⋄ f*_N is a biased estimator of f*


Sample Average Approximation

⋄ Draw realizations ξ¹, ξ², · · · , ξᴺ ∼ P of (ξ¹, ξ², · · · , ξᴺ)

⋄ Replace (1) with

min {(1/N) ∑_{i=1}^N F(x, ξⁱ) : x ∈ X}   (2)

f_N(x) = (1/N) ∑_{i=1}^N F(x, ξⁱ) is deterministic

It follows the mean of the N sample paths defined by the (fixed) ξⁱ
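A minimal SAA sketch with an invented F(x, ξ) = (x − ξ)²: once the realizations are drawn and frozen, (2) is an ordinary deterministic problem:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
xi = rng.standard_normal(1000)             # fixed realizations xi^1, ..., xi^N

F = lambda x, xi: (x - xi)**2              # invented F; f(x) = E[(x - xi)^2]
f_N = lambda x: np.mean(F(x, xi))          # deterministic once xi is fixed

res = minimize_scalar(f_N)                 # solve the SAA problem (2)
print(res.x, xi.mean())                    # here x*_N is exactly the sample mean
```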


Convergence with N

⋄ A sufficient condition: for any ε > 0 there exists N_ε so that

|f_N(x) − f(x)| < ε  ∀N ≥ N_ε, ∀x ∈ X

with probability 1 (wp1).

⋄ Then f*_N → f* wp1.

⋄ (With additional assumptions on f and X* ⊂ X): dist(x*_N, X*) → 0

⋄ (+ uniqueness, X* = {x*}): x*_N → x*


Stochastic Approximation Method

Basically just:

Input x0

1. x_{k+1} ← P_X[x_k − α_k s_k],  k = 0, 1, . . .

⋄ α_k a step size

⋄ s_k a random direction

Generally assume:

α_k: ∑_{k=0}^∞ α_k = ∞, ∑_{k=0}^∞ α_k² < ∞ (e.g., α_k = c/k)

s_k: E[∇f(x_k)^T s_k] > 0

(s_k is an ascent direction, in expectation, at x_k)

⋄ “Exact” Stochastic Gradient Descent: s_k = ∇f(x_k)


Classic SA Algorithms

⋄ “Original” method is Robbins-Monro (1951)

⋄ Without derivatives: Kiefer-Wolfowitz (1952) replaces the gradient with a finite-difference approximation, e.g.,

1. x_{k+1} ← x_k − α_k s_k,  k = 0, 1, . . .

where

s_k = (F(x_k + h_k I_n; ξᵏ) − F(x_k − h_k I_n; ξ^{k+1/2})) / (2 h_k)

Requires 2n evaluations every iteration

Can appeal to variance reduction techniques (e.g., common random numbers)

Convergence x_k → x∗ if f is strongly convex (near x∗), with the usual conditions on α_k,

h_k → 0,  ∑_k α_k²/h_k² < ∞

K-W recommend: α_k = 1/k, h_k = 1/k^{1/3}

⋄ Extensions such as SPSA (Spall) reduce the number of evaluations (see randomized-methods slides. . . ); a minimal Kiefer-Wolfowitz sketch follows
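The promised sketch, on an invented 1-D noisy objective with minimizer x∗ = 2:

```python
import numpy as np

rng = np.random.default_rng(3)
F = lambda x, xi: (x - 2.0)**2 + xi * x     # invented; f(x) = E[F(x, xi)] = (x - 2)^2

x = 0.0
for k in range(1, 2001):
    a_k, h_k = 1.0 / k, 1.0 / k**(1/3)      # the K-W recommended schedules
    s_k = (F(x + h_k, rng.standard_normal()) -
           F(x - h_k, rng.standard_normal())) / (2 * h_k)   # noisy central difference
    x = x - a_k * s_k
print(x)                                    # approaches 2
```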


Derivative-Based Stochastic Gradient Descent

Input x0; Repeat:

1. Draw realization ξᵏ ∼ P of ξᵏ

2. Compute s_k = ∇_x F(x_k; ξᵏ)

3. Update x_{k+1} ← P_X[x_k − α_k s_k]

⋄ ∇_x F(x_k; ξᵏ) is an unbiased estimator for ∇f(x_k)

⋄ Can incorporate curvature if desired

e.g., B_k s_k an unbiased estimator for (∇²f(x_k))⁻¹ ∇f(x_k)

⋄ Can work with subgradients

⋄ Can even output x̄_N = (1/N) ∑_{k=1}^N xᵏ
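A projected-SGD sketch with running-average output, again on an invented F(x, ξ) = ‖x − ξ‖²:

```python
import numpy as np

rng = np.random.default_rng(4)
grad_F = lambda x, xi: 2 * (x - xi)          # grad_x F; unbiased estimator of grad f
proj_X = lambda x: np.clip(x, -1.0, 1.0)     # projection onto X = [-1, 1]^n

x, avg = np.zeros(2), np.zeros(2)
for k in range(1, 5001):
    xi = 0.5 + 0.1 * rng.standard_normal(2)  # invented distribution of xi
    x = proj_X(x - (1.0 / k) * grad_F(x, xi))   # steps 1-3 of the method
    avg += (x - avg) / k                     # xbar_N = (1/N) sum_k x^k, computed online
print(x, avg)                                # both approach E[xi] = (0.5, 0.5)
```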


Randomized Algorithms for Deterministic Problems

min f(x) : x ∈ X ⊆ Rn

⋄ f deterministic

⋄ Random variables are now generated by the method, not from the problem

⋄ Often assume properties of f

e.g., ∇f is L′-Lipschitz: ‖∇f(x) − ∇f(y)‖ ≤ L′ ‖x − y‖ ∀x, y ∈ X

e.g., f is strongly convex (with parameter τ): f(x) ≥ f(y) + (x − y)^T ∇f(y) + (τ/2) ‖x − y‖² ∀x, y ∈ X


Basic Algorithms

Matyas (e.g., 1965):

⋄ Input x0; repeat:

1. Generate Gaussian uᵏ (centered about 0)

2. Evaluate f(xᵏ + uᵏ)

3. x^{k+1} = xᵏ + uᵏ if f(xᵏ + uᵏ) < f(xᵏ); x^{k+1} = xᵏ otherwise.

Poljak (e.g., 1987):

⋄ Input x0, {h_k, µ_k}_k; repeat:

1. Generate a random uᵏ ∈ R^n

2. x^{k+1} = xᵏ − h_k [(f(xᵏ + µ_k uᵏ) − f(xᵏ)) / µ_k] uᵏ

h_k > 0 is the step size; µ_k > 0 is called the smoothing parameter
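A sketch of the Matyas accept/reject scheme on an invented smooth objective:

```python
import numpy as np

rng = np.random.default_rng(5)
f = lambda x: (x[0] - 1.0)**2 + (x[1] + 0.5)**2   # invented objective

x = np.zeros(2)
fx = f(x)
for k in range(2000):
    u = 0.1 * rng.standard_normal(2)    # Gaussian direction centered about 0
    fu = f(x + u)
    if fu < fx:                         # accept only on strict improvement
        x, fx = x + u, fu
print(x, fx)
```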


Applying SA-Like Ideas to Special Cases

min {f(x) = (1/m) ∑_{i=1}^m F_i(x) : x ∈ X}

m huge

Ex.- Nonlinear least squares (Warning: likely nonconvex!):

F_i(x) = ‖φ(x; θ_i) − d_i‖²; evaluating φ(·, ·) requires solving a large PDE

Ex.- Sample average approximation:

F_i(x) = R(x; ξⁱ), with ξⁱ ∈ Ω a scenario/RV realization (and R depends nontrivially on ξⁱ)

The good:

⋄ ∇f(x) = ∑_{i=1}^m ∇F_i(x)

The bad:

⋄ m still huge


Residual Stochastic Averaging

min {f(x) = (1/m) ∑_{i=1}^m F_i(x) : x ∈ X}

“Fi(x) is a member of a population of size m”

⋄ Randomly sample S, a subset of size |S|, from {1, · · · , m}

⋄ Under minimal assumptions:

E[(1/|S|) ∑_{i∈S} F_i(x)] = f(x)  and  E[(1/|S|) ∑_{i∈S} ∇F_i(x)] = ∇f(x)

⋄ Use −∇f_S = −(1/|S|) ∑_{i∈S} ∇F_i(x) as the direction s_k

⋄ How to choose S?

E[‖∇f_{S_n} − ∇f‖²] = (1 − |S|/m) E[‖∇f_{S_r} − ∇f‖²]

⇒ sampling without replacement (S_n) gives lower variance than does sampling with replacement (S_r)
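A sketch of the subsampled estimator on an invented least-squares instance, showing the variance shrink as |S| grows:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 1000, 3
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)
grad_Fi = lambda x, i: 2 * A[i] * (A[i] @ x - b[i])   # grad of F_i = (a_i . x - b_i)^2

def grad_S(x, size):
    S = rng.choice(m, size=size, replace=False)       # sample S without replacement
    return np.mean([grad_Fi(x, i) for i in S], axis=0)

x = np.zeros(n)
full = np.mean([grad_Fi(x, i) for i in range(m)], axis=0)   # exact (1/m) sum grad F_i
print(np.linalg.norm(grad_S(x, 10) - full),
      np.linalg.norm(grad_S(x, 500) - full))          # larger |S|: smaller error
```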


Bayesian Optimization for Approximate Global Optimization

Statistical approaches (e.g., EGO [Jones et al., 1998])

⋄ enjoy global exploration properties,

⋄ excel when simulation is expensive, noisy, nonconvex

. . . but offer limited support for constraints

[Schonlau et al., 1998]; [Gramacy & Lee, 2011]; [Williams et al., 2010]

Combine (global) statistical (objective-only) optimization tools:

a) response surface modeling/emulation: training a flexible model f_k on {x^{(i)}, y^{(i)}}_{i=1}^k to guide choosing x^{(k+1)}; e.g., [Mockus et al., 1978], [Booker et al., 1999]

b) expected improvement (EI) via Gaussian process (GP) emulation [Jones et al., 1998]

. . . with a tool from mathematical programming:

c) augmented Lagrangian (AL): for handling nonlinear constraints [Powell, 1969], [Bertsekas, 1982], . . .

Similar approach for combining other data terms: [Picheny, Gramacy, W., Le Digabel; NIPS 2016]; [Gramacy et al.; Technometrics 2016]


Expected Improvement

Improvement: I(x) = max{0, f^k_min − Y(x)},  f^k_min ≡ min_{i=1,...,k} f(xⁱ)

Expectation of improvement (EI) has a closed-form expression:

EI(x) = (f^k_min − µ^k(x)) Φ((f^k_min − µ^k(x)) / σ^k(x)) + σ^k(x) φ((f^k_min − µ^k(x)) / σ^k(x))

[Figure: GP (DACE) predictor µ(x) with standard error and f_min (left); the resulting EI(x) surfaces (right)]

⋄ balance exploitation and exploration

⋄ e.g., EGO: [Jones et al., 1998]
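The closed form translates directly to code; a minimal sketch with the GP posterior mean and standard deviation as inputs:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Toy check: EI is high for a low mean (exploitation) or a large sigma (exploration)
print(expected_improvement(np.array([0.5, 2.0]), np.array([0.1, 2.0]), f_min=1.0))
```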


Separate, Independent Component Modeling

⋄ f −→ Y_f(x)

⋄ c = (c_1, . . . , c_m) −→ Y_c(x) = (Y_{c_1}(x), . . . , Y_{c_m}(x))

The distribution of the composite random variable serves as a surrogate for L_A(x; λ, ρ):

Y(x) = Y_f(x) + λ^⊤ Y_c(x) + (1/(2ρ)) ∑_{j=1}^m max(0, Y_{c_j}(x))²

Simplifications when f is known:

⋄ Composite posterior mean available in closed form; e.g., under GP priors:

E[Y(x)] = µ^k_f(x) + λ^⊤ µ^k_c(x) + (1/(2ρ)) ∑_{j=1}^m E[max(0, Y_{c_j}(x))²]

⋄ Generalized EI [Schonlau et al., 1998] gives

E[max(0, Y_{c_j}(x))²] = σ^k_{c_j}(x)² [ (1 + (µ^k_{c_j}(x)/σ^k_{c_j}(x))²) Φ(µ^k_{c_j}(x)/σ^k_{c_j}(x)) + (µ^k_{c_j}(x)/σ^k_{c_j}(x)) φ(µ^k_{c_j}(x)/σ^k_{c_j}(x)) ]
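This expectation also translates directly to a short function; a sketch for a single constraint component Y ∼ N(µ, σ²):

```python
import numpy as np
from scipy.stats import norm

def expected_sq_violation(mu, sigma):
    """E[max(0, Y)^2] for Y ~ N(mu, sigma^2), per the generalized-EI formula above."""
    z = mu / sigma
    return sigma**2 * ((1 + z**2) * norm.cdf(z) + z * norm.pdf(z))

print(expected_sq_violation(np.array([-1.0, 0.0, 1.0]), np.array([0.5, 0.5, 0.5])))
```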


Summary

⋄ Move beyond “blackbox” optimization

⋄ Exploiting structure yields better solutions, in fewer simulations

⋄ Promote optimization/modeling considerations during code development

⋄ Correlated residuals a first step

⋄ Highlights attention that must be paid to model and data uncertainties

⋄ Can repeat for non-Gaussian, MAPs, . . .

[www.mcs.anl.gov/tao (Optimization toolkit) www.mcs.anl.gov/~wild (Get in touch!)]

Grateful to relevant coauthors

M. Bertolli, A. Ekström, C. Forssén, R. Gramacy, G. Hagen, M. Hjorth-Jensen, D. Higdon, G.R. Jansen, M. Kortelainen, E. Lawrence, T. Lesinski, A. Lovell, R. Machleidt, J. McDonnell, J. Moré, T. Munson, H. Nam, W. Nazarewicz, F.M. Nunes, E. Olsen, T. Papenbrock, A. Pastore, P.-G. Reinhardt, J. Sarich, N. Schunck, M. Stoitsov, J. Vary, K. Wendt, and others

Thank You!
