
Page 1

Stochastic global optimization as a filtering problem

Panos Stinis

Northwest Institute for Advanced Computing
Pacific Northwest National Laboratory

Barrett Lectures, UT Knoxville, May 2015

Page 2

Global optimization

For many optimization problems the function to be optimized is a function of several variables and can have multiple local extrema. Usually, optimization algorithms are guaranteed to find a local extremum when started from an arbitrary initial condition.

In order to reach a global extremum we should use an optimization algorithm with multiple starting values and choose the best solution.

There is a category of global (deterministic and stochastic) optimization algorithms (e.g. branch and bound, tabu methods, genetic algorithms, ant colony optimization, swarm optimization, hill climbing) which allow for the different solutions to interact.

Page 3

Assume that we are given a scalar objective function H(x) depending on d parameters x_1, ..., x_d. The purpose of the optimization problem is to compute the optimal (maximal or minimal) value of H(x), say op(H(x)), as well as the optimal parameter vector x̂ = (x̂_1, ..., x̂_d) for which H(x̂) = op(H(x)).

Generic optimization algorithm

1. Pick an initial approximation x^(0).
2. For k ≥ 0, compute x^(k+1) = f(x^(k)), where f(x) = (f_1(x), ..., f_d(x)) is a d-dimensional vector-valued function. Different optimization algorithms use a different function f(x). Note that in general f can depend on states x^(i) with i < k. Here we restrict attention to algorithms which use only the previous state, i.e. x^(k).
3. Evaluate H(x^(k+1)). If H(x^(k+1)) satisfies a stopping criterion or k + 1 is equal to a maximum number of iterations k_max, stop. Else, set k = k + 1 and proceed to Step 2.
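As a concrete illustration (not part of the original slides), here is a minimal Python sketch of this generic loop. The specific f and H used at the bottom, a gradient-descent step on a quadratic, are purely hypothetical placeholders.

```python
import numpy as np

def optimize(f, H, x0, k_max=1000, tol=1e-8):
    """Generic iterative optimization x^(k+1) = f(x^(k)); stop when the
    objective changes by less than tol or k_max iterations are reached."""
    x = np.asarray(x0, dtype=float)
    h_old = h_new = H(x)
    for _ in range(k_max):
        x = f(x)                        # one step of the chosen algorithm
        h_new = H(x)
        if abs(h_new - h_old) < tol:    # simple stopping criterion
            break
        h_old = h_new
    return x, h_new

# Illustration only: gradient descent on H(x) = ||x||^2, one descent step as f.
H = lambda x: float(np.dot(x, x))
f = lambda x: x - 0.1 * 2.0 * x         # step size 0.1, gradient 2x
x_hat, h_hat = optimize(f, H, x0=np.ones(3))
```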

Page 4

For many optimization problems we only have access to a random estimate of the value of the objective function H and of the optimization algorithm function f.

The uncertainty can come from noisy measurements. Also, it can be due to Monte Carlo sampling error for cases where H and/or f involve expectations with respect to a probability density.

Stochastic optimization algorithm

1. Pick an initial approximation x^(0).
2. For k ≥ 0, compute x^(k+1) = f_s(x^(k)).
3. Evaluate H_s(x^(k+1)). If the value H_s(x^(k+1)) satisfies a stopping criterion or k + 1 is equal to a maximum number of iterations k_max, stop. Else, set k = k + 1 and proceed to Step 2.
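To make the source of randomness tangible, here is a hedged sketch in which H(x) = E[(x − Z)^2] with Z ~ N(0, 1), so both H_s and f_s are Monte Carlo estimates. The example objective, sample size, and step size are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def H_s(x, n_samples=100):
    """Monte Carlo estimate of H(x) = E[(x - Z)^2], Z ~ N(0, 1);
    the sampling error shrinks like 1/sqrt(n_samples)."""
    z = rng.standard_normal(n_samples)
    return float(np.mean((x - z) ** 2))

def f_s(x, step=0.1, n_samples=100):
    """Noisy optimizer step: stochastic gradient descent using a
    Monte Carlo estimate of dH/dx = 2x."""
    z = rng.standard_normal(n_samples)
    grad = float(np.mean(2.0 * (x - z)))
    return x - step * grad
```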

Page 5

Naive stochastic global optimization algorithm

1. Pick a collection of initial approximations x_1^(0), ..., x_N^(0).
2. Run N realizations of the stochastic optimization algorithm, one for each of the different initial conditions x_1^(0), ..., x_N^(0), say for M iterations.
3. Choose x̃ = argmin_{i=1,...,N} H_s(x_i^(M)).
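A minimal sketch of this naive multi-start strategy, reusing the hypothetical f_s and H_s sketched earlier; the number of iterations M and the initial points are illustrative.

```python
import numpy as np

def naive_global_opt(f_s, H_s, initial_points, M=50):
    """Run one independent realization of the stochastic optimizer per
    initial condition for M iterations and keep the best final iterate."""
    finals = []
    for x in initial_points:
        for _ in range(M):
            x = f_s(x)
        finals.append(x)
    values = [H_s(x) for x in finals]
    best = int(np.argmin(values))
    return finals[best], values[best]

# Example usage with scalar initial guesses spread over [-5, 5]:
# x_best, h_best = naive_global_opt(f_s, H_s, np.linspace(-5, 5, 10))
```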

In the naive stochastic global optimization algorithm each realization of the stochastic optimization algorithm evolves independently of the other realizations.

We would like to have a way to allocate more realizations in areas of the parameter vector space which seem more promising in an optimization sense.

Main idea: Treat the stochastic global optimization problem as a filtering problem.

Page 6

Filtering reformulation of stochastic optimization

The filtering problem is the problem of incorporating information from noisy observations of a system (noisy measurements of some function of the system) in order to correct the system's evolution trajectory.

The evolved system is usually stochastic and, in the case of discrete-time formulations, is given by a stochastic map.

In the stochastic optimization algorithm context, we can think of: i) the iterative optimization algorithm, i.e. x^(k+1) = f_s(x^(k)), as the stochastic map, and ii) the evaluation of the objective function, i.e. H_s(x^(k+1)), as the noisy observation of the system.

This allows us to recast (reformulate) the stochastic optimization problem as a filtering problem.

Page 7

Let k be a discrete time index. Consider the following Markovian model, which is motivated by the stochastic optimization algorithm:

x^(k+1) = f_s(x^(k)),   (1)
y^(k+1) = H_s(x^(k+1)) + v^(k+1),   (2)

where v^(k+1) is a random variable with known properties.

The filtering problem is to compute, for any time t, the posterior distribution p(x^(t) | y^(1:t)):

Prediction: p(x^(t) | y^(1:t−1)) = ∫ p(x^(t) | x^(t−1)) p(x^(t−1) | y^(1:t−1)) dx^(t−1)

Update: p(x^(t) | y^(1:t)) = p(y^(t) | x^(t)) p(x^(t) | y^(1:t−1)) / ∫ p(y^(t) | x^(t)) p(x^(t) | y^(1:t−1)) dx^(t)

Page 8

Usually, we need to compute approximations to the marginal distributions, which should converge in some limit to the exact distributions.

In filtering problems, the observations are provided by some external measurement of the system. In the context of the stochastic optimization algorithm, there is no external measurement of the system.

The filtering reformulation of the stochastic optimization problem consists of solving the filtering problem for

x^(k+1) = f_s(x^(k)),
y^(k+1) = H_s(x^(k+1)) + v^(k+1),

where y^(k+1) is a non-increasing sequence. Also, v^(k+1) is a sequence of independent random variables with properties determined by the accuracy with which we know the value of H_s(x^(k+1)).

Page 9

Let x^(k+1) ~ h(x^(k+1) | x^(k)) and y^(k+1) ~ g(y^(k+1) | x^(k+1)). Suppose that at time t we have a collection of M random samples (particles) (x_1^(t), ..., x_M^(t)) which follow approximately the current marginal distribution p(x^(t) | y^(1:t)).

Particle filter algorithm

1. Draw x_j^(t+1)* from the state density h(x^(t+1) | x_j^(t)), for j = 1, ..., M.
2. Assign to each draw the weight (also known as likelihood) w_j* = g(y^(t+1) | x_j^(t+1)*), for j = 1, ..., M.
3. Compute the normalized weights w_j = w_j* / ∑_{l=1}^{M} w_l*, for j = 1, ..., M.
4. Resample from (x_1^(t+1)*, ..., x_M^(t+1)*) with probability proportional to w_j, j = 1, ..., M, to produce new samples (x_1^(t+1), ..., x_M^(t+1)) at time t + 1.
5. Set t = t + 1 and proceed to Step 1.
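A minimal bootstrap particle filter step in Python, assuming user-supplied samplers for the state density h and the observation likelihood g; this is a sketch of the four steps above, not the author's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, y_next, sample_state, likelihood):
    """One cycle of the particle filter:
    draw from h(.|x_j), weight by g(y|x_j*), normalize, resample."""
    M = len(particles)
    proposed = np.array([sample_state(x) for x in particles])   # Step 1
    w = np.array([likelihood(y_next, x) for x in proposed])     # Step 2
    w = w / w.sum()                                             # Step 3
    idx = rng.choice(M, size=M, p=w)                            # Step 4
    return proposed[idx]
```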

Page 10

The particle filter approximates the marginal distribution p(x^(t) | y^(1:t)) by the distribution p̃(x^(t) | y^(1:t)) = (1/M) ∑_{i=1}^{M} δ_{x_i^(t)}, where δ_{x_i^(t)} is the Dirac measure located at x_i^(t).

If one thinks of the particles in the particle filter construction as different initial conditions for the stochastic global optimization algorithm, then the particle filter becomes the filter analog of stochastic global optimization.

Following the notation of the particle filter algorithm, we can specify the value of the observation at step k as y^(k) = min_{j=1,...,M} H_s(x_j^(k)*).

Page 11

1. Draw samples x_1^(0), ..., x_M^(0) from an initial density μ_0(x). Set t = 0.
2. Draw samples x_j^(t+1)* from the state density h(x_j^(t+1) | x_j^(t)), for j = 1, ..., M. Compute H_s(x_j^(t+1)*) for j = 1, ..., M.
3. Set y^(t+1) = min_{j=1,...,M} H_s(x_j^(t+1)*).
4. Compute the weights w_j*(x_j^(t+1)*) = g(y^(t+1) | x_j^(t+1)*), for j = 1, ..., M.
5. Compute the normalized weights w_j(x_j^(t+1)*) = w_j* / ∑_{l=1}^{M} w_l*, for j = 1, ..., M.
6. Choose the estimate x̃_{t+1} = argmax_{j=1,...,M} w_j(x_j^(t+1)*). If t + 1 is equal to a maximum allowed number of iterations t_max or y^(t+1) satisfies a stopping criterion, terminate.
7. Resample from (x_1^(t+1)*, ..., x_M^(t+1)*) with probability proportional to w_j, j = 1, ..., M, to produce new samples (x_1^(t+1), ..., x_M^(t+1)) at time t + 1.
8. Set t = t + 1 and proceed to Step 2.
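The algorithm above can be sketched in a few lines of Python. The Gaussian observation density with a tunable noise scale sigma_v, and the use of a generic stochastic optimizer step f_s as the state density, are assumptions made only to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(2)

def filter_global_opt(f_s, H_s, mu0_sampler, M=100, t_max=50, sigma_v=0.1):
    """Particle-filter global optimization sketch.
    State density: one stochastic optimizer step f_s per particle.
    Observation: y = min_j H_s(x_j*); weight g is Gaussian around H_s(x_j*)."""
    particles = np.array([mu0_sampler() for _ in range(M)])      # Step 1
    best_x, best_val = None, np.inf
    for t in range(t_max):
        proposed = np.array([f_s(x) for x in particles])         # Step 2
        h_vals = np.array([H_s(x) for x in proposed])
        y = h_vals.min()                                         # Step 3
        w = np.exp(-0.5 * ((y - h_vals) / sigma_v) ** 2)         # Steps 4-5
        w = w / w.sum()
        j_star = int(np.argmax(w))                               # Step 6
        if h_vals[j_star] < best_val:
            best_x, best_val = proposed[j_star], h_vals[j_star]
        idx = rng.choice(M, size=M, p=w)                         # Step 7
        particles = proposed[idx]                                # Step 8
    return best_x, best_val
```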

Page 12

The particle filter reformulation of global optimization does not suffer from any convergence drawbacks compared to the naive global optimization.

Naive global optimization will converge to a global minimum given an adequate number of particles. What the particle filter adds to the naive optimization is to pick, after every iteration, the solution that seems most promising in an optimization sense.

Since we have chosen the value of the observation to be equal to the minimum of the values of the objective function over all particles, the sequence of values picked for the objective function is guaranteed to be non-increasing.

However, the particle filter algorithm will not necessarily converge faster than the naive optimization algorithm.

Page 13

Modified particle filter algorithm

The main motivation behind the modified particle filter algorithm is that, in general, optimization algorithms depend on the values of some parameters which have to be tuned to ensure the efficient performance of the optimization algorithm.

The resampling step in the particle filter algorithm produces more copies (call them children) of the promising particles (call them parents). However, all the children particles inherit the same properties from their parent particle. This includes the values of the parameters that control the behavior of the optimization algorithm.

Main idea: Different children can be assigned different values of the parameters. This allows a faster exploration of the neighborhood of a promising parent particle.

Page 14

Exponential density estimation

Suppose that we are given a collection of N independent samples of an n-dimensional random vector x. In general, we do not know which density the samples are drawn from.

There are many examples in practical applications where the random vector comes from an exponential density. However, even if we do not know that the samples are drawn from an exponential density, we can try to fit an exponential density to the samples.

Let x = (x_1, ..., x_n) be an n-dimensional random vector. Also, let ψ_k(x), k = 1, ..., l, be a collection of functions of x. The exponential density defined by these functions has the form

p(x, α) = exp(−⟨α, ψ(x)⟩) / Z(α),

where α = (α_1, ..., α_l) and Z(α) is the normalization constant.
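To fix notation, here is a small hedged sketch of evaluating this density up to the normalization constant Z(α); the feature map psi_demo below is a placeholder, not the one used in the experiments (which is given later in Eq. (4)).

```python
import numpy as np

def log_p_unnormalized(x, alpha, psi):
    """log of exp(-<alpha, psi(x)>), i.e. log p(x, alpha) up to -log Z(alpha)."""
    return -float(np.dot(alpha, psi(x)))

# Placeholder feature map: first and second moments of a 2-D vector.
psi_demo = lambda x: np.array([x[0], x[1], x[0]**2, x[0]*x[1], x[1]**2])
```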

Page 15

We want to estimate the unknown exponential density by maximizing the likelihood function of the samples

L = ∏_{j=1}^{N} p(x_j, α),   (3)

where p(x_j, α) is the unknown exponential density whose parameters α we wish to determine.

Differentiation of log L with respect to the α_k and setting the derivative equal to zero results in

E_α[ψ_k(x)] = (1/N) ∑_{j=1}^{N} ψ_k(x_j),   k = 1, ..., l,

where

E_α[ψ_k(x)] = ∫ ψ_k(x) exp(−⟨α, ψ(x)⟩) dx / ∫ exp(−⟨α, ψ(x)⟩) dx

is the expectation value of the function ψ_k with respect to the density p(x, α).

Page 16

Define the l-dimensional vector f(α) = (f_1(α), f_2(α), ..., f_l(α)) as

f_k(α) = E_α[ψ_k(x)] − (1/N) ∑_{j=1}^{N} ψ_k(x_j),   k = 1, ..., l.

The moment-matching problem amounts to solving the system of (nonlinear) equations f_k(α) = 0, k = 1, ..., l.

Define the error function ε(α) as

ε(α) = (1/2) ∑_{k=1}^{l} f_k^2(α).

We will use the Levenberg-Marquardt (LM) algorithm to solve the optimization problem.
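A sketch of the moment-matching residual f(α) and the error ε(α), with the expectations E_α[ψ_k] replaced by Monte Carlo averages over samples drawn from p(x, α); the sampler sample_from_p is left abstract because the slides do not specify one.

```python
import numpy as np

def residual(alpha, psi, data, sample_from_p, n_mc=1000):
    """f_k(alpha) = E_alpha[psi_k(x)] - (1/N) sum_j psi_k(x_j),
    with E_alpha estimated by Monte Carlo."""
    data_moments = np.mean([psi(x) for x in data], axis=0)
    mc_samples = sample_from_p(alpha, n_mc)              # draws from p(x, alpha)
    model_moments = np.mean([psi(x) for x in mc_samples], axis=0)
    return model_moments - data_moments

def error(alpha, psi, data, sample_from_p, n_mc=1000):
    """eps(alpha) = 0.5 * sum_k f_k(alpha)^2."""
    f = residual(alpha, psi, data, sample_from_p, n_mc)
    return 0.5 * float(np.dot(f, f))
```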

Page 17

The LM algorithm uses a positive parameter λ to control convergence, and the updates of the parameters at step m + 1 are calculated through the formula

[J^T J + λ diag(J^T J)] (α^(m+1) − α^(m)) = −J^T f(α^(m)),

where J = ∂f_i/∂α_j |_{α=α^(m)}, i, j = 1, ..., l, is the Jacobian of f(α^(m)) and J^T its transpose. We have to prescribe a way of computing the Jacobian J(α^(m)). The element J_ij of the Jacobian is given by

J_ij(α^(m)) = −(E_{α^(m)}[ψ_i(x) ψ_j(x)] − E_{α^(m)}[ψ_i(x)] E_{α^(m)}[ψ_j(x)])

for i, j = 1, ..., l (note that the Jacobian is symmetric).

So, all the quantities involved can be expressed as expectation values with respect to the m-th step parameter estimate α^(m).
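A sketch of one Levenberg-Marquardt update using the covariance formula for the Jacobian; the model moments and the ψ_i ψ_j cross-moments are again Monte Carlo estimates, and the linear solve follows the damped normal equations above. The sampler and feature map are the same abstract placeholders as before.

```python
import numpy as np

def lm_step(alpha, psi, data, sample_from_p, lam, n_mc=1000):
    """One LM update: solve [J^T J + lam*diag(J^T J)] d = -J^T f(alpha)."""
    mc = np.array([psi(x) for x in sample_from_p(alpha, n_mc)])
    data_moments = np.mean([psi(x) for x in data], axis=0)
    model_moments = mc.mean(axis=0)
    f = model_moments - data_moments
    # Jacobian J_ij = -(E[psi_i psi_j] - E[psi_i] E[psi_j])  (symmetric)
    J = -(mc.T @ mc / len(mc) - np.outer(model_moments, model_moments))
    A = J.T @ J + lam * np.diag(np.diag(J.T @ J))
    d = np.linalg.solve(A, -J.T @ f)
    return alpha + d
```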

Page 18

For the objective function H_s(α_j^(t+1)*) we define H_s(α_j^(t+1)*) = (2 ε̃(α_j^(t+1)*) / l)^(1/2), where ε̃(α_j^(t+1)*) is the Monte Carlo estimate of the error ε(α_j^(t+1)*) of the j-th particle.

The quantity (2 ε̃(α_j^(t+1)*) / l)^(1/2) is the average error per moment for the j-th particle.

We also have to specify the observation density g(y^(t+1) | α_j^(t+1)*). Recall that we have defined the observation process y^(k+1) as y^(k+1) = H_s(α_*^(k+1)) + v^(k+1), where α_*^(k+1) is the predicted state before the observation.

We pick the random variable v^(k+1) to be distributed as a Gaussian variable of mean zero and variance O(1/N_E), where N_E is the number of samples used in computing the MC approximations of the moments.
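A small sketch of the per-particle objective and the Gaussian observation density used as the particle weight; eps_tilde is assumed to be the Monte Carlo error estimate defined above, and the variance 1/N_E is taken literally as the noise scale.

```python
import numpy as np

def H_s_particle(eps_tilde, l):
    """Average error per moment for one particle: (2 * eps_tilde / l) ** 0.5."""
    return np.sqrt(2.0 * eps_tilde / l)

def g_obs(y, h_value, N_E):
    """Gaussian observation density g(y | alpha_j*) with mean H_s(alpha_j*)
    and variance of order 1/N_E."""
    var = 1.0 / N_E
    return np.exp(-0.5 * (y - h_value) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
```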

Page 19

Numerical results

The exponential density whose parameters are to be estimated is given by

p(x, α) = exp(−⟨α, ψ(x)⟩) / Z(α),

where α = (α_1, ..., α_24) and x = (x_1, ..., x_4). The potential function vector ψ(x) is defined as

ψ_{1-4}(x) = x_i, for i = 1, ..., 4
ψ_{5-14}(x) = x_i x_j, for i, j = 1, ..., 4 and j ≥ i
ψ_{15-24}(x) = x_i^2 x_j^2, for i, j = 1, ..., 4 and j ≥ i   (4)

In addition, to ensure the integrability of p(x, α), we enforce α_5, α_9, α_12, α_14 and α_15, ..., α_24 to be nonnegative.
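A sketch of the 24-component potential vector ψ(x) for d = 4, following the index ordering of Eq. (4) (i before j, with j ≥ i); the precise ordering beyond what the slide states is an assumption.

```python
import numpy as np

def psi(x):
    """24 features for a 4-dimensional x: x_i, then x_i*x_j (j >= i),
    then x_i^2 * x_j^2 (j >= i)."""
    x = np.asarray(x, dtype=float)
    feats = list(x)                                                      # psi_1..4
    feats += [x[i] * x[j] for i in range(4) for j in range(i, 4)]        # psi_5..14
    feats += [x[i]**2 * x[j]**2 for i in range(4) for j in range(i, 4)]  # psi_15..24
    return np.array(feats)
```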

Page 20

[Figure: error (y-axis, 0 to 1) vs. iterations (x-axis, 0 to 20) for No PF, PF, and PFB with gamma = 1.1.]

First example: Independent variables with known density.

PFB corresponds to the particle filter with values of λ assigned according to λ_i^(t+1) = λ_P / γ^(i−1), with γ > 1, for i = 1, ..., B, where B is the number of children of each particle.

Page 21

[Figure: error (y-axis, 0.2 to 1) vs. iterations (x-axis, 0 to 20) for No PF, PF, and PFB with gamma = 1.1.]

Second example: Dependent variables with known density.

We have α_5 = α_9 = α_12 = α_14 = 0.5, α_15 = ... = α_24 = 1, and the rest of the coefficients were set equal to zero.

Page 22

[Figure: error (y-axis, 0.05 to 0.2) vs. iterations (x-axis, 0 to 20) for No PF, PF, and PFB with gamma = 1.1.]

Third example: Sinusoidal signal with additive noise. We have samples from a signal u(t) = sin(2πt) + η_t, where η_t is Gaussian white noise. At any instant t, the noise η_t ∼ N(0, 1).

Page 23

[Figure: error (y-axis, 0.1 to 0.4) vs. iterations (x-axis, 0 to 20) for No PF, PF, and PFB with gamma = 1.1.]

Fourth example: Sinusoidal signal with random phase. We have samples from a signal u(t) = sin(2π(t + η)), where η ∼ N(0, 0.1), i.e. a signal with a random phase.

Page 24

One can also use a filtering approach for deterministic optimization problems (Maslov optimization theory).

P. Stinis, Stochastic global optimization as a filtering problem, Journal of Computational Physics 231 (2012), pp. 2002-2014.
