
Unconstrained Single-Variable Optimization

Yandra Arkeman

Types of minima

• which of the minima is found depends on the starting point

• such minima often occur in real applications

[Figure: plot of f(x) over the feasible region, showing a strong local minimum, a weak local minimum, the strong global minimum, and another strong local minimum.]

Unconstrained univariate optimization

Assume we can start close to the global minimum

How to determine the minimum?

• Search methods (Dichotomous, Fibonacci, Golden-Section)

• Approximation methods
  1. Polynomial interpolation
  2. Newton's method

• Combination of both (algorithm of Davies, Swann, and Campey)

• Inexact line search (Fletcher)

1D function

As an example, consider the function

(assume we do not know the actual function expression from now on)

Search methods

• Start with the interval (“bracket”) [xL, xU] such that the minimum x* lies inside.

• Evaluate f(x) at two points inside the bracket.
• Reduce the bracket.
• Repeat the process.

• Can be applied to any function and differentiability is not essential.

Bracketing and Search in 1D

Bracketing a minimum means that for given a < b < c, we have f(b) < f(a) and f(b) < f(c). Then there is a minimum in the interval (a, c).


Search methods

[Figure: successive reduction of the bracket [xL, xU] for each search method.]

Dichotomous

Fibonacci: 1 1 2 3 5 8 … (successive interval lengths Ik+5, Ik+4, Ik+3, Ik+2, Ik+1, Ik)

Golden-Section Search: divides intervals by K = 1.6180


Line Search

Line search techniques are in essence optimization algorithms for one-dimensional minimization problems.

They are often regarded as the backbones of nonlinear optimization algorithms.

Typically, these techniques search a bracketed interval. Often, unimodality is assumed.

Exhaustive search requires N = (b − a)/ε + 1 calculations to search the above interval, where ε is the resolution.
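For example (numbers chosen here for illustration, not from the slides): with a = 0, b = 10 and ε = 0.1, exhaustive search needs N = 10/0.1 + 1 = 101 function evaluations.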


Basic bracketing algorithm

Two-point search (dichotomous search) for finding the minimum of ƒ(x):

0) Assume an interval [a, b].

1) Find x1 = a + (b − a)/2 − ε/2 and x2 = a + (b − a)/2 + ε/2, where ε is the resolution.

2) Compare ƒ(x1) and ƒ(x2).

3) If ƒ(x1) < ƒ(x2), then eliminate x > x2 and set b = x2.
   If ƒ(x1) > ƒ(x2), then eliminate x < x1 and set a = x1.
   If ƒ(x1) = ƒ(x2), then pick another pair of points.

4) Continue placing point pairs until the interval < 2ε.

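A minimal Python sketch of this procedure (the name dichotomous_search and the handling of the equal-values case are choices made here, not given on the slides; when ƒ(x1) = ƒ(x2) the sketch shrinks to [x1, x2] instead of picking another pair, which is valid for a unimodal function):

```python
def dichotomous_search(f, a, b, eps):
    """Shrink [a, b] around a minimizer of a unimodal f until b - a < 2*eps."""
    while (b - a) >= 2 * eps:
        mid = a + (b - a) / 2
        x1, x2 = mid - eps / 2, mid + eps / 2   # two probes, eps apart
        f1, f2 = f(x1), f(x2)
        if f1 < f2:
            b = x2              # minimum cannot lie to the right of x2
        elif f1 > f2:
            a = x1              # minimum cannot lie to the left of x1
        else:
            a, b = x1, x2       # equal values: minimum lies between the probes
    return (a + b) / 2          # midpoint of the final interval
```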

Exercise 1

Minimize: f(x) = x^4 − 15x^3 + 72x^2 − 135x

1 ≤ x ≤ 15, error = 0.5
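Using the dichotomous_search sketch from the previous slide, the exercise could be run as follows (an illustrative call; the resulting minimizer is not stated on the slides):

```python
f = lambda x: x**4 - 15*x**3 + 72*x**2 - 135*x
x_star = dichotomous_search(f, 1.0, 15.0, 0.5)   # error (resolution) = 0.5
print(x_star)
```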

Variation of Line Search

• Fibonacci Search
• Golden Section Search
• (The film "Da Vinci Code" is about these "special" numbers)


Fibonacci Search

Fibonacci numbers are 1, 1, 2, 3, 5, 8, 13, 21, 34, …; that is, each number is the sum of the previous two:

Fn = Fn-1 + Fn-2

[Figure: interval [a, b] with interior points x1 and x2; the sub-interval lengths satisfy L1 = L2 + L3.]

It can be derived that

Ln = (L1 + Fn-2 ε) / Fn, where ε is the resolution
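A Python sketch of Fibonacci search (a possible implementation, not taken from the slides; the helper names are illustrative, and the final iteration's probe simply coincides with the kept point rather than being offset by the resolution ε):

```python
def fib(n):
    """Fibonacci numbers F0..Fn with F0 = F1 = 1."""
    F = [1, 1]
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    return F

def fibonacci_search(f, a, b, n):
    """Reduce [a, b] using n evaluations of a unimodal f."""
    F = fib(n)
    x1 = a + (F[n - 2] / F[n]) * (b - a)
    x2 = a + (F[n - 1] / F[n]) * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(1, n - 1):
        if f1 > f2:                  # minimum lies in [x1, b]
            a = x1
            x1, f1 = x2, f2
            x2 = a + (F[n - k - 1] / F[n - k]) * (b - a)
            f2 = f(x2)
        else:                        # minimum lies in [a, x2]
            b = x2
            x2, f2 = x1, f1
            x1 = a + (F[n - k - 2] / F[n - k]) * (b - a)
            f1 = f(x1)
    return (a + b) / 2
```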


5.3 Nonlinear Programming Methods

5.3.1 Single-variable nonlinear programming methods

Golden section search

Golden section ratio: τ = (√5 − 1)/2 = 0.61803…

[Figure: f(x) on the interval [a, b] of length L0, with interior points x1, x2, x3 and sub-interval lengths τL0, (1 − τ)L0, and τ²L0.]

Fig. 5.4. Golden section search.


Golden section search

Length of the initial interval containing the optimum point:

L0 = b – a

The function f(x) is evaluated at the two points:

x1 = a + (1 − τ) L0     (5.19a)

x2 = a + τ L0           (5.19b)

If f(x1) < f(x2), then x* is located in the interval (a, x2).

If f(x1) ≥ f(x2), then x* is located in the interval (x1, b).

Length of the new interval: L1 = x2 − a = b − x1 = τ L0


Golden section search

Length of the interval of uncertainty after N iterations:

LN = τ^N L0     (5.21)

Number of iterations needed for a satisfactory interval of uncertainty, LN:

N ≥ ln(LN / L0) / ln τ     (5.22)
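As an illustrative calculation with formula (5.22) (the numbers reuse the Exercise 1 data and are not given on this slide): for L0 = 15 − 1 = 14 and a target LN = 0.5, N ≥ ln(0.5/14)/ln(0.618) ≈ 6.9, so N = 7 iterations are enough.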

Convergence criteria:

(i)   N ≥ Nmax

(ii)  LN+1 ≤ ε

(iii) |f(xN+1) − f(xN+2)| ≤ ε


Golden Section

[Figure: a segment of length a divided into a longer part b and a shorter part a − b.]

In Golden Section, you try to have b/(a − b) = a/b, which implies b·b = a·a − a·b, i.e. a² − ab − b² = 0.
Solving this quadratic for a gives a = (b ± b·√5)/2, so a/b = −0.618 or 1.618 (the Golden Section ratio).
See also 36 in your book for the derivation.
Note that 1/1.618 = 0.618.


Bracketing a Minimum using Golden Section


Initialize:
  x1 = a + (b − a)*0.382
  x2 = a + (b − a)*0.618
  f1 = ƒ(x1)
  f2 = ƒ(x2)
Loop:
  if f1 > f2 then
    a = x1; x1 = x2; f1 = f2
    x2 = a + (b − a)*0.618
    f2 = ƒ(x2)
  else
    b = x2; x2 = x1; f2 = f1
    x1 = a + (b − a)*0.382
    f1 = ƒ(x1)
  endif
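A direct Python rendering of the pseudocode above (the stopping test on the interval length and the example call are additions made here for illustration):

```python
def golden_section_search(f, a, b, tol=1e-5):
    """Golden-section search for the minimum of a unimodal f on [a, b]."""
    x1 = a + (b - a) * 0.382
    x2 = a + (b - a) * 0.618
    f1, f2 = f(x1), f(x2)
    while (b - a) > tol:
        if f1 > f2:                    # minimum lies in [x1, b]
            a = x1
            x1, f1 = x2, f2
            x2 = a + (b - a) * 0.618
            f2 = f(x2)
        else:                          # minimum lies in [a, x2]
            b = x2
            x2, f2 = x1, f1
            x1 = a + (b - a) * 0.382
            f1 = f(x1)
    return (a + b) / 2

# Illustrative call on the Exercise 1 function:
# golden_section_search(lambda x: x**4 - 15*x**3 + 72*x**2 - 135*x, 1.0, 15.0)
```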

Exercise 2

Newton’s Method

Newton method

• Expand f(x) locally using a Taylor series.

• Find the δx which minimizes this local quadratic approximation.

• Update x.

Fit a quadratic approximation to f(x) using both gradient and curvature information at x.

Newton’s Method

When solving the equation f'(x) = 0 to find a minimum or maximum, one can use the iteration step:

xk+1 = xk − f'(xk) / f''(xk)

where k is the current iteration. Iteration is continued until |xk+1 − xk| < ε, where ε is some specified tolerance.
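A minimal sketch of this iteration in Python (the function names, the analytic derivatives, and the starting point are chosen here for illustration; the Exercise 1 quartic is used as the test function):

```python
def newton_minimize(df, d2f, x0, tol=1e-6, max_iter=50):
    """Iterate x_{k+1} = x_k - f'(x_k)/f''(x_k) until |x_{k+1} - x_k| < tol."""
    x = x0
    for _ in range(max_iter):
        x_new = x - df(x) / d2f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x   # may not have converged if x0 was too far from the solution

# Illustration with f(x) = x**4 - 15*x**3 + 72*x**2 - 135*x (Exercise 1):
df  = lambda x: 4*x**3 - 45*x**2 + 144*x - 135    # f'(x)
d2f = lambda x: 12*x**2 - 90*x + 144              # f''(x)
x_star = newton_minimize(df, d2f, x0=6.0)          # starting guess chosen for illustration
```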

Newton’s Method Diagram

Newton's Method approximates f'(x) as a straight line at xk and obtains a new point xk+1, which is used to approximate the derivative at the next iteration. This is carried on until the new point is sufficiently close to x*.

[Figure: the tangent of f'(x) at xk crosses the x-axis at xk+1, which approaches x*.]

Newton’s Method Comments

• One must ensure that f(xk+1) < f(xk) for finding a minimum and f(xk+1) > f(xk) for finding a maximum.

• Disadvantages:
  – Both the first and second derivatives must be calculated.
  – The initial guess is very important: if it is not close enough to the solution, the method may not converge.

Newton method

• avoids the need to bracket the root
• quadratic convergence (decimal accuracy doubles at every iteration)

Newton method

• Global convergence of Newton's method is poor.
• Often fails if the starting point is too far from the minimum.
• In practice, must be used with a globalization strategy which reduces the step length until function decrease is assured.

Newton’s method

Newton's method for finding a minimum normally has a quadratic convergence rate, but it must be started close enough to the solution to converge.

Example

Thank You
