TRANSCRIPT
ENCI 303 Lecture PS-19
Optimization 2
Overview of lecture
Linear optimization problems.
Unconstrained optimization.
Constrained optimization.
Looking ahead… Next Monday: Optimization case study.
Next Wednesday and Thursday: Network analysis.
Linear optimization problems (1)
Linear optimization problem: Objective function and constraints are all linear in the design variables.
Find x1,…,xn to
maximize c1x1 + … + cnxn
subject to ai1x1 + … + ainxn ≤ bi (inequality constraints)
aj1x1 + … + ajnxn = bj (equality constraints).
Example: The textile example and transportation example are linear optimization problems.
Linear optimization problems (2)
Structure of linear optimization problems:
For a linear optimization problem with two design variables, the feasible region is a polygon and a global optimum occurs at a corner or along an edge of the polygon.
If a global optimum occurs at a corner, then it is unique; if it occurs along an edge, then any other point on that same edge is also a global optimum.
Linear optimization problems (3)
Example:
Find x1 and x2 to
maximize 2x1 + x2
subject to 2x1 − x2 ≤ 8
x1 + 2x2 ≤ 14
x1 + x2 ≥ 4
x1, x2 ≥ 0.
Using the Excel Solver, the solution is x1 = 6 and x2 = 4.
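The same result can be checked outside Excel. Below is a minimal sketch using SciPy's linprog (an assumption: the lecture itself uses the Excel Solver). linprog minimizes, so the objective is negated, and the ≥ constraint is flipped to fit the A_ub @ x ≤ b_ub form.

```python
# A minimal sketch of the example LP with scipy.optimize.linprog:
# maximize 2*x1 + x2 by minimizing -(2*x1 + x2); the constraint
# x1 + x2 >= 4 is rewritten as -x1 - x2 <= -4.
from scipy.optimize import linprog

c = [-2, -1]                      # coefficients of -(2*x1 + x2)
A_ub = [[2, -1],                  # 2*x1 -  x2 <=  8
        [1,  2],                  #   x1 + 2*x2 <= 14
        [-1, -1]]                 # -(x1 + x2) <= -4, i.e. x1 + x2 >= 4
b_ub = [8, 14, -4]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)                      # expected: [6. 4.]
```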
Linear optimization problems (4)
(…continued)
(Figure: graph of the feasible region bounded by the lines 2x1 − x2 = 8, x1 + 2x2 = 14, x1 + x2 = 4, x1 = 0 and x2 = 0, with the global maximum marked at x1 = 6, x2 = 4.)
Linear optimization problems (5)
Exercise: In a factory producing electronic components, x1 is the number of batches of resistors and x2 the number of batches of capacitors produced per week. Each batch of resistors makes 7 units of profit and each batch of capacitors makes 13 units of profit.
Both resistors and capacitors require a two-stage process to produce. In any given week, at most 18 units of time may be allocated to processes in stage 1, and at most 54 units of time to processes in stage 2.
A batch of resistors requires 1 unit of time in stage 1 and 5 units of time in stage 2.
A batch of capacitors requires 3 units of time in stage 1 and 6 units of time in stage 2.
How many batches of resistors and capacitors should be produced each week so as to maximize profit?
Linear optimization problems (6)
Exercise: (…continued)
What are the design variables?
What is the objective function?
What are the constraints?
The design variables are x1 and x2.
The objective function is the profit, 7x1 + 13x2, to be maximized.
The constraints are
x1 + 3x2 ≤ 18 (stage 1 time)
5x1 + 6x2 ≤ 54 (stage 2 time)
x1, x2 ≥ 0.
Linear optimization problems (7)
Exercise: (…continued)
Show the feasible region on a graph and use it to find the optimum solution.
Solution is x1 = 6, x2 = 4.
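As a cross-check of the graphical method, the sketch below enumerates the corner points of the feasible polygon and evaluates the profit at each, mirroring the fact that an LP optimum occurs at a vertex. The use of NumPy here is an assumption, not part of the lecture.

```python
# Corner-point enumeration for the exercise LP: intersect every pair of
# boundary lines, keep the feasible intersections, and take the vertex
# with the largest profit 7*x1 + 13*x2.
from itertools import combinations
import numpy as np

# Each boundary line written as a*x1 + b*x2 = c.
lines = [(1, 3, 18),   # stage 1:   x1 + 3*x2 = 18
         (5, 6, 54),   # stage 2: 5*x1 + 6*x2 = 54
         (1, 0, 0),    # x1 = 0
         (0, 1, 0)]    # x2 = 0

def feasible(x1, x2, tol=1e-9):
    return (x1 + 3*x2 <= 18 + tol and 5*x1 + 6*x2 <= 54 + tol
            and x1 >= -tol and x2 >= -tol)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
    A = np.array([[a1, b1], [a2, b2]], dtype=float)
    if abs(np.linalg.det(A)) < 1e-12:   # parallel lines: no vertex
        continue
    x1, x2 = np.linalg.solve(A, [c1, c2])
    if feasible(x1, x2):
        profit = 7*x1 + 13*x2
        if best is None or profit > best[0]:
            best = (profit, x1, x2)

print(best)   # expected: (94.0, 6.0, 4.0)
```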
Linear optimization problems (8)
With n design variables, the feasible region is a polytope in n dimensions, whose boundaries are (n−1)-dimensional hyperplanes.
If a unique global optimum exists, it will occur at one of the corners of the polytope.
An algorithm for finding a global optimum for a linear optimization problem is the simplex method. It works by moving from one corner of the feasible region polytope to another along the boundaries, to locate one that optimizes the objective function.
If one or more design variables are integer-valued, the branch and bound algorithm is used to solve a sequence of linear optimization problems using the simplex method, with additional constraints imposed at each stage to force integer design variables to take integer values.
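A branch-and-bound MILP solver is available off the shelf; the sketch below assumes SciPy ≥ 1.9, whose scipy.optimize.milp uses the HiGHS branch-and-bound implementation (not necessarily the exact classroom procedure). Here the exercise LP is re-solved with x1 and x2 forced to be integers; since the relaxed optimum (6, 4) is already integral, the integer solution coincides with it.

```python
# A hedged sketch of integer LP via branch and bound (scipy.optimize.milp).
import numpy as np
from scipy.optimize import milp, LinearConstraint

c = np.array([-7.0, -13.0])                 # minimize -(7*x1 + 13*x2)
constraints = LinearConstraint(
    np.array([[1, 3], [5, 6]]),             # stage-1 and stage-2 time rows
    ub=np.array([18, 54]))
res = milp(c, constraints=constraints,
           integrality=np.ones(2))          # 1 = integer variable; default bounds x >= 0
print(res.x)                                # expected: [6. 4.]
```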
Unconstrained optimization (1)
Unconstrained optimization problem: Objective function is, in general, a nonlinear function of the design variables, and there are no constraints on the design variables.
Find x1,…,xn to
maximize f(x1,…,xn).
Example: Least squares estimation in linear regression is an example of unconstrained nonlinear optimization.
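To make the connection concrete, the sketch below fits a straight line to made-up data (the data and variable names are illustrative, not from the lecture): least squares chooses the intercept b0 and slope b1 that minimize the sum of squared residuals, an unconstrained problem whose stationarity conditions are the normal equations.

```python
# Least squares as unconstrained optimization: minimize
# f(b0, b1) = sum_i (y_i - b0 - b1*x_i)^2 by solving grad f = 0,
# which numpy.linalg.lstsq does directly.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])          # hypothetical observations
y = np.array([2.1, 3.9, 6.2, 8.1])

A = np.column_stack([np.ones_like(x), x])   # design matrix [1, x]
b0, b1 = np.linalg.lstsq(A, y, rcond=None)[0]
print(b0, b1)                               # fitted intercept and slope
```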
Unconstrained optimization (2)
Example: (figure)
The displacements, dx and dy, of a nonlinear spring system with two springs, under an applied load, can be obtained by minimizing the potential energy
E = ½(k1δ1² + k2δ2²) − Fxdx − Fydy,
where Fx and Fy are the forces in the x and y directions resulting from the applied load, k1 and k2 are the spring constants, and δ1 and δ2 are the extensions of the springs, which are related to the displacements according to
δ1 = √[(10 + dx)² + (10 + dy)²] − 10√2,
δ2 = √[(10 − dx)² + (10 + dy)²] − 10√2.
Unconstrained optimization (3)
Example: (…continued)
If k1 = 1, k2 = 2, Fx = 0 and Fy = 2, find dx and dy.
This is an unconstrained nonlinear optimization problem:
Find dx and dy to
minimize E = ½(k1δ1² + k2δ2²) − Fxdx − Fydy.
Using the Excel Solver, the solution is dx = 0.46, dy = 1.35.
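The same minimization can be reproduced with SciPy instead of the Excel Solver; a minimal sketch, using the energy and extension formulas from the previous slide with k1 = 1, k2 = 2, Fx = 0, Fy = 2:

```python
# Sketch: minimize the two-spring potential energy numerically.
import numpy as np
from scipy.optimize import minimize

k1, k2, Fx, Fy = 1.0, 2.0, 0.0, 2.0

def energy(d):
    dx, dy = d
    d1 = np.hypot(10 + dx, 10 + dy) - 10 * np.sqrt(2)   # extension of spring 1
    d2 = np.hypot(10 - dx, 10 + dy) - 10 * np.sqrt(2)   # extension of spring 2
    return 0.5 * (k1 * d1**2 + k2 * d2**2) - Fx * dx - Fy * dy

res = minimize(energy, x0=[0.0, 0.0])    # quasi-Newton (BFGS) by default
print(res.x)                             # expected: approximately [0.46 1.35]
```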
Unconstrained optimization (4)
Notations and definitions:
Let x = (x1,…,xn) be the vector of design variables.
The gradient vector, ∇f, and Hessian, ∇²f, of f(x) are the column vector and n×n symmetric matrix defined by
∇f = (∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn)T,
∇²f = [ ∂²f/∂x1²      ∂²f/∂x1∂x2    …   ∂²f/∂x1∂xn
        ∂²f/∂x2∂x1    ∂²f/∂x2²      …   ∂²f/∂x2∂xn
        …             …                 …
        ∂²f/∂xn∂x1    ∂²f/∂xn∂x2    …   ∂²f/∂xn²   ].
Unconstrained optimization (5)
Result: The sufficient conditions for a point x* to be a local optimum of f(x) are
∇f(x*) = 0
and
∇²f(x*) is positive definite for a local minimum, negative definite for a local maximum.
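A minimal numerical check of these conditions, using the illustrative function f(x1, x2) = x1² + x2² (also used later in the steepest descent example), whose stationary point is x* = (0, 0). Definiteness is tested via the eigenvalues of the Hessian.

```python
# Check grad f(x*) = 0 and classify x* from the Hessian's eigenvalues.
import numpy as np

grad = np.array([0.0, 0.0])             # (2*x1, 2*x2) evaluated at (0, 0)
hessian = np.array([[2.0, 0.0],         # Hessian of f, constant here
                    [0.0, 2.0]])

eigenvalues = np.linalg.eigvalsh(hessian)
assert np.allclose(grad, 0)             # first-order condition holds
print("local minimum" if np.all(eigenvalues > 0) else
      "local maximum" if np.all(eigenvalues < 0) else "inconclusive")
```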
Unconstrained optimization (6)
Methods for solving unconstrained optimization problems are iterative in nature, i.e. they move from one point to another until they get to an optimum solution or close to one.
All of the methods have four basic components:
A starting point, x0.
Search direction d = (d1,…,dn).
Step size α > 0.
Stopping rule.
Unconstrained optimization (7)
In the first iteration, a search direction vector d0 and step size α0 are computed.
The algorithm moves from the starting point x0 to a new point x1 according to
x1 = x0 + α0d0.
The search direction and step size are chosen so that
f(x1) < f(x0) for a minimization problem;
f(x1) > f(x0) for a maximization problem.
f(x1) is computed and the stopping rule is checked to see whether to stop the algorithm.
Unconstrained optimization (8)
The steps for iteration k are
Compute search direction vector dk−1.
Compute step size αk−1.
Compute new point: xk = xk−1 + αk−1dk−1.
Compute f(xk).
Check stopping rule: If it is satisfied, stop with solution xk; otherwise, do another iteration.
The search direction and step size are chosen so that
f(xk) < f(xk−1) for a minimization problem;
f(xk) > f(xk−1) for a maximization problem.
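The five steps translate directly into a loop. Below is a generic sketch in which the direction and step-size rules are passed in as functions, so the same template serves any of the methods that follow; the names (minimize_iteratively, direction_fn, step_fn) are illustrative, not from the lecture.

```python
# Generic iterative minimization template following the steps above.
import numpy as np

def minimize_iteratively(f, direction_fn, step_fn, x0, eps=1e-8, k_max=1000):
    x = np.asarray(x0, dtype=float)
    for k in range(1, k_max + 1):
        d = direction_fn(x)                  # step 1: search direction d_{k-1}
        alpha = step_fn(x, d)                # step 2: step size alpha_{k-1}
        x_new = x + alpha * d                # step 3: x_k = x_{k-1} + alpha*d
        f_new, f_old = f(x_new), f(x)        # step 4: evaluate f(x_k)
        if abs(f_new - f_old) < eps * max(abs(f_old), 1.0):  # step 5: stop?
            return x_new, k
        x = x_new
    return x, k_max

# Steepest descent with a fixed step on f(x) = x1^2 + x2^2 (illustrative):
f = lambda x: x[0]**2 + x[1]**2
x_star, iters = minimize_iteratively(
    f,
    direction_fn=lambda x: -np.array([2*x[0], 2*x[1]]),  # -grad f
    step_fn=lambda x, d: 0.25,                           # fixed alpha
    x0=[1.0, 2.0])
print(x_star, iters)    # converges toward [0, 0]
```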
Unconstrained optimization (9)
Choice of starting point x0 is important because the methods can only find the local optimum that is closest to the starting point.
Even though the design variables are unconstrained, it is usually possible in practice to specify upper and lower bounds for the variables.
A grid can then be defined between those bounds and a grid search performed to obtain a starting point: compute the value of the objective function at each grid point, and choose the point with the smallest (for a minimization problem) or largest (for a maximization problem) value as the starting point.
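A short sketch of the grid-search idea for a minimization problem, with assumed bounds [−5, 5] on each variable and a hypothetical objective:

```python
# Grid search for a starting point: evaluate f at every grid point and
# return the point with the smallest objective value.
import numpy as np

def grid_search_start(f, lower, upper, points_per_axis=21):
    axes = [np.linspace(lo, hi, points_per_axis)
            for lo, hi in zip(lower, upper)]
    grids = np.meshgrid(*axes, indexing="ij")
    values = f(*grids)                          # objective at every grid point
    best = np.unravel_index(np.argmin(values), values.shape)
    return np.array([g[best] for g in grids])   # grid point with smallest f

f = lambda x1, x2: (x1 - 1)**2 + (x2 + 2)**2    # hypothetical objective
x0 = grid_search_start(f, lower=[-5, -5], upper=[5, 5])
print(x0)                                       # expected: [1. -2.]
```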
Unconstrained optimization (10)
Some common stopping rules:
Upper bound on computation time: Stop if t > tmax.
Upper bound on number of iterations: Stop if k > kmax.
Lower bound on relative change in objective function values: Stop if
|f(xk) − f(xk−1)| / |f(xk−1)| < ε
for some small positive ε.
Lower bound on the norm of the gradient vector: Stop if
‖∇f(xk)‖ = √[(∂f/∂x1)² + … + (∂f/∂xn)²] < ε,
with the partial derivatives evaluated at xk, for some small positive ε.
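The two quantitative rules are simple boolean tests; a sketch (the argument names are illustrative; f_prev, f_curr and grad_curr would come from the current iteration of any of the methods):

```python
# Stopping tests corresponding to the last two rules above.
import numpy as np

def small_relative_change(f_prev, f_curr, eps=1e-8):
    # |f(x_k) - f(x_{k-1})| / |f(x_{k-1})| < eps
    return abs(f_curr - f_prev) < eps * abs(f_prev)

def small_gradient_norm(grad_curr, eps=1e-8):
    # ||grad f(x_k)|| < eps
    return np.linalg.norm(grad_curr) < eps
```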
Unconstrained optimization (11)
The methods differ in the way the search direction and step size are computed. We shall look at four methods: Steepest descent method.
Conjugate gradient method.
Newton method.
Quasi-Newton methods.
We shall describe these methods in the context of a minimization problem.
In the Excel Solver, the conjugate gradient method and a quasi-Newton method are available.
Unconstrained optimization: Steepest descent method (1)
At iteration k, must choose dk−1 and αk−1 to get
xk = xk−1 + αk−1dk−1,
so that f(xk) < f(xk−1).
By Taylor’s expansion,
f(xk) = f(xk−1 + αk−1dk−1) ≈ f(xk−1) + αk−1∇f(xk−1)T dk−1,
and so to achieve f(xk) < f(xk−1), must have
f(xk) − f(xk−1) < 0, i.e. αk−1∇f(xk−1)T dk−1 < 0.
Since the step size must be positive, must have
∇f(xk−1)T dk−1 < 0.
Unconstrained optimization: Steepest descent method (2)
(…continued)
Choose dk−1 = −∇f(xk−1) as the search direction. This is called the steepest descent direction. (figure)
Example: If f(x1, x2) = x1² + x2², find the steepest descent direction at xk−1 = (1, 2).
∇f = (2x1, 2x2)T, and so
dk−1 = −∇f(xk−1) = −(2x1, 2x2)T|(1, 2) = (−2, −4)T.
Unconstrained optimization: Steepest descent method (3)
After finding the search direction dk−1, the step size can be found by searching along dk−1 for an α that minimizes f(xk−1 + αdk−1). This is called line search and is itself an optimization problem in a single variable:
Find α to
minimize f(xk−1 + αdk−1)
subject to α > 0.
Unconstrained optimization: Steepest descent method (4)
Example: If xk−1 = (2, 1),
f(x1, x2) = 100(x2 − x1²)² + (1 − x1)²,
and dk−1 = (−1, 0), find the step size αk−1.
xk−1 + αdk−1 = (2 − α, 1), and so:
Find α to
minimize f(xk−1 + αdk−1) = f(2 − α, 1) = 100[1 − (2 − α)²]² + (1 − α)²
subject to α > 0.
Using the Excel Solver, αk−1 = 1.
(Figure: illustration of the steepest descent method.)
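The pieces of this and the previous slides combine into a complete method; a sketch of steepest descent on the function above from x0 = (2, 1), with the line search done by scipy.optimize.minimize_scalar (an assumption; the lecture uses the Excel Solver for the one-dimensional problem):

```python
# Steepest descent with line search on f(x1, x2) = 100*(x2 - x1^2)^2 + (1 - x1)^2.
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    return 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2

def grad(x):
    return np.array([-400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
                     200 * (x[1] - x[0]**2)])

x = np.array([2.0, 1.0])
for k in range(500):
    d = -grad(x)                                   # steepest descent direction
    if np.linalg.norm(d) < 1e-8:                   # gradient-norm stopping rule
        break
    line = minimize_scalar(lambda a: f(x + a * d),
                           bounds=(0, 1), method="bounded")  # line search
    x = x + line.x * d                             # x_k = x_{k-1} + alpha*d

print(x)   # slowly approaches the minimum at (1, 1)
```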
Unconstrained optimization: Conjugate gradient method
Initial search direction is the steepest descent direction: d0 = −∇f(x0).
For k ≥ 2,
dk−1 = −∇f(xk−1) + [∇f(xk−1)T(∇f(xk−1) − ∇f(xk−2)) / ‖∇f(xk−2)‖²] dk−2 (Polak-Ribière conjugate direction)
or
dk−1 = −∇f(xk−1) + [‖∇f(xk−1)‖² / ‖∇f(xk−2)‖²] dk−2 (Fletcher-Reeves conjugate direction).
Step size: Use line search.
(Figure: illustration of the conjugate gradient method.)
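A compact sketch of the Fletcher-Reeves variant, applied to an illustrative quadratic (the test function and the use of minimize_scalar for the line search are assumptions):

```python
# Fletcher-Reeves conjugate gradient with exact line search.
import numpy as np
from scipy.optimize import minimize_scalar

def conjugate_gradient(f, grad, x0, k_max=200, eps=1e-8):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for k in range(k_max):
        if np.linalg.norm(g) < eps:          # gradient-norm stopping rule
            break
        alpha = minimize_scalar(lambda a: f(x + a * d)).x   # line search
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves ratio
        d = -g_new + beta * d                # conjugate direction
        g = g_new
    return x

x = conjugate_gradient(lambda x: x[0]**2 + 10*x[1]**2,
                       lambda x: np.array([2*x[0], 20*x[1]]),
                       x0=[3.0, 1.0])
print(x)   # expected: close to [0, 0]
```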
Unconstrained optimization: Newton method
Quadratic approximation of f(x) using Taylor’s expansion is
f(x) ≈ f(xk−1) + ∇f(xk−1)T(x − xk−1) + ½(x − xk−1)T∇²f(xk−1)(x − xk−1).
Minimum of f(x) is found by setting ∇f = 0, giving
∇²f(xk−1)(x − xk−1) = −∇f(xk−1).
Putting x = xk, we have
∇²f(xk−1)dk−1 = −∇f(xk−1),
and so the search direction is
dk−1 = −[∇²f(xk−1)]⁻¹∇f(xk−1).
Step size αk−1 = 1.
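In code, one Newton step is a linear solve rather than an explicit matrix inverse (mathematically the same direction); a sketch on an illustrative quadratic, which Newton's method minimizes in a single step:

```python
# One Newton step: solve Hessian * d = -grad, then x_new = x + d (alpha = 1).
import numpy as np

def newton_step(grad_val, hess_val):
    return np.linalg.solve(hess_val, -grad_val)   # d = -[hess]^(-1) grad

# For f(x1, x2) = x1^2 + 10*x2^2 at the point (3, 1):
x = np.array([3.0, 1.0])
g = np.array([2 * x[0], 20 * x[1]])               # gradient at x
H = np.array([[2.0, 0.0], [0.0, 20.0]])           # Hessian (constant here)
print(x + newton_step(g, H))                      # expected: [0. 0.] in one step
```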
Unconstrained optimization: Quasi-Newton methods
Search direction: Replace [∇²f(xk−1)]⁻¹ in the search direction for the Newton method by a symmetric, positive definite matrix Hk−1, i.e.
dk−1 = −Hk−1∇f(xk−1).
Hk−1 must satisfy the quasi-Newton condition,
Hk−1[∇f(xk−1) − ∇f(xk−2)] = xk−1 − xk−2,
so that it serves as an approximation to [∇²f(xk−1)]⁻¹.
Step size: Use line search.
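In practice a quasi-Newton method such as BFGS, which maintains exactly this kind of inverse-Hessian approximation, is available off the shelf; a sketch using SciPy on the function from the step-size example:

```python
# BFGS (a quasi-Newton method) via scipy.optimize.minimize.
import numpy as np
from scipy.optimize import minimize

f = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
res = minimize(f, x0=np.array([2.0, 1.0]), method="BFGS")
print(res.x)   # expected: close to [1. 1.]
```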
Constrained optimization (1)
Constrained optimization problem: Objective function is, in general, a nonlinear function of the design variables. Constraints may also involve nonlinear functions of the design variables.
Find x1,…,xn to
maximize f(x1,…,xn)
subject to gi(x1,…,xn) ≤ 0 (inequality constraints)
hj(x1,…,xn) = 0 (equality constraints).
Constrained optimization (2)
The Excel Solver uses the generalized reduced gradient method for constrained optimization. The method has the same basic components (i.e. starting point, search direction, step size and stopping rule) as the unconstrained optimization methods, but differs in the details, which enable it to handle the constraints.
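SciPy does not implement the generalized reduced gradient method; as an illustration of the same problem form only, the sketch below solves a small hypothetical constrained problem with the SLSQP method instead. Note SciPy's inequality convention is g(x) ≥ 0, the opposite sign to the formulation above.

```python
# Illustrative constrained problem (not GRG; SLSQP used instead):
# maximize x1*x2 subject to x1 + x2 = 10, x1 - x2 <= 4, x1, x2 >= 0.
import numpy as np
from scipy.optimize import minimize

objective = lambda x: -(x[0] * x[1])               # maximize => minimize -f
constraints = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 10},   # h(x) = 0
    {"type": "ineq", "fun": lambda x: 4 - (x[0] - x[1])},  # g(x) >= 0 in SciPy
]
res = minimize(objective, x0=[1.0, 1.0], method="SLSQP",
               constraints=constraints, bounds=[(0, None), (0, None)])
print(res.x)   # expected: [5. 5.]
```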
Reading assignment
Next lecture: Sec. 11.1, 11.5