Introduction to Optimization (Part 1)
Daniel Kirschen
Economic dispatch problem
• Several generating units serving the load
• What share of the load should each generating unit produce?
• Consider the limits of the generating units
• Ignore the limits of the network

[Figure: generating units A, B and C supplying load L]
© 2011 D. Kirschen and University of Washington
Characteristics of the generating units
• Thermal generating units
• Consider the running costs only
• Input/output curve
  – Fuel vs. electric power
• Fuel consumption measured by its energy content
• Upper and lower limit on the output of the generating unit

[Figure: boiler-turbine-generator (B-T-G) diagram, fuel in, electric power out; input/output curve with input in J/h against output in MW between Pmin and Pmax]
Cost Curve
• Multiply fuel input by fuel cost
• No-load cost
  – Cost of keeping the unit running if it could produce zero MW

[Figure: cost curve, cost in $/h against output in MW between Pmin and Pmax; the intercept at zero output is the no-load cost]
Incremental Cost Curve
• Incremental cost curve
  – Derivative of the cost curve
  – In $/MWh
  – Cost of the next MWh
[Figure: cost curve ($/h against MW) with slope ∆F/∆P, and the corresponding incremental cost curve ($/MWh against MW)]
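The cost curve and incremental cost curve above can be sketched numerically. This is a minimal sketch assuming a quadratic cost curve; the shape and the coefficients are illustrative assumptions, not values from the lecture.

```python
NO_LOAD = 100.0   # $/h, cost of keeping the unit running at zero MW (assumed)
B = 20.0          # $/MWh, linear fuel-cost coefficient (assumed)
C = 0.05          # $/MW^2/h, quadratic fuel-cost coefficient (assumed)

def cost(p_mw):
    """Hourly running cost in $/h: no-load cost plus fuel cost."""
    return NO_LOAD + B * p_mw + C * p_mw ** 2

def incremental_cost(p_mw):
    """Derivative of the cost curve in $/MWh: the cost of the next MWh."""
    return B + 2 * C * p_mw
```

Note how the incremental cost grows with output: the next MWh is more expensive the harder the unit is already working.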
Mathematical formulation
• Objective function:

  minimize C_A(P_A) + C_B(P_B) + C_C(P_C)

• Constraints
  – Load/generation balance:  P_A + P_B + P_C = L
  – Unit constraints:  Pmin_i ≤ P_i ≤ Pmax_i  for i = A, B, C
[Figure: generating units A, B and C supplying load L]
This is an optimization problem
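The dispatch problem above can be solved by brute force for a small case. This is a sketch for two units and a 500 MW load, with hypothetical quadratic cost curves; the balance constraint holds by construction because the second unit picks up whatever the first does not produce (unit limits are ignored, as in the problem statement).

```python
def c1(x1):
    return 20.0 * x1 + 0.05 * x1 ** 2   # $/h, hypothetical cost of unit 1

def c2(x2):
    return 25.0 * x2 + 0.10 * x2 ** 2   # $/h, hypothetical cost of unit 2

LOAD = 500  # MW

# Enumerate every integer split of the load and keep the cheapest one.
total_cost, x1_opt = min(
    (c1(x1) + c2(LOAD - x1), x1) for x1 in range(LOAD + 1)
)
x2_opt = LOAD - x1_opt
```

At the cheapest split the two units end up with equal incremental costs, which is exactly the condition derived later in the lecture.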
Introduction to Optimization
“An engineer can do with one dollar which any bungler can do with two”
A. M. Wellington (1847-1895)
Objective
• Most engineering activities have an objective:
  – Achieve the best possible design
  – Achieve the most economical operating conditions
• This objective is usually quantifiable
• Examples:
  – Minimize the cost of building a transformer
  – Minimize the cost of supplying power
  – Minimize the losses in a power system
  – Maximize the profit from a bidding strategy
Decision Variables
• The value of the objective is a function of some decision variables:

  f(x1, x2, …, xn)

• Examples of decision variables:
  – Dimensions of the transformer
  – Output of generating units, position of taps
  – Parameters of bids for selling electrical energy
Optimization Problem
• What value should the decision variables take so that f(x1, x2, …, xn) is minimum or maximum?
Example: function of one variable
[Figure: f(x) plotted against x; f(x) is maximum for x = x*]
Minimization and Maximization
[Figure: f(x) and -f(x) plotted against x; if x = x* maximizes f(x), then it minimizes -f(x)]
Minimization and Maximization
• Maximizing f(x) is thus the same as minimizing g(x) = -f(x)
• Minimization and maximization problems are thus interchangeable
• Depending on the problem, the optimum is either a maximum or a minimum
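The interchangeability of maximization and minimization is easy to check numerically. This sketch uses a hypothetical function f(x) = -(x - 2)² + 3 evaluated on a grid; the function is an assumption for illustration only.

```python
def f(x):
    return -(x - 2) ** 2 + 3   # hypothetical function, maximum at x = 2

xs = [i / 100 for i in range(-500, 501)]      # grid on [-5, 5]

x_max = max(xs, key=f)                        # maximizes f(x)
x_min = min(xs, key=lambda x: -f(x))          # minimizes g(x) = -f(x)
```

Both searches land on the same point, so a minimization routine is all one ever needs.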
Necessary Condition for Optimality
[Figure: f(x) with a maximum at x = x*; at the optimum, the derivative df/dx is zero]
Necessary Condition for Optimality
[Figure: the tangent to f(x) is horizontal at the stationary point x = x*]
Example
[Figure: f(x) with several stationary points]

For what values of x is df/dx = 0?

In other words, for what values of x is the necessary condition for optimality satisfied?
Example
• A, B, C, D are stationary points
• A and D are maxima
• B is a minimum
• C is an inflexion point

[Figure: f(x) with stationary points A, B, C and D]
How can we distinguish minima and maxima?
[Figure: f(x) with stationary points A, B, C and D]
The objective function is concave around a maximum
How can we distinguish minima and maxima?
[Figure: f(x) with stationary points A, B, C and D]

The objective function is convex around a minimum
How can we distinguish minima and maxima?
[Figure: f(x) with stationary points A, B, C and D]
The objective function is flat around an inflexion point
Necessary and Sufficient Conditions of Optimality
• Necessary condition:

  df/dx = 0

• Sufficient condition:
  – For a maximum:  d²f/dx² < 0
  – For a minimum:  d²f/dx² > 0
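These two conditions can be checked mechanically. A minimal sketch on the hypothetical function f(x) = x³ - 3x, whose stationary points are x = -1 and x = 1 (the function is an assumption, not from the slides):

```python
def fprime(x):
    return 3 * x ** 2 - 3      # necessary condition: f'(x*) = 0

def fsecond(x):
    return 6 * x               # sufficient condition: the sign of f''(x*)

stationary = [-1.0, 1.0]       # roots of f'(x) = 0

kinds = {
    x: "maximum" if fsecond(x) < 0 else "minimum"
    for x in stationary
}
```

The second derivative is negative at x = -1 (a maximum, concave there) and positive at x = 1 (a minimum, convex there).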
Isn’t all this obvious?
• Can’t we tell all this by looking at the objective function?
  – Yes, for a simple, one-dimensional case where we know the shape of the objective function
  – For complex, multi-dimensional cases (i.e. with many decision variables) we cannot visualize the shape of the objective function
  – We must then rely on mathematical techniques
Feasible Set
• The values that the decision variables can take are usually limited
• Examples:
  – Physical dimensions of a transformer must be positive
  – Active power output of a generator may be limited to a certain range (e.g. 200 MW to 500 MW)
  – Reactive power output of a generator may be limited to a certain range (e.g. -100 MVAr to 150 MVAr)
Feasible Set
[Figure: f(x) on the feasible set xMIN ≤ x ≤ xMAX, with stationary points A and D]
The values of the objective function outside the feasible set do not matter
Interior and Boundary Solutions
• A and D are interior maxima
• B and E are interior minima
• xMIN is a boundary minimum
• xMAX is a boundary maximum

[Figure: f(x) with interior optima A, B, D, E and boundary optima at xMIN and xMAX]

The boundary optima do not satisfy the optimality conditions!
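A boundary optimum is easy to exhibit numerically. This sketch assumes the hypothetical f(x) = x² on the feasible set [1, 3]: the minimum sits on the boundary x = 1 even though the derivative there is 2, not 0.

```python
X_MIN, X_MAX = 1.0, 3.0          # assumed feasible set

xs = [X_MIN + i * (X_MAX - X_MIN) / 1000 for i in range(1001)]

best_x = min(xs, key=lambda x: x ** 2)   # minimize f(x) = x**2 over the set
slope_at_best = 2 * best_x               # f'(x) = 2x at the constrained minimum
```

The stationarity condition df/dx = 0 only identifies interior optima; boundary points must be checked separately.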
Two-Dimensional Case
[Figure: surface f(x1, x2) over the (x1, x2) plane]

f(x1, x2) is minimum for x1 = x1*, x2 = x2*
Necessary Conditions for Optimality
[Figure: surface f(x1, x2); at the optimum (x1*, x2*) both partial derivatives vanish: ∂f/∂x1 = 0 and ∂f/∂x2 = 0]
Multi-Dimensional Case
At a maximum or minimum value of f(x1, x2, …, xn) we must have:

  ∂f/∂x1 = 0;  ∂f/∂x2 = 0;  …;  ∂f/∂xn = 0

A point where these conditions are satisfied is called a stationary point
Sufficient Conditions for Optimality
[Figure: surfaces f(x1, x2) showing a minimum and a maximum]
Sufficient Conditions for Optimality
[Figure: surface f(x1, x2) with a saddle point]
Sufficient Conditions for Optimality
Calculate the Hessian matrix of second derivatives at the stationary point:

  H = [ ∂²f/∂xi∂xj ],   i, j = 1, …, n
Sufficient Conditions for Optimality
• Calculate the eigenvalues of the Hessian matrix at the stationary point
• If all the eigenvalues are greater than or equal to zero:
  – The matrix is positive semi-definite
  – The stationary point is a minimum
• If all the eigenvalues are less than or equal to zero:
  – The matrix is negative semi-definite
  – The stationary point is a maximum
• If some of the eigenvalues are positive and others are negative:
  – The stationary point is a saddle point
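The eigenvalue test above translates directly into code. A minimal sketch using NumPy's `eigvalsh` (which handles symmetric matrices such as Hessians); the example matrices are assumptions for illustration:

```python
import numpy as np

def classify(hessian):
    """Classify a stationary point from the eigenvalues of its Hessian."""
    eig = np.linalg.eigvalsh(np.asarray(hessian, dtype=float))
    if np.all(eig >= 0):
        return "minimum"        # positive semi-definite Hessian
    if np.all(eig <= 0):
        return "maximum"        # negative semi-definite Hessian
    return "saddle point"       # mixed-sign eigenvalues
```

A diagonal Hessian makes the test transparent: the eigenvalues are simply the diagonal entries.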
Contours
[Figure: surface f(x1, x2) sliced at levels F1 and F2; each slice projects onto a contour in the (x1, x2) plane]
Contours
[Figure: nested contours around a minimum or maximum in the (x1, x2) plane]

A contour is the locus of all the points that give the same value to the objective function
Example 1
For f(x1, x2) = x1² + x2², the necessary conditions give

  ∂f/∂x1 = 2x1 = 0;  ∂f/∂x2 = 2x2 = 0

so (x1, x2) = (0, 0) is a stationary point
Example 1

Sufficient conditions for optimality: the Hessian

  H = [ 2  0
        0  2 ]

must be positive definite (i.e. all its eigenvalues must be positive). Both eigenvalues are 2, so the stationary point is a minimum
Example 1
[Figure: circular contours C = 1, C = 4 and C = 9 around the minimum at the origin, where C = 0]
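This example can be verified numerically. The sketch assumes f(x1, x2) = x1² + x2², which is consistent with the circular contours C = 1, 4, 9 and the minimum C = 0 at the origin shown on this slide:

```python
def f(x1, x2):
    return x1 ** 2 + x2 ** 2

def grad(x1, x2):
    """Gradient of f: (∂f/∂x1, ∂f/∂x2)."""
    return (2 * x1, 2 * x2)
```

The gradient vanishes only at the origin, and the contour levels match the radii 1, 2 and 3.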
Example 2
For f(x1, x2) = x1² - x2², the necessary conditions give

  ∂f/∂x1 = 2x1 = 0;  ∂f/∂x2 = -2x2 = 0

so (x1, x2) = (0, 0) is a stationary point
Example 2

Sufficient conditions for optimality: the Hessian

  H = [ 2   0
        0  -2 ]

has one positive and one negative eigenvalue, so the stationary point is a saddle point
Example 2
[Figure: contours C = ±1, ±4, ±9 around the saddle point at the origin; the C = 0 contours are the two straight lines through the origin]
This stationary point is a saddle point
Optimization with Constraints
Optimization with Equality Constraints
• There are usually restrictions on the values that the decision variables can take
  minimize f(x1, x2, …, xn)                (objective function)

  subject to:
  ω1(x1, …, xn) = L1
  …
  ωm(x1, …, xn) = Lm                       (equality constraints)
Number of Constraints
• N decision variables
• M equality constraints
• If M > N, the problem is over-constrained
  – There is usually no solution
• If M = N, the problem is fully determined
  – There may be a solution
• If M < N, the problem is under-constrained
  – There is usually room for optimization
Example 1
[Figure: contours of the objective function in the (x1, x2) plane; the constrained minimum lies on the constraint curve]
Example 2: Economic Dispatch
[Figure: generators G1 (output x1) and G2 (output x2) supplying load L]

Cost of running unit 1:  C1(x1)
Cost of running unit 2:  C2(x2)
Total cost:  C1(x1) + C2(x2)

Optimization problem:

  minimize C1(x1) + C2(x2)  subject to  x1 + x2 = L
Solution by substitution
Substituting the constraint x2 = L - x1 into the objective turns the problem into an unconstrained minimization in x1 alone:

  minimize C1(x1) + C2(L - x1)
Solution by substitution
• Difficult
• Usually impossible when the constraints are non-linear
• Provides little or no insight into the solution
• Alternative: solution using Lagrange multipliers
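For the two-unit dispatch the substitution is still tractable. A sketch with hypothetical quadratic costs C1 = 20·x1 + 0.05·x1² and C2 = 25·x2 + 0.10·x2² and a 500 MW load: substitute x2 = L - x1 and set the derivative of the total cost to zero.

```python
L_MW = 500.0   # assumed load

# d/dx1 [ C1(x1) + C2(L - x1) ] = 20 + 0.1*x1 - 25 - 0.2*(L - x1) = 0
# => 0.3*x1 = 5 + 0.2*L
x1 = (5.0 + 0.2 * L_MW) / 0.3
x2 = L_MW - x1
```

The closed-form answer exists only because the costs are quadratic and there is a single linear constraint; with non-linear constraints this elimination usually fails, which is the point of the slide.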
Gradient

The gradient of f(x1, x2, …, xn) is the vector of its partial derivatives:

  ∇f = ( ∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn )
Properties of the Gradient
• Each component of the gradient vector indicates the rate of change of the function in that direction
• The gradient indicates the direction in which a function of several variables increases most rapidly
• The magnitude and direction of the gradient usually depend on the point considered
• At each point, the gradient is perpendicular to the contour of the function
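These properties can be checked with a central-difference approximation of the gradient. The function f(x, y) = x² + 2y² is a hypothetical example chosen so the analytic gradient (2x, 4y) is easy to compare against:

```python
def f(x, y):
    return x ** 2 + 2 * y ** 2   # assumed example function

def gradient(x, y, h=1e-6):
    """Central-difference estimate of (∂f/∂x, ∂f/∂y) at (x, y)."""
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return dfdx, dfdy

gx, gy = gradient(1.0, 1.0)      # analytic gradient at (1, 1) is (2, 4)
```

Note that the gradient at (1, 1) differs from the gradient at other points on the same contour: its magnitude and direction depend on the point considered.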
Example 3
[Figure: gradient vectors of a function drawn at points A, B, C and D in the (x, y) plane]
Example 4
[Figure: gradient field of a function in the (x, y) plane]
Lagrange multipliers

[Figures: the constrained problem shown graphically, with contours of the objective function and the constraint in the (x1, x2) plane]
Lagrange multipliers

The solution must be on the constraint
[Figure: points A and B on the constraint, with the gradient of f drawn at each]

To reduce the value of f, we must move in a direction opposite to the gradient
Lagrange multipliers

• We stop when the gradient of the function is perpendicular to the constraint, because moving further would increase the value of the function

At the optimum, the gradient of the function is parallel to the gradient of the constraint
[Figure: moving along the constraint from A through B to the optimum C]
Lagrange multipliers

At the optimum, we must have ∇f parallel to ∇ω, which can be expressed as:

  ∇f = λ ∇ω

λ is called the Lagrange multiplier.

The constraint must also be satisfied:

  ω(x1, x2) = L

In terms of the co-ordinates:

  ∂f/∂x1 = λ ∂ω/∂x1
  ∂f/∂x2 = λ ∂ω/∂x2
  ω(x1, x2) = L
Lagrangian function

To simplify the writing of the conditions for optimality, it is useful to define the Lagrangian function:

  ℓ(x1, x2, λ) = f(x1, x2) + λ(L - ω(x1, x2))

The necessary conditions for optimality are then given by setting the partial derivatives of the Lagrangian to zero:

  ∂ℓ/∂x1 = ∂f/∂x1 - λ ∂ω/∂x1 = 0
  ∂ℓ/∂x2 = ∂f/∂x2 - λ ∂ω/∂x2 = 0
  ∂ℓ/∂λ = L - ω(x1, x2) = 0
Example
[Figure: contours of the objective function and the constraint line in the (x1, x2) plane; the minimum is at x1 = 4, x2 = 1]
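The worked equations for this example were lost in extraction; the marked optimum (x1, x2) = (4, 1) is consistent with the assumed problem "minimize f = x1² + 4·x2² subject to x1 + x2 = 5", which this sketch uses. Stationarity of the Lagrangian ℓ = x1² + 4·x2² + λ(5 - x1 - x2) gives 2·x1 = λ, 8·x2 = λ and x1 + x2 = 5:

```python
# From x1 = lam/2 and x2 = lam/8, the constraint x1 + x2 = 5 fixes lam:
lam = 5.0 / (1.0 / 2.0 + 1.0 / 8.0)

x1 = lam / 2.0   # stationarity in x1: 2*x1 - lam = 0
x2 = lam / 8.0   # stationarity in x2: 8*x2 - lam = 0
```

Solving the three optimality conditions simultaneously recovers both the optimum and the multiplier in one step.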
Important Note!

If the constraint is of the form:

  ω(x1, x2) = L

It must be included in the Lagrangian as follows:

  ℓ(x1, x2, λ) = f(x1, x2) + λ(L - ω(x1, x2))

And not as follows:

  ℓ(x1, x2, λ) = f(x1, x2) + λ ω(x1, x2)
Application to Economic Dispatch
[Figure: generators G1 (output x1) and G2 (output x2) supplying load L]

  minimize C1(x1) + C2(x2)  subject to  x1 + x2 = L

  ℓ(x1, x2, λ) = C1(x1) + C2(x2) + λ(L - x1 - x2)

  ∂ℓ/∂x1 = dC1/dx1 - λ = 0
  ∂ℓ/∂x2 = dC2/dx2 - λ = 0
  ∂ℓ/∂λ = L - x1 - x2 = 0

  ⇒ dC1/dx1 = dC2/dx2 = λ : the equal incremental cost solution
Equal incremental cost solution
[Figure: cost curves C1(x1) and C2(x2) (top) and the corresponding incremental cost curves dC1/dx1 and dC2/dx2 (bottom); the optimal outputs x1 and x2 are read off where both incremental costs equal λ]
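With quadratic costs the equal-incremental-cost condition solves in closed form. A sketch with hypothetical units C1 = 20·x1 + 0.05·x1² and C2 = 25·x2 + 0.10·x2² serving an assumed 500 MW load: setting dC1/dx1 = dC2/dx2 = λ gives x1 = (λ - 20)/0.1 and x2 = (λ - 25)/0.2, and the balance x1 + x2 = L fixes λ.

```python
L_MW = 500.0   # assumed load

# Substitute x1(lam) and x2(lam) into x1 + x2 = L and solve for lam:
lam = (L_MW + 20.0 / 0.1 + 25.0 / 0.2) / (1.0 / 0.1 + 1.0 / 0.2)

x1 = (lam - 20.0) / 0.1   # output of unit 1 at incremental cost lam
x2 = (lam - 25.0) / 0.2   # output of unit 2 at incremental cost lam
```

Both units run at the same incremental cost λ = 55 $/MWh, even though their outputs differ.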
Interpretation of this solution
[Figure: λ-iteration: the outputs x1(λ) and x2(λ) are summed and compared with the load L]

If L - x1 - x2 < 0, reduce λ
If L - x1 - x2 > 0, increase λ
Physical interpretation
[Figure: cost curve, with the incremental cost shown as its slope at output x]

The incremental cost is the cost of one additional MW for one hour. This cost depends on the output of the generator.
Physical interpretation
If the incremental cost of unit 1 is higher than that of unit 2, it pays to increase the output of unit 2 and decrease the output of unit 1 until we have:

  dC1/dx1 = dC2/dx2 = λ

The Lagrange multiplier λ is thus the cost of one more MW at the optimal solution.
This is a very important result with many applications in economics.
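This interpretation of λ can be checked numerically: re-dispatching for one extra MW should raise the total cost by roughly λ. The sketch reuses hypothetical units C1 = 20·x1 + 0.05·x1² and C2 = 25·x2 + 0.10·x2², for which λ = 55 $/MWh at a 500 MW load.

```python
def dispatch_cost(load):
    """Minimum total cost ($/h) of the equal-incremental-cost dispatch."""
    lam = (load + 20.0 / 0.1 + 25.0 / 0.2) / (1.0 / 0.1 + 1.0 / 0.2)
    x1 = (lam - 20.0) / 0.1
    x2 = (lam - 25.0) / 0.2
    return 20.0 * x1 + 0.05 * x1 ** 2 + 25.0 * x2 + 0.10 * x2 ** 2

extra = dispatch_cost(501.0) - dispatch_cost(500.0)   # cost of one more MWh
```

The extra cost is slightly above 55 $/h because λ itself rises a little as the load grows.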
Generalization
  minimize f(x1, …, xn)  subject to  ωi(x1, …, xn) = Li,  i = 1, …, m

Lagrangian:

  ℓ(x1, …, xn, λ1, …, λm) = f(x1, …, xn) + Σi λi (Li - ωi(x1, …, xn))

• One Lagrange multiplier for each constraint
• n + m variables: x1, …, xn and λ1, …, λm
Optimality conditions
  ∂ℓ/∂xj = 0,  j = 1, …, n     (n equations)

  ∂ℓ/∂λi = Li - ωi(x1, …, xn) = 0,  i = 1, …, m     (m equations)

  ⇒ n + m equations in n + m variables