Mathematical Economics - Part I -Optimization
Filomena Garcia
Fall 2009
Filomena Garcia Optimization
1 Optimization in Rn
    Optimization Problems in Rn
    Optimization Problems in Parametric Form
    Optimization Problems: Some Examples
    The objectives of Optimization Theory
2 Existence of Solutions
3 Unconstrained Optima
4 Equality Constraints
5 Inequality Constraints
6 Convex Structures in Optimization Theory
7 Quasiconvexity in Optimization
8 Parametric Continuity: The Maximum Theorem
9 Supermodularity and Parametric Monotonicity
Optimization Problems in Rn
An optimization problem in Rn is one where the values of a given function f : Rn → R are to be maximized or minimized over a given set D ⊂ Rn. The function f is called the objective function and the set D is called the constraint set. We denote the optimization problem as:

max {f (x) | x ∈ D}

A solution to the problem is a point x ∈ D such that f (x) ≥ f (y) for all y ∈ D. We call f (D) the set of attainable values of f in D.
It is worth noting the following:
1 A solution to an optimization problem may not exist
Example: Let D = R+ and f (x) = x. Then f (D) = R+ and sup f (D) = +∞, so the problem max {f (x) | x ∈ D} has no solution.
2 There may be multiple solutions to the optimization problem
Example: Let D = [−1, 1] and f (x) = x². Then the maximization problem max {f (x) | x ∈ D} has two solutions: x = 1 and x = −1.
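Both failure modes are easy to see numerically. A minimal sketch in plain Python (the helper name `argmax_on_grid` is illustrative, not from the slides):

```python
# Multiple maximizers: f(x) = x^2 on D = [-1, 1].
# A grid search finds that both x = -1 and x = 1 attain the maximum value 1.
def argmax_on_grid(f, grid, tol=1e-12):
    """Return all grid points whose value is within tol of the grid maximum."""
    best = max(f(x) for x in grid)
    return [x for x in grid if f(x) >= best - tol]

n = 200
grid = [-1 + 2 * i / n for i in range(n + 1)]  # 201 evenly spaced points in [-1, 1]
maximizers = argmax_on_grid(lambda x: x * x, grid)
assert maximizers == [-1.0, 1.0]  # both endpoints solve the problem

# Non-existence: f(x) = x on D = R+ is unbounded above, so every candidate
# maximum is beaten by a larger point; the supremum is never attained.
candidate = 10.0 ** 6
assert candidate + 1 > candidate  # f(candidate + 1) > f(candidate) for f(x) = x
```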
We will be concerned with the set of solutions to the optimization problem, knowing that this set could be empty.

arg max {f (x) | x ∈ D} = {x ∈ D | f (x) ≥ f (y), ∀y ∈ D}

Two important results to bear in mind:
1 x is a maximum of f on D if and only if it is a minimum of −f on D.
2 Let ϕ : R → R be a strictly increasing function. Then x is a maximum of f on D if and only if x is also a maximum of the composition ϕ ◦ f .
Optimization Problems in Parametric Form
Optimization problems are often presented in parametric form, i.e. the objective function and/or the feasible set depend on some parameter θ from a set of feasible parameter values Θ. In this case, we denote the optimization problem as:

max {f (x, θ) | x ∈ D(θ)}

In general, this problem has a solution which also depends on θ, i.e.

x(θ) = arg max {f (x, θ) | x ∈ D(θ)}
Optimization Problems: Utility Maximization
1 Utility Maximization
In consumer theory the agent maximizes his utility from consuming xi units of commodity i. The utility is given by u(x1, ..., xn). The constraint set is the set of feasible bundles given that the agent has an income I and the commodities are priced p = (p1, ..., pn), i.e.

B(p, I ) = {x ∈ Rn+ | p · x ≤ I}

Notice that this is a parametrized optimization problem, the parameters being (p, I ).¹
¹ Other parameters may exist within the utility function, namely Cobb-Douglas weights.
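As a sketch of this problem, a brute-force search over the budget set recovers the well-known closed-form Cobb-Douglas demands. All concrete numbers (u(x1, x2) = x1^a · x2^(1−a), p = (1, 2), I = 10) are illustrative assumptions, not from the slides:

```python
# Utility maximization with a Cobb-Douglas objective over the budget set
# B(p, I) = {x >= 0 : p.x <= I}.  The closed form spends the share a of
# income on good 1: x1* = a*I/p1, x2* = (1-a)*I/p2.
a, p1, p2, I = 0.5, 1.0, 2.0, 10.0

def u(x1, x2):
    return (x1 ** a) * (x2 ** (1 - a))

# Brute-force search: since u is increasing, the budget binds at the optimum,
# so we only scan bundles on the budget line.
best, best_bundle = -1.0, None
steps = 400
for i in range(steps + 1):
    x1 = (I / p1) * i / steps    # x1 in [0, I/p1]
    x2 = (I - p1 * x1) / p2      # spend the remaining income on good 2
    if u(x1, x2) > best:
        best, best_bundle = u(x1, x2), (x1, x2)

x1_star, x2_star = a * I / p1, (1 - a) * I / p2   # closed-form demands (5.0, 2.5)
assert abs(best_bundle[0] - x1_star) < 0.05
assert abs(best_bundle[1] - x2_star) < 0.05
```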
Optimization Problems: Expenditure Minimization
2 Expenditure Minimization
It is the dual problem to utility maximization. It consists of minimizing the expenditure required to attain a certain level of utility u. The problem is given by:

min {p · x | u(x) ≥ u}
Optimization Problems: Profit Maximization
3 Profit Maximization
It is the problem of a firm which produces a single output using n inputs through the production relationship y = g(x1, ..., xn). Given the production of y units of the good, the firm may charge the price p(y)². w denotes the vector of input prices. The firm chooses the input mix which maximizes her profits, i.e.,

max {p(g(x))g(x) − w · x | x ∈ Rn+}

² p(y) is the inverse demand function.
Optimization Problems: Cost Minimization
4 Cost Minimization
It is the dual problem to profit maximization. It consists of minimizing the cost of producing at least y units of output. The problem is given by:

min {w · x | g(x) ≥ y}
Optimization Problems: Consumption-Leisure Choice
5 Consumption-Leisure Choice
It is a model of the labor-supply decision process of households in which income becomes also an object of choice. Let H be the amount of hours that the agent has available. The wage rate of labor time is given by w. If working for L units of time, the agent earns wL. The agent obtains utility from consumption of x and from leisure time, l. Then, the agent's problem is:
max {u(x , l)|p · x ≤ wL, L + l ≤ H}
The objectives of Optimization Theory
The objectives of Optimization Theory are:
1 Identify the set of conditions on f and D under which the existence of solutions to optimization problems is guaranteed.
2 Obtain a characterization of the set of optimal points, namely:
The identification of necessary conditions for an optimum, i.e. conditions that every solution must verify.
The identification of sufficient conditions for an optimum, i.e. conditions such that any point that meets them is a solution.
The objectives of Optimization Theory
The identification of conditions ensuring the uniqueness of the solution.
The identification of a general theory of parametric variation in a parametrized family of optimization problems. For example:
The identification of conditions under which the solution set varies continuously with the parameter θ.
In problems where the parameters and actions have a natural ordering, the identification of conditions under which parametric monotonicity is verified, i.e., increasing the value of the parameter increases the value of the action.
How we will proceed
Section 2 studies the question of existence of solutions. The main result is the Weierstrass Theorem, which provides a general set of conditions under which optimization problems are guaranteed to have both maxima and minima.
From Section 3 to Section 5 we will examine the necessary conditions for optima using differentiability assumptions on the underlying problem. We focus on local optima, given that differentiability is only a local property.
Section 3 focuses on the situation in which the optimum occurs in the interior of the feasible set D. The main results are the necessary conditions that must be met by the objective function at the optimum. Sufficient conditions that identify optima are also provided.
How we will proceed
In Sections 4 and 5 some or all of the constraints determine the optima:
Section 4 focuses on equality constraints; the main result is the Theorem of Lagrange (both necessary and sufficient conditions for optima are presented).
Section 5 focuses on inequality constraints; the main result is the Theorem of Kuhn and Tucker, which describes necessary conditions for optima.
Sections 6 and 7 turn to the study of sufficient conditions, i.e., conditions that, when met, identify points as being global optima. These are mainly related to the convexity and quasiconvexity of the feasible sets and objective functions.
How we will proceed
Sections 8 and 9 address respectively the issues of parametric continuity and monotonicity of solutions.
The main result of Section 8 is the Maximum Theorem, which states that the continuity assumptions on the primitives of the optimization problem are inherited, although not entirely, by the solutions.
Section 9 relies on the concept of supermodularity of the primitives to draw conclusions about the parametric monotonicity of solutions. The main result is the Theorem of Tarski.
Existence of Solutions
Main Result:
The Weierstrass Theorem
Let D ⊂ Rn be compact and let f : D → R be a continuous function on D. Then f attains a max and a min on D, i.e., ∃ z1, z2 ∈ D such that ∀x ∈ D:
f (z1) ≥ f (x) ≥ f (z2)
Existence of Solutions
Some Definitions:
Compact set: closed and bounded.
Bounded set: is contained within a finite ball.
Closed set: contains its frontier (valid even if the frontier is empty, as in R).
Example: a closed interval in R.
Continuous function: for any point a in the domain, we have that:

lim_{x→a} f (x) = f (a)
Existence of Solutions
Notice: The Weierstrass Theorem gives us sufficient conditions for maxima and minima to exist; however, it does not tell us what happens when some or even all of the conditions of the Theorem are violated.
Existence of Solutions
Failure of the Weierstrass Theorem:
1 Let D = R and f (x) = x³. f is continuous but D is not compact (it is not bounded). Since f (D) = R, f does not attain a maximum or a minimum on D.
2 Let D = (0, 1) and f (x) = x. f is continuous but D is not compact (it is not closed). The set f (D) = (0, 1), so f does not attain a maximum or a minimum on D.
3 Let D = [−1, 1] and let f be given by

f (x) = 0 if x ∈ {−1, 1};  f (x) = x if −1 < x < 1

D is compact, but f fails to be continuous. f (D) is the open interval (−1, 1), so f attains neither a maximum nor a minimum on D.
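The second failure mode can be made concrete numerically: on D = (0, 1) the supremum of f (x) = x is 1, and refining a grid of interior points pushes the grid maximum toward 1 without ever reaching it. A plain-Python sketch (the helper name is illustrative):

```python
# On D = (0, 1) with f(x) = x, sup f(D) = 1 but no point of D attains it.
def grid_max(n):
    """Max of f(x) = x over the interior grid points k/n, k = 1..n-1."""
    return max(k / n for k in range(1, n))

maxima = [grid_max(n) for n in (10, 100, 1000)]
assert maxima == [0.9, 0.99, 0.999]   # increasing toward the supremum 1 ...
assert all(m < 1 for m in maxima)     # ... but never attained inside (0, 1)
```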
Existence of Solutions
Intuition of the Proof of the Weierstrass Theorem:
1 First we show that under the stated conditions f (D) is a compact set, hence closed and bounded.
2 Second, it is shown that if A is a compact set in R, then max A and min A are always well defined: it is never the case that A fails to contain its supremum and infimum. Hence max f (D) and min f (D) must exist.
The Weierstrass Theorem in Applications
Consider the applications mentioned in Section 1.
The Utility Maximization Problem:
Maximize u(x) subject to x ∈ B(p, I ) = {x ∈ Rn+ | p · x ≤ I}
We assume u(x) to be continuous.
By the Weierstrass Theorem, if B(p, I ) is compact, a solution must exist. This requires p ≫ 0.
The Weierstrass Theorem in Applications
The Cost Minimization Problem:
Minimize w · x over x ∈ F (y) = {x ∈ Rn+ | g(x) ≥ y}
The objective function is continuous, but the feasible action set is not compact because it is not bounded.
The Weierstrass Theorem neither guarantees nor precludes a solution. What can we do to guarantee a solution? Bound the set.
Unconstrained Optima
Unconstrained optima: the constraints do not affect the optimal point; in other words, the optimum is interior.
Local Maximum: A point x is a local maximum of f on D if there is r > 0 such that f (x) ≥ f (y) for all y ∈ D ∩ B(x, r).
Global Maximum: A point x is a global maximum of f on D if f (x) ≥ f (y) for all y ∈ D.
First Order Conditions
Necessary Conditions
If x∗ is a local maximum (or minimum) of the function f on D and f is differentiable, then Df (x∗) = 0.
Notice, the reverse is not necessarily true. Example: x = 0 for f (x) = x³.
Any point that satisfies the first order conditions (FOC) is a critical point of f on D.
Second Order Conditions
Sufficient Conditions
Suppose f is a C² function on D, and x∗ is a point in the interior of D. Then:
If Df (x∗) = 0 and D²f (x∗) is negative definite, then x∗ is a strict local maximum of f on D.
If Df (x∗) = 0 and D²f (x∗) is positive definite, then x∗ is a strict local minimum of f on D.
Notice: these conditions only identify local optima (not global).
Second Order Conditions
Example: Let f (x) = 2x³ − 3x² and D = R. The first order condition is f ′(x) = 0, which identifies as critical points x = 0 and x = 1. At these points, the second order conditions are: f ′′(0) = −6 < 0 and f ′′(1) = 6 > 0. Hence, 0 is a local maximum and 1 is a local minimum. However, these are not global max or min. In fact, this function has no global max or min (why?).
The existence of a global maximum (resp. minimum) at a critical point in D is guaranteed if the function is concave (resp. convex) in D.
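The example above can be checked mechanically with hand-coded derivatives (a plain-Python sketch; the function names are illustrative):

```python
# Checking f(x) = 2x^3 - 3x^2:
# f'(x) = 6x^2 - 6x = 6x(x - 1) vanishes at x = 0 and x = 1;
# f''(x) = 12x - 6 classifies each critical point.
def f(x):   return 2 * x**3 - 3 * x**2
def fp(x):  return 6 * x**2 - 6 * x    # first derivative
def fpp(x): return 12 * x - 6          # second derivative

critical_points = [x for x in (0, 1) if fp(x) == 0]
assert critical_points == [0, 1]
assert fpp(0) == -6 < 0   # local maximum at x = 0
assert fpp(1) == 6 > 0    # local minimum at x = 1

# Neither is global: f is unbounded above and below on R.
assert f(10) > f(0) and f(-10) < f(1)
```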
Constrained Optimization Problems - Equality Constraints
1 It is not often that optimization problems have unconstrained solutions.
2 Typically some, or all, of the constraints will matter.
3 In this chapter we will look into constraints of the form g(x) = 0, where g : Rn → Rk.
The problem:

Maximize f (x) s.t. g(x) = 0
Lagrange Theorem - 2 variables, one restriction
We can solve graphically.
The solution is:

(∂f /∂x1) / (∂f /∂x2) = (∂g/∂x1) / (∂g/∂x2)
g(x1, x2) = 0    (1)
Lagrange Theorem - 2 variables, one restriction
We can solve by substitution.
Since g(x1, x2) = 0 at the solution, and ∂g/∂x2 ≠ 0, we have an implicit function (what is this?) x2(x1). So, the problem becomes:

max_{x1} f (x1, x2(x1))
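The substitution method is easy to run on a concrete instance (illustrative, not from the slides): maximize f (x1, x2) = x1·x2 subject to g(x1, x2) = x1 + x2 − 1 = 0. The constraint gives the implicit function x2(x1) = 1 − x1, collapsing the problem to one variable:

```python
# Reduced one-variable objective after substituting the constraint:
# F(x1) = f(x1, x2(x1)) = x1 * (1 - x1), with FOC F'(x1) = 1 - 2*x1 = 0,
# so x1* = 1/2 and hence x2* = 1/2.
def F(x1):
    return x1 * (1 - x1)

x1_star = 0.5
h = 1e-6
numeric_deriv = (F(x1_star + h) - F(x1_star - h)) / (2 * h)  # central difference
assert abs(numeric_deriv) < 1e-6          # FOC of the reduced problem holds
assert F(x1_star) >= max(F(0.3), F(0.7))  # beats nearby feasible points
```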
Lagrange Theorem - 2 variables, one restriction
The FOC is:

∂f /∂x1 + (∂f /∂x2)(∂x2/∂x1) = 0

By the IMPLICIT FUNCTION THEOREM:

∂x2/∂x1 = − (∂g/∂x1) / (∂g/∂x2)

The solution is:

(∂f /∂x1) / (∂f /∂x2) = (∂g/∂x1) / (∂g/∂x2)
g(x1, x2) = 0    (2)
Lagrange Theorem - 2 variables, one restriction
We can rewrite the FOC of the previous problem as:

∂f /∂x1 − (∂f /∂x2) (∂g/∂x1) / (∂g/∂x2) = 0

or:

∂f /∂x1 + λ ∂g/∂x1 = 0

where

λ = − (∂f /∂x2) / (∂g/∂x2)

which is the FOC of the Lagrangean.
Lagrange Theorem
Lagrange Theorem
Let f , g : R² → R be C¹ functions. Suppose that x∗ = (x∗1, x∗2) is a local maximizer or minimizer of f subject to g(x1, x2) = 0. Suppose also that Dg(x∗) ≠ 0. Then, there exists a scalar λ∗ ∈ R such that (x∗1, x∗2, λ∗) is a critical point of the Lagrangean function:

L(x1, x2, λ) = f (x1, x2) + λ g(x1, x2)

In other words, at (x∗1, x∗2, λ∗): ∂L/∂x1 = 0, ∂L/∂x2 = 0 and ∂L/∂λ = 0.
Lagrange Theorem
The constraint qualification
We must assume Dg(x∗) ≠ 0. (If there are many variables and restrictions, this amounts to saying that the Jacobian matrix of g has full rank, i.e., rank equal to the number of constraints.)
This condition allows us to identify the implicit function defined by g.
The inverse of Dg is used in computing λ.
Lagrange Theorem - Necessary conditions
Notice: The Lagrange Theorem provides necessary but not sufficient conditions for local optima. Finding x and λ such that g(x) = 0 and the FOC of the Lagrangean hold does not guarantee that x is indeed an optimum, even if Dg(x) ≠ 0.
Lagrange Theorem - Necessary conditions
Then why can we use the Lagrange method so often?
Proposition
Suppose that the following two conditions hold:
1 A global optimum x∗ exists in the given problem.
2 The constraint qualification is met at x∗.
Then, there exists a λ∗ such that (x∗, λ∗) is a critical point of L.
So, under the conditions of this Proposition, the Lagrangean method does identify the optimum.
Lagrange Theorem - Sufficient Conditions
Sufficient Conditions
Suppose that there are x∗ and λ∗ which satisfy the constraint qualification and solve the FOC of the Lagrangean. Then:
If the bordered Hessian has a positive determinant, we have a local maximum.
If the bordered Hessian has a negative determinant, we have a local minimum.
Equality Constraints
An Example:
max_{x,y} x² − y² s.t. x² + y² = 1

First we verify that the conditions of the Proposition hold:
1 The function f is continuous and the constraint set is compact.
2 The constraint qualification, namely Dg(x, y) = (−2x, −2y) ≠ 0, is met.
Then, we can rely on the Theorem of Lagrange to pinpoint the optima:

L(x, y, λ) = f (x, y) + λ g(x, y) = x² − y² + λ(1 − x² − y²)
Equality Constraints
An Example (cont.): The first order conditions are:

∂L/∂x = 0 ⇔ 2x − 2λx = 0
∂L/∂y = 0 ⇔ −2y − 2λy = 0
∂L/∂λ = 0 ⇔ x² + y² = 1

The critical points are: (1, 0, 1); (−1, 0, 1); (0, 1, −1); (0, −1, −1). The first two are global maximizers, and the last two are global minimizers.
Inequality constraints and Kuhn Tucker
Most economic problems include non-negativity restrictions and inequalities in the optimization. For instance, it is natural to impose that consumption be non-negative, or work hours, or capital. In this chapter we will study problems of the form:

max f (x) s.t. g(x) = 0 and h(x) ≥ 0

Our tool will be the Kuhn-Tucker Theorem.
Kuhn-Tucker Theorem
Kuhn-Tucker Theorem - Maximization
Let f : Rn → R and hi : Rn → R be C¹ functions, i = 1, ..., l. Suppose x∗ is a local maximum of f on

D = {x ∈ Rn | hi (x) ≥ 0, i = 1, ..., l}

Let E denote the set of effective (binding) constraints at x∗, and let hE = (hi )i∈E. Suppose that the rank of DhE is |E |. Then, there exists a vector λ∗ = (λ∗1, λ∗2, ..., λ∗l ) such that the following conditions are met:
1 λ∗i ≥ 0 and λ∗i hi (x∗) = 0, for i = 1, ..., l.
2 Df (x∗) + ∑_{i=1}^{l} λ∗i Dhi (x∗) = 0
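These conditions can be checked on a small illustrative problem not taken from the slides (quadratic objective, three linear inequality constraints; the candidate x∗ and λ∗ below are computed by hand, so treat the numbers as assumptions of the sketch):

```python
# Maximize f(x) = -(x1 - 1)^2 - (x2 - 0.5)^2 subject to
#   h1 = x1 >= 0,  h2 = x2 >= 0,  h3 = 1 - x1 - x2 >= 0.
# Candidate: project (1, 0.5) onto the line x1 + x2 = 1,
# giving x* = (0.75, 0.25) with multipliers λ* = (0, 0, 0.5).
x1, x2 = 0.75, 0.25
lam = (0.0, 0.0, 0.5)

Df = (-2 * (x1 - 1), -2 * (x2 - 0.5))   # gradient of f at x*
Dh = [(1, 0), (0, 1), (-1, -1)]         # gradients of h1, h2, h3
h = (x1, x2, 1 - x1 - x2)               # constraint values at x*

# Condition 1: λi >= 0 and complementary slackness λi * hi(x*) = 0.
assert all(l >= 0 for l in lam)
assert all(abs(l * hi) < 1e-12 for l, hi in zip(lam, h))

# Condition 2: Df(x*) + sum_i λi * Dhi(x*) = 0, coordinate by coordinate.
stationarity = tuple(Df[j] + sum(l * g[j] for l, g in zip(lam, Dh))
                     for j in range(2))
assert all(abs(s) < 1e-12 for s in stationarity)
```

Only the third constraint binds, so only λ3 is positive; the slack constraints carry zero multipliers, exactly as complementary slackness requires.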
Kuhn-Tucker Theorem
Kuhn-Tucker Theorem - Minimization
Let f : Rn → R and hi : Rn → R be C¹ functions, i = 1, ..., l. Suppose x∗ is a local minimum of f on D. Let E denote the set of effective (binding) constraints at x∗, and let hE = (hi )i∈E. Suppose that the rank of DhE is |E |. Then, there exists a vector λ∗ = (λ∗1, λ∗2, ..., λ∗l ) such that the following conditions are met:
1 λ∗i ≥ 0 and λ∗i hi (x∗) = 0, for i = 1, ..., l.
2 Df (x∗) − ∑_{i=1}^{l} λ∗i Dhi (x∗) = 0
Kuhn-Tucker Theorem - Some Observations
1 A pair (x∗, λ∗) meets the FOC for a maximum in an inequality-constrained maximization problem if it satisfies h(x∗) ≥ 0 and conditions 1 and 2 of the theorem.
2 Condition 1 is called the complementary slackness condition.
3 The K-T theorem provides only necessary conditions, and only for local optima which meet the constraint qualification.
4 For a minimum, condition 2 becomes Df(x∗) − ∑_{i=1}^{l} λ∗i Dhi(x∗) = 0.
Kuhn-Tucker Theorem - Interpreting the conditions
Complementary Slackness condition
The Lagrange multipliers give us the effect on the maximum of relaxing the restriction. Hence we can have one of three situations:
1 the restriction holds with strict inequality; in that case, relaxing the restriction will not change the maximum value attained: λ = 0 and h > 0.
2 the restriction holds with equality; in that case, relaxing the restriction will (at least weakly) increase the value attained: λ > 0 and h = 0.
3 the restriction holds with equality, and relaxing it maintains the value attained: λ = 0 and h = 0.
Kuhn-Tucker Theorem - Constraint Qualification
Constraint Qualification
r(DhE (x∗)) = |E |
An example where the constraint qualification fails: let f(x, y) = −(x² + y²) and h(x, y) = (x − 1)³ − y². Consider the problem of maximizing f subject to h ≥ 0. By inspection, f attains its maximum where x² + y² attains its minimum on the constraint set. The constraint forces (x − 1)³ ≥ y² ≥ 0, so x ≥ 1. The maximum is therefore obtained at (x∗, y∗) = (1, 0), a point where the constraint is binding. But Dh(x∗, y∗) = (3(x∗ − 1)², −2y∗) = (0, 0), so the constraint qualification fails, and the K-T theorem fails. In fact, for every λ:

Df(x∗, y∗) + λ Dh(x∗, y∗) = (−2, 0) + λ(0, 0) = (−2, 0) ≠ (0, 0).
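A quick numerical check of this failure (a sketch, not part of the lecture) confirms that both partial derivatives of h vanish at (1, 0):

```python
# Finite-difference check that Dh(1, 0) = (0, 0) for h(x, y) = (x-1)^3 - y^2,
# so rank(Dh_E) = 0 < |E| = 1 and the constraint qualification fails there.
def h(x, y): return (x - 1.0) ** 3 - y ** 2

eps = 1e-6
x0, y0 = 1.0, 0.0
dh_dx = (h(x0 + eps, y0) - h(x0 - eps, y0)) / (2 * eps)
dh_dy = (h(x0, y0 + eps) - h(x0, y0 - eps)) / (2 * eps)
# Df(1, 0) = (-2, 0) for f(x, y) = -(x^2 + y^2), so Df + lambda * Dh
# equals (-2, 0) for every lambda and can never be zero.
```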
Why the procedure usually works
In general we can rely on the K-T theorem due to the following proposition:
Proposition
Suppose the following conditions hold:
1 A global maximum x∗ exists to the given inequality-constrained problem.
2 The constraint qualification is met at x∗.
Then, there exists λ∗ such that (x∗, λ∗) is a critical point of L.
An Example
Let g(x, y) = 1 − x² − y². Consider the problem of maximizing f(x, y) = x² − y over the set D = {(x, y) | g(x, y) ≥ 0}.
1 The conditions of the proposition are met: existence follows from Weierstrass, and if the constraint binds, x and y cannot be simultaneously zero, hence rank(Dg) = 1.
2 We can therefore use the K-T theorem.

L(x, y, λ) = x² − y + λ(1 − x² − y²)

∂L/∂x = 0 ⇔ 2x − 2λx = 0   (1)
∂L/∂y = 0 ⇔ −1 − 2λy = 0   (2)
λ ≥ 0, 1 − x² − y² ≥ 0, λ(1 − x² − y²) = 0
An Example (cont.)
From (1), either x = 0 or λ = 1.
If λ = 1, (2) implies y = −1/2, and x² + y² = 1 then gives x = ±√3/2.
If x = 0, we may have λ = 0 or λ > 0. Take the first case: if λ > 0, then from the CSC we must have x² + y² = 1, so y = ±1; y = 1 is inconsistent with (2), so y = −1 and λ = 1/2.
λ = 0 is inconsistent with (2).
So the critical points are (x, y, λ) = (±√3/2, −1/2, 1) and (x, y, λ) = (0, −1, 1/2); the value of the function is respectively 5/4 and 1, hence the maxima are at (±√3/2, −1/2). (Why did we not find the minimum?)
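A brute-force check of this example (an illustrative sketch, not from the slides) recovers the maximum value 5/4 near y = −1/2:

```python
import math

# Grid search over the disk 1 - x^2 - y^2 >= 0 for f(x, y) = x^2 - y.
# The K-T analysis gives the maximum value 5/4 at (±sqrt(3)/2, -1/2).
best, best_pt = -math.inf, None
n = 400
for i in range(n + 1):
    for j in range(n + 1):
        x = -1.0 + 2.0 * i / n
        y = -1.0 + 2.0 * j / n
        if 1.0 - x * x - y * y >= 0.0:   # feasibility
            v = x * x - y
            if v > best:
                best, best_pt = v, (x, y)
```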
Interpreting the Lagrange Multipliers
The Lagrange multipliers of the Lagrange Theorem and of the Kuhn-Tucker Theorem can be interpreted as the sensitivity of the value of the function at the optimum to relaxing the restriction.
Case 1: Equality restrictions
Suppose that we have the following optimization problem:

max_x f(x)  s.t.  g(x) + c = 0

We would solve the problem by writing the FOC of the Lagrangean w.r.t. x and λ, namely:

∂L/∂x = ∂f(x)/∂x + λ ∂g(x)/∂x = 0;  g(x) + c = 0

The optimum is x∗(c) and the value of the function at the optimum is f(x∗(c)).
We can compute the effect of c on the optimal value of the function:

∂f/∂c = (∂f(x∗(c))/∂x)(∂x∗(c)/∂c)

From the FOC, ∂f(x∗(c))/∂x = −λ ∂g(x∗(c))/∂x.

Also, differentiating g(x∗(c)) + c = 0 (because the constraint must be observed at the optimum) with respect to c gives (∂g(x∗(c))/∂x)(∂x∗(c)/∂c) = −1, hence:

∂f/∂c = λ
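The result ∂f/∂c = λ can be verified on a one-variable example (a sketch with an illustrative problem, not from the slides): maximize f(x) = −x² subject to x + c = 0.

```python
# For max f(x) = -x^2 s.t. g(x) + c = x + c = 0, the constraint pins
# x*(c) = -c, so the value function is V(c) = f(x*(c)) = -c^2. The FOC
# df/dx + lambda * dg/dx = -2x + lambda = 0 gives lambda = 2*x*(c) = -2c,
# and indeed dV/dc = -2c = lambda.
c = 0.3
x_star = -c
lam = 2.0 * x_star               # multiplier from the first-order condition

def V(c):                        # value function f(x*(c))
    return -((-c) ** 2)

eps = 1e-6
dV_dc = (V(c + eps) - V(c - eps)) / (2.0 * eps)   # numerical dV/dc
```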
Interpreting the Lagrange Multipliers
Case 2: Inequality restrictions
Suppose that we have the following optimization problem:

max_x f(x)  s.t.  h(x) + c ≥ 0

We would solve the problem by writing the FOC of the Lagrangean w.r.t. x, together with the K-T conditions, namely:

∂L/∂x = ∂f(x)/∂x + λ ∂h(x)/∂x = 0

λ ≥ 0, h(x) + c ≥ 0, λ(h(x) + c) = 0

The optimum is x∗(c) and the value of the function at the optimum is f(x∗(c)).
We can compute the effect of c on the optimal value of the function:

∂f/∂c = (∂f(x∗(c))/∂x)(∂x∗(c)/∂c)

From the FOC, ∂f(x∗(c))/∂x = −λ ∂h(x∗(c))/∂x.

If the restriction is not binding, λ = 0 and hence ∂f/∂c = 0 = λ.
If the restriction is binding, h(x∗(c)) + c = 0 implies (∂h(x∗(c))/∂x)(∂x∗(c)/∂c) = −1.
So in both cases: ∂f/∂c = λ.
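For the binding case this can be checked against the earlier disk example (an illustrative sketch): with the perturbed constraint (1 − x² − y²) + c ≥ 0, the value function works out to V(c) = 5/4 + c near c = 0, so ∂f/∂c = 1, matching the multiplier λ = 1 found at (±√3/2, −1/2).

```python
# Maximize f(x, y) = x^2 - y subject to (1 - x^2 - y^2) + c >= 0.
# On the binding boundary x^2 = 1 + c - y^2, so f = 1 + c - y^2 - y,
# which is maximized at y = -1/2, giving V(c) = 5/4 + c and dV/dc = 1 = lambda.
def V(c):
    # maximize 1 + c - y^2 - y over a fine grid of y values in [-1, 1]
    return max(1.0 + c - y * y - y
               for y in (-1.0 + 2.0 * k / 4000 for k in range(4001)))

eps = 1e-3
dV_dc = (V(eps) - V(-eps)) / (2.0 * eps)   # numerical dV/dc near c = 0
```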
Convexity and Optimization
The notion of convexity occupies a central position in the study of optimization theory. It encompasses not only the idea of convex sets, but also that of concave and convex functions. The attractiveness of convexity for optimization theory arises from the fact that when an optimization problem meets suitable convexity conditions, the same first-order conditions that we have shown previously to be necessary for local optima also become sufficient for global optima. Moreover, if the convexity conditions are tightened to what are called strict convexity conditions, we get the additional bonus of uniqueness of the solution.
Convexity Defined
We say that an optimization problem is convex if the objective function is concave (max) or convex (min) and the feasible set is convex.
Convex set
A set D is said to be convex if, given x1, x2 ∈ D and λ ∈ [0, 1], we have λx1 + (1 − λ)x2 ∈ D.
Concave function
A function f defined on a convex set D is concave if, given x1, x2 ∈ D and λ ∈ [0, 1],

f(λx1 + (1 − λ)x2) ≥ λf(x1) + (1 − λ)f(x2)

Convex function
A function f defined on a convex set D is convex if, given x1, x2 ∈ D and λ ∈ [0, 1],

f(λx1 + (1 − λ)x2) ≤ λf(x1) + (1 − λ)f(x2)
Convexity Defined (cont.)
Strictly concave function
A function f defined on a convex set D is strictly concave if, given x1, x2 ∈ D with x1 ≠ x2 and λ ∈ (0, 1),

f(λx1 + (1 − λ)x2) > λf(x1) + (1 − λ)f(x2)

Strictly convex function
A function f defined on a convex set D is strictly convex if, given x1, x2 ∈ D with x1 ≠ x2 and λ ∈ (0, 1),

f(λx1 + (1 − λ)x2) < λf(x1) + (1 − λ)f(x2)
Implications of convexity
Every concave or convex function must also be continuous on the interior of its domain.
Every concave or convex function must possess minimal differentiability properties.
The concavity or convexity of an everywhere differentiable function f can be completely characterized in terms of the behavior of its second derivative D²f.
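As an illustrative sketch of the second-derivative characterization (the example function is chosen here, not taken from the slides): a C² function is concave iff D²f is negative semidefinite everywhere; for f(x, y) = −(x² + y²) the Hessian is diag(−2, −2).

```python
# Finite-difference Hessian of f(x, y) = -(x^2 + y^2) at an arbitrary point.
# For a 2x2 symmetric matrix, negative semidefiniteness amounts to
# f_xx <= 0, f_yy <= 0 and det(D^2 f) >= 0.
def f(x, y): return -(x * x + y * y)

eps = 1e-4
x0, y0 = 0.7, -0.3
fxx = (f(x0 + eps, y0) - 2 * f(x0, y0) + f(x0 - eps, y0)) / eps ** 2
fyy = (f(x0, y0 + eps) - 2 * f(x0, y0) + f(x0, y0 - eps)) / eps ** 2
fxy = (f(x0 + eps, y0 + eps) - f(x0 + eps, y0 - eps)
       - f(x0 - eps, y0 + eps) + f(x0 - eps, y0 - eps)) / (4 * eps ** 2)
det = fxx * fyy - fxy * fxy   # should be about (-2)(-2) - 0 = 4
```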
Convexity and Optimization
Some General Observations
Theorem 1
Suppose D ⊂ Rn is convex and f : D → R is concave. Then
1 Any local maximum of f is a global maximum of f .
2 The set argmax{f(x) | x ∈ D} of maximizers of f on D is either empty or convex.
Convexity and Optimization
Some General Observations
Theorem 2
Suppose D ⊂ Rn is convex and f : D → R is strictly concave. Then
1 Any local maximum of f is a global maximum of f .
2 The set argmax{f(x) | x ∈ D} of maximizers of f on D is either empty or contains a single point (uniqueness).
Convexity and Optimization
Convexity and Unconstrained Optimization
Theorem 3
Suppose D ⊂ Rn is convex and f : D → R is a concave and differentiable function on D. Then, x is an unconstrained maximum of f on D if and only if Df(x) = 0.
Convexity and Optimization
Convexity and the Theorem of Kuhn-Tucker
Theorem 4
Suppose D ⊂ Rn is open and convex and f : D → R is a concave and differentiable function on D. For i = 1, 2, ..., l, let hi : D → R also be concave. Suppose that there is some x ∈ D such that hi(x) > 0, i = 1, ..., l. Then, x∗ maximizes f over the constraint set if and only if there is λ∗ ∈ Rl such that the Kuhn-Tucker first-order conditions hold.
Convexity and Optimization
The Slater Condition
The condition that there is some x ∈ D such that hi(x) > 0, i = 1, ..., l, is called the Slater Condition.
1 The SC is used only in the proof that the K-T conditions are necessary at an optimum. It is not related to sufficiency.
2 The rank condition established in the K-T theorem can be replaced by the SC and concavity of the constraints.
Quasiconvexity in Optimization
Convexity carries important implications for optimization. However, it is a quite restrictive assumption. For instance, a commonly used utility function such as the Cobb-Douglas

u(x1, ..., xn) = x1^{α1} · · · xn^{αn}

is not concave unless ∑_{i=1}^{n} αi ≤ 1. In this chapter we examine optimization under a weakening of the condition of convexity, which is called quasi-convexity.
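A numerical illustration (a sketch with n = 2 and α1 = α2 = 1, so that ∑αi = 2 > 1): u(x, y) = xy fails the concavity inequality but still satisfies the quasi-concavity inequality defined below.

```python
# Cobb-Douglas with alpha_1 = alpha_2 = 1 (sum of exponents 2 > 1): u(x, y) = x*y.
def u(x, y): return x * y

x1, x2 = (1.0, 1.0), (3.0, 3.0)
lam = 0.5
mid = (lam * x1[0] + (1 - lam) * x2[0], lam * x1[1] + (1 - lam) * x2[1])

# Concavity would require u(mid) >= lam*u(x1) + (1-lam)*u(x2): here 4 < 5, fails.
concave_ineq = u(*mid) >= lam * u(*x1) + (1 - lam) * u(*x2)
# Quasi-concavity only requires u(mid) >= min{u(x1), u(x2)}: here 4 >= 1, holds.
quasiconcave_ineq = u(*mid) >= min(u(*x1), u(*x2))
```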
Quasiconvexity and Quasiconcavity defined
Quasi-concave function
A function f defined on a convex set D is quasi-concave if and only if, for all x1, x2 ∈ D and for all λ ∈ (0, 1),

f(λx1 + (1 − λ)x2) ≥ min{f(x1), f(x2)}

Quasi-convex function
A function f defined on a convex set D is quasi-convex if and only if, for all x1, x2 ∈ D and for all λ ∈ (0, 1),

f(λx1 + (1 − λ)x2) ≤ max{f(x1), f(x2)}
Quasiconvexity and Quasiconcavity defined
Strictly quasi-concave function
A function f defined on a convex set D is strictly quasi-concave if and only if, for all x1, x2 ∈ D with x1 ≠ x2 and for all λ ∈ (0, 1),

f(λx1 + (1 − λ)x2) > min{f(x1), f(x2)}

Strictly quasi-convex function
A function f defined on a convex set D is strictly quasi-convex if and only if, for all x1, x2 ∈ D with x1 ≠ x2 and for all λ ∈ (0, 1),

f(λx1 + (1 − λ)x2) < max{f(x1), f(x2)}
Quasiconvexity as a generalization of convexity
If a function f is concave on a convex set, it is also quasi-concave on that set; similarly, every convex function is quasi-convex. The reverse does not hold.
Let f : R → R. If f is a nondecreasing function on R, then f is both quasi-concave and quasi-convex.
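The claim about nondecreasing functions can be spot-checked (an illustrative sketch using f(x) = x³, which is nondecreasing but neither concave nor convex):

```python
import random

# For a nondecreasing f, any convex combination of x1, x2 lies between them,
# so f at that point lies between f(x1) and f(x2): f is both quasi-concave
# (>= the min) and quasi-convex (<= the max).
def f(x): return x ** 3

random.seed(0)
ok = True
for _ in range(1000):
    x1, x2 = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    lam = random.random()
    m = f(lam * x1 + (1 - lam) * x2)
    lo, hi = min(f(x1), f(x2)), max(f(x1), f(x2))
    ok = ok and (lo - 1e-9 <= m <= hi + 1e-9)  # small tolerance for rounding
```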
Implications of quasi-convexity
Quasi-concave and quasi-convex functions are not necessarily continuous on the interior of their domains.
Quasi-concave and quasi-convex functions may have local optima that are not global optima.
First-order conditions are not sufficient to identify local optima under quasi-concavity (e.g. f(x) = x³ is quasi-concave with f′(0) = 0, yet 0 is not a local optimum).
Quasiconvexity and Optimization
Theorem
Suppose that f : D → R and the hi are strictly quasi-concave and D is convex. Suppose that there are x∗ and λ∗ such that the K-T conditions are satisfied. Then, x∗ maximizes f over D provided at least one of the following conditions holds:
1 Df(x∗) ≠ 0
2 f is concave
Quasiconvexity and Optimization
Theorem
Suppose that f : D → R is strictly quasi-concave and D is convex. Then, any local maximum of f on D is also a global maximum of f on D. Moreover, the set of maximizers of f is either empty or a singleton. The same holds for minima of strictly quasi-convex functions.