Convex optimization problems
• optimization problem in standard form
• convex optimization problems
• linear optimization
• quadratic optimization
• geometric programming
• quasiconvex optimization
• generalized inequality constraints
• semidefinite programming
• vector optimization
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–1
Optimization problem in standard form

minimize   f0(x)
subject to fi(x) ≤ 0, i = 1, …, m
           hi(x) = 0, i = 1, …, p

• x ∈ R^n is the optimization variable
• f0 : R^n → R is the objective or cost function
• fi : R^n → R, i = 1, …, m, are the inequality constraint functions
• hi : R^n → R are the equality constraint functions

optimal value:

p⋆ = inf{f0(x) | fi(x) ≤ 0, i = 1, …, m, hi(x) = 0, i = 1, …, p}

• p⋆ = ∞ if problem is infeasible (no x satisfies the constraints)
• p⋆ = −∞ if problem is unbounded below
Optimal and locally optimal points

• x is feasible if x ∈ dom f0 and it satisfies the constraints
• a feasible x is optimal if f0(x) = p⋆; X_opt is the set of optimal points
• x is locally optimal if there is an R > 0 such that x is optimal for

  minimize (over z) f0(z)
  subject to        fi(z) ≤ 0, i = 1, …, m, hi(z) = 0, i = 1, …, p
                    ‖z − x‖₂ ≤ R

examples (with n = 1, m = p = 0)
• f0(x) = 1/x, dom f0 = R_++: p⋆ = 0, no optimal point
• f0(x) = −log x, dom f0 = R_++: p⋆ = −∞
• f0(x) = x log x, dom f0 = R_++: p⋆ = −1/e, x = 1/e is optimal
• f0(x) = x³ − 3x, p⋆ = −∞, local optimum at x = 1
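The third example can be checked numerically. A minimal sketch, assuming SciPy is available; the data is just the slide's f0(x) = x log x on R_++:

```python
import math
from scipy.optimize import minimize_scalar

# f0(x) = x log x on R_++; the slide states p* = -1/e, attained at x* = 1/e
res = minimize_scalar(lambda x: x * math.log(x), bounds=(1e-9, 1.0), method="bounded")
```

Up to the solver tolerance, res.x agrees with 1/e and res.fun with −1/e.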
Implicit constraints

The standard form optimization problem has an implicit constraint

x ∈ D = ⋂_{i=0}^{m} dom fi ∩ ⋂_{i=1}^{p} dom hi,

• we call D the domain of the problem
• the constraints fi(x) ≤ 0, hi(x) = 0 are the explicit constraints
• a problem is unconstrained if it has no explicit constraints (m = p = 0)

example:

minimize f0(x) = −∑_{i=1}^{k} log(bi − aiᵀx)

is an unconstrained problem with implicit constraints aiᵀx < bi, i = 1, …, k
Feasibility problem

find x
subject to fi(x) ≤ 0, i = 1, …, m
           hi(x) = 0, i = 1, …, p

can be considered a special case of the general problem with f0(x) = 0:

minimize   0
subject to fi(x) ≤ 0, i = 1, …, m
           hi(x) = 0, i = 1, …, p

• p⋆ = 0 if constraints are feasible; any feasible x is optimal
• p⋆ = ∞ if constraints are infeasible
Standard form convex optimization problem

minimize   f0(x)
subject to fi(x) ≤ 0, i = 1, …, m
           aiᵀx = bi, i = 1, …, p

• f0, f1, …, fm are convex; equality constraints are affine
• problem is quasiconvex if f0 is quasiconvex (and f1, …, fm convex)

often written as

minimize   f0(x)
subject to fi(x) ≤ 0, i = 1, …, m
           Ax = b

important property: feasible set of a convex optimization problem is convex
Standard form convex optimization problem: example

minimize   f0(x) = x1² + x2²
subject to f1(x) = x1/(1 + x2²) ≤ 0
           h1(x) = (x1 + x2)² = 0

• f0 is convex; feasible set {(x1, x2) | x1 = −x2 ≤ 0} is convex
• not a convex problem (according to our definition): f1 is not convex, h1 is not affine
• equivalent (but not identical) to the convex problem

  minimize   x1² + x2²
  subject to x1 ≤ 0
             x1 + x2 = 0

• keep in mind:
  • some results we will prove for convex problems also apply to problems of minimizing a convex function over a convex set
  • but not all!
Local and global optima

Any locally optimal point of a convex problem is (globally) optimal.

proof: suppose x is locally optimal and y is feasible with f0(y) < f0(x).
"x locally optimal" means there is an R > 0 such that

z feasible, ‖z − x‖₂ ≤ R ⟹ f0(z) ≥ f0(x).

Consider z = θy + (1 − θ)x with θ = R/(2‖y − x‖₂)
• ‖y − x‖₂ > R, so 0 < θ < 1/2
• z is a convex combination of two feasible points, hence also feasible
• ‖z − x‖₂ = R/2 and

f0(z) ≤ θ f0(y) + (1 − θ) f0(x) < f0(x),

which contradicts our assumption that x is locally optimal
Optimality criterion for differentiable f0

x is optimal if and only if it is feasible and

∇f0(x)ᵀ(y − x) ≥ 0 for all feasible y

if nonzero, ∇f0(x) defines a supporting hyperplane to feasible set X at x

[figure: feasible set X with −∇f0(x) normal to a supporting hyperplane at x]
Optimality criterion: special cases

• unconstrained problem: x is optimal if and only if

  x ∈ dom f0, ∇f0(x) = 0

• equality constrained problem

  minimize f0(x) subject to Ax = b

  x is optimal if and only if there exists a ν such that

  x ∈ dom f0, Ax = b, ∇f0(x) + Aᵀν = 0

• minimization over nonnegative orthant

  minimize f0(x) subject to x ⪰ 0

  x is optimal if and only if

  x ∈ dom f0, x ⪰ 0, and for each i: ∇f0(x)i ≥ 0 if xi = 0; ∇f0(x)i = 0 if xi > 0
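The nonnegative-orthant condition can be sanity-checked on a function whose minimizer over x ⪰ 0 is known in closed form. A sketch with made-up data, assuming NumPy; f0(x) = (1/2)‖x − a‖₂² has minimizer x⋆ = max(a, 0):

```python
import numpy as np

# f0(x) = 0.5*||x - a||_2^2 minimized over x >= 0; the minimizer is x* = max(a, 0)
a = np.array([1.5, -0.7, 0.0, 2.0])
x = np.maximum(a, 0.0)
g = x - a                      # gradient of f0 at x

# optimality conditions from the slide:
# x_i = 0  =>  grad_i >= 0 ;  x_i > 0  =>  grad_i = 0
assert np.all(g[x == 0] >= 0)
assert np.allclose(g[x > 0], 0)
```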
Equivalent convex problems

two problems are (informally) equivalent if the solution of one is readily obtained from the solution of the other, and vice-versa. Some common transformations that preserve convexity:

• eliminating equality constraints

  minimize   f0(x)
  subject to fi(x) ≤ 0, i = 1, …, m
             Ax = b

  is equivalent to

  minimize (over z) f0(Fz + x0)
  subject to        fi(Fz + x0) ≤ 0, i = 1, …, m

  where F and x0 are such that

  Ax = b ⟺ x = Fz + x0 for some z
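One way to build F and x0 in practice is from the SVD of A (null-space basis) plus a least-squares particular solution. A sketch with random data, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))              # full row rank with probability 1
b = rng.standard_normal(2)

x0, *_ = np.linalg.lstsq(A, b, rcond=None)   # particular solution of Ax = b
# columns of F span the null space of A (last right-singular vectors)
_, _, Vt = np.linalg.svd(A)
F = Vt[2:].T                                 # n - rank(A) = 3 null-space directions

z = rng.standard_normal(3)
assert np.allclose(A @ (F @ z + x0), b)      # every z gives a feasible x
assert np.allclose(A @ F, 0)
```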
Equivalent convex problems

• introducing equality constraints

  minimize   f0(A0x + b0)
  subject to fi(Ai x + bi) ≤ 0, i = 1, …, m

  is equivalent to

  minimize (over x, yi) f0(y0)
  subject to            fi(yi) ≤ 0, i = 1, …, m
                        yi = Ai x + bi, i = 0, 1, …, m

• introducing slack variables for linear inequalities

  minimize   f0(x)
  subject to aiᵀx ≤ bi, i = 1, …, m

  is equivalent to

  minimize (over x, s) f0(x)
  subject to           aiᵀx + si = bi, i = 1, …, m
                       si ≥ 0, i = 1, …, m
Equivalent convex problems

• epigraph form: standard form convex problem is equivalent to

  minimize (over x, t) t
  subject to           f0(x) − t ≤ 0
                       fi(x) ≤ 0, i = 1, …, m
                       Ax = b

• minimizing over some variables

  minimize   f0(x1, x2)
  subject to fi(x1) ≤ 0, i = 1, …, m

  is equivalent to

  minimize   f̃0(x1)
  subject to fi(x1) ≤ 0, i = 1, …, m

  where f̃0(x1) = inf_{x2} f0(x1, x2)
Linear program (LP)

minimize   cᵀx + d
subject to Gx ⪯ h
           Ax = b

• convex problem with affine objective and constraint functions
• feasible set is a polyhedron

[figure: polyhedron P with minimizer x⋆ and objective direction −c]
Examples

diet problem: choose quantities x1, …, xn of n foods
• one unit of food j costs cj, contains amount aij of nutrient i
• healthy diet requires nutrient i in quantity at least bi

to find cheapest healthy diet,

minimize   cᵀx
subject to Ax ⪰ b, x ⪰ 0

piecewise-linear minimization

minimize max_{i=1,…,m} (aiᵀx + bi)

equivalent to an LP

minimize   t
subject to aiᵀx + bi ≤ t, i = 1, …, m
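The epigraph reformulation can be run through an off-the-shelf LP solver. A sketch with random data, assuming SciPy's linprog; box bounds on x are added only to keep the random instance bounded:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
a = rng.standard_normal((5, 2))
b = rng.standard_normal(5)

# decision vector (x_1, x_2, t): minimize t  s.t.  a_i^T x + b_i <= t
c = np.array([0.0, 0.0, 1.0])
A_ub = np.hstack([a, -np.ones((5, 1))])
res = linprog(c, A_ub=A_ub, b_ub=-b,
              bounds=[(-10, 10), (-10, 10), (None, None)])

x, t = res.x[:2], res.x[2]
assert res.status == 0
assert abs(np.max(a @ x + b) - t) < 1e-7   # at the optimum, t equals the max
```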
Chebyshev center of a polyhedron

Chebyshev center of

P = {x | aiᵀx ≤ bi, i = 1, …, m}

is center of largest inscribed ball

B = {xc + u | ‖u‖₂ ≤ r}

[figure: polyhedron P with largest inscribed ball centered at x_cheb]

• aiᵀx ≤ bi for all x ∈ B if and only if

  sup{aiᵀ(xc + u) | ‖u‖₂ ≤ r} = aiᵀxc + r‖ai‖₂ ≤ bi

• hence, xc, r can be determined by solving the LP

  maximize (over xc, r) r
  subject to            aiᵀxc + r‖ai‖₂ ≤ bi, i = 1, …, m
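For a polyhedron where the answer is obvious — the unit square, whose largest inscribed ball has center (0, 0) and radius 1 — the LP above can be solved with SciPy's linprog. A minimal sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import linprog

# P = {x : |x_1| <= 1, |x_2| <= 1}: Chebyshev center (0, 0), radius 1
a = np.array([[1.0, 0], [-1, 0], [0, 1], [0, -1]])
b = np.ones(4)

# variables (xc_1, xc_2, r): maximize r  s.t.  a_i^T xc + r*||a_i||_2 <= b_i
A_ub = np.hstack([a, np.linalg.norm(a, axis=1, keepdims=True)])
res = linprog(c=[0, 0, -1.0], A_ub=A_ub, b_ub=b,
              bounds=[(None, None), (None, None), (0, None)])

xc, r = res.x[:2], res.x[2]
assert abs(r - 1.0) < 1e-8
assert np.allclose(xc, 0.0, atol=1e-8)
```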
Quadratic program (QP)

minimize   (1/2)xᵀPx + qᵀx + r
subject to Gx ⪯ h
           Ax = b

• P ∈ S^n_+, so objective is convex quadratic
• minimize a convex quadratic function over a polyhedron

[figure: polyhedron P with minimizer x⋆ and −∇f0(x⋆) normal to the boundary]
Examples

least-squares

minimize ‖Ax − b‖₂²  (A ∈ R^{m×n})

• analytical solution x⋆ = A†b (A† is pseudo-inverse)
• can add linear constraints, e.g., l ⪯ x ⪯ u

linear program with random cost

minimize   c̄ᵀx + γ xᵀΣx = E cᵀx + γ var(cᵀx)
subject to Gx ⪯ h, Ax = b

• c is random vector with mean c̄ and covariance Σ
• hence, cᵀx is random variable with mean c̄ᵀx and variance xᵀΣx
• γ > 0 is risk aversion parameter; controls the trade-off between expected cost and variance (risk)
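The analytical least-squares solution x⋆ = A†b can be confirmed against NumPy's solver. A sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)

x_pinv = np.linalg.pinv(A) @ b                    # x* = A^dagger b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares solver

assert np.allclose(x_pinv, x_lstsq)
# normal equations: A^T A x* = A^T b
assert np.allclose(A.T @ A @ x_pinv, A.T @ b)
```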
Quadratically constrained quadratic program (QCQP)

minimize   (1/2)xᵀP0x + q0ᵀx + r0
subject to (1/2)xᵀPi x + qiᵀx + ri ≤ 0, i = 1, …, m
           Ax = b

• Pi ∈ S^n_+; objective and constraints are convex quadratic
• if P1, …, Pm ∈ S^n_++, feasible region is intersection of m ellipsoids and an affine set
Second-order cone programming

minimize   fᵀx
subject to ‖Ai x + bi‖₂ ≤ ciᵀx + di, i = 1, …, m
           Fx = g

(Ai ∈ R^{ni×n}, F ∈ R^{p×n})

• inequalities are called second-order cone (SOC) constraints:

  (Ai x + bi, ciᵀx + di) ∈ second-order cone in R^{ni+1}

• for ni = 0, reduces to an LP; if ci = 0, reduces to a QCQP
• more general than QCQP and LP
Robust linear programming

the parameters in optimization problems are often uncertain, e.g., in an LP

minimize   cᵀx
subject to aiᵀx ≤ bi, i = 1, …, m,

there can be uncertainty in c, ai, bi

two common approaches to handling uncertainty (in ai, for simplicity)

• deterministic model: constraints must hold for all ai ∈ Ei

  minimize   cᵀx
  subject to aiᵀx ≤ bi for all ai ∈ Ei, i = 1, …, m,

• stochastic model: ai is random variable; constraints must hold with probability η

  minimize   cᵀx
  subject to prob(aiᵀx ≤ bi) ≥ η, i = 1, …, m
Deterministic approach via SOCP

• choose an ellipsoid as Ei:

  Ei = {āi + Pi u | ‖u‖₂ ≤ 1}  (āi ∈ R^n, Pi ∈ R^{n×n})

  center is āi, semi-axes determined by singular values/vectors of Pi

• robust LP

  minimize   cᵀx
  subject to aiᵀx ≤ bi ∀ai ∈ Ei, i = 1, …, m

  is equivalent to the SOCP

  minimize   cᵀx
  subject to āiᵀx + ‖Piᵀx‖₂ ≤ bi, i = 1, …, m

  (follows from sup_{‖u‖₂≤1} (āi + Pi u)ᵀx = āiᵀx + ‖Piᵀx‖₂)
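The sup identity behind the SOCP reformulation is easy to probe numerically: every u on the unit sphere stays below āᵀx + ‖Pᵀx‖₂, and u = Pᵀx/‖Pᵀx‖₂ attains it. A sketch with random data, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
abar = rng.standard_normal(n)
P = rng.standard_normal((n, n))
x = rng.standard_normal(n)

analytic = abar @ x + np.linalg.norm(P.T @ x)   # claimed value of the sup

# sample u on the unit sphere; (abar + P u)^T x = abar^T x + u^T (P^T x)
U = rng.standard_normal((10000, n))
U /= np.linalg.norm(U, axis=1, keepdims=True)
sampled = abar @ x + U @ (P.T @ x)
assert np.all(sampled <= analytic + 1e-12)

# the maximizing u = P^T x / ||P^T x||_2 attains the sup exactly
w = P.T @ x
u_star = w / np.linalg.norm(w)
assert np.isclose(abar @ x + u_star @ w, analytic)
```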
Stochastic approach via SOCP

• assume ai is Gaussian with mean āi, covariance Σi (ai ∼ N(āi, Σi))
• aiᵀx is Gaussian r.v. with mean āiᵀx, variance xᵀΣi x; hence

  prob(aiᵀx ≤ bi) = Φ((bi − āiᵀx)/‖Σi^{1/2}x‖₂)

  where Φ(z) = (1/√(2π)) ∫_{−∞}^{z} e^{−t²/2} dt is CDF of N(0, 1)

• robust LP

  minimize   cᵀx
  subject to prob(aiᵀx ≤ bi) ≥ η, i = 1, …, m,

  with η ≥ 1/2, is equivalent to the SOCP

  minimize   cᵀx
  subject to āiᵀx + Φ⁻¹(η)‖Σi^{1/2}x‖₂ ≤ bi, i = 1, …, m
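The equivalence between the chance constraint and the SOCP constraint can be checked pointwise with SciPy's normal CDF and quantile. A sketch with made-up data (āi, Σi, x, bi below are arbitrary):

```python
import numpy as np
from scipy.stats import norm

abar = np.array([1.0, 2.0])
Sigma = np.array([[0.5, 0.1], [0.1, 0.3]])
x = np.array([0.4, 0.6])
b = 2.5

std = np.sqrt(x @ Sigma @ x)               # ||Sigma^{1/2} x||_2
prob = norm.cdf((b - abar @ x) / std)      # Phi((b - abar^T x)/std)

# for eta >= 1/2: prob(a^T x <= b) >= eta  <=>  abar^T x + Phi^{-1}(eta)*std <= b
for eta in [0.5, 0.8, 0.9, 0.95, 0.99]:
    chance_ok = prob >= eta
    socp_ok = abar @ x + norm.ppf(eta) * std <= b
    assert chance_ok == socp_ok
```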
Geometric programming

monomial function

f(x) = c x1^{a1} x2^{a2} ⋯ xn^{an}, dom f = R^n_++

with c > 0; exponents ai can be any real numbers

posynomial function: sum of monomials

f(x) = ∑_{k=1}^{K} ck x1^{a1k} x2^{a2k} ⋯ xn^{ank}, dom f = R^n_++

geometric program (GP)

minimize   f0(x)
subject to fi(x) ≤ 1, i = 1, …, m
           hi(x) = 1, i = 1, …, p

with fi posynomial, hi monomial
Geometric program in convex form

change variables to yi = log xi, and take logarithm of cost, constraints

• monomial f(x) = c x1^{a1} ⋯ xn^{an} transforms to

  log f(e^{y1}, …, e^{yn}) = aᵀy + b  (b = log c)

• posynomial f(x) = ∑_{k=1}^{K} ck x1^{a1k} x2^{a2k} ⋯ xn^{ank} transforms to

  log f(e^{y1}, …, e^{yn}) = log(∑_{k=1}^{K} e^{akᵀy + bk})  (bk = log ck)

• geometric program transforms to convex problem

  minimize   log ∑_{k=1}^{K} exp(a0kᵀy + b0k)
  subject to log ∑_{k=1}^{K} exp(aikᵀy + bik) ≤ 0, i = 1, …, m
             Gy + d = 0
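The posynomial-to-log-sum-exp transformation can be verified directly: evaluating the posynomial at x = e^y and taking logs must match logsumexp(Ay + b). A sketch with random exponents and positive coefficients, assuming SciPy:

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(4)
K, n = 3, 2
A = rng.standard_normal((K, n))     # exponent vectors a_k
c = rng.uniform(0.5, 2.0, K)        # positive coefficients c_k

def posynomial(x):
    return sum(c[k] * np.prod(x ** A[k]) for k in range(K))

y = rng.standard_normal(n)
lhs = np.log(posynomial(np.exp(y)))
rhs = logsumexp(A @ y + np.log(c))  # log sum_k exp(a_k^T y + b_k), b_k = log c_k
assert np.isclose(lhs, rhs)
```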
Design of cantilever beam

[figure: beam of N unit segments (segment N at the support, segment 1 at the free end), with vertical force F applied at the right end]

• N segments with unit lengths, rectangular cross-sections of size wi × hi
• given vertical force F applied at the right end

design problem

minimize   total weight
subject to upper & lower bounds on wi, hi
           upper & lower bounds on aspect ratios hi/wi
           upper bound on stress in each segment
           upper bound on vertical deflection at the end of the beam

variables: wi, hi for i = 1, …, N
Objective and constraint functions

• total weight w1h1 + ⋯ + wN hN is posynomial
• aspect ratio hi/wi and inverse aspect ratio wi/hi are monomials
• maximum stress in segment i is given by 6iF/(wi hi²), a monomial
• the vertical deflection yi and slope vi of central axis at the right end of segment i are defined recursively as

  vi = 12(i − 1/2) F/(E wi hi³) + v_{i+1}
  yi = 6(i − 1/3) F/(E wi hi³) + v_{i+1} + y_{i+1}

  for i = N, N−1, …, 1, with v_{N+1} = y_{N+1} = 0 (E is Young's modulus)

  vi and yi are posynomial functions of w, h
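For a uniform beam (constant w, h) the recursion can be cross-checked against the classical cantilever tip-deflection formula FL³/(3EI), with L = N and I = wh³/12. A sketch with made-up dimensions, assuming NumPy:

```python
import numpy as np

# uniform cantilever: N unit segments, constant w, h; tip load F
N, F, E, w, h = 6, 2.0, 100.0, 0.5, 0.8
v = np.zeros(N + 2)   # v[i], y[i] for i = 1..N; v[N+1] = y[N+1] = 0
y = np.zeros(N + 2)
for i in range(N, 0, -1):
    v[i] = 12 * (i - 0.5) * F / (E * w * h**3) + v[i + 1]
    y[i] = 6 * (i - 1/3) * F / (E * w * h**3) + v[i + 1] + y[i + 1]

# classical tip deflection F*L^3/(3*E*I) with L = N, I = w*h^3/12
delta = F * N**3 / (3 * E * (w * h**3 / 12))
assert np.isclose(y[1], delta)
```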
Formulation as a GP

minimize   w1h1 + ⋯ + wN hN
subject to wmax⁻¹ wi ≤ 1, wmin wi⁻¹ ≤ 1, i = 1, …, N
           hmax⁻¹ hi ≤ 1, hmin hi⁻¹ ≤ 1, i = 1, …, N
           Smax⁻¹ wi⁻¹ hi ≤ 1, Smin wi hi⁻¹ ≤ 1, i = 1, …, N
           6iF σmax⁻¹ wi⁻¹ hi⁻² ≤ 1, i = 1, …, N
           ymax⁻¹ y1 ≤ 1

note
• we write wmin ≤ wi ≤ wmax and hmin ≤ hi ≤ hmax as

  wmin/wi ≤ 1, wi/wmax ≤ 1, hmin/hi ≤ 1, hi/hmax ≤ 1

• we write Smin ≤ hi/wi ≤ Smax as

  Smin wi/hi ≤ 1, hi/(wi Smax) ≤ 1

• the number of monomials appearing in y1 grows approximately as N²
Minimizing spectral radius of nonnegative matrix

Perron-Frobenius eigenvalue λpf(A)
• consider (elementwise) nonnegative A ∈ R^{n×n} that is irreducible: (I + A)^{n−1} > 0
• P-F theorem: there is a real, positive eigenvalue of A, λpf, equal to spectral radius max_i |λi(A)|
• determines asymptotic growth (decay) rate of A^k: A^k ∼ λpf^k as k → ∞
• alternative characterization:

  λpf(A) = inf{λ | Av ⪯ λv for some v ≻ 0}

minimizing spectral radius of matrix of posynomials
• minimize λpf(A(x)), where the elements A(x)ij are posynomials of x
• equivalent geometric program:

  minimize   λ
  subject to ∑_{j=1}^{n} A(x)ij vj/(λ vi) ≤ 1, i = 1, …, n

  variables λ, v, x
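The P-F theorem's claims — a real, positive dominant eigenvalue with a positive eigenvector, so the inf in the characterization is attained at the Perron vector — can be observed on a random positive matrix. A sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.uniform(0.1, 1.0, (4, 4))        # elementwise positive, hence irreducible

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(np.abs(eigvals))
lam, v = eigvals[k], eigvecs[:, k]

assert np.isclose(lam.imag, 0.0, atol=1e-10) and lam.real > 0
lam, v = lam.real, np.abs(v.real)        # Perron vector can be scaled to be > 0
assert np.all(v > 0)
assert np.allclose(A @ v, lam * v)       # Av = lam_pf * v: the inf is attained
```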
Quasiconvex optimization

minimize   f0(x)
subject to fi(x) ≤ 0, i = 1, …, m
           Ax = b

with f0 : R^n → R quasiconvex, f1, …, fm convex

can have locally optimal points that are not (globally) optimal

[figure: quasiconvex f0 with a locally optimal point (x, f0(x)) that is not globally optimal]
Convex representation of sublevel sets of f0

if f0 is quasiconvex, there exists a family of functions φt such that:

• φt(x) is convex in x for fixed t
• t-sublevel set of f0 is 0-sublevel set of φt, i.e.,

  f0(x) ≤ t ⟺ φt(x) ≤ 0

• φt(x) is non-increasing in t for fixed x

example

f0(x) = p(x)/q(x)

with p convex, q concave, and p(x) ≥ 0, q(x) > 0 on dom f0

can take φt(x) = p(x) − t q(x):
• for t ≥ 0, φt convex in x
• p(x)/q(x) ≤ t if and only if φt(x) ≤ 0
• if s ≥ t, p(x) − t q(x) ≥ p(x) − s q(x)
Quasiconvex optimization via convex feasibility problems

φt(x) ≤ 0, fi(x) ≤ 0, i = 1, …, m, Ax = b  (1)

• for fixed t, a convex feasibility problem in x
• if feasible, we can conclude that t ≥ p⋆; if infeasible, t ≤ p⋆

Bisection method for quasiconvex optimization

given l ≤ p⋆, u ≥ p⋆, tolerance ε > 0.
repeat
1. t := (l + u)/2.
2. Solve the convex feasibility problem (1).
3. if (1) is feasible, u := t; else l := t.
until u − l ≤ ε.

requires exactly ⌈log₂((u − l)/ε)⌉ iterations (where u, l are initial values)
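The bisection loop can be run end to end on a one-dimensional example, f0(x) = (x² + 1)/(x + 2) on [0, 4], where the feasibility subproblem φt(x) ≤ 0 is checked by minimizing the convex φt(x) = x² + 1 − t(x + 2). A sketch assuming SciPy; the true optimum 2√5 − 4 is worked out by hand from the first-order condition:

```python
import math
from scipy.optimize import minimize_scalar

# f0(x) = (x^2 + 1)/(x + 2): p convex, q concave and positive on [0, 4]
def feasible(t):
    res = minimize_scalar(lambda x: x**2 + 1 - t * (x + 2),
                          bounds=(0, 4), method="bounded")
    return res.fun <= 1e-9          # phi_t(x) <= 0 for some feasible x?

l, u = 0.0, 10.0                    # valid initial bracket: 0 <= p* <= f0(0) < 10
while u - l > 1e-6:
    t = (l + u) / 2
    if feasible(t):
        u = t
    else:
        l = t

# true optimum: p* = 2*sqrt(5) - 4, attained at x* = sqrt(5) - 2
assert abs(u - (2 * math.sqrt(5) - 4)) < 1e-4
```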
Linear-fractional program

minimize   f0(x)
subject to Gx ⪯ h
           Ax = b

linear-fractional program

f0(x) = (cᵀx + d)/(eᵀx + f), dom f0 = {x | eᵀx + f > 0}

• a quasiconvex optimization problem; can be solved by bisection
• also, if feasible, equivalent to the LP (variables y, z)

  minimize   cᵀy + dz
  subject to Gy ⪯ hz
             Ay = bz
             eᵀy + fz = 1
             z ≥ 0
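The LP transformation can be exercised on a one-dimensional instance, minimize (x + 1)/(x + 2) over 0 ≤ x ≤ 3, whose optimum is x⋆ = 0 with value 1/2 since the ratio is increasing. A sketch assuming SciPy's linprog:

```python
import numpy as np
from scipy.optimize import linprog

# data: c = [1], d = 1, e = [1], f = 2, and Gx <= h encodes 0 <= x <= 3
G = np.array([[1.0], [-1.0]])
h = np.array([3.0, 0.0])

# LP in (y, z): min c^T y + d z  s.t.  Gy - hz <= 0, e^T y + f z = 1, z >= 0
c_lp = np.array([1.0, 1.0])                 # (c, d)
A_ub = np.hstack([G, -h.reshape(-1, 1)])
A_eq = np.array([[1.0, 2.0]])               # (e, f)
res = linprog(c_lp, A_ub=A_ub, b_ub=[0, 0], A_eq=A_eq, b_eq=[1],
              bounds=[(None, None), (0, None)])

y, z = res.x
x = y / z                                   # recover x from (y, z)
assert abs(res.fun - 0.5) < 1e-8
assert abs(x) < 1e-8
```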
Generalized linear-fractional program

f0(x) = max_{i=1,…,r} (ciᵀx + di)/(eiᵀx + fi), dom f0 = {x | eiᵀx + fi > 0, i = 1, …, r}

a quasiconvex optimization problem; can be solved by bisection

example: Von Neumann model of a growing economy

maximize (over x, x⁺) min_{i=1,…,n} xi⁺/xi
subject to            x⁺ ⪰ 0, Bx⁺ ⪯ Ax

• x, x⁺ ∈ R^n: activity levels of n sectors, in current and next period
• (Ax)i, (Bx⁺)i: produced, resp. consumed, amounts of good i
• xi⁺/xi: growth rate of sector i

allocate activity to maximize growth rate of slowest growing sector
Convexity of vector-valued functions

f : R^n → R^m is K-convex (K is a proper cone) if dom f is convex and

f(θx + (1 − θ)y) ⪯_K θ f(x) + (1 − θ) f(y)

for x, y ∈ dom f, 0 ≤ θ ≤ 1

example: f : S^m → S^m, f(X) = X² is S^m_+-convex

proof: for fixed z ∈ R^m, zᵀX²z = ‖Xz‖₂² is convex in X, i.e.,

zᵀ(θX + (1 − θ)Y)²z ≤ θ zᵀX²z + (1 − θ) zᵀY²z

for X, Y ∈ S^m, 0 ≤ θ ≤ 1

therefore (θX + (1 − θ)Y)² ⪯ θX² + (1 − θ)Y²
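The matrix inequality θX² + (1 − θ)Y² − (θX + (1 − θ)Y)² ⪰ 0 (the gap equals θ(1 − θ)(X − Y)²) can be confirmed by an eigenvalue check. A sketch with random symmetric matrices, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((3, 3)); X = (M + M.T) / 2
M = rng.standard_normal((3, 3)); Y = (M + M.T) / 2
theta = 0.3

# theta*X^2 + (1-theta)*Y^2 - (theta*X + (1-theta)*Y)^2 should be PSD
Z = theta * X + (1 - theta) * Y
gap = theta * X @ X + (1 - theta) * Y @ Y - Z @ Z
assert np.min(np.linalg.eigvalsh(gap)) >= -1e-10
```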
Vector optimization

general vector optimization problem

minimize (w.r.t. K) f0(x)
subject to          fi(x) ≤ 0, i = 1, …, m
                    hi(x) = 0, i = 1, …, p

vector objective f0 : R^n → R^q, minimized w.r.t. proper cone K ⊆ R^q

convex vector optimization problem

minimize (w.r.t. K) f0(x)
subject to          fi(x) ≤ 0, i = 1, …, m
                    Ax = b

with f0 K-convex, f1, …, fm convex
Optimal and Pareto optimal points

set of achievable objective values

O = {f0(x) | x feasible}

• feasible x is optimal if f0(x) is a minimum value of O
• feasible x is Pareto optimal if f0(x) is a minimal value of O

[figures: O with f0(x⋆) a minimum value, so x⋆ is optimal; O with f0(x_po) a minimal value, so x_po is Pareto optimal]
Multicriteria optimization

vector optimization problem with K = R^q_+

f0(x) = (F1(x), …, Fq(x))

• q different objectives Fi; roughly speaking we want all Fi's to be small
• feasible x⋆ is optimal if

  y feasible ⟹ f0(x⋆) ⪯ f0(y)

  if there exists an optimal point, the objectives are noncompeting
• feasible x_po is Pareto optimal if

  y feasible, f0(y) ⪯ f0(x_po) ⟹ f0(x_po) = f0(y)

  if there are multiple Pareto optimal values, there is a trade-off between the objectives
Regularized least-squares

multicriteria problem with two objectives

F1(x) = ‖Ax − b‖₂², F2(x) = ‖x‖₂²

• example with A ∈ R^{100×10}
• shaded region is O
• heavy line is formed by Pareto optimal points

[figure: O in the (F1, F2) plane, with the Pareto-optimal boundary drawn as a heavy curve]
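The heavy Pareto line can be traced by the scalarized (ridge-regression) solutions x(γ) = (AᵀA + γI)⁻¹Aᵀb: as γ grows, F1 increases and F2 decreases. A sketch with random data, assuming NumPy; the scalarization itself is discussed on a later slide:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((100, 10))
b = rng.standard_normal(100)

# ridge solutions x(gamma) = (A^T A + gamma*I)^{-1} A^T b for a grid of gammas
gammas = np.logspace(-2, 2, 20)
F1, F2 = [], []
for g in gammas:
    x = np.linalg.solve(A.T @ A + g * np.eye(10), A.T @ b)
    F1.append(np.linalg.norm(A @ x - b) ** 2)   # residual objective
    F2.append(np.linalg.norm(x) ** 2)           # regularization objective

# along the Pareto boundary, F1 is nondecreasing and F2 nonincreasing in gamma
assert all(u <= v + 1e-9 for u, v in zip(F1, F1[1:]))
assert all(u >= v - 1e-9 for u, v in zip(F2, F2[1:]))
```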
Risk return trade-off in portfolio optimization

minimize (w.r.t. R²_+) (−p̄ᵀx, xᵀΣx)
subject to             1ᵀx = 1, x ⪰ 0

• x ∈ R^n is investment portfolio; xi is fraction invested in asset i
• p ∈ R^n is vector of relative asset price changes; modeled as a random variable with mean p̄, covariance Σ
• p̄ᵀx = E r is expected return; xᵀΣx = var r is return variance

example

[figures: mean return vs. standard deviation of return (trade-off curve); optimal allocation x(1), …, x(4) vs. standard deviation of return]
Scalarization

to find Pareto optimal points: choose λ ≻_{K∗} 0 and solve scalar problem

minimize   λᵀf0(x)
subject to fi(x) ≤ 0, i = 1, …, m
           hi(x) = 0, i = 1, …, p

if x is optimal for scalar problem, then it is Pareto optimal for vector optimization problem

[figure: O with supporting hyperplanes at f0(x1) and f0(x2) for weights λ1, λ2; the Pareto point f0(x3) is not obtained by scalarization]

for convex vector optimization problems, can find (almost) all Pareto optimal points by varying λ ≻_{K∗} 0
Examples

• for multicriteria problem, find Pareto optimal points by minimizing positive weighted sum

  λᵀf0(x) = λ1 F1(x) + ⋯ + λq Fq(x)

• regularized least-squares (with λ = (1, γ))

  minimize ‖Ax − b‖₂² + γ‖x‖₂²

  for fixed γ > 0, a least-squares problem

• risk-return trade-off (with λ = (1, γ))

  minimize   −p̄ᵀx + γ xᵀΣx
  subject to 1ᵀx = 1, x ⪰ 0

  for fixed γ > 0, a QP
Generalized inequality constraints

convex problem with generalized inequality constraints

minimize   f0(x)
subject to fi(x) ⪯_{Ki} 0, i = 1, …, m
           Ax = b

• f0 : R^n → R convex; fi : R^n → R^{ki} Ki-convex w.r.t. proper cone Ki
• same properties as standard convex problem (convex feasible set, local optimum is global, etc.)

conic form problem: special case with affine objective and constraints

minimize   cᵀx
subject to Fx + g ⪯_K 0
           Ax = b

extends linear programming (K = R^m_+) to nonpolyhedral cones
Semidefinite program (SDP)
    minimize    cᵀx
    subject to  x1 F1 + x2 F2 + · · · + xn Fn + G ⪯ 0
                Ax = b

with Fi, G ∈ Sᵏ

▸ inequality constraint is called linear matrix inequality (LMI)
▸ includes problems with multiple LMI constraints: for example,

    x1 F̂1 + · · · + xn F̂n + Ĝ ⪯ 0,    x1 F̃1 + · · · + xn F̃n + G̃ ⪯ 0

  is equivalent to the single LMI

    x1 [ F̂1  0  ] + x2 [ F̂2  0  ] + · · · + xn [ F̂n  0  ] + [ Ĝ  0 ] ⪯ 0
       [ 0   F̃1 ]      [ 0   F̃2 ]             [ 0   F̃n ]   [ 0  G̃ ]
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–44
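The block-diagonal stacking that merges two LMIs into one can be checked numerically. A minimal sketch (matrices are made up for illustration): the eigenvalues of the stacked matrix are exactly the union of the blocks' eigenvalues, so the single LMI holds iff both original LMIs hold.

```python
# Merging two LMIs via block-diagonal stacking.
import numpy as np

def block_diag2(P, Q):
    """Place symmetric P and Q on the diagonal of a larger matrix."""
    k, l = P.shape[0], Q.shape[0]
    Z = np.zeros((k + l, k + l))
    Z[:k, :k] = P
    Z[k:, k:] = Q
    return Z

Fhat = np.array([[1.0, 0.0], [0.0, 2.0]])   # made-up example blocks
Ftil = np.array([[3.0, 1.0], [1.0, 3.0]])
stacked = block_diag2(Fhat, Ftil)

# Eigenvalues of the block-diagonal matrix = union of the blocks' eigenvalues.
eigs = np.sort(np.linalg.eigvalsh(stacked))
expected = np.sort(np.concatenate([np.linalg.eigvalsh(Fhat),
                                   np.linalg.eigvalsh(Ftil)]))
```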
LP and SOCP as SDP
LP and equivalent SDP
    LP:   minimize    cᵀx
          subject to  Ax ⪯ b

    SDP:  minimize    cᵀx
          subject to  diag(Ax − b) ⪯ 0

(note different interpretation of generalized inequality ⪯)

SOCP and equivalent SDP

    SOCP: minimize    fᵀx
          subject to  ‖Ai x + bi‖₂ ≤ cᵢᵀx + di,  i = 1, . . . , m

    SDP:  minimize    fᵀx
          subject to  [ (cᵢᵀx + di) I   Ai x + bi  ] ⪰ 0,  i = 1, . . . , m
                      [ (Ai x + bi)ᵀ   cᵢᵀx + di  ]
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–45
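The SOCP-to-SDP step rests on the fact that ‖u‖₂ ≤ s is equivalent to the matrix [[sI, u], [uᵀ, s]] being PSD. A quick numerical spot-check (random made-up data):

```python
# Second-order-cone membership vs. the equivalent PSD condition.
import numpy as np

def soc_holds(u, s):
    return np.linalg.norm(u) <= s

def lmi_holds(u, s, tol=1e-9):
    """PSD test for [[s*I, u], [u^T, s]] via the smallest eigenvalue."""
    k = len(u)
    M = np.zeros((k + 1, k + 1))
    M[:k, :k] = s * np.eye(k)
    M[:k, k] = u
    M[k, :k] = u
    M[k, k] = s
    return np.linalg.eigvalsh(M).min() >= -tol

rng = np.random.default_rng(1)
checks = []
for _ in range(100):
    u = rng.standard_normal(3)
    s = rng.standard_normal() + 1.0
    checks.append(soc_holds(u, s) == lmi_holds(u, s))
```

The agreement is exact because the block matrix has eigenvalues s ± ‖u‖₂ (plus s with multiplicity k − 1), so its smallest eigenvalue is s − ‖u‖₂.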
Examples of SDP problems

▸ A few words on formats of SDPs
▸ Convex optimization:
  ▸ Eigenvalue problems
  ▸ log det(X) problems
▸ Combinatorial optimization: MAX CUT
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–46
Forms of SDP problems

▸ Data: C, Ai, i = 1, . . . , m symmetric matrices
▸ Standard dual form:

    SDD:  maximize_y  Σ_{i=1}^m yi bi
          s.t.        C − Σ_{i=1}^m yi Ai ⪰ 0

▸ An LMI constraint: M(z) ⪰ 0, where z ∈ Rⁿ and M : Rⁿ → Sᵏ is a linear (matrix-valued) function
▸ Standard primal form:

    SDP:  minimize_X  tr(CX)
          s.t.        tr(Ai X) = bi,  i = 1, . . . , m
                      X ⪰ 0.
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–47
Example

    A1 = [ 1 0 1 ]   A2 = [ 0 2 8 ]   b = [ 11 ]   C = [ 1 2 3 ]
         [ 0 3 7 ]        [ 2 6 0 ]       [ 19 ]       [ 2 9 0 ]
         [ 1 7 5 ]        [ 8 0 4 ]                    [ 3 0 7 ]

    SDD:  maximize  11 y1 + 19 y2
          s.t.      y1 [ 1 0 1 ] + y2 [ 0 2 8 ] ⪯ [ 1 2 3 ]
                       [ 0 3 7 ]      [ 2 6 0 ]   [ 2 9 0 ]
                       [ 1 7 5 ]      [ 8 0 4 ]   [ 3 0 7 ]

which we can rewrite in the following form:

    SDD:  maximize  11 y1 + 19 y2
          s.t.      [ 1 − 1y1 − 0y2   2 − 0y1 − 2y2   3 − 1y1 − 8y2 ]
                    [ 2 − 0y1 − 2y2   9 − 3y1 − 6y2   0 − 7y1 − 0y2 ]  ⪰ 0.
                    [ 3 − 1y1 − 8y2   0 − 7y1 − 0y2   7 − 5y1 − 4y2 ]
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–48
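The entrywise affine matrix above is just C − y1 A1 − y2 A2. A minimal sketch checking a couple of entries at a made-up sample point (y1, y2) = (0.5, −0.25):

```python
# Verify the entrywise expansion of the dual slack matrix C - y1*A1 - y2*A2.
import numpy as np

A1 = np.array([[1, 0, 1], [0, 3, 7], [1, 7, 5]], dtype=float)
A2 = np.array([[0, 2, 8], [2, 6, 0], [8, 0, 4]], dtype=float)
C = np.array([[1, 2, 3], [2, 9, 0], [3, 0, 7]], dtype=float)

def slack(y1, y2):
    return C - y1 * A1 - y2 * A2

y1, y2 = 0.5, -0.25          # arbitrary sample point
S = slack(y1, y2)
entry_11 = 1 - 1 * y1 - 0 * y2   # slide's (1,1) entry
entry_23 = 0 - 7 * y1 - 0 * y2   # slide's (2,3) entry
```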
Example

    A1 = [ 1 0 1 ]   A2 = [ 0 2 8 ]   b = [ 11 ]   C = [ 1 2 3 ]
         [ 0 3 7 ]        [ 2 6 0 ]       [ 19 ]       [ 2 9 0 ]
         [ 1 7 5 ]        [ 8 0 4 ]                    [ 3 0 7 ]

    SDP:  minimize  x11 + 4 x12 + 6 x13 + 9 x22 + 0 x23 + 7 x33
          s.t.      x11 + 0 x12 + 2 x13 + 3 x22 + 14 x23 + 5 x33 = 11
                    0 x11 + 4 x12 + 16 x13 + 6 x22 + 0 x23 + 4 x33 = 19

                    X = [ x11 x12 x13 ]
                        [ x21 x22 x23 ]  ⪰ 0.
                        [ x31 x32 x33 ]
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–49
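The scalar coefficients above come from expanding tr(CX) and tr(Ai X) with X symmetric, which doubles each off-diagonal coefficient. A quick check on a random symmetric X (seed is arbitrary):

```python
# The primal coefficients are the trace expansions tr(CX), tr(A1 X).
import numpy as np

A1 = np.array([[1, 0, 1], [0, 3, 7], [1, 7, 5]], dtype=float)
C = np.array([[1, 2, 3], [2, 9, 0], [3, 0, 7]], dtype=float)

rng = np.random.default_rng(2)
Z = rng.standard_normal((3, 3))
X = Z + Z.T                       # a random symmetric X
x11, x12, x13 = X[0, 0], X[0, 1], X[0, 2]
x22, x23, x33 = X[1, 1], X[1, 2], X[2, 2]

# Off-diagonal entries of C and A1 appear with a factor of 2.
obj = x11 + 4 * x12 + 6 * x13 + 9 * x22 + 0 * x23 + 7 * x33
con1 = x11 + 0 * x12 + 2 * x13 + 3 * x22 + 14 * x23 + 5 * x33
```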
SDP in Convex Optimization
Eigenvalue Optimization — Typical Eigenvalue Problems

▸ We are given symmetric matrices B and Ai, i = 1, . . . , k
▸ We choose weights w1, . . . , wk to create a new matrix S:

    S := B − Σ_{i=1}^k wi Ai.

▸ There might be restrictions on the weights w: Gw ≤ d
▸ The typical goal is for S to have some nice property, such as:
  ▸ λmin(S) is maximized
  ▸ λmax(S) is minimized
  ▸ λmax(S) − λmin(S) is minimized
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–50
SDP in Convex Optimization
Eigenvalue Optimization — Useful Relationships

Property
M ∈ Sᵐ if and only if M = QDQᵀ, where D is diagonal and QᵀQ = I. M ⪰ 0 if and only if diag(D) ≥ 0 (or D ⪰ 0).

Schur complement property
Consider

    X = [ A   B ]
        [ Bᵀ  C ],

where A ∈ Sⁿ, C ∈ Sᵐ and B ∈ R^{n×m}. Define

    S = C − Bᵀ A⁻¹ B ∈ Sᵐ.

Then:
▸ X ≻ 0 if and only if A ≻ 0 and S ≻ 0
▸ If A ≻ 0, then X ⪰ 0 if and only if S ⪰ 0
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–51
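The Schur complement fact can be spot-checked numerically. A minimal sketch (A, B, C are random made-up data, with A forced positive definite):

```python
# Numeric spot-check: with A > 0, X >= 0 iff the Schur complement S >= 0.
import numpy as np

rng = np.random.default_rng(3)
n, m = 3, 2
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)       # A positive definite
B = rng.standard_normal((n, m))
H = rng.standard_normal((m, m))
C = H @ H.T                       # C positive semidefinite

X = np.block([[A, B], [B.T, C]])
S = C - B.T @ np.linalg.inv(A) @ B

# Compare the two PSD tests via smallest eigenvalues.
x_psd = np.linalg.eigvalsh(X).min() >= -1e-9
s_psd = np.linalg.eigvalsh(S).min() >= -1e-9
```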
SDP in Convex Optimization
Eigenvalue Optimization — Useful Relationships

Property
M ⪰ tI if and only if λmin(M) ≥ t.

Proof: M = QDQᵀ. Consider R defined as:

    R = M − tI = QDQᵀ − tQIQᵀ = Q(D − tI)Qᵀ.

Then

    M ⪰ tI ⟺ R ⪰ 0 ⟺ D − tI ⪰ 0 ⟺ λmin(M) ≥ t

Property
M ⪯ tI if and only if λmax(M) ≤ t.

Proof: write v = Σ_{i=1}^n αi qi, where the qi's are orthonormal eigenvectors of M. Then

    M ⪯ tI ⟺ vᵀ(tI − M)v ≥ 0 ∀v ⟺ Σ_{i=1}^n αi² (t − λi) ≥ 0 ∀α ⟺ λmax(M) ≤ t
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–52
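Both eigenvalue bounds above can be verified on a random symmetric matrix. A minimal sketch (seed and margins are arbitrary):

```python
# Check M <= tI iff lambda_max(M) <= t, and M >= sI iff lambda_min(M) >= s.
import numpy as np

rng = np.random.default_rng(4)
Z = rng.standard_normal((4, 4))
M = (Z + Z.T) / 2                 # random symmetric matrix
lam = np.linalg.eigvalsh(M)       # ascending eigenvalues
lam_min, lam_max = lam[0], lam[-1]

t = lam_max + 0.5                 # t strictly above lambda_max
upper_ok = np.linalg.eigvalsh(t * np.eye(4) - M).min() >= 0   # M <= tI

s = lam_min - 0.5                 # s strictly below lambda_min
lower_ok = np.linalg.eigvalsh(M - s * np.eye(4)).min() >= 0   # M >= sI
```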
SDP in Convex Optimization
Eigenvalue Optimization

Consider the problem:

    EOP:  minimize_{w,S}  λmax(S) − λmin(S)
          s.t.            S = B − Σ_{i=1}^k wi Ai
                          Gw ≤ d.

This is equivalent to:

    EOP:  minimize_{w,S,µ,λ}  µ − λ
          s.t.                S = B − Σ_{i=1}^k wi Ai
                              Gw ≤ d
                              λI ⪯ S ⪯ µI.

This is an SDP.
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–53
Matrix norm minimization

    minimize  ‖A(x)‖₂ = (λmax(A(x)ᵀA(x)))^{1/2}

where A(x) = A0 + x1 A1 + · · · + xn An (with given Ai ∈ R^{p×q})

equivalent SDP:

    minimize    t
    subject to  [ tI     A(x) ]
                [ A(x)ᵀ  tI   ]  ⪰ 0

▸ variables x ∈ Rⁿ, t ∈ R
▸ constraint follows from

    ‖A‖₂ ≤ t ⟺ AᵀA ⪯ t²I, t ≥ 0 ⟺ [ tI  A  ]
                                      [ Aᵀ  tI ]  ⪰ 0,

  using the Schur complement: S = tI − Aᵀ(1/t)A.
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–54
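The spectral-norm LMI can be checked numerically on a random (nonsquare) matrix. A minimal sketch (data and margins are made up):

```python
# ||A||_2 <= t iff [[t*I, A], [A^T, t*I]] >= 0.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 5))
norm2 = np.linalg.norm(A, 2)      # largest singular value

def lmi_psd(A, t, tol=1e-9):
    """PSD test for the block matrix [[t*I, A], [A^T, t*I]]."""
    p, q = A.shape
    M = np.block([[t * np.eye(p), A], [A.T, t * np.eye(q)]])
    return np.linalg.eigvalsh(M).min() >= -tol

feasible = lmi_psd(A, norm2 + 0.1)     # t above the norm: PSD
infeasible = lmi_psd(A, norm2 - 0.1)   # t below the norm: not PSD
```

The block matrix has eigenvalues t ± σi(A) (plus copies of t), so its smallest eigenvalue is t − ‖A‖₂, which makes the equivalence exact.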
SDP in Convex Optimization
The Logarithmic Barrier Function

▸ Let X ∈ Sⁿ₊₊.
▸ X will have n eigenvalues λ1(X), . . . , λn(X)
▸ We have shown that the following function of X is convex:

    B(X) := −Σ_{j=1}^n ln(λj(X)) = −ln( Π_{j=1}^n λj(X) ) = −ln(det(X))

▸ This function is called the log-determinant function or the logarithmic barrier function for the semidefinite cone
▸ The name "barrier function" stems from the fact that B(X) → +∞ as X approaches the boundary of Sⁿ₊:

    ∂Sⁿ₊ = {X ∈ Sⁿ : λj(X) ≥ 0, j = 1, . . . , n, and λj(X) = 0 for some j}
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–55
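The two identities and the barrier behavior are easy to see numerically. A minimal sketch (the positive definite X is made up; the smallest eigenvalue is then shrunk toward 0 to approach the boundary of the cone):

```python
# -ln det(X) = -sum of ln(eigenvalues); it blows up near the boundary.
import numpy as np

def barrier(X):
    return -np.log(np.linalg.det(X))

rng = np.random.default_rng(6)
G = rng.standard_normal((3, 3))
X = G @ G.T + 3 * np.eye(3)       # X positive definite

via_det = barrier(X)
via_eigs = -np.sum(np.log(np.linalg.eigvalsh(X)))

# Push the smallest eigenvalue toward 0 (toward the boundary of S^n_+).
lam, Q = np.linalg.eigh(X)
lam_small = lam.copy()
lam_small[0] = 1e-6
X_near = Q @ np.diag(lam_small) @ Q.T   # barrier(X_near) is much larger
```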
SDP in Convex Optimization
The Analytic Center Problem for SDP

▸ An LMI:

    Σ_{i=1}^m yi Ai ⪯ C

▸ The analytic center is the solution (y, S) of:

    ACP:  maximize_{y,S}  Π_{i=1}^n λi(S)
          s.t.            Σ_{i=1}^m yi Ai + S = C
                          S ⪰ 0.

▸ This is the same as:

    ACP:  minimize_{y,S}  −ln det(S)
          s.t.            Σ_{i=1}^m yi Ai + S = C
                          S ≻ 0.

▸ y is "centrally" located in the set of solutions of the LMI
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–56
SDP in Convex Optimization
Minimum Volume Circumscription Problem

▸ Given R ≻ 0, z ∈ Rⁿ, we define an ellipsoid in Rⁿ:

    E_{R,z} := {y | (y − z)ᵀR(y − z) ≤ 1}

▸ The volume of E_{R,z} is proportional to √det(R⁻¹)
▸ Given a convex set X = conv{c1, . . . , ck} ⊂ Rⁿ, find an ellipsoid circumscribing X that has minimum volume
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–57
SDP in Convex Optimization
Minimum Volume Circumscription Problem

▸ Problem formulation

    MCP:  minimize_{R,z}  vol(E_{R,z})
          s.t.            ci ∈ E_{R,z},  i = 1, . . . , k

▸ Equivalent to

    MCP:  minimize_{R,z}  −ln(det(R))
          s.t.            (ci − z)ᵀR(ci − z) ≤ 1,  i = 1, . . . , k
                          R ≻ 0,

▸ Factor R = M² where M ≻ 0:

    MCP:  minimize_{M,z}  −ln(det(M²))
          s.t.            (ci − z)ᵀMᵀM(ci − z) ≤ 1,  i = 1, . . . , k
                          M ≻ 0.
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–58
SDP in Convex Optimization
Minimum Volume Circumscription Problem

    MCP:  minimize_{M,z}  −ln(det(M²))
          s.t.            (ci − z)ᵀMᵀM(ci − z) ≤ 1,  i = 1, . . . , k
                          M ≻ 0

▸ Next notice the equivalence:

    [ I              M ci − M z ]
    [ (M ci − M z)ᵀ  1          ]  ⪰ 0  ⟺  (ci − z)ᵀMᵀM(ci − z) ≤ 1

▸ In this way we can write MCP as:

    MCP:  minimize_{M,z}  −2 ln(det(M))
          s.t.            [ I              M ci − M z ]
                          [ (M ci − M z)ᵀ  1          ]  ⪰ 0,  i = 1, . . . , k
                          M ≻ 0

▸ Substitute y = Mz to obtain:

    MCP:  minimize_{M,y}  −2 ln(det(M))
          s.t.            [ I            M ci − y ]
                          [ (M ci − y)ᵀ  1        ]  ⪰ 0,  i = 1, . . . , k
                          M ≻ 0
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–59
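The equivalence used above is a Schur complement with identity block: [[I, u], [uᵀ, 1]] ⪰ 0 iff uᵀu ≤ 1. A minimal sketch checking it on made-up points inside and outside the unit ball:

```python
# [[I, u], [u^T, 1]] >= 0 iff u^T u <= 1 (Schur complement with A = I).
import numpy as np

def lmi_psd(u, tol=1e-9):
    n = len(u)
    M = np.block([[np.eye(n), u.reshape(-1, 1)],
                  [u.reshape(1, -1), np.ones((1, 1))]])
    return np.linalg.eigvalsh(M).min() >= -tol

inside = np.array([0.3, 0.4])     # ||u||^2 = 0.25 <= 1
outside = np.array([1.0, 1.0])    # ||u||^2 = 2.0  >  1
```

The block matrix has eigenvalues 1 ± ‖u‖₂ (plus 1's), so PSD holds exactly when ‖u‖₂ ≤ 1.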
SDP in Convex Optimization
Minimum Volume Circumscription Problem

    MCP:  minimize_{M,y}  −2 ln(det(M))
          s.t.            [ I            M ci − y ]
                          [ (M ci − y)ᵀ  1        ]  ⪰ 0,  i = 1, . . . , k
                          M ≻ 0

▸ All of the matrix coefficients are linear functions of the variables M and y
▸ LMI constraints
▸ The objective is the logarithmic barrier function −ln(det(M))
▸ Easy to solve
▸ After solving, recover the matrix R and the center z of the optimal ellipsoid by computing

    R = M²  and  z = M⁻¹y
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–60
SDP in Combinatorial Optimization
MAX CUT Problem

▸ G is an undirected graph with nodes N = {1, . . . , n} and edge set E.
▸ Let wij = wji be the weight on edge (i, j), for (i, j) ∈ E.
▸ We assume that wij ≥ 0 for all (i, j) ∈ E.
▸ The MAX CUT problem is to determine a subset S of the nodes N for which the sum of the weights of the edges that cross from S to its complement S̄ is maximized
▸ S̄ := N \ S
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–61
SDP in Combinatorial Optimization
MAX CUT Problem — Formulations

▸ Let xj = 1 for j ∈ S and xj = −1 for j ∈ S̄.

    MAXCUT:  maximize_x  (1/4) Σ_{i=1}^n Σ_{j=1}^n wij (1 − xi xj)
             s.t.        xj ∈ {−1, 1},  j = 1, . . . , n

▸ Let Y = xxᵀ. Then

    Yij = xi xj,  i = 1, . . . , n,  j = 1, . . . , n

▸ Let W ∈ Sⁿ with Wij = wij for i, j = 1, . . . , n
▸ Reformulation:

    MAXCUT:  maximize_{Y,x}  (1/4) Σ_{i=1}^n Σ_{j=1}^n wij − (1/4) tr(WY)
             s.t.            xj ∈ {−1, 1},  j = 1, . . . , n
                             Y = xxᵀ
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–62
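On a small graph, the quadratic objective, the cut weight, and the trace reformulation can be compared exhaustively. A minimal sketch (the 3-node weight matrix is made up):

```python
# (1/4) * sum_ij wij * (1 - xi*xj) = cut weight = (1/4)*sum(W) - (1/4)*tr(WY).
from itertools import product
import numpy as np

W = np.array([[0, 1, 2],
              [1, 0, 3],
              [2, 3, 0]], dtype=float)   # made-up symmetric weights
n = W.shape[0]

def quad_obj(x):
    return 0.25 * sum(W[i, j] * (1 - x[i] * x[j])
                      for i in range(n) for j in range(n))

def cut_weight(x):
    """Sum of weights of edges crossing between the +1 and -1 sides."""
    return sum(W[i, j] for i in range(n) for j in range(i + 1, n)
               if x[i] != x[j])

results = []
for bits in product([-1, 1], repeat=n):
    x = np.array(bits)
    Y = np.outer(x, x)
    results.append((quad_obj(x), cut_weight(x),
                    0.25 * W.sum() - 0.25 * np.trace(W @ Y)))
```

Each crossing edge contributes wij twice in the double sum with 1 − xi xj = 2, so the factor 1/4 recovers the cut weight exactly.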
SDP in Combinatorial Optimization
MAX CUT Problem — Formulations

    MAXCUT:  maximize_{Y,x}  (1/4) Σ_{i=1}^n Σ_{j=1}^n wij − (1/4) tr(WY)
             s.t.            xj ∈ {−1, 1},  j = 1, . . . , n
                             Y = xxᵀ

▸ The first set of constraints is equivalent to Yjj = 1, j = 1, . . . , n

    MAXCUT:  maximize_{Y,x}  (1/4) Σ_{i=1}^n Σ_{j=1}^n wij − (1/4) tr(WY)
             s.t.            Yjj = 1,  j = 1, . . . , n
                             Y = xxᵀ
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–63
SDP in Combinatorial Optimization
MAX CUT Problem — Relaxation

    MAXCUT:  maximize_{Y,x}  (1/4) Σ_{i=1}^n Σ_{j=1}^n wij − (1/4) tr(WY)
             s.t.            Yjj = 1,  j = 1, . . . , n
                             Y = xxᵀ

▸ The constraint "Y = xxᵀ" is equivalent to "Y is a symmetric positive semidefinite matrix of rank 1"
▸ We relax this condition by removing the rank-1 restriction, and obtain the relaxation of MAX CUT, which is an SDP:

    RELAX:  maximize_Y  (1/4) Σ_{i=1}^n Σ_{j=1}^n wij − (1/4) tr(WY)
            s.t.        Yjj = 1,  j = 1, . . . , n
                        Y ⪰ 0

▸ MAXCUT ≤ RELAX
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–64
SDP in Combinatorial Optimization
MAX CUT Problem — Relaxation

    RELAX:  maximize_Y  (1/4) Σ_{i=1}^n Σ_{j=1}^n wij − (1/4) tr(WY)
            s.t.        Yjj = 1,  j = 1, . . . , n
                        Y ⪰ 0

▸ As it turns out, one can also prove that:

    0.87856 · RELAX ≤ MAXCUT ≤ RELAX.

▸ I.e., the optimal value of the NP-hard MAX CUT problem is guaranteed to be within about 12.2% of the value of its semidefinite relaxation.
IOE 611: Nonlinear Programming, Fall 2017 4. Convex optimization problems Page 4–65