TRANSCRIPT
A class of separable convex optimization
problems in communication
Arun Padakandla, joint work with Dr. Rajesh Sundaresan
Dept of Electrical Communication Engineering,
Indian Institute of Science, Bangalore.
A simple separable convex optimization problem in
geometry
Let L be a natural number.
$$\min \sum_{l=1}^{L} x_l^2$$
subject to $0 \le x_l \le \beta_l$ for $l = 1, \dots, L$, and $\sum_{l=1}^{L} x_l = K$.
For illustration, consider L = 2. This problem arises in minimum-variance unbiased (MVU) estimation of parameters in wireless sensor networks (Zacharias and Sundaresan [2007]).
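Geometrically, this is the Euclidean projection of the origin onto the set $\{x : 0 \le x_l \le \beta_l,\ \sum_l x_l = K\}$. As a minimal numerical sketch (not from the talk): by the KKT conditions the minimizer equalizes the coordinates at a common level $t \ge 0$ and clips at the upper bounds, $x_l = \min(t, \beta_l)$, with $t$ found by bisection. The function name `clipped_equal_fill` and the numbers below are hypothetical; Python with NumPy is assumed.

```python
import numpy as np

def clipped_equal_fill(beta, K, tol=1e-12):
    """Minimize sum_l x_l^2 subject to 0 <= x_l <= beta_l and sum_l x_l = K.

    By the KKT conditions the minimizer equalizes coordinates at a common
    level t >= 0 and clips at the upper bounds: x_l = min(t, beta_l).
    Bisect on t to meet the sum constraint (assumes 0 <= K <= sum(beta)).
    """
    beta = np.asarray(beta, dtype=float)
    lo, hi = 0.0, beta.max()
    while hi - lo > tol:
        t = 0.5 * (lo + hi)
        if np.minimum(t, beta).sum() < K:
            lo = t
        else:
            hi = t
    return np.minimum(lo, beta)

# L = 2 illustration with hypothetical bounds: expect x = [1.0, 1.5].
print(clipped_equal_fill([1.0, 3.0], K=2.5))
```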
Single user parallel Gaussian channel
L independent channels with noise variances $\sigma_1^2 \le \cdots \le \sigma_L^2$.
$Y = X + N$ on each channel.
Sum power constraint of $P$ Joules per channel use.
Communicating independently on the L channels is optimal.
Goal: identify powers $P_1, P_2, \dots, P_L$ satisfying $\sum_{l=1}^{L} P_l = P$ that achieve the maximum rate.
Allocating power $P_l$ on the channel with noise variance $\sigma_l^2$ yields a rate of $\frac{1}{2}\log\left(1 + \frac{P_l}{\sigma_l^2}\right)$.
Single user parallel Gaussian channel
$$\max \sum_{l=1}^{L} \frac{1}{2}\log\left(1 + \frac{P_l}{\sigma_l^2}\right)$$
subject to $P_l \ge 0$ for $l = 1, 2, \dots, L$,
$$\sum_{l=1}^{L} P_l \le P.$$
The solution is given by a technique popularly called water-filling.
The problem involves
◮ maximizing a separable concave function
◮ subject to positivity and a linear constraint
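As a rough illustration (not from the talk), here is a bisection-based water-filling sketch: the water level $w$ is raised until the allocated power $\sum_l \max(w - \sigma_l^2, 0)$ meets the budget $P$. The name `water_fill` and the example numbers are hypothetical; NumPy is assumed.

```python
import numpy as np

def water_fill(sig2, P, tol=1e-12):
    """Water-filling: maximize sum_l 0.5*log(1 + P_l/sig2_l)
    subject to P_l >= 0 and sum_l P_l <= P.
    Bisects on the water level w; the allocation is P_l = max(w - sig2_l, 0).
    """
    sig2 = np.asarray(sig2, dtype=float)
    lo, hi = sig2.min(), sig2.max() + P   # bracket for the water level
    while hi - lo > tol:
        w = 0.5 * (lo + hi)
        if np.maximum(w - sig2, 0.0).sum() > P:
            hi = w
        else:
            lo = w
    return np.maximum(lo - sig2, 0.0)

# Hypothetical example: three channels, total power 4; expect [2.5, 1.5, 0.0].
print(water_fill([1.0, 2.0, 4.0], 4.0))
```

On this example the noisiest channel receives no power, as expected from water-filling.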
Multiple users parallel Gaussian channel
K-user multiple access channel.
A bandwidth constraint restricts signals to a vector space of L dimensions, with K > L.
Users are assigned vectors (directions in $\mathbb{R}^L$) that are modulated.
Consider an allocation of vectors and let $(\mu_1, \dots, \mu_L) = \mathrm{eig}(\text{signal-plus-interference matrix})$.
Each user is assigned a single vector and hence will operate along one direction, in contrast to the single-user parallel Gaussian case.
Multiple users parallel Gaussian channel
$\mu$ must satisfy the following:
◮ powers are allocated in every direction, i.e., $\mu_l - \sigma_l^2 \ge 0$ for $l = 1, 2, \dots, L$;
◮ the first user (largest power constraint) can be accommodated in the least noisy dimension, i.e., $\mu_1 - \sigma_1^2 \ge P_1$;
◮ the first $n$ users (largest power constraints) can be accommodated in the $n$ least noisy dimensions, i.e.,
$$\sum_{l=1}^{n} \left(\mu_l - \sigma_l^2\right) \ge \sum_{l=1}^{n} P_l, \quad \text{for } n = 1, 2, \dots, L-1;$$
◮ the sum power constraint is obeyed:
$$\sum_{l=1}^{L} \left(\mu_l - \sigma_l^2\right) = \sum_{k=1}^{K} P_k.$$
Multiple users parallel Gaussian channel
A $\mu$ satisfying the above constraints ⇒ an allocation of directions to the users. For simplicity, let $x_l \stackrel{\text{def}}{=} \mu_l - \sigma_l^2$. Then
$$\underbrace{\sum_{l=1}^{L} \frac{1}{2}\log\left(\mu_l\right)}_{\log\det(S+I)} - \underbrace{\sum_{l=1}^{L} \frac{1}{2}\log\left(\sigma_l^2\right)}_{\log\det(\text{noise})} = \sum_{l=1}^{L} \frac{1}{2}\log\left(1 + \frac{x_l}{\sigma_l^2}\right),$$
since $\mu_l = \sigma_l^2 + x_l$ makes each term $\frac{1}{2}\log\left(\mu_l/\sigma_l^2\right)$.
This is the sum rate of the K users.
Multiple users parallel Gaussian channel
Sum rate maximization can be stated as
$$\text{maximize} \sum_{l=1}^{L} \log\left(1 + \frac{x_l}{\sigma_l^2}\right)$$
subject to $x_l \ge 0$ for $l = 1, 2, \dots, L$,
$$\sum_{l=1}^{n} x_l \ge \sum_{k=1}^{n} P_k, \quad \text{for } n = 1, 2, \dots, L-1,$$
$$\sum_{l=1}^{L} x_l = \sum_{k=1}^{K} P_k.$$
The single-user parallel Gaussian channel power allocation problem had only the first (positivity) and the last (equality) constraints.
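As a numerical sanity check (not part of the talk), the stated problem is convex and can be handed to a generic solver. The sketch below assumes the cvxpy package is available; the instance numbers are hypothetical.

```python
import cvxpy as cp
import numpy as np

# Hypothetical instance: L = 3 dimensions, noise variances increasing,
# users ordered by power constraint P_1 >= P_2 >= P_3.
sig2 = np.array([1.0, 2.0, 4.0])
P = np.array([3.0, 2.0, 1.0])

x = cp.Variable(3, nonneg=True)                  # x_l = mu_l - sigma_l^2
ladder = [cp.sum(x[: n + 1]) >= P[: n + 1].sum() for n in range(2)]
total = [cp.sum(x) == P.sum()]
rate = cp.sum(cp.log(1 + cp.multiply(x, 1.0 / sig2)))
prob = cp.Problem(cp.Maximize(rate), ladder + total)
prob.solve()
print(np.round(x.value, 4), round(prob.value, 4))
```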
Abstraction of the problem
Suppose $g_l$, $l = 1, \dots, L$, are functions that satisfy the following:
◮ $g_l : (a_l, b_l) \to \mathbb{R}$, where $0 \in (a_l, b_l)$;
◮ $g_l$ is strictly convex and continuously differentiable on $(a_l, b_l)$;
◮ the $g_l'(0)$ are in increasing order with respect to the index $l$, i.e.,
$$g_1'(0) \le g_2'(0) \le \cdots \le g_L'(0). \tag{1}$$
$$\min \sum_{l=1}^{L} g_l(x_l)$$
subject to $x_l \ge 0$ for $l = 1, \dots, L$,
$$\sum_{l=1}^{n} x_l \ge \sum_{k=1}^{n} \alpha_k, \quad \text{for } n = 1, \dots, L-1,$$
$$\sum_{l=1}^{L} x_l = \sum_{k=1}^{K} \alpha_k.$$
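For instance, the multi-user sum-rate problem above fits this template (as a minimization) by taking $g_l(x) = -\frac{1}{2}\log\left(1 + \frac{x}{\sigma_l^2}\right)$ and $\alpha_k = P_k$: each $g_l$ is strictly convex and continuously differentiable on $(-\sigma_l^2, \infty) \ni 0$, and $g_l'(0) = -\frac{1}{2\sigma_l^2}$ is increasing in $l$ whenever $\sigma_1^2 \le \cdots \le \sigma_L^2$, so condition (1) holds.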
An explanation of the finite step algorithm
$$\min \sum_{l=1}^{L} g_l(x_l) \qquad (g_l \text{ strictly convex, continuously differentiable})$$
subject to
$$\sum_{l=1}^{L} x_l = \sum_{k=1}^{K} \alpha_k.$$
Suppose you find $(x_1, x_2, \dots, x_L) \in \mathbb{R}_+^L$ such that
$$x_1 + x_2 + \cdots + x_L = \alpha_1 + \alpha_2 + \cdots + \alpha_K$$
and
$$h_1(x_1) = h_2(x_2) = \cdots = h_L(x_L) \stackrel{\text{def}}{=} \Theta_1^L,$$
where $h_l$ denotes the derivative $g_l'$.
Tempted to declare
$$(x_1, x_2, \dots, x_L) = \left(h_1^{-1}(\Theta_1^L),\, h_2^{-1}(\Theta_1^L),\, \dots,\, h_L^{-1}(\Theta_1^L)\right)$$
as the solution.
Wait!! Suppose there exists $(x_1', x_2', \dots, x_n') \in \mathbb{R}_+^n$ ($n < L$) such that
$$x_1' + x_2' + \cdots + x_n' = \alpha_1 + \alpha_2 + \cdots + \alpha_n$$
and
$$h_1(x_1') = h_2(x_2') = \cdots = h_n(x_n') \stackrel{\text{def}}{=} \theta_1^n > \Theta_1^L.$$
You are in trouble. Note that
$$x_1 + x_2 + \cdots + x_n = \sum_{l=1}^{n} h_l^{-1}(\Theta_1^L) < \sum_{l=1}^{n} h_l^{-1}(\theta_1^n) \qquad (h_l^{-1} \text{ is strictly increasing})$$
$$= x_1' + x_2' + \cdots + x_n' = \alpha_1 + \alpha_2 + \cdots + \alpha_n. \tag{2}$$
So $(x_1, \dots, x_L) = \left(h_1^{-1}(\Theta_1^L), h_2^{-1}(\Theta_1^L), \dots, h_L^{-1}(\Theta_1^L)\right)$ violates the $n$-th ladder constraint and is not feasible!!!
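A tiny numeric illustration of this failure (not from the talk; the instance is hypothetical, with $g_l(x) = -\frac{1}{2}\log(1 + x/\sigma_l^2)$ so that $h_l(x) = -\frac{1}{2(\sigma_l^2 + x)}$ and $h_l^{-1}(\theta) = -\frac{1}{2\theta} - \sigma_l^2$; SciPy's `brentq` does the scalar root-finding):

```python
import numpy as np
from scipy.optimize import brentq

sig2 = np.array([1.0, 2.0])
alpha = np.array([5.0, 1.0])        # ladder: x_1 >= 5; equality: x_1 + x_2 = 6

def level(s2, budget):              # theta with sum_l h_l^{-1}(theta) = budget
    return brentq(lambda th: np.sum(-0.5 / th - s2) - budget, -1e6, -1e-12)

Theta = level(sig2, alpha.sum())    # Theta_1^L, equalizing both coordinates
theta1 = level(sig2[:1], alpha[0])  # theta_1^1, equalizing the first coordinate
x = -0.5 / Theta - sig2             # naive equalized point
print(theta1 > Theta)               # True: theta_1^1 exceeds Theta_1^L
print(x, x[0] >= alpha[0])          # x_1 = 3.5 < 5: ladder constraint violated
```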
Let us look at
$$\xi_1 = \max\left\{\Theta_1^L,\ h_L(0),\ \theta_1^n;\ n = 1, 2, \dots, L-1\right\},$$
where each $\theta_1^n$ defines an $(x_1', x_2', \dots, x_n')$ via
$$\theta_1^n \stackrel{\text{def}}{=} h_1(x_1') = h_2(x_2') = \cdots = h_n(x_n')$$
and
$$x_1' + x_2' + \cdots + x_n' = \alpha_1 + \alpha_2 + \cdots + \alpha_n.$$
$\xi_1$ is picked as the maximum to ensure feasibility.
If $h_L(0)$ is the largest candidate, the marginal cost of the $L$-th coordinate is too large: set $x_L = 0$ and focus on the reduced problem with $L$ replaced by $L - 1$.
Suppose $\xi_1 = \Theta_1^L$. The intermediate ladder constraints did not matter.
Suppose $\xi_1 = \theta_1^n$ for some $n < L$. Set $x_l \leftarrow x_l'$ for $l = 1, 2, \dots, n$.
By definition of $\theta_1^n$ we have met the constraint
$$\sum_{l=1}^{n} x_l \ge \sum_{k=1}^{n} \alpha_k$$
with equality, and furthermore, $\theta_1^n$ being the maximum of the candidates, the assignment respects
$$\sum_{l=1}^{m} x_l \ge \sum_{k=1}^{m} \alpha_k, \quad \text{for } m = 1, \dots, n-1.$$
Thus we can focus on setting the variables $x_{n+1}, \dots, x_L$.
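Tying the steps together, below is a compact sketch of the finite step algorithm, an illustration rather than the authors' reference implementation. The instance and all names are hypothetical (same $g_l(x) = -\frac{1}{2}\log(1 + x/\sigma_l^2)$ as before), and SciPy's `brentq` performs the scalar equalizations. Each level computes the candidates $\Theta_1^L$, $h_L(0)$ and $\theta_1^n$, picks the maximum $\xi_1$, fixes the corresponding variables, and recurses as described above.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical instance: g_l(x) = -0.5*log(1 + x/sig2_l), so h_l = g_l' with
# h_l(x) = -0.5/(sig2_l + x) and closed-form inverse h_l^{-1}.
sig2 = np.array([1.0, 2.0, 4.0])     # sigma_1^2 <= ... <= sigma_L^2
alpha = np.array([5.0, 0.5, 0.5])    # alpha_1, ..., alpha_K (here K = L)

def h(l, x):
    return -0.5 / (sig2[l] + x)

def h_inv(l, theta):
    return -0.5 / theta - sig2[l]

def common_level(idx, budget):
    """theta such that sum_{l in idx} h_l^{-1}(theta) = budget."""
    f = lambda th: sum(h_inv(l, th) for l in idx) - budget
    return brentq(f, -1e6, -1e-12)   # h_l takes negative values in this demo

def solve(idx, alphas, total):
    """One step: compute the candidates, pick the max, fix variables, recurse."""
    L = len(idx)
    if L == 0:
        return {}
    cands = [('Theta', common_level(idx, total)),   # equalize all L coordinates
             ('h_L(0)', h(idx[-1], 0.0))]           # marginal cost at zero
    cum = np.cumsum(alphas)
    for n in range(1, L):                           # theta_1^n candidates
        cands.append((n, common_level(idx[:n], cum[n - 1])))
    tag, xi = max(cands, key=lambda c: c[1])
    if tag == 'Theta':            # ladder constraints did not matter: done
        return {l: h_inv(l, xi) for l in idx}
    if tag == 'h_L(0)':           # last coordinate too costly: set it to zero
        sol = solve(idx[:-1], alphas, total)
        sol[idx[-1]] = 0.0
        return sol
    n = tag                       # ladder constraint n is tight: fix x_1..x_n
    sol = {l: h_inv(l, xi) for l in idx[:n]}
    sol.update(solve(idx[n:], alphas[n:], total - cum[n - 1]))
    return sol

x = solve(list(range(len(sig2))), alpha, alpha.sum())
print({l: round(v, 4) for l, v in sorted(x.items())})  # {0: 5.0, 1: 1.0, 2: 0.0}
```

On this instance the run exercises all three branches: the first ladder constraint binds ($x_1 = 5$), the noisiest coordinate is switched off ($x_3 = 0$), and the remainder is settled by equalization ($x_2 = 1$).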
Concluding remarks
We focused on a separable convex optimization problem with linear ascending constraints.
Applications in optimal sequence design for CDMA.
We provided a finite step algorithm to identify the optimal solution.