
Consensus ADMM and Proximal ADMM for Economic Dispatch and AC OPF with SOCP Relaxation

Minyue Ma, Student Member, IEEE, Lingling Fan, Senior Member, IEEE, Zhixin Miao, Senior Member, IEEE

Abstract—In this paper, we review several forms of the alternating direction method of multipliers (ADMM) for distributed power system computing. The major focus is on ADMM-based distributed parallel optimization algorithms that are feasible to implement in a power network. First, we introduce the general form of ADMM and extend the 2-block ADMM to the N-block (multi-block) ADMM. Next, we focus on two distributed parallel ADMM-based optimization algorithms: Consensus ADMM (C-ADMM) and Proximal Jacobian ADMM (PJ-ADMM). A three-area DC optimal power flow (OPF) problem and a two-area AC OPF problem are tested for ADMM implementation. The information exchange structure and the numerical convergence results of the ADMM algorithms are given.

Index Terms—Parallel distributed optimization, power system, ADMM, Consensus ADMM, Proximal Jacobian ADMM.

I. INTRODUCTION

Conventional optimization algorithms in power systems, e.g., economic dispatch and security-constrained OPF, are centralized. There are two challenges that conventional centralized algorithms encounter. First, the size of the data sets becomes very large with more and more distributed generation, which implies high communication and computation expense because all data have to be transferred to and processed in a central controller. Second, the regions of a power network may belong to different owners, who might refuse to share some data because of security concerns. In order to solve optimization problems in power networks while dealing with the issues brought by large data sets and security concerns, distributed parallel algorithms are sought.

In distributed parallel algorithms, each agent tries to independently solve a subproblem based on limited information communication. The alternating direction method of multipliers (ADMM) is a distributed optimization method that has been adopted in distributed power system computing. For example, ADMM is adopted to solve DC OPF in [1], AC OPF in [2], and state estimation in [3]. ADMM has a few variations, which will be discussed in Section II. Not all variations are suitable for parallel computing implementation.

The objective of this paper is to find the ADMM variations that are feasible for parallel computing of economic dispatch and AC OPF. The information structure and detailed ADMM implementation are investigated for two applications: DC OPF and AC OPF with second-order conic (SOCP) relaxation. The second application deals with AC OPF, which is a nonconvex optimization problem. Convergence of ADMM is guaranteed only for convex programming problems; therefore, we apply ADMM to AC OPF with SOCP relaxation. Fortunately, for radial networks, AC OPF with SOCP relaxation leads to exact solutions.

Minyue Ma, L. Fan and Z. Miao are with the Dept. of Electrical Engineering, University of South Florida, Tampa, FL 33620. Email: [email protected].

The rest of the paper is arranged as follows. Section II discusses ADMM variations. Section III discusses the ADMM variations suitable for parallel computing: Consensus ADMM and Proximal Jacobian ADMM. Section IV presents the ADMM implementation of DC OPF. Section V presents the ADMM implementation of AC OPF with SOCP relaxation. Section VI concludes the paper.

II. ADMM

The formulation of distributed optimization problems for implementing ADMM in a power network has been discussed in many studies [4]–[6]. Considering the distributed nature of the distribution power network, the general optimization problem in the power network can be formulated as a multi-block optimization problem:

\min_{x_1, x_2, \dots, x_N} \sum_{i=1}^{N} f(x_i)    (1)

\text{s.t.} \ \sum_{i=1}^{N} A_i x_i = c, \ \text{other constraints} \dots

where i ∈ {1, 2, ..., N} denotes area i in a distribution network, x_i ∈ R^{N_i} are the variables in area i, the convex closed proper function f(x_i) is the objective function to be optimized in area i, e.g., the cost function of power generation [2] or the error magnitude function in power system state estimation [3], and A_i ∈ R^{M×N_i} and c ∈ R^M are a constant matrix and vector. This formulation provides the framework for implementing ADMM to solve problem (1). The detailed implementation process will be introduced in Section IV through examples.

A. General ADMM for two-block systems

ADMM was proposed by [7], [8] and recently reintroduced in [4], [9] to solve the equality-constrained convex optimization problem:

\min \ f_1(x_1) + f_2(x_2)    (2)
\text{s.t.} \ A_1 x_1 + A_2 x_2 = c


The general form of the ADMM iteration process can be expressed as:

x_1^{k+1} = \arg\min_{x_1} L_\rho(x_1, x_2^k, \lambda^k)    (3)
x_2^{k+1} = \arg\min_{x_2} L_\rho(x_1^{k+1}, x_2, \lambda^k)    (4)
\lambda^{k+1} = \lambda^k + \rho (A_1 x_1^{k+1} + A_2 x_2^{k+1} - c)    (5)

where L_\rho(x_1, x_2, \lambda) is defined as the augmented Lagrangian:

L_\rho(x_1, x_2, \lambda) = f_1(x_1) + f_2(x_2) + \lambda^T (A_1 x_1 + A_2 x_2 - c) + (\rho/2) \|A_1 x_1 + A_2 x_2 - c\|_2^2    (6)

where λ ∈ R^M is the dual variable and ρ > 0 is the augmented Lagrangian parameter. To simplify the formulas, the process (3)–(5), called the unscaled form of ADMM, can be transformed into an equivalent scaled form [4]:

Augmented Lagrangians:

L_{\rho 1}(x_1, x_2, u) = f_1(x_1) + \frac{\rho}{2} \|A_1 x_1 + A_2 x_2 - c - u\|_2^2    (7)
L_{\rho 2}(x_1, x_2, u) = f_2(x_2) + \frac{\rho}{2} \|A_1 x_1 + A_2 x_2 - c - u\|_2^2    (8)

Iteration steps:

x_1^{k+1} = \arg\min_{x_1} L_{\rho 1}(x_1, x_2^k, u^k)    (9)
x_2^{k+1} = \arg\min_{x_2} L_{\rho 2}(x_1^{k+1}, x_2, u^k)    (10)
u^{k+1} = u^k - (A_1 x_1^{k+1} + A_2 x_2^{k+1} - c)    (11)

where u = -\lambda/\rho is the scaled dual variable. There are two important assumptions required for implementing ADMM [4]:
• The functions f_1(x_1) and f_2(x_2) should be proper, closed, and convex.
• The augmented Lagrangian L_\rho has a saddle point.

If these two assumptions are satisfied, the ADMM iterations converge to the following results:

\lim_{k \to \infty} \begin{bmatrix} A_1 x_1^k + A_2 x_2^k - c \\ f_1(x_1^k) + f_2(x_2^k) \\ \lambda^k \end{bmatrix} = \begin{bmatrix} 0 \\ P^* \\ \lambda^* \end{bmatrix}    (12)

where P^* is the optimal value of the objective function in Problem (2) and \lambda^* is the optimal value of the dual variables.
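As a concrete illustration of the two-block iterations (3)–(5), the following minimal NumPy sketch applies unscaled ADMM to a toy equality-constrained quadratic problem with A_1 = A_2 = I; the toy objective, parameter values, and function names are assumptions made for illustration only and are not taken from the paper.

```python
# Minimal sketch of the unscaled two-block ADMM (3)-(5) on an assumed toy problem:
#   min 0.5*||x1 - a||^2 + 0.5*||x2 - b||^2   s.t.  x1 + x2 = c   (A1 = A2 = I)
# Each x-update then has a closed form.
import numpy as np

def two_block_admm(a, b, c, rho=1.0, iters=100):
    x1 = np.zeros_like(a)
    x2 = np.zeros_like(b)
    lam = np.zeros_like(c)            # dual variable lambda
    for _ in range(iters):
        # x1-update: argmin_x1 L_rho(x1, x2^k, lambda^k), cf. (3)
        x1 = (a - lam + rho * (c - x2)) / (1.0 + rho)
        # x2-update: argmin_x2 L_rho(x1^{k+1}, x2, lambda^k), cf. (4)
        x2 = (b - lam + rho * (c - x1)) / (1.0 + rho)
        # dual ascent step, cf. (5)
        lam = lam + rho * (x1 + x2 - c)
    return x1, x2, lam

if __name__ == "__main__":
    a, b, c = np.array([1.0]), np.array([3.0]), np.array([2.0])
    x1, x2, lam = two_block_admm(a, b, c)
    print(x1, x2, x1 + x2 - c)        # constraint residual should be near zero
```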

B. Multi-Block ADMM

Apparently, problem (2) is a special case of problem (1) with N = 2. In the original 2-block ADMM (steps (9)–(11)), each block is updated by its own minimization within every iteration, which suggests that extending ADMM to solve problem (1) for any N (N ≥ 2) is possible. The procedure of ADMM for solving problem (1) is called multi-block ADMM. According to the introduction in [5], [9], there are three fundamental types of multi-block ADMM: Variable Splitting ADMM (VS-ADMM), Gauss-Seidel ADMM (GS-ADMM), and Jacobian ADMM (J-ADMM). Variable splitting ADMM will not be discussed in this paper because it is not effective for the large-N case [9]. The general forms of GS-ADMM and J-ADMM are introduced as follows.

a) Gauss-Seidel ADMM:

Augmented Lagrangian:

L_\rho(x_1, \dots, x_i, \dots, x_N, u) = f(x_i) + \frac{\rho}{2} \Big\| \sum_{j<i} A_j x_j + A_i x_i + \sum_{j>i} A_j x_j - c - u \Big\|_2^2    (13)

Iteration process:

x_i^{k+1} = \arg\min_{x_i} L_\rho(x_1^{k+1}, \dots, x_{i-1}^{k+1}, x_i, x_{i+1}^k, \dots, x_N^k, u^k)    (14)
u^{k+1} = u^k - \Big( \sum_{i=1}^{N} A_i x_i^{k+1} - c \Big)    (15)

GS-ADMM is a very natural extension of the original ADMM [9]. It simply increases the number of blocks in the process (9)–(11) without any obvious structural change. The efficiency of this method has been verified in some practical problems (e.g., [10], [11]). However, it cannot be directly implemented as a distributed parallel optimization algorithm due to two disadvantages [9]:
• Its convergence is not guaranteed when N ≥ 3.
• The updating process has to be conducted one agent after another, so it is not parallel.

b) Jacobian ADMM: In Jacobian ADMM, the augmented Lagrangian is

L_\rho(x_1, \dots, x_i, \dots, x_N, u) = f(x_i) + \frac{\rho}{2} \Big\| A_i x_i + \sum_{j \neq i} A_j x_j - c - u \Big\|_2^2    (16)

And the iteration process is as follows.

x_i^{k+1} = \arg\min_{x_i} L_\rho(x_1^k, \dots, x_{i-1}^k, x_i, x_{i+1}^k, \dots, x_N^k, u^k)    (17)
u^{k+1} = u^k - \Big( \sum_{i=1}^{N} A_i x_i^{k+1} - c \Big)    (18)

Apparently, the update process of J-ADMM is parallel, which avoids the second disadvantage of GS-ADMM. However, it may diverge in many general cases [9], even for the N = 2 case. Thus, for both GS-ADMM and J-ADMM, modifications are necessary before they can be implemented for power network optimization.

III. FEASIBLE ADMM FORMS FOR PARALLEL COMPUTING

In order to make ADMM-based optimization algorithms achieve parallel updating and ensure convergence, GS-ADMM and J-ADMM are extended to Consensus ADMM (C-ADMM) and Proximal Jacobian ADMM (PJ-ADMM), respectively.

A. Consensus ADMM

Consider a special case of problem (1):

\min \sum_{i=1}^{N} f_i(x_i)    (19)

\text{s.t.} \ x_i - z = 0    (20)


where z is a common global variable for all x_i. This problem is called the global consensus problem [4]. If we use GS-ADMM to solve it, we obtain the following process [4]:

Augmented Lagrangian:

L_\rho(x_1, \dots, x_i, \dots, x_N, z) = \sum_{i=1}^{N} \Big( f_i(x_i) - \lambda_i^T (x_i - z) + \frac{\rho}{2} \|x_i - z\|_2^2 \Big)    (21)

Iteration process:

x_i^{k+1} = \arg\min_{x_i} \Big\{ f_i(x_i) - (\lambda_i^k)^T (x_i - z^k) + \frac{\rho}{2} \|x_i - z^k\|_2^2 \Big\}
z^{k+1} = \frac{1}{N} \sum_{i=1}^{N} \big( x_i^{k+1} - (1/\rho) \lambda_i^k \big)
\lambda_i^{k+1} = \lambda_i^k - \rho (x_i^{k+1} - z^{k+1})    (22)

where \lambda_i is the dual variable in area i for the equality constraint (20). Since \sum_{i=1}^{N} \lambda_i = 0 after k = 1, as shown in [4], the process (22) can be simplified to:

x_i^{k+1} = \arg\min_{x_i} \Big( f_i(x_i) - (\lambda_i^k)^T (x_i - \bar{x}^k) + \frac{\rho}{2} \|x_i - \bar{x}^k\|_2^2 \Big)
\lambda_i^{k+1} = \lambda_i^k - \rho (x_i^{k+1} - \bar{x}^{k+1})    (23)

where \bar{x}^k = (1/N) \sum_{i=1}^{N} x_i^k. Through the above steps, it can be observed that C-ADMM is essentially GS-ADMM (in unscaled form), but its update process can be performed in a parallel manner, since the update of z depends only on the x_i. Its convergence is guaranteed for any ρ > 0 under Assumptions 1 and 2 [12].
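The simplified consensus updates (23) can be sketched on a toy global consensus problem as follows; the quadratic local objectives, coefficients, and names are illustrative assumptions rather than data from the paper, but the x_i / averaging / λ_i update structure follows (23).

```python
# Minimal NumPy sketch of the simplified C-ADMM updates (23) on an assumed toy
# global consensus problem: min sum_i 0.5*a_i*(x_i - b_i)^2  s.t. x_i = z for all i.
# For these quadratic f_i, each local update has a closed form.
import numpy as np

def consensus_admm(a, b, rho=1.0, iters=200):
    N = len(a)
    x = np.zeros(N)                   # local copies x_i
    lam = np.zeros(N)                 # local dual variables lambda_i
    xbar = x.mean()
    for _ in range(iters):
        # local x_i-updates (fully parallel across areas)
        x = (a * b + lam + rho * xbar) / (a + rho)
        xbar = x.mean()               # averaging step that replaces the z-update
        # dual updates, following the sign convention of (23)
        lam = lam - rho * (x - xbar)
    return x, xbar

if __name__ == "__main__":
    a = np.array([1.0, 2.0, 4.0])     # assumed local curvatures
    b = np.array([3.0, 0.0, 1.0])     # assumed local targets
    x, xbar = consensus_admm(a, b)
    print(xbar, (a * b).sum() / a.sum())   # consensus value vs. analytic optimum
```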

B. Proximal Jacobian ADMM

To improve the convergence of J-ADMM, PJ-ADMM is derived in [9]. Its update procedure is as follows:

Augmented Lagrangian:

L_\rho(x_1, \dots, x_i, \dots, x_N, u) = f(x_i) + \frac{\rho}{2} \Big\| A_i x_i + \sum_{j \neq i} A_j x_j - c - u \Big\|_2^2 + \frac{1}{2} \|x_i - x_i^k\|_{P_{x_i}}^2    (24)

Iteration process:

x_i^{k+1} = \arg\min_{x_i} L_\rho(x_1^k, \dots, x_{i-1}^k, x_i, x_{i+1}^k, \dots, x_N^k, u^k)    (25)
u^{k+1} = u^k - \gamma \Big( \sum_{i=1}^{N} A_i x_i^{k+1} - c \Big)    (26)

where \gamma > 0 is a damping parameter for the update of u, \frac{1}{2}\|x_i - x_i^k\|_{P_{x_i}}^2 is a proximal term with \|x\|_{P_{x_i}}^2 = x^T P_{x_i} x, and P_{x_i} is a symmetric positive semi-definite matrix. Sufficient conditions for convergence are:

P_{x_i} > \rho \Big( \frac{N}{2-\gamma} - 1 \Big) A_i^T A_i    (27)
0 < \gamma < 2    (28)

The update process of PJ-ADMM is parallel. Moreover, its convergence is guaranteed if conditions (27) and (28) are satisfied. In the next sections, examples are given to show the performance of C-ADMM and PJ-ADMM on some simple but representative power system optimization problems.
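The following NumPy sketch illustrates the PJ-ADMM steps (24)–(26) on a toy equality-constrained problem with A_i = I; the objective, the parameter choices, and the closed-form block update are assumptions for illustration only, with the proximal weight chosen to satisfy condition (27).

```python
# Minimal NumPy sketch of PJ-ADMM (24)-(26) on an assumed toy problem:
#   min sum_i 0.5*||x_i - b_i||^2   s.t.  sum_i x_i = c,   with A_i = I,
# and the proximal matrix P_xi = tau * I chosen so that (27) holds.
import numpy as np

def pj_admm(b_list, c, rho=1.0, gamma=1.0, iters=300):
    N = len(b_list)
    tau = 1.1 * rho * (N / (2.0 - gamma) - 1.0)   # satisfies (27) with a margin
    x = [np.zeros_like(c) for _ in range(N)]
    u = np.zeros_like(c)                           # scaled dual variable
    for _ in range(iters):
        s = sum(x)                                 # sum of all blocks at iterate k
        x_new = []
        for i in range(N):
            s_minus_i = s - x[i]                   # Jacobian update: neighbors stay at k
            # closed-form block minimization of (25) for this quadratic objective
            xi = (b_list[i] + rho * (c + u - s_minus_i) + tau * x[i]) / (1.0 + rho + tau)
            x_new.append(xi)
        x = x_new
        u = u - gamma * (sum(x) - c)               # damped dual update (26)
    return x, u

if __name__ == "__main__":
    b_list = [np.array([1.0]), np.array([2.0]), np.array([5.0])]
    c = np.array([4.0])
    x, u = pj_admm(b_list, c)
    print(np.concatenate(x), sum(x) - c)           # constraint residual near zero
```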

IV. ADMM FOR DC OPF

Consider a three-area system connected through a network. Each area has a generator and a load. Denote N_i = {A, B, C} as the index set of areas and n_a = {1, 2, 3} as the index set of buses; any variable on bus a in area i is x_{i,a}, P_g is the output power of a generator, and θ is the voltage angle. The DC OPF problem is as follows.

\min \ C_A(P_{gA}) + C_B(P_{gB}) + C_C(P_{gC})    (29a)

\text{s.t.} \ P_{g\,i,a} - P_{L\,i,a} = \sum_{b=1}^{3} (-B_{ab}) \theta_{i,b}    (29b)

i \in N_i, \quad a, b \in n_a    (29c)

where P_{g i,a} and P_{L i,a} are the generation and load at bus a in area i, and B_{ab} is an element of the matrix B, which is the imaginary part of the admittance matrix. In this case, B is defined as follows, according to Fig. 1.

B = \begin{bmatrix} -10 & 5 & 5 \\ 5 & -15 & 10 \\ 5 & 10 & -15 \end{bmatrix}    (30)

Fig. 1. A meshed network.

Denote j ∈ N_i as a neighbor area of i. Based on the fact that any local copy x_{i,a} is the same as the system variable x_a, extra constraints x_{i,a} = x_a or x_{i,a} = x_{j,a} can be integrated into (29a).

A. Consensus ADMM

Problem (29a) for the meshed network is solved by consensus ADMM with ρ = 20. Denote all θ_a collectively as the global variable z. For each area i, θ_i = z is the constraint. The updating procedures for z, θ_i, P_{gi}, and λ_i are given as follows. Note that λ_i is the dual variable vector related to the θ_i = z constraint; it is not the same as the LMP price vector.

P_{gi}^{k+1}, \theta_i^{k+1} = \arg\min \Big\{ C_i(P_{gi}) + (\lambda_i^k)^T (\theta_i - z^k) + \frac{\rho}{2} \|\theta_i - z^k\|^2, \ \text{s.t. (29b)} \Big\}
z^{k+1} = \frac{1}{3} \sum_i \theta_i^{k+1}
\lambda_i^{k+1} = \lambda_i^k + \rho (\theta_i^{k+1} - z^{k+1})
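A hedged cvxpy sketch of one area's C-ADMM subproblem for this DC OPF is given below; the quadratic cost, the load vector, and the generation limits are illustrative placeholders (the paper does not list them), and only the θ_i / z / λ_i update structure follows the procedure above.

```python
# Hedged sketch of one area's C-ADMM subproblem for the three-area DC OPF.
# Cost, loads, and generation limits are placeholder assumptions; rho = 20 and
# the B matrix of (30) come from the text.
import numpy as np
import cvxpy as cp

B = np.array([[-10.0, 5.0, 5.0],
              [5.0, -15.0, 10.0],
              [5.0, 10.0, -15.0]])        # B matrix from (30)
rho = 20.0
load = np.array([1.0, 2.0, 1.0])          # placeholder loads
pmax = np.array([10.0, 0.0, 0.0])         # placeholder: generator only at bus 1

def solve_area_subproblem(z_k, lam_k):
    """One area's parallel update: local cost plus the consensus penalty."""
    theta = cp.Variable(3)                # local copy of all bus angles
    pg = cp.Variable(3, nonneg=True)      # local generation per bus
    constraints = [pg - load == -B @ theta,   # DC power balance, cf. (29b)
                   pg <= pmax]
    obj = (cp.sum_squares(pg)                         # placeholder cost C_i
           + lam_k @ (theta - z_k)
           + (rho / 2) * cp.sum_squares(theta - z_k))
    cp.Problem(cp.Minimize(obj), constraints).solve()
    return theta.value, pg.value

# One outer C-ADMM step for a single (hypothetical) area; after all areas solve
# in parallel, z^{k+1} is the average of the theta_i, and
# lambda_i^{k+1} = lambda_i^k + rho * (theta_i^{k+1} - z^{k+1}).
theta_A, pg_A = solve_area_subproblem(np.zeros(3), np.zeros(3))
print(theta_A, pg_A)
```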

Fig. 2 gives the iterative results for the C-ADMM method, and Fig. 3 shows the information flow of C-ADMM in this case.

Fig. 2. Generator power at each bus and total cost solved by the C-ADMM method for the three-area DC OPF.

Fig. 3. Information flow for the three-area DC OPF, viewed by the master data collector.

B. Proximal Jacobian ADMM

Define ij as the index set of variables in the overlap area shared by areas i and j; e.g., in Fig. 1, area A shares θ_1, θ_2 with area B, so θ_{A,AB} = θ_{A,(1,2)}, θ_{B,BA} = θ_{B,(1,2)}, and θ_{A,AB} = θ_{B,BA}. In this case, each overlap area is shared by only two areas, so we have u_{ij} = u_{i,ij} = u_{j,ji} [13]. In order to ensure convergence, we set γ = 1.5 and P_{x_i} = 1.1ρ(N/(2−γ) − 1)A_i^T A_i (where A_i is an identity matrix). The PJ-ADMM procedure for (29a) is:

Iteration process:

P_{gi}^{k+1}, \theta_i^{k+1} = \arg\min \Big\{ C_i(P_{gi}) + \frac{\rho}{2} \sum_{j \in N_i} \|\theta_{i,ij} - \theta_{j,ji}^k - u_{i,ij}^k\|_2^2 + \frac{1}{2} \|\theta_i - \theta_i^k\|_{P_{x_i}}^2, \ \text{s.t. (29b)} \Big\}    (31)

u_{i,ij}^{k+1} = u_{i,ij}^k - \gamma (\theta_{i,ij}^{k+1} - \theta_{j,ji}^{k+1})    (32)

The information flow is shown in Fig. 4, and the simulation results are given in Fig. 5.
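For comparison, the following cvxpy sketch shows area A's PJ-ADMM subproblem (31); the shared-bus index sets, cost, load, and limits are placeholder assumptions consistent with Fig. 1, and only the boundary-coupling and proximal structure of (31)–(32) is the point.

```python
# Hedged sketch of area A's PJ-ADMM subproblem (31) for the three-area DC OPF.
# Shared-bus sets, cost, load, and limits are placeholder assumptions.
import numpy as np
import cvxpy as cp

B = np.array([[-10.0, 5.0, 5.0],
              [5.0, -15.0, 10.0],
              [5.0, 10.0, -15.0]])
rho, gamma, N = 20.0, 1.5, 3
tau = 1.1 * rho * (N / (2.0 - gamma) - 1.0)   # P_xi = tau * I, cf. the text
AB, AC = [0, 1], [0, 2]                       # assumed shared buses (0-indexed)
load_A = np.array([1.0, 2.0, 1.0])            # placeholder load seen by area A
pmax_A = np.array([10.0, 0.0, 0.0])           # placeholder generator at bus 1

def area_A_update(theta_A_k, theta_B_k, theta_C_k, u_AB_k, u_AC_k):
    theta = cp.Variable(3)
    pg = cp.Variable(3, nonneg=True)
    constraints = [pg - load_A == -B @ theta, pg <= pmax_A]   # cf. (29b)
    obj = (cp.sum_squares(pg)                                  # placeholder C_A
           + (rho / 2) * cp.sum_squares(theta[AB] - theta_B_k[AB] - u_AB_k)
           + (rho / 2) * cp.sum_squares(theta[AC] - theta_C_k[AC] - u_AC_k)
           + (tau / 2) * cp.sum_squares(theta - theta_A_k))    # proximal term
    cp.Problem(cp.Minimize(obj), constraints).solve()
    return theta.value

theta_A = area_A_update(np.zeros(3), np.zeros(3), np.zeros(3),
                        np.zeros(2), np.zeros(2))
print(theta_A)
# Dual update (32) for the A-B boundary, after both areas solve in parallel:
#   u_AB^{k+1} = u_AB^k - gamma * (theta_A^{k+1}[AB] - theta_B^{k+1}[AB])
```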

Fig. 4. Information exchange viewed by Area A.

Fig. 5. Generator power at each bus and total cost solved by the PJ-ADMM method for the three-area DC OPF.

V. ADMM FOR AC OPF WITH SOCP RELAXATION

AC OPF is a nonconvex optimization problem with an objective function that minimizes the cost of generation or power loss, equality constraints representing the relationship of bus power injections versus bus voltage magnitudes (notated by a vector V ∈ R^n, where n is the number of buses in the system) and phase angles (θ ∈ R^n), and inequality constraints representing voltage limits, line flow limits, generation limits, etc. The decision variables of AC OPF are the voltage magnitudes V, the phase angles θ, and the generators' real and reactive power outputs P_g and Q_g. Nonconvexity is mainly due to the nonlinear equality constraints related to power injections.

For AC OPF with SOCP relaxation [14], a new set of variables c_{ij} and s_{ij} is introduced to replace the voltage phasors:

c_{ii} = V_i^2, \quad c_{ij} = V_i V_j \cos(\theta_i - \theta_j)
s_{ii} = 0, \quad s_{ij} = V_i V_j \sin(\theta_i - \theta_j)    (33)

where c_{ij} = c_{ji} and s_{ij} = -s_{ji}. The following relationship can be obtained:

c_{ij}^2 + s_{ij}^2 = V_i^2 V_j^2 = c_{ii} c_{jj}    (34)

The AC OPF problem's power injection constraints are now linear in terms of c_{ij} and s_{ij}:

\sum_{k \in \text{Bus } i} (P_{gk} - P_{Lk}) = \sum_{j=1}^{n} (G_{ij} c_{ij} + B_{ij} s_{ij})
\sum_{k \in \text{Bus } i} (Q_{gk} - Q_{Lk}) = \sum_{j=1}^{n} (G_{ij} s_{ij} - B_{ij} c_{ij})    (35)

where P_{gk} and Q_{gk} denote the real and reactive power from the generators connected to Bus i, and P_{Lk} and Q_{Lk} denote the total real and reactive power consumed by the loads at Bus i.

In addition, (34) is relaxed as a second-order cone, and hence the optimization problem is referred to as AC OPF with SOCP relaxation:

c_{ij}^2 + s_{ij}^2 \le c_{ii} c_{jj}    (36)
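In a convex modeling tool, the relaxed constraint (36) is expressed as a rotated second-order cone. The short cvxpy sketch below shows one standard way to write it; the variable names mirror c_ii, c_jj, c_ij, s_ij, and the feasibility check at the end is only illustrative.

```python
# Hedged sketch: expressing the SOCP relaxation (36) as a rotated second-order
# cone. The reformulation ||(2*c_ij, 2*s_ij, c_ii - c_jj)|| <= c_ii + c_jj is
# equivalent to c_ij^2 + s_ij^2 <= c_ii * c_jj when c_ii, c_jj >= 0.
import cvxpy as cp

c_ii, c_jj = cp.Variable(nonneg=True), cp.Variable(nonneg=True)
c_ij, s_ij = cp.Variable(), cp.Variable()

soc_relaxation = cp.SOC(c_ii + c_jj,
                        cp.hstack([2 * c_ij, 2 * s_ij, c_ii - c_jj]))

# Illustrative feasibility check: 0.9^2 + 0.1^2 <= 1 * 1, so this is feasible.
prob = cp.Problem(cp.Minimize(0),
                  [soc_relaxation, c_ii == 1.0, c_jj == 1.0,
                   c_ij == 0.9, s_ij == 0.1])
prob.solve()
print(prob.status)
```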

For radial systems, the SOCP relaxation is exact in the majority of cases. For the system in Fig. 6, the AC OPF problem with SOCP relaxation is formulated as follows.

\min \sum_{k=1}^{4} C_k(P_{gk})    (37a)

\text{s.t.} \ \sum_{k \in \text{bus } i} (P_{gk} - P_{Lk}) = \sum_{j=1}^{2} (G_{ij} c_{ij} + B_{ij} s_{ij})
\sum_{k \in \text{bus } i} (Q_{gk} - Q_{Lk}) = \sum_{j=1}^{2} (G_{ij} s_{ij} - B_{ij} c_{ij})
P_{gk}^{\min} \le P_{gk} \le P_{gk}^{\max}, \quad Q_{gk}^{\min} \le Q_{gk} \le Q_{gk}^{\max}
\underline{V}_1^2 \le c_{11} \le \overline{V}_1^2, \quad \underline{V}_2^2 \le c_{22} \le \overline{V}_2^2    (37b)

c_{12}^2 + s_{12}^2 \le c_{11} c_{22}, \quad c_{21} = c_{12}, \quad s_{12} = -s_{21}    (37c)
k = 1, \dots, 4, \quad i = 1, 2    (37d)

The costs are assumed to be linear: 7P_{g1}, 8P_{g2}, 9P_{g3}, and 10P_{g4}. G_{ij} + jB_{ij} is the (i, j) element of the admittance matrix of the system, and in this case:

B = \begin{bmatrix} -1.9231 & 1.9231 \\ 1.9231 & -1.9231 \end{bmatrix}, \quad G = \begin{bmatrix} 0.3846 & -0.3846 \\ -0.3846 & 0.3846 \end{bmatrix}

A. Consensus ADMM

To solve problem (37a) via C-ADMM, define local copies of all c_{ij} and s_{ij} in the following manner:

c^i = \begin{bmatrix} c_{11}^i \\ c_{22}^i \\ c_{12}^i \\ s_{12}^i \end{bmatrix}, \quad i \in \{A, B\}    (38)

Fig. 6. Two-area four-machine system.

and set all c^i = z, where z is the global variable. Then the C-ADMM procedure is:

P_{gl}^{k+1}, c^{i,k+1} = \arg\min \Big\{ \sum_l C_l(P_{gl}) - (\lambda^{i,k})^T (c^i - z^k) + \frac{\rho}{2} \|c^i - z^k\|_2^2, \ \text{s.t. other constraints in Area } i \Big\}    (39)

z^{k+1} = \frac{c^{A,k+1} + c^{B,k+1}}{2}    (40)

\lambda^{i,k+1} = \lambda^{i,k} - \rho (c^{i,k+1} - z^{k+1})    (41)

where ρ = 20. Fig. 7 gives the simulation results; P_{g2} and P_{g4} are very close to 0. The information flow is shown in Fig. 8.
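A hedged cvxpy sketch of area A's C-ADMM subproblem (39) is shown below; G, B, ρ = 20, and the linear generator costs come from the text, while the load, voltage, and generation limits, as well as the assignment of generators 1 and 3 to Bus 1, are assumptions introduced only for illustration.

```python
# Hedged sketch of area A's C-ADMM subproblem (39) for the two-bus SOCP-relaxed
# AC OPF. G, B, rho, and the linear costs follow the text; loads, limits, and
# the generator-to-bus assignment are placeholder assumptions.
import numpy as np
import cvxpy as cp

G = np.array([[0.3846, -0.3846], [-0.3846, 0.3846]])
B = np.array([[-1.9231, 1.9231], [1.9231, -1.9231]])
rho = 20.0
PL1, QL1 = 2.0, 0.5              # placeholder load at Bus 1
vmin2, vmax2 = 0.9**2, 1.1**2    # placeholder voltage-squared limits

def area_A_update(z_k, lam_k):
    c = cp.Variable(4)           # local copy of c^A = [c11, c22, c12, s12]
    c11, c22, c12, s12 = c[0], c[1], c[2], c[3]
    pg = cp.Variable(2, nonneg=True)   # assumed Pg1, Pg3 at Bus 1
    qg = cp.Variable(2)                # assumed Qg1, Qg3 at Bus 1
    constraints = [
        # Bus-1 power balance in the (c, s) variables, cf. (35) with s11 = 0
        cp.sum(pg) - PL1 == G[0, 0] * c11 + G[0, 1] * c12 + B[0, 1] * s12,
        cp.sum(qg) - QL1 == G[0, 1] * s12 - B[0, 0] * c11 - B[0, 1] * c12,
        # SOCP relaxation (37c) and placeholder voltage / generation limits
        cp.SOC(c11 + c22, cp.hstack([2 * c12, 2 * s12, c11 - c22])),
        c11 >= vmin2, c11 <= vmax2,
        pg <= 10.0, qg >= -5.0, qg <= 5.0,
    ]
    cost = 7.0 * pg[0] + 9.0 * pg[1]   # linear costs of generators 1 and 3
    obj = cost - lam_k @ (c - z_k) + (rho / 2) * cp.sum_squares(c - z_k)
    cp.Problem(cp.Minimize(obj), constraints).solve()
    return c.value, pg.value

c_A, pg_A = area_A_update(np.zeros(4), np.zeros(4))
print(c_A, pg_A)
```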

Fig. 7. Generator power at each bus and total cost solved by the C-ADMM method for the two-area AC OPF.

B. Proximal Jacobian ADMM

Fig. 8. Information flow for the C-ADMM in the two-area system AC OPF, viewed by the master data collector.

To implement PJ-ADMM, configure ρ = 20 and γ = 0.5, set P_{x_i} = 1.1ρ(N/(2−γ) − 1)A_i^T A_i, and denote j as the neighbor area of i. Because the overlapping region is shared by only two areas in this case, we can use u_i = u_j = u and express the update process as:

P_{gl}^{k+1}, c^{i,k+1} = \arg\min \Big\{ \sum_l C_l(P_{gl}) + \frac{\rho}{2} \|c^i - c^{j,k} - u\|_2^2 + \frac{1}{2} \|c^i - c^{i,k}\|_{P_{x_i}}^2, \ \text{s.t. other constraints in Area } i \Big\}    (42)

u_i^{k+1} = u_i^k - \gamma (c^{i,k+1} - c^{j,k+1})    (43)

The simulation results and the data flow diagram are shown in Fig. 9 and Fig. 10, respectively; P_{g2} and P_{g4} are very close to 0.

Fig. 9. Generator power at each bus and total cost solved by the PJ-ADMM method for the two-area AC OPF.

Fig. 10. Information flow for the PJ-ADMM in the two-area system AC OPF, viewed by Area A.

VI. CONCLUSION

In this paper, we have reviewed several forms of ADMM, especially C-ADMM and PJ-ADMM, which are suitable for implementation as distributed parallel algorithms in power networks. The problem formulation and implementation process for these two algorithms are described based on a three-area DC OPF problem and a two-area AC OPF problem. Numerical simulation results for these two example problems are also presented.

REFERENCES

[1] Y. Wang, L. Wu, and S. Wang, "A fully-decentralized consensus-based ADMM approach for DC-OPF with demand response."

[2] T. Erseghe, "Distributed optimal power flow using ADMM," IEEE Transactions on Power Systems, vol. 29, no. 5, pp. 2370–2380, 2014.

[3] H. Zhu and G. B. Giannakis, "Power system nonlinear state estimation using distributed semidefinite programming," IEEE Journal of Selected Topics in Signal Processing, vol. 8, no. 6, pp. 1039–1050, Dec. 2014.

[4] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.

[5] L. Liu and Z. Han, "Multi-block ADMM for big data optimization in smart grid," in Computing, Networking and Communications (ICNC), 2015 International Conference on. IEEE, 2015, pp. 556–561.

[6] U. Mosca, "A novel distributed approach for optimal power flow problem in smart grids," 2013.

[7] D. Gabay and B. Mercier, "A dual algorithm for the solution of nonlinear variational problems via finite element approximation," Computers & Mathematics with Applications, vol. 2, no. 1, pp. 17–40, 1976.

[8] R. Glowinski and A. Marroco, "Sur l'approximation, par elements finis d'ordre un, et la resolution, par penalisation-dualite d'une classe de problemes de Dirichlet non lineaires," Revue francaise d'automatique, informatique, recherche operationnelle. Analyse numerique, vol. 9, no. 2, pp. 41–76, 1975.

[9] W. Deng, M.-J. Lai, Z. Peng, and W. Yin, "Parallel multi-block ADMM with o(1/k) convergence," arXiv preprint arXiv:1312.3040, 2013.

[10] M. Tao and X. Yuan, "Recovering low-rank and sparse components of matrices from incomplete and noisy observations," SIAM Journal on Optimization, vol. 21, no. 1, pp. 57–81, 2011.

[11] Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma, "RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2233–2246, 2012.

[12] T.-H. Chang, M. Hong, and X. Wang, "Multi-agent distributed large-scale optimization by inexact consensus alternating direction method of multipliers," in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 6137–6141.

[13] W. Zheng, W. Wu, B. Zhang, H. Sun, and Y. Liu, "A fully distributed reactive power optimization and control method for active distribution networks," IEEE Transactions on Smart Grid, vol. 7, no. 2, pp. 1021–1033, Mar. 2016.

[14] R. A. Jabr, "Radial distribution load flow using conic programming," IEEE Transactions on Power Systems, vol. 21, no. 3, pp. 1458–1459, 2006.