Approximating the Cut-Norm

Hubert Chan

TRANSCRIPT

Page 1: Approximating the Cut-Norm

Approximating the Cut-Norm

Hubert Chan

Page 2: Approximating the Cut-Norm

• “Approximating the Cut-Norm via Grothendieck’s Inequality”

Noga Alon, Assaf Naor

appearing in STOC ‘04

Page 3: Approximating the Cut-Norm

Problem Definition

Given an $m \times n$ real matrix $A = (a_{ij})$, find $x_i, y_j \in \{-1, 1\}$ that maximize the sum $\sum_{i,j} a_{ij} x_i y_j$.

[Figure: an example matrix of $+$ and $-$ entries illustrating a sign pattern.]

We call the maximized sum the cut-norm of $A$, denoted by $\|A\|_C$.

Page 4: Approximating the Cut-Norm

Main Result

• The problem is MAX-SNP hard.

• There is a randomized polynomial-time algorithm that gives a 0.56-approximation in expectation.

(For a maximization problem, the approximation ratio is always at most 1.)

The authors also showed a deterministic algorithm that gives a 0.03-approximation.

De-randomization follows the paper by Mahajan and Ramesh.

Page 5: Approximating the Cut-Norm

Road Map

• Motivation

• Hardness Result

• General Approach

• Outline of Algorithm

• Conclusion

Page 6: Approximating the Cut-Norm

Motivation

• Inspired by the MAX-CUT problem

Frieze and Kannan proposed a decomposition scheme for solving problems on dense graphs.

• Estimating the cut-norm of a matrix is a key step in that decomposition scheme.

Page 7: Approximating the Cut-Norm

Comparison with Previous Results

• Previously known algorithms compute the cut-norm with additive error $\epsilon mn$.

• This is good only for a matrix whose cut-norm is large (comparable to $mn$).

• The new algorithm approximates the cut-norm of every real matrix within a constant factor of 0.56, in expectation.

Page 8: Approximating the Cut-Norm

Road Map

• Motivation

• Hardness Result

• General Approach

• Outline of Algorithm

• Conclusion

Page 9: Approximating the Cut-Norm

MAX-SNP

A maximization problem is MAX-SNP hard if there exists $\epsilon > 0$ such that no polynomial-time algorithm can approximate the optimal solution within a factor of $1 - \epsilon$ (assuming P $\ne$ NP).

For example, there is a well-known polynomial algorithm for MAX-CUT that returns a cut with size at least 0.5 of the maximum cut.

However, unless P = NP, no polynomial-time algorithm achieves an approximation ratio better than 16/17.

Page 10: Approximating the Cut-Norm

MAX-CUT: Graph G = (V,E)

[Figure: the vertex set partitioned into W and V\W, with the cut edges crossing between the two sides.]

Page 11: Approximating the Cut-Norm

The problem is MAX-SNP hard

• Reduction from MAX-CUT

• Given a graph G = (V,E), construct a 2|E| × |V| matrix A: for each edge e = (u,v),

$$A_{(e,1),u} = \tfrac{1}{4},\quad A_{(e,1),v} = -\tfrac{1}{4},\quad A_{(e,2),u} = -\tfrac{1}{4},\quad A_{(e,2),v} = \tfrac{1}{4},$$

with all other entries of rows $(e,1)$ and $(e,2)$ equal to 0.

[Figure: rows $(e,1)$ and $(e,2)$ of $A$ restricted to columns $u$ and $v$, with entries $\pm 1/4$.]

Page 12: Approximating the Cut-Norm

MAX-CUT $\le \|A\|_C$

Suppose $(W, V \setminus W)$ forms a maximum cut. Set $y_j = 1$ if $j \in W$, and $y_j = -1$ otherwise.


For $e = (u,v)$ not in the max cut, there is no contribution no matter what $x_{e,1}$ and $x_{e,2}$ are.

For $e = (u,v)$ in the max cut, we can set $x_{e,1}$ and $x_{e,2}$ to give contribution 1.

Page 13: Approximating the Cut-Norm

MAX-CUT $\ge \|A\|_C$

Suppose some choice of $x_i$'s and $y_j$'s attains $\|A\|_C$. Set $W = \{j : y_j = 1\}$.


For $e = (u,v)$ not in the cut $(W, V\setminus W)$, there is no contribution no matter what $x_{e,1}$ and $x_{e,2}$ are.

For $e = (u,v)$ in the cut $(W, V\setminus W)$, the contribution from rows $(e,1)$ and $(e,2)$ is at most 1.

Page 14: Approximating the Cut-Norm

Road Map

• Motivation

• Hardness Result

• General Approach

• Outline of Algorithm

• Conclusion

Page 15: Approximating the Cut-Norm

Relaxation Schemes

• Recall the problem:

$$\|A\|_C = \max\Big\{\sum_{i,j} a_{ij} x_i y_j : x_i, y_j \in \{-1, 1\}\Big\}$$

• The objective function is not linear.

• Linear programming relaxation? We could introduce extra variables, but rounding might be tricky.

• How about a semidefinite program relaxation?

Page 16: Approximating the Cut-Norm

Semidefinite Program Relaxation

Replace variable $x_i$ with a vector $u_i$. Replace variable $y_j$ with a vector $v_j$. Replace multiplication with the dot product: $x_i y_j \to u_i \cdot v_j$.

$$\|A\|_{SDP} = \max \sum_{i,j} a_{ij}\, u_i \cdot v_j \quad\text{subject to}\quad u_i \cdot u_i = 1,\; v_j \cdot v_j = 1,$$

where the $u_i$ and $v_j$ are vectors in $(m+n)$-dimensional Euclidean space.
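To make the relaxation concrete, here is a minimal sketch of how this SDP could be solved in practice, written over the Gram matrix of the $m+n$ vectors (an illustration assuming the cvxpy and NumPy libraries; not code from the talk or the paper):

```python
# A minimal sketch (assumed, not the authors' code) of the SDP relaxation.
import numpy as np
import cvxpy as cp

def cut_norm_sdp(A):
    """Solve max sum_ij A[i,j] * (u_i . v_j) over unit vectors u_i, v_j,
    written as an SDP over the Gram matrix Z of the m+n vectors."""
    m, n = A.shape
    d = m + n
    Z = cp.Variable((d, d), PSD=True)      # Z[k,l] = w_k . w_l for the m+n vectors
    constraints = [cp.diag(Z) == 1]        # every vector is a unit vector
    # u_i corresponds to index i, v_j to index m + j
    objective = cp.Maximize(cp.sum(cp.multiply(A, Z[:m, m:])))
    prob = cp.Problem(objective, constraints)
    prob.solve()
    # Recover vectors u_i, v_j from a factorization of the Gram matrix
    w, V = np.linalg.eigh(Z.value)
    W = V @ np.diag(np.sqrt(np.clip(w, 0, None)))
    return prob.value, W[:m], W[m:]
```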

Page 17: Approximating the Cut-Norm

Remarks about the SDP

Note that $\|A\|_{SDP} \ge \|A\|_C$.

• Are $m+n$ dimensions sufficient?

Yes, since any $m+n$ vectors in a higher-dimensional Euclidean space lie in an $(m+n)$-dimensional subspace.

• Fact:

There exists an algorithm that, given $\epsilon > 0$, returns solution vectors $u_i$ and $v_j$ attaining value at least $\|A\|_{SDP} - \epsilon$, in time polynomial in the length of the input and the logarithm of $1/\epsilon$.

Page 18: Approximating the Cut-Norm

Are we done?

We need to convert the vectors back to integers in $\{-1, 1\}$!

General strategy:

1. Obtain optimal vectors $u_i$ and $v_j$ for the SDP.

2. Use some randomized procedure to reconstruct integer solutions $x_i, y_j \in \{-1, 1\}$ from the vectors.

3. Give a good expected bound: find some constant $\alpha > 0$ such that

$$E\Big[\sum_{i,j} a_{ij} x_i y_j\Big] \;\ge\; \alpha\,\|A\|_{SDP} \;\ge\; \alpha\,\|A\|_C.$$

Page 19: Approximating the Cut-Norm

Road Map

• Motivation

• Hardness Result

• General Approach

• Outline of Rounding Algorithm

• Conclusion

Page 20: Approximating the Cut-Norm

Random Hyperplane

Generate a random unit vector $z$. Set

$$x_i = \mathrm{sign}(u_i \cdot z), \qquad y_j = \mathrm{sign}(v_j \cdot z).$$

[Figure: a random hyperplane through the origin with normal $z$, separating the $+$ side from the $-$ side.]

Recall we need to show: $E[\sum_{i,j} a_{ij} x_i y_j] \ge \alpha \sum_{i,j} a_{ij}\, u_i \cdot v_j$ for some constant $\alpha > 0$.

Page 21: Approximating the Cut-Norm

Analyzing E[xy]

[Figure: unit vectors $u$ and $v$ at angle $\theta$, and a random hyperplane with normal $z$.]

Unit vectors $u$ and $v$ such that $\cos\theta = u \cdot v$.

A random unit vector $z$ determines a hyperplane.

$\Pr[u \text{ and } v \text{ are separated}] = \theta/\pi$.

Set $x = \mathrm{sign}(u \cdot z)$, $y = \mathrm{sign}(v \cdot z)$.

$$E[xy] = (1 - \theta/\pi) - \theta/\pi = 1 - 2\theta/\pi = \tfrac{2}{\pi}\big(\tfrac{\pi}{2} - \theta\big) = \tfrac{2}{\pi}\arcsin(u \cdot v).$$
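A quick Monte Carlo sanity check of this identity (an illustrative sketch, not part of the slides; by spherical symmetry a Gaussian direction induces the same signs as a uniformly random unit vector $z$):

```python
# Monte Carlo check (illustrative, not from the slides) of E[xy] = (2/pi) arcsin(u . v).
import numpy as np

rng = np.random.default_rng(0)
d = 5
u = rng.standard_normal(d); u /= np.linalg.norm(u)   # a fixed unit vector u
v = rng.standard_normal(d); v /= np.linalg.norm(v)   # a fixed unit vector v

# Gaussian directions give the same signs as uniformly random unit vectors z.
Z = rng.standard_normal((200_000, d))
xy = np.sign(Z @ u) * np.sign(Z @ v)                 # x = sign(u . z), y = sign(v . z)

print("empirical E[xy]:      ", xy.mean())
print("(2/pi) arcsin(u . v): ", 2 / np.pi * np.arcsin(u @ v))
```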

Page 22: Approximating the Cut-Norm

How do sine and arcsine relate?

For $t \in [-1, 1]$: $\;|t| \le |\arcsin t| \le \tfrac{\pi}{2}\,|t|$.

Is this good news?

Page 23: Approximating the Cut-Norm

Performance Guarantee?

$$E\Big[\sum_{i,j} a_{ij} x_i y_j\Big] = \sum_{i,j} a_{ij}\, E[x_i y_j] = \tfrac{2}{\pi} \sum_{i,j} a_{ij} \arcsin(u_i \cdot v_j).$$

• We have a term-by-term constant-factor approximation.

• Bad news: cancellation, because the terms can have different signs.

• Hence, we need a global approximation.

Page 24: Approximating the Cut-Norm

An Equivalent Way to Round Vectors


Generate standard, independent Gaussian random variables $r_1, r_2, \ldots, r_{m+n}$ and let $R = (r_1, r_2, \ldots, r_{m+n})$.

Set $x_i = \mathrm{sign}(u_i \cdot R)$, $y_j = \mathrm{sign}(v_j \cdot R)$.
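A minimal sketch of this rounding step (assumed helper names, not the authors' code):

```python
# A minimal sketch (assumed, not the authors' code) of the Gaussian rounding step.
import numpy as np

def round_vectors(U, V, rng=None):
    """U: m x d array of unit vectors u_i; V: n x d array of unit vectors v_j.
    Returns x in {-1,1}^m and y in {-1,1}^n via x_i = sign(u_i . R), y_j = sign(v_j . R)."""
    rng = rng or np.random.default_rng()
    R = rng.standard_normal(U.shape[1])   # multi-dimensional standard Gaussian vector R
    x = np.where(U @ R >= 0, 1, -1)
    y = np.where(V @ R >= 0, 1, -1)
    return x, y
```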

Page 25: Approximating the Cut-Norm

What we would like to see….

$$E[x_i y_j] = \tfrac{2}{\pi}\arcsin(u_i \cdot v_j) \;\overset{?}{=}\; c\,(u_i \cdot v_j) \quad\text{for some constant } c > 0.$$

This is impossible because arcsin is not a linear function.

Page 26: Approximating the Cut-Norm

What we can prove……

$$E[x_i y_j] = \tfrac{2}{\pi}\, E\big[\, u_i \cdot v_j + f_i(R)\, g_j(R)\,\big],$$

where $f_i$ is a function depending on $u_i$ and $g_j$ is a function depending on $v_j$.

Important property of $f_i$ and $g_j$:

$$E[f_i^2] = E[g_j^2] = \tfrac{\pi}{2} - 1 < 1.$$

Page 27: Approximating the Cut-Norm

Inner Product and E[f g]

We can write $E[f\,g] = E[f(R)\, g(R)] = \langle f, g \rangle$.

Compare with $u \cdot v = \sum_k u_k v_k$: the expectation $E[f g]$ plays the role of an inner product on functions of $R$, just as the dot product does on vectors.

Page 28: Approximating the Cut-Norm

Recall the SDP

Are (m+n)-dimensions sufficient?

Yes, since any m+n vectors in a higher dimensional Euclidean space lie on an (m+n)-dimensional subspace.

$$\|A\|_{SDP} = \max \sum_{i,j} a_{ij}\, u_i \cdot v_j \quad\text{subject to}\quad u_i \cdot u_i = 1,\; v_j \cdot v_j = 1,$$

where the $u_i$ and $v_j$ are vectors in $(m+n)$-dimensional Euclidean space.

Page 29: Approximating the Cut-Norm

Wait a minute… We need unit vectors!

Normalize: $\dfrac{f_i}{\sqrt{\pi/2 - 1}}$ and $\dfrac{g_j}{\sqrt{\pi/2 - 1}}$ are unit vectors with respect to the inner product $\langle f, g \rangle = E[fg]$. Hence

$$\Big|\sum_{i,j} a_{ij}\, \langle f_i, g_j \rangle\Big| \;\le\; \Big(\tfrac{\pi}{2} - 1\Big)\, \|A\|_{SDP}.$$

Page 30: Approximating the Cut-Norm

Constant factor approximation

$$E\Big[\sum_{i,j} a_{ij} x_i y_j\Big] = \tfrac{2}{\pi}\Big\{\sum_{i,j} a_{ij}\, u_i \cdot v_j + \sum_{i,j} a_{ij}\, E[f_i\, g_j]\Big\}$$

$$= \tfrac{2}{\pi}\Big\{\|A\|_{SDP} + \sum_{i,j} a_{ij}\, E[f_i\, g_j]\Big\}$$

$$\ge \tfrac{2}{\pi}\Big\{\|A\|_{SDP} - \Big(\tfrac{\pi}{2} - 1\Big)\|A\|_{SDP}\Big\}$$

$$= \Big(\tfrac{4}{\pi} - 1\Big)\|A\|_{SDP} \;\ge\; 0.273\, \|A\|_{SDP} \;\ge\; 0.273\, \|A\|_C.$$

Page 31: Approximating the Cut-Norm

What are functions f and g?

$$E[x_i y_j] = \tfrac{2}{\pi}\, E\big[\, u_i \cdot v_j + f_i(R)\, g_j(R)\,\big],$$

where

$$f_i(R) = u_i \cdot R - \sqrt{\tfrac{\pi}{2}}\;\mathrm{sign}(u_i \cdot R) \quad\text{and}\quad g_j(R) = v_j \cdot R - \sqrt{\tfrac{\pi}{2}}\;\mathrm{sign}(v_j \cdot R).$$
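A Monte Carlo check of this decomposition and of $E[f_i^2] = \pi/2 - 1$ (an illustrative sketch, not from the slides):

```python
# Monte Carlo check (illustrative, not from the slides) of
#   E[x y] = (2/pi) * E[ u.v + f(R) g(R) ]   and   E[f^2] = pi/2 - 1.
import numpy as np

rng = np.random.default_rng(1)
d = 4
u = rng.standard_normal(d); u /= np.linalg.norm(u)
v = rng.standard_normal(d); v /= np.linalg.norm(v)

R = rng.standard_normal((500_000, d))        # samples of the Gaussian vector R
s = np.sqrt(np.pi / 2)
f = R @ u - s * np.sign(R @ u)               # f(R) = u.R - sqrt(pi/2) sign(u.R)
g = R @ v - s * np.sign(R @ v)               # g(R) = v.R - sqrt(pi/2) sign(v.R)
xy = np.sign(R @ u) * np.sign(R @ v)

print(xy.mean(), 2 / np.pi * (u @ v + (f * g).mean()))   # the two sides should agree
print((f ** 2).mean(), np.pi / 2 - 1)                    # E[f^2] should be pi/2 - 1
```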

Page 32: Approximating the Cut-Norm

Properties of Gaussian Measure

$$E[(u \cdot R)(v \cdot R)] = E\Big[\Big(\sum_p u_p r_p\Big)\Big(\sum_q v_q r_q\Big)\Big] = \sum_{p,q} u_p v_q\, E[r_p r_q] = \sum_p u_p v_p = u \cdot v$$

(a) Each $r_p$ has mean 0 and variance 1, and the $r_p$ are independent, so $E[r_p r_q] = 1$ if $p = q$ and 0 otherwise.

(b) The multi-dimensional Gaussian distribution is spherically symmetric, which gives

$$E[(u \cdot R)\,\mathrm{sign}(v \cdot R)] = \sqrt{\tfrac{2}{\pi}}\; u \cdot v.$$

Page 33: Approximating the Cut-Norm

Recap

1. Solve for optimal vectors $u_i$ and $v_j$ for the SDP.

2. Generate a multi-dimensional Gaussian random vector $R$. Set $x_i = \mathrm{sign}(u_i \cdot R)$, $y_j = \mathrm{sign}(v_j \cdot R)$.

3. Relate $E[x_i y_j]$ to $u_i \cdot v_j$:

$$E[x_i y_j] = \tfrac{2}{\pi}\, E\big[\, u_i \cdot v_j + f_i(R)\, g_j(R)\,\big].$$

4. Use (1) $u_i$ and $v_j$ are optimal vectors, and (2) $E[f_i g_j]$ can be treated as an inner product, to conclude

$$E\Big[\sum_{i,j} a_{ij} x_i y_j\Big] \;\ge\; 0.273\, \|A\|_C.$$
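Putting the steps together, a sketch of the whole pipeline (it assumes the hypothetical `cut_norm_sdp` and `round_vectors` helpers sketched earlier; not the authors' implementation):

```python
# End-to-end sketch (illustrative): combine the SDP sketch and the rounding sketch above.
# Assumes cut_norm_sdp and round_vectors are defined as in the earlier snippets.
import numpy as np

def approximate_cut_norm(A, trials=200, rng=None):
    """Estimate E[sum_ij a_ij x_i y_j] by averaging the rounded objective
    over several Gaussian roundings of the optimal SDP vectors."""
    rng = rng or np.random.default_rng()
    sdp_value, U, V = cut_norm_sdp(A)      # step 1: solve the relaxation
    vals = []
    for _ in range(trials):                # step 2: round many times
        x, y = round_vectors(U, V, rng)
        vals.append(x @ A @ y)             # objective sum_ij a_ij x_i y_j
    est = float(np.mean(vals))
    # steps 3-4: the analysis guarantees the expectation is >= 0.273 * ||A||_SDP
    return est, sdp_value
```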

Page 34: Approximating the Cut-Norm

What we would like to see….

$$E[x_i y_j] = \tfrac{2}{\pi}\arcsin(u_i \cdot v_j) \;\overset{?}{=}\; c\,(u_i \cdot v_j) \quad\text{for some constant } c > 0.$$

This is impossible because arcsin is not a linear function.

Page 35: Approximating the Cut-Norm

What if…

Suppose there exist a constant $c' > 0$ and unit vectors $u_i'$ and $v_j'$ such that

$$\arcsin(u_i' \cdot v_j') = c'\,(u_i \cdot v_j).$$

Page 36: Approximating the Cut-Norm

If this is possible….

Set $x_i = \mathrm{sign}(u_i' \cdot z)$, $y_j = \mathrm{sign}(v_j' \cdot z)$. Then

$$E[x_i y_j] = \tfrac{2}{\pi}\arcsin(u_i' \cdot v_j') = \tfrac{2c'}{\pi}\,(u_i \cdot v_j).$$

Recall $z$ is the random unit vector.

Page 37: Approximating the Cut-Norm

This is indeed possible!

With $c' = \ln(1 + \sqrt{2})$, there exist unit vectors $u_i'$ and $v_j'$ such that

$$\arcsin(u_i' \cdot v_j') = c'\,(u_i \cdot v_j).$$

Note that $u_i' \cdot v_j' = \sin(c'\, u_i \cdot v_j)$.

Page 38: Approximating the Cut-Norm

Another Semidefinite Program

$$\sin(c'\, u_i \cdot v_j) = u_i' \cdot v_j' \quad\text{for all } i, j,$$

$$u_i' \cdot u_i' = 1, \qquad v_j' \cdot v_j' = 1.$$

Page 39: Approximating the Cut-Norm

Better Constant Approximation

$$E\Big[\sum_{i,j} a_{ij} x_i y_j\Big] = \sum_{i,j} a_{ij}\, E[x_i y_j] = \tfrac{2}{\pi} \sum_{i,j} a_{ij} \arcsin(u_i' \cdot v_j') = \tfrac{2c'}{\pi} \sum_{i,j} a_{ij}\, u_i \cdot v_j$$

$$= \tfrac{2c'}{\pi}\, \|A\|_{SDP} \;\ge\; 0.56\, \|A\|_{SDP} \;\ge\; 0.56\, \|A\|_C.$$
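A quick numeric check of the constant quoted above (illustrative):

```python
# Numeric check of the constant 2 c' / pi with c' = ln(1 + sqrt(2)).
import math

c_prime = math.log(1 + math.sqrt(2))   # ≈ 0.8814
print(2 * c_prime / math.pi)           # ≈ 0.5611, matching the 0.56 on the slide
```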

Page 40: Approximating the Cut-Norm

Road Map

• Motivation

• Hardness Result

• General Approach

• Outline of Algorithm

• Conclusion

Page 41: Approximating the Cut-Norm

Main Ideas

• Semidefinite Program Relaxation

- a powerful tool for optimization problems

• Randomized Rounding Scheme

- random hyperplane

- multi-dimensional Gaussian

• Apply similar techniques directly to approximate MAX-CUT