
Tensors and Optimization

Ravi Kannan

September 16, 2013

Ravi Kannan Tensors and Optimization September 16, 2013 1 / 11


Reducing MAX-r-CSP to Tensor Optimization

MAX-3-SAT: Clause: (x_i + x_j + x_k). Given a list of 3-clauses on n variables, find the assignment maximizing the number of clauses satisfied.

Set up 8 n × n × n tensors: A^(1), A^(2), ..., A^(8), where

A^(1)_ijk = number of clauses on the variable triple (i, j, k) satisfied by the assignment (x_i, x_j, x_k) = (1, 1, 1)

A^(2)_ijk = number of clauses on the variable triple (i, j, k) satisfied by the assignment (x_i, x_j, x_k) = (0, 1, 1)

...

Each clause contributes to 7 of the 8 tensors.

Maximize ∑_ijk A^(1)_ijk x_i x_j x_k + ∑_ijk A^(2)_ijk (1 − x_i) x_j x_k + ···.

Can be done for all MAX-r-CSP problems. (Get 2^r r-tensors. But for this talk, r = 3.) Goodbye MAX-CSP. Only Tensor Optimization.
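As a concrete illustration (my own sketch, not from the talk), the 8 tensors and the polynomial objective can be assembled as follows; the clause encoding (an index triple plus a tuple of literal signs) is an assumption of this sketch:

```python
import itertools
import numpy as np

def build_tensors(clauses, n):
    """Build the 8 tensors A[b], b = 0..7: the bit pattern for b encodes an
    assignment to (x_i, x_j, x_k), and A[b][i, j, k] counts the clauses on
    the triple (i, j, k) that this assignment satisfies."""
    A = np.zeros((8, n, n, n))
    patterns = list(itertools.product([1, 0], repeat=3))  # (1,1,1), (1,1,0), ...
    for (i, j, k), signs in clauses:  # signs[t] = True means a positive literal
        for b, bits in enumerate(patterns):
            # a clause fails only when every one of its literals is False
            if any(bit == int(s) for bit, s in zip(bits, signs)):
                A[b, i, j, k] += 1
    return A

def objective(A, x):
    """Sum over b of sum_ijk A[b][i,j,k] * f1 * f2 * f3, each factor being x
    or (1 - x) per the bit pattern of b; on a 0/1 vector x this equals the
    number of satisfied clauses."""
    patterns = list(itertools.product([1, 0], repeat=3))
    total = 0.0
    for b, bits in enumerate(patterns):
        f = [x if bit == 1 else 1 - x for bit in bits]
        total += np.einsum('ijk,i,j,k->', A[b], f[0], f[1], f[2])
    return total
```

On a 0/1 assignment exactly one of the 8 factor products is nonzero per triple, so the objective reproduces the satisfied-clause count.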

Ravi Kannan Tensors and Optimization September 16, 2013 2 / 11

Planted Clique Problem

Given G(n, 1/2) plus a planted clique of size p, find the clique.

If p ≥ Ω(√n), spectral methods work. Alon, Krivelevich, Sudakov. (p ∈ O(n^(0.5−ε))? Still open.)

A_ijk = 1 if the number of edges in G among the pairs (i, j), (j, k), (k, i) is odd; −1 otherwise.
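This ±1 parity tensor can be sketched in vectorized form (an illustrative rendering, assuming a dense 0/1 adjacency matrix):

```python
import numpy as np

def parity_tensor(adj):
    """A[i, j, k] = +1 if the number of edges of G among the pairs
    (i, j), (j, k), (k, i) is odd, and -1 otherwise; adj is the symmetric
    0/1 adjacency matrix of G."""
    e = adj.astype(int)
    # broadcast so that count[i, j, k] = e[i, j] + e[j, k] + e[i, k]
    count = e[:, :, None] + e[None, :, :] + e[:, None, :]
    return np.where(count % 2 == 1, 1, -1)
```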

Frieze, K.: Arg-Max_{|x|=1} ∑_ijk A_ijk x_i x_j x_k gives the planted clique provided p ∈ Ω*(n^(1/3)).

Brubaker, Vempala: the r-tensor problem gives the planted clique provided p ∈ Ω*(n^(1/r)).

Planted Gaussian problem: A is n × n with i.i.d. N(0, 1) entries. B has i.i.d. N(µ, 1) entries in a (hidden) p × p sub-matrix and 0 otherwise. Given A + B, find B. [Spectral methods work for pµ ≥ c√n.]

Planted Dense sub-graph problems.

Ravi Kannan Tensors and Optimization September 16, 2013 3 / 11


Tensor Optimization - What norms?

Problem: Maximize ∑_ijk A_ijk y_i y_j y_k, subject to some constraints of the form y_i ∈ {0, 1} and y_i = 1 − y_j.

Notation: A(x, y, z) = ∑_ijk A_ijk x_i y_j z_k.

Suppose we can approximate A by a "simpler to optimize" (low-rank) tensor B so that

Max_{|x|=|y|=|z|=1} |A(x, y, z) − B(x, y, z)| = ||A − B|| ≤ ∆.

Then solving the problem with B instead of A ensures the error is at most ∆ |x| |y| |z|.

Moral of this: it is enough to ensure that A is well approximated by B in spectral norm.
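A quick numerical sanity check of this error-transfer inequality (my own sketch): the spectral norm of a 3-tensor is hard to compute, so the check below substitutes the larger Frobenius norm, for which the bound must also hold by Cauchy-Schwarz:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4, 4))
B = np.round(A)  # a stand-in "simpler" approximation of A

x, y, z = rng.normal(size=(3, 4))
gap = abs(np.einsum('ijk,i,j,k->', A - B, x, y, z))

# ||A - B|| <= ||A - B||_F, so the Frobenius version of the bound must hold:
bound = (np.linalg.norm(A - B) * np.linalg.norm(x)
         * np.linalg.norm(y) * np.linalg.norm(z))
assert gap <= bound + 1e-9
```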

Ravi Kannan Tensors and Optimization September 16, 2013 4 / 11


Any tensor can be approximated

Notation: x ⊗ y ⊗ z is the tensor with entries x_i y_j z_k. It is a rank-1 tensor. ||A||²_F = sum of squares of all entries.

Lemma: For any r-tensor A, there are 1/ε² rank-1 tensors whose sum B satisfies

||A − B|| ≤ ε ||A||_F.

Proof: Start with B = 0. If the Lemma is not already satisfied, there are x, y, z such that |(A − B)(x, y, z)| ≥ ε ||A||_F. Take c x ⊗ y ⊗ z as the next rank-1 tensor to subtract... [Greedy. Imitation of SVD.]

To make this constructive, we need to find x, y, z. NP-hard: Hillar, Lim, "Most Tensor Problems are NP-Hard".
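The greedy deflation can be sketched as below. Since finding the maximizing x, y, z is NP-hard, the inner step substitutes a heuristic alternating maximization (my own choice for illustration, not the theorem's O*(n^(1/ε²)) enumeration); the deflation coefficient c = R(x, y, z) shrinks ||R||²_F by exactly c², which is what bounds the number of terms by 1/ε²:

```python
import numpy as np

def best_rank1(T, iters=50, rng=None):
    """Heuristic: alternately maximize T(x, y, z) over one unit vector at a
    time, holding the other two fixed (no global-optimality guarantee)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = T.shape[0]
    if not T.any():
        zero = np.zeros(n)
        return 0.0, zero, zero, zero
    x, y, z = rng.normal(size=(3, n))
    for _ in range(iters):
        x = np.einsum('ijk,j,k->i', T, y, z); x /= np.linalg.norm(x)
        y = np.einsum('ijk,i,k->j', T, x, z); y /= np.linalg.norm(y)
        z = np.einsum('ijk,i,j->k', T, x, y); z /= np.linalg.norm(z)
    return np.einsum('ijk,i,j,k->', T, x, y, z), x, y, z

def greedy_approx(A, eps):
    """Greedy deflation from the Lemma: while the residual R = A - B has
    |R(x, y, z)| >= eps * ||A||_F, subtract c * (x tensor y tensor z) with
    c = R(x, y, z); each accepted step shrinks ||R||_F^2 by c^2, so at most
    1/eps^2 terms are taken."""
    B = np.zeros_like(A)
    terms = []
    while True:
        c, x, y, z = best_rank1(A - B)
        if abs(c) < eps * np.linalg.norm(A):
            break
        B = B + c * np.einsum('i,j,k->ijk', x, y, z)
        terms.append((c, x, y, z))
    return B, terms
```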

Theorem (de la Vega, K., Karpinski, Vempala): For any r-tensor A, we can find in O*(n^(1/ε²)) time 4/ε² rank-1 tensors whose sum B satisfies whp: ||A − B|| ≤ ε ||A||_F.

No Free Lunch: Cannot put ||·||_F on the lhs or ||·|| on the rhs.

Ravi Kannan Tensors and Optimization September 16, 2013 5 / 11


Norm Maximization for tensors

Central Problem: Find unit vectors x, y, z to maximize ∑_ijk A_ijk x_i y_j z_k.

1. If we knew the optimizing y, z, then the optimizing x is easy to find: it is just the vector A(·, y, z) (whose i-th component is A(e_i, y, z)) scaled to length 1.

2. Now, A(e_i, y, z) = ∑_{j,k} A_ijk y_j z_k. The sum can be estimated using just a few terms. But an important question is: how do we make sure the variance is not too high, since the entries can have disparate values?

3. Length-squared sampling works! [Stated here without proof.]

4. This gives us many candidate x's. How do we check which one is good? For each x, recursively solve the matrix problem (SVD!) to determine its value!
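Step 4 in code: once x is fixed, maximizing over unit y, z is exactly a matrix problem, since the top singular value and vectors of A(x, ·, ·) solve it (a small sketch, assuming a dense numpy tensor):

```python
import numpy as np

def candidate_value(A, x):
    """Value of a candidate x: the max over unit y, z of A(x, y, z) is the
    top singular value of the matrix M = A(x, ., .) = sum_i x_i A[i], and
    the top singular vectors are the maximizing y, z."""
    M = np.einsum('ijk,i->jk', A, x)
    U, s, Vt = np.linalg.svd(M)
    return s[0], U[:, 0], Vt[0]
```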

Ravi Kannan Tensors and Optimization September 16, 2013 6 / 11


Norm Maximization - Some Detail

Estimate ∑_jk A_ijk y_j z_k for all i. (Really, for all y, z.)

Pick a set S of O(1) pairs (j, k) in i.i.d. trials, in each trial with probabilities

Pr[(j, k)] = ∑_i A²_ijk / ||A||²_F.

For each (j, k) ∈ S, enumerate all possible values of y_j, z_k (in discrete steps). [Only POLY^O(1) = POLY many sets of values.]

Treat ∑_{(j,k)∈S} A_ijk y_j z_k as an estimate of ∑_{all (j,k)} A_ijk y_j z_k.
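A sketch of the sampling step (my own rendering; to keep the estimator unbiased it adds the standard importance-sampling weights 1/(|S| p_jk), which the slide leaves implicit):

```python
import numpy as np

def sample_pairs(A, m, rng):
    """Length-squared sampling: pick m pairs (j, k) i.i.d. with probability
    p_jk = sum_i A[i, j, k]^2 / ||A||_F^2."""
    n = A.shape[1]
    p = (A ** 2).sum(axis=0).ravel() / (A ** 2).sum()
    flat = rng.choice(n * A.shape[2], size=m, p=p)
    return [(t // A.shape[2], t % A.shape[2]) for t in flat], p.reshape(n, A.shape[2])

def estimate_rows(A, y, z, pairs, p):
    """Importance-weighted estimate of sum_jk A[i, j, k] y_j z_k, for all i
    at once, from the sampled pairs."""
    est = np.zeros(A.shape[0])
    for j, k in pairs:
        est += A[:, j, k] * y[j] * z[k] / (len(pairs) * p[j, k])
    return est
```

With many samples the estimate concentrates around the true sums; the length-squared weighting is what keeps the variance controlled when entries have disparate magnitudes.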

Ravi Kannan Tensors and Optimization September 16, 2013 7 / 11


For what CSP’s is this good?

First, 2-CSP: MAX-2-SAT. Or MAX-CUT. Let n be the number of variables or vertices and m the number of clauses or edges. A has ||A||²_F = m.

Error = ||A − B|| |x| |1 − x| ≤ ε ||A||_F √n √n ≤ ε n √m.

But all MAX-CSP problems can be easily solved with error at most O(m).

So, no use unless m ∈ Ω(n²). Dense. Similar argument for higher r.

Ravi Kannan Tensors and Optimization September 16, 2013 8 / 11


Generalizing Metrics, Dense problems

Scaling A: Let D_i be the sum of the i-th row. [The degree, if A is the adjacency matrix.]

In many situations, a natural scaling of A is given by B_ij = A_ij / √(D_i D_j).

Define D = (1/n) Σ_i D_i. Our scaling: B_ij = A_ij / √((D_i + D)(D_j + D)).

A is core-dense if ||B||_F ∈ O(1).

Dense matrices, metrics (triangle inequality), and powers of metrics are all core-dense!

Theorem: PTAS's for all core-dense MAX-r-CSP's.

Ravi Kannan Tensors and Optimization September 16, 2013 9 / 11
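A minimal sketch of the scaling above, checked on a metric (a pairwise-distance matrix of random points); the claim is that ||B||_F stays O(1) as n grows. The point distribution and sizes are illustrative choices, not from the talk:

```python
import numpy as np

def core_dense_scaling(A):
    """B_ij = A_ij / sqrt((D_i + Dbar)(D_j + Dbar)),
    where D_i is the i-th row sum and Dbar is the average row sum."""
    D = A.sum(axis=1)
    Dbar = D.mean()
    return A / np.sqrt(np.outer(D + Dbar, D + Dbar))

rng = np.random.default_rng(0)
for n in (100, 400, 1600):
    pts = rng.random((n, 2))
    # Euclidean distance matrix: a metric, hence core-dense.
    A = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    B = core_dense_scaling(A)
    print(n, np.linalg.norm(B))  # hovers around a constant as n grows
```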


Moment Tensors

x_1, x_2, ..., x_n: (possibly dependent) random variables with E x_i = 0.

A_ij = E(x_i x_j): the variance-covariance matrix.

A_ijk = E(x_i x_j x_k): the third-moment tensor. So E((u · x)³) = A(u, u, u).

Frieze, Jerrum, K.: If E(x_i) = 0, the x_i are 4-wise independent, and R is an orthonormal transformation, then the local maxima of F(u) = E[(u^T R x)⁴] over |u| = 1 are precisely the rows of R⁻¹ corresponding to indices i with E(x_i⁴) > 3. Yields an algorithm for ICA.

Moral: Some tensors are nice, and we can do the maximization.

Anandkumar, Hsu, Kakade: The third-moment tensor is used for topic modeling.

Ravi Kannan Tensors and Optimization September 16, 2013 10 / 11
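An empirical check of the identity E((u · x)³) = A(u, u, u) for the third-moment tensor (a sketch; the skewed, dependent distribution and the sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Samples of a centered random vector x with dependent coordinates.
n, N = 4, 100_000
z = rng.exponential(size=(N, n)) - 1.0        # centered, skewed coordinates
M = rng.random((n, n))
x = z @ M.T                                   # mix coordinates; still E x = 0

# Empirical third-moment tensor A_ijk = (1/N) sum_s x_si x_sj x_sk.
A = np.einsum('si,sj,sk->ijk', x, x, x) / N

# For any u, A(u,u,u) equals the empirical mean of (u.x)^3 exactly.
u = rng.random(n)
lhs = np.einsum('ijk,i,j,k->', A, u, u, u)
rhs = np.mean((x @ u) ** 3)
print(abs(lhs - rhs))  # agrees up to floating-point error
```

The agreement is an algebraic identity of the empirical moments, not a sampling statement, so it holds for any sample size.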


Epilogue

These results can also be achieved (for a narrower class) by Sherali-Adams schemes. Yoshida, Zhou (2013).

Low-rank approximation of polynomials: An r-tensor A represents an r-homogeneous polynomial Σ_ijk A_ijk x_i x_j x_k. A rank-1 tensor a ⊗ b ⊗ c represents a product of 3 linear polynomials: (a · x)(b · x)(c · x).

Any homogeneous polynomial can be approximated by the sum of a small number of products of linear polynomials. Stronger results: Schrijver.

OPEN: Use tensors for other optimization problems. Suppose we can find the spectral norm of 3-tensors to within a factor of 1 + ε for any constant ε > 0. [Not ruled out by NP-hardness proofs.] Can one beat the best approximation factor for, say, Max-Cut obtained by SDP (a quadratic method)?

Ravi Kannan Tensors and Optimization September 16, 2013 11 / 11
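The rank-1 correspondence above can be verified numerically: the cubic form of a ⊗ b ⊗ c evaluated at x equals the product of the three linear forms (a sketch with arbitrary random vectors):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
a, b, c, x = (rng.random(n) for _ in range(4))

# Rank-1 3-tensor T = a (x) b (x) c, i.e. T_ijk = a_i b_j c_k.
T = np.einsum('i,j,k->ijk', a, b, c)

# Its cubic polynomial sum_ijk T_ijk x_i x_j x_k factors as (a.x)(b.x)(c.x).
poly = np.einsum('ijk,i,j,k->', T, x, x, x)
product = (a @ x) * (b @ x) * (c @ x)
print(abs(poly - product))  # ~0, up to floating-point error
```

A rank-k tensor therefore represents a sum of k such products, which is the low-rank approximation of polynomials the slide refers to.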
