
Page 1: Lattices

Lattices

Definition and Related Problems

Page 2: Lattices

Lattices

Definition (lattice): Given a basis v1,...,vn ∈ R^n, the lattice L = L(v1,...,vn) is

L = { a1·v1 + ... + an·vn | a1,...,an ∈ Z }.
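To make the definition concrete, here is a small Python sketch that enumerates the points of L(v1, v2) for small integer coefficients; the basis vectors are arbitrary example values, not taken from the slides.

```python
# Sketch: enumerate lattice points a1*v1 + a2*v2 for small integer coefficients.
# The basis vectors below are arbitrary example values.
from itertools import product

v1 = (2.0, 0.0)
v2 = (1.0, 1.5)

def lattice_points(basis, coeff_range):
    """All integer combinations sum(a_i * v_i) with each a_i in coeff_range."""
    points = []
    for coeffs in product(coeff_range, repeat=len(basis)):
        point = tuple(sum(a * v[i] for a, v in zip(coeffs, basis))
                      for i in range(len(basis[0])))
        points.append((coeffs, point))
    return points

for coeffs, point in lattice_points([v1, v2], range(-2, 3)):
    print(coeffs, point)
```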

Page 3: Lattices

Illustration - A Lattice in R2

“Recipe”:

1. Take two linearly independent vectors in R2.

2. Close them under addition and under multiplication by integer scalars.

Each point corresponds to a vector in the lattice


Page 4: Lattices

Shortest Vector Problem

SVP (Shortest Vector Problem):

Given a lattice L, find s ∈ L, s ≠ 0, s.t. for any x ∈ L, x ≠ 0:

||x|| ≥ ||s||.

Page 5: Lattices

The Shortest Vector - Examples

What’s the shortest vector in the lattice spanned by the two given vectors?

Page 6: Lattices

Closest Vector Problem

CVP (Closest Vector Problem):

Given a lattice L and a vector y ∈ R^n, find a v ∈ L s.t. ||y − v|| is minimal.

Which lattice vector is closest to the marked vector?
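For intuition, a brute-force Python sketch of both problems: it searches over a bounded box of coefficients, which is enough only for tiny examples and is exponential in general.

```python
# Brute-force SVP/CVP over a bounded coefficient box -- only meant to
# illustrate the two definitions on tiny examples.
from itertools import product
import math

def candidates(basis, bound):
    dim = len(basis[0])
    for coeffs in product(range(-bound, bound + 1), repeat=len(basis)):
        yield tuple(sum(a * v[i] for a, v in zip(coeffs, basis)) for i in range(dim))

def shortest_vector(basis, bound=3):
    """SVP: the shortest non-zero vector among the enumerated candidates."""
    nonzero = [p for p in candidates(basis, bound) if any(abs(c) > 1e-12 for c in p)]
    return min(nonzero, key=lambda p: math.dist(p, (0,) * len(p)))

def closest_vector(basis, y, bound=3):
    """CVP: the lattice vector minimizing ||y - v|| among the candidates."""
    return min(candidates(basis, bound), key=lambda p: math.dist(p, y))

basis = [(2.0, 0.0), (1.0, 1.5)]
print(shortest_vector(basis))            # one of the four tied shortest vectors (+-1, +-1.5)
print(closest_vector(basis, (2.3, 2.9))) # the lattice point nearest the target
```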

Page 7: Lattices

Lattice Approximation Problems

g-Approximation version: find a vector y s.t. ||y|| ≤ g·shortest(L).

g-Gap version: given L and a number d, distinguish between
– the 'yes' instances (shortest(L) ≤ d)
– the 'no' instances (shortest(L) > g·d)

If the g-Gap problem is NP-hard, then having a polynomial g-approximation algorithm --> P = NP.

Page 8: Lattices


Page 9: Lattices

Lattice Problems - Brief History

[Dirichlet, Minkowski] no CVP algorithms…
[LLL] Approximation algorithm for SVP, factor 2^(n/2)
[Babai] Extension to CVP
[Schnorr] Improved factor, 2^(n/lg n), for both CVP and SVP

[vEB]: CVP is NP-hard
[ABSS]: Approximating CVP is
– NP-hard to within any constant
– almost NP-hard to within an almost-polynomial factor.

Page 10: Lattices

Lattice Problems - Recent History

[Ajtai96]: worst-case/average-case reduction for SVP.
[Ajtai-Dwork96]: Cryptosystem.
[Ajtai97]: SVP is NP-hard (for randomized reductions).
[Micc98]: SVP is NP-hard to approximate to within some constant factor.
[DKRS]: CVP is NP-hard to within an almost-polynomial factor.
[LLS]: Approximating CVP to within n^1.5 is in coNP.
[GG]: Approximating SVP and CVP to within √n is in NP ∩ coAM.

Page 11: Lattices

CVP/SVP - which is easier?

Reminder: Definition (lattice): Given a basis v1,...,vn ∈ R^n, the lattice L = L(v1,...,vn) is { Σ ai·vi | ai integers }.

SVP (Shortest Vector Problem): Find the shortest non-zero vector in L.

CVP (Closest Vector Problem): Given a vector yRn, find a vL closest to y.

[Figure: a lattice with its shortest vector, a target vector y, and the lattice vector closest to y marked.]

Why is SVP not the same as CVP with y = 0? Ohh... but isn't that just an annoying technicality?...

Page 12: Lattices

Trying to Reduce SVP to CVP

A CVP oracle, given a basis B and a target y, finds (c1,...,cn) ∈ Z^n which minimizes ||c1b1 + ... + cnbn − y||. An SVP oracle, given B, finds (c1,...,cn) ≠ 0 in Z^n which minimizes ||c1b1 + ... + cnbn||.

#1 try: call the CVP oracle with y = 0. The oracle simply returns c = 0, i.e. the lattice point 0, which tells us nothing.

Note that we can similarly try y = b1, mapping the oracle's answer c to (c1 − 1, c2,...,cn)... but this will also yield s = 0, since the lattice vector closest to b1 is b1 itself.

Page 13: Lattices

Geometrical Intuition

The lattice L. In this example the shortest vector is b2 − 2b1.

The obvious reduction: the shortest vector is the difference between (say) b2 and the lattice vector closest to b2 (not b2 itself!!), i.e. the vector closest to b2 besides b2 itself.

Thus we would like to somehow “extract” b2 from the lattice, so the oracle for CVP will be forced to find the non-trivial vector closest to b2.

...This is not as simple as it sounds...

Page 14: Lattices

Trying to Reduce SVP to CVP

The trick: replace b1 with 2b1 in the basis. Let B(1) = (2b1, b2,...,bn) and invoke the CVP oracle on (B(1), b1). The oracle finds (c1,...,cn) ∈ Z^n minimizing

|| c1·2b1 + c2b2 + ... + cnbn − b1 || = || (2c1 − 1)·b1 + c2b2 + ... + cnbn ||.

Since c1 ∈ Z, the coefficient 2c1 − 1 is odd and in particular non-zero, so the answer maps to a non-zero lattice vector s ≠ 0.

But in this way we only discover the shortest vector among those with an odd coefficient for b1.

That's not really a problem! Since one of the coefficients of the shortest vector must be odd (why? if all of them were even, halving the vector would give a shorter non-zero lattice vector), we can do this process for all the vectors in the basis and take the shortest result!

Page 15: Lattices

Geometrical Intuition

The lattice L' ⊆ L, L' = L(b1, 2b2). The lattice L'' ⊆ L, L'' = L(2b1, b2).

By doubling a vector in the basis, we extract it from the lattice without changing the lattice "too much". But we risk losing the closest point in the process: the point closest to b1 in the original lattice may be lost in the new lattice, while the point closest to b2 in the original lattice is also in the new lattice.

It’s a calculated risk though: one of the closest points has to survive...

Page 16: Lattices

The Reduction of g-SVP to g-CVP

Input: a pair (B,d), B = (b1,...,bn) and d ∈ R.

for j = 1 to n do: invoke the CVP oracle on (B(j), bj, d), where B(j) = (b1,...,b_{j-1}, 2bj, b_{j+1},...,bn).

Output: the OR of all oracle replies.
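A sketch of the search version of this reduction in Python; cvp_oracle(basis, target) is an assumed black box that returns the lattice vector closest to the target.

```python
# Sketch of the reduction: approximate SVP using a CVP oracle.
# `cvp_oracle(basis, target)` is an assumed black box returning the closest
# lattice vector to `target` in the lattice spanned by `basis`.
import math

def svp_via_cvp(basis, cvp_oracle):
    best = None
    for j in range(len(basis)):
        # B(j): the basis with b_j replaced by 2*b_j.
        doubled = [list(v) for v in basis]
        doubled[j] = [2 * x for x in basis[j]]
        # The closest vector to b_j in L(B(j)) differs from b_j by an odd
        # multiple of b_j, so the difference is a non-zero vector of L.
        closest = cvp_oracle(doubled, basis[j])
        candidate = [c - b for c, b in zip(closest, basis[j])]
        if best is None or math.hypot(*candidate) < math.hypot(*best):
            best = candidate
    return best
```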

Page 17: Lattices
Page 18: Lattices

Hardness of SVP & Applications

Finding, and even approximating, the shortest vector is hard.

Next we will see how this fact can be exploited for cryptography.

We start by explaining the general framework: a well-known cryptographic method called a public-key cryptosystem.

Page 19: Lattices

Public-Key Cryptosystems and Brave Spies...

The brave spy wants to send HQ a secret message: "The enemy will attack within a week." ...But the enemy is in between... They can see and duplicate whatever is transmitted.

The solution: in the HQ a brand new lock is developed for such cases, and the open lock is sent HQ -> spy. The spy can easily lock the message without the key, and now the locked message can be sent spy -> HQ without fear, to be read only there.

Page 20: Lattices

Public-Key Cryptosystem ('76)

Requirements: two poly-time computable functions Encr and Decr, s.t.:
1. ∀x: Decr(Encr(x)) = x
2. Given Encr(x) only, it is hard to find x.

Usage: make Encr public so anyone can send you messages; keep Decr private.

Page 21: Lattices

The Dual Lattice

L* = { y | ∀x ∈ L: y·x ∈ Z }

Given a basis {v1,...,vn} for L, one can construct in poly-time a basis {u1,...,un} for L*:
ui·vj = 0 (i ≠ j)
ui·vi = 1

In other words U = (V^t)^-1, where the columns of U are u1,...,un and the columns of V are v1,...,vn.
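A minimal numpy sketch of this computation: with the basis vectors as the columns of V, the dual basis vectors are the columns of (V^t)^-1.

```python
# Dual basis: U = (V^T)^{-1}, so that <u_i, v_j> = 1 if i == j and 0 otherwise.
import numpy as np

V = np.array([[2.0, 1.0],
              [0.0, 1.5]])          # columns v1, v2 (example values)
U = np.linalg.inv(V.T)              # columns u1, u2 of the dual basis

print(np.round(U.T @ V, 10))        # identity matrix: u_i . v_j = delta_ij
```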

Page 22: Lattices

Shortest Vector - Hidden Hyperplane

H0 = { y | y·s = 0 }
H1 = { y | y·s = 1 }
...
Hk = { y | y·s = k }

The distance between consecutive hyperplanes is 1/||s||.
(s – shortest vector, H – hidden hyperplane)

Observation: the shortest vector induces distinct layers in the dual lattice.

Page 23: Lattices

Encrypting

Given the lattice L, encryption is polynomial:
Encoding 1: (1) choose a random lattice point, (2) perturb it.
Encoding 0: choose a random point.
(s – shortest vector, H – hidden hyperplane)

Page 24: Lattices

Decrypting

Given s, decryption can be carried out in polynomial time; otherwise it is hard.
Decode 1 if the projection of the point on s is close to one of the hyperplanes.
Decode 0 if the projection is not close to any of the hyperplanes.
(s – shortest vector, H – hidden hyperplane)
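The toy sketch below only illustrates the geometric idea of these two slides; it is not the actual Ajtai-Dwork cryptosystem, and the secret s, the noise level, and all other parameters are made-up values. A bit 1 is encoded as a point whose inner product with s is close to an integer (i.e. close to one of the hyperplanes), a bit 0 as a uniformly random point; decryption with s checks that distance.

```python
# Toy illustration of the hidden-hyperplane idea; NOT a real cryptosystem.
# s plays the role of the secret direction; points whose inner product with s
# is near an integer lie near one of the hyperplanes H_k.
import random

DIM = 5
s = [random.uniform(-1.0, 1.0) for _ in range(DIM)]   # made-up secret direction

def encrypt(bit, radius=10.0, noise=0.01):
    point = [random.uniform(-radius, radius) for _ in range(DIM)]
    if bit == 1:
        # push the point onto the nearest hyperplane {y : y.s = k}, then perturb
        proj = sum(p * si for p, si in zip(point, s))
        shift = round(proj) - proj
        norm2 = sum(si * si for si in s)
        point = [p + (shift + random.uniform(-noise, noise)) * si / norm2
                 for p, si in zip(point, s)]
    return point

def decrypt(point, threshold=0.1):
    proj = sum(p * si for p, si in zip(point, s))
    return 1 if abs(proj - round(proj)) < threshold else 0

# 1-bits always decrypt correctly; 0-bits only with high probability,
# since a random point occasionally lands near a hyperplane.
bits = [random.randint(0, 1) for _ in range(20)]
print(sum(decrypt(encrypt(b)) == b for b in bits), "of", len(bits), "decrypted correctly")
```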

Page 25: Lattices
Page 26: Lattices

GG

Approximating SVP and CVP to within √n is in NP ∩ coAM.

Hence, if these problems were shown NP-hard, the polynomial-time hierarchy would collapse.

Page 27: Lattices

The World According to Lattices

[Figure: a chart of approximation factors from 1 through 1+1/n, O(1), 2, O(log n), n^(1/lg lg n), √n, up to 2^(n/lg n), for both SVP and CVP, marking the NP-hardness results (Micciancio, Ajtai, DKRS), the NP ∩ coAM region (GG), and the poly-time approximation factor (L3).]

Page 28: Lattices

OPEN PROBLEMS

[The same approximation-factor chart as above.]

Can LLL be improved?
Is g-SVP NP-hard to within √n?
For super-polynomial, sub-exponential factors: is it a class of its own?

Page 29: Lattices
Page 30: Lattices

Approximating SVP in Poly-Time

The LLL Algorithm

Page 31: Lattices

What's coming up next?

To within what factor can SVP be approximated?

In this chapter we describe a polynomial-time algorithm for approximating SVP to within factor 2^((n-1)/2).

We will later see that approximating the shortest vector to within √(2/(1+ε²)), for some ε > 0, is NP-hard.

Page 32: Lattices

The Fundamental Insight (?)

Assume an orthogonal basis for a lattice. The shortest vector in this lattice is...

Page 33: Lattices

Illustration

[Figure: an orthogonal basis v1, v2 and the lattice vector x = 2v1 + v2; note that ||x|| > ||2v1|| and ||x|| > ||v2||.]

Page 34: Lattices

The Fundamental Insight (!)

Assume an orthogonal basis for a lattice. The shortest vector in this lattice is the shortest basis vector.

Page 35: Lattices

Why?

If a1,...,ak ∈ Z and v1,...,vk are orthogonal, then

||a1v1 + ... + akvk||² = a1²·||v1||² + ... + ak²·||vk||².

Therefore, if vi is the shortest basis vector and there exists 1 ≤ j ≤ k s.t. aj ≠ 0, then

||a1v1 + ... + akvk||² ≥ ||vi||²·(a1² + ... + ak²) ≥ ||vi||².

No non-zero lattice vector is shorter than vi.

Page 36: Lattices

What if we don't get an orthogonal basis?

Remember the good old Gram-Schmidt procedure: it takes v1,...,vk, a basis for a subspace of R^n, and produces v1*,...,vk*, an orthogonal basis for the same subspace, by taking each vector in turn and subtracting its projections on each one of the vectors already taken.

Page 37: Lattices

Projections

Computing the projection w of v on u: ||w|| = ||v||·cos(θ), where θ is the angle between v and u, and w points in the direction of u, so

w = ||v||·cos(θ) · u/||u|| = (<v,u> / ||u||²) · u.
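In code the projection is a one-liner (numpy sketch):

```python
# Projection of v on u: w = (<v,u> / <u,u>) * u
import numpy as np

def project(v, u):
    v, u = np.asarray(v, dtype=float), np.asarray(u, dtype=float)
    return (np.dot(v, u) / np.dot(u, u)) * u

v, u = [3.0, 4.0], [1.0, 0.0]
print(project(v, u))          # [3. 0.]  -- the component of v along u
```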

Page 38: Lattices

Formally: The Gram-Schmidt Procedure

Input: a basis {v1,...,vk} of some subspace of R^n.
Output: an orthogonal basis {v1*,...,vk*}, s.t. for every 1 ≤ i ≤ k, span({v1,...,vi}) = span({v1*,...,vi*}).
Process: the procedure starts with v1* = v1. Each iteration (1 < i ≤ k) adds a vector which is orthogonal to the subspace already spanned:

vi* = vi − Σ_{j=1}^{i-1} (<vi, vj*> / ||vj*||²) · vj*
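A direct transcription of the procedure as a Python sketch (no normalization, matching the vi* of the slide):

```python
# Gram-Schmidt orthogonalization exactly as on the slide:
# v_i* = v_i - sum_{j<i} (<v_i, v_j*> / ||v_j*||^2) * v_j*
import numpy as np

def gram_schmidt(basis):
    vs = [np.asarray(v, dtype=float) for v in basis]
    ortho = []
    for v in vs:
        v_star = v.copy()
        for u_star in ortho:
            v_star -= (np.dot(v, u_star) / np.dot(u_star, u_star)) * u_star
        ortho.append(v_star)
    return ortho

for v_star in gram_schmidt([[3.0, 1.0], [2.0, 2.0]]):
    print(v_star)
# [3. 1.] and [-0.4  1.2]: orthogonal, but in general not a basis of the same lattice.
```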

Page 39: Lattices

Example

[Figure: v1* and v2* span a subspace; v3 ⊥ v1 (only to simplify the presentation); v3* is obtained from v3 by subtracting the projection of v3 on v2*.]

Page 40: Lattices

Wishful Thinking

We would like Gram-Schmidt to map v1,...,vk, a basis of a lattice, to v1*,...,vk*, an orthogonal basis of the same lattice.

Unfortunately, the basis Gram-Schmidt constructs doesn't necessarily span the same lattice.

Example: if the projection of v2 on v1 is 1.35·v1, then v1 and v2 − 1.35·v1 don't span the same lattice as v1 and v2. As a matter of fact, not every lattice even has an orthogonal basis...

Page 41: Lattices

Nevertheless...

Invoking Gram-Schmidt on a lattice basis produces a lower-bound on the length of the shortest vector in this lattice.

Page 42: Lattices

Lower Bound on the Length of the Shortest Vector

Claim: Let vi* be the shortest vector in the basis constructed by Gram-Schmidt. For any non-zero lattice vector x:

||vi*|| ≤ ||x||

Proof: There exist z1,...,zk ∈ Z and r1,...,rk ∈ R such that

x = Σ_{t=1}^{k} zt·vt = Σ_{t=1}^{k} rt·vt*.

Let m be the largest index for which zm ≠ 0. Then rm = zm: for j < m the projection of vj on vm* is 0, while the projection of vm on vm* is vm* itself. Thus

||x|| ≥ |rm|·||vm*|| = |zm|·||vm*|| ≥ ||vm*|| ≥ ||vi*||.
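A quick numerical sanity check of the claim (a sketch; it uses the fact that when the basis matrix is factored as Q·R with Q orthonormal, |R[i,i]| equals ||vi*||):

```python
# Sanity check: min_i ||v_i*|| lower-bounds the length of every non-zero
# lattice vector.  ||v_i*|| is |R[i,i]| when the basis matrix (columns =
# basis vectors) is factored as Q @ R with Q orthonormal, R upper-triangular.
from itertools import product
import numpy as np

V = np.array([[3.0, 2.0],
              [1.0, 2.0]])                       # columns are the basis vectors
_, R = np.linalg.qr(V)
gs_lower_bound = np.min(np.abs(np.diag(R)))

lengths = [np.linalg.norm(V @ np.array(c))
           for c in product(range(-4, 5), repeat=2) if any(c)]
print(gs_lower_bound, "<=", min(lengths))        # the bound holds
```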

Page 43: Lattices

Compromise

Still, we'll have to settle for less than an orthogonal basis: we'll construct a reduced basis. Reduced bases are composed of "almost" orthogonal and relatively short vectors. They will therefore suffice for our purpose.

Page 44: Lattices

Reduced Basis

Definition (reduced basis): A basis {v1,...,vn} of a lattice is called reduced if:
(1) for all 1 ≤ j < i ≤ n: |μ_{i,j}| ≤ ½, where μ_{i,j} = <vi, vj*> / ||vj*||²
(2) for all 1 ≤ i < n: ¾·||vi*||² ≤ ||v_{i+1}* + μ_{i+1,i}·vi*||²

(In condition (2), vi* is the projection of vi on span{vi*,...,vn*}, and v_{i+1}* + μ_{i+1,i}·vi* is the projection of v_{i+1} on the same subspace.)
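A small checker for the two conditions (a sketch), reading μ_{i,j} and ||vi*|| off a QR factorization of the basis matrix:

```python
# Check whether a basis (given as matrix columns) is reduced in the above sense.
# With V = Q @ R: ||v_i*|| = |R[i,i]| and mu_ij = R[j,i] / R[j,j] for j < i.
import numpy as np

def is_reduced(V):
    _, R = np.linalg.qr(np.asarray(V, dtype=float))
    n = R.shape[1]
    mu = lambda i, j: R[j, i] / R[j, j]
    size_ok = all(abs(mu(i, j)) <= 0.5 + 1e-12
                  for i in range(n) for j in range(i))
    lovasz_ok = all(0.75 * R[i, i] ** 2 <=
                    R[i + 1, i + 1] ** 2 + (mu(i + 1, i) * R[i, i]) ** 2 + 1e-12
                    for i in range(n - 1))
    return size_ok and lovasz_ok

print(is_reduced([[1.0, 0.3], [0.0, 1.0]]))   # True: nearly orthogonal columns
print(is_reduced([[1.0, 10.0], [0.0, 1.0]]))  # False: mu_21 = 10 violates (1)
```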

Page 45: Lattices

Properties of a Reduced Basis (1)

Claim: If a basis {v1,...,vn} is reduced, then for every 1 ≤ i < n:

½·||vi*||² ≤ ||v_{i+1}*||²

Proof: Since {v1,...,vn} is reduced,
¾·||vi*||² ≤ ||v_{i+1}* + μ_{i+1,i}·vi*||²
= ||v_{i+1}*||² + μ_{i+1,i}²·||vi*||²    (vi* and v_{i+1}* are orthogonal)
≤ ||v_{i+1}*||² + ¼·||vi*||²    (since |μ_{i+1,i}| = |<v_{i+1},vi*>/<vi*,vi*>| ≤ ½)
and the claim follows.

Corollary: By induction on i−j, for all j ≤ i: (½)^(i−j)·||vj*||² ≤ ||vi*||².

Page 46: Lattices

Properties of a Reduced Basis (2)

Claim: If a basis {v1,...,vn} is reduced, then for every 1 ≤ j ≤ i ≤ n:

(½)^(i−1)·||vj||² ≤ ||vi*||²

Proof: Write vj = vj* + Σ_{k=1}^{j-1} μ_{j,k}·vk*. Since {v1*,...,vn*} is an orthogonal basis,

||vj||² = ||vj*||² + Σ_{k=1}^{j-1} μ_{j,k}²·||vk*||².

Since |μ_{j,k}| ≤ ½ and, for 1 ≤ k ≤ j−1, ||vk*||² ≤ 2^(j−k)·||vj*||² (by the previous corollary), some arithmetic with the geometric sum Σ_{k=1}^{j-1} 2^(j−k) = 2^j − 2 gives

||vj||² ≤ ||vj*||²·(1 + ¼·(2^j − 2)) ≤ 2^(j−1)·||vj*||².

Rearranging the terms and applying the previous corollary once more, ||vj*||² ≤ 2^(i−j)·||vi*||², which implies

(½)^(i−1)·||vj||² ≤ (½)^(i−j)·||vj*||² ≤ ||vi*||²,

and this is in fact what we wanted to prove.

Page 47: Lattices

Approximation for SVP

The previous claim, together with the lower bound min_i ||vi*|| on the length of the shortest vector, yields:

The length of the first vector of any reduced basis provides us with at least a 2^((n-1)/2) approximation for the length of the shortest vector.

It now remains to show that any basis can be reduced in polynomial time.

Page 48: Lattices

Reduced Basis

Recall that the definition of a reduced basis is composed of two requirements:
(1) for all 1 ≤ j < i ≤ n: |μ_{i,j}| ≤ ½
(2) for all 1 ≤ i < n: ¾·||vi*||² ≤ ||v_{i+1}* + μ_{i+1,i}·vi*||²

We introduce two types of “lattice-preserving” transformations: reduction and swap, which will allow us to reduce any given basis.

Page 49: Lattices

First Transformation: Reduction

The transformation (for 1 ≤ l < k ≤ n):
vk ← vk − ⌈μ_{k,l}⌋·vl, where ⌈·⌋ denotes rounding to the nearest integer (all other vi are unchanged).

The consequences:
– for all 1 ≤ i ≤ n: vi* is unchanged
– for 1 ≤ j < l: μ_{k,j} ← μ_{k,j} − ⌈μ_{k,l}⌋·μ_{l,j}
– μ_{k,l} ← μ_{k,l} − ⌈μ_{k,l}⌋
– for all i ≠ k and 1 ≤ j < i: μ_{i,j} is unchanged

Using this transformation we can ensure |μ_{i,j}| ≤ ½ for all j < i.

Page 50: Lattices

Second Transformation: Swap

The transformation (for 1 ≤ k < n): vk ↔ v_{k+1} (all other vi are unchanged).

The important consequence: the new vk* becomes v_{k+1}* + μ_{k+1,k}·vk*.

If we use this transformation for a k which satisfies ¾·||vk*||² > ||v_{k+1}* + μ_{k+1,k}·vk*||², we reduce the value of ||vk*||² to less than ¾ of its previous value.

Page 51: Lattices

Algorithm for Basis Reduction

1. Use the reduction transformation to obtain |μ_{i,j}| ≤ ½ for all i > j.
2. Apply the swap transformation for some k which satisfies ¾·||vk*||² > ||v_{k+1}* + μ_{k+1,k}·vk*||², and return to step 1.

Stop if there is no such k.
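Combining the two transformations as above gives the LLL algorithm; the following is a compact, unoptimized Python sketch of it (it recomputes the Gram-Schmidt data from scratch after every change, which a real implementation would avoid):

```python
# Unoptimized LLL-style basis reduction following the slides: size-reduce so
# every |mu_ij| <= 1/2, then swap whenever 3/4*||v_k*||^2 > ||v_{k+1}* + mu*v_k*||^2.
import numpy as np

def lll_reduce(basis, delta=0.75):
    B = [np.asarray(v, dtype=float) for v in basis]
    n = len(B)

    def gram_schmidt():
        Bs, mu = [], np.zeros((n, n))
        for i in range(n):
            v = B[i].copy()
            for j in range(i):
                mu[i, j] = np.dot(B[i], Bs[j]) / np.dot(Bs[j], Bs[j])
                v -= mu[i, j] * Bs[j]
            Bs.append(v)
        return Bs, mu

    Bs, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):               # reduction transformation
            if abs(mu[k, j]) > 0.5:
                B[k] = B[k] - round(mu[k, j]) * B[j]
                Bs, mu = gram_schmidt()
        if delta * np.dot(Bs[k - 1], Bs[k - 1]) > \
           np.dot(Bs[k], Bs[k]) + mu[k, k - 1] ** 2 * np.dot(Bs[k - 1], Bs[k - 1]):
            B[k - 1], B[k] = B[k], B[k - 1]           # swap transformation
            Bs, mu = gram_schmidt()
            k = max(k - 1, 1)
        else:
            k += 1
    return B

print(lll_reduce([[201.0, 37.0], [1648.0, 297.0]]))
```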

Page 52: Lattices

Termination

Given a basis {v1,...,vn}, vi ∈ Z^n for i = 1,...,n, of a lattice, define:

di = ||v1*||² · ... · ||vi*||²
D = d1 · ... · d_{n-1}

di is the square of the determinant of the rank-i lattice spanned by v1,...,vi. Thus D ∈ N.

Page 53: Lattices

Termination

How do changes in the basis affect D?

The reduction transformation doesn't change the vj*'s, and therefore does not affect D.

Suppose we apply the swap transformation at index i. For all j ≠ i, dj is not affected by the swap, and for all j < i, vj* is unchanged; only di changes. Thus D is reduced by a factor < ¾.

Page 54: Lattices

Polynomial-Time

Since D ∈ N and its value decreases by a constant factor with every iteration, the algorithm necessarily terminates.

Note that D's initial value is poly-time computable (so it has polynomially many bits), which implies that the total number of iterations is also polynomial.

It is also true that each iteration takes polynomial time. The proof is omitted here.

Page 55: Lattices

Summary

We have seen that a reduced basis gives us a 2^((n-1)/2) approximation for SVP.

We have also presented a polynomial-time algorithm for the construction of such a basis ([LLL82]).

Page 56: Lattices
Page 57: Lattices

Hardness of Approx. SVP [Micc]

GapSVP_g:
Input: (B,d) where B is a basis for a lattice in R^n and d ∈ R.
Yes instances: (B,d) s.t. ∃z ∈ Z^n, z ≠ 0: ||Bz|| ≤ d.
No instances: (B,d) s.t. ∀z ∈ Z^n, z ≠ 0: ||Bz|| > g·d.

GapCVP'_g:
Input: (B,y,d) where B ∈ Z^(k×n), y ∈ Z^k, and d ∈ R.
Yes instances: (B,y,d) s.t. ∃z ∈ {0,1}^n: ||Bz − y|| ≤ d.
No instances: (B,y,d) s.t. ∀z ∈ Z^n, ∀i ∈ Z, i ≠ 0: ||Bz − i·y|| > g·d.

Page 58: Lattices

Reducing CVP to SVP

We will use the fact that GapCVP'_c is NP-hard for every constant c, and give a reduction from GapCVP'_{2/ε} to GapSVP_{√(2/(1+ε²))}, for every ε > 0.

Page 59: Lattices

A Robust Lattice: No Short Vectors

[Figure: the basis matrix L, built from the m smallest prime numbers p1,...,pm; it consists of an m×m diagonal block D with diagonal entries √(ln p1),...,√(ln pm) and a bottom row P = (ln p1,...,ln pm).]

Page 60: Lattices

A Robust Lattice: No Short Vectors

Lemma: ∀z ∈ Z^m, z ≠ 0: ||Lz||² ≥ ln 2.

Proof: Let z ∈ Z^m be a non-zero vector. Define

g+ = Π_{zi>0} pi^{zi},  g− = Π_{zi<0} pi^{−zi},  and g = g+·g−.

g+ and g− are integers, and z ≠ 0 implies g+ ≠ g−, hence |g+ − g−| ≥ 1. Moreover

Σ_i |zi|·ln pi = ln g  and  ||Dz||² = Σ_i zi²·ln pi ≥ ln g.

Page 61: Lattices

A Robust Lattice: No Short Vectors (2)

Proof (cont.): The last coordinate of Lz is Pz = Σ_i zi·ln pi = ln(g+) − ln(g−), so

||Lz||² = ||Dz||² + (Pz)² ≥ ln g + ln²(1 + 1/min(g+,g−)) ≥ ln g + ln²(1 + 1/√g),

which, as a function of g ≥ 2, is convex and attains its minimum at g = 2 (the smallest possible value of g for z ≠ 0); evaluating at g = 2 gives the bound claimed in the lemma.

Page 62: Lattices

A Robust Lattice: Many Close Vectors

[Figure: the same lattice basis as above, together with the target vector s = (0,...,0, ln b), shown as an appended column −s = (0,...,0, −ln b).]

Page 63: Lattices

A Robust Lattice: Many Close Vectors

Lemma: For z ∈ {0,1}^m, if g = Π_{i=1}^m pi^{zi} lies in the interval [b, b+√b], then ||Lz − s||² ≤ ln(2b).

Proof:
||Lz − s||² = ||Dz||² + (Pz − ln b)², where ||Dz||² = Σ_i zi·ln pi = ln g and Pz − ln b = ln(g/b) ≤ ln(1 + 1/√b).
Hence ||Lz − s||² ≤ ln(b + √b) + ln²(1 + 1/√b) ≤ ln(2b).

Page 64: Lattices

Scheme

Page 65: Lattices

The End Result

[Figure: the SVP instance. The basis V stacks the robust lattice L (with the extra column −s) on top of the scaled CVP instance, the matrix product B·C (with the extra column −y). A short vector in this lattice must combine a vector of L close to s with a lattice point of B close to y.]

Page 66: Lattices

The End Result

Lemma: Let Z ⊆ {0,1}^m be a set of vectors, each containing exactly n 1's. If |Z| is large enough and C ∈ {0,1}^(k×m) is chosen by setting each entry independently at random with a suitably small probability p, then with probability arbitrarily close to 1, ∀x ∈ {0,1}^k ∃z ∈ Z: Cz = x.

Page 67: Lattices

The End Result (2)

[The same figure as above.]

Page 68: Lattices

The End Result (2)

Lemma: For any constant ε > 0 there exists a probabilistic polynomial-time algorithm that on input 1^k computes a lattice L ⊆ R^((m+1)×m), a vector s ∈ R^(m+1) and a matrix C ∈ Z^(k×m) s.t. with probability arbitrarily close to 1:

1. ∀z ∈ Z^m, z ≠ 0: ||Lz||² ≥ 2.
2. ∀x ∈ {0,1}^k ∃z ∈ Z^m s.t. Cz = x and ||Lz − s||² ≤ 1.

Page 69: Lattices

The End Result (3)

Proof: Let ε be a constant, 0 < ε < 1, let k be a sufficiently large integer, and let m = k^(4/ε+1). Let Z be the set of vectors z ∈ {0,1}^m containing exactly n 1's such that the product Π_i pi^{zi} falls in a prescribed short interval just above b. By the close-vectors lemma, every such z satisfies ||Lz − s||² ≤ ln(2b), while by the no-short-vectors lemma every non-zero z ∈ Z^m has ||Lz||² bounded below; rescaling the lattice yields the two conditions of the previous lemma.

Page 70: Lattices

The End Result (4)

Proof (cont.): Let S be the set of all products of n distinct primes ≤ pm; then |S| = |Z|. By the prime number theorem, for all 1 ≤ i ≤ m, pi < 2m·ln m = M, so S ⊆ [1, M^n]. Divide [1, M^n] into intervals of the form [b, b+b'] for a suitable b' ≪ b. There are not too many such intervals, so on average each interval contains many elements of S.

Page 71: Lattices

The End Result (5)

Proof (cont.): Choose a random element of S and select the interval containing it. The probability that this interval contains fewer than the required number of elements of S is at most O((ln m / 2)^(−n)) < 2^(−n). So for all sufficiently large k we can assume |Z| is large enough for the previous lemma, and therefore, with probability arbitrarily close to 1, ∀x ∈ {0,1}^k ∃z ∈ Z: Cz = x.

Page 72: Lattices

Sum Up

Theorem: The shortest vector in a lattice is NP-hard to approximate to within any constant factor less than √2.

Proof: The proof is by reduction from GapCVP'_c to GapSVP_g, where c = 2/ε and g = √(2/(1+ε²)). Let (B,y,d) be an instance of GapCVP'_c. We define an instance (V,t) of GapSVP_g s.t.
– (B,y,d) is a Yes instance => (V,t) is a Yes instance.
– (B,y,d) is a No instance => (V,t) is a No instance.
Let L, s, C and V be as defined above, where α = ε/d, and let t = √(1+ε²).

Page 73: Lattices

Completeness

Proof (cont.): Assume that (B,y,d) is a Yes instance of GapCVP'_c, i.e. ∃x ∈ {0,1}^k with ||Bx − y|| ≤ d.

From the previous lemma, ∃z ∈ Z^m s.t. Cz = x and ||Lz − s||² ≤ 1.

Define the vector u = (z, 1). Then

||Vu||² = ||Lz − s||² + α²·||Bx − y||² ≤ 1 + ε² = t²

=> (V,t) is a Yes instance of GapSVP_g.

Page 74: Lattices

Soundness

Proof (cont.): Assume that (B,y,d) is a No instance of GapCVP'_c, and let u = (z, w) ∈ Z^(m+1), u ≠ 0, with x = Cz. Then

||Vu||² = ||Lz − w·s||² + α²·||Bx − w·y||².

If w = 0 then z ≠ 0, and ||Vu||² ≥ ||Lz||² ≥ 2.
If w ≠ 0 then ||Bx − w·y||² > c²·d², so ||Vu||² ≥ α²·c²·d² = ε²·(2/ε)² = 4 ≥ 2.

In both cases ||Vu||² ≥ 2 = g²·t² => (V,t) is a No instance of GapSVP_g.

Page 75: Lattices
Page 76: Lattices

Ajtai: SVP Instances Hard on Average

[Figure: worst-case to average-case reductions. Approximating SVP (factor n^c) on random instances drawn from a specific constructible distribution is at least as hard as each of the following worst-case problems: finding the unique shortest vector (unique-SVP), approximating SVP to within factor n^(10+c), and approximating the shortest basis to within factor n^(10+c).]

Page 77: Lattices

Average-Case Distribution

Pick an n×m matrix A, with coefficients uniformly ranging over [0,...,q−1], where m = c1·n·log n and q = n^c2. The columns of A are v1, v2, ..., vm.

Def: Λ(A) = { x ∈ Z^m | A·x ≡ 0 (mod q) }

Denote by Λ'_{n,c1,c2} the resulting distribution over lattices Λ(A).
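A sketch of sampling such a lattice and testing membership; the parameters below are toy values rather than the m = c1·n·log n and q = n^c2 of the construction.

```python
# Sample a random mod-q lattice Lambda(A) = {x in Z^m : A x = 0 (mod q)}
# and test membership.  Toy parameters; Ajtai's construction takes
# m ~ c1 * n * log n and q ~ n^{c2}.
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 4, 12, 17
A = rng.integers(0, q, size=(n, m))

def in_lattice(x):
    return np.all((A @ np.asarray(x)) % q == 0)

print(in_lattice(np.zeros(m, dtype=int)))       # True: 0 is always in Lambda(A)
print(in_lattice(q * np.eye(m, dtype=int)[0]))  # True: q*e_1 is in Lambda(A)
print(in_lattice(rng.integers(0, q, size=m)))   # almost certainly False
```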

Page 78: Lattices

A mod-q lattice: Λ(v1 v2 v3 v4)

[Figure: the columns v1,...,v4 drawn inside the q×q square. Coefficient vectors whose combination wraps to 0 modulo q, such as (2,0,0,1) for 2v1 + v4 and (1,1,1,0), belong to the lattice; every vector of the form q·(a,b,c,d) is in the lattice as well.]

Page 79: Lattices

A Lattice with a Short Vector

For the distribution Λ_{n,c1,c2} (which generates a lattice with a short vector), only v1,...,v_{m-1} are chosen randomly. vm is generated by choosing δ1,...,δ_{m-1} ∈_R {0,1} and setting

vm = Σ_{i=1}^{m-1} δi·vi (mod q),

so that (δ1,...,δ_{m-1},−1) is a short vector of the resulting lattice.

Lemma: For a sufficiently large m, the distribution Λ_{n,c1,c2} is exponentially close to the distribution Λ'_{n,c1,c2}.

Page 80: Lattices

sh & bl

Def: sh(L) – the length of the shortest vector in L.

Def: Let b1,...,bn ∈ L. The length of {b1,...,bn} is defined by ||{b1,...,bn}|| = max_{1≤i≤n} ||bi||.

Def: bl(L) – the length of the shortest basis of L.

Page 81: Lattices

SVP & BL

Def: SVP_f(Λ_{n,c1,c2}) – for at least ½ of the lattices L drawn from Λ_{n,c1,c2}, find a non-zero vector of length at most f(n)·sh(L).

Def: The problem BL_f(L) – find a basis of L of length at most f(n)·bl(L).

Page 82: Lattices

BL ≤ SVP(Λ)

Thm: There are constants c1, c2, c3 such that BL_{n^c3}(L) ≤ SVP(Λ_{n,c1,c2}).

Proof: Assuming a procedure for SVP(Λ), we construct a poly-time algorithm that finds a short basis for any lattice L.

Lemma: Given a set of linearly independent elements r1,...,rn ∈ L, we can construct, in polynomial time, a basis s1,...,sn of L such that ||{s1,...,sn}|| ≤ n·||{r1,...,rn}||.

Page 83: Lattices

Halving M

Let a1,...,an ∈ L be a set of linearly independent elements, and let M = ||{a1,...,an}||.
The previous lemma shows that if M ≤ n^(c3−1)·bl(L), we can find a short basis.

In case M > n^(c3−1)·bl(L), we will construct another set of linearly independent elements b1,...,bn ∈ L such that ||{b1,...,bn}|| ≤ M/2.
Iterating this process for log M steps, we can find a set of linearly independent c1,...,cn ∈ L such that ||{c1,...,cn}|| ≤ n^(c3−1)·bl(L).

Page 84: Lattices

From the linearly independent a1,...,an we obtain linearly independent f1,...,fn ∈ L s.t. ||{f1,...,fn}|| ≤ n³·M and W = P(f1,...,fn) (the parallelepiped they span) is almost a cube [i.e. the distance of each vertex of W from the vertices of a fixed cube is at most n·M].

Now we cut W into q^n parallelepipeds.

Page 85: Lattices

We take a random sequence of lattice points ξ1,...,ξm, and find for each 1 ≤ i ≤ m the small parallelepiped that contains ξi. Let vi be the corner of the parallelepiped that contains ξi. v1,...,vm define a lattice from the distribution Λ'_{n,c1,c2}.

Therefore, we can find, with probability ½, a vector h ∈ Z^m s.t. ||h|| ≤ n and Σ_{i=1}^m hi·vi = 0 (i.e. h is a short vector of that mod-q lattice).

Page 86: Lattices

Proposition: With positive probability, u = Σ_{i=1}^m hi·ξi satisfies u ≠ 0 and ||u|| ≤ M/2.

Page 87: Lattices

SVP ≤ BL

Lemma: There is an absolute constant c such that 1 ≤ sh(L*)·bl(L) ≤ c·n².

Theorem: SVP_{n^c3}(L) ≤ BL_{n^c3}(L*).

Proof: If we can get an estimate on bl(L*), then by the above lemma we can obtain an estimate on sh((L*)*) = sh(L).

Page 88: Lattices
Page 89: Lattices

Hardness of Approx. CVP [DKRS]

g-CVP is NP-hard for g = n^(1/log log n) (n – lattice dimension).

Improvements:
– hardness (NP-hardness instead of quasi-NP-hardness)
– non-approximation factor (up from 2^((log n)^(1-ε)))

Page 90: Lattices

[ABSS] reduction: uses PCP to show
– NP-hardness for g = O(1)
– quasi-NP-hardness for g = 2^((log n)^(1-ε)) by repeated blow-up.
Barrier: 2^((log n)^(1-ε)) for constant ε > 0.

SSAT: a new non-PCP characterization of NP, NP-hard to approximate to within g = n^(1/log log n).

Page 91: Lattices

SAT

Input: Φ = {f1,...,fn} Boolean functions ('tests') over variables x1,...,xn' with range {0,1}.
Problem: Is Φ satisfiable?

Thm (Cook-Levin): SAT is NP-complete (even when depend(Φ) = 3).

Page 92: Lattices

SAT as a Consistency Problem

Input: Φ = {f1,...,fn} Boolean functions ('tests') over variables x1,...,xn' with range R; for each test, a list of satisfying assignments.
Problem: Is there an assignment to the tests that is consistent?

[Figure: three tests f(x,y,z), g(w,x,z), h(y,w,x), each with its own list of satisfying assignments, e.g. (0,2,7), (2,3,7), (3,1,1); (1,0,7), (1,3,1), (3,2,2); (0,1,0), (2,1,0), (2,1,5).]

Page 93: Lattices

Super-Assignments

A natural assignment for f(x,y,z) picks a single satisfying assignment, e.g. A(f) = (3,1,1) (coefficient 1 on (3,1,1), coefficient 0 on all other satisfying assignments).

A super-assignment for f(x,y,z) assigns an integer coefficient to every satisfying assignment, e.g.

SA(f) = −2·(3,1,1) + 2·(3,2,5) + 3·(5,1,2).

||SA(f)|| = |−2| + |2| + |3| = 7.   Norm of SA = average over the tests f of ||SA(f)||.

Page 94: Lattices

Consistency

In the SAT case, an assignment A is consistent if for every variable x and every two tests f, g that depend on x: A(f)|x = A(g)|x.

Example: if A(f) = (3,2,5) then A(f)|x := (3), the restriction of the chosen assignment to the variable x.

Page 95: Lattices

Consistency

For a super-assignment, the restriction to a variable x is obtained by projecting each satisfying assignment onto x and summing coefficients. For

SA(f) = +3·(1,1,2) − 2·(3,2,5) + 2·(3,3,1):
SA(f)|x = +3·(1) + (−2+2)·(3) = +3·(1) + 0·(3).

Consistency: for every variable x and every two tests f, g that depend on x: SA(f)|x = SA(g)|x.

Page 96: Lattices

g-SSAT - Definition

Input: Φ = {f1,...,fn}, tests over variables x1,...,xn' with range R; for each test fi, a list of satisfying assignments.
Problem: Distinguish between
[Yes] there is a natural assignment for Φ
[No] any non-trivial consistent super-assignment is of norm > g.

Theorem: SSAT is NP-hard for g = n^(1/log log n).
(Conjecture: g = n^ε for some constant ε.)

Page 97: Lattices

SSAT is NP-hard to approximate to within g = n^(1/log log n)

Page 98: Lattices

Reducing SSAT to CVP

[Figure: the CVP matrix built from an SSAT instance. There is a lattice column for each pair (test, satisfying assignment), e.g. (f,(1,2)) and (f',(3,2)); blocks of large weight w ensure that each test receives exactly one (super-)assignment, and shared-variable blocks (labelled f, f', x) enforce consistency between tests that share a variable. The target vector has the weight-w entries set and 0 elsewhere.]

Yes --> Yes: dist(L, target) = √n
No --> No: dist(L, target) > g·√n
Choose w = g·√n + 1.

Page 99: Lattices

A Consistency Gadget

[Figure: one block of the consistency gadget, made of rows of weight-w entries and 0/1 pattern rows.]

Page 100: Lattices

A Consistency Gadget

[Figure: the full consistency gadget over variables a1, a2, a3, b1, b2, b3; besides the weight-w rows it encodes the constraints a1 + a2 + a3 = 1, a1 + a2 + b3 = 1, a1 + a3 + b2 = 1, and a2 + a3 + b1 = 1.]