TRANSCRIPT
UBC March 2007 1
The Evergreen Project: The Promise of Polynomials to Boost CSP/SAT Solvers*
Karl J. Lieberherr
Northeastern University
Boston
joint work with Ahmed Abdelmeged, Christine Hang and Daniel Rinehart
Title inspired by a paper by Carla Gomes / David Shmoys
Two objectives
• I want you to become
– a better writer of MAX-SAT/MAX-CSP solvers
• better decision making
• crosscutting exploration of the search space
– a better player of the Evergreen game
• the game reveals what the polynomials can do
• iterated game built on a base zero-sum game:
– Together: choose the domain
– Anna: choose an instance (minimizing the maximum possible loss)
– Bob: solve the instance; Anna pays Bob the satisfaction fraction
• perfect-information game
Introduction
• Boolean MAX-CSP(G) for rank d, G = set of relations of rank d
– Input
• Input = bag of constraints = CSP(G) instance
• Constraint = relation + set of variables
• Relation = int // relation number < 2^(2^d) in G
• Variable = int
– Output
• (0,1) assignment to the variables which maximizes the number of satisfied constraints
• Example input: G = {22} of rank 3. H =
– 22: 1 2 3 0
– 22: 1 2 4 0
– 22: 1 3 4 0
1in3 has number 22. M = {1 !2 !3 !4} satisfies all constraints.
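The relation-number encoding above can be checked mechanically. This is a minimal sketch (not the talk's code); the function name `satisfies` and the bit ordering (first listed variable as high bit of the truth-table index) are assumptions consistent with the "standard sorted order 000 – 111" convention on a later slide:

```python
# A relation of rank d is an integer whose bit i is the truth-table entry
# for the variable pattern i (patterns 000..111 for rank 3).

def satisfies(relation, variables, assignment):
    """True iff the constraint `relation : variables` holds under assignment."""
    idx = 0
    for v in variables:
        idx = (idx << 1) | assignment[v]
    return (relation >> idx) & 1 == 1

ONE_IN_THREE = 22  # binary 00010110: true exactly for patterns 001, 010, 100

H = [(ONE_IN_THREE, [1, 2, 3]),
     (ONE_IN_THREE, [1, 2, 4]),
     (ONE_IN_THREE, [1, 3, 4])]
M = {1: 1, 2: 0, 3: 0, 4: 0}  # the assignment {1 !2 !3 !4}

print(all(satisfies(r, vs, M) for r, vs in H))  # True: M satisfies all of H
```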
Variation
MAX-CSP(G,f): Given a CSP(G) instance H expressed in n variables which may assume only the values 0 or 1, find an assignment to the n variables which satisfies at least the fraction f of the constraints in H.
Example: G = {22} of rank 3. MAX-CSP({22},f): H =
22: 1 2 3 0
22: 1 2 4 0
22: 1 3 4 0
22: 2 3 4 0
1in3 has number 22. What is the highest value of f for which H is in MAX-CSP({22},f)?
Evergreen(3,2) game
Anna and Bob agree on a protocol P1 to choose a set G of 2 relations of rank 3.
– Anna chooses a CSP(G)-instance H (limited).
– Bob solves H and gets paid by Anna the fraction that he satisfies in H.
• This gives nice control to Anna: she will choose an instance that minimizes Bob's profit.
– Take turns.
[Diagram: R1 (Anna) vs. R2 (Bob); mixing axis from 100% R1, 0% R2 to 100% R2, 0% R1.]
Protocol choice
• Randomly choose R1 and R2 (independently) between 1 and 255 (throw two dice to choose the relations).
Tell me
• How would you react as Anna?
– The relations 22 and 22 have been chosen.
– You must create a CSP({22}) instance with 1000 variables in which only the smallest possible fraction can be satisfied.
– What kind of instance will this be?
• What kind of algorithm should Bob use to maximize his payoff?
• Should any MAX-CSP solver be able to maximize Bob's profit? How well do MAX-SAT solvers (e.g., yices, ubcsat) or MAX-CSP solvers do on symmetric instances?
Game strategy in a nutshell
• Choose G = {R1, R2} randomly.
• Anna chooses an instance so that the payoff is minimized.
• Bob finds a solution so that the payoff is maximized (solve MAX-CSP(G)).
• Take turns: choose G = …, Bob chooses …
• Requires a thorough understanding of the MAX-CSP(G) problem domain, and an excellent MAX-CSP(G) solver.
Our approach by Example: SAT Rank 2 example
H =
14: 1 2 0
14: 3 4 0
14: 5 6 0
7: 1 3 0
7: 1 5 0
7: 3 5 0
7: 2 4 0
7: 2 6 0
7: 4 6 0
14: 1 2 = or(1 2); 7: 1 3 = or(!1 !3)
Evergreen game: maximize the payoff; find a maximum assignment.
appmean = approximation of the mean (k variables true)
Blurry vision
What do we learn from the abstract representation absH?
• set 1/3 of the variables to true (to maximize).
• the best assignment will satisfy at least 7/9 of the constraints.
• very useful, but the vision is blurry in the “middle”.
• excellent peripheral vision.
[Figure: absH over k = 0..6 variables set to true, with the values 7/9 and 8/9 marked.]
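The 7/9 figure can be reproduced by averaging over random assignments with bias p. A hedged sketch (not the talk's code), using the nine-constraint rank-2 instance above and the standard satisfaction probabilities for or-clauses:

```python
# Expected fraction of satisfied constraints of H under coin bias p.
# H: three constraints or(x,y) (relation 14) and six or(!x,!y) (relation 7).

def lookahead(p):
    sat_or_pos = 1 - (1 - p) ** 2   # P[or(x, y) satisfied], vars true w.p. p
    sat_or_neg = 1 - p ** 2         # P[or(!x, !y) satisfied]
    return (3 * sat_or_pos + 6 * sat_or_neg) / 9.0

print(lookahead(1 / 3))  # 0.777... = 7/9: bias 1/3 guarantees 7/9 on average
```

Since the average over assignments with bias 1/3 is 7/9, some assignment must satisfy at least 7/9 of the constraints.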
Our approach by Example
• Given a CSP(G)-instance H and an assignment N which satisfies the fraction f in H:
– Is there an assignment that satisfies more than f?
• YES (we are done): absH(mb) > f.
• MAYBE: the closer absH(mb) comes to f, the better.
– Is it worthwhile to set a certain literal k to 1 so that we can reach an assignment which satisfies more than f?
• YES (we are done): H1 = H with k = 1, absH1(mb1) > f.
• MAYBE: the closer absH1(mb1) comes to f, the better.
• NO: UP or clause learning.
absH = abstract representation of H; mb = maximum bias.
H =
14: 1 2 0  14: 3 4 0  14: 5 6 0  7: 1 3 0  7: 1 5 0  7: 3 5 0  7: 2 4 0  7: 2 6 0  7: 4 6 0
H0 (H with variable 1 set to 0) =
14: 2 0  14: 3 4 0  14: 5 6 0  7: 1 3 0  7: 1 5 0  7: 3 5 0  7: 2 4 0  7: 2 6 0  7: 4 6 0
[Figure: abstract representations of H (over k = 0..6) and H0 (over k = 0..5); marked values 8/9 and 7/9 for H, and 6/7 = 8/9, 5/7 = 7/9, 3/7 = 5/9, 3/9 for H0. Maximum assignment away from max bias: blurry.]
H =
14: 1 2 0  14: 3 4 0  14: 5 6 0  7: 1 3 0  7: 1 5 0  7: 3 5 0  7: 2 4 0  7: 2 6 0  7: 4 6 0
H1 (H with variable 1 set to 1) =
14: 1 2 0  14: 3 4 0  14: 5 6 0  7: 3 0  7: 5 0  7: 3 5 0  7: 2 4 0  7: 2 6 0  7: 4 6 0
[Figure: abstract representations of H (over k = 0..6) and H1 (over k = 0..5); marked values 8/9 and 7/9 for H, and 7/8 = 8/9, 6/8 = 7/9, 3/8, 2/7 = 3/8 for H1, clearly above 3/4. Maximum assignment away from max bias: blurry.]
H =
14: 1 2 0  14: 3 4 0  14: 5 6 0  7: 1 3 0  7: 1 5 0  7: 3 5 0  7: 2 4 0  7: 2 6 0  7: 4 6 0
[Figure: the abstract representation guarantees 7/9 for H (values 8/9, 7/9), 7/9 for H0 (6/7 = 8/9, 5/7 = 7/9), and 8/9 for H1 (7/8 = 8/9, 6/8 = 7/9). The guarantee NEVER GOES DOWN: DERANDOMIZATION. Compare UBCSAT.]
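The "never goes down" property is the classic method of conditional expectations. A sketch (an illustration, not SPOT itself): fix variables one at a time, always choosing the value whose conditional expectation is at least as large, so the expected number of satisfied clauses never decreases below the initial 7:

```python
# Derandomization by conditional expectations on the rank-2 example.
# Literals: +v for v, -v for !v.

H = [[1, 2], [3, 4], [5, 6],                  # or(x, y)    (relation 14)
     [-1, -3], [-1, -5], [-3, -5],
     [-2, -4], [-2, -6], [-4, -6]]            # or(!x, !y)  (relation 7)

def expected_sat(clauses, fixed, bias):
    """Expected #satisfied clauses; unfixed variables are true w.p. bias."""
    total = 0.0
    for clause in clauses:
        p_unsat = 1.0
        for lit in clause:
            v = abs(lit)
            p_true = fixed[v] if v in fixed else bias
            p_unsat *= (1 - p_true) if lit > 0 else p_true
        total += 1 - p_unsat
    return total

def derandomize(clauses, n, bias):
    fixed = {}
    for v in range(1, n + 1):
        # keep the branch whose conditional expectation is at least as large
        e1 = expected_sat(clauses, {**fixed, v: 1}, bias)
        e0 = expected_sat(clauses, {**fixed, v: 0}, bias)
        fixed[v] = 1 if e1 >= e0 else 0
    return fixed

M = derandomize(H, 6, 1 / 3)
sat = sum(any((l > 0) == (M[abs(l)] == 1) for l in c) for c in H)
print(sat)  # at least 7 of 9: the initial expectation 7 never goes down
```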
The effect of n-map
rank 2: 10: 1 = or(1); 7: 1 2 = or(!1 !2)
10: 1 0  10: 2 0  10: 3 0  7: 1 2 0  7: 1 3 0  7: 2 3 0
rank 2, after an n-map of variable 1: 5: 1 = or(!1); 13: 1 2 = or(1 !2)
5: 1 0  10: 2 0  10: 3 0  13: 1 2 0  13: 1 3 0  7: 2 3 0
The abstract representation guarantees 0.625 * 6 = 3.75, hence 4 satisfied constraints.
[Figure: abstract representations over k = 0..3, with the values 4/6 and 3/6 marked.]
Evergreen game: G = {10, 7}. How do you choose a CSP(G)-instance to minimize the payoff? 0.618 …
First Impression
• The abstract representation (the look-ahead polynomials) seems useful for guiding the search.
• The look-ahead polynomials give us averages: the guidance can be misleading because of outliers.
• But how can we compute the look-ahead polynomials? How do the polynomials help play the Evergreen(3,2) game?
Where we are
• Introduction
• Look-forward
• Look-backward
• SPOT: how to use the look-ahead polynomials together with superresolution.
Look Forward
• Why?
– To make informed decisions
– To play the Evergreen game
• How?
– Abstract representation based on look-ahead polynomials
Look-ahead Polynomial(Intuition)
• The look-ahead polynomial computes the expected fraction of satisfied constraints among all random assignments that are produced with bias p.
Consider an instance: 40 variables, 1000 constraints (1in3)
Variables 1, …, 40; constraints such as:
22: 6 7 9 0
22: 12 27 38 0
Abstract representation: reduce the instance to the look-ahead polynomial 3p(1-p)^2 = B1,3(p) (a Bernstein polynomial).
[Figure: the look-ahead polynomial 3p(1-p)^2 for MAX-CSP({22}). x-axis: coin bias (probability of setting a variable to true), 0.0–1.0; y-axis: fraction of constraints that are guaranteed to be satisfied, 0–0.5.]
Look-ahead Polynomial(Definition)
• H is a CSP(G) instance.
• N is an arbitrary assignment.
• The look-ahead polynomial laH,N(p) computes the expected fraction of satisfied constraints of H when each variable in N is flipped with probability p.
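The definition can be sketched directly for the rank-2 example instance and the assignment N = {1 !2 !3 4 5 !6} that appears later in the talk. This is an illustrative implementation, not the talk's code; at p = 0 nothing is flipped, so la(0) is exactly the fraction N satisfies:

```python
# la_{H,N}(p): expected fraction of satisfied constraints when each variable
# of assignment N is flipped independently with probability p.

H = [[1, 2], [3, 4], [5, 6],
     [-1, -3], [-1, -5], [-3, -5],
     [-2, -4], [-2, -6], [-4, -6]]
N = {1: 1, 2: 0, 3: 0, 4: 1, 5: 1, 6: 0}   # {1 !2 !3 4 5 !6}

def la(p):
    total = 0.0
    for clause in H:
        p_unsat = 1.0
        for lit in clause:
            cur = N[abs(lit)] if lit > 0 else 1 - N[abs(lit)]
            # a currently-true literal stays true w.p. 1-p, else becomes true w.p. p
            p_true = (1 - p) if cur else p
            p_unsat *= 1 - p_true
        total += 1 - p_unsat
    return total / len(H)

print(la(0.0))  # 0.888... = 8/9, the fraction N satisfies (unsat = 1/9)
```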
The general case MAX-CSP(G)
G = {R1, …}; tR(F) = fraction of constraints in F that use R.
With x = p, the weighted sum of the appSATR(x) over all R gives the look-ahead polynomial; the appSATR polynomials form a superset of the Bernstein polynomials (known from computer graphics, where curves are weighted sums of Bernstein polynomials).
Rational Bezier Curves
Bernstein Polynomials
http://graphics.idav.ucdavis.edu/education/CAGDNotes/Bernstein-Polynomials.pdf
all the appSATR(x) polynomials
Look-ahead Polynomial in Action
• Focus on the purely mathematical question first.
• An algorithmic solution will follow.
• Mathematical question: Given a CSP(G) instance, for which fractions f is there always an assignment satisfying the fraction f of the constraints? In which constraint systems is it impossible to satisfy many constraints?
Remember?
MAX-CSP(G,f): Given a CSP(G) instance H expressed in n variables which may assume only the values 0 or 1, find an assignment to the n variables which satisfies at least the fraction f of the constraints in H.
Example: G = {22} of rank 3. MAX-CSP({22},f): H =
22: 1 2 3 0
22: 1 2 4 0
22: 1 3 4 0
22: 2 3 4 0
Mathematical Critical Transition Point
MAX-CSP({22},f):
For f ≤ u: the problem always has a solution.
For f ≥ u + ε: the problem does not always have a solution.
u is the critical transition point.
always (fluid) / not always (solid)
The Magic Number
• u = 4/9
[Figure (repeated): the look-ahead polynomial 3p(1-p)^2 for MAX-CSP({22}). x-axis: coin bias (probability of setting a variable to true); y-axis: fraction of constraints that are guaranteed to be satisfied.]
Produce the Magic Number
• Use an optimally biased coin: bias 1/3 in this case.
• In general: a min-max problem.
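For {22} the magic number u = 4/9 is just the maximum of the look-ahead polynomial 3p(1-p)^2; a small numeric sketch (grid search standing in for the calculus done later in the talk):

```python
# Find the optimally biased coin for MAX-CSP({22}) by maximizing the
# look-ahead polynomial 3p(1-p)^2 on a grid (closed form: p = 1/3, u = 4/9).

poly = lambda p: 3 * p * (1 - p) ** 2

best_p = max((i / 10000 for i in range(10001)), key=poly)
print(round(best_p, 3), round(poly(best_p), 4))  # 0.333 0.4444
```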
The 22 reductions: needed for implementation
[Diagram: reducing relation 22 by setting a variable, labeled with (position, value) pairs for positions 1–3 and values 0/1, yields the relations 60, 3, 240, 15, 255 and 0. 22 is expanded into 6 additional relations.]
The 22 N-Mappings: needed for implementation
[Diagram: applying n-maps (flipping variables at positions 0, 1, 2) to relation 22 yields the relations 41, 73, 134, 97, 146, 148 and 104. 22 is expanded into 7 additional relations.]
General Dichotomy Theorem
MAX-CSP(G,f): For each finite set G of relations closed under renaming there exists an algebraic number tG:
For f ≤ tG: MAX-CSP(G,f) has a polynomial solution.
For f ≥ tG + ε: MAX-CSP(G,f) is NP-complete.
tG is the critical transition point: easy (fluid), polynomial, below; hard (solid), NP-complete, above.
Due to Lieberherr/Specker (1979, 1982).
Polynomial solution: use the optimally biased coin; derandomize. P-optimal.
Implications for the Evergreen game? Are you a better player?
Context
• Ladner [Lad 75]: if P != NP, then there are decision problems in NP that are neither NP-complete nor in P.
• It is conceivable that MAX-CSP(G,f) contains problems of such intermediate complexity.
General Dichotomy Theorem (Discussion)
MAX-CSP(G,f): For each finite set G of relations closed under renaming there exists an algebraic number tG:
For f ≤ tG: MAX-CSP(G,f) has a polynomial solution.
For f ≥ tG + ε: MAX-CSP(G,f) is NP-complete.
tG is the critical transition point.
Easy (fluid): polynomial (finding an assignment); constant proofs (done statically using look-ahead polynomials); no clause learning.
Hard (solid): NP-complete; exponential, super-polynomial proofs?; relies on clause learning.
min max problem
tG = min over all CSP(G)-instances H of max over all (0,1) assignments M of sat(H,M)
sat(H,M) = fraction of constraints in CSP(G)-instance H satisfied by assignment M
Problem reductions are the key
• Solution to simpler problem implies solution to original problem.
min max problem
tG = lim (n to infinity) of min over all SYMMETRIC CSP(G)-instances H with n variables of max over all (0,1) assignments M to n variables of sat(H,M,n)
sat(H,M,n) = fraction of constraints in CSP(G)-instance H with n variables satisfied by assignment M
Reduction achieved
• Instead of minimizing over all constraint systems it is sufficient to minimize over the symmetric constraint systems.
Reduction
• Symmetric case is the worst-case: If in a symmetric constraint system the fraction f of constraints can be satisfied, then in any constraint system the fraction f can be satisfied.
Symmetric is the worst case
n variables, n! permutations.
If in the big (symmetrized) system the fraction f is satisfied, then there must be at least one small system where the fraction f is satisfied.
min max problem
tG = lim (n to infinity) of min over all SYMMETRIC CSP(G)-instances H with n variables of max over all (0,1) assignments M to n variables where the first k variables are set to 1 of sat(H,M,n)
sat(H,M,n) = fraction of constraints in CSP(G)-instance H satisfied by assignment M with n variables
Observations
• The look-ahead polynomial look-forward approach has not been used in state-of-the-art MAX-SAT and Boolean MAX-CSP solvers.
• Often a fair coin is used. The optimally biased coin is often significantly better.
The Game Evergreen(r,m) for Boolean MAX-CSP(G), r>1, m>0
Two players: they agree on a protocol P1 to choose a set of m relations of rank r.
1. The players use P1 to choose a set G of m relations of rank r.
2. Anna constructs a CSP(G) instance H with 1000 variables and at most 2*m*(1000 choose r) constraints and gives it to Bob (1 second limit).
3. Bob gets paid the fraction of constraints he can satisfy in H (100 seconds limit).
4. Take turns (go to 1).
For Evergreen(3,2)
[Diagram: mixing axis from 100% R1, 0% R2 to 100% R2, 0% R1.]
Evergreen(3,2) protocol possibilities
• Variant 1
– Bob chooses both relations G.
– Anna chooses a CSP(G) instance H.
– Bob solves H and gets paid by Anna.
• This gives too much control to Bob. Bob can choose two odd relations, which guarantees him a payoff of 1 independent of how Anna chooses the instance H.
Evergreen(3,2) protocol possibilities
• Variant 2
– Anna chooses a relation R1 (e.g., 22).
– Bob chooses a relation R2.
– Anna chooses a CSP(G) instance H.
– Bob solves H and gets paid by Anna.
[Diagram: R1 (Anna), R2 (Bob); mixing axis from 100% R1, 0% R2 to 100% R2, 0% R1.]
Problem with variant 2
• Anna can just ignore relation R2.
• Gives Anna too much control because the payoff for Bob depends only on R1 chosen by Anna (and the quality of the solver that Bob uses).
Protocol choice: variant 3
• Randomly choose R1 and R2 (independently) between 1 and 255 (Throw two dice).
Tell me
• How would you react as Anna?
– The relations 22 and 22 have been chosen.
– You must create a CSP({22}) instance with 1000 variables in which only the smallest possible fraction can be satisfied.
– What kind of instance will this be? A symmetric instance with (1000 choose 3) constraints; only the fraction 4/9 can be satisfied.
• What kind of algorithm should Bob use to maximize his payoff? Compute the optimal k + the best MAX-CSP solver.
For Evergreen(3,2)
[Diagram: mixing axis from 100% R1, 0% R2 to 100% R2, 0% R1. The polynomials tell us how to mix the two relations.]
Role of tG in the Evergreen(3,2) game
• Anna (instance construction): choose a CSP(G) instance so that only the fraction tG can be satisfied: a symmetric formula.
• Bob: choose an algorithm so that at least the fraction tG is satisfied. (Bob gets paid tG by Anna.)
Game strategy in a nutshell
• Anna: best to choose a tG instance.
• Bob: gets paid tG.
• etc.
Additional Information
• Rich literature on clause learning in SAT and CSP solver domain. Superresolution is the most general form of clause learning with restarts.
• Papers on look-ahead polynomials and superresolution: http://www.ccs.neu.edu/research/demeter/papers/publications.html
Additional Information
• Useful unpublished paper on look-ahead polynomials: http://www.ccs.neu.edu/research/demeter/biblio/partial-sat-II.html
• Technical report on the topic of this talk: http://www.ccs.neu.edu/research/demeter/biblio/POptMAXCSP.html
Future work
• Exploring best combination of look-forward and look-back techniques.
• Find all maximum-assignments or estimate their number.
• Robustness of maximum assignments.
• Are our MAX-CSP solvers useful for reasoning about biological pathways?
Conclusions
• Presented SPOT, a family of MAX-CSP solvers based on look-ahead polynomials and non-chronological backtracking.
• SPOT has a desirable property: it is P-optimal.
• SPOT can be implemented very efficiently.
• Preliminary experimental results are encouraging. A lot more work is needed to assess the practical value of the look-ahead polynomials.
Polynomials for rank 3
x^3  x^2  x^1  x^0   relation
 -1    3   -3    1   1
  1   -2    1    0   2
  0    1   -2    1   3
  1   -2    1    0   4
  0    1   -2    1   5
For 2: x(1-x)^2 = x^3 - 2x^2 + x; maximum at x = 1/3: (1/3)(2/3)^2 = 4/27.
Check: 2 and 4 are the same.
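The table can be generated from the relation numbers themselves. A hedged sketch (not the talk's code), using the encoding from the Mathematica appendix where bit i of the relation is the truth-table entry for pattern i; coefficients are returned in ascending powers of x, the reverse of the table's column order:

```python
# appSAT coefficients for rank-3 relations: a set bit at pattern i with
# k ones contributes x^k (1-x)^(3-k), expanded via the binomial theorem.

from math import comb

def app_sat_coeffs(relation, rank=3):
    coeffs = [0] * (rank + 1)
    for i in range(2 ** rank):
        if (relation >> i) & 1:
            k = bin(i).count("1")
            # x^k (1-x)^(rank-k) = sum_j C(rank-k, j) (-1)^j x^(k+j)
            for j in range(rank - k + 1):
                coeffs[k + j] += comb(rank - k, j) * (-1) ** j
    return coeffs

print(app_sat_coeffs(1))  # [1, -3, 3, -1]  i.e. (1-x)^3
print(app_sat_coeffs(2))  # [0, 1, -2, 1]   i.e. x(1-x)^2
print(app_sat_coeffs(2) == app_sat_coeffs(4))  # True: 2 and 4 are the same
```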
Polynomials for rank 3
x^3  x^2  x^1  x^0   relation
 -1    3   -3    1   1
  1   -2    1    0   2
  0    1   -2    1   3
  1   -2    1    0   4 (same as 2)
  0    1   -2    1   5
For 4: x(1-x)^2 = x^3 - 2x^2 + x; maximum at x = 1/3: (1/3)(2/3)^2 = 4/27.
Recall
• (f*g)' = f'*g + f*g'
• (f^2)' = 2*f*f'
• For relation 2:
– (x(1-x)^2)' = (1-x)^2 + x*2(1-x)*(-1) = (1-x)(1-3x)
– x = 1 is a minimum
– x = 1/3 is a maximum
– value at the maximum: 4/27
Harold• concern: intension, extension: query, predicate
• extension = intension(software)
• Harold Ossher: confirmed pointcuts
The Game Evergreen(r,m) for Boolean MAX-CSP(G), r>1,m>0
Two players: They agree on a protocol P1 to choose a set of m relations of rank r.
1. The players use P1 to choose a set G of m relations of rank r.
2. Anna constructs a CSP(G) instance H with 1000 variables and at most 2*m*(1000 choose r) constraints and gives it to Bob (1 second limit).
3. Bob gets paid by Anna the fraction of constraints he can satisfy in H (100 seconds limit).
4. Take turns (go to 1).
Evergreen(3,2)
• Rank 3: represent relations by the integer corresponding to the truth table in standard sorted order 000 – 111.
• Choose relations between 1 and 254 (exclude 0 and 255).
• Don't choose two odd numbers: all-false would satisfy all constraints.
• Don't choose two numbers above 128: all-true would satisfy all constraints.
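The last two bullets follow directly from the truth-table encoding: bit 0 is the entry for the all-false pattern 000, and bit 7 (value 128) the entry for all-true 111. A small sketch (function names are illustrative, not from the talk):

```python
# Rank-3 relation numbers: bit 0 = entry for pattern 000, bit 7 = pattern 111.

def all_false_satisfies(r):
    return r & 1 == 1        # odd relation number: all-false satisfies it

def all_true_satisfies(r):
    return r & 128 == 128    # relation number contains 128: all-true satisfies it

print(all_false_satisfies(21), all_true_satisfies(200))  # True True
```

If both chosen relations are odd (or both contain 128), Bob trivially satisfies everything with the all-false (or all-true) assignment.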
How to play Evergreen(3,2)
• G = {R1, R2} is given (by some protocol).
• Anna: compute t = (t1, t2) so that max over x in [0,1] of appmean_t(x) is minimized. Construct a symmetric instance SYM_G(t) = H.
• Bob: solve H.
Question
• For any G and any CSP(G)-instance H, is there a weight assignment to the constraints of H so that the look-ahead polynomial absH has its maximum not at 0 or 1 and guarantees a maximum assignment for H without weights?
– the polynomial might guarantee (maximum − 1) + ε, which is enough to guarantee a maximum assignment.
– what if we also allow n-maps?
Absolute P-optimality
• Bringing the max to the boundary is polynomial.
• Bringing the max away from the boundary using weights? What is the complexity?
• Definition: ImproveLookAhead(G,H,N): given G, a CSP(G) instance H and an assignment N for H, is there an assignment that satisfies at least laH,N(mb) + 1 constraints? (mb = maximum bias.)
• Assume G is sufficiently closed.
• Theorem [Absolute P-optimality]: ImproveLookAhead(G,H,N) is NP-hard iff MAX-CSP(G) is NP-hard.
• Warning: ImproveAllZero(G,H) is NP-hard iff MAX-CSP(G) is NP-hard.
Exploring the search space
• Look-ahead polynomials don’t eliminate parts of the search space.
• They crosscut the search space early in the search process. Whenever the look-ahead polynomial guarantees more than the currently best assignment, we can cut across the search space but might have to get back to the part we jumped over.
Crosscutting the search space
[Diagram: from the current assignment, a jump to a better one by look-ahead, then to an even better one by search, toward the best.]
Early better than later
• Look-ahead polynomials are more useful early in the search.
• Later in the search the maximum will be at 0 or 1.
• Look-ahead polynomials will make mistakes which are compensated by superresolvents.
• Superresolvents cut part of the search space and they help the look-ahead polynomials to eliminate the mistakes.
Requirements for algorithms and properties to work
• Relative P-optimality
• Absolute P-optimality
– G needs to be closed under renaming, reductions and n-maps.
• Look-ahead polynomials
– improve assignments: closed under n-maps and reductions.
Never require closed under renaming?
• Symmetric formulas don't require it? They do. Consider:
2: 1 2 3 0
2: 1 2 4 0
2: 1 3 4 0
2: 2 3 4 0
This is not symmetric under renaming: {1 !2 !3 4} does not satisfy all constraints, only ¾; the renamed assignment {!1 2 3 !4} only satisfies ¼.
What happens during the solution process
• Maximum of polynomial will be at the boundary, say 0. Can be achieved in P. Notice folding effect.
• Many superresolvents will be learned until better assignment is found.
• Most constraints use an odd relation, a few an even relation (if many constraints can be satisfied).
What happens …
• Because the polynomial only depends on a few numbers, it is not sensitive to the detailed properties of the instance.
• But if one variable has a visible bias towards either 1 or 0, polynomials might detect it.
• Adjust the weight of the constraints to bring the maximum of the polynomial into the middle so that abs(mb) increases.
Question for Daniel
• p(x) = t1*p1(x) + t2*p2(x)
• mb at 0
• p(mb)
• Perturb t1, t2 so that p(x) gets a higher maximum. The fraction t1 should go up if R1 is an unsatisfied relation.
• How high can we bring the fraction of satisfied constraints this way?
Question
• Does this solve the original problem?
• If we get all constraints satisfied, yes.
• Can we force that by deleting all but one unsatisfied constraint and adding the rest back later?
• We are forced to work with many relations.
SAT Rank 2 instance
F =
14: 1 2 0
14: 3 4 0
14: 5 6 0
7: 1 3 0
7: 1 5 0
7: 3 5 0
7: 2 4 0
7: 2 6 0
7: 4 6 0
14: 1 2 = or(1 2); 7: 1 3 = or(!1 !3)
Find a maximum assignment and a proof that it is maximum.
Solution Strategy
• The MAX-CSP transition system gives many options:
– Choice of initial assignment: has a significant impact on the length of the proof. Best to start with a maximum assignment.
– Variable ordering: irrelevant, because we start with a maximum assignment.
– Value ordering: also irrelevant.
SAT Rank 2 instance
14: 1 2 0  14: 3 4 0  14: 5 6 0  7: 1 3 0  7: 1 5 0  7: 3 5 0  7: 2 4 0  7: 2 6 0  7: 4 6 0
14: 1 2 = or(1 2); 7: 1 3 = or(!1 !3)
rank 2: 10: 1 = or(1); 5: 1 = or(!1)
N = {1 !2 !3 4 5 !6}, unsat = 1/9
{}|F|{}|N -> D UP*
{1* !3 !5 4 6}|F|{}|N -> SSR Restart
{}|F|5(1)|N -> UP*
{!1 2 !4 !6 5 3}|F|5(1)|N -> SSR
{!1 2 !4 !6 5 3}|F|5(1),0()|N -> Finale
end
Rank 2 relations
value  b a   10  12
  1    0 0    0   0
  2    0 1    1   0
  4    1 0    0   1
  8    1 1    1   1
10(1) = or(1) = or(*,1): don't mention the second argument.
12(1) = or(1) = or(1,*); 10(2,1) = 12(1,2).
0() = the empty clause.
[Diagram: the UP (unit propagation) and D (decision) transition rules.]
Variable ordering
• Maximizes the likelihood that the look-ahead polynomials make correct decisions.
• Finds the variable where the look-ahead polynomials give the strongest indication:
– even if the look-ahead polynomial chooses the wrong mb, the decision might still be right.
– what is better:
• laH1(mb1) is max, or
• |laH1(mb1) - laH0(mb0)| is max (more instance-specific; will adapt to superresolvents)?
mean versus appmean
• mean does less averaging, so is it preferred?
• appmean looks at the neighborhood of x*n.
Derandomization
• In a perfectly symmetric CSP(G) instance, it is sufficient to try any assignment with k ones for k from 0 to n to achieve the maximum (tG).
• But in a non-symmetric instance, we need derandomization to achieve tG and superresolution to achieve the maximum.
The Game EvergreenTM(r,m) for Boolean MAX-CSP(G), r>1, m>0
Two players: they agree on a protocol P1 to choose a set of m relations of rank r.
1. The players use P1 to choose a set G of m relations of rank r.
2. Anna constructs a CSP(G) instance H with 1000 variables and at most 2*m*(1000 choose r) constraints and gives it to Bob (1 second limit). Anna knows the maximum assignment and has a proof of maximality, but keeps it secret until Bob responds.
3. Bob gets paid the fraction of constraints he can satisfy in H relative to the maximum number that can be satisfied (100 seconds limit).
4. Take turns (go to 1).
TM = true maximum
EvergreenTM versus Evergreen
EvergreenTM:
• Anna can try to create instances that are hard to solve for Bob's solver.
• If Bob has a perfect solver, he will be paid 1.0.
• The game depends a lot on solver quality.
• Incomplete information (the maximum assignment is kept secret).
• Challenge for Anna: find an instance whose maximum is known with a short proof.
Evergreen:
• Anna can control the maximum Bob is paid, assuming a perfect solver.
• Bob may be paid little even with a perfect solver.
• The game depends less on solver quality.
• Complete information.
Using Mathematica
• Combine2[15, 238]
– t1*(1-x) - t2*(-2+x)*x
• D[D[Combine2[15, 238], x], x]
– -2*t2 is negative: the stationary point must be a maximum.
• Solve[D[Combine2[15, 238], x] == 0, x]
– x = (-t1 + 2*t2)/(2*t2)
• RootsOf2[15, 238] /. t2 -> (1 - t1) /. t1 -> 1/5 (5 - Sqrt[5])
15: 1 0 = !1; 238: 1 2 0 = 1 or 2
Mathematica
• Solve2[15, 238]
– 1/2 (Sqrt[5] - 1)
– t1 = 1 - 1/Sqrt[5]
– t2 = 1/Sqrt[5]
• Solve2[22, 22]
– 4/9
– t1 = 1/2
– t2 = 1/2
Mathematica
IncludeIt[r_, n_] := Mod[Floor[r/n], 2];
AppSAT[r_] := Simplify[
  IncludeIt[r, 1]*x^0*(1 - x)^(3 - 0) +
  (IncludeIt[r, 2] + IncludeIt[r, 4] + IncludeIt[r, 16])*x^1*(1 - x)^(3 - 1) +
  (IncludeIt[r, 8] + IncludeIt[r, 32] + IncludeIt[r, 64])*x^2*(1 - x)^(3 - 2) +
  IncludeIt[r, 128]*x^3*(1 - x)^(3 - 3)];
Combine2[r1_, r2_] := t1*AppSAT[r1] + t2*AppSAT[r2];
RootsOf2[r1_, r2_] := ReplaceAll[Combine2[r1, r2],
  Solve[D[Combine2[r1, r2], x] == 0, x]];
Solve2[r1_, r2_] := For[i = 1, i <= Length[RootsOf2[r1, r2]], i++,
  Print[Minimize[{RootsOf2[r1, r2][[i]], 0 < t1 < 1, 0 < t2 < 1,
    t1 + t2 == 1}, {t1, t2}]]];
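The Mathematica session can be cross-checked numerically. A sketch (a grid search standing in for the symbolic Solve/Minimize, using the same relation encoding as AppSAT above): it reproduces ½(Sqrt[5]−1) ≈ 0.618 for {15, 238} and 4/9 for {22, 22}.

```python
# Numeric min-max: t_G = min over t of max over x of
# t1*appSAT_R1(x) + (1-t1)*appSAT_R2(x), mirroring Solve2 by grid search.

def include(r, n):          # IncludeIt[r, n]
    return (r // n) % 2

def app_sat(r, x):          # AppSAT[r] evaluated at x
    return (include(r, 1) * (1 - x) ** 3
            + (include(r, 2) + include(r, 4) + include(r, 16)) * x * (1 - x) ** 2
            + (include(r, 8) + include(r, 32) + include(r, 64)) * x ** 2 * (1 - x)
            + include(r, 128) * x ** 3)

def t_g(r1, r2, steps=400):
    xs = [i / steps for i in range(steps + 1)]
    best = None
    for i in range(steps + 1):
        t1 = i / steps
        peak = max(t1 * app_sat(r1, x) + (1 - t1) * app_sat(r2, x) for x in xs)
        best = peak if best is None else min(best, peak)
    return best

print(round(t_g(15, 238), 2))  # 0.62  (exactly (Sqrt[5]-1)/2)
print(round(t_g(22, 22), 2))   # 0.44  (exactly 4/9)
```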
AppSAT[r_] := Simplify[IncludeIt[r, 1]*x^0*((1 - x))^((3 - 0)) + (( IncludeIt[r, 2] + IncludeIt[r, 4] + IncludeIt[r, 16]))* x^1*((1 - x))^((3 - 1)) + ((IncludeIt[r, 8] + IncludeIt[r, 32] + IncludeIt[r, 64]))* x^2*((1 - x))^((3 - 2)) + IncludeIt[r, 128]*x^3*((1 - x))^((3 - 3))];Combine2[r1_, r2_] := t1*AppSAT[r1] + t2*AppSAT[r2];RootsOf2[r1_, r2_] := ReplaceAll[Combine2[r1, r2], Solve[D[Combine2[r1, r2], x] == 0, x]];Solve2[r1_, r2_] := For [i = 1, i <= Length[ RootsOf2[r1, r2]], i++, Print[Minimize[{ RootsOf2[r1, r2][[ i]], 0 < t1 < 1, 0 < t2 < 1, t1 + t2 == 1}, {t1, t2}]]];