Structural Decomposition Methods for Constraint Solving and Optimization
Martin Sachenbacher, February 2003
Please do not distribute beyond MERS group
Outline
Decomposition-based Constraint Solving
– Tree Decompositions
– Hypertree Decompositions
– Solving Acyclic Constraint Networks
Decomposition-based Optimization
– Dynamic Programming
– Generalized OCSPs involving State Variables
– Demo of Prototype
Decomposition vs. other Solving Methods
– Conditioning-based Methods
– Conflict-directed Methods
Constraint Satisfaction Problems
Variables V = {v1, v2, …, vn}
Domains dom(vi)
Constraints R = {r1, r2, …, rm}
Tasks
– Find a solution
– Find all solutions
Methods for Solving CSPs
– “Guessing”: Generate-and-test, Backtracking, …
– “Conflicts”: Truth Maintenance, Kernels, …
– “Decomposition”: Analytic reduction
Example
Constraint graph over E, A, B, D, C; domains dom(A) = dom(B) = dom(D) = dom(E) = {1,2}, dom(C) = {1,2,3}
r(E,D) = {(1,2), (2,1)}
r(E,B) = {(1,2), (2,1)}
r(E,C) = {(1,2), (1,3), (2,1), (2,3)}
r(B,C) = {(1,2), (2,1)}
r(D,A) = {(1,2), (2,1)}
r(A,B) = {(1,2), (2,1)}
Example
Eliminate variable E (the constraints mentioning E are r(E,D), r(E,B), r(E,C))
Example
After eliminating E:
r(D,B,C) = {(2,2,2), (2,2,3), (1,1,1), (1,1,3)}
r(D,A) = {(1,2), (2,1)}
r(A,B) = {(1,2), (2,1)}
r(B,C) = {(1,2), (2,1)}
Example (continued)
Eliminate variable D (the constraints mentioning D are r(D,B,C), r(D,A))
Example (continued)
Eliminate variable C
r(A,B,C) = {(1,2,2), (1,2,3), (2,1,1), (2,1,3)}
r(A,B) = {(1,2), (2,1)}
Example (continued)
Eliminate variable B
r(A,B) = {(1,2), (2,1)}
Example (continued)
r(A) = {1, 2}
Non-empty: satisfiable!
Backtrack-free: extend “backwards” to find solutions
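The first elimination step can be reproduced with explicit joins and projections. A minimal Python sketch (relation and variable names as in the slides; `join` and `project_out` are generic helpers written for this example, not part of any library):

```python
def join(r1, s1, r2, s2):
    """Natural join of relation r1 (over schema s1) with r2 (over s2)."""
    shared = [v for v in s1 if v in s2]
    schema = s1 + [v for v in s2 if v not in s1]
    out = set()
    for t1 in r1:
        for t2 in r2:
            if all(t1[s1.index(v)] == t2[s2.index(v)] for v in shared):
                row = dict(zip(s1, t1))
                row.update(zip(s2, t2))
                out.add(tuple(row[v] for v in schema))
    return out, schema

def project_out(r, schema, var):
    """Project the relation onto all variables except var."""
    keep = [v for v in schema if v != var]
    return {tuple(t[schema.index(v)] for v in keep) for t in r}, keep

rED = {(1, 2), (2, 1)}
rEB = {(1, 2), (2, 1)}
rEC = {(1, 2), (1, 3), (2, 1), (2, 3)}

# Eliminate E: join every constraint mentioning E, then project E away.
r, s = join(rED, ["E", "D"], rEB, ["E", "B"])
r, s = join(r, s, rEC, ["E", "C"])
rDBC, sDBC = project_out(r, s, "E")
print(sorted(rDBC))  # -> [(1, 1, 1), (1, 1, 3), (2, 2, 2), (2, 2, 3)]
```

The result is exactly the r(D,B,C) shown after the E-elimination step.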
Idea of Decomposition
Computational steps: join, project
Var E: r(E,D), r(E,B), r(E,C)
Var D: r(D,A), r(B,C,D)
Var C: r(C,B), r(A,B,C)
Var B: r(B,A), r(A,B)
Var A: r(A)
Idea of Decomposition
Computational scheme (acyclic):
r(E,D) ⋈ r(E,B) ⋈ r(E,C)
r(D,A) ⋈ r(B,C,D)
r(C,B) ⋈ r(A,B,C)
r(B,A) ⋈ r(A,B)
r(A)
Idea of Decomposition
Alternative scheme for the example:
r(A,B) ⋈ r(A,D)
r(E,D) ⋈ r(B,D)
r(E,C) ⋈ r(B,C)
r(E,B)
Tree Decompositions
A tree decomposition of a constraint network is a triple (T, χ, λ), where T = (N, E) is a tree and χ, λ are labeling functions associating with each node n ∈ N two sets χ(n) ⊆ V and λ(n) ⊆ R such that:
1. For each rj ∈ R, there is at least one n ∈ N such that rj ∈ λ(n) and scope(rj) ⊆ χ(n) (“covering”)
2. For each variable vi ∈ V, the set {n ∈ N | vi ∈ χ(n)} induces a connected subtree of T (“connectedness”)
The tree-width of a tree decomposition is defined as max(|χ(n)|), n ∈ N.
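The two conditions can be checked mechanically. A sketch (the data layout — dicts mapping node names to their χ and λ label sets — is an assumption of this example):

```python
def is_tree_decomposition(tree_edges, chi, lam, scopes):
    """Verify covering and connectedness for a labeled tree.
    tree_edges: (node, node) pairs; chi: node -> set of variables;
    lam: node -> set of constraint names; scopes: constraint -> variables."""
    nodes = set(chi)
    # Covering: every constraint sits in some node's lambda label,
    # with its scope contained in that node's chi label.
    for r, sc in scopes.items():
        if not any(r in lam[n] and sc <= chi[n] for n in nodes):
            return False
    # Connectedness: the nodes containing each variable must induce
    # a connected subtree (checked with a small union-find).
    for v in set().union(*chi.values()):
        occ = {n for n in nodes if v in chi[n]}
        parent = {n: n for n in occ}
        def find(x):
            while parent[x] != x:
                x = parent[x]
            return x
        for a, b in tree_edges:
            if a in occ and b in occ:
                parent[find(a)] = find(b)
        if len({find(n) for n in occ}) != 1:
            return False
    return True

# Two-node decomposition of constraints rAB(A,B) and rBC(B,C):
print(is_tree_decomposition(
    [("n1", "n2")],
    {"n1": {"A", "B"}, "n2": {"B", "C"}},
    {"n1": {"rAB"}, "n2": {"rBC"}},
    {"rAB": {"A", "B"}, "rBC": {"B", "C"}}))  # -> True
```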
Tree Decompositions
Comparing tree decompositions for the example:
R(A,B) ⋈ R(A,D)
R(E,D) ⋈ R(B,D)
R(E,C) ⋈ R(B,C)
R(E,B)
Tree-width 3
R(E,D) ⋈ R(E,B) ⋈ R(E,C)
R(D,A) ⋈ R(B,C,D)
R(C,B) ⋈ R(A,B,C)
R(B,A) ⋈ R(A,B)
R(A)
Tree-width 4
Structural Decomposition Methods
– Biconnected Components [Freuder ’85]
– Treewidth [Robertson and Seymour ’86]
– Tree Clustering [Dechter and Pearl ’89]
– Cycle Cutset [Dechter ’92]
– Bucket Elimination [Dechter ’97]
– Tree Clustering with Minimization [Faltings ’99]
– Hinge Decompositions [Gyssens and Paredaens ’84]
– Hypertree Decompositions [Gottlob et al. ’99]
Tree Clustering
Example: constraint graph over the variables A, B, C, D, E, F
Tree Clustering
Step 1: Select a variable ordering
Tree Clustering
Step 2: Make the graph chordal (connect non-adjacent parents)
Tree Clustering
Step 3: Identify maximal cliques
Tree Clustering
Step 4: Form the dual graph using the cliques as nodes
Cliques {A,B,C,E}, {B,C,D,E}, {D,E,F}; arcs labeled {B,C,E}, {E}, {D,E}
Tree Clustering
Step 5: Remove redundant arcs
{A,B,C,E}, {B,C,D,E}, {D,E,F} remain connected by the arcs {B,C,E} and {D,E}; the redundant arc {E} is removed
Tree-width 4
Tree Clustering
The alternative variable ordering F, E, D, C, B, A produces the cliques {F,D}, {B,C,D}, {A,B,C}, {A,B,E}, connected by the arcs {D}, {B,C}, {A,B}; the redundant arc {B} is removed
Tree-width 3
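Steps 1–3 can be sketched as a fill-in procedure along the elimination ordering. The 4-cycle below is a made-up illustration, not the slides' example graph:

```python
def triangulate_and_cliques(edges, order):
    """Process vertices from last to first in the ordering; connect each
    vertex's earlier neighbors pairwise (step 2), then keep the maximal
    cliques among the {vertex + earlier neighbors} sets (step 3)."""
    adj = {v: set() for v in order}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    pos = {v: i for i, v in enumerate(order)}
    candidates = []
    for v in reversed(order):
        earlier = {u for u in adj[v] if pos[u] < pos[v]}
        for a in earlier:                  # connect non-adjacent parents
            for b in earlier - {a}:
                adj[a].add(b)
                adj[b].add(a)
        candidates.append({v} | earlier)
    # keep only the maximal clique candidates
    return [c for c in candidates if not any(c < d for d in candidates)]

# Hypothetical 4-cycle A-B-C-D-A with ordering A, B, C, D:
print(triangulate_and_cliques(
    [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")],
    ["A", "B", "C", "D"]))
```

On the 4-cycle the fill-in adds the chord A-C, yielding the maximal cliques {A,B,C} and {A,C,D}.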
Decomposing Hypergraphs
Possible approach: turn the hypergraph into its primal/dual graph, then apply a graph decomposition method
But: sub-optimal, since the conversion loses information
Idea [Gottlob ’99]: generalize decomposition to hypergraphs
Hypertree Decompositions
A hypertree decomposition is a triple (T, χ, λ) such that:
1. For each rj ∈ R, there is at least one n ∈ N such that scope(rj) ⊆ χ(n) (“covering”)
2. For each variable vi ∈ V, the set {n ∈ N | vi ∈ χ(n)} induces a connected subtree of T (“connectedness”)
3. For each n ∈ N, χ(n) ⊆ scope(λ(n))
4. For each n ∈ N, scope(λ(n)) ∩ χ(Tn) ⊆ χ(n), where Tn is the subtree of T rooted at n
The hypertree-width of a hypertree decomposition is defined as max(|λ(n)|), n ∈ N.
Tree-Width vs. Hypertree-Width
The class of CSPs with bounded hypertree-width subsumes the class of CSPs with bounded tree-width [Gottlob ’00]
Determining the hypertree-width of a CSP is NP-complete
For each fixed k, it can be determined in polynomial time whether the hypertree-width of a CSP is at most k
Game Characterization: Tree-Width
Robber and k Cops
– The Cops want to capture the Robber
– Each Cop controls a node of the graph
– At any time, Robber and Cops can move to neighboring nodes
– The Robber tries to escape, but must avoid nodes controlled by Cops
Playing the Game
[Figure: graph with nodes a–q; the Robber picks a start node]
First Move of the Cops
[Figure: the Cops occupy their first set of nodes]
Shrinking the Space
[Figure: the Cops advance, shrinking the Robber’s escape space]
The Capture
[Figure: the Robber is cornered and captured]
Game Characterization: HT-Width
– The Cops are now placed on edges
– A Cop controls all the nodes of an edge simultaneously
Playing the Game
[Figure: hypergraph over the variables P, R, S, T, U, V, W, X, Y, Z]
First Move of the Cops
[Figure]
Shrinking the Space
[Figure]
The Capture
[Figure]
Different Robber’s Choice
[Figure]
The Capture
[Figure]
Strategies and Decompositions
Theorem: a hypergraph has hypertree-width at most k iff k Cops have a winning strategy
Winning strategies correspond to decompositions, and vice versa
First Move of the Cops
[Figure] Cops occupy a(S,X,T,R) ⋈ b(S,Y,U,P)
Possible Choice for the Robber
[Figure] The Robber moves within the space not covered by a(S,X,T,R) ⋈ b(S,Y,U,P)
The Capture
[Figure] Cops move to c(R,P,V,R); Robber captured
Alternative Choice for the Robber
[Figure] The Robber chooses the other component
Shrinking the Space
[Figure] Cops move to d(X,Y) ⋈ e(T,Z,U)
The Capture
[Figure] Cops move to d(X,Y) ⋈ f(W,X,Z); Robber captured
Decomposition
a(S,X,T,R) ⋈ b(S,Y,U,P)
c(R,P,V,R)
d(X,Y) ⋈ e(T,Z,U)
d(X,Y) ⋈ f(W,X,Z)
HT-Width 2
Decomposition-based CSP Solving
1. Turn the CSP into an equivalent acyclic instance (“compilation”)
2. Solve the equivalent acyclic instance (polynomial in the width)
Constraints S(D,A), R(A,B), T(B,C), U(E,D), V(E,B), W(E,C) compile into the join tree:
R(A,B) ⋈ S(D,A)
T(B,C) ⋈ U(E,D)
V(E,B)   W(E,C)
Solving Acyclic CSPs
Bottom-Up Phase
– Consistency check
Top-Down Phase
– Solution extraction
Polynomial complexity; highly parallelizable
Bottom-Up Phase
Node ordering
Solve(node)
  For Each tuple ∈ node.relation
    For Each child ∈ node.children
      cons ← consistentTuples(child.relation, tuple)
      If cons = ∅ Then
        node.relation ← node.relation \ { tuple }
        Exit For
      End If
    Next child
  Next tuple
(Semi-join, DAC)
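Each inner loop above is a semi-join of the node's relation with a child's relation. A Python sketch (schemas as lists of variable names; the nested-dict tree layout is an assumption of this sketch):

```python
def semi_join(parent_rel, parent_schema, child_rel, child_schema):
    """Keep exactly the parent tuples that agree with at least one
    child tuple on the shared variables."""
    shared = [v for v in parent_schema if v in child_schema]
    keys = {tuple(t[child_schema.index(v)] for v in shared) for t in child_rel}
    return {t for t in parent_rel
            if tuple(t[parent_schema.index(v)] for v in shared) in keys}

def bottom_up(node):
    """Post-order pass: filter every node against its children; an empty
    root relation afterwards means the network is inconsistent."""
    for child in node["children"]:
        bottom_up(child)
        node["relation"] = semi_join(node["relation"], node["schema"],
                                     child["relation"], child["schema"])
    return node["relation"]

# Tiny illustration: the child rules out the parent tuple with y = 4.
child = {"schema": ["y", "z"], "relation": {(2, 5)}, "children": []}
root = {"schema": ["x", "y"], "relation": {(1, 2), (3, 4)},
        "children": [child]}
print(bottom_up(root))  # -> {(1, 2)}
```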
Example
Join tree: P(x,y,z) at the root with children R(x,a,b) and Q(z,c,d); Q has children G(u,z,d) and H(c,d)
[Tables: example tuples at each node]
Non-empty: satisfiable
Top-Down Phase
Node ordering (“search”)
Initialization: Queue ← (True)
Expand(entry)
  cons ← consistentTuples(entry.node.relation, entry.assignment)
  For Each tuple ∈ cons
    Queue ← (nextInOrdering(entry.node), tuple ⋈ entry.assignment)
  Next tuple
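The queue-based expansion can be sketched as follows. The node list and its ordering are illustrative assumptions; each node must share its already-assigned variables with earlier nodes:

```python
from collections import deque

def enumerate_solutions(nodes):
    """nodes: (schema, relation) pairs in a top-down ordering. After the
    bottom-up pass, every queue entry extends to a full solution, so the
    search is backtrack-free."""
    queue = deque([({}, 0)])  # (partial assignment, index of next node)
    while queue:
        assign, i = queue.popleft()
        if i == len(nodes):
            yield assign
            continue
        schema, relation = nodes[i]
        for t in relation:
            row = dict(zip(schema, t))
            if all(assign.get(v, row[v]) == row[v] for v in schema):
                merged = dict(assign)
                merged.update(row)
                queue.append((merged, i + 1))

nodes = [(["x", "y"], {(1, 2), (3, 4)}),
         (["y", "z"], {(2, 5), (4, 6)})]
solutions = list(enumerate_solutions(nodes))
print(len(solutions))  # -> 2
```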
Example
[Tables: the node relations after the bottom-up pass]
Backtrack-free search over the node ordering P, R, Q, G, H:
(True)
(xyz = 999)
(abxyz = 72999)
(abcdxyz = 7292999)
(abcduxyz = 72921999)
Outline
Decomposition-based Constraint Solving
– Tree Decompositions
– Hypertree Decompositions
– Solving Acyclic Constraint Networks
Decomposition-based Optimization
– Dynamic Programming
– Generalized OCSPs involving State Variables
– Demo of Prototype
Decomposition vs. other Solving Methods
– Conditioning-based Methods
– Conflict-directed Methods
Optimal CSPs
Variables V = {v1, v2, …, vn}
Domains dom(vi)
Constraints R = {r1, r2, …, rm}
Utility functions u(vi): dom(vi) → ℝ
– mutual preferential independence
Tasks
– Find the best solution
– Find the k best solutions
– Find all solutions up to utility u
Optimization for Acyclic CSPs
Utility of a tuple in node n: the utility of the best instantiation of the variables in subtree Tn that is compatible with the tuple
Dynamic programming: the best instantiation is composed of the tuple and a best-utility consistent child tuple for each child of n
Proof: connectedness property of the tree decomposition
Example
dom(vi) = {0, 1, 2}
r(A,B,C): {(A,B,C) | A ≤ B ≤ C}
r(A,E,F): {(A,E,F) | A ≤ E ≤ F}
r(C,D,E): {(C,D,E) | C ≤ D ≤ E}
r(A,C,E): {(A,C,E) | A ≤ C < E}
U = 6A + 5B + 4C + 3D + 2E + F
[Figure: constraint hypergraph over A, B, C, D, E, F]
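The decomposition-based computation on the following slides can be cross-checked by brute force. A sketch (reading the root relation r(A,C,E) as A ≤ C < E, which matches the tuple list shown on the next slides):

```python
from itertools import product

# Exhaustively maximize U over all assignments satisfying the four
# constraints; the decomposition-based DP must return the same optimum.
best = max(
    (6*A + 5*B + 4*C + 3*D + 2*E + F, (A, B, C, D, E, F))
    for A, B, C, D, E, F in product(range(3), repeat=6)
    if A <= B <= C and A <= E <= F and C <= D <= E and A <= C < E)
print(best)  # -> (27, (1, 1, 1, 2, 2, 2))
```

The optimum of 27 at A = B = C = 1, D = E = F = 2 agrees with the best solution derived on the later slides.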
Example
Tree decomposition: root A,C,E with children A,B,C / C,D,E / A,E,F; separators A,C / C,E / A,E
Node relations: the child nodes each contain the tuples 000, 001, 002, 011, 012, 022, 111, 112, 122, 222; the root contains 001, 012, 002, 112
Example
Dynamic programming pass: each child contributes, per root tuple, the best utility of its consistent tuples (“to-go” utility)
– Child ABC: best consistent B-value for each root tuple
– Child AEF: best consistent F-value
– Child CDE: best consistent D-value
Resulting root-tuple utilities (weight + utility-to-go):
– ACE = 001: U = 7
– ACE = 002: U = 12
– ACE = 012: U = 8 + 13 = 21
– ACE = 112: U = 14 + 13 = 27
Best solution: utility = 27
Solving Acyclic Optimal CSPs
Bottom-Up Phase
– Consistency check plus utility computation
Top-Down Phase
– Best-first solution extraction
Polynomial complexity; highly parallelizable
Bottom-Up Phase
Solve(node)
  For Each tuple ∈ node.relation
    tuple.weight ← weight(tuple)
    For Each child ∈ node.children
      cons ← consistentTuples(child.relation, tuple)
      If cons = ∅ Then
        node.relation ← node.relation \ { tuple }
        Exit For
      Else
        tuple.weight ← tuple.weight + bestUtilToGo(cons)
      End If
    Next child
  Next tuple
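A Python sketch of this pass on the earlier A ≤ B ≤ C example. To avoid double counting, each variable's utility is charged at the highest node containing it; the data layout and the coefficient table are assumptions of this sketch:

```python
from itertools import product

def solve(node, coeff, charged=frozenset()):
    """Bottom-up DP: prune inconsistent tuples and record, per surviving
    tuple, the best utility achievable in its subtree. coeff maps each
    variable to its linear utility coefficient."""
    schema = node["schema"]
    mine = [v for v in schema if v not in charged]  # charged at this node
    charged = charged | set(schema)
    for child in node["children"]:
        solve(child, coeff, charged)
    best = {}
    for t in node["relation"]:
        util, ok = sum(coeff[v] * t[schema.index(v)] for v in mine), True
        for child in node["children"]:
            cs = child["schema"]
            shared = [v for v in schema if v in cs]
            key = tuple(t[schema.index(v)] for v in shared)
            cons = [u for ct, u in child["best"].items()
                    if tuple(ct[cs.index(v)] for v in shared) == key]
            if not cons:
                ok = False       # no consistent child tuple: prune
                break
            util += max(cons)    # best utility-to-go from this child
        if ok:
            best[t] = util
    node["best"] = best
    return max(best.values(), default=None)

nondec = {t for t in product(range(3), repeat=3) if t[0] <= t[1] <= t[2]}
def leaf(schema):
    return {"schema": schema, "relation": set(nondec), "children": []}
root = {"schema": ["A", "C", "E"],
        "relation": {(0, 0, 1), (0, 1, 2), (0, 0, 2), (1, 1, 2)},
        "children": [leaf(["A", "B", "C"]), leaf(["C", "D", "E"]),
                     leaf(["A", "E", "F"])]}
print(solve(root, {"A": 6, "B": 5, "C": 4, "D": 3, "E": 2, "F": 1}))  # -> 27
```

The root tuple (1, 1, 2) gets weight 14 plus 5 + 6 + 2 from its children, matching the slides' best utility of 27.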
Top-Down Phase
Initialization: Queue ← (True, 0)
Expand(entry)
  cons ← consistentTuples(entry.node.relation, entry.assignment)
  For Each tuple ∈ cons
    util ← entry.util + tuple.util - bestUtil(cons)
    Queue ← (nextInOrdering(entry.node), tuple ⋈ entry.assignment, util)
  Next tuple
Top-Down Phase: Computing Utility
Utility of the extended assignment:
util = weight(tuple ⋈ entry.assignment) + utilToGo(tuple ⋈ entry.assignment)
     = weight(tuple ⋈ entry.assignment) + tuple.utilToGo + entry.utilToGo - bestUtilToGo(cons)
     = tuple.util + entry.util - weightSharedVars(cons) - bestUtilToGo(cons)
     = tuple.util + entry.util - bestUtil(cons)
No need to call weight(): it cancels out.
Example
Backtrack-free A* search over the node ordering ACE, ABC, AEF, CDE:
(True, 0)
(ACE = 112, 27)
(ABCE = 1112, 27)
(ABCEF = 11122, 27)
(ABCDEF = 111222, 27)
Suboptimal queue entry: (ABCDEF = 111122, 24)
OCSPs with State Variables
Variables V = {v1, v2, …, vn}
Domains dom(vi)
Constraints R = {r1, r2, …, rm}
Utility functions u(vi): dom(vi) → ℝ for a subset Dec ⊆ V of decision variables
– mutual preferential independence
Tasks
– Find the best solution projected on Dec
– Find the k best solutions projected on Dec
– Find all solutions projected on Dec up to utility u
Example
Boolean Polycell
[Figure: circuit with or-gates Or1, Or2, Or3 and and-gates And1, And2, internal wires X, Y, Z, inputs A = 1, B = 1, C = 1, D = 1, E = 0, and observed outputs F = 0, G = 1]
Example
State variables: A, B, C, D, E, F, G, X, Y, Z
– Domain {0, 1}
Decision variables: O1, O2, O3, A1, A2
– Domain {ok, faulty}
Utility function: mode probabilities
– Or-gate: u(ok) = 0.99, u(faulty) = 0.01
– And-gate: u(ok) = 0.995, u(faulty) = 0.005
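The mode probabilities in the later search trace can be cross-checked by brute-force diagnosis. The gate wiring below (Or1(A,C) = X, Or2(B,D) = Y, Or3(C,E) = Z, And1(X,Y) = F, And2(Y,Z) = G) is inferred from the circuit figure and should be treated as an assumption of this sketch:

```python
from itertools import product

A, B, C, D, E, F_obs, G_obs = 1, 1, 1, 1, 0, 0, 1
P = {"or": {"ok": 0.99, "faulty": 0.01},
     "and": {"ok": 0.995, "faulty": 0.005}}

def consistent(o1, o2, o3, a1, a2):
    """An ok gate computes its function; a faulty gate's output is
    unconstrained. Check whether any X, Y, Z fits the observations."""
    for X, Y, Z in product((0, 1), repeat=3):
        if o1 == "ok" and X != (A | C):
            continue
        if o2 == "ok" and Y != (B | D):
            continue
        if o3 == "ok" and Z != (C | E):
            continue
        if a1 == "ok" and F_obs != (X & Y):
            continue
        if a2 == "ok" and G_obs != (Y & Z):
            continue
        return True
    return False

diagnoses = sorted(
    ((P["or"][o1] * P["or"][o2] * P["or"][o3] * P["and"][a1] * P["and"][a2],
      (o1, o2, o3, a1, a2))
     for o1, o2, o3, a1, a2 in product(("ok", "faulty"), repeat=5)
     if consistent(o1, o2, o3, a1, a2)),
    reverse=True)
print(diagnoses[0])  # most likely diagnosis: only Or1 faulty, U ~ 9.7E-3
```

Under this wiring the two leading diagnoses (Or1 faulty at about 9.7E-3, And1 faulty at about 4.8E-3) match the first two solutions found later by the decomposition-based search.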
Example
Hypertree decomposition: root Or3 ⋈ And1 with children And2, Or1, Or2; separators Y,Z / Y / C,X
State Variables: Challenges
– It is infeasible to iterate over individual tuples once state variables are involved; instead, sets of tuples with the same weight must be handled together
– Problem: child assignments can now constrain each other mutually, hence they can no longer be considered independently
Example
Root assignment O3 = ok, A1 = ok
Root node over O3, A1, C, E, F, X, Y, Z; children A2 G Y Z, O1 A C X, O2 B D Y; separators Y,Z / Y / C,X
[Tables: tuples of each child node, grouped by the assignments to the decision variables (ok / faulty); stepping through the children shows that one combination of child assignments is inconsistent]
Idea
Best-first search over consistent child assignments, until the parent assignment is fully covered or the child assignments are exhausted
Example
Best-first search over the child assignments for O3 = ok, A1 = ok
– individual child-assignment utilities, e.g. U = 0.99, U = 0.995, U = 0.01, U = 0.005
– best consistent combination: U = 9.7E-3; successively worse combinations: U = 4.8E-5, U = 4.9E-7
Bottom-Up Phase
Solve(node)
  For Each tuple ∈ projDec(node.relation)
    tuples ← consistentTuples(node.relation, tuple)
    tuple.weight ← weight(tuple)
    Repeat
      childrenAssign ← nextBestChildrenAssignment(node.children)
      cons ← consistentTuples(childrenAssign, tuples)
      If cons ≠ ∅ Then
        cons.weight ← tuple.weight + bestUtilToGo(cons)
        insertPartitionElement(node, cons)
        tuples ← tuples \ cons
      End If
    Until tuples = ∅ Or childrenAssign = ∅
    node.relation ← node.relation \ tuples
  Next tuple
Example
Root node partition, grouped by the root decision assignment (O3, A1):
– (ok, ok): U = 9.7E-3, 4.8E-5, 4.9E-7
– (ok, fty): U = 4.8E-3, 4.8E-5, 2.4E-7, 2.4E-9
– (fty, ok): U = 9.8E-5, 4.9E-7, 4.9E-9
– (fty, fty): U = 4.8E-5, 4.9E-7, 2.4E-7, 2.4E-9, 2.5E-11
[Table: the corresponding assignments to C, E, F, X, Y, Z]
Top-Down Phase
Initialization: Queue ← (True, 0); ordering on siblings
Expand(entry)
  Repeat
    siblingsAssign ← nextBestAssignment(entry.siblings)
    cons ← consistentTuples(siblingsAssign, entry.assignment)
    If cons ≠ ∅ Then
      bestUtilSibAssign ← Max(bestUtilSibAssign, siblingsAssign.util)
      util ← entry.util + siblingsAssign.util - bestUtilSibAssign
      Queue ← (nextInOrdering(entry.siblings), projSharedOrDec(cons ⋈ entry.assignment), util)
    End If
  Until siblingsAssign = ∅
Example
Node ordering: Or3 ⋈ And1, And2, Or1, Or2
(True, 0)
(O3A1CXYZ = ok ok 1011, 0.0097)
…
(O1O2O3A1A2 = fty ok ok ok ok, 0.0097)
Example: Leading Four Solutions
Search queue initially: (True, 0)
After expanding the root:
(O3A1CXYZ = ok ok 1011, 9.7E-3)
(O3A1CXYZ = ok fty 1111, 4.8E-3)
(O3A1CXYZ = fty ok 1011, 9.8E-5)
(O3A1CXYZ = ok ok 1101, 4.8E-5)
The best entry is repeatedly expanded into a full decision assignment and emitted:
Solution 1: (O1O2O3A1A2 = fty ok ok ok ok, 9.7E-3)
Solution 2: (O1O2O3A1A2 = ok ok ok fty ok, 4.8E-3)
Solution 3: (O1O2O3A1A2 = fty ok fty ok ok, 9.8E-5)
Solution 4: (O1O2O3A1A2 = fty fty ok ok ok, 9.8E-5)
Search queue size bounded by k
Prototype for Decomposition-based Optimization
Software components:
– OCSP.XML
– ConstraintSystem (BDD)
– OptkDecomp (HT-Decomp)
– Tree.XML
– Decomposition-based optimization
[Figure: data flow between the components]
Outline
Decomposition-based Constraint Solving
– Tree Decompositions
– Hypertree Decompositions
– Solving Acyclic Constraint Networks
Decomposition-based Optimization
– Dynamic Programming
– Generalized OCSPs involving State Variables
– Demo of Prototype
Decomposition vs. other Solving Methods
– Conditioning-based Methods
– Conflict-directed Methods
Methods for Solving CSPs
– “Guessing”: Generate-and-test, Backtracking, …
– “Conflicts”: Truth Maintenance, Kernels, …
– “Decomposition”: Analytic reduction
Decomposition, Elimination, Resolution
Basic Principle
– Analytic reduction to equivalent subproblems
Advantages
– No search, no inconsistencies (unless no solution exists)
– Solutions obtained simultaneously (knowledge compilation)
Time/Space Requirements
– Bounded by structural properties (width)
– Worst case is average case
Problems
– Space requirements (large constraints, as variables are unassigned)
Conditioning, Search, Guessing
Basic Principle
– Breaking the problem into smaller subproblems by (heuristically) assigning values and testing candidates
[Figure: conditioning the example network on E = 1]
Conditioning, Search, Guessing
Basic Principle
– Breaking the problem into smaller subproblems by (heuristically) assigning values and testing candidates
Advantages
– Less space (small constraints, as variables are assigned)
– Works also for hard problems
Time/Space Requirements
– Exponential (but the average case is much better than the worst case)
Problems
– Backtracking necessary
– Solutions obtained only one-by-one
Conflicts, Truth Maintenance
Basic Principle
– Find and generalize inconsistencies (conflicts) to construct descriptions of feasible regions (kernels)
Advantages
– Re-use of information
– Avoids redundant exploration of the search space
Time/Space Requirements
– Exponential (both in the number of conflicts and the size of kernels)
Problems
– Complexity
Examples
Elimination
– SAB (Structural Abduction) [Dechter ’95]
Conditioning
– CBA* (Constraint-based A*) [Williams ’0?]
Conflict Generation
– GDE (General Diagnosis Engine) [de Kleer and Williams ’87]
Approximations
Approximate Elimination
– Local constraint propagation (incomplete)
Approximate Conditioning
– Hill climbing, particle filtering (incomplete)
Approximate Conflict Generation
– Focused ATMS, Sherlock (incomplete)
Hybrids
Elimination + Conditioning
– DCDR(b) [Rish and Dechter ’96]
Conditioning + Conflict Generation
– CDA* [Williams ’0?]
Elimination + Conflict Generation
– XC1 [Mauss and Tatar ’02]
Challenge: Elimination + Conditioning + Conflicts
Resources
Websites
– F. Scarcello’s homepage: http://ulisse.deis.unical.it/~frank
Software
– “optkdecomp” implements hypertree decomposition (Win32)
– “decompOpSat” implements tree-based optimization (Win32)
Papers
– Gottlob, Leone, Scarcello: A Comparison of Structural CSP Decomposition Methods. Artificial Intelligence 124(2), 2000
– Gottlob, Leone, Scarcello: On Tractable Queries and Constraints. DEXA ’99, Florence, Italy, 1999
– Dechter, Pearl: Tree Clustering for Constraint Networks. Artificial Intelligence 38(3), 1989