Shrinking and Expansion Algorithm Presentation

DESCRIPTION
Explanation of basic concepts of graph matching and of the paper "Fast Detection of Dense Subgraphs with Iterative Shrinking and Expansion" by Hairong Liu.

TRANSCRIPT
Fast Detection of Dense Subgraphs with Iterative Shrinking and Expansion
Hairong Liu, Longin Jan Latecki, Shuicheng Yan

Presented by Kamil Adamczewski
February 27, 2015
Kamil Adamczewski Computer Vision Lab 1/38
Overview

Introduction
• Graph
• Subgraph and its Embedding in Simplex
• Graph Affinity
• Weighted dense subgraphs
• Continuous Relaxation and KKT points

Shrinking and Expansion Algorithm
• Shrinking phase
• Replicator dynamics
• Expansion phase
• Time complexity
• Recovering the dense graph

Applications
• Correspondence problem
• Cluster analysis

Conclusions
Introduction

Graphs are an important representation for many real-world objects, such as the Internet, the shapes of natural objects, and traffic maps. Most of these objects have no corresponding vectorial representation.
A graph G is represented as G = (V, E, w), where
• V = {v_1, ..., v_n} is a set of n vertices,
• E ⊆ V × V is the edge set,
• w : E → R+ is the (nonnegative) weight function over the edge set.

Here:
• vertices in G represent data points or entities,
• edges represent pairwise relations,
• edge weights represent the strength of a pairwise relation.
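As a concrete illustration, a small weighted graph can be stored directly as a symmetric matrix; the following NumPy sketch uses a made-up 4-vertex graph (the edge weights are illustrative, not from the paper):

```python
import numpy as np

# A small undirected weighted graph on 4 vertices (illustrative weights),
# stored as a symmetric matrix A: a_ij = w(v_i, v_j) if the edge
# (v_i, v_j) exists, and a_ij = 0 otherwise.
edges = {(0, 1): 2.0, (0, 2): 1.5, (1, 2): 3.0, (2, 3): 0.5}

n = 4
A = np.zeros((n, n))
for (i, j), w in edges.items():
    A[i, j] = A[j, i] = w  # undirected: the weight function is symmetric

assert np.allclose(A, A.T)  # sanity check: A is symmetric
```

This matrix A is exactly the affinity matrix used in the Graph Affinity slides below.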
Subgraph

Definition of a subgraph: Let I = {1, ..., n} be the index set of the vertex set V. For any subset of vertices B ⊆ I, we form a subgraph G_B of G with the vertex set V_B = {v_i | i ∈ B}.
Embedding Subgraph in Simplex

Definition of an embedding G_B → ∆_G:
∆_G = {x ∈ R^n : x_i = 0 or x_i = 1/m, and |x|_1 = 1}, where m is the number of nonzero components of x, i.e. the number of vertices in the subgraph. Each x ∈ ∆_G represents a subgraph of G.

The indices of all nonzero components of x constitute its support, denoted σ(x) = {i | x_i ≠ 0}. Each subgraph G_σ(x) has a unique coordinate x ∈ ∆_G, and each point x ∈ ∆_G represents a unique subgraph of G.
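The embedding and its support can be sketched in a few lines (the helper names `embed` and `support` are ours, not from the paper):

```python
import numpy as np

def embed(B, n):
    """Map a vertex subset B ⊆ {0, ..., n-1} to its simplex coordinate:
    x_i = 1/|B| for i in B and 0 otherwise, so that |x|_1 = 1."""
    x = np.zeros(n)
    x[list(B)] = 1.0 / len(B)
    return x

def support(x):
    """σ(x): the indices of the nonzero components of x."""
    return set(np.flatnonzero(x))

x = embed({0, 2, 3}, n=5)
assert abs(x.sum() - 1.0) < 1e-12  # x lies on the simplex
assert support(x) == {0, 2, 3}     # the subgraph is recoverable from x
```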
Unweighted dense subgraphs

Let G = (V, E) be an undirected graph and let G_B = (V_B, E_B) be a subgraph of G. Then the density of G_B is defined to be d_B = |E_B| / |V_B|.

The densest subgraph problem is that of finding a subgraph of maximum density.
In 1984, Andrew V. Goldberg developed a polynomial timealgorithm to find the maximum density subgraph using a maxflow technique.
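A minimal sketch of the density computation, on a hypothetical triangle-plus-pendant graph:

```python
def density(vertices, edges):
    """Unweighted density d_B = |E_B| / |V_B| of the subgraph induced by `vertices`."""
    E_B = [(u, v) for (u, v) in edges if u in vertices and v in vertices]
    return len(E_B) / len(vertices)

# A triangle {0, 1, 2} with a pendant vertex 3 attached to vertex 2.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

assert density({0, 1, 2}, edges) == 1.0  # 3 edges / 3 vertices
assert density({2, 3}, edges) == 0.5     # 1 edge  / 2 vertices
```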
Graph Affinity

A = (a_ij) is an n × n symmetric matrix, where a_ij = w(v_i, v_j) if (v_i, v_j) ∈ E, and a_ij = 0 otherwise.

The affinity between two subgraphs is given as:

a(x, y) = Σ_{i,j} x_i a_ij y_j = x^T A y.   (1)

As a special case, we obtain the (sub)graph density:

a(x, x) = Σ_{i,j} x_i a_ij x_j = x^T A x,   (2)

which is the average affinity of the subgraph G_σ(x).
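Equations (1) and (2) are a single quadratic form; a short check on a hypothetical 4-vertex affinity matrix, comparing x^T A x against the explicit double sum:

```python
import numpy as np

# Hypothetical 4-vertex weighted graph, as a symmetric affinity matrix.
A = np.array([[0.0, 2.0, 1.5, 0.0],
              [2.0, 0.0, 3.0, 0.0],
              [1.5, 3.0, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.0]])

def affinity(x, y, A):
    """a(x, y) = Σ_{i,j} x_i a_ij y_j = x^T A y."""
    return x @ A @ y

# Coordinate of the subgraph on {0, 1, 2}: 1/3 on the support, 0 elsewhere.
x = np.array([1/3, 1/3, 1/3, 0.0])
g = affinity(x, x, A)  # density (average affinity) of G_σ(x)

# The quadratic form agrees with the explicit double sum in equation (2).
assert np.isclose(g, sum(x[i] * A[i, j] * x[j]
                         for i in range(4) for j in range(4)))
```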
Dense subgraphs
Dense subgraphs identify cliques of vertices that are highly related to each other (have high average affinity).

Such cohesiveness of pairwise relations is unlikely to be produced by accident and is not easily disturbed by noise and outliers.

Dense subgraphs may therefore robustly indicate key patterns underlying the graph.
Dense subgraphs

Examples:
1. In the World Wide Web, dense subgraphs might be communities or link spam.
2. In telephone call graphs, dense subgraphs might be groups of friends or families.
3. In machine learning, dense subgraphs appear in the one-class clustering/classification problem.
A dense subgraph should have large average affinity.

∆_G is a discrete set and therefore difficult to search:
∆_G = {x ∈ R^n : x_i = 0 or x_i = 1/m, and |x|_1 = 1}, where m is the number of vertices in the subgraph.

We therefore relax the constraint to the simplex ∆:
∆ = {x ∈ R^n : x ≥ 0 and |x|_1 = 1}.

The objective is:

maximize   g(x) = x^T A x
subject to x ∈ ∆
KKT points
A KKT point is a vector x satisfying the KKT conditions of our objective. All KKT points constitute a set Ψ, which contains all local maximizers of the function g(x); these are the potential subgraphs of interest.

maximize   g(x) = x^T A x
subject to x ∈ ∆   (3)
Lagrangian:

L(x, λ, μ) = g(x) − λ (Σ_{i=1}^{n} x_i − 1) + Σ_{i=1}^{n} μ_i x_i.   (4)

KKT conditions:

2(Ax*)_i − λ + μ_i = 0,   i = 1, ..., n,
Σ_{i=1}^{n} x*_i μ_i = 0   (5)

or equivalently

(Ax*)_i = λ/2 if i ∈ σ(x*),
(Ax*)_i ≤ λ/2 if i ∉ σ(x*).   (6)
In many situations, the graph G is very large. However, its dense subgraphs are usually limited to small subsets of vertices of G. In such a case, when computing x we can restrict the computation to small subgraphs of G, thus greatly reducing the time complexity.
Define a value that describes the affinity of a vertex e_i to the vertices in the support, called the reward at e_i. It is also the i-th coordinate of the vector Ax*:

r_i(x*) = (Ax*)_i = Σ_j a_ij x*_j = e_i^T A x*
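The whole reward vector is one matrix-vector product; a sketch on a hypothetical affinity matrix (note that here the rewards on the support differ, so this x is not a KKT point):

```python
import numpy as np

# Hypothetical 4-vertex affinity matrix.
A = np.array([[0.0, 2.0, 1.5, 0.0],
              [2.0, 0.0, 3.0, 0.0],
              [1.5, 3.0, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.0]])

x = np.array([1/3, 1/3, 1/3, 0.0])  # coordinate of the subgraph on {0, 1, 2}

# r_i(x) = (Ax)_i = Σ_j a_ij x_j: the rewards at all vertices at once.
rewards = A @ x

# Vertex 3 is outside the support; its reward measures its affinity to the subgraph.
assert np.isclose(rewards[3], 0.5 / 3)
```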
Relationship between a KKT point of a graph G and its subgraph G_σ(x*):

Theorem
If x* is a KKT point of our objective, then
1. the rewards at the vertices belonging to the subgraph G_σ(x*) are identical,
2. the rewards at the vertices not belonging to the subgraph G_σ(x*) are not larger than the rewards at the vertices belonging to it.
Conversely, if a point x satisfies both 1 and 2, then it is a KKT point of our objective.
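The theorem gives a simple numerical test for KKT points; a sketch (the function name and tolerance are our assumptions):

```python
import numpy as np

def is_kkt(x, A, tol=1e-9):
    """Numerical check of the theorem: the rewards r_i = (Ax)_i must be
    identical on the support of x and not larger off the support."""
    r = A @ x
    on = r[x > tol]    # rewards at vertices in the subgraph
    off = r[x <= tol]  # rewards at the remaining vertices
    equal_on = np.ptp(on) < tol
    bounded_off = off.size == 0 or off.max() <= on.max() + tol
    return equal_on and bounded_off

# A heavy triangle {0, 1, 2} with a weak edge to vertex 3 (illustrative).
A = np.array([[0.0, 1.0, 1.0, 0.1],
              [1.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 0.0],
              [0.1, 0.0, 0.0, 0.0]])

assert is_kkt(np.array([1/3, 1/3, 1/3, 0.0]), A)      # the triangle is a KKT point
assert not is_kkt(np.array([0.5, 0.5, 0.0, 0.0]), A)  # a single edge is not
```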
Shrinking and Expansion Algorithm

The Shrinking and Expansion Algorithm:
• can efficiently locate a KKT point of our objective,
• always works on a small subgraph G_B ⊆ G,
• adaptively updates G_B until a KKT point of G has been located.
Two phases

• Shrinking phase: a KKT point x*_B of the current subgraph G_B is obtained (it is a subgraph of a subgraph), and so G_B shrinks to its subgraph.
• Expansion phase: the vertices that have strong relations with the subgraph G_B are added and form the new subgraph G'_B.

These two phases iterate until no vertex can be added in the expansion phase. In both phases, the density function g(x) always increases, and g(x) is upper bounded; thus the convergence of this algorithm is guaranteed.
Shrinking phase

Replicator equation:

(x_B)_i(t+1) = (x_B)_i(t) (A_B x_B(t))_i / (x_B(t)^T A_B x_B(t)),   i ∈ B.   (7)
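A minimal sketch of the shrink phase as repeated application of equation (7); the example graph (a heavy triangle weakly attached to a fourth vertex) and the convergence tolerance are illustrative assumptions:

```python
import numpy as np

def shrink(A_B, x, iters=1000, tol=1e-12):
    """Shrink phase: iterate the replicator equation
    x_i <- x_i (A_B x)_i / (x^T A_B x) until (approximate) convergence.
    Components driven toward zero drop out of the subgraph."""
    for _ in range(iters):
        Ax = A_B @ x
        new = x * Ax / (x @ Ax)
        done = np.linalg.norm(new - x, 1) < tol
        x = new
        if done:
            break
    return x

# Hypothetical graph: a heavy triangle {0, 1, 2} weakly attached to vertex 3.
A = np.array([[0.0, 1.0, 1.0, 0.1],
              [1.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 0.0],
              [0.1, 0.0, 0.0, 0.0]])

x_star = shrink(A, np.full(4, 0.25))   # start at the barycenter of the simplex

assert abs(x_star.sum() - 1.0) < 1e-9  # the simplex is invariant under (7)
assert x_star[3] < 1e-6                # the weakly attached vertex is dropped
```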
Replicator dynamics

Consider a set of choices: S1, S2, S3.
They have payoffs π(i): 4, 2, 7.
Proportions of people that select them, Pr(i): 20%, 70%, 10%.

People choose:
• the option with the highest payoff,
• what other people choose.

What will people choose in the future?
Combine the two factors, π(i) Pr(i), and compute the change:

Pr_{t+1}(i) = Pr_t(i) π(i) / Σ_{j=1}^{N} Pr_t(j) π(j)   (8)

e.g.

Pr(1) = 4 · 20% / (4 · 20% + 2 · 70% + 7 · 10%) ≈ 0.28.

Similarly, Pr(2) ≈ 0.48 and Pr(3) ≈ 0.24.
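The worked example above in a few lines of Python:

```python
payoffs = [4, 2, 7]
shares  = [0.20, 0.70, 0.10]  # Pr_t(i)

total = sum(p * s for p, s in zip(payoffs, shares))            # Σ_j Pr_t(j) π(j) = 2.9
new_shares = [p * s / total for p, s in zip(payoffs, shares)]  # equation (8)

assert [round(s, 2) for s in new_shares] == [0.28, 0.48, 0.24]
```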
Replicator equation:

(x_B)_i(t+1) = (x_B)_i(t) (A_B x_B(t))_i / (x_B(t)^T A_B x_B(t)),   i ∈ B.   (9)

• The simplex ∆ is invariant under these dynamics, which means that every trajectory starting in ∆ will remain in ∆.
• When A_B is symmetric and has nonnegative entries, the objective function g(x) = x^T A_B x strictly increases along any nonconstant trajectory.
• Asymptotically stable points x are in one-to-one correspondence with strict local maximizers.
Shrinking

• The replicator equation has a nice property: if (x_B)_i(t) = 0, then (x_B)_i(t+1) = 0, and (x_B)_i(t) does not affect the computation of (x_B)_j(t), j ≠ i.
• During the evolution procedure, the replicator dynamics can drop vertices; thus the current graph shrinks.
• When the KKT point x*_B is detected, the current graph shrinks to the subgraph G_σ(x*_B) (a subgraph of the subgraph).
• x*_B is a KKT point of the subgraph G_B, but may not be a KKT point of the graph G. In the latter case, we must expand the current subgraph.
Expansion phase

• Reminder: r_i(x*) = a(x*, e_i) is the reward of vertex i, i.e. the affinity of vertex i to the vertices in the given subgraph.
• Look at the adjacent vertices.
• Compute the rewards at the neighbors of the current subgraph.
• Choose the threshold g(x*) and search for vertices whose reward is larger than g(x*).
• In the implementation, the authors usually set a threshold K1 and add the vertices with the K1 largest rewards.
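One expansion step can be sketched as follows; picking the K1 outside vertices with the largest positive rewards follows the slide, while the example graph is a made-up illustration:

```python
import numpy as np

def expand(A, support, K1=2):
    """One expansion step (a sketch): compute the rewards at vertices outside
    the current subgraph and add those with the K1 largest positive rewards."""
    n = A.shape[0]
    x = np.zeros(n)
    x[list(support)] = 1.0 / len(support)  # coordinate of the current subgraph
    rewards = A @ x                        # r_i(x) for every vertex
    outside = [i for i in range(n) if i not in support and rewards[i] > 0]
    best = sorted(outside, key=lambda i: -rewards[i])[:K1]
    return support | set(best)

# Hypothetical graph: vertex 3 is strongly tied to the current subgraph {0, 1},
# vertex 4 only weakly, and vertex 2 not at all.
A = np.array([[0.0, 1.0, 0.0, 2.0, 0.2],
              [1.0, 0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0],
              [2.0, 2.0, 0.0, 0.0, 0.0],
              [0.2, 0.0, 0.0, 0.0, 0.0]])

assert expand(A, {0, 1}, K1=1) == {0, 1, 3}  # vertex 3 has the largest reward
```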
• When g(x*) is small (usually in the first few iterations), the number of vertices with a higher reward may naturally be large.
• To control the time complexity, we control the size of the current subgraph; hence, we add only some vertices with relatively larger rewards to the current subgraph.
Time complexity

• h: the number of edges in the subgraph,
• t1: the number of iterations of the replicator equation,
• t2: the number of iterations of the shrink and expansion phases.

O(t1 h): shrink phase time complexity,
O(h): space complexity,
O(t1 t2 h): total time complexity of the SEA algorithm.
Recovering the dense graph

Find a discrete x that approximates the continuous x*.

1. Sort the components of x* in descending order; set the max objective f to −∞.
2. For each set of top i elements, construct the vector x whose top i elements have nonzero entries equal to 1/i.
3. If x^T A x > f, set f = x^T A x; otherwise break.
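A sketch of this discretization procedure (variable names are ours); on the example below, the continuous x* concentrates on a triangle and the discrete solution recovers exactly that triangle:

```python
import numpy as np

def discretize(x_star, A):
    """Recover a discrete subgraph from the continuous x*: try the top-i
    supports in descending order of x*_i, keep the best objective seen,
    and stop as soon as the objective drops."""
    order = np.argsort(-x_star)
    best_f, best_x = -np.inf, None
    for i in range(1, len(x_star) + 1):
        x = np.zeros_like(x_star)
        x[order[:i]] = 1.0 / i  # characteristic vector of the top-i set
        f = x @ A @ x
        if f > best_f:
            best_f, best_x = f, x
        else:
            break
    return best_x, best_f

# Heavy triangle {0, 1, 2} with a weak edge to vertex 3; x* concentrates
# on the triangle (the values are illustrative).
A = np.array([[0.0, 1.0, 1.0, 0.1],
              [1.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 0.0],
              [0.1, 0.0, 0.0, 0.0]])
x_star = np.array([0.34, 0.33, 0.32, 0.01])

x, f = discretize(x_star, A)
assert set(np.flatnonzero(x)) == {0, 1, 2}  # the triangle is recovered
```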
Applications

• Correspondence problem
• Cluster analysis
Correspondence problem

The correspondence problem:
• Point Set Matching
• Near Duplicate Image Retrieval
Figure: (a) Extract SIFT features (for clarity, only a small subset of the candidate correspondences are shown), then form the correspondence graph G in (b) and the weighted adjacency matrix A in (c). The correct correspondences shown in (d) form a dense subgraph of G, and thus correspond to the dense block.
Point Set Matching

1. Generate a dataset Q of 2D model points by randomly selecting inliers in a given region of the plane.
2. Obtain the corresponding inliers P by independently disturbing the points from Q with white Gaussian noise.
3. Rotate and translate the whole dataset Q with a random rotation and translation.
4. Add outliers to Q and P, respectively.
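The four steps can be sketched as follows; all parameter values (counts, noise level, region size) are illustrative assumptions, not the paper's experimental settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_point_sets(n_inliers=20, n_outliers=5, sigma=0.05, region=10.0):
    """Generate a synthetic point-set matching problem following the four
    steps above (all parameter values are illustrative)."""
    Q = rng.uniform(0, region, size=(n_inliers, 2))  # 1. random 2D model points
    P = Q + rng.normal(0, sigma, size=Q.shape)       # 2. white Gaussian noise
    theta = rng.uniform(0, 2 * np.pi)                # 3. random rotation...
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    Q = Q @ R.T + rng.uniform(0, region, size=2)     #    ...and translation of Q
    Q = np.vstack([Q, rng.uniform(0, region, size=(n_outliers, 2))])  # 4. outliers
    P = np.vstack([P, rng.uniform(0, region, size=(n_outliers, 2))])
    return Q, P

Q, P = make_point_sets()
assert Q.shape == P.shape == (25, 2)  # inliers plus outliers in each set
```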
Figure: Top row: the performance curves as the deformation noise varies; (a) no outliers and (b) 30 outliers. Bottom row: the performance curves as the number of outliers changes; (c) no deformation noise and (d) σ = 4.
Near Duplicate Image Retrieval

Dataset: Columbia database, 600 photos
• 150 near-duplicate pairs
• 300 non-duplicate images

Pre-processing:
• Use SIFT features.
• Construct the candidate correspondence set M: a point P in the first image is matched to a point Q in the second image only if their distance (multiplied by a threshold) is not greater than the distance of P to other points in the second image (as suggested by Lowe).

For near-duplicate images, there should be dense correspondences between them.
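The matching criterion described above is Lowe's ratio test; a sketch on tiny synthetic descriptors (the ratio value 0.8 is an illustrative assumption):

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.8):
    """Lowe-style ratio test (a sketch): point i in the first image matches its
    nearest neighbor in the second image only if the nearest distance is at
    most `ratio` times the second-nearest distance, i.e. the match is
    unambiguous."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        nn, second = np.argsort(dists)[:2]
        if dists[nn] <= ratio * dists[second]:
            matches.append((i, int(nn)))
    return matches

# Tiny synthetic descriptors: the first point has an unambiguous nearest
# neighbor; the second is ambiguous (two equally close candidates) and is rejected.
desc1 = np.array([[0.0, 0.0], [5.0, 5.0]])
desc2 = np.array([[0.1, 0.0], [3.0, 3.0], [4.0, 4.0], [6.0, 6.0]])

assert ratio_match(desc1, desc2) == [(0, 0)]
```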
Figure: Comparison of cumulative accuracy of near duplicate imageretrieval on the Columbia database.
Cluster analysis

• The Shrinking and Expansion Algorithm can be used as a clustering tool: all the vertices evolving toward the same KKT point should belong to the same cluster.
• It extracts dense clusters from a cluttered background.
• It allows one to extract as many clusters as desired, while leaving the remaining points (namely, the clutter) ungrouped.
Figure: Clustering on data with uniformly distributed background points: (a) the dataset, (b) the clustering result of k-means, (c) the clustering result of SC (spectral clustering), and (d) the clustering result of our method.
Conclusions
• The shrinking and expansion algorithm is an efficient algorithm to detect dense subgraphs of a weighted graph.
• For the current subgraph, the expansion phase adds the most related vertices based on the average affinity between each vertex and the subgraph.
• The shrinking phase filters out the vertices in the current subgraph whose affinity to the other vertices is small.
• It operates on small subgraphs.
• It shows very good performance in applications: the correspondence problem and clustering.