
Distributed Verification of Minimum Spanning Trees∗

    Amos Korman † Shay Kutten ‡

    Abstract

    The problem of verifying a Minimum Spanning Tree (MST) was introduced by Tarjan

    in a sequential setting. Given a graph and a tree that spans it, the algorithm is required to

    check whether this tree is an MST. This paper investigates the problem in the distributed

    setting, where the input is given in a distributed manner, i.e., every node “knows” which of

    its own emanating edges belong to the tree. Informally, the distributed MST verification

    problem is the following. Label the vertices of the graph in such a way that for every

    node, given (its own state and label and) the labels of its neighbors only, the node can

    detect whether these edges are indeed its MST edges. In this paper, we present such a

    verification scheme with a maximum label size of O(log n log W ), where n is the number

    of nodes and W is the largest weight of an edge. We also give a matching lower bound

of Ω(log n log W ) (as long as W > (log n)^{1+ε} for some fixed ε > 0). Both our bounds

    improve previously known bounds for the problem.

    For the related problem of tree sensitivity also presented by Tarjan, our method yields

    rather efficient schemes for both the distributed and the sequential settings.

    Keywords: network algorithms, graph property verification, labeling schemes, minimum span-

    ning tree, proof labeling, self stabilization.

∗A preliminary version of this work was presented in ACM PODC 2006.
†Information Systems Group, Faculty of IE&M, The Technion, Haifa, 32000 Israel. E-mail:

[email protected]. Supported in part at the Technion by an Aly Kaufman fellowship.
‡Contact person. Information Systems Group, Faculty of IE&M, The Technion, Haifa, 32000 Israel. E-mail:

    [email protected]. Fax: +972 (4) 829 5688. Phone: +972 (4) 829 4505. Supported in part by a

    grant from the Israeli Ministry for Science and Technology.


1 Introduction

    1.1 Background and Motivation

    The problem of verifying a Minimum Spanning Tree (MST) was introduced by Tarjan [34, 29]

    in the context of sequential algorithms. A weighted graph is given, together with a tree that

    spans it, and it is required to decide whether this tree is indeed an MST of the graph. Improved

algorithms in different sequential models were given by Harel [17], Komlós [23], and Dixon,

    Rauch, and Tarjan [6] who improved the original results of Tarjan. (The same result with a

    simpler algorithm was later presented by King [27].) Parallel algorithms were presented by

    Dixon and Tarjan [7] and by King, Poon, Ramachandran, and Sinha [28]. One application

    of the algorithm of [6] is as a subroutine for an algorithm that computes an MST, see e.g.

    [22, 33]. A related motivation for the problem is that the verification seems easier than the

    computation. A linear (in the number of edges) time algorithm for computing an MST is known

    only in certain cases, or by a randomized algorithm [12, 22]. On the other hand, the verification

    algorithm of [6] is linear (in its sequential running time).

    The motivation for verification is even stronger in a distributed setting. There, the tree is

    given in a distributed manner, such that every vertex marks locally some of its own emanating

    edges, and the collection of the marked edges (of all the nodes) is the tree, see e.g. [5, 13, 4].

    Computing such a tree distributively requires a computation that involves all the network

    nodes, and involves messages sent to remote nodes and waiting for replies. Intuitively, it is

    much harder than computing an MST sequentially. On the other hand, in the verification task,

    it is desired that every node, looking only at itself and at its neighbors, is able to determine

    whether its edges that are marked as belonging to the MST indeed belong to it. Initial upper

    and lower bounds for the distributed verification of an MST are given in [25] together with

    the model for distributed verification we follow here. It turns out that neither of the bounds

    presented therein is optimal.

    A common application of local distributed verification is in the context of self stabilization.

    See, for example, the local detection [1], or the local checking [2], or the silent stabilization [8].

Self stabilization deals with algorithms that must cope with faults that are rather severe, though

    of a type that does occur in reality [19, 18]. The faults may cause the states of different nodes

    to be inconsistent with each other. For example, the collection of marked edges may not be a

    tree, or may not be an MST. Self stabilizing algorithms thus often use distributed verification

    repeatedly. If the verification fails, then the output (e.g. the MST) is recomputed. An efficient


verification algorithm thus saves repeatedly in communication.

    Another motivation is theoretical. A fundamental issue studied for distributed computing

    is the following question: how efficient is the translation from results for sequential models to

    results in distributed models? Quite a few papers dealing with the (distributed) time cost of

    the translations appear in the literature, e.g. [31, 32, 26, 24]. In particular, interest has been

    shown in translations that take one (or a constant) time unit. The current paper deals with

    a unit time translation (for Tarjan’s MST verification problem) and with its cost in terms of

    the size of the labels in nodes, as well as in terms of the communication needed to compare

    this information between neighboring nodes. A (semi) automatic paradigm for translating a

    sequential verification algorithm to a distributed one can be derived from [25]. Unfortunately,

    when translating the known sequential MST verification algorithms, the resulting distributed

    verification algorithms turn out to be inefficient.

    Techniques used for solving the MST verification problem (in the sequential setting) were

    previously shown to be useful for the related problem of sensitivity testing. Given a graph and

    an MST of the graph, the task of this problem is the following. Label every edge (u, v) with the

    minimum number c(u, v) such that if the weight of the edge changes by c(u, v) (and the weights

of the rest of the edges remain the same) then the given tree is no longer minimum. Note that the output of any algorithm solving the sensitivity testing problem must use Ω(|E| · log W ) bits in some cases, where W is the maximum edge weight.

In [29], Tarjan extended his MST verification algorithm to an algorithm solving the

sensitivity testing problem in O(|E| · α(|E|, n)) time, where α is the functional inverse of Ackermann’s function. For the special case of planar graphs, Booth and Westbrook [3] gave an

algorithm solving the sensitivity problem in O(|E|) time. In [6], Dixon, Rauch, and Tarjan describe a randomized O(|E|)-time algorithm and a deterministic algorithm that runs in time within a constant factor of the unknown optimum (which is between O(|E| · α(|E|, n)) and Ω(|E|)). These results assume a model for time which allows very simple operations on edge costs to be performed in unit

    time. Let us note that in a stronger model for time, in which bit manipulations of edge costs

are possible in unit time, somewhat stronger results were achieved. In particular, Harel ([17])

    showed that if the edge costs are polynomial in n then the sensitivity problem can be solved

    by a deterministic algorithm that runs in O(|E|) time.

Our results. We first present an algorithm solving the distributed MST-verification problem

    using O(log n log W )-bit labels, where n is the number of nodes and W is the largest weight of


an edge. We note that our scheme can be applied to any given MST, even when the MST is not

unique. This result improves the corresponding O(log^2 n + log n log W ) size scheme presented

in [25]. We also show a matching lower bound of Ω(log n log W ) (as long as W > (log n)^{1+ε} for some fixed ε > 0). This improves the corresponding lower bound of Ω(log n + log W ) given in

    [25] and answers a question left open there. Proving this lower bound was the main technical

    difficulty overcome in this study.

Let us also mention that one of the tools constructed here for the upper bound proof

    also improves the previously known upper bound for implicit labeling schemes supporting the

    FLOW function on trees [21].

    Returning to the sequential setting, our distributed MST verification algorithm yields a

    result for a slightly weaker version of the sensitivity problem. We relax the required output

    as follows: instead of writing the sensitivity of each edge explicitly, we write some auxiliary

    information. Later, when queried about the sensitivity of an edge, we are allowed to perform

    a constant time computation to derive the sensitivity of that edge, using the above auxiliary

    information. We provide a rather efficient algorithm for this version of the sensitivity problem

    as well as for a distributed version of this problem.

    2 Preliminaries

The model described here is taken from [25]. We consider distributed systems that are represented by weighted undirected connected graphs G = 〈V, E〉. For an edge e ∈ E, let ω(e) denote the (integral) weight of e. Every node v has internal ports, each corresponding to one

    of the edges attached to v. The ports are numbered from 1 to deg(v) (the degree of v) by an

    internal numbering known only to node v. For every vertex v, let N(v) denote the set of edges

adjacent to v. Note that deg(v) = |N(v)|. Let F(n, W ) (respectively, T (n, W )) denote the family of all graphs (resp., trees) with at

    most n vertices whose edge weights are bounded from above by W .

    Given a vertex v, let sv denote the state of v and let vs = (v, sv). A configuration graph

corresponding to a graph G = 〈V, E〉 is a graph Gs = 〈Vs, Es〉, where Vs = {vs | v ∈ V } and (vs, us) ∈ Es iff (v, u) ∈ E. In some cases, we may assume that each vertex v has a unique identity id(v) (i.e., for every pair of vertices v and u, it is given that id(u) ≠ id(v)), which may be encoded using O(log n) bits. In these cases, we assume that for each vertex v, its identity


is encoded in its state, i.e., the state sv is referred to as having two fields: sv = (id(v), s′(v)).

When the context is clear, we may refer to s′(v) as the state of v (instead of sv). Note the

    difference between a reference to a node v and to id(v). The first is useful when we discuss

    the case that nodes do not have unique identities “known” to the nodes. In this case, we still

    want to be able to refer to specific nodes, so we use the term v which is unique, but cannot be

    accessed by an algorithm.

A family of configuration graphs Fs corresponding to a graph family F consists of configuration graphs Gs ∈ Fs for each G ∈ F . Let FS be the largest possible such family. When it is clear from the context, we use the term “graph” instead of “configuration graph”. We may

    also use the notation v instead of vs.

    We consider distributed representations of subgraphs, encoded in the collection of the nodes’

    states. There can be many such representations. For simplicity, we focus on the case that an

    edge is included in the subgraph if it is explicitly pointed at by the state of one of its endpoints.

    This is formalized in the following definition.

Definition 2.1 Given a configuration graph Gs = 〈Vs, Es〉, the subgraph induced by the states of Gs is defined as follows. The set of vertices of the subgraph is V . The state of each node

contains a specific field (or fields) for storing names (that is, numbers) of ports. An edge

(u, v) ∈ G belongs to the subgraph iff the state of one of its endpoints (say su) includes (in one of the above specific fields) the name of u’s port that points at v.

    For example, a spanning tree can be represented as follows. The state of every vertex u contains

    a field that is u’s port number leading to its parent in the tree. This field at the root is empty.
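To make Definition 2.1 and the parent-pointer example concrete, here is a minimal sketch (function and variable names are hypothetical, not from the paper) that recovers the induced edge set from such states:

```python
def induced_tree_edges(ports, parent_port):
    """Recover the subgraph induced by the states, assuming the
    parent-pointer representation: ports[v][p] is the neighbor reached
    through port p of v, and parent_port[v] is the port field stored in
    v's state (None at the root). An edge belongs to the subgraph iff
    the state of one of its endpoints points at it."""
    return {frozenset((v, ports[v][p]))
            for v, p in parent_port.items() if p is not None}
```

For a path 0–1–2 rooted at 0, with each node numbering its ports by an internal numbering of its own, this returns exactly the two tree edges.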

    Consider a graph G. A distributed problem Prob is the task of selecting a state sv for each

    vertex v, such that Gs satisfies a given predicate fProb. This induces the problem Prob on a

graph family F in the natural way. We say that fProb is the characteristic function of Prob over F . In this paper, Prob is the following problem, MST: assign states to the vertices of a given graph G such that the subgraph induced by the states of G is a minimum spanning tree

    (MST) of G.

    Proof labeling schemes deal with the task of adding labels to configuration graphs in order

    to maintain a (locally checkable) distributed proof that the given configuration graph satisfies a

    given predicate fProb. Informally, a proof labeling scheme includes a marker algorithm M that

    generates a label for every node, and a verifier algorithm that compares labels of neighboring

    nodes. If a configuration graph satisfies fProb, then the verifier at every two neighboring nodes


declares their labels (produced by marker M) “consistent” with each other. However, if the

    configuration graph does not satisfy fProb, then for any possible marker algorithm L, the verifier

    must declare “inconsistencies” between some neighboring nodes in the labels produced by the

    marker L. It is not required that the marker be distributed. However, the verifier is distributed

    and local, i.e., every node can check only the labels of its neighbors (and its own label and

    state).

More formally, a marker algorithm L is an algorithm that, given a graph Gs ∈ Fs, assigns a label L(vs) to each vertex vs ∈ Gs. For a marker algorithm L and a vertex vs ∈ Gs, let N′L(v) be a set of deg(v) fields, one field per neighbor. Each field e = (v, u) in N′L(v), corresponding

    to edge e ∈ N(v), contains the following.

    • The port number of e in v.

    • The weight of e.

    • The label L(u).

Let NL(v) = 〈(sv, L(v)), N′L(v)〉. Informally, N′L(v) contains the labels given to all of v’s neighbors, along with the port numbers and the weights of the edges connecting v to them.

    NL(v) contains v’s state and label as well as N′L(v). A verifier algorithm V is an algorithm

which is applied separately at each vertex v ∈ G. When V is applied at a vertex v, its input is NL(v) and its output, V(v, L), is boolean.

Let f be some characteristic function of a problem over some family F . Let Fs be some family of configuration graphs corresponding to F . A proof labeling scheme π = 〈M,V〉 for Fs and f is composed of a marker algorithm M and a verifier algorithm V, such that the following two properties hold.

    1. For every Gs ∈ Fs, if f(Gs) = 1 then V(v,M) = 1 for every vertex v ∈ G.

2. For every Gs ∈ Fs, if f(Gs) = 0 then for every marker algorithm L there exists a vertex v ∈ G so that V(v, L) = 0.

    We note that the proof labeling schemes constructed in this paper use a polynomial time verifier

algorithm. The size of a proof labeling scheme π = 〈M,V〉 is the maximum number of bits used in a label M(vs) over all vs ∈ Gs and all Gs ∈ Fs.


For some of our proofs we make use of a different type of distributed representation, referred to as an implicit labeling scheme ([20, 15]). These were introduced in the past

    for a different purpose. For example, they can be viewed as a generalization of a routing table:

    given the labels of nodes u and v, find the first node (other than u) on the route from u to v.

    Formally, let g be a function on pairs of vertices of a graph. An implicit labeling scheme γ =

    〈E ,D〉 supporting function g on a family of graphs F is composed of the following components:

    1. An encoder algorithm E that given a graph in F , assigns labels to its vertices.

2. A decoder algorithm D that, given the labels E(u) and E(v) (assigned by the encoder E to two vertices u and v, not necessarily neighbors), outputs g(u, v).

    The size of an implicit labeling scheme is the maximum number of bits in a label assigned by

the encoder E to any vertex in any graph in F . Note that, unlike the case of the verifier (in proof labeling schemes), instead of receiving the

    labels of neighbors as input, the decoder (in implicit labeling schemes) receives two labels of

    any two nodes as input. An additional difference between proof labeling schemes and implicit

labeling schemes is the following. In an implicit labeling scheme, the answer of the decoder is

    some function of the two specific nodes given as inputs. It does not seem easy to define such a

    function for a proof labeling scheme. For example, if the answer of a correct verifier for some

    graph and function should be zero (e.g., the states of the graph do not encode an MST) then

    one correct verifier V1 may output zero for some node, while another correct verifier V2 may

    output 1 for that node, but zero for some other node. Hence, for a proof labeling scheme it is

    hard to claim that what the verifier finds is a function of some specific nodes (unlike the case of

    a decoder in implicit labeling schemes). Still, we find it interesting that techniques from each

    of these kinds of schemes helped, or inspired our proofs and constructions for the other kind.

    Let us define two functions for which we construct and use implicit labeling schemes. Given

a tree T ∈ T (n, W ) and two vertices u and v in T , MAX(u, v) is the maximum weight of an edge on the path connecting u and v in T . Similarly, FLOW (u, v) is the minimum weight of

    an edge on the path connecting u and v in T . (Our constructions are based on the methods

    used for the approximation distance labeling scheme of [14].)

    In our schemes, the labels given to the vertices may contain several fields of different sizes.

    We assume that it is possible to perform operations on individual fields, e.g., to compare an

    individual field between the labels of two neighbors. Clearly, this can be performed with the

    aid of a sequential data structure that does not increase the order of the size of a label.


Regarding the sensitivity problem, we use the same model for time as used in [6]. I.e., we

    allow edge costs to be compared, added, or subtracted at unit cost, and side computations to

    be performed on a unit-cost random-access machine with word size Ω(log n) bits.

Let us illustrate the definitions using an example presented in [25], which is in the spirit of [35]. Note that v’s neighbors cannot ‘see’ the state of v but they can see v’s label.

The Agreement in anonymous families problem [25]: assign all the nodes identical states.

Let S = {1, 2, · · · , 2m}. Note that the corresponding computation task, that of assigning every node the same state,

    requires only states of size 1. The verification is harder, as was shown in the following lemma.

    Lemma 2.2 [25] The proof size of FallS (the family of all the graphs) and fAgreement is Θ(m).

Proof: We first describe a trivial proof labeling scheme π = 〈M,V〉 of the desired size m. Given Gs such that fAgreement(Gs) = 1, for every vertex v, let M(v) = sv. I.e., we just copy the state of node v into its label. Then, V(v, L) simply verifies that L(v) = sv and that L(v) = L(u) for every neighbor u of node v. It is clear that π is a correct proof labeling scheme for FallS and fAgreement of size m. We now show that the above bound is tight up to a multiplicative constant

factor, even assuming that FallS is id-based. Consider the connected graph G with two vertices v and u. Assume, by way of contradiction, that there is a proof labeling scheme π = 〈M,V〉 for FallS and fAgreement of size less than m/2. For i ∈ S, let Gis be G modified so that both u and v have state s(u) = s(v) = i. Obviously, fAgreement(Gis) = 1 for every i. For a vertex x, let Mi(x) be the label given to x by marker M applied on Gis. Let Li = (Mi(v),Mi(u)). Since the number of bits in Li is assumed to be less than m, there exist i, j ∈ S such that i < j and Li = Lj. Let Gs be G modified so that su = i and sv = j. Let L be the marker algorithm for Gs in which L(u) = Mi(u) and L(v) = Mj(v). Then, for each vertex x, V(x, L) = 1, contradicting the fact that f(Gs) = 0.
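The trivial scheme in the first half of this proof is easy to simulate centrally; a minimal sketch (all names hypothetical) of the marker that copies states into labels and the local verifier that compares them:

```python
def marker(states):
    """Marker M: copy each node's state into its label."""
    return dict(states)

def verifier(v, states, labels, adj):
    """Verifier at node v: accept iff v's label equals its own state
    and equals the label of every neighbor of v."""
    return labels[v] == states[v] and all(labels[u] == labels[v] for u in adj[v])
```

On a two-node graph with equal states every node accepts, while with unequal states at least one node rejects; the lower-bound half of the proof shows that no marker with labels much shorter than m can avoid this.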

    3 Upper bound for the MST problem

In this section, we construct a proof labeling scheme πmst = 〈Mmst,Vmst〉 for fMST and FS(n, W ) with size O(log n · log W ). Recall that the main technical difficulty overcome in this paper is the proof of the lower bound, which is presented later.

    Like the verification scheme of [6], our proof labeling scheme πmst is based on the well known


fact that given a graph G, a spanning tree T (of G) is an MST iff for every edge e = (u, v) ∈ G, its weight ω(e) is at least as large as MAX(u, v) (calculated on T ) [30].
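This characterization can be checked directly on small instances. The following centralized brute-force sketch (hypothetical names; this is not the labeling scheme itself) tests whether a given spanning tree satisfies the condition:

```python
def path_max(tree_adj, u, v):
    """Maximum edge weight on the unique u-v path in a tree, by DFS."""
    def dfs(x, parent, best):
        if x == v:
            return best
        for y, w in tree_adj[x]:
            if y != parent:
                r = dfs(y, x, max(best, w))
                if r is not None:
                    return r
        return None
    return dfs(u, None, 0)

def is_mst(n, edges, tree_edges):
    """A spanning tree is an MST iff every graph edge (u, v) of weight w
    satisfies w >= MAX(u, v), the heaviest weight on the u-v tree path."""
    tree_adj = {i: [] for i in range(n)}
    for u, v, w in tree_edges:
        tree_adj[u].append((v, w))
        tree_adj[v].append((u, w))
    return all(w >= path_max(tree_adj, u, v) for u, v, w in edges)
```

For tree edges the test holds trivially (the path is the edge itself), so only the non-tree edges can cause rejection.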

It was proved in [25] that the problem of verifying an MST can be split into two: (1) verifying that it is a spanning tree, and (2) verifying that the spanning tree is minimal. Moreover,

in [25] they show how to solve problem (1) above using a relatively simple construction. Intuitively, in the construction of [25] (a direct translation of existing self stabilizing tree protocols

    such as [1, 9, 10]), the label includes the unique identity of some root node, as well as the

    distance to that root and a pointer to the parent towards that root. Clearly, the size of the

label is O(log n). The verifier at each node v needs to verify that (1) the root of each of its

    neighbors is the same as the root of v, (2) that the distance of the parent is smaller by 1 than

    the distance of v, and (3) that if the node is at distance 0 then its own identity is the identity

    of the root (and it has no parent). Hence, we assume that the given tree is indeed a spanning

tree, and concentrate on verifying whether T is minimal or not.
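Run at every node, the three local tests above can be sketched as follows (a centralized simulation with hypothetical names; in the distributed setting each node runs only the body of the loop, on its own neighborhood):

```python
def verify_spanning_tree(adj, ids, labels, parent):
    """labels[v] = (root_id, dist); parent[v] is v's tree parent or None;
    ids[v] is v's unique identity; adj[v] lists v's neighbors."""
    for v in adj:
        root, dist = labels[v]
        # (1) every neighbor must report the same root
        if any(labels[u][0] != root for u in adj[v]):
            return False
        # (2) the parent, if any, must be exactly one hop closer to the root
        if parent[v] is not None and labels[parent[v]][1] != dist - 1:
            return False
        if dist > 0 and parent[v] is None:
            return False
        # (3) a node at distance 0 must itself be the root, with no parent
        if dist == 0 and (ids[v] != root or parent[v] is not None):
            return False
    return True
```

The extra check that a non-root node has a parent is an assumption of this sketch; it rules out configurations where the marked edges do not reach the root.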

    We first establish a family Γ of implicit labeling schemes supporting the function MAX(u, v)

on T (n, W ). Second, we select an implicit labeling scheme γ ∈ Γ and the proof labeling scheme labels every vertex in T according to γ. For the proof size to be small, we find a specific

γsmall ∈ Γ of small size, namely O(log n · log W ). The reason we define both Γ and γsmall is rather subtle, and is explained below.

At first glance, applying any γ ∈ Γ may seem enough to perform the distributed verification. If the vertices are labeled according to any such γ then the distributed verification

    can be performed as follows. Let I(v) be the label given to v in the implicit labeling scheme.

    Supposedly I(v) has been given according to γ. Each vertex v just compares the weight of each

of its incident edges (v, u) in G to MAX(v, u) (calculated on T ), using I(u) and I(v).

    Unfortunately, for that to suffice, we also need to locally verify that a given set of labels

    I has indeed been given according to γ (or some other implicit labeling scheme supporting

MAX(·, ·) on T ). That is, we need to construct a proof labeling scheme to verify the implicit labeling scheme. We do not know how to construct a small size proof labeling scheme to verify

that I has been given according to γsmall. Fortunately, it is enough to verify that there

exists some implicit labeling scheme γ ∈ Γ such that the labels at the vertices are indeed given according to γ. This already verifies that the value computed by v above for edge (v, u) is

    MAX(v, u).

Even though we cannot verify that the specific γ ∈ Γ used is γsmall, our marker does use γsmall nevertheless, since the labels of γ that the marker gives are a part of the proof labeling scheme.


Hence, using γsmall reduces the size of the scheme.

Let us remark that γsmall can easily be transformed into an implicit labeling scheme supporting the FLOW function on weighted trees with size O(log n · log W ); this improves the previously known upper bound O(log^2 n + log n · log W ) shown (for general graphs) in [21]. We also note that similar techniques can be used to provide compact proof labeling schemes for

    various implicit labeling schemes on trees, such as routing, distance etc.

    Let us start with some preliminaries. A separator decomposition of a tree T is defined

    recursively as follows. At the first stage we choose some vertex v in T to be the level-1 separator

of T . Vertex v may be any vertex (not chosen before). Different choices of v lead to different

decompositions. By removing v, T breaks into disconnected subtrees T 1(v), T 2(v), · · · , T p(v). These subtrees are referred to as the subtrees formed by v. Each such subtree is decomposed

    recursively by choosing some vertex to be a level-2 separator, etc. A separator decomposition is

termed perfect if every such separator v mentioned above is chosen in such a way that |T j(v)| ≤ |T |/2 for every j. It is easy to see that every tree has a perfect separator decomposition.
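A perfect separator is exactly a centroid of the current subtree, so a perfect separator decomposition can be computed by the standard centroid-decomposition routine. A sketch (hypothetical names; sizes are computed iteratively to avoid deep recursion in the size pass):

```python
def centroid_decomposition(adj):
    """Perfect separator (centroid) decomposition of a tree given as an
    adjacency dict. Returns, per vertex, its parent in the separator tree
    and its separator level; removing each chosen separator leaves
    components of size at most half the current subtree."""
    removed = set()

    def subtree_sizes(root):
        size, order, stack = {}, [], [(root, None)]
        while stack:
            v, p = stack.pop()
            order.append((v, p))
            for u in adj[v]:
                if u != p and u not in removed:
                    stack.append((u, v))
        for v, p in reversed(order):   # children are processed before parents
            size[v] = 1 + sum(size[u] for u in adj[v]
                              if u != p and u not in removed and u in size)
        return size

    def find_centroid(root):
        size = subtree_sizes(root)
        n, v, p = size[root], root, None
        while True:
            heavy = next((u for u in adj[v]
                          if u != p and u not in removed and size[u] > n // 2),
                         None)
            if heavy is None:
                return v            # all components now have size <= n/2
            v, p = heavy, v

    sep_parent, level = {}, {}

    def decompose(root, par, lvl):
        c = find_centroid(root)
        sep_parent[c], level[c] = par, lvl
        removed.add(c)
        for u in adj[c]:
            if u not in removed:
                decompose(u, c, lvl + 1)

    decompose(next(iter(adj)), None, 1)
    return sep_parent, level
```

On a path of 7 vertices this chooses the middle vertex at level 1 and the midpoints of the two halves at level 2, so the separator tree has depth O(log n).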

    Define the separator tree T sep to be the tree rooted at the level-1 separator of T , with the

    level-2 separators as its children, and generally, with each level j + 1 separator as the child

    of the level j separator above it in the decomposition. For a vertex v in T , define the level-j

    separator of v to be the ancestor of v in T sep at depth j.

Given a separator decomposition Sep Decomp of a tree T , we define the function Sep-level on pairs of vertices as follows. For every two vertices u and v in T , Sep-level(u, v) is the depth of the nearest common ancestor of u and v in T sep, i.e., the maximum level of a separator

    common to both u and v.

    3.1 Implicit labeling schemes supporting MAX(·, ·) on T (n, W )

We first describe a family Γ of implicit labeling schemes, each supporting the function MAX(·, ·) on T (n, W ). In fact, in the construction (and later in the marker algorithm), we shall use a specific scheme γsmall in Γ. As mentioned above, the reason we define both Γ and γsmall (rather than just γsmall) is somewhat subtle. Intuitively, our verifier will not check for γsmall, because proving that γsmall was used is costly. Instead, the proof is that some scheme in Γ is used.


3.1.1 The family Γ of implicit labeling schemes

We define Γ as the collection of all the implicit labeling schemes γ = 〈Eγ,Dγ〉 described as follows. Given a tree T ∈ T (n, W ), perform a separator decomposition Sep Decomp on T . (The choice of the specific separator decomposition to use is one of the factors that differentiates the

members of Γ from one another.) Note that given any separator decomposition, each vertex

is a separator of some level. For every vertex v, the encoder algorithm Eγ assigns v a label E(v) composed of two sublabels, namely, Esep(v) and Eω(v). Let us first describe the first sublabel, Esep(·), which is designed so that given the sublabels Esep(u) and Esep(w) of two vertices u and w in T , one can determine Sep-level(u, w). Specifically, for every level-l separator v, Esep(v) consists of l fields, namely, Esep(v) = (Esep_1(v), Esep_2(v), · · · , Esep_l(v)). It will follow from the description of the encoder algorithm Eγ that the following property holds for every two vertices u and w.

The Sep-level property: For every i, we have Esep_k(u) = Esep_k(w) for every 1 ≤ k ≤ i iff u and w share the same level-i separator in the separator decomposition Sep Decomp.

    We start by initializing Esep(v) to be the same arbitrary number for every vertex v. Next, we

apply the following recursive protocol. Let T 1(v), T 2(v), · · · , T p(v) for some p be the subtrees formed by (the removal of) v from a tree T (T itself may be a subtree created earlier during the

algorithm by the removal of another node). For every 1 ≤ j ≤ p, let ρ(j) be a unique number (in the sense that if j ≠ g then ρ(j) ≠ ρ(g)). The specific values of ρ(j) are the other factor differentiating the members of Γ from one another.

For every vertex u ∈ T j(v), update Esep(u) as follows: Esep(u) = (Esep(u), ρ(j)) (where the comma stands for concatenation). The protocol is then applied recursively on each subtree

    T j(v) formed by v.

For each level-l separator v, its second sublabel Eω(v) also consists of l fields, namely, Eω(v) = (Eω_1(v), Eω_2(v), · · · , Eω_l(v)). For every 1 ≤ i ≤ l, Eω_i(v) is set to MAX(v, v_i), where v_i is the level-i separator of v in the separator decomposition Sep Decomp.

Given the labels E(u) and E(v) of two vertices u and v, the decoder Dγ first calculates the largest index i such that for every 1 ≤ k ≤ i, Esep_k(u) = Esep_k(v), and then outputs max{Eω_i(u), Eω_i(v)}. Note that the decoder Dγ operates in the same manner for every scheme γ ∈ Γ.
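The encoder and decoder just described can be sketched as follows. The sketch builds one (not necessarily small) scheme γ ∈ Γ: the separator of each component is chosen arbitrarily, ρ(j) is simply j, and edge weights are assumed positive; all names are hypothetical:

```python
def encode_max_labels(adj, weight):
    """Assign each vertex a label (Esep, Eomega): Esep(v) lists v's
    component numbers along the separator hierarchy, and Eomega(v)[i]
    is MAX(v, level-(i+1) separator of v). weight maps frozenset edges
    to their (positive) weights."""
    removed = set()
    labels = {v: ([0], []) for v in adj}   # same arbitrary first field for all

    def component(root):
        comp, seen, stack = [root], {root}, [root]
        while stack:
            v = stack.pop()
            for u in adj[v]:
                if u not in removed and u not in seen:
                    seen.add(u); comp.append(u); stack.append(u)
        return comp

    def max_to(src):
        """MAX(src, v) for every v in src's current component."""
        best, stack = {src: 0}, [src]
        while stack:
            v = stack.pop()
            for u in adj[v]:
                if u not in removed and u not in best:
                    best[u] = max(best[v], weight[frozenset((v, u))])
                    stack.append(u)
        return best

    def decompose(root):
        comp = component(root)
        sep = comp[0]                      # any vertex may serve as separator
        best = max_to(sep)
        for v in comp:
            labels[v][1].append(best[v])   # Eomega field for this level
        removed.add(sep)
        subtrees = [u for u in adj[sep] if u not in removed]
        for j, u in enumerate(subtrees):   # rho(j) = j, unique per subtree
            for v in component(u):
                labels[v][0].append(j)
            decompose(u)

    decompose(next(iter(adj)))
    return labels

def decode_max(label_u, label_v):
    """Largest common Esep prefix gives the separator level; the answer
    is the larger of the two Eomega fields at that level."""
    (su, wu), (sv, wv) = label_u, label_v
    i = 0
    while i < min(len(su), len(sv)) and su[i] == sv[i]:
        i += 1
    return max(wu[i - 1], wv[i - 1])
```

When the common separator x is one of the two queried vertices, its own Eω field at that level is 0, so the maximum correctly reduces to MAX of the other vertex to x.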


Claim 3.1 Every scheme γ ∈ Γ is a correct implicit labeling scheme supporting function MAX(·, ·) on T (n, W ).

Proof: Fix a scheme γ = 〈Eγ,Dγ〉 in Γ. Let E(u) and E(v) be the labels assigned by the encoder algorithm Eγ to two vertices u and v in some tree T ∈ T (n, W ). By the description of the encoder algorithm Eγ, the Sep-level property is satisfied for u and v. Therefore, Sep-level(u, v) is the largest index i such that for every 1 ≤ k ≤ i, Esep_k(u) = Esep_k(v). Consequently, x, the level-i separator common to both u and v, resides on the path connecting

u and v in T . Therefore, MAX(u, v) = max{MAX(u, x), MAX(v, x)} and the correctness of the implicit labeling scheme γ follows.

    Let us remark, however, that the size of γ may be large. Next, we describe a particular

    implicit labeling scheme γsmall ∈ Γ whose size is small, namely O(log n · log W ).

3.1.2 The size-O(log n · log W) implicit labeling scheme γsmall ∈ Γ

    Scheme γsmall is constructed using a similar method to the one described in [14] for the different

purpose of constructing approximate distance labeling schemes in trees. Scheme γsmall is the member of Γ obtained by fixing the free choices of Γ as follows. The separator decomposition chosen by scheme γsmall is a perfect

    separator decomposition Perfect Decomp. The specification of how to assign a unique number

to each of the subtrees T_1(v), T_2(v), …, T_p(v) formed by the removal of each separator v is done

according to the method described in [14]. As proved therein, for every vertex v, the number of bits used in Esep(v) is O(log n) (there, a different terminology was used for Esep(v)). Since the separator decomposition Perfect Decomp is perfect, the level of each separator is bounded from above by log n + 1; therefore, for every vertex v, Eω(v) contains O(log n) fields. Since each such field can be encoded using O(log W) bits, we obtain that for every vertex v, the number of bits used in Eω(v) is O(log n · log W). Therefore, by following similar steps to the ones described in Lemma 2.5 of [14], we obtain the following lemma.

Lemma 3.2 γsmall is a correct implicit labeling scheme supporting the function MAX(·, ·) on T(n, W) with size O(log n · log W). Moreover, given the labels assigned by γsmall to two vertices u and v in some T ∈ T(n, W), the value MAX(u, v) can be computed in constant time.

Next, we establish a proof labeling scheme πΓ on TS(n, W) whose goal is to verify, given a configuration tree Ts, whether there exists an implicit labeling scheme γ ∈ Γ such that the states of the vertices in Ts are the same as the corresponding labels assigned by γ.


3.2 The proof labeling scheme πΓ for TS(n, W)

    Let Prob(Γ) be the following problem: assign states to the vertices of a tree so that there exists

    an implicit labeling scheme γ = 〈E ,D〉 ∈ Γ such that for every vertex v ∈ T , sv = Eγ(v).

Lemma 3.3 There exists a proof labeling scheme πΓ = ⟨MΓ, VΓ⟩ for fProb(Γ) and TS(n, W) such that, given any tree Ts ∈ TS(n, W), the maximum number of bits used in a label given by MΓ is asymptotically the same as the maximum number of bits used in a state of a vertex in Ts.

Before proving the lemma, let us give an informal, high-level description of the proof. In the label

    of each vertex v, we specify for each separator u of v, whether the direction from v towards

    u is up or down the tree. By verifying consistency between the labels at neighboring nodes

    and by verifying at each separator v that the numbers assigned to the subtrees formed by

    v are disjoint, we obtain that the states are indeed given by some separator decomposition

    Sep Decomp. Then, the fact that for each separator u of v, the maximum weight of an edge

    on the path connecting u and v is as indicated in the corresponding place in the state of v, is

    verified along the path from v to u.

We stress that this does not verify that the specific implicit labeling scheme γsmall is used. Fortunately, we do not need to verify this for γsmall to be useful to us.

    Proof: We describe proof labeling scheme πΓ = 〈MΓ,VΓ〉 as claimed. First, it was shown in[25] (Lemma 2.4) that any proof labeling scheme π on undirected trees can be partitioned into

    three steps: (1) transform the tree into a rooted tree where every node knows the orientation

    towards the root; (2) construct a proof labeling scheme for this orientation; and (3) construct

    π assuming the tree has such an orientation. Moreover, the first two steps are already given in

    [25] using O(log n)-bit labels. Hence, we assume that an orientation is given on the tree, i.e.,

    the root r knows it is the root and every non-root vertex v knows which of its port numbers

    leads to its parent p(v) in the tree.

    Constructing the marker algorithm: Given a tree Ts such that fProb(Γ)(Ts) = 1, let

γ ∈ Γ be the corresponding implicit labeling scheme. Recall that γ is based on some separator decomposition Sep Decomp on T. For every level-l separator v (in Sep Decomp), the label MΓ(v) given by the marker MΓ is composed of two sublabels, namely Morient(v) and Mstate(v). Informally, the first sublabel Morient(·) is used to verify that the states are given by some implicit labeling scheme according to some separator decomposition. This is implemented by


indicating, at the label of each vertex v, whether the direction from v towards each separator

of v is up or down the tree. More formally, for every level-l separator v, Morient(v) contains l fields, i.e., Morient(v) = (Morient_1(v), Morient_2(v), …, Morient_l(v)). For every 1 ≤ k ≤ l, set

Morient_k(v) =
  0,  if the level-k separator of v is a descendant of v in T
  ∗,  if k = l
  1,  otherwise

The second sublabel Mstate(v) is simply a copy of the state sv. Since the state sv is supposed to be a label assigned by some γ ∈ Γ, we may assume that it is composed of two components (which correspond to the sublabels Esep(·) and Eω(·) of Eγ(·)). We therefore consider the sublabel Mstate(·) as composed of two components, namely Msep(·) and Mω(·); i.e., for every vertex v, Mstate(v) = Msep(v),Mω(v). (The comma stands for the concatenation operator.)

    Constructing the verifier algorithm: Given some marker algorithm L, for every vertex

v, let l(v) denote the number of fields in the sublabel Lorient(v). (We assume that some delimiter

    method is used to define the fields; clearly, the delimiter method does not increase the order of

    the size of the label.)

Given NL(v), the verifier V outputs 1 iff for every 1 ≤ k ≤ l(v), the following conditions are satisfied. Informally, the conditions imply that the label indeed looks as if it had been given by marker MΓ as defined above. Recall that MΓ assigns Mstate(v) to be the state of v; this is checked below by Condition 1. Then, some conditions check that Lorient matches the semantics of Morient in the case that the separator is an ancestor of v (Condition 2), the case that the separator is a descendant of v (Condition 3), and the case that v itself is the separator (Conditions 4 and 6). The Sep-level property is also checked (based on Claim 3.1) by Conditions 4, 5, and 6. The remaining conditions check that the computation of the maximum-weight edge on the route between v and each separator is consistent among the nodes.

1. Lstate(v) = sv ≠ ∅

2. If Lorient_k(v) = 1 then v is not the root and l(p(v)) ≥ k. Moreover, for every child u of v, Lorient_k(u) = 1.

3. If Lorient_k(v) = 0 then there exists precisely one child u of v such that Lorient_k(u) is either 0 or ∗, and if v is not the root then Lorient_k(p(v)) = 0.


4. If Lorient_k(v) = ∗ then each of Lorient(v), Lsep(v) and Lω(v) contains precisely k fields.

5. If w is a neighbor of v then for every 1 ≤ j ≤ min{l(v), l(w)}, Lsep_j(v) = Lsep_j(w).

6. If k = l(v) then Lorient_k(v) = ∗ and, for every neighbor w of v such that l(w) ≥ l(v), the following hold.

(a) l(w) > l(v).

(b) If v is a non-root node then Lorient_k(p(v)) = 0, and if v is a non-leaf node then Lorient_k(u) = 1 for every child u of v.

(c) For any other neighbor w′ of v such that l(w′) ≥ l(v), Lsep_{k+1}(w) ≠ Lsep_{k+1}(w′).

7. If Lorient_k(v) = 1 then let ω be the weight of the edge leading from v to its parent p(v). Then, if Lorient_k(p(v)) = ∗ then Lω_k(v) = ω; otherwise, Lω_k(v) = max{Lω_k(p(v)), ω}.

8. If Lorient_k(v) = 0 then let ω be the weight of the edge leading from v to its unique child u satisfying Lorient_k(u) ≠ 1. Then, if this child u satisfies Lorient_k(u) = ∗ then Lω_k(v) = ω; otherwise, Lω_k(v) = max{Lω_k(u), ω}.
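Conditions 7 and 8 share the same local running-max check: a vertex compares its k-th ω field with that of the neighbor lying towards the level-k separator (the parent for Condition 7, the unique child marked 0 or ∗ for Condition 8). A minimal sketch, with field names of our own choosing:

```python
def check_weight_field(l_omega_v, orient_toward, omega_toward, edge_weight):
    """Check that l_omega_v (the k-th omega field of v) is consistent with
    the neighbor lying towards the level-k separator: `orient_toward` and
    `omega_toward` are that neighbor's k-th orientation and omega fields,
    and `edge_weight` is the weight of the edge to that neighbor."""
    if orient_toward == '*':              # that neighbor is the separator
        return l_omega_v == edge_weight
    return l_omega_v == max(omega_toward, edge_weight)
```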

The fact that the maximum number of bits used in a label given by MΓ is asymptotically the same as the maximum number of bits used in a state of a vertex in Ts is clear from the description of MΓ.

Let us now turn to prove that πΓ is a correct proof labeling scheme for fProb(Γ) and TS(n, W).

Clearly, given a tree Ts ∈ TS(n, W) such that fProb(Γ)(Ts) = 1, for every vertex v ∈ T, the verifier VΓ applied on NMΓ(v) outputs 1. Assume now, by way of contradiction, that Ts is such that fProb(Γ)(Ts) = 0 and there exists a marker algorithm L such that for every vertex v ∈ T, VΓ(NL(v)) = 1. The contradiction is achieved once we show that there exists an implicit labeling scheme γ ∈ Γ such that for every vertex v ∈ T, sv = Eγ(v).

Let us first show that there exists a unique vertex v such that l(v) = 1. By Condition 2 in the description of VΓ, for the root r, Lorient_1(r) is either 0 or ∗. If Lorient_1(r) ≠ ∗ then, by Condition 3, there must exist a vertex x such that Lorient_1(x) = ∗. It follows that there must exist a vertex v1 such that Lorient_1(v1) = ∗. By Conditions 4 and 1, l(v1) = 1 and therefore, by Conditions 6.b and 1, for every child u of v1, Lorient_1(u) = 1, and if v1 is not the root then Lorient_1(p(v1)) = 0. Therefore, by Condition 2, for every descendant w of v1, Lorient_1(w) = 1. It follows by Condition 6 that if v1 is the root then v1 is the only vertex satisfying l(v) = 1. Otherwise, if v1 is not the root, then Lorient_1(p(v1)) = 0 and, by Condition 3, Lorient_1(w) = 0 for


every vertex w on the path from v1 to the root. Therefore, by Conditions 2 and 3, Lorient_1(x) = 1 for every vertex x ≠ v1 which is not an ancestor of v1. We therefore get that v1 is the only vertex satisfying l(v) = 1. Consider v1 as the level-1 separator in the separator decomposition Sep Decomp performed by the desired scheme γ. Since v1 is the only vertex with just one field in its state, it follows from Condition 5 that by removing v1 from the tree, T breaks into subtrees T_1(v1), T_2(v1), …, T_p(v1), such that for every 1 ≤ j ≤ p, all labels Lsep(v) of vertices v ∈ T_j(v1) have the same value ρ(j) in their second field. Moreover, by Condition 6.c, for every 1 ≤ j ≠ g ≤ p, ρ(j) ≠ ρ(g). The fact that for each vertex v, Lω_1(v) is MAX(v, v1) follows from Conditions 7 and 8.

Following the same arguments as before, we obtain that for each subtree T_j(v1), there exists a unique vertex v^j_2 whose state contains precisely two fields, and that for each vertex v ∈ T_j(v1), Lω_2(v) is MAX(v, v^j_2). For each j, consider v^j_2 as a level-2 separator in the separator decomposition Sep Decomp performed by the desired scheme γ. Using the same arguments as before, the lemma follows by induction on the depth of the separator decomposition Sep Decomp.

    3.3 The proof labeling scheme πmst for fMST and F(n, W )

Theorem 3.4 There exists a proof labeling scheme πmst for fMST and F(n, W) with size O(log n · log W).

    Proof: As mentioned before, it was shown in [25] (Lemma 4.3) that a proof labeling scheme

for fMST and F(n, W) can be partitioned into two steps: (1) verify that the subgraph induced by the states is a spanning tree; and (2) verify that this spanning tree is minimal. Moreover, the

    first step is already given in [25] using O(log n)-bit labels. Hence, we assume that the subgraph

    induced by the states is a spanning tree T and concentrate on verifying whether it is minimal.

We describe a proof labeling scheme πmst = ⟨Mmst, Vmst⟩ as claimed. In the case where T is an MST, the marker algorithm Mmst assigns each vertex v a label M(v) composed of three sublabels. The first sublabel at every node, Mspan(·), is assigned by the marker algorithm of the proof labeling scheme described in Lemma 2.3 of [25] (to verify that the given subgraph is indeed a tree). The second sublabel, Mγsmall(·), is the same as the label assigned to the corresponding vertex by the encoder algorithm Eγsmall of the implicit labeling scheme γsmall applied on T. By composing with the proof labeling scheme πΓ mentioned in the above lemma (encoded in the third sublabel, MΓ(·)), we can verify that the second sublabel is the label assigned by some implicit labeling scheme γ ∈ Γ applied on T. Note that, in terms of correctness, it is


not necessary to verify that the second sublabel is the label assigned by the specific scheme γsmall ∈ Γ. However, since an arbitrary scheme γ ∈ Γ may have a large size, we use the specific scheme γsmall in the marker algorithm to achieve a proof labeling scheme of small size.

As mentioned before, using the first sublabels, verifier Vmst first verifies that T is a spanning tree of G. Then, using the third sublabels of the vertices, verifier Vmst verifies that the second sublabels could have been assigned by some implicit labeling scheme γ supporting the MAX(·, ·) function on T. Then, verifier Vmst at vertex v computes MAX(v, u) for every neighbor u of v, using the decoder of γsmall (which is the same for any scheme γ ∈ Γ) applied on the second sublabels of v and u. Finally, verifier Vmst at vertex v verifies, for every neighbor u of v, that ω(v, u) is at least as large as MAX(v, u).

The correctness of the proof labeling scheme πmst follows from Claim 3.1 and Lemmas 3.2 and 3.3. The fact that the size of πmst is O(log n · log W) follows from Lemmas 3.2 and 3.3.

    4 Lower bound for MST Verification

In this section we establish a lower bound of Ω(log n · log W) on the label size of any proof labeling scheme for fMST and FS(n, W), as long as W > (log n)^{1+ε} for some fixed ε > 0. Though the specific parts of our proof are rather different, our approach is inspired by the lower bound

    proofs of [16, 21]. The lower bound proof of [16] was constructed for implicit distance labeling

    schemes in trees. This proof was later modified in [21] to construct a lower bound proof for

    implicit FLOW labeling schemes. Our proof is somewhat closer to the modified proof of [21].

Specifically, the proof in [21] uses a class of weighted complete binary trees called (h, µ)-trees. The main lemma therein shows that one can derive labels for the leaves of any (h − 1, µ²)-tree from the labels given to the leaves of some (h, µ)-tree. In fact, the class of (h, µ)-trees can be partitioned into µ disjoint subclasses, such that one can label the leaves in (h − 1, µ²)-trees using the labels given to the leaves in trees of even one (any one) of the above-mentioned subclasses. (This shows that the set of labels used for leaves in (h, µ)-trees is large, and thus leads to a lower bound on the size of a label.)

    Our proof uses a new combinatorial structure termed (h, µ)-hypertrees that is a combination

    of (h, µ)-trees and a hypercube. That is, an (h, µ)-hypertree is constructed by connecting (via

a weighted path) every node in one (h − 1, µ)-hypertree to the corresponding node in another (h − 1, µ)-hypertree. Intuitively, the proof requires the combinatorial structure to have the following properties, which the original construct of [21] does not provide: (1) we needed to


create many cycles; and (2) we needed to move nodes in different subtrees closer to each other.

Our proof follows the general structure of [21] in the sense that labels for some (h − 1, µ²)-hypertree H′ are computed using the labels for some (h, µ)-hypertree H. However, it may

    be useful to understand some of the differences (in addition to the difference between the

    combinatorial structure). First, in contrast to [21], in which only the leaves of the trees are

    labeled, we needed to label all the vertices of our hypertrees. This is because proof labeling

    schemes rely on the cooperation of many vertices, checking a property transitively. Second,

the additional complexity of our structure poses additional difficulties not only at the stage of

    computing labels, but also at the stage of verifying them. A new trick we introduce here is that

    the verifier described in the construction below, at any node v, has to guess labels for some

    other nodes.

    Let us start by defining the family of (h, µ)-hypertrees. For i ≥ 0, let

    Qi(µ) = {µ · i + j | 0 ≤ j ≤ µ − 1}.

An (h, µ)-hypertree H is constructed inductively on h. We note that it will follow from our construction that, given h, all (h, µ)-hypertrees H are identical if we consider them as unweighted. Furthermore, it will follow from our construction that the subgraph induced by the states of a hypertree H (see Definition 2.1) is a spanning tree of H (not necessarily minimal). We therefore refer to the subgraph induced by the states of a hypertree H as the spanning tree induced by the states of H. (The edges of the spanning tree are drawn as directed edges in Figure 1.) A (1, µ)-hypertree is a single vertex with an empty state. Given two (rooted) (h − 1, µ)-hypertrees H0 and H1, we construct an (h, µ)-hypertree H as follows. (Figure 1 illustrates the construction of H.)

1. Create a new root vertex r and choose some fixed x ∈ Qh−1(µ). Connect the roots of H0 and H1 to r, and assign them states such that they point at r. Let the weight of both edges incident to r be x.

2. For every vertex a0 ∈ H0, let a1 be its homologous vertex in H1, and add to H the path Path(a0, a1) defined as follows. Path(a0, a1) = (a0, â0, â1, a1) consists of four vertices (a0 and a1 together with two new vertices â0 and â1) and three edges, namely (a0, â0), (â0, â1) and (â1, a1). Let the state at â0 (respectively, â1) point at a0 (resp., a1).

3. For every new path Path(a0, a1) defined in Step 2 above, let the weights ω(a0, â0) = ω(â1, a1) = 1, and let the value of ω(â0, â1) be taken from Qh−1(µ). We refer to ω(â0, â1)



    Figure 1: Constructing a hypertree out of two trees.

as the weight of Path(a0, a1). Note that the weight of the path can be any fixed value from Qh−1(µ). However, the path Path(a0, a1) is termed legal if its weight equals x, the value given in Step 1 above (to the topmost edges of H).

4. Consider the spanning tree induced by the states of H (the directed edges of Figure 1). Perform a preorder traversal of this spanning tree starting at the root. Let the new identity id(v) of each vertex v be its number according to this preorder. In particular, id(r) = 1.
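The inductive construction (Steps 1-3; the preorder renumbering of Step 4 is omitted) can be sketched as follows. Fixing x = µ · (h − 1), the smallest element of Qh−1(µ), is our own choice for illustration; since every Path(a0, a1) also receives weight x, the hypertree built here is legal:

```python
import itertools

class Node:
    """A hypertree vertex; `parent` encodes the state (a parent pointer)."""
    _ids = itertools.count()
    def __init__(self):
        self.uid = next(Node._ids)
        self.parent = None

def build_hypertree(h, mu):
    """Build a legal (h, mu)-hypertree, returning (vertices, edges) with
    edges mapping frozenset({u, v}) -> weight.  The first vertex returned
    is the root."""
    if h == 1:
        return [Node()], {}
    V0, E0 = build_hypertree(h - 1, mu)
    V1, E1 = build_hypertree(h - 1, mu)
    x = mu * (h - 1)                     # smallest element of Q_{h-1}(mu)
    r = Node()
    edges = {**E0, **E1}
    for root in (V0[0], V1[0]):          # Step 1: both roots point at r
        root.parent = r
        edges[frozenset((root, r))] = x
    verts = [r] + V0 + V1
    for a0, a1 in zip(V0, V1):           # Steps 2-3: Path(a0, a1)
        b0, b1 = Node(), Node()          # the two "hat" vertices
        b0.parent, b1.parent = a0, a1
        edges[frozenset((a0, b0))] = 1
        edges[frozenset((b1, a1))] = 1
        edges[frozenset((b0, b1))] = x   # legal path: weight equals x
        verts += [b0, b1]
    return verts, edges
```

The vertex count obeys n(h) = 4·n(h−1) + 1 = (4^h − 1)/3, and the parent pointers (the states) form a spanning tree with n − 1 edges, as claimed above.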

Given an (h, µ)-hypertree H constructed as shown above, let P denote the set of paths Path(u, v) added to H throughout the inductive construction. (Note that a path is added also between vertices that were created for paths added earlier.) An (h, µ)-hypertree H is termed legal if the collection P consists only of legal paths.

The proof of the following claim is straightforward.

Claim 4.1
1. Given a hypertree H, the weight of any legal path Path(u, v) ∈ H equals MAX(u, v), where MAX(u, v) is calculated on the spanning tree induced by the states of H.
2. The spanning tree induced by the states in a legal (h, µ)-hypertree H is an MST of H.


Let C(h, µ) be the family of (h, µ)-hypertrees. Given x ∈ Qh−1(µ), let C(h, µ, x) be the subfamily of C(h, µ) consisting of (h, µ)-hypertrees in which x is the weight of the edges adjacent to the root. Hence, C(h, µ) = ⋃_{x=µ(h−1)}^{µh−1} C(h, µ, x).

A proof labeling scheme π = ⟨M, V⟩ satisfies the identity property if the identity id(v) of each vertex v is encoded in the leftmost field of the label M(v) assigned to v by M. (We shall show that it is enough to speak only of schemes with this property.) Given a proof labeling scheme π = ⟨M, V⟩ for fMST and C(h, µ) satisfying the identity property, let X(π, h, µ) denote the set of all labels assigned by M to nodes of hypertrees in C(h, µ). Let g(h, µ) denote the minimum cardinality |X(π, h, µ)| over all proof labeling schemes for fMST and C(h, µ) which satisfy the identity property.

Hereafter, we fix π̂ = ⟨M̂, V̂⟩ to be some proof labeling scheme (satisfying the identity property) for fMST and C(h, µ) attaining g(h, µ), i.e., such that |X(π̂, h, µ)| = g(h, µ).

Note that a legal (h, µ)-hypertree H is completely defined by ⟨H0, H1, x⟩, where H0 and H1 are the two (legal) (h − 1, µ)-hypertrees hanging from the root's children and x is the weight of the edges incident to the root of H. Let X(x) denote the set of all possible pairs of labels assigned by M̂ to some nodes a0, a1 ∈ H = (H0, H1, x), where a0 ∈ H0, a1 ∈ H1 and H is a legal hypertree in C(h, µ, x). Let X = ⋃_{x=µ(h−1)}^{µh−1} X(x). As X ⊆ X(π̂, h, µ) × X(π̂, h, µ), we have:

Claim 4.2 |X| ≤ g(h, µ)².

Lemma 4.3 For every x ≠ x′ in Qh−1(µ), the sets X(x) and X(x′) are disjoint.

Proof: Consider two different weights µ(h − 1) ≤ x ≠ x′ < hµ, and assume by way of contradiction that there exists a pair (λ1, λ2) ∈ X(x) ∩ X(x′). Then there exists a legal (h, µ)-hypertree H = (H0, H1, x) which uses the label λ1 for some vertex a0 ∈ H0 and the label λ2 for some vertex a1 ∈ H1, and there exists a legal (h, µ)-hypertree H′ = (H′0, H′1, x′) which uses the label λ1 for some vertex a′0 ∈ H′0 and the label λ2 for some vertex a′1 ∈ H′1. By our assumption that the identity of a vertex is encoded in the leftmost field of its label, and by the fact that, given h, all (h − 1, µ)-hypertrees are identical if we consider them as unweighted, we obtain that a0 = a′0 and a1 = a′1. It follows that the path Path(a0, a1) in H is the same as the path Path(a′0, a′1) in H′, except that the weight of Path(a0, a1) is x whereas the weight of Path(a′0, a′1) is x′. Assume w.l.o.g. that x > x′ and modify H into a new hypertree H′′ by replacing the weight of path Path(a0, a1) in H by x′.

    Since, by Claim 4.1, the weight x′ of Path(a0, a1) in H′′ is smaller than x = MAX(a0, a1)


(calculated on the spanning tree induced by H′′), we obtain that the spanning tree induced by the states of H′′ is not minimal, i.e., fMST(H′′) = 0. We now show that it is possible to mark H′′ by a combination of the labeling M̂ uses for H′ and the labeling it uses for H, such that the claimed verifier is actually fooled (into outputting 1) at each node.

We describe a labeling assignment L to the vertices of H′′. For every vertex v ∉ Path(a0, a1) (that is, for all the vertices except four), let L(v) be the label assigned to v by the marker algorithm M̂ applied on H, and for every vertex v ∈ Path(a0, a1), let L(v) be the label assigned to v by the marker algorithm M̂ applied on H′.

Since fMST(H) = 1 (by Claim 4.1), and since for every vertex v ∉ Path(a0, a1), NL(v) in H′′ is the same as NM̂(v) in H, we obtain that at every vertex v ∉ Path(a0, a1), the verifier satisfies V̂(NL(v)) = 1. Similarly, since fMST(H′) = 1, and since for every vertex v ∈ Path(a0, a1), NL(v) in H′′ is the same as NM̂(v) in H′, we obtain that at every vertex v ∈ Path(a0, a1), the verifier satisfies V̂(NL(v)) = 1. This contradicts the correctness of π̂ (assumed in the definition of X(x)), since fMST(H′′) = 0.

Lemma 4.4 For every x ∈ Qh−1(µ), |X(x)| ≥ g(h − 1, µ²).

Before giving the proof, let us first outline it. Given an (h − 1, µ²)-hypertree H′, we associate with it a hypertree H ∈ C(h, µ, x) (for some x ∈ Qh−1(µ)) such that H is legal if and only if H′ is legal. Moreover, the states of the two hypertrees (H and H′) induce two subgraphs. We show that one of these subgraphs is a minimum spanning tree if and only if the other is. This is summarized in Claim 4.5.

Next, given the marking of an assumed optimal proof labeling scheme for H (for fMST), we show how to translate this marking M̂ to a marking M′ for the vertices of H′ (for fMST). That is, we translate the labels of a pair of vertices in H to a label of one vertex in H′. (Each vertex of H participates in only one pair.) Hence, we obtain in Claim 4.6 that the number of labels used to label H′ is at most |X(x)|. This implies the lemma (by the definition of g), provided that we prove that the above marking is a part of a correct proof labeling scheme.

For that, we first construct the verifier part V′ for the above marker M′. We then conclude the proof by showing that the scheme ⟨M′, V′⟩ is a correct proof labeling scheme for fMST on H′. That is, if the tree induced by the states of the vertices in H′ is a minimum spanning tree then V′ outputs 1 everywhere in H′, and vice versa.

Proof: Given a hypertree H and a vertex v ∈ H, let NeighH(v) denote the set of vertices in H which are at hop distance at most 1 from v, i.e., the set of all neighbors of v (including the


vertex v itself).

In any (h − 1, µ²)-hypertree, a weight ωi ∈ Qi(µ²), ωi = i · µ² + j for 0 ≤ j ≤ µ² − 1, can be represented by the pair of weights

y0 = j mod µ  and  y1 = ⌊j/µ⌋,

such that y0, y1 ∈ [0, µ − 1] and ωi = y0 + µ · y1 + µ² · i.

Consequently, one can associate with any hypertree H′ ∈ C(h − 1, µ²) a hypertree H = (H0, H1, x) ∈ C(h, µ, x) as follows. Every vertex a′ of H′ is associated with two homologous vertices of H, namely the vertex a0 (occurring in the left part of H, i.e., H0) and the vertex a1 (occurring in H1). For any edge e′ of H′ with weight ωe′ = y0 + µ · y1 + µ² · i (for some i), let the weight of e0 (respectively, e1), the corresponding edge of e′ in H0 (resp., H1), be ωe0 = y0 + µ · i (resp., ωe1 = y1 + µ · i) for the i defined above. Note that ωe0 and ωe1 are uniquely determined by ωe′ and vice versa. In addition, for every two homologous vertices a0 ∈ H0 and a1 ∈ H1, let the weight of the path Path(a0, a1) in H be x. See Figure 2 for an illustration of how to associate H′ with H.
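The correspondence between a weight in Qi(µ²) and its pair of weights in Qi(µ) is a simple base-µ decomposition of the offset j; a sketch of both directions (function names are ours):

```python
def split_weight(w, mu):
    """Decompose a weight w in Q_i(mu^2), w = mu^2 * i + j, into the pair
    (w0, w1) of weights in Q_i(mu): w0 = mu*i + (j mod mu) and
    w1 = mu*i + floor(j / mu)."""
    i, j = divmod(w, mu * mu)
    return mu * i + j % mu, mu * i + j // mu

def merge_weights(w0, w1, mu):
    """Inverse mapping: recover w in Q_i(mu^2) from the pair (w0, w1)."""
    i0, y0 = divmod(w0, mu)
    i1, y1 = divmod(w1, mu)
    assert i0 == i1, "the two weights must lie in the same Q_i(mu)"
    return mu * mu * i0 + y0 + mu * y1
```

Both directions are bijective on each Qi, which is what makes ωe0 and ωe1 uniquely determined by ωe′ and vice versa.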


Figure 2: Associating H′ with H.

    The following claim is straightforward.


Claim 4.5
1. The hypertree H′ ∈ C(h − 1, µ²) is legal iff the associated hypertree H ∈ C(h, µ, x) is legal.
2. fMST(H) = 1 iff fMST(H′) = 1.

Using the marking of H to derive a marking for H′: We use the claim above to derive a proof labeling scheme π′ = ⟨M′, V′⟩ for fMST and C(h − 1, µ²) (satisfying the identity property) using at most |X(x)| labels. This will show that X(x) is large, as claimed in the lemma. Given an (h − 1, µ²)-hypertree H′, consider the hypertree H ∈ C(h, µ, x) associated with H′, and use the marker algorithm M̂ to label H = (H0, H1, x). Denote by L̂(v, H) the label assigned to a vertex v ∈ H by marker M̂ applied on H. We now use these labels to define the marker algorithm M′ for the nodes of H′ as follows.

For every vertex a′ ∈ H′, let a0 and a1 be the two homologous vertices in H associated with a′. Vertex a′ ∈ H′ receives M′(a′) = ⟨id(a′), L̂(a0, H), L̂(a1, H), x⟩ as its label. Note that for every vertex a′ ∈ H′, the label M′(a′) contains four sublabels, namely ⟨M′_1(a′), M′_2(a′), M′_3(a′), M′_4(a′)⟩. Clearly, given id(a′), one can reconstruct the identities id(a0) and id(a1), and vice versa. (Recall that the id of a vertex in a hypertree is the preorder number of the vertex in the spanning tree of the hypertree.) Therefore, since the identities of the vertices are encoded in the leftmost fields of the corresponding labels, the set {⟨id(a′), L̂(a0, H), L̂(a1, H), x⟩ | a′ ∈ H′ and H′ ∈ C(h − 1, µ²)} contains the same number of items as the set {⟨L̂(a0, H), L̂(a1, H)⟩ | a′ ∈ H′ and H′ ∈ C(h − 1, µ²)} ⊆ X(x). The following claim follows:

Claim 4.6 The number of labels used by marker M′ is at most |X(x)|.

It remains to show that M′ is the marker algorithm of a correct proof labeling scheme π′ for MST.

Constructing verifier V′: Given a labeling algorithm L′ applied on H′, let L′(a′) = ⟨L′_1(a′), L′_2(a′), L′_3(a′), L′_4(a′)⟩ denote the label assigned to a′ by L′. The verifier V′(NL′(a′)) at a vertex a′ ∈ H′ outputs 1 iff the following conditions hold. (Informally, the conditions check that L′ could really result from performing M′ as defined above.)

1. L′_1(a′) = id(a′).

2. For every vertex b′ ∈ NeighH′(a′), L′_4(b′) = L′_4(a′). (In the case where L′ is M′, this value is x.)


3. (The guessing stage:) There exists a labeling assignment L for the nodes in NeighH(a0) ∪ NeighH(a1) with the following properties. Consider any vertex b′ ∈ NeighH′(a′), and let b0, b1 be the two homologous vertices in H associated with b′ as defined above. The labeling assignment L is required to be such that L(b0) = L′_2(b′) and L(b1) = L′_3(b′). Note that verifier V′ needs to guess L(b0) and L(b1) by seeing L′(b′). Moreover, assuming the weight of Path(a0, a1) is L′_4(a′), verifier V′ performs a simulated execution of verifier V̂ using L. Verifier V′ then checks that for the simulated execution, the following hold:

(a) If a′ ≠ r′ (i.e., id(a′) ≠ 1), then the condition here is satisfied if, in the simulated execution at a0, â0, â1 and a1, the following holds for V̂: V̂(NL(a0)) = V̂(NL(â0)) = V̂(NL(â1)) = V̂(NL(a1)) = 1.

(b) If a′ = r′ (i.e., id(a′) = 1), then the condition here holds if, in the simulated execution of verifier V̂ at r0, r̂0, r̂1, r1 and r, the following holds: V̂(NL(r0)) = V̂(NL(r̂0)) = V̂(NL(r̂1)) = V̂(NL(r1)) = V̂(NL(r)) = 1.

Proving the scheme ⟨M′, V′⟩ correct: Let us now show that the scheme π′ = ⟨M′, V′⟩ is a correct proof labeling scheme for fMST and C(h − 1, µ²) (satisfying the identity property). Let H′ be a hypertree in C(h − 1, µ²) such that fMST(H′) = 1. Clearly, for every vertex a′ ∈ H′, Conditions 1 and 2 in the description of verifier V′ are satisfied if the labels are given by marker M′. Let H be the corresponding hypertree in C(h, µ, x) and, for each vertex v ∈ H, let M̂(v) be the label assigned to v by the marker algorithm M̂. By Claim 4.5, fMST(H) = 1 and therefore, by the correctness of the proof labeling scheme π̂, we obtain that for every vertex v ∈ H, V̂(NM̂(v)) = 1. It follows that for every vertex a′ ∈ H′, Condition 3 in the description of verifier V′ is also satisfied. Altogether, we obtain that for every vertex a′ ∈ H′, V′(NM′(a′)) = 1.

Assume now that for every vertex a′ ∈ H′, V′(NL′(a′)) = 1 for some labeling algorithm L′ applied on H′. Our goal is to show that fMST(H′) = 1. Let H be the corresponding hypertree in C(h, µ, x). We now define a marker algorithm L assigning each vertex v ∈ H a label L(v) defined as follows. For every vertex a′ ∈ H′, let L(a0) = L′_2(a′) and let L(a1) = L′_3(a′). For every non-root vertex a′ ∈ H′, we know that V′ guessed some labels that satisfied Condition 3(a) (since we assume here that V′(NL′(a′)) = 1). Let these labels be L(â0) and L(â1), of â0 and â1, respectively. Similarly, let L(r̂0), L(r̂1) and L(r) (where r is the root of H) be the labels of r̂0, r̂1 and r, respectively, such that by guessing them V′ found that Condition 3(b) is satisfied. It follows that for every vertex v ∈ H, V̂(NL(v)) = 1 and therefore, by the correctness of π̂, fMST(H) = 1. By Claim 4.5, fMST(H′) = 1 and the correctness of π′ follows.


Since the scheme π′ = ⟨M′, V′⟩ uses at most |X(x)| labels, we obtain that |X(x)| ≥ g(h − 1, µ²).

Combining Claim 4.2 and Lemmas 4.3 and 4.4, we obtain the following:

Corollary 4.7 g(h, µ) ≥ √µ · √g(h − 1, µ²).

    Subsequently, we obtain the following lemma.

Lemma 4.8 g(h, µ) ≥ µ^{h/2}.

This allows us to conclude with the lower bound.

Theorem 4.9 In the case where W > (log n)^{1+ǫ} for some fixed ǫ > 0, the size of any proof labeling scheme for fMST and F(n, W) is Ω(log n · log W).

Proof: For W > (log n)^{1+ǫ} for some fixed ǫ > 0, let π be any proof labeling scheme for fMST and F(n, W) satisfying the identity property. Let h = log(n + 1) and let µ = (W + 1)/h. Scheme π is in particular a proof labeling scheme for fMST and C(h, µ). Therefore, the size of π is at least log(g(h, µ)). By Lemma 4.8, the size of π is at least

(h/2) · log µ = (log(n + 1)/2) · log((W + 1)/h) = (log(n + 1)/2) · log(W + 1) − (log(n + 1)/2) · log log(n + 1).

Since W > (log n)^{1+ǫ} for some fixed ǫ > 0, it follows that log log n = o(log W) and we obtain that the size of π is Ω(log n · log W). The theorem follows by the fact that, with an extra additive cost of O(log n) to the label size, any proof labeling scheme for fMST and F(n, W) can be transformed into a proof labeling scheme for fMST and F(n, W) satisfying the identity property.

    5 Sensitivity testing

As mentioned before, in the sequential setting, a problem related to MST verification is the following sensitivity testing problem: given a graph G and an MST of G, label every edge (u, v) with the minimum number c(u, v) such that if the weight of the edge changes by c(u, v) (and the weights of the rest of the edges remain the same), then the given tree is no longer minimum. The number of bits used to store the output of an algorithm π per edge is referred to as the space complexity of π. Observe that if each c(u, v) must be given explicitly (in binary), then any algorithm solving this problem must output at least log c(u, v) bits per edge, leading to a space complexity of Θ(log W · |E|), for example in graphs where the weights of all the edges are within a constant multiplicative factor of each other.
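To make the problem definition concrete, here is a small, deliberately naive Python sketch (our own illustration, not the algorithm of [6] or of this paper) that computes every c(u, v) directly: for a non-tree edge it uses c(u, v) = ω(u, v) − MAX(u, v), where MAX(u, v) is the heaviest tree edge on the tree path between u and v; for a tree edge it uses the symmetric rule with the cheapest replacement edge across the cut. The function and variable names are illustrative assumptions.

```python
def sensitivities(n, edges, tree):
    """Naive sensitivity analysis for an MST.

    n     -- number of vertices, named 0..n-1
    edges -- dict mapping frozenset({u, v}) -> weight, for every graph edge
    tree  -- set of frozenset({u, v}) tree edges (assumed to form an MST,
             with a replacement edge existing for every tree edge)
    Returns a dict mapping each edge to its sensitivity c(u, v).
    """
    # adjacency lists restricted to tree edges
    adj = {v: [] for v in range(n)}
    for e in tree:
        u, v = tuple(e)
        adj[u].append(v)
        adj[v].append(u)

    def path_max(u, v):
        # weight of the heaviest tree edge on the unique tree path u..v
        stack = [(u, None, 0)]
        while stack:
            x, par, best = stack.pop()
            if x == v:
                return best
            for y in adj[x]:
                if y != par:
                    stack.append((y, x, max(best, edges[frozenset({x, y})])))
        raise ValueError("u and v are not connected in the tree")

    def component(u, removed):
        # vertices reachable from u in the tree after deleting edge `removed`
        seen, stack = {u}, [u]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if frozenset({x, y}) != removed and y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen

    c = {}
    for e, w in edges.items():
        u, v = tuple(e)
        if e in tree:
            side = component(u, removed=e)
            # cheapest non-tree edge crossing the cut induced by removing e
            rep = min(edges[f] for f in edges
                      if f not in tree and len(f & side) == 1)
            c[e] = rep - w
        else:
            c[e] = w - path_max(u, v)
    return c
```

On a triangle with weights w(0,1) = 1, w(1,2) = 2, w(0,2) = 4 and MST {(0,1), (1,2)}, the non-tree edge (0,2) gets c = 4 − 2 = 2, matching the formula above. An O(|E| · n) brute force like this is exactly what the space- and time-efficient constructions of Section 5 avoid.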


In this paper, we consider a slightly weaker variant of the sensitivity problem, in which the

    requirement from the output is relaxed as follows. Instead of writing the sensitivity c(u, v) of

    each edge (u, v) explicitly, we write some auxiliary information. Later, when queried about the

    sensitivity of an edge, we are allowed to perform a constant time computation to derive the

    sensitivity of that edge, using the above auxiliary information and the given graph G. Our

    sensitivity testing problem is referred to as the weak sensitivity problem.

    We describe a deterministic algorithm solving the weak sensitivity problem. For dense

graphs (specifically, when n log n = o(|E|)), this improves the best possible space complexity of any algorithm solving the original sensitivity problem. Moreover, let π be any algorithm

solving the original sensitivity problem and let Time(π) be the time complexity of π (using the

    same model for time as the one used in [6]). Given a graph and an MST, our computation of

the auxiliary information incurs O(Time(π) + n log n) time. Hence, for graphs for which we

improve the space complexity, the running time of our algorithm is not worse than the running

    time of π.

Lemma 5.1 Let π be any algorithm solving the original sensitivity problem and let Time(π)

    be the time complexity of π. There exists an algorithm solving the weak sensitivity problem with

space complexity O(log n log W) bits per node, and time complexity O(Time(π) + n log n).

    Proof: Given a graph and an MST T of the graph, the auxiliary information consists of

    assigning a label L(v) to each node v in the graph. For every node v, the label L(v) contains

    two sublabels, namely LT (v) and Lγsmall(v).

    We first run π, but instead of writing c(v, u) on each edge (v, u), we do the following. Let v

    be a non-root vertex in T and let p(v) be v’s parent in the tree. The sublabel LT (v) is the value

    c(v, p(v)) as given by π. If r is the root of T then LT (r) is empty. The sublabel Lγsmall(v) of

    each vertex v is simply the label given to v by the implicit labeling scheme γsmall as described

    in Subsection 3.1.2.

When queried about the sensitivity c(u, v) of some edge (u, v), the following happens. If (u, v) is a tree edge and u is a child of v in T, then c(u, v) is written in LT(u). If (u, v) is a non-tree edge, then MAX(u, v) can be extracted using the decoder of γsmall (which runs in constant time). The sensitivity c(u, v) is then extracted using the fact that c(u, v) = ω(u, v) − MAX(u, v), where ω(u, v) is the weight of (u, v) in G.

    Clearly, a sensitivity query can be answered in constant time using our auxiliary information

and the given graph G. Furthermore, the fact that the auxiliary information uses O(log n · log W) bits per node follows from Lemma 3.2. The time required to construct the sublabels LT(·) is Time(π). In addition, it can be easily shown that the time required to construct the sublabels Lγsmall(·) is O(n log n). (Clearly, in a perfect decomposition, a vertex participates in up to log n partitions; in each, the vertex has to be marked by the unique name of the subtree and by the

    heaviest edge on the route to the root of the subtree; those can be calculated for the whole

    subtree by passing once on every edge of the subtree, walking down from the subtree root.)
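The two steps of the proof above, building the sublabels LT(·) from the output of π and answering a query via c(u, v) = ω(u, v) − MAX(u, v), can be rendered as the following hypothetical Python sketch. It is not the paper's implementation: pi_output stands in for the output of a sensitivity algorithm π such as [6], and max_on_path stands in for the constant-time decoder of the implicit scheme γsmall of Subsection 3.1.2, which we do not reimplement here.

```python
def build_labels(tree_parent, pi_output):
    """Build the sublabel L_T(v) = c(v, p(v)) for every non-root vertex v.

    tree_parent -- dict: v -> parent of v in the MST (root maps to None
                   or is absent)
    pi_output   -- dict: frozenset({v, p(v)}) -> c(v, p(v)), as produced by
                   some sensitivity algorithm pi (assumed given)
    """
    return {v: pi_output[frozenset({v, p})]
            for v, p in tree_parent.items() if p is not None}

def query(u, v, w_uv, lt, tree_parent, max_on_path):
    """Answer a sensitivity query for edge (u, v) in constant time.

    lt          -- the sublabels built by build_labels
    max_on_path -- stand-in for the gamma_small decoder: returns MAX(u, v),
                   the heaviest tree edge on the tree path between u and v
    w_uv        -- omega(u, v), the weight of the queried edge in G
    """
    if tree_parent.get(u) == v:      # tree edge, u is the child: stored in L_T(u)
        return lt[u]
    if tree_parent.get(v) == u:      # tree edge, v is the child
        return lt[v]
    return w_uv - max_on_path(u, v)  # non-tree edge: c(u,v) = w(u,v) - MAX(u,v)
```

In the distributed variant discussed next, the same query logic runs at vertex u; the only extra ingredient is that u may need one message exchange with its neighbor v to obtain v's sublabels before decoding.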

    Returning to the distributed setting, one can define the sensitivity problem in several ways.

    We define the distributed sensitivity problem as follows. Given a graph G and an MST of G,

    label each vertex v by a label L(v), such that the following holds. For every vertex u and each

    neighbor v of u, the sensitivity c(u, v) can be extracted in constant distributed time, using a

distributed algorithm that is invoked at u. Note that the preprocessing algorithm that labels

    the vertices is not required to be distributed. However, the sensitivity query is distributed in

    the sense that when queried about the sensitivity of an adjacent edge (u, v), vertex u is only

allowed to look at its own label L(u) and to invoke a distributed algorithm (that is subject

    to the constraints in the common message passing model, for example, the only way to move

    information between nodes is between neighboring nodes, and only by using messages).

We evaluate an algorithm solving the distributed sensitivity problem by its label size, i.e.,

    the maximum number of bits used in a label. Using the same labels as given by the algorithm

described in the previous lemma, the following theorem can easily be obtained.

    Theorem 5.2 There exists an algorithm solving the distributed sensitivity problem with label

    size O(log n log W ).

    6 Conclusion

Our bounds are tight only for W such that log log n = o(log W). The lower bound proof does not carry over to the case of a smaller W, and the question of tight bounds in that case is still open. Note that for a large W the existing lower bound depends on W, while for a small W this dependence disappears. This seems counterintuitive: at first glance, it seems that a factor depending on W should become easier to prove in the lower bound as W shrinks.

Another interesting direction of research would be to construct efficient distributed marker algorithms, as opposed to the algorithm described here, which was not designed for an efficient distributed implementation.

We used some tools taken from the theory of implicit labeling schemes, both for proving the lower bound and for constructing the algorithm for the upper bound. See [20] for implicit labeling schemes for adjacency queries, and [?] for a comprehensive survey of implicit labeling schemes. This raises the hope that the theory of implicit labeling schemes may prove useful for studying proof labeling schemes further.

    Finally, we should mention that not much is known about the complexity of proof labeling

    schemes in general. We hope that the current paper can serve as an example that problems in

    this area are an interesting subject of study.

    References

    [1] Y. Afek, S. Kutten, and M. Yung. The Local Detection Paradigm and its Applications to

    Self Stabilization. Theoretical Computer Science, Vol. 186, No. 1–2, pp. 199–230, 1997.

    [2] B. Awerbuch and G. Varghese. Distributed Program Checking: a Paradigm for Building

    Self-Stabilizing Distributed Protocols. Proc. IEEE FOCS 1991, pp. 258-267.

    [3] H. Booth and J. Westbrook. Linear Algorithms for Analysis of Minimum Spanning and

    Shortest Paths Trees in Planar Graphs. Algorithmica Vol. 11, No. 4, pp. 341-352, 1994.

    [4] I. Cidon, I. Gopal, M. Kaplan, and S. Kutten. A Distributed Control Architecture of High-

    Speed Networks. IEEE Transactions on Communications, Vol. 43, No. 5, pp. 1950–1960,

    May 1995.

    [5] S. Deering, D. Estrin, D. Farinacci, V. Jacobson, C. Liu, and L. Wei. Protocol Independent

    Multicast (PIM): Motivation and Architecture. Internet Draft draft-ietf-pim-arch-01.ps,

    January 1995.

    [6] B. Dixon, M. Rauch, and R. E. Tarjan. Verification and Sensitivity Analysis of Minimum

    Spanning Trees in Linear Time. SIAM Journal on Computing, Vol. 21, No 6, pp. 1184-1192,

    December 1992.

[7] B. Dixon and R. E. Tarjan. Optimal Parallel Verification of Minimum Spanning Trees in Logarithmic Time. Algorithmica, Vol. 17, No. 1, pp. 11-18, January 1997.

    [8] S. Dolev, M. G. Gouda, and M. Schneider. Memory requirements for silent stabilization.

    Acta Informatica, Vol. 36, Issue 6, October 1999, pp. 447-462.


[9] S. Dolev, A. Israeli, and S. Moran. Uniform Dynamic Self-Stabilizing Leader

    Election. IEEE Trans. Parallel Distrib. Syst. 8(4), pp. 424-440 (1997).

[10] A. Arora and M. G. Gouda. Distributed Reset. IEEE Trans. Computers 43(9):

    1026-1038 (1994).

    [11] P. Fraigniaud and C. Gavoille. Routing in trees. In Proc. 28th Int. Colloq. on Automata,

    Languages & Prog., LNCS 2076, pp. 757–772, July 2001.

    [12] M.L. Fredman and D.E. Willard. Trans-Dichotomous algorithms for minimum spanning

    trees and shortest paths. Proc. 31st IEEE FOCS, Los Alamitos, CA, 1990, pp. 719-725.

    [13] R.G. Gallager, P.A. Humblet, and P.M. Spira. A distributed algorithm for minimum-weight

    spanning trees. ACM Trans. on Programming Languages and Systems, 5 (1983), pp. 66-77.

    [14] C. Gavoille, M. Katz, N.A. Katz, C. Paul, and D. Peleg. Approximate Distance Labeling

    Schemes. In 9th European Symp. on Algorithms, Aug. 2001, Aarhus, Denmark, SV-LNCS

    2161, pp. 476–488.

    [15] C. Gavoille and D. Peleg. Compact and localized distributed data structures. Distributed

    Computing 16(2-3): 111-120 (2003).

    [16] C. Gavoille, D. Peleg, S. Pérennes, and R. Raz. Distance labeling in graphs. In Proc. 12th

    ACM-SIAM Symp. on Discrete Algorithms, pp. 210–219, Jan. 2001.

    [17] D. Harel. A linear Time Algorithm for Finding Dominators in Flow Graphs and Related

    Problems. Proc. 17th ACM STOC, Salem, MA, 1985, pp. 185-194.

    [18] M. Jayaram and G. Varghese. The Complexity of Crash Failures. Proc ACM PODC 1997,

    pp. 179-188.

    [19] M. Jayaram, G. Varghese. Crash Failures Can Drive Protocols to Arbitrary States. Proc

    ACM PODC 1996, pp. 247-256.

    [20] S. Kannan, M. Naor, and S. Rudich. Implicit representation of graphs. In SIAM J. on

    Discrete Math 5, (1992), pp. 596–603.

    [21] M. Katz, N.A. Katz, A. Korman, and D. Peleg. Labeling schemes for flow and connectivity.

In Proc. 13th ACM-SIAM Symp. on Discrete Algorithms, Jan. 2002, pp. 927-936.

    [22] D. R. Karger, P.N. Klein, and R.E. Tarjan. A Randomized Linear-Time Algorithm to Find

Minimum Spanning Trees. JACM Vol. 42, No. 2, pp. 321-328, 1995.


[23] J. Komlós. Linear verification for spanning trees. Combinatorica, 5 (1985), pp. 57-65.

[24] M. Naor and L. J. Stockmeyer. What Can be Computed Locally? SIAM J. Comput.

    24(6), pp. 1259-1277, 1995.

    [25] A. Korman, S. Kutten, and D. Peleg. Proof Labeling Schemes. Proc. the 24th Annual

    ACM Symposium on Principles of Distributed Computing (PODC 2005), Las Vegas, NV,

    USA, July 2005.

    [26] S. Kutten and D. Peleg. Fast Distributed Construction of Small k-Dominating Sets, and

    Applications. Journal of Algorithms 28(1), pp. 40-66 (1998).

    [27] V. King. A simpler minimum spanning tree verification algorithm. Algorithmica (1997)

    18, pp. 263-270.

[28] V. King, C. K. Poon, V. Ramachandran, and S. Sinha. An Optimal EREW PRAM Algorithm

    for Minimum Spanning Tree Verification. IPL 62(3):153-159, 1997.

    [29] R. E. Tarjan. Sensitivity Analysis of Minimum Spanning Trees and Shortest Paths Trees.

    Information Processing Letters, 14 (1982), pp. 30-33; Corrigendum, Ibid. 23 (1986) page

    219.

    [30] R. E. Tarjan. Data Structures and Network Algorithms. SIAM, Philadelphia, PA, USA,

    1983.

[31] F. Kuhn and R. Wattenhofer. Constant Time Distributed Dominating Set Approximation.

    Distributed Computing 17(4), pp. 303-310 (2005).

[32] F. Kuhn, T. Moscibroda, and R. Wattenhofer. What cannot be computed locally! ACM

    PODC 2004, pp. 300-309.

    [33] S. Pettie and V. Ramachandran. An optimal minimum spanning tree algorithm. JACM

    49(1), pp. 16-34 (2002).

[34] R. E. Tarjan. Applications of path compression on balanced trees. JACM 26 (1979), pp. 690-715.

    [35] A. C. Yao. Some Complexity Questions Related to Distributed Computing. Proc. of 11th

    STOC, pp. 209-213, 1979.


Appendix
