

Information Systems Frontiers 7:4/5, 337–358, 2005. © 2005 Springer Science + Business Media, Inc. Manufactured in The Netherlands.

Propagation Models for Trust and Distrust in Social Networks

Cai-Nicolas Ziegler and Georg Lausen
Institut für Informatik, Universität Freiburg, Georges-Köhler-Allee, Gebäude 51, 79110 Freiburg i.Br., Germany
E-mail: [email protected]; [email protected]

Abstract. Semantic Web endeavors have mainly focused on issues pertaining to knowledge representation and ontology design. However, besides understanding information metadata stated by subjects, knowing about their credibility becomes equally crucial. Hence, trust and trust metrics, conceived as computational means to evaluate trust relationships between individuals, come into play. Our major contribution to Semantic Web trust management through this work is twofold. First, we introduce a classification scheme for trust metrics along various axes and discuss advantages and drawbacks of existing approaches for Semantic Web scenarios. Hereby, we devise an advocacy for local group trust metrics, guiding us to the second part, which presents Appleseed, our novel proposal for local group trust computation. Compelling in its simplicity, Appleseed borrows many ideas from spreading activation models in psychology and relates their concepts to trust evaluation in an intuitive fashion. Moreover, we provide extensions for the Appleseed nucleus that make our trust metric handle distrust statements.

Key Words. trust, semantic web, trust propagation, spreading activation models, balance theory, web of trust

1. Introduction

In our world of information overload and global connectivity leveraged through the Web and other types of media, social trust (McKnight and Chervany, 1996) between individuals becomes an invaluable and precious good. Hereby, trust exerts an enormous impact on decisions whether to believe or disbelieve information asserted by other peers. Belief should only be accorded to statements from people we deem trustworthy. Hence, trust assumes the role of an instrument for complexity reduction (Luhmann, 1998). However, when supposing huge networks such as the Semantic Web, trust judgements based on personal experience and acquaintanceship become unfeasible. In general, we accord trust, concisely defined by Mui as the “subjective expectation an agent has about another’s future behavior based on the history of their encounters” (Mui, Mohtashemi and Halberstadt, 2002), to only small numbers of people. These people, again, trust another limited set of people, and so forth. The network structure emanating from our very person, composed of trust statements linking individuals, constitutes the basis for trusting people we do not know personally. Playing an important role for the conception of Semantic Web trust infrastructure, the latter structure has been dubbed “Web of Trust” (Golbeck, Parsia and Hendler, 2003).

Its effectiveness has been underpinned through empirical evidence from social psychology and sociology, indicating that transitivity is an important characteristic of social networks (Holland and Leinhardt, 1972; Rapoport, 1963). To the extent that communication between individuals becomes motivated through positive affect, the drive towards transitivity can also be explained in terms of Heider’s famous “balance theory” (Heider, 1958), i.e., individuals are more prone to interact with friends of friends than with unknown peers.

Hence, we might be tempted to adopt the policy of trusting all those people who are trusted by persons we trust, exploiting transitivity in social networks. Trust would thus propagate through the network and become accorded whenever two individuals can reach each other via at least one trust path. However, common sense tells us we should not rely upon this strategy. More complex metrics are needed in order to evaluate trust between two persons more sensibly. Among other features, these metrics must take into account subtle social and psychological aspects of trust and satisfy criteria of computability and scalability alike.

The paper is organized as follows. In order to assess diverse properties of metrics, Section 2.1 briefly



Fig. 1. Sample web of trust for agent a.

introduces existing trust metrics and classifies them according to our proposed classification scheme. An investigation of trust metric classes and their fitness for Semantic Web scenarios follows in Section 2.2, along with an overview of our asserted trust model. Besides, Section 2.2 exposes the urgent need for local group trust metrics and gives examples of possible application scenarios. Section 3 forms the second part of the paper and explicitly deals with these local group trust metrics. We briefly sketch the well-known Advogato trust metric and introduce our novel Appleseed trust metric in Section 3.2. Appleseed constitutes the major contribution of this paper and represents our own approach to local group trust computation. Many of its ideas and concepts borrow from spreading activation models, which simulate human semantic memory. Section 3.3 matches Appleseed and Advogato against each other, discussing advantages and drawbacks of either approach. Furthermore, results of experiments conducted to evaluate the behavior of Appleseed under diverse conditions are illustrated in Section 3.4. Section 3.5 indicates possible modifications and gives some implementation details, while Section 3.6 briefly presents the testbed we used to base all our experiments and comparisons upon. Eventually, in Section 4.1, semantics and implications of distrust are discussed, followed by the integration of distrust into the Appleseed framework in Section 4.2.

2. Trust in Social Networks

Trust represents an invaluable and precious good one should award deliberately. Trust metrics compute quantitative estimates of how much trust an agent a should accord to its peer b, taking into account trust ratings from other persons on the network. These metrics should also act “deliberately”, not overly awarding trust to persons or agents whose trustworthiness is questionable.

2.1. Classification of trust metrics

Applications for trust metrics and trust management (Blaze, Feigenbaum and Lacy, 1996) are rife and not confined to the Semantic Web. First proposals for metrics date back to the early nineties, when trust metrics were deployed in various projects to support the Public Key Infrastructure (Zimmermann, 1995). Metrics proposed in Levien and Aiken (1998), Reiter and Stubblebine (1997), Maurer (1996), and Beth, Borcherding and Klein (1994) count among the most popular ones for public key authentication and have initiated fruitful discussions. New areas and research fields apart from PKI have come to make trust metrics gain momentum. Peer-to-peer networks, ubiquitous and mobile computing, and rating systems for online communities, where maintenance of explicit certification authorities is no longer feasible, have raised the research interest in trust. The whole plethora of available metrics can hereby be defined and characterized along various classification axes. We identify three principal dimensions with distinctive features. These axes are not orthogonal, though, for various features impose restrictions on the feature range of other dimensions. Mind that some of the below-mentioned categories have already been defined in prior work. For instance, Guha (2003) differentiates between local and global trust, and distinctive features between scalar and group trust metrics are discussed in Levien (2003). However, to our knowledge, no explicit categorization of trust metrics along various axes, supplemented with an analysis of axis interaction, exists. We therefore regard the classification scheme provided below as one major contribution of this paper. Its results are also synthesized in Fig. 2.

2.1.1. Network perspective. The first dimension influences the semantics assigned to the values computed. Trust metrics may basically be subdivided into ones with global and ones with local scope. Global trust metrics take into account all peers and trust links connecting them. Global trust ranks are assigned to an individual based upon complete trust graph information. Many global trust metrics, such as those presented


[Figure: classification axes: local vs. global (network perspective); centralized vs. distributed (computation locus); scalar vs. group (link evaluation)]

Fig. 2. Trust metric classification.

in Kamvar, Schlosser and Garcia-Molina (2003), Guha (2003), and Richardson, Agrawal and Domingos (2003), borrow their ideas from the renowned PageRank algorithm (Page et al., 1998) to compute web page reputation. The basic intuition behind the approach is that nodes should be ranked higher the better the rank of the nodes pointing to them. Obviously, the latter approach works for trust and page reputation alike.
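This intuition can be illustrated with a small power-iteration sketch in the PageRank style; the toy graph, damping factor, and iteration count below are illustrative assumptions, not details of the metrics cited above.

```python
# Hypothetical sketch of a PageRank-style global reputation computation:
# a node's rank grows with the ranks of the nodes pointing to it.
# Assumes every node has at least one outgoing edge (no dangling nodes).
def global_reputation(edges, nodes, damping=0.85, iterations=50):
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out_degree = {n: sum(1 for s, _ in edges if s == n) for n in nodes}
    for _ in range(iterations):
        # Each node keeps a base share and redistributes the rest
        # proportionally to its outgoing edges.
        new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
        for source, target in edges:
            new_rank[target] += damping * rank[source] / out_degree[source]
        rank = new_rank
    return rank

nodes = ["a", "b", "c"]
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]
ranks = global_reputation(edges, nodes)
```

Node c ends up ranked above b here because it is pointed to by two nodes instead of one, which is exactly the mutual-dependency effect described above.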

Trust metrics with local scope, on the other hand, take into account personal bias. Interestingly, some researchers claim that only local trust metrics are “true” trust metrics, since global ones compute overall reputation rather than personalized trust (Mui, Mohtashemi and Halberstadt, 2002). Local trust metrics take the agent for whom to compute trust as an additional input parameter and are able to operate on partial trust graph information. The rationale behind local trust metrics is that the persons an agent a trusts may be completely different from the range of individuals that agent b deems trustworthy. Local trust metrics exploit structural information defined by personalized webs of trust. Hereby, the personal web of trust for individual a is given through the set of trust relationships emanating from a and passing through nodes it trusts either directly or indirectly, as well as the set of nodes reachable through these relationships. Merging all webs of trust engenders the global trust graph. Local trust metrics comprise Levien’s Advogato trust metric (Levien and Aiken, 2000), metrics for modelling the Public Key Infrastructure (Beth, Borcherding and Klein, 1994; Maurer, 1996; Reiter and Stubblebine, 1997), Golbeck’s metrics for Semantic Web trust (Golbeck, Parsia and Hendler, 2003), and Sun Microsystems’s Poblano (Chen and Yeager, 2003). The latter work hereby strongly resembles Abdul-Rahman and Hailes (1997).

2.1.2. Computation locus. The second axis refers to the place where trust relationships between individuals are evaluated and quantified. Local, or centralized, approaches perform all computations on one single machine and hence need to be granted full access to relevant trust information. The trust data itself may hereby be distributed over the network. Most of the before-mentioned metrics count among the class of centralized approaches.

Distributed metrics for the computation of trust and reputation, such as those described in Richardson, Agrawal and Domingos (2003), Kamvar, Schlosser and Garcia-Molina (2003), and Sankaralingam, Sethumadhavan and Browne (2003), equally deploy the load of computation on every trust node in the network. Upon receiving trust information from its predecessor nodes in the trust graph, an agent a merges the data with its own trust assertions and propagates the synthesized values to its successor nodes. The entire process of trust computation is necessarily asynchronous, and its convergence depends on the eagerness or laziness of nodes to propagate information. Another characteristic feature of distributed trust metrics refers to the fact that they are inherently global. Though the individual computation load is decreased with respect to centralized computation approaches, nodes need to store trust information about any other node in the system.

2.1.3. Link evaluation. The third dimension distinguishes scalar and group trust metrics. According to Levien (2003), scalar metrics analyze trust assertions independently, while group trust metrics evaluate groups of assertions “in tandem”. PageRank (Page et al., 1998) and related approaches count among global group trust metrics, for the reputation of one page depends on the ranks of referring pages, thus entailing parallel evaluation of relevant nodes thanks to mutual dependencies. Advogato (Levien and Aiken, 2000) represents an example of local group trust metrics. Most other trust metrics count among the category of scalar ones, tracking trust paths from sources to targets and not performing parallel evaluations of groups of trust assertions. Hence, another basic difference between scalar and group trust metrics refers to their functional design. In general, scalar metrics compute trust between two given individuals a and b taken from the set V of all agents.

On the other hand, group trust metrics generally compute trust ranks for sets of individuals in V. Hereby, global group trust metrics assign trust ranks for every a ∈ V, while local ones may also return ranked subsets of V. Note that complete trust graph information is only important for global group trust metrics, but not for local ones. Informally, local group trust metrics may be defined as metrics to compute neighborhoods of trusted peers for an individual a. As input parameters, these trust metrics take an individual a ∈ V for which to compute the set of peers it should trust, as well as an amount of trust the latter wants to share among the most trustworthy agents. For instance, in Levien and Aiken (2000), the amount of trust is said to correspond to the number of agents that a wants to trust. The output is hence given by a trusted subset of V.
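The input/output behavior just described can be sketched as the following function signature; the ranking step is a deliberately naive placeholder (direct neighbors only), since real local group trust metrics such as Advogato or Appleseed also follow indirect trust paths.

```python
# Hypothetical interface of a local group trust metric: given a source
# agent and the number n of peers to trust, return a trusted subset of V.
def local_group_trust(trust_graph, source, n):
    # Naive placeholder: rank only direct neighbors by trust weight.
    neighbors = trust_graph.get(source, {})
    ranked = sorted(neighbors, key=neighbors.get, reverse=True)
    return set(ranked[:n])

# Toy web of trust for illustration.
graph = {"a": {"b": 0.9, "c": 0.4, "d": 0.7}}
trusted = local_group_trust(graph, "a", 2)
```

Note that the output is a set of agents rather than a scalar trust value, which is the defining difference from scalar metrics.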

Note that scalar trust metrics are inherently local, while group trust metrics do not impose any restrictions on features of other axes.

2.2. Semantic web trust

Most presented metrics and trust models have been proposed for scenarios other than the Semantic Web. In fact, research in trust infrastructure and metrics for the latter network of metadata still has to come of age and gain momentum. Before discussing specific requirements and fitness properties of trust metrics along the axes proposed before, we need to define one common trust model for the Semantic Web to rely upon. Some steps towards one such common model have already been taken and incorporated into the FOAF (Dumbill, 2002) project. FOAF is an abbreviation for “Friend of a Friend” and aims at enriching personal homepages with machine-readable content encoded in RDF statements. Besides various other information, these publicly accessible pages allow their owners to nominate all individuals part of the FOAF universe they know, thus weaving a “web of acquaintances” (Golbeck, Parsia and Hendler, 2003). Golbeck has extended the FOAF schema to also contain trust assertions with values ranging from 1 to 9, where 1 denotes complete distrust and 9 absolute trust towards the individual for which the assertion has been issued (Golbeck, Parsia and Hendler, 2003). Hereby, her assumption that trust and distrust represent symmetrically opposed concepts perfectly aligns with Abdul-Rahman and Hailes’s work (Abdul-Rahman and Hailes, 2000).

The model that we adopt is quite similar to FOAF and its extensions, but only captures the notion of trust and lack of trust, instead of trust and distrust. Note that zero trust and distrust are not the same (Marsh, 1994a) and may hence not be intermingled. Explicit modelling of distrust has some serious implications for trust metrics and will hence be discussed separately in Section 4. Mind that only few research endeavors have investigated the incorporation of distrust into trust models, e.g., Jøsang, Gray, and Kinateder (2003), Guha (2003), and Guha, Raghavan, and Tomkins (2004).

2.2.1. Trust model. In this section, we present the constituents of our model for the Semantic Web trust infrastructure. As is the case for FOAF, we assume that all trust information is publicly accessible to any agent in the system through machine-readable personal homepages distributed over the network. This assumption may raise privacy concerns and will be discussed and justified later.

• Agent set V = {a_1, …, a_n}. Similar to the FOAF approach, we assume agents a ∈ V to be represented and uniquely identified by the URI of their machine-readable personal homepage.

• Partial trust function set T = {W_{a_1}, …, W_{a_n}}. Every agent a is associated with one partial trust function W_a : V → [0, 1]⊥, which corresponds to the set of trust assertions that a has stated on its machine-readable homepage. In most cases, these functions will be very sparse, as the number of individuals for which an agent is able to assign explicit trust ratings is much smaller than the total number n of agents on the Semantic Web:

W_{a_i}(a_j) = { p,  if trust(a_i, a_j) = p
              { ⊥,  if no rating for a_j from a_i exists

Note that the higher the value of W_{a_i}(a_j), the more trustworthy a_i deems a_j. Conversely, W_{a_i}(a_j) = 0 means that a_i considers a_j to be not trustworthy at all. The assignment of trust through continuous values between 0 and 1 and their adopted semantics is in perfect accordance with Marsh (1994a), where possible stratifications of trust values are proposed. Our trust model defines one directed trust graph with nodes being represented by agents a ∈ V and directed edges from nodes a_i to nodes a_j being trust statements with weight W_{a_i}(a_j).

For convenience, we furthermore introduce the partial function W : V × V → [0, 1]⊥, which we define as the union of all partial functions W_a ∈ T.
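Under the model above, the partial trust functions W_a can be represented as sparse mappings, with an absent entry playing the role of ⊥; the URIs below are made up purely for illustration.

```python
# Sketch of the trust model: each agent's partial trust function W_a is a
# sparse dict over agent URIs; a missing key corresponds to ⊥ (no rating).
trust_functions = {
    "http://example.org/alice": {"http://example.org/bob": 0.8},
    "http://example.org/bob": {"http://example.org/carol": 0.5},
}

def W(a_i, a_j):
    """Union of all partial functions: V x V -> [0, 1], or None for ⊥."""
    return trust_functions.get(a_i, {}).get(a_j)

rated = W("http://example.org/alice", "http://example.org/bob")    # 0.8
unrated = W("http://example.org/alice", "http://example.org/carol")  # None, i.e. ⊥
```

The sparse representation mirrors the observation above that each W_a assigns explicit ratings to only a tiny fraction of all n agents.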

2.2.2. Trust metrics for the semantic web. Trust and reputation ranking metrics have primarily been used for public key certification (Reiter and Stubblebine, 1996, 1997; Levien and Aiken, 1998; Maurer, 1996; Beth, Borcherding and Klein, 1994), rating and reputation systems part of online communities (Guha, 2003; Levien and Aiken, 2000; Levien, 2003), peer-to-peer networks (Kamvar, Schlosser and Garcia-Molina, 2003; Sankaralingam, Sethumadhavan and Browne, 2003; Kinateder and Rothermel, 2003; Kinateder and Pearson, 2003; Aberer and Despotovic, 2001), and also mobile computing fields (Eschenauer, Gligor and Baras, 2002). Each of these scenarios favors different trust metrics. For instance, reputation systems for online communities tend to make use of centralized trust servers that compute global trust values for all users on the system (Guha, 2003). On the other hand, peer-to-peer networks of moderate size rely upon distributed approaches that are in most cases based upon PageRank (Kamvar, Schlosser and Garcia-Molina, 2003; Sankaralingam, Sethumadhavan and Browne, 2003).

The Semantic Web, however, is expected to be made up of millions of nodes representing agents. The fitness of distributed approaches to trust metric computation, such as those depicted in Richardson, Agrawal and Domingos (2003) and Kamvar, Schlosser and Garcia-Molina (2003), is hence limited for several reasons:

• Trust data storage. Each agent a needs to store trust information about every other agent b on the Semantic Web. Agent a uses this information in order to merge it with its own trust beliefs and propagates the synthesized information to trusted agents. Even though we might expect the size of the Semantic Web to be several orders of magnitude smaller than the traditional Web, the number of agents for which to keep trust information will still exceed the storage capabilities of “normal” agents.

• Convergence. The structure of the Semantic Web is diffuse and not subject to some higher ordering principle or hierarchy. Furthermore, the process of trust propagation is necessarily asynchronous. As the Semantic Web is huge in size, with possibly numerous antagonistic or idle agents, convergence of trust values might take a very long time.

The huge advantage of distributed approaches, on the other hand, is the immediate availability of computed trust information for any other agent in the system, as well as the fact that agents have to disclose their trust assertions only to peers they trust (Richardson, Agrawal and Domingos, 2003). For instance, suppose that a declares its trust in b to be 0.1, which is very low. Hence, a might want b not to know about that fact. As distributed metrics only propagate synthesized trust values from nodes to successor nodes in the trust graph, a would not have to disclose its trust statements to b.

When it comes to centralized, i.e., locally computed, metrics, full trust information access is required for agents inferring trust. Hence, online communities based on trust require their users to disclose all trust information to the community server, but not necessarily to other peers (Guha, 2003). Privacy is thus maintained. On the Semantic Web and in the area of ubiquitous and mobile computing, however, it is not only some central authority which computes trust. Any agent might want to do so. Our own trust model, as well as trust models proposed in Golbeck, Parsia and Hendler (2003), Eschenauer, Gligor and Baras (2002), and Abdul-Rahman and Hailes (1997), are hence based upon the assumption of publicly available trust information. Though privacy concerns may persist, this assumption is vital due to the mentioned deficiencies of distributed computation models. Moreover, centralized global metrics, such as those depicted in Guha (2003) and Page et al. (1998), also fail to fit the requirements imposed by the Semantic Web: due to the huge number of agents issuing trust statements, only dedicated server clusters would be able to manage the whole bulk of trust relationships. For small agents and applications roaming the Semantic Web, global trust computation is not feasible.

The traditional as well as the Semantic Web bear significant traits of small-world networks (Golbeck, Parsia and Hendler, 2003). Small-world theory has been investigated extensively by Stanley Milgram, social psychologist at Harvard University. His hypothesis, commonly referred to as “six degrees of separation”, states that members of any large social network are connected to each other through short chains of intermediate acquaintances (Gray et al., 2003). Relating his research results to trust on the Semantic Web, we come to conclude that average trust path lengths between any two individuals are small. Hence, locally computed local trust metrics considering trust paths from trust sources to trust targets, such as the ones proposed for PKI (Reiter and Stubblebine, 1996, 1997; Levien and Aiken, 1998; Maurer, 1996; Beth, Borcherding and Klein, 1994), may be expected to lend themselves well to the Semantic Web. In contrast to global metrics, no clustering of massive CPU power is required to compute trust.


Besides centrally computed scalar trust metrics taking into account personal bias, we advocate local group trust metrics for the Semantic Web. These metrics bear several welcome properties with respect to computability and complexity, which may be summarized as follows:

• Partial trust graph exploration. Global metrics require a priori full knowledge of the entire trust network. Distributed metrics store trust values for all agents in the system, thus implying massive data storage demands. On the other hand, when computing trusted neighborhoods, the trust network only needs to be explored partially: originating from the trust source, one only follows those trust edges that seem promising, i.e., those bearing high trust weights and not too far away from the trust source. Inspection of personal, machine-readable homepages is thus performed in a just-in-time fashion. Hence, prefetching bulk trust information is not required.

• Computational scalability. Tightly intertwined with partial trust graph exploration is computational complexity. Local group trust metrics scale well to any social network size, as only tiny subsets of relatively constant size are visited. This is not the case for global trust metrics.
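The partial, just-in-time exploration described above might be sketched as follows; the trust threshold, depth bound, and toy graph are illustrative assumptions rather than part of any specific metric.

```python
from collections import deque

# Hypothetical sketch of partial trust graph exploration: starting from the
# source, follow only promising edges (weight >= threshold) up to max_depth,
# as if fetching each agent's trust statements just in time.
def explore(trust_graph, source, threshold=0.5, max_depth=2):
    visited = {source}
    queue = deque([(source, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue  # do not expand beyond the distance bound
        for neighbor, weight in trust_graph.get(node, {}).items():
            if weight >= threshold and neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, depth + 1))
    return visited

graph = {
    "a": {"b": 0.9, "c": 0.3},   # edge to c falls below the threshold
    "b": {"d": 0.8},
    "d": {"e": 0.9},             # e lies beyond max_depth
}
reached = explore(graph, "a")
```

Only the small, promising neighborhood around the source is touched, which is why such metrics scale independently of total network size.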

At the time of this writing, local group trust metrics have been subject to comparatively sparse research interest and none, to our best knowledge, have been proposed for the Semantic Web. However, we believe that local group trust metrics will play an important role for trust-based communities on the Semantic Web. Application scenarios for group trust are rife. In order not to go beyond the scope of this article, we will give just one detailed example dealing with trust in metadata statements:

The Semantic Web basically consists of metadata assertions that machines can understand by virtue of ontology sharing. However, since the number of agents able to publish statements is vast, credibility in those statements should be limited. The issue of trust in Semantic Web content has already been addressed in Gil and Ratnakar (2002). Herein, the authors propose a centralized system which allows issuing statements and analyzing their reliability and credibility. Complementary to this work by Gil and Ratnakar, the W3C Annotea Project intends to provide an infrastructure for assigning annotations to statements (Kahan, 2001). These annotations could also include statements about the credibility of certain metadata. Supposing such an environment, and supposing an agent a who wants to reason about the credibility of an assertion s found on the Semantic Web, local group trust metrics could play an important role in its quest: not being able to judge the credibility of s on its own, a could refer to its personal web of trust and compute its n most trusted peers. The latter trust neighborhood then takes part in an opinion poll where a wants to know about the credibility its trusted peers assign to s. Technically, this could be achieved by searching Annotea servers for statements by a’s peers about s. The eventual decision whether to believe s or not could then be made by averaging the credibility ratings of its trusted peers. Similar models with distributed reputation systems based on trust have been proposed in Kinateder and Rothermel (2003).
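The opinion-poll decision described above can be sketched as follows; the per-peer ratings, their [0, 1] scale, and the 0.5 acceptance cut-off are illustrative assumptions.

```python
# Hypothetical sketch: agent a polls its trusted neighborhood for
# credibility ratings of a statement s and averages them.
def believe(ratings, cutoff=0.5):
    """ratings: credibility values in [0, 1] issued by a's trusted peers."""
    if not ratings:
        return False  # no evidence at all: withhold belief
    return sum(ratings) / len(ratings) >= cutoff

peer_ratings = [0.9, 0.7, 0.4]  # as found, e.g., on Annotea-style servers
decision = believe(peer_ratings)
```

Other aggregation rules (weighting each rating by the peer's trust rank, say) would fit the same skeleton.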

3. Local Group Trust Metrics

Local group trust metrics, in their function as means to compute trust neighborhoods, have not been subject to mainstream research until now. Actually, significant research has been limited to the work done by Levien (2003), Levien and Aiken (1998), who conceived the Advogato group trust metric. This section provides an overview of Advogato and introduces our own Appleseed trust metric, eventually comparing both approaches.

3.1. Outline of Advogato maxflow

The Advogato maximum flow trust metric was proposed by Levien and Aiken (2000) in order to discover which users are trusted by members of an online community and which are not. Hereby, trust is computed by a centralized community server and considered relative to a seed of users enjoying supreme trust. However, the metric is not only applicable to community servers, but also to arbitrary agents which may compute personalized lists of trusted peers rather than one single global ranking for the whole community they belong to. In this case, the agent itself constitutes the singleton trust seed. The following paragraphs briefly introduce basic concepts. For more detailed information, refer to Levien and Aiken (2000, 1998) and Levien (2003).

3.1.1. Trust computation steps. Local group trust metrics compute sets of agents trusted by those being part of the trust seed. In the case of Advogato, its input is given by an integer number n, which is supposed to be equal to the number of members to trust (Levien and Aiken, 2000), as well as the trust seed s, being a subset of the entire set of users V. The output is a characteristic function that maps each member to a boolean value indicating trustworthiness:

TrustM : 2^V × N0+ → (V → {true, false})

The trust model underlying Advogato does not provide support for weighted trust relationships in its original version. Hence, trust edges extending from individual x to y express blind, i.e., full, trust of x in y. Metrics for PKI maintenance suppose similar models. Maximum integer network flow computation (Ford and Fulkerson, 1962) was investigated by Reiter and Stubblebine (1997, 1996) in order to make trust metrics more reliable. Levien adopted and extended this approach for group trust in his Advogato metric.

Capacities CV : V → N are assigned to every community member x ∈ V based upon the shortest-path distance from the seed to x. Hereby, the capacity of the seed itself is given by the input parameter n mentioned before, whereas the capacity of each successive distance level is equal to the capacity of the previous level l divided by the average outdegree of trust edges e ∈ E extending from l. The trust graph obtained hence contains one single source, which is the set of seed nodes considered one single "virtual" node, and multiple sinks, i.e., all nodes other than those defining the seed. Capacities CV(x) constrain nodes. In order to apply Ford-Fulkerson maximum integer network flow (Ford and Fulkerson, 1962), the underlying problem has to be formulated as single-source/single-sink, having capacities CE : E → N constrain edges instead of nodes. Hence, Algorithm 1 is applied to the old directed graph G = (V, E, CV), resulting in a new graph structure G′ = (V′, E′, CE′).

Figures 3 and 4 depict the outcome of converting node-constrained single-source/multiple-sink graphs into single-source/single-sink ones with capacities constraining edges.

Conversion is followed by simple integer maximum network flow computation from the trust seed to the super-sink. Eventually, trusted agents x are exactly those peers for which there is flow from "negative" nodes x− to the super-sink. An additional constraint needs to be introduced, requiring flow from x− to the super-sink whenever there is flow from x− to x+. The

function transform (G = (V, E, CV)) {
  set E′ ← ∅, V′ ← ∅;
  for all x ∈ V do
    add node x+ to V′;
    add node x− to V′;
    if CV(x) ≥ 1 then
      add edge (x−, x+) to E′;
      set CE′(x−, x+) ← CV(x) − 1;
      for all (x, y) ∈ E do
        add edge (x+, y−) to E′;
        set CE′(x+, y−) ← ∞;
      end do
      add edge (x−, supersink) to E′;
      set CE′(x−, supersink) ← 1;
    end if
  end do
  return G′ = (V′, E′, CE′);
}

Algorithm 1. Trust graph conversion.
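Algorithm 1's node splitting can be sketched in Python as follows. The dictionary-based graph representation, string node names, and the function name `transform` are our own assumptions for illustration, not part of the paper:

```python
import math

def transform(nodes, edges, cap):
    """Split each capacity-constrained node x into x- and x+ so that a
    standard edge-constrained max-flow algorithm applies (Algorithm 1)."""
    V2, E2 = set(), {}          # E2 maps (u, v) -> edge capacity
    SINK = "supersink"
    for x in nodes:
        V2.add(x + "-"); V2.add(x + "+")
        if cap[x] >= 1:
            # internal edge carries the node capacity, minus the unit
            # reserved for the edge to the super-sink
            E2[(x + "-", x + "+")] = cap[x] - 1
            for (u, v) in edges:
                if u == x:       # outgoing trust edges get infinite capacity
                    E2[(x + "+", v + "-")] = math.inf
            E2[(x + "-", SINK)] = 1
    V2.add(SINK)
    return V2, E2
```

Running it on a two-node graph a → b with CV(a) = 2, CV(b) = 1 yields the internal edges (a−, a+) with capacity 1 and (b−, b+) with capacity 0, plus unit-capacity edges to the super-sink, as the algorithm prescribes.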

[Figure: trust graph with seed node a; node capacities CV are shown next to each node — a: 5, b: 3, c: 3, e: 1, f: 1, h: 1, d: 0, g: 0, i: 0.]

Fig. 3. Trust graph before conversion for Advogato.

latter constraint ensures that node x does not merely serve as an intermediate for the flow to pass through, but is actually added to the list of trusted agents when reached by network flow. However, the standard implementation of Ford-Fulkerson traces shortest paths to the sink first (Ford and Fulkerson, 1962). Therefore, the above constraint is satisfied implicitly already.

Example 1 (Advogato trust computation). Suppose the trust graph depicted in Figs. 3 and 4. The only seed node is a with initial capacity CV(a) = 5. Hence, taking into account the outdegree of a, nodes at unit distance from the seed, i.e., nodes b and c, are assigned capacities CV(b) = 3 and CV(c) = 3, respectively. The average outdegree of both nodes is 2.5, so that second-level nodes e and h obtain unit capacity. When


[Figure: converted trust graph with split nodes a±, b±, c±, e±, h±; edge capacities (e.g., 4, 2, 1, 0) label the internal edges, and each x− is connected to the super-sink by a unit-capacity edge.]

Fig. 4. Trust graph after conversion for Advogato.

computing maximum integer network flow, agent a will accept itself, b, c, e, and h as trustworthy peers.

3.1.2. Attack-resistance properties. Advogato has been designed with resistance against massive attacks from malicious agents outside of the community in mind. Therefore, an upper bound for the number of "bad" peers chosen by the metric is provided in Levien and Aiken (2000), along with an informal security proof to underpin its fitness. Resistance against malevolent users trying to break into the community may already be observed in the example depicted by Fig. 1, supposing node n to be "bad": though agent n is trusted by numerous persons, it is deemed less trustworthy than, for instance, x. While there are fewer agents trusting x, these agents enjoy higher trust reputation than the numerous persons trusting n. Hence, it is not just the number of agents trusting an individual i, but also the trust reputation of these agents that exerts an impact on the trust assigned to i. PageRank (Page et al., 1998) works in a similar fashion and has been claimed to possess attack-resistance properties similar to those of the Advogato trust metric (Levien, 2003). In order to make the concept of attack-resistance more tangible, Levien proposes the "bottleneck property" as a common feature of attack-resistant trust metrics. Informally, this property states that the "trust quantity accorded to an edge s → t is not significantly affected by changes to the successors of t" (Levien, 2003). Moreover, attack-resistance features of various trust metrics are discussed in detail in Levien and Aiken (1998) and Twigg and Dimmock (2003).

3.2. Appleseed Trust Metric
The Appleseed trust metric constitutes the main contribution of this work and is our novel proposal for local group trust metrics. In contrast to Advogato, which is inspired by maximum network flow computation, the basic intuition of Appleseed is motivated by spreading activation models. Spreading activation models were first proposed by Quillian (1968) in order to simulate human comprehension through semantic memory. They are commonly described as "models of retrieval from long-term memory in which activation subdivides among paths emanating from an activated mental representation" (Smith et al., 2003). By the time of this writing, the seminal work of Quillian has been ported to a whole plethora of other disciplines, such as latent


[Figure: left, a node chain a → b → c → d; right, an edge (a, b) leading into a cycle through b, c, and d.]

Fig. 5. Node chains and rank sinks.

semantic indexing (Ceglowski, Coburn and Cuadrado, 2003) and text illustration (Hartmann and Strothotte, 2002). As an example, we will briefly introduce the spreading activation approach adopted in Ceglowski, Coburn and Cuadrado (2003) for semantic search in contextual network graphs in order to then relate Appleseed to the former work.

3.2.1. Searches in contextual network graphs. The graph model underlying search strategies in contextual network graphs is almost identical in structure to the one presented in Section 2.2.1, i.e., edges (x, y) ∈ E ⊆ V × V connecting nodes x, y ∈ V. Edges are assigned continuous weights through W : E → [0, 1]. Source node s, from which the search starts, is activated through an injection of energy e, which is then propagated to other nodes along edges according to a set of simple rules: all energy is fully divided among successor nodes with respect to their normalized local edge weight, i.e., the higher the weight of an edge (x, y) ∈ E, the higher the portion of energy that flows along that edge. Furthermore, supposing average outdegrees greater than one, the closer node x is to the injection source s, and the more paths lead from s to x, the higher the amount of energy flowing into x. To eliminate endless, marginal, and negligible flow, energy streaming into node x must exceed threshold T in order not to run dry. The described approach is captured formally by Algorithm 2, which propagates energy recursively.

procedure energize (e ∈ R+0, s ∈ V) {
  energy(s) ← energy(s) + e;
  e′ ← e / ∑(s,n)∈E W(s, n);
  if e > T then
    ∀(s, n) ∈ E : energize (e′ · W(s, n), n);
  end if
}

Algorithm 2. Recursive energy propagation.
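Algorithm 2 translates almost line for line into Python; the adjacency-dict representation and parameter names below are our own assumptions. Note that, exactly as the following subsection criticizes, the threshold T only stops recursion once shares become small, so cycles of full-weight edges would recurse without decay:

```python
# A minimal sketch of Algorithm 2 (recursive energy propagation),
# assuming the graph is given as out_edges[x] = {successor: weight}.
def energize(e, s, out_edges, energy, threshold):
    """Recursively spread energy e from node s along weighted edges."""
    energy[s] = energy.get(s, 0.0) + e
    succ = out_edges.get(s, {})
    total = sum(succ.values())
    if e > threshold and total > 0:
        for n, w in succ.items():
            # each successor receives a share proportional to its edge weight
            energize(e * w / total, n, out_edges, energy, threshold)
```

For example, injecting e = 3.0 at a in the graph a → b (weight 0.5), a → c (weight 1.0) leaves energy(a) = 3.0 and splits the full quantity into 1.0 for b and 2.0 for c.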

3.2.2. Trust propagation. Algorithm 2 shows the basic intuition behind spreading activation models. In order to tailor these models to trust computation, later to become the Appleseed trust metric, serious adaptations are necessary. For instance, procedure energize(e, s) registers all energy e that passed through node x, accumulated in energy(x). Hence, energy(x) represents the rank of x. Higher values indicate higher node rank. However, at the same time, all energy contributing to the rank of x is passed without loss to its successor nodes. Interpreting energy ranks as trust ranks thus raises numerous issues of semantic consistency as well as computability. Consider the graph depicted on the left-hand side of Fig. 5. Applying spreading activation according to Ceglowski, Coburn and Cuadrado (2003), the trust ranks of nodes b and d will be identical. However, common sense tells us that d should be accorded less trust than b, since its shortest-path distance to the trust seed is higher. Trust decay is commonly agreed upon (Guha, 2003; Jøsang, Gray, and Kinateder, 2003), for people tend to trust individuals trusted by immediate friends more than individuals trusted only by friends of friends. The right-hand side of Fig. 5 entails even more serious implications. All energy, or trust, respectively, distributed along edge (a, b) becomes trapped in a cycle and will never be accorded to any other nodes but those being part of that cycle, i.e., b, c, and d. These nodes will eventually acquire infinite trust rank. Obviously, the bottleneck property (Levien, 2003) does not hold. Similar issues occur with simplified versions of PageRank (Page et al., 1998), where cycles accumulating infinite rank are dubbed "rank sinks".

3.2.3. Spreading factor. We handle both issues, i.e., trust decay in node chains and the elimination of rank sinks, by tailoring the algorithm to rely upon our global spreading factor d. Hereby, let in(x) denote the energy influx into node x. Parameter d then denotes the portion


[Figure: left, source a with equally weighted edges W(a, b) = W(a, d) = 0.7, a single weak statement W(b, c) = 0.25, and full trust W(d, e) = W(d, f) = W(d, g) = 1; right, a smaller graph over a, b, and c including the backward edge W(b, a) = 1.]

Fig. 6. Issues with trust normalization.

of energy d · in(x) that the latter node distributes among successors, while retaining (1 − d) · in(x) for itself. For instance, suppose d = 0.85 and energy quantity in(x) = 5.0 flowing into node x. Then, the total energy distributed to successor nodes amounts to 4.25, while energy rank energy(x) of x increases by 0.75. Special treatment is necessary for nodes with zero outdegree. For simplicity, we assume all nodes to have an outdegree of at least one, which makes perfect sense, as will be shown later.
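The arithmetic of the spreading-factor split can be checked in two lines; the variable names are illustrative:

```python
# A node forwards d·in(x) to its successors and keeps (1 − d)·in(x).
d, influx = 0.85, 5.0
passed_on = d * influx         # 4.25, divided among successor nodes
retained = (1 - d) * influx    # 0.75, added to energy(x)
assert abs(passed_on + retained - influx) < 1e-12   # no energy is lost
```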

The spreading factor concept is very intuitive and, in fact, very close to real models of energy spreading through networks. Observe that the overall amount of energy in the network, after initial activation in0, does not change over time. More formally, suppose that energy(n) = 0 for all n ∈ V before injection in0 into source s. Then the following equation holds in every computation step of our modified spreading algorithm, incorporating the concept of spreading factor d:

∑x∈V energy(x) = in0    (1)

Spreading factor d may also be seen as the ratio between direct trust in x and trust in the ability of x to recommend others as trustworthy peers. For instance, Beth, Borcherding and Klein (1994) and Maurer (1996) explicitly differentiate between direct trust edges and recommendation edges.

We generally assume d = 0.85, though other values may also seem reasonable. For instance, having d ≤ 0.5 allows agents to keep most of the trust they are granted for themselves and only pass small portions of trust on to their peers. Observe that low values for d favor trust proximity to the source of trust injection, while high values allow trust to also reach nodes which are more distant. Furthermore, the introduction of spreading factor d is crucial for making Appleseed retain Levien's bottleneck property, as will be shown in later sections.

3.2.4. Rank normalization. Algorithm 2 makes use of edge weight normalization, i.e., the quantity e_{x→y} of energy distributed along (x, y) from x to successor node y depends on its relative weight, i.e., W(x, y) compared to the sum of weights of all outgoing edges of x:

e_{x→y} = d · in(x) · W(x, y) / ∑(x,s)∈E W(x, s)

Normalization is common practice in many trust metrics, among them PageRank (Page et al., 1998), EigenTrust (Kamvar, Schlosser and Garcia-Molina, 2003), and AORank (Guha, 2003). However, while normalized reputation or trust seems reasonable for models with plain, non-weighted edges, serious interferences occur when edges are weighted, as is the case for our trust model adopted in Section 2.2.1.

For instance, refer to the left-hand side of Fig. 6 for unwanted effects: the amounts of energy that node a accords to successors b and d, i.e., e_{a→b} and e_{a→d}, respectively, are identical in value. Note that b has issued only one trust statement, W(b, c) = 0.25, telling that its trust in c is rather weak. On the other hand, d assigns full trust to individuals e, f, and g. Nevertheless, the overall trust rank for c will be much higher than for any successor of d, for c is accorded e_{a→b} · d, while e, f, and g only obtain e_{a→d} · d · 1/3 each. Hence, c will be trusted three times as much as e, f, and g, which is not reasonable at all.
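The anomaly can be verified numerically; the common energy quantity that a accords to b and to d is an assumption (set to 1.0 here), since only the ratios matter:

```python
# Numeric sketch of the linear-normalization anomaly (left of Fig. 6).
d = 0.85
e_a_b = e_a_d = 1.0   # assumed identical energy from a to b and to d

# b's only statement is W(b, c) = 0.25, so linear normalization hands
# c the whole forwarded share despite the weak rating:
e_b_c = d * e_a_b * (0.25 / 0.25)

# d fully trusts e, f, and g (weight 1 each), so each receives a third:
e_d_each = d * e_a_d * (1.0 / 3.0)

assert abs(e_b_c - 3 * e_d_each) < 1e-12  # c gets three times the trust
```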

3.2.5. Backward trust propagation. The above issue has already been discussed in Kamvar, Schlosser and Garcia-Molina (2003), but no solution was proposed therein, arguing that "substantially good results" were achieved despite the drawbacks. We propose to alleviate the problem by making use of backward propagation of trust to the source: when computing the metric, additional "virtual" edges (x, s) from every node x ∈ V \ {s} to the trust source s are created. These edges are assigned full trust W(x, s) = 1. Existing backward links (x, s), along with their weights, are "overwritten". Intuitively, every node is supposed to blindly trust the trust source s; see Fig. 6. The impacts of adding backward propagation links are threefold:

• Mitigating relative trust. Again, we refer to the left-hand graph in Fig. 6. Trust distribution in the underlying case becomes much fairer through backward propagation links, for c now only obtains e_{a→b} · d · (0.25/(1 + 0.25)) from source s, while e, f, and g are accorded e_{a→d} · d · (1/4) each. Hence, the trust ranks of e, f, and g amount to 1.25 times the trust assigned to c.

• Avoidance of dead ends. Dead ends, i.e., nodes x with zero outdegree, require special treatment in our computation scheme. Two distinct approaches may be adopted. First, the portion of incoming trust d · in(x) supposed to be passed to successor nodes is completely discarded, which contradicts our constraint of no energy leaving the system. Second, instead of retaining only (1 − d) · in(x) of incoming trust, x keeps all trust for itself. The latter approach is also not sensible, as it encourages users not to issue trust statements for their peers. Luckily, with backward propagation of trust, all nodes are implicitly linked to the trust source s, so that there are no more dead ends to consider.

• Favoring trust proximity. Backward links to the trust source s are favorable for nodes close to the source, as their eventual trust rank will increase. On the other hand, nodes further away from s are penalized.

Overly rewarding nodes close to the source is not beyond dispute and may pose some issues. In fact, it represents the tradeoff we accept for both welcome aspects of backward propagation.

3.2.6. Nonlinear trust normalization. In addition to backward propagation, an integral part of Appleseed, we propose supplementary measures to decrease the negative impact of trust distribution based on relative weights. Situations where nodes y with poor ratings from x are awarded high overall trust ranks, thanks to the low outdegree of x, have to be avoided. Taking the squares of local trust weights provides an appropriate solution:

e_{x→y} = d · in(x) · W(x, y)² / ∑(x,s)∈E W(x, s)²

As an example, refer to node b in Fig. 6. With squared normalization, the total amount of energy flowing backward to source a increases, while the amount of energy flowing to the poorly trusted node c decreases significantly. The accorded trust quantities e_{b→a} and e_{b→c} amount to d · in(b) · (1/1.0625) and d · in(b) · (0.0625/1.0625), respectively. More serious penalization of poor trust ratings can be achieved by selecting powers above two.
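A small sketch contrasts linear and squared normalization for node b, which carries the backward edge W(b, a) = 1 and the weak statement W(b, c) = 0.25; the helper name `shares` is our own:

```python
def shares(weights, power):
    """Energy share per successor under W(x, y)^power normalization."""
    total = sum(w ** power for w in weights.values())
    return {y: w ** power / total for y, w in weights.items()}

b_out = {"a": 1.0, "c": 0.25}   # b's outgoing trust statements
linear, squared = shares(b_out, 1), shares(b_out, 2)
# linear["c"] is 0.25 / 1.25 = 20% of b's outflow, whereas
# squared["c"] shrinks to 0.0625 / 1.0625 ≈ 5.9%, matching the text.
```

Raising `power` above two penalizes poor ratings even more strongly, as noted above.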

3.2.7. Algorithm outline. Having identified modifications to apply to spreading activation models in order to tailor them for local group trust metrics, we are now able to formulate the core algorithm of Appleseed. Input and output are characterized as follows:

TrustA : V × R0+ × [0, 1] × R+ → (trust : V → R0+)

The first input parameter specifies trust seed s, the second the trust injection e, the third the spreading factor d ∈ [0, 1], and the fourth argument binds the accuracy threshold Tc, which serves as one of two convergence criteria. Similar to Advogato, the output is an assignment function of trust with domain V. However, Appleseed allows rankings of agents with respect to the trust accorded. Advogato, on the other hand, only assigns boolean values indicating presence or absence of trust.

Appleseed works with partial trust graph information. Nodes are accessed only when needed, i.e., when reached by energy flow. Trust ranks trust(x), which correspond to energy(x) in Algorithm 2, are initialized to 0. Any unknown node u hence obtains trust(u) = 0. Likewise, virtual trust edges for backward propagation from node x to the source are added at the moment that x is discovered. In every iteration, for those nodes x reached by flow, the amount of incoming trust is computed as follows:

in(x) = d · ∑(p,x)∈E ( in(p) · W(p, x) / ∑(p,s)∈E W(p, s) )

Incoming flow for x is hence determined by all flow that predecessors p distribute along edges (p, x). Note that the above equation makes use of linear normalization of relative trust weights. Replacement of linear by


nonlinear normalization according to Section 3.2.6 is straightforward, though. The trust rank of x is updated as follows:

trust(x) ← trust(x) + (1 − d) · in(x)

However, trust networks generally contain cycles and thus allow no topological sorting of nodes. Hence, the computation of in(x) for reachable x ∈ V is inherently recursive. Several iterations over all nodes are required in order to make computed information converge towards the least fixpoint. We give a criterion that has to be satisfied for convergence, relying upon the accuracy threshold Tc briefly introduced before.

Definition 1 (Termination). Suppose that Vi ⊆ V represents the set of nodes that were discovered up to step i, and trusti(x) the current trust ranks for all x ∈ V. Then the algorithm terminates when the following condition is satisfied after step i:

∀x ∈ Vi : trusti(x) − trusti−1(x) ≤ Tc    (2)

Informally, Appleseed terminates when changes of trust ranks with respect to the prior iteration i − 1 are not greater than accuracy threshold Tc.
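Definition 1 translates directly into a small helper; the function and argument names below are illustrative assumptions:

```python
# Termination test of Definition 1: no node's rank may have grown by
# more than the accuracy threshold T_c since the previous iteration.
def has_converged(trust_prev, trust_curr, t_c):
    """trust_prev/trust_curr map nodes to ranks; absent nodes count as 0."""
    return all(trust_curr[x] - trust_prev.get(x, 0.0) <= t_c
               for x in trust_curr)
```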

Moreover, when supposing spreading factor d > 0, accuracy threshold Tc > 0, and trust source s part of some connected component G′ ⊆ G containing at least two nodes, convergence, and thus termination, is guaranteed. The following paragraph gives an informal proof.

Proof (Convergence of Appleseed): Assume that fi

denotes the quantity of energy flowing through the network at step i, i.e., all the trust that has not yet been captured by some node x through function trusti(x). It follows from Equation (1) that in0 constitutes the upper bound of trust energy flowing through the network, and fi can be computed as below:

fi = in0 − ∑x∈V trusti(x)

Since d > 0 and ∃(s, x) ∈ E, x ≠ s, the sum of current trust ranks trusti(x) over all x ∈ V is strictly increasing for increasing i. Consequently, limi→∞ fi = 0 holds. Moreover, since termination is defined by some fixed accuracy threshold Tc > 0, there exists some step k such that fi ≤ Tc for all i ≥ k.

3.3. Comparison of Advogato and Appleseed
Both Advogato and Appleseed are implementations of local group trust metrics. Advogato has already proven its efficiency in practical usage scenarios such as the Advogato online community, though quantitative fitness information is lacking. Its success is mainly measured by indirect feedback, such as the amount of spam messages posted on Advogato, which has been claimed to be rather low. In order to evaluate the fitness of Appleseed as an appropriate approach to group trust computation, we relate our novel approach to Advogato for comparison:

• Attack-resistance. This property defines the behavior of trust metrics in case of malicious nodes trying to invade the system. For the evaluation of attack-resistance capabilities, we briefly introduced the "bottleneck property" in Section 3.1.2, which holds for Advogato. To recapitulate, suppose that s and t are nodes connected through trust edge (s, t). Node s is assumed good, while t is an attacking agent trying to make good nodes trust malevolent ones. In case the bottleneck property holds, manipulation "on the part of bad nodes does not affect the trust value" (Levien, 2003). Clearly, Appleseed satisfies the bottleneck property, for nodes cannot raise their impact by modifying the structure of trust statements they issue. Bear in mind that the amount of trust accorded to agent t only depends on its predecessors and does not increase when t adds more nodes. Both spreading factor d and normalization of trust statements ensure that Appleseed becomes equally as attack-resistant as Advogato.

• Trust weight normalization. We have indicated before that issuing multiple trust statements dilutes the trust accorded to successors. According to Guha (2003), this does not comply with real-world observations, where statements of trust "do not decrease in value when the user trusts one more person [. . .]". The malady that Appleseed suffers from is common to many trust metrics, notably those based upon finding principal eigenvectors (Page et al., 1998; Kamvar, Schlosser and Garcia-Molina, 2003; Richardson, Agrawal and Domingos, 2003). On the other hand, the approach pursued by Advogato does not penalize trust relationships asserted by eager trust dispensers, for node capacities do not depend on local information. Remember that capacities of nodes pertaining to level l are assigned based on the capacity of level l − 1 as well as the overall outdegree of nodes part of this level. Hence, Advogato encourages agents issuing numerous trust statements, while Appleseed penalizes overly abundant trust certificates.

• Deterministic trust computation. Appleseed is deterministic with respect to the assignment of trust rank to agents. Hence, for any arbitrary trust graph G = (V, E, W) and for every node x ∈ V, linear equations allow us to characterize the amount of trust assigned to x, as well as the quantity that x accords to its successor nodes. Advogato, however, is non-deterministic. Though the number of trusted agents, and therefore the computed maximum flow size, is determined for given input parameters, the set of agents itself is not. Changing the order in which trust assertions are issued may yield different results. For example, suppose CV(s) = 1 holds for trust seed s. Furthermore, assume s has issued trust certificates for two agents, b and c. The actual choice between b and c as trustworthy peer with maximum flow only depends on the order in which nodes are accessed.

• Model and output type. Basically, Advogato supports non-weighted trust statements only. Appleseed is more versatile by virtue of its trust model based on weighted trust certificates. In addition, Advogato returns one set of trusted peers, whereas Appleseed assigns ranks to agents. These ranks allow selecting the most trustworthy agents first and relating them to each other with respect to their accorded rank. Hereby, the definition of thresholds for trustworthiness is left to the user, who can thus tailor relevant parameters to fit different application scenarios. For instance, adapting the application-dependent threshold for the selection of trustworthy peers, which may be either an absolute or relative value, allows for enlarging or shrinking the neighborhood of trusted peers. Appleseed is hence more adaptive and flexible than Advogato.

3.4. Parameterization and Experiments
Appleseed allows numerous parameterizations of its input variables. Discussions of parameter instantiations and caveats thus constitute indispensable complements to our contribution. Moreover, we provide experimental results exposing observed effects of parameter tuning. Note that all experiments have been conducted on data obtained from "real" social networks: several web crawling tools were written to mine the Advogato community web site and extract trust assertions stated by its more than 8,000 members. Hereafter, we converted all trust data to our trust model proposed in Section 2.2.1. Notice that the Advogato community server supports four different levels of peer certification, namely "Observer", "Apprentice", "Journeyer", and "Master". We mapped these qualitative certification levels to quantitative ones, assigning W(x, y) = 0.25 for x certifying y as "Observer", W(x, y) = 0.5 for an "Apprentice", and so forth. The Advogato community grows rapidly and our crawler extracted 3,224,101 trust assertions. Heavy preprocessing and data cleansing was inevitable, eliminating reflexive trust statements W(x, x) and shrinking trust certificates to reasonable sizes. Note that some eager Advogato members issued more than two thousand trust statements, yielding an overall average outdegree of 397.69 assertions per node. Common sense tells us that this figure is hardly representative of real social networks. Applying our set of extraction tools, we tailored the test data obtained from Advogato to our needs and extracted trust networks with specific average outdegrees for the experimental analysis.

3.4.1. Trust injection. Trust values trust(x) computed by the Appleseed metric for source s and node x may differ greatly from explicitly assigned trust weights W(s, x). We have already mentioned that computed trust ranks may not be interpreted as absolute values, but rather in comparison with the ranks assigned to all other peers. In order to make assigned rank values more tangible, though, one might expect that tuning the trust injection in0 to satisfy the following proposition will align computed ranks and explicit trust statements:

∀(s, x) ∈ E : trust(x) ∈ [W (s, x) − ε, W (s, x) + ε]

However, when assuming reasonably small ε, this approach does not succeed. Recall that computed trust values of successor nodes x of s do not only depend on assertions made by s, but also on trust ratings asserted by other peers. Hence, perfect alignment of explicit trust ratings with computed ones cannot be accomplished. However, we propose alignment heuristics, incorporated into Algorithm 4, which have proven to work remarkably well in diverse test scenarios. The basic idea is to add another node i and edge (s, i) with W(s, i) = 1 to the trust graph G = (V, E, W), treating (s, i) as an indicator to tell whether trust injection in0 is "good" or not. Consequently, parameter in0 has to be adapted in order to make trust(i) converge towards W(s, i). The trust metric computation is hence repeated with different values for in0 until convergence


function TrustA (s ∈ V, in0 ∈ R+0, d ∈ [0, 1], Tc ∈ R+) {
  set in0(s) ← in0, trust0(s) ← 0, i ← 0;
  set V0 ← {s};
  repeat
    set i ← i + 1;
    set Vi ← Vi−1;
    ∀x ∈ Vi−1 : set ini(x) ← 0;
    for all x ∈ Vi−1 do
      set trusti(x) ← trusti−1(x) + (1 − d) · ini−1(x);
      for all (x, u) ∈ E do
        if u ∉ Vi then
          set Vi ← Vi ∪ {u};
          set trusti(u) ← 0, ini(u) ← 0;
          add edge (u, s), set W(u, s) ← 1;
        end if
        set w ← W(x, u) / ∑(x,u′)∈E W(x, u′);
        set ini(u) ← ini(u) + d · ini−1(x) · w;
      end do
    end do
    set m ← maxy∈Vi {trusti(y) − trusti−1(y)};
  until (m ≤ Tc)
  return (trust : {(x, trusti(x)) | x ∈ Vi});
}

Algorithm 3. Appleseed trust metric.
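Algorithm 3 can be sketched compactly in Python. The nested-dict graph representation, function name, and default arguments are our own assumptions; the paper's reference implementation is in Java:

```python
# A compact sketch of Algorithm 3 (Appleseed), assuming the trust graph
# is given as graph[x] = {successor: weight in [0, 1]}.
def appleseed(graph, s, in0, d=0.85, t_c=0.01):
    """Return {node: trust rank} for all nodes reached by energy flow."""
    trust = {s: 0.0}
    inflow = {s: in0}
    while True:
        new_inflow = {x: 0.0 for x in trust}
        for x, energy in list(inflow.items()):
            out = dict(graph.get(x, {}))
            if x != s:
                out[s] = 1.0          # virtual backward edge, full trust
            total = sum(out.values())
            if total == 0:            # only possible for a seed w/o edges
                continue
            for u, w in out.items():
                if u not in trust:    # discover node u lazily
                    trust[u] = 0.0
                    new_inflow[u] = 0.0
                # linear normalization of relative trust weights
                new_inflow[u] += d * energy * w / total
        max_delta = 0.0
        for x, energy in inflow.items():
            gain = (1 - d) * energy   # each node keeps (1 − d)·in(x)
            trust[x] += gain
            max_delta = max(max_delta, gain)
        inflow = new_inflow
        if max_delta <= t_c:          # termination per Definition 1
            return trust
```

On the chain a → b → c with full-weight edges and in0 = 1, the computed ranks decay with distance from the seed, and their sum stays below in0, in line with Equation (1).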

function Trustheu (s ∈ V, d ∈ [0, 1], Tc ∈ R+) {
  add node i, edge (s, i), set W(s, i) ← 1;
  set in0 ← 20, ε ← 0.1;
  repeat
    set trust ← TrustA (s, in0, d, Tc);
    set in0 ← adapt (W(s, i), trust(i), in0);
  until trust(i) ∈ [W(s, i) − ε, W(s, i) + ε]
  remove node i, remove edge (s, i);
  return TrustA (s, in0, d, Tc);
}

Algorithm 4. Adding weight alignment heuristics.

of explicit and computed trust values for i is achieved. Eventually, edge (s, i) and node i are removed and the metric computation is performed one more time. Experiments have shown that our imperfect alignment heuristics yield computed ranks trust(x) for direct successors x of trust source s which come close to previously specified trust statements W(s, x).

3.4.2. Spreading factor. Small values for d tend to overly reward nodes close to the trust source and penalize remote ones. Recall that low d allows nodes to retain most of the incoming trust quantity for themselves, while large d stresses the recommendation of trusted individuals and makes nodes distribute most of

[Figure: curves of linear vs. nonlinear (squared) normalization over the unit interval.]

Fig. 7. Linear and nonlinear normalization.

the assigned trust to their successor nodes:

Experiment 1 (Impact of parameter d). We compare distributions of computed rank values for three diverse instantiations of d, namely d1 = 0.1, d2 = 0.5, and d3 = 0.85. Our setup is based upon a social network with an average outdegree of six trust assignments and 384 nodes reached by trust energy spreading from our designated trust source. We furthermore suppose in0 = 200, Tc = 0.01, and linear weight normalization. Computed ranks are classified into 11 histogram cells with nonlinear cell width. Obtained results are displayed in Fig. 8. Mind that we have chosen a logarithmic scale for the vertical axis in order to render the diagram more legible. For d1, we observe that this value engenders the highest number of nodes x with ranks trust(x) ≥ 25. On the other hand, virtually no ranks ranging from 0.2 to 1 are assigned, while the number of nodes with ranks smaller than 0.05 is again much higher for d1 than for both d2 and d3.

[Figure: histogram of the number of nodes (logarithmic scale, 1 to 1000) over accorded trust rank buckets (0.05 to 50) for d1 = 0.1, d2 = 0.5, and d3 = 0.85.]

Fig. 8. Spreading factor impact.

Instantiation d3 = 0.85 constitutes the counterpart of d1. No ranks with trust(x) ≥ 25 are accorded, while interim ranks between 0.1 and 10 are much more likely for d3

than for both other instantiations of spreading factor d. Consequently, the number of ranks below 0.05 is lowest for d3.

The experiment demonstrates that high values for parameter d tend to distribute trust more evenly, neither overly rewarding nodes close to the source, nor penalizing remote ones too rigidly. On the other hand, low d assigns high trust ranks to very few nodes, namely those which are closest to the source, while the majority of nodes obtains very low trust rank. We propose to set d = 0.85 for general use.

3.4.3. Accuracy and convergence rate. We already mentioned that the Appleseed algorithm is inherently recursive. Parameter Tc constitutes the ultimate criterion for termination. We will show through an experiment that convergence is reached very fast, no matter how large the number of nodes trust is flowing through, and no matter how large the initial trust injection.

Experiment 2 (Convergence rate). The trust network we consider has an average outdegree of five trust assignments per node. The number of nodes to which trust ranks are assigned amounts to 572. We suppose d = 0.85, Tc = 0.01, and linear weight normalization. Two runs are computed, one with trust activation in1 = 200, the other with initial energy in2 = 800. Fig. 9 demonstrates the rapid convergence of both runs. Though the trust injection for the second run is four

[Figure: maximum deviation (0 to 25) plotted over the number of iterations (1 to 43) for trust injections in1 = 200 and in2 = 800; both curves drop rapidly.]

Fig. 9. Convergence of Appleseed.

times as high as for the first, convergence is reached in only a few more iterations: run one takes 38 iterations, run two terminates after 45 steps.

For both runs, we assumed accuracy threshold Tc = 0.01, which is extremely small and accurate beyond necessity already. However, experience taught us that convergence takes place rapidly even for very large networks and high amounts of trust injected, so that assuming the latter value for Tc imposes no scalability issues. In fact, the number of nodes taken into account for trust rank assignment in the above example well exceeds practical usage scenarios: mind that the case at hand demands 572 documents to be fetched from the Web, complaisantly supposing that these pages containing personal trust information for each node are cached after their first access. Hence, we may well claim that the actual bottleneck of group trust computation is not the Appleseed metric itself, but downloads of trust resources from the network. This bottleneck might also be a reason for selecting thresholds Tc greater than 0.01, in order to make the algorithm terminate after fewer node accesses.

3.5. Implementation and extensions

Appleseed was implemented in JAVA, based upon Algorithm 3. We applied moderate fine-tuning and supplemented our metric with an architectural cushion in order to access “real” machine-readable RDF homepages. Other notable modifications to the core algorithm are discussed briefly:

• Maximum number of nodes. We supplemented the set of input parameters by yet another argument M, which specifies the maximum number of nodes to unfold. This extension hinders trust energy from overly covering vast parts of the entire network. Note that accessing the personal, machine-readable homepages, which contain trust information required for the metric computation, represents the actual computation bottleneck. Hence, expanding as few nodes as possible is highly desirable. When choosing a reasonably large M, for instance twice the number of agents assumed trustworthy, we may expect not to miss any relevant nodes: mind that Appleseed proceeds breadth-first and thus considers close nodes first, which are more eligible for trust than distant ones.

• Upper-bounded trust path lengths. Another approach to sensibly restrict the number of nodes unfolded relies upon upper-bounded path lengths. The idea of constraining path lengths for trust computation has been adopted before by Reiter and Stubblebine (1996) and within the X.509 protocol (Housely et al., 1999). Depending on the overall trust network connectivity, we opt for maximum path lengths between three and six, well aware of Milgram’s “six degrees of separation” paradigm (Milgram, 1992). In fact, trust decay is inherent to Appleseed, thanks to spreading factor d and backward propagation. Stripping nodes at large distances from the seed therefore only marginally affects the trust metric computation results while providing major speed-ups at the same time.

• Zero trust retention for the source. Third, we modified Appleseed to hinder trust source s from accumulating trust energy, essentially introducing one novel spreading factor ds = 1.0 for the seed only. Consequently, all trust is divided among peers of s and none is retained, which is reasonable: remember that s wants to discover trustworthy agents, not assign trust rank to itself. Convergence may accelerate, since trust_{i+1}(x) − trust_i(x) used to be maximal for seed node s, thanks to backward propagation of trust. Furthermore, supposing the same trust quantity in0 injected, assigned trust ranks become greater in value, also enlarging gaps between neighbors in trust rank.

3.6. Testbed for local group trust metrics

Trust metrics and models for trust propagation have to be intuitive, underpinning the need for the application of Occam’s Razor. Humans must be able to comprehend why agent a was accorded higher trust rank than b and come to similar results when asked for a personal judgement. Consequently, we implemented our own testbed, which visually displays social networks, allows zooming in on specific nodes, and lays these out appropriately with minimum overlap. We made use of the yFiles library (Wiese, Eiglsperger and Kaufmann, 2001) to perform all sophisticated graph drawing. Moreover, our testbed permits parameterizing Appleseed through dialogs. Detailed output is provided, both graphical and textual. Graphical results comprise the highlighting of nodes with trust ranks above certain thresholds, while textual results return quantitative trust ranks of all accessed nodes, numbers of iterations, and so forth. We also implemented the Advogato trust metric and incorporated it into our testbed. Hereby, our implementation of Advogato does not require a priori complete trust graph information, but accesses nodes “just in time”, similar to Appleseed. All experiments were conducted on top of the testbed application.

4. Distrust

Distrust is one of the most controversial topics and issues to cope with, especially when considering trust metrics and trust propagation. Most approaches completely ignore distrust and only consider full trust or degrees of trust (Levien and Aiken, 1998; Mui, Mohtashemi and Halberstadt, 2002; Beth, Borcherding and Klein, 1994; Maurer, 1996; Reiter and Stubblebine, 1996; Richardson, Agrawal and Domingos, 2003). Others, among those Abdul-Rahman and Hailes (1997), Chen and Yeager (2003), Aberer and Despotovic (2001), and Golbeck, Parsia and Hendler (2003), do allow for distrust ratings but do not consider the subtle semantic differences pertaining to the distinct notions of trust and distrust. Consequently, according to Gans et al. (2001), “distrust is regarded as just the other side of the coin, that is, there is generally a symmetric scale with complete trust on one end and absolute distrust on the other.” Furthermore, some researchers equate the notion of distrust with lack of trust information. Contrarily, in his seminal work on the essence of trust, Marsh (1994a) has already pointed out that those two concepts, i.e., lack of trust and distrust, may not be intermingled. For instance, in the absence of trustworthy agents, we might be more prone to accept recommendations from persons we do not trust, probably because of a lack of prior experiences (Marsh, 1994a), than from persons we explicitly distrust as a result of past bad experiences or deceit. However, even Marsh pays little attention to the specifics of distrust.

Gans et al. (2001) were among the first to recognize the importance of distrust, stressing the fact that “distrust is an irreducible phenomenon that cannot be offset against any other social mechanisms”, hence including trust. In their work (Gans et al., 2001), an explicit distinction between confidence, trust, and distrust is made. Moreover, the authors indicate that distrust might be highly relevant to social networks. Its impact is not inherently negative, but may also influence the network in an extremely positive fashion. However, the primary focus of the latter work is on methodology issues and planning, not considering trust assertion evaluations and propagation through appropriate metrics.


Guha (2003) and Guha et al. (2004) eventually acknowledge the immense role of distrust with respect to trust propagation applications, arguing that “distrust statements are very useful for users to debug their Web of Trust”. For example, suppose that agent a blindly trusts b, which again blindly trusts c, which blindly trusts d. However, a completely distrusts d. The latter distrust statement hence ensures that a will not accept beliefs and ratings from d, regardless of agent a trusting b trusting c trusting d.

4.1. Semantics of distrust

The non-symmetrical nature of distrust and trust, being two perfect dichotomies, has already been recognized by recent sociological research (Lewicki, McAllister and Bies, 1998). In this section, we investigate the differences between distrust and trust pertaining to possible inferences and the propagation of statements.

4.1.1. Distrust as negated trust. Interpreting distrust as negation of trust was adopted by many trust metrics, among those the metrics proposed in Abdul-Rahman and Hailes (1997), Jøsang, Gray, and Kinateder (2003), Abdul-Rahman and Hailes (2000), and Chen and Yeager (2003). Basically, these metrics compute trust values by analyzing chains of trust statements from source s to target t, eventually merging them to obtain an aggregate value. Each chain hereby becomes synthesized into one single number through weighted multiplication of trust values along trust paths. Serious implications resulting from assuming that trust concatenation relates to multiplication (Richardson, Agrawal and Domingos, 2003), and distrust to negated trust, manifest when agent a distrusts b, which distrusts c:

¬ trust(a, b) ∧ ¬ trust(b, c) |= trust(a, c)

Jøsang, Gray, and Kinateder (2003) are well aware of this rather unwanted effect but do not deny its correctness, for the enemy of your enemy could well be your friend. Guha, on the other hand, indicates that two distrust statements cancelling each other out most often does not reflect desired behavior (Guha, 2003). We adopt the opinion of Guha and claim that distrust may not be interpreted as negated trust.
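The cancellation effect is easy to reproduce once trust concatenation is modeled as weighted multiplication along a path, with distrust encoded as negative weights. The following sketch illustrates this; the class name and the weight values are ours, chosen purely for illustration, and this is not the Appleseed model:

```java
// Minimal sketch: trust concatenation as multiplication along a chain,
// with distrust expressed as negative edge weights in [-1, 1].
// Class name and weights are illustrative, not taken from any metric.
public class ChainTrust {

    // Multiply edge weights along a path from source to target;
    // every distrust (negative) edge flips the sign of the aggregate.
    static double chain(double... weights) {
        double t = 1.0;
        for (double w : weights) {
            t *= w;
        }
        return t;
    }

    public static void main(String[] args) {
        // a distrusts b (-0.8) and b distrusts c (-0.9): the two negative
        // signs cancel, so a appears to trust c, the effect discussed above.
        double t = chain(-0.8, -0.9);
        System.out.println(t > 0);  // prints true
    }
}
```

Whether this behavior is acceptable is exactly the point of contention between Jøsang et al. and Guha.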

4.1.2. Propagation of distrust. The “conditional transitivity” (Abdul-Rahman and Hailes, 1997) of trust is commonly agreed upon and constitutes the foundation and pivotal premise that all trust metrics rely upon. However, no consensus has been achieved in the literature when it comes to the degree of transitivity and the decay rate of trust. Many approaches therefore explicitly distinguish between recommendation trust and direct trust (Jøsang, Gray, and Kinateder, 2003; Abdul-Rahman and Hailes, 1997; Maurer, 1996; Beth, Borcherding and Klein, 1994; Chen and Yeager, 2003) in order to keep the transitive fraction of trust apart from the non-transitive one. Hence, in these works, only the ultimate edge within the trust chain, i.e., the one linking to the trust target, needs to be direct, while all others are supposed to be recommendations. For the Appleseed trust metric, this distinction is made through the introduction of the global spreading factor d. However, the conditional transitivity property of trust does not equally extend to distrust. The case of double negation through distrust propagation has already been considered. Now suppose, for instance, that a distrusts b, which trusts c. Supposing distrust to propagate through the network, we might come to make the following inference:

distrust(a, b) ∧ trust(b, c) |= distrust(a, c)

This inference is more than questionable, for a penalizes c simply for being trusted by an agent that a distrusts. Obviously, this assumption is not sound and does not reflect expected real-world behavior. We assume that distrust does not allow direct inferences of any kind to be made. This conservative assumption keeps us on the “safe” side and is in perfect accordance with Guha (2003).

4.2. Incorporating distrust into Appleseed

We compare our implementation of distrust with the approach of Guha, who supposes an identical model of distrust. Guha computes trust by means of one global group trust metric, similar to PageRank (Page et al., 1998). For distrust, he proposes two candidate approaches. The first one directly integrates distrust into the iterative eigenvector computation and comes up with one single measure combining both trust and distrust. However, in networks dominated by distrust, the iteration might not converge. The second proposal first computes trust ranks by trying to find the dominant eigenvector, and then computes separate distrust ranks in one single step, based upon the iterative computation of trust ranks. Suppose that Da is the set of agents who distrust a:

DistrustRank(a) = ( ∑_{b ∈ Da} TrustRank(b) ) / |Da|
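This one-step superimposition can be sketched in a few lines; the class, method, and agent names below are ours, and the trust ranks are assumed to come from the preceding iterative eigenvector phase:

```java
import java.util.Map;
import java.util.Set;

// Sketch of Guha's one-step distrust rank: the average precomputed
// trust rank over Da, the set of agents distrusting a.
// Class, method, and agent names are illustrative.
public class OneStepDistrust {

    static double distrustRank(Map<String, Double> trustRank, Set<String> distrustersOfA) {
        if (distrustersOfA.isEmpty()) {
            return 0.0;  // nobody distrusts a
        }
        double sum = 0.0;
        for (String b : distrustersOfA) {
            sum += trustRank.getOrDefault(b, 0.0);
        }
        return sum / distrustersOfA.size();
    }

    public static void main(String[] args) {
        // Agents b and c distrust a; a's distrust rank is their mean trust rank.
        Map<String, Double> trustRank = Map.of("b", 0.6, "c", 0.2);
        System.out.println(distrustRank(trustRank, Set.of("b", "c")));
    }
}
```

Note that a’s own trust rank never enters the formula as a damping term, which is precisely the behavior we take issue with for highly controversial agents.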


The problem we perceive with this approach refers to superimposing the computation of distrust ranks after trust rank computation, which may yield some strange behavior: suppose an agent a which is highly controversial by engendering ambiguous sentiments, i.e., on the one hand there are numerous agents that trust a, while on the other hand there are numerous agents which distrust a. With the approach proposed by Guha, a’s impact through asserting distrust in other agents is huge, resulting from its immense positive trust rank. However, common sense tells us this should not be the case, for a is subject to tremendous distrust itself, thus levelling out its high trust rank.

Hence, for our own approach, we intend to directly incorporate distrust into the iterative process of the Appleseed trust metric computation, rather than superimpose distrust afterwards. Several pitfalls have to be avoided, such as the risk of non-convergence in the case of networks dominated by distrust (Guha, 2003). Furthermore, in the absence of distrust statements, we want the distrust-enhanced Appleseed algorithm, which we denote by TrustA−, to yield results identical to those engendered by the original version TrustA.

4.2.1. Normalization and distrust. First, the trust normalization procedure has to be adapted. We hereby suppose the more general case which does not necessarily assume linear normalization but normalization of weights to the power of q, as has been discussed in Section 3.2.6. Let in(x), the trust influx for agent x, be positive. As usual, we denote the global spreading factor by d, and quantified trust statements from x towards y by W(x, y). Function sign(x) returns the sign of value x. Note that from now on, we assume W : E → [−1, +1], for degrees of distrust need to be expressed as well. Then the trust quantity ex→y distributed from x to successor node y is computed as follows:

ex→y = d · in(x) · sign(W(x, y)) · w,    (3)

where

w = |W(x, y)|^q / ∑_{(x, s) ∈ E} |W(x, s)|^q

The accorded quantity ex→y becomes negative if W(x, y) is negative, i.e., if x distrusts y to a certain extent. For the relative weighting, the absolute values |W(x, s)| of all weights are considered. Otherwise, the denominator could become negative, or positive trust statements could become boosted unduly. The latter would be the case if the sum of positive trust ratings slightly outweighed the sum of negative ones, making the denominator converge towards zero. An example demonstrates the computation process:

Fig. 10. Network augmented by distrust.

Example 2 (Distribution of Trust and Distrust). We assume the trust network as depicted in Fig. 10. Let the trust energy influx into node a be in(a) = 2, and the global spreading factor d = 0.85. For simplicity, backward propagation of trust to the source is not considered. Moreover, we suppose linear weight normalization, thus q = 1. Consequently, the denominator of the normalization equation is |0.75| + |−0.5| + |0.25| + |1| = 2.5. The trust energy that a distributes to b hence amounts to ea→b = 0.51, whereas the energy accorded to the distrusted node c is ea→c = −0.34. Furthermore, we have ea→d = 0.17 and ea→e = 0.68.

Observe that trust energy becomes lost during distribution, for the sum of energy accorded along outgoing edges of a amounts to 1.02, while 1.7 was provided for distribution. The effect results from the negative trust weight W(a, c) = −0.5.
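The numbers of Example 2 can be verified with a short sketch of Eq. (3). The class and method names are ours; the weight array holds a’s outedges towards b, c, d, and e as given in Fig. 10:

```java
// Sketch of Eq. (3): trust energy passed from x to successor y, with
// distrust as negative weights and normalization over |W(x, s)|^q.
// Class and method names are illustrative.
public class EnergyDistribution {

    static double energy(double d, double influx, double weightXY,
                         double[] outWeights, double q) {
        double denominator = 0.0;
        for (double w : outWeights) {
            denominator += Math.pow(Math.abs(w), q);  // absolute values, per Eq. (3)
        }
        double fraction = Math.pow(Math.abs(weightXY), q) / denominator;
        return d * influx * Math.signum(weightXY) * fraction;
    }

    public static void main(String[] args) {
        double[] w = {0.75, -0.5, 0.25, 1.0};  // a's edges to b, c, d, e
        // Reproduces Example 2: in(a) = 2, d = 0.85, q = 1, denominator 2.5.
        System.out.printf("e(a,b) = %.2f%n", energy(0.85, 2.0, 0.75, w, 1.0));  // 0.51
        System.out.printf("e(a,c) = %.2f%n", energy(0.85, 2.0, -0.5, w, 1.0));  // -0.34
    }
}
```

Summing all four outputs gives 1.02, confirming the energy loss described above.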

4.2.2. Distrust allocation and propagation. We now analyze the case where the influx in(x) for agent x is negative. In this case, the trust allocated for x will also be negative, i.e., in(x) · (1 − d) < 0. Moreover, the energy in(x) · d that x may distribute among its successor nodes will naturally be negative as well. The implications are those which have been mentioned in Section 4.1, i.e., distrust as negation of trust and propagation of distrust. For the first case, refer to node f in Figure 10 and assume in(c) = −0.34, as derived from Example 2. The trusted agent a distrusts c, which distrusts f. Eventually, f would be accorded d · (−0.34) · (−0.25), which is positive. For the second case, node g would be assigned a negative trust quantity d · (−0.34) · (0.75), simply for being trusted by f, which is commonly distrusted. Both unwanted effects can be avoided by not allowing distrusted nodes to distribute any energy at all. Hence, more formally, we introduce a novel function out(x):

out(x) = { d · in(x)   if in(x) ≥ 0
         { 0           else        (4)

The function then has to replace d · in(x) when computing the energy distributed along edges from x to successor nodes y:

ex→y = out(x) · sign(W(x, y)) · w,    (5)

where

w = |W(x, y)|^q / ∑_{(x, s) ∈ E} |W(x, s)|^q

This design decision perfectly aligns with our assumptions made in Section 4.1 and prevents the inference of the unwanted side-effects mentioned before. Furthermore, one can easily see that the modifications introduced do not change the behavior with respect to Algorithm 3 when relationships of distrust are not considered.
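The guard of Eq. (4) can be sketched as follows; names are ours, and the weights −0.25 and 0.75 correspond to c’s outedges towards f and g in Fig. 10:

```java
// Sketch of Eqs. (4) and (5): a node with negative influx distributes
// nothing, blocking both "distrust as negated trust" and the propagation
// of distrust by association. Names are illustrative.
public class DistrustGuard {

    // Eq. (4): only nonnegative influx is propagated.
    static double out(double d, double influx) {
        return influx >= 0 ? d * influx : 0.0;
    }

    // Eq. (5) with linear normalization (q = 1); absWeightSum is the
    // precomputed sum of |W(x, s)| over all outedges of x.
    static double energy(double d, double influx, double weightXY, double absWeightSum) {
        return out(d, influx) * Math.signum(weightXY) * (Math.abs(weightXY) / absWeightSum);
    }

    public static void main(String[] args) {
        // Node c holds negative influx in(c) = -0.34 (Example 2), so it
        // forwards nothing: f is not rewarded, g is not penalized.
        double sumAbs = Math.abs(-0.25) + Math.abs(0.75);  // c's outedges
        System.out.println(energy(0.85, -0.34, -0.25, sumAbs) == 0.0);  // prints true
        System.out.println(energy(0.85, -0.34, 0.75, sumAbs) == 0.0);   // prints true
    }
}
```

With a positive influx, the guard is transparent and Eq. (5) reduces to Eq. (3).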

4.2.3. Convergence. Even in networks largely or entirely dominated by distrust, our enhanced version of Appleseed is guaranteed to converge. We briefly outline an informal proof, building upon the convergence of the core Appleseed algorithm, which has been shown before by Proof 1:

Proof 2 (Convergence in presence of distrust). Recall that only positive trust influx in(x) becomes propagated, as indicated in Section 4.2.2. Hence, all we need to show is that the overall quantity of positive trust distributed in computation step i cannot be augmented through the presence of distrust statements. In other words, suppose that G = (V, E, W) defines an arbitrary trust graph containing quantified trust statements but no distrust, i.e., W : E → [0, 1]. Now consider another trust graph G′ = (V, E ∪ D, W′) which contains additional edges D, and weight function W′ = W ∪ (D → [−1, 0[). Hence, G′ augments G by additional distrust edges between nodes taken from V. We now perform two parallel computations with enhanced Appleseed, one operating on G and the other on G′. In every step, and for every trust edge (x, y) ∈ E of G, the distributed energy ex→y is greater than or equal to that of its counterpart on G′, for the denominator of the fraction given in Eq. (5) can only become greater through additional distrust outedges. Second, for the computation performed on G′, negative energy distributed along edge (x, y) can only reduce the trust influx for y and may hence even accelerate convergence. □

However, as one might already have observed from the proof, there exists one serious implication arising from having distrust statements in the network: the overall accorded trust quantity no longer equals the initially injected energy. Moreover, in networks dominated by distrust, the overall trust energy sum may even be negative.

Experiment 3 (Network impact of distrust). We intend to analyze the number of iterations until convergence and the overall accorded trust rank for five networks. The structures of all these graphs are identical, being composed of 623 nodes with an average indegree and outdegree of 9. The only difference applies to the assigned weights: the first graph contains no distrust statements at all, while 25% of all weights are negative for the second, 50% for the third, and 75% for the fourth. The fifth graph contains only distrust statements. Appleseed parameters are identical for all five runs, having backward propagation enabled, an initial trust injection in0 = 200, spreading factor d = 0.85, convergence threshold Tc = 0.01, linear weight normalization, and no upper bound on the number of nodes to unfold. The left-hand side of Figure 11 clearly demonstrates that the number of iterations until convergence, given on the vertical axis, decreases as the proportion of distrust, observable along the horizontal axis, increases. Likewise, the overall accorded trust rank, indicated on the vertical axis of the right-hand side of Figure 11, decreases rapidly with increasing distrust, eventually dropping below zero. The same experiment was repeated for another network with 329 nodes and an average indegree and outdegree of 6, yielding similar results.

The effects observable in Experiment 3 only marginally affect the ranking itself, for trust ranks are interpreted relative to each other. Moreover, compensation for lost trust energy may be achieved by boosting the initial trust injection in0.


Fig. 11. Network impact of distrust (left: number of iterations vs. percentage of distrust statements; right: overall trust rank vs. percentage of distrust statements; curves for the 329-node and 623-node networks).

5. Discussion

In this work, we have introduced various axes to classify trust metrics with respect to diverse criteria and features. Furthermore, we have advocated the need for local group trust metrics, eventually presenting Appleseed, our main contribution. Through our proposed trust model, we have situated Appleseed within the Semantic Web universe. However, we believe that Appleseed suits other application scenarios likewise, such as group trust in online communities, open rating systems, and ad-hoc and peer-to-peer networks.

For instance, Appleseed could support peer-to-peer-based file-sharing systems in reducing the spread of self-replicating inauthentic files by virtue of trust propagation (Kamvar, Schlosser and Garcia-Molina, 2003). In that case, explicit trust statements, resulting from direct interaction, reflect belief in someone’s endeavor to provide authentic files.

Moreover, we have provided ample discussion of the semantics and propagation models of distrust, owing to the fact that the latter concept has remained rather unattended by research. Details of its incorporation into the core Appleseed framework have also been provided.

However, several open issues for future research remain. Though we have described ranking mechanisms and ways to align direct and indirect, i.e., computed, trust relationships by means of heuristics, an actual policy for eventual boolean decision-taking with respect to which agents to grant trust and which to deny has not been considered. Note that possible criteria are application-dependent. For some applications, one might want to select the n most trustworthy agents. For others, all agents with ranks above given thresholds may be eligible.

We strongly believe that local group trust metrics, such as Advogato and Appleseed, will become subject to substantial research in diverse computing domains within the near future. For instance, the Appleseed core currently undergoes integration into our decentralized, Semantic Web-based recommender system (Ziegler, 2004; Ziegler and Lausen, 2004), playing an essential role in its overall conception.

At any rate, the success or failure of Appleseed, Advogato, and other group trust metrics largely depends on the leverage that candidate application scenarios are able to unfold.

Acknowledgments

We are very grateful to Lule Ahmedi, Nathan Dimmock, Kai Simon, and Paolo Massa for insightful comments and fruitful discussions, which helped us improve the quality of the paper.

Notes

1. Recall the definition of trust given before, telling that trust is a “subjective expectation”.

2. Mind that in this context, “local” refers to the place of computation and not to network perspective.

3. Supposing identical parameterizations for the metrics in use, as well as similar network structures.

4. Though various levels of peer certification exist, their proposed interpretation does not correspond to weighted trust relationships.

5. With respect to seed node a.


6. The terms “energy” and “trust” are used interchangeably in this context.

7. By relying on predicate calculus expressions, we greatly simplify through supposing that trust, and hence distrust, is fully transitive.

References

Abdul-Rahman A, Hailes S. A distributed trust model. In: New Security Paradigms Workshop, Cumbria, UK, September 1997:48–60.

Abdul-Rahman A, Hailes S. Supporting trust in virtual communities. In: Proceedings of the 33rd Hawaii International Conference on System Sciences, Maui, HW, USA, January 2000.

Aberer K, Despotovic Z. Managing trust in a peer-2-peer information system. In: Paques H, Liu L, and Grossman D (eds.), Proceedings of the Tenth International Conference on Information and Knowledge Management, ACM Press, 2001:310–317.

Beth T, Borcherding M, Klein B. Valuation of trust in open networks. In: Proceedings of the 1994 European Symposium on Research in Computer Security, 1994:3–18.

Blaze M, Feigenbaum J, Lacy J. Decentralized trust management. In: Proceedings of the 17th Symposium on Security and Privacy, Oakland, CA, USA: IEEE Computer Society Press, 1996:164–173.

Ceglowski M, Coburn A, Cuadrado J. Semantic search of unstructured data using contextual network graphs, June 2003.

Chen R, Yeager W. Poblano: A distributed trust model for peer-to-peer networks. Technical report, Sun Microsystems, Santa Clara, CA, USA, February 2003.

Dumbill E. Finding friends with XML and RDF, June 2002. IBM’s XML Watch.

Eschenauer L, Gligor V, Baras J. On trust establishment in mobile ad-hoc networks. Technical Report MS 2002-10, Institute for Systems Research, University of Maryland, MD, USA, October 2002.

Ford L, Fulkerson R. Flows in Networks. Princeton, NJ, USA: Princeton University Press, 1962.

Gans G, Jarke M, Kethers S, Lakemeyer G. Modeling the impact of trust and distrust in agent networks. In: Proceedings of the Third International Bi-Conference Workshop on Agent-oriented Information Systems, Montreal, Canada, May 2001.

Gil Y, Ratnakar V. Trusting information sources one citizen at a time. In: Proceedings of the First International Semantic Web Conference, Sardinia, Italy, June 2002.

Golbeck J, Parsia B, Hendler J. Trust networks on the semantic web. In: Proceedings of Cooperative Intelligent Agents, Helsinki, Finland, August 2003.

Gray E, Seigneur J-M, Chen Y, Jensen C. Trust propagation in small worlds. In: Nixon P, and Terzis S (eds.), Proceedings of the First International Conference on Trust Management, volume 2692 of LNCS, Springer-Verlag, 2003:239–254.

Guha R. Open rating systems. Technical report, Stanford Knowledge Systems Laboratory, Stanford, CA, USA, 2003.

Guha R, Kumar R, Raghavan P, Tomkins A. Propagation of trust and distrust. In: Proceedings of the Thirteenth International World Wide Web Conference, New York, NY, USA: ACM Press, May 2004.

Hartmann K, Strothotte T. A spreading activation approach to text illustration. In: Proceedings of the 2nd International Symposium on Smart Graphics, Hawthorne, NY, USA: ACM Press, 2002:39–46.

Heider F. The Psychology of Interpersonal Relations. New York, NY, USA: Wiley, 1958.

Holland P, Leinhardt S. Some evidence on the transitivity of positive interpersonal sentiment. American Journal of Sociology, 1972;77:1205–1209.

Housely R, Ford W, Polk W, Solo D. Internet X.509 public key infrastructure, January 1999. Internet Engineering Task Force RFC 2459.

Jøsang A, Gray E, Kinateder M. Analysing topologies of transitive trust. In: Proceedings of the Workshop of Formal Aspects of Security and Trust, Pisa, Italy, September 2003.

Kahan J, Koivunen M-R, Prud’Hommeaux E, Swick R. Annotea—an open RDF infrastructure for shared web annotations. In: Proceedings of the Tenth International World Wide Web Conference, Hong Kong, China, May 2001:623–632.

Kamvar S, Schlosser M, Garcia-Molina H. The EigenTrust algorithm for reputation management in P2P networks. In: Proceedings of the Twelfth International World Wide Web Conference, Budapest, Hungary, May 2003.

Kinateder M, Pearson S. A privacy-enhanced peer-to-peer reputation system. In: Proceedings of the 4th International Conference on Electronic Commerce and Web Technologies, volume 2378 of LNCS, Prague, Czech Republic: Springer-Verlag, September 2003.

Kinateder M, Rothermel K. Architecture and algorithms for a distributed reputation system. In: Nixon P, Terzis S (eds.), Proceedings of the First International Conference on Trust Management, volume 2692 of LNCS, Springer-Verlag, 2003:1–16.

Levien R. Attack Resistant Trust Metrics. PhD thesis, UC Berkeley, Berkeley, CA, USA, 2003.

Levien R, Aiken A. Attack-resistant trust metrics for public key certification. In: Proceedings of the 7th USENIX Security Symposium, San Antonio, TX, USA, January 1998.

Levien R, Aiken A. An attack-resistant, scalable name service, 2000. Draft submission to the Fourth International Conference on Financial Cryptography.

Lewicki R, McAllister D, Bies R. Trust and distrust: New relationships and realities. Academy of Management Review, 1998;23(12):438–458.

Luhmann N. Trust and Power. Chichester, UK: Wiley, 1979.

Marsh S. Formalising Trust as a Computational Concept. PhD thesis, Department of Mathematics and Computer Science, University of Stirling, Stirling, UK, 1994a.

Marsh S. Optimism and pessimism in trust. In: Ramirez J (ed.), Proceedings of the Ibero-American Conference on Artificial Intelligence, Caracas, Venezuela: McGraw-Hill, 1994b.

Maurer U. Modelling a public key infrastructure. In: Bertino E (ed.), Proceedings of the 1996 European Symposium on Research in Computer Security, volume 1146 of LNCS, Springer-Verlag, 1996:325–350.

McKnight H, Chervany N. The meaning of trust. Technical Report MISRC 96-04, Management Information Systems Research Center, University of Minnesota, MN, USA, 1996.

Milgram S. The small world problem. In: Sabini J, and Silver M (eds.), The Individual in a Social World—Essays and Experiments, 2nd edition. New York, NY, USA: McGraw-Hill, 1992.

Mui L, Mohtashemi M, Halberstadt A. A computational model of trust and reputation. In: Proceedings of the 35th Hawaii International Conference on System Sciences, Big Island, HI, USA, January 2002:188–196.

Page L, Brin S, Motwani R, Winograd T. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford Digital Library Technologies Project, 1998.

Quillian R. Semantic memory. In: Minsky M (ed.), Semantic Information Processing, Boston, MA, USA: MIT Press, 1968:227–270.

Rapoport A. Mathematical models of social interaction. In: Luce D, Bush R, and Galanter E (eds.), Handbook of Mathematical Psychology, volume 2. New York, NY, USA: Wiley, 1963.

Reiter M, Stubblebine S. Path independence for authentication in large-scale systems. In: ACM Conference on Computer and Communications Security, 1996:57–66.

Reiter M, Stubblebine S. Toward acceptable metrics of authentication. In: Proceedings of the IEEE Symposium on Security and Privacy, 1997:10–20.

Richardson M, Agrawal R, Domingos P. Trust management for the semantic web. In: Proceedings of the Second International Semantic Web Conference, Sanibel Island, FL, USA, September 2003.

Sankaralingam K, Sethumadhavan S, Browne J. Distributed PageRank for P2P systems. In: Proceedings of the Twelfth International Symposium on High Performance Distributed Computing, Seattle, WA, USA, June 2003.

Smith E, Nolen-Hoeksema S, Fredrickson B, Loftus G. Atkinson and Hilgard’s Introduction to Psychology. Thomson Learning, Boston, MA, USA, 2003.

Twigg A, Dimmock N. Attack-resistance of computational trust models. In: Proceedings of the Twelfth IEEE International Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises: Enterprise Security (Special Session on Trust Management), Linz, Austria, June 2003:275–280.

Wiese R, Eiglsperger M, Kaufmann M. yFiles—visualization and automatic layout of graphs. In: Proceedings of the 9th International Symposium on Graph Drawing, volume 2265 of LNCS, Heidelberg, Germany: Springer-Verlag, January 2001:453–454.

Ziegler C-N. Semantic web recommender systems. In: Lindner W, and Perego A (eds.), Proceedings of the Joint ICDE/EDBT Ph.D. Workshop 2004, Heraklion, Greece: Crete University Press, March 2004.

Ziegler C-N, Lausen G. Analyzing correlation between trust and user similarity in online communities. In: Jensen C, Poslad S, and Dimitrakos T (eds.), Proceedings of the 2nd International Conference on Trust Management, volume 2995 of LNCS, Oxford, UK: Springer-Verlag, March 2004:251–265.

Zimmermann P. The Official PGP User’s Guide. Boston, MA, USA: MIT Press, 1995.

Cai-Nicolas Ziegler is a post-doctoral researcher at DBIS, the Databases and Information Systems group of the University of Freiburg, Germany. He studied Computer Science at the University of Passau, Germany, and Université Laval, Quebec, receiving his Diploma (equivalent to MSc) in 2003. Cai-Nicolas obtained his PhD in Computer Science from the University of Freiburg in 2005.

His primary research interests cover collaborative filtering applications and recommender systems, as well as computational trust models on the verge of human-computer interaction.

Georg Lausen is head of the research group on Databases and Information Systems (DBIS) at the University of Freiburg, Germany. He received his Diploma in Industrial Engineering in 1978, PhD in 1982, and his postdoctoral lecture qualification (Habilitation) in 1985 from the University of Karlsruhe (TH). He was associate professor for Information Technology and Integration Problems at the Technical University of Darmstadt from 1986 to 1987. From 1987 to 1994 he was full professor for databases and information systems at the University of Mannheim, and since 1994 at the University of Freiburg.

His current research interests comprise information integration, internet technologies, and Web services.