

ORGANIZATIONAL BEHAVIOR AND HUMAN DECISION PROCESSES

Organizational Behavior and Human Decision Processes 94 (2004) 61–73

www.elsevier.com/locate/obhdp

Trust and reciprocity decisions: The differing perspectives of trustors and trusted parties

Deepak Malhotra*

Harvard Business School, Mellon Hall, D3-2, Soldiers Field, Boston, MA 02163, USA

Received 5 September 2003

Available online 30 April 2004

Abstract

This paper examines trusting actions and reciprocity responses in the two-person Trust Game. Two experiments test a model suggesting that individuals in social and economic interactions are likely to view the situation from their own unique perspective. The results demonstrate that trustors focus primarily on the risk associated with trusting, while trusted parties (those who are in a position to reciprocate) base their decisions on the level of benefits they have received. Specifically, trusting is more likely when risk is low, but the likelihood of trust does not depend on the level of benefit that trust provides to trusted parties. Meanwhile, reciprocity is more likely when the benefit provided is high, but does not depend on the level of risk the trustor faced. Neither party is particularly sensitive to the factors that affect their counterpart's decision. Furthermore, trustors underestimate the extent to which the level of benefits they provide might affect the trusted party's decision to reciprocate. Responses to a post-experimental questionnaire provide additional support for the proposition that the parties view the interaction from markedly different perspectives. Implications are discussed.

© 2004 Elsevier Inc. All rights reserved.

Trust has been an important topic in the academic literature for decades (e.g., Deutsch, 1958; Lewis & Weigert, 1985; Lindskold, 1978; Luhmann, 1988; Mayer, Davis, & Schoorman, 1995; Rotter, 1967; Strickland, 1958). While definitions of trust vary across disciplines (e.g., economics, psychology, and sociology) and levels of analysis (e.g., interpersonal, institutional, etc.), many commonalities are present. Risk, or vulnerability, is a primary and consistent element in definitions of trust. For example, Rousseau, Sitkin, Burt, and Camerer's (1998) review of definitions and conceptualizations of trust across a wide variety of disciplines reported that "the willingness to be vulnerable" appears to be common to all. Specifically, Rousseau et al. (1998) defined trust as "a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another." Similarly, Johnson-George and Swap (1982) reported that "the willingness to take risks may be one of the few characteristics common to all trust situations."

I would like to thank Keith Murnighan, Leigh Thompson, David Messick, Lyn M. Van Swol, and Max Bazerman for their comments on earlier versions of the manuscript. I would also like to thank Maggie Neale and two anonymous reviewers for their insightful and important comments and ideas. Finally, thanks to Michael Jensen for one important suggestion. The research was conducted using a grant provided by the Dispute Resolution Research Center at the Kellogg School of Management at Northwestern University.

* Fax: 1-617-495-5672. E-mail address: [email protected].

0749-5978/$ - see front matter © 2004 Elsevier Inc. All rights reserved.
doi:10.1016/j.obhdp.2004.03.001

There are a variety of factors that might influence one's willingness to accept vulnerability at the hands of another. Shapiro, Sheppard, and Cheraskin (1992) suggest three broad categories (or typologies) of trust: deterrence-based trust, knowledge-based trust, and identification-based trust. Deterrence-based trust rests on a consideration of the incentives that the other party faces: if incentives are aligned, or if the other party does not gain from exploiting the vulnerability of the trustor, then trust increases. Knowledge-based trust rests on a consideration of the intrinsic characteristics of the other party: if the other party is seen as being fair and having integrity, these attributions increase trust. Identification-based trust rests on a consideration of the relationship between the parties: to the extent that each party is seen as inherently caring about the other's welfare, this perceived benevolence increases trust.


These different categories of trust share a common underlying assumption: that vulnerability exists when the other party has an incentive to exploit the trustor for personal gain. When this incentive to exploit is offset by deterrence, by the integrity of the other party, or by the nature of the relationship, trust develops. However, others (e.g., Mayer et al., 1995) have noted that vulnerability can also exist in situations where there is no incentive for the other party to exploit. In particular, a trusted party may wish to reciprocate, or honor trust, but may lack the competence or ability to do so. For example, trust in one's doctor rests not only on attributions of integrity and benevolence, but also on attributions of ability. Similarly, when a person professes a lack of trust in a meteorologist, it is not the motives or integrity of the meteorologist that are in question, but rather their competence.

The focus in this paper is on the many trust situations where the concern of the trustor is with the character of the trusted party (e.g., their benevolence, integrity, etc.), and not the trusted party's competence. In the experiments that follow, the trusted party has a relatively simple choice to make regarding whether to reciprocate or to exploit trust. As a result, the trustor's decision rests on considerations of the trusted party's motives and incentives rather than the trusted party's competence or ability.

Trust in such contexts involves vulnerability because reciprocity is not guaranteed: the trusted party might exploit the trustor for personal gain (cf. Mayer et al., 1995; Rousseau et al., 1998; Snijders, 1996). While trusting often entails benefits to one or both parties, the risk associated with trusting can be considerable when the trusted party has an incentive to exploit. Thus, it is important for parties that might trust to consider not only the potential benefits of trusting, but also the likelihood that the trusted party will reciprocate, or honor, their trust.

The good news for trustors is that reciprocity is often in the self-interest of trusted parties. For example, whenever there is a possibility for repeated interaction and reputation building, those who have been trusted might self-interestedly choose to reciprocate. Indeed, research on the development of trust suggests that a primary means of building trust is via repeated positive interactions over time (e.g., Lindskold, 1978; Osgood, 1962).

In addition, research on reciprocity suggests that people often reciprocate the acts of others even when it goes against their self-interest (Berg, Dickhaut, & McCabe, 1995; Gouldner, 1960; Ortmann, Fitzgerald, & Boeing, 2000). Gouldner (1960) reports that a "norm of reciprocity" may exist across societies. The norm dictates that one "should repay (in kind) what another has provided for us." Cialdini (1993), for instance, notes that people tend to reciprocate uninvited, and even unwanted, gifts.

Despite the seeming pervasiveness of the norm of reciprocity, however, there are many situations in which people do not reciprocate the acts of others. Berg et al. (1995) found that when trusted parties could maximize their own monetary benefit by not reciprocating, 20% chose not to reciprocate at all. Thus, trusting is not likely to engender universal reciprocity. Likewise, a party might choose not to trust because of the risk of exploitation (i.e., expecting no reciprocity) even though the other party would have been willing to reciprocate.

Thus, a critical issue for those who might trust is to consider all of the factors that affect a trusted party's likelihood or willingness to reciprocate. Research suggests that people tend to trust only when they expect others to reciprocate and honor their trust (e.g., Andreoni, 1995; Gneezy, Guth, & Verboven, 2000; Pruitt & Kimmel, 1977; Snijders & Keren, 1999). Whether these expectations are rational and reasonable (i.e., whether trustors are sensitive to the actual factors that affect reciprocity decisions), however, is not clear. Likewise, it is unclear whether trusted parties are sensitive to the factors that affect trustors' trusting decisions.

A vast amount of social psychology research, for example, suggests that decision makers in strategic interactions are unlikely to adequately consider the factors that might influence their counterparts' decisions and behavior (e.g., Bazerman, 1994; Gilovich, Kruger, & Savitsky, 1999; Jones & Nisbett, 1972). Thus, trustors might take risks when reciprocity is unlikely, or forego gains from trusting when reciprocity is likely but unanticipated.

This paper investigates these issues in the context of the "Trust Game" (Gambetta, 1988; Snijders, 1996). Prior research using similar paradigms (Berg et al., 1995; Pillutla, Malhotra, & Murnighan, 2003) suggests that trust and reciprocity are correlated and that the degree of reciprocity is a function of the level of trust: large trusting acts make reciprocity more likely and more substantive. However, it is unclear why this is the case. Large acts of trust might engender greater reciprocity because they entail greater risk for the trustor, and trusted parties appreciate this. Alternatively, large acts of trust might engender greater reciprocity because they tend to provide greater benefits to trusted parties, which makes them feel indebted. It is also possible that both of these mechanisms are at work and simultaneously affect reciprocity. Previous studies (reviewed below) have confounded these two factors. This paper disentangles them and examines which factor(s) influence decisions to trust and which factor(s) influence decisions to reciprocate.

The next section reviews the findings of some recent research on trust and reciprocity decisions. The following two sections present a series of hypotheses suggesting how trustors and trusted parties might be differentially sensitive to the risks and benefits associated with trusting. These hypotheses are then experimentally tested using the "Trust Game," which allows us to independently vary the risks and benefits of trust. The final section suggests implications and limitations, and concludes.

The impact of trust on reciprocity

Berg et al.'s (1995) experimental results showed that trustors expect others to reciprocate even when decisions are anonymous, others are unconstrained, and there is no possibility of future interaction. They also documented that many trusted parties reciprocate under these conditions. Participants in Berg et al.'s (1995) study engaged in a 2-player interaction known as the Investment Game (IG). In this game, both players received an initial endowment of $10. Player 1s made the first decision and had the opportunity to send as much of their $10 endowment to player 2s as they wished. The amount sent was tripled before player 2s received it. Once this money was received, player 2s had the opportunity to send back as much money as they wished to player 1s. All participants knew that the money sent by player 1s would be tripled and that this game would be played only once.
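The payoff structure just described can be sketched as a small function (a minimal illustration of the game's mechanics; the function name and the example amounts are ours, not Berg et al.'s stimuli):

```python
def investment_game(sent, returned, endowment=10, multiplier=3):
    """Payoffs in a one-shot Investment Game of the kind described above.

    Player 1 sends `sent` dollars (0..endowment); the amount is tripled
    in transit. Player 2 then returns `returned` dollars out of what is
    available (own endowment plus the tripled transfer).
    """
    assert 0 <= sent <= endowment
    available_to_p2 = endowment + multiplier * sent
    assert 0 <= returned <= available_to_p2
    player1 = endowment - sent + returned
    player2 = available_to_p2 - returned
    return player1, player2

# If player 2 keeps everything, sending $5 simply costs player 1 $5:
print(investment_game(5, 0))    # (5, 25)
# Reciprocity can leave both parties better off than not sending:
print(investment_game(5, 10))   # (15, 15)
```

The sketch makes the trustor's exposure explicit: every dollar returned is a pure transfer from player 2, which is why any positive return in a one-shot game suggests an obligation to reciprocate rather than monetary self-interest.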

Traditional economic models of behavior suggest that player 2s will act self-interestedly and return no money. Rational player 1s will anticipate this and send no money. However, 94% of player 1s in this study sent money: on average, $5.16 of their $10 endowment. In addition, 80% of player 2s who received money returned some to player 1s: on average, $4.66, significantly less than the amount sent.¹ Notably, any amount returned by player 2s in this experiment amounted to a monetary loss (to player 2s) that was associated with no possibility of an offsetting (monetary) gain, suggesting that an obligation to reciprocate may have prompted player 2s to honor player 1s' trust.

Pillutla et al. (2003) conducted a follow-up study using the same game, in which all participants were player 2s and where player 1s sent amounts that ranged from small amounts to all of their endowments. They found that player 2s' returns increased as the amounts sent by player 1s increased, both in absolute terms and as a percentage of the amount sent, suggesting that the greater the amount sent, the greater the reciprocity. Participants' explanations for their actions suggest that two factors were critical: a desire for equality and feelings of obligation. Whereas small amounts sent by player 1s were often perceived as cheap, or non-trusting, larger amounts generated feelings of obligation and a desire for equality. For example, participants who had been sent larger amounts explained their decision to return money with comments such as "want to repay them and thank them," "want them to share in earnings and be rewarded for trust in me," and "I want to reward their generosity and risk." In addition, analyses revealed that the effect of amount sent (by player 1) on amount returned (by player 2) was mediated by player 2's feelings of "obligation."

¹ In a follow-up study reported in the same paper, participants were given the results of this first study prior to participation in the Trust Game. This resulted in a slight increase in the amounts sent by player 1s (on average, $5.36), and a significant increase in the amounts returned by player 2s (on average, $6.46). Thus, choosing to trust more informed player 2s was, on average, profitable.

The impact of risk and benefit on trust and reciprocity

Berg et al. (1995) found that trusted parties often reciprocate even when it is costly. Pillutla et al. (2003) replicated this finding and also found that reciprocity was more frequent and more sizable when trustors had taken large rather than small risks. These findings suggest that trusted parties may appreciate the risks that trustors take and might feel more obligated to reciprocate when the risk associated with the trusting act is high. However, both of these studies confound the level of risk that trustors took with the amount of benefit their trust provided to trusted parties. In these experiments, whenever trustors took higher risks by sending larger portions of their endowment, they provided greater benefit (more money) to trusted parties as a consequence. Thus, there is no way to know whether trusted parties reciprocated because they appreciated the risks trustors had undertaken, because they felt indebted to the trustors for the benefits they had provided, or both.

While risks to the trustor and benefits to the trusted party are often correlated, there are also many instances in which risk is relatively low and benefit is relatively high, or vice versa. Indeed, the very nature of logrolling in negotiation (Thompson, 2000) is predicated on the insight that negotiators might be able to offer concessions that are of relatively low cost to them, but of great benefit to the other party. Ideally, the negotiator provides a large benefit by taking a relatively small risk, and is compensated for the concession when the other party reciprocates. Alternatively, a poorly crafted concession might entail high risk and provide low benefit to the other party; in this case, a negotiator gives away something of considerable personal value that is not sufficiently appreciated by the other party. (As someone once pointed out, when a romantic relationship ends, one partner is often left exclaiming, "I gave you the best years of my life," never stopping to think how good those years were for the recipient!)

More generally, it is important for people in social and economic interactions to know which aspects of their actions (the risk involved, the benefit provided, both, or neither) will affect the likelihood that the other party will reciprocate. This insight can help improve the quality of decision making and increase the likelihood that a mutually trusting relationship will develop.


Cialdini (1993) suggests that both the level of risk and the level of benefit affect reciprocity, citing how people often reciprocate uninvited, and even unwanted, gifts. As Cialdini notes in his observations of the Hare Krishna Society's methods for inducing donations: "The nature of the reciprocity rule is such that a gift so unwanted that it was thrown away at the first opportunity had nonetheless been effective and exploitable" (p. 31). This implies that trusted parties might reciprocate even in the absence of benefits because the trustor has taken some risk in providing something that might not be reciprocated. Similarly, Regan (1971) found that people often reciprocate more than the amount of benefit others have provided to them. In this study, an uninvited gift (a can of Coke) that cost 10 cents induced an amount of reciprocity from recipients (in the form of purchasing raffle tickets from the gift giver) that, on average, cost 50 cents. Consistent with this, Mauss (1955) argues that reciprocity in gift exchange is sometimes entirely devoid of any attention to the value (to the recipient) of the gift given. Instead, there is an obligation to reciprocate regardless of the benefit provided. In a variety of situations, however, reciprocity is more directly tied to the amount of benefit provided. For example, reciprocity in typical market transactions is often more explicitly calculated: buyers are sensitive to how much value they are receiving for each dollar spent and sellers are sensitive to their profit margins.

This suggests that trusted parties might reciprocate because of both the benefits provided and the risks taken, leading to the following hypotheses:

Hypothesis 1. Trusted parties will be more likely to reciprocate when trustors have provided greater benefits.

Hypothesis 2. Trusted parties will be more likely to reciprocate when trustors have taken large (rather than small) risks.

As discussed earlier, decisions to trust should (ideally) be sensitive to the factors trusted parties consider when they decide whether to reciprocate. If Hypothesis 1 is correct, then to the extent that trustors are motivated to (and can) accurately assess the likelihood of reciprocity, they should be sensitive to how much benefit their act of trust provides to the trusted party. Controlling for the level of risk faced by trustors, the benefit provided to the trusted party should increase their willingness to trust. This suggests:

Hypothesis 3. Potential trustors will be more likely to engage in trusting acts when they can provide more (rather than less) benefit to trusted parties.

Hypothesis 2 suggests that trusted parties should be more likely to reciprocate when trustors have taken large (rather than small) risks. While this might make taking large risks more palatable to potential trustors, trusting acts may still not correlate positively with risk because the increase in the likelihood of reciprocity may not be sufficient. For example, Berg et al. (1995) found that in the absence of social history, the average level of reciprocity was less than the average cost (or risk) of trusting. Similarly, Pillutla et al. (2003) found that taking large risks often did not pay, and that even extremely high risks (despite being associated with higher benefits to trusted parties) yielded only a small average gain to trustors. Thus, controlling for the level of benefit provided, the net effect of large risks on trust decisions may be negative. This suggests:

Hypothesis 4. Potential trustors will be more likely to engage in trusting acts when the risks of trusting are low rather than high.

It is important to point out the distinction being made here between trust (which is a psychological state) and trusting acts (which are behaviors). As defined by Rousseau et al. (1998), trust is a psychological state comprising the willingness to accept vulnerability at the discretion of another party. Trusting acts, by extension, entail accepting vulnerability in the hope or expectation of gain at the discretion of another person (cf. Snijders, 1996). The distinction between trust (the psychological state) and a trusting act (a particular behavior) is meaningful. Large acts of trust are those that entail accepting greater vulnerability, and these acts suggest that a high degree of trust is present. Small acts of trust entail accepting less vulnerability, and such acts require only a low degree of trust (though a lot may be present). As a result, when vulnerability is low (as in Hypothesis 4), the likelihood of engaging in a trusting act is greater, but the magnitude of trust being exhibited is less. In the extreme, of course, when there is no vulnerability, it becomes meaningless to talk about trust or trusting acts.

Taken together, Hypotheses 1–4 state that both parties will be sensitive to both the risks taken by trustors and the benefits provided to trusted parties. However, research on perspective-taking ability and egocentrism questions this symmetry.

The differing perspectives of trustors and trusted parties

Hypotheses 1–4 are based on an implicit assumption that both parties view important aspects of their interaction similarly. For example, the trusted party will understand the trustor's position and be sensitive to the risk the trustor faces. Similarly, a potential trustor will be sensitive to the needs of the trusted party and be more willing to trust when she can provide greater benefit.


Research suggests, however, that parties in any social or strategic interaction view the interaction from their own unique perspectives and that this asymmetry can have implications for their behaviors and outcomes (cf. Jones & Nisbett, 1972; Neale & Bazerman, 1991; Taylor & Brown, 1988). People often have idiosyncratic, self-oriented sets of information, making it difficult to accurately assess others' likely evaluations (Bazerman, 1994). Even without information asymmetry, people are often insensitive to other parties' payoffs and are unable to take their perspective (cf. Neale & Bazerman, 1991; Samuelson & Bazerman, 1985; Snijders, 1996). Across various domains, people maintain unshared "illusions" regarding their own behavior and benevolence (Taylor & Brown, 1988) while they devalue others' contributions and concessions (Stillinger, Epelbaum, Keltner, & Ross, 1990). Similarly, Gilovich et al. (1999) note that egocentrism, or "being at the center of one's world," makes it difficult for people to remove themselves from their own perspective to see others' points of view.

In the current context, this inability to take the perspective of the other party might lead trustors to be sensitive to the risks they face (which directly affect their expected final outcome), but less sensitive to the benefit their trust provides to the trusted party (which might affect their final outcome only indirectly, through its impact on the likelihood of reciprocity). For example, Carroll, Bazerman, and Maury (1988) argue that negotiators tend to simplify their decision-making tasks by focusing on their own information and goals, and systematically ignoring the cognitions of their opponents. Tor and Bazerman (2003) suggest that in a variety of strategic contexts, people pay insufficient attention to the decisions of others, even when these decisions will affect their own outcomes (as is clearly the case here). More specifically, people tend to focus on those aspects of an interaction that most directly influence their outcomes, and pay less attention to those aspects that indirectly influence their outcomes (Idson et al., 2004; Moore & Kim, 2003).

Snijders and Keren (1999) provide evidence of the tendency to ignore such indirect effects in the context of trust decisions. In their study using the Trust Game, they found that trustors were not sensitive to the level of "temptation" to exploit that trusted parties experienced, even though this factor had implications for trustors' own outcomes. In their study, temptation was a function of the difference between what the trusted party received by exploiting vs. what they received by reciprocating. The degree of temptation had an impact on the final outcome of trustors, but only via its effect on the decisions of the trusted party. The results reveal that while temptation influenced trusted parties, it did not influence decisions to trust. In the current context, this suggests that trustors may not be sufficiently sensitive to the benefits their trust provides to trusted parties because this factor affects their own outcome only indirectly.
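The temptation measure, and the indirect route by which it reaches the trustor's outcome, can be sketched as follows (a minimal illustration; the payoff numbers and probabilities are our own assumptions, not values from Snijders and Keren's study):

```python
def temptation(p2_exploit, p2_reciprocate):
    """Temptation in the Snijders and Keren (1999) sense: what the
    trusted party gains by exploiting rather than reciprocating."""
    return p2_exploit - p2_reciprocate

def trustor_expected_value(p_reciprocate, p1_reciprocated, p1_exploited):
    """The trustor's expected payoff from trusting. Temptation enters
    this quantity only indirectly, via its effect on p_reciprocate."""
    return p_reciprocate * p1_reciprocated + (1 - p_reciprocate) * p1_exploited

# Illustrative payoffs: the trusted party earns 25 by exploiting and
# 15 by reciprocating, so temptation is 10.
print(temptation(25, 15))  # 10
# If higher temptation lowers the chance of reciprocity, the trustor's
# expected value from trusting drops, but only through that channel:
print(trustor_expected_value(0.5, 15, 5))   # 10.0
print(trustor_expected_value(0.25, 15, 5))  # 7.5
```

The point of the sketch is that a rational trustor should care about temptation because it moves `p_reciprocate`; the Snijders and Keren result is that actual trustors largely ignored it.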

There is even more reason to suspect that trusted parties will be relatively insensitive to the level of risk that trustors have undertaken. The level of risk that trustors face does not influence trusted parties because trusted parties respond only once they have been trusted. At this stage, it is irrelevant how much risk trustors faced because the final outcomes are now completely up to the discretion of trusted parties.

Taken together, the research on perspective taking, egocentrism, and the tendency to ignore indirect effects and the cognitions of others suggests that trustors will be more sensitive to risk and less to benefits, whereas trusted parties will be more sensitive to benefits and less to risk. More formally:

Hypothesis 5. Trustors will consider the level of risk they face to be more relevant to their decision than the level of benefit their trust provides to trusted parties.

Hypothesis 6. Trusted parties will consider the level of benefit they are provided to be more relevant to their decision than the level of risk faced by trustors.

Hypothesis 7. The level of risk will be more important to trustors than to trusted parties.

Hypothesis 8. The level of benefit will be more important to trusted parties than to trustors.

Thus, Hypotheses 5–8 provide a set of expectations that challenge two of our earlier hypotheses: that trusted parties will be sensitive to the risks faced by trustors (Hypothesis 2) and that trustors will be sensitive to the benefits provided to trusted parties (Hypothesis 3). Experiment 1 tests Hypotheses 1–4 and also provides some initial evidence pertaining to Hypotheses 5–8. Hypotheses 5–8 are more carefully tested in Experiment 2.

Experiment 1

Participants were assigned either the role of trustor

(player 1) or of trusted party (player 2) in a Trust Game

(Gambetta, 1988; Snijders, 1996; Snijders & Keren, 1999).

The Trust Game (TG) is very similar to the Investment

Game (Berg et al., 1995), but forces each player to make

dichotomous choices (player 1 chooses whether to trust, player 2 chooses whether to reciprocate). The TG is

akin to a sequential Prisoners' Dilemma Game. The difference is that in the standard Prisoners' Dilemma Game,

both players choose simultaneously, whereas in the TG,

player 1s make the first decision and player 2s observe

player 1s' choice before making their own decisions.

66 D. Malhotra / Organizational Behavior and Human Decision Processes 94 (2004) 61–73

Specifically, player 1 in the TG chooses to trust (A) or not to trust (B), after which player 2 chooses to reciprocate (X) or to exploit trust (Y). The payoffs in the

game are such that player 1s (trustors) can take a ‘‘safe’’

default outcome by choosing not to trust or they can

take a risk by choosing to trust. If player 1 trusts, then

player 2 can choose to reciprocate (making player 1

better off) or to exploit (making player 1 worse off).

Player 2 only has a decision to make if player 1 has trusted; player 2 always has a monetary incentive to

exploit player 1 when given a chance.

The TG operationalizes the essential elements of

many trust situations (cf., Mayer et al., 1995): player 1

can choose to accept vulnerability (by choosing A) in the

hopes of gain at the discretion of player 2 who can ex-

ploit this vulnerability for personal gain (by choosing Y

rather than X). In particular, when there is no potential for repeated interaction (i.e., in a one-shot game), trusted parties maximize their payoffs by exploiting. The

Trust Game is a particularly appropriate paradigm to

test the current set of hypotheses because it allows for

the easy and independent manipulation of risks (to

trustors) and benefits (to trusted parties). Specifically,

the risk to the trustor (player 1) is high when the trustor's default (safe) outcome is high rather than low: by trusting, player 1 risks that much money. The benefit to

the trusted party (player 2) of having been trusted is

high when the trusted party's default outcome (i.e., the

outcome received if player 1 does not trust) is low rather

than high.

Fig. 1. Extended-form representations of the four Trust Games used in Experiment 1.

Fig. 1 presents graphical representations (i.e., extended-form versions) of the four TGs that were used in

Experiment 1. The different versions of the game ma-

nipulate the risk to player 1s (high vs. low) and the

benefit to player 2s (high vs. low). In each version of the

TG, player 1's decision to trust increases the amount of

money that will be distributed between the two players,

but in doing so it gives player 2 control over how that

money will be distributed. Player 2 can either reward player 1 by choosing to ''reciprocate'' (option X) or

make player 1 worse off for having trusted by choosing

to ‘‘exploit’’ (option Y). It is important to note that the

payoffs to player 2 for reciprocating vs. exploiting are

identical in each of the four games. Thus, player 2 faces exactly the same choice (between two sets of payoffs)

in every version of the game. The only payoffs that are

manipulated are those that each player gets if player 1 chooses not to trust.

In TG-a (see Fig. 1), player 1's default outcome is $11

and player 2's default outcome is $0. Because player 1

has little to gain even if player 2 reciprocates, and player

2 gains considerably from being trusted, this creates a

situation with high risk and high benefit. TG-b also has

high risk, but because player 2's default outcome is $10, the benefit associated with trust is lower than in TG-a. Both TG-c and TG-d involve low risk, because the default outcome for player 1 drops to $5, meaning that the

trusting choice involves little vulnerability, much less

than in TG-a and TG-b. TG-c involves high benefit

for the trusted party; TG-d involves low benefit. It is important to note that regardless of the condition (TG-a, b, c, or d), player 2 faces the same two choices regarding how to distribute the outcome between the two

players, either to reciprocate and distribute the payoffs

more evenly ($13 for player 1 and $11 for player 2) or to

exploit and take the lion's share of the money ($4 for

player 1 and $20 for player 2).
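The payoff structure of the four games can be summarized in a short sketch. This is illustrative only: the text specifies the no-trust defaults for TG-a and TG-b directly, while the player-2 defaults for TG-c and TG-d ($0 and $10) are assumed here to mirror TG-a and TG-b, since the text states only that those games yield high and low benefit.

```python
# Sketch of the four Trust Games (TG-a..TG-d). Trust -> reciprocate pays
# ($13, $11); trust -> exploit pays ($4, $20); only the no-trust defaults
# vary across games. TG-c/TG-d player-2 defaults are assumed, not stated.

RECIPROCATE = (13, 11)  # (player 1 payoff, player 2 payoff)
EXPLOIT = (4, 20)

# game -> (player 1 default, player 2 default) if player 1 does not trust
DEFAULTS = {
    "TG-a": (11, 0),   # high risk, high benefit
    "TG-b": (11, 10),  # high risk, low benefit
    "TG-c": (5, 0),    # low risk, high benefit (assumed $0 default)
    "TG-d": (5, 10),   # low risk, low benefit (assumed $10 default)
}

def risk(game):
    """What player 1 forfeits relative to the default if exploited."""
    d1, _ = DEFAULTS[game]
    return d1 - EXPLOIT[0]

def benefit(game):
    """Player 2's minimum gain from being trusted, relative to the default."""
    _, d2 = DEFAULTS[game]
    return min(RECIPROCATE[1], EXPLOIT[1]) - d2

for g in sorted(DEFAULTS):
    print(g, "risk:", risk(g), "benefit:", benefit(g))
```

Note how the manipulation is orthogonal: changing player 1's default moves only the risk measure, and changing player 2's default moves only the benefit measure, while player 2's actual choice set is identical in all four games.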

Method

Experiment 1 used a 2 (Risk: High vs. Low) × 2

(Benefit: High vs. Low) design. Participants were as-

signed the role of either trustor or trusted party in two

Trust Games with a different counterpart for each game.

The participants maintained the same role (Player 1 or

Player 2) for both games. The order of the games was

randomized and counterbalanced. This means that the participants were randomly assigned to two of the four

games and that the order of the two games was alter-

nated so that no two-game order was used more often

than others. Randomization and counterbalancing

help to ensure that the behavior of participants in

any game was due to the structure of the game and

was not a function of the people who played the game

or the particular game played first. Subsequent analysis revealed no order effects—that is, behavior in any

game did not depend on whether it was played first or

second, or whether a particular game had been played

prior to it.

A double-blind procedure ensured that all decisions

were anonymous (i.e., neither other participants nor the

experimenter knew which person made any particular

decision). Participants were told that their decisions were anonymous and that they would be interacting

with a different counterpart in each of the two games.

Participants were not told of the results of their first

interaction prior to engaging in their second interaction.

However, due to the structure of the game, player 1s

always knew the result of an interaction if they chose not

to trust (because that ended the game), and player 2s

always knew the result of an interaction as soon as they made their choice.

For each interaction, participants were provided a

sheet of written general instructions that was common

to all conditions. The general instructions informed

participants (who were MBA students) that they would

be interacting with another student from a different

section of the same course from which they were re-

cruited. They were told that they would be assigned the role of ''player 1'' or ''player 2'' for two different in-

teractions with two different ‘‘counterparts.’’ Partici-

pants were told that their decisions would be

anonymous, and were instructed to choose a 4-digit code number to use for the exercise. Finally, the instructions informed participants that they

would be paid the following week as a function of their

decision and that of their counterpart. Following the general instructions, participants were presented with

specific instructions for each of their two interactions

(one at a time). The specific instructions explained the

rules of the interaction and the payoff structure for the

Trust Game.

The specific instructions explained the rules of the

TG and contained a graphical representation (i.e., an extensive-form version) of the game (a, b, c, or d of Fig. 1), which was also explained in words. Thus, participants knew how and when they would make their

decision, and what payoffs might result given any set of

decisions made by the players. Providing both a graphic

and a verbal explanation of the rules and procedures,

and of the payoff structure, ensured to the extent possible that all participants understood the game and that the payoff structure was completely transparent. In addition, after the participants had read both the general and the specific instructions, they were given an

opportunity to ask any clarification questions regarding

their task, and the experimenter answered these ques-

tions promptly.

Participants who were assigned the role of

player 1 were asked to place their 4-digit code at the top

of the page. This was used to make anonymous payments later. At the bottom of the sheet, player 1s recorded their decision (A or B). Player 1s were told that

this sheet would later be presented to another MBA

student who had been assigned the role of player 2.

Because player 2s only have a decision to make when

they are trusted, all player 2s were always provided

sheets that showed player 1 had chosen A (i.e., to trust).

The sheets provided to player 2s were identical to those that had been presented to player 1s, except that the experimenter recorded player 1's response at the bottom and player 1's 4-digit code at the top. Player 2s recorded their own 4-digit code and then made their decision (X

or Y).

After both interactions were completed, participants

filled out a post-experimental questionnaire that was

designed to assess how they perceived the decisions facing each player. The list of items that participants

responded to in the questionnaire was derived from the

questionnaire used by Pillutla et al. (2003). Participants

were informed that at the end of the experiment one of

their two interactions would be randomly chosen to

determine their actual monetary payoffs. Participants

were paid and fully debriefed one week after their in-

volvement in the experiment.

Participants

Sixty-three MBA students from a midwestern university participated in Experiment 1. Participants were

students enrolled in one of two sections of an elective

course.

2 These scales differed only slightly from the items used by Pillutla

et al. (2003); the minor difference was due to the fact that Pillutla et al.

(2003) only studied the behavior of player 2s.


Analysis

The independent variables were risk (high vs. low)

and benefit (high vs. low). The dependent measures were

the percentage of player 1s who chose to trust and the

percentage of player 2s who chose to reciprocate. Lo-

gistic regression was used to test for the effects of risk

and benefits on participant decisions.

Results

Consistent with Hypothesis 1, trusted parties recip-

rocated more when benefit was high rather than low:

47% vs. 8% (F(1, 47) = 4.10, p < .05). Although player

2s' decisions always entailed choosing between the same

two outcomes, they behaved differently depending on

the choice that player 1 had earlier faced. When player 2's default outcome from no-trust was low (i.e., their benefits from being trusted were high), they

were significantly more likely to reciprocate trusting

actions.

Hypothesis 2 predicted that trusted parties would be

more likely to reciprocate when trustors had faced high

rather than low risk. Although trusted parties recipro-

cated more often when the risk was high rather than low, this difference was not significant, failing to support Hypothesis 2 (F(1, 56) = 2.25, p > .15). Notably, these

results (pertaining to Hypotheses 1 and 2) are consistent

with the logic of Hypotheses 6 and 8, which suggested

that trusted parties would be more influenced by the

amount of benefit they had received than by the amount

of risk trustors had faced.

Hypothesis 3 predicted that trustors would be more likely to trust when their decision to trust provided

more benefit for trusted parties. The results of Hy-

pothesis 1 suggest that this would be rational, con-

sidering that trusted parties in this experiment

reciprocated more when their benefits were high. Al-

though trusting actions were slightly more frequent

when benefits were high rather than low, this difference

was not significant, failing to support Hypothesis 3 (F(1, 51) = 0.36, ns).

Finally, consistent with Hypothesis 4, player 1s

trusted significantly more when risk was low rather than

high: 53% trusted when risk was low; 17% trusted when

risk was high (F(1, 56) = 7.80, p < .01). These results

(pertaining to Hypotheses 3 and 4) are consistent with

the logic of Hypotheses 5 and 7, which suggested that

trustors would be more influenced by the amount of risk they faced than by the benefit they could provide to

trusted parties.
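The size of the two significant effects can be put on a common scale by converting the reported rates into odds ratios. The sketch below is illustrative arithmetic only; the underlying cell counts were not reported, so no significance test is recomputed.

```python
# Converting the reported rates into odds ratios. Rates are the
# percentages reported above (47% vs. 8% reciprocation; 53% vs. 17%
# trust); cell counts are unknown, so this is descriptive only.

def odds(p):
    return p / (1 - p)

# Reciprocation by trusted parties (Hypothesis 1): 47% vs. 8%
or_benefit = odds(0.47) / odds(0.08)

# Trust by trustors (Hypothesis 4): 53% (low risk) vs. 17% (high risk)
or_risk = odds(0.53) / odds(0.17)

print(f"benefit effect on reciprocity, OR ~ {or_benefit:.1f}")
print(f"risk effect on trust, OR ~ {or_risk:.1f}")
```

Both effects are large on the odds scale, consistent with each party attending strongly to the single factor most relevant to its own position.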

The support for Hypotheses 1 and 4, and the lack of

support for Hypotheses 2 and 3, suggests that trustors

may be more sensitive to risk than to benefits, and

trusted parties may be more sensitive to benefits than to

risk, and neither appears to be significantly influenced

by the factor that more directly affects the other party. While the lack of support for Hypotheses 2 and 3 is

suggestive of a tendency to ignore the cognitions of

others and to be insensitive to indirect effects, a more

careful test of this thesis (captured by Hypotheses 5–8) is

implemented in Experiment 2.

Post-experimental responses

After their decisions, all participants were shown the

high-risk/high-benefit version of the TG and asked the

following question regarding each player's decision:

''Consider player 1's (player 2's) decision. What factors do you think are important to player 1 (player 2)?''

Participants then responded to a series of 7-point, Lik-

ert-type scales from 1, ‘‘Not at all important,’’ to 7,

''Extremely important.'' Derived from the scales used by Pillutla et al. (2003), six items measured participants' perception regarding whether a decision was a matter of

‘‘being nice’’ (fairness, generosity, trust, benevolence,

cooperation, and obligation; coefficient α = .85), two

items measured whether a decision was a matter of

‘‘being smart’’ (intelligence, rationality; coefficient

α = .79), and one item measured whether a decision was

a matter of ''taking risks'' (risk).2 Each participant responded to both the player 1 question and the player 2

question. The order of these questions was counterbal-

anced, with half of the participants responding to the

player 1 question first, and half responding to the player

2 question first.
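The scale reliabilities reported above (α = .85 and α = .79) are coefficient (Cronbach's) alpha values. A minimal sketch of that computation follows; the response matrix is hypothetical, and only the formula matches the questionnaire analysis.

```python
# Coefficient (Cronbach's) alpha: k/(k-1) * (1 - sum(item variances) /
# variance of total scores). Population variances are used throughout;
# the ratio is unchanged if sample variances are used consistently.

def cronbach_alpha(items):
    """items: list of per-item response lists (one list per scale item)."""
    k = len(items)
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(col) for col in zip(*items)]  # each respondent's total
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 7-point ratings from four respondents on three items:
ratings = [
    [4, 5, 6, 7],
    [3, 5, 6, 7],
    [4, 4, 6, 6],
]
print(round(cronbach_alpha(ratings), 2))
```

When items move together across respondents, as in this toy matrix, alpha approaches 1; values like .85 and .79 indicate the ''being nice'' and ''being smart'' items hang together reasonably well.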

Participant responses suggest that player 1s and

player 2s viewed their interactions differently. Those

who had been assigned the role of player 1 considered their own decision to trust as an issue of ''being smart''

somewhat more than did player 2s (t(59) = 9.42, p < .07). In contrast, player 2s considered the decision

of player 1 to be significantly more an issue of ‘‘being

nice'' than did player 1s (t(59) = 11.00, p < .001). Thus, trustors seemed to be more sensitive to whether their

decision was rational (e.g., how vulnerable am I?), while

trusted parties were more sensitive to how the trustors had treated them (e.g., was the trustor generous?). This

is consistent with the behavioral data, and with the logic

of Hypotheses 5–8: player 1s were sensitive to the risks

they faced and to the need for making smart decisions,

whereas player 2s were sensitive to the benefits they had

received.

Player 1s and player 2s did not differ in their evaluation of player 1's decision on the factor of ''risk.'' Also, there was no difference in how player 1s and player 2s perceived player 2's decision. Finally, none of the responses differed between respondents who had initially participated in the high-risk/high-benefit version of the TG and those who had participated in a different version.

Fig. 2. Extended-form representation of the Trust Game used in Experiment 2.

Discussion

The results of Experiment 1 indicate that trustors and

trusted parties focus on different aspects of the trusting

interaction. Trustors were more sensitive to the risks they faced and less sensitive to the benefits their trust

provided to trusted parties. In contrast, trusted parties

were sensitive to the benefits they had been given but

were relatively insensitive to trustors' risks.

The post-experimental questionnaire provided further insights into the dynamics of trusting interactions.

Consistent with the behavioral data and the logic of

Hypotheses 5–8, trustors and trusted parties reported that they viewed their interaction from different perspectives.

Trustors saw their decision as an issue of being smart

and making rational decisions. Trusted parties saw the

decision to trust as an issue of benevolence and fairness.

The design of the first experiment, however, poses

some limitations regarding how confident we can be in

stating that trustors are less sensitive to benefits and

trusted parties are less sensitive to risk. Experiment 2 was designed to more carefully test Hypotheses 5–8,

which suggest that trustors are more sensitive to risk

than they are to benefits, that trusted parties are more

sensitive to benefits than they are to risk, that risk is

more important to trustors than to trusted parties, and

that benefits are more important to trusted parties than

to trustors.

3 Whereas Study 1 used MBA students, Study 2 used undergrad-

uate students from the same university. Malhotra and Murnighan

(2003) also used these two samples in their research on trust decisions

and found no significant differences between the two groups in any of

their analyses.

Experiment 2

Method

Participants were assigned the role of player 1 or

player 2 and then presented a version of the Trust

Game in which the default outcomes for player 1 and player 2 were not provided. Instead, player 1's default

outcome was listed as $R and player 2's default out-

come was listed as $B. The other payoffs in the TG

were identical to those in Experiment 1. As in Exper-

iment 1, participants were provided a graphical repre-

sentation of the game (see Fig. 2), which was also

explained in words. Unlike the first experiment, Ex-

periment 2 did not require participants to make actual choices. Instead, participants were asked to imagine

that they would be playing the role of player 1 (or

player 2) in the TG and to then answer a number of

questions regarding how the values of R and B would

affect their own decision and that of the other player in

the TG. As in Experiment 1, a change in R affects the

level of risk faced by the trustor (player 1) and a

change in B affects the level of benefit that trust pro-

vides to the trusted party (player 2).

Participants

Forty undergraduate students from a midwestern university participated in Experiment 2.3 Thirty-five percent (N = 14) of the participants were female.

Twenty students were randomly assigned the role of

player 1; 20 were assigned the role of player 2.

Analysis

Comparing the responses of player 1s and player 2s

on the following three questions tested Hypotheses 5–8:

(1) Knowing the value of which number (R or B) is

more relevant to your decision in this interaction?

(Tests Hypotheses 5 and 6.)

(2) On a scale of 1–7, how important is the value of R to

your decision? (Tests Hypothesis 7.)

(3) On a scale of 1–7, how important is the value of B to

your decision? (Tests Hypothesis 8.)

Two additional questions were asked:

(4) On a scale of 1–7, how important do you think the

value of B is to player 2? (This was asked to player

1s.)

(5) On a scale of 1–7, how important do you think the

value of R is to player 1? (This was asked to player

2s.)

Questions 4 and 5 were asked in order to better understand why trustors and trusted parties

might have behaved the way they did in Experiment 1.

The results of Experiment 1 suggest that trustors are


sensitive only to risk and trusted parties are sensitive only to benefits. Presumably, this is due to the fact that

trustors underestimate the extent to which benefits will

influence trusted parties and trusted parties perhaps

underestimate the extent to which trustors are affected

by risk. A comparison between player 1s' response to

question 4 (how important is the value of B to player 2)

and player 2s' response to question 3 (how important is

the value of B to you) tests whether trustors accurately predict how sensitive player 2s will be to the amount of

benefit received. Similarly, a comparison between player

2s' response to question 5 (how important is the value of

R to player 1) and player 1s' response to question 2 (how

important is the value of R to you) tests whether trusted

parties accurately predict how sensitive player 1s will be

to the amount of risk they face.

Logistic regression was used to test Hypotheses 5 and 6. Analysis of variance (ANOVA) was used to test Hypotheses 7 and 8. Finally, t tests were used to compare

the responses related to questions 4 and 5.

Results

Consistent with Hypotheses 5 and 6, those who were

assigned the role of trustors (player 1s) and those who were assigned the role of trusted parties (player 2s) differed significantly with regard to which factor (R or B)

they considered more relevant to their decision. Specif-

ically, 90% of trustors stated that the value of R was

more relevant to their decision while only 10% stated

that B was more relevant; in contrast, only 15% of

trusted parties stated that R was more relevant to their

decision while 85% stated that B was more relevant (F(1, 40) = 16.31, p < .001).

Hypotheses 7 and 8 were also strongly supported.

Trustors rated the importance of R significantly more

highly than did trusted parties (6.30 vs. 3.85; F(1, 39) = 22.20, p < .001). Meanwhile, trusted parties

rated the importance of B significantly more highly than

did trustors (5.40 vs. 3.10; F(1, 39) = 14.08, p < .001). Thus, not only was R more relevant than B to trustors (Hypothesis 5) and B more relevant than R to trusted

parties (Hypothesis 6), but also trustors considered R

more important than did trusted parties (Hypothesis 7)

and trusted parties considered B more important than

did trustors (Hypothesis 8).

Finally, trustors underestimated the degree to which B (benefit) is important to trusted parties. Trustors thought that the value of B would be less important to trusted parties (4.0) than trusted parties stated it would be (5.4) (t(38) = 2.00, p < .055). Meanwhile, trusted parties were stunningly accurate in

predicting the importance of R (risk) to trustors. Trus-

ted parties predicted that R would be extremely im-

portant to trustors (6.25), just as trustors themselves

claimed (6.30) (t(38) = 0.14, ns).
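Comparisons like ''4.0 vs. 5.4, t(38) = 2.00'' are two-sample t tests on the importance ratings. A rough sketch of the pooled-variance version of that statistic follows; the samples below are hypothetical, since the experiment's raw ratings were not reported.

```python
# Pooled-variance two-sample t statistic, as used for comparing the
# mean importance ratings of player 1s and player 2s. The input lists
# here are made-up stand-ins for the unreported raw ratings.

from math import sqrt

def two_sample_t(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)          # sum of squares, group a
    ssb = sum((x - mb) ** 2 for x in b)          # sum of squares, group b
    pooled = (ssa + ssb) / (na + nb - 2)         # pooled variance
    se = sqrt(pooled * (1 / na + 1 / nb))        # SE of the mean difference
    return (ma - mb) / se, na + nb - 2           # (t, degrees of freedom)

t, df = two_sample_t([1, 2, 3], [4, 5, 6])
print(round(t, 2), df)
```

With 20 participants per role, two independent means yield df = 20 + 20 − 2 = 38, matching the t(38) values reported above.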

Discussion

The results of Experiment 2 suggest that trustors and

trusted parties may be differentially sensitive to the risks

and benefits involved in trusting interactions. Not only

was risk more important than benefits to trustors, but

also trustors cared more about risk than did trusted

parties. Similarly, benefit was more important than risk

to trusted parties, and trusted parties cared more about benefits than did trustors. These findings are entirely

consistent with the behavioral results of Experiment 1,

but are critical in that they allow for a statistical test of

the proposition that trustors are less sensitive to benefits

than are trusted parties and trusted parties are less

sensitive to risk than are trustors.

Conclusions

This paper focused on decisions to trust and to re-

ciprocate and demonstrated that trustors and trusted

parties are differentially sensitive to the risks and bene-

fits involved in trust interactions. The results of Exper-

iment 1 suggest that those who are in a position to trust

focus primarily on the risks involved in trusting rather than on how much benefit their trust might provide to

the other party. Thus, decisions to trust were more likely

when risks were low. Meanwhile, trusted parties are

relatively insensitive to the trustor's risks and reciprocate more on the basis of the benefits the trustor has

provided. Reciprocity was more likely when the benefits

provided were high. Furthermore, the results suggest

that trustors consider the decision to trust to be more a matter of ''being smart'' and less a matter of ''being

nice’’ than do trusted parties, providing further evidence

that trustors and trusted parties view the trust interac-

tion from different perspectives.

The results of Experiment 2 bolster these initial re-

sults and demonstrate that trustors care more about risk

than do trusted parties, and trusted parties care more

about benefits than do trustors. In addition, the results of Experiment 2 suggest that trustors underestimate the

degree to which trusted parties are influenced by the

level of benefits they are being provided. In contrast,

trusted parties accurately predict how important risk is

to trustors. It is not clear why this difference between

trustors and trusted parties exists, and this asymmetry

needs to be explored further. One possibility is that the

two parties are actually answering different types of questions. Trustors, who make the first decision, might

interpret what they have been asked as ‘‘on what basis

might the other party behave?’’ In contrast, trusted

parties, who face a choice only after they have been

trusted, might interpret what they have been asked as

‘‘on what basis did the other party behave?’’ Thus,

trustors might have the more difficult task: they need to


imagine that the other party will be in a position to respond, and that the other party might choose to reciprocate, and then try to determine what factors will be

relevant to that decision. Meanwhile, trusted parties

must do only the last of these. Nonetheless, while

plausible, this explanation is merely speculative at this

stage.

The decision-making by trustors in this study was

clearly sub-optimal. While being more willing to trust when risk is low (rather than high) makes sense, being

insensitive to the benefits that trusting provides for

others is self-defeating for trustors. This was certainly

true in Experiment 1, because decisions to reciprocate

were significantly affected by the level of benefit pro-

vided. More generally, potential trustors might forego

relatively safe opportunities to trust when benefits are

high or choose to trust (unwisely) when reciprocity is unlikely (because the benefit provided is low). For example, consider an executive, a salesperson, or a negotiator who does not rigorously analyze the potential

benefits to the counterpart of a set of possible actions.

They might choose a low-cost act that provides little

benefit rather than an act that is only slightly more

costly but provides immense benefit. The latter act

would probably be considerably more likely to elicit reciprocity and be more likely to result in efficient

trades. Consistent with this logic, studies suggest that

negotiations among people who are high in perspective

taking ability (and thus more attuned to the factors

that are important to the other party) tend to result in

higher joint gains than do those negotiations among

people who are lower in perspective taking ability

(Batson, 1991; Kohlberg, 1976). Thus, trustors clearly stand to benefit from being sensitive to both risks and

benefits.

It is less clear, in a one-shot interaction, whether

trusted parties have any reason to be sensitive to the

risks faced by trustors. Whereas the level of benefit

(indirectly) affected the trustor's final outcome in Experiment 1, the level of risk had no effect at all on the trusted party's outcome once the trustor had made the decision to trust. Thus, while it is important to note that

benefits affect reciprocity and risk does not, there is no

reason to question the rationality of trusted parties in

Experiment 1. An interesting empirical question, how-

ever, is whether trusted parties would continue to be

insensitive to risk in a repeated (i.e., multiple round)

interaction. If trustors expect more reciprocity when

they have taken large rather than small risks, it may be important for trusted parties to be sensitive to the level

of risk in repeated interactions. The current results do

not speak directly to this issue, but are suggestive:

trusted parties in Experiment 2 accurately assessed the

degree to which risk was important to trustors, sug-

gesting that they may indeed be sufficiently sensitive to

risks when it is required.

The current findings also have broader implications regarding behavior in organizations. An interesting implication relates to the transparency and efficacy of incentive systems in organizations. Consider the

possibility that managers—those who determine the size

of bonuses and other organizational rewards—may be

more attuned to employee performance (i.e., benefits

received) than to effort (i.e., risks undertaken). To the

extent that employees might underestimate how much their actual performance (rather than effort) affects the

manager's decisions, there is likely to be a discrepancy

between what the employee expects to receive and what

the manager decides to give.

Teams and workgroups in organizations might also

be affected by the dynamics suggested in this paper. The

costs incurred by an individual who contributes to a

group project often do not translate directly into benefits for the group. Furthermore, the costs incurred are

often unknown to others, whereas the benefits to the

team are publicly visible. It is not surprising, then, that

most individuals tend to believe that they have con-

tributed more than others to the group (Burger &

Rodman, 1983; Leary & Forsyth, 1987; Miller &

Schlenker, 1985): this may be due in part to each person

judging others on the benefits they have provided, while they judge themselves based on the costs and risks they

have incurred.

There are, nonetheless, contexts in which individuals

seem to be able to overcome their perspective-taking

limitations. For example, in long-term relationships,

people tend to stop calculating the risks and benefits

that each party has incurred in any one exchange and

instead adopt a more informal norm of providing ''what the other needs, when it is needed'' (Cialdini, 1993;

Clark, Mills, & Corcoran, 1989), suggesting a shift to-

wards being sensitive to benefits provided. Not only is

there a greater emphasis on the other party's needs in

such interactions, but also the risk of exploitation is

lowered when both parties perceive the interaction as

situated in a communal rather than exchange relation-

ship (Clark, Mills, & Powell, 1986).

Negotiation is another domain in which some people

are able to overcome their perspective-taking limita-

tions. Expert negotiators seem able to craft agreements

that provide high benefits (and entail sufficiently low

risk) to the other party with the realistic expectation that

this will lead to reciprocity and high benefits in return

(cf., Thompson, 1990). Novice negotiators may be able

to achieve the same result through effective communication: negotiators may be able to procure high benefits

if they communicate what is important to them, and also

their intent to reciprocate in kind. Furthermore, com-

municating and making salient the costs and risks one

has incurred might also increase the likelihood of

reciprocity. For example, Malhotra (2004) suggests

that labeled concessions may be more likely to induce


reciprocity because such concessions are harder to ignore for negotiation counterparts who may otherwise

be tempted to discount the contributions and conces-

sions made to them (cf., Ross & Stillinger, 1991).

The results of this study also have broader implica-

tions for our understanding of the norm of reciprocity.

While earlier research has documented the pervasiveness

of this norm across human (and non-human; de Waal,

societies (Gouldner, 1960), the question of when the norm is triggered has received scant attention (Pillutla et al., 2003). The results of the current experiments

suggest that reciprocity is more sensitive to the benefits

provided to the potential reciprocator and that the cost,

investment, or risk faced by the trustor may be less likely

to trigger reciprocity. Thus, whether people feel obligated to reciprocate may be more a function of being indebted by benefits received, and less a function of being appreciative of others' risks. This is an important clarification

of earlier findings (Berg et al., 1995; Pillutla et al., 2003),

which have confounded the levels of risk and benefit.

The current study focused specifically on the risks

and benefits of trusting and demonstrated the differing

perspectives of trustors and trusted parties. Further research on such differences in perspective may be of critical importance to a better understanding of the dynamics of trust and reciprocity decisions. For example, Pillutla et al. (2003) suggest that while trustors

might focus on how much they are trusting, trusted

parties seem to focus on how much trustors could have

trusted. An appreciation for the existence and impact of

might lead to more efficient outcomes and also to a more understanding and empathetic view of the problems people face in the development and maintenance of trust.

References

Andreoni, J. (1995). Cooperation in public goods experiments: Kindness or confusion? American Economic Review, 85(4), 891–904.

Bazerman, M. H. (1994). Judgment in managerial decision making (3rd ed.). New York: John Wiley & Sons.

Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, reciprocity, and social history. Games and Economic Behavior, 10, 122–142.

Burger, J. M., & Rodman, J. L. (1983). Attributions of responsibility for group tasks: The egocentric bias and the actor–observer difference. Journal of Personality and Social Psychology, 45, 1232–1242.

Carroll, J. S., Bazerman, M. H., & Maury, R. (1988). Negotiator cognitions: A descriptive approach to negotiators' understanding of their opponents. Organizational Behavior and Human Decision Processes, 41, 352–370.

Cialdini, R. B. (1993). Influence: Science and practice (3rd ed.). New York: HarperCollins College Publishers.

Clark, M. S., Mills, J. R., & Corcoran, D. M. (1989). Keeping track of needs and inputs of friends and strangers. Personality and Social Psychology Bulletin, 37, 12–24.

Clark, M. S., Mills, J. R., & Powell, M. C. (1986). Keeping track of needs in communal and exchange relationships. Journal of Personality and Social Psychology, 51(2), 333–338.

Deutsch, M. (1958). Trust and suspicion. Journal of Conflict Resolution, 2, 265–279.

de Waal, F. B. M. (1991). The social nature of primates. In M. A. Novak & A. J. Petto (Eds.), Through the looking glass: Issues of psychological well-being in captive non-human primates (pp. 69–77). Washington, DC: American Psychological Association.

Gambetta, D. (1988). Can we trust? In D. Gambetta (Ed.), Trust: Making and breaking cooperative relationships (pp. 213–237). Cambridge: Blackwell.

Gilovich, T., Kruger, J., & Savitsky, K. (1999). Everyday egocentrism and everyday interpersonal problems. In R. M. Kowalski & M. R. Leary (Eds.), The social psychology of emotional and behavioral problems: Interfaces of social and clinical psychology (pp. 69–95). Washington, DC: American Psychological Association.

Gneezy, U., Güth, W., & Verboven, F. (2000). Presents or investments? An experimental analysis. Journal of Economic Psychology, 21(5), 481–493.

Gouldner, A. W. (1960). The norm of reciprocity: A preliminary statement. American Sociological Review, 25, 161–178.

Idson, L. C., Chugh, D., Bereby-Meyer, Y., Moran, S., Grosskopf, B., & Bazerman, M. H. (2004). Overcoming focusing failures in competitive environments. Journal of Behavioral Decision Making (in press).

Johnson-George, C., & Swap, W. (1982). Measurement of specific interpersonal trust: Construction and validation of a scale to assess trust in a specific other. Journal of Personality and Social Psychology, 43, 1306–1317.

Jones, E. E., & Nisbett, R. E. (1972). The actor and the observer: Divergent perceptions of the causes of behavior. In E. E. Jones et al. (Eds.), Attribution: Perceiving the causes of behavior. Morristown, NJ: General Learning Press.

Leary, M. R., & Forsyth, D. R. (1987). Attributions of responsibility for collective endeavors. In C. Hendrick (Ed.), Review of personality and social psychology (Vol. 8, pp. 167–188). Newbury Park, CA: Sage.

Lewis, J. D., & Weigert, A. (1985). Trust as a social reality. Social Forces, 63, 967–985.

Lindskold, S. (1978). Trust development, the GRIT proposal, and the effects of conciliatory acts on conflict and cooperation. Psychological Bulletin, 85, 772–793.

Luhmann, N. (1988). Familiarity, confidence, trust: Problems and alternatives. In D. Gambetta (Ed.), Trust: Making and breaking cooperative relations (pp. 94–108). Cambridge, MA: Oxford University Press.

Malhotra, D. (2004). Risky business: Trust in negotiation. Negotiation.

Malhotra, D., & Murnighan, J. K. (2003). The effects of contracts on interpersonal trust. Administrative Science Quarterly, 47, 534–559.

Mauss, M. (1955). The gift. London: Cohen and West.

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20, 709–734.

Miller, R. S., & Schlenker, B. R. (1985). Egotism in group members: Public and private attributions of responsibility for group performance. Social Psychology Quarterly, 48, 85–89.

Moore, D. A., & Kim, T. G. (2003). Myopic social prediction and the solo comparison effect. Journal of Personality and Social Psychology, 85(6), 1121–1135.

Neale, M. A., & Bazerman, M. H. (1991). Cognition and rationality in negotiation. New York: The Free Press.

Ortmann, A., Fitzgerald, J., & Boeing, C. (2000). Trust, reciprocity, and social history: A re-examination. Experimental Economics, 3(1), 81–100.

Osgood, C. (1962). An alternative to war or surrender. Urbana, IL: University of Illinois Press.

Pillutla, M., Malhotra, D., & Murnighan, J. K. (2003). Attributions of trust and the calculus of reciprocity. Journal of Experimental Social Psychology, 39, 448–455.

Pruitt, D. G., & Kimmel, M. J. (1977). Twenty years of experimental gaming: Critique, synthesis, and suggestions for the future. Annual Review of Psychology, 28, 363–392.

Reagan, R. T. (1971). Effects of a favor and liking on compliance. Journal of Experimental Social Psychology, 7, 627–639.

Ross, L., & Stillinger, C. (1991). Barriers to conflict resolution. Negotiation Journal, 7(4), 389–404.

Rotter, J. B. (1967). A new scale for the measurement of interpersonal trust. Journal of Personality, 35, 651–655.

Rousseau, D., Sitkin, S., Burt, R., & Camerer, C. (1998). Not so different after all: A cross-discipline view of trust. Academy of Management Review, 23, 393–404.

Samuelson, W. F., & Bazerman, M. H. (1985). The winner's curse in bilateral negotiations. In V. Smith (Ed.), Research in experimental economics (Vol. 3, pp. 105–137). Greenwich, CT: JAI Press.

Shapiro, D., Sheppard, B. H., & Cheraskin, L. (1992). Business on a handshake. Negotiation Journal, 8, 365–377.

Snijders, C. (1996). Trust and commitments. Interuniversity Center for Social Science Theory and Methodology.

Snijders, C., & Keren, G. (1999). Determinants of trust. In D. V. Budescu & I. Erev (Eds.), Games and human behavior: Essays in honor of Amnon Rapoport (pp. 355–385). Mahwah, NJ: Erlbaum.

Stillinger, C., Epelbaum, M., Keltner, D., & Ross, L. (1990). The 'reactive devaluation' barrier to conflict resolution. Palo Alto, CA: Stanford University.

Strickland, L. H. (1958). Surveillance and trust. Journal of Personality, 26, 200–215.

Taylor, S. E., & Brown, J. D. (1988). Illusions and well-being: A social psychological perspective on mental health. Psychological Bulletin, 103, 193–210.

Thompson, L. (1990). An examination of naïve and experienced negotiators. Journal of Personality and Social Psychology, 59, 82–90.

Tor, A., & Bazerman, M. H. (2003). Focusing failures in competitive environments: Explaining decision errors in the Monty Hall game, the Acquiring a Company game, and multiparty ultimatums. Journal of Behavioral Decision Making, 16, 353–374.