

Anticipatory Feelings and Attitudes to Information∗

Kfir Eliaz† and Ran Spiegler‡

October 1, 2002

Abstract

The well-being of agents is often directly affected by their beliefs, in the form of an-

ticipatory feelings such as anxiety. Economists have tried to incorporate such effects by

introducing beliefs as arguments in decision makers’ VNM utility function. Intuition sug-

gests that anticipatory feelings lead to anomalous attitudes to information. We explore

this intuition by studying the class of attitudes to information that is consistent with

the extended expected utility model. Our findings suggest that the anomalous behavior

associated with anticipatory feelings cannot be captured by expected utility from beliefs.

∗We thank Oliver Board, Andrew Caplin, Eddie Dekel, Tzachi Gilboa, Barton Lipman, Efe Ok and Ariel Rubinstein for helpful comments.

†Dept. of Economics, NYU. 269 Mercer St., New York, NY 10003. E-mail: [email protected]. URL: http://homepages.nyu.edu/~ke7/

‡School of Economics, Tel Aviv University, Tel Aviv 69978, Israel. E-mail: [email protected]. URL: http://www.tau.ac.il/~rani.


1 Introduction

Standard expected utility theory assumes that the decision maker (DM) has preferences over

‘physical’ consequences. The description of a consequence consists of a state of nature and an

action taken by the DM. The DM’s beliefs regarding the state of nature clearly affect his decisions,

but they are not part of the definition of a consequence - that is, they do not enter as arguments

into the DM’s VNM utility function.

In reality, economic agents’ sense of well-being often seems to be directly affected by their

beliefs. Imagine a patient expecting the results of a medical test. Even if this information cannot

lead to any change in his actions (say, because the disease he may be suffering from is incurable),

it causes him to update his beliefs and this may affect his well-being in the form of greater or

lesser anxiety.

For this reason, there has been a growing sentiment among economists that the standard

model of choice under uncertainty should be enriched by adding beliefs to the description of

consequences, in order to capture anticipatory feelings such as anxiety or hopefulness. Quoting

Akerlof and Dickens (1982):

“...persons not only have preferences over states of the world, but also over their

beliefs about the state of the world...persons have some control over their beliefs; not

only are people able to exercise some choice about belief given available information,

they can also manipulate their own beliefs by selecting sources of information likely

to confirm ‘desired’ beliefs.”

This paper studies the link between agents’ ‘preferences over beliefs’ and their observed

‘selection of sources of information’. That such a link exists is highly intuitive. In many cases,


we try to explain anomalous attitudes to information by attributing anticipatory feelings to

the decision maker. For instance, psychologists and medical experts argue that patients often

deliberately avoid medical tests because they are afraid of ‘what they may find out’ (Lyter et

al. (1987), Lerman et al. (1998), Cohen (2002)). The question is to what extent this intuition

can be accommodated into the enriched expected utility model, in which the agent’s belief enters

as an argument in his VNM utility function.

To study this question, we construct a simple model with two states of nature, ω1 and ω2.

The DM obtains information about the state by observing a binary signal q = (q1, q2), where

qi is the probability that the signal will be accurate in state ωi. For each prior p ∈ (0, 1), the

DM is assumed to have a complete preference ordering ≽p over the set of all binary signals.

The preference profile (≽p)p∈(0,1) constitutes his ‘attitude to information’. This model allows

attitudes to information to be prior-dependent. The following are a few real-life examples of

such attitudes:

1. A patient prefers more accurate medical information when he is relatively certain of being

healthy, yet he avoids such tests when he is relatively certain of being ill.

2. An otherwise keen news reader decides to skip the “hard news” when he has pessimistic

expectations about the political situation.

3. A manager consults advisors when he is relatively sure of what they will advise, yet avoids

such advice when he is relatively unsure.

These examples appear to be driven by the agents’ fear of what they will believe as a con-

sequence of observing the signal. The patient and the news reader are afraid of learning that


the state is adverse, while the manager is afraid that his consultants’ advice will contradict his

current belief. The examples are anomalous, in the sense that a standard expected utility max-

imizer (w.r.t physical outcomes) would never display the attitudes they describe. However, it

would be desirable to incorporate them into the extended choice model.

A prior p and a signal q induce a probability distribution σ(p, q) over the DM’s posterior belief,

via Bayes’ rule. An attitude to information is said to be consistent with utility maximization

over beliefs if there is a continuous VNM utility function u : [0, 1]→ R over posteriors, such that

the DM evaluates signals by the expected utility from σ(p, q). E.g., at the prior p, he prefers the

fully informative signal q1 = q2 = 1 to the fully uninformative signal q1 = q2 = 1/2 if:

p · u(1) + (1− p) · u(0) ≥ u(p)

Note that u is defined only over posterior beliefs, ignoring the actions that the DM may take

after observing the signal. As we show in Section 2, this is without loss of generality.

Other researchers have commented on the link between utility from beliefs and choice over

signals. (See the literature survey below). The present paper is the first to study these links

systematically, and it is the first to examine the DM’s attitude toward information sources in the

entire range of prior beliefs. There are several reasons why we should want to look at how the

DM approaches information at different priors. We have already mentioned examples of prior-

dependent attitudes, which intuitively seem to be driven by anticipatory feelings. Furthermore,

the DM may receive signals sequentially. Even if all he cares about is his terminal beliefs, he

updates his beliefs constantly during the process of observing the signals, and this means that

we have to consider his preferences over signals as a function of his current beliefs.


Our main point is that despite its apparent intuitive attraction, the model of expected utility

from beliefs offers a weak explanation of anomalous attitudes to information. We first tackle

the distinction between “good” and “bad” states in the model. Intuitively, this distinction is

important for interpreting the role of anticipatory feelings in attitudes to information. However,

we show that there are non-monotonic transformations of the VNM function u that represent

the same preference profile. That is, as far as the DM’s choices of signals are concerned, it does

not matter whether his beliefs are optimistic or pessimistic.

Our second set of results partially characterizes the class of attitudes to information which

can be explained by expected utility maximization over beliefs. In particular, we show that

anomalous attitudes to information, which have been linked in the literature to anticipatory

feelings (including the above set of examples) are in fact inconsistent with the model.

The results are quite simple from a technical point of view. Their importance is in that they

demonstrate the difficulties of formalizing intuitions about the relation between anticipatory

feelings and attitudes to information. Specifically, the model of expected utility from beliefs,

which has been employed by economists, has limited capability to explain anomalous attitudes

to information. In our opinion, the assumption of Bayesian updating is the chief “culprit” here.

We surmise that the intuitive links between anticipatory feelings and attitudes to information

probably involve elements of non-Bayesian updating.

Related Literature. Previous works, in which beliefs enter as arguments in agents’ utility

function include Akerlof and Dickens (1982), Geanakoplos et al. (1989), Rabin (1993), Caplin

and Leahy (1999, 2001), Koszegi (2001a, 2001b), Yariv (2001), Bénabou and Tirole (2002), Wein-

berg (2002) and Caplin and Eliaz (2002). These papers construct choice models, in which the


description of a consequence includes the DM’s beliefs, in addition to the state of nature and the

DM’s own action. Expected utility preferences are imposed on this extended consequence space.

These models have been used to study risk taking, reciprocal fairness and other kinds of choice

behavior.

Some of the above-cited papers point out that the extended choice models can lead to anoma-

lous choices of signals. As mentioned earlier, the present paper is the first paper that examines

the link between utility from beliefs and the DM’s entire attitude to information, whereas previ-

ous studies considered partial domains or implicitly assumed prior-independent attitudes. E.g.,

Caplin and Leahy (1999) focus on the choice between no information and full information and

they ignore signals of intermediate levels of accuracy.

The present paper is related to the literature that studies attitudes to temporal resolution

of uncertainty, emanating from Kreps and Porteus (1978). Although our model is static, an

argument could be made that it can be embedded in the Kreps-Porteus formalism: information-

loving (-averse) preferences in a static model can be re-interpreted as preferences for early (later)

resolution of uncertainty in a multi-period model. (See Grant et al. (1998).) However, the

original Kreps-Porteus formalism excludes prior-dependent preferences. At any rate, we do not

take issue with this “embedding” argument. To the extent that preferences over beliefs can be

re-interpreted as preferences over the timing of uncertainty resolution, our results apply to the

Kreps-Porteus formalism.

Decision theorists have studied anomalous attitudes to information and sought to trace them

to sources other than anticipatory feelings. As demonstrated by Wakker (1988), Safra and

Sulganik (1995) and Grant et al. (1998, 2000), violation of the independence axiom may cause


decision makers to be averse to information. In these papers, preferences are defined over lotteries

and utility is defined over physical consequences; attitudes to information are deduced from

these preferences. In contrast, the present paper redefines the consequence space to be the set of

posterior beliefs, while adhering to an expected-utility functional; attitudes to information are

not deduced, but serve as choice primitives.

Anomalous attitudes to information have also been explained by dynamically inconsistent

preferences. As shown by Carrillo and Mariotti (2000) and Bénabou and Tirole (2002), a decision

maker who lacks self control may decide to avoid information today so as to discipline his future

self. We are interested in studying a different explanation for anomalous attitude to information,

one that does not arise from a strategic decision to handicap oneself.

2 The Model

There are two states of nature, labeled ω1 and ω2. A DM has a prior probability of p ∈ (0, 1)

that ω1 is the true state. A signal is a random variable, which can take two values, s1 and s2.

A signal is characterized by a pair of conditional probabilities (q1, q2), where qi ∈ [1/2, 1] is the

probability of observing the realization si conditional on the state being ωi. We write q ≥ r if

q1 ≥ r1 and q2 ≥ r2. We write q > r if q1 ≥ r1 and q2 ≥ r2, with at least one strict inequality.

The DM updates his beliefs according to Bayes’ Law. Therefore, a signal q and a prior p generate

the following probability distribution σ(p, q) over the posterior probability of ω1:


si   Pr(si)                     Pr(ω1 | si)

s1   pq1 + (1 − p)(1 − q2)     pq1 / [pq1 + (1 − p)(1 − q2)]

s2   p(1 − q1) + (1 − p)q2     p(1 − q1) / [p(1 − q1) + (1 − p)q2]
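As a concrete check of this table, Bayes' rule can be coded directly. The following sketch is ours, not the paper's; the function name is hypothetical:

```python
def posterior_dist(p, q1, q2):
    """Distribution sigma(p, q) over the posterior of omega_1 induced by
    prior p and signal q = (q1, q2): a list of (probability, posterior) pairs."""
    pi1 = p * q1 + (1 - p) * (1 - q2)      # Pr(s1)
    pi2 = p * (1 - q1) + (1 - p) * q2      # Pr(s2)
    return [(pi1, p * q1 / pi1),           # Pr(omega_1 | s1)
            (pi2, p * (1 - q1) / pi2)]     # Pr(omega_1 | s2)

# The mean of the posteriors equals the prior (the Bayesian martingale property):
dist = posterior_dist(0.3, 0.9, 0.8)
mean = sum(pr * z for pr, z in dist)       # equals 0.3 up to rounding
```

The identity mean = p is exactly the property exploited later in the proof of Proposition 1.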

We assume that the DM’s choice of signals at every prior p is rational. Therefore, the DM’s

overall attitude to information can be summarized by a profile of preference relations (≽p)p∈(0,1)

over the set of signals [1/2, 1] × [1/2, 1]. We are agnostic as to whether the DM has to take some

action after observing a signal’s realization. We offer a justification for this agnosticism below.

The following are examples of interesting profiles of preferences over signals:

Information seeking / aversion. An information-seeking DM satisfies the following prop-

erty: for every prior p ∈ (0, 1), q > r implies q ≻p r. An information-averse DM satisfies the

opposite property: for every prior p ∈ (0, 1), q > r implies r ≻p q.1

Confirmatory bias. Consider a DM who likes changes in his beliefs only when he is confident

in his belief of the state of nature. Such a DM can be described as having a confirmatory

bias because he is willing to acquire information only when he is sufficiently certain that the

new information will not change his view as to which state is more probable. Formally, when

max{p, 1 − p} is close to 1, q > r implies q ≻p r; and when max{p, 1 − p} is close to 1/2, q > r

implies r ≻p q.

Anxiety. People’s desire to acquire information often depends on whether they believe that

the state is ‘good’ or ‘bad’. Some examples include the anxious patient from Section 1, or

1 For the purpose of our examples, we only consider pairs of signals q and r such that one is of uniformly higher accuracy than the other (i.e., q > r).


someone who eagerly follows hard news when the political situation seems to be good, yet avoids

them when the political situation seems to be bad. Formally, let ω1 be the “good” state. When p

is close to 1, q > r implies q ≻p r; and when p is close to 0, q > r implies r ≻p q.

Preference for biased information. People often have a preference for information

sources, which display a small bias in one particular direction, whereas they strictly dislike a bias

of the same magnitude in the opposite direction. E.g., they prefer a newspaper with a right-wing

orientation to a newspaper with a left-wing orientation, while ranking a neutral, fully informative

newspaper in the middle. Formally, there exists a∗ ∈ [1/2, 1) such that (1, a) ≻p (1, 1) ≻p (a, 1)

for every a > a∗.

Standard expected utility. After observing the realization of a signal, the DM has to take

some action. Each state is associated with some optimal action; failure to take it entails a loss.

The DM chooses his action so as to maximize expected utility from physical outcomes. We refer

to this DM as an EUM agent.

Suppose that p ≠ 1/2. When the signals q and r are sufficiently close to (1/2, 1/2), the DM's

optimal choice of action is independent of their realization; therefore, he is indifferent between

q and r. As the signal becomes more informative, it begins to affect the DM’s choice of action

and his expected loss decreases. For example, suppose that there are two actions, a1 and a2. For

every i = 1, 2, the utility values from taking actions ai and aj (i ≠ j) at state ωi are 0 and −1,

respectively. Thus, the DM faces a decision problem with a symmetric loss function. It follows

that whenever q > r and p > 1/2, q ∼p r if p/(1 − p) ≥ q2/(1 − q1), and q ≻p r if p/(1 − p) < q2/(1 − q1). Finally, q > r always implies q ≻p r at p = 1/2.


When a DM chooses a signal he effectively chooses a lottery over his posterior beliefs. Thus,

we may interpret choices of signals at a given prior as a revealed preference over such lotteries.

The following question therefore comes to mind: when can we explain a DM’s pattern of choices

over signals by maximization of expected utility over beliefs?

Let u : [0, 1] → R be a continuous VNM utility function over posterior beliefs. We will say

that u rationalizes the preference profile (≽p)p∈(0,1) if for every p ∈ (0, 1) and every pair of signals

q, r ∈ [1/2, 1] × [1/2, 1],

q ≽p r ⇐⇒ U(p, q) ≥ U(p, r)

where for every s ∈ {q, r}:

U(p, s) = π · u(ps1/π) + (1 − π) · u(p(1 − s1)/(1 − π)), where π = ps1 + (1 − p)(1 − s2).
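In code, the functional U is a few lines. This is our own sketch, not part of the formal analysis; the division assumes π ∈ (0, 1), i.e. both realizations occur with positive probability:

```python
def U(p, s1, s2, u):
    """Expected utility over posteriors from observing signal s = (s1, s2) at prior p."""
    pi = p * s1 + (1 - p) * (1 - s2)       # probability of the first realization
    return pi * u(p * s1 / pi) + (1 - pi) * u(p * (1 - s1) / (1 - pi))

# Example: with a convex u, full information beats no information at p = 0.4.
u = lambda z: z * z
full = U(0.4, 1.0, 1.0, u)                 # posteriors are 1 and 0
none = U(0.4, 0.5, 0.5, u)                 # the posterior stays at the prior
```

Here full evaluates to 0.4 and none to 0.16 (up to rounding), matching the inequality p · u(1) + (1 − p) · u(0) ≥ u(p) from the Introduction.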

The following analogy (previously suggested by Grant et al. (1998)) is useful for understand-

ing this concept. By Bayes’ rule, any signal induces a probability distribution over posteriors,

whose mean is the prior. Think of the prior as ‘initial wealth’ and of the posterior as ‘final

wealth’. Thus, a signal corresponds to a ‘fair lottery’ over final wealth: its mean is equal to

initial wealth. A DM with a rationalizable profile of preferences over signals corresponds to

an agent, whose attitude to fair lotteries at any initial wealth is induced by maximization of

expected utility over his final wealth.

The natural benchmark for the model is the EUM agent: the DM who maximizes expected

utility w.r.t physical outcomes, as if he has no anticipatory feelings. Let us verify that an EUM


agent can be rationalized by expected utility over beliefs. Let A be the set of actions available to

the DM upon observing a signal’s realization. Let π(a,ωi) denote the DM’s utility from taking

action a in state ωi. For every x ∈ [0, 1], define:

u(x) = max_{a∈A} [x · π(a, ω1) + (1 − x) · π(a, ω2)]

Thus, for every prior p and signal q, U (p, q) is equal to the DM’s indirect expected utility from

physical outcomes when he observes q. By definition, an EUM agent prefers q to r if and only if

q yields a higher indirect expected utility than r: q ≽p r if and only if U(p, q) ≥ U(p, r). Note

that u is continuous. It follows that u rationalizes the EUM agent. We shall examine the above

examples of anomalous preference profiles in Section 4.
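For the two-action symmetric-loss example of Section 2, the indirect utility reduces to u(x) = max{−(1 − x), −x} = −min{x, 1 − x}, and one can verify numerically that weak signals are worthless while strong ones help. The sketch below is ours (the helper names are hypothetical):

```python
def U(p, s1, s2, u):
    # Expected utility over posteriors from signal (s1, s2) at prior p.
    pi = p * s1 + (1 - p) * (1 - s2)
    return pi * u(p * s1 / pi) + (1 - pi) * u(p * (1 - s1) / (1 - pi))

def u_eum(x):
    # Indirect utility: payoff 0 for the correct action, -1 for the wrong one.
    return -min(x, 1 - x)

# At p = 0.8: a weak signal never changes the action, so it is worth exactly as
# much as no signal; a strong signal does change the action and strictly helps.
weak = U(0.8, 0.55, 0.55, u_eum)
none = U(0.8, 0.5, 0.5, u_eum)
strong = U(0.8, 0.95, 0.95, u_eum)
```

Here weak and none both evaluate to −0.2 while strong is higher (about −0.05), illustrating the indifference region p/(1 − p) ≥ q2/(1 − q1) derived above.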

Why are beliefs the only argument in the utility function? In this model, agents

only choose signals. We are agnostic as to whether or not they take some action after observing

the signal. The consequence space consists of posterior beliefs only. Does this entail any loss of

generality? Recall that in the EUM example, the function u that rationalized the DM’s attitude

to information was precisely the indirect expected utility function over physical outcomes. This is

in fact a general feature. There is no loss of generality in letting the consequence space consist of

beliefs only and being agnostic about the actions that the DM may take upon receiving the signal.

The utility function over beliefs u is the indirect utility induced by expected utility maximization

over the enriched consequence space, which includes not only beliefs but also states and actions.

To see why, consider an extended model, in which the DM has to choose an action a ∈ A

after observing the realization of a signal. Let z denote the DM’s posterior belief after observ-

ing a particular realization. Let v(ω, a, z) be a continuous utility function over the extended


consequence space. Let Ev(ω, a, z) be the DM’s expected utility given the posterior z:

Ev(ω, a, z) = z · v(ω1, a, z) + (1− z) · v(ω2, a, z)

Now, define:

u(z) = max_{a∈A} Ev(ω, a, z)

It is easy to see that u is continuous. A DM whose objective is to maximize the expectation

of v will choose signals so as to maximize the expectation of u. This follows from our ability

to employ dynamic programming under expected utility maximization over the terminal nodes

of a finite decision problem. Thus, our modeling procedure of defining utility over beliefs only

simplifies analysis without sacrificing generality.

The expected utility functional. We have assumed expected utility preferences over

probability distributions of posterior beliefs, for a number of reasons. First, it is in line with

existing work on anticipatory feelings. Second, it involves a minimal extension of the standard

case with no anticipatory feelings: whereas the EUM agent is rationalized by a utility function

having a particular structure, our model imposes no structure on u except for continuity. Third,

there is no obvious reason for using familiar non-expected utility functionals instead.

Unlike the first two reasons, which are relatively straightforward, the third reason requires

some explanation. In our model, a ranking of signals at a given prior translates into a ranking of

two distributions of posterior beliefs, via Bayes’ rule. Since Bayes’ rule is non-linear in p and q,

it follows that “nice” properties of u do not imply “nice” properties of (≽p)p∈(0,1). In particular,

an independence axiom imposed on preferences over probability distributions of beliefs does not


imply anything that looks like an independence axiom imposed on (≽p)p∈(0,1). Presumably, this

is also the case for the familiar weakenings of the independence axiom. We therefore believe that

it will be difficult to find a rationale for any of the familiar non-expected utility functionals.

3 Optimistic and Pessimistic Beliefs

Recall Akerlof and Dickens’ claim that “people...manipulate their own beliefs by selecting sources

of information likely to confirm ‘desired’ beliefs.” Bénabou and Tirole (2002, p. 906) write in

a similar vein: “people just like to think of themselves as good, able, generous, attractive, and

conversely find it painful to contemplate their failures and shortcomings.”

Both statements reflect the idea that a belief that the state is “good” is more desired than

the belief that the state is “bad”. Intuitively, agents who differ in what constitutes a desirable

belief for them will also differ in their attitude toward information. A patient’s avoidance of

information is affected by his fear that upon observing the signal, he will come to believe that

he is ill. Presumably, if the patient were a hypochondriac, for whom the belief that he is ill is

desirable, he would behave differently. Likewise, a news reader’s perception of what constitutes

“good news” will determine his choice between newspapers with a right-wing or a left-wing bias.

Our first result in this paper demonstrates that a naïve interpretation of u(1) > u(0), accord-

ing to which ω1 is a ‘good state’ and ω2 is a ‘bad state’ cannot be reflected by the DM’s choices

of signals. An agent who views ω1 as a good state and ω2 as a bad state may be indistinguishable

from an agent who views ω2 as the good state and ω1 as the bad state.


Proposition 1 If u rationalizes (≽p)p∈(0,1), then for every real number c ≠ 0, the function

v(z) = u(z) + cz also rationalizes (≽p)p∈(0,1). In particular, we can pick c = u(0) − u(1), such

that v(1) = v(0).

Proof. Assume that u rationalizes (≽p)p∈(0,1). Let

Ez(p, q) = π · (pq1/π) + (1 − π) · (p(1 − q1)/(1 − π)), where π = pq1 + (1 − p)(1 − q2),

denote the mean of σ(p, q). By the definition of v, V(p, q) = U(p, q) + c · Ez(p, q). Note that Ez(p, q) = p for every signal

q. Therefore, V(p, q) > V(p, r) if and only if U(p, q) > U(p, r). It follows that v rationalizes

(≽p)p∈(0,1). By simple algebraic manipulation, we can check that if c = u(0) − u(1), then v(0) =

v(1) = u(0).
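Proposition 1 lends itself to a brute-force numerical check (again a sketch of ours, with hypothetical names): adding a linear term cz to u shifts U(p, ·) by the constant c · p, so all comparisons of signals are preserved at every prior.

```python
import random

def U(p, s1, s2, u):
    # Expected utility over posteriors from signal (s1, s2) at prior p.
    pi = p * s1 + (1 - p) * (1 - s2)
    return pi * u(p * s1 / pi) + (1 - pi) * u(p * (1 - s1) / (1 - pi))

u = lambda z: z ** 3                     # a strictly increasing utility over posteriors
c = u(0) - u(1)                          # the choice of c in Proposition 1 (here -1)
v = lambda z: u(z) + c * z               # v(0) = v(1), yet v rationalizes the same profile

random.seed(0)
for _ in range(1000):
    p = random.uniform(0.05, 0.95)
    q = (random.uniform(0.5, 1), random.uniform(0.5, 1))
    r = (random.uniform(0.5, 1), random.uniform(0.5, 1))
    # The two utility differences coincide, so rankings of q vs. r agree at every p.
    assert abs((U(p, *q, u) - U(p, *r, u)) - (U(p, *q, v) - U(p, *r, v))) < 1e-9
```

The same construction with c = −2 · max u′ turns this increasing u into a decreasing v, illustrating the "extreme case" discussed below.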

This simple result reveals a sense, in which the model fails to capture the intuition that

people’s desire to hold optimistic beliefs determines their attitude to information. A belief x is

more optimistic than another belief y if u(x) > u(y). Intuitively, an agent for whom p = 1 is

more optimistic than p = 0 should behave differently from an agent, for whom p = 0 is more

optimistic than p = 1. However, Proposition 1 shows that they may display the same choices

over signals.

It follows that the ordinal utility rankings over posterior beliefs are quite meaningless for

choices over signals. To take an extreme case, suppose that u is a differentiable, strictly increasing

function. Normalize u(0) = 0. Let c = −2 · max_{x∈[0,1]} u′(x). Define v(x) = u(x) + c · x. It is

easy to show that v is a strictly decreasing function. Thus, every ordinal utility ranking which is


induced by u is reversed by v. Nevertheless, u and v rationalize the same attitude to information.

Proposition 1 is a straightforward consequence of assuming that the DM uses Bayes’ rule to

update his beliefs. Recall that by Bayes’ rule, the mean of σ(p, q) is p. Since the only choices

that our DM makes are between signals, his revealed preferences over beliefs are only w.r.t such

mean-preserving distributions. In particular, no choice experiment reveals the ranking between

u(x) and u(y), for any x, y ∈ [0, 1]: the DM cannot choose to believe that the state of nature is

ω1 with probability p′ ≠ p.

4 Characterization Results

In this section, we partially characterize the attitudes to information, which are consistent with

maximization of expected utility over posterior beliefs. Recall the “information seeking/aversion”

example of Section 2. As this example suggests, attitudes to information that remain qualitatively

the same both within and across priors are consistent with expected utility maximization over

beliefs. An information-seeking DM can be rationalized by a strictly convex VNM function u.

Similarly, an information-averse DM can be rationalized by any strictly concave u.

Remark 1 Suppose that u rationalizes a preference profile (≽p)p∈(0,1). If u is concave, then the

DM is information averse. If u is convex, the DM is information seeking.

Proof. Suppose that u is concave. Denote the posteriors induced by the prior p and the

signal q by z1 and z2 (z2 > z1). Likewise, denote the posteriors induced by p and the signal r by

w1 and w2 (w2 > w1). If q > r, then z2 > w2 and z1 < w1. By concavity of u (writing πq and πr for the probabilities of the high posterior under q and r),

πq · u(z2) + (1 − πq) · u(z1) ≤ πr · u(w2) + (1 − πr) · u(w1)


That is, U(p, q) ≤ U(p, r), so that r ≽p q. The case of convex u is handled similarly.
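Remark 1 can also be probed numerically (our own sketch): whenever q ≥ r componentwise, σ(p, q) is a mean-preserving spread of σ(p, r), so with a concave u the more informative signal never gains.

```python
import math, random

def U(p, s1, s2, u):
    # Expected utility over posteriors from signal (s1, s2) at prior p.
    pi = p * s1 + (1 - p) * (1 - s2)
    return pi * u(p * s1 / pi) + (1 - pi) * u(p * (1 - s1) / (1 - pi))

random.seed(1)
for _ in range(1000):
    p = random.uniform(0.05, 0.95)
    r = (random.uniform(0.5, 1), random.uniform(0.5, 1))
    q = (random.uniform(r[0], 1), random.uniform(r[1], 1))    # q >= r componentwise
    # math.sqrt is concave on [0, 1], so U(p, q) <= U(p, r): information aversion.
    assert U(p, *q, math.sqrt) <= U(p, *r, math.sqrt) + 1e-9
```

Replacing math.sqrt by a convex function reverses every inequality, reproducing the information-seeking case.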

So far, so good. The more interesting question, however, is whether attitudes to information

that display preference reversals w.r.t signal quality can be explained by maximization of

expected utility over beliefs. The interesting cases of anomalous attitudes to information, such

as the “anxiety” and “confirmatory bias” examples of Section 2, involve such preference reversals.

Definition 1 A preference profile (≽p)p∈(0,1) displays preference reversal w.r.t full information

if there exist priors p, p′ ∈ (0, 1) and signals q, q′, such that (1, 1) ≻p q and q′ ≻p′ (1, 1).

We shall see that certain attitudes to information, which display preference reversals that

intuitively stem from anticipatory feelings (including some of the examples given in Section 2)

are inconsistent with maximization of expected utility over beliefs. The following proposition

states that if the DM exhibits a preference reversal w.r.t. full information across some pair of

priors, then he must exhibit such a reversal within every prior sufficiently close to 1 or 0.

Proposition 2 Suppose that (≽p)p∈(0,1) displays preference reversal w.r.t full information. Then,

for every prior p sufficiently far from 1/2, there exists a pair of signals q and r such that q ≻p

(1, 1) ≻p r.

Proof. By Proposition 1, let u(0) = u(1) = 0. Therefore, U(p, (1, 1)) = 0 for every prior

p. Let p′, p″, q′ and q″ satisfy (1, 1) ≻p′ q′ and q″ ≻p″ (1, 1). Then, there exist x, y ∈ (0, 1)

satisfying u(x) > 0 and u(y) < 0. Choose x and y such that x = argmax_{z∈[0,1]} u(z) and

y = argmin_{z∈[0,1]} u(z). By Bayes’ formula, there exists a prior p∗ ∈ (0, 1) such that for every

p < p∗ and every z ∈ {x, y}, there exists a signal qz = (1, az) such that az ∈ [1/2, 1) and the

support of σ(p, qz) is {0, z}. It follows that U(p, qx) > 0 > U(p, qy). Similarly, there exists a

prior p∗∗ ∈ (0, 1) such that for every p > p∗∗ and every z ∈ {x, y}, there exists a signal rz = (bz, 1)

such that bz ∈ [1/2, 1) and the support of σ(p, rz) is {1, z}. Thus, U(p, rx) > 0 > U(p, ry).
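The one-sided signals used in the proof can be constructed explicitly (a sketch with our own names): solving Bayes' rule for qz = (1, az) with target posterior z gives az = 1 − p(1 − z)/(z(1 − p)), which lies in [1/2, 1) once p is small enough.

```python
def one_sided_signal(p, z):
    """The signal q_z = (1, a_z) from the proof: at prior p its posteriors are 0 and z.
    Valid only when p is small enough that a_z >= 1/2."""
    a = 1 - p * (1 - z) / (z * (1 - p))
    assert 0.5 <= a < 1, "prior not small enough for this target posterior"
    return (1.0, a)

def posterior_dist(p, q1, q2):
    # Distribution sigma(p, q) over the posterior of omega_1.
    pi1 = p * q1 + (1 - p) * (1 - q2)
    pi2 = p * (1 - q1) + (1 - p) * q2
    return [(pi1, p * q1 / pi1), (pi2, p * (1 - q1) / pi2)]

p, z = 0.05, 0.6
dist = posterior_dist(p, *one_sided_signal(p, z))
# the support of sigma(p, q_z) is {z, 0}, as required
```

The mirror-image construction rz = (bz, 1) delivers support {z, 1} for priors close enough to 1.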

Proposition 2 shows that if the DM prefers full information to some noisy signal at some

prior and prefers some noisy signal to full information at another prior, then he cannot display

an unequivocal attitude to full information at priors that are close to one or zero. In particular,

this means that the “anxiety” and “confirmatory bias” examples of Section 2 are inconsistent

with maximization of expected utility over beliefs.

We say that ≽p is monotone if either q > r implies q ≽p r, or q > r implies r ≽p q. That

is, the DM either always prefers more informative signals at p or always prefers less informative

signals at p. (Note that Proposition 2 implies that if (≽p)p∈(0,1) displays preference reversal w.r.t

full information, then ≽p is not monotone for any prior p close to 0 or 1.) We will say that

(≽p)p∈(0,1) is uniformly monotone if either q > r implies q ≽p r for all p ∈ (0, 1), or q > r implies

r ≽p q for all p ∈ (0, 1).

Proposition 3 If ≽p is monotone for all p ∈ (0, 1), then (≽p)p∈(0,1) is uniformly monotone.

Proof. By Proposition 1, we can set u (0) = u(1) = 0 w.l.o.g. Therefore, U(p, (1, 1)) = 0 for

every p ∈ (0, 1). However, there may or may not exist x ∈ (0, 1), for which u(x) = 0. We treat

these cases separately.

Case 1: u(x) > 0 for every x ∈ (0, 1). Then, (1/2, 1/2) ≻p (1, 1) for every p ∈ (0, 1). Suppose that u is concave. By

Remark 1, the DM is information averse. Suppose that u is not concave. Then, there exist

a prior p and a signal q arbitrarily close to (1/2, 1/2), such that U(p, q) > U(p, (1/2, 1/2)). Clearly,

q ≻p (1, 1) because u(x) > 0 for every x ∈ (0, 1). Therefore, ≽p is not monotone, a contradiction.

Case 2: u(x) < 0 for every x ∈ (0, 1). The proof is essentially the same as in Case 1.


Case 3: u(x) = 0 for some x ∈ (0, 1). Therefore, U(x, (1/2, 1/2)) = 0, such that (1/2, 1/2) ∼x (1, 1). By monotonicity of %x, q ∼x (1, 1) for every q ∈ [1/2, 1] × [1/2, 1]. If u(y) = 0 for every y ∈ [0, 1], then q ∼p r for every p ∈ (0, 1) and every q, r ∈ [1/2, 1] × [1/2, 1], and the result follows trivially.

Alternatively, suppose that u(y) ≠ 0 for some y ≠ x. There are two remaining sub-cases to consider. First, suppose that u assigns a value of zero to posteriors that are close to zero as well as to posteriors that are close to one. Thus, w.l.o.g., we can pick x to be arbitrarily close to 0. By Bayes’ formula, there exists a signal q = (1, a) such that a ∈ [1/2, 1] and the support of σ(x, q) is {0, y}. Since u(0) = 0 and u(y) ≠ 0, U(x, q) ≠ 0, so that q ≁x (1, 1), a contradiction.

Second, suppose that u is non-zero near at least one of the extreme points of [0, 1]. W.l.o.g., suppose that u assigns non-zero values to posteriors that are close to zero. By Bayes’ formula, there exist y ∈ (0, x) and a signal r = (b, 1) such that y is arbitrarily close to 0, b ∈ [1/2, 1] and the support of σ(x, r) is {y, 1}. Since u(1) = 0 and u(y) ≠ 0, U(x, r) ≠ 0. Therefore, r ≁x (1, 1), a contradiction.

The intuition behind Proposition 3 is as follows. The DM’s attitude to information is reflected in the curvature of u. Convexity of u corresponds to information seeking, and concavity of u corresponds to information aversion. A non-uniformly monotone preference profile thus corresponds to a utility function that is convex in one region and concave in another. This allows us to find a prior p for which %p is increasing when signal quality is low and decreasing when signal quality is high, or the other way around; hence, %p violates monotonicity.
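The curvature mechanism can be checked numerically. The two utility functions below are our own illustrations (the (a, b) signal parametrization is likewise our assumption): with a concave u, the value U(p, (a, a)) of a symmetric signal falls as its precision a rises (information aversion), while a convex u reverses the ranking (information seeking).

```python
def sigma(p, a, b):
    # Lottery over Bayesian posteriors of signal (a, b) at prior p;
    # assumes both reports have positive probability.
    q1 = p * a + (1 - p) * (1 - b)
    q0 = 1 - q1
    return [(q1, p * a / q1), (q0, p * (1 - a) / q0)]

def U(p, a, b, u):
    return sum(q * u(x) for q, x in sigma(p, a, b))

p = 0.5
concave = lambda x: x * (1 - x)          # concave, with u(0) = u(1) = 0
convex = lambda x: -x * (1 - x)          # convex, with u(0) = u(1) = 0
grid = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]    # precision of symmetric signals (a, a)
vals_cc = [U(p, a, a, concave) for a in grid]
vals_cx = [U(p, a, a, convex) for a in grid]
assert all(x >= y for x, y in zip(vals_cc, vals_cc[1:]))   # concave: U falls with a
assert all(x <= y for x, y in zip(vals_cx, vals_cx[1:]))   # convex: U rises with a
```

At p = 1/2 a symmetric signal (a, a) yields posteriors a and 1 − a with equal probability, so U(p, (a, a)) = (u(a) + u(1 − a))/2 and the two assertions are just Jensen’s inequality along a Blackwell-ordered chain of signals.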

Whereas the previous results rule out the “anxiety” and “confirmatory bias” examples of

Section 2, our next result addresses the “preference for biased information” example, which also

involves preference reversal w.r.t full information.


Proposition 4 There exists no continuous function u that rationalizes preference for biased information.

Proof. Assume the contrary. By Proposition 1, we can normalize u(0) = u(1) = 0. Thus, U(p, (1, 1)) = 0 for every prior p. By definition, preference for biased information implies preference reversal w.r.t full information. As we have already seen (see proof of Proposition 1), this implies that there exist x, y ∈ (0, 1), such that u(x) > 0 > u(y). If u(z) = 0 for every z sufficiently close to 0 or 1, then (1, 1) ∼z (1, a) or (1, 1) ∼z (a, 1) for some a arbitrarily close to 1, in contradiction to the assumption of preference for biased information. By continuity of u, there exists z∗ arbitrarily close to 1, such that the sign of u(z) is constant for every z > z∗. W.l.o.g., assume u(z) < 0 for every z > z∗. For every a ∈ [1/2, 1), the support of σ(z, (1, a)) consists of 0 and some z′ ∈ (z, 1). Since u(0) = 0 and u(z′) < 0, U(z, (1, a)) < 0. Also, for every a ∈ [1/2, 1), the support of σ(z, (a, 1)) consists of 1 and some z′ ∈ (0, z). By Bayes’ formula, z′ tends to 1 as z tends to 1. Therefore, we can pick z ∈ (z∗, 1) to be arbitrarily close to 1, such that z′ ∈ (z∗, z). Since u(1) = 0 and u(z′) < 0, U(z, (a, 1)) < 0. It follows that (1, 1) ≻z (1, a) and (1, 1) ≻z (a, 1), a contradiction.

The lesson from Propositions 2-4 is that commonly observed anomalous attitudes to information, which intuitively stem from anticipatory feelings, cannot be explained by maximization of expected utility from beliefs. These results rely strongly on the fact that the support of σ(p, (1, 1))

is constant ({0, 1}) across priors. Recall that by Proposition 1, we can assume u(0) = u(1) = 0

w.l.o.g.2 This means that the DM’s evaluation of the fully informative signal (1, 1) is the same

2 Note that Proposition 1 has been employed as a lemma in the proofs of Propositions 2-4. However, it is not a necessary ingredient; it merely simplifies the proofs.


at all priors. This property severely limits the extent to which preference reversals w.r.t full information can be accommodated by the model.

5 Concluding Remarks

The idea that anticipatory feelings affect our well-being, hence that there is such a thing as

“utility from beliefs”, is intuitive. In this paper, we have examined the extent to which the

utility maximization paradigm can accommodate this intuition. Our point of departure was the

idea that utility from beliefs can be revealed by choices over signals. Our investigation has led to the conclusion that, despite this intuition, expected utility maximization over beliefs rules out many anomalous attitudes to information that one would want to accommodate. Moreover, it largely fails to incorporate the intuition that people’s attitude to information is dictated by what they perceive as good or bad states. In this section, we discuss our methodology

and results.

The consequence space. It might be argued that our negative results follow from a narrow

specification of the consequence space and could be overcome by adding the DM’s prior belief

to the description of a consequence. In this way, a consequence would be a pair consisting of a

prior probability and a posterior probability, such that the utility from a certain posterior belief

may be different for different priors. What would be the implications of such an extension?

This extension is exposed to an obvious infinite-regress argument. Today’s prior is yesterday’s

posterior. The DM’s prior belief was presumably reached as a result of previous observations

of signals, and consequently of previous choices concerning which signals to observe. How were

these choices affected by the DM’s utility over “today’s beliefs”? Thus, once we fix the consequence space, we can always go one step backwards and ask how today’s utility function affected yesterday’s choices.

The extension also carries a great loss in explanatory power w.r.t choice behavior. Let c ∈ [0, 1] be some posterior belief and let p, p′ be two different prior beliefs. Note that no choice experiment can reveal the ranking between the consequences (c, p) and (c, p′). Therefore, from a revealed-preference perspective, no necessary relation exists between u(c, p) and u(c, p′). Any profile of utility functions (up)p∈(0,1), for which Eup(q) ≥ Eup(r) if and only if q %p r for every prior p and every pair of signals q, r, represents (%p)p∈(0,1). It is meaningless to compare the DM’s expected utility from a given distribution over posteriors at different priors.
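The non-identification point can be illustrated numerically; the utility function, the affine transformation, and the (a, b) signal parametrization below are all our own assumptions. Replacing up by any positive affine transformation α(p)·up + β(p), with coefficients that may depend on the prior, leaves every within-prior ranking of signals unchanged, so choice data cannot discipline comparisons across priors.

```python
import random

def U(p, a, b, u):
    """Expected utility over Bayesian posteriors of signal (a, b) at prior p;
    a = P(report "1" | state 1), b = P(report "0" | state 0)."""
    q1 = p * a + (1 - p) * (1 - b)   # P(report "1")
    q0 = 1 - q1                      # P(report "0")
    return q1 * u(p * a / q1) + q0 * u(p * (1 - a) / q0)

u = lambda x: (x - 0.4) ** 2                        # some utility over posteriors
rng = random.Random(0)
for _ in range(200):
    p = rng.uniform(0.01, 0.99)
    alpha, beta = rng.uniform(0.5, 2.0), rng.uniform(-5, 5)
    u2 = lambda x: alpha * u(x) + beta              # prior-dependent rescaling of u
    q = (rng.uniform(0.5, 1), rng.uniform(0.5, 1))  # two arbitrary signals
    r = (rng.uniform(0.5, 1), rng.uniform(0.5, 1))
    da = U(p, *q, u) - U(p, *r, u)
    db = U(p, *q, u2) - U(p, *r, u2)
    if abs(da) > 1e-9:                              # same strict ranking at prior p
        assert da * db > 0
```

Since U is linear in u, the transformed value equals α·U + β at every signal, so strict rankings at a fixed prior are preserved no matter how α and β vary with p.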

Analogy to risk attitudes. As Grant et al. (1998) point out, a useful way of thinking

about attitudes to information is by exploring the analogy with risk attitudes. The following

model is formally very similar to the model of Section 2. A DM has an initial wealth w ∈ W .

Given w, he has a preference relation ºw over some set of fair incremental lotteries (i.e., lotteries

whose mean is 0). For instance, a lottery in which a gain of 1000 and a loss of 1000 have

equal probabilities is such a fair gamble. Is the preference profile (ºw)w∈W consistent with

maximization of expected utility over the DM’s final wealth? If so, what is the relation between

the DM’s attitudes to spread (exhibited by (ºw)w∈W at different initial wealth levels w) and

the risk attitudes exhibited by his VNM utility function over final wealth?

These questions were studied by decision theorists (Rothschild and Stiglitz (1970), Schmeidler (1979), Landsberger and Meilijson (1990), among others), whose goal was primarily to redefine risk aversion as a property of attitudes toward spread, rather than a property of preferences over (final) wealth. As far as we know, none of these studies examined the case of


wealth-dependent qualitative attitudes to spread. In terms of this literature, our results translate into statements of the following form: wealth-dependent attitudes to spread over a certain class of fair lotteries are inconsistent with expected-utility maximization over final wealth.

Thus, a by-product of our analysis is a contribution to the classical literature on risk attitudes.

Of course, the class of “fair lotteries” that we study in this paper has no natural interpretation

when the consequence space consists of final wealth levels, rather than posterior beliefs.

Bayesian updating. Throughout this paper, we assumed that the DM translates signals

into lotteries over posteriors in accordance with Bayes’ rule. Our analysis relied on very abstract

properties of Bayes’ law (especially its martingale property), while the actual formula has not been manipulated. The non-linear form of Bayes’ law actually hampers constructive results, even if attention is restricted to a sufficiently narrow class of signals that would admit a larger class

of rationalizable attitudes to information. The problem of utility-over-beliefs representation of

attitudes to information under non-Bayesian updating rules is left for future research.
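The martingale property invoked here — the expected posterior equals the prior, whatever the signal — can be verified directly in the two-state setup (the (a, b) parametrization of signals is our assumption):

```python
from fractions import Fraction as F

def sigma(p, a, b):
    # Lottery over Bayesian posteriors induced by signal (a, b) at prior p;
    # a = P(report "1" | state 1), b = P(report "0" | state 0),
    # with both reports occurring with positive probability.
    q1 = p * a + (1 - p) * (1 - b)
    q0 = 1 - q1
    return [(q1, p * a / q1), (q0, p * (1 - a) / q0)]

# Martingale property of Bayes' rule: E[posterior] = prior, for any signal.
for p in (F(1, 7), F(1, 2), F(9, 10)):
    for (a, b) in ((F(1, 2), F(1, 2)), (F(2, 3), F(3, 4)), (1, 1)):
        assert sum(q * x for q, x in sigma(p, a, b)) == p
```

The identity holds term by term: q1·(pa/q1) + q0·(p(1 − a)/q0) = pa + p(1 − a) = p, which is why exact rational arithmetic confirms it without tolerance.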

Given our negative results in this paper, we would like to speculate that to the extent that

anticipatory feelings do explain anomalous attitudes to information, an additional component

in the explanation is needed, namely that agents do not update according to Bayes’ rule. The

premise underlying much of the work on anticipatory feelings has been that as far as signal

processing is concerned, agents do not systematically fool themselves; anomalies can be accounted

for just by adding beliefs into the utility function, without abandoning Bayesian updating. As we have demonstrated, this premise leads to conclusions that jar with our intuitions concerning the link between anticipatory feelings and attitudes to information. We believe that non-Bayesian elements in the updating process are indispensable in order to formalize this link.


References

[1] Akerlof, G. and W. Dickens, “The Economic Consequences of Cognitive Dissonance,” American Economic Review, LXXII (1982), 307-319.

[2] Bénabou, R. and J. Tirole, “Self-Confidence and Personal Motivation,” Quarterly Journal of Economics, 117 (2002), 871-915.

[3] Caplin, A. and K. Eliaz, “AIDS Policy and Psychology: An Implementation-theoretic Approach,” mimeo, NYU, (2002).

[4] Caplin, A. and J. Leahy, “The Supply of Information by a Concerned Expert,” Research

Report 99-08, C.V. Starr Center, NYU, (1999).

[5] Caplin, A. and J. Leahy, “Psychological Expected Utility,” Quarterly Journal of Economics

116 (2001), 55-79.

[6] Carrillo, J. and T. Mariotti, “Strategic Ignorance as a Self-Disciplining Device,” Review of

Economic Studies, LXVI (2000), 529-544.

[7] Cohen, J., “Men Avoid Preventive Health Care in Sickness and in Health,” ABC News, (2002), available at http://abcnews.go.com/sections/living/DailyNews/doctordelay060702.html.

[8] Geanakoplos, J., D. Pearce and E. Stacchetti, “Psychological Games and Sequential Rationality,” Games and Economic Behavior 1 (1989), 60-79.

[9] Grant, S., A. Kajii and B. Polak, “Preferences for Information and Dynamic Inconsistency,”

Theory and Decision 48 (2000), 263-286.

23

Page 24: Anticipatory Feelings and Attitudes to Information

[10] Grant, S., A. Kajii and B. Polak, “Intrinsic Preferences for Information,” Journal of Economic Theory 83 (1998), 233-259.

[11] Kőszegi, B., “Who Has Anticipatory Feelings?” mimeo, UC Berkeley, (2001a).

[12] Kőszegi, B., “Anxiety, Doctor-Patient Communication and Health Education,” mimeo, UC Berkeley, (2001b).

[13] Kreps, D. and E. Porteus, “Temporal Resolution of Uncertainty and Dynamic Choice Theory,” Econometrica 46 (1978), 185-200.

[14] Landsberger, M. and I. Meilijson, “Lotteries, Insurance, and Star-Shaped Utility Function,”

Journal of Economic Theory 52 (1990), 1-17.

[15] Lerman, C., C. Hughes, S. J. Lemon, D. Main, C. Snyder, C. Durham, S. Narod and H. T. Lynch, “What You Don’t Know Can Hurt You: Adverse Psychological Effects in Members of BRCA1-Linked and BRCA2-Linked Families Who Decline Genetic Testing,” Journal of Clinical Oncology 16 (1998), 1650-1654.

[16] Lyter, D., R. Valdiserri, L. Kingsley, W. Amoroso, and C. Rinaldo, “The HIV Antibody Test: Why Gay and Bisexual Men Want or Do Not Want To Know Their Results,” Public Health Reports 102 (1987), 468-474.

[17] Rothschild, M. and J. Stiglitz, “Increasing Risk: I. A Definition,” Journal of Economic Theory 2 (1970), 225-243.

[18] Safra, Z. and E. Sulganik, “On the Nonexistence of Blackwell’s Theorem-type Results With

General Preference Relations,” Journal of Risk and Uncertainty 10 (1995), 187-201.


[19] Schmeidler, D., “A Bibliographical Note on a Theorem of Hardy, Littlewood and Polya,” Journal of Economic Theory 20 (1979), 125-128.

[20] Wakker, P., “Non-expected Utility as Aversion of Information,” Journal of Behavioral Decision Making 1 (1988), 169-175.

[21] Weinberg, B., “A Model of Overconfidence,” mimeo, Ohio State University, (2002).

[22] Yariv, L., “Believe and Let Believe: Axiomatic Foundations for Belief Dependent Utility Functionals,” Discussion Paper 1344, Cowles Foundation, Yale University, (2001).
