Risk Aversion, Information Acquisition, and Technology Adoption

Canan Ulu (McDonough School of Business, Georgetown University)
Jim Smith (Fuqua School of Business, Duke University)

SAMSI Games and Decisions in Reliability and Risk Workshop, May 2016



Page 2

Problem: Should Jim buy a Tesla? Or should he wait and learn more?

 

[Figure: a multi-period decision tree. Each period, the DM chooses Wait, Adopt, or Quit; waiting yields signals about the value of the technology, such as hearing about it, seeing a 99 (later a 103) rating in Consumer Reports, or reading about cars catching fire after a crash.]

Other examples: a farmer planting a new variety of soybean, a utility building a power plant based on a new technology, or doctors changing treatments.

Page 3

We study a DP model of information acquisition in technology adoption decisions.

We build on McCardle (1985) and Ulu and Smith (2009), adding risk aversion.

In each period, the consumer can adopt the technology, gather information about the technology, or quit.

• State variables: probability distribution on technology benefits; wealth

• Beliefs are updated over time using Bayes’ Rule.

• Arbitrary distributions are allowed.

• Information gathering is costly.

We focus on structural properties of the model:

• Properties of the value function (increasing, convex, . . . )

• Monotonicity properties of the optimal policies

• Effects of risk aversion


Page 4

Modeling learning: Notation

With k periods to go, the DM observes a signal x and moves to k − 1 periods to go; θ denotes the value of the technology.

Prior: π(θ)

Signal distribution: f(x; π) = ∫θ L(x | θ) π(θ) dθ

Posterior: Π(θ; π, x) = L(x | θ) π(θ) / f(x; π)

Shorthand: prior π, signal distribution f(π), posterior Π(π, x)
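The update rule above can be checked numerically on a discretized θ grid. The normal likelihood below is a hypothetical stand-in for L(x | θ), purely for illustration:

```python
import numpy as np

# Sketch of the prior -> posterior update on a theta grid.
# The normal likelihood is a hypothetical choice of L(x|theta), not the deck's.
def likelihood(x, theta, sigma=0.5):
    return np.exp(-0.5 * ((x - theta) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def update(theta_grid, prior, x):
    dtheta = theta_grid[1] - theta_grid[0]
    f_x = np.sum(likelihood(x, theta_grid) * prior) * dtheta   # f(x; pi)
    return likelihood(x, theta_grid) * prior / f_x             # Pi(theta; pi, x)

theta = np.linspace(-0.5, 0.5, 201)
prior = np.ones_like(theta)              # uniform density on [-0.5, 0.5]
post = update(theta, prior, x=0.3)

dtheta = theta[1] - theta[0]
assert abs(np.sum(post) * dtheta - 1.0) < 1e-9   # posterior is a density
assert np.sum(theta * post) * dtheta > 0.0       # a high signal raises E[theta]
```

A favorable signal tilts the posterior toward higher θ.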

Page 5

Decision Tree Example: Risk Neutral

Initial wealth 25; cost of waiting 3.

Prior: .05 Low (−20), .35 Med (−10), .60 High (+25)
Likelihood of a negative/positive signal: Low .80/.20, Med .70/.30, High .15/.85

Adopt now: expected wealth = .05(5) + .35(15) + .60(50) = 35.5
Wait: negative signal (.38): posterior (.11, .65, .24), adopt 19.333 vs. reject 22, so reject
      positive signal (.63): posterior (.02, .17, .82), adopt 40.400 vs. reject 22, so adopt
      value of waiting = 33.5
Quit: 25

Optimal decision: Adopt, value 35.5.

Page 6

Decision Tree Example: Risk Averse

Same tree with u(w) = ln(w).

Adopt now: EU = .05 ln(5) + .35 ln(15) + .60 ln(50) = 3.376
Wait: negative signal (.38): adopt 2.621 vs. reject ln(22) = 3.091, so reject
      positive signal (.63): adopt 3.570 vs. reject 3.091, so adopt
      value of waiting = 3.391
Quit: ln(25) = 3.219

Optimal decision: Wait, value 3.391. Risk aversion flips the optimal decision from adopting to waiting.
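Both trees can be reproduced with a short rollback that takes the utility function as a parameter; this is a sketch using the prior, likelihoods, wealth, and waiting cost stated on these two pages.

```python
import math

# Two-period adopt/wait/quit rollback for the example on pages 5-6.
THETAS = [-20.0, -10.0, 25.0]      # Low / Med / High technology values
PRIOR = [0.05, 0.35, 0.60]
P_POS = [0.20, 0.30, 0.85]         # P(positive signal | theta) from the likelihood table
W0, COST = 25.0, 3.0

def rollback(u):
    adopt_now = sum(p * u(W0 + t) for p, t in zip(PRIOR, THETAS))
    w1 = W0 - COST
    wait = 0.0
    for positive in (True, False):
        like = [q if positive else 1.0 - q for q in P_POS]
        marg = sum(p * l for p, l in zip(PRIOR, like))       # P(signal)
        post = [p * l / marg for p, l in zip(PRIOR, like)]   # Bayes' rule
        adopt = sum(p * u(w1 + t) for p, t in zip(post, THETAS))
        wait += marg * max(adopt, u(w1))                     # adopt or reject after the signal
    values = {"adopt": adopt_now, "wait": wait, "quit": u(W0)}
    return max(values, key=values.get), values

choice_rn, vals_rn = rollback(lambda w: w)   # risk neutral: adopt (35.5 beats 33.5 and 25)
choice_ra, vals_ra = rollback(math.log)      # log utility: wait (3.391 beats 3.376 and 3.219)
print(choice_rn, choice_ra)
```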

Page 7

The value function:

c = cost of waiting (c > 0)

u(w) = DM’s utility for wealth w

Value (or derived utility) function with k periods remaining:

U0(w, π) = u(w)

Uk(w, π) = max of:

E[u(w + θ̃) | π] (adopt)

E[Uk−1(w − c, Π(π, x̃)) | f(π)] (wait)

u(w) (quit)

where

E[u(w + θ̃) | π] = ∫θ u(w + θ) π(θ) dθ

E[Uk−1(w − c, Π(π, x̃)) | f(π)] = ∫x Uk−1(w − c, Π(π, x)) f(x; π) dx
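The recursion can be computed directly in the beta-Bernoulli special case used in the illustrative example. The sketch below assumes risk-neutral utility u(w) = w, so the adopt branch is just w + E[θ | π], and memoizes on the belief state (α, β):

```python
from functools import lru_cache

C = 0.01  # cost of waiting, as in the illustrative example

@lru_cache(maxsize=None)
def value(k, w, a, b):
    """U_k(w, pi) with Beta(a, b) beliefs over p, theta = p - 0.5, and u(w) = w."""
    if k == 0:
        return w                                   # U_0(w, pi) = u(w)
    adopt = w + a / (a + b) - 0.5                  # E[u(w + theta) | pi], risk neutral
    p_plus = a / (a + b)                           # predictive prob. of a + signal
    wait = (p_plus * value(k - 1, w - C, a + 1, b)
            + (1 - p_plus) * value(k - 1, w - C, a, b + 1))
    return max(adopt, wait, w)                     # adopt / wait / quit

v = value(20, 1.06, 2.25, 1.75)    # 20 periods to go, starting from the page-8 prior
```

More periods to go and LR-better beliefs weakly raise the value.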

Page 8

Illustrative example: beta-Bernoulli model

θ = p − 0.5, where π(p) ∝ p^(α−1) (1 − p)^(β−1) with p ∈ [0, 1]

Expected benefit: E[θ] = α/(α + β) − 0.5; "precision" = α + β

Signals are + or − with probability p or (1 − p), and precision increases by one each period. Given a prior with parameters (α, β):

+ signal ⇒ (α + 1, β);  − signal ⇒ (α, β + 1)

Example: Start with (α, β) = (2.25, 1.75), observe (−,+,−,+,−,−):

[Figure: densities of the technology benefit θ on [−0.50, 0.50], showing the prior and the posteriors after each signal in the sequence −, +, −, +, −, −.]
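The sequence in the figure can be reproduced directly from the update rule:

```python
# Start at (alpha, beta) = (2.25, 1.75) and apply the signals (-, +, -, +, -, -);
# each signal raises the precision alpha + beta by one.
alpha, beta = 2.25, 1.75
history = [(alpha / (alpha + beta) - 0.5, alpha + beta)]   # (E[theta], precision)
for s in "-+-+--":
    if s == "+":
        alpha += 1.0
    else:
        beta += 1.0
    history.append((alpha / (alpha + beta) - 0.5, alpha + beta))

print(history[0])    # prior: E[theta] = 0.0625 at precision 4
print(history[-1])   # ends near E[theta] = -0.075 at precision 10
```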

Page 9

Illustrative example: risk-neutral results

u(w) = w; initial wealth = 1.06; c = 0.01; long time horizon

[Figures: left, policy regions in the plane of time/precision (α + β) vs. expected benefit E[θ] = α/(α + β) − 0.5, with an Adoption Region on top, a Wait Region in the middle, and a Rejection Region at the bottom; π LR-improves upward. Right, the value function with α + β = 10: expected utility vs. expected benefit, with Reject, Wait, and Adopt segments.]

Page 10

Illustrative example: risk-averse results

u(w) = 1.2 − 0.2 w^(1−γ) with γ = 6; initial wealth = 1.06; c = 0.01

[Figures: the corresponding policy regions (Adoption, Wait, and Rejection Regions in the precision vs. expected-benefit plane) and the value function with α + β = 10 for the risk-averse case.]

Page 11

General results: Defining “better” priors

Definition: π2 likelihood-ratio (LR) dominates π1 (written π2 ≽LR π1) if π2(θ)/π1(θ) is increasing in θ.

Examples of LR improvements:

• Beta: increasing α while holding the precision (α + β) constant

• Normal: increasing the mean while holding the variance constant

LR-dominance implies FOSD, but the reverse is not true.

[Figure: two sketches of densities π1 and π2 on [0, 1], one pair that is not an LR-improvement and one pair that is.]

The LR-order survives Bayesian updating: given a signal x,

π2 ≽LR π1 ⇔ Π(π2, x) ≽LR Π(π1, x), for all x ∈ X.
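This claim is easy to spot-check numerically with beta priors and a Bernoulli signal, which maps (α, β) to (α + 1, β) on a + signal (a sketch; the specific parameter choices are illustrative):

```python
import math

# Numeric check with beta priors on p in (0, 1): Beta(3, 2) LR-dominates
# Beta(2, 3) (same precision, larger alpha), and the order survives a
# Bernoulli update on either signal.
def beta_pdf(p, a, b):
    const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return const * p ** (a - 1) * (1 - p) ** (b - 1)

def lr_dominates(params2, params1, grid):
    # True if the density ratio pi2/pi1 is nondecreasing along the grid
    ratios = [beta_pdf(p, *params2) / beta_pdf(p, *params1) for p in grid]
    return all(r2 >= r1 for r1, r2 in zip(ratios, ratios[1:]))

grid = [i / 100 for i in range(1, 100)]
assert lr_dominates((3, 2), (2, 3), grid)   # prior order
assert lr_dominates((4, 2), (3, 3), grid)   # after a + signal
assert lr_dominates((3, 3), (2, 4), grid)   # after a - signal
print("LR order preserved under Bayesian updating")
```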

Page 12

General results: Ordered signal processes

Definition: The signal process L(x | θ) satisfies the monotone-likelihood-ratio (MLR) property if the signal space X is ordered and

L(x | θ2) ≽LR L(x | θ1) for all θ2 ≥ θ1.

Examples: Bernoulli signals; normal signals

If the signal process satisfies the MLR property, then:

• π2 ≽LR π1 ⇒ f(π2) ≽LR f(π1)

• For any prior π, x2 ≥ x1 ⇔ Π(π, x2) ≽LR Π(π, x1).

Page 13

General results: Increasing value functions

Definition: V(π) is LR-increasing if V(π2) ≥ V(π1) whenever π2 ≽LR π1.

Proposition [Increasing]: Suppose the DM's utility function u(w) is increasing in w and the signal process satisfies the MLR property. Then, for all k and w, the value function Uk(w, π) is LR-increasing in π.

Page 14

General results: Increasing value functions

Value (or derived utility) function with k periods remaining:

U0(w, π) = u(w)

Uk(w, π) = max of:

E[u(w + θ̃) | π] (adopt)

E[Uk−1(w − c, Π(π, x̃)) | f(π)] (wait)

u(w) (quit)

[Figure: the value function with α + β = 10: expected utility vs. expected benefit E[θ] = α/(α + β) − 0.5, with Reject, Wait, and Adopt segments; π LR-improves to the right.]

Page 15

General results: Increasing policies

Proposition: Suppose the DM's utility function is increasing and the signal process satisfies the MLR property.

• Rejection: If it is optimal to reject with prior π2, it is also optimal to reject with any prior π1 such that π2 ≽LR π1.

Proof: Follows from LR-increasing value functions.

• Adoption: Suppose the DM is risk neutral (or risk seeking). If it is optimal to adopt with prior π1, then it is also optimal to adopt with any prior π2 such that π2 ≽LR π1.

Proof: The utility difference between adoption and waiting is LR-increasing.

With risk neutrality, policies "increase" from quit to wait to adopt as π LR-improves.

Page 16

Illustrative example revisited

Policies are LR-increasing.

[Figures (risk-neutral case): policy regions in the precision vs. expected-benefit plane, with Adoption, Wait, and Rejection Regions and π LR-improving upward, alongside the value function with α + β = 10.]

Page 17

Illustrative example revisited

Policies are LR-increasing.

[Figures (risk-averse case): the corresponding policy regions and value function with α + β = 10.]

Page 18

Illustrative example revisited

[Figures: the risk-neutral and risk-averse value functions with α + β = 10, side by side: expected utility vs. expected benefit, each with Reject, Wait, and Adopt segments.]

With risk aversion, can adopt and wait cross twice as we LR-improve π?

Page 19

If the DM is risk averse, adoption policies may not be monotonic in π.

Example: log utility; three technology values; signals satisfy MLR property

 

[Decision trees: initial wealth 23, c = 3, log utility; technology values Low (−20), Med (−10.5), High (+25). A wealth floor of 0.002 keeps log utility defined at the worst outcome after waiting.

First prior (.08, .35, .57): adopting now gives EU = 3.177, waiting gives EU = 3.168 (after a − signal, probability .41, posterior (.15, .65, .19), quit; after a + signal, probability .59, posterior (.03, .14, .83), adopt), and quitting gives ln(23) = 3.14. Adopt is optimal.

LR-better prior (.00, .38, .62): adopting now gives EU = 3.360, waiting gives EU = 3.363 (after a − signal, probability .38, posterior (.00, .77, .23), quit; after a + signal, probability .62, posterior (.00, .14, .86), adopt). Wait is optimal.

So adopting is optimal under the worse prior but not under the LR-better one: the adoption policy is not monotonic.]
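The flip can be verified by rolling back both trees with the probabilities as displayed on the slide (rounded to two decimals, so the computed values differ from the slide's in the third decimal):

```python
import math

# Rollback of the two page-19 trees. Signal probabilities and posteriors are
# taken as displayed on the slide; the wealth floor 0.002 keeps log utility
# defined at the Low outcome after waiting.
U = math.log
W0, C = 23.0, 3.0
THETAS = [-20.0, -10.5, 25.0]

def tree_value(prior, p_neg, post_neg, post_pos):
    adopt_now = sum(p * U(W0 + t) for p, t in zip(prior, THETAS))
    w1 = W0 - C
    outcomes = [max(w1 + t, 0.002) for t in THETAS]          # wealth after waiting
    branch = []
    for post in (post_neg, post_pos):
        adopt = sum(p * U(x) for p, x in zip(post, outcomes))
        branch.append(max(adopt, U(w1)))                     # adopt or quit after the signal
    wait = p_neg * branch[0] + (1 - p_neg) * branch[1]
    options = {"adopt": adopt_now, "wait": wait, "quit": U(W0)}
    return max(options, key=options.get)

print(tree_value([.08, .35, .57], .41, [.15, .65, .19], [.03, .14, .83]))  # adopt
print(tree_value([.00, .38, .62], .38, [.00, .77, .23], [.00, .14, .86]))  # wait
```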

Page 20

Comparing waiting and adopting in this example:

[Figures: left, cumulative distributions of wealth + technology value (θ) − search costs under adopting and under waiting; the two CDFs cross. Right, log utility over the same wealth range, plunging as wealth approaches zero.]

With log utility, a bad technology outcome plus search costs can be catastrophic if the resulting wealth level is near zero.

• Search costs push the DM "over the edge."

• Can we ensure monotonicity by limiting the degree of risk aversion?

Page 21

Increasing Adoption Policies

Proposition: Suppose the DM is risk averse and her utility function u exhibits decreasing absolute risk aversion (DARA), i.e., her risk tolerance τu(w) is increasing. If

τu(w0 + θ̲ − c) ≥ −θ̲,

where w0 = w − kc and θ̲ = min θ ("not too risk averse"), then adoption policies are monotonic. If u is CARA, no risk-tolerance bound is required.

We define a new property, "sLR-increasing":

• LR-increasing functions are sLR-increasing

• sLR-increasing functions are single-crossing

• Bayesian updating preserves the sLR-increasing property

We show the utility difference between adoption and waiting is sLR-increasing.

• Then, the utility difference between adoption and waiting is single-crossing.
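As a quick sanity check, the bound fails in the page-19 example (log utility, w = 23, c = 3; taking k = 1 waiting opportunity is an assumption for that two-stage tree), so the proposition does not rule out the nonmonotonicity seen there:

```python
# Log utility has risk tolerance tau_u(w) = -u'(w)/u''(w) = w.
def tau(w):
    return w

w, k, c, theta_min = 23.0, 1, 3.0, -20.0
w0 = w - k * c                                   # w0 = w - kc = 20
bound_holds = tau(w0 + theta_min - c) >= -theta_min
print(bound_holds)   # False: the formula gives tau(-3) = -3, far below 20,
                     # so the "not too risk averse" condition fails here
```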

Page 22

Summary: With a DARA utility function that is "not too risk averse," we have:

[Figure: policy regions in the precision vs. expected-benefit plane, with an Adoption Region on top, a Wait Region in the middle, and a Rejection Region at the bottom; π LR-improves upward.]

The same structural properties as in the risk-neutral model (existence of thresholds, etc.), but risk aversion leads to quitting sooner and adopting later (if CARA).

Page 23

Generalizations:

Model with discounting, u(w + NPV of costs/benefits)

• Delay is costly; also risk reducing

• Results and proofs follow the same pattern

Other applications of sLR-increasing: monotonic policies in DPs

• Submodularity (increasing differences) conditions (Topkis (1979), Lovejoy (1987a,b)) are sometimes hard to establish

• Single-crossing conditions (e.g., Milgrom and Shannon (1994), Quah and Strulovici (2012), . . . ) are hard to use in DPs

Page 24

Thank you!
