
Citation:

Vartanian, O., & Mandel, D. R. (in press). Neural bases of judgment and decision

making. In M. K. Dhami, A. Schlottmann, & M. Waldmann (Eds.), Judgment and

decision making as a skill: Learning, development, and evolution. Cambridge, UK:

Cambridge University Press.

Neural Bases of Judgment and Decision Making

Oshin Vartanian1,2 & David R. Mandel1,3

1 Defence R&D Canada — Toronto

2 University of Toronto — Scarborough

3 University of Toronto — St. George


Introduction

Humans are naturally inclined to seek explanations of behaviour. Due to recent technical

and theoretical advances, neuroscience has begun to elucidate the material causes of human

behaviour, including judgment and decision making. Material-cause explanations focus on the

substrates that comprise or give rise to a phenomenon (Aristotle 1929; Killeen 2001), and in the

present chapter we focus on recent studies examining the neural substrates of human judgment

and decision making. Specifically, we review evidence from brain imaging literature to assess

the contribution of neuroscience to understanding judgment and decision making in four areas:

(a) decision making under risk; (b) base-rate neglect in probability judgment under conditions of

uncertainty; (c) belief bias—namely, the biasing effect of personal knowledge on epistemic

judgments; and (d) decision making in the social context of two-party economic exchange

games. Within each area we will focus on studies in which this methodology has been applied to

address theoretically important questions, enabling us to evaluate the contribution of findings to

our understanding of the causal origins of judgment and decision making.

Generally speaking, neuroscientific explanations of psychological phenomena have taken

the form of investigating the involvement of various brain structures that underlie behaviours of

interest. However, cognitive and social neuroscientists are less interested in human brain

mapping per se than they are in understanding the underlying processes and mechanisms that are

represented by the involvement of the activated brain structures (Poldrack 2006). This is because

when knowledge about brain function is combined with standard forms of behavioural evidence,

inferences about underlying psychological processes and mechanisms may be augmented. The

extent to which satisfactory explanations can be provided is in turn a function of the ability of the

available techniques (and related theories) to pinpoint with specificity the processes and


mechanisms associated with the regions of interest. Although specificity can have multiple

connotations in the context of neuroimaging, here we will focus on functional specificity, and

address its effect on generating accounts of behaviour.

By functional specificity (i.e., selectivity) we refer to the ability to infer specific function

(e.g., memory) based on the activation of a specific brain region or structure. From an

evolutionary perspective it is unlikely that any given neural tissue evolved to partake in only one

specific function (e.g., to only engage when the organism detects a red umbrella). It is far more

likely that neural tissue evolved to serve multiple functions (e.g., to engage when the organism

experiences pleasant or unpleasant emotion, views an extremely attractive or extremely

unattractive face, etc.). Of course, an evolved design feature that is economically advantageous is

not necessarily ideal for making specific functional inferences. This highlights the necessity to

have a clear understanding of the inferential processes (and associated problems) used to

evaluate neuroscientific findings, which we will focus on next.

The key issue concerns the appropriate rules of logic for using findings of differential brain

activation as tests of cognitive theories. Currently, this is often done through “reverse inference,”

whereby the engagement of a particular cognitive process is inferred from the activation of a

particular brain region (Poldrack 2006). For example, given considerable neuropsychological

evidence demonstrating that Broca’s Area is involved in linguistic processing (in the form of

speech production), a researcher would engage in reverse inference by concluding that a task

manipulation involved linguistic processing because it activated Broca’s Area. Such inferences

are logically invalid because the mapping of theoretical postulates to observations is usually a

many-to-one enterprise if the putative theories are well specified (i.e., more than one theory will

make the same prediction) and a many-to-many enterprise if they are not (i.e., each theory makes


multiple mutually exclusive predictions) (Bub 2000). Poldrack (2006) has shown that this type of

reasoning would be valid if and only if Broca’s Area were activated exclusively by linguistic processing.

Because such functional exclusivity is usually lacking, reverse inference is often invalid.

However, what if the focus were shifted from the logical validity of the argument to the

degree of belief in the plausibility of the conclusion? Under this condition belief in the reverse

inference would increase as a function of (a) the selectivity (i.e., specificity) of the neural

response (i.e., the ratio of process-specific activation to the overall likelihood of activation in that

area across all tasks) as well as (b) prior belief in the engagement of a specific cognitive process

given the task manipulation of interest (Poldrack 2006). How can this increase be achieved? The

degree of the selectivity of a region is outside of the experimenter’s control. However, there is an

inverse relationship between the size of a region and its selectivity. For example, whereas a large

region of the brain known as the occipital lobe is involved in vision, various regions of cortex

within it are selectively attuned to specific visual functions (e.g., Area V4 to colour,

Area V5 to motion, etc.). This suggests that to the extent possible, researchers should aim to

formulate hypotheses about smaller regions of interest that will be involved in fewer functions.

However, what is under the researcher’s control is determining the extent to which a cognitive

process of interest is engaged given a task manipulation. This can be achieved through proper

task analysis prior to data collection in the scanner. Hence, the combination of a well-designed

experimental manipulation conducted in relation to hypothesized activation in a well-defined

(i.e., small) region of interest (ROI) can increase our degree of belief in the plausibility of a

reverse inference. While not deductively valid, this form of abductive reasoning has proven to be

an essential tool for discovery in science (Polya 1954).
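
To make the arithmetic of this argument concrete, the following sketch works through Poldrack’s (2006) Bayesian framing of reverse inference. The probabilities are invented for illustration; only the logic comes from the argument above, namely that posterior belief in the engagement of a process grows with the selectivity of the neural response and with the prior.

def posterior_process_given_activation(p_act_given_proc, p_act_given_not_proc, prior):
    """P(process | activation) via Bayes' rule; all inputs are hypothetical."""
    numerator = p_act_given_proc * prior
    return numerator / (numerator + p_act_given_not_proc * (1.0 - prior))

# A selective region: it activates far more often when the process is engaged
# than when it is not (selectivity ratio = 0.8 / 0.1 = 8).
print(posterior_process_given_activation(0.8, 0.1, prior=0.5))  # ~0.89

# A non-selective region activated by many tasks (ratio = 0.8 / 0.6 = 1.33):
print(posterior_process_given_activation(0.8, 0.6, prior=0.5))  # ~0.57

# A strong prior that the task engages the process also raises the posterior:
print(posterior_process_given_activation(0.8, 0.6, prior=0.9))  # ~0.92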

Neuroscience and the Logic and Prospect of Explanation


Although neuroscience today provides the primary scientific basis for explaining the

material causes of behaviour, we agree with Killeen (2001) that a complete explanation of a

phenomenon requires understanding all four types of Aristotle’s (1929) (be)causes:

• Efficient causes, capturing the contemporary meaning of cause, represent the triggers

sufficient to generate or prevent an effect against its causal background (e.g., Mackie

1974; Mandel 2005b).

• Final causes are functional explanations that address purposive questions, such as “Why

does it do that?” and “What is it supposed to do?” (Minsky 1967).

• Formal causes are models that specify the transition from efficient causes to final causes,

whose explanatory value is measured in terms of correspondence between attributes of

the model and attributes of the phenomenon that the model is meant to describe.

• Material causes are explanations of the substrates that comprise a phenomenon or give

rise to it, an exclusive focus on which is known as reductionism.

We believe that the most exciting promise of neuroscience for understanding the origins of

judgment and decision making in humans is not in fleshing out explanations of material

causality, but rather in the potential of such findings to test and refine explanations of the formal

and final causes of human behaviour (see also Camerer et al. 2005). In this sense, an important

contribution neuroscience can make involves placing empirical constraints on formal models

(Goel 2005; see Newell 1990; Pylyshyn 1984). This contribution is critical because the mental

computations that are described in formal models must be realizable in the physical substrate of

the brain, and knowledge gained through neuroscience about brain structure and function can in

turn help us discriminate between competing formal models. Next, we turn to a discussion of the

contribution of neuroscience to the resolution of theoretical problems in four areas of judgment


and decision making. Most studies discussed in this chapter involve functional Magnetic

Resonance Imaging (fMRI). For a brief overview of some of the technical features of this

method, refer to Appendix 1.

Decision Making under Risk

We begin our exploration of the neuroscientific literature on judgment and decision

making by focusing on an important topic in behavioural decision making research: Decision

making under risk. Specifically, we will examine the extent to which knowledge related to the

neural processing of risk has contributed to the resolution of theoretical debates related to risky

choice. The literature on the neural processing of risk can be partitioned in relation to three

questions. First, to what extent is risky choice influenced by emotions? Second, to what extent

does making risky choices recruit cognitive and/or emotional processes over and above what is

recruited in mere anticipation of risky outcomes? Third, is risk processed differently when

individuals are faced with the prospect of gains versus losses? Although each of these questions

has been studied by a large number of researchers, here we will focus our discussion on a recent

meta-analysis of the literature on the neural processing of risk (Mohr, Biele, & Heekeren 2010).

In the process we will also touch on specific studies to clarify outstanding issues, as well as

point out shortcomings that remain to be addressed in future neural studies on decision

making under risk.

Risky choice: role of emotion

The literature on the neuroscience of judgment and decision making has been heavily

concerned with the role of emotion in choice under risk and uncertainty (see Bechara, in press;

Sanfey 2007; Sanfey et al. 2006). This focus is perhaps not surprising, given the centrality of this

question to behavioural decision making theorists (e.g., Loewenstein et al. 2001; Slovic et al.


2004). Furthermore, methodologically, investigating this problem at the brain level has been

made possible due to the continued development of increasingly sophisticated cortical maps of

emotion processing (Figure 1). Essentially, researchers have investigated whether (a) any part of

the brain’s extended emotion system is activated more when participants process risk compared

to a baseline condition, or (b) activation in any part of this system is correlated with

experimentally manipulated variations in the level of risk. Mohr et al. (2010) employed the

activation likelihood estimation (ALE) method to conduct a meta-analysis to address this issue.

Unlike meta-analyses in the behavioural sciences that are conducted to estimate true effect sizes

for specific manipulations of interest, ALE is conducted to pinpoint specific brain structures

and/or networks that are reliably activated in relation to specific experimental manipulations.

Following the localization of the relevant structures, interpretations of the functions of those

structures and/or networks are required to elucidate their contribution to the effects of interest. As

noted earlier, the success of this undertaking will depend on (a) the functional selectivity of the

structures and/or networks, and (b) proper task analysis; in this case, the choice of studies to be

included in the meta-analysis.
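
For readers unfamiliar with ALE, the sketch below illustrates its core logic under simplifying assumptions: each reported peak coordinate is modelled as a three-dimensional Gaussian, the per-study maps are combined as a probabilistic union, and voxels reliably reported across studies receive high ALE values. The grid size, smoothing width, and foci are invented; Mohr et al. (2010) describe the actual procedure and its statistical thresholding.

import numpy as np

def modeled_activation(shape, focus, sigma):
    """Model one reported peak as a 3D Gaussian 'modeled activation' map."""
    grid = np.indices(shape)
    sq_dist = sum((g - c) ** 2 for g, c in zip(grid, focus))
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

def ale_map(shape, foci_per_study, sigma=2.0):
    """ALE = 1 - prod(1 - MA_i): probabilistic union of per-study maps."""
    ale = np.zeros(shape)
    for foci in foci_per_study:
        ma = np.zeros(shape)
        for focus in foci:              # within a study, take the voxel-wise max
            ma = np.maximum(ma, modeled_activation(shape, focus, sigma))
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)
    return ale

# Two hypothetical studies reporting nearby peaks:
ale = ale_map((20, 20, 20), [[(10, 10, 10)], [(11, 10, 9)]])
print(ale.max())  # voxels near both reported peaks receive the highest values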

Mohr et al. (2010) relied on the following six criteria to select studies for their meta-

analysis: (1) fMRI studies involving healthy young adult humans, to allow generalizability to the

population of neurologically healthy normal persons; (2) imaging data acquired over the whole

brain, so as not to limit analyses to selected ROIs; (3) availability of peak activation coordinates

from group activation maps; (4) tasks with outcomes that were at least partly uncertain; (5)

information about outcome probabilities was available to the participants; and (6) outcomes of

the task were independent of the behaviour of others, as would be the case for example in two-


person economic exchange games (see Decision Making in Economic Games in this chapter).

These criteria resulted in the selection of thirty fMRI studies for their meta-analysis.

The results demonstrated that risky choice activated a distributed network including key

components of the brain’s emotion circuitry, particularly the bilateral anterior insula, the

thalamus, and the dorsomedial prefrontal cortex (DMPFC) (Figure 1). The insula plays a general

and well-established role in interoception (Craig, 2002), as well as experiences of distress, pain,

anger, and disgust (Adolphs 2002; Ohara et al. 2005). Furthermore, lesions to the insula cause

disturbances in emotional processing. Located above the brainstem, the thalamus is frequently

referred to as the brain’s “relay station.” This is because it receives sensory information from the

body, which it sends to the cerebral cortex; and it receives information from the cerebral cortex,

which it sends to the body. Given its transmission of visceral sensory information, it forms a core

part of the emotion system (Barrett et al. 2007). Finally, DMPFC is involved in the more

cognitive aspects of emotion processing, such as the formation of strategic preferences in the

face of conflict (Venkatraman et al. 2009a, b). In summary, the results of Mohr et al.’s (2010)

meta-analysis showed that risky choice does activate well-established components of the brain’s

emotion circuitry, offering confirmatory evidence to support the role of emotion in judgment and

decision making (Loewenstein et al. 2001; Slovic et al. 2004).

-------------------------------

Insert Figure 1 about here

-------------------------------

Risky choice: action versus anticipation

It is possible to distinguish between risk processing that occurs during or before choice

(decision risk), and risk processing that occurs after or without a choice (anticipation risk). This


is an important distinction because, whereas risk information may be used to guide choices in the

context of decision risk, this will not be the case in the context of anticipation risk (Mohr et al.

2010). Correspondingly, the neural representation of risk may differ between decision risk and

anticipation risk. Note that, in this case, the meta-analysis was not geared toward showing

activation in any specific area in decision or anticipation risk, but simply to show whether a

greater network was activated in the former condition. The decision–anticipation contrast

revealed relatively greater activation in right anterior insula, DMPFC, dorsal lateral prefrontal

cortex (DLPFC), parietal cortex, striatum, and the occipital cortex. The network activated by

decision risk shows much overlap with the distributed network involved more generally in

decision making (Sanfey, 2007). In contrast, the anticipation–decision contrast revealed

relatively greater activation only in left anterior insula and left superior temporal gyrus. The results of

this analysis demonstrated that decision risk recruits cognitive and/or emotional processes over

and above what is recruited in mere anticipation of risky outcomes.

Risky choice: gains versus losses

Interestingly, the existing neural studies on risk have either investigated risky choice in

the context of gains, or risky choice in the context of gains and losses. In other words, no study

thus far has studied risky choice in the context of losses exclusively. Therefore, Mohr et al.’s

(2010) meta-analysis contrasted risky choice in the context of gains with risky choice in the

context of gains and losses, with the proviso that this contrast does not allow for a direct look at

neural processes that underlie risky choice in the context of losses per se. The results

demonstrated that left anterior insula was activated more when losses and gains were possible,

compared to risky choice in the context of gains alone. This activation is consistent with the

well-established role of the insula in the experience of negative psychological states such as


distress (Ohara et al. 2005), as would be expected in the context of potential losses as well as

gains.

Neural bases of risky choice: conclusions

The results of Mohr et al.’s (2010) meta-analysis of neural studies of risky choice

demonstrate that neuroscientific findings can be used to shed light on theoretically important

questions in behavioural decision making. Specifically, the results have shown that (a) risky

choice recruits emotion, (b) making risky choices recruits more cognitive and/or emotional

processes than the mere anticipation of possible outcomes that might arise from a risky choice,

and (c) risk is processed differently when individuals are faced with the prospect of losses and

gains, as opposed to the prospect of gains exclusively. Although the interpretation of the findings

is based on reverse inference, confidence in the functional inferences is bolstered by the

selectivity of the activated structures, as well as the proper choice of studies to be included in the

ALE meta-analysis. Furthermore, given that fMRI results are by definition correlational,

inferences about the direction of the observed effects are made possible by confirmatory

evidence gleaned from neuropsychological studies of patient populations.

Two important caveats remain in extending the findings of Mohr et al.’s (2010) meta-

analysis to all instances of risky choice. First, as noted above, the meta-analysis did not include

any fMRI study that had investigated risk perception in the context of losses alone. This is a

shortcoming that can be easily addressed in future studies. Second, whereas all the studies

included in this meta-analysis involved risky choice in the financial domain, recent evidence

suggests that contextual factors can influence various aspects of the neural systems underlying

choice (Mandel & Vartanian, in press). Specifically, hypothetical monetary decisions activated a

neural system dissociable from that activated by formally identical hypothetical decisions involving lives


(Vartanian, Mandel, & Duncan, in press). Thus, caution must be exercised in extending the

results of this meta-analysis to non-financial domains.

Probability Judgment and Base-rate Neglect

We continue our exploration of the neuroscientific literature on judgment and decision

making by focusing on a central question: Are people who make judgments that deviate from

normative benchmarks aware of the discrepancy? Neuroscience has contributed to this question

in two ways. First, there is evidence demonstrating that people can make advantageous decisions

in the absence of conscious awareness. Specifically, emotions expressed as unconscious

“hunches” can affect decision making (Bechara et al. 1994; 1997; 2000). Furthermore, the

ventral medial prefrontal cortex (VMPFC)—which forms part of the neural system for emotion

(Barrett et al. 2007, Figure 1)—may play a critical role in this process by linking emotions to

cognition. Extending this line of reasoning, it is of interest to ask whether there are brain

structures or neural systems that respond to deviations between norms and behaviour, and

whether sensitivity to such deviations might in turn influence behaviour.

Second, perhaps one of the most intensely studied problems in neuroscience involves

elucidating the neural system that underlies detecting and overriding errors in the service of task

goals (i.e., error monitoring and cognitive control). There is now a great deal of convergent

evidence linking the anterior cingulate cortex (ACC) to error monitoring and the lateral

prefrontal cortex (PFC) to cognitive control (Ridderinkhof et al. 2004; van Veen & Carter 2006).

Extending this line of work, one might expect that awareness of norm violations would involve

the ACC due to its role in error monitoring, whereas overriding such errors in the service of

normative responding would involve the lateral PFC.

A classic example of a normative violation in judgment under uncertainty is base-rate


neglect (Kahneman & Tversky 1973). In some studies of base-rate neglect, subjects are asked to

make relative probability judgments in which stereotype-relevant individuating information cues

a salient but normatively inappropriate response. Subjects are usually first presented with

information about the composition of a sample (e.g., 70 lawyers and 30 engineers), and then

presented with a short personality description of a randomly drawn person from that sample.

Given these base rates, it is probable that a person drawn randomly from the sample will be a

lawyer. However, Kahneman and Tversky (1973) observed that when the individuating

information was more typical or representative of an engineer, most subjects neglected the base-

rate information and judged the target individual as more likely to be an engineer than a lawyer.

There are different schools of thought regarding the nature of this error. Some theorists

have proposed that subjects neglect the base rates because they may be unaware of a discrepancy

between their judgments and the indicative nature of the information provided (Evans 2003). To

them, base-rate neglect is an example of what Kahneman and Tversky (1982) called errors of

comprehension—namely, errors that stem from failing to understand a relevant normative

principle (such as the fact that the probability of random draws from n categories ought to take

into account the base rates of those categories). Other theorists have proposed that subjects may

be aware of the discrepancy, but they are unable to inhibit the intuitive response in favour of the

response indicated by the distributional base-rate information (Epstein 1994; Sloman 1996). To

them, base-rate neglect is an example of what Kahneman and Tversky (1982) called errors of

application—namely, errors that arise when one knows the relevant principle but fails to apply it.

De Neys et al. (2008) conducted an fMRI experiment in an attempt to shed light on this

issue. They based their predictions on a model of conflict detection and resolution according to

which the ACC responds to the detection of conflict, error and expectancy violation, and the


lateral PFC is recruited in response selection by inhibiting or overriding inappropriate responses

(Ridderinkhof et al. 2004; van Veen & Carter 2006). De Neys et al. presented their subjects in

the fMRI scanner with various types of base-rate problems. Before entering the scanner, subjects

were given practice problems for familiarization with the concept of random sampling. Subjects

were instructed to think like statisticians, given that previous research had demonstrated that

doing so would increase the likelihood that they would consider probability information

(Schwarz et al. 1991). We will focus here only on the “incongruent” trials in which there was a

discrepancy between base-rate information and the response cued by the individuating

information. An example of an incongruent trial follows:

(Base rate) 5 engineers and 995 lawyers

(Individuating information) Jack is 45 and has four children. He shows no interest in

political and social issues and is generally conservative. He likes sailing and mathematical

puzzles.

What statement is most likely to be true?

a. Jack is an engineer

b. Jack is a lawyer

We refer to judgments on such trials that were congruent with the base-rate information as

correct and those that were incongruent with the base-rate information as incorrect.
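
To appreciate how strong the stated base rates are, consider a short worked Bayesian calculation. The likelihood ratio assigned to the personality description below is an invented illustrative value, not a quantity estimated by De Neys et al. (2008).

prior_engineer = 5 / 1000
prior_lawyer = 995 / 1000

# Assume the description is ten times more probable for an engineer:
likelihood_ratio = 10.0  # P(description | engineer) / P(description | lawyer)

posterior_odds = likelihood_ratio * (prior_engineer / prior_lawyer)
p_engineer = posterior_odds / (1.0 + posterior_odds)
print(f"P(engineer | description) = {p_engineer:.3f}")  # ~0.048

# Even a strongly engineer-typical description leaves "lawyer" far more
# probable; the likelihood ratio would have to exceed the prior odds against
# (995/5 = 199) before "engineer" became the more likely answer.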

Replicating earlier studies, most subjects (55%) made incorrect responses on incongruent


trials, judging the stereotype-consistent, low base-rate category as more probable. Furthermore,

as expected, correct judgments on incongruent trials required more cognitive processing as

measured by reaction time compared to incorrect judgments (see also De Neys & Glumicic

2008). In line with earlier work by Ridderinkhof et al. (2004) and van Veen and Carter (2006),

De Neys et al. (2008) found that lateral PFC, given its role in inhibiting or overriding

inappropriate responses, was recruited more when subjects judged incongruent trials correctly

than when they did not. A key prediction of De Neys et al. (2008) was that if subjects were

unaware of a conflict between their choices and the probability information, then ACC activation on incorrect

incongruent trials should not be observed. In contrast, if subjects were aware of the conflict but

were unable to inhibit the intuitive response in favour of the response driven by the base-rate

information, then ACC activation should be observed. The results supported the latter prediction,

suggesting that subjects were aware of the conflict but were unable to overcome it. An analysis

of the cognitive processes underlying base-rate neglect also supports this interpretation (De Neys

& Glumicic 2008).

The findings of De Neys et al. (2008) indicate that the source of base-rate neglect in

probability judgment may not be failure to comprehend the relevance of base-rate information,

but rather insufficient involvement of brain systems required to inhibit or override inappropriate

responses, in particular the lateral PFC. Furthermore, the findings offer support for the

recruitment of this system (ACC and lateral PFC) in tasks that require error detection and

subsequent inhibition for successful performance. These findings exemplify how neural data may

be used to shed light on the cognitive bases of judgment processes. First, the researchers

predicted the recruitment of specific structures (ACC and lateral PFC) based on the engagement

of specific cognitive processes. Second, although the ACC and lateral PFC are engaged by


multiple cognitive processes, there is much convergent evidence to suggest that they jointly form

a system that underlies the detection and override of conflict. Third, on the critical conflict trials,

the neural and behavioural data pointed in the same direction, adding another level of

explanation to the causal model.

The neural findings converge with recent behavioural research examining the basis for

error in probability judgment in showing that systematic biases are often attributable to errors of

application. For example, Mandel (2005a) showed that the coherence of probability assessments

of binary complements was greater when the complements were judged consecutively rather than

spaced apart by an unrelated task. If incoherence had stemmed from an error of comprehension,

then the presentation format of test items should have made little difference. The fact that presentation format

did make a difference indicates that people lose track of the relevant normative principle (in

those studies, the additivity property) when salient cues to its relevance are not present. Like

De Neys et al. (2008), these findings indicate the importance of errors of application in judgment

under uncertainty. The triangulation of such sources of evidence could be used to constrain

formal-cause explanations of judgment and decision making.
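
The additivity check at issue is simple to state: judged probabilities of an event and its complement should sum to one. A minimal sketch, with invented judgments, follows.

def additivity_violation(p_event, p_complement):
    """Absolute deviation of P(A) + P(not-A) from 1; zero means coherent."""
    return abs(p_event + p_complement - 1.0)

# Complements judged consecutively (a salient cue to the principle):
print(additivity_violation(0.70, 0.30))  # 0.0: coherent
# Complements spaced apart by an unrelated task:
print(additivity_violation(0.70, 0.45))  # 0.15: incoherent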

Epistemic Judgments and Belief Bias

In taking up our third area of investigation, we move from the realm of judgment under

uncertainty to a case of judgment under conditions of certainty; namely, deductive reasoning.

Neuroimaging and neuropsychological studies have made substantial contributions to our

understanding of the cognitive architecture of reasoning (reviewed in Goel 2007). Two main

conclusions can be derived from these studies. First, there is no single brain module for

reasoning. Rather, the act of reasoning activates a distributed network in the brain that engages

different structures as a function of task characteristics. Second, patterns of activation can be


predicted reliably because this network involves the dynamic configuration of relevant

component subsystems. Overall, the main contribution of neuroscience to understanding

reasoning has been the verification at the neural level of a hierarchical and context-dependent

reasoning anatomy.

Here, we focus on deductive reasoning research that has examined how people make

judgments about what is implied by—that is, what must follow from—a given set of true

statements. For example, given the information that “all mammals are animals” and “all humans

are mammals,” we are logically compelled to draw the conclusion that “all humans are animals.”

One critical feature of such reasoning is that the validity of the syllogistic argument is a function

of its logical form rather than its content. Thus, given the premises “all A are B” and “all C are

A” are regarded as true, we are compelled to draw the correct logical conclusion that “all C are

B,” even though we have no knowledge of the entities A, B, and C. Indeed, we would be

logically bound to state that “all C are B” is true under circumstances in which we knew on the

basis of prior knowledge that at least one of the premises was false, but were instructed to

assume that the premises were true regardless of what our prior knowledge might indicate.

Researchers have long been interested in the effect of such prior knowledge on reasoning,

particularly when there is a conflict between what is known to be true or false in fact and what

would necessarily be true or false if one were to temporarily suspend belief and assume that the

premises for that proposition were true. A robust finding, known as belief bias, is that subjects

perform more accurately on reasoning tasks when the logical conclusion is congruent with their

beliefs compared to when it is incongruent (Byrne et al. 1993; Evans et al., 1983; Wilkins 1928).

In syllogistic reasoning tasks, belief-congruent trials are those with either valid and believable or


invalid and unbelievable conclusions, whereas belief-incongruent trials are those with either

valid and unbelievable or invalid and believable conclusions.

To illustrate this bias, consider the following syllogistic argument (i.e., two premises

followed by a conclusion):

No cigarettes are inexpensive.

Some addictive things are inexpensive.

∴ Some addictive things are not cigarettes.

When asked to decide whether or not the conclusion must follow from the premises, 96 percent

of subjects respond affirmatively to this valid belief-congruent syllogism (Evans et al. 1983).

Now, consider the following argument:

No addictive things are inexpensive.

Some cigarettes are inexpensive.

∴ Some cigarettes are not addictive.

This valid but belief-incongruent syllogism is accepted as valid by only 46 percent of subjects

(Evans et al. 1983).
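
Because both arguments instantiate the same logical form (“No X are Y; some Z are Y; therefore, some Z are not X”), their validity can be verified mechanically, with the content set aside. The brute-force model check below is our own illustration rather than part of Evans et al.’s (1983) procedure; it searches every small set-theoretic model for a counterexample and finds none.

from itertools import product

# A "type" records membership in X, Y, and Z; a model is any set of inhabited
# types, and for monadic statements like these, the 2^8 such models exhaust
# the relevant possibilities.
TYPES = list(product([False, True], repeat=3))

def holds(model):
    no_x_are_y = not any(x and y for x, y, z in model)
    some_z_are_y = any(z and y for x, y, z in model)
    some_z_not_x = any(z and not x for x, y, z in model)
    return not (no_x_are_y and some_z_are_y) or some_z_not_x

# Search all 256 models for one where the premises hold but the conclusion fails:
valid = all(holds({t for t, used in zip(TYPES, bits) if used})
            for bits in product([False, True], repeat=8))
print(valid)  # True: the form is valid whatever X, Y, and Z denote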

This performance deficit has been explained in terms of the inability of subjects to inhibit

their personal beliefs in the service of logical assessments. Given that inhibition requires drawing

on limited cognitive resources, one might expect to see a relationship between individual

differences in inhibitory ability and vulnerability to this bias. This explanation is supported by


studies showing that subjects who score higher in tests of general intellectual ability are less

susceptible to this bias (Stanovich & West 2000), and by direct evidence from dual-task

paradigms demonstrating that taxing working memory leads to higher susceptibility to it (De

Neys 2006). Furthermore, De Neys and Van Gelder (2008) have shown that across the lifespan

there is a curvilinear relationship between age and one’s ability to reason about arguments where logic

and beliefs conflict, such that superior performance is observed when inhibitory ability is at its

peak in young adulthood. Thus, correlational, experimental, and developmental evidence

converge on inhibition as a likely mechanism underlying susceptibility to this bias.

Recently, neuroscience has begun to shed light on the neural bases of the susceptibility of

epistemic judgments to belief. Goel and Dolan (2003) conducted a study to investigate brain activation

in the course of reasoning about arguments with and without believable content. They presented

subjects with syllogisms in the fMRI scanner and asked them to judge whether the conclusions

were valid (i.e., whether they necessarily followed from their premises). As expected, belief bias

was evident: Accuracy was significantly higher for belief-congruent trials compared to belief-

incongruent trials. Two neural results emerged. First, errors on belief-incongruent trials activated

VMPFC (Figure 1). Second, when subjects made the correct judgment on belief-incongruent

trials, activation was observed in right lateral PFC.

According to the somatic marker hypothesis, the VMPFC is the structure critical for

forming associations between cognitions and emotions (Bechara et al., 1994, 1997, 2000). This

hypothesis is supported by much neuropsychological evidence showing that patients with lesions

to VMPFC are unable to use emotional cues in decision making. Goel and Dolan’s (2003)

experiment exemplifies how neuroscientific data may contribute to important theoretical debates.

Specifically, the extent to which beliefs are driven by emotions is one of the oldest questions in


psychology (James, 1890). McDougall (1921) went so far as to argue that beliefs can be viewed

as “derived emotions.” Nevertheless, only recently have investigators begun to study the

influence of emotions on beliefs in the context of deductive reasoning directly (e.g., Goel &

Vartanian, in press). The results by Goel and Dolan (2003) suggest that to understand the

influence of beliefs on reasoning, our models would do well to incorporate emotions.

Furthermore, one would expect a neural system that underlies inhibition

(and cognitive control) to be active when subjects make correct judgments on belief-incongruent

trials. Indeed, a review of the neuroimaging literature has shown that the primary function of the

right inferior PFC is inhibition (Aron et al. 2004). This is consistent with its role within the

context of reasoning (Goel et al. 2000) as well as its association with cognitive control—as was

apparent in De Neys et al.’s study discussed above. The results highlight the role of inhibition in

overcoming the influence of beliefs during reasoning, pointing yet again to inhibitory processes

as an important factor in the expression of normative behaviour. As such, these results shed light

not only on the material causes of belief bias, but also on its formal causes. Although the results

were interpreted in part through association logic (i.e., reverse inference), the interpretations

were consistent with prior behavioural evidence that paved the way for the task analysis and the

predictions that followed.

Decision Making in Economic Games

In our final area of investigation, we shift our focus from judgment in intrapersonal

contexts to decision making in interpersonal domains. A guiding assumption of standard

economic models is that humans act purely based on self-regarding preferences (Camerer & Fehr

2006). Preferences are self-regarding if a person takes other people’s behaviour, or the

outcomes associated with it, into consideration only insofar as they affect his or her own


economic well-being. The major contribution of neuroscience to this research programme has

been to highlight that notions of fairness—and the emotions that accompany them—have a major

impact on choice. Given that games have proven to be a useful arena for studying the descriptive

validity of the self-regarding preference assumption, we turn next to a description of the research

involving some well-known economic exchange games.

Consider the Ultimatum Game (UG) (Guth et al. 1982), which involves two players, the

“proposer” and the “responder.” The proposer is given a sum of money and must decide how to

allocate it between himself or herself and the responder. The split is proposed to the responder,

who can accept or reject the offer. If the responder accepts the offer, the sum of money is divided

as proposed. If the offer is rejected, neither player receives anything. Either way, this game does

not involve negotiation because the game ends when the responder decides. According to

standard economic theory, two types of behaviour are rational: First, the proposer should offer the

minimum sum to the responder because having the most money for oneself is optimal. Second,

the responder should accept any sum offered because having some money trumps having none.

Contrary to these assumptions, however, most offers range between 40% and 50%, and offers

markedly below this range are rejected by about half the responders (Henrich et al. 2001).
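
One influential way to reconcile such rejections with utility maximization is to build fairness into the utility function itself. The sketch below uses a simplified inequity-aversion utility in the spirit of Fehr and Schmidt’s model; the stakes and the envy parameter alpha are illustrative assumptions rather than values from the studies discussed here.

def responder_accepts(offer, total=10.0, alpha=1.0):
    """Accept if the offer's utility beats rejection (where both players get 0)."""
    proposer_keeps = total - offer
    disadvantage = max(proposer_keeps - offer, 0.0)  # how far behind the responder falls
    utility_accept = offer - alpha * disadvantage    # money minus the cost of envy
    return utility_accept > 0.0

for offer in [5.0, 4.0, 3.0, 2.0, 1.0]:
    print(offer, responder_accepts(offer))
# With alpha = 1, offers below about a third of the pie are rejected,
# mirroring the empirical pattern for markedly unfair offers.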

Why is this pattern observed? Sanfey et al. (2003) posited that part of the problem with

the standard economic solution to UG is that it ignores the responder’s emotional reaction to

offers that are perceived as unfair. This suggestion is consistent with evidence from behavioural

studies demonstrating that subjects experience anger in relation to unfair offers or outcomes

(Pillutla & Murnighan 1996). For instance, in one field study, Dhami et al. (2005) found that

prisoners’ perceived unfairness of their trial and sentencing decisions was directly related to

their current anger. In the UG context, perceived unfairness may trump economic considerations,


resulting in seemingly irrational choices in which the responder opts for the dominated option.

Sanfey et al. tested their prediction in an fMRI study. Before entering the scanner, subjects were

introduced to ten people who would be their partners in ten separate UG rounds. Subjects were

told that they would play each partner only once. The findings revealed that subjects accepted all

fair offers (i.e., 50-50 splits), and that the likelihood of offer acceptance was directly related to

degree of fairness.

If a sense of unfairness were accompanied by the experience of negative emotion, one

would expect to observe activation specifically in structures that underlie negative emotion under

such conditions. In fact, the neural findings revealed an interesting interplay between the insula

and dorsal lateral PFC when subjects were confronted with unfair offers. Activation in insula is

associated with the experience of distress and pain (Ohara et al. 2005), as well as the emotions of

anger and disgust (Adolphs 2002) (Figure 1). In turn, activation in dorsal lateral PFC has been

linked to cognitive control (Petrides 2005). Interestingly, unfair offers that were rejected resulted

in relatively higher activation in insula than dorsal lateral PFC, whereas unfair offers that were

accepted resulted in relatively higher activation in dorsal lateral PFC than insula.

Notwithstanding difficulties in comparing the magnitude of the fMRI signal across two

different brain regions, these findings indicate that rejection of unfair offers is driven by an

emotional response to unfairness, whereas acceptance of unfair offers is driven by the exercise of

cognitive control, perhaps attenuating the emotional response in favour of economic gain. The

findings show that decision making does not operate purely based on self-regarding preferences,

but may be the outcome of competing processes—in this case, emotion in response to a sense of

unfairness and cognitive control. People care about the extent to which others’ behaviour

corresponds to a fairness norm and are willing to forego economic gains to uphold such norms.


Moreover, the brain circuitry that underlies responses to unfairness has emotional and cognitive

components, the differential activation of which would seem to offer a preliminary material

account of people’s responses to unfairness.

Another intriguing question concerns how far people would go to enforce fairness norms.

Punishment of norm violations predates the establishment of modern penal institutions, and

knowledge of punishment deters cheating (Fehr & Gächter 2002). In addition, economic models

of choice that incorporate elements of fairness and cooperation predict economic behaviour

better than models based purely on self-interest motives (Rabin 1993). To address the

punishment issue, de Quervain et al. (2004) studied an economic exchange game that allowed a

player whose trust was violated to punish the violator. The researchers posed two questions:

First, they asked whether trust violation would be punished even when such punishment bore a

direct economic cost for the punisher (i.e., altruistic punishment). Second, they asked whether

altruistic punishment would be enjoyable to the punisher. Both questions are important to

consider because, as social animals, people often have to decide whether to compete or cooperate

with others for the acquisition of scarce resources. Which strategy we adopt depends in part on

whom we trust and how we choose to maintain trust-enhancing reciprocity norms. de Quervain

et al.’s experiment addressed whether the motivation to enforce norms through punishment is

strong enough to override the short-term monetary disadvantage in terms of cost incurred to the

punisher.

In this experiment, subjects played an allocation game with seven different counterparts.

At the beginning of the game, the two players, A and B, were both endowed with 10 monetary

units. Subjects were always in the A role and had two options. One option was to keep all 10

monetary units. If this option was selected, the game ended, and A and B each earned 10


monetary units. If A trusted B, the second option was for A to send all 10 monetary units to B, in

which case the experimenter quadrupled the amount transferred from A to B (i.e., 4 × 10

monetary units) such that B would now have 50 monetary units in total. At this point, B had the

choice of keeping all 50 monetary units or sending half to A. If B decided to keep all the money,

A was given 1 minute during which he or she could decide whether to punish B for non-cooperation

with up to 20 monetary units. Subject A was scanned using positron emission tomography (PET)

only during this 1-minute interval leading up to the decision (to reduce exposure to radioactivity)

and only when B opted not to cooperate (i.e., when B kept all the money). Like fMRI, PET is a

technique to measure brain activity indirectly, but in this case through monitoring blood flow

tagged with radioactive isotopes. In the span of that minute, subjects also rated the extent to

which B’s behaviour was unfair, and their desire to punish B.
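
The payoff logic of the game, as described above, can be summarized in a few lines. The punishment arithmetic (each punishment unit costing A one monetary unit while removing two from B) is our assumption for the costly-punishment condition and should be checked against de Quervain et al. (2004).

def play(a_trusts, b_reciprocates, punishment_units=0):
    a, b = 10, 10                     # both players start with 10 monetary units
    if not a_trusts:
        return a, b                   # game ends at (10, 10)
    a -= 10                           # A sends everything...
    b += 4 * 10                       # ...and the experimenter quadruples it
    if b_reciprocates:
        a += 25                       # B returns half of the 50
        b -= 25
    elif punishment_units:            # A may punish non-cooperation
        a -= punishment_units         # assumed cost to the punisher
        b -= 2 * punishment_units     # assumed impact on B
    return a, b

print(play(True, True))               # (25, 25): mutual cooperation pays
print(play(True, False))              # (0, 50): trust violated, no punishment
print(play(True, False, 20))          # (-20, 10): altruistic punishment at a cost to A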

As can be seen, this game is designed such that if A trusts B, and B is trustworthy and

reciprocates A’s cooperative behaviour, both players can earn a much larger monetary sum. de

Quervain et al. (2004) hypothesized that if A trusts B and offers 10 monetary units, but B does

not reciprocate, A will interpret this as a norm violation, consider the behaviour unfair, and as a

result have a desire to punish B. For our purposes here, we focus on cases of altruistic

punishment in which punishing B also costs A monetary units. The findings showed that when B

opted to keep all the money, the vast majority of subjects (85 percent) opted for altruistic

punishment.

The neural findings from this study indicated that the caudate nucleus and the thalamus

were recruited when subjects opted to punish B (Figure 1). The caudate nucleus has been linked

strongly to goal-directed action to increase reward (Delgado et al. 2004; O’Doherty et al. 2004),

as has the thalamus to processing reward-related information in monetary contexts (Delgado et


al. 2003). This suggests that activation in the caudate nucleus and the thalamus reflects important

components of the neural bases of A’s desire to punish B, as indicated by self-report data

collected in the scanner. In fact, the association between reward-related activity in the brain and

punishment may have been amplified by the explicit instruction to self-report on “desire to

punish” in the period leading up to the decision. In addition, rate of punishment was correlated

positively with rate of blood flow in the caudate nucleus. The findings show not only that people

are prone to punishing violations of fairness norms, but also that anticipating the punishment of

norm violators in the period leading up to behaviour activates reward-processing centres of the

brain.

The combined results of Sanfey et al. (2003) and de Quervain et al. (2004) suggest that in

the context of economic exchange, people are sensitive to notions of fairness above and beyond

mere economic gain. Furthermore, people pay close attention to others’ behaviour (see Camerer

& Fehr 2006), and are willing to enforce reciprocity norms even when doing so involves

accruing a monetary cost. The neural data suggest that responding to a sense of unfairness is

emotionally driven, and that punishing a norm violator is enjoyable at least partly because of the

anticipation of its hedonic reward. In other words, revenge is sweet because we savour it.

Of course, both studies share the limitation that each subject interacts with another agent

only once. This type of interaction arguably lacks external validity since everyday interactions

are often repeated in nature and governed by relational norms (Mandel 2006; Mills & Clark

1982). Norm-based decision making may thus have its origins in the contribution it makes to

reputation building and the development of an intention to trust. Whereas normative

considerations may promote reciprocity as a fair strategy among agents who do not have an

enduring relationship (e.g., mere acquaintances), recent research suggests that generosity may be


an even more important normative factor in personal relationships (e.g., friendships) that extend

over time (Mandel 2006; Van Lange et al. 2002).

King-Casas et al. (2005) conducted an fMRI experiment to examine normative

considerations in the context of a repeated (i.e., 10-round) economic exchange game. In this

game the “investor” was provided with $20 and decided how much of it to offer to the “trustee.”

After this decision the investment tripled, and the trustee decided how much of the appreciated

amount to repay to the investor. King-Casas et al. defined reciprocity as “a fractional change in

money sent across rounds by one player in response to fractional change in money sent by their

partner” (pages 78-79). They further distinguished between benevolent reciprocity—cases in

which investors are being generous (i.e., sending more money in response to a defection by the

trustee)—and malevolent reciprocity—cases in which investors are being selfish (i.e., sending

less money in response to the trustee’s generosity). Analysis of the behaviour of investor-trustee

pairs identified reciprocity to be the strongest predictor of subsequent increases or decreases in

trust. This is analogous to saying that subjects followed a tit-for-tat strategy throughout the

rounds. The neural findings revealed that the caudate nucleus was activated more in benevolent

(than malevolent) reciprocity, and that the time course of its activation was phase advanced by

14 seconds as player reputations developed. This phase advancement in the caudate may

represent the neural signature of the “intention to trust” as players built mental representations of

their counterparts’ reputation in the course of successive interactions, which in turn predicted

behaviour. However, this inference remains to be tested directly.
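
As a concrete reading of the quoted definition, reciprocity can be operationalized as the difference between a player’s fractional change in the amount sent and the partner’s preceding fractional change, with positive deviations corresponding to benevolent reciprocity and negative ones to malevolent reciprocity. The sketch below, with invented sample amounts, is one such operationalization and simplifies King-Casas et al.’s (2005) actual measure.

def fractional_change(previous, current):
    return (current - previous) / previous

def reciprocity(partner_prev, partner_curr, own_prev, own_curr):
    """Positive: more generous than tit-for-tat; zero: exact matching."""
    return (fractional_change(own_prev, own_curr)
            - fractional_change(partner_prev, partner_curr))

# Benevolent reciprocity: the investor raises the investment (15 -> 18) even
# though the trustee's repayment fell (20 -> 10):
print(reciprocity(20, 10, 15, 18))  # 0.7: generosity in response to defection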

The results of the three experiments discussed in this section indicate that human

decisions are not purely self-regarding (Camerer & Fehr 2006). Humans appear to be acutely

responsive to the extent to which the behaviour of others deviates from various social norms, and


they use such deviations to compute mental representations of others (e.g., trustworthy, unfair,

etc.) that in turn influence choice behaviour. Furthermore, behaviour appears to be influenced by

the interaction of several neural systems that drive behaviour under specific conditions (see

Sanfey 2007; Sanfey et al. 2006). A clear example of this phenomenon is the reciprocal

modulation of insula and dorsal lateral PFC that drives acceptance or rejection of unfair offers in

UG (Sanfey et al. 2003). Not only do these results contribute to a better understanding of the

material causes of judgments and decisions in the context of economic exchange games, they

also refine our understanding of the formal and final causes of behaviour. For example, all three

studies suggest that emotion must be given a prominent role in models of behaviour in the

context of economic exchange games. In addition, the goal of adhering to trust-enhancing

relational norms appears to be a strong predictor of behaviour, thereby shedding light on our

understanding of the final causes or functional bases of behavioural choice in economic

exchange contexts.

General conclusion

In this chapter we discussed neuroscientific studies in four research areas to show that

this new line of evidence is transforming our understanding of the causal origins of judgment and

decision making. Our discussion of the contribution of neuroscience to understanding the causal

origins of judgment and decision making can be organized around four themes. The first consists

of the confirmation of factors that influence decision making under risk, specifically emotions,

the nature of prospects (losses and gains versus gains alone), and task conditions (decision versus

anticipation). Second, for probability judgment, the neural findings converge with recent

behavioural research to show that systematic biases are often attributable to errors of application

rather than errors of comprehension (see Mandel 2005a). Third, for epistemic judgment, the


neural findings have confirmed the role of inhibition in belief bias, and highlighted the

involvement of emotions in belief contemplation. This is an important contribution to the

historically important problem regarding the interaction between emotions and beliefs (James

1890; McDougall 1921). Fourth, neural findings have highlighted that notions of fairness have a

major impact on choice in economic exchange. These results have contributed to the revision of

the classical assumption of standard economic models that viewed humans as purely self-

regarding actors (Camerer & Fehr 2006). In addition, they have contributed to an emerging

consensus that observed behaviour may be the result of multiple and sometimes competing

cognitive and emotional processes at the brain level (Sanfey 2007; Sanfey et al. 2006). We

believe that these and other neuroscientific findings have begun to add value to our

understanding of the material, final, and formal causes of judgment and decision making by

offering new insights and perspectives on old problems.


References

Adolphs, R. (2002). Neural systems for recognizing emotion. Current Opinion in Neurobiology,

12, 169-177.

Aristotle (1929). The physics (Vol. I; P. H. Wicksteed & F. M. Cornford, Trans.). London:

Heinemann.

Aron, A. R., Robbins, T. W., & Poldrack, R. A. (2004). Inhibition and the right inferior frontal

cortex. Trends in Cognitive Sciences, 8, 170-177.

Barrett, L. F., Mesquita, B., Ochsner, K. N., & Gross, J. J. (2007). The experience of emotion.

Annual Review of Psychology, 58, 373-403.

Bechara, A. (in press). Human emotions in decision making: Are they useful or disruptive? In O.

Vartanian and D. R. Mandel (Eds.), Neuroscience of decision making. New York:

Psychology Press.

Bechara, A., Damasio, H., & Damasio, A. R. (2000). Emotion, decision making, and the

orbitofrontal cortex. Cerebral Cortex, 10, 295-307.

Bechara, A., Damasio, A. R., Damasio, H., & Anderson, S. W. (1994). Insensitivity to future

consequences following damage to human prefrontal cortex. Cognition, 50, 7-15.

Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1997). Deciding advantageously

without knowing the advantageous strategy. Science, 275, 1293-1295.

Bub, D. N. (2000). Methodological issues confronting PET and fMRI studies of cognitive

function. Cognitive Neuropsychology, 17, 467-484.

Byrne, R. M. J., Evans, J. St. B. T., & Newstead, S. E. (1993). Human reasoning: The psychology

of deduction. New York: Psychology Press.


Camerer, C., & Fehr, E. (2006). When does “economic man” dominate social behavior? Science,

311, 47-52.

Camerer, C., Loewenstein, G., & Prelec, D. (2005). Neuroeconomics: How neuroscience can

inform economics. Journal of Economic Literature, 43, 9-64.

De Neys, W. (2006). Dual processing in reasoning: Two systems but one reasoner.

Psychological Science, 17, 428-433.

De Neys, W., & Glumicic, T. (2008). Conflict monitoring in dual process theories of thinking.

Cognition, 106, 1245-1299.

De Neys, W., & Van Gelder, E. (2008). Logic and belief across the lifespan: The rise and fall of

belief inhibition during syllogistic reasoning. Developmental Science, 12, 123-130.

De Neys, W., Vartanian, O., & Goel, V. (2008). Smarter than we think: When our brains detect

that we are biased. Psychological Science, 19, 483-489.

de Quervain, D. J-F., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck, A., &

Fehr, E. (2004). The neural basis of altruistic punishment. Science, 305, 1254-1258.

Delgado, M. R., Locke, H. M., Stenger, V. A., & Fiez, J. A. (2003). Dorsal striatum responses to

reward and punishment: effects of valence and magnitude manipulations. Cognitive,

Behavioral, and Affective Neuroscience, 3, 27-38.

Delgado, M. R., Stenger, V. A., & Fiez, J. A. (2004). Motivation-dependent responses in the

human caudate nucleus. Cerebral Cortex, 14, 1022-1030.

Desco, M., Hernandez, J. A., Santos, A., & Brammer, M. (2001). Multiresolution analysis in

fMRI: Sensitivity and specificity in the detection of brain activation. Human Brain

Mapping, 14, 16-27.


Dhami, M. K., Mandel, D. R., & Souza, K. (2005). Escape from reality: Prisoners' counterfactual

thinking about crime, justice and punishment. In D. R. Mandel, D. J. Hilton, & P.

Catellani (Eds.), The psychology of counterfactual thinking (165-182). New York:

Routledge.

Epstein, S. (1994). Integration of the cognitive and psychodynamic unconscious. American

Psychologist, 49, 709-724.

Evans, J. St. B. T. (2003). In two minds: Dual process accounts of reasoning. Trends in

Cognitive Sciences, 7, 454-459.

Evans, J. St. B. T., Barston, J., & Pollard, P. (1983). On the conflict between logic and belief in

syllogistic reasoning. Memory and Cognition, 11, 295-306.

Fehr, E. & Gächter, S. (2002). Altruistic punishment in humans. Nature, 415, 137-140.

Goel, V. (2005). Can there be a cognitive neuroscience of central cognitive systems? In D.

Johnson, & C. Erneling (Eds.), Mind as a scientific object: Between brain & culture

(265-282). Oxford: Oxford University Press.

Goel, V. (2007). The anatomy of deductive reasoning. Trends in Cognitive Sciences, 11, 435-

441.

Goel, V., Buchel, C., Frith, C., & Dolan, R. (2000). Dissociation of mechanisms underlying

syllogistic reasoning. NeuroImage, 12, 504-514.

Goel, V., & Dolan, R. (2003). Explaining modulation of reasoning by belief. Cognition, 87, B11-

B22.

Goel, V., & Vartanian, O. (in press). Negative emotions can attenuate the influence of beliefs on

logical reasoning. Cognition and Emotion.

Güth, W., Schmittberger, R., & Schwarze, B. (1982). An experimental analysis of ultimatum bargaining. Journal of Economic Behavior and Organization, 3, 367-388.

Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., & McElreath, R. (2001). In search of Homo economicus: Behavioral experiments in 15 small-scale societies. The American Economic Review, 91, 73-78.

James, W. (1890). The principles of psychology. New York: Dover Publications.

Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237-251.

Kahneman, D., & Tversky, A. (1982). On the study of statistical intuitions. Cognition, 11, 123-141.

Killeen, P. R. (2001). The four causes of behavior. Current Directions in Psychological Science, 10, 136-140.

Loewenstein, G. F., Weber, E. U., Hsee, C. K., & Welch, N. (2001). Risk as feelings. Psychological Bulletin, 127, 267-286.

Logothetis, N. K. (2008). What we can do and what we cannot do with fMRI. Nature, 453, 869-878.

Mackie, J. L. (1974). The cement of the universe: A study of causation. Oxford, England: Oxford University Press.

Mandel, D. R. (2005a). Are risk assessments of a terrorist attack coherent? Journal of Experimental Psychology: Applied, 11, 277-288.

Mandel, D. R. (2005b). Counterfactual and causal explanation: From early theoretical views to new frontiers. In D. R. Mandel, D. J. Hilton, & P. Catellani (Eds.), The psychology of counterfactual thinking (11-23). New York: Routledge.

Mandel, D. R. (2006). Economic transactions among friends: Asymmetric generosity but not agreement in buyers' and sellers' offers. Journal of Conflict Resolution, 50, 584-606.

Mandel, D. R., & Vartanian, O. (in press). Frames, brains, and content domains: Neural and behavioral effects of descriptive content on preferential choice. In O. Vartanian & D. R. Mandel (Eds.), Neuroscience of decision making. New York: Psychology Press.

McDougall, W. (1921). Belief as a derived emotion. Psychological Review, 28, 315-327.

Mills, J., & Clark, M. S. (1982). Communal and exchange relationships. In L. Wheeler (Ed.), Review of personality and social psychology (121-144). Beverly Hills, CA: Sage.

Minsky, M. (1967). Computation: Finite and infinite machines. Englewood Cliffs, NJ: Prentice-Hall.

Mohr, P. N. C., Biele, G., & Heekeren, H. R. (2010). Neural processing of risk. Journal of Neuroscience, 30, 6613-6619.

Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.

O’Doherty, J., Dayan, P., Schultz, J., Deichmann, R., Friston, K., & Dolan, R. J. (2004). Dissociable roles of ventral and dorsal striatum in instrumental conditioning. Science, 304, 452-454.

Ohara, P. T., Vit, J. P., & Jasmin, L. (2005). Cortical modulation of pain. Cellular and Molecular Life Sciences, 62, 44-52.

Petrides, M. (2005). Lateral prefrontal cortex: Architectonic and functional organization. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 360, 781-795.

Pillutla, M. M., & Murnighan, J. K. (1996). Unfairness, anger, and spite: Emotional rejections of ultimatum offers. Organizational Behavior and Human Decision Processes, 68, 208-224.

Poldrack, R. A. (2006). Can cognitive processes be inferred from neuroimaging data? Trends in Cognitive Sciences, 10, 59-63.

Polya, G. (1954). Mathematics and plausible reasoning. Princeton, NJ: Princeton University Press.

Pylyshyn, Z. W. (1984). Computation and cognition: Toward a foundation for cognitive science. Cambridge, MA: MIT Press.

Rabin, M. (1993). Incorporating fairness into game theory and economics. The American Economic Review, 83, 1281-1302.

Ridderinkhof, K. R., Ullsperger, M., Crone, E. A., & Nieuwenhuis, S. (2004). The role of the medial frontal cortex in cognitive control. Science, 306, 443-447.

Sanfey, A. G. (2007). Decision neuroscience: New directions in studies of judgment and decision making. Current Directions in Psychological Science, 16, 151-155.

Sanfey, A. G., Loewenstein, G., McClure, S. M., & Cohen, J. D. (2006). Neuroeconomics: Cross-currents in research on decision-making. Trends in Cognitive Sciences, 10, 108-116.

Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The neural basis of economic decision-making in the Ultimatum Game. Science, 300, 1755-1758.

Schwarz, N., Strack, F., Hilton, D., & Naderer, G. (1991). Base-rates, representativeness, and the logic of conversation: The contextual relevance of “irrelevant” information. Social Cognition, 9, 67-84.

Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119, 3-22.

Slovic, P., Finucane, M. L., Peters, E., & MacGregor, D. G. (2004). Risk as analysis and risk as feelings: Some thoughts about affect, reason, risk, and rationality. Risk Analysis, 24, 311-322.

Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate. Behavioral and Brain Sciences, 23, 645-726.

Van Lange, P. A. M., Ouwerkerk, J. W., & Tazelaar, M. J. A. (2002). How to overcome the detrimental effects of noise in social interaction: The benefits of generosity. Journal of Personality and Social Psychology, 82, 768-780.

van Veen, V., & Carter, C. S. (2006). Conflict and cognitive control in the brain. Current Directions in Psychological Science, 15, 237-240.

Vartanian, O., Mandel, D. R., & Duncan, M. (in press). Money or life: Behavioral and neural context effects on choice under uncertainty. Journal of Neuroscience, Psychology, and Economics.

Venkatraman, V., Rosati, A. G., Taren, A. A., & Huettel, S. A. (2009a). Resolving response, decision, and strategic control: Evidence for a functional topography in dorsomedial prefrontal cortex. Journal of Neuroscience, 29, 13158-13164.

Venkatraman, V., Payne, J. W., Bettman, J. R., Luce, M. F., & Huettel, S. A. (2009b). Separate neural mechanisms underlie choices and strategic preferences in risky decision making. Neuron, 62, 593-602.

Wilkins, M. C. (1928). The effect of changed material on the ability to do formal syllogistic reasoning. Archives of Psychology, 16, 5-83.

Figure captions

Figure 1. Key brain areas in the neural reference space for mental representation of emotion.

Note. The ventral system for core affect includes two closely connected circuits that are anchored in the orbitofrontal cortex (the entire ventral surface of the front part of the brain lying behind the orbital bone above the eye; Figure 1c). The more sensory circuit involves the lateral sector of the orbitofrontal cortex (OFC) and includes the lateral portions of Brodmann Area (BA) 11 and 13 and BA 47/12 (a, c, purple); it is closely connected to the anterior insula (d, yellow) and the basolateral (BL) complex in the amygdala (d, rose, ventral aspect). The visceromotor circuit includes the ventral portion of the ventromedial prefrontal cortex (VMPFC), which lies in the medial sector of the OFC (a, b, c, blue) and includes medial BA 11 and 13, ventral portions of BA 10, and BA 14, where the medial and lateral aspects of the OFC connect; the VMPFC is closely connected to the amygdala (including the central nucleus; d, rose, dorsal aspect) and the subgenual parts of the anterior cingulate cortex (ACC) involving the anterior aspects of BA 24, 25, and 32 on the medial wall of the brain (b, copper and tan). The dorsal system is associated with mental state attributions and includes the dorsal aspect of the VMPFC corresponding to the frontal pole in BA 10 (b, maroon), the anterior ACC (peach), and the dorsomedial prefrontal cortex (DMPFC) corresponding to the medial aspects of BA 8, 9, and 10 (a, b, green). The ventrolateral prefrontal cortex (VLPFC) is shown in red (a). Also shown for reference are the thalamus (b, light pink), the ventral striatum (d, green), and the middle frontal gyrus in the dorsolateral prefrontal cortex (a, orange). Reprinted, with permission, from the Annual Review of Psychology, Volume 58 ©2007 by Annual Reviews, www.annualreviews.org.

Figure 1

Appendix 1

fMRI: A few basic facts

While a detailed discussion of this technology is beyond the scope of this chapter, we present a condensed description of its key features here. First, functional magnetic resonance imaging (fMRI) is not a direct measure of brain activity. Rather, it measures the haemodynamic response (i.e., the change in blood flow) associated with neural activity in the brain. When neurons are active, they increase their consumption of oxygen. The local response to this increased oxygen consumption is increased blood flow to the region, accompanied by changes in local blood volume and oxygenation. Because oxygen consumption changes the local concentration of oxyhemoglobin and deoxyhemoglobin molecules, most current fMRI studies quantify the haemodynamic response via the blood-oxygen-level-dependent (BOLD) signal, which reflects those local concentrations. The BOLD response can thus be used as a proxy measure for brain activity. Its temporal dynamics are such that the BOLD response lags neuronal activity by a few seconds (the lag can vary as a function of the physical and biological characteristics of the region), and this lag is taken into consideration during the analysis of fMRI data.
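
To make the lag concrete, the following minimal sketch (ours, not drawn from any particular fMRI package; the double-gamma parameters and timing values are illustrative assumptions) shows how analyses typically model the delayed BOLD signal by convolving the stimulus time course with a canonical haemodynamic response function:

import numpy as np
from scipy.stats import gamma

TR = 2.0                           # repetition time in seconds (assumed)
t = np.arange(0, 30, TR)           # model the HRF over a 30-second window

# Canonical double-gamma HRF: an early peak (~5 s) minus a small undershoot.
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)
hrf /= hrf.sum()                   # normalize so convolution preserves scale

stimulus = np.zeros(60)            # a 60-scan run
stimulus[10:20] = 1.0              # task block "on" during scans 10-19

# Predicted BOLD signal: a lagged, smoothed version of the stimulus.
predicted_bold = np.convolve(stimulus, hrf)[:len(stimulus)]

# The prediction rises and peaks well after stimulus onset, which is why
# analyses regress the data on the convolved rather than the raw time course.
print("onset scan: 10 | predicted peak scan:", int(predicted_bold.argmax()))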

Second, while electrophysiological recordings in monkeys enable researchers to record from individual neurons, fMRI measures neuronal mass activity, which makes inferences about specific processes driven by specific classes of neurons problematic. Consider that in humans there are about 90,000–100,000 neurons under 1 mm² of cortical surface (Logothetis 2008). When reconstructing two-dimensional brain scans into three-dimensional brain volumes, researchers partition the volume of the brain into roughly 100,000 three-dimensional pixels of equal size, known as voxels. It is then possible to determine statistically the degree of correlation between activation in any given voxel and the behaviour of interest. Thus, while the voxel constitutes the smallest unit of analysis for fMRI data, each voxel in turn houses a large number of potentially heterogeneous types of neurons. The observed activation maps are therefore necessarily maps of the mass action of groups of neurons. This matters because, to the extent that neurons tied to different processes are housed within the same voxel, the activation in any given voxel will be an average of different types of input. Returning to Poldrack (2006), this makes it all the more critical that researchers test hypotheses focused on specific and anatomically well-defined regions of interest (ROIs), defined as structures of a priori theoretical interest in brain mapping.
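
The following toy sketch (simulated data only; the array sizes, signal amplitude, and ROI location are arbitrary assumptions) illustrates the voxel-wise logic just described: correlate each voxel's aggregate time series with a task regressor, then restrict inference to an a priori ROI rather than searching the whole volume:

import numpy as np

rng = np.random.default_rng(0)
n_scans = 120
data = rng.normal(size=(20, 20, 20, n_scans))  # toy 4-D volume: x, y, z, time

task = np.tile([1.0] * 10 + [0.0] * 10, 6)     # alternating 20-scan blocks
data[8:12, 8:12, 8:12, :] += task              # inject signal into a small cube

# Correlate every voxel's time series with the task regressor at once; each
# voxel's value reflects the mass action of many neurons, as noted above.
zd = (data - data.mean(-1, keepdims=True)) / data.std(-1, keepdims=True)
zt = (task - task.mean()) / task.std()
r = (zd * zt).sum(-1) / n_scans                # Pearson r per voxel

# Test within an anatomically defined region of interest.
roi = np.zeros((20, 20, 20), dtype=bool)
roi[8:12, 8:12, 8:12] = True
print("mean r inside ROI: %.2f | outside: %.2f" % (r[roi].mean(), r[~roi].mean()))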

Third, when a subtractive contrast indicates that area Z was activated more in condition A than in condition B, the contrast reveals a relative rather than an absolute difference in activation. In other words, the difference could be due to deactivation or inhibition of activity in Z under condition B, activation or excitation of activity in Z under condition A, or some combination of both. This is why fMRI hypotheses are typically stated in terms of relative differences. The ambiguity poses no problem when testing a hypothesis requires only a relative difference, but it does become an issue if the presence or absence of a process (inferred from the “involvement” of a region) must be ascertained. Furthermore, the high level of noise intrinsic to data acquisition means that the difference between activated and non-activated areas represents approximately a 5 percent difference in signal (Desco et al. 2001). This low signal-to-noise ratio dictates that researchers conduct proper power analyses and select hypotheses that can actually be tested.
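
A small simulation (illustrative assumptions only: the baseline, effect size, noise level, and sample size are invented for the demonstration) makes the point about relativity concrete. The same positive A - B contrast arises whether condition A excites the region or condition B inhibits it, so the subtraction alone cannot distinguish the two:

import numpy as np

rng = np.random.default_rng(1)
baseline, effect, n = 100.0, 5.0, 200          # a ~5 percent signal change

def contrast(shift_a, shift_b):
    """Mean A - B contrast for one simulated region under noisy sampling."""
    a = baseline + shift_a + rng.normal(scale=10.0, size=n)
    b = baseline + shift_b + rng.normal(scale=10.0, size=n)
    return a.mean() - b.mean()

# Scenario 1: A excites the region; B sits at baseline.
# Scenario 2: A sits at baseline; B inhibits the region.
print("A excites:  A - B = %.2f" % contrast(effect, 0.0))
print("B inhibits: A - B = %.2f" % contrast(0.0, -effect))
# Both contrasts come out near +5: the subtraction reveals only a relative
# difference, not which condition drove it.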