Psychological Assessment

Bringing the Brain Into Personality Assessment: Is There a Place for Event-Related Potentials?

Takakuni Suzuki, Kaylin E. Hill, Belel Ait Oumeziane, Dan Foti, and Douglas B. Samuel

Online First Publication, June 21, 2018. http://dx.doi.org/10.1037/pas0000611

Citation: Suzuki, T., Hill, K. E., Ait Oumeziane, B., Foti, D., & Samuel, D. B. (2018, June 21). Bringing the brain into personality assessment: Is there a place for event-related potentials? Psychological Assessment. Advance online publication. http://dx.doi.org/10.1037/pas0000611


Bringing the Brain Into Personality Assessment: Is There a Place for Event-Related Potentials?

Takakuni Suzuki, Kaylin E. Hill, Belel Ait Oumeziane, Dan Foti, and Douglas B. Samuel
Purdue University

Advances in technology have provided opportunities to assess physiological correlates and further our understanding of a number of constructs, including personality traits. Event-related potentials (ERPs), scalp-recorded measures of brain activity with millisecond temporal resolution, show properties that make them potential candidates for integrating neurophysiological methods into personality research. Several commonly used ERPs have trait-like properties, including test–retest stability approaching .8 over two weeks. Additionally, ERP methods are relatively inexpensive and tolerable compared to other neurophysiological methods (e.g., functional magnetic resonance imaging [fMRI]), making it easier to obtain the sample sizes required for individual differences research. Finally, the tasks that elicit ERPs are flexible enough to allow researchers to customize them to the psychological constructs of interest. These factors suggest that ERPs could potentially be useful in the study of personality and individual differences. A baseline approach to this line of inquiry is to examine the properties of ERPs as neurophysiological individual differences markers and probe their links to personality traits as assessed by self-report questionnaires. This article does this for three well-studied ERPs. Techniques commonly used in personality assessment research (but rarely in ERP research) were applied to these candidate ERPs to examine their psychometric properties and personality correlates. Overall, although ERPs show promising properties as neurophysiological indicators of individual differences, they were only marginally related to existing personality traits. Further research clarifying the ERPs' measurement properties and potential links with known personality processes is needed. Finally, we list some strategies to further integrate these two areas of research.

Public Significance Statement
This article found that event-related potentials (ERPs; brain activity in response to specific stimuli) show promising properties as indicators of individual differences. This finding provides optimism for the utility of such neurophysiological indicators to enrich the understanding of personality. However, ERPs' relationships with self-reported personality traits were generally weak, and this article provides some strategies to further integrate these two areas of research.

Keywords: personality, five-factor model, neurophysiology, electroencephalogram, event-related potentials

Supplemental materials: http://dx.doi.org/10.1037/pas0000611.supp

Although personality assessment has a rich history of integrating diverse methods, the majority of research over the past few decades has been dominated by the self-report method. This is sensible, as personality is efficiently and effectively assessed through self-report questionnaires (e.g., NEO Personality Inventory; McCrae & Costa, 2010). More recently, personality assessment has increasingly incorporated a variety of methodologies that have become more accessible because of technological advances. These include, but are not limited to, ecological momentary assessment methods that assess behaviors and affect with fine-grained temporal resolution (e.g., Hepp, Carpenter, Lane, & Trull, 2016), and wearable recording and coding methodology that allows for assessment of moment-to-moment behaviors (e.g., Mehl, Pennebaker, Crow, Dabbs, & Price, 2001). Such integration of technological advances and methodologies into personality research has enriched the nomological network and provided a dynamic understanding of personality (Cronbach & Meehl, 1955).

Links between neurophysiology and personality have been proposed by several personality models (e.g., Gray, 1981). More recently, the Research Domain Criteria (Cuthbert & Insel, 2013) proposed by the National Institute of Mental Health strongly emphasizes the integration of neurophysiological methods into psychological research to yield insights into the neural mechanisms of human behavior. Although biological assays that identify genetic and hormonal links with personality traits have gained some traction (e.g., van den Berg et al., 2016), it has proven challenging to integrate neurophysiological information into the nomological network of personality. The present article makes the case that event-related potentials (ERPs), time-locked electroencephalogram (EEG) recordings of brain activity in response to specific stimuli (Luck, 2014), have properties that make them potential candidates for enriching personality research. Of interest to ERP researchers, on the other hand, personality traits are often assessed by questionnaires that tend to have high face validity; this property of personality measures may shed light on the individual differences processes that ERPs are involved in or reflect.

Takakuni Suzuki, Kaylin E. Hill, Belel Ait Oumeziane, Dan Foti, and Douglas B. Samuel, Department of Psychological Sciences, Purdue University.

Correspondence concerning this article should be addressed to Takakuni Suzuki, Department of Psychological Sciences, Purdue University, 703 Third Street, West Lafayette, IN 47907. E-mail: [email protected]



The Potential of Neurophysiology

Several factors make ERPs a promising avenue for enriching the understanding of the neurophysiological aspects of personality constructs. First, some ERPs are known to have trait-like properties, including strong test–retest reliability. For example, the error-related negativity (ERN; described in more detail below) has two-week test–retest reliability as high as .8 (Olvet & Hajcak, 2009). This is comparable with general and pathological personality traits (Gnambs, 2014; Suzuki, Griffin, & Samuel, 2016). Second, the methods that elicit ERPs are relatively inexpensive compared with other methods that measure brain activity; for example, the costs are generally two to three orders of magnitude less than magnetic resonance imaging (MRI). This makes the ERP method much more amenable to securing the larger samples useful for individual differences work. Third, tasks used to elicit ERPs are modifiable. This flexibility provides an opportunity for researchers to tailor existing tasks to make them more relevant to the psychological constructs of interest. For example, researchers may change the typical stimuli of a task (e.g., arrows or letters) to stimuli that are theoretically more relevant to personality traits (e.g., facial stimuli for agreeableness-related traits; Munro et al., 2007). Additionally, because they measure brain waves as they occur, ERPs also provide excellent temporal resolution (Luck, 2014). These properties make ERPs promising candidates for enriching the nomological network of personality traits and processes.

Three well-studied ERPs stand out as particularly compelling candidates for examining the relevance of ERPs to personality assessment: the ERN, P3 (or P300), and reward-related positivity (RewP; also referred to as feedback negativity [FN] or feedback-related negativity [FRN]; Proudfit, 2015). The ERN is an ERP that occurs over the frontocentral region of the scalp within 100 ms of the commission of an error (Falkenstein, Hohnsbein, Hoormann, & Blanke, 1991; Gehring, Goss, Coles, Meyer, & Donchin, 1993). It is thought to be a neural response reflecting an error-monitoring circuit originating in the anterior cingulate cortex (Holroyd & Coles, 2002). The RewP and P3 are ERPs that occur at frontocentral sites approximately 250–300 ms and at parietal sites approximately 300–400 ms after feedback, respectively (Sutton, Braren, Zubin, & John, 1965). These are thought to capture different stages of feedback processing: The RewP likely reflects the initial, binary evaluation of the outcome as good versus bad, whereas the P3 is more sensitive to outcome probability and magnitude (Donchin & Coles, 1988; Foti, Weinberg, Dien, & Hajcak, 2011).

These ERPs have also been related to several personality traits in past studies. For example, individuals higher on neuroticism-related traits tend to have larger ERN amplitudes (Foti, Kotov, & Hajcak, 2013), smaller RewP amplitudes (Foti & Hajcak, 2009), and smaller P3 amplitudes (Foti & Hajcak, 2009). Individuals higher on extraversion-related traits tend to have smaller ERN amplitudes (e.g., Hall, Bernat, & Patrick, 2007) and smaller P3 amplitudes (e.g., Daruna, Karrer, & Rosen, 1985). Individuals higher on conscientiousness-related traits tend to have larger ERN amplitudes (Stahl, Acharki, Kresimon, Völler, & Gibbons, 2015) and larger P3 amplitudes (Yancey, Venables, Hicks, & Patrick, 2013).

Applying Psychometric Approaches to Assess the Measurement Properties of ERPs

These studies suggest some consistent patterns of relationships between these ERPs and personality traits across studies. At the same time, the ERPs appear to relate to a wide range of personality traits that are conceptually distinct. Furthermore, personality traits also relate to more than one ERP, obscuring the potential links. There are several possible explanations for such findings. For example, they may suggest that ERPs and personality traits do not have simple one-to-one relationships. However, because most research examining the relationship between ERPs and personality traits has focused on only parts of the ERP or personality space, the relationships between ERPs and personality traits relative to other ERP–trait relationships are unclear. Nonspecific relationships are commonly observed in psychological assessment, especially at the initial stage of measure construction, and several procedures and techniques have been developed to understand and refine assessment measures (e.g., Campbell & Fiske, 1959; Cronbach & Meehl, 1955). These techniques have rarely been applied to ERPs, which suggests that ERPs have not yet been examined thoroughly as a measurement model for assessing neurophysiological individual differences and that there is a critical need for potential further refinement. Specifically, we take the three approaches described below.

First, most studies that have examined the relationships between personality traits and ERPs focused on only part of the personality trait space or a single ERP. Only a simultaneous examination of the patterns of multiple relationships allows an understanding of convergent and discriminant relationships, as in the multitrait–multimethod correlation matrix (Campbell & Fiske, 1959). This approach could be particularly helpful for investigating the one-to-one correspondence of the relationships between ERPs and personality traits. Although some ERP researchers have taken this approach (Foti et al., 2013), multiple ERPs elicited from more than one task are rarely examined simultaneously alongside a complete assessment of the personality space. This information will provide the "big picture" of the relationships among the ERPs and personality traits and may provide routes for further refinement of the ERPs to improve the precision with which they assess the neurophysiological constructs of interest.

Second, ERPs are usually operationalized as single averages of the ERP recordings elicited from many trials (Luck, 2014). For example, the ERN can be elicited using an arrow flanker task that consists of 300 trials (Eriksen & Eriksen, 1974). In this task, participants are asked to respond to stimuli quickly, and participants make errors approximately 20% of the time (i.e., approximately 60 mistakes). The ERN has traditionally been defined as a single score: the difference between the average neural response when errors are made and the average when correct responses are made (although some researchers now examine the error average, the correct average, and the difference separately; Meyer, Lerner, De Los Reyes, Laird, & Hajcak, 2017; Olvet & Hajcak, 2009). This reasonably assumes that the ERP variations across trials are primarily due to error or to processes that are not of interest to the researcher, and these procedures attempt to minimize the influence of such effects (e.g., ERPs have been found to change across time, possibly because of fatigue or boredom; Boksem, Meijman, & Lorist, 2006; Wascher & Getzmann, 2014). As such, these within-person variations over trials may themselves be candidates for linking ERPs with personality. However, whether ERPs reflect the same underlying process throughout the task has not been tested. Psychometric techniques commonly used in personality research may aid in testing this empirically. For example, the 300 trials can be divided into several finer "units" that can be analyzed as psychometric "items." These units can be subjected to statistical analyses to examine their absolute amplitude fluctuation (e.g., repeated-measures analysis of variance; ANOVA) as well as the cohesiveness of their variances (e.g., unidimensionality as tested by confirmatory factor analysis; CFA). These analyses provide information regarding the properties of ERPs as measurement models of neurophysiological individual differences. Only after establishing such properties of ERPs can researchers examine their relationships to other individual difference models.



Finally, past studies have examined the relationships between ERPs and personality using relatively small sample sizes. Many studies use samples of fewer than 50 participants, and only a handful have used more than 100 participants (e.g., only two of the six reviewed above). Given the small sample sizes in most studies, correlation coefficients may not have sufficiently stabilized, which could have inflated the effect sizes and the unintentional Type I error rates (Schönbrodt & Perugini, 2013). In the present study, we use samples ranging from 184 to 530 participants to provide more robust estimates of the relationships between ERPs and personality traits.

Contextualizing ERPs With Personality Traits

The five-factor model (FFM) provides a good framework for examining the patterns of ERPs' relationships to personality traits (McCrae & Costa, 2010). The general consensus among personality researchers is that five factors adequately capture most of the personality trait space (John, Naumann, & Soto, 2008). The FFM has also been shown to be comprehensive, as its strength lies in its ability to integrate other models. These include a number of models that have been related to ERPs, such as Eysenck's Personality Inventory (McCrae & Costa, 1985), Tellegen's Multidimensional Personality Questionnaire (Church, 1994), and Gray's Behavioral Inhibition System/Behavioral Activation System (Segarra, Poy, López, & Moltó, 2014). Therefore, the FFM provides a framework that captures a wide range of personality constructs, including the traits related to ERPs. This provides an opportunity to replicate past studies as well as to simultaneously examine the relationships within the same broad framework.

In this study, we subject ERPs to procedures and statistical techniques commonly used in personality research to examine their properties as neurophysiological individual differences markers. Specifically, the interrelationships among ERPs and their relationships to personality traits were examined simultaneously using zero-order correlations as well as quantification of personality profile similarities. Because the ERPs examined in the present study are thought to reflect different processes, we expected to find some differential relationships, such that each ERP would be characterized by a different set of personality traits. Specifically, it was hypothesized that larger (i.e., more negative) ERN amplitudes would be associated with higher Neuroticism and Conscientiousness but not with other traits. Larger (i.e., more positive) RewP and P3 amplitudes were expected to be associated with higher Extraversion but not with other traits. Each ERP was further divided into five units to explore the fluctuation of absolute ERP amplitudes across the task as well as the cohesiveness of the variances (i.e., do people have similar relative ERP amplitudes compared with others?) as an individual difference marker. Further, we hypothesized that calculating latent ERPs would strengthen their relationships with the relevant personality traits. The overall purpose of this article is to examine the utility of psychometric techniques and procedures for examining ERPs as neurophysiological measurement models.

Method

Participants

Participants in this study were sampled from undergraduate and general community populations. Undergraduate students were recruited to participate for course credit. Community members were recruited through advertisements posted throughout campus and the local community and were compensated with cash payments. A summary of participant demographics for each task and the questionnaire is provided in Table 1. The data collection was approved by the appropriate institutional review board.

Table 1
Demographic Information of Participants Who Completed the Event-Related Potential Tasks and Questionnaire

Data         N     Mean age   Female (%)   White (%)   Black (%)   Asian (%)   AI/AN (%)   NH/PI (%)   Preferred not to answer (%)
Doors        530   22.0       57.1         73.0        5.5         22.4        .8          .2          .4
Flanker      184   22.0       58.5         71.2        4.3         23.9        1.1         .0          1.1
Personality  383   23.0       59.9         71.7        6.3         22.3        1.0         .3          .5

Note. AI/AN = American Indian/Alaska Native; NH/PI = Native Hawaiian or Other Pacific Islander.


Because not all participants completed both tasks and the personality measure, the specific sample sizes used in each analysis are reported in the tables when they differ. Further, of the data included here, the behavioral and physiological data from the flanker task were used and analyzed with the personality data in Hill, Samuel, and Foti (2016), and a subset of the current doors reward task physiological data was used, though not in conjunction with the current set of personality data, in Ait Oumeziane and Foti (2016).

General Procedure

Participants first completed a series of laboratory tasks while continuous EEG data were recorded. Both laboratory tasks were administered using Presentation software (Neurobehavioral Systems, Inc., Berkeley, CA) to control the timing and presentation of stimuli. Participants then completed a battery of self-report measures. All participants were compensated with course credit or paid $10.00 per hour. Participants who completed the doors task were compensated with an additional $5.00 payment. Although the duration of participation varied depending on the tasks and questionnaires administered, the entire procedure took approximately two hours for all participants.

Laboratory Tasks

Flanker task. The error-related negativity (ERN) and correct-related negativity (CRN) were elicited using an arrow flanker task (Eriksen & Eriksen, 1974). The task contained 300 trials broken into 10 blocks, with self-paced breaks and feedback provided after each block. On each trial, five arrows were presented in the center of the screen for 200 ms, followed by an intertrial interval that randomly varied from 2,300 to 2,800 ms. A fixation cross was presented before the onset of each stimulus. Participants were instructed to attend to the center arrow of the array of five and to click the side of the mouse indicated by that arrow (e.g., right click if the center arrow pointed right). There were congruent trials (<<<<< or >>>>>) and incongruent trials (<<><< or >><>>). The task included a practice block of 15 trials and took participants approximately 12 min to complete.

Doors reward task. Neural activity (i.e., the RewP and P3) involved in the processing of reward (i.e., monetary gain) and nonreward (i.e., monetary loss) was elicited using a simple laboratory gambling task. The task consisted of 50 trials (i.e., 50% gains, 50% losses) presented in a pseudorandom order. On each trial, participants were shown an image of two doors and asked to select which door to open using a computer mouse. A feedback stimulus was then presented denoting whether money was won or lost. A correct guess (i.e., $0.40 gain) and an incorrect guess (i.e., $0.20 loss) were indicated by a green up arrow and a red down arrow, respectively. A fixation cross was presented before the onset of each stimulus. The order and duration of the stimuli were as follows: (a) the two doors until a response was detected, (b) a fixation cross for 1,000 ms, (c) a feedback stimulus for 2,000 ms, (d) a fixation cross for 1,500 ms, and (e) the instruction "Click for the next round."

Psychophysiological Recording and Data Reduction

Each participant was fitted with a 32-electrode cap (actiCAP) oriented on the International 10/20 system. The continuous EEG data were recorded through an actiCHamp amplifier (Brain Products GmbH, Munich, Germany) and digitized at 24-bit resolution with a 500 Hz sampling rate. Per BrainVision's design of the actiCHamp system, we did not have an online reference electrode during recording. The electrooculogram was recorded from two auxiliary electrodes placed 1 cm above and below the left eye to form a bipolar channel. The impedance at each electrode site was kept below 30 kOhm.

Offline analysis of the EEG data was performed using BrainVision Analyzer software (Brain Products). All signals were re-referenced to the average of the two mastoids (i.e., TP9/TP10) and band-pass filtered with cutoffs at 0.1 and 30 Hz. Horizontal electrooculograms were recorded from electrodes FT9/FT10, and vertical electrooculograms were recorded using two auxiliary facial electrodes placed 1 cm above and below the left eye, forming a bipolar channel. A regression-based technique was used to correct for blinks and eye movements on each trial using the signal gathered from the eye electrode sites (Gratton, Coles, & Donchin, 1983). Although EEG data are measured in voltage, they are referenced to an arbitrary point and constitute an interval measure.
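As a minimal illustration of the mastoid re-referencing step (a sketch only; the actual pipeline was implemented in BrainVision Analyzer), the following R code assumes a hypothetical samples-by-channels matrix with electrode labels as column names:

    # Re-reference every channel to the average of the two mastoids (TP9/TP10).
    # 'eeg' is a hypothetical numeric matrix (rows = samples, columns = channels).
    rereference_to_mastoids <- function(eeg) {
      mastoid_avg <- rowMeans(eeg[, c("TP9", "TP10")])
      sweep(eeg, 1, mastoid_avg, "-")
    }
    # eeg_reref <- rereference_to_mastoids(eeg_raw)

The band-pass filtering and regression-based ocular correction described above are not reproduced in this sketch.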

For the flanker task, the data were then segmented to isolate the relevant time windows, specifically −400 to 800 ms around the participant's response, to capture the ERN and CRN. For the doors task, EEG signals were segmented for each trial, starting 200 ms before feedback onset and ending 800 ms after feedback onset. Artifact rejection for individual channels was conducted using trial-wise, semiautomated procedures and visual inspection. The ERN and CRN were scored on a trial-wise basis as the activity from 0 to 100 ms postresponse at a pooling of Fz, Cz, FC1, and FC2. These poolings were compared with the trial-wise baseline for each participant (−400 to −200 ms preresponse). The waveforms and scalp distribution of the grand averages are provided in Figure 1.

For the doors task, single-trial averages for each condition (i.e., gain, loss) comprising the RewP were scored as the average activity from 240 to 330 ms after feedback onset at a frontocentral electrode pooling (Fz, Cz, FC1, and FC2); this time window and pooling were determined from the peak of the difference wave (gain minus loss; peak latency = 285 ms) for the grand average across all subjects. However, only single-trial RewP averages during this time window were scored for each participant. For P3 amplitude after monetary feedback (i.e., wins and losses), single-trial averages were scored as the average activity on gain and loss trials from 330 to 400 ms (peak latency = 365 ms), averaged at electrodes Cz, Pz, CP1, and CP2. Similar to the RewP, single-trial P3 averages were scored for each condition (i.e., gain, loss) for each participant. The activity in the 200-ms interval before feedback onset was used as the baseline activity for each trial. The waveforms and scalp distributions of the grand averages are provided in Figure 2.
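To make the scoring rule concrete, here is a sketch of scoring a single trial in R under an assumed layout (one samples-by-channels matrix per epoch, time-locked at 0 ms and sampled at 500 Hz, as described above); the function and argument names are illustrative and not the authors' code:

    # Mean amplitude over an electrode pooling within a time window, relative to
    # the mean of a baseline window. Times are in ms; epoch_start is the time of
    # the first sample (e.g., -400 for flanker epochs, -200 for doors epochs).
    score_trial <- function(epoch, electrodes, window, baseline,
                            srate = 500, epoch_start = -400) {
      ms_to_row <- function(ms) round((ms - epoch_start) * srate / 1000) + 1
      pooled <- rowMeans(epoch[, electrodes, drop = FALSE])
      win    <- mean(pooled[ms_to_row(window[1]):ms_to_row(window[2])])
      base   <- mean(pooled[ms_to_row(baseline[1]):ms_to_row(baseline[2])])
      win - base
    }
    # e.g., the ERN on one error trial:
    # score_trial(epoch, c("Fz", "Cz", "FC1", "FC2"),
    #             window = c(0, 100), baseline = c(-400, -200))

The same function with window = c(240, 330) or c(330, 400), baseline = c(-200, 0), and epoch_start = -200 corresponds to the RewP and P3 scoring described above.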

Self-Report Measure

The Five Factor Model Rating Form (FFMRF; Mullins-Sweatt, Jamerson, Samuel, Olson, & Widiger, 2006) is a 30-item self-report questionnaire that assesses the five domains of the FFM and the 30 facets that comprise the domains. Each facet is assessed by one item, and each domain is assessed by six items. Cronbach's α for the domains in the present sample ranged from .68 (Openness to Experience) to .82 (Conscientiousness). Personality data are often assumed to be interval data and were treated as such in this article.



Single Trial ERP Data Reorganization

ERP data were first screened for within-participant, trial-by-trial outliers (±3 SD) within each condition for each participant (e.g., different means and SDs were used for the CRN and ERN and for each participant). Grand averages for each ERP (i.e., ERPs as traditionally defined), a single score using all trials within a condition, were calculated at this point, after the removal of the outliers. Next, the trial data were also organized into five units for each ERP component. The number of units was arbitrarily chosen to keep it consistent across ERP components while maximizing the number of units (see Footnote 1). For the flanker task ERPs, the five units were defined as every 60 trials (i.e., the first 60 trials were in unit 1, the next 60 trials in unit 2, and so on) regardless of the number of errors made. On average, participants made 5.0 errors in unit 1, 6.5 errors in unit 2, 7.4 errors in unit 4, and 4.2 errors in unit 5. No participant was excluded based on performance. For the doors task ERPs, the five units were defined as every five trials within each condition. Unit averages were calculated using the trials remaining after the within-participant outlier removal. Each unit is conceptualized as an "item" of a personality measure that is subjected to psychometric analyses. Next, the difference scores (i.e., ΔERN and ΔRewP) were calculated using the respective grand and unit averages. These averages were further screened for between-participants outliers (±3 SD) for each unit and grand average separately. The final ERP dataset consisted of five units and a grand average for each component and difference score. The ERPs elicited by the flanker task were the ERN, the CRN, and the difference between the ERN and CRN (ΔERN; CRN subtracted from ERN). The ERPs elicited by the doors task were the RewP and P3 elicited from the gain (Gain) and loss (Loss) trials, as well as the difference between RewP Gain and RewP Loss (ΔRewP; Loss subtracted from Gain).
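A minimal sketch of this reorganization in R, under an assumed data layout (one numeric vector of single-trial amplitudes per participant and condition, in trial order); names are illustrative rather than the authors' code:

    # Screen within-person outliers (beyond +/-3 SD of that participant's
    # condition mean), then average consecutive trials into five units.
    make_units <- function(amplitudes, n_units = 5) {
      z <- as.vector(scale(amplitudes))
      amplitudes[abs(z) > 3] <- NA                    # within-person screen
      unit <- cut(seq_along(amplitudes), breaks = n_units, labels = FALSE)
      tapply(amplitudes, unit, mean, na.rm = TRUE)    # five unit averages
    }
    # The traditional grand average is mean(amplitudes, na.rm = TRUE) after the
    # same screen; difference scores (e.g., the delta ERN) subtract the correct-
    # trial score from the error-trial score, per the definitions above.

With 300 flanker trials this yields 60 trials per unit, and with 25 doors trials per condition it yields 5 trials per unit, matching Footnote 1.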

Most correlation analyses of ERPs and personality traits were conducted using the "corr.test" function of the "psych" package (Version 1.7.5; Revelle, 2014) in R (Version 3.3.0; R Core Team, 2017), all ANOVAs were conducted using the "aov" function of the built-in "stats" package (Version 3.3.0), all SEM analyses and derivations of latent scores were conducted using Mplus default options (e.g., maximum likelihood estimator, geomin rotation; Version 7.31; Muthén & Muthén, 1998/2017), and the remaining analyses were conducted using Microsoft Excel 2013.
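For example, the zero-order ERP–trait correlation matrix reported below could be obtained with a call along these lines (a sketch; erp_scores and ffm_domains are hypothetical per-participant data frames, not objects from the authors' scripts):

    library(psych)
    # Pairwise deletion handles participants who completed only one task or
    # only the questionnaire; $r, $n, and $p hold the correlations, pairwise
    # sample sizes, and p values.
    ct <- corr.test(erp_scores, ffm_domains, use = "pairwise")
    ct$r
    ct$n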

Analytic Plan

All results focus on effect sizes rather than statistical significance (except for the ANOVAs). Following Cohen's suggestion, we define r = .1 as small, r = .3 as medium, and r = .5 as large (Cohen, 1992). For the CFAs and exploratory structural equation models (ESEMs), we used root mean square error of approximation (RMSEA) ≤ .08, comparative fit index (CFI) ≥ .90, Tucker-Lewis Index (TLI) ≥ .90, and standardized root mean square residual (SRMR) ≤ .10 as indicators of adequate fit (Hu & Bentler, 1999).

Analyses of Grand Average ERPs

In the first set of analyses, the properties of the single grand average for each ERP, calculated using all relevant trials, were examined. The main goal of these analyses was to replicate previous research using a relatively large sample size and to extend past research by simultaneously analyzing multiple ERPs and personality traits to examine their relative relationships. First, the correlations among the grand average ERPs were calculated to examine the differential relationships among ERPs. Second, the correlations between the grand averages of the ERPs and personality traits at the domain level were examined to replicate past studies.

1 The flanker task has 300 trials, and the doors task has 50 trials divided into 25 trials per condition. Therefore, if the same number of units were to be used for all tasks and conditions, either 5 or 25 units would be required. However, with 25 units, each flanker unit would contain 12 trials and each doors condition unit only one trial. This would lead to many missing data, and we therefore decided to use five units.

Figure 1. Left: ERP responses to correct and error responses at a pooling of Fz, Cz, FC1, and FC2. The response was at 0 ms. The ERN was scored as the average activity in the shaded window (0–100 ms) on error trials. The CRN was scored as the average activity in the shaded window (0–100 ms) on correct trials. Right: Scalp distribution of the difference between error and correct responses. See the online article for the color version of this figure.


Third, the similarity of the facet-level FFMRF personality profiles among the grand average ERPs was assessed to examine whether the relationships among ERPs would change when an external criterion was used to quantify similarity. A modification of a procedure that quantifies the similarity of correlation patterns among constructs was used (Westen & Rosenthal, 2003). To analyze the similarities, correlations between the grand average ERPs and the 30 FFMRF facets were first calculated (online supplemental material Table 1). These correlations were then transformed into z scores. Finally, the personality profile similarity of two ERPs was calculated by correlating their z-scored personality profiles (i.e., two ERPs were correlated across the 30 z-transformed correlations; the correlation of z-transformed columns in online supplemental material Table 1). Conceptually, this is a correlation that uses the correlations between ERPs and personality traits as the "raw" data.
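A sketch of this profile-similarity computation in R (erp_facet_r is a hypothetical 30-by-k matrix holding the correlations of the 30 FFMRF facets with k grand-average ERPs):

    # Fisher r-to-z transform each ERP's 30 facet correlations, then correlate
    # the resulting profiles across ERPs (cf. Table 3).
    z <- atanh(erp_facet_r)
    profile_similarity <- cor(z, use = "pairwise.complete.obs")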

Analyses of Unit-Level ERPs

A series of unit-level ERP analyses were conducted to further extend previous research by examining the psychometric properties of the ERPs and by examining whether unit-level information would elucidate the relationships between ERPs and personality traits. First, the fluctuations in absolute amplitudes were examined using repeated-measures ANOVA. The practice of calculating the grand average assumes either that there is no systematic fluctuation in ERPs across trials or that such fluctuation is not meaningful.

Figure 2. Left: ERP responses to monetary gains and losses at a pooling of Fz, Cz, FC1, and FC2 (top) and at Cz, Pz, CP1, and CP2 (bottom). Feedback onset was at 0 ms. The RewP was scored as the average activity in the top shaded window (240–330 ms) and the P3 (gains and losses) in the bottom shaded window (330–400 ms). Right: Scalp distribution of the difference between gain and loss conditions for the RewP (top) and separate scalp distributions for the P3 (middle right: gain; bottom right: loss). See the online article for the color version of this figure.


To our knowledge, such fluctuation has not been empirically examined. Second, the coherence of the covariances among units was examined using CFAs for each ERP component. If one latent construct is sufficient to explain the covariances among the units of a single ERP component, this provides evidence that the ERP variances across units generally assess the same construct. If a single latent construct is insufficient to account for the covariances, this seriously challenges the calculation of a single value from all trials. We conducted these analyses for all ERPs, including ΔERN and ΔRewP, because these two are the summary ERPs commonly reported in the literature. Further, one-factor CFAs for the ERP counterparts (i.e., those that share the same time window and electrode location definition) were conducted to examine the covariance of units across conditions (e.g., error vs. correct) as well. If a one-factor CFA did not fit well, it was followed by a two-factor CFA with the latent factors defined by condition.
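For illustration, these unit-level factor models could be specified as follows (sketched here in lavaan; the analyses reported in this article were run in Mplus with its defaults, and the data frame and column names below are hypothetical):

    library(lavaan)
    # One-factor CFA over the five unit averages of one ERP component.
    fit_1f <- cfa('ern =~ ern_u1 + ern_u2 + ern_u3 + ern_u4 + ern_u5',
                  data = unit_data)
    fitMeasures(fit_1f, c("rmsea", "cfi", "tli", "srmr"))

    # Two-factor follow-up across conditions (e.g., flanker correct vs. error).
    fit_2f <- cfa('
      crn =~ crn_u1 + crn_u2 + crn_u3 + crn_u4 + crn_u5
      ern =~ ern_u1 + ern_u2 + ern_u3 + ern_u4 + ern_u5
    ', data = unit_data)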

Finally, ERP latent scores saved from the single-component CFAs and personality latent scores from the ESEM were used to examine the relationships between latent ERPs and personality. The latent constructs theoretically reflect the underlying construct causing the shared variance among the units. The purpose of this analysis was to examine whether latent indicators of the ERP components and personality traits would clarify (i.e., strengthen some and weaken some) the relationships between ERPs and personality traits relative to the averages of observed amplitudes and ratings. For the ESEM, five factors were chosen to reflect the FFM that underlies the FFMRF. ESEM was chosen over CFA for the analysis of personality traits because of the overly strict nature of CFA for assessing multidimensional personality models (Hopwood & Donnellan, 2010).

Results

Analyses of Grand Average ERPs

First, the correlations among the ERP grand averages were examined (Table 2, top). The correlational pattern suggests that ERP components elicited from the same task, as well as ERP counterparts (i.e., those sharing the time-window and electrode location definition), were generally related to each other, as would be expected. An interesting finding was that the ERP difference (Δ) scores tended to relate much more strongly with one of the components used in their calculation than with the other. For example, the doors ΔRewP score was strongly related to RewP Gain (r = .55) but not to RewP Loss (r = −.07).

The correlations between the grand averages of the ERPs and personality traits at the domain level were examined next (Table 2, bottom). In general, effect sizes were small (max |r| = .11), but there were five correlations with r ≥ .10: Decreased flanker ΔERN amplitude was (contrary to our expectation and the literature) related to higher levels of Neuroticism and Extraversion, and increased doors P3 Loss amplitude was related to Agreeableness. However, none of the doors RewP components had correlations larger than .10 with any of the personality traits examined. We also calculated correlations corrected for attenuation, using unpublished FFMRF domain 1-year test–retest correlations (N = 153; ranging from .56 to .71; Samuel & Lynam, 2017), for all ERPs to reduce the effect of the relatively low reliability of the FFMRF as a short measure. This provides some indication of the "ceiling" of the correlations. Because no consistent, comparable dependability index was available for the ERPs, disattenuation was conducted using only the FFMRF dependability. The strongest correlation achieved with this method was r = .14 (between doors P3 Loss and FFM Agreeableness).
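The correction applied here is the standard correction for attenuation using only the criterion's dependability; a sketch of the arithmetic in R (the observed r of .10 is illustrative; .56–.71 are the dependabilities reported above):

    # r_corrected = r_observed / sqrt(dependability of the trait measure);
    # no ERP dependability term was available, so only one index is used.
    disattenuate <- function(r_observed, dependability) {
      r_observed / sqrt(dependability)
    }
    disattenuate(.10, .56)   # approximately .13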

Table 2
Zero-Order Correlations Among Event-Related Potentials and Five-Factor Model Traits

                   Flanker                 Doors                                          FFM
Task         CRN    ERN    ΔERN    P3 Win   P3 Loss   RewP Win   RewP Loss   ΔRewP    N      E      O      A
Flanker
  ERN        .60
  ΔERN      -.29    .59
Doors
  P3 Win     .20    .10    -.08
  P3 Loss    .23    .09    -.13    .87
  RewP Win   .20    .11    -.07    .78      .71
  RewP Loss  .26    .23     .01    .60      .69       .79
  ΔRewP     -.01   -.14    -.17    .45      .23       .55        -.07
FFM
  N         -.06    .03     .11    .05      .01       .03         .03        .02
  E         -.06    .04     .10    .08      .07       .05         .02        .06     -.23
  O         -.09    .00     .10    .01     -.01      -.03        -.08        .05      .02    .42
  A          .00    .05     .07    .10      .10       .06         .05        .04     -.12    .27    .17
  C          .03    .06     .04   -.04     -.02      -.02         .01       -.04     -.10    .32    .08    .27

Note. Bold = r ≥ .1 (two correlations that appear to have r = .1 do so because of rounding and are r < .1); underline = r ≥ .3; bold and underline = r ≥ .5. CRN = correct-related negativity; ERN = error-related negativity; Δ = change (difference) score; FFM = five-factor model; N = Neuroticism; E = Extraversion; O = Openness to Experience; A = Agreeableness; C = Conscientiousness. Sample sizes were 182–183 for correlations among flanker event-related potentials (ERPs), 523–528 for correlations among doors ERPs, 386 for correlations among personality traits, 178–181 for correlations between flanker and doors ERPs, 159 for correlations between flanker ERPs and personality traits, and 371–373 for correlations between doors ERPs and personality traits.



The similarity of the facet-level FFMRF personality profiles among the grand average ERPs was assessed next (see Table 3). The correlations between ERPs and FFMRF facets ranged from r = −.18 to .17. The pattern of similarity among the ERP personality profiles was generally comparable with the zero-order correlations in that it seemed to reflect the relationships between ERP counterparts (e.g., P3 Gain and P3 Loss) and task variances. This means that these ERPs not only have shared variance but also share variance "meaningfully," as operationalized by personality profile similarity. However, there were several differences between the two analyses as well. Specifically, RewP Gain and ΔRewP had personality profiles more similar to the P3 components than to the other RewP components. Also, the ΔERN had personality profiles about equally similar (in strength) to the ERN (positive correlation) and the CRN (negative correlation).

Analyses of Unit-Level ERPs

First, the absolute amplitude fluctuation across units for each component was examined using repeated-measures ANOVA (Table 4, Figure 3; because power was at least .91 to detect even an effect size of .02, only statistically significant results are presented). The series of ANOVAs suggests that the CRN and ERN amplitudes fluctuated significantly (i.e., p < .05) at the overall level (partial η² = .11 and .04, respectively; see Footnote 2), indicating that not all units have the same mean amplitude. On the other hand, the ΔERN amplitude did not fluctuate (i.e., the differences in unit mean scores were within the range of error). The ANOVA for the P3 Gain units was statistically significant as well, but the effect size was small (partial η² = .01). None of the other ANOVAs for the doors ERPs was statistically significant. This likely suggests that the ERP amplitudes elicited by the doors task are generally stable throughout the task. Overall, the absolute ERP amplitudes generally seem to remain stable over time for most ERPs, providing support for the practice of calculating the average across trials.
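A sketch of one such repeated-measures ANOVA in R, using the aov function named in the Method section (unit_long is a hypothetical long-format data frame with one row per participant and unit):

    # amplitude ~ unit, with participants as the error stratum.
    unit_long$id   <- factor(unit_long$id)
    unit_long$unit <- factor(unit_long$unit)
    fit <- aov(amplitude ~ unit + Error(id/unit), data = unit_long)
    summary(fit)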

Next, to examine the adequacy of one latent factor underlying the observed unit covariances for each ERP, one-factor CFAs were conducted. All fit indices suggested that the ERN, ΔERN, RewP Win, RewP Loss, and ΔRewP had adequate fit (see Table 5). P3 Gain and P3 Loss had adequate fit according to all indices except RMSEA, although the RMSEAs were still below .10. In other words, the results suggest that the unit variances of these ERPs can likely be explained by one latent factor. The flanker CRN had adequate fit according to most indices, but its RMSEA was high (.15). Although the flanker CRN may also be explained by one latent factor, this cannot be concluded with as much confidence as for the other components. These results suggest that the unidimensionality assumption underlying the practice of averaging trials seems justified. Further, the results from the one-factor CFA combining the ERP component pairs suggest that the CRN and ERN are distinct (see Table 5). This provides support for further examination of these different conditions. On the other hand, the results from the same analysis for the P3 and RewP suggest that the time-window and electrode location definition is sufficient to explain the covariation of the 10 units combined from the two conditions. ESEM fit indices for the five-factor structure of the FFMRF (which indicate generally adequate fit, except for TLI = .86) are also included in Table 5. For interested readers, local fit indicators for all analyses (factor loadings, residual variances, and normalized residual correlations) are reported as online supplemental material Tables 2 through 14.

2 Partial η² was calculated using the formula SSunit/(SSunit + SSerror).

Table 3
Five-Factor Model Profile Similarities Among Event-Related Potentials

                   Flanker                 Doors
Task         CRN    ERN    ΔERN    P3 Win   P3 Loss   RewP Win   RewP Loss
Flanker
  ERN        .58
  ΔERN      -.46    .46
Doors
  P3 Win    -.04    .18     .24
  P3 Loss    .11    .25     .15    .91
  RewP Win   .15    .31     .17    .88      .87
  RewP Loss  .27    .40     .15    .49      .65       .74
  ΔRewP     -.07    .03     .11    .83      .64       .72        .08

Note. Bold = r ≥ .1; underline = r ≥ .3; bold and underline = r ≥ .5. CRN = correct-related negativity; ERN = error-related negativity; Δ = change (difference) score. Sample sizes are as reported in the Method section. Sample sizes for correlations between flanker event-related potentials (ERPs) and personality traits ranged from 156 to 159, and for correlations between doors ERPs and personality traits from 367 to 373.

Table 4
Multivariate Analysis of Variance Results for Correct-Related Negativity, Error-Related Negativity, and P3 Win

Component    df     SS        MS       F       Sig.
CRN
  Unit       4      275.8     68.94    21.92   p < .01
  Error      725    2280.1    3.14
ERN
  Unit       4      515       128.71   6.962   p < .01
  Error      697    12886     18.49
P3 Win
  Unit       4      282       70.44    2.908   p = .02
  Error      2103   50932     24.22

Note. SS = sum of squares; MS = mean squares; F = F statistic; Sig. = statistical significance; CRN = correct-related negativity; ERN = error-related negativity.
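The partial η² values quoted in the text follow directly from the sums of squares in Table 4 via the formula in Footnote 2; for example, in R:

    partial_eta_sq <- function(ss_unit, ss_error) ss_unit / (ss_unit + ss_error)
    partial_eta_sq(275.8, 2280.1)   # CRN:    ~.11
    partial_eta_sq(515,   12886)    # ERN:    ~.04
    partial_eta_sq(282,   50932)    # P3 Win: ~.01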



Finally, ERP latent scores saved from the one-factor CFAs for each ERP and personality latent scores from the ESEM were used to examine the relationships between latent ERPs and personality. The results suggest that the strength of the correlations did not increase much and remained relatively small (Table 6; maximum r = .16; maximum change in r = .07). However, some changes were noted. For example, the Openness factor's correlation with the ΔERN increased from r = .10 to r = .16, and the correlation between the CRN and Openness strengthened from r = −.09 to r = −.15.

Discussion

The present article demonstrated that the three candidate ERPs examined have promising properties as indicators of neurophysiological individual differences. They showed expected interrelationships across components as well as consistency within each component. However, they had small and nonspecific relationships with personality traits measured through a self-report measure. Given the promising properties of ERPs, this finding is surprising, and we provide some potential explanations and avenues for future research.

Figure 3. The unit amplitudes of the correct-related negativity, error-related negativity, and P3 Win. The bars indicate between-subjects 95% confidence intervals. See the online article for the color version of this figure.

Table 5
The Confirmatory Factor Analyses and Exploratory Structural Equation Modeling Fit Indices

Analysis      Number of factors   χ²        df    p      RMSEA   CFI    TLI    SRMR
CFA
  CRN          1                   26.562    5    <.01    .153    .982   .964   .013
  ERN          1                   9.519     5    .09     .070    .988   .976   .025
  ΔERN         1                   9.570     5    .09     .071    .978   .957   .033
  P3 Win       1                   31.745    5    <.01    .100    .982   .965   .019
  P3 Loss      1                   26.633    5    <.01    .090    .987   .973   .018
  RewP Win     1                   17.926    5    <.01    .070    .991   .981   .016
  RewP Loss    1                   11.171    5    .05     .048    .994   .989   .014
  ΔRewP        1                   9.980     5    .08     .043    .954   .908   .026
  Flanker      1                   270.747   35   <.01    .191    .858   .817   .119
  P3           1                   197.531   35   <.01    .094    .959   .948   .029
  RewP         1                   148.312   35   <.01    .078    .963   .952   .031
  Flanker      2                   78.038    34   <.01    .084    .973   .965   .033
ESEM
  FFM          5                   616.15    295  <.01    .053    .902   .855   .036

Note. CRN = correct-related negativity; ERN = error-related negativity; Δ = change (difference) score; FFM = five-factor model; CFA = confirmatory factor analysis; ESEM = exploratory structural equation modeling; RMSEA = root mean square error of approximation; CFI = comparative fit index; TLI = Tucker-Lewis Index; SRMR = standardized root mean square residual. Sample sizes were 161 for the CRN and ERN; 160 for the ΔERN; 376 for P3 Gain, P3 Loss, and RewP Loss; 375 for RewP Gain and ΔRewP; and 386 for the FFM.



Psychometric Properties of ERPs

As assumed and expected, the ERPs related to, and were similar to, their counterpart ERPs (e.g., CRN and ERN) and to the ERPs elicited from the same task (the RewPs and P3s). This suggests that ERPs measure constructs relevant to their tasks rather than a general variance in neurophysiological activity (e.g., excitability related to a task response). Such explicit examination of discriminant and convergent relationships among ERPs is relatively uncommon in ERP research. Also, the ERPs defined as difference scores tended to be less coherent than the ERPs used in their calculation. The ΔRewP fared the worst among all ERPs examined and may be a good candidate for further investigation. Particularly noteworthy is that, when analyses were constrained to the ΔRewP alone, it seemed to show good psychometric properties (e.g., CFA fit indices suggest it is a unidimensional construct). However, when examined in the context of other ERPs, it did not show the same pattern as the other ERPs did. This pattern of findings highlights the importance of examining multiple ERPs simultaneously.

The unit-level analyses of the ERPs provided insight into the potential use of this approach for examining the coherence of ERPs across trials. The results from the one-factor CFAs on the P3 and RewP units from both conditions suggest that the ERP variances from the gain and loss conditions are highly correlated. This likely reduces the signal-to-noise ratio and may partially explain the poor psychometric properties of the ΔRewP. This is a surprising finding given the rich literature on these constructs. It may be explained, at least partially, by having only five trials in each unit; for example, 20 trials seem to be necessary for the ΔRewP to stabilize (Levinson, Speed, Infantolino, & Hajcak, 2017). Additionally, in this article we focused on time-window-defined average ERPs (as traditionally examined). However, the RewP from the gain and loss conditions, for example, may be better defined by other methods, such as time-frequency analysis (Foti, Weinberg, Bernat, & Proudfit, 2015). This may be an area that merits further investigation for neurophysiological individual difference indicators. The flanker task, on the other hand, seemed to elicit condition-specific time-window-defined average variances. This provides evidence for the ERPs elicited by the flanker task as individual difference indicators that assess different processes underlying each condition, and for the continued use of this method. One potential explanation for the difference in the effect of condition between the two tasks is the different level of engagement: In the flanker task, participants' performance influences the conditions, whereas in the doors task the conditions are random. In the one-factor CFAs for each component, all ERPs (perhaps with the exception of the flanker CRN) showed unidimensionality among units, suggesting similar variances across the task. Repeated-measures ANOVA suggested that there is some fluctuation in the absolute amplitude of three ERPs. However, the effect sizes were small and likely reflect the high power for detecting any differences; in light of that power, it is actually remarkable that only three of the eight ERPs showed variation across units. In summary, combined with the literature on the stability of these ERPs, they seem to assess trait-like constructs and show promise as measures of neurophysiological individual differences.

The Relationships Between ERPs and FFM Traits

Despite the promising properties of ERPs as individual difference markers, the correlations between ERPs and personality traits were small and did not seem to show specificity. Examining the latent correlations did not change this.

Table 6
The Correlations Among Latent Event-Related Potentials and Personality Traits and Change in the Correlations

                   Flanker                 Doors                                          FFM
Task         CRN    ERN    ΔERN    P3 Win   P3 Loss   RewP Win   RewP Loss   ΔRewP    N      E      O      A
Flanker
  ERN        .63
  ΔERN      -.33    .54
Doors
  P3 Win     .21    .12    -.08
  P3 Loss    .28    .13    -.14    .87
  RewP Win   .24    .13    -.11    .78      .72
  RewP Loss  .31    .27    -.02    .57      .67       .78
  ΔRewP      .00   -.12    -.14    .49      .27       .58        -.05
FFM
  N         -.08    .01     .11    .01     -.02       .01         .01        .00
  E         -.04    .06     .11    .12      .09       .06         .03        .04     -.23
  O         -.15    .01     .16    .02      .00      -.02        -.06        .05      .04    .47
  A          .01    .09     .09    .10      .08       .07         .04        .06     -.06    .06    .08
  C          .04    .10     .08    .01      .01      -.01         .03       -.05     -.13    .34    .04    .25

Note. Bold = r ≥ .1 (one correlation that appears to have r = .1 does so because of rounding and is r < .1); underline = r ≥ .3; bold and underline = r ≥ .5. CRN = correct-related negativity; ERN = error-related negativity; Δ = change (difference) score; FFM = five-factor model; N = Neuroticism; E = Extraversion; O = Openness to Experience; A = Agreeableness; C = Conscientiousness. Sample sizes were 160–161 for correlations among flanker event-related potentials (ERPs), 523–528 for correlations among doors ERPs, 375–376 for correlations among personality traits, 158–159 for correlations between flanker and doors ERPs, 160–161 for correlations between flanker ERPs and personality traits, and 375–376 for correlations between doors ERPs and personality traits.


latent correlations did not change this. This finding was somewhatsurprising given past findings that identified several notable rela-tionships between various ERPs and specific personality traits.This may indicate that past studies that focused on specific traitsand ERPs within relatively small samples overestimated the effectsizes. Further, even when examining the correlations holistically,there were no clear one-to-one correspondence between the grandaverage ERPs and personality traits. This pattern is in line with thereview of the literature that suggests nonspecific relationshipsbetween ERPs and personality traits. Alternatively, ERPs mayrelate to specific functions that can be captured only by specificpersonality traits. For example, although anxiety- and depression-related traits are subsumed under the Neuroticism domain, theymay differentially relate to ERN and RewP (Moser, Moran, &Jendrusina, 2012; Proudfit, 2015). Although we did not find evi-dence for such relationship in this sample (online supplementalmaterial Table 1; FFMRF 1 and 3, respectively), the use of theone-item assessment for each facet may have limited the size andstability of the correlations.

These smaller effect sizes may also indicate the broader truththat correlations across different methods are often quite small.For example, a meta-analysis by Lauriola, Panno, Levin, andLejuez (2014) found that risk taking, as assessed by using acommonly used behavioral measure, and personality traits ofsensation seeking and impulsivity, as assessed by using self-reports, were small (r � .14 and r � .10, respectively). Anothermeta-analysis by Cyders and Coskunpinar (2011) also foundthat the relationship between self-reported and behavioral im-pulsivity was small (r � .10). Therefore, rather than the presentfindings being an exception, there may be a ceiling in thestrength of relationships among individual differences markersfrom varying methods (i.e., self-report, behavioral, and neuro-physiological).

This article does not provide a definitive answer, and more research examining the utility of ERPs as indicators of neurophysiological individual differences is needed; we outline some avenues below. Nonetheless, this is the first study to have empirically examined the properties of ERPs as individual difference measures using techniques common in personality research. By contextualizing ERPs using personality traits, the approaches used here could also further our understanding of the individual difference processes in which ERPs may be involved. This article highlights the importance of examining the relationships among multiple ERPs and personality traits simultaneously, as well as of testing the assumptions underlying the common procedures used in ERP and personality research.

Limitations

Some limitations of the current study should be noted. One limitation was our reliance on a brief measure to assess the FFM. The FFMRF has validity support as an efficient measure of the FFM, but as with any short form there are validity tradeoffs (Samuel, Mullins-Sweatt, & Widiger, 2013). Compared with the longer, 240-item NEO PI-R (McCrae & Costa, 2010), the FFMRF tends to have lower reliability, which in turn caps validity (Credé, Harms, Niehorster, & Gaye-Valentine, 2012). This use of the FFMRF likely limited the range of correlations that could be found between ERPs and personality traits in the present study. Future studies that employ longer and more robust measures would likely yield stronger correlations with ERPs. Additionally, because the participants in this study were sampled from undergraduate and general community populations, the sample likely did not include those at the most extreme ends of the ERP or personality trait distributions. However, this sampling strategy is highly relevant for capturing a broad array of individual differences across these dimensional constructs.
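The reasoning that lower reliability caps validity follows the classical correction for attenuation, a standard psychometric result (offered here only as background, not as an analysis from the present study), which bounds the observed correlation by the reliabilities of the two measures:

r_{\mathrm{obs}} = r_{\mathrm{true}} \sqrt{r_{xx}\, r_{yy}}

For example, under this formula a true correlation of .30 between a trait and an ERP measured with reliabilities of .70 and .60 would appear as roughly .30 × √(.70 × .60) ≈ .19.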

Possible Explanations and Future Directions

Given the psychometric properties of the ERPs, they seem promising for the assessment of neurophysiological individual differences. However, they had only weak relationships with FFM traits, which are known to capture a wide range of personality-related behaviors. Below, we highlight some potential reasons for this finding and provide some strategies that may alleviate these issues, focusing in particular on those that are feasible.

1. ERPs do not have a one-to-one correspondence with FFM traits, but rather reflect interactions of FFM traits. For example, the ERN, which is thought to reflect preconscious error monitoring, could be a combination of Neuroticism and Conscientiousness or a combination of specific facets. Indeed, Hill et al. (2016) found that an interaction of personality traits, but not the main effects, was a significant indicator of the ERN. Examining the interactions of personality traits in this way may elucidate the complex relationships between ERPs and personality traits.
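To make this kind of analysis concrete, the following R sketch is purely illustrative (the simulated data and variable names are ours, not those of the present study): two traits and their interaction are entered as predictors of ERN amplitude.

set.seed(1)
n <- 200
dat <- data.frame(neuroticism = rnorm(n), conscientiousness = rnorm(n))
# Simulate an ERN that depends only on the trait interaction, not the main effects
dat$ern <- -0.30 * dat$neuroticism * dat$conscientiousness + rnorm(n)
# Main effects plus the two-way interaction as predictors of ERN amplitude
fit <- lm(ern ~ neuroticism * conscientiousness, data = dat)
summary(fit)

In real data the trait scores would typically be mean-centered before forming the product term so that the main effects remain interpretable.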

2. Similarly, interactions of ERPs could be indicators of personality. For example, Meyer et al. (2017) proposed regressing outcome variables (e.g., personality traits) on the ERN, the CRN, and the interaction term of the ERN and CRN. Perhaps self-reported personality traits reflect the combination of several ERPs. Designing a study with multiple ERPs that seem relevant to certain personality traits may shed light on this possibility.
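A minimal sketch of this complementary direction, again with simulated data and hypothetical variable names, regresses a self-reported trait on the ERN, the CRN, and their product rather than on an ERN-minus-CRN difference score (cf. Meyer et al., 2017).

set.seed(2)
n <- 200
dat <- data.frame(ern = rnorm(n), crn = rnorm(n))
dat$neuroticism <- 0.20 * dat$ern + 0.10 * dat$crn + rnorm(n)
# Both components and their interaction enter the model separately,
# so their unique and joint contributions can be estimated
fit <- lm(neuroticism ~ ern * crn, data = dat)
summary(fit)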

3. The ERP tasks used in this article did not elicit personality-relevant behaviors. For example, the flanker task uses arrows, which are difficult to relate to any specific personality trait. In contrast, Munro et al. (2007) used two forms of the flanker task: one using letters (i.e., symbols) and another using pictures of angry and fearful faces. They found that individuals with higher psychopathic traits did not differ from those with lower psychopathic traits on the letter version, but had smaller ERN amplitudes on the face version. Because psychopathy is characterized by a lack of empathy, this task revealed the pattern of ERN amplitudes expected from participants' trait levels. Similar modifications of tasks may help strengthen the relationships between ERPs and self-reported personality traits.

4. These two methods may have weak relationships, but are complementary. In other words, ERPs could be assessing individual differences that are not fully captured by self-reported FFM traits. For example, ERPs and FFM traits may have incremental validity over each other in predicting future behaviors, such as anxiety or depressive symptoms. Studies including ERP and personality measures at baseline with a follow-up assessment of behaviors or functioning may shed light on this possibility.
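One way to test such complementarity is an incremental-validity analysis. The R sketch below uses simulated data and hypothetical variable names (e.g., anxiety_followup as a later outcome) and asks whether an ERP adds predictive variance beyond a self-reported trait.

set.seed(3)
n <- 200
dat <- data.frame(neuroticism = rnorm(n), rewp = rnorm(n))
dat$anxiety_followup <- 0.40 * dat$neuroticism - 0.20 * dat$rewp + rnorm(n)
# Step 1: trait only; Step 2: trait plus ERP
m1 <- lm(anxiety_followup ~ neuroticism, data = dat)
m2 <- lm(anxiety_followup ~ neuroticism + rewp, data = dat)
anova(m1, m2)                                    # significance of the increment
summary(m2)$r.squared - summary(m1)$r.squared    # size of the increment in R^2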

5. The definition of personality may need to be broadened or redefined. Patrick and colleagues (2013) proposed a concept they call the "psychoneurometric" approach. The underlying idea is to extract the variance of the construct of interest from both ERP and personality data using structural equation modeling. In their example, they used two self-report measures of disinhibition and two forms of the P3 to form a single latent disinhibition construct. They then related this construct to external criteria assessed through other measures and ERPs, such as mental disorder symptoms and a different form of the P3.
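As an illustration of this psychoneurometric idea, the following sketch specifies a single latent disinhibition factor with two self-report indicators and two P3 indicators. It uses the lavaan package and simulated data; the indicator names are hypothetical, and Patrick et al. (2013) conducted their analyses in Mplus rather than lavaan.

library(lavaan)
set.seed(4)
n <- 300
eta <- rnorm(n)  # latent disinhibition
dat <- data.frame(
  dis_sr1  = 0.8 * eta + rnorm(n),   # self-report measure 1
  dis_sr2  = 0.7 * eta + rnorm(n),   # self-report measure 2
  p3_task1 = 0.4 * eta + rnorm(n),   # P3 from task 1
  p3_task2 = 0.4 * eta + rnorm(n)    # P3 from task 2
)
model <- '
  # One construct defined jointly by self-report and ERP indicators
  disinhibition =~ dis_sr1 + dis_sr2 + p3_task1 + p3_task2
'
fit <- cfa(model, data = dat)
summary(fit, fit.measures = TRUE, standardized = TRUE)

The resulting latent variable could then be related to external criteria, as in the original application.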

6. ERPs may assess individual differences at a different level than self-report measures. Many processes likely occur between neurophysiological activity and self-reported traits. Testing this possibility will likely involve assessing individual differences at multiple levels (e.g., ERPs occurring at different times, observed behaviors). For example, ERPs and personality traits may have a stronger relationship among participants who change their strategy throughout the task (i.e., slower reaction time [RT] after making a mistake) and are engaged than among those who do not and are not (Luu, Collins, & Tucker, 2000; Pailing & Segalowitz, 2004).
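For instance, a post-error slowing index of strategy change could be computed from trial-level data and used as a moderator. The sketch below is illustrative only, with simulated trials and hypothetical column names.

set.seed(5)
trials <- data.frame(
  rt       = rnorm(400, mean = 450, sd = 60),  # reaction time in ms
  accuracy = rbinom(400, 1, 0.9)               # 1 = correct, 0 = error
)
# Flag trials that immediately follow an error
prev_error <- c(NA, head(trials$accuracy, -1)) == 0
# Post-error slowing: mean RT after errors minus mean RT after correct responses
post_error_slowing <- mean(trials$rt[which(prev_error)]) -
  mean(trials$rt[which(!prev_error)])
post_error_slowing

Computed per participant, such an index could then enter a moderation model of the ERP-trait association like those sketched above.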

Summary

The recent increase in the availability and accessibility of various methodologies has advanced personality research. ERPs are one such method, and they provide an opportunity to advance personality research further. The current article found that ERPs have psychometric properties that make them potential candidates for use in personality research. However, their relationships with personality traits suggest that any such relationship may be fairly complicated. As demonstrated in this article, not only can personality researchers benefit from integrating ERP methodology, but ERP researchers can also benefit from incorporating procedures and techniques used in personality research. Research examining the intersection of these two areas has begun only recently. We hope that the approach taken in this article stimulates future collaborations between personality and ERP researchers to refine our understanding of personality traits as well as of ERPs.

References

Ait Oumeziane, B., & Foti, D. (2016). Reward-related neural dysfunction across depression and impulsivity: A dimensional approach. Psychophysiology, 53, 1174–1184. http://dx.doi.org/10.1111/psyp.12672

Boksem, M. A. S., Meijman, T. F., & Lorist, M. M. (2006). Mental fatigue, motivation and action monitoring. Biological Psychology, 72, 123–132. http://dx.doi.org/10.1016/j.biopsycho.2005.08.007

Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81–105. http://dx.doi.org/10.1037/h0046016

Church, A. T. (1994). Relating the Tellegen and five-factor models of personality structure. Journal of Personality and Social Psychology, 67, 898–909. http://dx.doi.org/10.1037/0022-3514.67.5.898

Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159. http://dx.doi.org/10.1037/0033-2909.112.1.155

Credé, M., Harms, P., Niehorster, S., & Gaye-Valentine, A. (2012). An evaluation of the consequences of using short measures of the Big Five personality traits. Journal of Personality and Social Psychology, 102, 874–888. http://dx.doi.org/10.1037/a0027403

Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302. http://dx.doi.org/10.1037/h0040957

Cuthbert, B. N., & Insel, T. R. (2013). Toward the future of psychiatric diagnosis: The seven pillars of RDoC. BMC Medicine, 11, 126. http://dx.doi.org/10.1186/1741-7015-11-126

Cyders, M. A., & Coskunpinar, A. (2011). Measurement of constructs using self-report and behavioral lab tasks: Is there overlap in nomothetic span and construct representation for impulsivity? Clinical Psychology Review, 31, 965–982. http://dx.doi.org/10.1016/j.cpr.2011.06.001

Daruna, J. H., Karrer, R., & Rosen, A. J. (1985). Introversion, attention and the late positive component of event-related potentials. Biological Psychology, 20, 249–259. http://dx.doi.org/10.1016/0301-0511(85)90001-8

Donchin, E., & Coles, M. G. H. (1988). Is the P300 component a manifestation of context updating? Behavioral and Brain Sciences, 11, 357–374. http://dx.doi.org/10.1017/S0140525X00058027

Eriksen, B. A., & Eriksen, C. W. (1974). Effects of noise letters upon the identification of a target letter in a nonsearch task. Perception & Psychophysics, 16, 143–149. http://dx.doi.org/10.3758/BF03203267

Falkenstein, M., Hohnsbein, J., Hoormann, J., & Blanke, L. (1991). Effects of crossmodal divided attention on late ERP components. II. Error processing in choice reaction tasks. Electroencephalography and Clinical Neurophysiology, 78, 447–455. http://dx.doi.org/10.1016/0013-4694(91)90062-9

Foti, D., & Hajcak, G. (2009). Depression and reduced sensitivity to non-rewards versus rewards: Evidence from event-related potentials. Biological Psychology, 81, 1–8. http://dx.doi.org/10.1016/j.biopsycho.2008.12.004

Foti, D., Kotov, R., & Hajcak, G. (2013). Psychometric considerations in using error-related brain activity as a biomarker in psychotic disorders. Journal of Abnormal Psychology, 122, 520–531. http://dx.doi.org/10.1037/a0032618

Foti, D., Weinberg, A., Bernat, E. M., & Proudfit, G. H. (2015). Anterior cingulate activity to monetary loss and basal ganglia activity to monetary gain uniquely contribute to the feedback negativity. Clinical Neurophysiology, 126, 1338–1347. http://dx.doi.org/10.1016/j.clinph.2014.08.025

Foti, D., Weinberg, A., Dien, J., & Hajcak, G. (2011). Event-related potential activity in the basal ganglia differentiates rewards from non-rewards: Temporospatial principal components analysis and source localization of the feedback negativity. Human Brain Mapping, 32, 2207–2216. http://dx.doi.org/10.1002/hbm.21182

Gehring, W. J., Goss, B., Coles, M. G. H., Meyer, D. E., & Donchin, E. (1993). A neural system for error detection and compensation. Psychological Science, 4, 385–390. http://dx.doi.org/10.1111/j.1467-9280.1993.tb00586.x

Gnambs, T. (2014). A meta-analysis of dependability coefficients (test–retest reliabilities) for measures of the Big Five. Journal of Research in Personality, 52, 20–28. http://dx.doi.org/10.1016/j.jrp.2014.06.003

Gratton, G., Coles, M. G. H., & Donchin, E. (1983). A new method for off-line removal of ocular artifact. Electroencephalography and Clinical Neurophysiology, 55, 468–484. http://dx.doi.org/10.1016/0013-4694(83)90135-9

Gray, J. A. (1981). A critique of Eysenck's theory of personality. In H. J. Eysenck (Ed.), A model for personality (pp. 246–276). Berlin, Germany: Springer. http://dx.doi.org/10.1007/978-3-642-67783-0_8

Hall, J. R., Bernat, E. M., & Patrick, C. J. (2007). Externalizing psychopathology and the error-related negativity. Psychological Science, 18, 326–333. http://dx.doi.org/10.1111/j.1467-9280.2007.01899.x

Hepp, J., Carpenter, R. W., Lane, S. P., & Trull, T. J. (2016). Momentary symptoms of borderline personality disorder as a product of trait personality and social context. Personality Disorders: Theory, Research, and Treatment, 7, 384–393. http://dx.doi.org/10.1037/per0000175

Hill, K. E., Samuel, D. B., & Foti, D. (2016). Contextualizing individual differences in error monitoring: Links with impulsivity, negative affect, and conscientiousness. Psychophysiology, 53, 1143–1153. http://dx.doi.org/10.1111/psyp.12671

Holroyd, C. B., & Coles, M. G. H. (2002). The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679–709. http://dx.doi.org/10.1037/0033-295X.109.4.679

Hopwood, C. J., & Donnellan, M. B. (2010). How should the internal structure of personality inventories be evaluated? Personality and Social Psychology Review, 14, 332–346. http://dx.doi.org/10.1177/1088868310361240

Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1–55. http://dx.doi.org/10.1080/10705519909540118

John, O. P., Naumann, L. P., & Soto, C. J. (2008). Paradigm shift to the integrative Big Five trait taxonomy: History, measurement, and conceptual issues. In O. P. John, R. W. Robins, & L. A. Pervin (Eds.), Handbook of personality: Theory and research (3rd ed., pp. 114–158). New York, NY: Guilford Press.

Lauriola, M., Panno, A., Levin, I. P., & Lejuez, C. W. (2014). Individual differences in risky decision making: A meta-analysis of sensation seeking and impulsivity with the Balloon Analogue Risk Task. Journal of Behavioral Decision Making, 27, 20–36. http://dx.doi.org/10.1002/bdm.1784

Levinson, A. R., Speed, B. C., Infantolino, Z. P., & Hajcak, G. (2017). Reliability of the electrocortical response to gains and losses in the doors task. Psychophysiology, 54, 601–607. http://dx.doi.org/10.1111/psyp.12813

Luck, S. J. (2014). An introduction to the event-related potential technique. Cambridge, MA: MIT Press.

Luu, P., Collins, P., & Tucker, D. M. (2000). Mood, personality, and self-monitoring: Negative affect and emotionality in relation to frontal lobe mechanisms of error monitoring. Journal of Experimental Psychology: General, 129, 43–60. http://dx.doi.org/10.1037/0096-3445.129.1.43

McCrae, R. R., & Costa, P. T. (2010). NEO Inventories for the NEO Personality Inventory-3 (NEO PI-3), NEO Five-Factor Inventory-3 (NEO-FFI-3) and NEO Personality Inventory-Revised (NEO PI-R): Professional manual. Odessa, FL: Psychological Assessment Resources.

McCrae, R. R., & Costa, P. T., Jr. (1985). Comparison of EPI and psychoticism scales with measures of the five-factor model of personality. Personality and Individual Differences, 6, 587–597. http://dx.doi.org/10.1016/0191-8869(85)90008-X

Mehl, M. R., Pennebaker, J. W., Crow, D. M., Dabbs, J., & Price, J. H. (2001). The Electronically Activated Recorder (EAR): A device for sampling naturalistic daily activities and conversations. Behavior Research Methods, Instruments, & Computers, 33, 517–523. http://dx.doi.org/10.3758/BF03195410

Meyer, A., Lerner, M. D., De Los Reyes, A., Laird, R. D., & Hajcak, G. (2017). Considering ERP difference scores as individual difference measures: Issues with subtraction and alternative approaches. Psychophysiology, 54, 114–122. http://dx.doi.org/10.1111/psyp.12664

Moser, J. S., Moran, T. P., & Jendrusina, A. A. (2012). Parsing relationships between dimensions of anxiety and action monitoring brain potentials in female undergraduates. Psychophysiology, 49, 3–10. http://dx.doi.org/10.1111/j.1469-8986.2011.01279.x

Mullins-Sweatt, S. N., Jamerson, J. E., Samuel, D. B., Olson, D. R., & Widiger, T. A. (2006). Psychometric properties of an abbreviated instrument of the five-factor model. Assessment, 13, 119–137. http://dx.doi.org/10.1177/1073191106286748

Munro, G. E. S., Dywan, J., Harris, G. T., McKee, S., Unsal, A., & Segalowitz, S. J. (2007). ERN varies with degree of psychopathy in an emotion discrimination task. Biological Psychology, 76, 31–42. http://dx.doi.org/10.1016/j.biopsycho.2007.05.004

Muthén, L. K., & Muthén, B. O. (2017). Mplus user's guide (7th ed.). Los Angeles, CA: Author. (Original work published 1998)

Olvet, D. M., & Hajcak, G. (2009). Reliability of error-related brain activity. Brain Research, 1284, 89–99. http://dx.doi.org/10.1016/j.brainres.2009.05.079

Pailing, P. E., & Segalowitz, S. J. (2004). The error-related negativity as a state and trait measure: Motivation, personality, and ERPs in response to errors. Psychophysiology, 41, 84–95. http://dx.doi.org/10.1111/1469-8986.00124

Patrick, C. J., Venables, N. C., Yancey, J. R., Hicks, B. M., Nelson, L. D., & Kramer, M. D. (2013). A construct-network approach to bridging diagnostic and physiological domains: Application to assessment of externalizing psychopathology. Journal of Abnormal Psychology, 122, 902–916. http://dx.doi.org/10.1037/a0032807

Proudfit, G. H. (2015). The reward positivity: From basic research on reward to a biomarker for depression. Psychophysiology, 52, 449–459. http://dx.doi.org/10.1111/psyp.12370

Revelle, W. (2014). psych: Procedures for psychological, psychometric, and personality research. Evanston, IL: Northwestern University.

Samuel, D. B., & Lynam, D. R. (2017). Five-factor model rating form. Unpublished manuscript.

Samuel, D. B., Mullins-Sweatt, S. N., & Widiger, T. A. (2013). An investigation of the factor structure and convergent and discriminant validity of the five-factor model rating form. Assessment, 20, 24–35. http://dx.doi.org/10.1177/1073191112455455

Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47, 609–612. http://dx.doi.org/10.1016/j.jrp.2013.05.009

Segarra, P., Poy, R., López, R., & Moltó, J. (2014). Characterizing Carver and White's BIS/BAS subscales using the five factor model of personality. Personality and Individual Differences, 61–62, 18–23. http://dx.doi.org/10.1016/j.paid.2013.12.027

Stahl, J., Acharki, M., Kresimon, M., Völler, F., & Gibbons, H. (2015). Perfect error processing: Perfectionism-related variations in action monitoring and error processing mechanisms. International Journal of Psychophysiology, 97, 153–162. http://dx.doi.org/10.1016/j.ijpsycho.2015.06.002

Sutton, S., Braren, M., Zubin, J., & John, E. R. (1965). Evoked-potential correlates of stimulus uncertainty. Science, 150, 1187–1188. http://dx.doi.org/10.1126/science.150.3700.1187

Suzuki, T., Griffin, S. A., & Samuel, D. B. (2016). Capturing the DSM–5 alternative personality disorder model traits in the five-factor model's nomological net. Journal of Personality, 85, 220–231. http://dx.doi.org/10.1111/jopy.12235

R Core Team. (2017). R language definition. Vienna, Austria: R Foundation for Statistical Computing.

van den Berg, S. M., de Moor, M. H., Verweij, K. J., Krueger, R. F., Luciano, M., Arias Vasquez, A., . . . the Generation Scotland. (2016). Meta-analysis of genome-wide association studies for extraversion: Findings from the Genetics of Personality Consortium. Behavior Genetics, 46, 170–182. http://dx.doi.org/10.1007/s10519-015-9735-5

Wascher, E., & Getzmann, S. (2014). Rapid mental fatigue amplifies age-related attentional deficits. Journal of Psychophysiology, 28, 215–224. http://dx.doi.org/10.1027/0269-8803/a000127

Westen, D., & Rosenthal, R. (2003). Quantifying construct validity: Two simple measures. Journal of Personality and Social Psychology, 84, 608–618. http://dx.doi.org/10.1037/0022-3514.84.3.608

Yancey, J. R., Venables, N. C., Hicks, B. M., & Patrick, C. J. (2013). Evidence for a heritable brain basis to deviance-promoting deficits in self-control. Journal of Criminal Justice, 41, 309–317. http://dx.doi.org/10.1016/j.jcrimjus.2013.06.002

Received June 19, 2017
Revision received March 30, 2018
Accepted April 6, 2018
