Random Field Theory
Mkael Symmonds, Bahador Bahrami


Page 1: Random Field Theory Mkael Symmonds, Bahador Bahrami

Random Field Theory

Mkael Symmonds, Bahador Bahrami


Page 3: Random Field Theory Mkael Symmonds, Bahador Bahrami

Overview

Spatial smoothing
Statistical inference
The multiple comparison problem
… and what to do about it


Page 5: Random Field Theory Mkael Symmonds, Bahador Bahrami

Statistical inference

Aim – to decide if the data represent convincing evidence of the effect we are interested in.

How – perform a statistical test across the whole brain volume to tell us how likely our data are to have come about by chance (the null distribution).

Page 6: Random Field Theory Mkael Symmonds, Bahador Bahrami

Inference at a single voxel

NULL hypothesis, H0: activation is zero.

p-value = p(t > t-value | H0)

[Figure: t-distribution showing the observed t-value = 2.42 and the critical threshold t = 2.02 at alpha = 0.025]

p-value: probability of getting a value of t at least as extreme as 2.42 from the t-distribution (= 0.01).

As p < α, we reject the null hypothesis.
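The single-voxel computation above can be reproduced in a few lines of Python; the degrees of freedom are not given on the slide, so df = 40 is an assumption (it matches the α = 0.025 critical value of about 2.02 shown in the figure).

    # Minimal sketch of single-voxel inference; df = 40 is a hypothetical value.
    from scipy import stats

    t_value = 2.42
    df = 40                      # assumed degrees of freedom
    alpha = 0.025

    p = stats.t.sf(t_value, df)  # P(t > 2.42 | H0), upper tail
    print(f"p = {p:.3f}")        # roughly 0.01
    print("reject H0" if p < alpha else "do not reject H0")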

Page 7: Random Field Theory Mkael Symmonds, Bahador Bahrami

Sensitivity and Specificity

                              ACTION
TRUTH                         Don't reject     Reject
H0 True (chance)              TN               FP (type I error)
H0 False (not by chance)      FN               TP

Specificity = TN / (# H0 True)  = TN / (TN + FP) = 1 - α
Sensitivity = TP / (# H0 False) = TP / (TP + FN) = power

Page 8: Random Field Theory Mkael Symmonds, Bahador Bahrami

Many statistical tests

In functional imaging there are many voxels, therefore many statistical tests.

If we do not know where in the brain our effect will occur, the hypothesis relates to the whole volume of statistics in the brain.

We would reject H0 if the entire family of statistical values is unlikely to have arisen from a null distribution – a family-wise hypothesis.

The risk of error we are prepared to accept is called the Family-Wise Error (FWE) rate – what is the likelihood that the family of voxel values could have arisen by chance?

Page 9: Random Field Theory Mkael Symmonds, Bahador Bahrami

How to test a family-wise hypothesis?

Height thresholding

This can localise significant test results

Page 10: Random Field Theory Mkael Symmonds, Bahador Bahrami

How to set the threshold?

Should we use the same alpha as when we perform inference at a single voxel?

Page 11: Random Field Theory Mkael Symmonds, Bahador Bahrami

Overview

Spatial smoothing
Statistical inference
The multiple comparison problem
… and what to do about it

Page 12: Random Field Theory Mkael Symmonds, Bahador Bahrami

How to set the threshold?

Signal + Noise

11.3% 11.3% 12.5% 10.8% 11.5% 10.0% 10.7% 11.2% 10.2% 9.5%

Use of ‘uncorrected’ alpha = 0.1

Percentage of Null Pixels that are False Positives

LOTS OF SIGNIFICANT ACTIVATIONS OUTSIDE OF OUR SIGNAL BLOB!
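A quick Python simulation makes the same point: thresholding pure noise at an uncorrected α = 0.1 flags roughly 10% of null voxels, much like the percentages above (the image size and random seed here are arbitrary).

    # Thresholding a pure-noise image at uncorrected alpha = 0.1.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha = 0.1
    z_thresh = stats.norm.isf(alpha)          # one-tailed z threshold

    noise = rng.standard_normal((100, 100))   # null "image" of z-scores
    false_pos_rate = (noise > z_thresh).mean()
    print(f"{false_pos_rate:.1%} of null voxels exceed the threshold")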

Page 13: Random Field Theory Mkael Symmonds, Bahador Bahrami

How to set the threshold?

So, if we see 1 t-value above our uncorrected threshold in the family of tests, this is not good evidence against the family-wise null hypothesis.

If we are prepared to accept a false positive rate of 5%, we need a threshold such that, for the entire family of statistical tests, there is a 5% chance of there being one or more t-values above that threshold.

Page 14: Random Field Theory Mkael Symmonds, Bahador Bahrami

Bonferroni Correction

For one voxel (all values from a null distribution):
– Probability of a result greater than the threshold = α
– Probability of a result less than the threshold = 1 - α

For n voxels (all values from a null distribution):
– Probability of all n results being less than the threshold = (1 - α)^n
– Probability of one (or more) tests being greater than the threshold = 1 - (1 - α)^n ≈ n·α (as α is small)

This is the FAMILY-WISE ERROR RATE.

Page 15: Random Field Theory Mkael Symmonds, Bahador Bahrami

Bonferroni Correction

So, set P_FWE ≤ n·α, which gives a per-voxel threshold α = P_FWE / n.

Should we use the Bonferroni correction for imaging data?
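As a concrete Python sketch, using the 10,000-voxel example from the next slide, the Bonferroni threshold and its z-score equivalent are:

    # Bonferroni-corrected per-voxel threshold for a desired FWE rate.
    from scipy import stats

    p_fwe = 0.05
    n_voxels = 10_000

    alpha_per_voxel = p_fwe / n_voxels          # 0.000005
    z_thresh = stats.norm.isf(alpha_per_voxel)  # about 4.42
    print(f"alpha per voxel = {alpha_per_voxel:g}, z threshold = {z_thresh:.2f}")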

Page 16: Random Field Theory Mkael Symmonds, Bahador Bahrami

NULL HYPOTHESIS TRUE

10,000 tests 5% FWE rate

Apply Bonferroni correction to give threshold of 0.05/10000 = 0.000005

This corresponds to a z-score of 4.42

We expect only 5 out of 100 such images to have one or more z-scores > 4.42

100 x 100 voxels – normally distributed independent random numbers

100 x 100 voxels averaged (smoothed) – now only 10 x 10 independent numbers in our image

The appropriate Bonferroni correction is 0.05/100 = 0.0005

This corresponds to z-score = 3.29

Only 5/100 such images will have one or more z-scores > 3.29 by chance
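The 5-in-100 claim for the independent case can be checked empirically; a Python sketch (simulation sizes and seed are arbitrary):

    # How often does a 100 x 100 image of independent null z-scores
    # contain at least one value above 4.42? Expect roughly 5%.
    import numpy as np

    rng = np.random.default_rng(1)
    n_images, shape, z_thresh = 10_000, (100, 100), 4.42

    n_with_fwe = sum(rng.standard_normal(shape).max() > z_thresh
                     for _ in range(n_images))
    print(f"{n_with_fwe / n_images:.1%} of null images had a family-wise error")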

Page 17: Random Field Theory Mkael Symmonds, Bahador Bahrami

Spatial correlation

[Figure: independent voxels vs. spatially correlated voxels]

Bonferroni assumes independent voxels, so it is too conservative for brain images – but how can we tell how many independent observations there are?

Sources of spatial correlation: spatial pre-processing, physiological correlation, smoothing.

Page 18: Random Field Theory Mkael Symmonds, Bahador Bahrami

Overview

Spatial smoothing
Statistical inference
The multiple comparison problem
… and what to do about it

Page 19: Random Field Theory Mkael Symmonds, Bahador Bahrami

Spatial smoothing

Why do you want to do it?

Increases signal-to-noise ratio
Enables averaging across subjects
Allows use of Gaussian Random Field Theory for thresholding

Page 20: Random Field Theory Mkael Symmonds, Bahador Bahrami

Spatial Smoothing

What does it do?

Reduces effect of high frequency variation in functional imaging data, “blurring sharp edges”.

Page 21: Random Field Theory Mkael Symmonds, Bahador Bahrami

Spatial Smoothing

How is it done?

Typically in functional imaging, a Gaussian smoothing kernel is used:
– Shape similar to the normal distribution bell curve
– Width usually described using the “full width at half maximum” (FWHM) measure, e.g. a kernel at 10mm FWHM

[Figure: Gaussian kernel, x-axis from -5 to 5]

Page 22: Random Field Theory Mkael Symmonds, Bahador Bahrami

Spatial Smoothing

How is it done?

The Gaussian kernel defines the shape of the function used successively to calculate a weighted average of each data point with respect to its neighbouring data points.

Raw data × Gaussian function = Smoothed data
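In practice the convolution is done with a library routine; a minimal Python sketch, assuming isotropic 2 mm voxels and a 10 mm FWHM kernel (both values are illustrative):

    # Gaussian smoothing of a 3D image at a given FWHM.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    fwhm_mm, voxel_mm = 10.0, 2.0
    # FWHM = 2 * sqrt(2 * ln 2) * sigma, so convert FWHM to sigma in voxel units
    sigma_vox = (fwhm_mm / voxel_mm) / np.sqrt(8 * np.log(2))

    raw = np.random.default_rng(0).standard_normal((64, 64, 40))  # stand-in data
    smoothed = gaussian_filter(raw, sigma=sigma_vox)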


Page 24: Random Field Theory Mkael Symmonds, Bahador Bahrami

Spatial correlation

[Figure: independent voxels vs. spatially correlated voxels]

Bonferroni assumes independent voxels, so it is too conservative for brain images – but how can we tell how many independent observations there are?

Sources of spatial correlation: spatial pre-processing, physiological correlation, smoothing.

Page 25: Random Field Theory Mkael Symmonds, Bahador Bahrami

Overview

Spatial smoothing
Statistical inference
The multiple comparison problem
… and what to do about it

Page 26: Random Field Theory Mkael Symmonds, Bahador Bahrami

Random Field Theory (ii)

Methods for Dummies 2008
Mkael Symmonds
Bahador Bahrami

Page 27: Random Field Theory Mkael Symmonds, Bahador Bahrami

What is a random field?

A random field is a list of random numbers whose values are mapped onto a space (of n dimensions). Values in a random field are usually spatially correlated in one way or another; in its most basic form this might mean that adjacent values do not differ as much as values that are further apart.

Page 28: Random Field Theory Mkael Symmonds, Bahador Bahrami

Why random field?

To characterise the properties of our study's statistical parametric map under the NULL hypothesis.
– NULL hypothesis = if all predictions were wrong, all activations were merely driven by chance, and each voxel value was a random number.
– What would the probability of getting a certain z-score for a voxel be in this situation?

Page 29: Random Field Theory Mkael Symmonds, Bahador Bahrami

Random Field

Page 30: Random Field Theory Mkael Symmonds, Bahador Bahrami

Thresholded @ Zero

Thresholded @ one

Page 31: Random Field Theory Mkael Symmonds, Bahador Bahrami

Thresholded @ three

Measurement 1: number of blobs = 4
Measurement 2: number of blobs = 0
Measurement 3: number of blobs = 1
…
Measurement 1000000000: number of blobs = 2

Average number of blobs = (4 + 0 + 1 + … + 2) / 1000000000
= the probability of getting a z-score > 3 by chance
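The same thought experiment is easy to run in Python; the field size, smoothness and number of simulated fields below are arbitrary choices, not values from the slides.

    # Count supra-threshold "blobs" in simulated smooth null fields.
    import numpy as np
    from scipy.ndimage import gaussian_filter, label

    rng = np.random.default_rng(0)
    z_thresh, n_fields, counts = 3.0, 1000, []

    for _ in range(n_fields):
        field = gaussian_filter(rng.standard_normal((128, 128)), sigma=3)
        field /= field.std()                  # rescale back to ~unit variance
        _, n_blobs = label(field > z_thresh)  # connected supra-threshold clusters
        counts.append(n_blobs)

    print(f"average number of blobs per null field: {np.mean(counts):.3f}")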

Page 32: Random Field Theory Mkael Symmonds, Bahador Bahrami

Therefore, for every z-score, the expected number of blobs = the probability of rejecting the null hypothesis erroneously (α).

Page 33: Random Field Theory Mkael Symmonds, Bahador Bahrami

The million-dollar question is:

Thresholding the random field at which z-score produces an average number of blobs < 0.05?

Or: which z-score has a probability of 0.05 of rejecting the null hypothesis erroneously?
– Any z-scores above that will be significant!

Page 34: Random Field Theory Mkael Symmonds, Bahador Bahrami

So, it all comes down to estimating the average number of blobs (that you expect by chance) in your SPM.

Random field theory does that for you!

Page 35: Random Field Theory Mkael Symmonds, Bahador Bahrami

Expected number of blobs in a random field depends on …

Chosen threshold z-score

Volume of the search region

Roughness (i.e., 1/smoothness) of the search region: the spatial extent of correlation among values in the field, described by FWHM

– Volume and roughness are combined into RESELs
– Where does SPM get R from? It is calculated from the residuals (RPV.img)
– Given R and z, RFT calculates the expected number of blobs for you:

E(EC) = R (4 ln 2) (2π)^(-3/2) z exp(-z²/2)
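The formula is easy to evaluate directly; a Python sketch that also searches for the z-score giving an expected 0.05 blobs (the resel count R = 100 is a hypothetical value, not one from the slides):

    # Expected Euler characteristic and the corresponding FWE threshold.
    import numpy as np
    from scipy.optimize import brentq

    def expected_ec(z, resels):
        # E(EC) = R (4 ln 2) (2 pi)^(-3/2) z exp(-z^2 / 2)
        return resels * (4 * np.log(2)) * (2 * np.pi) ** (-1.5) * z * np.exp(-z**2 / 2)

    R = 100.0                                           # hypothetical resel count
    z_fwe = brentq(lambda z: expected_ec(z, R) - 0.05, 2.0, 6.0)
    print(f"E(EC) at z = 3: {expected_ec(3.0, R):.3f}")
    print(f"z threshold for P_FWE = 0.05: {z_fwe:.2f}")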

Page 36: Random Field Theory Mkael Symmonds, Bahador Bahrami

Probability of Family-Wise Error

P_FWE = average number of blobs under the null hypothesis

α = P_FWE = R (4 ln 2) (2π)^(-3/2) z exp(-z²/2)

Page 39: Random Field Theory Mkael Symmonds, Bahador Bahrami

Thank you

References:
– Brett, Penny & Kiebel. An Introduction to Random Field Theory. Chapter from Human Brain Function.
– Will Penny's slides (http://www.fil.ion.ucl.ac.uk/spm/course/slides05/ppt/infer.ppt#324,1,Random Field Theory)
– Jean-Etienne Poirrier's slides (http://www.poirrier.be/~jean-etienne/presentations/rft/spm-rft-slides-poirrier06.pdf)
– Tom Nichols' lecture in the SPM Short Course (2006)

Page 40: Random Field Theory Mkael Symmonds, Bahador Bahrami

False Discovery Rate

At threshold u1 (e.g. t-scores from regions that truly do and do not activate):

o o o o o o o x x x o o x x x o x x x x

                      ACTION
TRUTH                 Don't reject    Reject
H True (o)            TN = 7          FP = 3
H False (x)           FN = 0          TP = 10

FDR = FP / (# Reject)  = 3/13 = 23%
α   = FP / (# H True)  = 3/10 = 30%

Page 41: Random Field Theory Mkael Symmonds, Bahador Bahrami

False Discovery Rate

At threshold u2 (same t-scores, higher threshold):

o o o o o o o x x x o o x x x o x x x x

                      ACTION
TRUTH                 Don't reject    Reject
H True (o)            TN = 9          FP = 1
H False (x)           FN = 3          TP = 7

FDR = FP / (# Reject)  = 1/8 = 13%
α   = FP / (# H True)  = 1/10 = 10%
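The two rates contrasted above follow directly from the confusion counts; a small Python check:

    # FDR and observed false-positive rate at the two thresholds.
    def fdr_and_alpha(tn, fp, fn, tp):
        fdr = fp / (fp + tp)        # FP / (number rejected)
        alpha = fp / (tn + fp)      # FP / (number truly null)
        return fdr, alpha

    print(fdr_and_alpha(tn=7, fp=3, fn=0, tp=10))   # u1: ~(0.23, 0.30)
    print(fdr_and_alpha(tn=9, fp=1, fn=3, tp=7))    # u2: ~(0.13, 0.10)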

Page 42: Random Field Theory Mkael Symmonds, Bahador Bahrami

False Discovery Rate

Signal

Signal+Noise

Noise

Page 43: Random Field Theory Mkael Symmonds, Bahador Bahrami

Control of Familywise Error Rate at 10%
Occurrence of Familywise Error (images containing any false positive are marked "FWE")

Control of False Discovery Rate at 10%
Percentage of Activated Pixels that are False Positives:
6.7% 10.4% 14.9% 9.3% 16.2% 13.8% 14.0% 10.5% 12.2% 8.7%
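The slides do not spell out how an FDR-controlling threshold is chosen; the standard Benjamini-Hochberg step-up rule is one common choice, sketched here in Python with made-up p-values:

    # Benjamini-Hochberg: reject all p-values up to the largest p_(i) <= q * i / n.
    import numpy as np

    def bh_threshold(p_values, q=0.10):
        p = np.sort(np.asarray(p_values))
        n = len(p)
        below = p <= q * np.arange(1, n + 1) / n
        return p[below].max() if below.any() else 0.0  # reject all p <= this value

    p_vals = [0.001, 0.008, 0.039, 0.041, 0.09, 0.20, 0.35, 0.55, 0.70, 0.95]
    print(f"reject H0 for p <= {bh_threshold(p_vals):.3f}")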

Page 44: Random Field Theory Mkael Symmonds, Bahador Bahrami

Cluster Level Inference

We can increase sensitivity by trading off anatomical specificity.

Given a voxel-level threshold u, we can compute the likelihood (under the null hypothesis) of getting a cluster containing at least n voxels: CLUSTER-LEVEL INFERENCE.

Similarly, we can compute the likelihood of getting c clusters each having at least n voxels: SET-LEVEL INFERENCE.
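SPM obtains these cluster probabilities analytically from random field theory; purely to illustrate the idea, here is an empirical Python sketch that estimates P(largest null cluster ≥ n voxels) by simulation (field size, smoothness, threshold and the observed cluster size are all made-up values):

    # Empirical null distribution of the largest supra-threshold cluster.
    import numpy as np
    from scipy.ndimage import gaussian_filter, label

    rng = np.random.default_rng(0)
    u, n_sims = 3.09, 500

    def max_cluster_size(field, thresh):
        labels, n = label(field > thresh)
        return np.bincount(labels.ravel())[1:].max() if n else 0

    null_max = []
    for _ in range(n_sims):
        f = gaussian_filter(rng.standard_normal((64, 64, 32)), sigma=2)
        null_max.append(max_cluster_size(f / f.std(), u))

    n_obs = 82                                   # illustrative observed cluster size
    p_cluster = np.mean(np.array(null_max) >= n_obs)
    print(f"P(max null cluster >= {n_obs} voxels) ~ {p_cluster:.3f}")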

Page 45: Random Field Theory Mkael Symmonds, Bahador Bahrami

Levels of inference

[Figure: SPM results with three clusters of n = 82, n = 32 and n = 12 voxels]

Set-level: P(c ≥ 3 | n ≥ 12, u ≥ 3.09) = 0.019
– At least 3 clusters above threshold.

Cluster-level: P(c ≥ 1 | n ≥ 82, t ≥ 3.09) = 0.029 (corrected)
– At least one cluster with at least 82 voxels above threshold.

Voxel-level: P(c ≥ 1 | n > 0, t ≥ 4.37) = 0.048 (corrected)
– At least one cluster with an unspecified number of voxels above threshold.