horikawa-t/neuro2013_20130607_v2.pdf · 2013/06/07
TRANSCRIPT
Neural Decoding of Visual Dream Contents (視覚的夢内容の神経デコーディング)
堀川友慈 (Tomoyasu Horikawa),1,2 玉置應子 (Masako Tamaki),1 宮脇陽一 (Yoichi Miyawaki),3,1 神谷之康 (Yukiyasu Kamitani),1,2
E-mail: [email protected] (atr.jp)
Introduction

Background
Dreaming is a subjective experience during sleep, often accompanied by vivid visual contents. Previous research has attempted to link physiological states with dreaming, but has not demonstrated how specific visual dream contents are represented in brain activity.

Questions
Can we read out dream contents from human brain activity during sleep? How are visual dream contents represented in the brain?
Approach
• Measure functional magnetic resonance imaging (fMRI) activity in the visual cortex simultaneously with polysomnography (electroencephalography [EEG], electromyography [EMG], and electrooculography [EOG])
• Collect verbal dream reports by awakening the subject during the sleep-onset period
• Use a lexical database (WordNet) to systematically extract visual dream contents (objects, scenes, etc.) from the verbal dream reports
• Construct machine learning decoders using fMRI responses to natural images depicting the extracted contents
[Figure: Base synset selection — a WordNet tree groups reported visual words (e.g., hotel, house, building under artefact/structure; street under way) into base synsets. Example base synsets with dream indices: male, female, book, building, character, commodity, computer screen, covering, dwelling, electronic equipment, food, furniture, car, mercantile establishment, point, region, representation, street. Web images depicting each base synset were collected for decoder training.]
Methods
From the collected reports, words describing visual objects or scenes were manually extracted and mapped onto WordNet, a lexical database in which semantically similar words are grouped into “synsets” (synonym sets) in a hierarchical structure. Using the hierarchy, the extracted visual words were grouped into “base synsets” that appeared in at least 10 reports for each subject (26, 18, and 16 synsets).
Three subjects (S1−S3) participated in the fMRI sleep (nap) experiments in which they were awakened when an EEG signature (theta power increase) was detected and were asked to give a verbal report freely describing their visual experiences.
[Figure: Experimental overview — EEG sleep stages are monitored over time; fMRI volumes obtained immediately before each awakening are followed by a report period after waking. The fMRI activity pattern is fed to a machine learning decoder, assisted by lexical and image databases, to predict dream contents.]
Awakenings per subject:

Subject | Total awakenings (total exps) | With visual content | No visual content
S1      | 307 (10)                      | 249                 | 58
S2      | 281 (7)                       | 220                 | 61
S3      | 266 (7)                       | 203                 | 63
Multiple awakening procedure
1: Let the subject sleep
2: Monitor EEG to detect a characteristic EEG signature (theta power increase)
3: Awaken the subject and ask them to report dream contents
Repeat steps 1−3 to collect multiple dream reports and fMRI data until at least 200 awakenings with a visual report were collected for each subject
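The awakening trigger can be sketched as follows. This is a minimal, hypothetical illustration: the poster only states that a theta power increase was used as the EEG signature, so the sampling rate, band limits, and threshold ratio here are assumptions.

```python
# Minimal sketch of the theta-power awakening trigger (illustrative only;
# fs, band limits, and threshold ratio are assumptions, not the study's values).
import numpy as np
from scipy.signal import welch

def theta_power(eeg_segment, fs=256.0):
    """Mean power spectral density in the 4-7 Hz theta band."""
    freqs, psd = welch(eeg_segment, fs=fs, nperseg=min(len(eeg_segment), 1024))
    band = (freqs >= 4.0) & (freqs <= 7.0)
    return psd[band].mean()

def should_awaken(eeg_segment, baseline_power, ratio=2.0, fs=256.0):
    """Flag an awakening when theta power exceeds `ratio` x a baseline estimate."""
    return theta_power(eeg_segment, fs) > ratio * baseline_power
```

In an online setting, `should_awaken` would be evaluated on a sliding EEG window against a wake-baseline theta estimate.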
fMRI data obtained immediately before each awakening were labeled with the “dream content vector,” each element of which indicated the presence/absence of a base synset in the subsequent report. Web images depicting each base synset were collected for decoder training from ImageNet, an image database in which web images are grouped according to WordNet, or from Google Images. Multivoxel patterns in the higher visual cortex (HVC; the ventral region covering the lateral occipital complex [LOC], fusiform face area [FFA], and parahippocampal place area [PPA]), the lower visual cortex (LVC; V1−V3 combined), or the individual subareas were used as the input for the decoders.
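The dream content vector described above can be built mechanically from a report's extracted synsets. The synset list below is a small hypothetical subset, not the study's actual base synsets:

```python
# Illustrative construction of the "dream content vector": one binary element
# per base synset, 1 if that synset appeared in the post-awakening report.
# BASE_SYNSETS is a hypothetical subset used only for demonstration.
BASE_SYNSETS = ["male", "female", "car", "street", "food", "book"]

def content_vector(reported_synsets, base_synsets=BASE_SYNSETS):
    """Binary presence/absence vector over the base synsets."""
    reported = set(reported_synsets)
    return [1 if s in reported else 0 for s in base_synsets]
```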
Base synset selection
Dream content coding
[Figure: Dream content coding — each dream report is coded as a binary vector over the base synsets (e.g., male, car), with 1/0 indicating the presence/absence of each synset. Pairwise decoders (e.g., male vs. car) are applied to the pre-awakening fMRI pattern of the sleeping subject.]
[Figure: Decoding accuracy (%) for synset pairs within vs. across meta-categories; axis range 50−80%.]
Results: Pairwise dream content classification
A binary SVM classifier was trained on the fMRI responses to stimulus images of two base synsets, and tested on the dream samples that contained exactly one of the two synsets.
All synset pairs in which each synset appeared in at least 10 reports without co-occurring with the other were tested.
The performance over all pairs was compared between decoders trained with the original and with label-shuffled data. The mean decoding accuracy was significantly higher than that of the label-shuffled decoders (Wilcoxon rank-sum test, p < 0.001).
Pairs with a high cross-validation decoding accuracy within the stimulus/dream data were also selected; these selected pairs showed higher decoding accuracies.
The decoding accuracy for synset pairs across meta-categories was significantly higher than that for synset pairs within meta-categories.
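The pairwise scheme above, a classifier trained on stimulus-evoked patterns and tested on dream patterns, can be sketched with synthetic data. The class means, noise level, and sample counts are invented for illustration; the study used real multivoxel fMRI responses:

```python
# Sketch of one pairwise decoder: a linear SVM trained on simulated
# stimulus-evoked multivoxel patterns for two synsets, then applied to
# simulated pre-awakening "dream" patterns. All data here are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_voxels = 100

# Synthetic class means standing in for synset-specific activity patterns.
mean_a, mean_b = rng.standard_normal((2, n_voxels))

def simulate(mean, n):
    """Draw n noisy multivoxel patterns around a class mean."""
    return mean + 0.5 * rng.standard_normal((n, n_voxels))

X_train = np.vstack([simulate(mean_a, 40), simulate(mean_b, 40)])
y_train = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="linear").fit(X_train, y_train)

# "Dream" test samples containing one of the two synsets exclusively.
X_dream = np.vstack([simulate(mean_a, 10), simulate(mean_b, 10)])
y_dream = np.array([0] * 10 + [1] * 10)
accuracy = clf.score(X_dream, y_dream)
```

With real data the transfer from perception to dreaming is the interesting part; here the train/test distributions are identical by construction.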
[Figure: Distribution of decoding accuracies for unshuffled vs. label-shuffled decoders — all pairs (405) and selected pairs (97); HVC; chance level 50%; S1−S3 pooled.]
Discriminant function for classification between synsets k and l:

f_{kl}(x) = \sum_{d=1}^{D} w_d x_d + w_0

D: number of voxels; w_d: weight parameters; x_d: voxel values; w_0: bias
[Figure: Multi-label decoder — detector scores for multiple synsets (e.g., male, food, car, street) computed from the sleeping subject's fMRI pattern.]
Results: Dream content detection
The presence/absence of each base synset was predicted by a “synset detector” constructed from a combination of the pairwise discriminant functions.
The synset detector provided a continuous score indicating how likely the synset was present in the dream report.
Detector function for synset k:

f_k(x) = \frac{1}{N-1} \sum_{l \neq k} \frac{f_{kl}(x)}{\| w_{kl} \|}

N: number of base synsets; w_{kl}: weight parameters; x: voxel pattern; f_{kl}: discriminant function for synsets k and l
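The detector function can be computed directly from the pairwise linear decoders. In this sketch the weights, biases, and voxel pattern are random placeholders; only the combination rule follows the formula above:

```python
# Sketch of the synset detector: average the weight-normalized pairwise
# discriminant outputs f_kl(x) over all other synsets l.
# W, b, and x are random placeholders standing in for trained decoders
# and a measured multivoxel pattern.
import numpy as np

rng = np.random.default_rng(0)
N, D = 5, 100                         # base synsets, voxels
W = rng.standard_normal((N, N, D))    # W[k, l]: weights of the k-vs-l decoder
b = rng.standard_normal((N, N))       # biases w_0
x = rng.standard_normal(D)            # multivoxel pattern before awakening

def detector_score(k, x):
    """f_k(x) = 1/(N-1) * sum over l != k of f_kl(x) / ||w_kl||."""
    total = 0.0
    for l in range(N):
        if l == k:
            continue
        f_kl = W[k, l] @ x + b[k, l]          # pairwise discriminant output
        total += f_kl / np.linalg.norm(W[k, l])
    return total / (N - 1)

scores = [detector_score(k, x) for k in range(N)]
```

Normalizing by ||w_kl|| puts the pairwise outputs on a common scale (signed distance to the decision boundary) before averaging.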
AUC averaged within each meta-category
While V1−V3 did not show different performance across meta-categories, the higher visual areas showed a marked dependence on meta-categories.
[Figure: AUC averaged within each meta-category (Object, Human, Scene, Others) for LVC and HVC, for LOC, FFA, and PPA, and for V1−V3; AUC axis 0.5−0.7; S1−S3 pooled.]
[Figure: ROC curves (true positive vs. false positive rate) for individual base synsets in S2, grouped by meta-category (Object, Scene, Others, Human). AUC per synset: region 0.794, book 0.776, street 0.774, character 0.767, mercantile establishment 0.760, male 0.713, electronic equipment 0.700, building 0.664, covering 0.661, female 0.647, point 0.647, car 0.612, dwelling 0.605, commodity 0.596, furniture 0.562, computer screen 0.553, food 0.541, representation 0.448.]
Performance was evaluated by the area under the ROC curve (AUC). 18 of the total 60 synsets from the three subjects were detected at above-chance levels (Wilcoxon rank-sum test, p < 0.05).
ROC analysis for individual base synsets
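The AUC evaluation amounts to comparing the detector's continuous scores against the binary presence labels. A minimal sketch with synthetic scores and labels (not the study's data):

```python
# Sketch of the per-synset ROC analysis: detector scores are compared
# against binary presence labels and summarized by AUC.
# Labels and scores here are synthetic stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)        # was the synset in the report?
scores = labels + rng.standard_normal(200)   # informative detector output
auc = roc_auc_score(labels, scores)          # 0.5 = chance, 1.0 = perfect
```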
Not only the reported synsets but also the unreported synsets having a high co-occurrence with the reported synsets showed high scores.
Time course of synset scores

[Figure: Averaged synset scores — normalized score vs. time to awakening (48 to −24 s) for reported synsets and for unreported synsets with high/low co-occurrence; S1−S3 pooled. Individual synset scores — example time courses for S2's 163rd dream (commodity, female, male) and 134th dream (book, character, computer screen), colored by reported / unreported (high co-occurrence) / unreported (low co-occurrence).]
Results: Dream identification

[Figure: Identification schematic — the decoder output is compared with the true content vector and N candidate vectors to find the most similar dream.]

The output scores of the detector functions were used to identify the true dream content vector among a variable number of candidate vectors.
[Figure: Dream identification accuracy (%) vs. candidate set size (2, 4, 8, 16, 32) for original and extended synset vectors; chance level shown; S1−S3 pooled.]

The performance exceeded the chance level across all set sizes. The extended synset vectors were better identified.
Dream identification
We calculated the correlation coefficient between the score vector and each of the candidate vectors, and selected the candidate with the highest correlation.
The same analysis was performed with extended dream content vectors, in which unreported synsets with a high co-occurrence (top 15% conditional probability) with the reported synsets were assumed to be present.
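The identification rule, pick the candidate content vector most correlated with the detector score vector, can be sketched as follows. The score vector and candidates below are toy examples:

```python
# Sketch of dream identification by correlation: correlate the detector
# score vector with each candidate content vector and pick the best match.
# Inputs here are toy examples, not the study's vectors.
import numpy as np

def identify(score_vector, candidates):
    """Return the index of the candidate most correlated with the scores."""
    corrs = [np.corrcoef(score_vector, c)[0, 1] for c in candidates]
    return int(np.argmax(corrs))
```

Identification succeeds when the true binary vector correlates with the continuous scores more strongly than any decoy does, so accuracy naturally falls as the candidate set grows.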
Summary
This work was supported by grants from SRPBS (MEXT), SCOPE (SOUMU), NICT, the Nissan Science Foundation, and the Ministry of Internal Affairs and Communications program entitled “Novel and innovative R&D making use of brain structures”.
• The multiple awakening procedure allowed dream data (dream reports and fMRI data associated with dreaming) to be collected efficiently.
• Dream contents can be read out by stimulus-trained decoders from the higher visual cortex, suggesting that specific dream contents are represented in activity patterns shared with stimulus perception.
• The performance of individual subareas showed semantic preferences mirroring known stimulus representations.
• High scores for the unreported synsets may indicate implicit dream contents.
Time course of decoding accuracy

The decoding accuracy peaked around 0−10 s before awakening.

[Figure: Decoding accuracy (%) vs. time to awakening (48 to −24 s) for LVC and HVC, all pairs (405) and selected pairs (97); chance level shown; S1−S3 pooled.]
Decoding accuracies across visual areas

HVC showed significantly higher performance than LVC. Analyses of individual areas showed a gradual increase in decoding accuracy along the visual processing pathway.

[Figure: Decoding accuracy (%) for LVC, HVC, V1, V2, V3, LOC, FFA, and PPA, for all and selected pairs; chance level shown; S1−S3 pooled.]
P2-2-108. 1 ATR Computational Neuroscience Laboratories (ATR CNS DNI, Kyoto, Japan), 2 Nara Institute of Science and Technology (NAIST, Nara, Japan), 3 National Institute of Information and Communications Technology (NICT, Kyoto, Japan)
Example dream reports:
"Yes, well, I saw a person. Yes. What it was... It was something like a scene that I hid a key in a place between a chair and a bed and someone took it."
"Um, what I saw now was like, a place with a street and some houses around it..."
Horikawa, T., Tamaki, M., Miyawaki, Y., & Kamitani, Y. "Neural decoding of visual imagery during sleep," Science 340, 639-642 (2013)