

1

Language Processing Models

15-486/782: Artificial Neural Networks

David S. Touretzky

Fall 2006

(This lecture is based in part on notes by David Plaut)

2

Outline

• Deep and surface dyslexia

• Mapping words to meaning

• Distributed representations

• Hinton-Shallice-Plaut model

• Understanding English sentences

• Rohde's CSCP model

3

Dyslexia

• Deep dyslexia: reading via semantic route only

• Cannot read simple non-words like "brund"

• Semantic errors: see PEACH, say "apricot"

• Fewer errors on concrete vs. abstract words

• Part of speech effect: nouns > adjectives > verbs > function words

• Subtypes of deep dyslexia: input, central, output.

• Surface dyslexia: map letters directly to sound (a toy sketch of the two routes follows)

• Can read pronounceable non-words

• Cannot read irregular words: "pint", "yacht"
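
The two routes and their failure modes can be caricatured in a few lines of code. A toy sketch, with a made-up three-word lexicon and a trivial stand-in for letter-to-sound rules (none of this is from the slides); knocking out one route or the other yields the deep and surface profiles above:

```python
# Toy dual-route reader. The lexicon and the "rules" are invented
# stand-ins for illustration, not the actual model.
LEXICON = {"peach": "PEECH", "pint": "PYNT", "yacht": "YOT"}

def lexical_route(word):
    """Semantic/lexical route: whole-word lookup, so non-words fail."""
    return LEXICON.get(word)

def sublexical_route(word):
    """Letter-to-sound route: regular rules only, so irregular words
    like 'pint' or 'yacht' come out regularized (wrong)."""
    return word.upper()   # stand-in for grapheme-to-phoneme rules

def read(word, lexical_ok=True, sublexical_ok=True):
    if lexical_ok and word in LEXICON:
        return lexical_route(word)
    if sublexical_ok:
        return sublexical_route(word)
    return None   # no intact route can handle this word

# Surface dyslexia: lexical route damaged; irregular words get regularized.
print(read("pint", lexical_ok=False))       # "PINT", as if rhyming with "mint"
# Deep dyslexia: sublexical route damaged; non-words become unreadable.
print(read("brund", sublexical_ok=False))   # None
```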

4

Dual-Route Model of Reading


5

Connectionist (Neural Network) Dual-Route Model

6

Error Types in Deep Dyslexia

• Semantic (RIVER ⇒ "ocean")

• Visual (SCANDAL ⇒ "sandals")

• Mixed visual/semantic (TROUBLE ⇒ "terrible")

• Visual-then-semantic (SYMPATHY ⇒ "orchestra")

• Morphological/derivational (LOVELY ⇒ "loving")

• Function-word substitution (FROM ⇒ "with")

Dyslexic patients always produce a mixture of error types, but the distribution varies with the type of dyslexia.
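
One way to make the taxonomy concrete. This is a sketch with invented overlap criteria and thresholds, not the scoring actually used in these studies: call a response visual when the letter strings overlap heavily, semantic when the two words share many semantic features, mixed when both hold:

```python
# Illustrative error classification; the overlap measures and cutoffs
# (0.5, 0.3) are arbitrary choices for this sketch.
def letter_overlap(a, b):
    """Jaccard overlap between the two words' letter sets."""
    return len(set(a) & set(b)) / len(set(a) | set(b))

def feature_overlap(f1, f2):
    """Jaccard overlap between two sets of semantic features."""
    return len(f1 & f2) / len(f1 | f2)

def classify_error(target, response, sem):
    visual = letter_overlap(target, response) > 0.5
    semantic = feature_overlap(sem[target], sem[response]) > 0.3
    if visual and semantic: return "mixed visual/semantic"
    if visual:              return "visual"
    if semantic:            return "semantic"
    return "other"

sem = {"river":   {"water", "flows", "outdoors"},
       "ocean":   {"water", "large", "outdoors"},
       "scandal": {"event", "bad"},
       "sandals": {"clothing", "feet"}}
print(classify_error("river", "ocean", sem))      # semantic
print(classify_error("scandal", "sandals", sem))  # visual
```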

7

Patient Error Distributions

8

Mapping Words to Meaning: Local Word Representation

Damage to hidden layer knocks out individual words.

Hinton, McClelland, and Rumelhart 1986


9

Representing an Arbitrary Mapping in Distributed Form

• Hidden units represent clusters of visually similar words.

• Sememe (output) units must be thresholded to prevent false positives.

Hinton, McClelland, and Rumelhart 1986
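
A minimal numpy sketch of the arrangement (layer sizes, weights, and the threshold are all invented, and a real model would be trained): hidden units pool orthographic input, and sememe activations below threshold are cut off to suppress false positives:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 10 letter-feature inputs, 6 hidden units,
# 12 sememe (semantic-feature) outputs. All weights are random here.
W1 = rng.normal(0.0, 0.5, (10, 6))
W2 = rng.normal(0.0, 0.5, (6, 12))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def read_semantics(letters, threshold=0.7):
    hidden = sigmoid(letters @ W1)       # clusters of visually similar words
    sememes = sigmoid(hidden @ W2)       # graded semantic-feature activity
    return (sememes > threshold).astype(int)  # threshold out false positives

word = rng.integers(0, 2, 10).astype(float)   # a random orthographic pattern
print(read_semantics(word))
```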

10

Distributed Representations in a Feed-Forward Network:

Error Patterns

• Damage to hidden layer does not knock out single words.

• Instead, damage raises the overall error rate.

• Can produce a variety of error types.

• But the output patterns and the distribution of error types do not match human data.

• What's needed: a way to "clean up" the semantics.

11

Attractor Dynamics Can Clean Up the Mapping

12

Hinton & Shallice Model (1991)

Trained with backprop; around 1000 epochs.


13

68 Semantic Features

14

40 Word Vocabulary

15

Semantic Similarity

16

Simulating Brain Damage

• Various methods (sketched in code below):

• Remove some hidden units

• Remove some cleanup units

• Add noise to the weights

• Similar effects, but some differences.
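
Each method is a small operation on trained weight matrices. A sketch with placeholder weights, layer sizes, and lesion parameters (the real simulations' sizes and lesion severities varied):

```python
import numpy as np

rng = np.random.default_rng(1)

def remove_units(W_in, W_out, frac):
    """Lesion a layer by silencing a random fraction of its units."""
    n = W_in.shape[1]
    dead = rng.choice(n, size=int(frac * n), replace=False)
    W_in, W_out = W_in.copy(), W_out.copy()
    W_in[:, dead] = 0.0    # incoming weights of the removed units
    W_out[dead, :] = 0.0   # outgoing weights of the removed units
    return W_in, W_out

def add_weight_noise(W, sigma):
    """Lesion by corrupting every weight with Gaussian noise."""
    return W + rng.normal(0.0, sigma, W.shape)

# Placeholder "trained" weights: orthography -> hidden -> 68 sememe units
# (68 is from the slides; the other sizes here are invented).
W_ih = rng.normal(0.0, 0.3, (28, 40))
W_ho = rng.normal(0.0, 0.3, (40, 68))

W_ih_cut, W_ho_cut = remove_units(W_ih, W_ho, frac=0.2)  # remove hidden units
W_ho_noisy = add_weight_noise(W_ho, sigma=0.4)           # noisy weights
```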


17

Distribution of Error Types

(G)rapheme, (I)ntermediate, (S)emantic, (C)leanup

Chance = pick a word at random

18

Normal Network

19

Lesioned Network

Lesions change the shape of the attractor space.

20

Hinton-Shallice-Plaut Model

Input Network

Output Network

Add a phonological layer to address more error types.


21

Deterministic Boltzmann Machine with Noise (Instead of Backprop)

$\Delta w_{ij} = \langle s_i s_j \rangle^{+} - \langle s_i s_j \rangle^{-}$

$s_j(t) = \lambda\, s_j(t-1) + (1-\lambda)\tanh\!\left(\frac{1}{T}\left(\sum_i s_i(t-1)\, w_{ij} + \nu\right)\right), \quad \nu \sim N(0, \sigma)$

T is a gradually lowering temperature parameter:

high T → flat sigmoid

low T → steep sigmoid (step function)
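
In code, the two update rules above transcribe almost line for line into numpy. A sketch with illustrative parameter choices (λ, σ, the annealing schedule, and the network size are not the published values), omitting the clamping of visible units in the two Boltzmann phases:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20                                  # illustrative number of units
W = rng.normal(0.0, 0.1, (n, n))
W = (W + W.T) / 2.0                     # Boltzmann machines use symmetric weights
np.fill_diagonal(W, 0.0)

def settle(s, W, lam=0.8, sigma=0.1, temps=np.geomspace(5.0, 0.5, 50)):
    """Noisy mean-field settling:
    s_j(t) = lam*s_j(t-1) + (1-lam)*tanh((1/T)*(sum_i s_i(t-1)*w_ij + nu))."""
    for T in temps:                     # anneal: gradually lower the temperature
        nu = rng.normal(0.0, sigma, s.shape)   # nu ~ N(0, sigma)
        s = lam * s + (1 - lam) * np.tanh((s @ W + nu) / T)
    return s

def chl_update(W, s_plus, s_minus, eps=0.05):
    """Contrastive Hebbian learning:
    dw_ij = eps * (<s_i s_j>+ - <s_i s_j>-)."""
    return W + eps * (np.outer(s_plus, s_plus) - np.outer(s_minus, s_minus))

# The plus phase would clamp inputs and targets, the minus phase inputs only;
# clamping is omitted here, so these calls are placeholders for the procedure.
s_plus = settle(np.zeros(n), W)
s_minus = settle(np.zeros(n), W)
W = chl_update(W, s_plus, s_minus)
```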

22

Error Rates for Damage to Input Network

23

Error Rates for Damage to Output Network

24

Concrete vs. Abstract Words

• Assume that concrete/visualizable words have more semantic features than abstract words.

• More features = more recurrent connections: greater opportunity for cleanup.

• Result: the model produces fewer errors for concrete words.


25

Recovery From Damage

• Retraining the network after damage produced faster recovery than the original training.

• Retraining on only a subset of the words produced improvements in all the words.

• Distributed representations produce transfer among words.

• How should patients be trained to help them recover from damage?

• One possibility: focus on words that are non-prototypical for their semantic category.

26

Errors in Normal Subjects

• McLeod, Shallice, and Plaut (2000): normal subjects can be induced to make reading errors by rapid presentation of stimuli.

• Flash a sequence of words in rapid succession (160 msec); each acts as a masking stimulus for the preceding word. Ask subjects to identify the words they saw.

• Results: subjects make a mixture of visual and semantic errors.

27

Attractor Basins

28

Rohde's Sentence Comprehension and Production Model (CSCP)

• Could understand simple sentences and answer queries about them.

• Vocabulary of around 300 words.

• Complex grammar: various types of embedded clauses, parallel constructions, ambiguities.

• Giant network composed of multiple modules.

• Several SRNs (Simple Recurrent Nets); a minimal SRN is sketched after this list.

• Trained with cross-entropy; zero-error radius 0.1.

• Took 2 months to train! (500 MHz DEC Alpha)
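
For reference, a generic Elman-style SRN forward pass, the kind of building block the CSCP model composes. This is the standard textbook form, not Rohde's exact modules; all sizes below are illustrative:

```python
import numpy as np

class SRN:
    """Minimal Elman-style Simple Recurrent Network (forward pass only)."""
    def __init__(self, n_in, n_hid, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_ih = rng.normal(0.0, 0.1, (n_in, n_hid))
        self.W_hh = rng.normal(0.0, 0.1, (n_hid, n_hid))  # context connections
        self.W_ho = rng.normal(0.0, 0.1, (n_hid, n_out))
        self.h = np.zeros(n_hid)                          # context (previous hidden) state

    def step(self, x):
        # The new hidden state depends on the input and on the previous
        # hidden state, giving the net a memory of the word sequence so far.
        self.h = np.tanh(x @ self.W_ih + self.h @ self.W_hh)
        return self.h @ self.W_ho        # raw output scores for the next stage

net = SRN(n_in=300, n_hid=100, n_out=300)   # e.g., a 300-word one-hot vocabulary
for word_vec in np.eye(300)[:5]:            # feed the first five "words"
    out = net.step(word_vec)
```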


29

Comprehension Measured by Query Answering

• Input: "The boy ate the soup."

• Queries:

• Who ate?

• What did the boy eat?

• What was done to the soup?

30

Propositional Encoding

"A cop saw the young girl that was bitten by that mean dog."

31

Message Encoder/Decoder Network

• Queries are propositions with missing components.
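
A sketch of the idea, using a made-up (predicate, role, filler) triple format rather than the model's actual encoding; a query is the same triple with one slot left empty for the model to fill in:

```python
# Illustrative role-filler propositions for "The boy ate the soup."
# (this triple format is a stand-in, not the model's real representation)
propositions = [("eat", "agent", "boy"),
                ("eat", "patient", "soup")]

def answer(query, props):
    """A query is a proposition with one None slot to be filled."""
    for prop in props:
        if all(q is None or q == p for q, p in zip(query, prop)):
            yield prop

print(list(answer(("eat", "agent", None), propositions)))    # Who ate? -> boy
print(list(answer(("eat", "patient", None), propositions)))  # What was eaten? -> soup
```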

32

Comprehension/Production Network


33

The Full Model

34

Results

• The model learned a basic (300-word) subset of English with rich grammatical structure.

• Generalization to novel sentences.

• Some ability to resolve ambiguities, e.g., attachment.

• Training time was substantial.

• Some non-human-like errors.

• Could estimate reading time, grammaticality.
