Lectures: Introduction · Cognition and Algorithms · Snow! · The Symbolic Computation Metaphor
Cognitive Processes (COM1007) – Rev 124
Dr Andre Gruning
Department of Computing, University of Surrey
Email: [email protected]
SS 2009
Dr Andre Gruning COM1007 – Rev 124– 1/161
Lecture “Introduction”
Lecturer
Who? Where?
Name Dr Andre Gruning
Email [email protected]
Office 14BB02
Student Office Hours Monday, 1700–1800 and Wednesday, 1130–1230; please agree appointments by email.
Module Homepage http://www.computing.surrey.ac.uk/personal/st/A.Gruning/teaching/COM1007/SS2009/
Background
Academic Background
BSc+MSc Theoretical Physics
PhD Computer Science
Research Fellow in Cognitive Neuroscience
Research Fellow in Computational Neuroscience
Lecturer in Computing
Time Table
Tuesday, 1600–1800, LTF Lecture
Preliminary Session Schedule
Week 1 – 20/01/2009: Introduction: What is Cognitive Science?
Week 2 – 27/01/2009: Cognition and Algorithms. Pseudocode. Metaphor.
Week 3 – 03/02/2009: cancelled. University closed because of snow.
Week 4 – 10/02/2009: Symbolic Computation Metaphors.
Week 5 – 17/02/2009: Evidence for Cognitive Processes: Psycho-physics & Imaging Techniques.
Week 6 – 24/02/2009: Guest Lecture Dr Tony Browne: Neural Networks.
Week 7 – 03/03/2009: The Dynamical Metaphor of Cognition. Modelling Stroop and McGurk Effects with Hopfield networks.
Week 8 – 10/03/2009: Languages, Categories and Concepts.
Week 9 – 17/03/2009: Categories and Concepts continued. Feedback on Group Research Report. The Final Research Report.
Week 10 – 24/03/2009: TBA
Week 11, Week 12: no sessions.
Assessment – 100% Coursework
Group Research Report
1500 words (roughly 2 A4 pages); counts 30% towards the final mark. A Cognitive Science topic such as Algorithms, Computational Models of Cognition etc. Due week 7, Monday 02/03/2009, 2359.
Take Home Exercise
Counts 10% towards the final mark; due Week 9, Monday, 16/03/2009. Four questions about Cognitive Science.
Individual Research Report
2500 words (roughly 4 A4 pages); 60% of the final mark. Topic of your choice within cognitive science: discuss a research article or a chapter in a textbook, or do a little cognitive experiment etc. Due Tuesday 05/05/2009, 2359.
What are Cognitive Processes?
Cognitive Science is concerned with the “higher mental processes”, the Cognitive Processes.
Main Questions
Perception: How do humans/animals/agents/robots process the information they receive from the environment?
Representation and Interpretation: How do they use this information to entertain an inner representation of the outer world?
Decision making and Action: How do they act upon the outer world to reach their goals? (What are their goals anyway?)
Information Processing.
What is Cognitive Science?
Interdisciplinary – Overlap with . . .
Psychology: obvious?
Philosophy: What thoughts are thinkable? Can a machine/brain understand itself? Free will?
Computer Science: Models, simulations, computability, computational complexity, intelligent computers
Information Theory: Mathematical constraints, codes, channel capacities
Neuroscience: Cognition in neural systems. How does the brain do it? How do all the neurons interact to yield a thinking mind?
Robotics, Embedded Cognition: real-world test of our theories! Can we simulate intelligence? Can we simulate emotions?
Computer Games: another real-world test!
Examples and Methods of Cognitive Science
Computer Experiments, e.g. how are faces processed?
Imaging Experiments, similar to the above but the brain is directly observed.
Patients with brain impairments (e.g. after a stroke, accident)
Patients with brain surgery
Behavioural experiments with apes, corvids, dolphins, octopuses
Neural recordings with implanted electrodes
Computer simulations of models (neural networks, expert systems)
Textbooks?
There isn’t really the one textbook for this module!
Alan J. Parkin 2000. Essential Cognitive Psychology. Psychology Press, Hove. Recommended; covers the essentials, however does not take into account the computing perspective.
Ellis and Hunt 1993. Fundamentals of Cognitive Psychology. McGraw-Hill, Boston. A classical introduction.
Howard Gardner 1985. The Mind’s New Science: A History of the Cognitive Revolution. Basic Books, New York. Easy to read; historic development of cognitive science.
P. Johnson-Laird 1993. The Computer and the Mind. Harvard University Press. A different perspective, but more computational.
D.R. Hofstadter 1979. Gödel, Escher, Bach: An Eternal Golden Braid. Penguin. Not really about cognitive processes in any psychological sense and not a textbook at all – but very enjoyable!
Oliver Sacks 1986. The Man Who Mistook His Wife for a Hat. Picador, London. A neurologist’s anecdotal, yet real stories about his patients’ brain disorders. Many things we take for granted are in fact not.
Learning Aims
from the module description
Demonstrate a basic understanding of the history of the field of cognitive processes
Demonstrate a good basic understanding of the main cognitive processes, including memory, problem solving, categorisation and language.
Recognise the role cognitive processes play in the developmentof intelligent systems.
Understand the interrelations between Computing/ComputerScience and Cognition:
Use of Computer Science as a tool in Cognitive Science
Use of Computer Science as inspiration for models in Cognitive Science
Use of Cognitive Science to improve computers
What is this module good for?
Yes, it is the one-off odd module: no computers, no direct applications
provides foundations for later modules such as:
Artificial Intelligence
Neuronal Networks
Computational Vision and Language
Algorithms and Data Structures
Object-Oriented Design
. . .
helps create better user interfaces
helps develop better computing techniques (think of MP3)
Overview
Find a Cognitive Process!
Some more thoughts about the Cognitive Processes . . .
Some first ideas about Algorithms and Cognitive Processes!
Find a Cognitive Process!
Exercise
Think about a Cognitive Process you used today (or yesterday or last week)!
If you have a pet (or if your friend does): what Cognitive Processes does your pet use during its everyday life?
Write down five cognitive processes on a piece of paper
We’ll discuss some of them.
Some more thoughts about Cognition and the Brain
Exercise in pairs
Discuss with your neighbour, write down two or three sentences for each question:
How could prehistoric humans find out that the brain (a physical organ) is the “home” of “cognition” (a mental thing??)
Which is the most important human sense? Which one conveys most information about the environment?
Is there a “technical reason” why in many animals eyes (and ears) are so close to the brain?
What would happen if humans oriented like bats by means of ultra-sound?
What could be the most important senses of the Human Fish (Proteus anguinus), an aquatic salamander that lives exclusively in dark caves?
Lecture “Cognition and Algorithms”
Milestones of Cognitive Science
On the non-computational side
historic times: Brain is home of the mind
19th century:
progress in medicine: injured people would survive longer
accidents and wars: people with brain injuries
Freud: the birth of modern psychology
better microscopes and preparation techniques: the nerve cells (Ramón y Cajal)
20th century:
behaviourism: inspired by stimulus-response schemata, largely ignored the “inner state”
control theory, cybernetics: information flow, keep a parameter fixed or in a certain range
symbolism, computer metaphor of the brain: the brain as a symbol manipulator (soon)
connectionism: the brain as a dynamical system (a bit later)
patch clamp technique (Erwin Neher’s Nobel prize in the 1980s)
development of computer technology to analyse data, and for more sophisticated non-invasive experiments
development of imaging techniques: live on-line look into the brain (later)
Milestones of Cognitive Science II
On the computational side
historic times: recipes to compute certain things: sums of numbers, square roots, division of two numbers.
early “computers”: mechanical machines that could do things only humans could do before:
Abacus
mechanical calculators (with wheels, cogs etc.)
punch-card driven hand-organs
punch-card driven automatic looms (weaving machines)
the formalisation of the concepts of “algorithm” and “computation”: the Universal Turing machine
first computers: Zuse Z1, Eniac
Universal programming languages: Basic, C, Perl, Java, Prolog, . . .
What was first?
What was first?
On the one hand we want to use computers to modelcognition.
But on the other hand, historically, computers were modelled according to how humans solve computational tasks.
⇒ We have to understand better the nature of “computationaltasks”.
Computational Tasks
What is a Computational Task?
everything where you need to do a computation
everything where something is computed
hm, fairly vague
a more precise term would be “Algorithm”
⇒ What is an algorithm?
⇒ Let us have a look at tasks from everyday to mathematics!
What is a Computation or an Algorithm?
Some computational tasks. . .
What steps does it require to produce a cup of tea?
What steps does it take to buy a railway ticket from a machine?
What steps are needed to get dressed?
What steps does it take to recognise a face?
What steps does it take to add two four digit numbers?
Think of some more computational everyday tasks or mathematical computations.
Are there any repeating sets of steps?
What do all these recipes have in common?
What is an Algorithm or Computation?
First idea
An Algorithm is a clear finite list of simple instructions to solve acomputational task.
What is a simple instruction?
Are there elementary instructions?
How do I describe an algorithm?
Simple cases: natural language!
Algorithms
Writing down an Algorithm
When things get more complicated or detail is needed,
it is often convenient to describe an algorithm in a more formal way.
There are structures that repeat frequently in an algorithm:
Decisions: if there is a carry-over then add 1 to the next digit
Repetitions:
while there are more digits do the following . . .
repeat look for next bit until you have all the bits
for current student = first student to last student do: check URN of current student; make next student the current student
for all students in the cohort do: check their URN
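To illustrate (this sketch is not from the slides), the decision and repetition structures above combine into the familiar school algorithm for adding two multi-digit numbers, here written in Python rather than Pseudocode:

```python
def add_digitwise(a_digits, b_digits):
    """Add two numbers given as lists of decimal digits,
    least significant digit first, e.g. 1234 -> [4, 3, 2, 1]."""
    result = []
    carry = 0
    while a_digits or b_digits or carry:           # repetition: while there are more digits
        d1 = a_digits.pop(0) if a_digits else 0
        d2 = b_digits.pop(0) if b_digits else 0
        total = d1 + d2 + carry
        if total >= 10:                            # decision: is there a carry-over?
            carry = 1
            total -= 10
        else:
            carry = 0
        result.append(total)
    return result

# 1234 + 5678 = 6912
print(add_digitwise([4, 3, 2, 1], [8, 7, 6, 5]))  # [2, 1, 9, 6]
```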
Pseudocode
while, if, for are very common structures for algorithms
it is no surprise that almost all programming languages have these as control structures.
to write down an algorithm independent of the syntax of a concrete programming language, Pseudocode is frequently used
Pseudocode is a mishmash of formalised control structures and natural language
depending on the detail and extent of formal rigour needed, the mishmash can be biased towards the formal aspects or the natural language
Algorithms in Pseudocode
Activity with your neighbour
Write down in Pseudocode the algorithm for how to buy a railway ticket at a ticket machine
and how to multiply two numbers by hand (with the sub-algorithm of how to add two numbers given)
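One possible answer to the second task, sketched here as illustrative Python rather than Pseudocode (the function names are my own): multiplication built only from the given add sub-algorithm.

```python
def add(a, b):
    """The given sub-algorithm: adding two numbers."""
    return a + b

def multiply(a, b):
    """Multiply two non-negative integers using only the add
    sub-algorithm: add a to a running total, b times."""
    total = 0
    for _ in range(b):           # repetition: once per unit of b
        total = add(total, a)    # only the given sub-algorithm is used
    return total

print(multiply(6, 7))  # 42
```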
Thinking a bit more about Computation
Discuss the following questions with your neighbour, write down two or three sentences for each question:
Let us assume you have a computational task you want to solve with the help of a computer programme. Can this programme in principle be written in any programming language?
Can every computational task that can be solved by a computer in principle be solved by a human? And vice-versa? Why?
Assume you have a computer language that provides operations only to add, subtract and multiply two integer numbers. Will you also be able to divide two numbers in this computer language? How?
Can a computer add two real numbers with arbitrary precision in finite time?
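As a hint for the third question (an illustrative sketch, not from the slides): integer division can indeed be built from subtraction and comparison alone, by counting how often the divisor fits into the dividend.

```python
def divide(dividend, divisor):
    """Integer division using only subtraction and comparison:
    repeatedly subtract the divisor and count how often it fits.
    Assumes dividend >= 0 and divisor > 0."""
    quotient = 0
    remainder = dividend
    while remainder >= divisor:
        remainder -= divisor     # take one divisor away ...
        quotient += 1            # ... and count it
    return quotient, remainder

print(divide(17, 5))  # (3, 2): 17 = 3 * 5 + 2
```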
Lecture “Snow!”
Lecture “The Symbolic Computation Metaphor”
What was first?
What was first?
On the one hand we want to use computers to model cognition. But on the other hand, historically, computers were modelled according to how humans solve computational tasks.
⇒ We have to understand still better the nature of “computational tasks”.
We now know what an algorithm is: a list of simple instructions.
But what is a simple instruction?
Are there elementary instructions?
Is there a universal set of elementary instructions?
Can every computational task that can be solved by a computer in principle also be solved by a human? And vice-versa? Why?
What is an elementary “computation”?
Human Intuitive Computation
Looking for a minimal description of what we understand as a “computation”,
inspired by what a human calculator did (i.e. somebody who added the bills for a merchant, calculated the statics for an architect, statistics for an insurance company etc. before the arrival of modern computers).
A human calculator had:
1 Some data to start from (on sheets of paper)
2 A lot of blank squared paper
3 on which s/he could scribble symbols from a finite set (numbers + arithmetic symbols: 0123456789 +− = ∃∀∫⇒← . . . )
4 A pencil
5 A rubber
6 A finite number of possible actions in each time step
7 A finite number of inner states (i.e. things s/he can remember without writing down)
8 Every computation a human calculator can do in this way by symbol manipulation will be called an Intuitive Algorithm.
What is an elementary “computation”?
A minimal mathematical formalisation of “computation by symbol manipulation” according to Alan Turing (before the invention of electronic computers; have a look at his sculpture on the open space between the AP and management buildings).
Turing Machine consists of
1 An infinite tape of discrete cells. (The sheets of paper)
2 A read and write head that can read or write the contents of a single cell. (The pencil and rubber)
3 A finite tape alphabet: 0, 1. (The finite alphabet)
4 A finite number of inner states. (A finite number of things in memory)
5 A finite number of actions (similar to a human):
change inner state
move head left or right
read symbol under head
overwrite symbol under head
Actions are selected according to a set of rules (a Turing programme) from the current state and the currently read symbol.
6 Everything a Turing Machine can calculate in this way will be called a Turing Algorithm. It computes by symbol manipulation, too.
A Turing Machine simulator (with a slightly more complex tapealphabet) can be found at http://ironphoenix.org/tril/tm/.
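The machine described above can itself be sketched in a few lines of Python (an illustrative toy, not the simulator linked above; the rule format and the blank symbol "_" are my own choices):

```python
def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
    """Minimal Turing machine: `program` maps (state, read_symbol) to
    (new_state, symbol_to_write, head_move), with head_move in {-1, +1}.
    Halts in state 'halt' or when no rule applies."""
    cells = dict(enumerate(tape))        # sparse tape, unwritten cells are blank
    head = 0
    while state != "halt" and max_steps > 0:
        symbol = cells.get(head, blank)
        if (state, symbol) not in program:
            break                        # no applicable rule: halt
        state, write, move = program[(state, symbol)]
        cells[head] = write
        head += move
        max_steps -= 1
    return "".join(cells[i] for i in sorted(cells))

# a three-rule programme that flips 0s and 1s until it reaches a blank
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", +1),
}
print(run_turing_machine(flip, "0110_"))  # prints "1001_"
```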
Electronic Computers?
An electronic computer is, loosely speaking, an extension of a Turing machine that makes life harder for mathematicians, but easier for normal human beings.
Random-Access-Memory (RAM) machine consists of
an infinite RAM storage (instead of a tape)
a memory pointer (instead of a read-write head)
each memory cell in the RAM can take on only a finite set of values, e.g. 0–255 (instead of binary values)
the CPU has a finite set of registers (inner states)
the CPU has a finite instruction set (i.e. a finite set of actions and their rules)
After a lengthy mathematical proof: everything a Turing machine can compute, a RAM machine can compute too. And vice-versa! Therefore a RAM machine is called Turing-equivalent.
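A minimal sketch of such a random-access machine in Python (the instruction set here is invented purely for illustration, not any standard architecture): memory cells are addressed directly instead of being reached by moving a head along a tape.

```python
def run_ram(program, memory):
    """Toy RAM machine: `memory` is a dict of cell values addressed directly
    (the memory pointer), `pc` is the programme counter.
    Illustrative instruction set (an assumption, not a standard):
      ("load", addr, value)          - write a constant into a cell
      ("add", dst, src)              - memory[dst] += memory[src]
      ("jump_if_zero", addr, target) - jump when a cell is zero
      ("halt",)                      - stop"""
    pc = 0
    while program[pc][0] != "halt":
        op = program[pc]
        if op[0] == "load":
            memory[op[1]] = op[2]
        elif op[0] == "add":
            memory[op[1]] += memory[op[2]]
        elif op[0] == "jump_if_zero" and memory[op[1]] == 0:
            pc = op[2]
            continue
        pc += 1
    return memory

# compute 2 + 3 into cell 0
prog = [("load", 0, 2), ("load", 1, 3), ("add", 0, 1), ("halt",)]
print(run_ram(prog, {})[0])  # 5
```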
Question
What is the slight but essential difference of a RAM machine asoutlined here and your laptop/desktop?
Elementary Processing
Elementary Processing Steps?
Are the actions of the Turing machine the “elementary” processing steps we were looking for?
Or are the instructions of a RAM machine “elementary”?
Or is there something more elementary? A different machine? Four-Counter Machines? Lambda Calculus? Recursive Functions? C? Java Virtual Machine? Perl? Different flavours of Turing machines? Rewrite Grammars?
Elementary Processing Steps!
Generally, it is hard to find machines that can compute more than a Turing machine in principle.
(Almost) all theoretical ideas of what it means to be computable/an algorithm can be transformed into each other.
⇒ There are several sets of elementary processing steps.
All are different, but can in principle calculate the same things.
Relation between intuitive and Turing-algorithms
Church-Turing Thesis
Mathematician: The notions of intuitive algorithm and Turing algorithm are equivalent.
Real life: A human can compute everything a computer can. And . . .
vice versa: a computer can compute everything a human can.
NB: It is a HYPOTHESIS, i.e. an unproved belief, not a THEOREM or FACT.
What has this to do with Cognitive Processes?
Turing Test
Turing Test
There is a whole set of different, but similar tests going back to ideas of Alan Turing. They are all thought experiments (?) to find out whether a machine has human-like intelligence (or behaves as though it had). One (modern) version goes as follows:
Alice uses instant messaging to type messages to two other participants, Bob and Claire; one of them is a human, the other a computer. Alice has to find out, using instant messaging only, whether Bob is the human or whether he is the computer.
If Alice cannot reliably distinguish the computer from the human, then the computer is believed to be intelligent.
What are the problems and challenges you see here?
Symbolic Computation Metaphor of Cognition
The Turing Machine and the Turing Test were two of the many inspirations for the
Symbolic Computation Metaphor of Cognition
The human (and animal) brain works similarly to a Turing-equivalent computer. That means cognition and human and animal cognitive information processing can be described and explained in terms of symbol manipulation.
This view was adopted in:
symbolist approaches to cognition
classical AI
Part I
Group Research Report
Exercise 1 – Group Research Report
Topics
In groups of 4, choose one of the following:
1 “Will computers ever be intelligent?” taking into account thought experiments like the Turing test and Searle’s Chinese Room test and various responses to them.
2 “Cognitive abilities of animals” Dolphins, apes, or rats show surprisingly good performance in some cognitive tasks. Discuss their abilities. (Concentrate on one animal of your choice from this list.) Focus on animal cognition but reference human cognition where relevant.
3 “Turing Machines and the Halting Problem” What does a Turing machine do in detail? Can it compute just anything? Why (not)? And how about a real computer?
4 Or: Come up with an idea of your own in a field related to Cognitive Science and have it approved by me before or on 13/02/2009
First Short Report – Formal Stuff
Formal stuff
Handout contains full detail: http://www.computing.surrey.ac.uk/personal/st/A.Gruning/teaching/COM1007/current/info_group_report.pdf
1500 words (roughly 2 A4 pages)
due Week 7, Monday 02/03/2009, 23:59. Electronic submission only via Ulearn.
sign up for groups of 4 on Ulearn.
Structure:
Introduction: describe what you are going to do.
Describe and explain the relevant facts and arguments (that you have researched in the library or on the web; don’t forget to reference your sources!) in your own words.
Discussion: evaluate the facts and balance the arguments, put them in a wider context.
Conclusion: from the arguments presented and your evaluation, formulate your own conclusion.
Main guideline: Write in a way so that a fellow student of yours can profit from reading your paper.
First Report
Marks
The Group Research Report counts 30% towards the final module mark. You will get marks on the usual scale from 0–100%. The report will be marked with the following criteria in mind, each of which can contribute up to 25 marks to the total mark of the report.
Clarity of Presentation Clear style of writing, formal aspects (sectioning, structure, spelling, grammar, referencing, title page). In short: Can the reader easily understand what the authors want to say?
Coverage Breadth of research. Do the authors cover all important facets of their topic? Has there been a considerable research effort? Do the arguments the authors refer to give a complete picture of the problem? Are the arguments supported in the references provided? In short: Have the authors covered all aspects relevant to their research topic?
First Report
Marks
Consistence and Completeness Depth of research. Do the authors provide one or more consistent lines of argumentation? Are their arguments well-balanced? Is their conclusion well-informed and supported by their arguments? In short: Do the authors use their arguments in an appropriate and conclusive way?
Originality What is new in your report? Are arguments evaluated critically? Is the conclusion you have arrived at new? Is it surprising? Have you found new arguments to support it? Do you combine known arguments in a new and interesting way? In short: Have you reached a deeper understanding of the topic?
Lectures: Neurons, Brains and Imaging · Guest Lecture: Neural Networks · Neural Networks and Cognitive Modelling
Part II
Alternatives to the Symbolic Metaphor of Cognition
Challenges for the symbolic metaphor
Learning How do the algorithms get into the brains?
Imprecise Data Algorithms expect start conditions and initial data clearly specified.
Approximations Algorithms give you an exact result, probably after a long time. For real life a real-time approximation might be more useful.
Adaptivity Small changes in a problem usually require a substantial redesign of an algorithm.
Generality Is there a general problem solving algorithm?
Fault Tolerance If you have the slightest error in your algorithm – the system may crash suddenly.
But are there any alternatives? Which?
In Search of Alternatives
Alternatives?
Maybe in inventing the symbolic computation metaphor of cognition, we looked at what the brain does too much from the “outside” (the pen-and-paper view).
⇒ Perhaps we can understand more when we have a look at how the brain actually does it:
Its functioning and modelling on the neural level (neural networks, weeks 5 and 6).
Outsmart the brain while still viewing it from the outside (psychophysics, week 5).
Observing it while it is working (imaging techniques, week 7).
The Brain – What does it look like?
from
http://www.neuroskills.com/brain.shtml#map,
accessed 26/01/2008
the brain consists of 100 billion (10^11) neurons (nerve cells)
each connects to the order of 10,000 (10^4) other neurons
a total of 10^15 connections (“synapses”)
Chudler, E. (2006). Brain Facts and Figures. http://faculty.washington.edu/chudler/facts.html, accessed 26/01/2008.
Neurons
(from M Casey, UniS/Ulearn, CS365)
What does a neuron do?
A first approximation. . .
When a neuron “becomes active” it sends an electric pulse down its axon (it “fires”).
When the pulse reaches a synapse, i.e. a connection to another neuron, the electrical pulse causes neurotransmitters to be released from the first neuron.
The transmitter molecules diffuse to the other neuron, and
help there to build up an internal electric potential.
When the electric potential reaches a threshold value (after a sufficient number of incoming spikes within a certain time), the neuron fires and its electrical potential is reset.
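This threshold-and-reset behaviour can be captured, in a first approximation, by a leaky integrate-and-fire model. A toy Python sketch (all parameter values here are illustrative assumptions, not from the slides):

```python
def integrate_and_fire(input_spikes, weight=0.3, leak=0.9, threshold=1.0):
    """Toy leaky integrate-and-fire neuron (illustrative parameters):
    each incoming spike raises the membrane potential by `weight`,
    the potential leaks between time steps, and once it crosses
    `threshold` the neuron fires and the potential is reset to 0."""
    potential = 0.0
    output = []
    for spike in input_spikes:
        potential = potential * leak + weight * spike  # leak, then integrate
        if potential >= threshold:
            output.append(1)       # the neuron fires ...
            potential = 0.0        # ... and its potential is reset
        else:
            output.append(0)
    return output

# a dense run of incoming spikes pushes the potential over threshold;
# sparse spikes leak away before they can accumulate
print(integrate_and_fire([1, 1, 1, 1, 1, 0, 0, 1]))  # [0, 0, 0, 1, 0, 0, 0, 0]
```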
What does a neuron do?
It fires or spikes in single spikes or in bursts.
A voltage oscillogram of a spiking neuron: http://info.med.yale.edu/neurobio/mccormick/movies/rly_exp.mpg
from McCormick Lab http://info.med.yale.edu/neurobio/mccormick/movies.html, accessed 26/01/2008
Networks
(from Dr M Casey, UniS/ULearn, CS365)
Lecture “Neurons, Brains and Imaging”
The Symbolic Computation Metaphor and Neuronal Networks as a Potential Alternative
Psychophysics
Imaging Techniques
Dr Andre Gruning COM1007 – Rev 124– 54/161
Neurons, Brains and ImagingGuestlecture: Neural Networks
Neural Networks and Cognitive Modelling
The Symbolic Computation Metaphor and Neuronal Networks as a Potential AlternativePsychophysicsImaging Techniques
Symbolic Computation Metaphor of Cognition
The Turing Machine and the Turing Test were two of the manyinspirations for the
Symbolic Computation Metaphor of Cognition
The human (and animal) brain works similar to a Turing-equivalentcomputer. That means cognition and human and animal cognitiveinformation processing can be described and explained in terms ofsymbol manipulation.
This view was adopted in:
symbolist approaches to cognition
classical AI
Dr Andre Gruning COM1007 – Rev 124– 55/161
Neurons, Brains and ImagingGuestlecture: Neural Networks
Neural Networks and Cognitive Modelling
The Symbolic Computation Metaphor and Neuronal Networks as a Potential AlternativePsychophysicsImaging Techniques
Challenges for the symbolic metaphor
Learning How do the algorithms get into the brains?
Imprecise Data Algorithms expect start conditions and initial dataclearly specified.
Approximations Algorithms give you an exact result, probably aftera long time. For real live a real-time approximationmight be more useful.
Adaptivity Small changes in a problem require usually asubstantial redesign of an algorithm.
Generality Is there a general problem solving algorithm?
Fault Tolerance If you have the slightest error in your algorithm –the system may crash suddenly.
But are there any alternatives? Which?Dr Andre Gruning COM1007 – Rev 124– 56/161
Neurons, Brains and ImagingGuestlecture: Neural Networks
Neural Networks and Cognitive Modelling
The Symbolic Computation Metaphor and Neuronal Networks as a Potential AlternativePsychophysicsImaging Techniques
In Search of Alternatives
Alternatives?
Maybe for inventing the symbolic computation metaphor ofcognition, we had a look on what the brain does too muchfrom the “outside” (the pen-and-paper view).
⇒ Perhaps we can understand more when we have a look at howthe brains actually does it:
Its functioning and modelling on the neural level (neuralnetworks, weeks 5 and 6).Smart the brain out while still viewing it from the outside(psychophysics, week 5).Observing it while it is working (imaging techniques, week 7).
Dr Andre Gruning COM1007 – Rev 124– 57/161
Neurons, Brains and ImagingGuestlecture: Neural Networks
Neural Networks and Cognitive Modelling
The Symbolic Computation Metaphor and Neuronal Networks as a Potential AlternativePsychophysicsImaging Techniques
The Brain – What does it look like?
from http://www.neuroskills.com/brain.shtml#map, accessed 26/01/2008
The brain consists of about 100 billion (10^11) neurons (nerve cells).
Each connects to on the order of 10,000 (10^4) other neurons.
That makes a total of about 10^15 connections (“synapses”).
(Chudler, E. (2006). Brain Facts and Figures. http://faculty.washington.edu/chudler/facts.html, accessed 26/01/2008.)
Networks
(from Dr M Casey, UniS/ULearn, CS365)
What does a neuron do?
It fires or spikes, in single spikes or in bursts.
A voltage oscillogram of a spiking neuron: http://info.med.yale.edu/neurobio/mccormick/movies/rly_exp.mpg (from the McCormick Lab, http://info.med.yale.edu/neurobio/mccormick/movies.html, accessed 26/01/2008).
What does a neuron do?
A first approximation. . .
When a neuron “becomes active” it sends an electric pulse down its axon (it “fires”).
When the pulse reaches a synapse, i.e. a connection to another neuron, it causes neurotransmitters to be released from the first neuron.
The transmitter molecules diffuse to the other neuron and
help build up an internal electric potential there.
When the electric potential reaches a threshold value (after a sufficient number of incoming spikes within a certain time), the neuron fires and its electric potential is reset.
Networks of Neurons can adapt
Hebbian Learning
“When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.” (D. Hebb, 1949, “The Organization of Behavior”, Wiley)
What does it mean?
If one neuron fires, and subsequently the neuron it is connected to by a synapse fires too, then this synapse will in the long run become stronger, because it contributes positively to the firing of the second neuron. A neural network can hence change and learn over time.
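The Hebb rule above can be sketched as a simple weight update (a minimal illustration, not part of the lecture; the learning rate and activity values are made up):

```python
def hebb_update(w, pre, post, eta=0.1):
    """Hebbian learning: strengthen the synaptic weight w when the
    presynaptic (A) and postsynaptic (B) neurons are active together."""
    return w + eta * pre * post

# One synapse between two neurons, both firing (activity = 1):
w = 0.5
for _ in range(3):            # repeated co-activation ...
    w = hebb_update(w, 1, 1)  # ... strengthens the synapse step by step
# w has grown from 0.5 to 0.8; if either neuron is silent (0), w stays unchanged
```

If the postsynaptic neuron does not fire, the product `pre * post` is zero and the synapse is left as it is, which is exactly the "cells that fire together wire together" reading of the quote.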
Are Human Brains Special?
What can we do that animals cannot?
speak?
think?
dream?
use tools?
use abstract symbols?
be social?
Is the human brain special?
If we look at the brains of different mammals, the brain tissue at the neuronal level looks (roughly) the same for all of them.
There are mammals with bigger brains than humans.
There are mammals with a bigger brain/body ratio than humans.
So what is special about human brains that they support human cognition?
. . . We still do not really know. It’s a mix of factors.
Brains and Neurons
Important Facts
The brain essentially consists of neurons (nerve cells).
The neurons are connected via synapses.
Synapses have a weight and can adapt.
Every neuron does a very simple computation: it accumulates the spikes it gets from other neurons and then decides whether to fire a spike itself or not.
A symbolic digital computer has a single powerful processor (or maybe a few); the brain, however, has a vast number of simple processors connected in a complex way.
The local structure of brain tissue does not differ much between humans and other mammals.
Details about modelling neural networks in Dr Browne’s guest lecture in week 6.
Ideas from the theory of neural networks will be a recurrent theme in this module for modelling cognitive processes.
Psychophysics
What is Psychophysics?
It tries to outsmart the brain. . .
a sub-discipline of experimental psychology
a sub-discipline of cognitive science
It deals with the relationship between objective stimuli and the subjective percepts they cause.
It measures in a quantitative sense the way we process information.
It explores the limits of information processing.
It tricks our information processing system, for example by
subtle manipulation of sensory input, so that standard processing breaks down and we learn from the occurring errors how the cognitive process under consideration works,
or by taking a cognitive subsystem to its limits to examine how performance is affected.
Psychophysics
Three example experiments
1 Which elements in a sequence are remembered best?
2 McGurk Effect
3 Stroop Effect and reaction time measurement.
Short Term Memory
A little do-it-yourself psychophysical experiment
You will see a number of words flash up on the screen, one by one.
Concentrate on the words. Do not write them down while yousee them. . .
But wait and keep paper and pen ready.
tree, box, table, book, fork, tube, film, ship, wine, car
Now write down the words you remember; order is not important.
Do not talk about your words with your neighbour!
Short Term Memory
Results
When one counts how many students remember the first, the second, . . . , the tenth word, the resulting graph of counts over serial position is roughly U-shaped, i.e. the first words are remembered well and so are the last ones, but not the middle ones. This is a quite general phenomenon:
Recency effect: one remembers better what one encounters last.
Primacy effect: one remembers better what one encounters first.
Short-Term Memory
Conclusions
Memory cannot work like a pipe of finite length. . .
because then you would remember the last things best.
⇒ No primacy effect!
Memory cannot work like a stack of finite depth. . .
because then you would remember the first things best.
⇒ No recency effect!
Objective quantity measured: retention as a function of serial order.
Is there a simple computational model within the Symbolic Computation Metaphor?
Do there exist simple neural network models (week 6)?
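The pipe and stack arguments above can be made concrete with a tiny sketch (purely illustrative; the 5-item capacity is an arbitrary assumption):

```python
from collections import deque

words = ["tree", "box", "table", "book", "fork",
         "tube", "film", "ship", "wine", "car"]

# A "pipe" (FIFO queue) of finite length: old items fall out the far end.
pipe = deque(maxlen=5)
for w in words:
    pipe.append(w)
print(list(pipe))   # only the LAST five words survive: recency, but no primacy

# A "stack" of finite depth: once it is full, new items never get in.
stack = []
for w in words:
    if len(stack) < 5:
        stack.append(w)
print(stack)        # only the FIRST five words survive: primacy, but no recency
```

Human recall shows both effects at once, which neither of these simple data structures reproduces on its own.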
McGurk Effect
You see with your eyes, and hear with your ears! Here comes theMcGurk Effect!
McGurk Effect – Demonstration
Video: “The McGurk Effect” (large.mov), from Arnt Massø, University of Oslo, http://www.media.uio.no/personer/arntm/McGurk_english.html, accessed 04/02/2008.
McGurk: What’s going on?
McGurk, Harry; and MacDonald, John (Univ of Surrey): “Hearing lips and seeing voices,” Nature 1976, Vol 264(5588), pp. 746–748.
The person in the film speaks “gagagaga. . . ”. You see his lips move accordingly.
But the film is dubbed with the person saying “babababa. . . ”.
⇒ contradicting sensory input
⇒ You see a “ga”, you hear a “ba”, but perceive a “da”.
⇒ You hear lips and see voices. . .
The objective stimulus is the manipulated film; the percept is what you perceive.
Any ideas how this could come about?
Later we’ll have a look at a neural network model.
Stroop Effect
Two volunteers!
red, yellow, blue, red, blue, yellow, green, blue, red
1 Read the list aloud as fast as you can.
2 Now name the colours aloud as fast as you can.
Reaction times in the Stroop effect
A psychophysical on-line experiment to measure reaction times:http://www.thewritingpot.com/stroop/
Questions for the Stroop effect
What happens if you take a child before it has started school?
What happens if you take somebody who doesn’t speak English?
What happens if you take somebody who has started to learn English?
What happens if, instead of colour words, you take words like these: sun, sea, tree. . . ?
Can you think of a model (in terms of algorithms) for theStroop Effect?
What is going on?
Reading (though acquired) seems to be a highly automated process: you cannot look at a word and intentionally decide not to read it or understand its meaning.
Seeing a colour and naming it is also quite natural, but not really automated.
It seems there is a conflict over resources.
Because reading is so tightly connected to speaking, it occupies the language resource automatically, so it is hard for the naming process to get hold of this resource.
⇒ Reaction times grow; multiple errors.
Quantities measured: error rates and reaction times.
It is not merely a question of attention/concentration/training!
Neurons, Brains and ImagingGuestlecture: Neural Networks
Neural Networks and Cognitive Modelling
The Symbolic Computation Metaphor and Neuronal Networks as a Potential AlternativePsychophysicsImaging Techniques
Summary
Psychophysics
How do physical stimuli relate to subjective percepts?
How can human and animal information processing be quantified?
Short-term memory: primacy and recency effects.
The McGurk effect: hearing with your lips and seeing voices.
The Stroop effect: fighting for limited resources.
Psychophysics measures, for example, retention rates, error rates and reaction times.
Psychophysics explores computational properties of cognitive processes.
It seems cognitive processes are not easy to explain in a straightforward manner within the Symbolic Metaphor.
We’ll try to find models for these effects using neural networks after your lecture with Dr Browne next week.
Imaging techniques
Another branch of cognitive science: looking over the brain’s shoulder while it is working.
Types of Imaging Techniques
EEG Electroencephalography
fMRI functional Magnetic Resonance Imaging
MEG Magnetoencephalography
PET Positron Emission Tomography
infrared spectroscopy
. . .
EEG
from Wikicommons, http://en.wikipedia.org/wiki/Image:EEG_32_electrodes.jpg, accessed 04/02/2008.
from Wikicommons, http://en.wikipedia.org/wiki/Image:Spike-waves.png, accessed 04/02/2008.
EEG
Electroencephalography (EEG)
measures electrical activity of the brain
uses approx. 40–120 electrodes placed on the scalp
An electrode measures the average activity of a large group of neurons.
⇒ Coarse spatial, but good temporal resolution.
can roughly locate active brain areas
cheap (good EEG devices for research from £50,000)
relatively easy to use
fMRI
functional Magnetic Resonance Imaging (fMRI)
MRI: medical use to visualise soft tissues in an organism.
fMRI: functional MRI.
It uses very strong magnetic fields and radio waves that influence atoms.
The atoms respond by sending out radio waves themselves.
Slight differences in the radio waves occur depending on which molecule the atoms are bound in.
When a brain region becomes active, a higher demand for oxygen arises.
Hence in that region the ratio of oxygenated to deoxygenated haemoglobin changes.
⇒ a change in the radio waves (the so-called BOLD (blood-oxygen-level-dependent) signal).
Spatial resolution: millimetres. Temporal resolution: seconds.
The BOLD signal is a very indirect measure: are neurons more active, or suppressed? Both would use up energy and oxygen and give similar BOLD signals.
Costs: from £1,000,000 upwards; running costs: several £100,000 per year.
MEG
Magnetoencephalography (MEG)
uses the magnetic fields produced by electric currents in the neurons (very, very weak!).
For a significant/measurable signal, the contribution of at least 50,000 neurons is needed.
Good temporal resolution (1ms), spatial resolution worse thanfMRI.
Costs: again millions!
MEG – Word Processing
Word processing, Pulvermuller Lab, Cambridge. Video: http://www.sciencedirect.com/science/MiamiMultiMediaURL/B6WNP-498TWK4-7/B6WNP-498TWK4-7-1/6968/32bdffa5e8dd74c13e3bde4938213bda/Supplementary_Video.avi
The film is part of Pulvermuller, F., Shtyrov, Y., & Ilmoniemi, R. J. (2003). Spatio-temporal patterns of neural language processing: an MEG study using Minimum-Norm Current Estimates. Neuroimage, 20, 1020–1025.
MEG – Word Processing
What is going on?
Subject hears a word.
Different brain areas become active:
of course: some areas directly connected to the ear
some areas commonly connected with language processing
but also, depending on the precise meaning of the word:
Words strongly associated with hand movements (e.g. “to hammer”): activity in the premotor cortex for the hand.
Words associated with locomotion (e.g. “to run”): activity in the premotor cortex for the legs and feet.
Also nouns vs. verbs yield different patterns.
Word processing
What does it mean?
When you hear a word, it is not only processed by thelanguage centres of the brain.
To unfold the meaning of the word, physical activities that constitute the meaning of the word are activated, too.
⇒ The meaning of a word is distributed over the brain and entails all its associations. The meaning is a whole web of associations and not located in any one specific place.
This avoids the “grandmother cell” paradox.
It seems logical [to me], since how could you grasp what “to hammer” means without ever having observed/done it?
Summary
Imaging
Different techniques available: EEG, fMRI, MEG, PET,Infrared spectroscopy . . .
Trade-off between temporal and spatial resolution, and costs.
Most of them involve a huge amount of physics and computer engineering to arrive at the nice pictures you see in publications.
All of them average over time and over huge numbers of neurons.
Lecture “Guest Lecture: Neural Networks”
Lecture “Neural Networks and Cognitive Modelling”
Cognitive Modelling with Hopfield Networks
Language, Categories and Grammar
Language, Categories and Concepts
Concepts and Categories
Categories and Concepts continued
Categories and Concepts continued
Feedback
Introduction
Where are we?
Still looking for an alternative to symbol manipulation to model cognition. . .
And to this end, we needed to take a closer look at the brain:
The brain consists of neurons.
Neurons are connected to each other via synapses.
Networks of neurons can learn because synapses change their weight.
Artificial Neural Networks are
a model of (parts of) the brain,
a machine-learning device / artificial intelligence.
Dr Andre Gruning COM1007 – Rev 124– 92/161
Cognitive Modelling with Hopfield NetworksLanguage, Categories and GrammarLanguage, Categories and Concepts
Concepts and CategoriesCategories and Concepts continuedCategories and Concepts continued
Feedback
Types of Artificial Neural Networks
List of important neural networks and areas of application
Last week you learnt about the Multilayer Perceptron (MLP) in detail, also called a Feed-Forward Network. But there are more and different types of artificial neural networks, both for supervised and unsupervised learning:
multi-layer perceptron (MLP) / feed-forward network (discussed in detail last week)
recurrent networks (an extension of the MLP, just mentioned here)
Kohonen network (just mentioned)
Hopfield network (a heuristic idea of how it works a bit later)
They are all very different. . . but they have basic things in common:
they consist of a number of model neurons and synapses,
and these can learn.
Dr Andre Gruning COM1007 – Rev 124– 93/161
Cognitive Modelling with Hopfield NetworksLanguage, Categories and GrammarLanguage, Categories and Concepts
Concepts and CategoriesCategories and Concepts continuedCategories and Concepts continued
Feedback
Artificial Neurons
How to model a neuron?
Model the neuron in all its complexity? No!
A neuron i receives inputs from other neurons j, i.e. there is a synapse/link between i and j.
The outputs y_j of the other neurons j are weighted with the strength w_ij.
Thus the effective input neuron i receives from j is w_ij * y_j.
Neuron i sums all its inputs to determine its internal cell potential u_i := Σ_j w_ij y_j.
How does neuron i determine when to fire itself?
step function: output y_i = 0 for u < u_0, and y_i = 1 for u ≥ u_0.
in general: more complicated functions y = f(u) are conceivable (e.g. the sigmoid).
smooth functions f are preferred for learning algorithms, because their derivatives exist.
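The model neuron described above can be sketched in a few lines (illustrative only; the weights, inputs and threshold are made-up numbers):

```python
def neuron_output(weights, inputs, threshold=1.0):
    """A simple model neuron: weighted sum of the inputs,
    then a step function against the threshold u0."""
    u = sum(w * y for w, y in zip(weights, inputs))  # u_i = sum_j w_ij * y_j
    return 1 if u >= threshold else 0                # step activation function

# Neuron i with three incoming synapses:
weights = [0.6, -0.2, 0.9]   # synaptic strengths w_ij (negative = inhibitory)
inputs  = [1, 1, 1]          # outputs y_j of the neurons j
print(neuron_output(weights, inputs))   # u = 1.3 >= 1.0, so the neuron fires: 1
```

Replacing the step function with a smooth sigmoid gives the differentiable version preferred by learning algorithms such as backpropagation.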
Learning in Networks
How does a network of such neurons learn?
Neurons can change their threshold potential u_0.
The network can change the strengths of the synapses w_ij.
Remember: real neurons use the Hebb rule, but artificial neurons use different rules. . . It is here where the main differences between network types lie.
Dr Andre Gruning COM1007 – Rev 124– 95/161
Cognitive Modelling with Hopfield NetworksLanguage, Categories and GrammarLanguage, Categories and Concepts
Concepts and CategoriesCategories and Concepts continuedCategories and Concepts continued
Feedback
Easy things for a neural network
Difficult for a Symbolic Computer, but easy for a network
Learning complicated mappings (e.g. multi-layer perceptron)
Pattern (Image) classification (e.g. Kohonen network)
Pattern (Image) completion (e.g. Hopfield network)
Complicated Mappings
All things where there is a form of regularity that is very hard tocast into clear rules.
English Pronunciation – Addendum to NetTalk covered last week
Mark Twain: In English the word “fish” could be spelt “ghoti”.
with “gh” as in “enough”
with “o” as in “women”
with “ti” as in “nation”
Notoriously hard to learn! Pronunciation depends on
orthographic context of letter
history of the word
NetTalk solves this using MLP: http://en.wikipedia.org/wiki/NETtalk_(artificial_neural_network)
Pattern Classification
Hand-writing recognition
with a Kohonen network, example from Heaton Research:http://www.heatonresearch.com/articles/42/page1.html
Pattern Completion 1
Example of Cognitive Pattern Completion in the Brain:
from Ian Wells, CS184, 2003
Pattern Completion 2
Digit Recognition
with a Hopfield network by means of pattern completion.
A Hopfield network stores prototypes.
It then gets an input pattern and transforms it into the prototype that is most “similar”.
(Images from Richard Bowles.)
Pattern Completion – Example
Input Pattern
(Images from Richard Bowles.)
Pattern Completion – Example
Early Transformation
(Images from Richard Bowles.)
Pattern Completion – Example
Closer Approximation
(Images from Richard Bowles.)
Pattern Completion – Example
Prototype reached
(Images from Richard Bowles.)
How does a Hopfield Network work?
It learns to represent an “ideal” stimulus as an attractor.
After training, a network has a mountain landscape of attractors: (ideally) the deepest valleys are the attractors.
An attractor is something that “attracts” “similar” stimuli towards it, like a ball that rolls down a slope.
Hence if a stimulus arrives that is not the “ideal” stimulus (noisy, partially covered, distorted, incomplete, . . . ), it starts at a certain point in the landscape and then rolls down into a valley. But into which valley?
All points that roll to a certain valley belong to the basin of attraction of that valley.
The task of a learning algorithm is to find a “good” basin of attraction for each ideal stimulus and to keep the attractors separated.
Overlap/superposition of attractors leads to their distortion
⇒ interesting self-organisation effects might arise.
A mathematician would say: a Hopfield network minimises an energy functional (potential function).
A Hopfield network (but also other networks) works like a dynamical system.
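A minimal Hopfield sketch along these lines (the patterns and network size are invented for illustration, and synchronous sign updates are used for brevity rather than the usual one-neuron-at-a-time update):

```python
import numpy as np

def train_hopfield(patterns):
    """Store prototypes via the Hebb rule: w_ij proportional to
    the sum over patterns of x_i * x_j, with no self-connections."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)
    return W

def recall(W, x, steps=10):
    """Pattern completion: repeatedly let every neuron take the sign
    of its weighted input until the state settles in an attractor."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

# Two stored prototypes over 8 neurons (+1 = firing, -1 = silent):
protos = np.array([[ 1,  1,  1,  1, -1, -1, -1, -1],
                   [ 1, -1,  1, -1,  1, -1,  1, -1]])
W = train_hopfield(protos)

noisy = np.array([1, 1, -1, 1, -1, -1, -1, -1])  # prototype 0 with one bit flipped
print(recall(W, noisy))   # the state rolls back into the first prototype's valley
```

The noisy input starts inside the basin of attraction of the first prototype, so the update dynamics pull it down into that valley and the flipped bit is repaired.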
McGurk: A simple dynamic (Hopfield) model
Think of an idea for a model of the McGurk effect with your neighbours
along the lines of how a Hopfield network works
How does pattern completion come into play?
What is the pattern to be completed with eyes open/shut?
McGurk: A simple dynamic (Hopfield) model
Ideas for a model
Sounds are never clear and contain a lot of noise.
⇒ They have to be classified, or “completed”, because only a few features are as they ideally should be.
⇒ Idea: try to explain the effect in terms of a Hopfield network.
There is an attractor for “ba” and one for “ga” (and one for “da”).
Usually one perceives lip movements and sound simultaneously (two information channels)
⇒ one unique pattern of sensory information.
If there is just one channel of information (here: auditory as normal, visual void when you close your eyes)
⇒ no problem, just pattern completion for the non-muted channel.
McGurk: A simple dynamic (Hopfield) model
Ideas for a model
Now: contradicting information.
⇒ Hopfield network does not know where to drag the pattern.
one part tends to be dragged towards the “ba” valley,
one part tends to be dragged towards the “ga” valley
⇒ the pattern ends up in the “da” valley (by the way, “d” is phonetically in the middle between “g” and “b”).
Model of Stroop effect
Suggestion 1 – Symbolic Computation Model
Perhaps somewhat like a lock set by one fast operating-system task on a system resource: the “speak resource” is blocked by the reading process most of the time.
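This lock metaphor can be sketched in code (purely illustrative; the process names and latencies are invented, not from the lecture):

```python
import threading

speech = threading.Lock()   # the shared "speak resource"

def respond(process_name, latency_ms, log):
    """A cognitive process tries to grab the speech resource."""
    if speech.acquire(blocking=False):   # try to take the lock without waiting
        log.append(f"{process_name} speaks (latency {latency_ms} ms)")
    else:
        log.append(f"{process_name} blocked, must wait")

log = []
# Reading is highly automated, so it reaches the resource first ...
respond("reading", 100, log)
# ... and colour naming finds the resource already locked.
respond("colour-naming", 300, log)
print(log)
```

The slower, effortful colour-naming process finds the resource taken and has to wait, which shows up behaviourally as longer reaction times and errors.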
Model of Stroop effect
Suggestion 2 – again Hopfield network
Perhaps the “speak network” of the brain gets a pattern with contradicting subpatterns from the visual and the reading networks.
Since the “speak network” is usually used for reading, the subpattern coming from the “word meaning” has more weight in determining the final valley in the attractor landscape.
So the “mixed” pattern has a higher tendency to roll into the valley corresponding to the “word” subpattern, and not to the actual “colour” subpattern.
It takes conscious effort (and hence time) to tweak the Hopfield network so that it follows the “colour” part of the pattern.
Metaphors of Cognition
Symbolic Computation Hypothesis
“Cognition can best be explained in terms of digital computers.”
+ It is easy to see what an algorithm does.
– Algorithm hard to find; you need a clear idea for a way to a solution.
+ Exact, crisp results.
Neural / Analog / Dynamical Computation Hypothesis
“Cognition can best be explained in terms of a dynamical system.” (e.g. neural networks; think of the valleys and mountains in a Hopfield network!)
+ find solutions themselves (sometimes)
+ show graceful degradation and error tolerance
+ more “natural” in a biological sense.
– black box, hard to understand the solution
Lecture “Language, Categories and Grammar”
Metaphors of Cognition
Symbolic Computation Hypothesis
“Cognition can best be explained in terms of digital computers.”
+ It is easy to see what an algorithm does.
– Algorithm hard to find; you need a clear idea for a way to asolution.
+ Exact, crisp results.
Neural / Analog / Dynamical Computation Hypothesis
“Cognition can best be explained in terms of a dynamical system.” (e.g. neural networks; think of the valleys and mountains in a Hopfield network!)
+ find solutions themselves (sometimes)
+ show graceful degradation and error tolerance
+ more “natural” in a biological sense.
– black box, hard to understand the solution
Language – Introduction / Communication / Phonemes and Syllables / Language – Morphology
1 Cognitive Modelling with Hopfield Networks
2 Language, Categories and Concepts
  Language – Introduction
  Communication
  Phonemes and Syllables
  Language – Morphology
3 Concepts and Categories
  Introduction
  Colour Names
  Categories
  Typicality
4 Categories and Concepts continued
5 Feedback
Language Introduction
Introduction
Communication
Language – Phonetics (sounds) – briefly today
Language – Morphology (word formation) – today
Language – Categories and Conceptualisations.
Language – Grammar (phrase structure) – later lecture
Communication Systems
Communication Systems – Objective Components
Sender: sends a piece of information
Receiver: receives a piece of information
Message: some code, hopefully conveying the piece of information; depends on:
Medium: what means is used to transport the messages: speech, writing, sound waves, electrical signals, visual, chemicals . . .
Examples: animal cries, human language, traffic signs, pheromones, tree stress signals, encrypted messages.
Communication Systems
Message
Messages convey a “meaning” (semanticity).
The “meaning” (i.e. the perceived piece of information) can be different for sender and receiver.
Coding/encoding can be arbitrary: no relation whatsoever between meaning and message is necessary.
But it must be “agreed” upon by sender and receiver: a convention (or one that perhaps evolved in evolution).
Natural language as “code”: the assignment of “signs” (words) to a meaning is arbitrary: why is the concept “house” denoted by the word house?
What is the purpose or intention behind an act of communication? (Pragmatic function.)
Mutuality: sender and receiver can change their roles and exchange messages.
Communication Systems
Summary
Means (mode / channel) of communication
Semanticity
Arbitrariness
Mutuality
Pragmatic Function
Human Language
All of the above PLUS
Compositionality: complex messages can be composed from smaller parts:
Syllables from phonemes.
Words from syllables.
Sentences from words (grammar: next weeks).
Poems or technical manuals from sentences . . .
However: all subject to certain regularities (I hesitate to call them “rules”).
Openness / Productivity: the ability to create novel messages that convey novel meanings / ideas.
Frames of Reference / Displacement: the ability to speak about things and events not present in space and time; events and things are only “referred to”. This finds its expression in the existence of pronouns/adverbs like the following:
temporal adverbs such as “now”, “then”
spatial adverbs such as “here”, “there”
demonstratives like “these”, “those”, “this one here”, “that over there”
personal pronouns “she”, “it”, “them”
personal names
Language – Phonetic Level
Phonemes
Phoneme: a “single basic language sound”; linguistically, phonemes are the basic building blocks. Languages have different sets of phonemes.
As a child you learn to distinguish the distinctive phonemes of your language (think again of Hopfield networks), and you cannot easily learn new phonemes in adulthood. Examples:
aspirated stops (plosives) “p, t, k” vs. “ph, th, kh”: the difference is not distinctive in most European languages, but it is in some Indian ones.
The British “r” and the rolled “r” carry no different meaning in English, but could in other languages.
“l” and the rolled “r” are not distinct in Japanese; how exactly they are realised depends on the local Japanese dialect.
(Remember: the McGurk effect is about phonemes.)
Phonemes form syllables.
Syllable Level
Syllable level
There is some evidence that syllables are the basic building blocks for articulation / perception (e.g. syllabic alphabets were invented before letter-based (phoneme-based) alphabets).
Co-articulation: sounds change slightly depending on their neighbouring sounds.
The “k” sound, for example, is articulated a bit differently depending on which vowel follows: it is pronounced more to the front in “ki, ke” than in “ka”, and in “ka” more than in “ku”. (In Roman times there were three letters for “k” sounds: c, k, q.)
Morphology
Morphology
Generally: how do you form words from other words (composition, inflection, suffixes, umlaut (vowel changes), etc.)?
Today: English past-tense formation as an example.
How do you form the English past tense?
Clear idea: add “-ed” to the infinitive:
paint – painted
shock – shocked
cry – cried
flog – flogged
but then:
dream – dreamt
read – read
drink – drank
let – let
bring – brought
(Is this interesting on a cognitive level?)
Let us find a model.
Symbolic model for past tense formation
Dual route system
In order to form the past tense of a verb:
1 Keep a list of irregular verbs.
2 If verb not in list of irregulars, then add “-ed” to infinitive.
But this model has some drawbacks. . . Does it correctly describe how humans do it?
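The dual-route system maps directly onto a lookup followed by a default rule. The Python below is only an illustrative sketch (the irregulars list is tiny and the spelling adjustments are simplified, not a full account of English orthography):

```python
# Route 1: an explicit store of irregular forms (tiny, illustrative list).
IRREGULARS = {"drink": "drank", "bring": "brought", "dream": "dreamt",
              "read": "read", "let": "let"}

def past_tense(verb: str) -> str:
    # Route 1: look the verb up among the stored irregulars first.
    if verb in IRREGULARS:
        return IRREGULARS[verb]
    # Route 2: the default rule -- add "-ed" to the infinitive
    # (English spelling adjustments are simplified here).
    if verb.endswith("e"):
        return verb + "d"
    if verb.endswith("y") and verb[-2] not in "aeiou":
        return verb[:-1] + "ied"
    return verb + "ed"

print(past_tense("paint"))  # painted
print(past_tense("cry"))    # cried
print(past_tense("drink"))  # drank
```

Note how the order of the two routes matters: the list of exceptions must be consulted before the default rule fires.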
Still Past Tense
How do children acquire the past tense?
When they start to use the past tense, they use the right forms: “give – gave”, “sleep – slept”, “put – put”, “walk – walked”.
As they learn more and more verbs in the past tense, they suddenly seem to unlearn the correct irregular forms: “give – gived”, “sleep – sleeped”, “put – putted”, “walk – walked”.
When they learn still more verbs, they finally perform like adults; they are back to “give – gave”, “sleep – slept”, “put – put”, . . .
⇒ The performance on irregular verbs is U-shaped, i.e. when children are very young performance is good, then it gets weaker, and finally it is adult-like.
How to explain this U-shape in performance?
Developmental Adequacy
A model for a certain cognitive task is good when it describes human performance well: it is adequate on the performance level.
It is better when it can also describe performance during the acquisition of a new task: it is adequate on the developmental level.
Best model: a functional/operational one that allows predictions about new experiments.
Past Tense
Past Tense in Development
Stage 1:
At first the child knows only a few verbs.
It sees no regularity in their past tenses,
and learns them all “as a list” (or: as separate valleys in the attractor landscape), regulars and irregulars alike.
N.B.: many frequent verbs are irregular, so regulars are a minority at this stage.
⇒ good performance
Stage 2:
It learns more and more verbs; the number of regular verbs increases.
It gains a critical mass of regular verbs.
Thus it discovers the regularity (or: the attractors for regular verbs fuse into one attractor).
It tries to apply the regularity to all verbs it knows (or: the attractor for regulars distorts the basins of attraction of the irregulars).
⇒ Over-generalisation of the rule: the child regularises the irregulars; performance gets worse.
Stage 3:
The child is exposed to more verbs, both irregular and regular.
It readjusts the decision rule regular/irregular (or: rearranges the attractor landscape in detail) to fit adult language use.
(The modelling with a Hopfield network is given in parentheses; in practice one usually uses an MLP here instead of a Hopfield network.)
Past Tense Morphology
Summary
Past tense morphology: There is a U-shape in child performance during the development of past-tense formation. Explanations can be found in different degrees of generalisation.
Cognitive models: It is important for cognitive science not only to model the steady state of a cognitive system, but also how the system got there (learning/development).
Introduction / Colour Names / Categories
Language and Concepts
Concepts and Categories
Things we find in our world we cast into “concepts”:
Tree, Man, Woman . . .
to run, to walk, to whisper, to shout . . .
Love, Truth, Philosophy, to think
Usually, concepts that are perceived as something basic, elementary, simple, fundamental or important are cast into a word.
“child”: important, basic, elementary.
“red-haired man with a pimple on his nose”: not basic.
What about “Arrg-this-thingy-you-know-you-find-on-your-skin-one-morning-it’s-red-ugly-and-hurts”?
Hm, too long, and it became important in the early days of mankind.
So a new word is coined for this concept:
Call the thingy a “pimple”!
Concepts help us to categorise the things we find in the world.
Sets of concepts form categories.
Do concepts/categories shape the way we think about the world?
Whorfian Hypothesis and Grounding
Whorfian Hypothesis
strong version: The concepts we have determine how we perceive the world.
weak version: The concepts we form make it easier to speak about certain things, and thus bias what we can say and understand easily.
Grounding: But don’t we form concepts/categories because we perceive things as an entity or as similar?
Related to philosophical questions, the political-correctness debate . . .
Essentially it’s a chicken-and-egg problem: perception informs concept formation; then concepts guide perception . . .
As a natural scientist: How are concepts grounded in the real world? How do they relate to the real world? (“Symbol grounding”.)
Colours
A colour wheel: http://www.ficml.org/jemimap/style/color/wheel.html
Colour Names
Concepts – A simple example
Colour names:
In many languages: red, blue, green, yellow, black, white.
In Japanese: blue and green are considered secondary shades of a common colour “bleen” (and would perhaps be termed “light bleen” and “dark bleen” respectively).
(I am just wondering whether this would influence the Stroop effect: the word “green” printed in “blue”, but both are “bleen” in Japanese . . . )
Opposite case: Russian has two basic terms for what we call “blue” (i.e. one for light blue and one for dark blue) that are short, non-composite words.
Some languages have simple words only for “black”, “white” and “red”.
Colour names
But: all can perceive the same colours and colour differences (in psychophysical experiments).
And if shown a colour (e.g. “pink”) and asked to point to a more typical example of the colour (e.g. “red”), they choose roughly the same colour.
There is a hierarchy of language terms: if a language contains a term higher in the hierarchy, it also possesses the ones lower in the hierarchy:
black and white < red < green or yellow < both green and yellow < blue < brown < purple, pink, orange or grey, etc.
Hierarchy grounded in the spectral sensitivity of the eye?
Overview
Organisation of world knowledge in terms of categories
(Why categories?)
Membership in a category? Typicality?
How similar are two categories?
What are the relations between categories?
Organisation of categories?
Theories of Categorisation.
Typicality
Sentence verification
In order to probe world knowledge and the relations between its different pieces, subjects are asked to say whether sentences like the following are true:
1 A canary is a bird.
2 An ostrich is a bird.
3 A potato is a tree.
4 A gun is a tree.
Interesting? Yes, at least when you record error rates and reaction times!
Typicality
Sentence Verification – Results
There is a Typicality Effect for a Category:
One question (“Is a canary a bird?”) is on average answered faster than the other (“Is an ostrich a bird?”).
There is a Category Similarity Effect:
It takes longer to reject (“Is a potato a tree?”) as untrue than (“Is a gun a tree?”).
⇒ Some concepts fit better into a category than others.
⇒ World knowledge is somehow structured. It takes different times to access different pieces in different contexts.
Organisation of Categories
[Figure: categories organised with the help of attributes. From Alan J. Parkin, Essential Cognitive Psychology, Psychology Press, Hove, 2005, p. 159.]
Organisation of Categories
Means of transportation.
DIY.
Relations between Categories
Relations between Categories
Superordinate: “animal” contains “fish”.
Subordinate: “canary” is a “bird”.
Siblings: “shark” and “salmon” are both members of “fish”.
Default inheritance: a category inherits attributes from its superordinate by default: a “canary” can fly (from “bird”) and breathes (from “animal”).
Overwriting attributes: an ostrich cannot fly, though a “bird” can in general.
(Does the above resemble some familiar ideas of object-oriented design?)
First approach to similarity: how many nodes one has to travel from one category to the next.
Doesn’t predict the reaction times right.
Can’t explain typicality effects.
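The resemblance to object-oriented design can be made concrete with ordinary class inheritance; the class names and attributes below are illustrative, not code from the lecture:

```python
class Animal:
    breathes = True

class Bird(Animal):
    can_fly = True            # default attribute of the category "bird"

class Canary(Bird):
    pass                      # inherits can_fly (Bird) and breathes (Animal)

class Ostrich(Bird):
    can_fly = False           # the subcategory overwrites the default

print(Canary.can_fly, Canary.breathes)    # True True
print(Ostrich.can_fly, Ostrich.breathes)  # False True
```

Attribute lookup walks up the class hierarchy exactly like default inheritance walks up the category hierarchy, and a subclass definition overrides the inherited default.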
Lecture “Categories and Concepts continued”
Overview
Organisation of world knowledge in terms of categories
(Why categories?)
Membership in a category? Typicality?
How similar are two categories?
What are the relations between categories?
Organisation of categories?
Theories of Categorisation.
Organisation of Categories
[Figure: categories organised with the help of attributes. From Alan J. Parkin, Essential Cognitive Psychology, Psychology Press, Hove, 2005, p. 159.]
Typicality
Sentence Verification – Results
There is a Typicality Effect for a Category:
One question (“Is a canary a bird?”) is on average answered faster than the other (“Is an ostrich a bird?”).
There is a Category Similarity Effect:
It takes longer to reject (“Is a potato a tree?”) as untrue than (“Is a gun a tree?”).
⇒ Some concepts fit better into a category than others.
⇒ World knowledge is somehow structured. It takes different times to access different pieces in different contexts.
Categories – Another Example
What is a “cup”?
Some attributes:
one can drink liquids from it
mainly hot liquids like tea, coffee, hot chocolate
round
has a handle
made of ceramics / China / earthenware
not too big
When is a Cup a Cup?
A psychophysical experiment: subjects are asked how they would name the objects shown.
[Figure from a reproduction in Ellis and Hunt: Fundamentals of Cognitive Psychology, McGraw Hill, Boston, 1993, p. 210.]
Context dependence of Categories
Context Dependence
Subjects who saw the picture on the previous slide were asked to imagine the objects to be filled either
with liquids (solid lines), or
with food (dashed lines).
[Figure taken from a reproduction in Ellis and Hunt: Fundamentals of Cognitive Psychology, McGraw Hill, Boston, 1993, p. 211.]
Context dependence of Categories
Results from Experimental Cognitive Science
Category boundaries depend on context.
Categories are fuzzy.
Category membership can be gradual.
Categories
Theories of Categorisation
Exemplar Theory
Attribute Theory
Prototype Theory
. . .
Which theory explains which effects? Typicality and Context?
Categories
Exemplar theory
All instances are stored
Allows you to assess also the variability of the categories.
No abstraction/generalisation over instances takes place.
Needs a lot of storage: memory is overloaded, so it doesn’t support one main advantage of categorisation: data compression.
Can it explain typicality effects?
Can it explain context effects?
Maybe employed when you are building up a new category and so far have only a few instances of it.
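Exemplar theory amounts to storing every instance and consulting all of them when classifying a new item; a minimal nearest-neighbour sketch over invented feature vectors (the features and numbers are made up for illustration):

```python
# Every encountered instance is stored as (feature vector, category);
# the features (size, has_wings, lays_eggs) are invented for illustration.
exemplars = [
    ((0.1, 1, 1), "bird"),  # canary
    ((0.9, 1, 1), "bird"),  # ostrich
    ((0.6, 0, 1), "fish"),  # salmon
    ((0.8, 0, 1), "fish"),  # shark
]

def classify(item):
    # No abstraction takes place: every stored instance is consulted,
    # and the nearest one decides the category.
    def dist(ex):
        vec, _ = ex
        return sum((a - b) ** 2 for a, b in zip(item, vec))
    return min(exemplars, key=dist)[1]

print(classify((0.2, 1, 1)))  # bird
print(classify((0.7, 0, 1)))  # fish
```

Note that `classify` touches the whole list on every call: storage and lookup cost grow with the number of instances, which is exactly the overload objection above.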
Organisation of Categories
[Figure: categories organised with the help of attributes. From Alan J. Parkin, Essential Cognitive Psychology, Psychology Press, Hove, 2005, p. 159.]
Categories
Attribute Theory
A category is defined through a list of attributes and features. You have to live with exceptions:
Category “bird”
Attribute “can fly”
Instance “penguin”: attribute “cannot fly”
Works a bit like default inheritance in OO programming:
subcategories/instances can overwrite defaults from the supercategory.
To test category membership, just check the features (??).
It’s a discrete theory.
Can it explain typicality?
When the same number of features/attributes are met, recognition of a “not-so-typical” instance should be as fast as that of very “central” instances: this is not the case (“ostrich” versus “canary”).
And worse: members violating some of the core features (“can fly”) can be quite typical (“penguin”).
Can it explain context effects?
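The discrete feature check can be sketched as a simple set comparison; the features and the threshold below are invented for illustration:

```python
# The category is a checklist of defining features (illustrative).
BIRD = {"has_feathers", "lays_eggs", "can_fly"}

def is_member(instance, category, threshold=2):
    # Discrete test: count how many defining features are met.
    return len(instance & category) >= threshold

canary  = {"has_feathers", "lays_eggs", "can_fly"}
penguin = {"has_feathers", "lays_eggs", "swims"}  # violates "can fly"

# Both pass the checklist equally quickly -- the theory alone predicts
# no typicality difference, contrary to the reaction-time data.
print(is_member(canary, BIRD), is_member(penguin, BIRD))  # True True
```

The sketch makes the criticism visible: membership is all-or-nothing once the threshold is met, so it gives no natural way to say that a canary is a “better” bird than a penguin.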
Categories
Prototype theory
A category is defined through a prototype.
A prototype is like a quintessence or summary of all instances of the category you have ever encountered (or imagined).
It is like a prototypical member of the category, not necessarily one that exists in the real world.
Think of a prototypical tree or a prototypical chair.
There is some measure of the distance of an instance to the prototype, perhaps how much you must mentally distort the image of the prototype.
⇒ It’s a metric theory.
The smaller the distance, the more typical the instance.
Can it deal with typicality?
Think of the mountain landscape of a Hopfield network: distance is the time it takes the ball (the instance) to roll down to the valley (the prototype).
⇒ It can deal more easily with reaction times and typicality effects.
Can it deal with context effects?
⇒ It can deal better with fuzzy category membership.
Smooth transition from exemplar-based to prototype-based categories in development (think of irregular past-tense formation!).
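Prototype theory replaces the stored list of instances with one summary point plus a distance. In this illustrative sketch (feature vectors and numbers invented) the prototype is simply the mean of the encountered instances:

```python
# Toy instances of a category as (height, width) feature vectors.
instances = [(8.0, 7.0), (9.0, 8.0), (7.5, 7.5), (8.5, 7.0)]

# The prototype is a summary (here: the mean) of all encountered instances.
prototype = tuple(sum(xs) / len(xs) for xs in zip(*instances))

def typicality(item):
    # The smaller the distance to the prototype, the more typical the item.
    d = sum((a - b) ** 2 for a, b in zip(item, prototype)) ** 0.5
    return 1.0 / (1.0 + d)

mug = (8.2, 7.4)     # close to the prototype
vase = (25.0, 6.0)   # far away: low typicality, fuzzy membership
print(typicality(mug) > typicality(vase))  # True
```

Because typicality is graded rather than all-or-nothing, the metric account accommodates fuzzy boundaries and the faster responses for typical members in a way the discrete checklist cannot.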
Summary
World knowledge as expressed in categories
Categories are organised in a (single?) hierarchy
Categories have a typicality effect.
Categories have context effects.
Categories can be defined in terms of attributes or prototypes.
1 Cognitive Modelling with Hopfield Networks
2 Language, Categories and Concepts
  Language – Introduction
  Communication
  Phonemes and Syllables
  Language – Morphology
3 Concepts and Categories
  Introduction
  Colour Names
  Categories
  Typicality
4 Categories and Concepts continued
5 Feedback
Group Research Report
Topic Choices
Computers and Intelligence (Turing Test / Chinese Room Test): one half of groups
Cognitive Abilities of Dolphins (Rats, Apes, Bees): the other half
The Halting Problem: one group.
Group Research Report
General Problems
referencing
a list of references
some placeholder to refer to the reference in the text (e.g. [25] or (Gruning 2008)): “In a new publication (Gruning 2008) the authors . . . ” or “And from several experiments it became obvious that apes are more intelligent than bees [25].”
incomplete references: give author, title, year, form of publication
avoid obscure electronic references (news posts, links with cryptic numbers in them)
when there is a paper equivalent for an electronic resource, cite the paper equivalent
argumentation
a good piece of written work is more than just a chain of paragraphs and statements. It is very important to make the links clear: what is the relation between the different facts?
be critical, question statements, and contrast opposing positions. Where do you stand as the author?
too great a reluctance to delve deeper into the material and to think about the topic on an abstract level, instead of only a concrete and applied one.
form
grammar, spelling
sections, paragraphs, headings, title page: make your structure clear.
Computers and Intelligence
Expectations
What is intelligence?
What is artificial intelligence?
How do they compare?
Thought experiments for intelligence and the challenges to them.
Can there be such a thing as artificial intelligence in principle?
Common problems
too shallow a treatment of the topic, i.e. only discussing applications
arguments only in favour of one side
no critical questioning: Is a system really intelligent? What does it really mean to be intelligent?
some groups did not realise it is a THOUGHT experiment
it’s a philosophical / academic question, so no shallow discussion! And no discussion of advances in technology or commercial products.
Cognitive Abilities of Animals
Expectations
describe what high-level cognitive tasks your chosen animals are capable of.
question whether the experiments really prove what they claim to prove.
what cognitive resources does the animal need to fulfil this task?
perception
memory
what does the animal know? How does it know that it knows something? Does it know that another animal knows?
comparison to humans
Common problems
an unrelated sequence of statements / paragraphs
simply describing the anatomical features or general behaviour of an animal without describing their relation to its cognitive capabilities.
Final Research Report
Lessons to be learnt
Proper Referencing
Structure
Link your arguments
Let your own understanding, evaluations of and conclusions from the arguments shine through. However, they must be backed by the arguments you present.
Dig a bit deeper. No shallow argumentation.
Let some abstract / academic / philosophical reasoning enteryour paper.
Final Research Report
3 pages
detailed information on module homepage and hand-out here
submission by 01/05, 2400hrs.
electronic copy via Ulearn.
topics:
“Theory of Mind”
“Mirror Test”
“Discuss a Research Article”
your own topic (subject to approval)