

A new theoretical framework for multisensory integration
Michael W. Hadley and Elaine R. Reynolds

Neuroscience Program, Lafayette College, Easton PA 18042

Multisensory integration (MSI) literature has focused on the Superior Colliculus (SC), the subcortical area responsible for gaze orientation, resulting in an understanding of the development, classes, and computational principles of SC MSI.

Introduction

Analysis of the use of SOMs to model MSI

I took the important facets of computational models of SC MSI ([1],[5],[7] and [8]) and applied them to a cortical setting:

2.1 MMA’s Architecture [diagram: visual, auditory, and tactile inputs projecting to the Superior Colliculus]
2.2 MMA’s Results

MMA’s model [5] consists of m senses projecting to a 10x10 grid of neurons (the SC). The projections to the SC were trained as a SOM by presenting many examples of the different firing combinations.

MMA found the SC formed unisensory areas in the corners of the grid with multisensory areas in between the unisensory areas. The response of the network to multisensory stimuli showed a nonlinear increase as compared to the component unisensory stimuli (MSE).
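The training scheme described above can be sketched as a standard Kohonen SOM on concatenated sensory vectors. This is a minimal illustration only: the grid size matches the 10x10 SC grid from [5], but the input dimensionality, learning rate, neighborhood width, and stimulus statistics are assumed values, not MMA’s published parameters.

```python
# Minimal Kohonen SOM sketch in the spirit of MMA's model [5].
# Parameter values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

GRID = 10                 # 10x10 grid of SC neurons
DIM = 4                   # e.g. two senses, two inputs each (assumed)
weights = rng.random((GRID, GRID, DIM))

# Grid coordinates, used by the neighborhood function
coords = np.stack(
    np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij"), axis=-1
)

def train_step(x, lr=0.1, sigma=2.0):
    """Classic Kohonen update: pull the best-matching unit (BMU)
    and its grid neighbors toward the example x."""
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
    h = np.exp(-grid_d2 / (2 * sigma ** 2))      # Gaussian neighborhood
    weights[:] += lr * h[..., None] * (x - weights)

# Present many unisensory and multisensory firing combinations
for _ in range(2000):
    sense1_on, sense2_on = rng.random(2) < 0.5
    x = np.concatenate([
        sense1_on * rng.random(2),   # sense 1 activity (with noise)
        sense2_on * rng.random(2),   # sense 2 activity
    ])
    train_step(x)
```

With training like this, units that respond to similar firing combinations end up near each other on the grid, which is the map formation the results below describe.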

Martin, Meredith, Ahmad (MMA) SOM model
Each example that a SOM is trained on maps to a location in the grid (simplified in 3.1). Similar examples are mapped to similar locations, forming unisensory and multisensory areas (denoted by the colors in 3.1).

SOMs are a solid foundation for cortical MSI:
•[1], [5] and [7] used excitatory self-organizing maps (SOMs) to explain MSI
•[7] and [8] used layered, topographic architectures to model multisensory information processing
•[7] showed that SOMs can be used in a multilayer system
•[8] showed the importance of using inhibition and feedback

I built extensions onto [5] to test the applicability of the SOM-based models to the cortex and discovered that a SOM alone cannot explain cortical MSI. I propose an additional training rule and a hierarchy based on context to allow inhibition and feedback in a multilayer SOM.

•SOMs form a weight distribution that allows MSI via the sigmoidal firing curve
•Noise is essential to smooth map formation. The random variations act as micro-examples that fill in the gaps in the map (contrast 2.2 with 3.3)
•Evidence suggests that our sensory areas have a topographic organization, and [4] suggests this is the result of SOMs

1.1 Traditional view of multisensory areas
1.2 Revised view of multisensory areas (adapted from [2])

Cortical MSI lacks such an understanding; hence, the recent shift in views on cortical MSI (1.1 to 1.2) has yet to be computationally modeled.

3.1 How SOMs form
3.2 Sigmoidal curve yields MSE
3.3 No noise

The key to the integration is the sigmoidal firing curve. The weights in the multisensory areas pay equal attention to each modality, so the unisensory responsiveness is subthreshold while the multisensory response is above threshold (see 3.2).
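The superadditivity this describes can be shown with a few lines of arithmetic. The threshold, gain, and weight values below are illustrative assumptions chosen to make the effect visible, not fitted model parameters.

```python
# Why a sigmoidal firing curve yields multisensory enhancement (MSE):
# the neuron weights each modality equally, so one modality alone
# drives it below threshold while both together cross threshold.
# threshold, gain, and weights are assumed illustrative values.
import math

def sigmoid(net, threshold=1.0, gain=8.0):
    return 1.0 / (1.0 + math.exp(-gain * (net - threshold)))

w_visual, w_auditory = 0.6, 0.6     # equal attention to each modality

r_visual = sigmoid(w_visual * 1.0)                    # subthreshold
r_auditory = sigmoid(w_auditory * 1.0)                # subthreshold
r_both = sigmoid(w_visual * 1.0 + w_auditory * 1.0)   # above threshold

# The bimodal response exceeds the sum of the unisensory responses
assert r_both > r_visual + r_auditory
```

Because the unisensory inputs land on the flat lower part of the curve and the combined input lands past the knee, the bimodal response is a nonlinear increase over the unisensory components, which is the MSE reported in 2.2 and 3.2.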


Moving SOMs from MMA to a Cortical Hierarchy
•Virtual multisensory world
•Larger “multi-neuron” modalities
•Inhibition
•Multiple receptive fields (RFs)

[Figures 4.2–4.3: responses to Sense 1, Sense 2, and Bimodal (MSE/MSD) stimuli across sensory sizes 2x2, 3x3, and 4x4]
[Figure 4.4: Sense 1, Sense 2, and Bimodal MSE responses with single RFs vs. multiple RFs]

4.1 World with unisensory and multisensory space

4.2 Extensions to larger senses decreased the signal-to-noise ratio resulting in MSE of noise activity. Adjusting parameters was not enough to fix the ratio.

4.3 Inhibition increases signal-to-noise, allowing for larger grids and potentially both MSE (red) and MSD (green).

4.4 2D SOMs can only map one relationship. They can either map the overlapping RFs within a sense or the overlapping RFs between senses, but not both.

[Hierarchy diagram: a Multisensory World feeds the Visual Area and Auditory Area, which project to the Cortex; connections are numbered 1–4, matching the training rule sets]

[Chart: Sense 1, Sense 2, and Bimodal MSE responses with Signal:Noise by Sensory Size — sizes 3x3, 4x4, 2x2, and 4x4 (adjusted); ratios 1:8, 1:15, and 1:3]

Hierarchy
The flow of information in the hierarchy and the additional rule set address the problems of signal-to-noise and inhibition while conforming to the literature on cortical MSI.

The literature has yet to suggest reasons for the existence of interconnections between low-level sensory areas and feedback from cortical areas. This model works by setting up a hierarchy of contexts. The visual, auditory and cortical areas each have their own view of the world. The cross talk and feedback allow these contexts to be enhanced or suppressed as needed to create a coherent view.

This view of information flow through a hierarchy has been expressed in [3] and [5]. [3] successfully implemented a contextual hierarchy to simulate advanced computer vision.

Acknowledgements
I would like to thank Dr. Elaine Reynolds for her continued advice and mentorship through the course of this research.

References

Training rule sets
•Feed-forward and excitatory connections are trained with a traditional SOM
•A modified Hebbian rule with inhibition deals with the issue of multiple RFs and signal-to-noise

1) Unisensory Extraction: The visual and auditory areas are trained with a SOM to store unisensory patterns.

2) Cross-modal Interaction: Interactions between the two sensory areas allow the alignment of sensory information:
•If two neurons fire in response to the same input, increase the connection weight
•If one neuron fires but the other does not, decrease the connection weight
•The weights are capped to allow subthreshold influences that generate MSE

3) Multisensory Integration: The primarily “unisensory” areas’ projections to the cortical area are trained with a SOM to extract a multisensory view of the world.

4) Cortical Feedback: Cortical feedback is trained with the same scheme as 2) to allow for top-down integration.
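The three bullets above can be sketched as a single weight-update function. The learning rate and cap below are assumed values for illustration; the poster does not report the actual constants.

```python
# Sketch of the proposed modified Hebbian rule with inhibition:
# co-active neurons strengthen their connection, one-sided firing
# weakens it, and a cap keeps influences subthreshold so MSE can arise.
# lr and cap are assumed illustrative values, not the model's constants.
def update_weight(w, pre_fires, post_fires, lr=0.05, cap=0.5):
    if pre_fires and post_fires:
        w += lr                          # both fire: strengthen
    elif pre_fires != post_fires:
        w -= lr                          # only one fires: weaken
    return max(-cap, min(w, cap))        # cap the weight magnitude

# Example: repeated co-activation saturates at the cap
w = 0.0
for _ in range(20):
    w = update_weight(w, True, True)     # w climbs to the 0.5 cap
```

Because the cap holds cross-modal weights below threshold, a connection can bias a neuron’s response without driving it alone, which is what lets the combined input generate MSE.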

[1] Anastasio, T. J., & Patton, P. E. (2003). A two-stage unsupervised learning algorithm reproduces multisensory enhancement in a neural network model of the corticotectal system. Journal of Neuroscience, 23, 6713-6727.
[2] Ghazanfar, A. A., & Schroeder, C. E. (2006). Is neocortex essentially multisensory? Trends in Cognitive Sciences, 10, 278-285.
[3] Hawkins, J., & Blakeslee, S. (2004). On Intelligence. New York: Holt.
[4] Kohonen, T., & Hari, R. (1999). Where the abstract feature maps of the brain might come from. Trends in Neurosciences, 22, 135-139.
[5] Martin, J. G., Meredith, M. A., & Ahmad, K. (2009). Modeling multisensory enhancement with self-organizing maps. Frontiers in Computational Neuroscience, 3.
[6] Meyer, K., & Damasio, A. (2009). Convergence and divergence in a neural architecture for recognition and memory. Trends in Neurosciences, 32, 376-382.
[7] Pavlou, A., & Casey, M. (2010). Simulating the effects of cortical feedback in the superior colliculus with topographic maps. Proceedings of the International Joint Conference on Neural Networks 2010, Barcelona, 18-23 July.
[8] Ursino, M., Cuppini, C., Magosso, E., Serino, A., & Pellegrino, G. (2009). Multisensory integration in the superior colliculus: a neural network model. Journal of Computational Neuroscience, 26, 55-73.