Proceedings of the
5th International Conference on Auditory Cortex
Towards a Synthesis of Human and Animal Research
September 13 – 17, 2014
Magdeburg, Germany
edited by
Eike Budinger
Eike Budinger, Dr.
Leibniz Institute for Neurobiology
Brenneckestraße 6
39118 Magdeburg / Germany
Editorial deadline: September 4, 2014
Proceedings of the "5th International Conference on Auditory Cortex – Towards a Synthesis of Human and Animal Research"
Scientific Organizing Committee:
Dr. André Brechmann, PD Dr. Michael Brosch, Dr. Eike Budinger, PD Dr. Peter Heil, PD Dr. Reinhard König, Prof. Frank W. Ohl, Prof. Henning Scheich
Leibniz Institute for Neurobiology
Brenneckestraße 6
39118 Magdeburg / Germany
Conference Office:
Carola Kolouschek
Public Relations | Media | Events
Preface
We would like to welcome you to the 5th International Conference on Auditory Cortex in Magdeburg. This conference continues the series of previous meetings which were held in Magdeburg (Germany) in 2003 and 2009, in Nottingham (UK) in 2006, and in Lausanne (Switzerland) in 2012.
We are very pleased that more than 270 researchers from all over the world will attend this year's conference. More than 200 scientific contributions were submitted.
The scientific program covers a wide range of topics and will provide unique opportunities to discuss current ideas and concepts of auditory cortex function in humans and animals. Overall, 40 speakers, from young researchers to long-established leading scientists, will present talks in six sessions entitled:
- Auditory cortex in different species
- The hearing-action cycle
- Auditory cortex: It's about time
- Auditory cortex: Clinical aspects
- Multisensory interplay in auditory cortex
- Learning in auditory cortex
In addition, there will be two poster sessions with 165 posters and ample space and time for discussions. Ten posters have been selected for short oral presentations.
The setting of the meeting in the marvelous Herrenkrug Parkhotel, right next to the Elbe river, has in the past provided a relaxed yet stimulating atmosphere for scientific discussions. We are confident this spirit will prevail again in 2014. We, the organizers of the conference, will do our very best to maintain this ambience by providing an excellent scientific program complemented by attractive social events such as a welcome reception, a BBQ, the conference dinner, a dragonboat race, and a Bavarian-style Oktoberfest. Still, if you need a break from it all, there are ideal opportunities for a range of other activities (wellness, golf, tennis, cycling, and much more).
So, welcome again to the 5th International Conference on Auditory Cortex in Magdeburg!
The scientific organizing committee
André Brechmann, Michael Brosch, Eike Budinger, Peter Heil, Reinhard König, Frank Ohl, and Henning Scheich
For detailed information about the conference venue, registration, social events, travel, accommodation ... and much more, please visit our website:
www.auditory-cortex.de
Content
Program
Program at a glance
Scientific contributions
S1: Auditory cortex in different species
S2: The hearing-action cycle
S3: Auditory cortex: It's about time
S4: Auditory cortex: Clinical aspects
S5: Multisensory interplay in auditory cortex
S6: Learning in auditory cortex
Posters and short oral presentations
Author index
List of participants
List of sponsors
Acknowledgements
Program
5th International Conference on Auditory Cortex
Towards a Synthesis of Human and Animal Research
Saturday, 13/Sep/2014
4:00pm – 7:00pm
Herrenkrug Lenné Room
Registration
Please register at our registration desk. You may also book additional social events for yourself and accompanying persons. Please note that we cannot accept credit cards on-site.
7:00pm – 10:00pm
Herrenkrug Lobby/Bar
Welcome Reception
Drinks and snacks will be served as a first welcome. A representative of the Magdeburg Tourist Office will also be on hand to provide information about Magdeburg.
Sunday, 14/Sep/2014
8:45am – 9:00am
Herrenkrug Ball Room
Opening remarks
Peter Heil, Leibniz Institute for Neurobiology Magdeburg, Germany
9:00am – 10:30am
Herrenkrug Ball Room
S1/1: Auditory cortex in different species
Chairs: Peter Heil, Leibniz Institute for Neurobiology Magdeburg, Germany; Dexter R. Irvine, Monash University and Bionics Institute, Australia
Speakers: Manfred Kössl, Goethe University of Frankfurt (Main), Germany; Lutz Wiegrebe, Ludwig Maximilians University of Munich, Germany; Georg M. Klump, Carl von Ossietzky University Oldenburg, Germany
10:30am – 11:00am
Herrenkrug Terrace
Coffee break
11:00am – 12:30pm
Herrenkrug Ball Room
S1/2: Auditory cortex in different species
Chairs: Peter Heil, Leibniz Institute for Neurobiology Magdeburg, Germany; Dexter R. Irvine, Monash University and Bionics Institute, Australia
Speakers: Julie Elie, University of California at Berkeley, United States of America; Christopher I. Petkov, Newcastle University, United Kingdom; Josef Rauschecker, Georgetown University, United States of America
12:30pm – 2:00pm
Herrenkrug Restaurant
Lunch break
2:00pm – 3:00pm
Herrenkrug Ball Room
S2/1: The hearing-action cycle
Chairs: Michael Brosch, Leibniz Institute for Neurobiology Magdeburg, Germany; Jonathan Fritz, University of Maryland, United States of America
Speakers: Erich Schröger, University of Leipzig, Germany; Sonja A. Kotz, University of Manchester, United Kingdom
3:00pm – 3:30pm
Herrenkrug Ball Room
Short oral presentations I
Session Chair: Max Happel, Leibniz Institute for Neurobiology
Authors of posters 125, 167, 169, 179, and 239 will briefly present their studies.
3:30pm – 4:00pm
Herrenkrug Terrace
Coffee break
4:00pm – 6:00pm
Herrenkrug Wintergarden / Marquee
Poster session (odd)
Presentation of posters with odd numbers and of poster 220.
7:00pm – 10:00pm
Herrenkrug Terrace
Barbeque
Monday, 15/Sep/2014
8:30am – 10:00am
Herrenkrug Ball Room
S2/2: The hearing-action cycle
Chairs: Michael Brosch, Leibniz Institute for Neurobiology Magdeburg, Germany; Jonathan Fritz, University of Maryland, United States of America
Speakers: Anthony Zador, Cold Spring Harbor Laboratory, United States of America; David M. Schneider, Duke University, United States of America; Hugo Merchant, National University of Mexico, Mexico
10:00am – 10:30am
Herrenkrug Terrace
Coffee break
10:30am – 11:30am
Herrenkrug Ball Room
S2/3: The hearing-action cycle
Chairs: Michael Brosch, Leibniz Institute for Neurobiology Magdeburg, Germany; Jonathan Fritz, University of Maryland, United States of America
Speakers: Xiaoqin Wang, Johns Hopkins University, United States of America; Edward Chang, University of California at San Francisco, United States of America
11:30am – 12:30pm
Herrenkrug Ball Room
S3/1: Auditory cortex: It's about time
Chairs: Maria Chait, University College London, United Kingdom; Reinhard König, Leibniz Institute for Neurobiology Magdeburg, Germany
Speakers: Mitchell Steinschneider, Albert Einstein College of Medicine, New York, United States of America; Christoph Schreiner, University of California at San Francisco, United States of America
12:30pm – 2:00pm
Herrenkrug Restaurant
Lunch break
2:00pm – 3:00pm
Herrenkrug Ball Room
S3/2: Auditory cortex: It's about time
Chairs: Maria Chait, University College London, United Kingdom; Reinhard König, Leibniz Institute for Neurobiology Magdeburg, Germany
Speakers: István Winkler, Hungarian Academy of Sciences, Budapest, Hungary; David Poeppel, New York University, United States of America
3:00pm – 3:30pm
Herrenkrug Ball Room
Short oral presentations II
Session Chair: Susann Deike, Leibniz Institute for Neurobiology Magdeburg, Germany
Authors of posters 106, 126, 180, 186, and 232 will briefly present their studies.
3:30pm – 4:00pm
Herrenkrug Terrace
Coffee break
4:00pm – 6:00pm
Herrenkrug Wintergarden / Marquee
Poster session (even)
Presentation of posters with even numbers and of posters 123 and 173.
8:00pm – 11:00pm
Herrenkrug Ball Room
Conference dinner
Tuesday, 16/Sep/2014
8:30am – 10:00am
Herrenkrug Ball Room
S3/3: Auditory cortex: It's about time
Chairs: Maria Chait, University College London, United Kingdom; Reinhard König, Leibniz Institute for Neurobiology Magdeburg, Germany
Speakers: Patrick J.C. May, Aalto University, Finland; Israel Nelken, Hebrew University, Jerusalem, Israel; Alexandra Bendixen, Carl von Ossietzky University Oldenburg, Germany
10:00am – 10:30am
Herrenkrug Terrace
Coffee break
10:30am – 12:30pm
Herrenkrug Ball Room
S4: Auditory cortex: Clinical aspects
Chairs: André Brechmann, Leibniz Institute for Neurobiology Magdeburg, Germany; Ingrid S. Johnsrude, University of Western Ontario, Canada
Speakers: Timothy D. Griffiths, Newcastle University, United Kingdom; Jos J. Eggermont, University of Calgary, Canada; Anu Sharma, University of Colorado at Boulder, United States of America; Nina Kraus, Northwestern University, Evanston, United States of America
12:30pm – 2:00pm
Herrenkrug Restaurant
Lunch break
2:00pm – 3:30pm
Herrenkrug Ball Room
S5/1: Multisensory interplay in auditory cortex
Chairs: Eike Budinger, Leibniz Institute for Neurobiology Magdeburg, Germany; Stephen Lomber, University of Western Ontario, London, Canada
Speakers: Eike Budinger, Leibniz Institute for Neurobiology Magdeburg, Germany; M. Alex Meredith, Virginia Commonwealth University, Richmond, United States of America; Pascal Barone, CNRS Brain & Cognition, Toulouse, France
3:30pm – 8:30pm
Lake "Salbker See"
Dragonboat race
We have arranged several dragonboats (including guides) and will hold a race on one of Magdeburg's waterways. Please note that you may get a little wet, so please bring appropriate clothing. There will be a bus transfer to the lake "Salbker See" and a shuttle back to the Herrenkrug Parkhotel.
Wednesday, 17/Sep/2014
8:30am – 10:00am
Herrenkrug Ball Room
S5/2: Multisensory interplay in auditory cortex
Chairs: Eike Budinger, Leibniz Institute for Neurobiology, Germany; Stephen Lomber, University of Western Ontario, London, Canada
Speakers: Jufang He, City University of Hong Kong, China; Toemme Noesselt, Otto-von-Guericke University Magdeburg, Germany; Micah M. Murray, University Hospital Center and University of Lausanne, Switzerland
10:00am – 10:30am
Herrenkrug Terrace
Coffee break
10:30am – 12:00pm
Herrenkrug Ball Room
S6/1: Learning in auditory cortex
Chairs: Frank W. Ohl, Leibniz Institute for Neurobiology Magdeburg, Germany; Jan W.H. Schnupp, University of Oxford, United Kingdom
Speakers: Kasia M. Bieszczad, University of California at Irvine, United States of America; Timothy Gentner, University of California at San Diego, United States of America; Frank W. Ohl, Leibniz Institute for Neurobiology Magdeburg, Germany
12:00pm – 12:30pm
Herrenkrug Ball Room
Announcements for AC2017
Group photo
12:30pm – 2:00pm
Herrenkrug Restaurant
Lunch break
2:00pm – 4:00pm
Herrenkrug Ball Room
S6/2: Learning in auditory cortex
Chairs: Frank W. Ohl, Leibniz Institute for Neurobiology Magdeburg, Germany; Jan W.H. Schnupp, University of Oxford, United Kingdom
Speakers: Andrew J. King, University of Oxford, United Kingdom; Shaowen Bao, University of California at Berkeley, United States of America; Stephen V. David, Oregon Health and Science University, Portland, United States of America; Robert Froemke, New York University School of Medicine, United States of America
4:00pm – 4:30pm
Herrenkrug Terrace
Coffee break
4:30pm – 6:00pm
Herrenkrug Ball Room
S6/3: Learning in auditory cortex
Chairs: Frank W. Ohl, Leibniz Institute for Neurobiology Magdeburg, Germany; Jan W.H. Schnupp, University of Oxford, United Kingdom
Speakers: Robert Zatorre, McGill University, Montreal, Canada; Elia Formisano, Maastricht University, The Netherlands; Christo Pantev, Institute for Biomagnetism and Biosignalanalysis, Muenster, Germany
6:00pm – 6:30pm
Herrenkrug Ball Room
Closing remarks / Awards
Peter Heil, Leibniz Institute for Neurobiology Magdeburg, Germany
6:30pm – 11:00pm
Restaurant "Mückenwirt"
Farewell party
This party will be organized in the style of the Bavarian Oktoberfest. We strongly encourage wearing traditional Bavarian clothes. There will be a bus transfer to the restaurant "Mückenwirt" and a shuttle back to the Herrenkrug Parkhotel.
Program at a glance
Scientific contributions
Session 1: Auditory cortex in different species
Studies of the anatomical and functional organization of auditory cortex in different species, and of its role in communication and behavior, help to identify common underlying principles of auditory cortex functioning and to distinguish them from species-specific specializations owing to particular needs and evolutionary traits. In this session, speakers will provide state-of-the-art knowledge and views on the common functional organization, also by means of comparison to other sensory cortices, and on specific adaptations of auditory cortex in different species. Emphasis will be on humans and non-human primates, echolocating bats, and birds, where homologies with mammalian brains have recently been revised. All constitute popular model systems for studies of auditory cortex and its role in hearing, communication, behavior, and learning.
S1/1: Sunday, 14/Sep/2014, 9:00am – 10:30am
ID: 132
Auditory cortex adaptations in echolocating bats
Manfred Kössl1, Julio Hechavarria1, Cornelia Voss1, Markus Schäfer1, Marianne Vater2
1Institute for Cell Biology and Neuroscience, Goethe-University Frankfurt am Main, Germany; 2Institute for Biochemistry and Biology, University of Potsdam, Germany
[email protected]
Bats rely on audition as their main means of echolocation-based orientation. In addition, they are known to use complex communication call sequences comparable to those of some songbirds. Their auditory cortices are largely hypertrophied, both in insect-hunting and in frugivorous bat species. We will review auditory cortex functions of bat species that are adapted to different orientation tasks and compare them with those of non-echolocating mammals. Two groups of bats are considered: the FM bats Carollia perspicillata, Eptesicus fuscus, Myotis lucifugus, and Pteronotus quadridens, and the CF-FM bats Pteronotus parnellii and Rhinolophus rouxi. Special emphasis will be laid on cortical time computation, in the form of echo-delay tuning and duration tuning, and on its topography. Most bat species studied have chronotopically organized dorsal cortex regions with a rostrocaudal gradient of increasing echo delay time that codes target distance. However, two insect-hunting bat species do not require a mapped representation of external space and may use population coding mechanisms for space representation. There is also a large species-specific difference in the degree of cortical frequency convergence, related to the use of different harmonic components of the sonar signal during echolocation. We aim to assess whether bat auditory cortex can serve as a model system for general auditory computation tasks in other mammals and whether bat cortex also represents special design solutions for active hearing algorithms.
ID: 291
Neuroethology of biosonar: Listening to silent objects by bats and humans
Lutz Wiegrebe, Uwe Firzlaff, Virginia Flanagin
Division of Neurobiology, Ludwig-Maximilians-University Munich, Germany
[email protected]
When vision becomes useless or unavailable, mammals develop remarkable skills for exploring their habitat through the auditory analysis of the echoes of self-generated sounds. I will report on combinations of formal psychophysics, imaging, and electrophysiology investigating how 3D objects of different size are perceived and cortically represented in bats and in humans trained in echolocation. We show, first, that bat biosonar exhibits size constancy and that this perceptual skill is reflected in responses of units in the high-frequency parts of the bat primary auditory cortex and the adjacent anterior auditory field. Second, we show that binaural analysis of the object's sonar aperture, rather than simple target strength, underlies the perception and neural encoding of extended objects. Finally, we show how well humans can use biosonar to explore the size of enclosed spaces. Techniques that allow human biosonar to be performed in fMRI reveal extraordinary coupling between sensory and motor cortices, with information spill-over.
ID: 284
Auditory scene analysis in the cortex of birds and mammals
Georg M. Klump, Naoya Itatani, Lena-Vanessa Dolležal, Sandra Tolnai, Rainer Beutelmann
Cluster of Excellence "Hearing4all", School of Medicine and Health Sciences, Carl von Ossietzky University of Oldenburg, Germany
georg.klump@uni-oldenburg.de
Identifying the acoustic signals from different sources in the natural environment is crucial for receivers. The perceptual mechanisms underlying the segregation of the signals from these sources in complex auditory scenes can be studied by applying paradigms in which the performance of the auditory system depends on its ability to segregate the signal streams from different sources. Such tasks are termed “objective stream segregation tasks”, and they lend themselves to evaluating both the perceptual performance and the neuronal response. Here we present results from two studies using an approach based on signal-detection theory (SDT). In the first study, we demonstrate that the ability of European starlings to analyze the relative timing of two sequential signals depends on whether these are processed as belonging to one or two streams. In one-stream processing, deviations from a regular time pattern can be detected better than in two-stream processing. The representation of the physical signal characteristics by the birds’ primary auditory forebrain neurons can explain the sensitivity observed in perception. These neurons, however, do not reflect the birds’ perceptual decision, which is consistent with evidence obtained in earlier non-invasive studies on human auditory streaming. In the second study, we demonstrate that informational masking in the perception of intensity increments by Mongolian gerbils depends on whether the stream of signals containing the standards and the targets with an increment is processed in the same stream as, or in a different stream from, distractors with a roving level. An SDT analysis of responses in the gerbil inferior colliculus and auditory cortex reveals a neuronal correlate of perceptual informational masking already at the level of the midbrain. Inferior colliculus neurons show a sensitivity that corresponds to the perception. The performance of gerbils in this informational masking task is similar to that of human subjects. In summary, both studies demonstrate that animal models are suitable for unraveling the neuronal mechanisms underlying the perception of signals in complex auditory scenes.
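For readers less familiar with SDT, the sensitivity index at the heart of such analyses is the same whether it is computed from behavioral or from neuronal responses; a standard textbook formulation (not taken from the authors' methods) is:

    d' = \Phi^{-1}(P_{hit}) - \Phi^{-1}(P_{FA})

where \Phi^{-1} is the inverse of the standard normal cumulative distribution function, P_hit is the proportion of detected targets (e.g., intensity increments), and P_FA is the proportion of false alarms on standards. Applying the same formula to hit and false-alarm rates derived from neuronal firing allows neuronal and perceptual sensitivity to be compared on a common scale.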
S1/2: Sunday, 14/Sep/2014, 11:00am – 12:30pm
ID: 216
Neural representations of voice and meaning in the avian auditory cortex
Julie Elie, Frederic Theunissen
Dept. of Psychology, University of California Berkeley, United States of America
[email protected]
Understanding how the brain extracts meaning from vocalizations is a central question in auditory research. Communication sounds distinguish themselves not only by their acoustical properties but also by their information content. Here, we are developing the birdsong model to investigate how the auditory system extracts invariant features carrying the symbolic information of acoustic signals and categorizes communication sounds according to their social meanings. Songbirds have been used extensively to study vocal learning, but the communicative function of vocalizations and their neural representation have yet to be examined. In our research, we first generated a library containing the entire zebra finch vocal repertoire and organized communication calls into 9 different categories. We then investigated the neural representations of these semantic categories in the primary and secondary auditory areas of 6 zebra finches. To decrypt the neural computations underlying the classification of these calls into semantic categories, we used a combination of optimal decoding methods and encoding models of the neural response that took into account both the acoustical properties of the sounds and their semantic grouping. Both decoding and encoding analyses show that neural responses in higher auditory areas can be more effectively explained by models that describe sounds in terms of their semantic content rather than just their acoustical features. The optimal decoding method revealed that, for 2/3 of the units, neural discriminability of semantic groups is higher than what could be expected from the spectro-temporal features of the stimuli alone. The encoding model revealed that many neural responses are best explained by non-linear transformations of spectro-temporal sound patterns and that these non-linearities emphasize the semantic grouping of calls. Combining these results with the anatomical properties of cells (positions and spike shapes) gives new insight into the neural representation of meaningful stimuli in the avian auditory neural network.
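To make the logic of the decoding analysis concrete, the following minimal sketch (hypothetical variable names and random stand-in data; not the authors' code) shows how one can ask whether single-trial neural responses discriminate semantic categories, using cross-validated classification:

    # Minimal decoding sketch: predict the semantic category of each call
    # from a trial-by-feature matrix of neural responses (e.g., binned
    # spike counts of the recorded units).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_features, n_categories = 180, 40, 9
    X = rng.normal(size=(n_trials, n_features))       # stand-in for neural responses
    y = rng.integers(0, n_categories, size=n_trials)  # stand-in for call categories

    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5)         # 5-fold cross-validation
    print(f"decoding accuracy: {scores.mean():.2f} (chance ~ {1 / n_categories:.2f})")

With real data, cross-validated accuracy reliably above chance indicates that the responses carry category information.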
ID: 308
Artificial Grammar learning and the primate brain
Christopher I. Petkov
Laboratory of Comparative Neuropsychology, Newcastle University, United Kingdom
[email protected]
Many animals, nonhuman primates included, are not thought to be able to combine their vocalizations into structured sequences. Nonetheless, it remains possible that these animals could learn to recognize certain types of rule-based sequences, such as those generated by Artificial Grammars (AGs), and that sequence learning abilities could have been evolutionary precursors to human language. Thus, an interesting empirical question is which animal species can learn various levels of AG structural complexity. Understanding this could clarify the evolutionary roots of human language and facilitate the development of animal models to study language precursors at the neuronal level. In this talk I will first describe the results from behavioral AG learning work that we have conducted with macaque and marmoset monkeys, two species of nonhuman primates representing different primate evolutionary lineages. Here, I will propose a quantitative approach to relate our findings to those that have been obtained in other animal species (including songbirds) and with different AGs. Then I will describe fMRI results on macaque brain regions that are involved in AG learning and how these results compare to fMRI findings in humans and chimpanzees (the latter conducted with collaborators at Yerkes Primate Research Center). I conclude with an overview of macaque neurophysiology work, which provides insights into neuronal responses and cortical oscillations associated with AG learning.
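As a concrete illustration of what an Artificial Grammar is, here is a toy finite-state grammar and a generator for "grammatical" sequences (the states and symbols are invented for illustration, not those used in this work):

    # Generate sequences from a toy finite-state artificial grammar.
    import random

    # Each state maps to (emitted symbol, next state) options; None ends the sequence.
    GRAMMAR = {
        "S0": [("A", "S1")],
        "S1": [("B", "S1"), ("C", "S2")],
        "S2": [("D", None), ("B", "S1")],
    }

    def generate(max_len=10):
        state, seq = "S0", []
        while state is not None and len(seq) < max_len:
            symbol, state = random.choice(GRAMMAR[state])
            seq.append(symbol)
        return seq

    print(generate())  # e.g. ['A', 'B', 'C', 'D'] -- a grammatical sequence

In a typical AG-learning experiment, subjects are first exposed to such grammatical sequences and then tested on whether they respond differently to sequences that violate the transition rules.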
ID: 299
Auditory cortex of primates: What can we learn from visual cortex (and from control theory)?
Josef Rauschecker1,2
1Dept. of Physiology and Biophysics, Georgetown University, United States of America; 2Institute for Advanced Studies, Technical University Munich, Germany
[email protected]
A comparative view of the brain, comparing the same or similar functions across species and sensory systems, offers a number of advantages. In particular, it allows the formal purpose of a model structure to be separated from its implementation in specific brains. The dual-stream model of auditory cortical processing was originally conceived by analogy to the visual cortex and incorporates neural mechanisms that are found in both the visual and auditory systems. Examples are direction selectivity, size selectivity, and simple and complex cells based on the segregation of on- and off-sub-regions of the receptive field. On a larger scale, dual processing pathways have been envisioned as representing the two main facets of sensory perception: 1) identification of objects and 2) processing of stimuli in space. However, the analogies are even more far-reaching than that and, by further expanding this view in terms of control theory, may offer an overarching model of cortical function. The expanded view of dorsal-stream function in primates was first presented at AC2011 (Rauschecker 2011). The model defines a more general role of the dorsal pathway in sensorimotor integration and control. For instance, the production, storage, and anticipation of stimuli resulting from action sequences may be mediated by the dorsal stream through its close relationship to sensorimotor structures in parietal and premotor cortex and in the basal ganglia. Expressed in the language of control theory, dorsal-stream function may subserve the encoding of actions as “forward models” informing sensory structures of actions that are about to occur, i.e. as internal models that produce a predicted sensation. Conversely, dorsal-stream function also relates to the programming of motor structures by sensory information via “inverse models”. In this case, the motor system (“controller”) receives the desired sensation as input and must find actions that cause actual sensations to be as close as possible to the desired sensations (Jordan and Rumelhart 1992). Similar models of audio-motor function have been proposed for the vocal system of songbirds and have obvious relevance for the understanding of communication and its evolution in general (Rauschecker 2012). Language and music in humans are two cognitive functions that are highly evolved but are likely based on the same mechanisms (Bornkessel-Schlesewsky et al., 2014).
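In the control-theoretic notation sketched above (a generic formulation following Jordan and Rumelhart 1992, not equations taken from the talk), a forward model f predicts the sensory consequence of issuing motor command u_t in state s_t, while an inverse model g (the "controller") maps a desired sensation to the command expected to realize it:

    \hat{s}_{t+1} = f(s_t, u_t)        % forward model: predicted sensation
    u_t = g(s_t, s^{*}_{t+1})          % inverse model: action selection

Learning can then be driven by minimizing the prediction error ||s_{t+1} - \hat{s}_{t+1}|| for the forward model and the goal error ||s^{*}_{t+1} - s_{t+1}|| for the inverse model.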
Session 2: The hearing-action cycle
When we hear sounds, we may decide to orient and act towards the location they come from. When we move, we frequently generate sounds, and we use sounds to guide and control our movements and actions. The interrelationships between sounds and actions have recently attracted the interest of researchers of auditory cortex. They complement recent research on the representation of non-auditory aspects of auditory tasks in auditory cortex. They also complement the notion that auditory cortex functions as a "semantic processor" deducing the task-specific meaning of sounds. In this session, speakers will present different views on the involvement of auditory cortex in the hearing-action cycle. Specifically, this will include audio-vocal interactions, the neural basis of speech perception and production, auditory-motor interactions in rhythm production and perception, decision making, and the involvement of brain structures other than auditory cortex, such as motor cortex, basal ganglia, and cerebellum.
S2/1: Sunday, 14/Sep/2014, 2:00pm – 3:00pm
ID: 297
Attention and prediction in audition
Erich Schröger
Institute of Psychology, University of Leipzig, Germany
schroger@uni-leipzig.de
Attention is a hypothetical mechanism in the service of perception that facilitates the processing of relevant information and inhibits the processing of irrelevant information. Prediction is a hypothetical mechanism in the service of perception that considers prior information when interpreting the sensorial input. I will present classical theories of voluntary and involuntary attention from cognitive psychophysiology and contrast/relate them to recent findings from the increasingly popular field of research on (auditory) prediction. As attention is obviously related to prediction, the relation between these fields and concepts is of interest. First, I will show that the auditory system exploits regularities in the transitions between successive events in order to prepare for forthcoming sounds (transitions between auditory events as indicated by the MMN), which can readily explain a subset of phenomena belonging to involuntary attention. It will turn out that the auditory system can easily encode deterministic transitions but fails with stochastic transitions. Second, I will show that transitions between motor and auditory events can also generate auditory predictions (as indicated by N1/P2 suppression for self-generated sounds). It will turn out that the effects of attention and prediction sometimes go in opposite directions, but that they happen to occur in an overlapping time range. This poses the question of whether the repeatedly reported prediction effects are in fact effects of attention. In the third part, I will argue against this attentional account of (some of) the prediction effects. However, there remains the question of how attention and prediction are related. The answer depends on the definition of attention and prediction, or on the specific type of attention and prediction under consideration. If, for example, one confines attention to the selection of task-relevant information and prediction to the exploitation of the predictability of the sensory input, the question is similar to the traditional question of the relation between voluntary and involuntary attention (Cattell, 1886; James, 1890). Another approach could be to stay within one framework (e.g. attention) and consider (possible) contributions from the other framework (e.g. prediction). I will try to pinpoint a few possibilities for where and how attention can kick in from the view of the predictive coding framework.
ID: 277
Common neural ground for action-perception coupling and perception?
Sonja A. Kotz
School of Psychological Sciences, University of Manchester, United Kingdom
[email protected]
While the role of forward models in predicting sensory consequences of action is well anchored in a cortico-cerebellar interface, it is an open question whether this interface is action-specific or extends to perceptual consequences of sensory input (e.g. Knolle et al., 2012; 2013 a&b). Considering the functional relevance of a temporo-cerebellar-thalamo-cortical circuitry that aligns with well-known cerebellar-thalamo-cortical connectivity patterns, one may consider that cerebellar computations apply similarly to incoming information coding action, sensation, or even higher-level cognition (e.g. Ramnani, 2006): (i) they simulate cortical information processing, and (ii) cerebellar-thalamic output may provide a possible source for internally generated cortical activity that predicts the outcome of information processing in cortical target areas (Knolle et al., 2012; Schwartze & Kotz, 2013). I will discuss new empirical and patient evidence in support of these considerations and present an extended cortico-subcortical framework encompassing action-perception coupling and perception.
S2/2: Monday, 15/Sep/2014, 8:30am – 10:00am
ID: 207
Circuits underlying cortical decisions
Anthony Zador
Cold Spring Harbor Laboratory, United States of America
[email protected]
To study how animals use sensory information to make decisions, we have developed a rodent model of auditory discrimination. We previously found that a subset of neurons in auditory cortex, those that project to the auditory striatum, play a central role in propagating information beyond the cortex. In my talk I will discuss these results, as well as recent experiments indicating that plasticity at corticostriatal synapses plays an essential role in the acquisition of the association between the sound and the action in this task.
ID: 287
A synaptic and circuit basis for sensori-motor integration in the mouse cortex
David M. Schneider, Anders Nelson, Richard Mooney
Duke University, Durham, United States of America
[email protected]
Sensory regions of the brain integrate environmental cues with neural signals that are generated internally, including copies of motor-related signals important to imminent and ongoing movements. In mammals, signals propagating from the motor cortex to the auditory cortex are thought to play a critical role in normal hearing and behavior, yet the synaptic and circuit mechanisms by which these motor-related signals influence auditory cortical activity remain poorly understood. Here we made intracellular recordings in freely behaving mice to identify motor-related synaptic signals in auditory cortical excitatory neurons. Prior to and during a wide variety of movements, including locomotion, grooming, and vocalization, auditory cortical excitatory neurons exhibited decreased membrane potential variability, intrinsic excitability, and auditory responsiveness, indicative of postsynaptic inhibition. Consistent with this idea, multielectrode array recordings in the auditory cortex revealed that activity in fast-spiking, parvalbumin+ (PV+) interneurons increased prior to movement and before motor-related suppression of excitatory neuron activity. One potential source of this motor-related suppression is the secondary motor cortex (M2), which contains a subset of cells that make long-range synapses on auditory cortical PV+ interneurons, through which M2 may drive movement-related suppression of auditory cortical activity. Indeed, multielectrode recordings and 2-photon calcium imaging showed increased activity in M2 neurons projecting to auditory cortex that preceded motor-related changes in auditory cortical activity. Moreover, optogenetically activating M2 axons in the auditory cortex of resting mice was sufficient to elicit movement-like auditory cortical membrane potential dynamics and to suppress tone-evoked responses, whereas silencing M2 excitatory cells during locomotion rapidly restored rest-like membrane potential dynamics and tone-evoked responses in the auditory cortex. Finally, intersectional viral tracing experiments revealed multiple disynaptic pathways connecting M2 to the auditory cortex, suggesting that direct projections may act in concert with indirect circuits to modulate auditory cortical dynamics during movement. These findings provide a synaptic and circuit basis for the motor-related corollary discharge hypothesized to facilitate hearing and auditory-guided behaviors.
ID: 298
Audiomotor and visuomotor neural dynamics during isochronous tapping in the medial premotor cortex of the macaque
Hugo Merchant
Instituto de Neurobiologia, National University of Mexico, Mexico
[email protected]
We studied the response properties of neurons in the primate medial premotor cortex during isochronous tapping to an auditory or visual metronome. Monkeys performed a task of three sequential elements with five different target intervals. Neurons were classified as sensory or motor based on a time-warping transformation, which determined whether the cell activity was statistically better aligned to sensory or to motor events. Interestingly, we found a large proportion of cells classified as sensory or motor. Two distinctive clusters of sensory cells were observed: one population with short response-onset latencies to the preceding stimulus, and another whose activity probably predicted the occurrence of the next stimulus. These cells were called classic- and predictive-sensory neurons, respectively. Classic-sensory neurons showed a clear bias towards the visual modality, were mostly unimodal, and were more responsive to the first stimulus, with a decrease in activity for the following elements of the metronome sequence. In contrast, predictive-sensory neurons responded to both modalities and showed similar response profiles across serial-order elements. Motor cells were mostly bimodal and showed a consecutive activity onset across discrete neural ensembles, generating a rapid succession of neural events between the two taps defining a produced interval. The cyclical configuration of activation profiles engaged more motor cells as the serial-order elements progressed across the task, and the rate of cell recruitment over time decreased as a function of the target intervals. Our findings support the idea that motor cells were responsible for the rhythmic progression of taps in the task, gaining more importance as the trial advanced, while, simultaneously, the classic-sensory cells lost their functional impact.
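The sensory-versus-motor classification logic can be illustrated with a toy computation (illustrative only; the study itself applied a time-warping transformation to full spike trains): measure each trial's response time relative to the stimulus and relative to the tap, and ask in which reference frame the timing is more reproducible.

    # Toy sensory-vs-motor classification: a cell is labeled "motor" if its
    # response times are more reproducible relative to the taps than
    # relative to the stimuli, and "sensory" otherwise.
    import numpy as np

    rng = np.random.default_rng(1)
    stim_times = np.array([0.45, 0.55, 0.65, 0.75, 0.85])      # stimulus onsets (s)
    tap_times = stim_times + rng.uniform(0.05, 0.25, size=5)   # variable sensorimotor delays
    resp_times = tap_times - 0.02 + rng.normal(0, 0.005, 5)    # responses locked to taps

    sd_stim = np.std(resp_times - stim_times)  # timing dispersion, stimulus frame
    sd_tap = np.std(resp_times - tap_times)    # timing dispersion, movement frame
    print("motor" if sd_tap < sd_stim else "sensory")          # -> motor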
S2/3: Monday, 15/Sep/2014, 10:30am – 11:30am
ID: 293
Auditory-vocal interactions in primate cortex
Xiaoqin Wang
Laboratory of Auditory Neurophysiology, Dept. of Biomedical Engineering, Johns Hopkins University, Baltimore, United States of America
[email protected]
The primate auditory cortex has long been considered a structure primarily responsible for sensory processing, like other sensory cortices. However, an important difference between hearing and other sensory functions such as vision is that the auditory system must process self-produced sounds, including speech and vocalizations, during vocal communication in order to produce appropriate auditory feedback to guide vocal production and enable vocal learning. While much has been learnt about auditory-vocal interactions in the avian brain, relatively little is known about the primate brain. A number of studies in both animals and humans have now demonstrated that the vocal production system in the primate brain modulates neural activity in the auditory cortex during natural vocal behaviors. Our work has shown that a pre-vocal neural signal is sent to the auditory cortex whenever a marmoset self-vocalizes, allowing the brain to distinguish between internally and externally generated vocalizations and to compute vocal production errors. Our most recent experiments, using wireless neural recording techniques in freely moving marmosets, have begun to explore the interlocking aspects of the auditory perception and vocal production systems in the primate brain.
ID: 296
Feature representation in human speech cortex during perception and production
Edward Chang
Dept. of Neurological Surgery, University of California San Francisco, United States of America
[email protected]
Communication systems generally rely upon defined organizational schemes for signal generation and sensing. In humans, the production and perception of speech are processed by highly specialized neuroanatomical areas and processes. We have recently identified important phonetic-level features for vocal tract control during articulation in the speech motor cortex, and for speech sounds in higher-order non-primary auditory cortex. I will discuss important similarities and differences in these representational systems with respect to feature organization and dynamics. I will also present related work on auditory-vocal (sensorimotor) integration and transformation in speech.
Session 3: Auditory cortex: It's about time
Time is essential for the processing of auditory-related information. Neurons in the auditory cortex are sensitive to aspects of sounds on multiple time scales, from a few milliseconds up to several seconds, and in this way possibly encode the complexity of past auditory stimulation. Furthermore, this attribute may also play a crucial role in the prediction of upcoming auditory events. The speakers of this session will address, in presumably controversial fashion, issues derived from studies on humans and animals and related to the representation and relevance of time, such as stimulus-specific adaptation, novelty and/or change detection, auditory cognition and memory, prediction, streaming, and other temporal mechanisms of information encoding.
S3/1: Monday, 15/Sep/2014, 11:30am – 12:30pm
ID: 300
Temporal dynamics and spatial distribution of electrocorticographic high gamma activity during target detection tasks
Mitchell Steinschneider
Albert Einstein College of Medicine, New York, United States of America
[email protected]
Just as sounds evolve over time, so do the neural events involved in their processing and perception. Here, we describe the temporal dynamics and spatial distribution of electrocorticographic (ECoG) high gamma (70–150 Hz) activity occurring in auditory and auditory-related cortex while subjects performed sound detection tasks. Subjects were patients undergoing invasive monitoring for medically refractory epilepsy. Each had subdural grid electrodes implanted over the posterior lateral superior temporal gyrus (PLST). Additionally, responses were measured from depth electrodes implanted within Heschl's gyrus (HG) and from electrodes located over frontal cortical regions. In one paradigm, responses elicited by target sounds, by non-target sounds, and by sounds presented during passive listening were compared. Responses to target sounds recorded from PLST were increased compared to responses elicited by the same sounds when they were non-targets and when they were presented during passive listening. An increase in high gamma activity to targets occurred during later portions of the response. These increases were related to the task and not to acoustic stimulus characteristics per se. In contrast, earlier activity that did not vary across conditions did represent acoustic characteristics. Task-related effects observed on PLST were not noted in HG. In a second paradigm, subjects performed semantic target detection tasks. Once again, activity in posteromedial HG and early activity on PLST were robust to the word stimuli regardless of their context, and were minimally modulated by task. Later activity on PLST could be strongly modulated by semantic context, but not by behavioral performance, whereas activity within prefrontal cortex did co-vary with behavior. We propose that activity in posteromedial HG and early activity on PLST primarily reflect the representation of spectrotemporal sound attributes, whereas later activity on PLST reflects processes involved in sound categorization. Activity in prefrontal cortex appears more directly involved in sound object selection. Finally, we conclude with a brief description of preliminary data acquired while subjects performed the Mini-Mental Status Examination, and of how this continuous, dialogue-based paradigm may enhance our understanding of the temporal dynamics and spatial distribution of neural events involved in auditory-related cognition.
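For readers unfamiliar with how a high gamma time course is obtained from ECoG, a common generic recipe (not necessarily the authors' exact pipeline) is to band-pass the recorded signal at 70–150 Hz and take the magnitude of its analytic signal:

    # Generic high-gamma (70-150 Hz) envelope extraction for one ECoG channel.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000.0                                    # sampling rate (Hz)
    ecog = np.random.randn(int(2 * fs))            # stand-in for a recorded channel

    b, a = butter(4, [70 / (fs / 2), 150 / (fs / 2)], btype="band")
    hg = filtfilt(b, a, ecog)                      # zero-phase band-pass filter
    envelope = np.abs(hilbert(hg))                 # analytic-signal magnitude
    # 'envelope' can then be epoched around stimulus onsets and baseline-normalized.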
ID: 295
Acute and chronic effects of altered inhibition on auditory cortical receptive fields
Christoph Schreiner, Bryan Seybold, Elizabeth Phillips, Andrea Hasenstaub
Coleman Memorial Laboratory, UCSF Center for Integrative Neuroscience, University of California San Francisco, United States of America
[email protected]
Inhibitory processes play crucial and diverse roles in the shaping of receptive fields of cortical neurons and in defining local processing networks within and across cortical layers. We shall discuss the relationship between different inhibitory networks and the functional processing of spectral and temporal response features in mouse auditory cortex. Genetic and optogenetic manipulations of inhibitory strength reveal changes to tonal and complex-sound receptive fields. Chronic reduction of the number of somatostatin-positive neurons affected response sensitivity and timing. Acutely activating somatostatin-positive or parvalbumin-positive interneurons led to various changes in single-unit receptive fields across different degrees of firing rate suppression. Acute interneuron activation affected response duration and frequency tuning bandwidth in a way similar to that seen for chronic reduction of inhibitory neurons. Compensatory network re-tuning after chronic changes in the inhibitory/excitatory balance thus seems to mimic the acute dynamics of the network. Supported by grants from NIDCD to CES, BS, and AH.
S3/2: Monday, 15/Sep/2014, 2:00pm – 3:00pm
ID: 294
Young infants process the temporal structure of sound sequences
István Winkler1,2, Gábor P. Háden1, Miklós Török3, Henkjan Honing4,5
1Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary; 2Institute of Psychology, University of Szeged, Szeged, Hungary; 3Dept. of Obstetrics-Gynaecology and Perinatal Intensive Care Unit, Military Hospital, Budapest, Hungary; 4Institute for Logic, Language and Computation, University of Amsterdam, The Netherlands; 5Cognitive Science Center Amsterdam, The Netherlands
[email protected]
Communication by sounds requires coordination between parties and, therefore, inevitably involves the detection of temporal regularities. Temporal regularities can also convey prosodic information for both speech and music. Whereas young infants possess sophisticated discrimination abilities for temporal cues by the age of 6 months, much less is known about the functionality of these abilities at birth. The aim of the current study was to assess whether the newborn brain is sensitive to the onsets and offsets of sound trains as well as to changes in the sound presentation rate. To this end, event-related brain potentials (ERPs) were recorded from 30 healthy full-term newborn infants in response to short trains of tones separated by a silent interval. Tone trains started at a slower pace, changed to a faster pace at a random position midway, and finally ended abruptly after a random number of tones. ERPs elicited by train onset and offset and at presentation-rate change points demonstrate that the newborn brain is sensitive to these acoustic events. These results suggest that newborn infants process some of the dynamic cues necessary for speech and music perception, allowing them to enter dialogues well before they learn to speak.
ID: 301
The cortical dynamics of speech perception and language comprehension
David Poeppel1,2
1Dept. of Psychology, New York University, United States of America; 2Dept. of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
[email protected]
Speech contains temporal structure at the acoustic level that must be extracted to enable linguistic processing. To investigate the basis of this early analysis and identify which brain regions exhibit the appropriate sensitivity and specificity, we used ‘sound quilts’ – stimuli constructed by shuffling segments of foreign speech, approximately preserving speech properties at short timescales while profoundly disrupting them at longer scales. We manipulated the amount of natural speech structure by varying the quilt segment length. Using fMRI, we identified bilateral regions of the superior temporal sulcus whose responses to speech quilts increased with segment length. This effect did not occur for non-speech quilts, suggesting tuning to speech-specific temporal structure. When examined parametrically, the response to speech quilts plateaued at segment lengths of ~500 ms. Quilts made from time-compressed speech yielded a similar plateau despite the increase in stimulus structure per unit time. The imaging results point to a locus of speech analysis in human auditory cortex (STS) with an intrinsic temporal limit between syllables and words. The speech signal requires one form of analysis. Human language, famously, is hierarchically structured, and mental representations of such structure are necessary for successful language comprehension. In speech, however, hierarchical linguistic structures, such as words, phrases, and sentences, are not clearly defined physically and must therefore be internally constructed during comprehension. How multiple levels of abstract linguistic structure are built and concurrently represented remains unclear and controversial. We demonstrate using MEG that, during listening to connected speech, cortical activity at different time scales is entrained concurrently to track the time course of linguistic structures at different hierarchical levels. Critically, entrainment to hierarchical linguistic structures is dissociated from the neural encoding of acoustic cues as well as from processing the predictability of incoming words. The results demonstrate syntax-driven, internal construction of hierarchical linguistic structure via entrainment of hierarchical cortical dynamics.
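The quilting manipulation itself can be sketched in a few lines (a simplified version; the published procedure additionally selects segment orderings that minimize acoustic discontinuities at the seams):

    # Simplified 'sound quilt': reorder fixed-length segments of a waveform,
    # preserving short-timescale structure while disrupting longer timescales.
    import numpy as np

    def make_quilt(signal, fs, seg_ms, seed=0):
        seg_len = int(fs * seg_ms / 1000)              # segment length in samples
        n_segs = len(signal) // seg_len
        segs = signal[: n_segs * seg_len].reshape(n_segs, seg_len)
        order = np.random.default_rng(seed).permutation(n_segs)
        return segs[order].ravel()                     # shuffled segments, concatenated

    fs = 16000.0
    speech = np.random.randn(int(3 * fs))              # stand-in for a speech recording
    quilt_short = make_quilt(speech, fs, seg_ms=30)    # short segments: heavily disrupted
    quilt_long = make_quilt(speech, fs, seg_ms=480)    # long segments: near-natural structure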
S3/3: Tuesday, 16/Sep/2014, 8:30am – 10:00am
ID: 209
Temporal integration resulting from short-term plasticity in auditory cortex
Patrick J. C. May
Dept. of Biomedical Engineering and Computational Science, School of Science, Aalto University, Finland
[email protected]
It is essential for auditory processing that incoming sounds are represented in the context of preceding events. This requires a memory mechanism that integrates or binds information over time. The neural mechanism that would allow this type of binding has remained elusive. In the current presentation, I discuss the possibility that short-term plasticity (adaptation), emerging from synaptic depression, offers such a binding mechanism. Adaptation is thought to underlie a number of experimentally observed phenomena. Non-invasively, the most obvious is the suppression of event-related responses in MEG and EEG. Specifically, amplitude attenuation of the N1m can be observed with stimulus repetition, and this is followed by response recovery (the so-called mismatch response) to rare deviant events. However, the simplified stimulation with which these attenuation-recovery phenomena are revealed may belie the more useful aspects of synaptic plasticity. Computational simulations of auditory cortex show that plasticity is necessary for core and parabelt columns of the auditory cortex to respond selectively to speech sounds, monkey calls, and other spectrally and temporally complex stimuli. In this view, while short-term synaptic plasticity modulates the N1m response and allows for change detection, its real functional significance is the contribution it makes to the temporal integration required for the processing of complex natural sounds such as speech.
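As a concrete reference point for the proposed mechanism, a widely used rate-model form of synaptic depression (a generic textbook formulation, not necessarily the one used in these simulations) tracks a resource variable x in [0, 1] per synapse:

    \frac{dx}{dt} = \frac{1 - x}{\tau_{rec}} - U \, x \, r(t)

where r(t) is the presynaptic firing rate, U is the fraction of resources consumed per spike, and \tau_{rec} is the recovery time constant; the effective synaptic drive is proportional to U x r(t). Because x recovers only over hundreds of milliseconds to seconds, the momentary response carries a trace of recent stimulation, which is the proposed basis for temporal integration and binding.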
ID: 273
The coding of surprise in auditory cortex
Israel Nelken
Dept. of Neurobiology, The Silberman Institute of Life Sciences, Hebrew University of Jerusalem, Israel
[email protected]
The responses of neurons throughout the auditory system are strongly modulated by the temporal structure of the stimulating sequence. One way of studying this sensitivity is through stimulus-specific adaptation (SSA) – the specific reduction in the responses to a common stimulus that is not, or is only partially, generalized to other stimuli. SSA in the auditory system is widespread: it is present in all mammalian species in which it has been tested, and it is present at least from the inferior colliculus all the way to auditory cortex. The simplest model for SSA is based on the adaptation of narrowly tuned modules and may underlie SSA in many parts of the auditory system. I will argue that this mechanism for SSA does not encode the deviations from expectations that would be associated with the term ‘surprise’. While responses in the inferior colliculus are roughly consistent with this model, when using both tones and broadband stimuli, neurons in auditory cortex are not: their responses to deviant stimuli in oddball sequences are larger than expected from the model, and they show both selectivity and SSA to broadband stimuli that are incompatible with the model. I will show how specific subclasses of neurons in auditory cortex differ in their SSA and deviance sensitivity. Using optogenetic techniques, I will show that cortical inhibition controls the level of deviance sensitivity shown by principal neurons. These results suggest that surprise is an essential component of the responses of cortical neurons (but perhaps not of subcortical neurons).
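The "adaptation of narrowly tuned modules" baseline model against which the cortical responses are compared can be sketched as follows (illustrative parameters, not the author's implementation):

    # Baseline SSA model: frequency channels that adapt independently.
    # In an oddball sequence the channel driven by the common (standard)
    # tone adapts strongly, so the rare (deviant) tone evokes larger
    # responses -- without any surprise computation beyond channel fatigue.
    import numpy as np

    def simulate(seq, n_ch=2, tau=5.0, u=0.5):
        """seq: sequence of channel indices (0 = standard, 1 = deviant)."""
        x = np.ones(n_ch)                # per-channel resource (gain)
        responses = []
        for ch in seq:
            responses.append(x[ch])      # response ~ remaining resource
            x[ch] *= 1 - u               # channel-specific adaptation
            x += (1 - x) / tau           # slow recovery of all channels
        return np.array(responses)

    rng = np.random.default_rng(0)
    seq = (rng.random(200) < 0.1).astype(int)            # 10% deviants
    r = simulate(seq)
    print("standard:", r[seq == 0].mean(), "deviant:", r[seq == 1].mean())

In this model, deviant responses exceed standard responses purely through channel-specific fatigue; the abstract's point is that cortical responses to deviants are larger than such a model predicts.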
ID: 201
Predictive processing supports listening in complex environments
Alexandra Bendixen
Dept. of Psychology, Carl von Ossietzky University of Oldenburg, Germany
alexandra.bendixen@uni-oldenburg.de
The nature of acoustic information transmission poses significant challenges to the auditory system: temporally overlapping signals must be disentangled into the underlying sources, and any analysis must be performed in real time, with no possibility of revisiting missing information. Current theories posit that these complex operations are accomplished by means of predictive processing: the auditory system uses prior information to predict upcoming sounds, thereby reducing processing complexity as the predicted signals arrive. Here, the use of such predictive strategies during sound source segregation will be demonstrated on the basis of behavioral and EEG evidence. Results of several studies converge to suggest that predictability supports sound source segregation not only when it is present in the sound source of interest (the perceptual foreground) but also when it is present in other sound sources that the listener wishes to ignore (the perceptual background). It will then be discussed whether these laboratory findings can reasonably be extended to more ecologically valid environments containing predictability in a wider sense. To this end, we introduced different types of uncertainty into the auditory sensory input and assessed by means of EEG whether predictive processing is still operative under such conditions. Results show that predictive processing scales gradually with the amount of uncertainty in the sound sequences. Altogether, exploiting sound predictability in an automatic manner appears to be a feasible mechanism for deriving meaning from the complex acoustic mixture arriving at the ears.
Session 4: Auditory cortex: Clinical aspects
The Conference on Auditory Cortex has so far focused on basic research into auditory functions, with the aim of bridging the gap between animal and human research. This time we would like to take a first step towards bridging another gap, namely that between fundamental and clinical research. We have therefore devoted one conference session to three major clinical topics in auditory research, viz. auditory-based language impairment, restoration of hearing by cochlear implants, and tinnitus. The aim of this session is to provide an overview of the achievements in the respective research areas and to define challenging future questions. In addition, we intend to arouse more interest in clinical aspects of auditory functions, both to transfer knowledge from basic research into the clinic and to better understand normal auditory processing.
Tuesday, 16/Sep/2014, 10:30am – 12:30pm
ID: 305
Disorders of the auditory brain: How auditory neuroscience and clinical observations inform each other
Timothy D. Griffiths
Institute of Neuroscience, Newcastle University, United Kingdom
[email protected]
I will describe work to define human systems for the analysis of pitch, pitch sequences, timing, and rhythm. The work is based on fMRI BOLD measurement, indirect measurement of brain oscillatory activity using MEG, and direct measurement of oscillatory activity using electrocorticography and depth electrode recording. These normal systems become deranged in clinical disorders, including positive disorders such as tinnitus and musical hallucinations and central deficits such as auditory agnosia. The studies of the normal and the abnormal system are mutually informative. I will argue that a framework based on predictive coding allows insights into both.
ID: 108
The auditory cortex and tinnitus: A comparison between animal and human studies
Jos Jan Eggermont
Dept. Physiology and Pharmacology, University of Calgary, Canada
[email protected]
Is the auditory cortex important for the tinnitus percept? If one looks at the number of animal studies on tinnitus in brainstem and midbrain auditory nuclei, compared to those devoted to the thalamus and auditory cortex, one would hardly consider the cortex relevant. However, if one looks at human imaging and electrophysiology studies, the picture is completely opposite. Here, besides studies related to auditory cortex, a large number of papers now focuses on the brain's default networks and the changes therein in tinnitus patients. Tinnitus never occurs in isolation; it typically develops after hearing loss, and not infrequently for losses at frequencies not tested in clinical audiology. Furthermore, tinnitus is often accompanied by hyperacusis, which in its most frequent form, as increased loudness sensitivity, may reflect the central gain change in the auditory system that occurs after hearing loss. In its most serious form, a painful experience accompanying even moderate-level sounds, it may result from oversensitivity of the type II nerve fibers in the cochlea that sense the activity of the outer hair cells (here I speculate). I will first review the electrophysiological findings in thalamus and cortex pertaining to animal research into tinnitus. This will take the form of sorting out the changes in tonotopic maps, spontaneous firing rate, burst firing, and pair-wise neural cross-correlation induced by two tinnitus-inducing agents that are commonly used in animal experiments: systemic application of sodium salicylate, and noise exposure at levels ranging from those that do not cause a hearing loss, through those that cause only a temporary threshold shift, to those that cause a permanent hearing loss. Following this, we will review some neuro-imaging and electrophysiological findings in auditory cortex in humans with tinnitus. The correlative triad mentioned above confounds studies on the mechanism of tinnitus, as neither hearing loss nor hyperacusis is a necessary condition for tinnitus to occur.
ID: 286
Cortical plasticity and re-organization in hearing loss Anu Sharma University of Colorado at Boulder, United States of America [email protected] A basic tenet of neuroplasticity is that the brain will re-organize following sensory deprivation. Sensory deprivation appears to tax the brain by changing its normal resource allocation. Hearing-impaired adults and children who receive intervention with hearing aids and cochlear implants provide a platform to examine the trajectories and characteristics of deprivation-induced and experience-dependent plasticity in the central auditory system. We review the evidence for sensitive periods for the development of the central auditory pathways. A sensitive period in early childhood appears to coincide with the period of maximal synaptogenesis in auditory cortex. Implantation within this sensitive period provides the auditory experience needed for refinement of essential synaptic pathways. Compensation for the deleterious effects of hearing loss may also include recruitment of alternative or additional brain networks to perform auditory tasks. Cross-modal recruitment is an aspect of plasticity that is apparent in hearing loss. In congenital deafness, the somatosensory and visual modalities appear to recruit higher-order auditory areas, impacting auditory outcomes with a cochlear implant. In recent studies, we find that cross-modal re-organization and changes in neural resource allocation are also evident in adult-onset, post-lingual hearing loss. Our high-density EEG experiments suggest that age-related hearing loss results in significant changes in neural resource allocation, reflecting patterns of increased listening effort and decreased cognitive reserve which may be associated with dementia-related cognitive decline. Overall, it appears that the functional activation of cognitive circuitry resulting from cortical reorganization in deafness is predictive of outcomes after intervention with amplification or electrical stimulation. A better understanding of cortical development and reorganization in auditory deprivation has important clinical implications for optimal intervention and habilitation of these patients. Research supported by the U.S. National Institutes of Health.
ID: 275
Unraveling the biology of auditory learning in humans Nina Kraus Auditory Neuroscience Laboratory, Dept. of Communication Sciences, Northwestern University, United States of America [email protected] Our neural probe, although nominally of midbrain origin, serves as a snapshot of the larger auditory system involving sensory, cognitive, and reward centers. With it, assessing large numbers of subjects across the age span, in cross-sectional and longitudinal designs, and with training, we have uncovered some “neural signatures.” Because the response closely adheres to the evoking stimulus, and because the stimulus is acoustically complex, this biological activity can be thoroughly characterized in terms of timing, harmonics and phase. We can also examine its inter-trial consistency and phase locking (sketched below). From this broad canvas, we have been able to discern neural signatures of auditory learning that have promising clinical applications. We have found signatures of enhancement, such as seen in musicians and bilinguals; we have found signatures of deprivation, such as seen in poverty and in clinical conditions such as language disorders and difficulty hearing in noise. The effect on the response is not global; the response characteristics that are affected differ widely: sliders on a studio mixer rather than a volume knob. Given the known mutability of the auditory system, and starting from the grounding of the signature responses, we are able to investigate learning-related neuroplasticity in individuals. Learning approaches include amplification, music instruction and software-based training. With training, we have observed a change in response properties; older individuals regain a “young” response signature; poor readers’ responses look more like those of good readers. The hope is that with such a measure, uniformity can be achieved—across populations, across species, across labs, and in the clinic—in our understanding of auditory learning and brain health. Supported by NIH; NSF; Knowles Hearing Center; see www.brainvolts.northwestern.edu.
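The two trial-consistency measures named above are standard quantities in this literature. The following is a minimal sketch of how they are commonly computed, not the laboratory's own code; the array layout, sampling rate, and toy data are assumptions for illustration.

```python
# Sketch of inter-trial consistency and phase locking, assuming a NumPy array
# `trials` of shape (n_trials, n_samples) sampled at `fs` Hz.
import numpy as np

def intertrial_consistency(trials):
    """Mean pairwise correlation between single-trial waveforms."""
    n = trials.shape[0]
    corr = np.corrcoef(trials)        # n x n matrix of trial-by-trial correlations
    iu = np.triu_indices(n, k=1)      # upper triangle = unique trial pairs
    return corr[iu].mean()

def phase_locking_value(trials, fs, freq):
    """Consistency of spectral phase across trials at one frequency (1 = perfect)."""
    spectra = np.fft.rfft(trials, axis=1)
    freqs = np.fft.rfftfreq(trials.shape[1], d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - freq))
    phases = np.angle(spectra[:, bin_idx])
    return np.abs(np.mean(np.exp(1j * phases)))  # length of the mean phase vector

# Toy data: a 100-Hz response buried in noise, 50 trials at 1 kHz sampling.
rng = np.random.default_rng(0)
t = np.arange(0.0, 0.5, 1.0 / 1000.0)
trials = np.sin(2 * np.pi * 100 * t) + rng.normal(0.0, 1.0, (50, t.size))
print(intertrial_consistency(trials), phase_locking_value(trials, 1000.0, 100.0))
```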
Session 5: Multisensory interplay in auditory cortex The problems of how the different senses merge in the brain and of how the brain associates this information with behavioral demands have kept neuroscientists busy for several decades. Initially, research focused on "classical" multisensory brain structures like the superior colliculus and the parietal cortex; recent research also includes low-level ("unisensory") cortical areas. In this session, authors from various fields of animal and human research will present their scientific results and views on the role of the auditory cortex in multisensory interplay. They will highlight, for example, specific functions of the different auditory fields in multisensory integration processes, comparisons to other sensory, "classical" multisensory and higher-order association areas of the brain, anatomical pathways and mechanisms of multisensory integration at various cellular and areal levels, the role of multisensory information in learning, memory, and behavior, and cross-modal reorganization processes following sensory impairments. S5/1: Tuesday, 16/Sep/2014, 2:00pm - 3:30pm ID: 120
Multisensory interplay in the primary auditory cortex (A1): Anatomical pathways, functional implications, and comparisons to S1 and V1 Eike Budinger Dept. Systems Physiology of Learning, Leibniz Institute for Neurobiology Magdeburg, Germany budinger@lin-magdeburg.de Converging evidence from recent anatomical and electrophysiological studies in animals, as well as non-invasive imaging studies in humans, has shown that multisensory integration recruits not only higher-level association cortex, but also low-level and even primary sensory cortices. Here, I will review possible anatomical pathways and functional implications of multisensory interplay in the primary auditory cortex (A1) of different mammals, including humans, and I will compare the results on A1 with those on the primary somatosensory (S1) and visual cortex (V1).
Generally, there are at least three possible neuronal pathways mediating multisensory interactions in A1, which are not mutually exclusive: (i) Auditory and non-auditory information could be integrated at subcortical levels and then be conveyed “bottom-up” to A1. This integration could occur within the “classical” auditory pathway or within multisensory subcortical structures, which project to A1. (ii) Auditory and non-auditory information could be integrated in A1. The non-auditory information could be directly relayed to A1 either from subcortical or from cortical brain regions, which serve essentially unisensory functions. (iii) A1 could receive feedback, “top-down”, inputs from multisensory “higher-order” centers of the cortex. There is strong evidence for all three possibilities in various mammalian species. Most notably, there are direct projections from thalamic nuclei of other sensory modalities to A1 as well as direct interconnections between A1, S1, and V1 (possibility ii). These connections could, for example, mediate short-latency neuronal integration processes, which in turn might be suitable for decreasing reaction times and for improving the detection of weak multimodal stimuli. S1 and V1 also receive direct inputs from thalamic nuclei and cortical regions of other sensory modalities. However, the ratios of crossmodal thalamocortical inputs and the nature of the corticocortical connections (i.e., feedback, feedforward, lateral) of S1, V1, and A1 differ, suggesting different roles or mechanisms of multisensory interplay in the three primary sensory cortices. ID: 270
Somatosensory processing in ferret core auditory cortex M. Alex Meredith, Brian L. Allman Virginia Commonwealth University School of Medicine, United States of America [email protected] The recent demonstration that the primary sensory cortices exhibit multisensory responses represents a paradigm shift for neuroscience. This finding has been based largely upon the visual and auditory sensory modalities and their representations, almost to the exclusion of somatosensation. Therefore, the present investigation examined the core auditory cortices (the anterior field, AAF, and the primary auditory field, A1) for tactile (as well as visual and auditory) responsivity. Multiple single-unit recordings from anesthetized ferret cortex yielded 311 histologically verified neurons, of which nearly all (n=310) were activated by acoustic stimulation. As expected, a small proportion (11%) was also influenced by visual cues, but a larger number (19%) was affected by tactile stimulation. Tactile effects occurred as overt, active spiking responses in bimodal auditory-tactile neurons, or as suppression of concurrent auditory activation in subthreshold multisensory neurons. Furthermore, tactile effects were observed even among auditory neurons defined as unisensory: at the population level, unisensory auditory neurons’ response to acoustic stimulation was significantly suppressed by co-presentation of a tactile cue. Further analysis of the entire neuron sample revealed that tactile inputs consistently exhibited their most robust influence not as independent somatosensory effects, but as a modulator of the responses to auditory stimulation. Anatomically, the observed tactile effects are supported by the fact that the core auditory cortices receive a major input (34% of all external projections) from the rostral suprasylvian sulcal somatosensory representation. Furthermore, both early and late deafness or partial hearing loss in ferrets results in the crossmodal reorganization of the ferret core auditory cortices as essentially somatosensory representations. Collectively, these results demonstrate that multisensory effects in auditory cortex are not exclusively visual and that somatosensation can play a significant role in acoustic processing in the auditory cortex of hearing individuals as well as in the brain’s plasticity following hearing loss. Supported by NIH Grant NS39460 and VCU PRIP (MAM) and NSERC Discovery grant (BLA).
ID: 292
Crossmodal plasticity of the auditory cortex in deafness: From anatomy to brain imaging in cochlear-implanted deaf patients Pascal Barone Centre de Recherche Cerveau et Cognition, Faculté de Médecine de Rangueil, Toulouse, France [email protected] There is now a large body of psychophysical and neuroimaging studies in both animal and human subjects showing that auditory deprivation from early developmental stages leads to functional compensations that favor the spared modalities. Perceptual crossmodal compensation in deafness is accompanied by functional reorganizations such as an invasion of the deprived auditory cortical areas by visual functions. The degrees of functional reorganization and cross-modal compensation are highly dependent on the age at which the sensory deprivation occurs, as a result of the decreasing capacity for adaptive plasticity of the brain from birth to adulthood. Our work is aimed at understanding the neuronal mechanisms of cortical plasticity in adulthood and during development that support crossmodal reorganization after deafness. Further, as hearing can be restored through a cochlear implant (CI), the functional interactions between the visual and auditory modalities will be assessed in light of the capacity of the auditory system to regain its original function. First, in congenitally deaf cats, anatomical tracing performed at the level of the auditory areas A1 and DZ revealed that most of the areal specificity of the connectivity pattern of A1 and DZ is preserved (Barone et al, PLoS One 2013). However, sparse non-auditory inputs from visual thalamic (LP) and cortical regions (areas 20/21) were observed toward A1 and DZ, respectively. This lack of a massive reorganization of the auditory connectivity suggests that most of the crossmodal compensation after deafness is supported by the normal network of heteromodal connectivity implicated in multisensory interactions (Cappe et al, Hear. Res 2009). Such results are supported by our brain imaging study performed in postlingual adult deaf CI patients. PET scan brain imaging studies revealed a crossmodal visual activation in the auditory temporal areas during speechreading. These abnormal activity levels diminish with post-implantation time and tend towards the levels observed in normal hearing (Rouger et al, Human Brain Map. 2012). Further, as the strategy adopted by CI users for speech comprehension is linked to brain plasticity, we searched for brain regions whose activity level at the time of implantation is correlated with the level of auditory recovery. Correlations were observed in a set of areas including the auditory cortex, with the highest correlation in the occipital cortex involved in visual processing. Indeed, the initial high activity of the visual cortex provides the best potential to favor auditory recuperation (Strelnikov et al, Brain 2013). In a more general perspective, the influence of the visual cortex on the efficiency of purely auditory speech perception suggests the existence of neural facilitation mechanisms that link both sensory modalities. Such visuo-auditory synergy may be a reflection of the multisensory nature of speech processing and supports the important role of visual input for speech comprehension in cochlear-implanted postlingual deaf patients. S5/2: Wednesday, 17/Sep/2014, 8:30am - 10:00am ID: 288
Cholecystokinin from the entorhinal cortex enables encoding of visuoauditory memory in the auditory cortex Jufang He Dept. of Biology and Chemistry, City University of Hong Kong, Hong Kong SAR, China [email protected] Patients with damage to the medial temporal lobe show deficits in forming new declarative memories but can still recall older memories, suggesting that the medial temporal lobe is necessary for encoding memories in the neocortex. Here, we found that cortical projection neurons in the perirhinal and entorhinal cortices were mostly immunopositive for cholecystokinin (CCK). Local infusion of CCK in the auditory cortex of anesthetized rats induced plastic changes that enabled cortical neurons to potentiate their responses or to start responding to an auditory
stimulus that was paired with a tone that robustly triggered action potentials. CCK infusion also enabled auditory neurons to start responding to a light stimulus that was paired with a noise burst. In vivo intracellular recordings in the auditory cortex showed that synaptic strength was potentiated after two pairings of presynaptic and postsynaptic activity in the presence of CCK. Infusion of a CCKB antagonist in the auditory cortex prevented the formation of a visuo-auditory association in awake rats. Finally, activation of the entorhinal cortex potentiated neuronal responses in the auditory cortex, which was suppressed by infusion of a CCKB antagonist. Together, these findings suggest that the medial temporal lobe influences neocortical plasticity via CCK-positive cortical projection neurons in the entorhinal cortex. In the second part of the experiment, the bilaterally electrode-implanted rat was trained to retrieve a water reward from either the leftmost or the rightmost hole, depending on which hemisphere of the auditory cortex received stimulation after it initiated the trial. After the stimulation site of one hemisphere was infused with CCK, a previously irrelevant light stimulus was paired with electrical stimulation of the infused hemisphere for multiple sessions in the anesthetized rat. The auditory cortex neurons responded to the light stimulus under both anesthetized and behavioral conditions. All rats approached the “engineered” hole after they triggered the light stimulus instead of electrical stimulation of the auditory cortex one week after the first conditioning. The behavioral experiment revealed that the artificially implanted memory was transferred to behavioral action, providing a scientific foundation for “memory implantation”. Supported by the Hong Kong Research Grants Council (CRF09/9, 561410, 561111, 561212, T13-607/12R).
ID: 271
Visually-induced modulations in human auditory cortex: Evidence from functional imaging studies on temporal processing and temporal recalibration Toemme Noesselt Dept. of Biological Psychology, Institute of Psychology II, Otto-von-Guericke University Magdeburg, Germany [email protected] Previous research on audiovisual temporal perception has suggested that audition dominates vision in the temporal domain. For instance, auditory stimuli can induce illusory visual stimuli or change the perceived visual flicker rate. However, perceived audiovisual synchrony does not always rely on perfect physical synchrony of visual and auditory input but can be flexibly adjusted to the current situation. While this adaptation effect is well documented on the behavioral level, its neural underpinnings are still unknown. In a series of fMRI experiments in humans, we first identified the visual and auditory regions involved in synchronous and asynchronous audiovisual stream processing/perception. We found that the neural representations of the driven (visual) and driving (auditory) modality were modulated by audiovisual synchrony. In particular, fMRI signals in low-level auditory cortex were modulated by audiovisual synchrony. In addition, regions in superior parietal, superior temporal and frontal cortex differentially responded to audiovisual asynchrony. In a second fMRI experiment we tested for effects of adaptation to audiovisual asynchrony within the regions modulated in the first experiment. Behaviorally, the point of subjective simultaneity (PSS) for test stimuli, estimated as sketched below, was shifted toward the side of audiovisual asynchrony (visual leading/auditory leading) of the preceding adaptation period. fMRI signals in parietal and frontal regions, which coded asynchronous perceptions in the first experiment, declined during the adaptation phase for asynchronous stimulation. Implications of these results will be discussed.
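The PSS itself is a fitted quantity. Below is a minimal sketch of one common estimation approach, assuming a Gaussian-shaped synchrony-judgment curve; the data points are hypothetical, not values from the study.

```python
# Fit a Gaussian to the proportion of "synchronous" judgments as a function of
# audiovisual onset asynchrony; the fitted centre is the PSS.
import numpy as np
from scipy.optimize import curve_fit

def synchrony_curve(soa_ms, amplitude, pss_ms, width_ms):
    return amplitude * np.exp(-((soa_ms - pss_ms) ** 2) / (2 * width_ms ** 2))

# Hypothetical data: negative SOA = auditory leading, positive = visual leading.
soa = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
p_synch = np.array([0.10, 0.35, 0.80, 0.95, 0.90, 0.50, 0.15])

(amp, pss, width), _ = curve_fit(synchrony_curve, soa, p_synch, p0=[1.0, 0.0, 100.0])
print(f"PSS = {pss:.1f} ms")  # a positive PSS here means a shift toward visual-leading
```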
ID: 283
The behavioural relevance of auditory contributions to multisensory interactions spans from detection to single-trial memories Micah M. Murray1,2,3 1The Laboratory for Investigative Neurophysiology, Dept. of Clinical Neurosciences and Dept. of Radiology, The University Hospital Center and University of Lausanne, Switzerland; 2The Electroencephalography Brain Mapping Core, Centre for Biomedical Imaging (CIBM), Lausanne, Switzerland; 3The Center for Neuroscience Research, Dept. of Clinical Neurosciences, The University Hospital Center and University of Lausanne, Switzerland [email protected] This talk will provide a synthesis of our efforts to identify the spatio-temporal brain dynamics and behavioural relevance of auditory-visual multisensory interactions in humans, and the consequences these have had for our understanding of the functional selectivity of low-level auditory and visual cortices. Across studies we have used combinations of psychophysics, ERPs, fMRI and TMS. Using these techniques in multisensory research often prompted (if not required) improvements in signal analysis methods, which will likewise be briefly summarized during the course of this talk and which can be used more generally in other domains of research. Several general conclusions about auditory-visual interactions in humans are supported relatively solidly by the cumulative results of ourselves and others, in part because they derive from multiple brain imaging/mapping methods. First, (near-)primary cortices are loci of multisensory convergence and interactions. Second, these effects occur at early latencies (i.e., <100 ms post-stimulus onset). Third, these effects directly impact behaviour and perception. Finally, multisensory interactions affect not only current stimulus processing, but also later unisensory recognition. Current unisensory (auditory or visual) object recognition and brain activity are incidentally affected by prior single-trial multisensory experiences, the efficacy of which is predictable from an individual’s spatio-temporal dynamics of multisensory interactions. Together, these data underscore how multisensory research is changing long-held models of functional brain organization and perception. Financial support has been provided by the Swiss National Science Foundation (grants 320030-149982 to MMM and P2LAP3-151771 to AT as well as the National Centre of Competence in Research project ‘‘SYNAPSY, The Synaptic Bases of Mental Disease’’ [project 51AU40-125759]) and by the Swiss Brain League (2014 Research Prize to MMM).
Session 6: Learning in auditory cortex Quite generally, neurobiological research on learning has to bridge a categorical gap, because learning is a phenomenon defined on the behavioral and psychological level. In auditory cortex research, the identification of potential neural mechanisms underlying specific learning-induced alterations of behavior or psychophysical performance has been particularly successful. In this session, speakers will report and discuss recent findings on physiological mechanisms underlying learning and learning-related phenomena on multiple levels, ranging from cellular physiology, via neural network dynamics and imaging results, to behavior and psychophysics. The scope of the session comprises both fundamental research on the neuronal mechanisms subserving learning-induced plasticity and their potential clinical implications. A particular emphasis lies on the conceptual exploitation of the complementary results derived from human and animal research in this field. S6/1: Wednesday, 17/Sep/2014, 10:30am - 12:00pm ID: 247
Transforming auditory associative experiences into auditory memory with plasticity in A1 Kasia M. Bieszczad1,2 1Dept. of Neurobiology and Behavior, University of California Irvine, United States of America; 2College for Life Sciences, Wissenschaftskolleg zu Berlin - Institute for Advanced Study [email protected] A major function of the cerebral cortex is to store memory. Associative learning produces a frank reorganization of sensory cortex, which has been intensely studied in the context of learning auditory associations between sounds and their behavioral significance, e.g., by links to rewarding outcomes. For example, associative representational plasticity in primary auditory cortex (A1) enhances the cortical organization for specific acoustic cues (frequency, sound level, FM sweep direction, etc.) that predict reward. How do specific cues come to be enhanced in A1? This is a key question, because an apparent function of signal-specific reorganization in A1 is to enhance the subsequent strength of cue-specific auditory memory (e.g., Bieszczad & Weinberger 2010 PNAS; 2012 EJN). Therefore, understanding the circumstances under which plasticity in A1 occurs is essential to understand the basis of robust auditory memory formation. Indeed, learning does not always produce plasticity, nor does it always occur with gross effects on cortical reorganization (e.g., map expansions for sound-frequency cues can be large, small, or undetectable). Importantly, the lack of induction cannot be explained by differences in training paradigms, as identical training protocols can yield differential results on cortical reorganization (neural effects) and auditory memory (long-term behavioral effects) without conspicuous effects on acquisition (short-term behavioral effects) per se. In this talk I will present data that suggest mechanisms of induction of A1 plasticity at two levels – behavioral and epigenetic – that potentially explain the conditions necessary to form auditory memory that is strong and cue-specific. The research presented will aim to approach a causal account of mechanisms of auditory memory across behavioral and neural levels, including a novel pharmacological approach in the domain of molecular epigenetics. Findings in this domain are highly relevant for the development of small-molecule therapeutic and combined pharmaco-behavioral clinical strategies for effective treatment of hearing and auditory learning disorders. ID: 272
Active suppression and encoding of vocal signals in single cortical neurons during auditory selective attention Timothy Gentner, Emily Caporello-Bluvas Dept. of Psychology, University of California San Diego, United States of America [email protected] Tracking acoustic communication signals in natural, noisy environments requires selective attention. Auditory selective attention influences broad-scale measures of neural activity, such as EEG and LFP, but its effects on the encoding of acoustic communication signals in single neurons and well-defined populations of single neurons are unknown. We recorded extracellular action potentials from single neurons in the secondary auditory forebrain region CLM of awake, behaving songbirds while controlling their attention to different conspecific songs overlapping in time. Most CLM neurons show time-varying responses that are significantly modulated by selective attention. As a population, responses resembled the target and distractor signals more closely when the animal responded correctly than when it responded incorrectly. To understand how the functional goal of selective attention – improving perception of specific sensory signals amongst competing targets – might be implemented, we sorted neurons on the basis of spike width (as sketched below). This revealed a population of wide-spiking neurons whose responses carry information about both the attended and unattended signals, and, surprisingly, a population of narrow-spiking (putative inhibitory inter-) neurons that preferentially encode a signal only when it is unattended. These results support a mechanistic model of selective auditory attention that involves the enhanced encoding of target streams and the active suppression of unattended streams, and suggest that the computations underlying these encoding schemes are tied to physiologically distinct neuronal subtypes.
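Spike-width sorting of this kind is typically done on the trough-to-peak interval of the mean spike waveform. A minimal sketch follows; the 0.4 ms boundary is an illustrative assumption, not a value reported in the abstract.

```python
# Classify units as narrow- or wide-spiking from the mean waveform.
import numpy as np

def trough_to_peak_ms(mean_waveform, fs_hz):
    trough = int(np.argmin(mean_waveform))                  # spike trough sample
    peak = trough + int(np.argmax(mean_waveform[trough:]))  # first peak after trough
    return (peak - trough) * 1000.0 / fs_hz

def classify_unit(mean_waveform, fs_hz, boundary_ms=0.4):   # boundary is an assumption
    width = trough_to_peak_ms(mean_waveform, fs_hz)
    return "narrow-spiking" if width < boundary_ms else "wide-spiking"

# Toy waveform sampled at 30 kHz: trough at sample 30, peak 12 samples (~0.4 ms) later.
wf = np.zeros(90)
wf[30] = -1.0
wf[42] = 0.6
print(classify_unit(wf, fs_hz=30000.0))
```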
ID: 302
Dopamine-related plasticity in animal and human auditory cortex Frank W. Ohl1,2, André Brechmann3
1Dept. Systems Physiology of Learning, Leibniz Institute for Neurobiology Magdeburg, Germany; 2Institute of Biology, Otto-von-Guericke University Magdeburg, Germany; 3Special Lab Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology Magdeburg, Germany frank.ohl@lin-magdeburg.de Several animal and human studies indicate that the neurotransmitter dopamine is implicated in both auditory learning and learning-related plasticity of auditory cortex function. However, in neither domain is its precise role fully understood. Here we review recent findings from animal and human studies in our laboratories on potential roles of dopamine. Animal studies demonstrate roles of dopamine in both reward- and punishment-motivated auditory learning. The general validity of widespread conceptualizations of dopamine function based on results obtained with reward-learning paradigms can be tested in avoidance paradigms, where learning depends on the negative prediction error for an aversive stimulus, i.e., the absence of a predicted punishment as a consequence of successful avoidance behavior. On the neuronal circuit level, effects of dopamine were studied using a combination of behavioral assays, current-source density analyses, pharmacological manipulation and electrical stimulation. Results indicate that dopamine unfolds its action during early phases of learning by transiently increasing the gain of a positive cortico-thalamo-cortical feedback loop that enhances states of high persistent activity in auditory cortex evoked by behaviorally relevant stimuli (a toy illustration follows below). Human fMRI studies on auditory category learning, in which subjects had to learn by trial and error that a certain feature or combination of features is reinforced, were performed to identify the network of learning-related brain areas. The representation of target and non-target sounds in auditory cortex changed differentially over the course of a single fMRI session. However, a straightforward interpretation of such activation as dopamine-related cannot be made, since similar activations result from non-reinforcing feedback. Experiments investigating the effect of feedback on auditory cortical activity in subjects under L-DOPA or placebo indicated that auditory cortex is activated at the time point of reward outcome, but that the responses are not dependent on the reward itself but on whether the outcome confirmed the subjects' expectations.
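The gain account can be made concrete with a toy rate model. The following sketch is only an illustration of the idea, not the circuit model used in these studies: with sufficiently high feedback gain, the loop supports activity that persists after stimulus offset.

```python
# A single excitatory feedback loop whose gain g stands in for the proposed
# transient dopaminergic gain increase. All parameters are invented.
import numpy as np

def run_loop(gain, n_steps=200, stim_steps=20, dt=1.0, tau=10.0):
    r = 0.0
    trace = []
    for t in range(n_steps):
        stim = 1.0 if t < stim_steps else 0.0   # brief stimulus, then silence
        drive = np.tanh(stim + gain * r)        # loop feedback plus input
        r += dt / tau * (-r + drive)            # leaky rate dynamics
        trace.append(r)
    return np.array(trace)

low, high = run_loop(gain=0.5), run_loop(gain=1.2)
# With low gain, activity decays to zero after offset; with high gain, the loop
# settles at a nonzero fixed point, i.e. a state of persistent activity.
print("activity long after stimulus offset:", low[-1], "vs", high[-1])
```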
S6/2: Wednesday, 17/Sep/2014, 2:00pm - 4:00pm ID: 304
Learning-induced plasticity in the processing of auditory spatial cues Andrew J. King, Peter Keating, Johannes C. Dahmen, Fernando R. Nodal, Victoria M. Bajo Dept. of Physiology, Anatomy and Genetics, University of Oxford, United Kingdom [email protected] The behavioural relevance of sounds can have a profound effect on the way they are represented in the brain. In particular, experience-dependent plasticity in these representations has been shown to accompany auditory learning. Most research in this field has focused on the effects of associative learning on the response properties of neurons in the primary auditory cortex of animals with intact hearing. An ability to adapt through experience to altered auditory inputs arising, for example, from partial hearing loss is also critical to survival. Our studies in ferrets have demonstrated that the auditory system is able to accommodate changes in the relationship between spatial cue values and sound source direction resulting from occlusion of one ear, and therefore to maintain accurate sound localization in spite of the highly abnormal inputs available. Adaptation to altered auditory spatial cues is possible both during development and in later life, and, at least in adulthood, this process is disrupted by inactivation of primary or non-primary regions of the auditory cortex and by the loss of cholinergic neurons in the basal forebrain that target the cortex. Our behavioural and physiological studies suggest that the ability to compensate for unilateral hearing loss relies on multiple mechanisms. These include adaptive shifts in neuronal sensitivity to the altered binaural cues, in particular interaural level differences. However, the primary basis for auditory spatial learning involves a reweighting of different cues, with localization judgments becoming more dependent on the monaural spatial cues provided to the intact ear and less on the binaural cues that are altered by the hearing loss (a toy illustration follows below). Importantly, this plasticity is context-specific, since changes in cue weighting can be reversed as soon as normal hearing is restored, ensuring that the auditory system uses the optimal strategy for localizing sound under different hearing conditions. How these complementary mechanisms of plasticity interact remains to be identified, but the finding that the brain can utilize different strategies to adapt to altered inputs highlights the remarkable flexibility of auditory spatial processing.
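The cue-reweighting idea can be illustrated with a toy weighted-sum localizer. All numbers below are invented for illustration and do not come from the ferret data.

```python
# Final azimuth estimate as a weighted sum of a binaural and a monaural
# spectral-cue estimate; occluding one ear corrupts the binaural estimate,
# and reweighting toward the intact ear's monaural cue restores accuracy.
def localize(binaural_est_deg, monaural_est_deg, w_binaural):
    w_monaural = 1.0 - w_binaural
    return w_binaural * binaural_est_deg + w_monaural * monaural_est_deg

true_azimuth = 30.0
binaural_est = -10.0   # binaural cues corrupted by the earplug
monaural_est = 28.0    # spectral cues of the intact ear still roughly valid

print(localize(binaural_est, monaural_est, w_binaural=0.8))  # before adaptation: far off
print(localize(binaural_est, monaural_est, w_binaural=0.1))  # after reweighting: near 30
```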
ID: 306
Perceptual learning in the developing auditory cortex Shaowen Bao Helen Wills Neuroscience Institute, University of California Berkeley, United States of America [email protected] Learning has been defined as an enduring change in the mechanisms of behavior that results from experience with environmental events. Perceptual learning is the specific and relatively permanent modification of perception and behavior following sensory experience. Exposure to specific acoustic experience in the critical period of early sensory development alters cortical sound representations and perceptual behavior, and is therefore a form of perceptual learning. However, the type of perceptual learning resulting from early sensory exposure is unique in that it does not involve an explicit training process – there is no instruction on the desired response or feedback on the actual response. In the absence of instruction or feedback, how does the auditory system know what and how to learn in order to adapt to its specific acoustic environment? A natural acoustic environment typically comprises environmental sounds (e.g., wind blowing, water flowing ...), animal vocalizations (pup calls and adult encounter calls) and non-vocalization sounds (e.g., from footsteps, wing flaps ...). Among those sounds, animal vocalizations are arguably the most structured and biologically relevant. They are complex and diverse, but also have some common characteristics. For example, most mammalian vocalization calls are repeated at a temporal rate in the range of 5 to 10 Hz. Within a vocalization bout, the calls are repeated with variations. These statistical structures provide a basis for categorization of animal vocalizations: 1) sounds that are temporally spaced in a bout at 5-10 Hz likely belong to the same category; 2) the differences among individual sounds within a bout likely represent category variability. Electrophysiological and behavioral studies indicate that 1) auditory cortex over-represents sounds repeated at 6 Hz, but not at 2 or 15 Hz, and 2) sequential structure between sounds can shape representational and perceptual boundaries. These findings indicate that the developing auditory cortex is shaped by the statistical structures of the acoustic environment, and that developmental cortical plasticity is a mechanism underlying perceptual learning. The research was supported by the National Institutes of Health. ID: 303
Understanding the network for top-down control of auditory representation Stephen V. David Oregon Health and Science University, United States of America [email protected] The mammalian auditory system involves a network of hierarchically organized areas that shape representations according to the current demands of behavior. Previous work has shown a diversity of behavioral effects in auditory cortex and suggests that these effects increase in more central areas of the processing hierarchy. In order to build a systematic understanding of how behavioral state influences representations across this network, we are taking a two-pronged approach. First, we study how distinct aspects of behavioral state influence representations in a single brain area. Second, we compare the effects of the same behavioral manipulation at different levels of the auditory hierarchy. To contrast the influence of different aspects of behavioral state on auditory processing, we trained ferrets on a discrimination task in which selective attention and overall effort are controlled separately but stimuli are identical between behavior conditions. We recorded single-unit activity in primary auditory cortex (A1) during these behaviors. When selective attention was directed to a stimulus at a neuron’s BF, evoked activity was weaker than when attention was directed away from BF. Spontaneous spike rate did not change. When overall effort was manipulated, on the other hand, we observed changes in both evoked and spontaneous activity. Thus these two aspects of behavioral state have distinct influences on activity in A1, suggesting distinct functional interfaces for the respective top-down control signals. To compare effects of the same behavioral state variables at different levels of the auditory hierarchy, we compared behaviorally driven changes during a simple tone detection behavior in the midbrain inferior colliculus (IC) to changes previously reported in A1. Very little attention has been given to the possibility of behavioral influences in IC, but a substantial descending projection from auditory cortex suggests that cortical feedback may modulate incoming auditory signals. In IC single units, we observed a suppression of neural responses during behavior that depended on the similarity of the target tone frequency to the neural BF, as in the previous A1 study. However, the suppression effects in IC reflected global changes in gain, rather than the local spectral tuning shifts observed in A1. Ongoing studies are investigating the effects of cortical inactivation on behavioral effects in IC. ID: 103
Cortical plasticity improves auditory perception Robert Froemke New York University School of Medicine, United States of America [email protected] Our lab studies neuromodulation and plasticity of the cerebral cortex. We generally focus on the functional consequences of changes to synaptic transmission in the auditory cortex of rats and mice, in terms of behavioral improvements and enhanced sensory perception. I will discuss new and unpublished work from our lab on acetylcholine, norepinephrine, and oxytocin, describing how these molecular signals act to alter cortical representations of sensory stimuli at the synaptic and spiking levels, and the consequences of these changes for auditory recognition tasks and social behavior.
S6/3: Wednesday, 17/Sep/2014, 4:30pm - 6:00pm ID: 307
Functional and structural correlates of auditory abilities: Predispositions and plasticity Robert Zatorre1,2 1Montreal Neurological Institute, McGill University, Canada; 2International Laboratory for Brain, Music and Sound Research, Canada [email protected] Speech and music provide good model systems of sensory-motor processing for understanding how brain function and structure are affected by learning. Recent evidence indicates that individual differences in anatomical and functional properties of the neural architecture affect both learning and performance. This lecture will review findings that reiterate evidence of brain plasticity as an outcome of training, but also point to the predictive validity of neuroimaging data in relation to new learning in the speech and music domains. Indices of neural sensitivity to certain stimulus features have been shown to predict individual rates of learning; individual network properties of brain activity are especially relevant in this regard, as they may reflect patterns of functional connectivity that differ systematically across individuals and that may facilitate or constrain learning. Similarly, numerous studies have shown that anatomical features of auditory cortex and related structures, along with their connectivity, can be predictive of new sensory-motor learning ability. Implications of this growing body of literature are discussed in the context of basic science and of potential applications. ID: 174
Neural coding of (newly-) learned sound categories: The contribution of early auditory cortical areas Elia Formisano Maastricht Brain Imaging Center, Maastricht University, The Netherlands [email protected] The transformation of acoustic signals into abstract (categorical) representations is the essence of the efficient and goal-directed neural processing of sounds. While the human and animal auditory system is perfectly equipped to process spectrotemporal sound features, adequate sound identification and categorization require neural sound representations that are invariant to irrelevant stimulus parameters. Crucially, what is relevant and irrelevant is not intrinsic to the physical stimulus structure but needs to be learned over time, often through integration of information from other senses. Despite a large amount of research on the phenomenon of perceptual categorization, no clear answer has yet been found on where and how abstract sound categories are represented in the brain. Whereas animal research provides increasing evidence for complex processing abilities of early auditory areas, results from human studies tend to promote more hierarchical processing models in which categorical perception relies on higher-order temporal and frontal regions. In this talk, I will discuss this apparent discrepancy and illustrate the potential pitfalls attached to research on categorical sound processing. Separating perceptual and acoustical processes possibly represents the biggest challenge. In this respect, examining learning- or experience-induced changes of sound representations in early (and higher-level) auditory areas helps unravel their nature. It is crucial to note that many “perceptual” and “learning-induced” effects demonstrated in animal models did not manifest as changes in overall signal level. I will present recent research showing that, while these effects may remain inscrutable to the univariate contrast analyses typically employed in human neuroimaging, modern analysis techniques – such as fMRI decoding (a sketch follows below) – are capable of unraveling perceptual processes in distributed activation patterns. It is also becoming increasingly evident that, in order to grasp the full capacity of auditory processing in low-level auditory areas, it is necessary to consider the susceptibility of neural responses to context and task and the capacity to flexibly adapt processing resources according to environmental demands. Finally, I will describe novel methodological improvements in measurement techniques (high-field fMRI) and data analysis (e.g., modeling of multidimensional fMRI tuning) that promise to bring the advances from animal and human research closer together.
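A minimal sketch of such a decoding analysis, using a cross-validated linear classifier on simulated voxel patterns; the data, dimensions, and classifier choice are illustrative assumptions, not details of the studies discussed.

```python
# Decode sound category from trial-by-voxel patterns X and labels y. Above-chance
# accuracy indicates category information in the distributed pattern even when
# the mean signal level does not differ between categories.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 200
y = np.repeat([0, 1], n_trials // 2)        # two sound categories
X = rng.normal(0.0, 1.0, (n_trials, n_voxels))
X[y == 1, :20] += 0.5                       # weak multivoxel category signal

acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```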
ID: 274
Musical training induces neuroplastic changes of multisensory nature within the auditory cortex Christo Pantev Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Germany [email protected] Recent neuroscientific evidence indicates that multisensory integration does not only occur in higher-level association areas of the cortex, as the hierarchical models of sensory perception assumed, but also in regions traditionally thought of as unisensory, such as the auditory cortex. Nevertheless, it is not known whether expertise-induced neuroplasticity can alter the multisensory processing that occurs in these low-level regions. The present study used magnetoencephalography to investigate whether musical training may induce neuroplastic changes of multisensory nature within the auditory cortex. MEG data from 4 different experiments are used to demonstrate the effect of musical expertise, along with the effect of music-reading training, on the convergence of auditory, somatosensory and visual stimuli in the auditory cortex. The cross-sectional design of the first 3 experiments allows us to infer that long-term musical training causes these neuroplastic changes, while the short-term training design of the fourth experiment allows us to causally infer that multisensory music-reading training affects the multimodal processing of the auditory cortex.
Posters and short oral presentations of selected posters
Sunday, 14/Sep/2014, 3:00pm - 3:30pm
Short oral presentation of posters with ID numbers 125, 167, 169, 179, and 239.
Sunday, 14/Sep/2014, 4:00pm - 6:00pm
Presentation of posters with odd numbers and of poster 220.
Monday, 15/Sep/2014, 3:00pm - 3:30pm
Short oral presentation of posters with ID numbers 106, 126, 180, 186, and 232.
Monday, 15/Sep/2014, 4:00pm - 6:00pm
Presentation of posters with even numbers and of posters 123 and 173.
List of poster abstracts in order of their ID numbers:
ID: 102 Echo-acoustic flow modifies the cortical map of target range in bats Sophia Bartenstein1, Nadine Gerstenberg1, Dieter Vanderelst2, Herbert Peremans3, Uwe Firzlaff1 1Technical University Munich, Germany; 2University of Bristol, United Kingdom; 3Universiteit Antwerpen, Belgium [email protected] Information from peripheral sensors is typically organized in the brain as topographic representations of sensory epithelial surfaces (‘structural maps’) or as maps of sensory information derived from neural computation (‘computational maps’). Echolocating bats use the delay between their sonar emissions and the reflected echoes to measure target range (see the worked example below), a crucial parameter for avoiding collisions or capturing prey. In many bat species target range is represented as an orderly organized computational map of echo delay in the auditory cortex. While the importance of static maps is clear, little is known about dynamic changes in map representations. Combining dynamic acoustic stimulation in virtual space with extracellular recordings, we show that the computational map of target range in the bat Phyllostomus discolor is modified by the continuously changing flow of acoustic information perceived during flight (‘echo-acoustic flow’). Neurons in the dorsal auditory cortex encode echo-acoustic flow information about the geometric relation between targets and the bat’s flight trajectory, rather than echo delay per se. Specifically, the cortical representation of close-range targets is enhanced when the lateral passing distance of the target decreases. This flow-dependent enhancement of target representation could trigger motor behaviours like vocal control or flight manoeuvres. Our results demonstrate that the computational map of a behaviourally relevant sensory parameter can undergo dynamic modification to quickly adapt to task-specific requirements. The dynamic enhancement of the representation of the most relevant information in a sensory map might represent the neural substrate for adaptive behaviour, which is important to link perception and action in dynamically changing complex environments.
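The delay-to-range mapping that such cortical maps encode is a simple worked computation; the speed of sound in air is assumed here to be roughly 343 m/s.

```python
# An echo returning after delay t corresponds to target range r = c * t / 2,
# because the sound travels to the target and back.
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def target_range_m(echo_delay_s):
    return SPEED_OF_SOUND * echo_delay_s / 2.0  # two-way travel path

for delay_ms in (1.0, 5.0, 10.0):
    print(f"{delay_ms:4.1f} ms echo delay -> {target_range_m(delay_ms / 1000.0):.2f} m")
# roughly 0.17 m, 0.86 m, and 1.72 m
```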
ID: 104
Dissociable influences of primary auditory cortex and the posterior auditory field on neuronal responses in the dorsal zone of auditory cortex Melanie A. Kok, Daniel Stolzberg, Trecia A. Brown, Stephen G. Lomber University of Western Ontario, London, Canada [email protected] The current model of hierarchical processing in auditory cortex has been based mainly on anatomical connectivity, while functional interactions between individual regions have remained largely unexplored. Using cortical deactivation, previous work has addressed functional reciprocal connectivity between primary auditory cortex (A1), the anterior and posterior auditory fields (AAF and PAF), and second auditory cortex (A2). Thus, the purpose of the present study was to expand this functional assessment to the inputs of a higher-order auditory area, the dorsal zone (DZ). Because they comprise the two largest auditory cortical inputs to DZ, cryoloops were placed over A1 and PAF, based on cortical tonotopy (for A1) and known sulcal and gyral landmarks (A1 & PAF). Based on the current model of auditory cortical hierarchy, it was predicted that deactivation of these areas would significantly influence neuronal response rates in DZ. Neuronal responses in DZ were recorded while broadband noise stimuli, as well as pure tones, were delivered during reversible deactivation of A1 alone, PAF alone, or A1 and PAF together. Deactivation of A1 alone significantly reduced neuronal firing rates in DZ, regardless of the stimulus. A1 deactivation also resulted in increased neuronal thresholds and decreased receptive field bandwidths for DZ tuning curves. Deactivation of PAF alone moderately affected DZ neuronal responses, most notably at high sound intensity levels; however, changes in DZ neuronal responses during PAF deactivation were not ubiquitous across stimulus sets. PAF deactivation also had an effect on the tuning curve properties of DZ neurons, but the effects observed were not as robust as those during A1 deactivation. The effect of combined cooling of A1 and PAF was largely driven by A1 deactivation, as in most cases neuronal responses during deactivation of both A1 and PAF were indistinguishable from those during A1 deactivation alone. Together, these results support the current model of auditory cortical hierarchical organization, in that deactivation of the two auditory cortical regions with the largest inputs to DZ (A1 and PAF) resulted in measurable but dissociable changes in neuronal responses in DZ. Further, inputs arising from A1 may be more critical in shaping DZ responses to tones and noise bursts than those from PAF. ID: 105
A correlate of informational masking in anesthetized primary auditory cortex can be explained by basic neuronal tuning properties Peter Bremen1,2, John C. Middlebrooks2 1Donders Institute Nijmegen, The Netherlands; 2University of California Irvine, United States of America [email protected] Psychophysical signal detection in complex listening environments is vulnerable to masking by competing sounds, even if the maskers are remote in frequency from the signal. This phenomenon is called informational masking (IM) and is thought to reflect a breakdown in the formation of discrete signal and masker auditory objects. Here, we sought evidence of IM in the primary auditory cortex (A1) of anesthetized cats. Stimuli were presented from calibrated free-field speakers arranged in the horizontal plane. We fixed the signal at contralateral 40 deg and varied masker locations. Stimuli were four pure-tone pulses repeated at 2.5, 5 and 10 pulses per second. Each pulse contained 4 masker components and (on half the trials) one signal component. We set the signal frequency and level to the unit’s characteristic frequency and to various levels above threshold. Masker frequencies were held constant or randomized across pulses. We defined a ±1/3, ±1/2, or ±1 oct band centered on the signal. Within-band maskers had all components within this band, whereas out-of-band maskers contained frequencies outside of the band. We presented individual masker tones at 40 dB above unit threshold and gated them either synchronously or asynchronously with the signal. Interestingly, we observed masking by components outside the reported critical band of auditory nerve fibers. Frequency-tuning curves in A1, however, are considerably broader than peripheral critical bands, and masker components that produced appreciable masking generally fell within A1 tuning curves. As a result, neuronal thresholds for signal detection were lower 1) for out-of-band compared to within-band maskers, 2) for asynchronous compared to synchronous maskers, and 3) for random compared to constant-frequency maskers. Furthermore, we found that spatial separation (~80 deg) of the signal and masker sources led to mild improvements in signal detection. These observations roughly mirrored results from human psychophysics. We found no indication of IM or across-frequency auditory object formation in anesthetized A1 aside from that accounted for by broad central tuning curves. We deem three explanations possible: 1) IM is a form of energetic masking at the level of A1; 2) IM and auditory object formation arise outside of A1; and 3) they are a product of cortico-cortical feedback loops, which are effectively silenced under anesthesia. We plan to address these possibilities with future studies in awake, behaving animals. ID: 106 Inhibitory development and control of cortical temporal processing Xiaoyang Long1, Rongrong Han2,1, Dongqin Cai1, Yang Liu1, Limin Zhao3, Kexin Yuan1
1Tsinghua University, China, People's Republic of; 2WeiFang Medical University, China, People's Republic of; 3Affiliated Hospital of WeiFang Medical University, China [email protected] In early postnatal life, inhibitory circuits are readily sculpted by sensory experience to enable cortical maturation and map plasticity [1-5]. In the primary auditory cortex (A1), in particular, the development of the tonotopic map is governed by the experience-dependent refinement of inhibitory synaptic strength across the whole receptive field [6]. However, not all stimulus features are topographically represented in the cortex, for example, features of the temporal stimulus domain [7]. Whether and how inhibitory circuits are involved in cortical processing of temporal features, such as stimulus sequences, in the context of development remains unknown. Here we show that, in the developing rat A1, the inhibitory time course and its plasticity play a crucial role in the maturation of temporal sequence processing [8-12]. By applying whole-cell recordings in vivo, we find that, within the range of ethological sound repetition rates [13], even low-repetition-rate stimuli evoke sustained inhibition in developing rat A1. This long-lasting inhibition arises from the remarkably long decay time of inhibitory conductances early in maturation and suppresses cortical responses to successive stimuli. Strikingly, only 3-5 minutes of exposure to high- or low-repetition-rate stimuli dramatically shortens or prolongs, respectively, the time course of inhibition. The exposure-dependent shortening of the inhibitory time course during maturation enables a neuron to respond to higher stimulus repetition rates due to the emergence of a more adult-like integration pattern of excitation and inhibition [14-16]. Thus, our results reveal a novel form of inhibitory synaptic plasticity, highlighting the unique contribution of inhibitory temporal dynamics to the maturation of cortical temporal processing. ID: 107
Neural correlates of spatial hearing in acoustically complex situations: Insights from electrical neuroimaging Jörg Lewald1, Stephan Getzmann2 1Ruhr University Bochum, Germany; 2Leibniz Research Centre for Working Environment and Human Factors Dortmund, Germany [email protected] One of the most remarkable perceptual capacities of humans is their ability to detect, localize, and selectively attend to a particular sound source of interest in complex auditory scenes composed of multiple competing sources, reverberation, and noise. Despite its essential importance for both communication and orientation in space in everyday life, the neural basis of this so-called “cocktail-party phenomenon” has remained largely unknown. Using event-related potentials (ERPs) in combination with standardized low-resolution brain electromagnetic tomography (sLORETA) in forty healthy human subjects, we focused on the localization of a target sound source in the presence of multiple competing sources, relative to the localization of a single target presented in isolation. In the multiple-sources condition, four different animal vocalizations were presented simultaneously, each at a different azimuth position. Subjects localized a previously defined target vocalization by pressing one of four response buttons. In the single-source condition, the same target was presented in isolation. Only trials with correct responses were included in the analysis of the electrophysiological data. The analysis of the vertex ERPs indicated stronger amplitudes with multiple than with single sound sources at the time of the P1 and N2 components, while amplitudes were stronger with single than with multiple sources at the time of the N1 and P2 components. To reveal brain areas specifically involved in sound localization in the presence of multiple distractor sources, responses to multiple sources were contrasted with single sources using sLORETA for the P1-N1 and the N1-P2 peak-to-peak differences (computed as sketched below). Electrical neuroimaging of the P1-N1 complex revealed maximum activation in the right inferior parietal lobule, precuneus, postcentral gyrus, and insula, indicating higher activation by multiple than single sound sources. For the N1-P2 complex, maximum activation was found in the right posterior superior temporal and middle temporal gyri, indicating higher activation by multiple than single sound sources. Electrical neuroimaging of the P2-N2 complex revealed maximum activation in right inferior frontal and middle frontal gyri, indicating higher activation by multiple than single sound sources. These results document a complex chronology of successive excitatory and inhibitory activations within a cortical network specifically concerned with spatial hearing in complex situations.
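A minimal sketch of a peak-to-peak computation of this kind; the latency windows and toy ERP below are assumptions for illustration, not the study's parameters.

```python
# Find positive and negative component peaks in conventional latency windows of a
# vertex ERP, then take the difference as the peak-to-peak amplitude.
import numpy as np

def peak_to_peak(erp_uv, times_ms, pos_window, neg_window):
    pos_mask = (times_ms >= pos_window[0]) & (times_ms <= pos_window[1])
    neg_mask = (times_ms >= neg_window[0]) & (times_ms <= neg_window[1])
    return erp_uv[pos_mask].max() - erp_uv[neg_mask].min()

times = np.arange(0.0, 500.0, 2.0)                 # 0-500 ms at 500 Hz sampling
erp = (1.5 * np.exp(-((times - 60) / 15) ** 2)     # toy P1
       - 3.0 * np.exp(-((times - 110) / 20) ** 2)  # toy N1
       + 2.0 * np.exp(-((times - 190) / 30) ** 2)) # toy P2

p1n1 = peak_to_peak(erp, times, pos_window=(40, 80), neg_window=(80, 150))
n1p2 = peak_to_peak(erp, times, pos_window=(150, 250), neg_window=(80, 150))
print(p1n1, n1p2)
```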
ID: 109
Bimodal stimulus-timing-dependent plasticity in primary auditory cortex is altered after noise-induced tinnitus Gregory Joseph Basura University of Michigan, United States of America [email protected] Background: Primary auditory cortex (A1) neurons demonstrate bimodal (auditory-somatosensory) integration that is stimulus-timing dependent, as demonstrated in the dorsal cochlear nucleus (Koehler and Shore, PLoS One, 2013), with Hebbian and anti-Hebbian timing rules analogous to in vitro spike-timing-dependent plasticity (STDP; illustrated below). After noise exposure, and tinnitus, there are changes in bimodal integration and plasticity in the DCN. The rationale for the present study was to determine whether tinnitus-associated changes in STDP principles like those found in the DCN also exist in A1 after noise-induced tinnitus. Methods: Four-shank, 32-channel silicon electrodes were placed in A1 of sham and noise-exposed guinea pigs with and without evidence of tinnitus, as indicated by gap-induced pre-pulse inhibition of the acoustic startle. Stimulus-timing-dependent plasticity was measured by comparing tone-evoked responses and spontaneous activity before, 5 and 15 minutes after bimodal (tone-spinal trigeminal nucleus; Sp5) stimulation with alternating pairing orders (tone-Sp5 or Sp5-tone) and intervals (40, 20, 10 and 0 ms). Results: Bimodal stimulation in sham controls and in noise-exposed animals without tinnitus induced suppression or facilitation of tone-evoked firing rates 5 min after pairing, and predominantly Hebbian-like timing rules 15 minutes after pairing. In contrast, noise-exposed animals with tinnitus showed Hebbian timing rules 5 min after pairing and predominantly anti-Hebbian rules 15 min after pairing. Conclusions: The present findings demonstrate that, like the DCN, A1 responses following bimodal stimulation reflect STDP. Moreover, noise-induced tinnitus can modify multisensory integration in A1 and influence the temporal relationships of converging auditory and non-auditory sensory systems. This effect on sensory processing may improve the understanding of mechanisms driving neural changes in A1 and ultimately lead to treatments for tinnitus.
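The Hebbian versus anti-Hebbian timing rules can be summarized by an exponential STDP-like window. The following sketch is purely illustrative; its parameters are assumptions, not fitted values from these recordings.

```python
# Predicted sign of the response change as a function of pairing interval:
# under a Hebbian rule, tone-leading pairings facilitate and Sp5-leading
# pairings suppress; an anti-Hebbian rule flips the sign. Effects decay
# exponentially with the pairing interval.
import numpy as np

def stdp_window(dt_ms, a_plus=1.0, a_minus=1.0, tau_ms=20.0, hebbian=True):
    """Response change for pairing interval dt_ms (dt > 0: tone leads Sp5)."""
    sign = 1.0 if hebbian else -1.0
    if dt_ms >= 0:
        return sign * a_plus * np.exp(-dt_ms / tau_ms)   # facilitation if Hebbian
    return -sign * a_minus * np.exp(dt_ms / tau_ms)      # suppression if Hebbian

for dt in (-40, -20, -10, 0, 10, 20, 40):                # intervals used in the study
    print(dt, round(stdp_window(dt), 3), round(stdp_window(dt, hebbian=False), 3))
```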
ID: 110
Neural computations underlying temporal and rate coding in auditory cortex Daniel Bendor University College London, United Kingdom [email protected] When a brief sound is slowly repeated, the resulting percept is a stream of discrete events, referred to as acoustic flutter. If the sound’s repetition rate increases above ~40-45 Hz, the percept changes from flutter to fusion; the sensation of discrete events is transformed into a fused percept, with a pitch equal to the repetition rate. Within auditory cortex, repetitive acoustic stimuli are encoded by two main types of responses, synchronized and non-synchronized. Synchronized neurons represent a repeated sound temporally by virtue of their ability to stimulus-lock to the onset of each repeated sound for repetition rates within the perceptual range of flutter. Non-synchronized neurons form a complementary neural code, increasing their firing rate monotonically with repetition rate over the perceptual range of fusion. To explore the underlying mechanisms that could generate these dichotomous neural coding regimes, I created a leaky integrate-and-fire (LIF) computational model of an auditory cortical neuron. Using this model, I find that strong, delayed inhibition (relative to excitation) leads to stimulus synchronization, while non-synchronized responses can be generated by excitation occurring in close temporal proximity with weaker inhibition. To help validate this model, I recorded single-unit activity in the auditory cortex of four awake marmosets and tested several predictions made by this computational model, including the existence of additional neural coding regimes and the ability of some neurons to switch between synchronized and non-synchronized response modes. These results suggest that the underlying mechanism responsible for temporal and rate coding in auditory cortex can be parsimoniously explained as a byproduct of inhibition varying in strength and timing.
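A minimal, illustrative sketch of this kind of LIF model in Python/NumPy follows; all parameter values (membrane and synaptic time constants, input strengths, inhibitory delay and scaling) are assumptions chosen for demonstration, not values fitted to the marmoset data.

    import numpy as np

    def lif_response(rep_rate_hz, inh_delay_ms, inh_scale, dur_s=1.0, dt_ms=0.1):
        # LIF neuron driven by a brief excitatory pulse at each stimulus
        # repetition, each followed by an inhibitory pulse whose delay and
        # strength are the two parameters of interest.
        tau_m, v_rest, v_thresh, v_reset = 10.0, -65.0, -50.0, -65.0  # ms, mV
        tau_syn = 5.0  # ms
        n = int(dur_s * 1000 / dt_ms)
        t = np.arange(n) * dt_ms
        i_exc = np.zeros(n)
        i_inh = np.zeros(n)
        for on in np.arange(0.0, dur_s * 1000, 1000.0 / rep_rate_hz):
            m = t >= on
            i_exc[m] += 30.0 * np.exp(-(t[m] - on) / tau_syn)
            mi = t >= on + inh_delay_ms
            i_inh[mi] += inh_scale * 30.0 * np.exp(-(t[mi] - on - inh_delay_ms) / tau_syn)
        v, spikes = v_rest, []
        for k in range(n):
            v += (-(v - v_rest) + i_exc[k] - i_inh[k]) * dt_ms / tau_m
            if v >= v_thresh:
                spikes.append(t[k])
                v = v_reset
        return np.array(spikes)

    # Strong, delayed inhibition: spikes lock to each repetition (synchronized)
    sync = lif_response(rep_rate_hz=10, inh_delay_ms=8.0, inh_scale=1.5)
    # Weak, near-coincident inhibition: firing rate grows with repetition rate
    rates = [lif_response(r, inh_delay_ms=1.0, inh_scale=0.3).size for r in (20, 40, 80)]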
ID: 111
Communication call evoked layer-specific neuronal response patterns in the auditory cortex of Mongolian gerbils Markus Schaefer, Manfred Kössl Goethe-University Frankfurt am Main, Germany [email protected] Biologically important communication sounds of mammals are often composed of a complex structure of time-varying spectral features, which is an important aspect of auditory behaviour and crucial for social interactions. However, neuronal responses evoked by communication sounds are poorly investigated in the literature, and it is virtually impossible to directly study the neural mechanisms for processing communication sounds, e.g., speech, in humans. Animal models may therefore provide valuable information about common mechanisms for the perception of communication sounds. At the level of spiking activity, a number of studies in various species have demonstrated that natural vocalizations generally produce stronger neural responses than their time-reversed versions, and that call-evoked local field potentials in the auditory cortex can uniquely encode each call type. But are differences also visible at the level of the intracolumnar current flows of the six-layered auditory cortex? Here we recorded call-evoked local field potentials and multiunit spiking activity from perpendicularly inserted linear multi-contact electrodes spanning all cortical layers to calculate laminar current source density distributions in the left primary auditory cortex of Mongolian gerbils. Animals were presented with pure tones at the respective characteristic frequencies, conspecific communication calls, and their reversed versions at 80 dB SPL and at 20 dB above the minimal response threshold of the respective neurons. Current source density (CSD) patterns were parcellated and quantified to analyse the size, latency, extent, and strength of the respective sinks and sources. It could be shown that different conspecific communication calls elicit call-specific CSD patterns while following the intracolumnar circuitry. Current sinks of warning calls appear to have shorter latencies compared to pure tone stimulation of the same duration.
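For orientation, the laminar CSD estimate such recordings rely on is the standard second spatial derivative of the LFP across equally spaced contacts; a minimal Python sketch, in which contact spacing and tissue conductivity are illustrative assumptions:

    import numpy as np

    def csd_estimate(lfp, spacing_um=100.0, sigma=0.3):
        # One-dimensional CSD from laminar LFPs via the second spatial
        # derivative, assuming equal contact spacing and homogeneous
        # conductivity sigma (S/m). lfp: (n_channels, n_samples), channels
        # ordered from superficial to deep.
        h = spacing_um * 1e-6  # spacing in meters
        # CSD(z) ~= -sigma * (phi(z+h) - 2*phi(z) + phi(z-h)) / h**2
        return -sigma * (lfp[2:] - 2 * lfp[1:-1] + lfp[:-2]) / h**2

    # Example with simulated data: 16 contacts spanning the cortical depth
    lfp = np.random.randn(16, 500) * 1e-4  # volts
    csd = csd_estimate(lfp)  # (14, 500); sink/source strength per channel and time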
ID: 112
Tonotopic mapping in patients with unilateral lesions Sandra Da Costa1, Melissa Saenz2,3, Wietske Van der Zwaag4, Pierre-André Rapin5, Stephanie Clarke1 1CHUV, DNC, NPR, Lausanne, Switzerland; 2CHUV, DNC, LREN, Lausanne, Switzerland; 3EPFL, IBI-STI, Lausanne, Switzerland; 4EPFL, CIBM, Lausanne, Switzerland; 5Institution de Lavigny, Lavigny, Switzerland Sandra.Borges-Da-[email protected] The primary auditory cortex (PAC) is central to human auditory abilities, yet its anatomical location remains unclear. In control subjects, we measured two large tonotopic subfields of PAC (A1 and R) relative to the underlying anatomy of Heschl’s gyrus (HG). The data revealed a clear anatomical-functional relationship that indicates the location of PAC across the range of common morphological variants of HG (single gyri, partial or complete duplications). The size and shape of these subfields are specific to each hemisphere and subject. Here, we speculate that A1 and R (but also other, non-primary subfields) could be modulated by events such as stroke or traumatic brain injury (TBI). We performed tonotopic mapping in six patients and twelve healthy controls at 3T. Ascending and descending progressive cycles of pure tone bursts (from 88 to 8000 Hz in half-octave steps) were presented in blocks of 32 seconds during two 8-minute runs. PAC was functionally defined as the largest cluster in HG containing the primary mirror-symmetric gradients in each hemisphere and subject. Frequency distributions (percentages for each frequency representation) were calculated based on the frequency preferences normalized by the total number of voxels within our region of interest. Then, each time course was extracted, normalized, and averaged across preferred frequency in order to obtain the percent signal change variation per presented frequency. Tonotopic gradients were maintained in ipsi- and contralesional hemispheres, despite some relative alterations in the frequency representations. Frequency distributions were slightly shifted towards the low frequencies in patients with hemispheric lesions, with larger shifts for larger lesions or lesions near PAC.
Percent signal change variations were different in patients, with an ipsilesional drop around 1000 Hz in patients with hemispheric lesions and a contralesional increase in patients with cerebellar lesions. Tonotopic maps were (1) preserved only if primary and non-primary auditory areas were spared by the lesion, and (2) strongly influenced by the distance between PAC and the lesion. ID: 113
Lateralized processing of basic acoustic parameters and the effect of sequential comparison Nicole Angenstein, André Brechmann Leibniz Institute for Neurobiology Magdeburg, Germany nicole.angenstein@lin-magdeburg.de Acoustic parameters of a current sound segment like intensity, duration, and frequency are not judged in an absolute manner but in relation to preceding sound segments; e.g., the decision as to whether a given tone is short or long can only be made relative to a reference. As the acoustic information of sound sequences unfolds over time, the information of individual segments needs to be stored until the evaluation is finished and has to be sequentially updated. Such sequential processing is suggested to mainly involve the left hemisphere. With functional magnetic resonance imaging (fMRI), we investigated the lateralization of processing in the human auditory cortex (AC) for duration, intensity, and direction of frequency modulations (FM). Additionally, we looked for changes in activity due to additional sequential comparison of these parameters. To this end, we used a method that elucidates the differential hemispheric contributions to processing, based on the increase in activity caused by presenting additional noise contralateral to the task-relevant stimuli. For the categorization of FM tones according to their FM direction, we confirmed a strong involvement of the right AC. The pairwise sequential comparison of this parameter led to an additional involvement of the left AC. The processing of intensity is strongly left-lateralized irrespective of the type of stimuli (FM tones, harmonic tones without FM) and
tasks (without and with sequential comparison). For the categorization of tones according to their duration, the right AC was more strongly involved for FM tones and the left AC was more strongly involved for unmodulated tones. For both intensity and duration, additional sequential comparison of tones according to these parameters led to a stronger involvement of the left AC in contrast to the mere categorization of tones according to these parameters. In accordance with previous studies on sequential comparison, the results show that the left AC is additionally involved when fundamental acoustic parameters have to be sequentially compared. The need to compare a feature between tones seems to drive an additional involvement of the left AC regardless of whether the main location for the processing of this feature is the left or the right AC. Furthermore, additional brain resources outside the AC seem to be required during sequential comparison in contrast to categorization. Supported by Deutsche Forschungsgemeinschaft (DFG, AN 861/4-1).
ID: 114
Corticofugal modulation of cochlear activity Katharina Jäger, Manfred Kössl Goethe-University Frankfurt am Main, Germany [email protected] The corticofugal descending auditory system is a complex neuronal network that reaches the cochlear hair cells through the olivocochlear pathway, making it a major contributor to feedback loops that could modulate afferent responses. So far, only one study has demonstrated a direct regulatory role of auditory cortex activity on cochlear potentials, suggesting the presence of two functional pathways from the cortex to the cochlea. These two pathways are assumed to be the medial and the lateral olivocochlear systems. By manipulating auditory cortical activity with sodium ion-channel blockers such as lidocaine, outer hair cell function can be evaluated using otoacoustic emissions as a non-invasive measure of the cochlear amplifier. Here, distortion-product otoacoustic emissions (DPOAEs) were recorded in
anaesthetized Mongolian gerbils, Meriones unguiculatus, before and after auditory cortex deactivation by lidocaine microinjections in the ipsilateral or contralateral hemisphere. DPOAEs were evoked over a broad frequency range of 0.5-40 kHz at levels of 60/50 dB SPL and 40/30 dB SPL, respectively. The cortical frequency characteristics of the site of lidocaine injection were determined using extracellular recordings with carbon electrodes. As a control, in some animals saline was injected instead of lidocaine. Cortical microinjections of lidocaine induced reversible amplitude shifts of DPOAEs in all tested animals. The most common effect was a DPOAE amplitude decrease, but some animals showed DPOAE amplitude enhancements after injection. Effects were distributed across all tested frequencies and not restricted to the preferred frequency of the cortical site of injection. In most animals, the DPOAE amplitude completely recovered after 80 minutes. Injections in the ipsilateral hemisphere induced effects with magnitudes comparable to those after injections in the contralateral hemisphere. In contrast, almost no changes in DPOAE levels were obtained after saline microinjections. These results indicate that deactivation of auditory cortex activity through lidocaine has a massive impact on peripheral auditory responses in the form of DPOAEs, probably through cortico-olivocochlear pathways. The effects seem to be widespread across frequencies, suggesting a ubiquitous influence of cortical activity on cochlear processes. Furthermore, ipsilateral projections seem to be influenced as much as contralateral projections. ID: 115
Neural plasticity and attention in normal hearing and in tinnitus Larry Evan Roberts McMaster University Hamilton, Canada [email protected] Most if not all models of tinnitus generation propose that neural plasticity contributes to the neural changes that underlie tinnitus. It has also been proposed that the disparity between what the auditory cortex predicts it should be hearing (this prediction coded by aberrant synchronous neural activity in primary auditory
cortex coding for the tinnitus percept) and input delivered to the brain by the damaged cochlea activates neural systems that support auditory attention as well. Our research has investigated how the expression of neural plasticity and attention in the normal hearing human brain appears to be modified in individuals experiencing tinnitus. The findings support the view that the rules that describe auditory remodeling in the normal hearing brain are modified by the presence of tinnitus-‐related neural activity. Tinnitus-‐related modifications include a relaxation of constraints on auditory representations in primary auditory cortex, impaired temporal plasticity in subcortical pathways, and reduced modulation by attention of brain responses evoked by sounds in the tinnitus frequency region of primary auditory cortex and nonspecifically in nonprimary auditory cortex. Supported by NSERC of Canada and the Tinnitus Research Initiative.
ID: 116
Dopamine-modulated recurrent corticoefferent feedback in primary auditory cortex: Perceptual salience and memory function Max Happel1,2, Matthias Deliano1, Juliane Handschuh1, Frank W. Ohl1,2,3 1Leibniz Institute for Neurobiology Magdeburg, Germany; 2Otto-von-Guericke University Magdeburg, Germany; 3Center for Behavioral Brain Sciences Magdeburg, Germany mhappel@lin-magdeburg.de Dopamine modulates neural circuits throughout the brain and has important roles in motor control, reward, attention, and learning. In primary auditory cortex (AI), dopamine is required for learning and long-term memory formation. However, the circuit effects of layer-dependent dopaminergic neurotransmission in sensory cortex and their possible roles in perception, learning, and memory are largely unknown. In a recent study, we gained first insights into the circuit functions of dopamine in auditory cortex by using current-source-density analyses to compare synaptic activation patterns evoked by auditory stimulation in the presence and absence of a D1/D5 dopamine receptor agonist in gerbil AI (Happel et al., 2014).
Activation of D1/D5 receptors led to sustained auditory (thalamocortical) input processing via a local and polysynaptic recurrent cortico-thalamocortical feedback loop originating from infragranular (corticoefferent) sub-circuits. A detailed circuit analysis of this dopamine-modulated corticoefferent feedback related its activation to the generation of behaviorally relevant signals and perception in a behavioral detection task. Dopaminergic modulation of such recurrent corticoefferent feedback might allow for learning-induced gain control, promoting the read-out of task-related information from cortical synapses and improving perceptual salience and learning. We could further demonstrate the translational relevance of this corticothalamic feedback-gain circuitry for learning and memory malfunctions in the 5xFAD mouse model of Alzheimer’s disease. Disruption of neuronal networks in the Alzheimer-afflicted brain is increasingly recognized as a key correlate of cognitive and memory decline in Alzheimer patients. In the 5xFAD model we found impaired functions of pyramidal cells in infragranular layers leading to a loss of the corticoefferent feedback-gain. This specific disruption of normal cross-laminar cortical processing coincided with mnemonic deficits in contextual and cued fear conditioning and preceded the occurrence of cell death. We thereby revealed a possible circuit mechanism of memory deficits in early AD (Lison et al., 2013). ID: 117
Cortical processing of spectrally degraded speech as revealed by intracranial recordings Kirill V. Nourski1, Ariane E. Rhone1, Mitchell Steinschneider2, Hiroto Kawasaki1, Hiroyuki Oya1, Matthew A. Howard1 1The University of Iowa, United States of America; 2Albert Einstein College of Medicine, New York, United States of America kirill-[email protected] Speech perception can withstand a considerable amount of signal degradation. This is exemplified by cochlear implant (CI) users, who may achieve excellent speech comprehension despite severely limited spectral information. Understanding the
cortical processing of normal and spectrally degraded speech can help define the roles of different auditory and auditory-related fields in speech perception and potentially contribute to improvements in CI design and stimulation paradigms. Subjects were normal-hearing neurosurgical patients undergoing chronic invasive monitoring for medically refractory epilepsy. Stimuli were utterances (/aba/, /ada/, /apa/, /ata/), spoken by a male talker. The stimuli were spectrally degraded using a noise vocoder (1-8 bands) and were presented in target detection and stimulus identification tasks. Electrophysiological data were recorded simultaneously from Heschl’s gyrus (HG), posterolateral superior temporal gyrus (PLST), and inferior and middle frontal gyri (IFG, MFG) using multicontact depth and subdural grid electrodes. Responses were characterized as averaged auditory evoked potentials (AEPs) and event-related band power. In posteromedial HG (auditory core), AEPs had short latencies and featured peaks that reflected consonant release and frequency-following responses to the voice fundamental of the natural stimuli. High gamma (70-150 Hz) responses were similar in magnitude for vocoded and natural stimuli, contrasting with the pattern seen on PLST, where natural stimuli elicited larger and broader patterns of high gamma activity. Responses from non-core cortex on anterolateral HG had long latencies and were often selective for natural stimuli. IFG and MFG exhibited complex response patterns that paralleled stimulus intelligibility, task difficulty, and experience with the stimuli. The findings highlight marked differences in the representation of noise-vocoded speech across core, non-core, and auditory-related cortical areas. They support a scheme wherein acoustic stimulus attributes encoded within the auditory core are transformed into phonetic representations at the level of PLST. Auditory-related areas appear to subserve functions related to comprehension of stimuli and task performance. By modeling the patterns of cortical activity elicited by CI stimulation, the intracranial data lay the foundation for better understanding the cortical processing of degraded speech and the improvements that occur in CI users with rehabilitation and experience.
ID: 118
Signal coding and perception of iterated rippled noise with cochlear implant (CI) users Luise Wagner, Torsten Rahne University Hospital Halle (Saale), Germany luise.wagner@uk-halle.de Pitch is one of the primary auditory percepts. Variation of pitch is associated with the perception of melodies. Thus, perception of pitch is important for music and language perception as well as for segregating the sources of concurrent sounds. Auditory pitch detection relies on cochlear and central regularity detection. Normal-hearing listeners use the temporal fine structure and the envelope of signals for the encoding of pitch. In CI users, the transfer of spectral components and temporal fine structure is limited, and the perception of pitch and timbre is still difficult. We examined the perception of iterated rippled noise (IRN), i.e., white noise that is delayed and added back onto the original signal (Yost 1996), in normal-hearing listeners and CI users. The frequency of the perceived tonal component is determined by the reciprocal of the delay, and its strength by the number of added iterations. Difference limens for IRN iterations were measured psychoacoustically. Auditory evoked potentials were measured with 32-channel EEG recordings. A pitch onset response was found for IRN in both groups. First results will be discussed. Yost WA (1996) Pitch of iterated rippled noise. Journal of the Acoustical Society of America 100(1):511-8
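As a rough illustration of the delay-and-add construction (here the common variant in which the running waveform is fed back on each iteration), a Python sketch with illustrative parameters:

    import numpy as np

    def iterated_rippled_noise(fs=44100, dur_s=1.0, delay_ms=2.5, n_iter=8, gain=1.0, seed=0):
        # Yost-style IRN: repeatedly delay the running waveform and add it
        # back. Perceived pitch ~= 1/delay (2.5 ms -> ~400 Hz); more
        # iterations yield a stronger tonal component.
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(int(fs * dur_s))
        d = int(round(fs * delay_ms / 1000.0))
        for _ in range(n_iter):
            x = x + gain * np.concatenate([np.zeros(d), x[:-d]])
        return x / np.max(np.abs(x))  # normalize to +/-1

    irn = iterated_rippled_noise(delay_ms=2.5, n_iter=8)  # ~400 Hz pitch percept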
ID: 119
Stimulus specific adaptation to simple and complex sounds in freely moving rats Ana Polterovich1,2,3, Amit Yaron1,2,3, Israel Nelken1,2 1Hebrew University of Jerusalem, Israel; 2Edmund and Lily Safra Center for Brain Sciences, Israel; 3equally contributing [email protected] In order to survive, animals must be able to predict what is going to occur next and plan their reactions appropriately. A way to do this is to extract relevant information from the past and build a statistical model of the environment that can be used to predict future
events. Indeed, numerous studies have demonstrated sensitivity of neural activity to the overall probability of a stimulus. One of the manifestations of such sensitivity is stimulus-‐specific adaptation (SSA): a decrease in responses to a common stimulus that does not generalize, or only partially generalizes, to other stimuli. SSA has been studied mainly in anesthetized animals and mainly with pure tones, and so its relevance for real-‐world tasks is not clear. Here we present a preliminary report of studies of SSA using awake, freely moving rats, as well as our attempts to test SSA with complex sounds. We are recording the electrophysiological responses in primary auditory cortex with a 16-‐electrode array, in an acute-‐awake preparation where the electrodes are implanted on the same day as the recording session. The rats move around a computer-‐controlled environment under video control, while listening to different auditory stimuli. We report here SSA to pure tones and to sounds as complex as human speech in the awake, non-‐behaving rat. ID: 121
Cortical modulation of spike-time precision in the medial geniculate body of the thalamus during reversible deactivation of primary auditory cortex in cats Daniel Stolzberg, Blake E. Butler, Melanie A. Kok, Stephen G. Lomber University of Western Ontario, London, Canada [email protected] The functional role of corticothalamic feedback projections from primary auditory cortex (A1) to neurons in the medial geniculate body (MGB) is not well understood. One hypothesis suggests that cortical activity modulates the temporal dynamics of thalamic neuronal responses in order to improve the coding fidelity of ascending sensory information. In order to investigate its role in modulating the temporal precision of spike trains from MGB neurons, A1 was reversibly deactivated using a cryoloop while recording from a multichannel electrode array in the MGB of ketamine-anesthetized cats. Metric-space analysis was used to quantify the reliability of MGB spike-time coding strategies in response to normal
and time-reversed vocalizations before, during, and after deactivation of A1. Similar to the findings of previous studies on coding in the MGB, the majority of neurons (>90%) utilized a temporal coding strategy rather than a rate coding strategy. Deactivation of A1 primarily resulted in a decrease in the temporal precision of spike trains from MGB neurons, as well as a decrease in mutual information, a measure of a neuron’s selectivity for a normal or time-reversed vocalization. These preliminary results support the hypothesis that A1 plays a role in modulating the temporal dynamics of MGB neural responses to sound.
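The abstract does not spell out which metric was used, but the Victor-Purpura spike-time distance is the canonical tool of metric-space analysis; the following is a minimal Python sketch, with the precision parameter q and the example spike times chosen purely for illustration.

    import numpy as np

    def victor_purpura_distance(t1, t2, q):
        # Minimal cost of transforming spike train t1 into t2:
        # 1 per spike insertion/deletion, q * |dt| per spike shift.
        # q (in 1/s) sets the temporal precision probed: q -> 0 reduces
        # to a spike-count (rate) code; large q approaches a timing code.
        n, m = len(t1), len(t2)
        d = np.zeros((n + 1, m + 1))
        d[:, 0] = np.arange(n + 1)  # delete all spikes of t1
        d[0, :] = np.arange(m + 1)  # insert all spikes of t2
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d[i, j] = min(d[i - 1, j] + 1,
                              d[i, j - 1] + 1,
                              d[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
        return d[n, m]

    # Spike times in seconds; q = 200/s probes roughly 5 ms precision
    print(victor_purpura_distance([0.010, 0.052, 0.110], [0.012, 0.055], q=200.0))

Sweeping q and asking at which temporal precision responses to the normal and time-reversed vocalizations are best discriminated is the usual way such an analysis separates temporal from rate coding. ID: 122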
MGm makes it big: Ultrastructure of thalamocortical “giant” boutons Katja Saldeitis1, Karin Richter2, Henning Scheich1, Eike Budinger1 1Leibniz Institute for Neurobiology Magdeburg, Germany; 2Otto-von-Guericke University Magdeburg, Germany katja.saldeitis@lin-magdeburg.de The auditory system comprises some very large axonal terminals, among them the endbulb and the calyx of Held in the brainstem, and those formed by corticothalamic pyramidal neurons originating in layer V (referred to as “drivers”, e.g. Lee & Sherman, 2010, Front. Neurosci. 4). A hitherto unknown population of “giant” boutons arising from the medial division of the medial geniculate body (MGm) was recently discovered in the course of tracing studies on the thalamocortical connections of the gerbil auditory cortex (Saldeitis et al., 2014, J. Comp. Neurol. 522). Specific features (such as rapid, high-fidelity transmission) of the previously known “giant” terminals have been related to their size (“form fits function”). Therefore, and due to their preferred (column-like) location in infragranular layers of auditory cortex, we speculate that the giant synapses from the MGm play an important role in specific cortical activities. By means of small injections of the tract tracer biocytin into the MGm to anterogradely label thalamocortical boutons, and pre-embedding staining for transmission electron microscopy, we aimed to give a first description of the ultrastructure of MGm terminals, which in turn
will provide evidence about their putative functions. We identified labeled boutons having cross-section areas from about 1 µm² (normal boutons) up to 5 µm² (giant boutons). Giant (but also normal-sized labeled) terminals contain a large pool of clear, round vesicles, and form multiple asymmetric synapses with their postsynaptic targets, which are mainly dendritic spines. This indicates that they exert excitatory, presumably glutamatergic, effects on layer V and/or VI pyramidal cells. Notably, compared to adjacent non-labeled terminals, boutons from the MGm have a high mitochondrial fraction, suggesting that they consume much energy. Together, it is conceivable that their transmission has a strong, temporally precise influence on the postsynaptic neurons, which (considering their columnar distribution pattern in auditory cortex) may enable the MGm neurons to orchestrate and synchronize the activity of multiple cortical ensembles. ID: 123
Behavioral and neural correlates of perceptual ambiguity in auditory stream segregation Susann Deike1, Peter Heil1, Martin Böckmann-‐Barthel2, Lena-‐Vanessa Dolležal3, Georg Klump3, André Brechmann1 1Leibniz Institute for Neurobiology Magdeburg, Germany; 2Otto-‐von-‐Guericke University Magdeburg, Germany; 3Carl von Ossietzky University Oldenburg, Germany sdeike@lin-‐magdeburg.de In experiments on auditory stream segregation using ABAB or ABA_ sound sequences, three perceptual domains have traditionally been distinguished. The sound sequences are predominantly perceived as a single stream when the physical differences between the A and the B sounds are small, and as two segregated streams when the differences are large (unambiguous sequences). Both of these percepts are possible and the listener can switch between them when the differences are of intermediate size (ambiguous sequences). Such ambiguous sequences are suited to probe the neural basis of streaming because different perceptual organizations can be studied without varying physical stimulus parameters. Here, we describe behavioral as well as neural
correlates that relate to general aspects of decision making on perceptually ambiguous sound sequences. In psychophysical and fMRI experiments, human subjects were asked to listen to the sound sequences (ABAB or ABA_) and to indicate their percept (one stream, two streams), either continuously during the sequence presentations (psychophysical study) or at the end of the sequences (fMRI). Several psychophysical measures differed clearly between unambiguous and ambiguous sequences. For ambiguous sequences, the time to the first decision was longer, and the switching rate and the proportion of sequences with switches were higher, compared to unambiguous sequences. In the fMRI study, we found specific BOLD responses for the ambiguous sequences in higher-level areas, specifically the posterior medial prefrontal cortex and the posterior cingulate cortex. These two regions are associated with cognitive functions related to monitoring decision uncertainty and higher task demands, respectively. Thus, both studies suggest that perceptual ambiguity is characterized by an uncertainty in deciding for one perceptual organization and by a higher cognitive load due to this uncertainty. Hence, the analysis of results from streaming experiments should take into account that additional processing may be involved when subjects listen to ambiguous sequences, processing that is not specifically related to streaming per se but to decision making in general. Supported by the DFG (SFB/TRR31).
ID: 124
Postnatal development and plasticity of anatomical pathways suitable for multisensory integration processes in rodent primary sensory cortices A1, S1, and V1 Julia U. Henschke1, Patrick Kanold2, Henning Scheich1, Eike Budinger1 1Leibniz Institute for Neurobiology Magdeburg, Germany; 2University of Maryland, College Park, United States of America Julia.henschke@lin-magdeburg.de Multisensory integration recruits not only higher-level association cortex, but also low-
level and even primary sensory cortices. Recently, we showed that the primary auditory (A1), somatosensory (S1), and visual (V1) cortices of adult Mongolian gerbils receive convergent inputs from brain structures of non-matched senses (Henschke et al., Brain Structure and Function, 2014). The underlying anatomical pathways include a thalamocortical (TC) and a corticocortical (CC) system, which might preferentially serve short-latency integration processes in the primary sensory areas. Here, we ask how the multisensory TC and CC systems develop and change over the total lifespan of the animals. Knowledge about the mechanisms underlying these changes during development will provide insights into plastic processes caused by experience, learning, and sensory deprivation. We approached the issue by stereotaxic pressure injections of the retrograde tracer Fluorogold into A1, S1, and V1 at postnatal days 1 (somatosensation already active), 9 (before ears and eyes open), 15 (ear canals open), 21 (eyes open), 28 (weaned), 120 (young adult), and 1000 (old age) to identify cells that project to the particular areas. Our cell counts demonstrated, especially in the first postnatal weeks, large changes in the developing multisensory TC and CC connections, mainly due to an initial competition between the developing senses and a subsequent structural consolidation of the sensory pathways during the critical and sensitive phase. At adult stages, many crossmodal TC and CC connections remain; however, most of them disappear during aging. To test whether the changes in crossmodal connectivity are due to a selective generation of projection neurons, a loss of these neurons, a reorganization of axonal branches, and/or changes in the nature of the transmission systems (driving, modulatory, inhibitory), we counterstained the histological sections with antibodies against markers for neurogenesis (Doublecortin), cell apoptosis (Caspase-3), axonal plasticity (GAP-43), and calcium-binding proteins (Calbindin, Parvalbumin). Our results show that during normal development TC and CC projection neurons are not newborn, do not die, and that major axonal reorganization processes are mediated by neurons within the non-lemniscal thalamic nuclei and the primary sensory cortices themselves.
ID: 125
Fine spatial representation of interaural level differences in the auditory cortex: A two-photon imaging study Mariangela Panniello, Andrew J. King, Johannes C. Dahmen, Kerry M. Walker University of Oxford, United Kingdom [email protected] Hearing research has been revolutionized by the recent application of in vivo two-photon calcium imaging to the study of auditory cortex (AC), thanks to the high spatial sampling rate of this technique. The topographic representation of sound frequency, recognized as the main organizational principle of primary AC, has been shown to be absent at a fine spatial scale (within tens of micrometers) in mice. Here, we use two-photon calcium imaging to investigate the responses of neurons in mouse primary AC to sound level differences between the two ears (interaural level differences; ILDs) at a much higher spatial resolution than has previously been possible. Previous extracellular recording studies have described bands or clusters of neurons with similar binaural interaction properties, and the only evidence for a topographic organization of ILD sensitivity across the auditory cortical surface has been reported in bats. Stereotaxic injections of an AAV vector carrying GCaMP6m, a genetically encoded calcium indicator, were performed in the AC of mice aged 5-6 weeks. Two-photon imaging of auditory cortical activity was carried out 3-6 weeks later in anesthetized animals. During imaging, we presented noise bursts over a 0-30 dB ILD range, as well as monaurally to each ear, with an average binaural level of either 70 or 90 dB SPL. We found a patchy arrangement of binaural preference. Small neuronal clusters (50-60 µm²) showing a common preferred ILD were often found adjacent to clusters with different (even contralateral) ILD tuning. We did not, however, find evidence for binaural bands in the mouse AC. These results are in accordance with previous electrophysiology studies that propose a poorly ordered spatial representation of ILD in the AC at a large spatial scale. However, the dense spatial sampling of two-photon imaging demonstrated that locally clustered ILD preferences exist among neighboring auditory cortical neurons.
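As a small arithmetic aside, holding the average binaural level (ABL) fixed while varying the ILD simply splits the level difference symmetrically across the two ears; a minimal sketch, in which the sign convention (which ear a positive ILD favors) is an assumption for illustration:

    def ild_levels(abl_db, ild_db):
        # Left/right sound levels for a given average binaural level and
        # interaural level difference; positive ILD here means the left
        # ear is louder (sign convention assumed for illustration).
        return abl_db + ild_db / 2.0, abl_db - ild_db / 2.0

    # 0-30 dB ILD range around a 70 dB SPL average binaural level
    for ild in (0, 10, 20, 30):
        print(ild, ild_levels(70.0, ild))  # e.g., ILD 20 -> (80.0, 60.0) dB SPL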
ID: 126
Responses to natural sounds reveal the functional organization of human auditory cortex Sam Norman-Haignere, Josh McDermott, Nancy Kanwisher Massachusetts Institute of Technology, Cambridge, United States of America [email protected] Auditory cortex is critical to speech recognition, music perception, and the ability to infer useful information from acoustic scenes. However, the functional organization of auditory cortex remains poorly understood compared with visual cortex. This is in part because studies typically test only a small number of hypotheses about auditory cortical organization, limited in part by the constraints of using relatively modest numbers of stimuli. To overcome these limitations, we measured cortical responses with fMRI to a diverse collection of 165 individual natural sounds. We then took a data-driven approach to explore whether there was organization in the measured responses to these sounds. The response of each voxel to the 165 sounds was modeled as a weighted combination of an unknown number of canonical response profiles, each potentially representing the selectivity of an underlying neuronal sub-population. We used independent components analysis (ICA) to recover these components, and then explored the function of each component by correlating its response profile with acoustic measures and category labels (e.g. speech, music). The analysis revealed five components that collectively explained all of the replicable variance. These components had surprisingly distinctive and interpretable response profiles despite the lack of functional or anatomical constraints imposed by the analysis. The response profiles of two components were strongly correlated with acoustic measures of either spectral energy or pitch strength, and did not respond selectively to any of the categories tested. In contrast, two other components exhibited pronounced category-selectivity, one for speech and one for musical sounds, and weak correlations with our acoustic measures. A fifth component responded preferentially to environmental sounds other than speech or music. Projecting
these five components back into the brain revealed a clear anatomical pattern. The components with strong acoustic correlations explained response variation in regions in and around primary auditory cortex, while the category-selective components occupied distinct regions of non-primary auditory cortex. Collectively, these results indicate that pitch and speech are fundamental organizing dimensions of auditory cortex, and suggest the existence of an anatomically distinct neural pathway for processing music.
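A bare-bones sketch of the decomposition step described above (Python/scikit-learn); the random matrix stands in for the measured voxel responses, and the study's actual pipeline also involved voxel selection and reliability analyses not shown here:

    import numpy as np
    from sklearn.decomposition import FastICA

    # Stand-in for real data: responses of n_voxels voxels to 165 sounds
    rng = np.random.default_rng(0)
    data = rng.standard_normal((5000, 165))

    # Model each voxel's 165-sound response vector as a weighted
    # combination of a small number of canonical response profiles.
    ica = FastICA(n_components=5, random_state=0, max_iter=1000)
    profiles = ica.fit_transform(data.T)  # (165, 5): component response profiles
    voxel_weights = ica.mixing_           # (5000, 5): weights for projecting
                                          # the components back into the brain
    # Each profile would then be correlated with acoustic measures
    # (e.g., pitch strength) and category labels (e.g., speech, music).

ID: 127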
Diffusion MRI of the arcuate fasciculus in prelingually deaf patients Theresa Finkl1, Alfred Anwander2, Angela D. Friederici2, Johannes Gerber3, Alexander Mainka1, Dirk Muerbe1, Anja Hahne1 1University Hospital Dresden, Germany; 2Max-Planck-Institute for Human Cognitive and Brain Sciences Leipzig, Germany; 3University Hospital Dresden, Germany theresa.finkl@uniklinikum-dresden.de Prelingually deaf patients who receive a cochlear implant (CI) in adulthood generally develop only a limited ability to understand spoken language despite the ameliorated hearing conditions provided by the implant. Moreover, patients who have acquired some basic verbal utterances are hardly able to correct the abnormalities in their speech. As successful processing of speech on both levels requires not only an intact auditory pathway, but also an effectively operating language network, we investigated the relationship between prelingual deafness and the white matter anatomy of the arcuate fasciculus, a major language-associated tract, by means of dMRI (diffusion magnetic resonance imaging). Six prelingually deaf adults with bilateral hearing loss (mean age 33, range 26-39, 2 men) and six normal-hearing controls (mean age 31, range 25-42, 2 men) took part in the study. Each subject underwent one MR scanning session, in which a T1- and a diffusion-weighted data set were acquired. Subsequent tractography was carried out using region-of-interest analyses. Deterministic tractography revealed well-developed arcuate fasciculi in all subjects. Patients displayed higher values of mean diffusivity (MD) and lower fractional anisotropy (FA) in the left arcuate fasciculus compared to
the control group. In dMRI, FA is a measure of the directionality of diffusion and takes high values for strong directionalities (e.g. thick fiber bundles with pronounced myelination). MD describes the overall diffusion, being high in the absence of restricting boundaries (bundles of loose fibers). Connecting Broca’s and Wernicke’s areas, the arcuate fasciculus constitutes one of the principal pathways within the language network and is involved in speech production and perception. Although this pathway is well developed in prelingually deaf patients, high mean diffusivity in combination with low anisotropy indicates that its fibers are less organized and less myelinated than the fibers of the arcuate fasciculus in the control group. This can be attributed to auditory deprivation during the sensitive period of language acquisition in early childhood and provides additional information about the neuroanatomical prerequisites for hearing and speech rehabilitation of CI patients.
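For reference, MD and FA follow directly from the three eigenvalues of the diffusion tensor; a small Python sketch of the standard definitions, with illustrative eigenvalues:

    import numpy as np

    def md_fa(evals):
        # Mean diffusivity and fractional anisotropy from the three
        # diffusion tensor eigenvalues (standard definitions).
        l1, l2, l3 = evals
        md = (l1 + l2 + l3) / 3.0
        fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
                     / (l1**2 + l2**2 + l3**2))
        return md, fa

    # Strongly anisotropic diffusion (coherent, myelinated bundle): high FA
    print(md_fa((1.7e-3, 0.3e-3, 0.3e-3)))   # units: mm^2/s
    # Near-isotropic diffusion (loosely organized fibers): low FA, higher MD
    print(md_fa((1.1e-3, 1.0e-3, 0.9e-3)))

ID: 128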
Stimulus-‐specific adaptation to temporal gaps Bshara Awwad, Amit Yaron, Ana Polterovich, Israel Nelken Hebrew University of Jerusalem, Israel [email protected] The auditory system is capable of following rapid changes in sounds. The ability to detect short (1-‐10 ms) gaps in noise has been well-‐studied both in humans and in animal models. The gap detection test is clinically used to assess the temporal resolution of the auditory system, and is correlated with deficits in speech perception in humans. Stimulus-‐specific adaptation (SSA), the decrease in responses to a common stimulus that does not generalize, or only partially generalizes, to other stimuli, has been mainly investigated in the spectral domain of sound. Here we report the existence of SSA to gaps. We used oddball sequences composed of either 200 ms long noise bursts or noise bursts with a gap 100 ms after stimulus onset; the noise was either broadband or narrowband (… octaves). We recorded extracellular activity (local field potentials and multiunit activity) from the primary auditory cortex of both anesthetized and awake, freely
moving rats in response to such sequences. We found significant responses to gaps as short as 2 ms, matching behavioral acuity. The responses to gap stimuli when deviant were larger than the responses to the same stimuli when standard, in both anesthetized and freely moving rats. We conclude that SSA may be elicited not only for spectral, but also for temporal, stimulus features.
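The abstract does not state how SSA was quantified, but a common choice in this literature is the SSA index of Ulanovsky and colleagues (2003); a minimal Python sketch, with the spike counts invented for illustration:

    def common_ssa_index(d1, s1, d2, s2):
        # Common SSA index (CSI): d(i) and s(i) are the responses to
        # stimulus i when it is the deviant and when it is the standard.
        # CSI > 0 indicates stronger responses to deviants, i.e., SSA.
        return (d1 + d2 - s1 - s2) / (d1 + d2 + s1 + s2)

    # e.g., mean spike counts for the gap and no-gap stimuli in both roles
    print(common_ssa_index(d1=12.0, s1=7.0, d2=10.0, s2=6.5))  # ~0.24

ID: 129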
Cortical neurons of bats do not respond to all individual syllables in natural communication phrases Julio Hechavarria, Manfred Kössl Goethe-University Frankfurt am Main, Germany [email protected] Acoustic communication is widely used in the animal kingdom as a means of information exchange. Communication sounds carry information that has to be pieced together by the listener(s) to form meaningful “percepts”. The auditory cortex is regarded as the place where percepts are formed from the neuronal responses evoked by incoming sounds. However, the mechanisms by which cortical neurons cope with natural communication streams remain controversial. Using micro-wire arrays (16 penetrating electrodes organized in a 2x8 configuration), we investigated the responses of cortical neurons of short-tailed fruit bats (Carollia perspicillata) to natural communication call sequences from conspecifics. Calls were recorded from adult individuals while softly pinching the skin behind their necks. Calls produced under these conditions are defined as “distress calls”, and they are known to evoke exploratory behaviours in conspecifics, to activate the neuro-endocrine system, and to boost gene expression in the cortex. We observed that cortical neurons respond strongly to individual sound elements (i.e. syllables) in the distress call phrases when the syllables are played randomly and separated by 500 ms from one another. However, to our surprise, when communication phrases were played in their natural form, most neurons fired only in response to the first syllable in each phrase. The response to the following syllables was strongly suppressed. The latter suggests that, at least in bats, individual neurons are not able
to keep up with natural communication call streams. Local field potentials (LFPs) were analysed to study call-evoked responses at the level of local neuronal populations. A comparison between the LFP amplitudes evoked by randomly played syllables and by natural communication phrases revealed an attenuation of the LFP response to the natural streams. In spite of the observed response attenuation, we did observe that in most recording sites at least one of the tested communication streams “entrained” the LFP waveform, that is, the LFP waveform followed the amplitude envelope of the natural communication stream. Such LFP entrainment synchronizes activity across recording sites. Overall, our results suggest that tracking natural communication streams in the bat auditory cortex might depend less on the spiking activity of individual neurons, and more on the synchronized activity across neuronal populations. ID: 130
Impact of the extracellular matrix in auditory cortex on memory stability and learning flexibility Hartmut Niekisch1, Matthias Deliano1, Frank W. Ohl1,2, Renato Frischknecht1, Max Happel1,2 1Leibniz Institute for Neurobiology Magdeburg, Germany; 2Otto-von-Guericke University Magdeburg, Germany Hartmut.Niekisch@lin-magdeburg.de Remodeling of synaptic networks is an indispensable key event during learning and memory formation and re-consolidation throughout life. The extracellular matrix (ECM) has been considered to provide such stabilization of synaptic networks in the adult brain. Whether the ECM might thereby govern learning-related plasticity, life-long memory re-formation, and higher cognitive functions is largely unknown. Recent research from our lab has revealed a new role of the mature ECM in actively organizing the balance between structural stability and functional synaptic plasticity. We investigated the impact of local enzymatic removal of the ECM in the auditory cortex of adult Mongolian gerbils on auditory relearning behavior. Cortex-dependent auditory relearning was induced by contingency reversal within a frequency-modulated (FM) tone discrimination
shuttle-box task, which requires high behavioral flexibility. ECM removal immediately before the contingency change in the training paradigm improved relearning performance, but without generally impacting the retrieval of previously acquired memories. Hence, ECM removal opened short-term windows of enhanced activity-dependent reorganization, promoting complex forms of behavioral strategy change during learning (Happel et al., PNAS, 2014). Essentially, our results suggest a novel function of the cortical ECM as a potential regulatory switch to adjust the balance between stability and plasticity in the adult brain, which might also open new directions in applied neuroscience. We further found that dynamic changes of the ECM around synapses potentially regulate synaptic short-term plasticity by enhancing the synaptic exchange of postsynaptic receptors in vitro (Frischknecht et al., Nat Neurosci, 2009). Therefore, we continued to investigate the potential role of intrinsic, proteolytically induced ECM remodeling in promoting learning-related plasticity in the adult brain. Preliminary Western blot data quantifying the proteolytic cleavage of selected ECM proteins in different brain regions of naïve mice and of mice trained in FM-discrimination learning will be presented. ID: 131
Timing matters for abstract pitch processing Annekathrin Weise1, Sabine Grimm1,2, Nelson J. Trujillo-‐Barreto3, Erich Schröger1 1University of Leipzig, Germany; 2University of Barcelona, Spain; 3Ciudad Habana, Cuba akweise@uni-‐leipzig.de Central auditory functions can automatically extract abstract regularities from a dynamically changing soundscape. Evidence comes from an abstract pitch paradigm in which the 2nd tone of repeatedly presented pairs had a higher pitch than the 1st tone of the respective pair; absolute pitch values varied across pairs. 2nd tones that rarely violated the pitch relation (e.g. 2nd tone of lower pitch) elicited the Mismatch Negativity (MMN; the brain’s error signal to rule violations). We studied whether the timing between events, falling either within or outside a critical time window (~350 ms), impacts the extraction of abstract pitch
relations. Via an abstract pitch paradigm, we tested the MMN to rule-violating pitch relations in three conditions. In the Short condition, the tone duration (90 ms) and the stimulus onset asynchrony (SOA) between the tones forming a pair were short (110 ms). In the Long Gap and Long Tone conditions, the SOA was long (510 ms). In Long Gap, tone durations were identical to those in Short (90 ms), while the silent within-pair interval was prolonged by 400 ms. In Long Tone, the duration of the 1st tone was prolonged by 400 ms, while the silent interval was comparable to that in Short (20 ms). We found comparable frontocentral MMN amplitudes across conditions, indicating that pitch relations were extracted. Source analyses revealed MMN generators in the supratemporal cortex. Interestingly, they were located more anteriorly when the silent interval was long (Long Gap) rather than short (Short, Long Tone). Moreover, frontal generators were activated when SOAs were long (Long Gap, Long Tone). Thus, timing impacts how the system processes abstract pitch relations. ID: 133
Functional imaging of pitch perception in the auditory cortex of the cat Blake E. Butler, Amee J. Hall, Stephen G. Lomber University of Western Ontario, London, Canada [email protected] Pitch perception typically involves complex spectrotemporal processing in which harmonically related components of a complex sound are combined to create a single percept. Although the spectral and temporal cues to pitch are established at the cochlear level, in both human and non-human primates the pitch percept first emerges beyond the primary auditory cortex. Behavioural paradigms have demonstrated pitch sensitivity in a number of other animal models; however, much less is known about where this percept is formed in these species. Following a methodology first described by Brown and colleagues (2014), we used high-field functional imaging (7T fMRI) to locate the pitch centre in cat auditory cortex. Normal-hearing cats were presented with an iterated rippled noise (IRN) stimulus designed to elicit a 400 Hz pitch percept, and bandpass filtered to remove spectral content in the
region of resolvable harmonics. These same cats were also presented with a no-‐pitch version of this IRN stimulus (IRNo), which preserves the slowly varying spectrotemporal fluctuations of IRN, but removes the pitch-‐evoking temporal fine structure. Finally, a narrowband noise stimulus with energy confined to the same passband as the IRN stimuli was presented. Contrasts between the pitch-‐evoking stimulus (IRN) and the non-‐pitch-‐evoking stimuli (IRNo, noise) highlight the locus of pitch processing in cat auditory cortex. Furthermore, this work provides the basis for further electrophysiological studies in which we hope to identify single units that are responsive to a wide variety of pitch-‐evoking stimuli. ID: 134
Constraint-induced sound and music therapy for sudden sensorineural hearing loss Hidehiko Okamoto1,2, Munehisa Fukushima3, Henning Teismann2, Lothar Lagemann2, Tadashi Kitahara3,4, Hidenori Inohara4, Ryusuke Kakigi1, Christo Pantev2 1National Institute for Physiological Sciences Tokyo, Japan; 2University of Muenster, Germany; 3Osaka Rosai Hospital, Japan; 4Osaka University, Japan [email protected] Sudden sensorineural hearing loss (SSHL) is an idiopathic condition characterized by acute hearing loss. Based on several national surveys, an estimate of the SSHL incidence rate is around 30 cases per 100,000 people per year. The likelihood of hearing recovery strongly depends on both the severity of hearing loss at presentation and the time between SSHL incidence and the initial audiogram. Roughly speaking, one third of patients recover completely, another one third of patients recover partially, and the rest show no recovery. Even though the etiology of SSHL has been investigated intensively, knowledge and understanding of SSHL remain limited. We report here the development and evaluation of “constraint-induced sound therapy” (Okamoto et al., Sci Rep. 2014, 4:e3927), which is based on a well-established neuro-rehabilitation approach, especially for stroke patients (Taub et al., J Rehabil Res Dev, 1999, 36: 237-251), and an enriched acoustic environment (Norena and Eggermont, J
Neurosci, 2005, 25: 699-705). In the present study, we plugged (i.e. constrained) the canal of the healthy ear of SSHL patients and urged them to actively use the affected ear by listening to daily-life surrounding sounds and music that were presented through a headphone over the affected ear. Treatment outcome was evaluated by comparing the mean pure tone audiograms of two groups of SSHL patients: the CONTROL group (N = 31) received only the standard corticosteroid therapy, while the TARGET group (N = 22) additionally received the constraint-induced sound therapy. Moreover, by means of magnetoencephalography we measured the auditory evoked fields in six TARGET group patients. The TARGET group showed significantly better recovery of hearing function compared to the CONTROL group. Additionally, the auditory evoked fields elicited by monaural sound stimulation showed that the laterality indices of both auditory steady-state and N1m responses significantly increased over time. The constraint-induced sound therapy could have prevented maladaptive auditory cortex reorganization and appears to be an effective treatment option for SSHL. ID: 135
Human auditory cortex detects interaural time differences in high-frequency sound Nelli Salminen, Alessandro Altoè, Marko Takanen, Olli Santala, Ville Pulkki Aalto University, Finland [email protected] Sound sources are localized based on various acoustical cues, of which the most important is the interaural time difference (ITD). ITD is best detected from the fine structure of low-frequency sounds (< 1.3 kHz), but if extracted from the envelope, the usefulness of ITD extends to higher sound frequencies. Psychoacoustical studies show that human subjects can detect this envelope ITD cue with a resolution that makes it potentially relevant for sound source localization. However, the neural bases of envelope ITD detection have so far been addressed only in animal electrophysiology. Here, we performed a combined psychoacoustical and MEG study
aiming to evaluate the sensitivity of the human brain to envelope ITD and to identify the features of cortical activity that predict behavioral performance in envelope ITD detection. We found two types of sensitivity to envelope ITD in the human auditory cortex. First, the amplitude of the auditory cortical N1 response was larger for sounds with very large ITDs than for those with no ITD. This preference for long over short ITDs is consistent with binaural processing previously described in the lateral superior olive of experimental animals. Second, we found ITD-specific adaptation of the N1 response amplitude between left- and right-leading ITDs. Such tuning could potentially originate from the medial superior olive, which generates neural tuning to lateral ITDs in low-frequency sounds. Further, the ITD-specific adaptation occurred within the physiologically plausible range of delays. This suggests that the human cortex has neural sensitivity to envelope ITD with a resolution that could, in principle, serve the localization of real sound sources. The neural sensitivity found in MEG was consistent with behavioral performance in the psychoacoustical test. At the group level, neural sensitivity to envelope ITD was limited to the lowest modulation frequencies, at which all subjects could also detect ITD in the psychoacoustical experiment. At the individual level, the neural sensitivity predicted behavioral ITD detection thresholds and could thereby account for inter-individual differences in behavioral performance. In conclusion, the human auditory cortex supports the localization of high-frequency sounds based on ITD, and this cortical sensitivity is correlated with behavioral performance in ITD detection. ID: 136
Secondary auditory cortex and basolateral amygdala interactions are essential during remote fear memory recall: The importance of theta rhythms Marco Cambiaghi, Benedetto Sacchetti Università degli Studi di Torino, Italy [email protected] Negative experiences are quickly learned and long remembered. The secondary auditory
cortex (Te2) and the basolateral amygdala (BLA) are both involved in long-term frightening memories. Indeed, in auditory fear-conditioned rats, the secondary auditory cortex is essential for encoding the emotional valence acquired by auditory stimuli at remote time points. Brain oscillations, particularly the theta rhythm (4-12 Hz), seem to play a crucial role in memory coding and in connections with the amygdala. The present study examined the electrical activity of the secondary auditory cortex and of the basolateral amygdala during the recall of fear memories at one day and one month after fear conditioning. To this aim, we performed LFP recordings in Te2 and BLA of rats 24 hours and 30 days after the association of an acoustic conditioned stimulus (CS, tone) and an aversive unconditioned stimulus (US, electric shock). Power spectral analysis of Te2 activity during the recall of aversive memories showed strong changes in the theta band, at both 24 h and 30 days. In both conditions, low-theta (4-7 Hz) power increased during memory recall. In contrast, high-theta (7-12 Hz) power decreased at both recent and remote retrieval. Moreover, low-theta frequencies were found to be essential for Te2-BLA interconnections. Theta rhythm is thus essential for mnemonic processes in higher-order auditory cortex. Our study demonstrates the functional involvement of Te2 in mediating the expression of auditory fear memory at remote time points, and we showed that during the recall of those experiences the activity of Te2 and BLA is highly synchronized. ID: 137
Phase entrainment of EEG oscillations to high-‐level features of speech sound and its perceptual consequences Benedikt Zoefel1,2, Rufin VanRullen1,2 1Centre de Recherche Cerveau et Cognition, CNRS, France; 2Université Paul Sabatier Toulouse, France [email protected]‐tlse.fr Previous studies showed that neural oscillations entrain their phase to speech sound, a mechanism that is assumed to improve speech intelligibility by aligning the brain’s high and low excitability phases to
relevant and irrelevant events. However, in everyday speech sound, speech information (the amount of information that the listener can extract from the speech signal at a given moment) is always confounded with overall amplitude modulations: There is no speech information when spectral energy is absent; there are moments of silence or low spectral energy between successive phonemes. Consequently, neural phase entrainment could reflect a simple passive ‘following’ of changes in sound amplitude (a low-‐level process) and/or a true adaptation to the rhythm of speech information (a high-‐level process). We were interested in determining the existence of such high-‐level entrainment, and evaluating its perceptual consequences. To do so, we constructed novel speech/noise stimuli without systematic fluctuations in sound amplitude or spectral content, while keeping fluctuations in speech information and intelligibility. This construction made it possible to isolate, for the first time, the rhythmic patterns of speech information without concomitant changes in sound amplitude. We tested high-‐level phase entrainment in two experiments: In a psychophysical study (9 subjects), auditory clicks at threshold level were presented at random moments during our speech/noise snippets. In a second study, 12 subjects listened to our stimuli while electroencephalography (EEG) was recorded, again with clicks presented at random moments. In both studies, subjects indicated click detection by a button press. As a result, we found that theta-‐band (2-‐8Hz) oscillations indeed entrained to speech rhythm (as assessed by speech-‐EEG phase-‐locking) even when speech information was not accompanied by changes in sound amplitude or spectral content. Importantly, this high-‐level phase entrainment had perceptual consequences: In the psychophysical study, click detection was best at phases corresponding to maximal speech information and decreased continuously, with worst performance at phases of minimal speech information. In the EEG study, as expected, click detection depended on the entrained EEG phase just before the click. In summary, we show that neural oscillations adjust their phase to high-‐level features of speech sound, and
that this phase entrainment has perceptual consequences. ID: 138
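One conventional way to quantify the speech-EEG phase-locking used here is a phase-locking value between band-limited phases of the two signals; the filter order, band and sampling rate below are assumptions for illustration, not the authors' pipeline:

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_phase(x, fs, band=(2.0, 8.0)):
    # instantaneous theta-band phase via Butterworth filter + Hilbert transform
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

def phase_locking_value(eeg, speech_info, fs=250.0):
    # magnitude of the mean phase difference: 0 = no locking, 1 = perfect locking
    dphi = band_phase(eeg, fs) - band_phase(speech_info, fs)
    return np.abs(np.mean(np.exp(1j * dphi)))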
Encoding of voice “shape” and “texture” in the auditory cortex Marianne Latinus1,2, Frances Crabbe2, Pascal Belin1,2 1Aix Marseille Université, CNRS, France; 2University of Glasgow, United Kingdom marianne.latinus@univ-amu.fr How are individual voices represented in the auditory cortex? In a previous study, we demonstrated norm-based encoding of voices in a three-dimensional space. In the present study, we further investigate this hypothesis by independently manipulating the dimensions of the voice space. 16 participants, performing a pure-tone detection task, were scanned in a 3.0T Tim Trio scanner (Siemens) using an event-related design (TR=2s). Stimuli were 32 natural female and male voices as well as voices obtained by morphing each individual voice with the same-gender average voice with STRAIGHT. STRAIGHT allows independent morphing of “texture” (aperiodicity and spectral amplitude) and “shape” (F0 and formant frequencies) information; we created two sets of stimuli for which we morphed, relative to the average voice, either textural information (texture block) – keeping the parameters of shape unchanged – or shape information (shape block), with textural information unchanged. In each block, five stimuli were created for each individual/average voice pair: a caricature (150%), the original voice (100%), an anti-caricature (50%), the average voice (0%) and an anti-voice (-50%). We used the regions defined by an independent voice localizer as regions of interest to measure the brain activity induced by the different conditions in each experimental block. Results showed that activity in the temporal voice areas (TVA) correlated with morph level: activity decreased for voices closer to the mean, in particular for the texture block, confirming that textural information is essential in the representation of voices. Whole-brain analyses using the different levels of morphing (150%, 100%, 50%, 0% and -50%) as parametric modulators were run
independently for each block using a second-order polynomial expansion. In the texture block, we found a positive quadratic relation between brain activity and morph level in the right superior temporal cortex and a negative relation in the fusiform gyrus. In contrast, in the shape block, a positive quadratic relation was observed in the left inferior frontal gyrus. By independently manipulating shape and texture information we identified distinct brain regions involved in the processing of vocal shape and texture. Activation of the TVAs is highly dependent on textural information, suggesting that the representation of voices relies strongly on textural information. ID: 139
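For illustration, the second-order polynomial parametric modulation described above amounts to adding the morph level and its (orthogonalised) square as modulators of the stimulus regressor; the construction below is a sketch, not the authors' exact analysis setup:

import numpy as np

morph = np.array([1.5, 1.0, 0.5, 0.0, -0.5])  # 150% ... -50% morph levels
lin = morph - morph.mean()                    # first-order (linear) modulator
quad = morph ** 2 - (morph ** 2).mean()       # second-order term, mean-centred
quad -= lin * (quad @ lin) / (lin @ lin)      # orthogonalise against the linear term

A positive weight on the quadratic modulator then indicates activity that increases away from the average voice in both directions, the V-shaped pattern expected under norm-based coding.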
Coincidence detection mechanisms in auditory perceptual grouping of spectrotemporal cues Alex Brandmeyer, Jonas Obleser Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Germany [email protected] Auditory perception involves the analysis of spectrotemporally distributed sensory signals and their grouping into coherent representations. One biologically plausible model that has been used to account for various aspects of sensory processing in both audition and vision is the coincidence detector. Here we investigate whether a coincidence detection model can account for behavioral effects observed in an auditory perceptual grouping task. Acoustic textures consisting of densely layered frequency-modulated tone sweeps within a defined spectral bandwidth served as stimuli. Texture coherence can be manipulated by varying the proportion of sweeps with the same direction and speed, leading to a more or less salient sense of direction. Participants (N=11) completed a direction identification task (up/down) across a stimulus set that varied parametrically in both coherence and spectral center. A main effect of coherence was found, with performance improving as a function of increasing coherence. A coherence × spectral center interaction was also observed, in that stimuli with either high
or low spectral centers biased the perception of low-coherence stimuli. An encoding model was used in which a free parameter representing spectral resolution was parametrically varied in a range between 0.6 and 3.6 semitones. Coincidence detection was implemented as a discrete time-lagged process in which idealized auditory filter banks receive input from spectrally adjacent channels, allowing for rate estimates of local spectrotemporal motion. Model decisions were calculated as a signed mean of rate estimates. The free parameter was fit to individual data using a least-squares method. Model results captured the main effect of coherence along with the coherence × spectral center interaction effect. A high correlation (r = -0.77, p < .01) between individual task performance and the estimated spectral resolution parameter was also observed. Additional analyses explored the relationship between the present encoding model and models used to generate so-called auditory image representations. These results are consistent with accounts of auditory cortical processing in which spectrotemporal receptive field (STRF) tuning of neurons in primary and non-primary regions underlies the perception of spectrotemporal modulation patterns in complex sounds. They also serve as a starting point for neuroimaging research investigating the mechanisms underlying auditory perceptual grouping. ID: 140
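A toy version of the time-lagged coincidence detection described above: each channel's envelope is multiplied with its spectral neighbour at a short lag, and the signed difference of upward and downward coincidences estimates sweep direction. Channel count, lag and decision rule are assumptions, not the fitted model:

import numpy as np

def sweep_direction(env, lag=5):
    # env: (n_channels, n_time) filterbank envelopes, ordered low to high frequency
    up = env[1:, lag:] * env[:-1, :-lag]     # higher channel lags lower -> upward motion
    down = env[:-1, lag:] * env[1:, :-lag]   # lower channel lags higher -> downward motion
    return np.mean(up) - np.mean(down)       # signed mean: >0 up, <0 down

env = np.random.rand(30, 1000)               # placeholder texture envelopes
decision = "up" if sweep_direction(env) > 0 else "down"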
Enhancement of brain event-related potentials to speech sounds is associated with compensated reading skills in dyslexic children Kaisa Lohvansuu1, Jarmo A. Hämäläinen1, Annika Tanskanen1, Leena Ervast2, Heikki Lyytinen1, Paavo H.T. Leppänen1 1University of Jyväskylä, Finland; 2University of Oulu, Finland [email protected] We studied speech sound processing of dyslexic children with EEG-based brain event-related potentials using a high-density sensor array. We found enhanced brain responses to shortening of phonemic length in pseudo-words (/at:a/ vs. /ata/) in 30 dyslexic children compared to 58 typically reading control children and 51 typically reading children with familial risk for dyslexia. The enhanced brain responses were associated with better performance in phoneme length discrimination, as well as with reading and writing accuracy. Further analyses revealed that the brain responses of those children with dyslexia who had the largest responses originated from a more posterior area of the right temporal cortex compared to the responses of the other two groups. This suggests a compensatory mechanism in a sub-group of children with dyslexia involving brain areas usually activated by phonological information. ID: 141
Top-down modulation of cortical responses to voice and speech: Developmental changes from childhood to adulthood Milene Bonte, Anke Ley, Elia Formisano Maastricht University, The Netherlands [email protected] Human listeners are surprisingly efficient in selecting, grouping and processing relevant acoustic elements of a sound while ignoring other elements of the same sound and the possible interference of background noise. In adults, this processing has been shown to rely on neural mechanisms that enable flexible representations of the same sound depending on the current behavioral goal. Much less is known about how these processes change during development and reach their mature efficiency. Here we measured functional MRI responses while children (8-9 years, n=10), adolescents (14-15 years, n=13) and adults (~24 years, n=10) listened to the same speech sounds (vowels /a/, /i/ and /u/) spoken by different speakers (boy, girl, man) and performed a delayed-match-to-sample task on either speech sound or speaker identity. All participants performed well above chance level (50%) on the delayed-match-to-sample speaker and vowel tasks. Accuracy of speaker identification was comparable across age groups (Group: F(2,30)=0.37; n.s.), but girl/boy voices were more difficult to recognize than the adult voice (Stimulus: F(2,60)=18.0; p<0.001; mean (SD) % correct: boy 89.9 (6.9); girl 82.9 (12.2); man 96.7 (5.3)). Accuracy of vowel identification was lower in children than
in adolescents and adults (Group: F(2,30)=10.0; p<0.001; mean (SD) % correct: children 95.4 (3.6); adolescents 99.1 (1.5); adults 99.3 (1.0)), without significant stimulus differences. Across age groups, speech sounds evoked BOLD responses in a wide expanse of superior temporal cortex, in the inferior frontal cortex, the medial prefrontal cortex, and, especially during the vowel task, in the posterior temporal cortex. Task modulations of sound-evoked responses showed developmental changes that were most apparent when comparing children and adults, with intermediate effects in adolescents. Most interestingly, a cluster on the right superior temporal gyrus/sulcus, with strong voice selectivity in independent voice localizer data, showed an age-related increase in speaker-task-specific activity. This result suggests an incremental specialization for the active processing of voices in the right superior temporal cortex. An age-related increase in vowel-task-specific activity was observed in a smaller right posterior temporal cluster. Because the vowel task required matching of vowel sounds to letters, this result may relate to continued refinement of letter-speech sound associations with reading experience. ID: 142
Effect of hearing loss on auditory cortex responses to vocoded vocalizations Yonane Aushana1, Chloé Huetz1, Christian Lorenzi2, Jean-Marc Edeline1 1CNRS and Université Paris Sud, France; 2CNRS, Institut d'étude de la Cognition Bron, Ecole Normale Supérieure Paris, France chloe.huetz@u-psud.fr Many psychoacoustic studies have shown that patients with cochlear hearing loss have difficulty understanding speech in adverse listening conditions. Whether this stems from an abnormal representation of “temporal fine structure” (TFS) information at central stages of the auditory system is an important but still unanswered question. Here, we aim at determining (1) whether and how responses of primary ACx neurons are modified when the TFS is degraded in natural communication sounds, (2) whether background masking noise yields an additive degradation effect and (3) whether ACx neurons in hearing-impaired
animals (HI) show a similar degradation effect. Neuronal activity was collected in the ACx of urethane-anesthetized guinea pigs using arrays of 16 electrodes placed in the tonotopic field A1. Hearing loss was induced in half of the animals by a single 2-h exposure to a 120 dB, 5 kHz pure tone. Four different “whistle” conspecific vocalizations were presented in their normal version and then without TFS cues. The removal of TFS cues was performed by processing each vocalization with a tone vocoder: in each of the frequency bands, the original TFS was replaced by a sine tone at the central frequency of that band. We used tone vocoders with 38, 20 and 10 frequency bands. Normal and vocoded whistles (75 dB) were also presented against a steady noise masker set at 65 dB. Compared with the responses to normal whistles, the responses of ACx neurons to vocoded whistles were altered in terms of firing rate, spike timing and mutual information. For normal animals, the lower the number of frequency bands, the larger the decrease in all neuronal response indices (firing rate, spike-timing reliability, mutual information); the strongest effects were obtained with the 10-band vocoder. This was not the case for HI animals: information in temporal patterns was already strongly impacted with the 38-band vocoder, and masking noise amplified this effect. We show that the responses of primary auditory cortex neurons are severely altered by vocoding, both in terms of response strength and in terms of spike-timing precision. Moreover, we show that hearing loss modifies the way ACx neurons are impacted by vocoding and masking noise.
ID: 144
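A minimal sketch of the tone-vocoding step described above: within each analysis band the original fine structure is replaced by a sine carrier at the band's centre frequency, modulated by that band's Hilbert envelope. Band edges, filter order and sampling rate are illustrative assumptions:

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def tone_vocode(x, fs, n_bands=10, f_lo=100.0, f_hi=16000.0):
    # assumes fs > 2 * f_hi; log-spaced band edges across the analysis range
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    t = np.arange(len(x)) / fs
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        env = np.abs(hilbert(filtfilt(b, a, x)))               # band envelope (TFS discarded)
        out += env * np.sin(2 * np.pi * np.sqrt(lo * hi) * t)  # sine carrier at band centre
    return out

fs = 48000.0
vocoded = tone_vocode(np.random.randn(int(fs)), fs)  # placeholder 1-s sound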
Alteration of frequency tuning and temporal precision of auditory cortex neurons after categorization between rising vs. falling sweeps Quentin Gaucher, Chloé Huetz, Caroline Tith, Victor Adenis, Jean-Marc Edeline CNPS, CNRS; Univ. Paris-Sud, France quentin.gaucher@u-psud.fr Over the last years, many studies have described the receptive field reorganizations of auditory cortex neurons occurring when a particular sound frequency becomes significant. It has been claimed that the aversive or appetitive nature of the reinforcer influences the changes detected in auditory cortex during a learning task (David et al., PNAS, 2011). These data were obtained in awake, behaving animals during tasks where the animals' attention and strategies could differ between the aversive and appetitive conditions. To clarify this issue, we trained animals to categorize rising and falling sweeps in an aversive (experiment 1) and an appetitive (experiment 2) task. We analyzed the effects of training on auditory cortex neurons under anesthesia, i.e., when no difference in terms of attention and strategies can be suspected. Experiment 1: Rats were trained in a shuttle box to discriminate between rising sweeps (CS+, predicting a footshock) and falling sweeps (CS-). After reaching a threshold of 90% correct responses, they were trained to generalize to 3 different sets of rising and falling sweeps. Their performance decreased when a new pair of CS+/CS- was introduced but returned to 90% correct responses for each pair of CS+/CS- within 2 sessions. Experiment 2: Water-deprived guinea pigs were trained to discriminate between a CS+ (rising or falling sweep, water-rewarded) and a CS- (rising or falling sweep). Once the animals had learned the initial discrimination, 3 other sets of sweeps were introduced (same stimuli as in experiment 1). After both behavioral experiments (24-48h), spectro-temporal receptive fields (STRFs) of auditory cortex neurons were tested under urethane anesthesia. Results: In the aversive task, the STRFs of trained rats were larger both in terms of
bandwidth and of duration compared to control animals. The response strength within the STRFs was also higher. In the appetitive task, the STRFs of trained guinea pigs were larger in terms of response duration but not in terms of bandwidth compared to control animals; the response strength within the STRFs was also higher. Conclusions: These results suggest that, when tested under general anesthesia (without attentional effects), the consequences of an aversive or appetitive version of a categorization task on cortical receptive fields are not fundamentally different. Thus, the differences previously reported could be the consequence of different animal strategies or different attentional loads. ID: 145
Diversity in expression of calcium-binding proteins in the auditory forebrain of echolocating bats Julia Heyd1, Manfred Kössl2, Cornelia Voss2, Emanuel C. Mora3, Silvio Macias3, Marianne Vater1 1University Potsdam, Germany; 2Goethe-University Frankfurt am Main, Germany; 3University of Havana, Cuba vater@uni-potsdam.de We studied the distribution of immunoreactivity to antibodies directed against the calcium-binding proteins (CaBPs) parvalbumin, calbindin and calretinin in the medial geniculate body (MGB) and auditory cortex (AC) of two bat species that differ in their sonar systems. The insectivorous mustached bat (Pteronotus parnellii) uses Doppler-sensitive sonar for general orientation, prey detection and identification, whereas the short-tailed fruit bat (Carollia perspicillata) employs wide-band sonar mainly for general orientation. The auditory cortex of both species contains chronotopically organized fields. In other mammals, these CaBPs are markers for functionally distinct lemniscal and nonlemniscal pathways: Parvalbumin is strongly expressed in the tonotopically organized core regions (ventral division of MGB and its target, the primary AC), whereas calbindin and calretinin preferentially or exclusively label the “secondary” or belt regions of the thalamus (dorsal and medial
divisions of MGB) that project more diffusely to primary and secondary cortical areas. The parvalbumin labeling patterns in the MGB differed strikingly between the two species. In the short-tailed fruit bat, parvalbumin-immunoreactive (-ir) somata were confined to the suprageniculate. In contrast, in the mustached bat, parvalbumin-ir somata additionally occurred throughout the ventral, medial and most of the dorsal division. In both species, abundant somatic labeling with calbindin and calretinin antibodies was found throughout the MGB except for the suprageniculate, a pattern which is unique among mammals. CaBP antibodies labeled distinct subpopulations of nonpyramidal neurons in the AC of both species. There was no evidence for region-specific differences in the distribution of labeled somata and neuropil (primary AC vs. chronotopic fields), but there were species-specific differences in the laminar distribution patterns of labeled somata. The labeling patterns indicate a specialized status of the suprageniculate in bats and furthermore argue for a hypertrophied and specialized core system involved in the analysis of echolocation signals. Supported by: DFG and Potsdam graduate school
ID: 146
Developing the fundament for language - auditory discriminative abilities of congenitally deaf children in the first months after cochlear implantation Niki Katerina Vavatzanidis1,2, Dirk Mürbe2, Anja Hahne2 1Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Germany; 2University Hospital Dresden, Germany niki.vavatzanidis@uniklinikum-dresden.de Background: Congenitally deaf and severely hearing-impaired children can gain access to hearing by receiving a cochlear implant – a neuroprosthesis that directly stimulates the auditory nerve. If implantation occurs at a young age (<4 years), chances are good that normal oral speech will be acquired, despite the prolonged absence of any auditory stimulation and the non-natural input. Understanding what infants actually hear with the implant in the critical age of language
acquisition would help us understand a) the auditory system and its plasticity after an input-deprived period, and b) how language acquisition evolves if it starts considerably later than normal due to the absence of sensory input. Critically, a lack of sensitivity to basic auditory cues such as vowel length or syllable stress has been shown to co-occur with later language impairment. We focused on vowel length as one of the most basic but linguistically relevant cues, since in German, for example, it is both semantically relevant and a marker of syllable stress, which in turn is relevant for speech segmentation and thus for language acquisition. Methods: 16 early-implanted congenitally deaf children (age at implant activation: 0;11-3;9y, mean: 1;6y – time of implant use: 0-8 months) were tested repeatedly electrophysiologically: 1) before implantation, 2) after first fitting and after 3) two, 4) four, 5) six and 6) eight months of implant use. The syllables /ba/ and /ba:/ were presented in a classical oddball paradigm. A control group, matched to the implanted children for gender and age, was measured after 4 months of implant use. Results: 2 months after the first hearing experience with the implant, the ERP to the deviant long syllable differed significantly from that to the standard syllable (p=0.02), with the effect increasing and stabilizing with duration of implant use. No such differentiation was detectable in the implanted children pre-operatively or at the time of first fitting. After four months of implant use, ERPs reached the levels of the control group. Conclusion: Whereas directly after first activation of the implant there is no sign of discrimination between long and short syllables, a first response can be seen after 2 months of hearing experience, becoming more robust with increasing hearing experience. Already after four months the discriminative response of implanted children resembles that of age peers. Thus one of the fundaments for further language acquisition is laid after a short period of time.
ID: 147
Cortical oscillations and spiking activity associated with Artificial Grammar learning in the monkey auditory cortex Yukiko Kikuchi, Adam Attaheri, Alice Milne, Benjamin Wilson, Christopher I. Petkov Newcastle University, United Kingdom [email protected] Artificial Grammars (AG) can be designed to emulate certain aspects of language, such as the structural relationship between words in a sentence. Towards developing a primate model system for studying such processes at the neuronal level, we obtained evidence that monkeys can learn certain relationships in sequences of nonsense words generated from an auditory AG (Wilson et al., 2013). Here, we ask how monkey auditory cortical neurons evaluate the within-word acoustics and/or between-word sequencing relationships, and whether these aspects engage theta and gamma oscillations, which are critical for speech processing in human auditory cortex (e.g., Giraud & Poeppel, 2012). We recorded local-field potentials (LFPs) and single-unit activity (SUA) from 4 fMRI-localised auditory core (A1 & R) and lateral belt (ML & AL) subfields in two macaques (124 sites). During each recording session, the monkeys were first habituated to exemplary sequences generated by the AG. We then recorded neuronal activity in response to identical nonsense words, either in the context of a sequence that followed the AG structure (‘correct’) or one that violated its structure (‘violation’). In response to nonsense words, the LFP power significantly increased in a broad range of frequency bands (4-100 Hz), including theta (4-10 Hz) and low (30-50 Hz) and high (50-100 Hz) gamma frequencies. We also observed a consistent increase in inter-trial phase coherence, particularly in the theta band. Theta phase was associated with gamma power modulations in response to the nonsense words in the correct or violation sequences, respectively, in 42% vs. 39% of sites. Moreover, a substantial proportion of the LFP sites showed differential responses to the nonsense words depending on whether the nonsense word was in the context of a ‘correct’ or ‘violation’ sequence, in the theta (35/124 sites), low gamma (37/124) and high gamma (25/124) bands. We provide evidence that monkey
auditory neuronal responses, including nested theta and gamma oscillations, are associated with both the processing of nonsense words and the relationships between the words, as governed by an Artificial Grammar. These nonhuman primate results likely reflect domain-general, evolutionarily conserved neuronal processes, rather than those that are language-specific in humans. Support: Wellcome Trust New Investigator Award (CIP; WT102961MA). ID: 148
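The theta-gamma nesting reported here is often quantified with a phase-amplitude modulation index (mean vector length; Canolty et al., 2006); the bands follow the abstract, while filter settings and sampling rate are assumptions:

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi):
    b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(lfp, fs=1000.0):
    phase = np.angle(hilbert(bandpass(lfp, fs, 4, 10)))   # theta phase
    amp = np.abs(hilbert(bandpass(lfp, fs, 50, 100)))     # high-gamma amplitude
    return np.abs(np.mean(amp * np.exp(1j * phase)))      # coupling strength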
BDNF deletion in the cochlea/lower brainstem leads to cortical deficits over age Sze Chim Lee1, Dario Campanelli1, Ksenya Varakina1, Dan Bing1, Annalisa Zuccotti1, Wibke Singer1, Lukas Rüttiger1, Thomas Schimmang2, Marlies Knipper1 1University of Tuebingen, Germany; 2Universidad de Valladolid y Consejo Superior de Investigaciones Científicas, Spain [email protected] Tissue-specific deletion of brain-derived neurotrophic factor (BDNF) in the whole cochlea, dorsal cochlear nucleus, and inferior colliculus was found to be preventive against loss of auditory brainstem response (ABR) thresholds, ABR wave I amplitudes, and inner hair cell (IHC) synaptic ribbons after exposure to traumatizing sound (Zuccotti et al., 2012). The present study aimed to assess whether the deletion of BDNF in the cochlea/lower brainstem alters the vulnerability of hearing, sound processing, and cortical plasticity over age. We compared auditory function using auditory brainstem response (ABR) and distortion product otoacoustic emission (DPOAE) measurements in young and aged conditional BDNF Pax2 KO mice. We also analyzed the influence of acoustic noise exposure on hearing loss in young and aged animals. We furthermore investigated plasticity-dependent genes and patterns of perisomatic disinhibition in the inferior colliculus and the auditory cortex of the KO mice and the respective age-matched controls over age. Analyses of tissue-specific BDNF KO mice over age showed profound differences in auditory
function and noise vulnerability between KO mice and the controls. We discuss the results in the context of a differential role of BDNF for bottom-‐up / top-‐down circuits that may prevent vulnerability during aging. ID: 149
Gap detection thresholds are determined by oscillatory activity in the auditory cortex Alina Baltus1,2, Christoph S Herrmann1,2,3 1Carl von Ossietzky University Oldenburg, Germany; 2Coordinated research project SFB-TRR 31 (“The active auditory system”), German Research Foundation, Germany; 3Research Center Neurosensory Science, Carl von Ossietzky University Oldenburg, Germany alina.baltus@uni-oldenburg.de It has been proposed that auditory temporal resolution (ATR) is related to oscillatory brain activity (Giraud & Poeppel, 2012). As a behavioral measure of ATR, gap detection (GD) thresholds obtained in a between-channel GD task are thought to reflect the limit of ATR. Between-channel GD tasks are more difficult than within-channel GD tasks and are therefore considered to require processing in areas of the cerebral cortex, i.e., auditory cortex (Phillips et al., 1997). Further evidence from single-unit recordings in monkey auditory cortex (Malone et al., 2010) supports the idea that neural oscillations in the auditory cortex underlie the ATR reflected in observed between-channel GD thresholds. In our first experiment, we estimated between-channel GD thresholds as a measure of auditory temporal resolution (ATR) with a 3-down/1-up staircase procedure. The obtained GD thresholds lie on the order of tens of milliseconds, which corresponds to frequencies in the gamma range. Electrophysiological resonance behavior in the gamma range suggests a neuronal generator mechanism that determines the resonance behavior of the cerebral cortex (Zaehle et al., 2010). Listening to amplitude-modulated (AM) tones triggers an auditory steady-state response (ASSR) with a spectral peak at the AM frequency. In our second experiment we varied the AM frequency in the gamma range to obtain an ASSR curve as a function of resonance behavior. Individuals' ASSR curves showed a non-linear response behavior with a clear peak in the gamma range, which can be
interpreted as the brain's preferred frequency. To investigate the relationship between ATR and resonance frequency we compared individuals' between-channel GD thresholds, as an expression of ATR, with individual peaks in the ASSR, as a measure of each individual's resonance frequency of the auditory cortex. Results of 15 individuals from the first and second experiments revealed a significant negative relationship (Spearman: r = -0.46, p = 0.04). Therefore, we conclude that a higher preferred oscillation frequency facilitates faster processing of auditory stimuli and leads to shorter GD thresholds. These findings suggest that ATR is determined by oscillatory activity in the auditory cortex and is reflected in the ability to detect gaps. ID: 150
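For reference, the 3-down/1-up rule used here converges on roughly the 79%-correct point of the psychometric function (Levitt, 1971); the sketch below is a generic implementation with assumed start value, step size and stopping rule, not the authors' exact procedure:

def staircase(respond, gap=50.0, step=2.0, n_reversals=8):
    # respond(gap_ms) -> True if the listener detected the gap on this trial
    correct, direction, reversal_gaps = 0, 0, []
    while len(reversal_gaps) < n_reversals:
        if respond(gap):
            correct += 1
            if correct < 3:
                continue                  # need 3 consecutive correct to step down
            correct, move = 0, -1         # 3 correct -> smaller (harder) gap
        else:
            correct, move = 0, +1         # any miss -> larger (easier) gap
        if direction != 0 and move != direction:
            reversal_gaps.append(gap)     # direction change = reversal
        direction = move
        gap = max(gap + move * step, 1.0)
    return sum(reversal_gaps[-6:]) / 6    # threshold: mean of the last 6 reversals

threshold = staircase(lambda gap_ms: gap_ms > 12.0)  # toy observer, converges near 13 ms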
Investigating categorization selectivity in the auditory cortex with high spatial resolution fMRI Scott A. Love1,2, Marianne Latinus1,2, Pascal Belin1,2 1Institut de Neurosciences de la Timone, Aix Marseille Université & CNRS, France; 2Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, United Kingdom [email protected] During voice categorization the brain distinguishes between vocal and non-vocal sounds. In human fMRI, greater activity to vocal than to non-vocal sounds is seen in the upper bank of the superior temporal sulcus, and these functional areas have been termed the Temporal Voice Areas (TVAs; Belin et al., 2000); similar functional regions exist in the macaque and dog (Petkov et al., 2008; Andics et al., 2014). TVAs are identified using blocks of fairly heterogeneous stimuli, large voxels, spatial smoothing and group statistics. The purpose of the current study was twofold: 1) to understand the fine-scale functional structure of the TVA with high-resolution functional imaging (HR-fMRI), and 2) to examine the selectivity profile of the TVA with more homogeneous categories than voice vs. non-voice. To inform positioning of the HR-fMRI slices of the main experiment, participants (N=10) were scanned (Siemens 3T) with a standard voice localiser (voxel size 3 mm³). Pseudo real-time analysis identified voxels with a greater response to vocal than
non-vocal sounds. The HR-fMRI experiment included scanning 14 high-resolution slices (1.2 × 1.2 × 1.8 mm) positioned around regions of the temporal cortex significant in each participant's voice localiser. Participants listened to 10-s blocks from 6 categories (native speech, non-native speech, human vocal non-speech, animal vocal, human action and environmental sounds) while performing a 4-alternative forced-choice categorisation task (human voice, animal sound, environmental sound and human action). HR-fMRI slices were preprocessed without normalization or spatial smoothing. Pairwise contrast images between each category and every other were generated, and for each category and voxel the number of significant pairwise comparisons was calculated to derive a selectivity index (max = 5). On average, 890 voxels had a selectivity index greater than 1, i.e., discriminated between at least two categories. Selectivity of 5 was essentially observed for native speech: 3.3% of voxels. Selectivity of 4 was observed in 28% and 17% of voxels for native and foreign speech respectively, with 37% overlap. Using HR-fMRI and more homogeneous categories, we did not uncover areas of selectivity for non-vocal sounds that could have been hidden by more numerous vocal-selective neurons within a larger voxel. Selectivity was greatest for both native and non-native speech, followed by human vocal stimuli, animal vocal stimuli and lastly human action and environmental sounds. ID: 151
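A sketch of the per-voxel selectivity index described above: for each category, count how many of its five pairwise contrasts are significant. Simple two-sample t-tests stand in for the actual contrast statistics, and the data layout is assumed:

import numpy as np
from scipy.stats import ttest_ind

def selectivity_index(betas, alpha=0.05):
    # betas: dict mapping category -> (n_blocks, n_voxels) response estimates
    cats = list(betas)
    n_vox = next(iter(betas.values())).shape[1]
    index = {c: np.zeros(n_vox, dtype=int) for c in cats}
    for c in cats:
        for other in cats:
            if other != c:
                t, p = ttest_ind(betas[c], betas[other], axis=0)
                # count contrasts where this category responds significantly more
                index[c] += ((p < alpha) & (t > 0)).astype(int)  # max = 5 with 6 categories
    return index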
Sensorimotor predictive coding in the auditory cortex during vocal production in the macaque monkey Makoto Fukushima1, Matthew Mullarkey1, Alexandra Doyle1, Richard Saunders1, Naotaka Fujii2, Bruno Averbeck1, Mortimer Mishkin1 1National Institutes of Health, Bethesda, United States of America; 2RIKEN Wako, Brain Science Institute Saitama, Japan [email protected] During vocal production, an individual’s own voice is perceived without being confused with sounds produced by external sources. To achieve normal perception of self-‐generated sounds, the auditory cortex must be able to differentiate self-‐generated sounds from sounds produced externally, presumably by
integrating corollary discharges from the motor system. Previous studies have shown that the primary auditory cortex responds to mismatch between expected and actual auditory feedback during vocal production, but the coding property of this mismatch signal and its underlying cortical network interaction are not well understood. We trained a rhesus monkey to vocalize Coo calls for water rewards with/without loud background white-‐noise playback. Then we recorded subdural field potentials with a chronically implanted electrocorticographic (ECoG) array. This ECoG array consisted of 256 recording sites for bipolar recording at 128 locations that included the medial wall, its dorsal and ventral lateral surfaces, and the supratemporal plane (STP) in the lateral sulcus. We found that the two most caudal sites in STP showed robust increases in power in the lower gamma band (30-‐70 Hz) after call onset in the presence of noise, whereas the gamma band power in these sites decreased after call onset in the absence of noise. An ANOVA indicated that this low-‐gamma-‐power modulation was significantly explained by the difference in the auditory feedback, and not by the difference in the amplitude of calls under the two conditions (i.e. the Lombard effect). Furthermore, we were able to decode the fundamental frequency of Coo calls produced in noise from the gamma power with a cross-‐validated linear regression model. These results suggest that the gamma-‐band power in primary auditory cortex carries information about the mismatch in spectral content of expected and actual auditory feedback. Interestingly, we did not find this mismatch activity in the higher-‐order auditory cortex on the rostral STP, suggesting that it was not relayed from higher-‐order to primary auditory cortex. We also found a robust increase of gamma-‐band power in primary motor cortex, the supplementary motor area, and medial prefrontal cortex. This increase generally started before the onset of the call, and thus this activity could encode motor commands associated with vocal production. These motor areas could also be potential cortical sources of the mismatch signal found in the primary auditory cortex.
ID: 152
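The decoding analysis reported above can be sketched as cross-validated linear regression from site-wise gamma power to call F0; the shapes, placeholder data and choice of ordinary least squares are assumptions:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

gamma_power = np.random.rand(200, 2)    # placeholder: calls x caudal STP sites
f0 = np.random.uniform(300, 600, 200)   # placeholder fundamental frequencies (Hz)

f0_hat = cross_val_predict(LinearRegression(), gamma_power, f0, cv=10)
r = np.corrcoef(f0, f0_hat)[0, 1]       # decoding accuracy as a correlation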
Neural processing of voices in autism spectrum disorder Stefanie Schelinski1, Kamila Borowiak1, Katharina von Kriegstein1,2 1Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Germany; 2Humboldt University of Berlin, Germany [email protected] Hearing another person talking provides information about speaker-specific characteristics like the speaker's identity. Brain areas responding to human voice sounds have been identified along the superior temporal sulcus (STS), with speaker identity being predominantly processed in the right STS. In autism spectrum disorders (ASD) the role of speaker-specific characteristics (i.e. speaker identity recognition) in processing voices is unclear. Here, we systematically investigate neural mechanisms of voice processing in ASD using two established approaches to localise voice-sensitive brain regions. Sixteen adults with high-functioning ASD and sixteen typically developed controls (age, gender, and IQ matched) participated in two functional magnetic resonance imaging (fMRI) experiments. In experiment 1, participants passively listened to blocks of vocal (speech and non-speech) and non-vocal sounds (e.g. musical instruments, nature, animals). In experiment 2, participants performed speaker identity recognition and speech recognition tasks. In this experiment we presented blocks of two-word sentences spoken by three speakers. Participants decided whether a speaker matched the identity of a target speaker (speaker identity task) or whether the content of a sentence matched the content of a target sentence (speech task). Critically, blocks for both tasks contained exactly the same set of stimuli; only the task instructions differed. In experiment 1 we found greater voice-sensitive blood-oxygenation-level-dependent (BOLD) responses along the bilateral STS in the ASD as well as in the control group (p < .05 family-wise error (FWE) corrected). In contrast, in experiment 2 we found a voice-sensitive cluster of enhanced BOLD response in the right middle/posterior STS that was greater in controls compared to the ASD group (p < .05 FWE, small volume
corrected for the right STS). The findings indicate that in high-‐functioning ASD neural processing of speaker-‐specific information (i.e. voice identity recognition) is altered whereas the more general processing of human voice sounds including speech is within the normal range. Our findings contrast previous evidence that voice sensitive neural responses along the STS are absent in ASD. Altered functioning of voice identity processing that is distinguishable from speech processing might be a correlate of fundamental neural mechanisms that underlie communication deficits in ASD. ID: 153
Decoding sound location from auditory cortex during a relative localisation task Katherine C. Wood, Stephen Town, Huriye Atilgan, Gareth P. Jones, Jennifer K. Bizley University College London, United Kingdom [email protected] Many studies have investigated the ability of human listeners to localise sounds in space to an absolute location (e.g. Stevens and Newman, 1936). Other studies have measured the minimum discriminable difference in spatial location that a listener can reliably discern: the minimum audible angle (Mills, 1958). However, very few studies have investigated relative sound localisation, i.e. reporting the relative position of two sequentially presented sources. Determining the relative location of two sound sources or the direction of movement of a source are ethologically relevant tasks. Here we report multi-unit activity from the auditory cortex of ferrets performing a relative localisation task. Ferrets were trained in a positively conditioned 2AFC task to report whether a target sound originated from the left or right of a preceding reference. In standard testing, the target and reference stimuli were 150 ms noise bursts separated by a 10 ms gap. The reference was presented from one of 6 locations in the frontal 180° and the target was presented from a speaker 30° to the left or right. We also presented low-pass (<1 kHz), band-pass (3-5 kHz) and shortened stimuli (100 and 50 ms), and stimuli with a 50 ms interval. We recorded using 32 individually moveable tungsten electrodes, in a 4 × 4 array on each side of the head.
We recorded a total of 1284 unit recordings showing sound-evoked activity (p<0.05, t-test of mean firing rate in the 50 ms before and after stimulus onset) from 207 unique recording sites in two ferrets. Of these, 34% and 26% of recordings from ferret 1 and ferret 2, respectively, showed significant tuning to the location of the reference sound (p<0.05, ANOVA of firing rate during reference presentation against reference location, with post-hoc analysis of tuned locations). Overall, 39% of unit recordings were tuned to contralateral space, 10% to midline locations, and 51% to ipsilateral space. Preliminary ROC analysis of the target-evoked spike count showed that a small fraction of the tuned unit recordings had a significant choice probability (16%) and a similar fraction (16%) reliably reported the direction of the stimulus. On-going analysis is exploring the contribution of ILD and ITD cues to spatial tuning through the use of band-pass stimuli, and relating other response measures, such as cortical location and frequency tuning, to spatial tuning and the likelihood of observing a significant choice probability. ID: 154
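A choice probability of the kind computed here is the area under the ROC curve separating target-evoked spike counts by the animal's choice (equivalently, a normalised Mann-Whitney U statistic); the data below are placeholders:

import numpy as np
from sklearn.metrics import roc_auc_score

choices = np.repeat([0, 1], 60)             # 0 = 'left', 1 = 'right' choice trials
spikes = np.random.poisson(5, size=120)     # placeholder target-evoked spike counts

cp = roc_auc_score(choices, spikes)         # 0.5 = no choice-related signal
# significance is typically assessed against a trial-shuffled (permutation) null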
A task of selective attention to sound in rats Elena Andreeva, Wolfger von der Behrens University & ETH Zurich, Switzerland [email protected] How does the cortical processing of behaviorally relevant, attended sounds differ from that of irrelevant, distracting sounds? Although changes in neural firing rate, synchronization, variability, and receptive field plasticity have all been observed with attention, the laminar circuitry underlying these changes remains unexplored. We aim to investigate how attention operates on the level of the cortical microcircuit by assessing layer-‐specific changes in rat auditory cortex that occur as the animal engages in a task of selective auditory attention. To this end, we trained rats to respond to a 20 dB amplitude drop in a sequence of pure-‐tone pips of a target frequency (e.g., 6 kHz) while an interleaved pip sequence of a distractor frequency (e.g., 16 kHz) was presented from a
speaker on the opposite side. The animal, head-fixed, reported the side of the target frequency by licking sugar water from a spout on the corresponding side within a short time window after the amplitude drop. Rats learned to localize the side of the target stimulus with an accuracy of 75% when it was presented alone and ~65% when it was presented together with a distractor of the same volume. This is the first demonstration of successful pure-tone localization in rodents that are head-fixed and thus unable to facilitate localization by orienting towards the source of the sound. To investigate the layer-specific effects of attention in differently-tuned regions of auditory cortex, trained animals were implanted with four-shank silicon probe arrays, with eight recording sites per shank allowing us to sample multi-unit and LFP activity across all cortical layers. A great heterogeneity of neural responses was recorded in the animals during behavior. Neurons separated by only 200 µm in cortical depth varied by as much as two octaves in frequency tuning and 40 dB in amplitude selectivity. Responses to interleaved target-distractor pip sequences revealed tuning-related differences in adaptation. We have established a task for directing the animal's feature-selective attention towards either of two competing auditory stimuli. By switching the target frequency within a session, the animal's internal state can be altered while keeping the external stimulation constant, an important prerequisite for our ongoing investigation of the neural basis of attention. ID: 155
Modified representations in auditory, visual and somatosensory cortices following deafness Carmen Wong1, Andrej Kral2, Stephen G. Lomber1 1The University of Western Ontario, London, Canada; 2Hannover Medical University, Germany [email protected] Sensory deprivation can alter the anatomical and functional development of cortical structures normally allocated to processing the lost sensory modality. For example, both volumetric reductions to auditory cortex and reallocation of auditory areas to visual or somatosensory processing have been reported
in deaf subjects. While changes to a sensory system following the loss of its peripheral input have been investigated extensively, studies focusing on cortical changes in the remaining sensory systems are more limited. To examine the influence of acoustic experience on sensory cortex organization, areas within auditory, visual, and somatosensory cortex were examined in hearing and congenitally deaf cats. Cerebral cytoarchitecture was visualized with the monoclonal antibody SMI-32, a marker of neurofilament proteins used to demarcate sensory cortical areas in many species. Auditory, visual, and somatosensory cortical areas were delineated and their volumes quantified. In deaf cats, anterior auditory areas were significantly reduced in volume, resulting in an overall decrease in the total volume of auditory cortex. Volumetric reductions in anterior auditory areas were complemented by significant expansions in neighbouring somatosensory regions. Although the total visual cortex volume did not differ between hearing and deaf cats, visual areas were not impervious to deafness, with expansions of areas 17 and 18 and reductions of ventral visual areas. As the total volume of cortex examined did not differ between our hearing and deaf animals, acoustic deprivation appeared to specifically affect the volumetric proportions allocated to auditory, visual and somatosensory cortices. Overall this study demonstrates the importance of sensory input in establishing cortical representations of multiple sensory modalities, and the anatomical consequences of early sensory loss. ID: 156
Direct mapping of the cortical tinnitus network Phillip E Gander1, William Sedley2, Sukhbinder Kumar2,3, Hiroyuki Oya1, Christopher K Kovach1, Kirill V Nourski1, Hiroto Kawasaki1, Matthew A Howard1, Timothy D Griffiths1,2,3 1University of Iowa, United States of America; 2Newcastle University, United Kingdom; 3University College London, United Kingdom phillip-[email protected] Tinnitus occurs when peripheral hearing damage leads to secondary changes in ongoing brain activity. These central mechanisms are poorly understood, partly because
experimental evidence is mostly indirect, meaning it does not reflect the real-‐time perception of tinnitus, and/or it does not provide a direct measure of neural activity. Therefore it has so far been impossible to map out the anatomy and physiology of the brain networks responsible for causing tinnitus without relying heavily on speculation. Nonetheless, testable hypotheses have recently been proposed about the possible architecture of a ‘tinnitus core’ network, defined as the minimum neural network that must be active in order for tinnitus to be perceived. Here we test this hypothesis in a human neurosurgical subject, with typical bilateral tonal tinnitus and high-‐frequency hearing loss, who had an extensive array of electrocorticography and depth electrodes placed for the localization of epilepsy. Tinnitus loudness was modulated with residual inhibition using noise, and quantified with real-‐time ratings. We found: 1) Suppression of tinnitus correlated with widespread reductions in delta (1-‐4 Hz) oscillatory power throughout most of auditory cortex, and large parts of non-‐auditory cortex in temporal, parietal, limbic and motor areas. These areas also showed changes in inter-‐regional delta phase coherence with tinnitus suppression, and we interpret this as demonstrating a ‘tinnitus driving’ network, propagating the thalamic delta rhythm into global networks relevant to perception, emotion and cognition. 2) Theta (4-‐8 Hz) and alpha (8-‐12 Hz) power was similarly suppressed in most of these areas, except in areas linked to auditory memory (mesial temporal lobe structures and inferior parietal cortex) where it increased. We interpret these discrete areas of theta/alpha increase as delineating a ‘tinnitus memory’ network. 3) High beta (20-‐28 Hz) and gamma (28-‐144 Hz) power increased, during tinnitus suppression, throughout auditory cortex and in posterior temporal, inferior parietal, sensorimotor and parahippocampal cortex. We propose these areas constitute a ‘tinnitus perception’ network, representing changes to the ongoing percept. 4) Cross-‐frequency coupling changes accompanied tinnitus suppression in Heschl’s gyrus, superior temporal gyrus, parahippocampal cortex and inferior parietal cortex, which we propose are the sites and
mechanisms of interaction between the three sub-‐networks described. ID: 157
Combined rate and temporal encodings yield stable envelope processing in behaving monkey auditory cortex Roohollah Massoudi1, Marc van Wanrooij1,2, Huib Versnel3, John van Opstal1 1Radboud University Nijmegen, The Netherlands; 2Radboud University Nijmegen Medical Centre, The Netherlands; 3University Medical Center Utrecht, The Netherlands [email protected] Temporal envelope processing of complex sounds is a major function of auditory cortex (AC), and crucial for speech intelligibility. Although neural sensitivity to amplitude-modulated (AM) noises has been studied in AC, and some studies have reported on its role in AM discrimination tasks, little is known about how different behavioral states of the listener influence AM sensitivity. Here, we analyzed the spontaneous activity, the sound-onset response, and the sustained response of monkey AC cells while the animals were in different behavioral states, varying from passive sound exposure to engagement in a non-predictive or predictive sound-change detection task. We analyzed the monkeys' reaction times as a function of amplitude modulation and predictability in the task, and found a systematic relationship between reaction time and acoustic sensitivity. We determined modulation transfer functions (MTFs) to quantify the behavioral and single-unit neuronal responses for a range of amplitude modulation frequencies (AMFs). Task involvement altered the strength of envelope phase-locking of cells, their mean firing rate, and trial-by-trial variability. We also quantified the strength of each neuron's temporal modulation-following response (mMTF) by Fourier analysis, and found that it was task-independent for AMFs between 5.6 and 45 Hz, but changed between passive and active listening conditions for both lower and higher AMFs. These results indicate that the mMTF remains stable for the AMFs for which AC cells employ a combination of both rate and temporal encoding mechanisms. Furthermore, our findings suggest that the mMTF is a better measure for the
quantification of temporal envelope encoding of auditory cortex neurons, as it depends on both the firing rate and the strength of envelope phase-‐locking. ID: 158
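One common Fourier measure of the modulation-following response is the PSTH component at the AM frequency, normalised by spike count, which equals the vector strength of envelope phase-locking; bin size and the placeholder spike train below are assumptions:

import numpy as np

def envelope_following(spike_times, amf, duration, bin_s=0.001):
    # Fourier component of the PSTH at the AM frequency, normalised by spike count
    edges = np.arange(0.0, duration + bin_s, bin_s)
    psth, _ = np.histogram(spike_times, bins=edges)
    t = edges[:-1] + bin_s / 2
    comp = np.sum(psth * np.exp(-2j * np.pi * amf * t))
    return np.abs(comp) / max(psth.sum(), 1)

spikes = np.sort(np.random.uniform(0, 1.0, 200))   # placeholder 1-s spike train
mmtf_20hz = envelope_following(spikes, amf=20.0, duration=1.0)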
Greenwood frequency-position relationship as revealed by optical imaging in guinea pig primary auditory cortex Wen-Jie Song, Masataka Nishimura Kumamoto University, Japan song@kumamoto-u.ac.jp Orderly representation of sound frequency over space, or tonotopy, is a hallmark of the primary auditory cortex (A1). A quantitative relationship between sound frequency and cortical position, however, remains to be further explored. Here we examined this relationship in guinea pig A1 by presenting stimulus tones over a wide frequency range and recording the evoked cortical responses with a high-spatial-resolution optical imaging technique. We determined three best-frequency positions in A1 for each tone frequency: the onset response position, the peak amplitude position, and the maximum rise rate position of the response evoked by a tone of that frequency. In all animals examined (n = 23), a nonlinear log frequency-position relationship was found for each of the three indices, and the frequency-position relationship was always well described by a Greenwood equation, with correlation coefficients greater than 0.99. The cortical magnification factor, measured in octaves/mm, was found to be a log function of frequency. Because sound frequency is represented in the two-dimensional cortical sheet, our results do not fully capture all features of frequency representation in A1, but they do establish a quantitative relationship between sound frequency and cortical position in guinea pig A1 along the frequency axis. Our results should find application in an array of studies including modeling of the auditory cortex.
ID: 159
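For reference, the Greenwood function referred to above maps position x to best frequency F as

F(x) = A\,\bigl(10^{\,a x} - k\bigr),

where A, a and k are constants fitted per animal (the abstract does not report the fitted values). The k term produces the deviation from a pure log frequency-position relationship at low frequencies.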
Pupil-size-dependent auditory-evoked cortical responses in anesthetized rats Hirokazu Takahashi, Hiroyuki Tokushige, Tomoyo I. Shiramatsu, Takahiro Noda, Ryohei Kanzaki The University of Tokyo, Japan [email protected] The pupil size is highly correlated with neural activity in the locus coeruleus, a small collection of noradrenergic neurons. The locus coeruleus-noradrenaline system has widely distributed, ascending projections to the neocortex, possibly playing general roles in arousal regulation and environmental responsiveness. Recent studies showed that the pupil size exhibits spontaneous, rhythmic changes on the order of minutes even under anesthesia, which covary with the cortical state, i.e., the pattern of spontaneous cortical signals. In the present study, we further tested whether and how the pupil size covaries with stimulus-evoked responses in the auditory cortex of anesthetized rats. Six male Wistar rats at 8-9 postnatal weeks were used. Rats were anesthetized with isoflurane (3% at induction and 1% for maintenance) and held in a fixed position using a head-holding device. The right auditory cortex was surgically exposed and the tone-evoked activities were epipially mapped with a surface microelectrode array. During the recording, the pupil size was also measured with an infrared camera. Clicks or tone bursts were presented every 700 ms. For tone stimuli, 10 different frequencies (2-50 kHz) were presented randomly. Stimulus-specific adaptation (SSA) was also characterized during an oddball paradigm with 2- and 4-kHz tones. Consistent with previous studies, the pupil size spontaneously fluctuated with a 2-min cycle. The amplitudes of tone-evoked responses were negatively correlated with the pupil size. This negative correlation was significantly stronger in response to high-frequency 50-kHz tones than to others, and also stronger in the primary auditory cortex than in either the anterior or ventral auditory fields. Both the standard- and deviant-evoked responses exhibited the negative correlation equally, and hence SSA did not covary with the pupil size. Lastly, to test the causal relationship between pupil diameter and cortical
responses, a nociceptive electrical stimulus was applied to the limbs to dilate the pupil; this pupil dilation was also associated with significant decreases of cortical evoked responses. Thus, the autonomic system likely mediates the pupil fluctuation, either spontaneously or in a stimulus-driven manner, which is associated with significant effects on the evoked responses in the sensory cortex. This work was supported in part by SCOPE (121803022) and KAKENHI (25135710, 26242040). ID: 160
Local-field potentials to speech sound features in the primary auditory cortex of rats Jari Kurkela, Mustak Ahmed, Eeva-Kaarina Pellinen, Paavo H.T. Leppänen, Jarmo Hämäläinen, Piia Astikainen University of Jyväskylä, Finland [email protected] The capability to discriminate between different speech sound features is essential for understanding spoken language. Furthermore, active exposure to auditory stimuli can improve this ability, as seen at the behavioral and brain level in humans. Fascinatingly, it has been shown that rats and other rodents also have this ability; thus it is not unique to humans. We tested rats' ability to discriminate different speech sound features, and brain plasticity after passive training. Eighteen animals divided into two groups were passively exposed to auditory material (either syllable changes or syllable duration changes presented in an oddball condition) for ten consecutive days, one hour per day. After the exposure, local-field potentials were recorded epidurally above the primary auditory cortex in response to both syllable changes and syllable duration changes while the rats were urethane-anesthetized. We found that both types of changes in speech sounds produced MMN responses to deviant stimuli, but no training effect of the exposure was found in either condition. It is unclear whether the absence of a training effect was due to insufficient exposure or to other methodological aspects of this experiment. Consequently, it will be interesting to investigate in the future whether training effects can be found in humans, for example for foreign speech sound features.
ID: 161
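A minimal sketch of the passive-exposure oddball sequences used here; the deviant probability and the no-consecutive-deviants constraint are common conventions assumed for illustration:

import random

def oddball_sequence(n_trials=1000, p_deviant=0.1, seed=0):
    # frequent standards with occasional deviants, never two deviants in a row
    rng, seq = random.Random(seed), []
    for _ in range(n_trials):
        is_dev = bool(seq) and seq[-1] != "deviant" and rng.random() < p_deviant
        seq.append("deviant" if is_dev else "standard")
    return seq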
Context-‐dependent representation of auditory time in working memory Sundeep Teki1, Timothy D. Griffiths1,2 1University College London, United Kingdom; 2Newcastle University, United Kingdom [email protected] The brain can hold information about multiple objects in working memory. It is not known, however, whether intervals of auditory time can be stored in memory as distinct items. Furthermore, the neural substrates that represent time intervals in working memory are also poorly understood. In this study, we developed a novel behavioural paradigm to examine temporal memory where listeners were required to reproduce the duration of a single probed interval from a sequence of intervals. Listeners received precise feedback (in ms) of their memory performance that was quantified in terms of precision. We ran a series of experiments and demonstrate that memory performance significantly varies as a function of temporal structure (better memory in regular vs. irregular sequences), interval size (better memory for sub-‐ vs. supra-‐second intervals) and memory load (poor memory for higher load). Memory performance was invariant to attentional cueing, in contrast to cueing results from vision or audition. Using functional magnetic resonance imaging at 3T, we investigated encoding of time into working memory as a function of the temporal structure and memory load of the sequences. An orthogonal design was used where the temporal regularity of the sequences varied for a fixed number of intervals, and in a separate condition, the number of intervals were varied whilst the temporal regularity was constant. We demonstrate that perceptual timing areas including the cerebellum and striatum, and the parietal cortex encode temporal memory as a function of temporal regularity and memory load respectively. Structural magnetic resonance imaging data revealed parallel changes in grey matter density that correlated with behavior in a similar network of areas as implicated by the functional BOLD data. Our data represent the first systematic investigation of temporal memory in auditory sequences and support the emerging
hypothesis that time intervals are allocated a working memory resource that varies with the amount of other temporal information in a sequence. The imaging data represent the first neural evidence suggesting that core areas of the temporal processing network including the cerebellum and striatum also encode memory for time intervals in a context-‐dependent manner and that the parietal cortex acts as a hub for storing not only sensory information in working memory, but also temporal information. ID: 162
Task-‐dependent modulations of the fMRI BOLD response in monkey auditory cortex Heather Slater1, Emma Salo2, Ross Muers1, Teemu Rinne2, Christopher Petkov1 1Newcastle University, United Kingdom; 2University of Helsinki, Finland [email protected] Human neuroimaging studies have shown that operations in auditory cortex are strongly modulated by active tasks. Yet, virtually all prior imaging studies in awake nonhuman animals were conducted under passive stimulation. To help bridge the gap between human fMRI studies using active tasks and animal models, we trained two macaques to perform an auditory spatial task during fMRI, with and without competing visual stimuli. The monkeys were presented with pairs of “coo” vocalisation sounds that either changed in spatial location (target: virtual acoustic space change from -‐90° to +90° in azimuth) or were presented from the same location (nontarget: repeated location in -‐90° or +90°). The monkeys were rewarded for pressing a lever to auditory targets and withholding their response to nontargets. Approx. 40% of randomly selected auditory stimulus trials had competing visual stimuli (pairs of low-‐contrast monkey face stimuli presented either in two different spatial locations or in one location). These conditions allowed us to investigate activations to sounds presented during an active auditory task (auditory only trials) and to evaluate the influence of visual stimuli on both behaviour and associated task-‐dependent fMRI modulations. After behavioural training, simulating the MRI environment (ca. 5 sessions/wk for >18 months with each
monkey), the monkeys were scanned with fMRI at 4.7 Tesla while performing the auditory spatial discrimination task. This resulted in 9 fMRI scanning runs (ca. 900 testing trials) per monkey with auditory performance above chance. As expected, sound presentation was associated with reliable activation of auditory cortex (sound trials compared to silent trials). Task performance significantly modulated activations in most auditory cortical fields, where stronger activations were associated with better performance. Moreover, enhanced activations in visual cortical areas were observed when the monkeys’ performance was influenced by visual stimulation during the audio-visual stimulus conditions. These results provide insights into how task performance modulates activations in the nonhuman primate brain at the regional level during an active auditory spatial task with and without competing visual stimuli. *Equal contribution: HS, ES and RM. Support: Academy of Finland (TR); Wellcome Trust (CIP; WT102961MA); BBSRC U.K. (CIP; BB/J009849/1)
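Above-chance performance in this go/no-go design (lever press to targets, withheld response to nontargets) is commonly summarized as d′ from hit and false-alarm rates; the trial counts below are invented for illustration.

from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    # A log-linear correction keeps the z-transform finite at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical tallies from ca. 900 testing trials (not the study's data).
print(dprime(hits=380, misses=70, false_alarms=90, correct_rejections=360))

ID: 163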
Hippocampal local-field potentials to spectro-temporally complex sounds in awake rabbits Piia Astikainen1, Eeva-Kaarina Pellinen1, Paavo Alku2, Miriam Nokia1 1University of Jyväskylä, Finland; 2Aalto University, Finland [email protected] Electrophysiological studies have shown that the hippocampus responds to changes in serially presented sinusoidal sounds. However, its role in discriminating spectro-temporally complex sounds is unclear. We recorded local-field potentials from the dentate gyrus of the hippocampal formation in awake rabbits. In an oddball condition, frequently presented ‘standard’ sounds were rarely (p=.01) and randomly replaced by infrequent ‘deviant’ sounds. Speech (/ki:/ vs. /pi:/) and non-speech sounds with carefully matched acoustic features were presented in separate stimulus conditions. Deviant sounds elicited two prominent peaks of positive polarity approximately at 70 ms and 120 ms after stimulus onset. There were no differences between the speech and non-speech stimulus conditions. /Pi:/ and its non-speech counterpart as the deviant stimulus modulated both peaks (larger responses to deviants than standards), while the modulation to /ki:/ and its non-speech counterpart was found only in the first peak. The results show that the dentate gyrus of the hippocampus in the rabbit responds to rare changes in spectro-temporally complex sounds. The current experimental paradigm thus provides an effective tool for investigating hippocampal function in a receptive language context in an animal that has neither linguistic brain structures nor long-term memory traces for speech sounds.
ID: 164
Cerebral processing of affective non-‐verbal vocalizations using MEG and voice morphing: Single-‐subject GLM analysis Emilie Salvia1, Sonja A. Kotz2, Patricia Bestelmeyer3, Cyril Pernet4, Guillaume A. Rousselet1, Bruno L. Giordano1, Joachim Gross1, Pascal Belin1 1University of Glasgow, United Kingdom; 2University of Manchester, United Kingdom; 3University of Bangor, United Kingdom; 4University of Edinburgh, United Kingdom [email protected] Background Studies suggested rapid processing of emotional information arising from the auditory cortex, i.e. from ~100ms after stimulus onset. However, it is not entirely clear to what extent these early ‘emotional’ effects are in fact driven by acoustics: sounds that vary in affective properties also tend to have a different acoustic structure. We examined (1) whether early affective effects can be observed at the single-‐participant level and (2) whether these early effects are based on acoustical rather than perceptual features. Method MEG was used to assess the cerebral response to affective voices. Stimuli consisted of affective auditory bursts from the Montreal Affective Voices Battery manipulated via morphing to parametrically vary acoustical structure and perceived emotional properties. We performed a single-‐subject analysis (>6000 trials per subject) and used a toolbox implementing the General Linear Model (GLM) for EEG/MEG data (LIMO EEG). ‘Simple’ models
including an affective regressor (Arousal/Valence) and ‘combined’ models that also included acoustical regressors were estimated. We tested the impact of the regressors on three event-related field (ERF) components: two early components, the N100 and P200, reflecting auditory cortex activity, and a later one, the LPP. Results ERF results show auditory stimulus responses ~100 ms (N100) and ~200 ms (P200) after stimulus onset; there are positive or negative signal amplitude variations over the right and left temporal sensors, and contralateral sensors show opposite polarities. A positive signal amplitude variation over the left central parietal sensors was observed at ~400-600 ms (LPP). Results of the ‘simple’ models show significant early and late affective effects on the signal variance at these sensors located over the auditory and centro-parietal cortices. However, the ‘combined’ models showed few remaining effects of Arousal after removing the acoustically-explained variance, while significant effects of Valence remained, especially at a late processing stage. Conclusion Early effects of Arousal are largely explained by variance in acoustical features, showing that understanding vocal emotional messages requires analysing and integrating a variety of acoustical cues. Valence contributes more strongly to variance independently of acoustics, particularly at later processing stages. Processing of emotional voices requires: (1) analysis of acoustical cues; (2) elaboration of stimulus evaluation along affective dimensions.
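The comparison of 'simple' and 'combined' models can be illustrated with a plain-numpy GLM. LIMO EEG fits such models per sensor and time point; every regressor, trial count, and effect size below is an invented placeholder.

import numpy as np

rng = np.random.default_rng(1)
n_trials = 6000                               # per-participant trial count

arousal = rng.normal(size=n_trials)           # affective regressor
acoustics = rng.normal(size=(n_trials, 3))    # placeholder acoustical regressors
# Simulated amplitude at one sensor/time point, mostly acoustically driven.
signal = 0.8 * acoustics[:, 0] + 0.1 * arousal + rng.normal(size=n_trials)

def r_squared(X, y):
    # Variance explained by an intercept-plus-X linear model.
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - (y - X @ beta).var() / y.var()

r2_acoustic = r_squared(acoustics, signal)
r2_combined = r_squared(np.column_stack([arousal, acoustics]), signal)
# Affective variance surviving removal of the acoustically-explained variance:
print(r2_combined - r2_acoustic)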
ID: 165
Conspecific objects exhibit preferential multisensory integration Pawel J. Matusz1,2, Antonia Thelen1,3, Eveline Geiser1, Jean-Francois Knebel1,4, Celine Cappe5, Micah M. Murray1,3,4 1Vaudois University Hospital Center and University of Lausanne, Switzerland; 2University of Oxford, United Kingdom; 3Vanderbilt University, United States of America; 4Center for Biomedical Imaging, Switzerland; 5Centre de Recherche Cerveau & Cognition, France [email protected] Rudimentary and complex stimuli, including environmental objects, both elicit multisensory interactions during early post-stimulus stages. However, processing of auditory (and visual) object categories has typically been dissociated between living and man-made categories, and additional evidence points to preferential processing of conspecific stimuli, such as voices (or faces). Our study assessed whether such preferences engender facilitated multisensory integration of some objects over others and/or alter the underlying neural mechanisms. We recorded 160-channel EEG from 14 healthy adults performing a living/man-made go/no-go categorization involving environmental objects presented as sounds, drawings or auditory-visual pairs. Behavioural analyses based on the inverse efficiency scores (median reaction time divided by percent correct responses) revealed an overall advantage for discriminating living vs. man-made objects, irrespective of the sensory condition. There was no evidence for multisensory facilitation. A 3x3 ANOVA with sub-categories of living objects (conspecifics, mammals, birds) and sensory condition revealed multisensory facilitation exclusively for conspecifics. EEG analyses followed an electrical neuroimaging approach and were restricted to distracter trials to avoid motor confounds when comparing multisensory and summed unisensory brain responses. A 3x2 ANOVA with factors of category (conspecifics, mammals, birds) and response type (pair/sum) revealed a significant interaction due to nonlinear auditory-visual integration of neural responses (global field power) to conspecifics at ~50ms post-stimulus onset that was not observed for other categories. These results provide the first evidence that auditory-visual
objects that refer to conspecifics exhibit facilitated multisensory integration.
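The inverse efficiency score defined above (median reaction time divided by percent correct) translates directly into code; the trials below are toy values.

import numpy as np

def inverse_efficiency(rts_ms, correct):
    # Median RT divided by proportion correct; lower scores mean more
    # efficient performance. (Taking the median over correct trials only
    # is an assumption; the abstract does not specify.)
    rts_ms = np.asarray(rts_ms, float)
    correct = np.asarray(correct, bool)
    return np.median(rts_ms[correct]) / correct.mean()

# Same median RT, but more errors inflate the score.
print(inverse_efficiency([450, 480, 500, 510, 530], [1, 1, 1, 1, 0]))  # 612.5
print(inverse_efficiency([450, 480, 500, 510, 530], [1, 1, 0, 1, 0]))  # 800.0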
ID: 166
Lifelong changes in the auditory cortex for voice perception Zhang Jingting1, Fraser W. Smith2, Bruno L. Giordano1, Marie-Hélène Grosbras1, Guillaume A. Rousselet1, Pascal Belin1 1University of Glasgow, United Kingdom; 2University of East Anglia, United Kingdom [email protected] Aging does not influence cognition in a simple way, with some cognitive functions declining in response to age-related neural tissue loss while others remain preserved. Current evidence on auditory processing focuses more on speech than on voice. Pervasive age-related brain atrophy, including in auditory cortex, makes it necessary to study whether and how voice processing in auditory cortex changes with age. To address this issue, in an fMRI study we manipulated auditory vocal and non-vocal signals and measured the relationship between age, functional activity and grey matter density. Fifty-six healthy adults across a wide age range (20-86 years, M = 49.7 years, SD = 18.9) passively listened to sounds with eyes closed while being scanned. The experimental paradigm consisted of a classical voice localizer scan (http://vnl.psy.gla.ac.uk/resources.php) contrasting blocks of vocal vs non-vocal sounds to identify the temporal voice areas (TVA), which mostly cover the middle and anterior parts of the bilateral superior temporal sulcus (STS) in auditory cortex (Belin et al., 2000). The stimuli contained 40 8-sec blocks of sounds. Half of the blocks had only vocal sounds (both speech and nonspeech), and the other half had only non-vocal sounds (e.g., environmental sounds). Voxel-Based Morphometry analysis showed that extensive regions of cortex covering the whole brain, including auditory cortex, had reduced grey matter density with increasing age. To study the relationship between activity and age, we conducted linear and nonlinear correlations of activity with age. Results showed activation in bilateral STS, left precentral and left inferior frontal gyrus for vocal versus non-vocal sounds independent of
age, indicating the critical role of this system in voice-specific processing across the life span. We observed a significant linear relationship between activity for vocal versus non-vocal sounds and age, with increased activity in bilateral postcentral gyrus and right superior occipital gyrus with advancing age. The age-related changes in postcentral regions might reflect changes in articulation-related features in voice perception, as in speech perception (Pulvermüller et al., 2006). Altogether, these findings suggest that the major voice-selective system in auditory cortex is preserved despite age-related brain atrophy, but that neural activity related to articulation-related features in voice perception may increase with age.
ID: 167
The neural representations of phonetic features in the auditory cortex during speech perception, speech production, and inner speech Jessica Arsenault1,2, Bradley Buchsbaum1,2 1Rotman Research Institute Toronto, Canada; 2University of Toronto, Canada [email protected] Mental imagery involves modality-specific reactivation of neural pathways that are important for direct perception (Hubbard, 2010). Within the auditory modality, the perception of speech produces patterns of activity in the superior temporal gyrus (STG) that are associated with phonetic features such as voicing and manner of articulation (Mesgarani, Cheung, Johnson, & Chang, 2014). There is a debate about the extent to which the activation of low-level acoustic-phonetic features is required for auditory imagery associated with inner speech (e.g., Corley, Brocklehurst, & Moat, 2011). The objective of the current study was to use functional magnetic resonance imaging (fMRI) and multivariate pattern analysis (MVPA) to assess whether or not distributed representations in the STG associated with phonetic features during the perception of auditory speech are also activated during silent inner speech. Participants took part in three different fMRI sessions within one week. During one session, participants were asked to subvocally rehearse consonant-vowel syllables (e.g., “ba”), with special instructions to not move their mouths
or make any sound (inner speech condition). In another session, auditory stimuli were perceived as participants passively listened to the same syllables (speech perception condition). A final session required participants to silently mouth the same syllables while in the scanner such that overt speech-‐motor movements were produced in the absence of external auditory feedback (mouthing condition). MVPA was performed in order to classify the distributed activity reflecting three phonetic features: voicing, manner of articulation, and place of articulation. Results show that within the auditory cortex, phonetic feature classification was worse during inner speech than in speech perception or mouthing. While inner speech and speech perception produced non-‐overlapping classification maps, overlap was observed between speech perception and mouthing throughout the STG. The neural patterns support the notion that phonetic features are indeed impoverished during inner speech. The fact that the mouthing condition produced stronger classification accuracy in the auditory cortex than inner speech suggests that more peripheral motor commands may be driving low-‐level feature representations in auditory cortex through an efference copy mechanism. References Corley, M., Brocklehurst, P. H., & Moat, H. S. (2011). Error biases in inner and overt speech: evidence from tongue twisters. Journal of Experimental Psychology. Learning, Memory, and Cognition, 37(1), 162–75. doi:10.1037/a0021321 Hubbard, T. L. (2010). Auditory imagery: empirical findings. Psychological Bulletin, 136(2), 302–29. doi:10.1037/a0018436 Mesgarani, N., Cheung, C., Johnson, K., & Chang, E. F. (2014). Phonetic feature encoding in human superior temporal gyrus. Science (New York, N.Y.), 343(6174), 1006–10. doi:10.1126/science.1245994
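A minimal cross-validated MVPA sketch in the spirit of the analysis described, using scikit-learn and synthetic patterns in place of STG voxel data (the study's actual pipeline is not specified beyond classification of distributed activity):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_trials, n_voxels = 120, 200                  # placeholder dimensions

# Synthetic voxel patterns carrying a weak binary voicing signal.
voicing = rng.integers(0, 2, size=n_trials)
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[:, :10] += 0.4 * voicing[:, None]     # small distributed effect

clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
scores = cross_val_score(clf, patterns, voicing, cv=5)
print(scores.mean())                           # chance = 0.50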
ID: 168
Do cochlear implant users process faces differently? Maren Stropahl1, Karsten Plotz2, Rüdiger Schönfeld2, Pascale Sandmann1,3,4, Maarten De Vos1,4, Stefan Debener1,4 1Carl von Ossietzky University Oldenburg, Germany; 2Evangelisches Krankenhaus Oldenburg; 3Hannover Medical School, Germany; 4Cluster of Excellence "Hearing4all" maren.stropahl@uni-‐oldenburg.de Cochlear implants (CI) can partially restore hearing in post-‐lingually deafened individuals. There is evidence that the auditory cortex
takes over visual functions during a period of auditory sensory deprivation. While deafness-induced reorganization may contribute to superior visual abilities in hearing-impaired individuals, a residual pattern of visual take-over after the implantation of a CI seems maladaptive for restoring speech intelligibility based on the input provided by a CI (Sandmann et al., 2012, Brain). The aim of the present study was to obtain more information about visual processing in CI users. Specifically, we investigated whether the electrophysiological correlate of face processing, the face-selective N170 component of the event-related potential (ERP), is different in CI users compared to normal hearing individuals. Given that hearing-impaired listeners often rely on faces to better understand speech (lip-reading), we expected more efficient face processing in this group. High-density electroencephalogram data were recorded from N=21 experienced CI users and N=21 age-matched controls (aged 20 to 74 years) performing a face versus house discrimination task. Lip-reading abilities and speech intelligibility were assessed. A face-index expressing the face-selectivity of the N170 component was calculated (Sadeh et al., 2010, Human Brain Mapping). Evaluation of ERP topographies revealed significant group differences compatible with the predicted pattern of visual take-over in the CI group. For the CI group, the face-index significantly correlated with the duration of deafness (r=-.40), with a more pronounced difference between the N170 to faces and the N170 to houses (a more negative index) for longer durations of deafness. Additionally, the face-index was associated with the age at hearing loss onset (r=.39). Taken together, the results confirm a topographic difference in face processing between the groups, and identify for the CI users a relation to the duration of auditory deprivation. Source localization results will be reported and implications for CI outcome prediction will be discussed.
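The face-index is attributed to Sadeh et al. (2010) without a formula in the abstract; the normalized-difference form below is therefore an assumption, chosen so that a more negative index means a larger N170 to faces than to houses.

import numpy as np

def n170_face_index(amp_faces_uv, amp_houses_uv):
    # Assumed form: normalized difference of mean N170 peak amplitudes.
    # The N170 is negative-going, so stronger face selectivity yields a
    # more negative index, matching the direction reported above.
    f, h = np.mean(amp_faces_uv), np.mean(amp_houses_uv)
    return (f - h) / (abs(f) + abs(h))

# Toy peak amplitudes in microvolts.
print(n170_face_index([-6.0, -5.5, -6.3], [-3.1, -2.8, -3.4]))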
ID: 169
Visual experience influences an auditory cortex critical period Todd M Mowery, Vibhu C Kotak, Dan H Sanes New York University, United States of America [email protected] During development, primary sensory cortices go through brief epochs of increased plasticity known as critical periods (CP). However, sensory modalities do not begin to function at the same time, suggesting that they may influence one another as they come online (Gottlieb, 1971). Here, we asked whether the onset of visually-evoked activity can influence an auditory cortex CP during which synaptic inhibition is vulnerable to hearing loss. Since auditory cortex synaptic inhibition is quite sensitive to early experience (Mowery et al., 2014), we first identified the CP during which spontaneous inhibitory postsynaptic currents (sIPSC) are vulnerable to hearing loss. We varied the age of hearing loss onset by inserting bilateral earplugs on different days, beginning when the ear canals open (P11). These experiments demonstrated that the CP closed abruptly at P18, near the time when eyelids normally open. Therefore, we tested whether early eyelid opening (P14), or delayed eyelid opening (P23), would influence this inhibitory CP. When eyelids were opened early, the auditory cortex CP closed early, by P16. In contrast, when eyelid opening was delayed, the auditory cortex CP was also delayed by several days. This principle held for sIPSC amplitude, time constant, and frequency. Taken together, these results strongly suggest that visual input directly influences an auditory cortex CP. This finding has implications for cross-modal influence on maturation, especially as it pertains to transient deprivation or premature experience.
ID: 170
The effects of behavioral engagement in an auditory same/different task on cortical responses in marmoset monkeys Michael Scott Osmanski, Xiaoqin Wang Johns Hopkins University Baltimore, United States of America [email protected] A central question in auditory neuroscience is how brain activity gives rise to perception and,
further, how different behavioral states modulate responses across auditory cortex. Recent work has shown that neurons in primary auditory cortex (A1) can be adaptively modulated by factors related to the degree of engagement in a psychophysical task. However, the role of active behavior in modulating neural response properties across auditory cortex (including secondary cortical fields such as belt and parabelt regions) remains a largely open question. To begin to address this question, we trained marmoset monkeys on an auditory delayed match-‐to-‐sample task (“Same/Different”) in which subjects were presented with two sounds, separated by a short delay period, drawn from a large corpus of different acoustic stimuli. Animals were required to lick at a feeding tube if the sounds were different and withhold responding if they were identical. After mastering this task, animals were then transferred to a series of test stimulus sets comprised of a number of spectrally and temporally complex sounds, including vocalizations. In addition, we implanted animals with a 16 channel multi-‐electrode array covering a large portion of auditory cortex. We sought to examine changes in neural activity across recording sites (i.e., between putative core [A1] and lateral belt auditory fields) when animals were engaged in the behavior task compared to passively listening to the same stimuli. Overall, our results show significant changes in stimulus-‐evoked firing rates during active behavior compared to passive listening. Specific changes in neural responses varied based on task condition, including trial type (Same/Different) and behavioral outcome (e.g., Hit/False Alarm). These data support previous studies describing modulation of neural responses as a function of engagement in a behavior task, in addition to a potential functional differentiation between core and belt regions. Supported by NIH grants DC003180 to XQW and DC013150 to MSO
ID: 171
Spiking in auditory cortex following thalamic stimulation is dominated by cortical network activity Matthew I. Banks, Bryan M. Krause University of Wisconsin Madison, United States of America [email protected] Introduction: Recent evidence suggests that the state of the cortical network prior to sensation can have a profound impact on neural responses and perception. In rodent auditory cortex, sensory responses are reported to occur in the context of network events, similar to brief UP states, that produce 'packets' of spikes and are associated with synchronized synaptic input. However, traditional models based on data from visual and somatosensory cortex predict that sensory stimuli activate ascending thalamocortical pathways that evoke sequential activation of cells in layers 4, 2/3 and 5 (L4>L2/3>L5). The relationship between these two types of sensory-evoked spatio-temporal activity patterns is unclear. Here, we investigated the laminar response pattern to stimulation of thalamocortical (TC) afferents in auditory TC brain slices. We show that although monosynaptic spiking responses to TC afferents occur, the vast majority of spikes fired following TC stimulation occur during brief UP states and outside the context of the L4>L2/3>L5 activation sequence. Methods: Auditory TC slices were prepared from B6CBAF1/J mice (3 - 12 weeks). TC afferents were electrically stimulated in the MGv or in the superior thalamic radiation. Laminar profiles of synaptic and spiking responses were obtained via (1) whole-cell and on-cell recordings from cells throughout layers 2-6; (2) current source density and multiunit activity recorded in all laminae simultaneously using multichannel electrodes; and (3) Ca imaging, in which cells were loaded with OGB-1 AM throughout the cortical laminae to identify spiking cells as a function of cortical depth. Results: Monosynaptic subthreshold TC responses with similar latencies were observed throughout layers 2 - 6, presumably via synapses onto dendritic processes located in granular thalamo-recipient layers. However, monosynaptic spiking was rare, and occurred
primarily in L4 and L5 GABAergic interneurons. Spiking was dense during TC-evoked brief UP states and occurred primarily in pyramidal cells. These network events always involved infragranular layers, whereas involvement of supragranular layers was variable. During UP states, latencies were comparable between infragranular and supragranular cells. Discussion: These data suggest that sensory stimuli are processed in parallel at the network level by two distinct circuits in auditory cortex. Supported by National Institutes of Health (M.I.B.: R01 DC006013; B.M.K.: T32 GM007507).
ID: 172
Comparing cortical pitch responses in humans and monkeys Bevil R. Conway1, Samuel Norman-‐Haignere2, Nancy G. Kanwisher2, Josh H. McDermott2 1Wellesley College, United States of America; 2 Massachusetts Institute of Technology, United States of America [email protected] Pitch perception is a fundamental component of hearing that is thought to be important for many species. Humans possess stereotyped brain regions that respond preferentially to sounds with pitch (e.g. speech vowels, piano notes) compared with sounds that lack pitch (e.g. whispering, drumming). These regions can be consistently identified in individual human subjects using fMRI, by contrasting responses to harmonic tones with responses to frequency-‐matched noise; and these regions extend from part of primary auditory cortex (the low-‐frequency portion of the tonotopic map) into anterior auditory cortex (Norman-‐Haignere et al., J. Neurosci., 2013). Here we use fMRI to test whether similar regions are present in rhesus macaque monkeys. In Experiment I, we measured responses in two fixating macaques to the same contrast we have previously tested in humans: harmonic tones versus noise, each presented in 5 different frequency ranges (spanning 5 octaves: 0.3 -‐ 9.6 kHz). This 2 (pitch vs. noise) x 5 (different frequency ranges) factorial design allowed us to measure both frequency-‐selectivity (i.e. tonotopy) and pitch responses. Consistent with previously published results, the monkeys showed tonotopic organization that was similar to that in humans: regions
selective for low frequencies alternated with regions selective for high frequencies along the posterior-to-anterior axis of auditory cortex. Unlike in humans, however, there were no regions in monkeys that responded more to harmonic tones than frequency-matched noise. This was true both within tonotopically defined regions of interest and across the whole brain. In Experiment II, we measured responses in ten humans and two monkeys to a diverse collection of 165 natural sounds that varied in their pitch strength. The results of this experiment provided additional evidence that selective responses to sounds with pitch are unique to human auditory cortex. Low-frequency regions in human auditory cortex responded more to natural sounds with a strong pitch, consistent with the finding that pitch-responsive regions in humans are co-located with low-frequency regions of the tonotopic map. But in monkeys, the response of low-frequency regions was correlated only with the presence of low-frequency energy, and not with pitch strength. These results suggest that brain regions with a preferential response to sounds with pitch may be absent in macaque auditory cortex. Grant/Other Support: NIH Grant EY023322 (BRC), NSF Grant 0918064 (BRC), NIH Grant P41EB015896, NIH Grant S10RR021110, NIH Grant EY13455 (NGK), McDonnell Foundation (JM)
ID: 173
MEEG evidence that prediction fosters a reliable perception of the causal structure of the world Alessandro Tavano1, Burkhard Maess2, Erich Schroeger1 1University of Leipzig, Germany; 2Max Planck Institute for Human Cognitive and Brain Sciences, Germany tavano@uni-‐leipzig.de How does the human brain reflect the causal structure of the world? One way is to have sensory systems extracting predictable patterns in input, since regularly occurring events are likely to reflect coherent, distinguishable sources. More generally, the brain would actively exploit sensory inferences based on stimulus expectancies to obtain a reliable model of the current state of affairs in the world. However, this stance leaves open
the question as to the nature of such inferences. It has been shown that when highly probable stimuli are omitted, the brain’s response is partially similar to that elicited by the actual stimuli. This suggests that prediction, or “knowing what comes next”, activates the corresponding sensory cortices in a stimulus-specific manner, perhaps in advance or at least concurrently with the actual input (omission paradigm). However, if the neural activity “filling in” for the omitted, highly expected sound caused sensation – e.g., hearing missing tones – it would contradict the core assumption that prediction fosters a reliable perception of the world around us, as participants would hear a sound when there is none. We used human EEG to investigate the functional nature of prediction in audition by omitting predictable vs. unpredictable pure tones, delivered in pairs outside the focus of attention. The omission of predictable sounds generated an N1 response closely resembling that of the actual sound, extending previous findings. Crucially, the omission of highly predictable tones elicited a larger distraction response (P3a) than the omission of unpredictable sounds, indicating that the absence of the tone was better noticed when the tone was highly expected. In two further EEG and MEG experiments we demonstrated that the “resemblance” between the brain responses to the omitted and actual sounds can be extended so as to obtain virtually perfect cortical simulations of omitted sounds in both sensor (EEG) and source (MEG) space, overriding the traditional concept of “stimulus template”. We conclude by proposing that auditory predictions are precise (point-wise) neural hypotheses about what is in the world as we hear it, and what is not.
ID: 175
Trial-‐to-‐trial variation of auditory evoked potentials and mismatch negativity in rat auditory cortex Tomoyo Isoguchi Shiramatsu, Hirokazu Takahashi The University of Tokyo, Japan [email protected]‐tokyo.ac.jp Mismatch Negativity (MMN) refers to a negative deflection in auditory evoked potential (AEP) in response to sound changes. We have reported that MMN in rats is not a
mere effect of stimulus-specific adaptation (SSA), while the middle-latency potential (P1) exhibited strong SSA. In this study, we investigated whether MMN amplitude depends on SSA of P1, based on grand-averaged and single-trial analyses. Eleven Wistar rats, at postnatal weeks 8-10, with a body weight of 250-310g, were used in the experiment. Rats were anesthetized with isoflurane (3% at induction and 1-2% for maintenance), and their right auditory cortex was surgically exposed. A surface microelectrode array with a grid of 10×7 recording sites epipially recorded AEPs during an oddball paradigm. The test stimuli were 60-dB SPL, 100-ms-duration tone bursts. The test frequencies were either 10 or 12 kHz. In each block, 540 standards (90%) and 60 deviants (10%) were delivered every 700 ms, and the grand-averaged and single-trial responses of standard and deviant AEP were obtained. In both the grand-averaged and single-trial AEPs, the amplitudes of deviant P1 and MMN were quantified. P1 amplitude was defined as the maximum potential within 50 ms from the stimulus onset. MMN amplitude was defined as the maximum within 50–150 ms post-stimulus latency in the waveform obtained by subtracting the standard response from the deviant response. First, in the grand-averaged responses, we found a positive correlation between the deviant P1 and MMN (R = 0.73, p < 0.001), suggesting that the AEP amplitude depends on the activity level of auditory cortex. However, the single-trial amplitudes of the P1 and MMN waves were not correlated. More interestingly, MMN amplitude exhibited a bimodal distribution while P1 amplitude showed a unimodal distribution, suggesting that MMN is generated only in some trials but not in other trials. Furthermore, there was a very weak positive correlation between single-trial MMN amplitude and the number of preceding standard tones (R=0.03, p<0.05). Thus, MMN is likely to appear independently of SSA of P1 in an ‘all-or-none’ manner. This work was partially contracted by SCOPE (121803022) and supported by KAKENHI (25135710, 26242040).
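The amplitude definitions given above translate directly into code; the sketch below applies them to placeholder single-trial AEPs (sampling rate assumed).

import numpy as np

fs = 1000                                      # Hz, assumed sampling rate
rng = np.random.default_rng(3)
n_samples = 200                                # 200-ms post-onset epochs
std_trials = rng.normal(size=(540, n_samples)) # placeholder standards (90%)
dev_trials = rng.normal(size=(60, n_samples))  # placeholder deviants (10%)

# P1: maximum potential within 50 ms of stimulus onset, per deviant trial.
p1 = dev_trials[:, : int(0.050 * fs)].max(axis=1)

# MMN: maximum within 50-150 ms of the deviant-minus-standard waveform;
# here single deviant trials are referenced to the averaged standard response.
diff = dev_trials - std_trials.mean(axis=0)
win = slice(int(0.050 * fs), int(0.150 * fs))
mmn = diff[:, win].max(axis=1)

# Single-trial P1-MMN correlation, the quantity tested in the study.
print(np.corrcoef(p1, mmn)[0, 1])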
ID: 176
Development of cortico-cortical and thalamocortical excitatory neural circuits in the human auditory cortex Soumya Iyengar1, Arvind Singh Pundir1, Utkarsha A. Singh1, Nikhil Ahuja1, Bishen Radotra2, Praveen Kumar3, PC Dikshit4, SK Shankar5, Anita Mahadevan5 1National Brain Research Centre, India; 2PGIMER, Chandigarh, India; 3Base Hospital, Delhi Cantonment, New Delhi, India; 4Maulana Azad Medical College, Delhi, India; 5NIMHANS, Bangalore, India [email protected] Axons in all layers of the human auditory cortex are immunoreactive for heavy- and medium-chain neurofilaments (a marker for axonal maturity) by 25 gestational weeks (GW), and the density of the neurofilament-rich plexus in the cortical wall becomes adult-like during the first postnatal year in humans (9 postnatal months). In order to study the origin of these axons, we studied the expression of the vesicular glutamate transporters 1 and 2, which are known to predominate in either cortico-cortical synapses (VGLUT-1) or thalamocortical synapses (VGLUT-2). We found that levels of VGLUT-2 mRNA were higher in postmortem human auditory cortex samples before birth compared to the postnatal period. In contrast, levels of VGLUT-1 mRNA were low before birth and increased during the postnatal period to peak at adolescence and then decreased in adulthood. Further, immunohistochemistry revealed that both VGLUT-1 and VGLUT-2 proteins were expressed in the presumptive human auditory cortex at 25GW. Higher levels of VGLUT-1 immunoreactivity were present in Layers II and III (supragranular layers) and Layers V and VI (infragranular layers) compared to Layer IV as early as 34GW, and this pattern was maintained until adulthood. Immunoreactivity for VGLUT-1 was the highest during adolescence in the primary auditory cortex, as was the case for VGLUT-1 mRNA. In contrast, a dense band of VGLUT-2-positive terminals began to appear in Layer IV of the presumptive Heschl’s gyrus at 34GW and formed a clear band in this layer by 37GW. There was a marked increase in the density of VGLUT-2-immunoreactive fibers centered at Layer IV by 18 postnatal months, which decreased somewhat by adulthood.
Immunoreactivity for VGLUT-2 was not restricted to Layer IV but was also present in axon terminals in other layers, especially in Layers V and VI, starting at 34GW. Our results suggest that thalamic axons which utilize glutamate begin to innervate the human auditory cortex as early as 25GW and gradually increase in density by the first postnatal year, after which they appear to undergo pruning. Our results also suggest that VGLUT-1-labeled cortico-cortical synapses begin to form in the human auditory cortex during prenatal development.
ID: 177
Categorical perception of consonants and vowels: Behavioral and magnetoencephalographic evidence Christian Friedrich Altmann, Maiko Uesaki, Kentaro Ono, Masao Matsuhashi, Tatsuya Mima, Hidenao Fukuyama Kyoto University, Japan [email protected] This experiment aimed at investigating categorical perception of consonants compared to vowels. To this end, we designed stimuli along a phonological continuum from /ba/ to /da/, /bo/ to /do/, /ba/ to /bo/, and /da/ to /do/, thus entailing changes of the consonant or the vowel of a consonant-‐vowel (CV) syllable. In a behavioral experiment, we first determined the category boundaries for each individual participant. Then, while measuring magnetoencephalography (MEG), we presented participants with consecutive pairs of either same or different CV syllables. In case of different stimuli, the two CV syllables were either chosen from within the same category or they crossed a category-‐boundary. During the MEG experiment, participants actively discriminated the stimulus pairs. Behaviorally, we found that discrimination was easier for the between-‐ compared to the within-‐category contrast for both consonants and vowels. However, this categorical effect was significantly stronger for the consonants compared to vowels, in line with a more continuous representation of vowels. At the neural level, we observed significant repetition suppression of MEG evoked fields, i.e., lower amplitudes for physically same compared to different stimulus pairs, from around 430 to
500 ms after onset of the second stimulus. Source reconstruction revealed generating sources of this repetition suppression effect within the left superior temporal lobe. A region-of-interest analysis within this region showed a clear categorical effect for consonants, but not for vowels. Thus, our study corroborates the proposition that categorical effects are stronger for consonants compared to vowels. Furthermore, it provides further evidence for the important role of left superior temporal areas in categorical representation during active phoneme discrimination.
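Per-participant category boundaries of the kind used here are typically estimated by fitting a logistic psychometric function to identification responses; the morph-step data below are synthetic.

import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    # Probability of a /da/ response along a /ba/-/da/ morph continuum.
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

steps = np.arange(1, 8, dtype=float)           # 7 morph steps (assumed)
p_da = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.97, 0.99])

(boundary, slope), _ = curve_fit(logistic, steps, p_da, p0=[4.0, 1.0])
print(boundary, slope)                         # boundary falls near step 4

ID: 178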
Unilateral tinnitus: Changes in connectivity measured with fMRI Cris Lanting1, Emile de Kleine1, Dave Langers1,2, Pim van Dijk1 1University of Groningen, The Netherlands; 2University of Nottingham, United Kingdom [email protected] Introduction Tinnitus is a percept of sound that is not related to an acoustic source outside the body. Mechanisms in the central nervous system are believed to play a role in its pathology. It is thought that, following hearing loss, neurons may change the strength of existing connections. As a result, spontaneous firing rates may increase, as well as the synchrony across multiple neurons. Functional magnetic resonance imaging was used to investigate differences in the connectivity patterns, possibly reflecting increased synchrony, across nuclei in the auditory pathway. Methods All imaging experiments were performed on a 3T MRI system (Philips Intera) with an eight-channel SENSE head coil. Three 8-min runs were acquired consisting of 51 identical single-shot T2*-sensitive EPI volumes (TR 10 s; TE 22 ms; voxel-size 1.0 × 1.0 × 2.0 mm³). Fourteen subjects with unilateral tinnitus were recruited as well as sixteen subjects without tinnitus, all without neurological and psychiatric history. We obtained the time-courses of ten auditory nuclei (bilateral CN, IC, MGB, primary auditory cortex, secondary cortex) and the vermis of the cerebellum. These arrays were concatenated
over subjects, resulting in a matrix containing all the time-courses across subjects. For each group, the covariance matrix was calculated, containing the Pearson cross-correlation for all possible ROI pairs. Results The characteristics of the connectivity patterns did not relate to the laterality of tinnitus. The lateralization for left- or right-ear stimuli, as expressed in a lateralization index, was considerably smaller in subjects with tinnitus compared to that in controls. Reduced functional connectivity between the brainstem and auditory cortex was observed in subjects with tinnitus compared to controls. Conclusion Reduced connectivity between brainstem and cortex is consistent with two existing models of tinnitus generation. In one model, tinnitus corresponds to a deficit in the connection between the limbic system and the auditory thalamus [Rauschecker 2010], normally inhibiting tinnitus-related activity. In the other, tinnitus relates to increased large-scale, slow-rate oscillatory coherent thalamocortical activity [Llinás 1999]. In both models, a change in the function of the auditory thalamus would lead to reduced connectivity between brainstem and cortex, suggesting an important role for the medial geniculate body of the thalamus.
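The connectivity measure described (Pearson cross-correlations among concatenated ROI time-courses) is a one-liner in numpy; the lateralization index is not given a formula in the abstract, so the (L - R)/(L + R) form below is an assumption, and all data are placeholders.

import numpy as np

rng = np.random.default_rng(4)
rois = ["CN_L", "CN_R", "IC_L", "IC_R", "MGB_L", "MGB_R",
        "A1_L", "A1_R", "A2_L", "A2_R", "vermis"]
timecourses = rng.normal(size=(len(rois), 153))   # 3 runs x 51 volumes

# Pearson cross-correlation for all possible ROI pairs.
conn = np.corrcoef(timecourses)
print(conn[rois.index("CN_L"), rois.index("A1_L")])

def lateralization_index(left, right):
    # Assumed form; positive values indicate left-lateralized responses.
    return (left - right) / (left + right)

print(lateralization_index(1.8, 1.2))             # toy response amplitudes

ID: 179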
Modulation of the cortical representation of vocalizations in the guinea pig by the amygdala David B Green, Mark N Wallace, Alan R Palmer MRC Institute of Hearing Research Nottingham, United Kingdom [email protected] Guinea pigs have a repertoire of at least eleven vocalisations that are context dependent, communicating information about danger, identity and emotional state. Most of these vocalizations can be produced by electrical stimulation of a variety of structures in the brain even when the animal is anaesthetised. We have elicited eight calls from urethane-‐anaesthetised guinea pigs by stimulating parts of the limbic system including the anterior
cingulate cortex, hypothalamus, midline thalamus and basal amygdala. Of the eight distinct vocal patterns elicited by electrical stimulation, six unambiguously matched spontaneous calls, while the remaining two were identified as unnatural versions of one spontaneous call. There seems to be strong emotional modulation of animal calls that presumably involves the limbic system. Here we investigate whether activity in the limbic portion of the call production pathways can also modulate the sensory representation of conspecific vocalizations. We acoustically presented recorded calls, while electrically stimulating the basal amygdala – an emotion-mediating structure that is involved in the affective prosody of human speech. Of the eight functional areas in the guinea pig auditory cortex, the two most responsive to vocalizations are the primary area (AI) and the adjacent ventrorostral belt area (VRB). However, in this study, we found evidence of a new area extending between the rhinal sulcus and the previously defined VRB. This area responds poorly, or not at all, to pure tones yet is highly selective to conspecific vocalisations. It may constitute a communication-related parabelt area and, if so, would be the first parabelt area described in the guinea pig. Electrical stimulation in the basal amygdala was combined with auditory presentation of a range of conspecific vocalisations, whilst recording neural activity in AI, VRB and the suprarhinal area. In all three cortical areas, neuronal responses to the acoustic vocalizations could be enhanced or suppressed by simultaneous electrical stimulation, depending on the call type. This is direct evidence that the amygdalar portion of the limbic system can modulate the sensory representation of communication calls in primary and secondary cortical areas.
ID: 180
Appearing and disappearing objects in acoustic scenes are supported by distinct neural representations: Evidence from MEG Ediz Sohoglu, Maria Chait University College London, United Kingdom [email protected] Change detection is a critical computation in hearing and underlies our capacity to perceive complex and time-varying sounds such as music and speech (Näätänen et al., 2001; Bizley and Cohen, 2013). However, the precise brain mechanisms by which change detection is accomplished, particularly in complex ongoing scenes, remain unknown. In the current MEG study, we test the hypothesis that changes in acoustic scenes are represented in fundamentally different ways depending on whether the change involves an appearing or disappearing object in the ongoing scene. Listeners were presented with scenes containing four or ten auditory objects, formed from rapid pure-tone sequences that each had a unique frequency and amplitude modulation rate. On some trials, one of these objects appeared at a variable time relative to scene onset while on other trials one object disappeared from the scene. An additional control condition involved scenes without a change. While MEG was collected, listeners were required to actively detect the changing objects. Our results show that listeners are quicker and more accurate in detecting appearing rather than disappearing objects. Underpinning this behavioral difference are change-evoked neural responses that are not only significantly larger and earlier for appearing objects but are also fundamentally different: the first observable responses to appearing and disappearing objects (peaking at ~50 ms and ~150 ms, respectively) were associated with distinct spatial patterns of MEG activity. These results suggest that appearing and disappearing objects are supported by distinct neural representations. One possible reason for this asymmetry is a high-level perceptual bias for appearing events (Cole and Kuhn, 2010). Another is that detecting disappearing objects is a computationally harder problem, necessarily requiring the prior
representation of the acoustic scene (Cervantes Constantino et al., 2012).
ID: 181
Neuronal entrainment to rhythm in the gerbil inferior colliculus Vani Rajendran1, Jose Garcia-‐Lazaro2, Nick Lesica2, Jan W H Schnupp1 1University of Oxford, United Kingdom; 2University College London, United Kingdom [email protected] While the perception of “beat” in rhythmic stimuli is a well-‐appreciated human ability, the neural mechanisms of beat perception are poorly understood. In a recent study (Nozaradan et al., 2012), human listeners were presented with rhythmic patterns consisting of pure tones and silent gaps while an electroencephalogram (EEG) was recorded. These patterns lacked a simple periodic structure but were nevertheless perceived as rhythmic, even though the perceived “beat” often fell on silent intervals. The spectrum of EEG activity revealed selective enhancement of beat-‐related frequencies and selective suppression of frequencies that did not correspond to the perceived rhythmic pattern; furthermore, these meter-‐related frequencies were enhanced even when the acoustic energy in the pattern was not predominant at these frequencies, suggesting that these EEG components reflect a neural correlate of the perceived beat. To test whether the neuronal entrainment to beat-‐related frequencies underlying human beat perception can be observed in other species, we recorded responses from the inferior colliculus (IC) of anaesthetized gerbils to pink noise stimuli with the same rhythmic patterns used by Nozaradan et al. An analysis of the gerbil IC local field potentials (LFP) shows striking similarity to the human EEG spectra obtained by Nozaradan et al, including the characteristic enhancement of beat-‐related frequencies and suppression of other frequencies. This surprising finding suggests that fundamental mechanisms of rhythm and beat processing are likely to be very similar across mammalian species, and reside in early stages of the ascending auditory pathway.
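The spectral analysis described by Nozaradan et al. amounts to comparing amplitude at beat-related versus unrelated frequencies in the response spectrum; a minimal numpy version applied to a placeholder LFP trace (beat frequency and sampling rate are assumed values):

import numpy as np

fs = 500                                   # Hz, assumed LFP sampling rate
t = np.arange(0, 60, 1 / fs)               # 60 s of recording
rng = np.random.default_rng(5)

# Placeholder LFP: weak entrainment at an assumed 2.4-Hz beat plus noise.
lfp = 0.5 * np.sin(2 * np.pi * 2.4 * t) + rng.normal(size=t.size)

spectrum = np.abs(np.fft.rfft(lfp)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def amplitude_at(f_target):
    return spectrum[np.argmin(np.abs(freqs - f_target))]

print(amplitude_at(2.4))                   # beat-related: enhanced
print(amplitude_at(3.1))                   # unrelated: near the noise floor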
ID: 182
Impaired neural representation of single-‐trial low-‐frequency speech information by children with dyslexia Alan Power, Natasha Mead, Lisa Barnes, Usha Goswami University of Cambridge, United Kingdom [email protected] Children with developmental dyslexia, a disorder of learning that impairs the fluency and accuracy of reading and spelling, have difficulties in oral phonological (sound structure) tasks across languages. This cognitive impairment in the “phonological representation” of word forms is considered causal to the developmental disorder, yet direct evidence for impaired neural encoding of speech is currently lacking. Here we present 800 noise-‐vocoded words to children with dyslexia, to age-‐matched typically-‐developing control children, and to younger reading-‐level matched control children in a word report task. Including younger typically-‐developing children matched in reading achievement to dyslexic children controls for the impact of reading experience on the brain. The accuracy of speech encoding was assessed via stimulus reconstruction in EEG. Children with dyslexia showed significantly poorer word report compared to CA controls, performing like younger RL children. Reconstruction of the amplitude modulations in the speech was assessed in 5 frequency bands spanning 0 – 10 Hz. Compared to CA controls, children with dyslexia showed significantly impaired encoding in the 0 – 2 Hz (delta) band, but not in any other low frequency band. RL controls performed like CA controls and better than children with dyslexia, suggesting that reduced reading experience was not driving the dyslexic impairments. Individual differences in speech envelope encoding accuracy were significantly correlated with phonological awareness (sensitivity to syllable stress) and accuracy of word report. These results provide the first evidence that the neural representation of the speech envelope is impaired in the delta band in dyslexia. Impaired delta band encoding would affect the prosodic representation of speech, and the representation of syllable stress and syllable boundaries. Consequently this neural impairment would affect the
phonological representation of word forms in the mental lexicon in dyslexia across all languages, not just English, offering a possible cross-linguistic neural cause of this disorder.
ID: 183
Temporal predictability as a grouping cue in the perception of auditory streams Vani G Rajendran1, Nicol S Harper1, Benjamin D Willmore1, William M Hartmann2, Jan W H Schnupp1
1University of Oxford, United Kingdom; 2Michigan State University, United States of America [email protected] The process of parsing acoustic stimuli into coherent “streams” of sound is referred to as auditory streaming. While numerous studies have implicated stimulus properties such as rate, frequency separation, and temporal coherence in the propensity for auditory streams to segregate, relatively little is known about how temporal regularity of a sound affects stream perception. We report a role of temporal regularity in the perception of auditory streams. Listeners were presented with two-tone sequences in an A-B-A-B rhythm that was either regular or had a controlled amount of temporal jitter added independently to each of the B tones. Subjects were then asked to report whether they perceived one or two streams. The percentage of trials in which two streams were reported substantially and significantly increased with increasing amounts of temporal jitter. This finding suggests that temporal predictability of tones may serve as a binding cue during auditory scene analysis.
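The stimulus manipulation is easy to reproduce: A tones stay isochronous while each B tone receives independent jitter. Onset times are in seconds and all parameter values below are assumed.

import numpy as np

def abab_onsets(n_pairs=20, ioi=0.25, jitter=0.0, seed=0):
    # A tones at 0, 2*ioi, 4*ioi, ...; B tones nominally midway between.
    # Each B onset is shifted by an independent uniform jitter of up to
    # +/- `jitter` seconds; jitter=0 reproduces the regular rhythm.
    rng = np.random.default_rng(seed)
    a_onsets = np.arange(n_pairs) * 2 * ioi
    b_onsets = a_onsets + ioi + rng.uniform(-jitter, jitter, size=n_pairs)
    return a_onsets, b_onsets

_, b_regular = abab_onsets(jitter=0.0)
_, b_jittered = abab_onsets(jitter=0.06)   # 60-ms maximum jitter (assumed)
print(b_regular[:3], b_jittered[:3])

ID: 184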
Using adaptation to investigate the neural mechanisms of attention in the human auditory cortex Jessica de Boer, Sarah Gibbs, Katrin Krumbholz MRC Institute of Hearing Research Nottingham, United Kingdom [email protected] Single neuron recordings in auditory cortex have suggested that attention causes a sharpening of neural frequency selectivity [1]. In humans, neuroimaging studies have reported similar effects when using notched-‐noise masking to estimate cortical frequency selectivity [2]. There is a possibility, however,
that these results were confounded by differences in attentional load between different masking conditions. Here, we tested the effect of selective attention on cortical frequency tuning directly, using an adaptation paradigm previously used in the visual system [3]. In this paradigm, the feature selectivity of cortical neurons is assessed by measuring the degree of stimulus-‐specific adaptation of the gross evoked response as a function of the difference between the adapting stimulus and the subsequently presented probe stimulus. If attention causes an increase in neural selectivity, it would be expected that adaptation becomes more stimulus specific, that is, more strongly dependent on the adapter-‐probe difference. Auditory-‐evoked potentials (AEPs) were recorded from 18 participants performing a dichotic listening task. Tone sequences comprising four equally probable frequencies were presented to one ear, while, concurrently, a sequence of waxing or waning noises was presented to the other ear. The participants were instructed to attend either the tones or the noises (alternating every 2.5 min) and to detect rare oddballs within the attended stimulus sequence. Only AEPs to the tones were recorded. As expected, AEP amplitude was significantly larger when the tones were attended than when they were ignored. Furthermore, the effect of attention on AEP amplitude was significantly greater when the evoking tone was immediately preceded by a tone of a different frequency than when it was preceded by a tone of the same frequency. This suggests that the adaptation caused by preceding tones was more frequency-‐specific when the tones were attended than when they were unattended, implying that selective attention caused an increase in cortical frequency selectivity. [1] Fritz J., Shamma S., Elhilali M. and Klein D. (2003). Nat. Neurosci., 6 (11), 1216-‐1223. [2] Ahveninen J., Hamalainen M., Jaaskelainen I.P., Ahlfors S.P., Huang S. and Lin F.H. (2011). Proc. Natl. Acad. Sci. USA, 108, 4182-‐4187. [3] Murray S.O. and Wojciulik E. (2004). Nat. Neurosci.,7, 70-‐74.
ID: 185
Neuromodulatory effects of vagus nerve stimulation in the rat auditory cortex and thalamus Rie Hitsuyu1, Tomoyo I. Shiramatsu1, Takahiro Noda1, Ryohei Kanzaki1, Takeshi Uno1, Kensuke Kawai2, Hirokazu Takahashi1 1The University of Tokyo, Japan; 2NTT Medical Center Tokyo, Japan [email protected] Vagus nerve stimulation (VNS) causes neuromodulatory effects in the cerebral cortex, which are useful not only for therapy for intractable epilepsy but also for enhancement of higher brain functions such as cognition and memory. However, the mechanisms of action of VNS are poorly understood. In this study, we examined whether and how VNS modulates auditory-evoked activity in the auditory cortex and thalamus. VNS implantation was performed in 6 rats at postnatal weeks 10-13. A week after the implantation, neural activities in the auditory cortex and medial geniculate body were investigated under isoflurane anesthesia. A surface microelectrode array was used to epipially map the auditory evoked potentials (AEP). In addition, a depth microelectrode array was used to measure multi-unit activities (MUA) in the auditory cortex and thalamus simultaneously. Before and after VNS was applied, auditory-evoked neural activities were characterized in terms of stimulus-specific adaptation (SSA), reproducibility, and temporal response properties. VNS was a train of 0.5-mA pulses at 10 Hz for 30 s, and the train was delivered every 5 min throughout the experiments. Neural measurements were carried out between VNS deliveries. We found that VNS modulated SSA in cortical AEP such that strong SSA decreased while weak SSA increased, suggesting that VNS reduced variation of SSA in the auditory cortex. We also found that, in the auditory cortex, VNS decreased the Fano factor of tone-evoked MUA, i.e., a measure of reproducibility, when the test frequency was close to the characteristic frequency (CF), but had little effect for off-CF tones. VNS also modified a temporal response property in the auditory cortex such that the number of spikes was significantly reduced in
response to fast click trains. In the thalamus, on the other hand, VNS is likely to have less effect on reproducibility and temporal properties than in the cortex. These results suggest that VNS may have profound effects on the auditory-evoked neural activities specifically in the cortex. This work was partially supported by SCOPE (121803022) and KAKENHI (25135710, 26242040).
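The Fano factor used above as a reproducibility measure is the variance-to-mean ratio of the spike count over repeated presentations; the counts below are invented.

import numpy as np

def fano_factor(spike_counts):
    # Variance of the spike count across trials divided by its mean;
    # lower values indicate more reproducible responses.
    spike_counts = np.asarray(spike_counts, float)
    return spike_counts.var(ddof=1) / spike_counts.mean()

# Toy counts for a near-CF tone: a drop in the Fano factor after VNS
# corresponds to the increased reproducibility reported above.
print(fano_factor([8, 15, 5, 12, 9, 16, 4, 11]))    # before VNS
print(fano_factor([10, 11, 9, 12, 10, 11, 9, 10]))  # after VNS

ID: 186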
To integrate the unknown: Touching your lips, hearing your tongue, seeing my voice Avril Treille, Coriandre Vilain, Jean-Luc Schwartz, Marc Sato GIPSA-lab; Grenoble, France Avril.Treille@gipsa-lab.grenoble-inp.fr Seeing the articulatory gestures of the speaker significantly enhances auditory speech perception. A key issue is whether cross-modal speech interactions only depend on well-known auditory and visual modalities or, rather, might also be triggered by other sensory sources less common in speech communication. The present electro-encephalographic (EEG) and functional magnetic resonance imaging (fMRI) studies aimed at investigating cross-modal interactions between auditory, haptic, visuo-facial and visuo-lingual speech signals during the perception of others’ and our own production. In a first EEG study (n=16), auditory evoked potentials were compared during auditory, audio-visual and audio-haptic speech perception through natural dyadic interactions between a listener and a speaker. Shortened latencies and reduced amplitude of early auditory evoked potentials were observed during both audio-visual and audio-haptic speech perception compared to auditory speech perception, providing evidence for early integrative mechanisms between auditory, visual and haptic information. In a second fMRI study (n=12), the neural substrates of cross-modal binding during auditory, visual and audio-visual speech perception in relation to either facial or tongue movements of a speaker (recorded by a camera and an ultrasound system, respectively) were determined. In line with the sensorimotor nature of speech perception, common overlapping activity was observed for
both facial and tongue-‐related speech stimuli in the posterior part of the superior temporal gyrus/sulcus as well as in the premotor cortex and in the inferior frontal gyrus. In a third EEG study (n=17), auditory evoked potentials were compared during the perception of auditory, visual and audio-‐visual stimuli related to our own speech gestures or those of a stranger. Apart from a reduced amplitude of early auditory evoked potentials during audio-‐visual compared to auditory and visual speech perception, a self-‐advantage was also observed with shortened latencies of early auditory evoked potentials for self-‐related speech stimuli. Altogether our results provide evidence for bimodal interactions between auditory, haptic, visuo-‐facial and visuo-‐lingual speech signals. They further emphasize the multimodal nature of speech perception and demonstrate that multisensory speech perception is partly driven by sensory predictability and by the listener’s knowledge of speech production. ID: 187
Transformation of temporal plasticity from auditory midbrain to auditory cortex Maike Vollmer1, Ralph E. Beitel2, Patricia A. Leake2 1University Hospital Wuerzburg, Germany; 2University of California San Francisco, United States of America [email protected] Primary auditory cortex (AI) is functionally altered by learning-induced temporal plasticity in profoundly deaf cats (Beitel et al., 2011; Vollmer and Beitel, 2011). To characterize temporal plasticity induced by behavioral training within the auditory midbrain, we studied temporal processing properties of neurons in the central (ICC) and external (ICX) nuclei of the inferior colliculus (IC) in animals with different deafness and prosthetic hearing histories. We then compared the IC results with temporal processing properties of AI neurons recorded in the same animals. In neonatally deafened juvenile cats and long-deaf cats (≥3.5 yr), we provided behaviorally irrelevant, passive electric cochlear implant stimulation, allowing us to fully control their auditory experience. Some of these animals received additional signal-detection behavioral training; a subgroup of long-deaf animals received no auditory stimulation throughout their lifetimes. Adult-deafened, passively stimulated cats served as controls. In the ICX, a marginal deafness effect was observed, but electric stimulation had no effect on temporal processing. In the ICC and AI, temporal following and latency were degraded by long-term deafness. Passive stimulation remediated these functional deficits in the ICC and, to a lesser extent, in AI. Compared to passive stimulation alone, behaviorally relevant stimulation had little or no effect on neuronal response properties in the ICC but significantly enhanced temporal processing in cortical field AI of both deaf juvenile and long-deaf cats. This study is unique in providing a direct comparison, in the same animals, of experience-induced temporal plasticity at different levels of the auditory system. Our results suggest that a basic transformation in neuronal processing occurs between auditory midbrain and forebrain, and auditory cortex emerges as a pivotal site for behaviorally driven temporal plasticity in the profoundly deaf cat. Supported by N01-DC-3-1006 and HHS-N-263-2007-00054-C (PI: P.A. Leake), NIDCD R01-DC-02260 (PI: C.E. Schreiner) and by DFG Vo 640/1-1 (PI: M. Vollmer). ID: 188
Spatial processing of cortical neurons in the primary auditory cortex after cholinergic basal forebrain lesion in ferrets Fernando R Nodal, Nicholas Leach, Peter Keating, Johannes Dahmen, Andrew J. King, Victoria M. Bajo Oxford University, United Kingdom [email protected] Cortical acetylcholine release has been implicated in different cognitive functions including sensory plasticity. We have recently shown that cortical cholinergic innervation is also necessary for normal auditory perception (J Neurosci 2013. 33: 6659-71). To explore whether the behavioural sound localisation deficits observed in ferrets with reduced cortical cholinergic inputs were due to changes in the spatial sensitivity of cortical neurons, we recorded neural activity in the primary auditory cortex (A1) of 3 animals in which the nucleus basalis had previously been lesioned bilaterally. Neural activity was recorded from 146 penetrations in the left and right A1 under anaesthesia using silicon NeuroNexus probes (single shank, 16 recording sites). Diminished cholinergic innervation was achieved by prior bilateral injections of the enzyme saporin into the nucleus basalis. Histological analysis after the recording sessions confirmed a mean loss of cholinergic cells in the nucleus basalis of 89.3±7.1% when compared to control animals. This resulted in a significant reduction of cholinergic fibres across the entire auditory cortex, especially in the middle ectosylvian gyrus where A1 is located. The location of the penetrations at the time of the recordings, together with their electrophysiological characteristics (mean latency typically ≤20 ms, frequency tuning, and mainly onset responses), allowed us to assign the recordings to primary auditory cortex (A1). Frequency tuning was used to ensure an even sampling across the tonotopic axis of A1. Spatial tuning was determined using virtual acoustic space techniques, with 200-ms broadband noise presented at three different intensities (56, 70 and 84 dB SPL) from 12 locations separated by 30° in azimuth at the interaural elevation. Onset response spike counts (40-ms time window) were used to build response plots and to calculate the overall spatial preference as the vector sum of responses to all spatial positions. Most of the units responded to all spatial locations and showed broad spatial tuning. Overall direction vectors indicated a contralateral preference for most units, with a much lower incidence of ipsilateral preference. The majority of the spatial preference vectors were oriented towards the front hemifield (±60°) regardless of lateral hemispheric preference. These spatial profiles are consistent with the population coding of spatial location described in normal animals, and their properties appear sufficient to support accurate sound localisation. ID: 189
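The overall spatial preference described above, computed as the vector sum of onset spike counts across speaker positions, can be sketched as follows. The spike counts are hypothetical; this is not the authors' analysis code.

```python
import numpy as np

def spatial_preference(azimuths_deg, spike_counts):
    """Overall spatial preference as the vector sum of onset spike counts
    across speaker positions; returns preferred angle and vector strength."""
    az = np.deg2rad(np.asarray(azimuths_deg, dtype=float))
    r = np.asarray(spike_counts, dtype=float)
    x, y = np.sum(r * np.cos(az)), np.sum(r * np.sin(az))
    angle = np.rad2deg(np.arctan2(y, x))       # preferred azimuth (degrees)
    strength = np.hypot(x, y) / np.sum(r)      # 0 = untuned, 1 = sharply tuned
    return angle, strength

# 12 virtual-space locations separated by 30 degrees (hypothetical counts)
azimuths = np.arange(0, 360, 30)
counts = np.array([5, 4, 3, 2, 2, 3, 4, 6, 9, 12, 10, 7])
print(spatial_preference(azimuths, counts))
```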
Haemodynamic pattern analysis and direct electrical recording of human brain activity during working memory for tones Sukhbinder Kumar1, Philip Gander2, Sabine Joseph3, Andrea R. Halpern4, Masud Husain5, Kirill V. Nourski2, Hiroyuki Oya2, Hiroto Kawasaki2, Matthew A. Howard2, Timothy D. Griffiths1 1Newcastle University, United Kingdom; 2University of Iowa, United States of America; 3University College London, United Kingdom; 4Bucknell University, United States of America; 5Oxford University, United Kingdom [email protected] We carried out pattern analysis of the fMRI BOLD response and recorded local field potentials (LFPs) from the human brain during auditory working memory (WM) for tones. Subjects were presented with a low tone (<600 Hz) and a high tone (>2 kHz) in random order after a visual cue. Another cue then informed the subjects which tone (first or second) to keep in mind. A retention period was followed by a test tone and a same/different decision indicated by button press. The retention period was 16 s for fMRI and 3 s for neurophysiology. fMRI BOLD data from sixteen subjects were acquired at 3 T and subjected to separate multivoxel pattern analyses (MVPA) to demonstrate areas in which activity predicted the tone perceived during perception or the tone that was held in mind during retention. For the group, significant classifier performance over the whole retention period was demonstrated in Heschl's Gyrus (HG) and Planum Temporale (PT). Preliminary individual 'searchlight' analyses of the whole brain with a sliding temporal window show significant classifier performance in HG and PT during both perception and retention, and suggest significant performance in the Inferior Frontal Gyrus and Hippocampus during retention only. We recorded LFPs from two human subjects. The subjects were implanted with depth electrodes along the axis of HG and subdural grids over temporal and frontal cortex. We measured average ERPs and carried out single-trial time-frequency analysis. For tone perception, category-specific evoked responses and increases in gamma-band power (60-120 Hz) were demonstrated in both subjects 100 ms after stimulus onset in HG and adjacent contacts in the Superior Temporal Gyrus (STG). High tones elicited stronger responses in medial HG and low tones in lateral HG. For tone retention, sustained theta-band (2-6 Hz) activity was observed in the same contacts that showed gamma-band responses during perception. The theta power in HG showed a recency effect: the power was greater when the second tone was retained in WM. Theta power in STG showed the opposite effect. The data support: 1) a network for auditory working memory maintenance that includes auditory cortex, frontal cortex and hippocampus; 2) theta-band correlates of tone retention in auditory cortex in the same neural ensembles that are active in the gamma band during perception; 3) a neurophysiological basis in the auditory cortex for interference effects within tonal working memory. ID: 190
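One common way to extract the kind of single-trial band-limited power reported above (for example, sustained theta) is a band-pass filter followed by the squared Hilbert envelope. The sampling rate and data below are hypothetical, and the authors' actual time-frequency method is not specified in the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_power(lfp, fs, band=(2.0, 6.0)):
    """Single-trial band power over time: band-pass filter, then squared
    Hilbert envelope. One way to quantify sustained theta activity."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    envelope = np.abs(hilbert(sosfiltfilt(sos, lfp)))
    return envelope ** 2

fs = 500                                     # Hz, hypothetical sampling rate
t = np.arange(0, 3.0, 1 / fs)                # e.g. the 3-s retention period
lfp = np.sin(2 * np.pi * 4 * t) + 0.5 * np.random.randn(t.size)
print(band_power(lfp, fs).mean())
```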
Probing the physiology of perception: Invariant neural responses in ferret auditory cortex during vowel discrimination Stephen M. Town, Katherine C. Wood, Huriye Atilgan, Gareth P. Jones, Jennifer K. Bizley University College London, United Kingdom [email protected] Perceptual invariance is the ability to recognize an object despite variation in sensory input. For example, we can recognize phonetic components, such as the vowel "u", across talkers with different voice pitches. However, it is unclear how the brain supports perceptual invariance and, specifically, whether neurons in auditory cortex extract invariant representations of vowel identity across pitch. Here we study an animal model of perceptual invariance in which ferrets (n=4) were trained in a two-alternative forced choice task to discriminate synthetic vowel sounds. On each trial of the task, the subject was presented with two vowel sounds and required to respond at a particular location depending on vowel identity. Across trials, vowel pitch was varied and ferrets were required to generalize across pitch variation. During task performance, multi-unit activity was recorded from microelectrodes positioned in auditory cortex. We recorded units responsive to vowels (those whose firing rate during vowel presentation differed by more than 3 standard deviations from mean spontaneous activity). For each session in which a responsive unit was recorded, we asked whether it was possible to decode vowel identity from the spiking responses observed across all pitches. Using a bootstrap procedure to test significance, many units were found to encode information about vowel identity across pitch. We also decoded vowel pitch across all vowel identities and found that, again, a large proportion of units provided information about vowel pitch as well as vowel identity. By comparing classification of spiking responses over different time windows and temporal resolutions, it was possible to estimate the time scales over which information about vowel identity and pitch was signaled. We found that classification of vowel identity tended to occur earlier in the sound than classification of pitch. Comparing classification across sessions with the best frequencies of units showed that units tuned to lower frequencies tended to classify pitch better, although similar correlations were not observed when decoding vowel identity. Our results show that auditory cortical neurons may offer a physiological substrate for invariant perceptual representations of sound and that information about multiple sound features may be represented in auditory cortex during behavior, even when such features are irrelevant for task performance. ID: 191
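The responsiveness criterion stated above (evoked rate more than 3 standard deviations from mean spontaneous activity) can be written directly. The per-trial firing rates below are hypothetical.

```python
import numpy as np

def is_responsive(evoked_rates, spont_rates, n_sd=3.0):
    """Responsiveness criterion of the kind described above: mean evoked
    firing rate more than n_sd standard deviations from mean spontaneous rate."""
    spont_mean = np.mean(spont_rates)
    spont_sd = np.std(spont_rates, ddof=1)
    return abs(np.mean(evoked_rates) - spont_mean) > n_sd * spont_sd

# Hypothetical per-trial firing rates (spikes/s)
spont = np.random.normal(5.0, 1.0, size=100)
evoked = np.random.normal(9.5, 2.0, size=40)
print(is_responsive(evoked, spont))
```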
Neural circuitry underlying contrast gain control in primary auditory cortex James Cooke, Benjamin Willmore, Jan Schnupp, Andrew J. King Oxford University, United Kingdom [email protected] While sensory environments can vary dramatically in their statistics, neurons have a limited dynamic range with which they can encode sensory information. In sensory cortex, this problem is resolved by the systematic adjustment of neural gain in accordance with the contrast of the sensory input. The biophysical basis of this computation has been studied extensively in visual cortex, where parvalbumin-positive (PV+) interneurons have been implicated in the adjustment of neural gain. It is not known, however, whether these cortical circuits represent a canonical feature of the cortex or whether they are vision-specific. We aim to address this issue by investigating the circuit basis of contrast gain control in primary auditory cortex (A1). Our approach is to perform large-scale extracellular recordings across all layers of A1 in anaesthetised mice and to optogenetically manipulate the activity of PV+ interneurons, in order to test their involvement in this computation. ID: 192
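The abstract does not specify a model, but the gain adjustment it refers to is often described as divisive. The toy sketch below, with made-up parameters, only illustrates the general idea that the slope of a level-response function can scale inversely with stimulus contrast; it is not the authors' model.

```python
import numpy as np

def gain_controlled_response(level_db, contrast, r_max=50.0, k=0.1):
    """Toy divisive gain control: the slope (gain) of the level-response
    function scales inversely with stimulus contrast, so high-contrast
    input is encoded with a shallower function spanning a wider range."""
    gain = k / contrast                      # higher contrast -> lower gain
    return r_max / (1.0 + np.exp(-gain * (level_db - 60.0)))

levels = np.linspace(30, 90, 7)
print(gain_controlled_response(levels, contrast=0.5))   # steep (high gain)
print(gain_controlled_response(levels, contrast=2.0))   # shallow (low gain)
```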
Neural entrainment is less responsive to attentional demands in older listeners Molly J. Henry, Björn Herrmann, Jonas Obleser Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Germany [email protected] Increasing age is accompanied by decreasing speech comprehension performance in the presence of background noise that cannot be fully explained by peripheral hearing loss. For young, normal-hearing listeners, separating speakers in a "cocktail party" capitalizes on neural synchronization with attended speech. Thus, an intriguing but as yet untested possibility is that age-related speech-comprehension deficits might be attributable to changes in the fidelity or flexibility of neural entrainment. The current electroencephalography study characterized differences in entrainment capacities between younger and older adults under varying attentional demands. Younger (age 18-35 years) and older (age 65+ years) participant groups listened to 10-s frequency-modulated sounds (FM = 2.8 Hz). Participants completed two sessions in counterbalanced order; in one session they passively listened to the sounds (passive), while in the other they detected the presence of near-threshold gaps (active); gaps were present in both stimulation conditions, although their presence was only relevant during active task performance. In order to characterize differences in onset-evoked neural responses between age groups, the same participants completed a passive-listening block in which they were presented with an 8-minute tone sequence in which each tone randomly took on one of five frequencies. This block of stimulation was presented between the active and passive blocks of the frequency-modulated stimuli. Overall, entrained neural responses were larger when participants attended to the stimuli compared to during passive stimulation. However, the difference in entrainment strength between active and passive stimulation was more pronounced in younger than in older participants. This suggests that older participants' entrained neural responses are less flexible under changing attentional demands. With respect to evoked responses, older adults exhibited a larger N1 component than younger adults during passive tone stimulation, but a smaller P2. Gap-detection hit rates were modulated by FM-stimulus phase, but the degree of performance modulation was similar across age groups. These results reveal critical age-related changes in the way that entrained neural responses adapt to changing attentional demands. ID: 193
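Entrainment strength at a known stimulation frequency such as the 2.8-Hz FM rate above is commonly quantified as inter-trial phase coherence. The sketch below runs on synthetic data; the authors' exact entrainment measure is not stated in the abstract.

```python
import numpy as np

def inter_trial_coherence(trials, fs, f_stim=2.8):
    """Phase consistency across trials at the FM stimulation frequency:
    one common way to quantify neural entrainment strength."""
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    k = np.argmin(np.abs(freqs - f_stim))       # FFT bin nearest f_stim
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))  # 0 = random, 1 = perfect

fs, dur, n_trials = 250, 10.0, 60                # hypothetical EEG parameters
t = np.arange(0, dur, 1 / fs)
trials = np.sin(2 * np.pi * 2.8 * t + 0.3) + np.random.randn(n_trials, t.size)
print(inter_trial_coherence(trials, fs))
```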
Effect of selective attention and streaming in bilateral concurrent sound segregation Anahita H. Mehta1, Andrew J. Oxenham2, Ifat Yasin1, Shihab Shamma3 1University College London, London, United Kingdom; 2University of Minnesota, United States of America; 3University of Maryland, College Park, United States of America & Ecole Normale Supérieure, France [email protected] We aim to investigate the neural correlates and the underlying processes resulting in the ambiguous percept produced by a stimulus similar to one described in Deutsch's "octave illusion" (Deutsch, 1974). Each ear was presented with a sequence of alternating pure tones of low and high frequencies. The same sequence was presented to each ear, but in opposite phase, such that the sequence in the left ear could be a High-Low-High… pattern whereas the sequence in the right ear was a Low-High-Low… pattern. Subjects were cued to focus on a particular frequency and side, as indicated by a priming sequence of tones that were either all low or all high in frequency and were presented either to the left or the right ear. The illusion reported by Deutsch is that subjects hear an alternating pattern of low and high tones, with all the low tones lateralized to one side and all the high tones lateralized to the other side. By instructing subjects to listen to a particular frequency and side, we were able to elicit four different percepts for the same stimulus, thus allowing us to study the neural correlates of streaming and selective attention. The first EEG and psychophysics study tested the subjects' ability to attend selectively to the target sequence. Subjects were asked to detect target amplitude deviants that were presented at varying positions. Analyses of the EEG recordings indicated a differential pattern of activity that systematically reflected the attended percept. The behavioural data indicated that the subjects could readily attend to the target sequence. The second psychophysics study investigated the effect of frequency separation on the deviant detection task. We tested subjects at four different frequency separations and observed a systematic increase in performance with increasing frequency separation. Lastly, we conducted a series of experiments to further understand which of the four tones contribute to the percept of alternating low and high tones that arises from this particular type of stimulus. We observed interesting and unexpected misattributions of the stimuli in time and location, extending the original illusion reports of Deutsch. Overall, using these illusory auditory stimuli, we find that attention and expectation modulate stimulus-driven responses in auditory cortex, and that their effects can be reliably accessed with continuous EEG recordings. Work supported by UCL Overseas and Graduate Research Scholarships and NIH grant R01DC07657. ID: 194
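A minimal sketch of the dichotic stimulus construction described above follows, using Deutsch's classic 400/800-Hz values for the low and high tones; the exact frequencies and timing used in this study are not stated in the abstract, so all parameters here are illustrative.

```python
import numpy as np

def octave_illusion_sequence(n_tones=8, f_low=400.0, f_high=800.0,
                             tone_dur=0.25, fs=44100):
    """Dichotic sequence of the Deutsch type: both ears alternate between a
    low and a high tone, in opposite phase (left H-L-H..., right L-H-L...)."""
    t = np.arange(0, tone_dur, 1 / fs)
    low = np.sin(2 * np.pi * f_low * t)
    high = np.sin(2 * np.pi * f_high * t)
    left = np.concatenate([high if i % 2 == 0 else low for i in range(n_tones)])
    right = np.concatenate([low if i % 2 == 0 else high for i in range(n_tones)])
    return np.stack([left, right], axis=1)      # (samples, 2) stereo array

stereo = octave_illusion_sequence()
print(stereo.shape)
```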
Single unit and LFP activity in rostral superior temporal cortex during auditory short-term memory Brian Hayward Scott1, Corrie R. Camalier1, Mortimer Mishkin1, Pingbo Yin2 1National Institute of Mental Health, Bethesda, United States of America; 2University of Maryland, College Park, United States of America [email protected] Short-term memory (STM) for visual stimuli has been shown to engage the modality-specific cortical areas that support visual perception. Although monkeys can perform auditory STM tasks, their ability is limited relative to that in vision, and appears to depend on retention of a stimulus trace in a passive form of STM. The neural underpinnings of this putative trace are unknown, but are likely to engage non-primary auditory cortex, e.g., the rostral superior temporal plane and gyrus, components of the ventral auditory processing stream. We recorded single-unit activity and local field potentials (LFPs) across these regions while monkeys performed a serial delayed-match-to-sample (DMS) task. On each trial, the monkey grasped a bar to initiate the presentation of a sample sound, followed by 0-2 nonmatch sounds (delay interval ~1 s), before the sample was presented again as a match; the monkey released the bar to indicate a match. In the unit activity, we identified two phenomena potentially associated with mnemonic tasks. First, 35% of units exhibited a sustained change in firing rate (excitation or suppression) during the delay interval. Second, the auditory response was modulated by task context in 20% of units, with 7.5% showing match enhancement (relative to the sample presentation) and 12.5% showing match suppression. Delay and match suppression were observed throughout the trial, but the proportion of units exhibiting delay or match excitation declined significantly after the presentation of the first nonmatch sound (coincident with a marked increase in behavioral error rate). These characteristics were mirrored in the LFP power. During DMS, the LFP response was modulated by the task context in which the sound appeared – sample presentations evoked larger power increases than nonmatch or match presentations across all power bands, an effect which was also apparent in the averaged response of the single-unit population. During the first delay period, LFP power at a given site could be suppressed or enhanced relative to the pre-trial baseline. By contrast, only suppression was observed in the second delay period following a nonmatch stimulus. The delay-period modulation in the LFP spanned multiple frequency bands, suggesting that the suppression is a network-wide effect. Taken together, we find that evoked LFPs are modulated by task demands and complement the mnemonic effects observed in single-unit activity. ID: 195
The relevance of homo- and heteroscedasticity for the averaging of auditory-evoked MEG and EEG responses Reinhard König1, Artur Matysiak1, Wojciech Kordecki2, Cezary Sielużycki1,3, Norman Zacharias1, Peter Heil1 1Leibniz Institute for Neurobiology Magdeburg, Germany; 2University of Business in Wrocław, Poland; 3Team Normal and Abnormal Motor Control, ICM Brain and Spine Institute, UPMC (Paris 6), INSERM, CNRS, Paris, France [email protected] In MEG and EEG data analyses, it is common practice to arithmetically average event-related magnetic fields (ERFs) or electric potentials (ERPs) across single trials and subsequently across subjects to obtain the so-called grand mean. Comparisons of grand means, for example between conditions, are then often performed by subtraction. These operations, and their statistical evaluation by parametric tests like ANOVA, tacitly rely on the assumption that the data follow the additive model, are normally distributed, and have homogeneous variance. This may be true for single trials, but these conditions are rarely met when comparing ERFs/ERPs between subjects, meaning that the additive model is seldom the correct model for computing grand-mean waveforms. We show, using MEG and EEG responses from the auditory cortex, that the non-normal distributions and the heteroscedasticity observed instead arise because ERFs/ERPs follow the mixed model, with additive and multiplicative components. For peak amplitudes, like the auditory M100 and N100, the multiplicative component dominates. Application of the asinh-transform to data following the mixed model transforms them into the required additive model, with its normal distribution and homogeneous variance. Our findings question the common practice of simply subtracting arithmetic means of auditory-evoked ERFs or ERPs without proper transformation of the data. They should thus have widespread implications.
This work was supported by the Deutsche Forschungsgemeinschaft (SFB-TRR 31 A6; KO1713/10-1; HE1721/10-1).
ID: 196
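A small simulation can make the argument above concrete: under a mixed model with additive and multiplicative components, the between-subject variance grows with the mean, and the asinh-transform approximately removes this heteroscedasticity. All parameter values below are hypothetical; this is only a sketch of the stated idea, not the authors' model fit.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_model(signal, n_subjects=200, sd_mult=0.4, sd_add=0.05):
    """Mixed model sketched in the abstract: y = s*(1 + eta) + eps, with a
    multiplicative (eta) and an additive (eps) noise component."""
    eta = rng.normal(0.0, sd_mult, n_subjects)
    eps = rng.normal(0.0, sd_add, n_subjects)
    return signal * (1.0 + eta) + eps

for s in (0.5, 1.0, 2.0):        # e.g. three hypothetical peak amplitudes
    y = mixed_model(s)
    # Raw variance grows with the mean (heteroscedasticity), whereas the
    # asinh-transformed variance is approximately stabilized.
    print(f"s={s}: var(y)={y.var():.3f}  var(asinh(y))={np.arcsinh(y).var():.3f}")
```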
Neuroimaging of tonotopic maps in humans using the travelling wave paradigm: A puzzling finding Dave Langers1, Rosa Sanchez-Panchuelo1, Susan Francis1, Julien Besle2, Katrin Krumbholz2, Richard Bowtell1, Deborah Hall1 1University of Nottingham, United Kingdom; 2MRC Institute of Hearing Research Nottingham, United Kingdom [email protected] Tonotopy is one of the most dominant organisational principles in the central auditory system. It is not just relevant for our understanding of frequency processing, but also for distinguishing functional subdivisions in the brain. In contrast with other sensory systems (e.g. retinotopy in human visual cortex) and in contrast with animals (e.g. tonotopy in non-human primates), the tonotopic organisation of the human auditory cortex remains relatively poorly understood. Still, more than a dozen functional magnetic resonance imaging (fMRI) studies have reported highly consistent best-frequency maps. As a result, the field is currently moving towards a consensus regarding their large-scale interpretation. (For a recent review, see Saenz & Langers, 2014. Hear. Res. 307, 42–52.) As part of an investigation into optimal mapping paradigms, we carried out a study to determine the response to unattended 60-dB-SL jittered tone sequences sweeping between 125 and 8000 Hz. Brain images were collected using acquisition paradigms that interspersed series of 3-T fMRI acquisitions of 2.2-s duration with periods of scanner silence of [a] 8.8 s (i.e. sparse), [b] 2.2 s (i.e. gapped), or [c] 0.0 s (i.e. continuous). Fourier analysis was used to determine frequency tuning, encoded as the sinusoidal phase of the response to the sweep stimulus. All three acquisition paradigms revealed robust sound-evoked activation in bilateral auditory cortex and yielded consistent tonotopic maps that agreed with previous results in the literature. Effects of scanner noise were detectable in the form of response attenuation near stimulus frequencies that coincided with the peaks in the scanner noise spectrum. Responses progressively decreased from sparse to gapped to continuous paradigms, but this interference had only moderate effects on the resulting tonotopic maps. However, upon inspection of the underlying response dynamics, we observed unexpected anomalies. In particular, whereas low- and high-frequency regions in the auditory cortex showed peak responses towards the beginning and end of the sweep stimuli, as expected, moderately tuned regions did not peak near the middle of the sweeps. Instead, their response showed a bimodal behaviour, with peaks of similar strength towards the sweep's beginning and end. We interpret this puzzling observation to suggest that travelling wave paradigms may confound stimulus-dependent frequency tuning with time-dependent response hemodynamics in a non-trivial way. ID: 197
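The Fourier analysis described above can be sketched as follows: each voxel's best frequency is read off as the phase of its response at the sweep repetition frequency. Sampling parameters, sweep period and data below are hypothetical, not the study's actual values.

```python
import numpy as np

def response_phase_at_sweep(voxel_ts, fs, sweep_period):
    """Travelling-wave analysis sketch: the response phase at the sweep
    repetition frequency encodes each voxel's best frequency."""
    n = voxel_ts.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    k = np.argmin(np.abs(freqs - 1.0 / sweep_period))  # sweep-frequency bin
    spectrum = np.fft.rfft(voxel_ts, axis=-1)
    return np.angle(spectrum[..., k])                  # phase in radians

fs = 1.0 / 2.2                  # one volume per 2.2 s (continuous paradigm)
sweep_period = 66.0             # hypothetical sweep duration in seconds
ts = np.random.randn(100, 300)  # 100 voxels x 300 volumes, hypothetical data
print(response_phase_at_sweep(ts, fs, sweep_period).shape)
```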
A hierarchy of neural tuning to spectrally resolved pitch in human auditory cortex Katrin Krumbholz1, Jamila Andoh1,2, Antje Heinrich1, Gemma Hutchinson1, Robert J. Zatorre2 1MRC Institute of Hearing Research Nottingham, United Kingdom; 2Montreal Neurological Institute, Canada; *Authors KK and JA contributed equally to this work [email protected] The ability to perceive pitch is crucial for speech and music perception and is also important for segregating sounds from different sources. Pitch-evoking sounds are composed of harmonic frequency components. Musical sounds and voiced speech contain low-order harmonics, which are resolved by the cochlear filters, whereas high-order harmonics are unresolved. Resolved harmonics elicit a stronger, and more finely discriminable, pitch than unresolved harmonics. This has led to the hypothesis that resolved and unresolved pitch may be processed by different mechanisms. The aim of the current study was to test this hypothesis by measuring neural selectivity for resolved and unresolved pitch using an adaptation approach. Adaptation refers to the suppression of the response to a probe stimulus by a preceding adapter stimulus. Importantly, adaptation is stimulus-specific, and the degree of adaptation specificity would be expected to reflect the selectivity of the neuron populations responding to the adapter and probe: when the adapter and probe activate overlapping neuron populations, adaptation should be strong; when they activate disparate populations, adaptation should be weak. Here, adaptation was measured both psychophysically and with electro-encephalography (EEG). The adapter and probe were composed of iterated rippled noise, which is a type of harmonic sound with a noise-like waveform and adjustable pitch strength. The pitch difference between them was varied to measure adaptation specificity. In different conditions, the adapter and probe were either resolved or unresolved. We found that only in the resolved conditions was adaptation specific to the adapter pitch. Moreover, the degree of adaptation specificity differed markedly between different deflections of the auditory-evoked cortical potentials, which occur at different latencies and thus likely represent different stages of the cortical processing hierarchy. The adaptation specificity of the latest, and thus presumably highest-level, deflection was statistically indistinguishable from the specificity of the psychophysical adaptation effect. Our results suggest that (i) resolved, but not unresolved, harmonics are processed by neurons that are selective for pitch and (ii) the degree of pitch selectivity changes across the cortical processing hierarchy. ID: 198
Neural correlates of auditory pattern detection in humans Nicolas Barascud1, Timothy D. Griffiths2, Karl J. Friston1, Maria Chait1 1University College London, United Kingdom; 2Newcastle University Medical School, United Kingdom [email protected] A basic function of perception is to detect patterns in ongoing sensory input so as to facilitate the prediction of future events. We used functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) to study the neural mechanisms underlying listeners' sensitivity to the emergence and violation of complex acoustic regularities (characterized by repeating spectro-temporal patterns) in ongoing sound sequences. Stimuli were sequences of 50-ms tone pips containing transitions between random and regularly repeating frequency patterns. In the MEG experiment (N=16), the stimulus set consisted of four frequency patterns: REG – regularly repeating patterns of 10 tones (new patterns were generated for each trial); RAND – a sequence of tones of random frequencies; and REG-RAND and RAND-REG sequences, which contained a transition between REG and RAND patterns (transition time was jittered across trials). In the fMRI experiment (N=16; different subjects), participants were exposed to randomly alternating blocks of REG, RAND and 'silence' intervals of variable durations. In both experiments, subjects were kept naïve and performed an incidental visual decoy task. MEG data show that brain responses to the emergence of regularity in RAND-REG signals occurred from about 1.5 cycles of the regular pattern, consistent with behavioural and modelling results, and indicating that the brain rapidly detects regularities in ongoing input. Source analysis (based on MEG as well as fMRI data) revealed a network consisting of auditory cortical sources (primary auditory cortex bilaterally, extending in the left hemisphere along the superior bank of the superior temporal gyrus; STG) and left inferior frontal gyrus (IFG), demonstrating an interplay between early sensory processing and 'higher-level' frontal mechanisms in the course of regularity extraction, even in the absence of directed attention. This work was supported by Wellcome Trust grant 093292/Z/10/Z.
ID: 199
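A minimal sketch of the RAND-REG stimulus logic described above follows: a run of random tone-pip frequencies transitions into a freshly drawn 10-tone pattern that then repeats cyclically. The frequency pool, pip counts and sequence lengths below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
FREQS = np.logspace(np.log10(200), np.log10(2000), 20)   # hypothetical pool

def rand_reg_sequence(n_rand=60, cycle_len=10, n_cycles=6):
    """RAND-REG sequence of the kind described above: random tone-pip
    frequencies, then a new 10-tone pattern repeated cyclically."""
    rand_part = rng.choice(FREQS, size=n_rand)
    pattern = rng.choice(FREQS, size=cycle_len, replace=False)  # new per trial
    return np.concatenate([rand_part, np.tile(pattern, n_cycles)])

seq = rand_reg_sequence()
print(seq.size)          # frequency of each successive 50-ms tone pip
```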
Frequency selectivity of cortical adaptation: A comparative study in humans and guinea pigs Oscar Woolnough, Jessica de Boer, Katrin Krumbholz, Chris Sumner MRC Institute of Hearing Research Nottingham, United Kingdom [email protected] Adaptation, the reduction in neural responses to repeated stimuli, is a ubiquitous phenomenon throughout human and animal sensory systems. It has been shown that adaptation in auditory cortex is frequency-specific [1], with greater frequency separation between adapter and probe tones reducing the degree of adaptation. The aim of the current study was to compare the frequency specificity of adaptation in human and guinea pig (GP) auditory cortex, and to investigate how adaptation specificity depends on the properties of the adapter and probe stimuli. A recent study has shown that repeated presentation of adapters can enhance the frequency specificity of this adaptation [2]. Here we have characterised the effects of both adapter repetition and duration on the tuning of cortical adaptation. Adaptation was measured in auditory cortex, recorded through EEG in humans and via EEG and LFPs in GPs. Pure-tone adapter-probe sequences were presented diotically, with adapter frequencies 0, 0.5 and 1.5 octaves above the 1-kHz probe. We analysed the resultant auditory evoked potential (AEP) amplitudes, measured as the N1-P2 amplitude, and quantified adaptation as the reduction in response size for the adapted probe compared to the unadapted response. The human EEG results confirm the previous finding [2] that repeated adapter presentation increases the frequency specificity of adaptation. Our results further show how these effects depend on the number of repetitions, and that no sharpening is observed when adapters are prolonged rather than repeated. The GP EEG results show comparable trends. However, the frequency selectivity of adaptation was overall much greater in GPs than in humans (GPs showed a 50-105% reduction in adaptation over a 1.5-octave frequency separation vs 15-45% in humans). In fact, at the largest adapter-probe frequency separation, the GPs showed significant facilitation, increasing the AEP amplitude beyond the unadapted amplitude, whereas no facilitation was evident in humans. Our results suggest that, in both humans and GPs, the frequency tuning of adaptation sharpens with increasing numbers of adapting tones. However, there are apparent qualitative differences in EEG responses across species. Further work will investigate how the underlying representations in neural activity and local field potentials contribute to far-field potentials. [1] Brosch M., Schreiner CE. (1997) J Neurophysiol [2] Briley P., Krumbholz K. (2013) J Neurophysiol ID: 200
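The adaptation measure described above reduces to a simple fractional-reduction index on the N1-P2 amplitudes. The amplitude values below are hypothetical.

```python
import numpy as np

def adaptation_index(probe_adapted, probe_unadapted):
    """Adaptation quantified as above: fractional reduction of the probe
    N1-P2 amplitude relative to the unadapted response."""
    return 1.0 - probe_adapted / probe_unadapted

# Hypothetical N1-P2 amplitudes (uV) at 0, 0.5 and 1.5 octave separations
unadapted = 8.0
adapted = np.array([3.2, 4.8, 6.8])
print(adaptation_index(adapted, unadapted))   # adaptation falls off with Df
```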
Sensitivity to frequency modulation in ferret auditory cortex Ben Willmore1, Nicol Harper1, Josh McDermott2, Jan Schnupp1, Andrew J. King1 1University of Oxford, United Kingdom; 2Massachusetts Institute of Technology, United States of America [email protected] Previous work has characterised cortical neurons in terms of their sensitivity to frequency modulation (FM). Our own work, using dynamic random chords (DRCs) to estimate spectro-temporal receptive fields (STRFs) of cortical neurons, tends to produce STRFs with little obvious FM sensitivity. To understand this difference, we investigated the possibility that DRCs are particularly poor stimuli for characterising FM sensitivity (perhaps as a result of their very stereotyped spectrotemporal structure). To test this hypothesis, we compared DRCs to other sounds which might be better suited to characterising modulation sensitivity. We presented these sounds to anaesthetised ferrets while recording the responses of neurons in primary auditory cortex. The stimuli were: (1) standard DRCs with a chord length of 25 ms, (2) DRCs in which the chord length was randomised between 6.25 ms and 100 ms, (3) temporally orthogonal ripple combinations (TORCs), (4) modulated pink noise with a flat modulation spectrum, and (5) excerpts from a range of natural sounds. For each class of stimuli, we estimated regularised STRFs using either lasso (sparse) or ridge regression. To estimate the strength of FM sensitivity in each unit, we measured the separability of the STRFs. FM sensitivity should appear in the STRFs as diagonal structure, so units which are strongly tuned for FM should have relatively inseparable STRFs. Surprisingly, we find that STRF separability is not significantly different for DRCs and the other synthetic stimulus classes. This suggests that DRCs are no worse for characterising FM than the other synthetic stimuli. To determine whether important structure was missing from the STRFs, we measured their ability to predict neural responses to natural sounds. As expected, the STRFs estimated using natural sounds provided the best predictions of responses to natural sounds (since within-class predictions are generally higher than cross-class predictions). Amongst the STRFs estimated using synthetic stimuli, there were no significant differences in predictive power, indicating that the DRC predictions were no worse than those of the other classes. This suggests that the largely separable STRFs obtained using DRCs provide a good description of the responses of cortical neurons to both synthetic and natural stimuli, despite showing little sensitivity to frequency modulation. ID: 202
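STRF separability of the kind measured above is typically computed from the singular value decomposition of the (frequency × time) STRF matrix: the share of power captured by the first singular value. Whether the authors used exactly this index is not stated, so treat this as a sketch of the standard approach.

```python
import numpy as np

def strf_separability(strf):
    """Separability index from the SVD of a (frequency x time) STRF:
    the share of power in the first singular value. 1 = fully separable;
    inseparable (e.g. diagonal, FM-tuned) STRFs give lower values."""
    s = np.linalg.svd(strf, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

f = np.hanning(32)[:, None]              # spectral profile
t = np.hanning(40)[None, :]              # temporal profile
separable = f * t                        # rank-1, fully separable STRF
diagonal = np.eye(32, 40)                # caricature of FM-like structure
print(strf_separability(separable), strf_separability(diagonal))
```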
Sensitivity to language categories in the auditory cortex of Mandarin- and English-speaking participants David Fleming1, Bruno Giordano1, Roberto Caldara2, Pascal Belin1,3,4 1University of Glasgow, United Kingdom; 2University of Fribourg, Switzerland; 3Université de Montréal, Montréal, Canada; 4Institut des Neurosciences de La Timone, CNRS & Université Aix-Marseille, France [email protected] English- and Mandarin-speaking listeners rate pairs of voices which speak their native language as more dissimilar than foreign pairs, even when the speech is rendered unintelligible by time-reversal. These listeners were also sensitive to acoustical differences between languages – they gave very high dissimilarity ratings when the two voices in a pair spoke different languages (1). We examined the neural correlates of these effects using an fMRI similarity analysis approach (2). Specifically, we hypothesized that auditory cortex would be more sensitive to identity differences in listeners' native languages than in a foreign language. Acoustical differences between the languages should also be reflected in auditory regions, where we expected high dissimilarity among responses to pairs of stimuli spoken in different languages. Participants (native English and Mandarin speakers) underwent a functional scan during which they listened to reversed speech clips from English and Mandarin female speakers. We extracted local patterns of activity across participants' functional volumes and calculated the dissimilarity between responses elicited by stimulus pairs (1 − Pearson's r across voxels). This procedure yielded whole-brain maps for each pairwise combination of stimuli, which were then correlated with predictor matrices capturing hypothesized dissimilarity geometries. These reflected 1) native-language pair dissimilarity and foreign-language similarity (better differentiation of different native speaker identities); and 2) cross-language dissimilarity (better differentiation of identities across languages). We found significant correlations between brain dissimilarities and the cross-language model in bilateral superior temporal gyrus, even when partialling out the contribution of the dissimilarity structure of selected acoustical features. Correlations were more distributed in left STG, which has been implicated in within-language categorical speech perception (3). These results indicate that listeners remain sensitive to phonological differences between languages in reversed speech. This sensitivity is reflected in auditory regions, which differentiate languages even when accounting for acoustical dissimilarity structure and when participants cannot comprehend the spoken message. 1. Fleming (Under Review); 2. Kriegeskorte (2006) PNAS; 3. Chang (2010) Nat Neurosci ID: 203
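A minimal sketch of the similarity-analysis logic described above follows: neural dissimilarity as 1 − Pearson's r across voxels, compared against a cross-language model matrix. The patterns and labels below are hypothetical, and this is a simplification of the whole-brain mapping procedure.

```python
import numpy as np
from scipy.stats import spearmanr

def pattern_dissimilarity(pat_a, pat_b):
    """Neural dissimilarity as defined above: 1 - Pearson's r across voxels."""
    return 1.0 - np.corrcoef(pat_a, pat_b)[0, 1]

rng = np.random.default_rng(1)
patterns = rng.standard_normal((4, 500))        # 4 stimuli x 500 voxels (fake)
neural = np.array([[pattern_dissimilarity(a, b) for b in patterns]
                   for a in patterns])

labels = np.array([0, 0, 1, 1])                 # two hypothetical languages
model = (labels[:, None] != labels[None, :]).astype(float)  # cross-language model

iu = np.triu_indices(4, k=1)                    # unique stimulus pairs only
print(spearmanr(neural[iu], model[iu]).correlation)
```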
Adaptive techniques for identifying stimuli that drive highly selective auditory cortex neurons in awake marmoset Seth D. Koehler, Michael S. Osmanski, Xiaoqin Wang Johns Hopkins University Baltimore, United States of America [email protected] Neurons outside of the core regions of auditory cortex are highly selective, often responding only to a small subset of acoustic stimuli that share specific spectro-temporal features. This selectivity introduces several confounds and challenges for neurophysiological studies in non-primary auditory cortex. For example, neurons that the experimenter cannot drive are usually excluded from analyses, introducing a sampling bias towards more broadly tuned neurons. Also, examination of how auditory cortex supports active behavior requires identifying those stimuli that, in addition to being perceptually salient, can elicit robust neural responses. These confounds and challenges are compounded when studying several units simultaneously with a multi-channel electrode array, where electrodes can span the entire tonotopic axis and multiple cortical regions. To address this problem, we have developed a comprehensive stimulus set and a closed-loop algorithm designed to rapidly identify stimuli that elicit responses in the auditory cortex of awake, behaving marmosets. Neural responses were recorded from multi-channel arrays of tungsten micro-electrodes (Warp-16, Neuralynx) chronically implanted to span core, belt, and parabelt regions of auditory cortex in two marmosets. The initial stimulus set consisted of more than 1200 unique stimuli that broadly covered the classes of stimuli (i.e., tones, narrow- and broad-band noises, SAM and SFM tones, linear FM tones, harmonic complexes, vocalizations, and natural sounds) and the range of parameters (e.g., frequency, intensity, bandwidth, modulation rate, etc.) known to elicit responses across primate primary and secondary auditory cortices. Stimuli (5 repetitions) were presented to the passive, awake animal at approximately 2 Hz. To keep recording time practical, we implemented a closed-loop algorithm that minimizes the number of repetitions required for each stimulus. Finally, we identified those stimuli within each class that elicited the strongest, most significant neural responses. This stimulus set and adaptive algorithm thus allow concurrent, relatively rapid identification of those stimuli that most effectively drive neurons across primary and secondary auditory cortices. This technique provides a starting point for further recordings to estimate the optimal stimulus for each unit and a list of effective stimuli for selecting stimuli for joint neurophysiological and behavioral testing.
ID: 204
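The abstract does not detail the closed-loop rule; the toy sketch below only illustrates the general idea of stopping stimulus repetitions early once a response is already clearly decided. The test statistic, thresholds and data are all assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.stats import ttest_ind

def needs_more_repetitions(evoked, baseline, alpha=0.01, max_reps=5):
    """Toy closed-loop stopping rule: keep presenting a stimulus only while
    its response remains statistically undecided, up to max_reps trials."""
    if len(evoked) >= max_reps:
        return False
    if len(evoked) < 2:
        return True                         # need >= 2 trials for a t-test
    return ttest_ind(evoked, baseline).pvalue >= alpha

baseline = np.random.poisson(5, size=200).tolist()   # spontaneous counts
evoked = [np.random.poisson(15)]                     # first presentation
while needs_more_repetitions(evoked, baseline):
    evoked.append(np.random.poisson(15))             # next presentation
print(f"stopped after {len(evoked)} repetitions")
```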
Understanding human neurobiological underpinnings of loudness perception: A MEG study Christine Wyss1,2, Frank Boers1, Wolfram Kawohl2, Jorge Arrubla1,3, Kaveh Vahedipour1, Jürgen Dammers1, Irene Neuner1,3,4, N. Jon Shah1,3,4 1Research Centre Juelich, Germany; 2University Hospital of Psychiatry Zurich, Switzerland; 3RWTH Aachen University, Germany; 4Translational Medicine, Germany [email protected]
Introduction: The generators underlying loudness perception in late-latency responses have been the subject of extensive research since the 1980s [1, 2]. These studies have led to the view that not only the auditory cortex but also higher-order networks, which ensure that the stimuli gain access to consciousness, are activated in the perception of sound. Several authors have proposed the involvement of a frontal source, predominantly at high intensity levels. However, its precise location is still debated [3, 4].
Methods: Magnetoencephalography (MEG, whole-head, 248 channels) was applied in order to localise, with high temporal resolution, the sources of the N1m (75-125 ms) component elicited by tones of different intensities. We investigated 19 healthy male right-handed subjects (mean age 26.5 ± 4.0 years). Tones of six different intensities (10-60 dB SL, 1000 Hz, 40 ms duration, SOA randomized between 2-3 s) were presented binaurally in a pseudo-randomized order through earphones with plastic tubes. Magnetic field tomography [5] was used to localise the primary current density in each voxel and at each time point. Voxel-wise root-mean-square values were entered into a generalized linear model, corrected for multiple comparisons. Within the anatomical regions of interest, we performed a time-course analysis for each intensity to elucidate the cortical activation sequence across time.
Results: By comparing the tones with the highest and lowest sensation levels, we found significant activations in the primary auditory cortex, the posterior cingulate cortex (PCC), the premotor cortex and the primary somatosensory cortex. The time-course analysis revealed that the primary sensory areas were activated earlier (90 ms) than the PCC (100 ms) and premotor cortex (120 ms).
Conclusion: The results show the activation of a widespread network in loudness perception. The involvement of the somatosensory region is consistent with the theory of multisensory integration at a low level [6]. The PCC, which plays an essential role as a hub in intrinsic connectivity networks by activating intrinsic networks appropriate for the current behavioural state [7], might be involved in top-down processing during perception. The activation in the premotor cortex is in line with other findings [1] and could reflect the orientation of attention towards action preparation, or be related to an aversive response to a novel stimulus.
Acknowledgements: Funded by the Swiss National Science Foundation (grant number P1ZHP3_148704), EMDO Stiftung Zurich and the Initiative and Network Fund of the Helmholtz Association.
References: 1. Näätänen, R. and T. Picton, 1987. Psychophysiology 24: 375-425. 2. Picton, T.W., et al., 1999. Audiol Neurootol 4: 64-79. 3. Giard, M.H., et al., 1994. Electroencephalogr Clin Neurophysiol 92: 238-52. 4. Hari, R., et al., 1982. Electroencephalogr Clin Neurophysiol 54: 561-69. 5. Ioannides, A., J. Bolton, and C. Clarke, 1990. Inverse Probl 6: 523. 6. Schroeder, C.E. and J. Foxe, 2005. Curr Opin Neurobiol 15: 454-58. 7. Leech, R. and D.J. Sharp, 2014. Brain 137: 12-32. ID: 205
Music, rhythm and developmental dyslexia: Investigating amplitude envelope sensitivity Sheila Anne Flanagan, Usha Clare Goswami University of Cambridge, United Kingdom [email protected] Children with developmental dyslexia are impaired at tapping to a rhythm and in perceiving tempo. A core difficulty in developmental dyslexia is a deficit in achieving reflective awareness of speech sounds in words, 'phonological awareness'. Phonological awareness is impaired at all linguistic levels: prosodic, syllabic and phonemic. One auditory difficulty found in children with dyslexia is impaired processing of the rate of change of the amplitude envelope. Sensitivity to envelope structure and dynamics is critical for speech perception, as it signals speech rate, stress, and tonal contrasts, and reflects prosodic and intonational information. Perception of amplitude envelope rise time is also linked to musical metrical sensitivity. In turn, sensitivity to beat patterning in music is a strong predictor of children's phonological awareness and reading development. It has been theorised that auditory temporal processing is achieved by the synchronisation of internal neural oscillators to external temporal structure. The temporal sampling theory claims that envelope rise-time difficulties in dyslexia may lead to poor oscillatory entrainment at low frequencies, resulting in impairments in phonology. This may be investigated by considering the perceptual centre, or P-centre, of a sound. The P-centre describes the subjective moment of a sound's occurrence. It is used in both the perception and production of rhythm. The major acoustic influence on P-centre timing is amplitude envelope rise time. This study evaluates the perception of P-centres by adult and child subjects with and without dyslexia, using a range of tonal, synthetic vowel and speech stimuli in a dynamic rhythm-setting task. As the perception of rhythmic timing in both speech and non-speech sounds depends on rise time, any rise-time difficulties in dyslexia would suggest concomitant differences in P-centre perception. The relative timing of P-centres of adult and child participants with and without dyslexia is compared. One aim is to achieve early detection and to develop effective remediation based on music and rhythm.
ID: 206
Latency of duration-MMN is related to resting-state glutamate: An 1H-MRS study in young healthy adults Kristiina Kompus1, Kairi Kreegipuu2, Nele Kuldkepp2, René Westerhausen1, Kenneth Hugdahl1, Risto Näätänen2 1University of Bergen, Norway; 2University of Tartu, Estonia [email protected] Mismatch negativity (MMN) is an event-related potential elicited by a deviant sound in a train of standard stimuli. MMN is disturbed in many psychiatric and neurological conditions, in particular schizophrenia. This makes the underlying neural mechanisms particularly interesting, as they may reflect disturbed processes relevant for psychopathology. The generation of MMN has been suggested to be associated with glutamate, the main excitatory neurotransmitter in the brain. In particular, synaptic plasticity depending on glutamatergic NMDA receptors has been suggested to underlie MMN generation. The hypothesis of a link between glutamate and MMN has been examined with pharmacological challenge studies. In contrast to pharmacological challenge studies, magnetic resonance spectroscopy (1H-MRS) enables the direct measurement of glutamate in specific brain regions. In this study we examined the relationship between inter-individual variation in 1H-MRS-measured glutamate-glutamine in the superior temporal gyrus and the MMN to duration and frequency deviants in 20 healthy, young, unmedicated adults (10 male). We found a significant relationship between the latency of the duration-MMN peak and creatine-corrected glutamate/glutamine (Glx) (p=.0003, eta2=.43), with increased Glx correlated with a shorter latency of the duration-MMN; the effect did not differ significantly between the left and right hemispheres. The amplitude of the duration-MMN was not related to Glx. There were no significant effects between Glx and the frequency-MMN. The present findings support the link between glutamate and MMN, in agreement with pharmacological challenge studies. The findings also highlight that the duration-MMN may be a more sensitive marker of NMDA receptor function than the frequency-MMN, in agreement with a more attenuated duration-MMN in schizophrenia patients. ID: 208
Hemispheric lateralization of linguistic prosody recognition Jens Kreitewolf1, Angela D. Friederici1, Katharina von Kriegstein1,2 1Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Germany; 2Humboldt University of Berlin, Germany [email protected] Hemispheric specialization for linguistic prosody is a controversial issue. While it is commonly assumed that linguistic and emotional prosody are preferentially processed in the right hemisphere, neuropsychological work directly comparing processes of linguistic and emotional prosody suggests a predominant role of the left hemisphere for linguistic prosody processing. Here, we used two functional magnetic resonance imaging (fMRI) experiments to clarify the roles of the left and right hemispheres in the neural processing of linguistic prosody. In the first experiment, we sought to confirm previous findings showing that linguistic prosody processing, compared to other speech-related processes, predominantly involves the right hemisphere. Unlike previous studies, we controlled for stimulus influences by employing a prosody task and a speech task on the same speech material. The second experiment was designed to investigate whether a left-hemispheric involvement in linguistic prosody processing is specific to contrasts between linguistic and emotional prosody or whether it also occurs when linguistic prosody is contrasted against other non-linguistic processes (i.e., speaker recognition). Prosody and speaker tasks were performed on the same stimulus material. In both experiments, linguistic prosody processing was associated with activity in temporal, frontal, parietal and cerebellar regions. Activation in temporo-frontal regions showed differential lateralization depending on whether the control task required recognition of speech or speaker: recognition of linguistic prosody predominantly involved right temporo-frontal areas when it was contrasted against recognition of the speech message; when contrasted against speaker recognition, recognition of linguistic prosody predominantly involved left temporo-frontal areas. The results show that linguistic prosody processing involves functions of both hemispheres and suggest that recognition of linguistic prosody is based on an inter-hemispheric mechanism which exploits both a right-hemispheric sensitivity to pitch information and a left-hemispheric dominance in speech processing. ID: 210
The role of the lemniscal corticothalamic feedback system in mistuning detection in ferrets Natsumi Homma1, Max Happel2, Fernando R. Nodal1, Victoria M. Bajo1, Andrew J. King1 1University of Oxford, United Kingdom; 2Leibniz Institute for Neurobiology Magdeburg, Germany [email protected] Recent evidence suggests that corticothalamic feedback can sharpen the spectral receptive fields of thalamic neurons and modulate the temporal precision of their responses. In this study, we aimed to investigate the role of the lemniscal corticothalamic feedback system in the perception of spectral and temporal modulations by using mistuning detection of harmonic sounds. We evaluated the effects on mistuning detection in adult ferrets of the selective elimination, by chromophore-targeted laser photolysis, of corticothalamic neurons projecting from the primary auditory cortex (A1) to the ventral division of the medial geniculate body (MGBv). Neural recordings guided bilateral injections of chlorin e6-conjugated fluorescent microbeads into MGBv and, after allowing retrograde transport of the beads to the cell bodies, apoptosis was induced by infrared (670 nm) laser illumination of A1. Mistuning detection was measured using a positive-conditioned go/no-go task design, with three training phases and two test phases. Complex harmonic tones composed of 16 harmonics of a fundamental frequency (F0=400 Hz) were presented as the reference sound, while in the target sound the 4th harmonic was shifted to a higher frequency (by 0.03-12%) so that it was no longer an integer multiple of F0. The microbead injections were made in the thalamus before the test phases, and mistuning detection performance was tested before and after bilateral laser illumination of A1. Postmortem histology revealed that the injection sites were located in MGBv. The cell densities of NeuN-positive layer VI neurons were measured using the optical fractionator stereological probe in the anterior, middle (where A1 is located) and posterior ectosylvian gyri. A 30% reduction in cell density in the middle ectosylvian gyrus was observed relative to controls, supporting the selective elimination of corticothalamic neurons in A1. Based on signal detection theory, behavioural performance was quantified by the animals' d' scores, an index of detection sensitivity, and by psychometric analysis. The corticothalamic lesion resulted in poorer mistuning detection performance, as indicated by decreased d' values and a shift of the psychometric curves towards higher frequencies, with a threshold increase from 8±1 Hz to 25±2 Hz. These results support a role for A1-MGBv corticothalamic feedback in the accurate recognition of auditory stimuli. ID: 211
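The d' index used above comes from signal detection theory: the difference between the z-transformed hit and false-alarm rates. A minimal sketch for a go/no-go task, with hypothetical example rates:

```python
import numpy as np
from scipy.stats import norm

def d_prime(hit_rate, fa_rate, n_trials=100):
    """Detection sensitivity: d' = z(hit rate) - z(false-alarm rate), with
    rates clipped to avoid infinite z-scores on perfect performance."""
    lo, hi = 0.5 / n_trials, 1.0 - 0.5 / n_trials
    hit = np.clip(hit_rate, lo, hi)
    fa = np.clip(fa_rate, lo, hi)
    return norm.ppf(hit) - norm.ppf(fa)

print(d_prime(0.85, 0.20))   # ~1.88: readily detectable mistuning
print(d_prime(0.55, 0.45))   # ~0.25: near-threshold performance
```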
Neuronal basis for visual and tactile processing in cat primary auditory cortex Andres Carrasco1, Melanie Kok1, Alex Meredith2, Stephen G. Lomber1 1University of Western Ontario, London, Canada; 2Virginia Commonwealth University Richmond, United States of America [email protected] Despite functional and structural evidence supporting sensory-specific processing pathways in cerebral cortex, recent investigations have identified neurons at early stages of cortical activation capable of multisensory integration. Hence, the aim of the present study was to address a major gap in multisensory processing models by examining how multi-modal signals influence neuronal responses at the earliest stage of cortical activation in the auditory system. Using acute recording techniques, extracellular response activity was collected from the right hemisphere of six mature cats (Felis catus). Neuronal responses to isolated and combined presentations of acoustic (noise bursts), visual (flash), and somatosensory (tactile sweep) signals were measured across primary auditory cortex (A1). Statistical analyses of neuronal engagement revealed widespread cortical activation during periods of acoustic (82.43%) and visual (34.29%) stimulation. These values were contrasted by a near absence of A1 activation during epochs of tactile stimulation (0.84%). Examination of bi-modal responses demonstrated that exposure to acoustic signals resulted in higher peak response levels, shorter response latencies, and shorter response durations than those measured during epochs of visual stimulation. Neuronal responses during sensory co-stimulation revealed modulatory interaction effects (auditory-visual: excitation; auditory-visual-somatosensory: inhibition) across A1 neurons. Collectively, these results support a model of multisensory activation at the earliest level of sensory processing in the auditory system, in which acoustic and visual signals can drive and modulate A1 activity, whereas somatosensory stimulation can modulate (inhibit) but not drive A1 responses. ID: 212
EEG potentials sensitive to Artificial Grammar learning in nonhuman primates Adam Attaheri, Yukiko Kikuchi, Alice E. Milne, Andy Hanson, Benjamin Wilson, Kai Alter, Christopher I. Petkov Newcastle University, United Kingdom [email protected] Artificial Grammars (AG) can generate rule-based sequences that emulate aspects of language structure, such as the relationship between words in a sentence. Recent work has shown that nonhuman primates are sensitive to the relationships between sounds in an auditory AG (Wilson et al., 2013) and fMRI is revealing the brain regions involved, which include ventral frontal cortex (Wilson et al., in revision). However, other approaches, such as EEG, allow investigation of the time course of brain-wide effects and comparisons with prior human EEG results. Here, we habituated two rhesus macaques to exemplar sequences, five elements long, of nonsense words generated by the AG. Then we recorded surface-based EEG potentials in the animals as they listened to either ‘correct’ sequences generated by the AG structure or ‘violation’ sequences, which violated the AG structure by having a single
illegal transition between nonsense words. Critically, after this illegal transition in the violation sequences, the remaining nonsense word elements were identical to their matching ‘correct’ sequences, allowing us to analyse acoustically identical parts of the ‘correct’ and ‘violation’ sequences. Auditory Event Related Potentials (ERPs) were observed in response to all of the nonsense words in the sequences, each showing stereotypical components including the N100 and P200. Violations of the AG caused a significantly stronger negativity peaking at ~150 ms post violation onset. This negativity resembles the human mismatch negativity (MMN), and it may represent a putative monkey MMN. Moreover, violation transitions resulted in a stronger P200 positivity and a stronger N400 negativity. Effects were prominent in the frontal EEG electrodes, with no notable hemispheric differences. Interestingly, these EEG sensitivities all occur much earlier in time than those that we have observed using the same AG paradigm and extracellular recordings from neurons in auditory cortex (>500 ms; Kikuchi et al., this meeting), possibly indicating feedback effects on auditory cortex. In conclusion, the experiment identified macaque ERP potentials sensitive to AG learning, which help to clarify the interpretation of related monkey fMRI and neuronal recording results. Furthermore, comparing these ERP components to those reported in humans for AG learning allows the identification of evolutionarily conserved EEG components. Support: Wellcome Trust New Investigator Award (CIP; WT102961MA). ID: 213
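The violation effect reported here is, at its core, a difference wave between averaged ERPs time-locked to the violation onset. A minimal Python sketch of that computation (array shapes and sampling rate are hypothetical, not the authors’ pipeline):

import numpy as np

def difference_wave(correct, violation):
    """ERP difference wave at one electrode: mean violation minus mean correct.
    Inputs are (trials, samples) arrays time-locked to the transition onset."""
    return violation.mean(axis=0) - correct.mean(axis=0)

fs = 500  # Hz, hypothetical sampling rate
rng = np.random.default_rng(0)
correct = rng.standard_normal((100, fs))    # 1-s epochs
violation = rng.standard_normal((100, fs))
dw = difference_wave(correct, violation)
peak_ms = 1000 * np.argmin(dw) / fs  # latency of the strongest negativity (MMN-like)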
Effects of loud noise exposure on sound processing in the mouse primary auditory cortex Ondrej Novak, Ondrej Zelenka, Tomas Hromadka, Josef Syka Institute of Experimental Medicine AS CR Praha, Czech Republic [email protected] Exposure to loud sounds damages the function of the inner ear and also induces profound changes in the receptive fields of cortical neurons, contributing to the development of tinnitus. However, the roles of different
neuronal classes in the development of trauma-induced receptive field changes have not yet been resolved. We evaluated the effects of acute acoustic trauma on the response properties of neurons in the mouse primary auditory cortex using single-unit extracellular recordings and two-photon calcium imaging in vivo. Mice were anaesthetized with ketamine/xylazine and acoustic trauma was induced by a 5-min exposure to 125 dB SPL white noise. Responses of neurons to broadband noise and pure tones were recorded before and after the noise exposure. We observed different dynamics of spiking responses to broadband noise before and after the acoustic trauma (n=86 neurons). Sound-evoked responses decreased in one subset of neurons (n=13), but increased in another (n=20). Almost half of the neurons (n=38) did not change their sound-evoked activity and 15 neurons remained unresponsive. Interestingly, neurons with decreased responsiveness had significantly narrower spikes than other sound-responsive neurons. Spontaneous activity and response jitter increased in neurons with unchanged or greater responses. Frequency response areas of individual neurons showed an increase in activity at the flanks of responsive areas and a decrease in activity at the initial receptive field. In two-photon calcium imaging, most neurons displayed shifts in their best frequencies towards lower frequencies after the trauma. The observed effects suggest that an acute acoustic trauma selectively disrupts activity of inhibitory interneurons in the auditory cortex, leading to specific changes in tuning and response dynamics of other neurons. To directly explore the changes in activity of cortical interneurons, we used calcium imaging in the auditory cortex of PV-Cre/TdT and SST-Cre/TdT mice, expressing TdTomato in parvalbumin-positive or somatostatin-positive interneurons, respectively. We evaluated acoustic trauma-induced changes separately for cortical layers 4 and 2/3. Noise- and tone-responsiveness was disrupted in layer 4 significantly more than in layer 2/3. The most prominent change in layer 4 was a selective decrease in noise-responsiveness of PV-interneurons. Tone-
responsiveness of SST-‐interneurons, however, decreased selectively only in layer 2/3. ID: 214
Musical and vocal emotions in the auditory cortex Sebastien Paquette1, Isabelle Peretz1, David Fleming2, Pascal Belin1,2,3 1Université de Montréal, Canada; 2University of Glasgow, United Kingdom; 3Aix-Marseille Université, France [email protected] Background In recent years, many have theorized about the existence of common neuropsychological substrates for conveying musical and vocal emotions [1,2]. Although similar brain areas have been associated with musical and vocal emotional processing in the auditory cortex (e.g. the superior temporal sulcus (STS) and superior temporal gyrus (STG) [3,4]), there is currently little experimental support for the existence of a common auditory emotion channel for vocal and musical emotion perception. We aim to provide an in-depth comparison of the musical and vocal emotion channels to identify their common neurobiological substrates in the auditory cortex. Musical and vocal stimuli that are very similar from both an acoustical and an ecological standpoint will allow us to directly explore these structures using functional magnetic resonance imaging (fMRI) and examine whether a vocal emotion pathway can be dissociated from a musical one. Methods Two batteries of stimuli depicting basic emotional expressions (happy, sad, scary, neutral) were used: the Montreal Affective Voices (non-linguistic vocal bursts; mean duration: 1.3 s) [5] and the Musical Emotional Bursts (improvisations or imitations of an emotion on a violin or a clarinet; mean duration: 1.6 s) [6]. Twenty participants performed a one-back task while listening to the affective bursts in three timbres (violin, clarinet, voice), during continuous fMRI scanning.
Results Univariate analyses, using SPM, revealed no main effect of timbre, but an interaction (emotion x timbre) in the auditory cortex (STG): vocal fear stimuli elicited stronger activation than their musical counterparts, and the opposite was found for happy stimuli (musical stimuli elicited stronger activation). Activation patterns evoked by vocal and musical emotions showed striking similarities, suggesting that the auditory cortex treats both types of emotional stimuli similarly. Ongoing analyses using the searchlight method, and cross-condition pattern classification of recently acquired sparse-sampling data, will allow us to determine if the musical emotional pathway can be dissociated from the vocal one. [1] Peretz (2010) Oxford Uni. Press. [2] Juslin et al. (2003) Psy. Bull. [3] Kotz et al. (2013) HBM. [4] Aubé et al. (2014) SCAN. [5] Belin et al. (2008) Behav. Res. Methods. [6] Paquette et al. (2013) Front. Emo. Sci. ID: 215
Exploring the role of synchrony in spatial and temporal auditory-visual integration Gareth P. Jones, Stephen M. Town, Katherine C. Wood, Huriye Atilgan, Jennifer K. Bizley University College London, United Kingdom [email protected] Animals and humans integrate sensory information over time, and combine this information across modalities, in order to make accurate and reliable decisions in complex and noisy sensory environments. Little is known about the neural basis of this accumulation of information, or about the cortical circuitry that links the combination of information between the senses to perceptual decision making and behaviour. Most previous examples of multisensory enhancement have relied on synchrony-dependent mechanisms, but these mechanisms alone are unlikely to explain the entire scope of multisensory integration, particularly between senses such as vision and hearing, which process multidimensional stimuli and operate with vastly different latency constraints. Presented here are two audio-visual behavioural tasks, one spatial and one temporal (adapted from Raposo et al., 2012), requiring subjects (ferrets and humans) to accumulate evidence from one or both senses over time. In
the temporal task subjects estimated the average rate of short auditory and/or visual events embedded in a noisy background (20 ms white noise bursts or flashes) over a defined time period (~1000 ms). Instantaneous event rates throughout this time period varied, meaning that the accuracy with which the event rate could be estimated increased over time. Similarly, in the spatial task, subjects were required to report whether a greater event rate was presented to the left or right of space. Discrimination was assessed in both unisensory auditory and visual conditions as well as in synchronous and asynchronous multisensory conditions. In the temporal task, accuracy and reaction times for humans were improved in both synchronous and asynchronous multisensory conditions (accuracy: 71% and 72%; reaction times: 285±8 and 290±8 ms, mean±SE, respectively), relative to the auditory and visual unisensory performance (accuracy: 65% and 60%; reaction times: 322±8 and 343±7 ms, mean±SE, respectively). To investigate how the additional information available in the multisensory conditions leads to optimised listening performance, these data are being analysed in the context of previously published drift diffusion models (see the sketch below). Lastly, to better understand how such signals are represented in the brain, recordings are being performed from the auditory and visual cortex of anesthetised ferrets.
ID: 217
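As context for the drift-diffusion framework referenced in the preceding abstract, the following toy simulation illustrates the core idea: noisy evidence accumulates to a bound, and a higher drift rate (here standing in for a multisensory condition) yields faster responses. All parameter values are hypothetical.

import numpy as np

def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=1e-3, max_t=3.0, rng=None):
    """Simulate one drift-diffusion trial; returns (choice, reaction time in s).
    Evidence accumulates at `drift` per second with Gaussian noise until it
    crosses +threshold (choice 1) or -threshold (choice 0)."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x >= threshold, t

# Hypothetical: multisensory evidence modelled as a higher drift rate
rng = np.random.default_rng(4)
rts_av = [simulate_ddm(2.0, rng=rng)[1] for _ in range(200)]
rts_a = [simulate_ddm(1.2, rng=rng)[1] for _ in range(200)]
print(np.mean(rts_av) < np.mean(rts_a))  # faster mean RT with higher drift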
Electrophysiological signatures of audio-‐visual integration in young and elderly cochlear-‐implant users Irina Schierholz1, Svenja Schulte2, Nadine Hauthal3, Christoph Kantzke1, Mareike Finke4, Reinhard Dengler5, Pascale Sandmann1
1Central Auditory Diagnostics Lab, Cluster of Excellence "Hearing4all", Hannover Medical School, Germany; 2Central Auditory Diagnostics Lab, Hannover Medical School, Germany; 3Neuropsychology Lab, Cluster of Excellence "Hearing4all", European Medical School, University of Oldenburg, Germany; 4Cluster of Excellence "Hearing4all", Hannover Medical School, Germany; 5Department of Neurology, Cluster of Excellence "Hearing4all", Hannover Medical School, Germany Schierholz.Irina@mh-hannover.de When faced with a long period of auditory deprivation, the brain is known to be susceptible to reorganization of the cortical areas in the auditory system. Auditory deprivation may thus affect multisensory processing and can lead to a change in the integration of auditory and visual stimuli when the ability to hear is restored with a cochlear implant. In this EEG study, we wanted to compare multisensory integration between cochlear-implant users (CI) and normal-hearing listeners (NH). Furthermore, we aimed to assess possible age effects on uni- and multisensory performance in cochlear-implant users. Young (CI: N=10, mean age: 27.5 years; NH: N=10, mean age: 28.1 years) and elderly (CI: N=16, mean age: 68.94 years; NH: N=11, mean age: 66.8 years) cochlear-implant patients and normal-hearing controls were tested with a redundant-target paradigm while EEG was simultaneously recorded. Participants were presented with auditory, visual, and audiovisual stimuli and had to make a speeded response. Miller’s test of the race model inequality was applied to examine the effects of auditory deprivation on multisensory integration (a sketch of this test follows the abstract). The preliminary results revealed a significant violation of the race model inequality in cochlear-implant users and normal-hearing controls, suggesting audiovisual integration in both groups. However, the younger participants showed less audiovisual integration compared with the elderly. This points to an age-related alteration in the ability to integrate auditory and visual
information. Further analyses will be conducted to investigate the neurophysiological underpinnings of altered audiovisual integration. These findings will be discussed with regard to consequences for rehabilitation strategies after cochlear implantation. ID: 218
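Miller’s race model inequality, used above, bounds the multisensory reaction-time (RT) distribution by the sum of the unisensory ones: F_AV(t) <= F_A(t) + F_V(t). A minimal Python sketch of the test, with simulated RTs standing in for real data (all values hypothetical):

import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, quantiles=np.arange(0.05, 1.0, 0.05)):
    """Evaluate Miller's bound at a grid of RT quantiles.
    Positive values of the returned difference indicate a violation,
    i.e. evidence for integration beyond statistical facilitation."""
    t = np.quantile(np.concatenate([rt_a, rt_v, rt_av]), quantiles)
    cdf = lambda rts, t: np.mean(rts[:, None] <= t[None, :], axis=0)
    bound = np.minimum(cdf(rt_a, t) + cdf(rt_v, t), 1.0)
    return cdf(rt_av, t) - bound

# Hypothetical RTs (ms) for one participant
rng = np.random.default_rng(0)
rt_a = rng.normal(420, 60, 200)
rt_v = rng.normal(450, 70, 200)
rt_av = rng.normal(370, 50, 200)
print(race_model_violation(rt_a, rt_v, rt_av)[:5])  # early quantiles, where violations typically occur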
Frequency selectivity and myelination measured in human auditory cortex at 7 Tesla Julien Besle1, Olivier Mougin2, Rosa Sanchez-Panchuelo2, Penny Gowland2, Richard Bowtell2, Susan Francis2, Katrin Krumbholz1 1Medical Research Council Cambridge, United Kingdom; 2University of Nottingham, United Kingdom [email protected] Anatomical and electrophysiological studies in non-human primates have subdivided the auditory cortex into core and belt areas showing distinct structural and functional properties. It remains unclear, however, whether analogous subdivisions exist in the human auditory cortex and how they are laid out on the supratemporal plane. Previous in-vivo human MRI studies have led to inconsistent subdivisions because tonotopic mapping alone cannot distinguish between sub-areas with parallel tonotopic gradients. To help interpret tonotopic maps, we concurrently measured tonotopy, myelination and frequency selectivity in the same subjects at ultra-high magnetic field (7T). For structural mapping of myelination, we estimated the R1 longitudinal relaxation rate (3D MP2RAGE, 0.6 mm resolution) and the magnetization transfer ratio (3D MTRAGE, 0.7 mm resolution). For the functional mapping (sparse 2D GRE EPI, 1.5 mm resolution, phase-corrected for B0-related distortions), we estimated best frequency and frequency selectivity using trains of narrowband noises at 7 single centre frequencies, and adaptation frequency selectivity using trains of alternating centre frequencies at 5 frequency separations. All structural and functional measures were projected onto a flattened model of the supratemporal cortex, segmented from the high-resolution processed MP2RAGE volume. Structural measures were corrected for cortical
thickness and curvature. Group-averaged maps were created using spherical registration. In all subjects/hemispheres, we identified at least 3 tonotopic gradient reversals (4 tonotopic gradients) roughly parallel to Heschl’s gyrus (HG), with the 2 central gradients (high-to-low-to-high) centred on HG. On the group-averaged maps, the area of higher myelination at mid-cortical depth corresponded to the medial part of HG and the two central gradients, while increased selectivity was also centred on HG but extended more laterally along the entire central gradients. In individual maps, however, higher myelination and selectivity could correspond to either or both of the central gradients, depending on the subject/hemisphere. Frequency adaptation, when corrected for BOLD non-linearity, was strongest when the 2 centre frequencies of the adaptation train were closest, which suggests genuine neural adaptation. Adaptation selectivity showed a spatial distribution similar to that of the response frequency selectivity. ID: 219
Voice discrimination in zebra finches Julie E. Elie, Frederic E. Theunissen University of California Berkeley, United States of America [email protected] Communication calls in birds can convey three main types of information: who, where and what. This information can be orthogonal; for instance, the same individual might produce different types of communication calls (e.g. an aggressive call, a contact call). In this neuro-ethological study, we investigate the invariance and selectivity mechanisms within the zebra finch (Taeniopygia guttata) auditory cortex that lead to the perception of individual identity. To explore how the zebra finch auditory system could invariantly treat signals that have different social meanings but are emitted by the same individual, we investigated the perception of vocalizations that are used in clearly distinct social contexts. Zebra finches utter a full repertoire of calls that, from an acoustical point of view, exhibit high variability: from the broad-band and noisy aggressive call to the spectrally structured distance call. In this study, we first generated a library containing the entire repertoire of female and male zebra
finches, organized along 9 different semantic categories and annotated with the identity of the individual and the social context of emission. Using a novel data-driven acoustic analysis, we show that there exist acoustic cues that tease apart individuals irrespective of the vocalization categories. We then conducted behavioural conditioning experiments to test the ability of birds to discriminate individuals, and found that both males and females discriminate voices irrespective of the vocalization type used. Finally, we investigated the neural representation of these social vocalizations. We recorded the neural activity in the primary auditory area (Field L) and in the two secondary auditory areas (NCM and CM) of 6 zebra finches in response to the playback of the 9 distinct communication call categories, covering the repertoire of 4 adults and 2 young per recording site. To understand the nature of the neural representation of the “who” information in the auditory system, we used a decoding method. This method yields a confusion matrix that quantifies how well individual call stimuli can be decoded from single spike trains (a sketch follows the abstract). From the confusion matrix of individual calls, we explore the invariance and selectivity properties of single units for individual voices. Combined with the anatomical position of the recorded units, this approach sheds light on the neural mechanisms engaged in the auditory perception of individual identity. ID: 220
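To make the decoding logic concrete: a confusion matrix counts, for each true caller, how often the decoder assigned each possible caller; the diagonal holds correct identifications. A minimal Python sketch with hypothetical counts for four callers (not data from the study):

import numpy as np

def decoding_metrics(confusion):
    """Summarise a decoder confusion matrix (rows: true caller, cols: decoded caller).
    Returns overall percent correct and the per-caller hit rate."""
    confusion = np.asarray(confusion, dtype=float)
    per_caller = np.diag(confusion) / confusion.sum(axis=1)
    overall = np.trace(confusion) / confusion.sum()
    return overall, per_caller

# Hypothetical confusion counts for 4 adult callers
conf = np.array([[30, 5, 3, 2],
                 [6, 28, 4, 2],
                 [2, 3, 31, 4],
                 [3, 2, 5, 30]])
overall, per_caller = decoding_metrics(conf)
print(overall)  # ~0.74, well above the 0.25 chance level for 4 callers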
Correlates of stream segregation by phase differences in tone complexes in the primary auditory cortex Stanislava Knyazeva, Elena Selezneva, Nikolaos C. Aggelopoulos, Michael Brosch Leibniz Institute for Neurobiology Magdeburg, Germany nikolaos.aggelopoulos@lin-magdeburg.de Psychophysical studies have established that differences in pitch or timbre (component phase) of complex tones can lead to stream segregation even in the absence of differences in power spectrum, passband and fundamental frequency (Roberts et al., 2002). We have used their paradigm to search for correlates of stream segregation in the auditory cortex of
awake macaque monkeys (Macaca fascicularis). We hypothesized that the firing rate or timing of the responses of auditory cortex multi-unit clusters under short and long repetition times could provide a potential correlate for stream segregation. Three types of tone sequences with either a short (100 ms) or long (400 ms) tone repetition time (TRT100 and TRT400, respectively) were presented. The first type had three identical tones with the component harmonics at cosine phase (AAA triplets). The other two types had a second tone with alternating (change in pitch) or random (change in timbre) phase onsets for the harmonics (ABA triplets). Only unresolved harmonics of the fundamental frequency (100 Hz) were present. Neurons responded similarly to all stimuli when the tones had a cosine component phase (AAA triplets), in both TRT100 and TRT400. Some neurons (group I) responded with a higher firing rate specifically to the B stimulus in ABA triplets if it consisted of alternating or random component phases, indicating a sensitivity to the phase of the component frequencies. In addition, there were differential responses to the B stimuli with regard to frequency following. Both these differential responses were present in both TRTs, so that the firing rate and timing of neuronal responses cannot alone be a correlate of stream segregation. Additionally, there was no change in the neuronal responses over time that might signal a switch between the sequences being perceived as an integrated stream or as two segregated streams. In another subpopulation of neurons (group II), the neuronal responses to the three tones during short repetition times extended continually over the duration of the triplet, irrespective of the phase of the component harmonics, providing a potential basis for stream integration and the perception of the triplets as unitary auditory objects. The difference in response firing rate to the B stimulus depending on component phase in group I neurons may compete with the perception of an integrated stream, suggested by the responses of group II neurons, potentially leading to stream segregation.
ID: 221
The neurobiology of voice processing: What have we learned from neuronal recordings in voice-sensitive cortex? Catherine Perrodin1, Christoph Kayser2, Nikos K. Logothetis1,3, Christopher I. Petkov4 1Max Planck Institute for Biological Cybernetics Tuebingen, Germany; 2University of Glasgow, United Kingdom; 3University of Manchester, United Kingdom; 4Newcastle University Medical School, United Kingdom [email protected] Voice recognition is important for social communication, and the human brain contains specialized voice-sensitive regions. Recently, fMRI has been used in primates and dogs to identify voice-sensitive regions in the temporal lobe of nonhuman animals. The primate work has led to a series of studies using targeted neuronal recordings from the anterior voice-identity sensitive fMRI cluster in the macaque supratemporal plane (STP). Here, we review these newly obtained insights into the neurophysiology of voice-sensitive neurons in the primate brain. The first study investigated the neuronal substrates underlying the sensitivity to conspecific voices observed in fMRI studies: namely, do single neurons definable as ‘voice cells’, analogous to ‘face cells’ in the visual system, exist, and, if so, are voice cells a direct auditory analog of face cells in terms of their functional properties? The neurophysiological data revealed a modest proportion of voice cells, showed how these neurons encode stimuli both within and between voice categories, and revealed interesting differences in the encoding properties of ‘voice cells’ relative to those reported for face cells. The second study explored the extent to which this voice-sensitive region is multisensory, and compared the results to those from association cortex in the superior temporal sulcus. Stimulating with dynamic faces and voices, we found that neurons in the voice-sensitive STP are highly sensitive to auditory features and show a certain level of multisensory influences. By contrast, neurons in association cortex are not as sensitive to auditory features but demonstrate greater specificity in their multisensory responses, such as being more often associated with congruent stimulus pairs than STP neurons. Finally, a third study investigated the link between stimulus features
in audiovisual communication signals and the types of multisensory responses displayed by neurons within voice-sensitive cortex. We found that natural audiovisual asynchronies that are present in communication signals influence the phase of ongoing low-frequency oscillations and also the direction of neuronal multisensory interactions. Altogether, the results provide new insights into the neurobiology of primate voice-sensitive regions, raise hypotheses for testing in the visual modality, and constrain theoretical models of voice/face processing. Support: Max Planck Society, Wellcome Trust (CIP), Swiss National Science Foundation (CP).
ID: 222
Unilateral hearing loss in the ferret: Behavioural and electrophysiological characterisation of a novel model of tinnitus Joshua R. Gold, Fernando R. Nodal, Andrew J. King, Victoria M. Bajo University of Oxford, United Kingdom [email protected] Tinnitus is defined by the reported presence of a sound percept in the absence of any environmental correlate; accordingly, animal models have sought to induce such a percept, verify it using behavioural measures, and understand it by physiological and anatomical means. However, published data have often neglected to obtain behavioural and physiological measurements within the same subjects. We therefore sought to expand knowledge of tinnitus neurobiology by developing a novel model of maladaptive plasticity based upon selective sensorineural hearing loss in the ferret. Individual animals (n = 7) were tracked longitudinally (>12 months), with auditory function assessed using an operantly conditioned gap-detection paradigm and measurement of auditory brainstem responses (ABRs). Upon the acquisition of stable behaviour, animals were deafened unilaterally by mechanical lesion of the spiral ganglion at the basal coil level, with further behavioural testing and ABR measurement at multiple time points post-recovery. Following the lesion, behavioural performance outcomes were heterogeneous, with a subset of ferrets
displaying gap-detection threshold elevations and correlated changes in late-wave ABR features, akin to those observed in human tinnitus patients. In particular, these physiological changes were distinct from those observed in animals without behavioural signs of tinnitus. From this mixed cohort, acute bilateral neural recordings performed in the auditory cortices of lesioned ferrets revealed changes in multiunit spectrotemporal response properties, including tonotopic rearrangement and elevations of neural synchrony and spontaneous activity, in accordance with published tinnitus-like physiology and our own behavioural and ABR-defined criteria. On the basis of these physiological changes, a second subset of ferrets received chronic bilateral fibre-optic implants in each auditory cortex, which had previously been injected with an AAV-ArchT construct, with the aim of potentially reversing the tinnitus-like percept by silencing abnormal cortical function in awake behaving animals. Study supported by the Wellcome Trust and Action on Hearing Loss. ID: 223
Dual involvement of early auditory cortex in an auditory working memory task: A combined study in human and non-human primates Artur Matysiak, Ying Huang, Norman Zacharias, Reinhard König, Peter Heil, Michael Brosch Leibniz Institute for Neurobiology Magdeburg, Germany Ying.Huang@lin-magdeburg.de Working memory (WM) is defined as the temporary retention of task-relevant information for goal-directed behaviours (D’Esposito, 2007). Since WM requires high accuracy and flexibility in information retention, sensory cortex needs to be involved in this process. Here we investigated neural correlates of WM in early auditory cortex using a multi-level approach in humans, where we monitored neural activity using magnetoencephalography (MEG), and in monkeys, where we recorded local field potentials (LFPs), multiunit activity (MU) and single-unit activity (SU). Both phasic encoding and tonic delay activity were investigated. Stimuli were four tone-pairs with the same or
different frequencies (in humans 1.5 and 1.6 kHz, separated by a silent delay of 2 s; in monkeys 1 and 3 kHz, 800 ms delay). Subjects were required to make differential motor responses for the two tone-pairs starting with the low-frequency tone, i.e., a go-response after the low-low pair and a nogo response after the low-high pair. Subjects therefore had to memorize the first tone to perform the task, and these tone-pairs were defined as the high-memory-load condition. For the two pairs starting with the high-frequency tone, subjects made nogo responses after both. The first tone was not required to be memorized, and these pairs were defined as the low-memory-load condition. A series of control experiments for stimulus differences, motor preparation, etc., was conducted on the same subjects or on the same neurons. By comparing neural activity between the high- and low-memory-load conditions, we found that tonic delay activity was higher in the high-load condition than in the low-load condition for human subjects. The differential delay activity was lateralized to the right hemisphere. Similar results were observed in monkeys. The high- and low-load conditions were differentiated bidirectionally in LFP, MU and SU activity. The differential delay activity was mainly related to the WM process, since it was task-specific and could not be explained by differences in stimulus, motor preparation, etc. Besides the tonic delay activity, the phasic encoding activity was also modulated by memory load. The phasic activity for the same stimulus was suppressed in the high-load condition relative to the low-load condition. Our results in both humans and macaques therefore suggest that auditory cortex is involved in auditory WM and that both phasic and tonic neural activity are related to the memory process.
ID: 224
Dynamic activations of left and right auditory related areas during temporal analyses of speech signals and its relation to cortical structure Nathalie Giroud, Ira Kurthen, Matthias Keller, Volker Dellwo, Martin Meyer University of Zurich, Switzerland [email protected] During speech processing, the auditory-related cortex is able to identify relevant acoustic cues that may be described at the frequency and at the time-scale level. Frequency variations can have a temporal grain of milliseconds and therefore change rapidly, or unfold at a rate of hundreds of milliseconds and therefore change slowly. We examined auditory evoked potentials (AEPs) as a function of temporally different speech cues in words and syllables. We related the amplitudes of the AEPs to specific activations of the left and right auditory-related areas and further explored dynamic changes of the activations over time. Additionally, we will examine the relationship of these activations to the cortical structure in auditory-related areas. We investigated this issue by (1) manipulating rapidly changing cues of a consonant-vowel syllable by varying the voice-onset time (VOT) and (2) manipulating the rhythmic patterns of a word in order to investigate the processing of slowly changing cues. AEPs and structural MRI scans have so far been collected from 21 healthy young adults. We used a pre-attentive mismatch negativity paradigm and a N1/P2 paradigm in combination with an inverse solution approach (LORETA). The thickness and surface area of the cortex will be calculated using a surface-based morphometry approach (FreeSurfer). The AEP data show that slowly changing cues are preferentially processed in the right middle temporal gyrus (MTG), whereas rapidly changing cues are preferentially processed in the left MTG. The time course of the lateralization effects additionally shows that they change dynamically over time. To conclude, the source localizations show a lateralization effect in the auditory-related areas in line with the “asymmetric sampling in time” (AST) model by Poeppel, but also implicate a very dynamic process over time.
ID: 225
Auditory processes limited by visual load Katharine Molloy1, Maria Chait1, Timothy D. Griffiths1,2, Nilli Lavie1 1University College London, United Kingdom; 2Newcastle University, United Kingdom [email protected] It is well established that perception is limited in conditions of high load that overwhelm perceptual capacity. The effects of perceptual load have typically been studied within a sensory modality, especially within vision, but less work has explored whether a large perceptual demand in one modality can limit processing in another. Here, we show behavioural and magnetoencephalography (MEG) data which indicate that perceptual load within vision can affect auditory perception, both for pure tones and for figure-ground segregation. Participants performed visual search tasks of either low load (feature pop-out) or high load (conjunction search). Auditory targets were presented on 50% of trials. In the detection experiments the target was a pure tone 12 dB above threshold; in figure-ground segregation, the ‘figure’ target consisted of a repetition of multiple frequency components among a background of randomly changing chords (Stochastic Figure-Ground, SFG, stimuli; Teki et al., 2013). In behavioural studies, a dual-task paradigm was used. For the MEG, responses to the auditory stimuli were passively recorded while participants engaged solely in the visual load tasks. Behaviourally, both pure tone detection and figure-ground segregation were found to suffer under the high-load, compared with the low-load, visual task. This reflects the phenomenon of ‘inattentional deafness’, where normally audible sounds are missed when we are overloaded. For pure tones, auditory evoked fields measured using MEG showed that the M100 response was significantly reduced under high visual load. This was associated with lower activity in secondary auditory cortex (middle temporal gyrus and superior temporal sulcus) under high compared with low load. Additionally, a P3 ‘awareness positivity’ was observed in the right hemisphere under low but not high visual load. The figure-ground MEG experiment is currently running and data will be discussed at the meeting.
Overall, our data show that, rather than the auditory system acting as an autonomous ‘early warning system’, it may actually be limited by overload of perceptual information, even from other sensory modalities. Reference: Teki, S., Chait, M., Kumar, S., Shamma, S., & Griffiths, T. D. (2013). Segregation of complex acoustic scenes based on temporal coherence. eLife, 2. ID: 226
The effects of attention on auditory object formation via temporal coherence James O'Sullivan, Edmund Lalor Trinity College Dublin, Ireland [email protected] The human brain has evolved to operate effectively in complex auditory environments, segregating multiple continuous stimuli into perceptually distinct auditory objects. One theory that seeks to explain such a percept, known as Temporal Coherence, proposes that spectral components which modulate coherently in time will be bound together to form a single auditory stream. One stimulus used in stream segregation paradigms, known as a Figure-Ground stimulus, consists of a sequence of chords containing a random set of pure-tone components. Occasionally, a subset of tonal components repeats in frequency over several consecutive chords, resulting in a spontaneous percept of a “figure” popping out of a random background of varying chords. The saliency of the figure depends on the quantity of tonal components with the same temporal characteristics. Studying this phenomenon using non-invasive techniques such as magneto- and electroencephalography (MEG/EEG) has traditionally been constrained by time-locked averaging techniques, which make it difficult to disentangle particular components of interest from the multitude of simultaneous neural responses to other stimulus features. Recently, the application of linear regression methods has allowed for the extraction of localized neural correlates of individual features of interest. Using these methods in a Figure-Ground detection paradigm, we show that it is possible to extract a neural response that is indicative of the neural processing of temporal coherence in relative isolation. Furthermore, we show the
effects of attention on this response in an Attended / Unattended paradigm. Our findings show an early effect of coherence, lasting from 100 ms to 200 ms post-stimulus, over both left and right parietal scalp regions, with no significant difference between attended and unattended responses (p > 0.05, paired t-test). After this, attended and unattended neural responses become, and remain, significantly different (p < 0.01, paired t-test). Peaking at 400 ms, the attended response is primarily located over left temporal areas. The unattended response peaks at a similar latency, but is instead located over right parietal scalp. Subsequently, the unattended response shows no further activation, whereas the attended response remains up until 600 ms post-stimulus. These findings suggest an early and automatic response to the temporal coherence of a stimulus, followed by a later response driven by attentional engagement. ID: 227
Pushing beyond the envelope: Improved modeling of continuous speech processing in EEG using phonetic features Giovanni Di Liberto, James O'Sullivan, Edmund Lalor Trinity College Dublin, Ireland [email protected] The details of how humans perform speech processing remain unknown. Progress on this problem has been made in recent years by the realization that cortical activity tracks the amplitude envelope of speech. This has facilitated the development of techniques based on regression methods for mapping this representation of speech to the recorded neurophysiological signal (EEG). Further insights have been achieved using a spectro-temporal representation, which is obtained by partitioning the acoustic signal into a number of frequency bands and calculating the envelope of each band. Although these methods are powerful tools that have addressed several previously unanswered questions, recent work with intracranial recordings (ECoG) has shown that high-gamma cortical activity tracks higher-order representations of speech based on the phonetic structure.
The current study shows that a phonetic representation of speech can also describe the mapping between speech and low-frequency scalp-recorded EEG. Indeed, we show that this representation produces a marked improvement in mapping accuracy over the approach based on the speech envelope. In particular, 90 minutes of continuous natural speech was divided into 32 trials and presented to 10 subjects. Multivariate linear regression was then employed on the recorded data and predictions of the EEG signals were obtained using leave-one-out cross-validation (see the sketch below). The models based on a phonetic structure were better predictors of the EEG for all subjects, and this was shown to be significant using a two-tailed paired t-test. Indeed, a phonetic representation carries higher-order information than the speech amplitude envelope. However, further investigation is necessary to elucidate whether the improvement we see is due to a higher-order mapping that captures a categorical representation of the phonetic content or to a better encoding of the low-level speech. Some results have been obtained that support the former explanation. In particular, multivariate models of the system response to the phonetic representation of speech permit the analysis of each phoneme independently. From this analysis, we show significant differences between the responses to vowels and non-vowels. In particular, we show that a binary classification performed with k-means on the phonetic model returns a partitioning with over 90% accuracy. Potentially, the resulting model is an important tool in the study of how speech is processed by the human brain. ID: 228
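The forward-modelling approach described above maps time-lagged stimulus features to each EEG channel with regularised linear regression. A minimal Python sketch (dimensions and regularisation are hypothetical, and cross-validation is omitted for brevity):

import numpy as np

def lagged_design(stim, min_lag, max_lag):
    """Build a time-lagged design matrix; stim is (time, features), lags in samples."""
    n_t, n_f = stim.shape
    lags = range(min_lag, max_lag + 1)
    X = np.zeros((n_t, n_f * len(lags)))
    for i, lag in enumerate(lags):
        dst = slice(max(lag, 0), n_t + min(lag, 0))
        src = slice(max(-lag, 0), n_t + min(-lag, 0))
        X[dst, i * n_f:(i + 1) * n_f] = stim[src]
    return X

def fit_forward_model(stim, eeg, min_lag=0, max_lag=25, lam=1e3):
    """Ridge-regularised weights mapping stimulus features to EEG channels."""
    X = lagged_design(stim, min_lag, max_lag)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)

# Hypothetical data: 60 s at 64 Hz, 19 phonetic features, 32 EEG channels
rng = np.random.default_rng(1)
stim = rng.random((60 * 64, 19))
eeg = rng.standard_normal((60 * 64, 32))
w = fit_forward_model(stim, eeg)
pred = lagged_design(stim, 0, 25) @ w  # predicted EEG, to correlate with held-out data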
Intracranial investigation of auditory and visual speech selectivity in human auditory and auditory-‐related cortex Ariane E. Rhone, Bob McMurray, Kirill V. Nourski, Hiroyuki Oya, Hiroto Kawasaki, Matthew A. Howard III The University of Iowa, United States of America ariane-‐[email protected] Although speech is often considered an auditory phenomenon, visual contributions to speech perception are well established. Human
electrophysiology and neuroimaging studies have outlined a model in which visual information, naturally preceding the acoustic onset of speech, can influence auditory processing, resulting in behavioral facilitation associated with audiovisual (AV) versus audio-alone (A) speech. A broad cortical network has been implicated in this process; we used electrocorticography to test the visual-alone (V) responsiveness and speech sensitivity of several of these areas simultaneously. Subjects were five normal-hearing adult neurosurgical patients undergoing chronic invasive monitoring for medically refractory epilepsy. Stimuli were the syllable /da/ in A or V form, combined with nonspeech stimuli (/da/-shaped noise or gurning), or presented unimodally (A or V). Electrophysiological data were recorded from Heschl’s gyrus (HG) using multicontact depth electrodes. Subdural grid arrays recorded simultaneous activity on the lateral cortical surface, including superior temporal gyrus (STG), inferior frontal gyrus (IFG), precentral gyrus, and middle temporal gyrus (MTG). Event-related band power in the high gamma band (70-150 Hz) was measured in two time windows: 300 ms following mouth motion onset, to test visual activation prior to acoustic stimulation, and 400 ms post audio onset (see the sketch after this abstract). Unimodal V stimuli activated sites on STG, IFG, MTG, and precentral gyrus, but not HG. Speech vs. nonspeech comparisons revealed distinct patterns of activation in different cortical regions. Prior to sound onset, HG and STG did not show speech selectivity; sites on precentral gyrus showed increased activation for visual speech vs. nonspeech content. Following sound onset, STG, but not HG, showed increased high gamma for both A and V speech. MTG showed increased activity for A speech stimuli, but no effect of V speech vs. nonspeech. Activity on precentral gyrus continued to be greater for V speech vs. nonspeech. Although IFG was activated by V stimuli, it was not modulated by speech content in either modality. Despite widespread activation by V stimuli, speech sensitivity was limited to nonprimary auditory areas and did not occur prior to acoustic onset except on precentral gyrus. This suggests that neural processes subserving AV facilitation emerge at the level of nonprimary and auditory-related cortical areas.
ID: 229
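Event-related high-gamma band power of the kind measured in the preceding abstract is commonly estimated by band-pass filtering and taking the squared Hilbert envelope. A minimal Python sketch, with hypothetical data and a simple pre-stimulus baseline (not the authors’ exact pipeline):

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_power(ecog, fs, band=(70.0, 150.0)):
    """Band-pass filter, then squared Hilbert envelope, per trial.
    ecog: (trials, samples) array for one electrode; power is normalised
    to the mean of the first 100 ms, treated here as a pre-stimulus baseline."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    envelope = np.abs(hilbert(filtfilt(b, a, ecog, axis=-1), axis=-1)) ** 2
    baseline = envelope[:, : int(0.1 * fs)].mean(axis=-1, keepdims=True)
    return envelope / baseline

# Hypothetical single-electrode data: 50 trials of 1 s at 1 kHz
rng = np.random.default_rng(3)
power = high_gamma_power(rng.standard_normal((50, 1000)), fs=1000)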
Successful lipreading of silent speech strengthens the correlation between cortical activity and the corresponding speech envelope Michael J. Crosse, Hesham ElShafei, Edmund C. Lalor Trinity College Dublin, Ireland [email protected] Neuroimaging research has shown that the presentation of visual speech in the absence of auditory speech activates primary auditory cortex (Calvert et al., 1997; Pekkola et al., 2004). However, due to the limited temporal resolution of fMRI, it is difficult to establish how activation in auditory cortex during silent lipreading is modulated over time or what this activation precisely reflects. Electrophysiological measurements, on the other hand, provide superior temporal resolution, rendering them more suitable methodologies for addressing this subject. For example, recent electrophysiological work has demonstrated that an estimate of the acoustic envelope can be reconstructed from EEG data recorded during both audio-only (A) and visual-only (V) speech (Crosse and Lalor, 2014). The authors suggested that this latter effect was mainly driven by visual responses to motion in the visual stimulus. However, it remains a possibility that auditory cortical activity tracks the amplitude envelope of the unheard acoustic signal during successful silent lipreading. Here we test this hypothesis with an experiment examining the effects of recalling a known speech passage during silent lipreading on recorded EEG. The subjects were trained on a one-minute AV speech stimulus prior to testing. They were then presented with 14 trials of the same AV stimulus, 14 trials of the same V stimulus without the audio (V-trained) and 14 trials of different V stimuli with which they were not familiar (V-untrained). Subjects were required to recall the auditory speech as well as they could during the V-trained condition and to detect target words during each of the three conditions. Subjects performed this target word detection significantly better in the V-trained condition than in the V-untrained condition. Preliminary analysis of the EEG data suggests that cortical
activity during the V-trained condition is more closely correlated with the acoustic envelope than during the V-untrained condition. While this could partly be due to attention effects on visual processing, it also suggests that auditory cortex synthesises, and thereby tracks, the acoustic envelope during silent lipreading. ID: 230
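The envelope-reconstruction analysis referenced above (Crosse and Lalor, 2014) can be sketched as a linear decoder mapping EEG channels back to the speech envelope, scored by Pearson correlation. A minimal single-lag Python illustration (published decoders use multiple time lags; all data here are hypothetical):

import numpy as np

def reconstruction_score(eeg, envelope, lam=1e2):
    """Fit a ridge decoder from EEG (time, channels) to the envelope (time,),
    then correlate the reconstruction with the true envelope."""
    w = np.linalg.solve(eeg.T @ eeg + lam * np.eye(eeg.shape[1]), eeg.T @ envelope)
    return np.corrcoef(eeg @ w, envelope)[0, 1]

# Hypothetical data: ~60 s of 32-channel EEG at 64 Hz
rng = np.random.default_rng(5)
eeg = rng.standard_normal((4000, 32))
envelope = rng.random(4000)
print(reconstruction_score(eeg, envelope))  # near 0 for random data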
Magnetoencephalography reveals brain activity varying with the number of retained tones in auditory short-term memory Sophie Nolden1,2,3, Stephan Grimault1,2,4, Synthia Guimond1,2, Christine Lefebvre1,2,5, Patrick Bermudez1,2, Pierre Jolicoeur1,2,5 1BRAMS, Montreal, Canada; 2CERNEC, University of Montreal, Canada; 3RWTH Aachen, Germany; 4CNRS, Marseille, France; 5CRIUGM, University of Montreal, Canada [email protected] The current study focused on the retention of tones in human auditory short-term memory. Recent studies have isolated an event-related potential, the Sustained Anterior Negativity (SAN), which varies with memory load during the retention of tones (e.g., Lefebvre et al., 2013). Here, we used magnetoencephalography (MEG) in order to find a magnetic equivalent of the SAN, that is, event-related magnetic fields varying with memory load during the retention of tones. In addition, we performed source analyses to localize the neural generators of the load-sensitive brain activity observed outside the head. Participants retained one or two simultaneously presented tones. After a retention interval of 2000 ms, a test tone was presented and the task was to determine whether that tone corresponded to the retained memory tones. Bilateral event-related magnetic fields over temporal and parietal areas varied with memory load. Source analyses revealed that activity in the superior temporal gyrus in both hemispheres, in the right inferior temporal gyrus, in the right inferior frontal gyrus, and in right parietal areas increased when the number of retained tones increased.
ID: 231
Auditory perception in a rhythmically variable context depends on neural amplitude fluctuations in multiple frequency bands Björn Herrmann1, Molly J. Henry1, Saskia Haegens2,3, Jonas Obleser1 1Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Germany; 2Columbia University College of Physicians and Surgeons, United States of America; 3Nathan S. Kline Institute Orangeburg, United States of America [email protected] Neural oscillations are thought to provide a mechanism to evaluate temporal predictions. When acoustic stimulation is temporally regular, low-frequency neural oscillations become entrained by the event structure. In turn, listening behavior is optimized, as expected events coincide with the “excitable” phase of the entrained oscillation. However, it is less clear how amplitude fluctuations influence listening behavior, in particular in temporally variable contexts in which listeners might use either a “rhythmic” or a “continuous” processing mode. Thus, the current magnetoencephalography study investigated the relation between neural amplitude fluctuations in auditory cortex and human listening behavior in a rhythmically variable listening situation. Tone sequences varied in temporal regularity (i.e., mean rate 2 Hz ± 0.14 Hz SD), and participants (N=20) indicated the presence of difficult-to-detect intensity changes. An oscillator model was used to estimate the degree to which each target tone was expected based on the timing of preceding tones (a simplified sketch follows the abstract). Time-frequency analyses quantified the degree to which the amplitude of pre-target neural oscillations predicted behavioral performance. Behaviorally, intensity changes were better detected when their occurrence was more predictable based on the preceding temporal structure. With respect to neural oscillations, we observed interactive effects of pre-target neural amplitude at three distinct frequencies on perception. First, amplitude of the 2-Hz neural oscillation differentially predicted target-detection performance – expected targets were best detected when 2-Hz neural amplitude was high, whereas unexpected
targets were best detected when amplitude was low. Second, we observed modulations of target-detection performance by alpha-frequency amplitude (~8 Hz and ~13 Hz, respectively), which, third, depended on low-frequency (2 Hz) amplitude. In detail, hit rates increased linearly with increasing 8-Hz amplitude, but this effect was strongest when 2-Hz amplitude took on intermediate values. Hit rates were further increased for either low or high 13-Hz alpha amplitude, but this quadratic trend was strongest for low and high, but not intermediate, 2-Hz amplitude values. The current results show that, in temporally variable contexts, auditory perception depends on complex interactions of neural amplitude fluctuations that might reflect an unstable neural state between “rhythmic” and “continuous” processing modes. ID: 232
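One simple way to formalise the expectancy estimate mentioned above is to phase-lock an oscillator at the nominal stimulation rate to the preceding onsets and read out the cosine of the target’s phase offset. This is a deliberately simplified stand-in for the entrainment model the abstract refers to; all values are hypothetical:

import numpy as np

def expectancy(onsets, target_time, rate=2.0):
    """Toy oscillator expectancy: 1 if the target falls on the beat implied
    by preceding onsets, -1 if it falls maximally off-beat."""
    period = 1.0 / rate
    phases = 2 * np.pi * (np.asarray(onsets) % period) / period
    mean_phase = np.angle(np.mean(np.exp(1j * phases)))  # circular mean
    target_phase = 2 * np.pi * (target_time % period) / period
    return np.cos(target_phase - mean_phase)

# Hypothetical 2-Hz sequence with jittered onsets (s)
onsets = [0.0, 0.52, 0.98, 1.51, 2.01]
print(expectancy(onsets, target_time=2.50))  # near 1: on-beat, expected
print(expectancy(onsets, target_time=2.75))  # near -1: off-beat, unexpected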
Familiarity with a vocal category revealed through the expression of a synaptic plasticity gene in auditory cortex Tamara N. Ivanova, Robert C. Liu Emory University Atlanta, United States of America [email protected] The molecular mechanisms of plasticity in the adult auditory cortex have been gaining attention, especially given electrophysiological evidence that neural plasticity there helps support auditory learning and long-‐term memories. One molecule of interest is the synaptic plasticity effector immediate early gene, Arc/Arg3.1, which plays a key role in memory consolidation. Arc mRNA expression acts as a proxy for neurons that are likely to undergo plasticity after a sensory experience. Playing back sounds induces Arc in auditory cortex, and that expression is reportedly necessary for auditory learning and plasticity (Carpenter-‐Hyland et al, 2011, SfN 912.09). The cellular compartmental expression of Arc in core auditory cortex actually correlates with a sound’s familiarity (Ivanova et al, 2011, Neuroscience 181:117-‐126). Specifically, prior exposure to a neutral sound induces a higher proportion of neurons with Arc expressed only in the cytoplasmic compartment immediately after re-‐exposure to that sound, compared to when the sound is novel. This has led to the hypothesis that sound-‐induced Arc cytoplasmic
expression in auditory cortex provides a molecular biomarker of a sound’s familiarity. We now test this hypothesis in the natural context of learning a vocal category’s behavioral significance. We utilize an ultrasonic communication system between mouse pups and their mothers. Mothers prefer to approach these calls, unlike pup-‐naïve virgins. The calls elicit a pattern of cytoplasmic compartmental expression in core auditory cortex indicative of the calls being more familiar to mothers than naïve virgins. However, if Arc expression indeed provides a biomarker for sound familiarity, then even virgin animals given extended experience with vocalizing pups should exhibit enhanced cytoplasmic expression in core auditory cortex. Experiments in such “co-‐caring” females now reveal that these animals indeed show increased expression of Arc mRNA within the cytoplasmic-‐only compartment after hearing natural pup calls. The proportion of expressing neurons matches that of mothers, and is significantly higher than that of naïve virgins. These data therefore support the hypothesis that Arc provides a molecular trace of a previously-‐heard sound category’s familiarity. ID: 233
Grey matter morphometry and cortical thickness alteration in the auditory cortex: Differences associated with normal variations of hearing acuity Fahad Alhazmi, Vanessa Sluming University of Liverpool, United Kingdom [email protected] Once any sense is altered, the brain’s structure and function are reorganised in order to adapt to this sensory change. Age-related hearing loss is one of the most widespread sensory deficits in the healthy aging population. It is likely to impact on brain structure and function, but it has not received the same attention as deafness or other types of hearing impairment, such as tinnitus, in the currently available scientific literature. It is therefore important to identify differences in brain structure and function between people with normal hearing and those with hearing loss at an early stage. The aim of this study was to investigate the effect of normal, age-associated variations of
hearing acuity on grey matter morphometry and cortical thickness in the auditory cortex of healthy adults. Forty-one adults (mean age 45 years, SD 11) were recruited to take part in the study. A routine hearing test was performed to measure participants’ hearing levels. Participants were divided into two age-matched groups: normal hearing and hearing loss. T1-weighted structural MRI images were acquired in order to investigate brain structure. Total grey matter volume and cortical thickness alterations were investigated in order to identify correlations between brain structure changes and normal variation of hearing acuity. Voxel-based morphometry analysis was also applied in order to investigate the differences between the two groups. Significant grey matter volume reduction was found in primary and secondary auditory cortex, primary somatosensory cortex and right thalamus in participants with mild hearing loss compared with normal hearers. Total mean cortical thickness was negatively correlated with age and hearing levels. A significant cortical thickness reduction was found in left primary auditory cortex in older participants and those with mild hearing loss. However, no significant difference was found in right primary auditory cortex between participants. Our results suggest a specific role of normal age-related hearing loss in shrinking grey matter tissue and altering cortical thickness in certain brain regions. Understanding the causal relationship between these grey matter and cortical thickness changes and mild hearing loss will be an important next step in understanding hearing loss at an early stage. ID: 234
Plastic effects of combined electric stimulation of auditory nerve and vagus nerve in primary auditory cortex Armin Wiegner, Martin Kempe, Maike Vollmer University Hospital Wuerzburg, Germany [email protected] The success and limitations of a cochlear implant (CI) depend on the central auditory system’s ability to adequately process the (reduced) spectral/temporal features of electric stimuli delivered to the cochlea. Clinical studies show that auditory experience
can improve speech perception in CI users, indicating learning-induced plasticity in central auditory processing. In hearing animals, electric stimulation of the vagus nerve (VNS) combined with acoustic stimulation generated highly specific and long-lasting plasticity in both spectral and temporal processing in primary auditory cortex (AI; Engineer et al. 2011, Shetake et al. 2012). Here we tested the effects of combined CI and VNS stimulation (CI/VNS) in the deaf auditory system. We hypothesized that CI/VNS would generate an expanded tonotopic representation and improved neuronal temporal processing in AI. Adult gerbils were bilaterally deafened and unilaterally implanted with multichannel CIs (MedEl, Austria). One experimental group (‘CI-only’) received single-channel chronic electric stimulation of the cochlea (ICES; 2.5 h/d, 20 d); a second group (‘CI/VNS’) received additional VNS. In control animals, only one ear was acutely deafened and implanted. This allowed comparisons between electric and acoustic response properties in the same cortical neurons, as well as assignment of acoustic characteristic frequencies to the site of ICES. In physiological experiments, AI multiunit responses to ICES at different electrode configurations were recorded to construct tonotopic maps. Pulse trains of increasing rates were used to estimate temporal processing in AI neurons. In control animals the tonotopic pattern of electric activation in AI corresponded to the intracochlear site of ICES. Both chronically stimulated groups (‘CI-only’ and ‘CI/VNS’) maintained a tonotopic representation of ICES. In contrast to our hypothesis, our preliminary analysis did not show clear evidence of spatial overrepresentation of the chronically stimulated electrode pair in either experimental group. However, analysis is ongoing to determine definitively whether the addition of VNS to ICES can alter the overall pattern of AI activation.
ID: 235
Supervised parcellation of the macaque auditory cortex from resting-state fMRI Eren Gultepe1, Jason P. Gallivan2, R. Matthew Hutchison1,3, Stefan Everling1, Ingrid S. Johnsrude1,2 1University of Western Ontario, London, Canada; 2Queen's University, Kingston, Canada; 3Harvard University, United States of America [email protected] In humans and other animals, resting-state fMRI (rs-fMRI) is now a standard technique used to measure spontaneous patterns of functional connectivity in the brain. Spatial correlations of low-frequency BOLD oscillations are used to characterize broad-scale neural networks and to reveal functionally distinct brain regions [1]. But do parcellations based on rs-fMRI data correspond to anatomical (e.g., cytoarchitectonic) parcellations? In macaques, anatomical parcellation according to cytoarchitectonic criteria is well established [2,3], and so a comparison of anatomical and rs-fMRI parcellation can be performed. Here, we determine whether macaque rs-fMRI data can be used to discriminate the cytoarchitectonic regions of the auditory koniocortex (AK; both medial and lateral regions combined) from the lateral belt area (PaAL). Rs-fMRI data were collected on a 7T MRI system from 10 macaque monkeys (7 M. fascicularis; 3 M. mulatta) anesthetized with 1% isoflurane. Registration to the MNI macaque atlas [4] (including both macaque species), standard rs-fMRI preprocessing [5], and principal-component analysis for dimension reduction were performed on the data. Trained and tested on independent rs-fMRI runs, a random-forests classifier revealed statistically significant discrimination of the BOLD signal from voxels within the AK and PaAL regions (t(9)=3.40, p<0.05, one-sample t-test). This confirms that rs-fMRI signals contain information related to anatomical structure. Further, the successful delineation of known auditory boundaries indicates that supervised parcellation methods applied to non-invasive rs-fMRI data, guided by high-resolution anatomical data, could be used to optimize the application of unsupervised parcellation techniques, such as clustering, to rs-fMRI data: these methods may be of great use for the
anatomo-functional parcellation of the cortex of humans, even when cytoarchitectonic or other anatomical parcellation data are not available. References 1. Beckmann M, Johansen-Berg H, Rushworth MFS. J Neurosci. 2009;29:1175–1190. 2. Hackett TA, Preuss TM, Kaas JH. J Comp Neurol. 2001;441:197–222. 3. Paxinos G, Huang XF, Petrides M, Toga AW. The Rhesus Monkey Brain in Stereotaxic Coordinates. San Diego, CA: Academic Press; 2009. 4. Frey S, Pandya DN, Chakravarty MM, Bailey L, Petrides M, Collins DL. NeuroImage. 2011;55:1435–1442. 5. Murphy K, Birn RM, Bandettini PA. NeuroImage. 2013;80:349–359.
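The pipeline named above (PCA for dimension reduction, a random-forests classifier trained and tested on independent runs, and a one-sample t-test of per-subject accuracies against chance) can be sketched as follows. This runs on synthetic stand-in data; the array shapes, component count, and forest size are illustrative assumptions, not the authors' settings.

```python
# Per subject: reduce voxel time courses with PCA, train a random forest on one
# run, test on an independent run, then test above-chance accuracy across subjects.
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
accuracies = []
for subject in range(10):
    # Synthetic stand-in: 200 voxels x 300 timepoints per run, labels 0=AK, 1=PaAL
    labels = rng.integers(0, 2, 200)
    train = rng.normal(0, 1, (200, 300)) + labels[:, None] * 0.4
    test = rng.normal(0, 1, (200, 300)) + labels[:, None] * 0.4

    pca = PCA(n_components=20).fit(train)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(pca.transform(train), labels)
    accuracies.append(clf.score(pca.transform(test), labels))

# One-sample t-test against chance (0.5 for two regions)
t, p = stats.ttest_1samp(accuracies, 0.5)
print(f"mean accuracy={np.mean(accuracies):.2f}, t(9)={t:.2f}, p={p:.3g}")
```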
ID: 236
Functional organization of speech perception in the human superior temporal gyrus Liberty S. Hamilton, Edward F. Chang University of California San Francisco, United States of America [email protected] Human speech perception requires the transformation of low-level acoustic features to higher-level linguistic representations. How the brain performs this acoustic-to-phonemic translation is poorly understood, especially since there are no invariant acoustic features that determine any particular phoneme. To understand speech, the human brain must therefore perform a complex categorization task of speech sounds within a highly multidimensional space encompassing phonetic features, such as manner and place of articulation, and acoustic features including formant frequencies, voice onset time, and spectrotemporal modulations. The posterior superior temporal gyrus (PSTG) has been strongly implicated in this process. Still, the functional organization of acoustic and phonetic features in the PSTG remains unclear. Here, we used 256-channel high-density electrocorticography (ECoG) to record activity directly from the surface of the brain in patients with medically intractable epilepsy while they listened passively to natural, continuous human speech. Data were obtained from 11 subjects with left hemisphere ECoG grids and 6 subjects with right hemisphere grids. To describe the selectivity of each electrode to specific acoustic features, we fit linear and nonlinear spectrotemporal receptive
field (STRF) models to the LFP signal, bandpassed in the high gamma range (70-150 Hz). These STRFs predicted neural responses to held-out data with relatively high accuracy, with correlations up to r = 0.7 (average correlation r = 0.34 ± 0.01 for left hemisphere sites, r = 0.32 ± 0.01 for right). STRF performance was generally higher for posterior compared to anterior STG sites. We found that electrodes in both hemispheres were selective for combinations of simple and complex spectrotemporal features, and that this acoustic selectivity correlated with selectivity for higher-order phonetic features such as manner of articulation. Spatially, we found evidence of rudimentary organization for spectrotemporal and phonetic features. Uncovering the functional organization and feature maps in the auditory cortex has important implications for sensory coding, since it is thought that map representation of external features may underlie efficient coding and permit effective sensory discrimination. Using high spatial and temporal resolution recordings, we are able to describe the organization of speech sounds in the human auditory cortex at an unprecedented level of detail.
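A linear STRF of this kind can be estimated by regularized regression of the response onto a time-lagged stimulus spectrogram, scored by the correlation with held-out data. The sketch below uses ridge regression on synthetic signals; the dimensions, lag window, and penalty value are illustrative assumptions rather than the authors' fitting procedure.

```python
# Fit a linear STRF: predict a high-gamma trace from a lagged spectrogram.
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(2)
n_t, n_freq, n_lag = 5000, 32, 30           # timepoints, spectrogram bands, lags
spec = rng.normal(0, 1, (n_t, n_freq))

# Lagged design matrix: block for lag k holds spec shifted by k frames
X = np.zeros((n_t, n_freq * n_lag))
for lag in range(n_lag):
    X[lag:, lag * n_freq:(lag + 1) * n_freq] = spec[:n_t - lag]

true_strf = rng.normal(0, 1, n_freq * n_lag) * (rng.random(n_freq * n_lag) < 0.05)
y = X @ true_strf + rng.normal(0, 5, n_t)   # synthetic high-gamma response

split = int(0.8 * n_t)
Xtr, ytr, Xte, yte = X[:split], y[:split], X[split:], y[split:]

lam = 1e3                                   # ridge penalty; cross-validated in practice
w = solve(Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1]), Xtr.T @ ytr)
strf = w.reshape(n_lag, n_freq)             # lag x frequency field, for inspection

r = np.corrcoef(Xte @ w, yte)[0, 1]         # prediction accuracy on held-out data
print(f"held-out correlation r={r:.2f}")
```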
ID: 237
Maturation of intrinsic properties of pyramidal cells and fast-spiking interneurons in the auditory cortex of mice Andreas Abraham1, Marlen Dierich1,2, Hartmut Niekisch1,3, Florian Hetsch1,4 1University of Potsdam, Germany; 2Philipps University of Marburg, Germany; 3Leibniz Institute for Neurobiology Magdeburg, Germany; 4Max Delbrueck Center for Molecular Medicine Berlin, Germany andreas.abraham@uni-potsdam.de Cortical responses to auditory input change dramatically during development, and these processes seem to be connected to the maturation of excitatory pyramidal neurons and inhibitory interneurons. In the auditory cortex, GABAergic interneurons play a pivotal role in information processing by providing feed-forward and feedback inhibition onto pyramidal neurons. Such inhibitory processes can be effective in noise suppression, mediate
selection among competing inputs, and possibly even implement complex computations. Despite their important role in cortical processing, information about the intrinsic properties and the maturation of auditory cortical interneurons is limited. This study examined neurons in the auditory cortex of mice in terms of maturational processes. Since mice possess adult-like hearing abilities approximately 18-20 days after birth, we compared intrinsic voltage and current properties in "premature" (postnatal day, PND 14-20) and "mature" (PND 21-42) fast-spiking (FS) interneurons and regular-spiking (RS) pyramidal neurons with the whole-cell patch clamp technique. We show pronounced maturation in RS but not in FS neurons. Current-clamp analysis revealed significantly smaller action potential (AP) amplitudes but protracted AP widths and afterhyperpolarisations, higher input resistance, as well as more pronounced sag and total voltage in "premature" compared to "mature" RS neurons. Voltage-clamp analysis revealed two time- and voltage-dependent hyperpolarization-activated currents, the fast inward rectifying potassium current (Kir) and the more slowly hyperpolarization-activated cation current (IH), consisting of a fast (IH-fast) and slow (IH-slow) component. Compared to "mature" RS neurons, "premature" cells exhibited significantly smaller Kir and total current (Kir+IH) and smaller densities for Kir, IH and total current, but faster time constants for IH-fast, and higher input resistance. In FS neurons, only the density for IH-slow was smaller in "premature" compared to "mature" cells. In conclusion, the age-related differences of intrinsic parameters between "premature" and "mature" RS neurons in the auditory cortex may to some extent be governed by different expression of inwardly rectifying conductances (mediated by the Kir channel and the hyperpolarization-activated cyclic nucleotide-gated (HCN) channel).
ID: 238
EEG and MEG brain activity during audio-visual paired associate learning Jarmo Hämäläinen University of Jyväskylä, Finland [email protected] Most training studies utilize a pre- and post-measurement scheme in which changes in brain activity are measured even weeks apart. Useful as such measurement schemes have been, they do not provide detailed information about the learning process itself. The goal of the current study was to examine the changes in brain processes during learning of audio-visual associations in adults. In two separate studies, data were collected using a 128-channel EEG system and a 306-channel MEG system. EEG data showed an increasing positive response at parietal electrodes around 400 ms during the learning task. MEG data also showed changes in brain activity after 400 ms, particularly at superior temporal areas but also at parietal areas. The results suggest that audio-visual association learning can be studied within one training session. The next phases of the study will examine how these changes are related to behavioral learning outcomes in different tasks and to grapheme-phoneme learning in children. ID: 239
Musical entrainment directly recorded in the depth of the human temporal and frontal cortex Sylvie Nozaradan1,3, Jacques Jonas1,2, Jean-Pierre Vignal2, Louis Maillard2, Bruno Rossion1, André Mouraux1 1Université catholique de Louvain, Belgium; 2Neurology, CHU Nancy, France; 3BRAMS, Canada [email protected] Beat and meter refer to the spontaneous perception of periodicities while listening to music (e.g., perceiving a waltz as a three-beat meter), a perception that usually entrains body movement at the beat frequency. How the brain computes this perceived periodicity, and synchronizes the body to it, from complex auditory rhythms that are not necessarily periodic in reality remains unknown. Here, by taking advantage of the high temporal and spatial resolution obtained with depth electrodes implanted for the evaluation of intractable epilepsy, we
provide the first evidence that listening to rhythms generating a spontaneous beat and meter perception induces neural entrainment at beat and meter frequencies in both auditory areas and movement-related areas of the human brain. Depth recordings were performed in four patients with electrodes implanted in the superior temporal gyrus as well as in the premotor cortex and the supplementary motor area. At electrode contacts located in the primary and secondary auditory cortex, steady-state evoked potentials (SS-EPs) were elicited at frequencies corresponding to the rhythm envelope. As compared to non-beat- or meter-related frequencies, the amplitudes of the SS-EPs obtained at beat and meter frequencies were selectively enhanced, even though the acoustic energy of the eliciting sounds was not necessarily predominant at these frequencies. This indicates that beat and meter perception modulates the processing of incoming auditory input already at the level of the primary auditory cortex, probably through a top-down mechanism of dynamic attending. Electrode contacts located in the premotor cortex and the supplementary motor area showed an even stronger entrainment, selective for the frequency of the beat. This selective neural entrainment to the beat in movement-related brain areas when listening to musical rhythms could explain how music entrains spontaneous body movements at the beat frequency. It also provides crucial insight into the neural mechanisms underlying the shaping of perception by covert movement. Taken together, this first intracerebral investigation of the neural response to musical rhythms across human brain areas highlights how the interactions between sensory and motor areas may contribute to the emergence of perceptual objects within the human brain.
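The SS-EP (frequency-tagging) measure used here amounts to reading out spectral amplitude at the beat and meter frequencies and correcting it against neighbouring frequency bins. A minimal sketch on a synthetic signal, with an assumed sampling rate and beat frequency:

```python
# Frequency-tagging readout: FFT the response, take the amplitude at the tagged
# frequency, and subtract the mean of neighbouring bins as a noise correction.
import numpy as np

fs = 512.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 32, 1 / fs)                 # 32 s epoch -> 1/32 Hz resolution
beat_f = 1.25                                # assumed beat frequency (Hz)
signal = 0.5 * np.sin(2 * np.pi * beat_f * t) + np.random.default_rng(3).normal(0, 1, t.size)

amp = np.abs(np.fft.rfft(signal)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

k = np.argmin(np.abs(freqs - beat_f))        # bin at the beat frequency
neighbours = np.r_[amp[k - 5:k - 1], amp[k + 2:k + 6]]   # skip adjacent bins
ssep = amp[k] - neighbours.mean()            # noise-corrected SS-EP amplitude
print(f"SS-EP at {freqs[k]:.2f} Hz: {ssep:.3f} (raw {amp[k]:.3f})")
```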
ID: 240
Streaming music in the brain: Development of an objective auditory stream segregation task with polyphonic music Niels R. Disbergen1,2, Giancarlo Valente1,2, Merle-Marie Ahrens3,4, Elia Formisano1,2, Robert J. Zatorre5,6 1Maastricht University, The Netherlands; 2Maastricht Brain Imaging Center, The Netherlands; 3University of Glasgow, United Kingdom; 4Institut des Neurosciences de La Timone, CNRS & Université Aix-Marseille, France; 5Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Canada; 6International Laboratories for Brain, Music and Sound (BRAMS), Université de Montréal & McGill University, Canada [email protected] To investigate modulation of auditory streams by bottom-up and top-down processes in natural scenes, we developed a psychophysical paradigm providing control over a participant's locus of attention as well as the timbre distance between scene elements. While listening to synthesized polyphonic counterpoint music consisting of two voices (Bassoon and Cello), participants detected patterns of four triplets (i.e. temporal modulations), located either within one musical voice or across the voices, or not present at all. Music pieces were composed in close collaboration with a composer and controlled for, among other factors, pitch distance, tempo, and temporal modulations. Triplets were melodically integrated into the second half of each excerpt. Polyphonic music was employed since one can attend to the aggregate as well as to the individual voices within the same stimulus. To vary the locus of attention, participants attended to one of the instruments or the aggregate and indicated (post-stimulus) whether the pattern was present in the attended instrument(s); several catch trials were included. Listeners attended to the same instrument(s) during a block of trials, and before each block were instructed which instrument(s) to attend. To modulate bottom-up stimulus contributions, we changed instrumental timbre distance by morphing melodies across instruments using the STRAIGHT vocoder (Kawahara et al.) in Matlab. Time-frequency landmarks were created on each original melody (time: middle of each note;
frequency: note's f0) and put in correspondence during logarithmic interpolation of f0, spectral density, and aperiodicity across instrumental timbres. Psychophysical testing showed that non-musicians (N = 7) were capable of reliably switching attention and detecting triplet patterns. Mean task (A-prime) performance was 0.96 +/- 0.02 (mean +/- sd); no significant differences were found (two-sample t-test, two-tailed) between the different triplet options: across voices (0.94 +/- 0.07) versus upper voice (0.96 +/- 0.04; t(12)=-0.715, p=0.488), across voices versus lower voice (0.96 +/- 0.03; t(12)=-1.014, p=0.331), and upper voice versus lower voice (t(12)=-0.425, p=0.679). Additional testing indicated that participants were able to perceive the timbre differences and that perceptual ratings matched physical morphing distance. In conclusion, the results so far suggest that our experimental paradigm enables the study of stream segregation in a natural listening context for further psychophysical and functional neuroimaging experiments.
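The A' (A-prime) statistic reported above is the standard nonparametric sensitivity index computed from hit and false-alarm rates. A small sketch, with made-up trial counts for illustration:

```python
# Nonparametric sensitivity index A' (Pollack & Norman, 1964).
def a_prime(hit_rate: float, fa_rate: float) -> float:
    if hit_rate >= fa_rate:
        return 0.5 + ((hit_rate - fa_rate) * (1 + hit_rate - fa_rate)) / (4 * hit_rate * (1 - fa_rate))
    # Symmetric formula when performance is below chance
    return 0.5 - ((fa_rate - hit_rate) * (1 + fa_rate - hit_rate)) / (4 * fa_rate * (1 - hit_rate))

# Example: 48/50 triplet patterns detected, 3/50 false alarms on catch trials
print(f"A' = {a_prime(48 / 50, 3 / 50):.3f}")
```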
ID: 241
Single neuron and population coding of natural sounds in the mouse auditory cortex Amos Shalev, Yishai Elyada, Israel Nelken, Adi Mizrahi Hebrew University of Jerusalem, Israel [email protected] Early in the auditory hierarchy, circuits are organized and highly specialized for detecting basic sound features. It is assumed that as information ascends along the auditory pathway, more complex features of sounds are encoded. For example, the primary auditory cortex (A1) has been proposed to code complex features of the soundscape. However, the underlying coding principles and their mechanisms are still poorly understood. Our work focuses on understanding coding principles of cortical neurons and circuits to natural sounds, with special emphasis on vocalizations, using the mouse as a model. To study how A1 neurons code vocalizations, we first recorded a library of pup vocalizations from BALB/c mice. Pup vocalizations are several seconds long and contain numerous syllables with varying inter-stimulus intervals. In
addition, syllables have complex features like harmonics, amplitude modulations (AM), and frequency modulations (FM). To describe how cortical neurons encode these vocalizations, we recorded the spiking activity of single neurons in A1 to pure tones and vocalizations using blind loose-patch recordings. We found that neurons in A1 responded with highly heterogeneous patterns of activation to pup vocalizations. Interestingly, we detected a clear preference for specific and sometimes unique syllables in the sentence. Neuronal response parameters like basic frequency and amplitude tuning could not explain the syllable preference, leading us to hypothesize that other mechanisms are responsible for neuronal response profiles. We are currently testing how stimulus properties such as time and context, and neuronal properties such as inhibition and synaptic depletion, could better explain the complexity of neuronal response profiles. In addition, we used GCaMP6 to collect in vivo two-photon calcium responses from large populations of neurons in single mice (up to 300 neurons/mouse) to describe how the population level encodes natural sounds in A1. ID: 242
Attention on tonotopy: Task- and stimulus-driven audio-frequency representations in human supratemporal cortex Lars Riecke, Judith Peters, Giancarlo Valente, Valentin G. Kemper, Elia Formisano, Bettina Sorger Maastricht University, The Netherlands [email protected] This study investigates whether acoustic stimuli and selective auditory attention share a common frequency representation in the human auditory cortex. In the "bottom-up" experiment, the frequency of the auditory stimulus was varied: three different frequency bands (bandwidth: seven semitones, center frequencies separated by 1.9 octaves) were presented separately and 15 normally-hearing, pre-trained listeners performed a pitch identification task on them. In the "top-down" experiment, the frequency that the listeners were attending to was varied: the same three bands were now presented simultaneously and the listeners performed a gap detection task
requiring them to attend to a specific band cued by a visual letter. The experiments were conducted while 3-T functional magnetic resonance images of temporal cortex were collected after each auditory stimulus presentation. Listeners performed both tasks with high accuracy (98% and 89% in the bottom-up and top-down experiments, respectively). FMRI data from both experiments were combined in the analysis, focusing on subject-specific frequency representations within a primary region (bilateral PAC) and a non-primary region (bilateral supratemporal cortex excluding PAC, STC-) defined manually from macro-anatomical landmarks on individual cortical surface reconstructions. A linear classifier was trained to discriminate the blood oxygenation level-dependent (BOLD) response patterns obtained in the different frequency conditions of the bottom-up experiment (30 trials per condition). The classifier was then used to decode the listener's focus of frequency-selective attention from the BOLD response patterns in the top-down experiment (36 trials per condition). Random-effects statistical group analysis revealed that pattern classification accuracy was above chance (33.33%) in both PAC (36.50%, p<0.0093) and STC- (35.20%, p<0.001). These preliminary results suggest that distributed response patterns in PAC and non-PAC convey information on both acoustic and attended frequency. They may provide novel insights into the interplay of bottom-up and top-down cortical mechanisms for frequency-selective listening.
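The cross-experiment decoding scheme (train on the stimulated-frequency trials of the bottom-up experiment, then decode the attended frequency from the top-down trials) can be sketched as follows. A linear SVM stands in for the unspecified linear classifier, and the voxel counts and effect sizes are assumptions applied to synthetic data.

```python
# Train on bottom-up (stimulated frequency) patterns, decode attended frequency.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_vox = 400                                   # voxels in the PAC (or STC-) ROI
bands = np.array([0, 1, 2])                   # low / mid / high frequency band

# Bottom-up experiment: 30 trials per band; top-down experiment: 36 per band
y_train = np.repeat(bands, 30)
X_train = rng.normal(0, 1, (90, n_vox)) + y_train[:, None] * 0.15
y_test = np.repeat(bands, 36)
X_test = rng.normal(0, 1, (108, n_vox)) + y_test[:, None] * 0.15

clf = LinearSVC(C=1.0).fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"decoding accuracy {acc:.2%} (chance 33.33%)")
```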
ID: 243
Cortical activation and connectivity in congenital auditory deprivation Peter Hubka1, Jochen Tillein2,3, Andrej Kral1 1Institute of Audioneurotechnology & Department of Experimental Otology, ENT Clinic, Hannover Medical School, Germany; 2Department of Otorhinolaryngology, Goethe-University Frankfurt am Main, Germany; 3MedEl Starnberg, Germany hubka.peter@mh-hannover.de Congenital sensory deprivation leads to substantial perceptual deficits after a peripheral reactivation of deprived afferent pathways in adulthood. Despite these deficits, peripheral electrical stimulation of the auditory
nerve is still able to reliably activate the primary auditory cortex. Further spread of activation to non-primary auditory fields has not been studied yet. The present study is aimed at the functional analysis of the simultaneously recorded activation of the primary auditory cortex (A1) and the posterior auditory field (PAF) evoked by electrical stimulation using cochlear implants in normally hearing and congenitally deaf cats (CDC). Multiunit activities from six congenitally deaf cats and five hearing controls were recorded simultaneously in A1 and PAF by means of 16-channel microelectrode arrays (Neuronexus probes). All animals were electrically stimulated; mono- and binaural responses were evoked by pulse trains (500 Hz, 3 pulses) at intensities of 0-10 dB above response thresholds. Effective connectivity between the simultaneously recorded positions in the fields A1 and PAF was computed using a transfer-entropy approach. Activation of both studied cortical fields, A1 and PAF, was found in all adult congenitally deaf cats. The main difference was the significantly shorter duration, and hence significantly lower level, of summed activation of the cortex. The effective connectivity between responding positions in A1 and PAF was substantially lower in CDC when compared to the control group. Furthermore, connectivity analysis of the responses within A1 revealed a functional separation of supragranular and infragranular layers in CDC, whereas supra- and infragranular layers constitute one interconnected functional unit in the normally hearing animals. These results demonstrate preserved broad activation of cortical structures evoked by peripheral stimulation of deprived auditory pathways in adult CDC. Cortical activation was, however, weakly functionally coupled through intracortical connections, which crucially affects the ability of the cortical network to detect and integrate incoming information. Supported by Deutsche Forschungsgemeinschaft (KR 3370/1-3).
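Transfer entropy, the effective-connectivity measure used here, quantifies how much the past of one signal improves prediction of another beyond that signal's own past. A minimal sketch for two binarized spike trains with one-sample histories; practical analyses typically use richer embeddings and bias corrections.

```python
# Transfer entropy TE(X->Y) for two 0/1 time series with one-sample histories.
import numpy as np

def transfer_entropy(x: np.ndarray, y: np.ndarray) -> float:
    """TE(X->Y) = sum p(y1,y0,x0) * log2[ p(y1|y0,x0) / p(y1|y0) ] in bits."""
    y1, y0, x0 = y[1:], y[:-1], x[:-1]
    te = 0.0
    for a in (0, 1):            # y at t+1
        for b in (0, 1):        # y at t
            for c in (0, 1):    # x at t
                p_abc = np.mean((y1 == a) & (y0 == b) & (x0 == c))
                p_bc = np.mean((y0 == b) & (x0 == c))
                p_ab = np.mean((y1 == a) & (y0 == b))
                p_b = np.mean(y0 == b)
                if min(p_abc, p_bc, p_ab, p_b) > 0:
                    te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te

rng = np.random.default_rng(5)
x = rng.integers(0, 2, 10000)
y = np.roll(x, 1) ^ (rng.random(10000) < 0.1)   # y follows x with some noise
print(f"TE(x->y) = {transfer_entropy(x, y):.3f} bits, TE(y->x) = {transfer_entropy(y, x):.3f} bits")
```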
ID: 244
Neurophysiological markers of perceptual learning in awake and sleeping humans Thomas Andrillon1,2, Daniel Pressnitzer1, Trevor Agus1,3, Damien Léger4, Sid Kouider1 1École Normale Supérieure; 2Université Pierre et Marie Curie; 3Sonic Arts Research Centre, School of Creative Arts, Queen's University Belfast; 4Centre du Sommeil et de la Vigilance, Hôtel-Dieu de Paris, France [email protected] During noise repetition-detection tasks, participants have to discriminate between sequences of continuously running white noise and sequences made of repeated fragments of the same, frozen, noise. Dramatic increases in performance have been observed when one particular repeated fragment recurs across the experiment, revealing fast perceptual learning of complex arbitrary patterns in the acoustic environment. A previous MEG study suggests that increased inter-trial phase coherence is associated with learning, but the underlying neural mechanism remains to be further specified. Notably, the role of attention in such learning has yet to be evaluated. In our study, we adapted this paradigm and recorded EEG in human participants as they detected white noise repetitions. In the first part, participants (N=13) had to discriminate between 1.5 s of continuously running noise (condition "Ns") and 1.5 s consisting of a 0.5 s noise fragment repeated three times. Among these repeated fragments, some appeared only once during a block ("RNs") and some recurred across a block (reference noises, "RefRNs"). In the second part, repeated fragments were reduced to 0.2 s (every 0.5 s, 4 repetitions) to restrict the time window in which repetition detection is possible. At the behavioural level, our study replicates previous studies by showing increased sensitivity to reference noises (i.e., performance for RefRNs was statistically higher than for RNs). At the neurophysiological level, repetitions were associated with an increase in both power and phase coherence below 4 Hz, more so for RefRNs than RNs. Performance was correlated with the strength of these cortical markers. We then adapted our paradigm to a night study (N=20 subjects) to test whether noise repetition-detection could be automated and
pursued during sleep. For each vigilance stage (rapid eye-movement (REM) sleep, non-REM sleep and wakefulness), participants listened to different sets of RefRNs, which were then re-tested in the morning. First, EEG recordings revealed evidence of repetition-detection even in sleep (an increase in phase coherence <4 Hz), indicating that repetition-detection is possible in the absence of awareness. Second, performance at re-test suggested learning effects for reference noises heard during wake and REM sleep but a negative effect for reference noises heard during non-REM sleep, which could be linked to the neuromodulatory changes associated with these different states.
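The low-frequency inter-trial phase coherence (ITC) marker can be computed by low-pass filtering each trial below 4 Hz, extracting the analytic phase, and measuring phase alignment across trials. A sketch on synthetic trials, with an assumed sampling rate:

```python
# Inter-trial phase coherence below 4 Hz on a stack of trials.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                       # EEG sampling rate (Hz), assumed
rng = np.random.default_rng(6)
t = np.arange(0, 1.5, 1 / fs)
# 40 trials: a weak 2 Hz phase-locked component buried in noise
trials = 0.3 * np.sin(2 * np.pi * 2 * t) + rng.normal(0, 1, (40, t.size))

b, a = butter(4, 4 / (fs / 2), btype="low")      # 4th-order low-pass at 4 Hz
phase = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))

# ITC(t) = | mean over trials of exp(i * phase) |: 0 = random, 1 = perfect locking
itc = np.abs(np.mean(np.exp(1j * phase), axis=0))
print(f"mean ITC below 4 Hz: {itc.mean():.2f}")
```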
ID: 246
Visual-auditory interactions in ferret auditory cortex: The effect of temporal coherence Huriye Atilgan, Stephen Town, Katherine Wood, Jennifer K. Bizley University College London, United Kingdom [email protected] Human listeners performing a selective auditory attention task are better at reporting brief deviants in a target stream when it is accompanied by a coherently modulated visual stimulus than when the visual stimulus is modulated coherently with the distractor stream (Maddox et al., in prep). In this study, we use the same stimuli to explore whether the coherence between temporally modulated auditory and visual stimuli can influence neuronal activity in auditory cortex. Auditory stimuli were continuous vowels which were amplitude modulated with a <7 Hz noisy carrier envelope. Embedded within the vowel were brief (200 ms) timbre deviants. Two vowels (/u/ and /a/) were generated, with different fundamental frequencies, and modulated with independent envelopes. These were then presented either separately or concurrently, accompanied by a luminance-modulated visual stimulus whose envelope either matched that of one of the auditory streams or was independently modulated. Recordings were made simultaneously in auditory and visual cortex in anesthetised ferrets, and in the auditory cortex of chronically implanted,
passively listening ferrets. We explored whether spiking responses were modulated in response to changes in the auditory modulation envelope or changes in timbre, and whether the coherence of the visual stimulus influenced these factors. We performed a 3-way ANOVA on spike counts binned at 20 ms resolution. The three stimulus parameters (auditory envelope phase, visual envelope phase and timbre) served as factors. To quantify the relative strength with which each of the three stimulus dimensions influenced neural firing, we calculated the proportion of variance explained by each. A neuron was said to be sensitive to a given parameter if it explained a significant amount of the response variance and the variance explained was >5% of the total. We recorded 735 driven units in anesthetised auditory cortex. Of these, 26% were sensitive to the phase of the auditory envelope and 34% encoded changes in timbre. 19% were sensitive to the phase of the visual stimulus. Preliminary analysis of the awake data (35 units) showed a similar pattern of results, with 25% of units being sensitive to the phase of the visual stimulus. We are currently exploring neural responses to two concurrently presented streams to determine whether temporally coherent visual stimuli can influence the way in which competing sources are represented in auditory cortex.
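The sensitivity criterion described above (a significant factor that also explains more than 5% of the response variance) can be sketched with a 3-way ANOVA and eta-squared. The factor codings and spike counts below are synthetic stand-ins, not the recorded data.

```python
# 3-way ANOVA on binned spike counts with variance-explained (eta squared) per factor.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(7)
n = 800
df = pd.DataFrame({
    "aud_phase": rng.integers(0, 4, n),     # auditory envelope phase bin
    "vis_phase": rng.integers(0, 4, n),     # visual envelope phase bin
    "timbre": rng.integers(0, 2, n),        # vowel /u/ vs /a/
})
df["spikes"] = rng.poisson(2 + 0.8 * df["timbre"] + 0.3 * np.sin(df["aud_phase"]))

model = ols("spikes ~ C(aud_phase) + C(vis_phase) + C(timbre)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)
eta_sq = table["sum_sq"] / table["sum_sq"].sum()

# Sensitive = significant effect AND more than 5% of total response variance
for factor in ["C(aud_phase)", "C(vis_phase)", "C(timbre)"]:
    sensitive = table.loc[factor, "PR(>F)"] < 0.05 and eta_sq[factor] > 0.05
    print(f"{factor}: eta^2={eta_sq[factor]:.3f}, sensitive={sensitive}")
```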
ID: 248
PV neurons in auditory cortex show stimulus-specific adaptation and may enhance deviance detection Tohar Sion Yarden1, Ido Maor1, Johannes Niediek1,2, Ashlan Reid3, Sang Geol Koh3, Adi Mizrahi1, Israel Nelken1 1Hebrew University of Jerusalem, Israel; 2University of Bonn, Germany; 3Cold Spring Harbor Laboratory, United States of America [email protected] Neurons in primary auditory cortex exhibit stimulus-specific adaptation (SSA), which is the decrease in responses to a common stimulus that does not generalize fully to other, rare stimuli. Interestingly, the responses to the rare stimulus within this context are larger than expected based on its rarity, giving rise to true deviance detection. Here we use in vivo two-photon
targeted loose-patch recordings in mice to investigate the role of inhibition in shaping cortical SSA. We show that parvalbumin-positive (PV) neurons, which form the major inhibitory population in the cortex, exhibit SSA but that this SSA is not associated with true deviance detection. We argue that the difference between the excitatory and inhibitory neurons can actually enhance deviance detection in the excitatory population. Using extracellular recordings and optogenetic inactivation of PV neurons, we found that the SSA in PV neurons acts to weaken the SSA in the excitatory population. We propose that SSA is initially generated in auditory cortex to a large extent by excitatory interactions, with possible contribution of other inhibitory populations. The lack of true deviance detection in PV neurons may indicate that they are dominated by feed-forward input. Our finding of stronger SSA when the cortex is released from inhibition by PV neurons suggests that true deviance detection is under active control, and can be modulated by behavioral states, possibly by modifying levels of neuromodulation. ID: 249
Co-modulation as a means of enhancing signal detection and object formation in mouse primary auditory cortex Joseph A. Sollini, Alexander Morris, Paul Chadderton Imperial College London, United Kingdom [email protected] In the auditory world, salient signals commonly occur within complex fluctuating soundscapes. A key function of the auditory system is to appropriately group and segregate temporally and spectrally overlapping signals into perceptually distinct objects. The auditory system is excellent at grouping signals into separate objects and can do so using a small number of cues (Bregman, 1994), but the neural mechanisms that underlie these processes are poorly understood. One phenomenon that uses such grouping processes is co-modulation masking release (CMR; Hall et al., 1984; Verhey et al., 2012), whereby coherent amplitude modulation of sound across many frequencies (e.g. a broadband sound) increases the detectability
of a concurrent signal (e.g. a pure tone). Here we investigated the influence of such cues on the activity of neuronal populations in primary auditory cortex (A1) during the simultaneous presentation of two sounds: 1) a long-duration modulated sound (varying in bandwidth and duration) and 2) brief tone pips. Evoked activity was recorded using multi-site extracellular recordings, and whole-cell patch clamp recordings, in A1 of anaesthetised mice (NMRI, 5-10 weeks). Firing rate distributions were bootstrapped to elucidate significant stimulus-evoked changes (p<0.05). Increasing the bandwidth of the first signal enhanced the evoked response to the second signal at low signal-to-noise ratios, equivalent to reducing the detection threshold in traditional CMR. Additionally, when the onset of the first signal preceded the onset of the second by >500 ms, evoked responses to the pure tone signal were further enhanced. Two classes of evoked response were observed following tone presentation: (i) increased firing at tone onset, or (ii) increased firing at the offset of both signals. These response classes were mediated by distinct non-overlapping neuronal subpopulations (PCA on stimulus-evoked PSTHs and agglomerative clustering). Overall, increasing the duration and bandwidth of the first signal reduced thresholds for pure tones by ~15 dB, demonstrating that both ongoing temporal coherence across frequency bands, and recent stimulus history, influence the detection and segregation of auditory signals. Thus, when using cues of object formation, A1 is able to encode both signals as separately represented objects, even at low signal levels. This mechanism could be used in the selective perception of low-level signals in noisy environments.
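A bootstrap test for stimulus-evoked changes of this kind can be sketched by resampling baseline spike counts to build a null distribution for the evoked mean. All rates and bin counts below are made up for illustration.

```python
# Bootstrap significance test: is the evoked firing rate outside the baseline null?
import numpy as np

rng = np.random.default_rng(8)
baseline = rng.poisson(4, 200)          # spike counts in pre-stimulus bins
evoked = rng.poisson(6, 50)             # spike counts in post-tone bins

n_boot = 10000
null = np.array([
    rng.choice(baseline, size=evoked.size, replace=True).mean()
    for _ in range(n_boot)
])

# Two-sided bootstrap p-value for the observed evoked mean
obs = evoked.mean()
p = 2 * min((null >= obs).mean(), (null <= obs).mean())
print(f"evoked mean {obs:.2f} vs baseline {baseline.mean():.2f}, p={max(p, 1 / n_boot):.4g}")
```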
ID: 250
Genetic trapping of neuronal activity in the mouse auditory cortex Gen-ichi Tasaka1, Amos Shalev1, Liqun Luo2, Adi Mizrahi1 1Hebrew University of Jerusalem, Israel; 2Stanford University, United States of America [email protected] Dissecting how neural circuits are functionally organized to encode natural sounds and drive behavior is still a major challenge in audition.
One central brain region thought to encode natural stimuli is the primary auditory cortex (A1), but little is known about how its underlying circuitry accomplishes this task. Neurons in mouse A1 show highly heterogeneous response profiles to simple sounds, let alone to natural sounds. In fact, it is difficult to predict how single cortical neurons respond to natural sounds from their response profile to pure tones. Here, we set out to test a new method to access neurons in the auditory cortex based on their functional activity. Specifically, we tested the potential of a recently published method called Targeted Recombination in Active Populations (TRAP) (Guenthner et al., Neuron, 2013). TRAP uses mouse genetics in combination with viral technology to enable permanent genetic access to neurons that are transiently activated. TRAP utilizes the regulation of the immediate early gene c-Fos to transcribe CreER transiently when neurons are activated. When this mouse is crossed to a floxed-reporter mouse and injected with tamoxifen, neurons that express c-Fos during the tamoxifen-active period undergo irreversible Cre/LoxP-dependent recombination, and are therefore permanently marked by the transgene of the reporter. TRAP will potentially enable us to record, manipulate, and trace connections of neuronal populations defined functionally by their stimulus-response properties. Here, we report our progress on optimizing the parameters of this strategy by combining TRAP with in vivo physiology and a novel reporter mouse (utilizing tTA2 as a handle for further manipulations). First, we calibrated the genetic system to achieve very low non-specific labeling under basal conditions (i.e. when no sound stimulation was used for trapping). Then, we trapped neurons by stimulating with pure tones (6, 12 and 24 kHz) or natural sounds (wriggling calls). Our results show clear increases in the number of trapped neurons when sounds are played, particularly in the 24 kHz and natural sound experiments. In order to verify whether labeled neurons are truly more responsive to the presented sound stimuli than non-labeled neurons, we are currently performing two-photon targeted patch recordings of the labeled neurons. This approach will rigorously evaluate the
functional specificity of TRAP and will be used as a benchmark for future experiments. ID: 251
Cortical plasticity following perceptual learning Ido Maor, Adi Mizrahi Hebrew University of Jerusalem, Israel [email protected] Perceptual learning is a cognitive phenomenon whereby perceptual capabilities improve with training. The neural substrate of perceptual learning is not well understood but probably involves multiple brain regions, one of which is the neocortex. Our work aims to understand how auditory information is learned by the animal and how it is encoded in the primary auditory cortex (A1). Our animal model is the mouse, and we focus on studying how different subpopulations of neurons in A1 (e.g. inhibitory vs. excitatory neurons) encode auditory stimuli following learning. First, to study perceptual learning in mice, we developed a fully automated assay in a learning chamber that we named "the Educage". The Educage is designed to train groups of mice (up to 6 mice simultaneously) on a two-tone 'go/no-go' discrimination task. Once the procedure is learned, task difficulty is gradually increased by decreasing the difference between the two tones. Using this procedure, mice learned the task rapidly and became experts. Furthermore, perceptual limits were reached within several thousand trials (taking roughly 14 days in the Educage). Second, to study the physiological correlates of learning in A1, we used in vivo two-photon targeted patch clamp (loose patch) to assess basic response properties of inhibitory and excitatory neurons of layer 2/3. As inhibitory neurons, we targeted parvalbumin-positive (PV+) interneurons, the largest inhibitory subpopulation of the cortex. We used transgenic mice expressing TdTomato in PV neurons and used unlabeled neurons as controls (PV−). We compared the frequency receptive fields and other response properties (e.g. response latency, spontaneous and evoked firing rate) of PV+ and PV− neurons in A1 of expert and naïve mice. Our preliminary analysis shows novel changes in the response profiles of layer 2/3 neurons induced by learning. Specifically, learning induced changes
in tuning properties (shifts toward the learned frequencies) as well as changes in response properties of PV+ neurons (decreased latency and increased evoked firing). Our data reveal that specific modifications in inhibitory circuits within layer 2/3 of A1 may contribute to auditory perceptual learning. ID: 252
Behavioral and electrophysiological assessment of amplitude modulation depth detection in the environmental noise-exposed rat Florian Occelli, Jean-Marc Edeline, Boris Gourevitch UMR CNRS 8195, France jean-marc.edeline@u-psud.fr It is well known that noise-induced hearing loss as well as aging impair hearing performance, and in particular speech intelligibility related to temporal envelope processing. The ability to follow the temporal envelope has classically been studied in humans through the psychophysical temporal modulation transfer function, measured as the minimal modulation depth detectable at several modulation frequencies. Modulation depth quantifies the amplitude excursion of the envelope, with 0% being unmodulated noise and 100% being a carrier whose envelope varies fully between 0 and 1. With such a function, it is possible to examine in humans the effects of intensity resolution (modulation depth) and temporal resolution independently (Strickland and Viemeister, 1997). Recently, it has been shown that long-term exposure to non-traumatic environmental noise (<85 dB) may have effects on spectral and temporal processing in the auditory cortex (Norena et al, 2006; Zhou et al, 2012; Zheng, 2012). The effects of such exposure on modulation depth processing remain largely unknown, although a reduced intensity resolution, i.e. a reduction of the dynamic range of auditory neurons, may account for potential intelligibility issues in general. In this study, we designed a behavioral task involving detection of amplitude modulation depth, and we recorded multiunit activity and LFPs in the rat auditory cortex in response to amplitude-modulated white noise. Rats were exposed for 3 to 12 months to structured (industrial) non-traumatic noise at 80 dB SPL.
Our results indicate that the behavioral effects of long-term environmental noise exposure are mostly visible during the first three months of exposure, and that neurometric curves for amplitude modulation depth detection do not necessarily match psychometric curves.
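For concreteness, a sinusoidally amplitude-modulated noise with depth m can be generated as below; with this peak-normalized envelope, m = 0 gives unmodulated noise and m = 1 an envelope swinging fully between 0 and 1, matching the definition above. All parameter values are illustrative.

```python
# Generate amplitude-modulated white noise with a given modulation depth.
import numpy as np

def am_noise(duration, fs, fm, depth, rng):
    """White noise amplitude-modulated at fm Hz with the given depth (0..1)."""
    t = np.arange(0, duration, 1 / fs)
    carrier = rng.normal(0, 1, t.size)
    envelope = (1 + depth * np.sin(2 * np.pi * fm * t)) / (1 + depth)  # peak-normalized
    return envelope * carrier

rng = np.random.default_rng(9)
stim = am_noise(duration=1.0, fs=44100, fm=8.0, depth=0.3, rng=rng)  # 8 Hz AM, 30% depth
print(stim.shape)
```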
ID: 253
Auditory motion processing relies on specific mechanisms Colline Poirier, Simon Baumann, Olivier Joly, David Hunter, Fabien Balezeau, Li Sun, Adrian Rees, Christopher Petkov, Alexander Thiele, Timothy Griffiths Newcastle University, United Kingdom [email protected] The nature of the mechanisms underlying auditory motion perception has been debated for more than 30 years. The 'snapshot hypothesis' postulates that motion is inferred from snapshots of an object's successive positions, without direct appreciation of velocity. According to this hypothesis, auditory motion perception is based on the same mechanisms as those involved in the localization of static sound sources. The alternative hypothesis, usually referred to as the 'motion detector hypothesis' or 'velocity detector hypothesis', considers that motion perception is based on specific mechanisms. To disentangle these two hypotheses, we measured the fMRI BOLD response to auditory motion and various control stimuli in the whole auditory cortex of awake macaques. Virtual-acoustic space stimuli were created for fMRI by recording the pressure waveform within the ear canals of each individual during the presentation of sinusoidally amplitude-modulated broadband noise coming from different spatial locations in azimuth (-80 to +80 degrees). Motion stimuli were sounds virtually moving within each hemispace, back and forth between the positions 0 and 80 degrees. Control stimuli were stationary sounds coming from one single location (0, 40 or 80 degrees) and spectro-temporal controls generated by taking the mean of the waveforms of a motion stimulus at each ear and presenting the stimulus diotically. Activations were localized on the superior temporal plane using tonotopy experiments. We found that the posterior auditory cortex, including A1 and the
surrounding caudal belt and parabelt, is involved in auditory motion analysis. The linear combination of static spatial mechanisms, spectro-temporal processes and their interaction was able to fully explain motion-induced activation in most parts of the auditory cortex, including A1. However, in circumscribed regions of the posterior belt and parabelt cortex, part of the signal was left unexplained. We show that the remaining part of the signal in these regions cannot be explained by adaptation mechanisms and is due to a motion-specific process. These results provide the first demonstration that auditory motion is not simply deduced from spatial location changes but relies on specific mechanisms. ID: 254
Microsaccades indicate fast sound identification and categorization Andreas Widmann1, Ralf Engbert2, Erich Schröger1 1University of Leipzig, Germany; 2University of Potsdam, Germany widmann@uni-leipzig.de The mental chronometry of the human brain's processing of sounds to be categorized as targets has been intensively studied in cognitive neuroscience. According to current theories, a series of successive stages consisting of the registration, identification, and categorization of the sound has to be completed before participants are able to report the sound as being a target by button press after about 300-500 ms. Here we use miniature eye movements as a tool to study the categorization of a sound as a target, showing that this categorization is already completed after 80-100 ms. During visual fixation, the rate of microsaccades, the fastest components of miniature eye movements, is transiently modulated after auditory stimulation. In two experiments, we measured microsaccade rates in an auditory three-tone oddball paradigm (including rare non-target sounds) and observed a difference in the microsaccade rates between targets and non-targets as early as 142 ms after sound onset. This finding was replicated in a third experiment with directed saccades measured in a paradigm in which tones had to be matched to score-like visual symbols.
Considering the delays introduced by (motor) signal transmission and data analysis constraints, the brain must have differentiated target from non-target sounds as early as 80-100 ms after sound onset in both paradigms. We suggest that predictive information processing for expected input makes higher cognitive attributes, such as a sound's identity and category, available already during early sensory processing. The measurement of eye movements is thus a promising approach to the investigation of hearing.
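Microsaccades are commonly detected with the velocity-threshold algorithm of Engbert & Kliegl (2003): smoothed eye velocities are compared against an elliptic threshold scaled from median-based velocity SDs. The sketch below applies this idea to a synthetic gaze trace; the sampling rate and threshold multiplier are assumed values, and real pipelines add duration and binocularity criteria.

```python
# Velocity-threshold microsaccade detection on a synthetic gaze trace.
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0):
    # 5-point moving-window velocity estimate (deg/s)
    vx = (x[4:] + x[3:-1] - x[1:-3] - x[:-4]) * fs / 6.0
    vy = (y[4:] + y[3:-1] - y[1:-3] - y[:-4]) * fs / 6.0
    # Median-based SD per axis, elliptic threshold at lam * SD
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    return (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0

rng = np.random.default_rng(10)
fs = 500.0
x = np.cumsum(rng.normal(0, 0.002, 2000))     # slow drift (deg)
y = np.cumsum(rng.normal(0, 0.002, 2000))
x[800:810] += np.linspace(0, 0.3, 10)         # inject a small saccade-like step
flags = detect_microsaccades(x, y, fs)
print(f"{flags.sum()} above-threshold samples")
```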
ID: 255
A new and fast characterization of multiple encoding properties of auditory neurons Boris Gourévitch, Florian Occelli, Quentin Gaucher, Yonane Aushana, Jean-Marc Edeline CNPS, UMR8195 CNRS, Université Paris-Sud, France [email protected] The functional properties of auditory cortex neurons are most often investigated separately, through spectrotemporal receptive fields for frequency tuning and frequency sweep sounds for selectivity to velocity and direction. In fact, auditory neurons are sensitive to a multidimensional space of acoustic parameters in which spectral, temporal and spatial dimensions interact. We designed a multi-parameter stimulus, the Random Double Sweep (RDS), composed of two uncorrelated random sweeps, which gives easy, fast and simultaneous access to frequency tuning as well as FM sweep direction and velocity selectivity, frequency interactions, and temporal properties of neurons. Reverse correlation techniques applied to recordings from the primary auditory cortex of guinea pigs and rats in response to RDS stimulation revealed the variety of temporal dynamics of acoustic patterns evoking an enhanced or suppressed firing rate. Group results for these two species revealed less frequent suppression areas in frequency tuning STRFs, an absence of downward sweep selectivity, and lower phase-locking abilities in the auditory cortex of rats compared to guinea pigs.
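Reverse correlation with such a stimulus reduces, in its simplest form, to a spike-triggered average of the stimulus time-frequency representation in a window preceding each spike. A minimal sketch on a synthetic random sweep and a synthetic neuron; all parameters, and the single-sweep simplification, are illustrative assumptions rather than the RDS analysis itself.

```python
# Spike-triggered average of a random sweep's time-frequency track.
import numpy as np

rng = np.random.default_rng(11)
n_t, n_freq, n_lag = 20000, 40, 25
# Binary time-frequency track of one random sweep (one active bin per frame)
track = np.zeros((n_t, n_freq))
f = np.clip(np.cumsum(rng.integers(-1, 2, n_t)) + n_freq // 2, 0, n_freq - 1)
track[np.arange(n_t), f] = 1.0

# Synthetic neuron: spikes when the sweep passed its preferred bin 10 frames earlier
spikes = np.zeros(n_t, dtype=bool)
spikes[10:] = (f[:-10] == 20) & (rng.random(n_t - 10) < 0.5)

# Average the track over the n_lag frames preceding each spike
sta = np.zeros((n_lag, n_freq))
for t in np.flatnonzero(spikes):
    if t >= n_lag:
        sta += track[t - n_lag:t]
sta /= max(spikes.sum(), 1)

r, fbin = np.unravel_index(sta.argmax(), sta.shape)
print(f"STRF peak: lag {n_lag - r} frames before spike, frequency bin {fbin}")
```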
ID: 256
Effect of the efferent effect on cochlear gain and the cortical response Ifat Yasin, Ziomarie Jimenez, Vit Drga University College London, United Kingdom [email protected] Behavioural and non-human physiological evidence suggests that the amount of gain applied to the basilar membrane may change during the course of acoustic stimulation due to efferent activation of the cochlea (Liberman, 1996). A psychoacoustical method which can be used to infer the human cochlear response is the Fixed Duration Masking Curve (FDMC) (Yasin et al., 2013, 2014). This method can be used to estimate the effect of efferent activation on human cochlear gain and compression by presentation of a precursor sound prior to presentation of the FDMC masker-signal stimulus. The FDMC technique ensures that the effect of the precursor on gain can be measured independently of any masking effects of the precursor, which has been a confound in previous behavioural studies. The present study investigated the effect of an ipsilateral and a contralateral precursor on estimates of cochlear gain and compression (psychoacoustics) and on the cortical response (electroencephalography; EEG). In the psychoacoustical component of the study, FDMCs for a signal frequency of 3 kHz were obtained from twelve listeners with and without an ipsilateral or contralateral 3-kHz precursor. The precursor had a total duration of 160 ms and was presented at a level of 75 dB SPL, with the silent interval between precursor and masker fixed at 300 ms (these parameters were chosen such that the efferent effect would be observable using either an ipsilateral or contralateral precursor and either mode of recording, i.e. subjective response or cortical EEG). For the EEG component of the study, a three-electrode array was used to measure the N1-P2 response at electrode Fz. For the EEG recording, the maskers were presented after an ipsilateral or contralateral precursor with the same masker and precursor parameters used in the psychoacoustical study. Preliminary analyses show that there is a greater efferent effect with an ipsilateral compared to a contralateral precursor in both
the psychoacoustical and EEG components of the study. Overall, there is a correspondence between the psychoacoustical and cortical EEG results for the size of gain reduction achieved with an ipsilateral precursor. Efferent effects of cochlear gain reduction can thus be observed as a reduction of the N1-P2 potential at the cortical level. ID: 257
Changes in resting-state oscillatory power in anaesthetised guinea pigs with behaviourally-tested tinnitus following unilateral noise trauma Victoria Kowalkowski1,2, Ben Coomber1, Mark N. Wallace1, Katrin Krumbholz1 1MRC Institute of Hearing Research Nottingham, United Kingdom; 2University of Nottingham, United Kingdom [email protected] Chronic tinnitus affects around 10% of the population, can be highly bothersome, and is often associated with noise trauma or hearing loss. Two theories for how the tinnitus percept is generated are the 'thalamo-cortical dysrhythmia' and 'cortical-reorganisation' models. In the former, deafferentation as a result of noise exposure leads to a slowing of spontaneous oscillatory activity between auditory thalamus and cortex, as well as increased spontaneous high-frequency activity in the auditory cortex thought to underlie the tinnitus percept. The latter model posits that tinnitus arises as a result of inhomogeneous hearing loss, which deprives some areas of the hearing range of input but leaves others intact, creating an imbalance of inhibitory and excitatory inputs to the intact areas that leads to spontaneous hyperactivity and thus tinnitus. In this study, we measured spontaneous oscillatory activity from the auditory cortical surface of 12 anaesthetised guinea pigs. Six animals were unexposed controls, and six were unilaterally exposed to loud noise to induce tinnitus and then allowed to recover for 8-10 weeks. The presence of tinnitus was tested with pre-pulse inhibition of the Preyer startle reflex, and ABR measurements were used to show that the exposed animals' hearing thresholds had recovered to within 20 dB of their baseline. Consistent with the 'thalamo-cortical
dysrhythmia' model, differences in the frequency composition of spontaneous activity were observed between groups, with a relative increase in mid-frequency activity in the exposed animals. More strikingly, there was a highly significant group-by-hemisphere interaction, in that noise exposure led to a reduction in spontaneous activity in the contralateral hemisphere and an increase in the ipsilateral hemisphere. This effect accords with the 'cortical reorganisation' model, because the contralateral hemisphere receives predominant input from the exposed ear, whereas the ipsilateral hemisphere receives predominant input from the intact ear. Future work aims to investigate changes in resting-state activity in humans with tinnitus and hearing loss using electroencephalography. ID: 258
Activation patterns in non-primary auditory cortex using complex acoustic stimuli and high field fMRI Amee J. Hall1, Stephen G. Lomber1,2,3 1University of Western Ontario, London, Canada; 2Robarts Research Institute, Canada; 3Brain and Mind Institute, Canada [email protected] There is an abundance of anatomical and electrophysiological investigations of the auditory cortex of the cat. The anatomical connections of all acoustically responsive cortical areas have been published in detail. However, the electrophysiology of many of these areas remains largely unexplored, in large part because of the location of areas ventral to primary auditory cortex (A1). Less invasive techniques, such as functional magnetic resonance imaging (fMRI), provide a way to investigate such areas. Using fMRI, pure tones have been used to demarcate core auditory cortex from other surrounding areas in cats. In the present investigation, ten more complex auditory stimuli were used to investigate neuronal responses beyond core auditory cortex in five adult cats: four one-octave narrow-band noise (NBN) stimuli centered at 1 kHz, 10 kHz, 17 kHz, or 20 kHz; two sweep stimuli, one ascending and the other descending, spanning 0.1 kHz to 32 kHz; two conspecific vocalizations; and two harmonic
stimuli with a fundamental frequency of 1 kHz or 0.75 kHz. Overall, all stimuli resulted in activation in primary auditory cortex. However, for each stimulus, the largest activations occurred outside of A1. Activations in response to NBN stimuli were largely located along the posterior ectosylvian sulcus (pes), and a tonotopic pattern could be identified. The location and tonotopy observed indicate that these activations were within the posterior auditory field. Activations in response to sweeps were also largely contained within the pes, but were more ventral than those generated by NBN stimuli, corresponding to the ventral posterior auditory field or ventral auditory field. Vocalization stimuli generated activations along the middle ectosylvian gyrus, ventral to A1, corresponding to the second auditory and temporal areas. Three of the animals also had activations along the anterior ectosylvian sulcus, anterior to A1, corresponding to the auditory field of the anterior ectosylvian sulcus. Similar activations were observed using the harmonic stimuli. Taken together, these results suggest that the more complex an acoustic stimulus, the more ventrally the activation occurs in auditory cortex. ID: 259
Prolonged low-level manipulation of parvalbumin-positive interneuron activity alters neural dynamics in awake auditory cortex K. Jannis Hildebrandt1, Pedro J. Gonçalves2, Maneesh Sahani2, Jennifer F. Linden2 1Cluster of Excellence "Hearing4all", Department for Neuroscience, University of Oldenburg, Germany; 2University College London, United Kingdom jannis.hildebrandt@uni-oldenburg.de Alterations of cortical inhibition have been proposed to play a crucial role in modulation of cortical activity during attention or learning. The effects of alterations in the activity of different functional groups of interneurons have recently become accessible to investigation through the use of optogenetic tools. Typically, a specific class of interneurons is excited or inhibited during application of light to the cortical tissue. One limitation of such experiments is that the timing of optogenetic stimulation relative to sensory
stimulation becomes an important factor, and the pattern of supra-threshold activation of inhibitory neurons may not be physiologically accurate. Here, we circumvent these limitations by using the stabilized step-function opsin (SSFO), an ion channel that can be rendered continuously active with a short pulse of light at one wavelength and later switched off with a pulse of light at another wavelength. We expressed SSFO in parvalbumin-positive (PV+) interneurons in the primary auditory cortex of mice, and recorded both local field potentials (LFP) and spiking responses to tone pips of varying frequency in awake animals. By using SSFO, we were able to examine the effects of prolonged low-level (likely sub-threshold) activation of PV+ cells on cortical network dynamics. Surprisingly, we found that in the majority of recordings, prolonged low-level activation of PV+ inhibition with SSFO enhanced rather than reduced cortical spiking evoked by tone pips, and had either no or only small effects on the tuning of spiking responses. Moreover, while changes in the amplitude and tuning of tone-evoked spiking varied between recording sites, we observed consistent effects of SSFO activation on power in different frequency bands of the LFP, during both silence and sound presentation. Specifically, SSFO activation increased power in the low-frequency range of the LFP (<50 Hz) and decreased power in the high-frequency range (50-150 Hz, high gamma). These effects were even more pronounced during sound presentation, particularly in the high-gamma band: while 50-150 Hz LFP power increased during tone presentations in the control condition, LFP power in the same band following SSFO activation was decreased during sound stimulation relative to silence. In summary, our experiments show that prolonged low-level activation of PV+ cells with SSFO causes profound changes in the power spectrum of the LFP in awake auditory cortex, especially in the 50-150 Hz high-gamma range. This finding is especially interesting because high-gamma activity has been linked to perceptual awareness and attentional modulation of cortical activity. Our results therefore support a role for PV+ interneurons in these processes.
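The band-power summary used here (LFP power below 50 Hz versus 50-150 Hz high gamma, compared between control and SSFO-activated epochs) can be sketched with Welch spectra. The epochs below are synthetic noise; the sampling rate and segment length are assumptions.

```python
# Welch band power in a low (<50 Hz) and a high-gamma (50-150 Hz) band.
import numpy as np
from scipy.signal import welch

fs = 1000.0
rng = np.random.default_rng(12)

def band_power(sig, lo, hi):
    f, pxx = welch(sig, fs=fs, nperseg=1024)
    sel = (f >= lo) & (f < hi)
    return np.trapz(pxx[sel], f[sel])

control = rng.normal(0, 1, 30000)
# Stand-in "SSFO" epoch with extra low-frequency power
ssfo = rng.normal(0, 1, 30000) + 0.5 * np.sin(2 * np.pi * 10 * np.arange(30000) / fs)

for name, sig in [("control", control), ("SSFO", ssfo)]:
    print(f"{name}: <50 Hz {band_power(sig, 1, 50):.3f}, "
          f"50-150 Hz {band_power(sig, 50, 150):.3f}")
```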
ID: 260
Long-lasting and spatially restricted plasticity in adult inferior colliculus following exposure to a behaviourally relevant tone for several days Hugo Cruces1,2, Livia de Hoz1 1Max Planck Institute for Experimental Medicine Goettingen, Germany; 2International Max Planck Research School for Neurosciences, Germany [email protected] To test whether auditory experience alone affects the processing of auditory information at the sub-cortical level, we investigated the plastic changes elicited in the inferior colliculus (IC) of mice that were exposed to an unconditioned but behaviorally relevant tone for several days. Five- to six-week-old female C57BL/6J mice were kept in an Audiobox (TSE), where continuous monitoring of individual mouse behavior is possible by means of a transponder inserted into each mouse. Water was available in a specialized corner after a nose-poke. Visits to the corner were accompanied by the presentation of a fixed tone. We recorded IC neuronal activity acutely in mice that had been exposed to this tone in every visit for at least 1 and up to 11 days. The control group was kept in a separate Audiobox and did not hear any tone during visits to the corner. We characterized auditory responses along the tonotopic axis of the IC in these two groups. We found that the group that had heard a tone during each corner visit, even for as long as 11 days, showed an increase in evoked multiunit activity. This increase was largest in an area between 250 and 450 micrometers in depth, independently of the frequency used during the exposure (8 or 16 kHz). Inactivating the auditory cortex with muscimol during acute recordings led to a small increase in the magnitude of the responses that was equal in both groups. We also observed a clear and spatially widespread shift in best frequency (BF) towards higher frequencies. This shift was larger when 16 kHz was the exposure frequency. Sound exposure alone thus induces sustained changes in the adult IC. The changes take the form of an increase in response magnitude and a shift in BF in a specific IC area, probably the dorsal central nucleus. The expression of this change is independent of cortical activity.
This change might contribute to the formation of a neural substrate for auditory expectations.
ID: 261
Stimulus specific adaptation and deviance detection in inferior colliculus and auditory cortex Leila Khouri, Bshara Awwad, Itai Hershenhoren, Israel Nelken Hebrew University of Jerusalem, Israel [email protected] Stimulus specific adaptation (SSA) is the reduction in the neuronal response to a repeated sound that is not, or is only partially, generalized to a rare sound. SSA thus differentiates rare from common sounds and potentially represents sensitivity to sound statistics in the neuronal response. Along the nonlemniscal pathway, SSA is found as early as the inferior colliculus (IC); along the lemniscal pathway, however, SSA first appears to a substantial degree in primary auditory cortex (A1) at slow presentation rates. A possible mechanism for SSA to tone frequency is adaptation of narrowly tuned inputs that are integrated by a single output neuron. We compared the responses predicted by this model to responses recorded extracellularly from the IC, and intracellularly and extracellularly from the auditory cortex, of halothane-anesthetized rats. The feedforward model predicts responses to the rare tone (deviant) to be smaller than responses to the same tone occurring with equal probability among many other tones (deviant among many standards). Furthermore, the model cannot generate SSA to broadband sounds that on average activate all channels to an equal extent. In the IC, the measured responses largely fulfill the predictions of the feedforward model: responses to deviants in oddball sequences are smaller than responses to deviants among many standards, and wideband stimuli do not evoke SSA. In contrast, in auditory cortex, the predictions of the feedforward model fail: responses to deviants tend to be larger than predicted, and are essentially equivalent to responses to deviants among many standards. Furthermore, there is highly significant SSA to broadband stimuli in A1. Thus, there seem to be different mechanisms underlying SSA in IC and in A1.
While SSA in the IC is compatible with feedforward adaptation and does not emphasize rarity, A1 neurons do not conform to adaptation in narrow frequency channels and, in particular, seem to amplify the responses to the rare sound, whether it is a pure tone or a broadband stimulus.
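The feedforward prediction above can be illustrated with a toy simulation: narrowly tuned channels with independent fatigue, summed by one output unit. All parameters, tuning widths, and sequence statistics below are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def probe_response(seq, probe, n_chan=40, bw=0.5, gain=0.3, tau=5.0):
    """Feedforward adaptation model: channels with Gaussian tuning over
    log-frequency (width `bw` octaves) each keep a fatigue state that grows
    with their own activation and recovers with time constant `tau` (trials).
    Returns the mean summed response to `probe` tones within `seq`."""
    centers = np.linspace(-2.0, 5.0, n_chan)        # channel CFs (octaves)
    fatigue = np.zeros(n_chan)
    responses = []
    for tone in seq:
        drive = np.exp(-0.5 * ((tone - centers) / bw) ** 2)
        if tone == probe:
            responses.append((drive * (1.0 - fatigue)).sum())
        fatigue += gain * drive * (1.0 - fatigue)   # adapt active channels
        fatigue *= np.exp(-1.0 / tau)               # recover between tones
    return np.mean(responses)

n = 2000
oddball  = rng.choice([0.0, 0.5], size=n, p=[0.1, 0.9])   # rare deviant, common standard
many_std = rng.choice(np.arange(10) * 0.5, size=n)        # same tone among equiprobable tones
print(probe_response(oddball, 0.0))    # smaller: the frequent standard cross-adapts nearby channels
print(probe_response(many_std, 0.0))   # larger: adaptation is spread across the whole array
```

ID: 262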
The role of sensitivity to temporal regularity in auditory scene analysis Lucie Aman, Lefkothea Andreou, Maria Chait University College London, United Kingdom [email protected] The notion that sensitivity to temporal regularity plays a pivotal role in auditory scene analysis has recently garnered considerable attention. Nevertheless, evidence supporting a primary role for temporal regularity is based on experiments employing simple stimuli consisting of one or two concurrent sound sequences. Whether the role of sensitivity to temporal regularity in mediating auditory scene analysis is robust to more complex listening environments is unknown. The present study investigates sensitivity to temporal regularity in the context of a change detection task, employing complex acoustic scenes comprising up to 14 concurrent auditory objects. Sequences of sounds produced by each object were either temporally regular (REG) or irregular (RAND). We studied both simple instances of temporal regularity (isochronous sequences) and patterns consisting of complex regular rhythms. Listeners had to detect occasional changes (appearances or disappearances of an object) within these 'soundscapes'. Listeners' performance depended on the temporal regularity of both the changing object and the scene context (the other objects in the scene), such that RAND contexts were associated with slower response times and substantially reduced detection performance. Therefore, even in complex scenes, sensitivity to temporal regularity is critical to our ability to analyse and detect changes in a dynamic soundscape. Importantly, the data reveal that listeners are able to automatically acquire the temporal patterning associated with at least 14 concurrently presented auditory objects.
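A minimal sketch of the REG/RAND stimulus logic: per-object onset sequences that are either isochronous or jittered. Rates, durations, and the jitter range are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def onset_times(rate_hz, dur_s, jitter=0.0):
    """Onset times for one auditory object: isochronous when jitter=0 (REG);
    each inter-onset interval perturbed by up to +/- jitter * IOI (RAND)."""
    ioi = 1.0 / rate_hz
    n = int(dur_s * rate_hz)
    iois = ioi * (1.0 + jitter * rng.uniform(-1.0, 1.0, n))
    return np.cumsum(iois)

# A toy scene: several concurrent objects, each with its own rate
scene_reg  = [onset_times(r, 6.0, jitter=0.0) for r in (2.0, 3.0, 5.0)]  # REG context
scene_rand = [onset_times(r, 6.0, jitter=0.5) for r in (2.0, 3.0, 5.0)]  # RAND context
```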
ID: 263
Mapping tonotopy in the deprived primary auditory cortex Tomasz Wolak1, Katarzyna Joanna Ciesla1, Monika Lewandowska1, Mateusz Rusiniak1, Agnieszka Pluta1, Piotr Skarzynski2, Artur Lorens1, Henryk Skarzynski1 1Institute of Physiology and Pathology of Hearing Warsaw, Poland; 2The Institute of Sensory Organs Kajetany, Poland [email protected] FMRI has been the preferred technique to explore tonotopic organization of the primary auditory cortex (PAC) in healthy individuals [1]. Clinical populations, however, have been rather neglected. In the present study, a paradigm adapted from [2] was employed to investigate neuroplastic changes in PAC in patients with partial deafness (profound high-frequency hearing impairment) and chronic subjective tinnitus. 20 patients with bilateral symmetrical partial deafness (12 F, 8 M; 35 years ± 4 months) and 20 patients with tinnitus and normal hearing (14 M, 6 F; 41 years ± 7 months) participated in an auditory fMRI experiment. A matched healthy group served as control. Two- and three-tone stimuli with 0.4, 0.8, 1.6, 3.2, and 6.4 kHz centre frequencies were presented binaurally via MRI-compatible headphones at 80 dB SPL(C). In tinnitus patients, the specific frequency of the tinnitus was also used. There were 8 presentations of each sequence type and of silence in a single run (3 runs in total). All sounds were presented in silent gaps between 2 s data acquisition periods. The studies were performed on a Siemens 3T MAGNETOM Trio scanner, and the imaging parameters were: TR = 10 s, TE = 30 ms, 31 slices, voxel size 2 × 2 × 2 mm. SPM12b and FreeSurfer software were applied to investigate brain responses. Individual and group SPM t-maps, as well as brain flat-maps, were produced. All patients performed tests assessing their psychological performance and quality of life. In healthy individuals, the fMRI paradigm revealed consistent V-shaped high-low-high frequency gradients across Heschl's gyri. Patients showed deprivation-related cortical re-organization and were divided into sub-groups reflecting their audiological profiles and brain response patterns. There were no macroscopic changes in the tonotopic
organization in the tinnitus group. Behaviourally, both patient groups demonstrated slightly elevated depressive symptoms and less active coping strategies. Before the conference, further advanced methodological approaches will be applied to the data. The auditory fMRI method, despite certain limitations, proves to be a useful tool to study auditory plasticity. Neuroimaging findings can potentially improve therapeutic interventions in clinical populations. [1] Saenz, M. et al. (2014). Tonotopic mapping of human auditory cortex. Hearing Research, 307, 42-52. [2] Humphries, C. et al. (2010). Tonotopic organization of the human auditory cortex. NeuroImage, 50(3), 1202-1211. Supported by grants: 2011/03/D/NZ4/02431 and 2012/05/N/NZ4/02202.
ID: 264
Spatial sensitivity of vocalization responses in the auditory cortex of marmosets Yunyan Wang, Chia-Jung Chang, Xiaoqin Wang Johns Hopkins University Baltimore, United States of America [email protected] Neural selectivity to conspecific vocalizations requires integration of multiple stimulus features such as frequency, amplitude, and frequency modulations over time. Neurons in the auditory cortex across mammalian species generally show some degree of spatial selectivity. It is not yet clear how sound location affects neural selectivity to vocalizations. We studied the spatial sensitivity of vocalization responses in the auditory cortex of the awake marmoset (Callithrix jacchus), a highly vocal New World primate. In these experiments, we systematically tested each single unit with various vocalizations presented from 32 locations distributed throughout the animal's front and rear auditory space, both below and above the horizontal plane. Preliminary data indicate that the spatial sensitivity of vocalization responses is generally consistent with that measured using broadband sounds. Nevertheless, there is a mild degree of stimulus-dependent variation that is likely due to tuning characteristics to spectral and temporal features in vocalizations. These data suggest that spatial selectivity is an
essential property of auditory cortical neurons that modulates neural selectivity across different types of stimuli, including vocalizations. Furthermore, in a subset of neurons, the sustained portion of the response to vocalizations was more selective to location than the onset response. This finding is in agreement with the notion that stimulus preference information is carried in the sustained response of cortical neurons.
ID: 265
Towards optogenetic cochlea implants Marcus Jeschke1, Victor H Hernandez1,2,3, Anna Gehrt1, Zhizi Jing1, Gerhard Hoch1, Daniel Keppeler1, Christian Goßler4, Ulrich T Schwarz4,5, Patrick Ruther5, Michael Schwaerzle5, Roland Hessler6, Tim Salditt2, Livia de Hoz7, Nicola Strenzke1, Tobias Moser1 1University of Goettingen Medical Center, Germany; 2University of Goettingen, Germany; 3University of Guanajuato, Mexico; 4Fraunhofer Institute for Applied Solid State Physics Freiburg, Germany; 5University of Freiburg, Germany; 6MED-EL Austria and MED-EL Germany; 7Max Planck Institute for Experimental Medicine Goettingen, Germany [email protected]-goettingen.de Cochlear implants, implanted in over 300,000 people worldwide, are by far the most successful neuroprostheses and enable open speech comprehension in a majority of users. However, they suffer from low frequency resolution due to the wide current spread from stimulation contacts, which limits the number of independently usable channels (usually fewer than a dozen) and compromises speech understanding in noise, music appreciation, and the perception of prosody. To ameliorate these drawbacks, we are pursuing optogenetic cochlear implants, in which spiral ganglion neurons are genetically modified to spike upon light stimulation. Optical stimulation can be spatially confined and thus promises a lower spread of excitation in the cochlea. Accordingly, the increased number of independent stimulation channels is expected to enhance frequency resolution and intensity coding. We investigated optogenetic cochlea stimulation employing various transgenic rodent models as well as virus-mediated expression of channelrhodopsin variants in spiral ganglion neurons. Blue light stimulation
of the spiral ganglion via single-channel µLEDs or fiber-coupled lasers activated the auditory pathway, as demonstrated by recordings of neuronal population responses and of single neurons along the auditory pathway. Auditory brainstem response thresholds were found to be around 1 mW/mm², similar to thresholds for cortical neuron stimulation. Expression of CatCh, a channelrhodopsin variant with higher light sensitivity, reduced the amount of light required for responses and allowed reliable neuronal spiking for stimulation rates up to at least 60 Hz. The cochlear spread of excitation was probed using multielectrode recordings from the inferior colliculus of transgenic channelrhodopsin-2 mice. Current source density based response maps were compared between optical, acoustic, and electrical stimulation and indicated a better frequency resolution of optical stimulation versus monopolar electrical stimulation. Working towards multichannel optical implants, we have, in collaboration with semiconductor experts, implanted rodent cochleae with flexible µLED arrays accommodating approximately 100 µLEDs per 1 cm. Ongoing experiments to further characterize optogenetic stimulation of the cochlea will be presented and discussed. Taken together, our experiments demonstrate the feasibility of optogenetic cochlea stimulation to activate the auditory pathway and lay the groundwork for future applications in auditory research and prosthetics.
ID: 266
My actions are louder than yours: Enhanced activity in auditory cortex to self-‐generated sounds Daniel Reznik, Yael Henkin, Noa Schadel, Roy Mukamel Tel-‐Aviv University, Israel [email protected] A pure sensory system is expected to respond in an identical manner to stimuli with identical physical attributes. Research over the last several years has demonstrated that performing actions with auditory consequences modulates the response in auditory cortex to otherwise identical stimuli passively heard. Such modulation has been suggested to occur through a corollary discharge sent from the motor cortex during
voluntary actions. The relationship between the effector used to generate the sound, the type of modulation, and changes in perceptual sensitivity is currently unclear. In the current study, we recorded whole-brain functional magnetic resonance imaging signals from healthy subjects and demonstrate bilateral enhancement in the auditory cortex (superior temporal gyrus) to self-generated versus externally generated sounds. Furthermore, we find that this enhancement is stronger when the sound-producing hand is contralateral to the auditory cortex. At the behavioral level, binaural hearing thresholds are lower for self-generated sounds, and monaural thresholds are lower for sounds triggered by the hand ipsilateral to the stimulated ear. Together with functional connectivity analysis, our results suggest that a corollary discharge sent from active motor cortex enhances activity in auditory cortex and increases perceptual sensitivity in a lateralized manner.
ID: 267
The representational geometry of natural-sound categories in the auditory cortex Bruno Lucio Giordano1, David Fleming1, Pascal Belin1,2,3 1University of Glasgow, United Kingdom; 2Université de Montréal, Canada; 3Institut des Neurosciences de La Timone, CNRS & Université Aix-Marseille, France [email protected] FMRI studies of natural sounds have consistently revealed patches of the auditory cortex activated preferentially by specific sound categories (e.g., human voices). Sound-category encoding has also been observed in spatial fMRI patterns (e.g., human action; Giordano et al., 2013), and appeared to rely on a CATEGORY DETECTOR representational geometry characterized by: a high pattern similarity within the target category (e.g., action); a low similarity within the non-target category (e.g., non-action); and a low between-category similarity. This geometry differs from the CATEGORY DISCRIMINATOR geometry, characterized by a high pattern similarity within both the target and non-target categories. We investigated: [1] whether category detectors also represent human
vocalizations; [2] the relationship between pattern-based and activation-based category encoding; [3] the relationship between category detectors and the pattern encoding of within-category structure. METHODS. We carried out an event-related fMRI study of vocal and action sounds (5 participants; 3 sessions each). The stimulus set was rich in within-category structure (e.g., speech vs. affective vs. physiological vocalizations; liquid vs. solid vs. aerodynamic action sounds). A whole-brain searchlight (6 mm) analysis assessed the encoding of category structure in activation (within-searchlight average BOLD) and fMRI-pattern similarity (within-searchlight pattern correlation). Single-subject inference relied on permutation of the 1st-level GLM (shuffling of stimulus labels at the single-trial level). RESULTS. [1] All participants revealed a category-detector geometry for both vocalizations (bilateral AC extending to aSTG) and action sounds (posterior AC from PT to pSTG), and weaker evidence for a discriminator geometry in smaller posterior AC regions. [2] Significant activation contrasts emerged in the same patches characterized by a detector geometry in 5 and 3 of the participants for vocal and action sounds, respectively. [3] For vocalizations, within-category structure (speech vs. affective vs. physiological) was encoded in the same regions that implemented the vocal-category detector. CONCLUSIONS. Spatial fMRI patterns in the AC appear to encode natural-sound categories based, predominantly, on a detector geometry. Auditory category detectors also appear to be lawfully associated with activation differences and, for vocal sounds, to be implemented in the same regions devoted to the encoding of within-category structure.
ID: 268
Adaptation to changing variance of harmonic stimuli in the ferret auditory cortex Astrid Klinge-‐Strahl, Ben Willmore, Andrew J. King University of Oxford, United Kingdom astrid.klinge-‐[email protected] Neurons at higher levels of the auditory system have been found to respond to stimuli in a context-‐dependent manner (e.g. Malone et al.
2002, Kvale & Schreiner 2004). Recent studies have shown that short-term adaptation occurs to stimulus statistics (Dahmen et al. 2010) or stimulus contrast (Rabinowitz et al. 2011). Dahmen et al. (2010) investigated how auditory spatial processing adapts to stimulus statistics and showed that neurons adjusted their responses to match a certain ILD distribution. Rabinowitz et al. (2011) examined how changing the spectro-temporal contrast of a set of stimuli changed neural sensitivity to changes in these stimuli. Short-term adaptation and recovery from adaptation have also been shown in a study by Kvale and Schreiner (2004). Here, we investigate whether neuronal short-term adaptation processes also exist for dynamic changes in harmonic complex stimuli. We presented sequences of dynamic random chords (DRCs) to auditory cortex neurons in anaesthetised ferrets. The DRCs comprised a stack of integer multiples of a fundamental frequency, overlaid with a random frequency jitter for each harmonic component. The amount of frequency jitter was switched after an adaptation period from low to high or vice versa. Based on previous findings, one might expect that the neurons would adjust their responses to the current background in order to be able to respond adequately to possible target sounds. We presented the vowels /u/ and /ɛ/ at different temporal positions after a switch in the variance of the background DRC. First, a change in the variance of the background DRC should be reflected in a change in the response to this stimulus. Second, the time period an auditory cortex neuron needs to adapt to the changed variance in the background stimulus should be reflected in differences in the response of the neuron to the vowels at the different positions of the vowel after the switch in background variance. Initial data suggest that the response to either /u/ or /ɛ/ is altered depending on the background in which it was presented. Two different response types were observed: the response to the vowel was either increased or decreased after a switch in the background variance, compared with a control condition with no switch. Furthermore, results suggest an influence of the vowel position after a switch in
background variance on the neuronal responses.
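The stimulus construction lends itself to a short sketch: a harmonic stack whose components carry random jitter, with the jitter magnitude switching mid-sequence. The fundamental, component count, jitter values, and durations below are illustrative assumptions only, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def jittered_chord(f0, n_harm, jitter, dur_s, fs=44100):
    """One chord: harmonics k*f0, each displaced by up to +/- jitter * f0.
    jitter=0 gives a strictly harmonic complex."""
    t = np.arange(int(dur_s * fs)) / fs
    freqs = f0 * (np.arange(1, n_harm + 1) + jitter * rng.uniform(-1.0, 1.0, n_harm))
    phases = rng.uniform(0.0, 2.0 * np.pi, n_harm)
    return sum(np.sin(2.0 * np.pi * f * t + ph) for f, ph in zip(freqs, phases)) / n_harm

# Adaptation period with low-variance chords, then a switch to high variance
drc = np.concatenate(
    [jittered_chord(200.0, 10, 0.05, 0.025) for _ in range(40)]
    + [jittered_chord(200.0, 10, 0.30, 0.025) for _ in range(40)]
)
```

ID: 269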
Neural interaction between auditory spatial information and field-of-view Akiko Callan, Hiroshi Ando National Institute of Information and Communications Technology Tokyo, Japan [email protected] Both audition and vision provide important cues for motion perception, and we integrate those cues in everyday life. Although audition has a smaller effect on motion perception than vision, it is known that moving sounds can facilitate visual motion perception. A previous study suggested the involvement of the superior temporal gyrus (STG), supramarginal gyrus (SMG), and superior parietal lobule in integrating audiovisual motion cues, because those areas were activated by coherent audiovisual motion. In this fMRI study, we further investigated how auditory and visual cues are integrated in motion perception. Our specific interest was to find out whether the significance of auditory spatial information changes with the field-of-view (FOV) of visual stimuli. We predicted that auditory spatial cues are more important for smaller FOVs and also more important when less spatial information is provided by the visual stimuli. We used videos and corresponding moving sounds as stimuli. We used three types of auditory stimuli (high spatial cue: head-related transfer function (HRTF) sound; low spatial cue: mono sound; no spatial cue: no sound) and four sizes of FOV (100° x 56°, 67° x 38°, 33° x 19°, and 17° x 10°). In order to create the different sizes of FOV, we either shrank or cropped the original (100° x 56°) videos. The shrink stimuli contain all information presented in the original videos, whereas the crop stimuli contain less information than the original videos. During fMRI experiments, participants were asked to rate the convincingness of perceived self-motion (vection) on a 0-10 scale. Behavioral data showed significant main effects of sound type and FOV and a significant interaction between the two. Overall analysis of fMRI data
showed a significant difference between the HRTF and mono conditions in the left planum temporale. To find out whether the significance of auditory spatial information changes with FOV, we performed an HRTF > mono contrast for each condition and found a significant difference in the left SMG only for the shrink 17° x 10° FOV condition. Considering the involvement of the SMG in the integration of audiovisual motion cues, this result indicates that, for the small FOV, high-quality auditory spatial information was integrated with visual information more than low-quality auditory spatial information, supporting the hypothesis that auditory spatial cues are more important for smaller FOVs. In contrast, our results did not support the hypothesis that auditory cues are more important when less spatial information is provided by the visual stimuli. In this study, auditory cues facilitated motion perception when the FOV was small by supplementing visual information, but not by replacing missing visual information.
ID: 276
The time constants of an auditory context effect Claire G Prelofi1, Shihab A Shamma1,2 1Ecole Normale Superieure, Paris, France; 2University of Maryland, College Park, United States of America [email protected] A signal may correspond to several possible interpretations of the world. In order to deal with this ambiguity, perceptual systems use recent history to make the best possible prediction. Since the temporal dynamics of these effects constrain the possible neural mechanisms that underlie them, in the present work we characterized the time constants of an auditory context effect. Stimuli were octave-related tone complexes known as Shepard tones, containing several octaves of a base frequency (e.g. 100 Hz, 200 Hz, 400 Hz) with a fixed Gaussian spectral envelope centered on 1040 Hz. When two such tones are presented a half-octave apart (a tritone), the direction of the pitch shift is ambiguous, as it may be heard as an upward or a downward step (Shepard, 1964). However, this ambiguous direction can be strongly biased by
the auditory context, consisting of a sequence of Shepard tones presented before the tritone. Expt 1 focused on the establishment of the bias. A single context tone, varying in duration between 5 ms and 320 ms, was presented before the ambiguous tritone. The results revealed that the bias is significantly different from chance for a context as short as 20 ms. Expt 2 explored the decay of the bias. We presented a strong context sequence (five tones of 0.125 s each) followed by a silent gap ranging from 0.5 s to 64 s. The results show that the context effect can be maintained over 32 s of silence. Our findings reveal that this context effect shows remarkable insensitivity to temporal parameters: the bias is present at short and long time-scales. In Expts 1 and 2, the bias measure used was the up/down response of the listener. Since a perceptually weak bias could still influence this response reliably, we sought additional measures of how the degree of perceived ambiguity varies with parameters previously shown to introduce a bias in listeners' responses. In Expt 3, reaction times were used as a signature of the perceived ambiguity of the tritone. The context strength was varied through the length of one context tone (as in Expt 1), and response times were measured as listeners reported the direction of the shift of the ambiguous tritone. Preliminary results suggest that the response times correlate with the strength of the bias: responses are faster when the context is long and induces a strong bias, and slower when the context is short and induces a weaker bias (a 240 ms difference). Current research aims to replicate this effect in a design that does not vary a time parameter. Overall, the results reveal remarkable time constants: the context effect is established extremely rapidly and yet can be maintained for a long period of time. Such a broad time range suggests that the mechanisms responsible may be distributed across different stages of the auditory pathway. Moreover, the response-time data so far indicate that a weak context can disclose a perceived ambiguity.
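For concreteness, a minimal sketch of Shepard-tone synthesis as described above: octave-spaced components under a fixed Gaussian spectral envelope over log-frequency. The envelope width and the normalization are assumptions for illustration, not the authors' exact stimulus code.

```python
import numpy as np

def shepard_tone(base_hz, dur_s, fs=44100, center_hz=1040.0, sigma_oct=1.0):
    """Sum octave-spaced components of `base_hz`, each weighted by a Gaussian
    envelope over log-frequency centred on `center_hz` (width in octaves)."""
    t = np.arange(int(dur_s * fs)) / fs
    tone = np.zeros_like(t)
    f = base_hz
    while f < fs / 2.0:
        w = np.exp(-0.5 * (np.log2(f / center_hz) / sigma_oct) ** 2)
        tone += w * np.sin(2.0 * np.pi * f * t)
        f *= 2.0
    return tone / np.abs(tone).max()

# An ambiguous tritone pair: two Shepard tones a half-octave apart
pair = np.concatenate([shepard_tone(100.0, 0.125),
                       shepard_tone(100.0 * 2 ** 0.5, 0.125)])
```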
ID: 278
Lateralization of the primary auditory cortex in patients with unilateral tinnitus Naghmeh Ghazaleh1, Wietske Van der Zwaag1, Melissa Saenz3 1École Polytechnique Fédérale de Lausanne, Switzerland; 3Lausanne University Hospital, Switzerland [email protected] Tinnitus, the chronic perception of ringing or other phantom sounds, is typically associated with hearing loss. The reduction of auditory input reaching the auditory cortex changes the balance of excitatory and inhibitory activation of the corresponding neurons in this area and is possibly a cause of tinnitus. On the other hand, a recent study (Gordon et al., Brain 2013) has shown that bilateral input protects the cortex from unilaterally driven reorganization. Based on this finding, we might expect that in patients with unilateral hearing loss and tinnitus, the input from the unimpaired ear is not conveyed sufficiently to both hemispheres, and that this loss of input results in a reorganization of neuronal activity in the auditory cortex. To test this hypothesis, we compared the amplitude of the BOLD response of the auditory cortex in the hemispheres ipsilateral and contralateral to the hearing ear, in response to tones of different frequencies. Ten patients with chronic unilateral hearing loss and tinnitus and age-matched normal controls (ages 26-49) were tested. (Patients had chronic subjective non-pulsatile tinnitus associated with moderate to severe unilateral sensorineural hearing loss in one ear only, with at least PTA > 40 dB on three consecutive frequencies between 1 and 4 kHz; tinnitus duration < 6 months.) The recruitment of patients with unilateral hearing loss allowed unimpaired sound delivery via the intact ear, bypassing any abnormal responsiveness at the peripheral level. Our high-resolution functional MRI at 7 Tesla (1.5 mm isotropic voxels; Da Costa et al., J Neurosci 2011) provides fine-scale tonotopic maps in controls and tinnitus patients, allowing us to compare the amplitude of neural activity in the auditory cortex of the hemispheres contralateral and ipsilateral to the hearing ear for each group of frequency-responding neurons. Our first finding is that this interhemispheric activity difference is larger in tinnitus patients than in controls. In more detail, the activity amplitude in the hemisphere contralateral to the hearing ear in tinnitus patients is much higher than that in the ipsilateral hemisphere, in comparison to controls. This result suggests that the auditory pathway in tinnitus patients is less capable of conveying sound bilaterally, and that this could be a probable cause of their tinnitus.
ID: 279
Macaque brain areas related to auditory-motor circuits show increased response to sounds with regular vs. irregular beat Simon Baumann, Manon Grube, Timothy D. Griffiths Newcastle University, United Kingdom [email protected] fMRI data in humans have identified a network of interacting auditory and motor areas, including premotor cortex (PMC), posterior auditory cortex, and basal ganglia, supporting speech1 and instrumental music playing2. Perception of rhythmic auditory stimuli has been shown to be sufficient to drive the components of this network and to increase interaction in the network in musicians and non-musicians, even in the absence of an obvious perception-action link3, 4. While tracing studies in macaque monkeys have demonstrated connections between premotor cortex and posterior auditory areas5, and macaques have been trained to keep the pace of a regular auditory beat, it is not clear whether rhythmic stimuli have the same capacity in non-human primates to drive an auditory-motor network6. Here we conducted a pilot study to test with fMRI whether auditory stimuli with a rhythmic or regular beat have the capacity to drive a potential network of the above auditory and motor areas in non-human primates, despite their lacking a capacity for speech. We presented short, ramped beats of broadband noise at about 3 Hz to the animals during a visual fixation task. The sound presentation was divided into two randomised conditions. In the first condition, the beats were presented at
strictly regular intervals. In the second condition, the interstimulus interval (ISI) was randomised by 0-30% to create an irregular sequence of beats. We identified significant responses to the beats, compared to silence, in auditory core and belt areas, but also in the premotor cortex and the cerebellum. Additionally, we identified significant responses to the regular beats in area Tpt and the basal ganglia. A comparison of the response to regular vs. irregular beats showed significant responses in posterior auditory areas (caudal belt, Tpt), PMC, the basal ganglia, and the cerebellum. A contrast of irregular vs. regular beats was significant in the medial auditory belt, the superior temporal sulcus, and the dorsal prefrontal cortex. The data suggest that, similar to humans, non-human primates show activity beyond the auditory system in motor-related areas in response to passive stimulation with trains of short beats. These responses are stronger in PMC and posterior auditory areas for regular versus irregular beats, with additional foci of regular-stimulus preference in the basal ganglia and cerebellum. Most of the identified areas have previously been shown to be part of an interacting auditory-motor network responding to rhythmic stimuli in humans. This suggests that similar sensory-motor mechanisms are implicated in the processing of rhythmic sounds in non-human primates. Behavioural work to support this notion is currently being pursued. 1. Hickok et al., 2003. J Cogn Neurosci 2. Baumann et al., 2007. Brain Res 3. Chen et al., 2008. Cereb Cortex 4. Grahn & Rowe, 2009. J Neuroscience 5. Petrides & Pandya, 2006. J Comp Neurol 6. Merchant et al., 2011. Proc Natl Acad Sci USA
ID: 280
Temporal processing in dyslexia: Neural phase locking of oscillatory rhythms in auditory cortex Astrid De Vos, Robert Luke, Jolijn Vanderauwera, Pol Ghesquière, Jan Wouters KU Leuven – University of Leuven, Belgium [email protected] Developmental dyslexia refers to a hereditary neurological disorder characterized by severe difficulties in reading and spelling despite normal intelligence, education and intense
remedial effort. Depending on the criteria used, dyslexia is thought to affect between 5 and 10% of the population. Although it is widely agreed that the majority of dyslexic individuals show difficulties in one or several aspects of phonological processing, the underlying cause of these phonological problems remains debated. The current study aims to investigate whether a fundamental deficit in the phase locking of neural oscillations to temporal information in speech could underlie the phonological processing problems found in children and adults with dyslexia. Auditory steady-state responses (ASSRs) were recorded in a group of normal-reading and dyslexic adolescents. Five modulation rates were chosen to examine phase locking of neural oscillations over a broad frequency range: 4 Hz (theta rhythm), 10 Hz (alpha rhythm), 20 Hz (beta rhythm), 40 Hz (low gamma rhythm) and 80 Hz (high gamma rhythm). We were specifically interested in ASSRs to low modulation rates, because these modulations are believed to correspond to the rates at which important phonological segments (e.g. syllables and phonemes) occur in speech. Stimuli were presented in three modalities: monaurally to the left ear, monaurally to the right ear, and binaurally to both ears. Responses were recorded with a high-density 64-electrode array mounted in head caps. Source analysis was performed using CLARA (Classical LORETA Analysis Recursively Applied) and generic MRI-based head models. Anatomical brain images were also collected with MR scans, including T1-weighted as well as T2-weighted images, to allow for the construction of realistic individual head models. Results at the sensor level show differences in auditory temporal processing between normal and dyslexic readers at the 10 Hz modulation rate. Detailed results at the neural source level will be presented at the conference. This study is the first to apply source localization methods to ASSR data in dyslexia. We hope that this approach will deliver unique insights into which neural aspects of auditory processing affect the formation of phonological representations in dyslexia.
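One common way to quantify the phase locking measured here is inter-trial phase coherence at the modulation rate; a minimal sketch follows, with epoch counts, durations, and sampling rate as illustrative assumptions rather than the study's recording parameters.

```python
import numpy as np

def phase_locking(trials, fs, f_mod):
    """Inter-trial phase coherence at modulation rate `f_mod`: take the FFT
    phase of each epoch at the bin nearest f_mod and measure the length of
    the mean resultant vector (0 = random phase, 1 = perfect locking)."""
    n = trials.shape[1]
    k = int(round(f_mod * n / fs))                 # FFT bin closest to f_mod
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

fs = 1000.0
trials = np.random.randn(100, int(4 * fs))         # placeholder: 100 epochs of 4 s
plv_theta = phase_locking(trials, fs, f_mod=4.0)   # locking at the 4 Hz (theta) rate
```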
ID: 281
Task effects on binaural-cue representation in human auditory cortex George Christopher Stecker, Nathan C Higgins, Teemu Rinne Vanderbilt University Medical Center, United States of America [email protected] The binaural configuration of an auditory stimulus represents one of the fundamental dimensions, along with frequency and intensity, that modulate human perception and the neural response to sounds. Consistent with electrophysiological measurements in animal models, AC activity measured with BOLD fMRI in human listeners is sensitive to the binaural features of sounds. In this presentation, we describe a series of studies measuring that sensitivity for two types of binaural cues, interaural level differences (ILD) and interaural time differences (ITD), across a range of behavioral contexts including spatial auditory (binaural discrimination), non-spatial auditory (pitch discrimination) and non-auditory (visual brightness discrimination) tasks. Consistent with past results, (a) binaural-cue sensitivity was mainly confined to posterior AC regions, and (b) attention-related modulations were strongest in the adjacent posterolateral superior temporal gyrus and inferior parietal lobule. In addition, the results demonstrate differences in the processing of ITD and ILD in the auditory system, in that ITD sensitivity was (c) restricted to a smaller subregion of the posterior AC areas sensitive to ILD, (d) represented bilaterally in the right AC, and (e) especially dependent on the behavioral task. In combination with previous work on the encoding of stimulus features in spatially synthesized sounds and during different discrimination and memory tasks, these results support a central role for AC in the active integration of spatial cues underlying the global and object-related perception of auditory space.
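For reference, the two cues can be estimated from a stereo snippet roughly as follows; the cross-correlation lag convention and the synthetic example are illustrative assumptions, not the stimuli used in these studies.

```python
import numpy as np

def itd_ild(left, right, fs):
    """Estimate binaural cues from a stereo snippet: ITD as the lag of the
    peak cross-correlation (positive = right ear lags), ILD as the RMS
    level difference in dB (positive = right ear louder)."""
    lags = np.arange(-len(left) + 1, len(right))
    xcorr = np.correlate(right, left, mode="full")
    itd = lags[np.argmax(xcorr)] / fs
    ild = 20.0 * np.log10(np.sqrt(np.mean(right ** 2)) / np.sqrt(np.mean(left ** 2)))
    return itd, ild

fs = 48000
sig = np.random.randn(4800)
left, right = sig, 0.7 * np.roll(sig, int(0.0005 * fs))   # right: 0.5 ms later, ~3 dB softer
print(itd_ild(left, right, fs))                           # ~(0.0005 s, ~-3.1 dB)
```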
ID: 282
Phase locking of theta, beta and gamma oscillations in the aging auditory system Tine Goossens, Charlotte Vercammen, Jan Wouters, Astrid van Wieringen KU Leuven – University of Leuven, Department of Neurosciences, Research Group Experimental ORL, Leuven, Belgium [email protected] With advancing age, people experience greater difficulty following conversations in noisy environments. Besides peripheral hearing and cognition, the temporal processing of low-frequency modulations in the speech envelope plays a key role in speech intelligibility. Temporal processing is established by phase-locked activity of spontaneous neural oscillations in the central auditory system. The close correspondence between theta (4-8 Hz) and beta (13-30 Hz) oscillations and critical speech units (syllables and phonemes) calls for investigating the phase-locking capability of these oscillations across the lifespan. However, aging studies have mainly focused on phase locking of gamma oscillations (> 30 Hz). By means of auditory steady-state response (ASSR) strengths to 4, 20, 40 and 80 Hz amplitude modulations, we investigate the phase-locking capability of theta, beta and gamma oscillations in young, middle-aged, and elderly subjects. To prevent differences in peripheral hearing and cognition from confounding the results, all subjects have normal audiometric thresholds (≤ 25 dB HL, 125 Hz – 4 kHz) and are screened for cognitive impairments (score ≥ 26/30 on the Montreal Cognitive Assessment). In line with previous studies, we find decreasing 80 Hz ASSRs and stable 40 Hz ASSRs with advancing age. No age-related differences are observed for 20 Hz ASSRs. Interestingly, 4 Hz ASSRs are increased in the elderly subjects. This indicates increased phase locking of theta oscillations in the aging auditory system, which is a novel finding.
ID: 285
Conjugating time and frequency: Lessons on hemispheric specialization for speech and music as learned from mustached bats Stuart Dante Washington, Robert Rudnitsky, John S. Tillinghast Georgetown University Medical Center, United States of America [email protected] Evidence suggests that the degree of left hemispheric specialization in the auditory cortex (AC) for processing social calls, such as human speech sounds, is dictated by acoustic structure and not semantics. Specifically, the relatively greater precision with which the left AC processes time-critical (temporal) information enables it to detect the rapid frequency modulations (FMs) that comprise social calls and are analogous to formant transitions in speech. The right AC, on the other hand, has greater precision at processing frequency-related (spectral) information, which enables it to track prosodic variation and pitch. Elements of this Asymmetric Sampling in Time have been identified not only in human AC but also in the Doppler-shifted constant frequency processing (DSCF) subregion of mustached bat AC. Here, we use published observations and theorems to suggest how an idealized version of the classic left hemispheric specialization for speech processing, characteristic of human AC, evolved in the DSCF area. We review how DSCF neurons use the tonal component of the returning, Doppler-shifted, second harmonic of the echolocation signal in this species (echo-CF2) to calculate the relative velocities of targets, including prey. Precise velocity calculations based on the echo-CF2 are thus ethologically advantageous to the mustached bat but can only be achieved through refined frequency discrimination. The Acoustic Uncertainty Principle dictates that refining frequency discrimination comes at the expense of temporal precision, and refined temporal precision is necessary for detecting and processing the rapid FMs in this species' social calls. Thus, environmental pressures and physical limitations forced right hemispheric DSCF neurons to develop greater spectral precision, enabling them to precisely track target velocity and other frequency variations
but restricting their ability to detect and process short, rapid FMs. Left hemispheric DSCF neurons, on the other hand, developed greater temporal precision and can thus process the short, rapid FMs used in social calls, but perform frequency discrimination relatively poorly. We acknowledge and address discrepancies related to sex differences. If orientation sounds shape the left hemispheric specialization for social calls, this would be further evidence that this asymmetry is driven by acoustic structure and not semantic content. Left hemispheric specialization for social calls and rapid FMs in mustached bats, far from being a simple scientific anomaly, could help unravel fundamental phonological mysteries related to hemispheric differences in speech processing.
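The Acoustic Uncertainty Principle invoked above is the standard time-frequency trade-off (the Gabor limit); in one common normalization, for a signal's temporal spread $\Delta t$ and spectral spread $\Delta f$:

$$\Delta t \, \Delta f \;\ge\; \frac{1}{4\pi}$$

Sharpening $\Delta f$ (finer frequency discrimination, as attributed here to right-hemisphere DSCF neurons) therefore necessarily broadens $\Delta t$, and vice versa, which is the trade-off the abstract maps onto the two hemispheres.

ID: 289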
Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain David Sage Vicario, Kai Lu Rutgers University, United States of America [email protected] Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to streams used in human infants to demonstrate statistical learning for speech sounds. For both stimulus patterns with contiguous transitions and with non-‐adjacent elements, single and multi-‐unit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of non-‐adjacent patterns was stronger in the right hemisphere than in the left and may reflect an effect of top-‐down modulation that is lateralized. Responses to recurring patterns showed stimulus-‐specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that
appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. Since auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds.
ID: 290
Oxytocin receptors in left auditory cortex enable learned social behavior Bianca Marlin, Mariela Mitre, James A. D'amour, Moses V. Chao, Robert C. Froemke New York University School of Medicine, United States of America [email protected] Oxytocin (OT) is a hypothalamic peptide that regulates social behaviors such as maternal care and parent-infant bonding. It remains unclear when, where or how OT receptor activation interacts with experience to control social cognition. Here we show how OT enables naive virgin female mice to learn to retrieve isolated, vocalizing pups. We first performed behavioral experiments to determine the spatiotemporal requirements of OT signaling for learned pup retrieval. Virgins were co-housed with mothers and pups, and retrieval of isolated pups was tested at regular intervals (every 1-6 hours for 3-7 days). Within 18 hours, 20/36 OT-injected animals learned to retrieve (vs. 5/25 control mice). OT infusion into left, but not right, auditory cortex, or optogenetic activation of OT neurons, also accelerated learning in a similar manner. With the Chao lab we found that OT receptors are preferentially expressed
in left auditory cortex (Mitre et al., SFN abstracts 2014). We made in vivo whole-cell recordings from left or right auditory cortex of anesthetized naive virgins, dams, and experienced virgins. Excitatory and inhibitory currents and spiking responses recorded from left cortex in trained virgins and dams were more robust and correlated than in naive virgins. Naive mice had imbalanced excitation and inhibition for pup calls, reminiscent of the poor inhibitory co-tuning observed in young animals (Dorrn et al., Nature 2010). We asked if pairing pup calls with OT could modify neural responses, increasing their robustness and trial-to-trial reliability. We made multiple whole-cell recordings from the same animal to examine responses over hours (Froemke et al., Nature 2007). After we measured baseline synaptic or spiking responses, a given call was repetitively paired with either endogenous OT (via optogenetic stimulation) or exogenously applied OT. First, OT reduced evoked inhibitory responses within seconds of pairing. Over minutes, excitatory responses were strengthened after pairing, leading to an increase in call-evoked spikes. Finally, inhibitory responses increased over hours, improving the excitatory-inhibitory balance to enforce the temporal precision of spike timing while preserving the enhanced responses to pup calls. These findings suggest that OT, paired with pup calls, can modify cortical circuits to enable or improve learned social behavior. Furthermore, OT-dependent plasticity and receptor expression provide a biological basis for the lateralization of vocal processing in left auditory cortex.
Author index
Only attending authors and coauthors. Numbers correspond to ID numbers of contributions [P = Poster, S = Session; SOP = Short oral presentation, T = Talk]
Abraham, Andreas 237[P] Adenis, Victor 144[P] Aggelopoulos, Nikolaos 220[P] Alhazmi, Fahad Hassan 233[P] Altmann, Christian Friedrich 177[P] Andreeva, Elena 154[P] Andrillon, Thomas 244[P] Angenstein, Nicole 113[P] Arsenault, Jessica 167[P/SOP I] Astikainen, Piia 163[P], 160[P] Atilgan, Huriye 246[P], 153[P], 190[P], 215[P] Attaheri, Adam 212[P], 147[P] Awwad, Bshara 128[P], 261[P] Baltus, Alina 149[P] Banks, Matthew I. 171[P] Bao, Shaowen 306[T/S6/2] Barascud, Nicolas 198[P] Barone, Pascal 292[T/S5/1] Basura, Gregory Joseph 109[P] Baumann, Simon 279[P], 253[P] Beitel, Ralph Eugene 187[P] Belin, Pascal 150[P], 164[P], 166[P], 202[P], 214[P] Bendixen, Alexandra 201[T/S3/3] Bendor, Daniel 110[P] Besle, Julien 218[P] Bieszczad, Kasia Maria 247[T/S6/1]
Bizley, Jennifer 153[P], 190[P], 215[P], 246[P] Böckmann-‐Barthel, Martin 123[P] Bonte, Milene 141[P] Brandmeyer, Alex 139[P] Brechmann, André 113[P], 123[P], 302[T/S6/1] Bremen, Peter 105[P] Brosch, Michael 220[P], 223[P] Budinger, Eike 120[T/S5/1], 122[P], 124[P] Butler, Blake Edward 133[P], 121[P] Callan, Akiko 269[P] Cambiaghi, Marco 136[P] Carrasco, Andres 211[P] Chait, Maria 262[P], 180[P/SOP II], 198[P], 225[P] Chang, Edward 296[T/S2/3], 236[P] Ciesla, Katarzyna Joanna 263[P] Conway, Bevil R. 172[P] Cooke, James 191[P] Crosse, Michael Jeremiah 229[P] Da Costa, Sandra 112[P] David, Stephen V. 303[T/S6/2] de Boer, Jessica 184[P], 199[P] de Hoz, Livia 260[P] De Vos, Astrid 280[P] Deike, Susann 123[P] Di Liberto, Giovanni 227[P]
Disbergen, Niels Robert 240[P] Edeline, Jean-‐Marc 252[P], 142[P], 144[P], 255[P] Eggermont, Jos Jan 108[T/S4] Elie, Julie E. 216[T/S1/2], 219[P] Elyada, Yishai M. 241[P] Finkl, Theresa 127[P] Firzlaff, Uwe 102[P], 291[T/S1/1] Flanagan, Sheila Anne 205[P] Fleming, David 202[P] Formisano, Elia 174[T/S6/3], 141[P], 240[P], 242[P] Froemke, Robert 103[T/S6/2], 290[P] Fukushima, Makoto 151[P] Gander, Phillip Evan 156[P], 189[P] Gaucher, Quentin 144[P] Gentner, Timothy 272[T/S6/1] Ghazaleh, Naghmeh 278[P] Giordano, Bruno Lucio 267[P], 164[P], 166[P], 202[P] Giroud, Nathalie 224[P] Gold, Joshua Rodney 222[P] Goossens, Tine 282[P] Gourévitch, Boris 255[P], 252[P] Griffiths, Timothy David 189[P], 305[T/S4], 279[P] Hall, Amee J. 258[P], 133[P] Hämäläinen, Jarmo 238[P], 140[P], 160[P] Hamilton, Liberty 236[P] Happel, Max 116[P], 130[P], 210[P] He, Jufang 288[T/S5/2]
Hechavarria, Julio 129[P] Heil, Peter 123[P], 195[P], 223[P] Henry, Molly J. 192[P], 231[P] Henschke, Julia U. 124[P] Herrmann, Björn 231[P] Herrmann, Christoph 149[P] Hildebrandt, K. Jannis 259[P] Hitsuyu, Rie 185[P] Homma, Natsumi 210[P] Howard, Matthew Andrew 117[P], 156[P], 189[P], 228[P] Huang, Ying 223[P] Hubka, Peter 243[P] Huetz, Chloé 142[P], 144[P] Iyengar, Soumya 176[P] Jäger, Katharina 114[P] Jeschke, Marcus 265[P] Johnsrude, Ingrid Suzanne 235[P] Jones, Gareth Paul 215[P], 153[P] Khouri, Leila 261[P] King, Andrew John 304[T/S6/2], 125[P/SOP I], 188[P], 191[P], 210[P], 222[P], 268[P] Klinge-‐Strahl, Astrid 268[P] Klump, Georg M. 284[T/S1/1], 123[P] Knyazeva, Stanislava 220[P] Koehler, Seth D. 203[P] Kok, Melanie Ann 104[P], 121[P] Kompus, Kristiina 206[P] König, Reinhard 195[P], 223[P]
Kössl, Manfred 132[T/S1/1], 111[P], 114[P], 129[P], 145[P] Kotz, Sonja A. 277[T/S2/1], 164[P] Kraus, Nina 275[T/S4] Kreitewolf, Jens 208[P] Krumbholz, Katrin 197[P], 184[P], 199[P] Kurkela, Jari 160[P] Lalor, Edmund 229[P] Langers, Dave 196[P] Lanting, Cris 178[P] Latinus, Marianne 138[P], 150[P] Lee, Sze Chim 148[P] Leppänen, Paavo H.T. 140[P] Lewald, Jörg 107[P] Liu, Robert C. 232[P/SOP II] Lohvansuu, Kaisa 140[P] Lomber, Stephen 104[P], 121[P], 133[P], 155[P], 211[P], 258[P] Love, Scott A. 150[P] Maor, Ido 251[P], 248[P] Marlin, Bianca 290[P] Massoudi, Roohollah 157[P] Matusz, Pawel J. 165[P] May, Patrick J.C. 209[T/S3/1] Mehta, Anahita 193[P] Merchant, Hugo 298[T/S2/2] Meredith, M. Alex 270[T/S5/1], 211[P] Molloy, Katharine 225[P] Mowery, Todd Michael 169[P/SOP I] Mukamel, Roy 266[P]
Murray, Micah M. 283[T/S5/2], 165[P] Nelken, Israel 273[T/S3/1], 119[P], 128[P], 241[P], 248[P], 261[P] Niekisch, Hartmut 130[P], 237[P] Nodal, Fernando 188[P], 210[P], 222[P], 304[T/S6/2] Noesselt, Toemme 271[T/S5/2] Nolden, Sophie 230[P] Norman-‐Haignere, Samuel Victor 126[P/SOP II], 172[P] Nourski, Kirill V. 117[P] Novak, Ondrej 213[P] Nozaradan, Sylvie 239[P/SOP I] Occelli, Florian 255[P] Ohl, Frank W. 302[T/S6/1], 116[P], 130[P] Okamoto, Hidehiko 134[P] Osmanski, Michael Scott 170[P] O'Sullivan, James 226[P], 227[P] Palmer, Alan R. 179[P/SOP I] Panniello, Mariangela 125[P/SOP I] Pantev, Christo 274[T/S6/3], 134[P] Paquette, Sebastien 214[P] Perrodin, Catherine 221[P] Petkov, Christopher I. 147[P], 308[T/S1/2], 162[P], 221[P], 253[P] Poeppel, David 301[T/S3/3] Poirier, Colline 253[P] Polterovich, Ana 119[P], 128[P] Power, Alan 182[P] Rajendran, Vani Gurusamy 183[P] Rauschecker, Josef 299[T/S1/2]
Rhone, Ariane Elizabeth 228[P], 117[P] Riecke, Lars 242[P] Rinne, Teemu 162[P], 281[P] Roberts, Larry Evan 115[P] Saldeitis, Katja 122[P] Salminen, Nelli 135[P] Salvia, Emilie 164[P] Sandmann, Pascale 168[P], 217[P] Schaefer, Markus 111[P] Scheich, Henning 122[P], 124[P] Schelinski, Stefanie 152[P] Schierholz, Irina 217[P] Schneider, David M 287[T/S2/2] Schnupp, Jan W.H. 181[P], 183[P], 191[P] Schreiner, Christoph 295[T/S3/2] Schröger, Erich 297[T/S2/1], 131[P], 254[P] Scott, Brian Hayward 194[P] Selezneva, Elena 220[P] Shalev, Amos 241[P], 250[P] Shamma, Shihab A. 276[P] Sharma, Anu 286[T/S4] Shiramatsu, Tomoyo Isoguchi 175[P], 159[P], 185[P] Slater, Heather 162[P] Sohoglu, Ediz 180[P/SOP II] Sollini, Joseph A. 249[P] Song, Wen-‐Jie 158[P] Stecker, George Christopher 281[P] Steinschneider, Mitchell 300[T/S3/2], 117[P]
Stolzberg, Daniel 121[P], 104[P] Stropahl, Maren 168[P] Takahashi, Hirokazu 159[P], 175[P], 185[P] Tasaka, Gen-‐ichi 250[P] Tavano, Alessandro 173[P] Teki, Sundeep 161[P] Town, Stephen Michael 190[P], 153[P], 215[P], 246[P] Treille, Avril 186[P/SOP II] Vater, Marianne 145[P], 132[T/S1/1] Vavatzanidis, Niki Katerina 146[P] Vicario, David Sage 289[P] Vollmer, Maike 187[P], 234[P] von der Behrens, Wolfger 154[P] von Kriegstein, Katharina 152[P], 208[P] Wagner, Luise 118[P] Walker, Kerry Marie May 125[P/SOP I] Wang, Yunyan 264[P] Wang, Xiaoqin 293[T/S2/3], 170[P], 203[P], 264[P] Washington, Stuart Dante 285[P] Weise, Annekathrin 131[P] Widmann, Andreas 254[P] Wiegner, Armin 234[P] Wiegrebe, Lutz 291[T/S1/1] Willmore, Ben 200[P], 183[P], 191[P], 268[P] Winkler, István 294[T/S3/3] Wong, Carmen 155[P] Wood, Katherine Charlotte 153[P], 190[P], 215[P], 246[P]
Woolnough, Oscar 199[P] Wyss, Christine 204[P] Yarden, Tohar Sion 248[P] Yaron, Amit 119[P], 128[P] Yasin, Ifat 256[P], 193[P] Yuan, Kexin 106[P/SOP II]
Zador, Anthony 207[T/S2/2] Zatorre, Robert 307[T/S6/3], 197[P], 240[P] Zelenka, Ondrej 213[P] Zhang, Jingting 166[P] Zoefel, Benedikt 137[P]
List of participants
Abraham, Andreas, Dr. University of Potsdam Zoology / Neurobiology Karl-Liebknecht-Str. 24-25, Building 26 14476 Potsdam / Germany andreas.abraham@uni-potsdam.de Adenis, Victor, Mr. Centre de Neurosciences Paris-Sud (UMR8195 CNRS) Bat. 440-447 Université Paris-Sud 91405 Orsay / France victor.adenis@u-psud.fr Aggelopoulos, Nikolaos, Dr. Leibniz Institute for Neurobiology Special Lab Primate Neurobiology Brenneckestr. 6 39118 Magdeburg / Germany nikolaos.aggelopoulos@lin-magdeburg.de Alhazmi, Fahad Hassan, Mr. University of Liverpool MARIARC Pembroke Place L69 3GB Liverpool / United Kingdom [email protected] Altmann, Christian Friedrich, Dr. Kyoto University, Graduate School of Medicine / Human Brain Research Center 54 Shogoin Kawaracho 606-8507 Kyoto / Japan [email protected] Andreeva, Elena, Ms. University of Zurich Institute of Neuroinformatics Winterthurerstrasse 190 8057 Zurich / Switzerland [email protected] Andrillon, Thomas, Mr. École Normale Supérieure Department of Cognitive Studies / LSCP ENS 29 rue d'Ulm 75005 Paris / France [email protected]
Angenstein, Nicole, Dr. Leibniz Institute for Neurobiology Special Lab Non-invasive Brain Imaging Brenneckestr. 6 39118 Magdeburg / Germany nicole.angenstein@lin-magdeburg.de Arsenault, Jessica, Ms. University of Toronto Psychology, Rotman Research Institute 3560 Bathurst St. M6A2E1 Toronto, ON / Canada [email protected] Astikainen, Piia, Ph.D. University of Jyväskylä Department of Psychology, PO Box 35 40014 Jyväskylä / Finland [email protected] Atilgan, Huriye, Ms. University College London Ear Institute 332 Grays Inn Road WC1X 8EE London / United Kingdom [email protected] Attaheri, Adam, Mr. Newcastle University Institute of Neuroscience Henry Wellcome Building for Neuroecology Framlington Place NE2 4HH Newcastle-Upon-Tyne / United Kingdom [email protected] Awwad, Bshara, Mr. Hebrew University of Jerusalem Neurobiology Baal shem tov 16 3353205 Haifa / Israel [email protected] Baltus, Alina, Ms. University of Oldenburg Department of Psychology Drögen-Hasen-Weg 5A 26129 Oldenburg / Germany alina.baltus@uni-oldenburg.de Banks, Matthew I., Ph.D. University of Wisconsin Anesthesiology 1300 University Avenue, Room 4605 53706 Madison, WI / United States of America [email protected]
Bao, Shaowen, Prof. University of California-Berkeley Helen Wills Neuroscience Institute, 210X Barker Hall 94720-3202 Berkeley, CA / United States of America [email protected] Barascud, Nicolas, Mr. University College London Ear Institute 332 Gray's Inn Road WC1X 8EE London / United Kingdom [email protected] Barone, Pascal, Dr. CNRS Brain & Cognition UMR 5549 Pavillon Baudot, CHU Purpan BP 25202 31052 Toulouse / France [email protected]-tlse.fr Basura, Gregory Joseph, Dr. University of Michigan Otolaryngology / Head and Neck Surgery 2357 Earl Shaffer Ct 48105 Ann Arbor, MI / United States of America [email protected] Baumann, Simon, Dr. Newcastle University Institute of Neuroscience Framlington Place NE2 4HH Newcastle upon Tyne / United Kingdom [email protected] Beitel, Ralph Eugene, Ph.D. University of California San Francisco 133 Cortland Avenue 94110 San Francisco, CA / United States of America [email protected] Belin, Pascal, Prof. Glasgow University 58 Hillhead Street G12 8QB Glasgow / United Kingdom [email protected] Bendixen, Alexandra, Prof. Carl von Ossietzky University of Oldenburg Department of Psychology Küpkersweg 74 26129 Oldenburg / Germany alexandra.bendixen@uni-oldenburg.de Bendor, Daniel, Ph.D. University College London Experimental Psychology 26 Bedford Way WC1H 0AP London / United Kingdom [email protected]
Benner, Jan, Mr. University Hospital Heidelberg Dept. of Neurology Im Neuenheimer Feld 400 69120 Heidelberg / Germany [email protected] Besle, Julien, Ph.D. Medical Research Council Institute of Hearing Research University Park NG72RD Nottingham / United Kingdom [email protected] Bieszczad, Kasia Maria, Ph.D. University of California Irvine Neurobiology and Behavior Center for the Neurobiology of Learning and Memory 201 Qureshey Laboratory, CNLM 92697-‐3800 Irvine, CA / United States of America [email protected] Bizley, Jennifer, Ph.D. University College London Ear Institute 332 Grays Inn Road WC1X 8EE London / United Kingdom [email protected] Böckmann-‐Barthel, Martin, Dr. Otto von Guericke University Magdeburg Experimental Audiology Leipziger Str. 44 39120 Magdeburg / Germany [email protected] Bonte, Milene, Dr. Maastricht University Faculty of Psychology and Neuroscience Cognitive Neuroscience P.O. box 616 6200 MD, Maastricht / Netherlands [email protected] Boubenec, Yves, Mr. Institut d'études cognitives Ecole Normale Supérieure / IEC / LSP 29 rue d'Ulm 75005 Paris / France [email protected]
Brandmeyer, Alex, Ph.D. Max Planck Institute for Human Cognitive and Brain Sciences Auditory Cognition group Stephanstr. 1a 04103 Leipzig / Germany [email protected] Brechmann, André, Dr. Leibniz Institute for Neurobiology Special Lab Non-Invasive Brain Imaging Brenneckestr. 6 39118 Magdeburg / Germany brechmann@lin-magdeburg.de Bremen, Peter, Dr. Donders Institute Nijmegen Biophysics Heyendaalseweg 135 6525 AJ Nijmegen / Netherlands [email protected] Brosch, Michael, Dr. Leibniz Institute for Neurobiology Special Lab Primate Neurobiology Brenneckestr. 6 39118 Magdeburg / Germany brosch@ifn-magdeburg.de Budinger, Eike, Dr. Leibniz Institute for Neurobiology Systems Physiology of Learning Brenneckestr. 6 D-39118 Magdeburg / Germany budinger@lin-magdeburg.de Butler, Blake Edward, Dr. University of Western Ontario Physiology and Pharmacology 1151 Richmond St N. N6A5C2 London, ON / Canada [email protected] Callan, Akiko, Ms. National Institute of Information and Communications Technology Center for Information and Neural Networks University 1-4 Yamadaoka, Suita 565-0871 Osaka / Japan [email protected] Cambiaghi, Marco, Ph.D. Università degli Studi di Torino Neuroscience C.so Raffaello, 30 10125 Turin / Italy [email protected]
Carrasco, Andres, Ph.D. University of Western Ontario Physiology and Pharmacology 1151 Richmond St N. N6A 5C2 London, ON / Canada [email protected] Chait, Maria, Ph.D. University College London Ear Institute 332 Gray's Inn Rd WC1X 8EE London / United Kingdom [email protected] Chang, Edward, Dr. University of California, San Francisco 28 Ronada 94611 Piedmont, CA / United States of America [email protected] Ciesla, Katarzyna Joanna, Ms. Institute of Physiology and Pathology of Hearing Bioimaging Research Center Mochnackiego 10 02-‐042 Warsaw / Poland [email protected] Clemo, H. Ruth, Ph.D. Virginia Commonwealth University School of Medicine 1101 E. Marshall Street, Sanger Hall Rm 12-‐007 23298-‐0709 Richmond, VA / United States of America [email protected] Cohen, Yale Eric, Dr. University of Pennsylvania Dept. of Otorhinolaryngology 3400 Spruce St -‐ 5 Ravdin 19104 Philadelphia, PA / United States of America [email protected] Conway, Bevil R., Mr. Wellesley College Neuroscience 106 Central Street 02481 Wellesley, MA / United States of America [email protected] Cooke, James, Mr. Oxford University Worcester College Walton Street OX1 2HB Oxford / United Kingdom [email protected]
Crosse, Michael Jeremiah, Mr. Trinity College Dublin Centre for Bioengineering TCBE Trinity Biomedical Sciences Institute 152–160 Pearse Street 2 Dublin / Ireland [email protected] Da Costa, Sandra, Ph.D. University Hospital Lausanne Department of Clinical Neurosciences Avenue Pierre-‐Decker 5 1011 Lausanne / Switzerland Sandra.Borges-‐Da-‐[email protected] David, Stephen V, Ph.D. Oregon Health and Science University Oregon Hearing Research Center 3181 SW Sam Jackson Park Rd, MC L335A 97239 Portland, OR / United States of America [email protected] de Boer, Jessica, Dr. MRC Institute of Hearing Research Science Road, University Park NG7 2RD Nottingham / United Kingdom [email protected] de Hoz, Livia, Dr. Max Planck Institute for Experimental Medicine Neurogenetics Hermann-‐Rein Str. 3 37075 Goettingen / Germany [email protected] De Vos, Astrid, Ms. KU Leuven Neurosciences Herestraat 49, box 721 3000 Leuven / Belgium [email protected] Deike, Susann, Dr. Leibniz Institute for Neurobiology Special Lab Non-‐invasive Brain Imaging Brenneckestr. 6 39118 Magdeburg / Germany sdeike@lin-‐magdeburg.de Di Liberto, Giovanni, Mr. Trinity College Dublin Dunard Avenue 7 Dublin / Ireland [email protected]
Disbergen, Niels Robert, Mr. Maastricht University Cognitive Neuroscience Oxfordlaan 55 6229EV Maastricht / Netherlands [email protected] Duszyk, Anna, Ms. Leibniz Institute for Neurobiology Special Lab Non-‐Invasive Brain Imaging Brenneckestraße 6 39118 Magdeburg / Germany [email protected] Edeline, Jean-‐Marc, Dr. UMR CNRS 8195, Centre de Neurosciences Paris-‐Sud 91405 Orsay / France jean-‐marc.edeline@u-‐psud.fr Eggermont, Jos Jan, Ph.D. University of Calgary Psychology 2500 University Drive NW T2N 1N4 Calgary, AB / Canada [email protected] Elie, Julie E, Ph.D. University of California Berkeley Psychology & Helen Wills Neuroscience Institute 3210 Tolman 94100 Berkeley, CA / United States of America [email protected] Elyada, Yishai M., Ph.D. Hebrew University Neurobiology / Silberman building 2224 Givat Ram Campus 91904 Jerusalem / Israel [email protected] Finkl, Theresa, Ms. Sächsisches Cochlear Implant Centrum Universitätsklinikum Dresden Fetscherstr. 74, Haus 11 01307 Dresden / Germany theresa.finkl@uniklinikum-‐dresden.de Firzlaff, Uwe, Dr. Technische Universität München Lehrstuhl für Zoologie Liesel-‐Beckmann-‐Strasse 4 85350 Freising / Germany [email protected]
Flanagan, Sheila Anne, Dr. University of Cambridge Department of Psychology Downing Street CB2 3EB Cambridge / United Kingdom [email protected] Fleming, David, Mr. University of Glasgow Flat B, 271 Kelvindale Road G12 0QU Glasgow / United Kingdom [email protected] Formisano, Elia, Prof. Maastricht University Department of Cognitive Neuroscience, P.O. Box 616 6200MD Maastricht / Netherlands [email protected] Friedrich, Björn, Mr. Leibniz Institute for Neurobiology Systems Physiology of Learning Brenneckestr. 6 39118 Magdeburg / Germany bjoern.friedrich@lin-‐magdeburg.de Fritz, Jonathan, Ph.D. University of Maryland Center for Auditory and Acoustic Research Institute for Systems Research Neural Systems Lab, ISR, ECE, Room 2202, A.V. Williams Building 20742 College Park, MD / United States of America [email protected] Froemke, Robert, Dr. New York University School of Medicine Skirball Institute, Department of Otolaryngology 540 First Avenue, Skirball 5-‐9 10016 New York, NY / United States of America [email protected] Fukushima, Makoto, Ph.D. National Institutes of Health Laboratory of Neuropsychology, NIMH 49 Convent Drive, Building 49, Room 1B80 20892 Bethesda, MD / United States of America [email protected] Gaese, Bernhard, Dr. Goethe University Frankfurt Inst. Cell Biology and Neuroscience Max-‐von-‐Laue-‐Str. 13 60439 Frankfurt am Main / Germany [email protected]‐frankfurt.de
Gander, Phillip Evan, Ph.D. University of Iowa Department of Neurosurgery, 1800 JPP 200 Hawkins Drive 52242 Iowa City, IA / United States of America phillip-‐[email protected] Gandras, Katharina, Ms. Carl von Ossietzky University Oldenburg Department of Psychology Ammerländer Heerstraße 114-‐118 26129 Oldenburg / Germany katharina.gandras@uni-‐oldenburg.de Garcia-‐Lazaro, Haydee, Ms. Leibniz Institute for Neurobiology OvGU Magdeburg Fermersleber Weg 45E W402 39112 Magdeburg / Germany Haydee.Garcia-‐Lazaro@lin-‐magdeburg.de Gaucher, Quentin, Mr. CNPS UMR 8195 Université Paris-‐Sud, Bat 446 91405 Orsay / France quentin.gaucher@u-‐psud.fr Gentner, Timothy, Prof. UC San Diego Psychology 9500 Gilman Dr., MC 0109 92093 La Jolla, CA / United States of America [email protected] Ghazaleh, Naghmeh, Ms. École Polytechnique Fédérale de Lausanne (EPFL) Avenue du Grey 78 1018 Lausanne / Switzerland [email protected] Giordano, Bruno Lucio, Ph.D. University of Glasgow Institute of Neuroscience and Psychology 58 Hillhead Street G12 8QB Glasgow / United Kingdom [email protected] Giroud, Nathalie, Ms. University of Zurich Neuroplasticity and Learning in Normal Aging Andreasstrasse 15, Box 2 8050 Zurich / Switzerland [email protected]
Gold, Joshua Rodney, Mr. University of Oxford Physiology, Anatomy, and Genetics Merton College Merton Street OX1 4JD Oxford / United Kingdom [email protected] Golubeva, Yulia, Ms. Tomatis Center Moscow, children Zeleny 3a / 11 bld 1 111141 Moscow / Russia [email protected] Goossens, Tine, Ms. KU Leuven Neurosciences Herestraat 49 -‐ box 721 3000 Leuven / Belgium [email protected] Górska, Urszula, Ms. Jagiellonian University Psychophysiology Laboratory al. Mickiewicza 3 31-‐120 Kraków / Poland [email protected] Gourévitch, Boris, Mr. CNPS Lab, UMR8195 CNRS Université Paris-‐Sud Bâtiments 440-‐447 91405 Orsay / France [email protected] Griffiths, Timothy David, Prof. Newcastle University Institute of Neuroscience Medical School Framlington Place NE2 4HH Newcastle upon Tyne / United Kingdom [email protected] Grigutsch, Maren, Ms. Max Planck Institute for Human Cognitive and Brain Sciences Neuropsychology Stephanstr. 1a 04103 Leipzig / Germany [email protected] Güntensperger, Dominik, Mr. University of Zurich Bruggerstrasse 139 5400 Baden / Switzerland [email protected]
Hackett, Troy A., Dr. Vanderbilt University Hearing and Speech Sciences 465 21st Avenue South 37064 Nashville, TN / United States of America [email protected] Hajizadeh, Aida, Ms. Leibniz Institute for Neurobiology Special Lab Non-‐invasive Brain Imaging Brenneckestraße 6 39118 Magdeburg / Germany aida.hajizadeh@lin-‐magdeburg.de Hall, Amee J, Ms. University of Western Ontario Anatomy and Cell Biology 548 Platts Ln #23 N6G3H2 London, ON / Canada [email protected] Hämäläinen, Jarmo, Dr. University of Jyväskylä Psychology Ylistönmäentie 33, P.O. Box 35 40014 Jyväskylä / Finland [email protected] Hamilton, Liberty, Dr. University of California, San Francisco Neurological Surgery 675 Nelson Rising Lane 94158 San Francisco, CA / United States of America [email protected] Happel, Max, Ph.D. Leibniz Institute for Neurobiology Systems Physiology of Learning Brenneckestr. 6 39118 Magdeburg / Germany mhappel@lin-‐magdeburg.de He, Jufang, Prof. City University of Hong Kong Biomedical Sciences Tat Chee Avenue Kowloon, Hong Kong [email protected] Hechavarria, Julio, Mr. Goethe University, Frankfurt am Main Institut für Zellbiologie und Neurowissenschaft Max-‐von-‐Laue-‐Str. 13 60438 Frankfurt am Main / Germany [email protected]‐frankfurt.de
Heil, Peter, Dr. Leibniz Institute for Neurobiology Magdeburg Systems Physiology of Learning Brenneckestraße 6 39118 Magdeburg / Germany peter.heil@lin-‐magdeburg.de Henry, Molly J., Dr. Max Planck Institute for Human Cognitive and Brain Sciences Max Planck Research Group "Auditory Cognition" Stephanstrasse 1a 04103 Leipzig / Germany [email protected] Henschke, Julia U., Ms. Leibniz Institute for Neurobiology Systems Physiology of Learning Brenneckestraße 6 D-‐39118 Magdeburg / Germany Julia.henschke@lin-‐magdeburg.de Herrmann, Björn, Ph.D. Max Planck Institute for Human Cognitive and Brain Sciences Stephanstrasse 1a 04107 Leipzig / Germany [email protected] Herrmann, Christoph, Prof. Oldenburg University Experimental Psychology Ammerländer Heerstr. 114-‐118 26111 Oldenburg / Germany christoph.herrmann@uni-‐oldenburg.de Hessel, Horst, Dr. Cochlear Deutschland GmbH & Co. KG Karl Wiechert Allee 76 A 30625 Hannover / Germany [email protected] Hildebrandt, K. Jannis, Prof. Carl von Ossietzky University Oldenburg Department for Neuroscience Carl von Ossietzky Str. 9-‐11 26129 Oldenburg / Germany jannis.hildebrandt@uni-‐oldenburg.de Hitsuyu, Rie, Ms. Research Center for Advanced Science and Technology The University of Tokyo 4-‐6-‐1, Komaba, Meguro-‐ku 153-‐8904 Tokyo / Japan [email protected]‐tokyo.ac.jp
Homma, Natsumi, Ms. University of Oxford Physiology, Anatomy and Genetics Merton College Merton Street OX1 4JD Oxford / United Kingdom [email protected] Howard, Matthew Andrew, Dr. University of Iowa Hospitals and Neurosurgery 200 Hawkins Drive, 1823 JPP 52242 Iowa City, IA / United States of America matthew-‐[email protected] Huang, Ying, Dr. Leibniz Institute for Neurobiology Special Lab Primate Neurobiology Brenneckestraße 6 39118 Magdeburg / Germany Ying.Huang@lin-‐magdeburg.de Hubka, Peter, Ph.D. Hannover Medical School Institute of Audioneurotechnology Feodor-‐Lynen-‐Str. 35 30625 Hannover / Germany hubka.peter@mh-‐hannover.de Huetz, Chloé, Dr. UMR CNRS 8195 Centre de Neurosciences Paris Sud Batiment 446 Université Paris Sud 91405 Orsay / France chloe.huetz@u-‐psud.fr Irvine, Dexter Robert, Prof. Monash University and Bionics Institute School of Psychological Sciences Monash University 3800 Clayton, VIC / Australia [email protected] Iyengar, Soumya, Dr. National Brain Research Centre, Manesar, Systems Neuroscience Nainwal Mode NH-‐8 Haryana 122050 Gurgaon / India [email protected] Jäger, Katharina, Ms. Goethe University Frankfurt Max-‐von-‐Laue-‐Str. 13 60438 Frankfurt am Main / Germany [email protected]‐frankfurt.de
Jeschke, Marcus, Dr. University of Göttingen Medical Center InnerEarLab, Dept. of Otolaryngology Robert-‐Koch-‐Str. 40 37075 Goettingen / Germany [email protected]‐goettingen.de Johnsrude, Ingrid Suzanne, Dr. University of Western Ontario Psychology / School of Communication Sciences and Disorders Brain and Mind Institute, Natural Sciences Centre N6A 5B7 London, ON / Canada [email protected] Jones, Gareth Paul, Dr. UCL Ear Institute 332 Grays Inn Road WC1X 8EE London / United Kingdom [email protected] Kashirina, Svetlana, Ms. Tomatis Center Moscow, children Zeleny 3a/11 bld 1 111141 Moscow / Russia [email protected] Kasper, Johannes, Mr. Universität Frankfurt Schleswiger Straße 6 60435 Frankfurt / Germany [email protected] Kell, Christian, Dr. Goethe University Department of Neurology Schleusenweg 2-‐16 60528 Frankfurt / Germany [email protected]‐frankfurt.de Khouri, Leila, Ph.D. Hebrew University of Jerusalem Department of Neurobiology Nelken Lab 91904 Jerusalem / Israel [email protected] King, Andrew John, Prof. University of Oxford Physiology, Anatomy and Genetics Sherrington Building Parks Road OX1 3PT Oxford / United Kingdom [email protected]
Klinge-‐Strahl, Astrid, Dr. University of Oxford Department of Physiology Anatomy & Genetics Parks Road OX1 3PT Oxford / United Kingdom astrid.klinge-‐[email protected] Klump, Georg M., Prof. Cluster of Excellence "Hearing4all" Oldenburg University, School of Medicine and Health Sciences Animal Physiology and Behavior Group, Dept. of Neuroscience 26111 Oldenburg / Germany georg.klump@uni-‐oldenburg.de Knyazeva, Stanislava, Ms. Leibniz Institute for Neurobiology Special Lab Primate Neurobiology Brenneckestr. 6 39118 Magdeburg / Germany Stanislava.Knyazeva@lin-‐magdeburg.de Koehler, Seth D, Ph.D. Johns Hopkins University Biomedical Engineering 438 Old Trail Road 21212 Baltimore, MD / United States of America [email protected] Kok, Melanie Ann, Ms. University of Western Ontario Schulich School of Medicine and Dentistry 697 Sevilla Park Place N5Y 4H9 London, ON / Canada [email protected] Kolodziej, Angela, Dr. Leibniz Institute for Neurobiology Systems Physiology of Learning Brenneckestraße 6 39118 Magdeburg / Germany angela.kolodziej@lin-‐magdeburg.de Kompus, Kristiina, Ph.D. University of Bergen Jonas Lies vei 91 5009 Bergen / Norway [email protected]
König, Reinhard, Dr. Leibniz Institute for Neurobiology Special Lab Non-‐invasive Brain Imaging Brenneckestraße 6 39118 Magdeburg / Germany rkoenig@lin-‐magdeburg.de Kordowski, Paweł Marian, Mr. Leibniz Institute for Neurobiology Special Lab Non-‐Invasive Brain Imaging Brenneckestr. 6 39118 Magdeburg / Germany Pawel.Kordowski@lin-‐magdeburg.de Kössl, Manfred, Prof. University of Frankfurt Cell Biology & Neuroscience Max-‐von-‐Laue-‐Str. 13 60439 Frankfurt / Germany [email protected]‐frankfurt.de Kotz, Sonja A, Prof. School of Psychological Sciences University of Manchester Cognitive Neuroscience and Experimental Psychology Zochonis Building, Brunswick Street M13 9PL Manchester / United Kingdom [email protected] Kowalkowski, Victoria, Ms. MRC Institute of Hearing Research University Park NG7 2RD Nottingham / United Kingdom [email protected] Kraus, Nina, Ph.D. Northwestern University Communication Sciences Frances Searle Bldg 2240 Campus Drive 60208 Evanston, IL / United States of America [email protected] Kreitewolf, Jens, Mr. Max Planck Institute for Human Cognitive and Brain Sciences Department of Neuropsychology Stephanstr. 1a 04103 Leipzig / Germany [email protected]
Krumbholz, Katrin, Ph.D. MRC Institute of Hearing Research University Park NG7 2RD Nottingham / United Kingdom [email protected] Kurkela, Jari, Mr. University of Jyväskylä Department of Psychology Taitoniekantie 9 A 3030 40740 Jyväskylä / Finland [email protected] Lalor, Edmund, Dr. Trinity College Dublin Trinity College Dublin 2 Dublin / Ireland [email protected] Langers, Dave, Ph.D. University of Nottingham NIHR Nottingham Hearing Biomedical Research Unit Ropewalk House 113 The Ropewalk NG1 5DU Nottingham / United Kingdom [email protected] Lanting, Cris, Dr. University Medical Center Groningen Department of Otorhinolaryngology P.O. Box 30.001 9700RB Groningen / Netherlands [email protected] Latinus, Marianne, Ph.D. CNRS Institut de Neuroscience de la Timone 27 Bd Jean Moulin 13385 Marseille / France marianne.latinus@univ-‐amu.fr Lee, Sze Chim, Mr. Tübingen Hearing Research Centre Department of Otolaryngology Head and Neck Surgery Elfriede-‐Aulhorn-‐Strasse 5 72076 Tübingen / Germany [email protected] Leppänen, Paavo H.T., Prof. University of Jyväskylä Department of Psychology Ylistönmäentie 33 P.O. Box 35 40014 Jyväskylä / Finland [email protected]
Lewald, Jörg, Dr. Ruhr University Bochum Cognitive Psychology Universitätsstr. 150 44780 Bochum / Germany [email protected] Lippert, Michael Thomas, Dr. Leibniz Institute for Neurobiology Systems Physiology of Learning Brenneckestr. 6 39118 Magdeburg / Germany mlippert@lin-‐magdeburg.de Liu, Robert C., Prof. Emory University Biology 1510 Clifton Rd NE Rollins Research Center Rm 2006 30322 Atlanta, GA / United States of America [email protected] Lohvansuu, Kaisa, Ms. University of Jyväskylä Department of Psychology P.O. Box 35 FI-‐40014 University of Jyväskylä / Finland [email protected] Lomber, Stephen, Prof. University of Western Ontario Brain and Mind Institute 1151 Richmond Street North, SSC 9232 N6A 5C2 London, ON / Canada [email protected] Love, Scott A., Dr. Institut de Neurosciences de la Timone Campus Santé Timone 27 Bd Jean Moulin 13005 Marseille / France [email protected] Malone, Brian, Dr. UCSF Otolaryngology and Head and Neck Surgery 148 Locksley Avenue, Apt. 1 94122 San Francisco, CA / United States of America [email protected] Maor, Ido, Mr. The Hebrew University of Jerusalem Department of Neurobiology 90666 Kibbutz Kalia / Israel [email protected]
Marlin, Bianca, Ms. New York University School of Medicine 540 1st Ave, Skirball 5-‐9 10006 New York, NY / United States of America [email protected] Massoudi, Roohollah, Dr. Radboud University Donders Institute for Brain, Cognition and Behaviour Biophysics, room HG.00825 Heyendaalseweg 135 6525AJ Nijmegen / Netherlands [email protected] Matusz, Pawel J., Ph.D. Centre Hospitalier Universitaire Vaudois (CHUV) University of Lausanne Clinical Neurosciences Avenue de France, Studio 36 1004 Lausanne / Switzerland [email protected] May, Patrick J. C., Dr. Aalto University P.O. Box 12200 FI-‐00076 AALTO / Finland [email protected] Mehta, Anahita, Ms. University College London Ear Institute 332 Grays Inn Road WC1X 8EE London / United Kingdom [email protected] Merchant, Hugo, Prof. Instituto de Neurobiologia National University of Mexico Cognitive Neurobiology Boulevard Juriquilla No. 3001 Instituto de Neurobiología, UNAM campus Juriquilla 76230 Queretaro / Mexico [email protected] Meredith, M. Alex, Prof. Virginia Commonwealth University School of Medicine Anatomy and Neurobiology 1101 E. Marshall Street Sanger Hall Rm 12-‐067 23298-‐0709 Richmond, VA / United States of America [email protected]
Michalski, Nicolas, Dr. Institut Pasteur Neuroscience 25 rue du Dr. Roux 75015 Paris / France [email protected] Mintel, Josephine Therese, Ms. InnerEarLab Hals-‐Nasen-‐Ohrenheilkunde Universitätsmedizin Goettingen Weender Straße 48 37075 Göttingen / Germany [email protected]‐goettingen.de Molloy, Katharine, Ms. University College London Institute of Cognitive Neuroscience 26 Bedford Way WC1H 0DS London / United Kingdom [email protected] Mowery, Todd Michael, Ph.D. New York University Carnegie Mellon University 4 Washington Place, rm 809 10003 New York, NY / United States of America [email protected] Mukamel, Roy, Ph.D. Tel-‐Aviv University School of Psychological Sciences & Sagol School of Neuroscience 69978 Tel-‐Aviv / Israel [email protected] Murray, Micah M., Prof. University Hospital and University of Lausanne Department of Clinical Neuroscience and Department of Radiology CHUV Radiology, CIBM BH08.078 Rue du Bugnon 46 1011 Lausanne / Switzerland [email protected] Mylius, Judith, Ms. Leibniz Institute for Neurobiology Special Lab Primate Neurobiology Brenneckestraße 6 39118 Magdeburg / Germany Judith.Mylius@lin-‐magdeburg.de Nelken, Israel, Prof. Hebrew University Edmond J. Safra Campus Institute of Life Sciences 91904 Jerusalem / Israel [email protected]
Niekisch, Hartmut, Mr. Leibniz Institute for Neurobiology Systems Physiology of Learning and Memory Brenneckestraße 6 39118 Magdeburg / Germany Hartmut.Niekisch@lin-‐magdeburg.de Nodal, Fernando, Dr. Oxford University Physiology, Anatomy and Genetics Parks Road OX1 3PT Oxford / United Kingdom [email protected] Noesselt, Toemme, Prof. Otto-‐von-‐Guericke University Magdeburg Psychology Universitaetsplatz 1 39104 Magdeburg / Germany [email protected] Nolden, Sophie, Ms. University of Montreal Department of Psychology 6-‐4417 rue de la Roche H2J 3J2 Montreal, QC / Canada [email protected] Norman-‐Haignere, Samuel Victor, Mr. MIT Brain and Cognitive Sciences 52 Plymouth St, Apt 2 02141 Cambridge, MA / United States of America [email protected] Nourski, Kirill V., Dr. The University of Iowa Neurosurgery 200 Hawkins Dr. 1815 JCP 52242 Iowa City, IA / United States of America kirill-‐[email protected] Novak, Ondrej, Mr. Institute of Experimental Medicine AS CR Auditory Neuroscience Videnska 1083 14220 Prague / Czech Republic [email protected] Nozaradan, Sylvie, Ph.D. UCL Institute of Neuroscience (IoNS) 53 Mounier avenue 1200 Brussels / Belgium [email protected]
Occelli, Florian, Mr. CNRS, Centre de Neurosciences Paris Sud (UMR8195 CNRS) bât 440-‐447 Université Paris Sud 91405 Orsay / France florian.occelli@u-‐psud.fr Oeztan, Zeynep, Ms. Leibniz Institute for Neurobiology Special Lab Non-‐invasive Brain Imaging Brenneckestr. 6 39118 Magdeburg / Germany zoeztan@lin-‐magdeburg.de Ohl, Frank W., Prof. Leibniz Institute for Neurobiology Systems Physiology of Learning Brenneckestr. 6 D-‐39118 Magdeburg / Germany frank.ohl@lin-‐magdeburg.de Okamoto, Hidehiko, Dr. National Institute for Physiological Sciences Department of Integrative Physiology 38 Nishigo-‐Naka, Myodaiji 4448585 Okazaki / Japan [email protected] Ortiz, Michael, Mr. Max Planck Institute for Biological Cybernetics Physiology of Cognitive Processes Spemannstraße 38 72076 Tuebingen / Germany [email protected] Osmanski, Michael Scott, Dr. Johns Hopkins University, Biomedical Engineering 412 Traylor Building 720 Rutland Avenue 21205 Baltimore, MD / United States of America [email protected] O'Sullivan, James, Mr. Trinity College Dublin Trinity Center for Bioengineering Trinity College Dublin 2 Dublin / Ireland [email protected] Palmer, Alan R, Prof. MRC Institute of Hearing Research Science Road University Park NG7 2RD Nottingham / United Kingdom [email protected]
Panniello, Mariangela, Ms. University of Oxford Physiology, Anatomy and Genetics Lincoln College Turl Street OX1 3DR Oxford / United Kingdom [email protected] Pantev, Christo, Prof. Institute for Biomagnetism and Biosignalanalysis Malmedyweg 15 48149 Münster / Germany pantev@uni-‐muenster.de Paquette, Sebastien, Mr. University of Montreal Psychology 5582-‐16 Gatineau H3T1X7 Montreal, QC / Canada [email protected] Perrodin, Catherine, Ms. Max Planck Institute for Biological Cybernetics Physiology of Cognitive Processes Spemannstrasse 38 72076 Tuebingen / Germany [email protected] Petkov, Christopher I, Dr. Newcastle University Institute of Neuroscience Framlington Place Newcastle University Medical School NE2 4HH Newcastle upon Tyne / United Kingdom [email protected] Poeppel, David, Prof. New York University Department of Psychology 6 Washington Place 10003 New York, NY / United States of America [email protected] Poirier, Colline, Dr. Newcastle University Institute of Neuroscience Framlington Place NE2 4HH Newcastle-‐upon-‐Tyne / United Kingdom [email protected] Polterovich, Ana, Ms. Hebrew University of Jerusalem Neurobiology 28 Ben Yosef st. 69125 Tel Aviv / Israel [email protected]
Power, Alan, Dr. University of Cambridge Psychology Downing Street CB2 3EB Cambridge / United Kingdom [email protected] Rajendran, Vani Gurusamy, Ms. University of Oxford 116 Charles Street OX4 3AT Oxford / United Kingdom [email protected] Rauschecker, Josef, Prof. Georgetown University Neuroscience Georgetown University Medical Center 3970 Reservoir Rd., NW 20057 Washington, DC / United States of America [email protected] Reiche, Martin, Mr. Carl von Ossietzky University of Oldenburg Department of Psychology Küpkersweg 74 D-‐26129 Oldenburg / Germany martin.reiche@uni-‐oldenburg.de Renart, Alfonso, Mr. Champalimaud Foundation Champalimaud Neuroscience Programme Champalimaud Centre for the Unknown Avenida Brasilia s/n 1400-‐030 Lisbon / Portugal [email protected] Rhone, Ariane Elizabeth, Ph.D. The University of Iowa Neurosurgery 1825 JPP 52242 Iowa City, IA / United States of America ariane-‐[email protected] Riecke, Lars, Ph.D. Maastricht University Oxfordlaan 55 6200MD Maastricht / Netherlands [email protected] Rinne, Teemu, Ph.D. University of Helsinki Institute of Behavioural Sciences PO Box 8 00014 University of Helsinki / Finland [email protected]
Roberts, Larry Evan, Dr. McMaster University Psychology, Neuroscience and Behaviour 1280 Main Street West L8S 4K1 Hamilton, ON / Canada [email protected] Rufener, Katharina, Ms. University of Zurich Neuropsychology UFSP Dynamik Gesunden Alterns Andreasstrasse 15 / Box 2 8050 Zurich / Switzerland [email protected] Saldeitis, Katja, Ms. Leibniz Institute for Neurobiology Systems Physiology of Learning Brenneckestr. 6 39118 Magdeburg / Germany katja.saldeitis@lin-‐magdeburg.de Salminen, Nelli, Dr. Aalto University P.O. Box 12200 FI-‐00076 Aalto / Finland [email protected] Salvia, Emilie, Dr. University of Glasgow Institute of Neuroscience and Psychology 58 Hillhead Street G12 8QB Glasgow / United Kingdom [email protected] Sandmann, Pascale, Ph.D. Hannover Medical School Department of Neurology Feodor-‐Lynen-‐Str. 27 30625 Hannover / Germany sandmann.pascale@mh-‐hannover.de Sanfacon, Anne, Ms. Independent rue d'Oultremont, 79 1040 Brussels / Belgium [email protected] Schaefer, Markus, Mr. Goethe-‐Universität Frankfurt am Main Institut für Zellbiologie & Neurowissenschaft Max-‐von-‐Laue-‐Str. 13 60439 Frankfurt am Main / Germany [email protected]
Scheich, Henning, Prof. Leibniz Institute for Neurobiology Lifelong Learning Brenneckestr. 6 D-‐39118 Magdeburg / Germany henning.scheich@lin-‐magdeburg.de Schelinski, Stefanie, Ms. Max Planck Institute for Human Cognitive and Brain Sciences Max Planck Research Group "Neural mechanisms of human communication" Stephanstrasse 1A 04103 Leipzig / Germany [email protected] Schierholz, Irina, Ms. Hannover Medical School Department of Neurology Feodor-‐Lynen-‐Straße 27 30625 Hannover / Germany Schierholz.Irina@mh-‐hannover.de Schneider, Peter, Dr. University of Heidelberg Dept. of Neurology, INF 400 69120 Heidelberg / Germany [email protected] Schneider, David M, Ph.D. Duke University Department of Neurobiology 301C Bryan Research Building 412 Research Dr. 27710 Durham, NC / United States of America [email protected] Schnupp, Jan W.H., Prof. University of Oxford Physiology Sherrington Bldg OX1 3PT Oxford / United Kingdom [email protected] Schreiner, Christoph, Prof. University of California, San Francisco Otolaryngology 675 Nelson Rising Lane, Room 514C 94143-‐0444 San Francisco, CA / United States of America [email protected] Schröger, Erich, Prof. University of Leipzig Institute for Psychology Neumarkt 9-‐19 04109 Leipzig / Germany schroger@uni-‐leipzig.de
Scott, Brian Hayward, Dr. National Institute of Mental Health Laboratory of Neuropsychology 49 Convent Drive Room 1B80 20892 Bethesda, MD / United States of America [email protected] Selezneva, Elena, Dr. Leibniz Institute for Neurobiology Primate Neurobiology Brenneckestr. 6 39118 Magdeburg / Germany Elena.Selezneva@lin-‐magdeburg.de Shalev, Amos, Mr. Hebrew University Neurobiology Yehuda Halevi 125, apt 13 6527525 Tel Aviv / Israel [email protected] Shamma, Shihab A, Dr. Ecole Normale Superieure Department of Cognitive Studies 29 rue d'Ulm 75005 Paris / France [email protected] Sharma, Anu, Prof. University of Colorado at Boulder 2501 Kittredge Loop Drive SLHS 80309 Boulder, CO / United States of America [email protected] Shiramatsu, Tomoyo Isoguchi, Ph.D. The University of Tokyo Research Center for Advanced Science and Technology Building 3-‐South, 360 4-‐6-‐1, Komaba, Meguro-‐ku 153-‐8904 Tokyo / Japan [email protected]‐tokyo.ac.jp Slater, Heather, Ms. Newcastle University Institute of Neuroscience NE2 4HH Newcastle Upon Tyne / United Kingdom [email protected] Sohoglu, Ediz, Dr. University College London 332 Gray's Inn Road WC1X 8EE London / United Kingdom [email protected]
Sollini, Joseph A, Dr. Imperial College London Bioengineering South Kensington Campus SW16 6SR London / United Kingdom [email protected] Song, Wen-‐Jie, Prof. Kumamoto University Sensory and Cognitive Physiology Honjyo, Chuo Ku 860-‐8556 Kumamoto / Japan song@kumamoto-‐u.ac.jp Spustek, Tomasz, Mr. Leibniz Institute for Neurobiology Special Lab Non-‐invasive Brain Imaging Brenneckestraße 6 39118 Magdeburg / Germany [email protected] Stadler, Jörg, Dr. Leibniz Institute for Neurobiology Special Lab Non-‐invasive Brain Imaging Brenneckestr. 6 39118 Magdeburg / Germany Stadler@lin-‐magdeburg.de Stecker, George Christopher, Prof. Vanderbilt University Medical Center Hearing and Speech Sciences 1215 21st Ave South, Room 8310 37232 Nashville, TN / United States of America [email protected] Steinschneider, Mitchell, Dr. Albert Einstein College of Medicine Department of Neurology 1300 Morris Park Avenue Kennedy Center 322 10461 Bronx, NY / United States of America [email protected] Stolzberg, Daniel, Ph.D. University of Western Ontario Psychology 339 Ambleside Dr. N6G4Y2 London, ON / Canada [email protected] Stropahl, Maren, Ms. Carl von Ossietzky University Oldenburg Psychology Moltkestraße 7 26122 Oldenburg / Germany maren.stropahl@uni-‐oldenburg.de
Takagaki, Kentaroh, Dr. Leibniz Institute for Neurobiology Magdeburg Systems Physiology of Learning Brenneckestr. 6 39118 Magdeburg / Germany kentaroh.takagaki@lin-‐magdeburg.de Takahashi, Hirokazu, Ph.D. The University of Tokyo Research Center for Advanced Science and Technology 4-‐6-‐1 Komaba, Meguro-‐ku 153-‐8904 Tokyo / Japan [email protected]‐tokyo.ac.jp Tasaka, Gen-‐ichi, Ph.D. The Hebrew University of Jerusalem Department of Neurobiology The Edmond and Lily Safra Center for Brain Science (ELSC), Room 3-‐223 Givat Ram Campus 91904 Jerusalem / Israel [email protected] Tavano, Alessandro, Ph.D. University of Leipzig Institute of Psychology Neumarkt 9-‐19 04109 Leipzig / Germany tavano@uni-‐leipzig.de Teki, Sundeep, Dr. University College London Wellcome Trust Centre for Neuroimaging 12 Queen Square WC1N 3BG London / United Kingdom [email protected] Thomassen, Sabine, Ms. University of Oldenburg Department of Psychology Küpkersweg 74 26129 Oldenburg / Germany sabine.thomassen@uni-‐oldenburg.de Town, Stephen Michael, Ph.D. University College London Ear Institute 332 Gray's Inn Road WC1X 8EE London / United Kingdom [email protected] Treille, Avril, Ms. GIPSA-‐lab 11 rue des Mathématiques 38402 St Martin d'Hères / France Avril.Treille@gipsa-‐lab.grenoble-‐inp.fr
Vander Ghinst, Marc, Dr. Hôpital Erasme -‐ Université Libre de Bruxelles ENT 57 avenue du Pérou 1000 Brussels / Belgium [email protected] Vater, Marianne, Prof. Universität Potsdam Institut für Biochemie und Biologie, Allgemeine Zoologie Karl Liebknecht Str. 26 14476 Potsdam / Germany vater@uni-‐potsdam.de Vavatzanidis, Niki Katerina, Ms. Saxonian Cochlear Implant Center Uniklinikum Dresden Fetscherstr. 74 01307 Dresden / Germany niki.vavatzanidis@uniklinikum-‐dresden.de Vicario, David Sage, Prof. Rutgers University Psychology 760 West End Ave #16C 10025 New York, NY / United States of America [email protected] Vilain, Coriandre, Dr. GIPSA-‐Lab Speech & Cognition 1280 Av Centrale 38400 St Martin d'Hères / France coriandre.vilain@gipsa-‐lab.fr Vollmer, Maike, Dr. University Hospital Wuerzburg, Comprehensive Hearing Center Otolaryngology Josef-‐Schneider-‐Strasse 11 97080 Wuerzburg / Germany [email protected] von der Behrens, Wolfger, Dr. University of Zurich and ETH Zurich Institute of Neuroinformatics Winterthurerstrasse 190 8057 Zurich / Switzerland [email protected] von Kriegstein, Katharina, Prof. Max Planck Institute for Human Cognitive and Brain Sciences and Humboldt University Berlin Stephanstrasse 1A 04103 Leipzig / Germany [email protected]
Wagner, Luise, Ms. Uniklinikum Halle (Saale) HNO Ernst-‐Grube-‐Straße 40 06120 Halle (Saale) / Germany luise.wagner@uk-‐halle.de Walker, Kerry Marie May, Dr. University of Oxford Physiology, Anatomy & Genetics Sherrington Building Parks Road OX1 3PT Oxford / United Kingdom [email protected] Wang, Yunyan, Ph.D. Johns Hopkins University Biomedical Engineering Traylor 412 720 Rutland Ave 21205 Baltimore, MD / United States of America [email protected] Wang, Xiaoqin, Prof. Johns Hopkins University Department of Biomedical Engineering 720 Rutland Avenue Traylor 410 21205 Baltimore, MD / United States of America [email protected] Wanger, Tim, Dr. Leibniz Institute for Neurobiology Brenneckestr. 6 39118 Magdeburg / Germany tim.wanger@lin-‐magdeburg.de Washington, Stuart Dante, Ph.D. Georgetown University Medical Center Neurology 2122 Massachusetts Ave., N.W. 20008 Washington, DC / United States of America [email protected] Weise, Annekathrin, Dr. Universität Leipzig Institut für Psychologie Neumarkt 9-‐19 04109 Leipzig / Germany akweise@uni-‐leipzig.de Widmann, Andreas, Mr. University of Leipzig Cognitive and Biological Psychology Neumarkt 9-‐19 04109 Leipzig / Germany widmann@uni-‐leipzig.de
Wiegner, Armin, Mr. University Hospital Würzburg Comprehensive Hearing Center Josef-‐Schneider-‐Str. 11, B2 97080 Würzburg / Germany [email protected] Wiegrebe, Lutz, Prof. Division of Neurobiology Biologie II Grosshaderner Str. 2 82152 Martinsried / Germany [email protected] Willmore, Ben, Dr. University of Oxford Physiology Anatomy & Genetics Sherrington Building Parks Road OX1 3PT Oxford / United Kingdom [email protected] Winkler, István, Prof. Institute of Cognitive Neuroscience and Psychology Research Centre for Natural Sciences Hungarian Academy of Sciences Cognitive Neuroscience II Magyar Tudósok krt. 2 H-‐1117 Budapest / Hungary [email protected] Wong, Carmen, Ms. The University of Western Ontario Graduate Program in Neuroscience 1151 Richmond St N N6A5C2 London, ON / Canada [email protected] Wood, Katherine Charlotte, Ms. UCL The Ear Institute 332 Grays Inn Road WC1X 8EE London / United Kingdom [email protected] Woolnough, Oscar, Mr. MRC Institute of Hearing Research Science Road, University Park NG7 2RD Nottingham / United Kingdom [email protected]
Wyss, Christine, Ms. University Hospital of Psychiatry Zurich Department of Psychiatry, Psychotherapy and Psychosomatics Militärstrasse 8 Postfach 1930 8021 Zurich / Switzerland [email protected] Yarden, Tohar Sion, Mr. Hebrew University of Jerusalem Edmond and Lily Safra Center for Brain Sciences Silberman Institute of Life Sciences, Edmond Safra Campus, Givat Ram Department of Neurobiology 91904 Jerusalem / Israel [email protected] Yaron, Amit, Mr. Hebrew University Neurobiology 66a Herzl blvd. 9614122 Jerusalem / Israel [email protected] Yasin, Ifat, Dr. University College London 332 Grays Inn Road WC1X 8EE London / United Kingdom [email protected] Yuan, Kexin, Ph.D. Tsinghua University Biomedical Engineering School of Medicine, Room B218 100084 Beijing / China [email protected] Zador, Anthony, Prof. Cold Spring Harbor Laboratory Neuroscience One Bungtown Road 11724 Cold Spring Harbor, NY / United States of America [email protected] Zatorre, Robert, Prof. McGill University Montreal Neurological Institute 3801 University St H3A2B4 Montreal, QC / Canada [email protected]
Zelenka, Ondrej, Mr. Institute of Experimental Medicine AS CR Auditory Neuroscience Videnska 1083 14220 Prague / Czech Republic [email protected] Zhang, Jingting, Dr. University of Glasgow Institute of Neuroscience and Psychology 58 Hillhead Street G12 8QB Glasgow / United Kingdom [email protected]
Zoefel, Benedikt, Mr. Centre de Recherche Cerveau et Cognition (CerCo), CNRS Pavillon Baudot CHU Purpan 31052 Toulouse / France [email protected]‐tlse.fr
Sponsors
Acknowledgements
List of sponsors
We would like to thank all our sponsors for their collaboration and financial support.
Scientific and Research Institutions
Deutsche Forschungsgemeinschaft
Leibniz-‐Institut für Neurobiologie
Office of Naval Research Global
Center for Behavioral Brain Sciences
Companies
Philips Healthcare
MicroBrightField Europe
Thomas Recording
IAC Acoustics
Carl Zeiss Jena
Springer Science+Business Media
Tucker-‐Davis Technologies
Bürgschaftsbank Sachsen-‐Anhalt GmbH / Mittelständische Beteiligungsgesellschaft Sachsen-‐Anhalt mbH
MR-‐CONFON
Optoacoustics
Cambridge Research Systems
MES Forschungssysteme
Acknowledgements
The Organizing Committee is deeply indebted to the host of this conference, the Leibniz Institute for Neurobiology (LIN), Magdeburg.
Special thanks go to our sponsors for their financial, technical, and logistical assistance (see the list of sponsors on the preceding page).
Technical assistance was provided by Reinhard Blumenstein, Andreas Fügner, and Marco Dombach (all from LIN). The conference website was developed by Corona Labs and maintained by Torsten Stöter and Eike Budinger (both LIN). The registration software was provided by Harald Weinreich (ConfTool).
The many tasks that arose in organizing and coordinating the conference office were handled by numerous helping hands. Particular thanks go to Carola Kolouschek, who headed the conference organization. We would also like to thank Sarah Bresch, Monika Dobrowolny, Anja Gürke, Julia Henschke, Ines Kaiser, Kathrin Ohl, and Janet Stallmann (all LIN).
We are also very grateful to Julia Henschke (LIN) for her support and assistance during the preparation of these proceedings.
Many thanks go to Daniela Wiechert, representing the whole team of the "Herrenkrug Parkhotel", for their collaboration and hospitality.
Finally, we would like to thank all participants for coming to Magdeburg to join the conference and contribute to its sessions, thus filling both the scientific and the social events with life.
Magdeburg, September 2014
Notes