
  • 7/31/2019 Letter 03 En

    1/44

    COMENIUS 2.1 ACTION

    Qualification of educational staff

    working with hearing impaired children

    (QESWHIC)

    Study Letter 3

    George Tavartkiladze

    Audiological Assessment


    Contents

    Psychoacoustics ... 3
      Pure-tone audiometry ... 3
        Measurement of hearing ... 5
        Procedures for obtaining thresholds ... 7
        WHO severity scale of auditory impairment ... 8
        Masking ... 8
      Tests of cochlear function (suprathreshold tests) ... 10
        Loudness balance procedures ... 10
        The Short Increment Sensitivity Index (SISI) Test ... 11
        Uncomfortable loudness level (UCL) ... 12
      Speech audiometry ... 13
        Speech Recognition Threshold (SRT) ... 13
        Speech threshold materials ... 14
        Speech threshold procedures ... 14
        Assessment of supra-threshold speech recognition ... 14
        Speech Discrimination Test ... 15
        Speech materials for the paediatric population ... 15
        Masking ... 16
    Objective audiological tests ... 17
      Acoustic Immittance Measurement ... 17
        Immittance Test Battery ... 18
        Tympanometry ... 19
        Static Acoustic Immittance ... 20
        Acoustic Reflex Threshold ... 20
        Special Considerations in the Use of Immittance with Young Children ... 21
      Overview of auditory evoked responses and otoacoustic emissions ... 22
        What are Auditory Evoked Responses? ... 22
        Factors in the Measurement of AERs ... 24
        Electrocochleography (ECochG): General Description and Waveform Terminology ... 26
        Auditory Brainstem Response (ABR): General Description and Waveform Terminology ... 28
        Auditory Middle Latency Response (AMLR): General Description and Waveform Terminology ... 30
        Auditory Long Latency Response (ALLR): General Description and Waveform Terminology ... 31
        Threshold evaluation ... 31
      Advantages and limitations ... 32
        Electrocochleography ... 32
        Brain Stem Audiometry ... 33
        Cortical Response Audiometry (middle and late responses) ... 33
      Objective methods in the selection of candidates for cochlear implantation and speech processor adjustment ... 33
      Otoacoustic Emissions ... 34
        Measurement Parameters ... 35
      Conclusions ... 40
    Summary ... 41
    Glossary ... 41


    QESWHIC Project - Study Letter 3

    Audiological Assessment

    Psychoacoustics

    The study of behavioural responses to acoustic stimulation is referred to as psychoacoustics. Two primary means have evolved through which the psychoacoustician measures responses from the listener. The first, known generally as the discrimination procedure, is used to assess the smallest difference that allows a listener to discriminate between two stimulus conditions. The end result is an estimate of a threshold of a certain type. A tone of specified amplitude, frequency, and starting phase, for example, can be discriminated from the absence of such a signal. In this case one is measuring the absolute threshold of hearing. This threshold is often referred to as a detection threshold, because one is determining the stimulus parameters required to just detect the presence of a signal.

    Pure-tone audiometry

    Pure-tone audiometry is the basis of a hearing evaluation. With pure-tone audiometry, hearing thresholds are measured for pure tones at different test frequencies. The hearing threshold is typically defined as the lowest (softest) sound level at which a person detects the presence of a signal approximately 50% of the time.

    To determine audibility thresholds of clinical cases in a relatively short period of time, compromises must be made in managing the variables described above. Clinical tests are not usually conducted in anechoic environments. Under clinical conditions there is often variation due to subjective factors that affect behavioural responses. Unlike the subject in a psychoacoustic experiment, the responses of a client or patient in the audiological clinic are often inconsistent, because these listeners are not ordinarily trained to be highly competent and reliable observers.

    Fig.1. The audiogram and the associated symbol system.


    The equipment and technique must conform to practical requirements of cost effectiveness. The time necessary for test administration is an important factor. The procedure must be a reasonably simple task for the examiner, who must work quickly and consistently. Because administration time is often limited, attenuator steps of 5 dB are used. Threshold information at each frequency is then plotted on a graph known as an audiogram.

    The audiogram is a chart used to graphically record the hearing thresholds and other test results. Figure 1 shows an example of an audiogram and the associated symbol system recommended by the American Speech-Language-Hearing Association. The audiogram is shown in graphic form with the signal frequencies (in hertz, Hz) displayed on the x-axis and the hearing level (in decibels, dB) represented on the y-axis. The graph is designed in such a manner that 1 octave on the frequency scale is equal in size to 20 dB on the hearing level scale. The horizontal line at 0 dB hearing level (HL) represents normal hearing sensitivity for the average young adult. However, the human ear does not perceive sound equally well at all frequencies. The ear is most sensitive to sound in the intermediate-frequency region from 1000 to 4000 Hz and is less sensitive at both higher and lower frequencies. Greater sound pressure is needed to elicit a threshold response at 250 Hz than at 2000 Hz in normal ears. The audiometer is calibrated to correct for these differences in threshold sensitivity at various frequencies. Consequently, when the hearing level dial is set at zero for a given frequency, the signal is automatically presented at the sound pressure level required for the average young adult to hear that particular frequency.

    Fig.2. The relationship between decibels SPL (upper panel) and decibels HL (lower panel); circles - thresholds in normal hearing subject; triangles - thresholds in patients with high-frequency sensorineural hearing loss.

    Fig.2 depicts the relationship between the dB HL scale and the dB SPL (sound pressure level) scale of sound intensity.
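The conversion between the two scales can be sketched with a small lookup of reference-equivalent threshold levels. The values below are approximate figures for a supra-aural earphone and are given for illustration only; the exact numbers come from the calibration standard (e.g. ISO 389-1) for the transducer in use:

```python
# Approximate reference thresholds (RETSPL: the dB SPL corresponding to
# 0 dB HL) for a supra-aural earphone. Illustrative values only -- the
# exact figures depend on the transducer and calibration standard.
RETSPL = {250: 25.5, 500: 11.5, 1000: 7.0, 2000: 9.0, 4000: 9.5, 8000: 13.0}

def hl_to_spl(frequency_hz, level_hl):
    """Convert a hearing level (dB HL) to sound pressure level (dB SPL)."""
    return level_hl + RETSPL[frequency_hz]

# 0 dB HL demands more sound pressure at 250 Hz than at 2000 Hz:
print(hl_to_spl(250, 0))   # 25.5 dB SPL
print(hl_to_spl(2000, 0))  # 9.0 dB SPL
```

The audiometer's calibration performs exactly this frequency-dependent offset, which is why the normal-hearing curve is a straight line at 0 dB on the HL scale but a curve on the SPL scale.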

    Results plotted on the audiogram can be used to classify the extent of a hearing handicap. This information plays a valuable role in determining the habilitative or rehabilitative needs of a hearing-handicapped individual. Classification schemes using the pure-tone audiogram are based on the fact that there is a strong relationship between the thresholds for those frequencies known to be important for hearing speech (500, 1000 and 2000 Hz) and the lowest level at which speech can be recognised accurately 50% of the time. The latter measure is generally referred to as the speech recognition threshold, or SRT.

    Measurement of hearing

    In pure-tone audiometry thresholds are obtained by both air conduction and bone conduction. In air conduction measurement the different pure-tone stimuli are transmitted through earphones. The signal travels through the ear canal, across the middle ear cavity via the three ossicles to the cochlea, and on to the central auditory nervous system. Air conduction thresholds therefore reflect the integrity of the total peripheral auditory mechanism. When a person exhibits a hearing loss by air conduction, it is not possible to determine the location of the pathology along the auditory pathway. The hearing loss could be the result of (a) a problem in the outer or middle ear, (b) a difficulty at the level of the cochlea, (c) damage along the neural pathways to the brain, or (d) some combination of these. When air conduction measurements are combined with bone conduction measurements, however, it is possible to differentiate outer and middle ear problems (conductive hearing loss) from inner ear problems (sensorineural hearing loss).


    Fig.3. A: conductive hearing loss; B: sensorineural hearing loss; C: mixed hearing loss.

    In bone conduction measurement signals are transmitted via a bone vibrator that is usually placed on the mastoid prominence of the skull. The forehead is another common position for placement of the bone vibrator. A signal transduced through the vibrator causes the skull to vibrate. The pure tone directly stimulates the cochlea, which is embedded in the skull, effectively bypassing the outer ear and middle ear systems. If an individual exhibits a reduction in hearing sensitivity when tested by air conduction yet shows normal sensitivity by bone conduction, the impairment is probably due to an obstruction or blockage of the outer or middle ear. This condition is referred to as a conductive hearing loss (Fig.3a). The difference between the air conduction threshold and the bone conduction threshold at a given frequency is generally referred to as the air-bone gap. In Fig.3a, for example, at 250 Hz there is a 30-dB air-bone gap in the right ear and a 40-dB gap in the left ear. Conductive hearing loss is especially prevalent among pre-school and young school-age children who experience repeated episodes of otitis media (middle-ear infection) or secretory otitis media with effusion. Other examples of pathologic conditions that produce conductive hearing loss include congenital atresia of the external auditory canal with pathology of the middle ear, blockage or occlusion of the ear canal (possibly by cerumen, or earwax), perforation or scarring of the tympanic membrane, ossicular chain disruption, and otosclerosis.

    A sensorineural hearing loss is suggested when the air and bone conduction thresholds are approximately the same (±5 dB) at all frequencies (Fig.3b). Sensorineural hearing impairment may be either congenital or acquired. Acquired causes include noise exposure, ageing, inflammatory diseases (e.g., measles, mumps), and ototoxic drugs (e.g., aminoglycoside antibiotics, salicylates).

    The majority of sensorineural hearing losses are characterised by audiometric configurations that are flat, trough-shaped, or slightly to steeply sloping in the high frequencies. The latter is probably the most common configuration associated with acquired sensorineural hearing loss. When both air and bone conduction thresholds are reduced in sensitivity, but bone conduction yields better results than air conduction, the term mixed hearing loss is used (Fig.3c).
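The classification logic described above (an air-bone gap for conductive loss, matched air and bone thresholds for sensorineural loss, both combined for mixed loss) can be sketched for a single frequency. The cut-off values below are illustrative assumptions; clinically the judgement is made over the whole audiogram, not at one point:

```python
def classify_loss(air_hl, bone_hl, normal_limit=20, gap_limit=10):
    """Rough type-of-loss classification at one frequency from air- and
    bone-conduction thresholds (dB HL). The 20-dB normal limit and the
    10-dB gap criterion are illustrative assumptions.
    """
    air_bone_gap = air_hl - bone_hl
    if air_hl <= normal_limit:
        return "normal"
    if air_bone_gap >= gap_limit:
        # outer/middle ear involvement; cochlea spared only if bone is normal
        return "conductive" if bone_hl <= normal_limit else "mixed"
    return "sensorineural"

print(classify_loss(air_hl=45, bone_hl=10))  # conductive (35-dB air-bone gap)
print(classify_loss(air_hl=50, bone_hl=50))  # sensorineural (no gap)
print(classify_loss(air_hl=70, bone_hl=40))  # mixed (gap plus bone loss)
```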


    Procedures for obtaining thresholds

    The hearing examination is generally conducted in a sound-treated room where noise levels are kept to a minimum and do not interfere with or influence the hearing test results. Pure-tone audiometry begins with air conduction measurements at octave intervals ranging from 125 to 8000 Hz; these are followed by bone conduction measurements at octave intervals ranging from 250 to 6000 Hz. The initial step in measuring threshold involves instructing the patient as to the listening and responding procedures.

    Because 1000 Hz is considered the frequency most easily heard under headphones, and the threshold at this frequency is the most reliable, it is used as the initial frequency and is administered first to the better ear. The right ear is tested first if the patient indicates that hearing sensitivity is the same in both ears.
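The threshold search itself is commonly a down-10/up-5 bracketing procedure (in the spirit of the modified Hughson-Westlake technique). The sketch below is a simplified illustration of such a search with the listener modelled as a yes/no function; the function name and listener model are ours, and real protocols add instructions, familiarisation and retest rules:

```python
def find_threshold(responds, start=40, floor=-10, ceiling=120):
    """Simplified down-10/up-5 threshold search (dB HL, 5-dB steps).

    `responds(level)` models the listener: True when the tone is heard.
    Threshold is taken as the lowest level producing responses on two
    ascending presentations, a common clinical criterion.
    """
    level = start
    ascending = False
    hits = {}                                # responses on ascending runs
    while True:
        if responds(level):
            if ascending:
                hits[level] = hits.get(level, 0) + 1
                if hits[level] >= 2:
                    return level
            if level <= floor:               # heard at the dial's minimum
                return floor
            level = max(floor, level - 10)   # heard: step down 10 dB
            ascending = False
        else:
            if level >= ceiling:             # no response at the maximum
                return None
            level = min(ceiling, level + 5)  # not heard: step up 5 dB
            ascending = True

# A listener who hears everything at or above 25 dB HL:
print(find_threshold(lambda level: level >= 25))  # 25
```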

    Factors that can influence threshold

    To ensure accurate threshold measurement, it is important to take every precaution to eliminate extraneous variables that can influence the threshold test. The following factors are known to be significant:

    - proper maintenance and calibration of the audiometer;
    - the test environment;
    - earphone placement;
    - placement of the bone vibrator.

    Procedures for young children

    The procedures for pure-tone audiometry previously described are those recommended for older children and adults. Considerable modification of these techniques is required for testing the hearing of young children.

    o Testing the younger infant, aged 2 months to 2 years. A child of this age is somewhat easier to evaluate than a new-born, because the child's increased maturity results in more sophisticated and reliable responses. By approximately 3 months of age, attending responses are elicited easily, and the infant is able to determine the location of the sound source in the horizontal plane. The nature of a positive response varies with the infant's age. Some type of visual reinforcement is used to train the infant in the task and to reward the infant for responding appropriately.

    The nature of the signal also varies. Some clinicians favour the use of various noise-makers, such as a squeeze toy, bell, rattle, or spoon in a cup. Although these signals are considered favourable in that they maintain a certain amount of interest on the part of the child and appear to be effective for eliciting arousal responses in young children or infants, they have inappropriate frequency characteristics. Most noise-makers contain energy at several frequencies. This makes it difficult to define the way in which the hearing loss varies with frequency and, in many cases, to determine if a hearing loss is present.

    The responses during the early months are typically arousal- or startle-oriented, whereas more sophisticated sound localisation responses, such as head turns or eye movements in the direction of the loudspeaker producing the sound, occur with the older infants.

  • 7/31/2019 Letter 03 En

    8/44

    QESWHIC Project - Study Letter 3

    Audiological Assessment8

    The use of a visual reinforcer in the measurement of hearing for very young infants is extremely important. Delivery of a visual stimulus following the appropriate sound-localisation response serves to enhance the response behaviour and leads to a more accurate threshold measurement. This technique is sometimes referred to as the conditioned orientation reflex or visual reinforcement audiometry. Visual reinforcers range from a simple blinking light to a more complex animated toy animal.

    There are limitations to sound-field audiometry performed with young infants. The most obvious limitations are: (1) the inability to obtain thresholds from each ear separately, (2) the inability to obtain bone conduction thresholds, and (3) failure to obtain responses from young children who have severe or profound hearing losses.

    o Testing children 2 to 5 years old. At this age the child has matured sufficiently to permit the use of more structured procedures in the hearing evaluation. The techniques used most frequently are conditioned play audiometry, tangible reinforcement operant conditioning audiometry (TROCA) or visual reinforcement operant conditioning audiometry (VROCA), and conventional (hand-raising) audiometry.

    WHO severity scale of auditory impairment

    (determined by the average hearing threshold level measured in dB for pure tones of 500, 1000, 2000 and 4000 Hz)

    Mild hearing loss             26-40 dB
    Moderate hearing loss         41-55 dB
    Severe hearing impairment     56-70 dB
    Profound hearing impairment   71-90 dB
    Deafness                      91 dB and greater
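A sketch of this scale in code, mapping the four-frequency pure-tone average onto the categories above (the function name and the "no impairment" label for averages of 25 dB or better are our additions, since the quoted scale starts at 26 dB):

```python
def who_grade(pta_db):
    """Map the 500/1000/2000/4000 Hz pure-tone average (dB HL) onto the
    WHO severity scale quoted above."""
    if pta_db <= 25:
        return "no impairment"          # below the quoted scale
    if pta_db <= 40:
        return "mild hearing loss"
    if pta_db <= 55:
        return "moderate hearing loss"
    if pta_db <= 70:
        return "severe hearing impairment"
    if pta_db <= 90:
        return "profound hearing impairment"
    return "deafness"

# Hypothetical thresholds at the four index frequencies:
thresholds = {500: 45, 1000: 50, 2000: 60, 4000: 65}
pta = sum(thresholds.values()) / len(thresholds)
print(pta, who_grade(pta))  # 55.0 moderate hearing loss
```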

    Masking

    When hearing is assessed under earphones or with a bone vibrator, it is not always the case that the ear the clinician desires to test is the one that is stimulated. Sometimes the other ear, referred to simply as the non-test ear, will be the ear stimulated by sound. This situation arises because the two ears are not completely isolated from one another. Air-conducted sound delivered to the test ear through conventional earphones mounted in typical ear cushions is decreased, or attenuated, by approximately 40 dB before stimulating the other ear via skull vibration. The value of 40 dB for inter-aural (between-ear) attenuation is actually a minimum value. The amount of inter-aural attenuation for air conduction varies with the frequency of the test signal but is always at least 40 dB. The minimum inter-aural attenuation value for bone conduction is 0 dB. Thus, a bone vibrator placed on the mastoid process behind the left ear vibrates the entire skull, such that the effective stimulation of the right ear is essentially equivalent to that of the left ear. It is impossible, therefore, when testing by bone conduction, to discern whether the right ear or the left ear is responding without some way of eliminating the participation of the non-test ear.

    Masking enables the clinician to eliminate the non-test ear from participation in the measurement of hearing thresholds for the test ear. Essentially, the audiologist introduces a sufficient amount of masking noise into the non-test ear so as to make any sound crossing the skull from the test ear inaudible.

    There are two simple rules the audiologist follows in deciding whether to mask the non-test ear; one applies to air conduction testing and the other to bone conduction measurements. Regarding air conduction thresholds, masking is required at a given frequency if the air conduction threshold of the test ear exceeds the bone conduction threshold of the non-test ear by more than 35 dB. Recall that the bone conduction threshold may be equivalent to or better than the air conduction threshold, but never poorer. Thus, if one observes a 50-dB difference between the air conduction thresholds of the two ears at a particular frequency, the difference will be at least that great when the air conduction threshold of the test ear is compared with the bone conduction threshold of the non-test ear, which is the comparison made in deciding whether to mask for air conduction. The following example will clarify this.

    The patient indicates that hearing is better in the right ear, and air conduction testing is initiated with that ear. A threshold of 10 dB HL is observed at 1000 Hz. Following completion of testing at other frequencies in the right ear, the left ear is tested. An air conduction threshold of 50 dB HL is observed at 1000 Hz. Is this an accurate indication of the hearing loss at 1000 Hz in the left ear? The audiologist can't be sure. The air conduction threshold for the right ear is 10 dB HL and indicates that the bone conduction threshold for that ear at 1000 Hz is no greater than this value. Thus, there is at least a 40-dB difference between the air conduction threshold of the test ear at 1000 Hz (50 dB HL) and the bone conduction threshold of the non-test ear (10 dB HL). If the skull only attenuates the air-conducted sound delivered to the test ear by 40 dB, then a signal level of approximately 10 dB HL reaches the non-test ear. The level of the crossed-over signal approximates the bone conduction threshold of the non-test ear and may, therefore, be detected by that ear. Now assume that a masking noise is introduced into the non-test ear to raise that ear's threshold to 30 dB HL. If the air conduction threshold of the test ear remains at 50-55 dB HL, we may safely conclude that the observed threshold provides an accurate indication of the hearing loss in that ear. Why? Because if the listener was actually using the non-test ear to hear the sound presented to the test ear, making the hearing threshold 20 dB worse in the non-test ear (from 10 to 30 dB HL) would also shift the threshold in the test ear by 20 dB (from 50 to 70 dB HL).

    When testing hearing by bone conduction, masking is needed whenever the difference between the air conduction threshold of the test ear and the bone conduction threshold of the test ear at the same frequency (the air-bone gap) is 10 dB or more. Because the inter-aural attenuation for bone conduction testing is 0 dB, the bone conduction threshold is always determined by the ear having the better bone conduction hearing. Consider the following example. Air conduction thresholds obtained from both ears reveal a hearing loss of 30 dB across all frequencies in the left ear, whereas thresholds of 0 dB HL are obtained at all frequencies in the right ear. Bone conduction testing of each ear indicates thresholds of 0 dB HL at all frequencies. Is the 30-dB air-bone gap observed in the left ear a real indication of a conductive hearing loss in that ear? Without the use of masking in the measurement of the bone conduction thresholds of the left ear, the clinician can't answer that question confidently. We know that the bone conduction thresholds of the non-test (right) ear will be less than or equal to the air conduction threshold of that ear (0 dB HL). Thus, when we observe this same bone conduction threshold when testing the test (left) ear, either of two explanations is possible. First, the bone conduction threshold could be providing a valid indication of the bone conduction hearing in that ear. This would mean that a conductive hearing loss is present in that ear. Second, the bone conduction threshold of the test ear could actually be poorer than 0 dB HL, and could simply be reflecting the response of the better ear (the non-test ear with a threshold of 0 dB HL). By masking the non-test ear, we can decide between these two explanations for the observed bone conduction threshold of the test ear. If the bone conduction threshold of the test ear remains unchanged when sufficient masking noise is introduced into the non-test ear, we may safely conclude that it provides an accurate indication of bone conduction hearing in that ear.

    As we have seen, the need to mask the non-test ear and the rules for when to mask are relatively straightforward. A variety of masking procedures have been developed to determine how much masking is sufficient.
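The two when-to-mask rules can be summarised as two small predicates. The 35-dB criterion for air conduction and the 10-dB air-bone-gap criterion for bone conduction are taken from the text above; treat this as a sketch of the decision, not a full clinical masking protocol:

```python
def need_masking_air(ac_test, bc_non_test, criterion=35):
    """Air-conduction rule: mask when the test ear's air conduction
    threshold exceeds the non-test ear's bone conduction threshold by
    more than 35 dB (the minimum inter-aural attenuation, with margin)."""
    return ac_test - bc_non_test > criterion

def need_masking_bone(ac_test, bc_test, min_gap=10):
    """Bone-conduction rule: mask when the test ear shows an air-bone
    gap of 10 dB or more (inter-aural attenuation for bone is ~0 dB)."""
    return ac_test - bc_test >= min_gap

# The worked example above: left AC 50 dB HL, right BC no worse than 10 dB HL.
print(need_masking_air(ac_test=50, bc_non_test=10))  # True
# A 30-dB air-bone gap always calls for masking in bone conduction testing.
print(need_masking_bone(ac_test=30, bc_test=0))      # True
```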

    Tests of cochlear function (Suprathreshold tests)

    Two procedures sensitive to cochlear function that have been used in diagnostic test batteries are loudness balance techniques and the short increment sensitivity index (SISI). Loudness balance techniques were first developed by Fowler (1936) for comparing loudness growth in a normal versus an abnormal ear. Jerger et al. (1959) developed the SISI test as an outgrowth of studies of the difference limen for intensity (DLI) (Jerger, 1952, 1953).

    Loudness balance procedures

    There has been interest in and research on presumed abnormal loudness growth for more than 60 years. Pohlman and Kranz (1924) and Fowler (1928) first commented on abnormal loudness growth in impaired ears, while Fowler provided the label "recruitment" in 1937. Recruitment is defined as an abnormal growth of loudness for signals at supra-threshold intensities. Consider an individual with a threshold at 1000 Hz of 5 dB hearing level (HL) in one ear and 45 dB HL in the other. At 5 and 45 dB, respectively, the tones are perceived as equally loud, as both are at threshold. If the level was increased to 70 dB HL (65 dB SL) in the normal ear and a level of 70 dB HL (25 dB SL) in the poor ear was judged equally loud, then an abnormal growth of loudness, or recruitment, has been shown. Only a 25-dB increase in intensity over threshold was needed in the poor ear to sound as loud as a 65-dB increase in intensity over threshold in the normal ear. Thus, the poor ear showed a rapid rise in loudness growth.

    The previous example illustrates one of the two most commonly used loudness balance procedures, the alternate binaural loudness balance or ABLB. Fowler's (1936) ABLB technique is for unilateral losses. Because most clients present with bilateral losses, Reger (1936) described the monaural loudness balance (MLB) technique, which is suitable for such cases. The essence of either procedure requires loudness balancing between a frequency within normal limits (25 dB HL or better) and one showing a loss. If loudness balancing is done with both frequencies having a loss, the test results might be confounded (e.g., a finding of "no recruitment" might reflect recruitment in both ears).


    ABLB compares loudness growth between the same frequencies for the two ears. MLB compares loudness growth between two different frequencies in the same ear. For either procedure, a series of loudness judgements is made at different levels. For the ABLB, the intensity is held constant in one ear while intensity is varied in the other until the listener judges both signals to be of equal loudness. The ear with the constant or fixed intensity is termed the "reference ear", because loudness judgements are matched to this ear. The signals are rapidly alternated from ear to ear in ABLB and are alternated from frequency to frequency in MLB. Loudness balance judgements are made at several levels of fixed intensity in the reference ear.

    The client's task is to state whether the variable tone is "softer than," "louder than," or "equal" in loudness to the reference tone. To ensure that the client understands the task and to maximise the validity and reliability of results, a bracketing procedure is used. Initial judgements should be based on a variable tone that is clearly perceived as "louder" than the reference tone, whereas the next judgement should be for a tone that is likely to be much "softer" than the reference. Further balances follow, with the intensity of the variable tone approaching the presumed equal loudness point. The client is told that he will hear two tones, one constant in loudness and one variable. Judgements of loudness are to be made only from the variable ear (or tone) to the reference. It is suggested that the client listen to the constant signal in the reference ear at each test level for a few seconds before the variable tone is presented. This helps the listener to judge loudness, because of a better sense of the reference signal as compared to the variable one. The client is cautioned to pay attention only to loudness changes and to ignore pitch differences. This applies to MLB, or to ABLB in which the same tone in each ear may sound different because of diplacusis.

    Since those early studies, ABLB and MLB have been used in test batteries to differentiate cochlear from retrocochlear disorders. In most articles attention has centred on unilateral losses and, consequently, on the use of ABLB. In most reports the majority of retrocochlear problems have been eighth-nerve tumours.

    ABLB studies have consistently shown the expected recruitment in cochlear cases more often than they have shown the expected absence of recruitment in retrocochlear disorder. Recruitment is thus very likely in ears with cochlear disorders. However, attention should be paid to "no recruitment" results in cochlear ears, a false-positive finding that could wrongly suggest retrocochlear involvement: previous research has cited recruitment absence in as many as 15 to 27% of cases with cochlear disorder.

    "No recruitment" in a sensorineural case suggests no cochlear pathology, which tends to sup-

    port retrocochlear findings. However, the relative frequency of the expected "no recruitment" or

    "recruitment" in retrocochlear lesions varies considerably.

    The Short Increment Sensitivity Index (SISI) Test

    As originally described by Jerger et al. (1959), the SISI test consisted of a steady or carrier tone presented at 20 dB SL re: pure-tone threshold, with increments of intensity superimposed every 5 seconds. The client was to report whenever an increase in intensity was perceived. The test began with several 5-dB increment presentations, because these are easily heard by most clients. After this training period, twenty 1-dB test increments were presented, the response to each 1-dB increment being worth 5%. Perception of most of the 1-dB increments (a high or positive SISI score) was noted as the hallmark of cochlear lesions, with few (a low or negative SISI score) or no increments heard in normal ears or with other sites of auditory dysfunction.


    Relative to SISI results (Jerger et al., 1959), scores in the range of 0 to 20% were termed "negative" or low SISI scores, likely to be seen in normal ears, conductive losses, or VIIIth-nerve problems, whereas scores of 25 to 65% were "questionable". Scores of 70 to 100% were termed "positive" or high SISI scores, characteristically expected in cases of cochlear dysfunction. Scores on the SISI test may also be related to test frequency: they increase with frequency from 250 to 4000 Hz.
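The scoring arithmetic just described is simple enough to sketch. The following illustrative Python functions (the names are mine, not from the source) convert the number of heard 1-dB increments into a SISI score and classify it using the ranges of Jerger et al. (1959) quoted above.

```python
def sisi_score(increments_heard, total_increments=20):
    """Each of the twenty 1-dB test increments is worth 5%."""
    if not 0 <= increments_heard <= total_increments:
        raise ValueError("increments_heard out of range")
    return increments_heard * (100 // total_increments)

def classify_sisi(score):
    """Classify a SISI score per Jerger et al. (1959)."""
    if score <= 20:
        return "negative (normal, conductive, or VIIIth-nerve problem)"
    if score < 70:
        return "questionable"
    return "positive (suggests cochlear dysfunction)"
```

For example, a client who hears 18 of the 20 increments scores 90%, a positive result pointing toward a cochlear site.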

    Uncomfortable loudness level (UCL)

    UCL determination begins with air-conduction measurements at octave intervals from 250 to 8000 Hz; the intensity is slowly increased from the threshold level until the first signs of uncomfortable loudness. Under no circumstances is the intensity increased to the point of pain. In normal subjects, and in patients with sensorineural hearing loss accompanied by recruitment, UCL levels are close to 80-90 dB HL. It should be noted that in patients with sensorineural hearing loss and elevated thresholds, the dynamic range of hearing (the difference between the UCL and the hearing threshold) is narrowed. In patients with conductive hearing loss, UCL thresholds correspond to 110 dB HL or more, or cannot be reached at all. This test has minimal value for the diagnosis of retrocochlear impairment.
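The narrowing of the dynamic range described above can be illustrated with a one-line calculation (the function name and example values are illustrative, not from the source):

```python
def dynamic_range(ucl_db_hl, threshold_db_hl):
    """Dynamic range of hearing = UCL minus hearing threshold (dB)."""
    return ucl_db_hl - threshold_db_hl

# Normal ear: threshold ~0 dB HL, UCL ~85 dB HL -> range ~85 dB.
# Recruiting sensorineural ear: threshold 60 dB HL but UCL still
# ~85 dB HL -> range narrowed to ~25 dB.
```

This narrowed range is exactly what makes hearing aid fitting difficult in recruiting ears: amplification must fit conversational speech into a much smaller window between audibility and discomfort.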

    Concluding remarks

    From this review of the loudness balance, SISI and UCL tests it is evident that these measures are quite sensitive to cochlear disorders but show variable results relative to retrocochlear lesions; the latter were reported as auditory nerve disorders in most cases. Several factors would appear to relegate these tests to a secondary role in the differential diagnostic test battery. Acoustic reflex (AR) measures and the auditory brainstem response (ABR) (see below) show greater sensitivity to auditory nerve disorder than either the loudness balance or the SISI tests.


    Speech audiometry

    Although pure-tone thresholds tell us much about the function of the auditory system, they do not provide the audiologist with a precise measure of a person's ability to understand speech. Speech audiometry, a technique designed to assess a person's ability to hear and understand speech, has become a basic tool in the overall assessment of hearing handicap.

    Most diagnostic audiometers include the appropriate circuitry for both pure-tone measurements and speech audiometric evaluation. The speech circuitry portion of the audiometer consists of a calibrated amplifying system with a variety of options for the input and output of speech signals. Most commonly, the speech audiometer accommodates inputs for a microphone, a tape recorder, and a compact-disc (CD) player, so that speech testing can be conducted using live speech (microphone) or pre-recorded materials. The output of the speech signal may be directed to earphones, a bone vibrator, or a loudspeaker situated in the test room. Many diagnostic audiometers employ a dual-channel system that enables the examiner to use two inputs at the same time and to direct these signals to any of the available outputs.

    To ensure valid and reliable measurements, the circuitry of the speech audiometer, just as with pure-tone audiometry, must be calibrated on a regular basis. The acoustic output of the speech audiometer is calibrated in decibels relative to normal hearing threshold (dB HL). According to the most recent standards for speech audiometers, 0 dB HL is equivalent to 20 dB SPL when measured through earphones. There is a difference of approximately 7.5 dB between earphone and sound-field threshold measurements for speech signals; consequently, 0 dB HL in the sound field corresponds to an output from a loudspeaker of approximately 12 dB SPL. This difference in audiometric zero for earphones and loudspeakers allows us to obtain equivalent speech thresholds in decibels HL under the two listening conditions.
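The dB HL to dB SPL relationship above can be sketched as a simple lookup-and-offset. The reference values come from the approximate figures quoted in the text; the dictionary and function names are illustrative assumptions.

```python
# Approximate speech reference levels quoted in the text:
# 0 dB HL = 20 dB SPL under earphones, ~12 dB SPL in the sound field.
SPEECH_REFERENCE_SPL = {"earphone": 20.0, "sound_field": 12.0}

def hl_to_spl(level_db_hl, transducer="earphone"):
    """Convert a speech level in dB HL to dB SPL for a transducer."""
    return level_db_hl + SPEECH_REFERENCE_SPL[transducer]
```

Because each transducer has its own audiometric zero, a 30 dB HL speech signal is physically ~50 dB SPL under earphones but only ~42 dB SPL from a loudspeaker, yet both represent the same hearing level for the listener.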

    Speech Recognition Threshold (SRT)

    The Speech Recognition Threshold (SRT) is the lowest intensity at which an individual can identify simple speech materials approximately 50% of the time. It is included in the basic hearing evaluation for two specific reasons. First, it serves as an excellent check on the validity of pure-tone thresholds: there is a strong correlation between the SRT and the average of the pure-tone thresholds obtained at the frequencies known to be important for speech (i.e., 500, 1000, and 2000 Hz). Large discrepancies between the SRT and this pure-tone average (PTA) may suggest a functional, or non-organic, hearing loss. A second important reason for including the SRT in the hearing evaluation is that it provides a basis for selecting the sound level at which a patient's speech recognition abilities should be tested. Finally, besides its use in the basic hearing evaluation, the SRT is also useful in the determination of functional gain in the hearing aid evaluation process.
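The SRT/PTA cross-check can be expressed in a few lines. The three-frequency PTA follows the text; the 10-dB tolerance used here is an illustrative choice for "large discrepancy", not a figure from the source.

```python
def pure_tone_average(t500, t1000, t2000):
    """Three-frequency PTA at 500, 1000 and 2000 Hz (dB HL)."""
    return (t500 + t1000 + t2000) / 3

def srt_pta_agreement(srt_db_hl, pta_db_hl, tolerance_db=10):
    """True if SRT and PTA agree within the chosen tolerance.

    A large discrepancy may suggest a functional (non-organic)
    hearing loss; the 10-dB cut-off is an illustrative assumption.
    """
    return abs(srt_db_hl - pta_db_hl) <= tolerance_db
```

For instance, an SRT of 20 dB HL against a PTA of 45 dB HL would be flagged for further investigation, whereas an SRT of 45 dB HL against the same PTA would not.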


    Speech threshold materials

    The most popular test materials used by audiologists to measure the SRT are spondaic words, taken from the Central Institute for the Deaf (CID) Auditory Test W-1. Spondaic words are two-syllable words spoken with equal stress on each syllable.

    Speech threshold procedures

    Several different procedures have been advocated for determining the SRT using spondaic words. The procedure recommended by the Committee on Audiometric Guidelines of the American Speech-Language-Hearing Association (ASHA) uses most of the 36 spondaic words from the CID W-1 word list. Testing begins by first presenting all of the spondaic words to the client at a comfortable level; this familiarises the patient with the words to be used in the measurement of threshold.

    The actual test procedure is a descending threshold technique that consists of two phases: a preliminary phase and a test phase. In the preliminary phase, the first spondaic word is presented at 30-40 dB above the estimated threshold (pure-tone average), or at 50 dB HL if an estimate cannot be obtained. If the client fails to respond at this level, or responds incorrectly, the starting level is increased by 20 dB; this is continued until a correct response is obtained. A correct response is then followed by 10-dB decreases in level until the response changes to an incorrect one. At this level, a second spondee is presented. The level is then decreased in 10-dB steps until two consecutive words have been missed at the same level. The level is then increased 10 dB above the level at which the two consecutive spondees were missed; this represents the starting level for the next phase, the test phase.

    During the test phase, two spondaic words are presented at the starting level and at each successive 2-dB decrement in level. If five of the first six spondees are repeated correctly, the procedure continues until five of the last six spondees are missed. If five of the first six stimuli are not repeated correctly, the starting level is increased by 4-10 dB and the descending series of stimulus presentations is begun again.

    The descending procedure recommended by ASHA thus begins with the speech level above threshold and descends toward threshold. At the first few presentation levels, about 80% (five of six) of the items are repeated correctly; at the ending level, about 20% (one of six) of the spondees are repeated correctly. A formula is then used to calculate the SRT, which represents the lowest hearing level at which an individual can recognise 50% of the speech material.
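The preliminary phase described above amounts to a small search procedure, and its bookkeeping can be sketched as follows. This is only an illustration of the logic: the listener is modelled as a callable `respond(level)` returning True when a spondee at that level is repeated correctly, and the function name is my own. The final SRT formula mentioned in the text is not reproduced here.

```python
def preliminary_phase(respond, pta=None):
    """Find the starting level for the ASHA test phase (a sketch).

    respond(level) -> True if the spondee presented at `level`
    dB HL is repeated correctly.
    """
    # Start 30-40 dB above the PTA, or at 50 dB HL with no estimate.
    level = pta + 30 if pta is not None else 50
    # Ascend in 20-dB steps until a correct response is obtained.
    while not respond(level):
        level += 20
    # Descend in 10-dB steps until two consecutive spondees are
    # missed at the same level (short-circuit: the second word is
    # only presented if the first one is missed).
    while True:
        level -= 10
        if not respond(level) and not respond(level):
            break
    # The test phase starts 10 dB above this level.
    return level + 10
```

With a deterministic listener who repeats every word at or above 65 dB HL and misses everything below, and a PTA of 10 dB HL, the sketch ascends from 40 to 80 dB HL, descends past 70, fails twice at 60, and returns 70 dB HL as the starting level.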

    Assessment of supra-threshold speech recognition

    Although the speech threshold provides the clinician with an index of the degree of hearing loss for speech, it does not offer any information about a person's ability to distinguish the various acoustic cues of spoken language at conversational intensity levels. Unlike the situation with the SRT, attempts to predict a person's ability to understand speech presented at comfortably loud levels from the pure-tone audiogram have not been successful. Consequently, various supra-threshold speech recognition tests have been developed to estimate a person's ability to understand conversational speech. Three of the common types of speech recognition tests are phonetically balanced word lists, multiple-choice tests, and sentence tests.


    Speech Discrimination Test

    The Speech Discrimination Test is performed by presenting the patient with a series of words at a level 30-40 dB above the previously determined SRT. The patient is asked to repeat each word he hears, and the result, the discrimination score (DS), is reported as the percentage of words correctly heard. Based on these results, a so-called speech audiogram is constructed (Fig. 4).

    Fig. 4. Speech audiogram in normal hearing subject and patients with conductive and sensorineural hearing losses.

    The words used in this test are all monosyllabic and are phonetically balanced, which means that, taken together, the words in the list cover the distribution of speech sounds of the English language. If the patient has a poor discrimination score at 30 dB above his threshold, the test can be repeated at progressively higher levels until the threshold of discomfort is reached.

    The word discrimination test is valuable for several reasons. It helps to distinguish conductive hearing losses, where discrimination is in the range of 80% to 100%, from sensorineural losses, where it is poor. Even more important, it gives a clue as to whether a sensorineural loss is most likely cochlear or retrocochlear in origin: while discrimination is decreased in both circumstances, it is usually much worse with retrocochlear disease such as an acoustic neuroma (0% to 40%) than with cochlear disease (40% to 80%).

    The discrimination test can also be very helpful in assessing objectively the efficacy of hearing aids and in deciding on cochlear implant candidacy (speech discrimination results without and with the optimal hearing aid).
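The DS arithmetic and the interpretive ranges quoted above can be sketched as follows. The ranges overlap at their boundaries in the text, so the cut-offs chosen here at exactly 40% and 80% are an illustrative simplification, and the function names are my own.

```python
def discrimination_score(n_correct, n_words):
    """DS = percentage of phonetically balanced words repeated
    correctly at 30-40 dB above the SRT."""
    return 100.0 * n_correct / n_words

def interpret_ds(ds):
    """Rough interpretation using the ranges quoted in the text
    (boundary handling is an illustrative simplification)."""
    if ds >= 80:
        return "consistent with conductive loss (80-100%)"
    if ds >= 40:
        return "consistent with cochlear loss (40-80%)"
    return "suspicious for retrocochlear disease (0-40%)"
```

For example, 45 correct repetitions out of a 50-word list give a DS of 90%, in the range expected for a conductive loss or normal hearing.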

    Speech materials for the paediatric population

    This section reviews some speech materials commonly used to assess the word recognition

    skills of children. The major modification required in the evaluation of children is to ensure that

    the speech material is within the receptive vocabulary of the child under test. The required

    response, moreover, must be appropriate for the age tested. It would not be appropriate, for

    example, to ask for written responses from a 4-year-old on a word recognition task!

    The three most popular tests for use with the paediatric population are (a) the Phonetically Balanced Kindergarten test (PBK-50s), (b) the Word Intelligibility by Picture Identification (WIPI) test, and (c) the Northwestern University Auditory Test No. 6 (NU-6). The NU-6 materials are appropriate for use with adults and children 8 years of age and older. The PBK-50s, an open-response test, appear to be most suitable for children between the ages of 5 and 7 years.

    For many younger children, the open-response design of the PBK-50s and NU-6 presents a complicated task, which causes difficulty in the administration and scoring of the test. For example, a child with a speech problem is difficult to evaluate, because oral responses may not represent what the child actually perceived. In addition, children sometimes lose interest in the task because of the tedium associated with this format. Many children's lists have therefore incorporated a closed-set response format to minimise some of these problems.

    Probably the most popular multiple-choice test for use with hearing-impaired children is the

    WIPI test. This test consists of four 25-word lists of monosyllabic words within the vocabulary

    of pre-school-age children. For a given test item, the child is presented with a page containing

    six pictures, one of which is a picture of the test item. The appropriate response of the child is

    to point to or touch the picture corresponding to the word perceived. The WIPI test is most

    appropriate for children between the ages of 2 and 6 years.

    Masking

    In speech audiometry, just as in pure-tone measurements, cross-over can occur from the test ear

    to the non-test ear. Consequently, when the level of the speech signal presented to the test ear

    exceeds the bone conduction threshold of the non-test ear by more than 35 dB, masking may be

    necessary.
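The masking decision rule above reduces to a single comparison. The 35-dB figure comes from the text (the assumed minimum interaural attenuation for speech); the function name is illustrative.

```python
def needs_masking(presentation_db_hl, nontest_bc_threshold_db_hl,
                  interaural_attenuation_db=35):
    """Masking of the non-test ear is indicated when the speech
    level in the test ear exceeds the non-test ear's bone-conduction
    threshold by more than the interaural attenuation (35 dB here,
    per the text)."""
    return (presentation_db_hl - nontest_bc_threshold_db_hl
            > interaural_attenuation_db)
```

So speech at 70 dB HL against a non-test-ear bone-conduction threshold of 20 dB HL (a 50-dB difference) calls for masking, whereas speech at 50 dB HL against the same threshold does not.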


    Objective audiological tests

    The second part of the chapter includes the basic information on the objective audiologi-

    cal tests: acoustic immittance measures, auditory evoked potentials and otoacoustic emis-

    sions. Each of these sections describes all types of the auditory evoked potentials and

    otoacoustic emissions.

    Acoustic Immittance Measurement

    Impedance is defined as the opposition to the flow of energy through a system.

    When an acoustic wave strikes the eardrum of the normal ear, a portion of the signal is transmitted through the middle ear to the cochlea, while the remaining part of the wave is reflected back out of the external canal. The reflected energy forms a sound wave travelling in an outward direction, with an amplitude and phase that depend on the opposition encountered at the tympanic membrane. The energy of the reflected wave is greatest when the middle ear system is stiff or immobile, as in such pathologic conditions as otitis media with effusion and otosclerosis. On the other hand, an ear with an ossicular-chain interruption will reflect considerably less sound back into the canal because of its reduced stiffness; a greater portion of the acoustic wave will be transmitted to the middle ear under these circumstances. The reflected sound wave, therefore, carries information about the status of the middle ear system.

    Fig. 5. The immittance measurement procedure.

    The measurement of acoustic immittance at the tympanic membrane is an important component

    of the basic hearing evaluation. This sensitive and objective diagnostic tool has been used to

    identify the presence of fluid in the middle ear, to evaluate Eustachian tube and facial nerve

    function, to predict audiometric findings, to determine the nature of hearing loss, and to assist in

    diagnosing the site of auditory lesion. This technique is considered particularly useful in the

    assessment of difficult-to-test persons, including very young children.

    Figure 5 shows how this concept may be applied in actual practice. A pliable probe tip is inserted carefully into the ear canal and an airtight seal is obtained, so that varying amounts of air pressure can be applied to the ear cavity by pumping air into the ear canal or suctioning it out. A positive air pressure (usually +200 daPa) is then introduced into the airtight ear


    canal, forcing the tympanic membrane inward. The eardrum is now stiffer than in its natural state because of the positive pressure created in the ear canal. A low-frequency pure tone is then introduced, and a tiny microphone measures the level of the sound reflected from the stiffened eardrum. A low-frequency tone is used because this is the frequency region most affected by changes in stiffness. Keeping the intensity of the probe tone constant, the pressure is then reduced slowly, causing the tympanic membrane to become more compliant (less stiff). As the tympanic membrane becomes increasingly compliant, more of the acoustic signal passes through the middle ear, and the level of the reflected sound in the ear canal decreases. When the air pressure in the ear canal equals the air pressure in the middle ear, the tympanic membrane moves with the greatest ease. As the pressure is reduced further, the tympanic membrane is pulled outward and again becomes less mobile. As before, when the eardrum becomes stiffer or less compliant, more low-frequency energy is reflected off the tympanic membrane, and the sound level within the ear canal increases.

    Fig. 6. A: type A; B: type C; C: type B; D: type AD.

    Immittance Test Battery

    Three basic measurements - tympanometry, static acoustic immittance, and threshold of the

    acoustic reflex - commonly make up the basic acoustic immittance test battery.


    Tympanometry

    Acoustic immittance at the tympanic membrane of a normal ear changes systematically as air pressure in the external canal is varied above and below ambient air pressure. The normal relationship between air pressure changes and changes in immittance is frequently altered in the presence of middle ear disease. Tympanometry is the measurement of the mobility of the middle ear as air pressure in the external canal is varied from +200 to -400 daPa (mm H2O). Results from tympanometry are plotted on a graph, with air pressure along the x-axis and immittance, or compliance, along the y-axis. Figure 6 illustrates some of the tympanograms commonly seen in normal and pathologic ears.

    Various estimates have been made of the air pressure in the ear canal that results in the least amount of reflected sound energy from normal middle ears. This air pressure is routinely referred to as the peak pressure point. A normal tympanogram for an adult (Fig. 6a) has a peak pressure point between -100 and +40 daPa, which suggests that the middle ear functions optimally at or near ambient pressure (0 daPa). Tympanograms that peak below the accepted range of normal pressures (Fig. 6b) suggest malfunction of the middle ear pressure-equalising system; this may be due to Eustachian tube malfunction, early or resolving secretory otitis media, or acute otitis media. Ears that contain fluid behind the eardrum are characterised by a flat tympanogram at a high impedance (low admittance) value, without a peak pressure point (Fig. 6c). This implies an excessively stiff system that does not allow an increase in sound transmission through the middle ear under any pressure state.

    The amplitude (height) of the tympanogram also provides information about the compliance or elasticity of the system. A stiff middle ear (as in, for example, ossicular-chain fixation) is represented by a shallow amplitude, suggesting high acoustic impedance or low admittance. Conversely, an ear with abnormally low acoustic impedance or high admittance (as with an interrupted ossicular chain or a hypermobile tympanic membrane) is revealed by a tympanogram with a very high amplitude (Fig. 6d).
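The decision logic behind the tympanogram types in Figure 6 can be sketched as a small classifier. The -100 to +40 daPa normal peak-pressure range comes from the text; the compliance cut-offs and the function name are illustrative assumptions, not clinical standards.

```python
def classify_tympanogram(peak_pressure_dapa, peak_compliance_ml,
                         flat_cutoff_ml=0.2, high_cutoff_ml=1.5):
    """Assign a tympanogram type from peak pressure and compliance.

    The compliance cut-offs here are illustrative assumptions;
    only the normal pressure range is taken from the text.
    """
    if peak_compliance_ml < flat_cutoff_ml:
        return "B"   # flat trace: e.g. middle ear effusion (Fig. 6c)
    if peak_pressure_dapa < -100:
        return "C"   # negative middle ear pressure (Fig. 6b)
    if peak_compliance_ml > high_cutoff_ml:
        return "AD"  # hypermobile / interrupted chain (Fig. 6d)
    return "A"       # normal peak and amplitude (Fig. 6a)
```

A shallow-but-peaked trace (the stiff, ossicular-fixation pattern) would need a further low-amplitude branch; the sketch covers only the four types shown in Figure 6.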

    As noted, most tympanograms are measured using low-frequency probe tones introduced into the ear canal; 220 and 660 Hz are the most commonly used frequencies. In recent years, however, there has been increased interest in the use of multiple probe-tone frequencies, in which a family of tympanograms is obtained over a wide range of probe frequencies (multiple-frequency tympanometry). This is usually accomplished by sweeping the probe-tone frequency from a low to a high value at each of several air-pressure values, or by sweeping the air pressure from positive to negative (or vice versa) at each of several probe-tone frequencies. The use of multiple probe-tone frequencies allows measurement and specification of the resonant frequency of the middle ear. Although it has been suggested that changes in the resonant frequency observed with multiple-frequency tympanograms can improve the diagnostic capabilities of tympanometric measurements, the clinical usefulness of this additional information remains to be firmly established. It does appear, however, that multiple-frequency tympanograms may be more useful than those obtained using only a 220-Hz or 660-Hz probe tone for detecting the presence of fluid in the middle ears of young infants (under 4 months of age).


    Static Acoustic Immittance

    Static acoustic immittance measures the ease of flow of acoustic energy through the middle ear and is usually expressed as an "equivalent volume" in cubic centimetres. To obtain this measurement, immittance is first determined under a positive pressure (+200 daPa) artificially induced in the canal. Very little sound is admitted through the middle ear under this extreme positive pressure, much of the acoustic energy being reflected back into the ear canal. Next, a similar determination is made with the eardrum in its most compliant position, thus maximising transmission through the middle ear cavity. The arithmetic difference between these two immittance values, usually recorded in cubic centimetres (or millilitres) of equivalent volume, provides an estimate of the immittance at the tympanic membrane.

    Compliance values less than or equal to 0.25 cm3 (ml) of equivalent volume suggest abnormally low acoustic immittance (indicative of stiffening pathologies), and values greater than or equal to 2.0 cm3 generally indicate abnormally high immittance (suggestive of ossicular discontinuity or a healed tympanic membrane perforation).
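The two-measurement subtraction and the cut-offs just quoted can be sketched directly (function names are illustrative; the 0.25 and 2.0 cm3 limits are from the text):

```python
def static_compliance(vol_at_plus200_ml, vol_at_peak_ml):
    """Static acoustic immittance: the equivalent-volume difference
    between the stiffened (+200 daPa) and most compliant states."""
    return vol_at_peak_ml - vol_at_plus200_ml

def interpret_static(compliance_ml):
    """Apply the cut-offs quoted in the text."""
    if compliance_ml <= 0.25:
        return "abnormally low (stiffening pathology)"
    if compliance_ml >= 2.0:
        return "abnormally high (e.g. ossicular discontinuity)"
    return "within normal limits"
```

For example, equivalent volumes of 1.0 ml at +200 daPa and 1.6 ml at the peak give a static compliance of 0.6 ml, within normal limits.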

    Acoustic Reflex Threshold

    The acoustic reflex threshold is defined as the lowest intensity needed to elicit a middle ear muscle contraction. Contraction of the middle ear muscles evoked by intense sound results in a temporary increase in middle ear impedance. The acoustic reflex is a consensual phenomenon: acoustic stimulation of one ear will elicit a muscle contraction, and a subsequent impedance change, in both ears. Usually, the acoustic reflex is monitored in the ear canal contralateral to the ear receiving the sound stimulus. Figure 7 shows how it is measured: a headphone is placed on one ear, and the probe assembly is inserted into the contralateral ear.

    Fig. 7. Acoustic reflex measurement.


    When the signal transduced by the earphone reaches an intensity sufficient to evoke an acoustic reflex, the stiffness of the middle ear is increased in both ears. This results in more sound being reflected from the eardrum, and a subsequent increase in sound pressure is observed on the immittance instrument. In recording the data, it is standard procedure to consider the ear stimulated with the intense sound as the ear under test. Because the ear stimulated is contralateral to the ear in which the reflex is measured, these reflex thresholds are referred to as contralateral reflex thresholds. It is also frequently possible to present the loud reflex-activating stimulus through the probe assembly itself; in this case, the reflex is both activated and measured in the same ear and is referred to as an ipsilateral acoustic reflex.

    In the normal ear, contraction of the middle ear muscles occurs with pure tones ranging from 65 to 95 dB HL. A conductive hearing loss, however, tends to either elevate or eliminate the reflex response. When acoustic reflex information is used in conjunction with tympanometry and static acoustic immittance measurements, it serves to substantiate further the existence of a middle ear disorder. With a unilateral conductive hearing loss, failure to elicit the reflex depends on the size of the air-bone gap and on the ear in which the probe tip is inserted. If the stimulus is presented to the good ear and the probe tip is placed on the affected side, an air-bone gap of only 10 dB will usually abolish the reflex response. If, however, the stimulus is presented to the pathologic ear and the probe is in the normal ear, a gap of 25 dB is needed to abolish or significantly elevate the reflex threshold.
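The asymmetry just described (the probe side matters more than the stimulus side) can be captured in a tiny predictor. The 10-dB and 25-dB gap figures are from the text; the function name and the strict-inequality boundary handling are illustrative assumptions.

```python
def contralateral_reflex_present(air_bone_gap_db, probe_in_affected_ear):
    """Predict whether a contralateral acoustic reflex is measurable
    with a unilateral conductive loss (a sketch of the text's rules).
    """
    if probe_in_affected_ear:
        # Probe on the pathologic side: a gap of only ~10 dB
        # usually abolishes the response.
        return air_bone_gap_db < 10
    # Stimulus to the pathologic ear, probe in the normal ear:
    # a gap of ~25 dB is needed to abolish/elevate the reflex.
    return air_bone_gap_db < 25
```

So a 15-dB air-bone gap abolishes the reflex when the probe sits in the affected ear, yet leaves it measurable when the probe sits in the normal ear.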

    Acoustic reflex thresholds can also be useful in differentiating whether a sensorineural hearing loss is due to a lesion in the inner ear or to one in the auditory nerve. For hearing losses ranging from mild to severe, the acoustic reflex threshold is more likely to be elevated or absent in ears with neural pathology than in those with cochlear damage. The pattern of acoustic reflex thresholds for ipsilateral and contralateral stimulation across both ears, moreover, can aid in diagnosing brainstem lesions affecting the reflex pathways in the auditory brainstem.

    Special Considerations in the Use of Immittance with Young Children

    Immittance can be most valuable in the assessment of young children, although it does have some diagnostic limitations. Further, although electroacoustic immittance measures are ordinarily simple and quick to obtain, special consideration and skills are required to obtain these measures successfully from young infants.

    Although immittance tests may be administered to neonates and young infants with a reasonable amount of success, tympanometry may have limited value with children younger than 7 months of age. Below this age, there is a poor correlation between tympanometry and the actual condition of the middle ear: in very young infants, a normal tympanogram does not necessarily imply a normal middle ear system. However, a flat tympanogram obtained in an infant strongly suggests a diseased ear. Consequently, it is still worthwhile to administer immittance tests to this population.

    Another limitation of immittance measurements is the difficulty of obtaining them from a hyperactive child or a child who is crying, yawning, or continually talking. A young child who exhibits excessive body movement or head turning will make it almost impossible to maintain an airtight seal with the probe tip. Vocalisation produces middle ear muscle activity


    that in turn causes continual alterations in the compliance of the tympanic membrane, making immittance measurements impossible. With difficult-to-test and younger children, specialised techniques are needed to keep the child relatively calm and quiet, and it is always recommended that a second person be involved in the evaluation. While the child is sitting on the parent's lap, one person can place the earphone and insert the probe tip while the second manipulates the controls of the equipment.

    With infants below 2 years of age, placing the earphones and headband on the child's head is often distracting. It may be helpful to remove the earphone from the headband, rest the band over the mother's shoulder, and insert the probe tip into the child's ear. It also helps to use distraction techniques that occupy the child's attention during the test.

    Overview of auditory evoked responses and otoacoustic emissions

    This chapter introduces auditory evoked responses (AERs) and otoacoustic emissions (OAEs)

    to readers who are unfamiliar with the topic.

    What are Auditory Evoked Responses?

    An auditory evoked response (AER) is activity (a "response") within the auditory system (the ear, the auditory nerve, or auditory regions of the brain) that is produced or stimulated ("evoked") by sounds ("auditory" or acoustic stimuli). Representative waveforms of the major AERs are shown in Figure 8. In the simplest terms, AERs are brain waves (electrical potentials) generated when a person is stimulated with sounds. These sounds may range from clicks (very brief, sharp sounds) to tones, and even to speech sounds. The intensity or strength of the sounds (corresponding to how loud they are) may range from high (loud) to low (faint); as a rule, sounds of greater intensity produce larger auditory brain responses. The sounds are presented to the person by way of some type of acoustic transducer: a conventional earphone or an insert earphone.

    Fig. 8. Major AERs evoked by sound stimuli.


    QESWHIC Project - Study Letter 3

Audiological Assessment

    Brain activity evoked by the sounds is picked up by electrodes, which are usually placed at

    specific places on the scalp (e.g., high on the forehead) and near the ears (e.g., on the earlobes

    or mastoids). A typical electrode consists of a wire with a disc at one end, which makes contact

with the skin, and a pin at the other end, which plugs into an electrode box or junction box. The activity evoked by the sounds arises from structures within the ear (hair cells), nerve, and brain,

    at some distance from these skin electrodes. This sensory and neural activity is conveyed from

    the auditory structures through body tissue and fluids to the surface electrodes. Then, the wires

lead this electrical activity to the sensitive biological amplifier and then to a specially programmed computer that is capable of high-speed processing.

A logical question at this juncture is, "If the electrodes are located relatively far from the generators of the responses, then how does the tester know where the response is coming from in

    the brain?" Because the stimulus is a sound, it is clear that the response arises somewhere in the

auditory system. The specific source of the response within the auditory system is often difficult or impossible to pinpoint. Nonetheless, by analysing the pattern of the response and by

    calculating the time between presentation of the stimulus and the occurrence of the response, it

    is usually possible to determine the regions in the auditory system giving rise to the response.

    The time after the stimulus at which AERs occur is invariably less than 1 second. Therefore, the

    post-stimulus times (latencies) of peaks in the response pattern (waveform) are described in

    milliseconds.

    Those responses with the shortest latencies are generated by the inner ear and the auditory

    nerve. A few milliseconds later are unique response patterns that reflect activity within the

    auditory brainstem. Still later are response patterns due to activity in higher auditory portions of

    the brain, such as the cerebral cortex. Extensive experience in recording AERs from normal

    human subjects and from patients with pathologies involving different regions of the brain has

produced some useful correlations among response patterns, the periods after the stimulus, and the generators of AERs. In fact, the terms used in referring to different categories of AERs are sometimes related to the auditory structures that give rise to the response. Examples are electrocochleography (from the cochlea or inner ear) and auditory brainstem response (from the brainstem). To a large extent, however, AER terminology is inconsistent and quite confusing.

The brain activity underlying AERs is of extremely small voltage and is measured in microvolts (µV). Activity arising from the higher regions of the auditory system (the cerebral cortex) involves hundreds of thousands, perhaps millions, of brain cells. The electrodes are also relatively close to the sources of this activity. Therefore, these responses tend to be somewhat larger in size (amplitudes on the order of 5-10 µV). In contrast, activity generated by the ear, auditory nerve, and brainstem, which involves fewer neural units and may arise farther from the electrodes, may be extremely small - on the order of 0.10-0.50 µV. Because auditory evoked brain activity is so tiny, two processes are essential for detecting AERs. One process is to increase, or amplify, the voltage. For example, the smaller voltage activity from the ear, auditory nerve, and brainstem is typically made 100,000 times greater by amplification before any analysis of the response takes place.
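As a rough numerical aside (not part of the original text), the relationship between a linear amplifier gain and its decibel equivalent can be sketched as follows; only the 100,000-fold figure comes from the passage above, the rest is illustration:

```python
import math

def voltage_gain_db(linear_gain: float) -> float:
    """Express a linear voltage gain in decibels: dB = 20 * log10(gain)."""
    return 20.0 * math.log10(linear_gain)

# The 100,000-fold amplification mentioned above corresponds to 100 dB of gain,
# enough to bring a 0.5 microvolt response up to 50 mV for digitisation.
print(voltage_gain_db(100_000))
```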

    The second process is called "signal averaging". Before signal averaging is defined, the need

    for it is explained. The signal (the actual auditory evoked brain activity) is buried within other

    brain activity (general background brain activity, electroencephalogram type activity, EEG), as

    well as within electrical and muscle activity from sources outside of the auditory system, such

as fluorescent lights in the test room and movement of the jaw or neck. This electrical activity that is not related to the auditory stimulus is referred to as "noise". If a sound were presented to a


    person just once, it would be impossible to distinguish the tiny signal (in the ear, auditory

nerve, or brain) that is produced by the sound from the much larger ongoing background electrical activity - the noise - which is also detected by the electrodes.

Several techniques are available for increasing the strength of the signal (AER) and reducing the strength of the noise (other electrical activity) - that is, improving the signal-to-noise ratio.

    The most important of these techniques is signal averaging, which may be described as follows:

    every time a stimulus is presented, the tester's computer records the brain activity detected with

    the electrodes, including the auditory response if there is one. During the signal averaging

    process, hundreds, and often thousands, of stimuli are presented, and the electrical brain activity

    is stored. The basic assumption underlying signal averaging is that the pattern of auditory brain

activity produced by each stimulus will almost always be about the same and will occur at the same time relative to the stimulus. At any single moment during the recording, a similar voltage response will

    follow each stimulus. With each additional presentation of a stimulus, then, the resulting AER

    will add to and will strengthen previous responses. The resulting record is usually divided by

    the number of stimulus presentations, and an average response is calculated. Meanwhile, the

record of the electrodes' background electrical activity is also adding up. This activity is considered random, however, and does not have the same pattern after each stimulus presentation. For

    example, at any given moment after one stimulus, the "noise" may have a positive voltage (e.g.,

+1 µV), yet after another stimulus, the voltage at this time may be negative (e.g., -1 µV). As

    these subsequent random background activity recordings are added together, they eventually

    cancel one another. Thus, these random responses are averaged out or reduced in size, leaving

    only the AER.
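The averaging process described above can be illustrated with a small simulation. This is a hypothetical sketch, not clinical software: a fixed "evoked response" of 0.3 µV is buried in random background noise some thirty times larger, and averaging across sweeps shrinks the residual noise roughly in proportion to the square root of the number of sweeps:

```python
import math
import random

def averaged_epoch(n_sweeps, signal_uv=0.3, noise_uv=10.0, n_points=100, seed=1):
    """Return the average of n_sweeps simulated post-stimulus epochs.

    Each epoch contains the same deterministic 'evoked response' (a half-sine
    peak of signal_uv microvolts) plus Gaussian background noise that is
    uncorrelated with the stimulus."""
    rng = random.Random(seed)
    response = [signal_uv * math.sin(math.pi * i / (n_points - 1)) for i in range(n_points)]
    total = [0.0] * n_points
    for _ in range(n_sweeps):
        for i, r in enumerate(response):
            # The response repeats identically; the noise is random each sweep.
            total[i] += r + rng.gauss(0.0, noise_uv)
    return [t / n_sweeps for t in total]

def rms(values):
    """Root-mean-square amplitude of a waveform."""
    return math.sqrt(sum(v * v for v in values) / len(values))

# With no response present, averaging only shrinks the noise floor:
noise_after_10 = rms(averaged_epoch(10, signal_uv=0.0))
noise_after_4000 = rms(averaged_epoch(4000, signal_uv=0.0))
```

With 4000 sweeps the residual noise falls to roughly 10/√4000 ≈ 0.16 µV, small enough for a 0.3 µV response to emerge from the average.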

    As detailed in the following part of this chapter, AERs have an interesting and varied history,

    dating back to 1930. The most dramatic growth in AER clinical use, however, has occurred

since 1970 and is directly related to the availability of relatively small and inexpensive, yet powerful, computers. Now, there are many manufacturers of user-friendly computerised

    equipment for recording AERs. As equipment has become more widely available, and different

    types of health care professionals have incorporated AERs into the scope of their practice, the

    clinical uses for AER have correspondingly expanded. Just a few examples of these clinical

    uses would include estimation of auditory sensitivity in new-born infants and older children

    who are at risk for hearing impairment, diagnosis of inner ear disease (e.g., Meniere's disease),

    detection of tumours and other pathology of the auditory central nervous system (auditory

    nerve, brainstem, or cerebrum), monitoring central nervous system status during nerve and

    brain surgery, and even diagnosis of brain death.

    Factors in the Measurement of AERs

Subject Factors. Subject characteristics are a logical starting point for a review of factors because there can be no AER without a living organism. With rare exceptions, the information in

this chapter is limited to human AERs. Among subject variables, the anatomical and physiologic bases of AERs are, perhaps, the most controversial yet clinically important. For example,

application of AERs in frequency-specific assessment of auditory sensitivity requires knowledge of which portions of the cochlear partition (basilar membrane) contribute to the averaged

    response for given tonal stimulus parameters. Effective use of AERs in evaluation of central

    nervous system (CNS) pathophysiology, such as identification and localisation of brain lesions,

    directly depends on knowledge of the neural generator(s) of the specific wave components.


The following subject characteristics are known to influence AERs and must be considered clinically in the interpretation of findings:

    Age

Gender (male vs. female differences)

Body temperature

    State of arousal

    Muscular artefact

    Drug effects

    There may be important variations in the effects of these factors on the different AERs. For

    example, gender mostly affects auditory brainstem response (ABR), and age seriously affects

    all AERs; state of arousal and certain drugs influencing the CNS are not important factors in

    interpreting earlier latency responses (ECochG and ABR), but they must be considered for valid

    measurement of later latency responses (ALLR and P300).

Finally, one must consider the relation between AERs and pathology of the peripheral auditory system (middle ear, cochlea and eighth cranial nerve) and central auditory system (brainstem and cerebrum). There are myriad interactions among pathologies, AER measurement parameters, and AER outcome - interactions that differ for each of the AERs.

Stimulus Factors. The main stimulus parameters which should be considered are:

    Type of stimulus (e.g., click or tone burst);

    Duration characteristics;

    Intensity level in decibels (dB);

Rate of presentation;

Acoustic polarity;

    Mode of presentation (e.g., monaural vs. binaural);

    Type of transducer.

    Selection of the stimulus parameters that are appropriate for a given subject depends largely on

the type of AER to be recorded and the objective of the assessment. The consequence of selecting an inappropriate stimulus parameter may vary greatly. For example, if a low-frequency tone stimulus with excessively long rise-fall times and a plateau (e.g., a 500-Hz tone pip with rise, plateau, and fall values of 10, 5, and 10 msec, respectively) is chosen for ABR measurement,

    rather than a transient (click) stimulus, there will be little or no response, even in a normal

    subject. These same stimulus parameters, however, would be appropriate and effective for

    generating an auditory late response (ALLR). On the other hand, increasing the rate of click

    stimulation to 21.1/sec will have no serious influence on the ABR but will abolish the ALLR.

Acquisition Factors. To a large extent, the acquisition factors determine the type of AER recorded (e.g., ECochG vs. ALLR) and the success with which it is measured. The type of electrode used in many clinical AER settings is a metal-alloy, disk-shaped EEG electrode attached

    to the skin, although other types are often necessary for monitoring applications or for neonatal

    assessment. Electrode placement ("array") is a crucial factor in all AER recordings, and it varies

    greatly, depending on the type of AER to be recorded and the purpose of the assessment.

Perhaps more than any other factor, post-stimulus evoked response analysis time has a fundamental impact on the AER that will be recorded. Without an analysis time period that encompasses the region of the response (e.g., at least 10 msec for the ABR), the response will not be


    observed, even if all other measurement parameters are appropriate. There is a relation between

    analysis time, the number of data points sampled in recording an AER, and the time resolution

    of the recording. Detecting an AER in the presence of ongoing neurogenic, myogenic, and

environmental activity (noise) is facilitated by filtering out the frequency regions of the unwanted activity. With inappropriate filtering, however, part or all of the desired evoked response may

    also be eliminated from analysis.
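The relation between analysis time, sample count, and time resolution mentioned above can be made concrete. In this illustrative sketch the 512-point figure is an assumption for the example, not a value from the text; the sampling interval is simply the analysis window divided by the number of data points:

```python
def sampling_interval_us(analysis_time_ms: float, n_points: int) -> float:
    """Time resolution (microseconds per data point) of an averaged recording."""
    return analysis_time_ms * 1000.0 / n_points

# A 10-ms ABR analysis window digitised at 512 points resolves ~19.5 us per point;
# a 500-ms window for late responses at the same 512 points resolves only ~977 us.
print(sampling_interval_us(10, 512), sampling_interval_us(500, 512))
```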

    Tester Factors. AER measurement is often referred to as an "objective" method of assessing the

    status of the peripheral and central auditory system. The description of AERs as objective is

    probably used because, in contrast to traditional auditory measurement, an overt behavioural

    response from the subject is not required. AERs are electrophysiologic and not behavioural

responses; that is, with the appropriate measurement conditions, the subject will typically produce a response without performing any externally observable behavioural task or act.

Though the measurements may be considered objective, analysis of the AER waveform presently depends on subjective analysis by the tester. Response interpretation is very much influenced by clinical skill and experience. There are several possible exceptions to this statement.

For example, within recent years devices have been developed for "automated" response analysis. Nonetheless, for the time being, a clinician must (a) identify some feature of the response (such as a wave component), (b) calculate a measure or index of this component (such as latency or amplitude), and (c) form a judgement about response reliability and the accuracy of the

    calculations. This process can become highly subjective.

Furthermore, there are neither standardised protocols for AER measurement nor accepted criteria for definition or analysis of responses. Response parameters other than wave latency or amplitude, such as morphology or frequency composition (spectrum), are not yet routinely

    analysed clinically. There is no single technique or approach that can or should always be used

    in recording AERs. The best technique or approach is the one that produces the best and most

accurate response. Conversely, there are many ways that measurement of AERs can go wrong. Put simply, AER measurement is a far more challenging clinical task than sticking a few electrodes on a head, presenting some sounds to an ear, pressing some buttons, and determining at a

    glance whether the resulting response is normal or abnormal.

    Electrocochleography (ECochG):

    General Description And Waveform Terminology

    Electrocochleography (ECochG) records the electrical activity originating from within and

    around the cochlea in a 1 to 10 ms post-stimulus time window. The activity is (1) pre-synaptic,

    receptor activity i.e. cochlear microphonic (CM) and summating potential (SP) from the hair

    cells and (2) post-synaptic, neural activity i.e. auditory compound action potential (CAP) from

    the peripheral part of the cochlear nerve.

    Typical ECochG waveforms are shown in Fig. 9. The response, which arises from the cochlea

and the eighth (auditory) cranial nerve, occurs within the first 2 or 3 msec after an abrupt stimulus. The first component observed, under certain measurement conditions, is the cochlear microphonic (CM): an alternating presynaptic electrical potential generated at the hair-cell level in

    the cochlea. With single polarity tonal stimulus, the CM appears as a waveform with a series of


    repeated upward peaks and downward troughs. The CM component can obscure other later

    components in the ECochG waveform because it continues as long as the stimulus is presented.

    Fig. 9. ECochG waveforms: left panel - CM, AP and SP; right panel - AP-SP-complex after CM reduction.

    Use of an alternating polarity stimulus effectively reduces the CM. The upward bumps of the

    positive voltage polarity stimuli are averaged in with the downward bumps of the negative

voltage polarity stimuli. In this way, the CM is largely cancelled out. This is usually desirable

    in ECochG measurement. The next two ECochG components are the action potential (AP) and

    the summating potential (SP). Other terms for the AP are N1 (referring to the first negative

    peak) and ABR wave I. The SP may appear as a separate peak preceding the AP, going in the

    same direction as the AP, or going in the opposite direction. It may also appear as a ledge or

    hump on the beginning slope of the AP. The AP is generally the larger of the two peaks and has

a latency (time of occurrence after the stimulus) of about 1.5 msec. Whether the AP (and sometimes the SP) goes upward or downward depends on the location of the two (noninverting and inverting) recording electrodes. In this chapter, however, the AP is plotted downward.
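The cancellation of the CM by alternating stimulus polarity can be sketched numerically. In this toy model the waveform shapes are invented purely for illustration: the CM follows the stimulus and therefore flips with polarity, while the neural AP keeps a fixed polarity, so averaging one condensation and one rarefaction sweep removes the CM and leaves the AP:

```python
import math

def ecochg_sweep(polarity: int, n_points: int = 64):
    """One simulated ECochG epoch: a polarity-following CM plus a fixed AP."""
    # CM mirrors the stimulus waveform, so it inverts with stimulus polarity.
    cm = [polarity * math.sin(2 * math.pi * 4 * i / n_points) for i in range(n_points)]
    # The neural AP is a fixed negative-going peak regardless of polarity.
    ap = [-0.5 * math.exp(-((i - n_points // 3) ** 2) / 20.0) for i in range(n_points)]
    return [c + a for c, a in zip(cm, ap)]

condensation = ecochg_sweep(+1)   # positive-going stimulus
rarefaction = ecochg_sweep(-1)    # inverted stimulus: CM inverts, AP does not
averaged = [(c + r) / 2.0 for c, r in zip(condensation, rarefaction)]
# The CM (amplitude 1.0) dominates each single sweep, but the average
# contains only the AP, whose negative peak is -0.5.
```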

The SP is a pre-synaptic potential and arises from the cochlea, while the AP (actually the compound or combined action potentials of many nerve fibres) is generated by the distal (cochlear)

end of the auditory (eighth cranial) nerve. Thus, even with this earliest AER there is inconsistency in terminology. That is, electrocochleography includes components that do not arise directly from the cochlea. A second major peak is usually observed following the AP and is sometimes described in reference to ECochG. Again, whether this peak goes upward or downward

depends on the electrode configuration. This peak, termed N2, likewise does not arise directly from the cochlea: it is generated by the proximal part of the auditory nerve (outside the internal auditory canal) and is equivalent to the wave II component of the ABR.

The recordings are obtained from either trans-tympanic or extra-tympanic electrodes. The trans-tympanic electrode is normally a thin flexible stainless-steel needle electrode which, through the tympanic membrane, is placed on the promontory of the medial part of the middle ear.

    The extra-tympanic electrode exists in many different versions e.g. silver- or steel needles,

    silver wires, silver balls, plastic clips and tiny conductive sponges shaped to fit the external

    meatus.

The electrical activity from the trans- or extra-tympanic electrode is normally recorded differentially relative to the activity from an electrode on the ear lobe. There are both advantages and

    disadvantages with the trans- and extra-tympanic electrode placements and there are significant


differences in the magnitudes of the evoked potentials obtained (tenfold higher in trans-tympanic recording).

    Auditory Brainstem Response (ABR)

    General Description and Waveform Terminology

    Brain stem response audiometry records the electrical activity in the cochlear nerve and parts of

    the brain stem in a 1 to 15 ms time window (Fig. 10).

    Fig. 10. ABR waveforms

The term ABR was formally introduced by Davis (1979) in a report of a United States-Japan seminar on "auditory responses from the brain stem". Perusal of the references for this book

    clearly indicates that the ABR is described with a variety of terms. The most common two

    alternative terms for the ABR are brainstem auditory evoked response (BAER), which is used

consistently in neurology, and brainstem auditory evoked potential (BAEP). The term brainstem evoked response (BSER), popular in the late 1970s, is improper because it does not specify the auditory system. Further, responses from other sensory systems, such as the somatosensory system, also have brainstem components. Obviously, the same can be said for the totally

inadequate term, brainstem, as in, "what were the patient's brainstem results?". Incidentally, the

    same limitation applies to the popular term "middle latency response (MLR)".

    Arguments for use of the specific term potential instead of the more general term response and

    apparent distinctions in the neurophysiologic meanings of these terms, have been presented in

the past. While it is possible that the term response might imply to some readers an overt, voluntary, behavioural reaction to a sensory stimulus, no difference in meaning is argued here. In the interest of consistency, the term response is typically used in this chapter, and it is considered interchangeable with the term potential. Furthermore, the term auditory brainstem response, abbreviated ABR, is used exclusively.

    Wave components are labelled by Roman numerals, by positive (P) and negative (N) voltage

    indicators plus Arabic numerals, and simply by Arabic numbers. There are inconsistencies, as

    noted previously, in vertex polarity (negative vs. positive). With the Roman numeral labelling

    system as introduced by Jewett and Williston (1971), vertex positive waves are plotted upward.

That is, the electrode at the vertex (or high forehead) is plugged into the positive voltage input of the amplifier, while the earlobe (or mastoid) electrode is connected to the negative voltage


input. This produces the typical ABR waveform. However, some investigators, usually Japanese (e.g., Hashimoto) or European (e.g., Terkildsen), reverse this electrode arrangement (negative voltage input is at the vertex, or the high forehead) and major peaks in the resulting waveform are plotted downward. The majority of well-known European electrophysiologists do not adhere to this arrangement. On the other hand, several prominent auditory neurophysiologists in the United States (e.g., A. Møller) and Canada (e.g., T. Picton) also persistently follow the convention of showing positive peaks downward and negative peaks upward.

Probably because the ABR as typically recorded with a vertex-ipsilateral ear (earlobe or mastoid) electrode array often lacks a distinct wave IV versus V, some investigators (Lev & Sohmer, 1972; Thornton & Coleman, 1975) labelled the wave IV-V complex with the number 4,

    and used number 5 (or P5 or N5) as the label for what is conventionally shown as wave VI.

    Traditionally, little attention has been given to the troughs following ABR peaks, even though

    the troughs may correspond to different anatomical regions than the peaks and, therefore, have

    clinical value.

    At relatively high-stimulus intensity levels (70 dB HL and above) and with click stimuli, the

    wave I component normally occurs at approximately 1.5 msec after the stimulus, and then each

    subsequent wave occurs at approximately 1.0-msec intervals (wave II at 2.5, wave III at 3.5,

    etc.). An approximation of the normal absolute latencies of each of the ABR waves can be

    recalled, therefore, by counting the number of waves beyond wave I and adding this number to

1.5 msec. For example, wave III is 2 waves after wave I, and 2 plus 1.5 msec equals 3.5 msec.

    Estimation of average inter-wave intervals is also rather simple because there is usually about

1.0 msec between each wave. The wave I to V latency interval (an important response parameter in neurodiagnosis with ABR) is estimated by subtracting 1 from 5, or by adding up the intervals (1.0 msec) between each of the intervening waves (for wave I-II, wave II-III, wave III-IV, and wave IV-V). Either way, the result is an average normal wave I-V latency interval of 4.0 msec.
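The latency arithmetic described above can be written as a small helper. This is only a sketch of the mnemonic for round normative values at high click levels in adults, not a clinical normative table:

```python
def abr_wave_latency_ms(wave: int) -> float:
    """Approximate normal ABR wave latency at high stimulus levels:
    wave I at ~1.5 msec, each later wave ~1.0 msec after the previous one."""
    if not 1 <= wave <= 7:
        raise ValueError("ABR waves are conventionally labelled I-VII")
    return 1.5 + (wave - 1) * 1.0

def interwave_interval_ms(first: int, last: int) -> float:
    """Inter-wave interval, e.g. the diagnostically important I-V interval."""
    return abr_wave_latency_ms(last) - abr_wave_latency_ms(first)

# Wave III is two waves after wave I, so 1.5 + 2.0 = 3.5 msec;
# the wave I-V interval is 5.5 - 1.5 = 4.0 msec.
print(abr_wave_latency_ms(3), interwave_interval_ms(1, 5))
```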

    Another handy mnemonic for remembering vital ABR normative data for high-intensity-level

    stimulation is to keep in mind that wave V occurs at about 5.5 msec and has an amplitude of

about 0.5 µV (here, 5 is the magic number). In a further extension of this theme, the upper cutoff for a normal wave I-V interval in adults (actually persons over 1.5 years of age) is in the

    region of 4.5 msec, whereas a new-born infant typically has a wave I-V latency interval of no