
Module 2 – Pedagogic Audiology

Describes what is specific to early childhood hearing diagnostics and which methods must be applied to optimize the provision of assistive devices and the acoustic environment.
Author: Barbara Bogner

Introduction

Aim of the Module

The aim of this Module is to explore ways of finding out how well children aged 0-3 years can hear, and to look at the ways in which good hearing helps each individual child in their own overall development. We will start by looking at normal hearing and hearing loss. We will discuss which parts of the spectrum are audible to people with normal hearing, which regions are inaudible to those with hearing impairment, and what can be achieved with the aid of modern assistive hearing devices.
We will then highlight the difficulties that currently exist in diagnosing hearing loss and fitting suitable assistive hearing devices in children up to the age of 2, and discuss how to deal with these problems. In order to assess how effective these technologies are, early hearing development must be documented and evaluated. Milestones in hearing development will therefore be pinpointed, and techniques presented which – drawing on both educational practice and day-to-day observation of the child – enable feedback to be provided about their hearing ability.
It is not only individual factors inherent to the hearing-impaired child that are important in learning to hear, but also the acoustic environment in which this learning takes place. This Module describes how an environment can be created that is conducive to hearing, and how to achieve this in practice.


Learning rationale

Through self-study, readers should acquire a basic understanding of hearing, both with and without assistive hearing devices, and of how hearing ability is assessed in the under-threes. This Module will help them find ways of documenting – in practice, on a day-to-day basis – the progression of hearing development. Many of the materials listed here can be obtained from the Internet. Readers will also have the opportunity to assess how much they have learned by working through review questions for each chapter.


What hearing loss is

The main learning goals are:
to understand the scope of and limits to hearing;
to understand what hearing loss means;
to identify which parts of the auditory spectrum are audible with and without hearing devices.

Normal hearing and hearing impairment

Normal hearing
The anatomy and physiology of the ear
Anatomically, the ear is divided into three parts: the outer ear, the middle ear and the inner ear. The outer ear, which comprises the auricle (pinna) and the ear canal, acts as a sound funnel: the auricle collects sound and directs it into the ear canal, where it strikes the eardrum and sets it vibrating. These oscillations are, in turn, transmitted to the inner ear via the ossicular chain (consisting of three tiny bones: the malleus, incus and stapes). This is termed air conduction. Sound can also be transmitted via bone conduction, which involves sound waves striking the skull bone (mastoid) and being guided directly to the inner ear. In the inner ear, these incoming vibrations continue in the form of a fluid wave within the cochlea. The outer hair cells take up this wave motion and pass it on to the inner hair cells, causing their sensory hairs to move (i.e. be displaced from their resting position). This leads to voltage changes within the cell, with a brief flow of current stimulating the auditory nerve, which passes information about the sound event on to the centres of perception in the central nervous system (specifically, the cerebral cortex of the brain). It is only here that what is heard is actually ‘understood’, by means of comparison with previously learned patterns. We hear, therefore, not with our ears but with our brains.

Manifestations of hearing impairment
In order that an acoustic signal is both audible and useable, the following needs to happen:
1. Sound must be conveyed to the inner ear;
2. Sound must be converted into electrical impulses that are transmitted onwards;
3. The signal must be processed at the level of the central nervous system.
Things can go wrong at each of these stages in the process. This is reflected in the main categories of hearing impairment, of which there are basically three: conductive hearing loss, sensorineural hearing loss, and mixed (combined) hearing loss. Above and beyond this distinction, hearing disorders may be categorized as peripheral or central (depending on where they originate) and described as cochlear hearing loss, retrocochlear hearing loss, auditory neuropathy, or as auditory-processing and perceptual disorders.

Conductive hearing loss
Conductive hearing loss originates in the outer or middle ear. Possible causes include:

Outer ear
Injuries;
Cerumen (earwax);
Foreign bodies in the ear canal;
Anatomical abnormalities such as the absence of the outer ear or malformation of the ear canal (atresia or stenosis of the ear canal).
Middle ear
Ventilation problems;
Inflammations of the middle ear (otitis media):
Otitis media with effusion (serous otitis media)
Acute otitis media
Chronic otitis media
Otosclerosis
Malformation of the ossicular chain
Injury to the eardrum and middle ear

The most frequent cause of conductive hearing loss in children under the age of 8 is inflammation of the middle ear (otitis media, characterized by ‘running ears’), so this will be dealt with in some depth here. Around 15-20% of all children between the ages of 1 and 4 will be affected by this condition at some stage. Since it leads to temporary hearing loss, it is important that it be detected, as even mild short-term hearing impairment can (during sensitive phases of development) have an adverse effect on the acquisition of both hearing and language.
Otitis media occurs when the Eustachian tubes are not able to adequately ventilate the middle ear, a situation which is frequently associated with infections in the oropharynx. Fluid that accumulates in the middle ear is thus unable to drain away. The problem in very young children is that, owing to the narrowness and relatively horizontal position of the Eustachian tube, adequate ventilation and fluid drainage are prevented. If the tympanic cavity remains unventilated for any length of time, this leads to the pressure in this cavity being lower than the ambient atmospheric pressure outside, and in the outer ear canal. The eardrum is thus pushed inwards, which stiffens the ossicular chain.
Children with chronic accumulation of fluid in the middle ear have a hearing loss that changes on a day-to-day basis and may not be of the same degree in both ears. These unreliable auditory sensations may affect the maturation of the central auditory pathway and are also cited as a possible contributory cause of both auditory-processing and perceptual disorders.
Negative pressure in the middle ear can be diagnosed by means of tympanometry. However, before other diagnostic and therapeutic audiological methods are applied, the middle-ear inflammation should be cleared medically (e.g. treatment with antibiotics or the insertion of grommets).

It is generally true for all forms of conductive hearing loss that the input of sound via air conduction to the inner ear is disrupted, while bone conduction is unaffected. This leads to the perception of noises, sounds and speech being muffled, and to the range of audibility being markedly reduced; the maximum hearing loss is 70-80 dB. The nature of the sound events experienced is unchanged. Comprehension of spoken language is restricted, but possible if sufficient overall amplification is provided. The ability to recognize spoken language is greatly reduced with regard to unstressed syllables, although prosodic features are largely preserved. We monitor our own speech using bone conduction.
Therapy for conductive hearing loss initially entails medical action aimed at clearing up any infection or inflammation in the outer and middle ear; this may include drug treatment as well as surgical procedures. The second stage involves initiating the process of fitting hearing devices. The options for this technical intervention include hearing aids (such as behind-the-ear hearing aids and implantable hearing systems) or bone-conduction devices. A good level of speech comprehension is possible, provided that sufficient amplification is supplied.

Sensorineural hearing loss
A hearing disorder that is caused by malformation, injury or conditions affecting the various regions of the inner ear, or the neural pathways ‘downstream’, and/or one that is located in the associated cerebral processing system of the brain, is termed ‘sensorineural hearing loss’. Here, the conversion of acoustic (i.e. mechanical) impulses into neural impulses is impaired, as is their onward transmission. In 98 % of these cases, the cause lies in the cochlea (with retrocochlear hearing disorders being relatively infrequent), which is why the designations ‘inner-ear hearing loss’ and ‘cochlear hearing loss’ tend to be used synonymously with the term ‘sensorineural hearing loss’.

Sensorineural hearing loss can affect people of all ages. Possible causes of these problems include:

Prenatal
Genetic predisposition (indicated by hearing loss in close relatives);
Maternal rubella during the first six months of pregnancy;
Cytomegalovirus infection;
Embryonic infections;
Malformations of the head region;
Chromosomal aberrations.

Perinatal
Oxygen deprivation;
Birth trauma;
Cerebral haemorrhaging;
Apgar score below 5 at 5 minutes;
pH below 7.2;
Birth weight under 1,500 g;
Spells in intensive care;
Severe jaundice.

Postnatal
Meningitis, encephalitis;
Viral illnesses (mumps, measles);
Middle-ear inflammations;
Ototoxic therapy;
Traumatic brain injury;
Cerebral movement disorders;
Presbyacusis (age-related hearing loss);
Menière’s disease;
Acoustic trauma (blast trauma or more chronic exposure to noise);
Sudden hearing loss.

A healthy inner ear has around 15,000 sensory cells (hair cells) responsible for converting mechanical energy into electrical impulses. In people with inner-ear hearing loss, however, some or all of the hair cells have been destroyed, so that electrical impulses can be generated only at those sites where intact sensory cells remain. If the sensory hair cells are destroyed in their entirety, even the most powerful hearing aids cannot help, as they can provide amplification only where action potentials can still be triggered. In these cases, the insertion of an electrode array into the cochlea (in the form of a cochlear implant) is the only way to replace the absent function of the destroyed hair cells and to electrically stimulate the auditory nerve (see Module 3).
Those with sensorineural hearing loss have their auditory sensations affected in several respects. As in conductive hearing loss, there is a loss of intensity; in other words, sounds are perceived as softer (depending on the extent of hearing loss). Speech understanding is made more difficult, especially in noisy environments. A reduction in both frequency-resolution and temporal-resolution ability also occurs. As not all the hair cells are necessarily damaged to the same extent (see Module 3), some frequencies cannot be transmitted at all, while others can in part. High frequencies tend to be affected more than low ones, resulting in auditory sensations that are distorted and fragmentary; hearing is ‘out of tune’, so to speak. Where temporal resolution is affected, high, quiet tones can be masked by low, low-pitched sounds that reverberate for too long. Speech is audible but not always understandable. A further problem is that quiet sounds are, because of the hearing impairment, perceived either poorly or not at all; yet, at a certain level, loud sounds are just as loud to hearing-impaired people – despite their hearing loss – as they are to those with normal hearing (a phenomenon known as loudness recruitment). Smaller fluctuations in volume are therefore perceived more intensely than they are by normal-hearing people. The uncomfortable loudness level (ULL) is not displaced to the same extent as the hearing threshold: it may, despite the hearing threshold being considerably higher than in people with normal hearing, be unchanged or even lowered. The dynamic range is thus constrained on both sides (i.e. at the ‘hearing threshold’ and ‘discomfort’ extremes). The ability to hear spoken language is, in terms of the perception of both self and others, highly restricted.


The human hearing range

In order to understand the limitations associated with hearing impairment, an overview will first be provided of the parts of the auditory spectrum in which human hearing is possible. Our ear is capable of hearing frequencies (i.e. pitches) ranging from around 20 to 20,000 Hertz, the Hertz (Hz) or kilohertz (kHz) being the unit of frequency named after the physicist Heinrich Rudolf Hertz (1857-1894). The designation Hz indicates the number of vibrations (or wave periods) per unit of time: 1 Hz, 250 Hz and 4,000 Hz represent one, 250 and 4,000 such oscillations per second respectively. Doubling the number of vibrations creates a sound that is, musically speaking, an octave higher. Our hearing ability diminishes as we age, especially in the high-frequency range. The frequency range of speech extends from 125 to 8,000 Hz, with the main speech range lying between 500 and 4,000 Hz. The frequencies of the speech sounds vary from low-frequency vowels such as [o] and [u] and nasal sounds such as [n] and [m] up to high-frequency consonants such as [f], [s] and [ʃ].
Our organ of hearing is also able to process non-speech sounds ranging in intensity from extremely quiet (such as a cornfield rustling in the wind) to extremely loud (such as a thunderclap). This large dynamic range means that the ear can, in effect, register sounds on a scale from one to a million (corresponding to the difference between 1 gram and 1 ton). In view of this enormous differential capacity, it makes little sense to represent this vast dynamic range on a linear scale. It is more useful to employ a logarithmic measure, which can express complex mathematical realities in simple numbers. For ease of use, acoustics – and thus audiometry – uses a scale that indicates sound pressure level measured in decibels (dB; 1 decibel = 1/10 Bel), named after the Scottish-born American inventor Alexander Graham Bell (1847-1922). Perception of loudness is subjective and depends on frequency. Zero decibels (0 dB) describes the intensity of sound at which a person with good hearing can just detect a sound played over headphones. This is known as the hearing threshold (marking the transition between the inaudible and the audible range) and, for most people, it lies between 0 and 10 dB. The ‘pain threshold’ is between 130 and 140 dB, and the ‘uncomfortable loudness level’ (ULL) is around 100 dB. The range between the hearing threshold and the ULL is described as the auditory-sensation area, and an example is shown in Figure 1. The X (horizontal) axis shows the various pitches in Hertz (Hz), and the Y (vertical) axis gives volume in decibels from top to bottom. Various noise sources are indicated, clearly showing which frequencies and loudness levels certain sounds consist of. It is a similar picture with speech, the main range of which extends from around 500 to 4,000 Hz and lies between loudness levels of 30 and 60 dB: vowels are louder than soft, voiceless consonants. Speech at normal volume has a loudness level of around 65 dB.
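The logarithmic decibel scale described above can be written as L = 20 log10(p/p0), with the reference sound pressure p0 = 20 µPa lying near the normal hearing threshold. The following sketch (function name illustrative) shows how this scale compresses the ear’s enormous dynamic range:

```python
import math

P0 = 20e-6  # reference sound pressure in pascals (approx. normal hearing threshold)

def spl_db(pressure_pa):
    """Sound pressure level in decibels relative to the 20 uPa reference."""
    return 20 * math.log10(pressure_pa / P0)

# A pressure equal to the reference sits at 0 dB (the hearing threshold),
# while a pressure one million times greater collapses to just 120 dB,
# illustrating how the logarithmic scale tames the 'one to a million' span.
print(spl_db(P0))
print(spl_db(P0 * 1_000_000))
```

Note that doubling the sound pressure adds only about 6 dB, which is why the scale remains manageable across the whole auditory-sensation area.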
Plotted on the graph below you can see the normal hearing curve and how the hearing thresholds change in people with mild, moderate or severe hearing loss and in those with only residual hearing. These curves are for unaided hearing. Similarly, the aided hearing threshold shown in a pure-tone audiogram clearly illustrates which parts of the spectrum are shifted back into the audible range by assistive devices. What is crucial is the level of hearing ability that these technologies make possible.

Figure 1: Auditory-sensation area in humans, showing the frequency range and dynamic range of speech and non-speech sounds as well as differing degrees of hearing impairment. The hearing curve at the top (black circles) represents the hearing threshold of a normal-hearing person; the hearing curve at the bottom (red circles) is for an individual with hearing loss; and the hearing curve in the middle (black squares) shows the aided threshold of hearing (i.e. that obtained using hearing devices). Everything above the hearing threshold is audible.

Awareness of this auditory-sensation area can give you an idea of:
the scope of, and limits to, hearing;
what hearing loss means;
how to assess a hearing curve;
which regions of the frequency spectrum can be made audible again with the aid of hearing devices;
where the boundaries to these regions lie.


Different grades of hearing ability and their impact

Hearing ability can be classified into five main categories depending on both intensity and frequency. Normal hearing is defined as an average hearing loss in the better ear of no more than 20 dB. This average is calculated by averaging the thresholds obtained at 500, 1,000 and 2,000 Hz. Mild hearing loss is defined as an average loss of between 20 and 30 dB. Where the average loss is between 30 and 60 dB, this is defined as moderate hearing loss; where it is between 60 and 90 dB, severe hearing loss; and hearing losses of 90 dB or more are designated profound hearing loss, where only residual hearing is present.
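The classification above can be summarized in a short sketch (the function names and the handling of exact boundary values are illustrative assumptions):

```python
def pure_tone_average(t500, t1000, t2000):
    """Average hearing loss (dB) over 500, 1,000 and 2,000 Hz in the better ear."""
    return (t500 + t1000 + t2000) / 3

def grade(pta_db):
    """Map a pure-tone average to the five grades described in the text."""
    if pta_db <= 20:
        return "normal hearing"
    elif pta_db <= 30:
        return "mild hearing loss"
    elif pta_db <= 60:
        return "moderate hearing loss"
    elif pta_db < 90:
        return "severe hearing loss"
    else:
        return "profound hearing loss (residual hearing)"

pta = pure_tone_average(40, 50, 60)  # thresholds in dB for the better ear
print(pta, grade(pta))  # 50.0 moderate hearing loss
```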
The frequency range between 500 and 4,000 Hz is particularly important for learning spoken language. The individual speech sounds are composed of different frequencies; if these are not audible, the result is distorted, ‘fragmentary’ hearing. Which speech sounds can be perceived and identified depends partly on the shape of the hearing curve. Vowels are to be found in the region between about 250 and 2,400 Hz, whereas consonants occupy the region between about 250 and 8,000 Hz. This explains why, for many hearing-impaired people whose hearing curve extends only up to 3,000 Hz, the only sounds that can be even half-discerned are vowels, whereas many consonants, such as the fricatives (sibilants such as [s], [z] and [ʃ]), cannot be heard at all. One of the main aims in providing people with hearing aids and cochlear implants (CIs) is, as far as possible, to make those frequencies that are relevant for speech understanding audible again.
It is very important that the speech signal is delivered to the child’s ear in as pure a form as possible. However, how well a child can hear – and, above all, understand – spoken language, and how well their residual hearing can be exploited, depends not only on the extent of hearing loss. For example, children whose pure-tone audiograms show that they have almost identical hearing curves may differ strongly in how they hear language. Moreover, the way in which different kinds of hearing impairment affect different aspects of the child’s development varies greatly between individuals. It is therefore very important to get an idea of the child’s hearing ability as opposed to their hearing loss. Thanks to early screening, modern hearing devices and professional auditory-oral early intervention, children who (medically speaking) are deaf can become ‘deaf-but-hearing’ children and acquire spoken language through hearing in the normal way.


Summary

Hearing is truly a miracle of nature. The human hearing range covers the frequencies between 20 and 20,000 Hz. The hearing threshold in normal-hearing people lies between 0 and 10 dB, and the ULL at around 100 dB. A normal-hearing person can hear quiet sounds without difficulty. In the hearing impaired, the hearing threshold shifts towards higher volumes. Depending on the extent of the hearing disorder and how it has progressed over time, the hearing threshold overlaps with the speech range. Those components of speech that fall below the raised hearing threshold are not heard at normal speaking volume. For the acquisition of spoken language it is important that, as far as is possible, all of the relevant components of speech are made audible again by the use of hearing devices. The way in which hearing and language develop depends on many different learning processes, for which hearing devices lay a strong foundation.


Review questions

What, expressed as a frequency, is the human hearing range?
16 to 6,000 Hz
1,000 to 20,000 Hz
20 to 20,000 Hz
500 to 4,000 Hz
What is meant by the auditory-sensation area?
A ‘sensory experience’ theme park
The zone between the threshold and the ULL
An area within which ear protection must be worn
The most comfortable range (MCR) for hearing
What is meant by the hearing threshold?
The amplification of particularly quiet sounds
The minimum sound pressure level required to generate an auditory sensation
The threshold above which sounds are uncomfortably loud
The most comfortable range (MCR) for hearing
Which speech sounds cannot be heard by those with severe hearing loss at 3,000 Hz and above?
Most vowels
Nasal sounds such as [n] and [m]
Fricatives (‘sibilants’) such as [s], [z] and [ʃ]
All speech sounds
What is the point of hearing aids?
They enable users, as far as possible, to hear again all the parts of the auditory spectrum that are relevant to speech
They enable hearing impairment to be healed
All they are good for is being left in the drawer!
They are a big money-earner for the industry


How we can find out what babies and toddlers hear

The main learning goals are to:
understand which problems are associated with audiological diagnostics and the fitting of assistive hearing devices in the first years of life;
be able, nevertheless, to make the case for providing a child with hearing devices as early as possible;
understand how the hearing ability of babies and toddlers can be determined through everyday observation.

Audiological diagnostics in the first years of life

The increasingly widespread use of newborn hearing screening (NHS; see Module 1) means that children are, at an increasingly young age, becoming candidates for full diagnostic procedures in paediatric audiology and for subsequent fitting with assistive hearing devices.
Specific aspects associated with this (such as particular anatomical requirements, the importance of maturation of the auditory pathway, and the fact that subjective hearing responses are difficult to assess) must be appropriately taken into account and are key aspects of both parent guidance and early intervention.
Paediatric audiometry is, in a number of respects, quite different from audiometry in adults. Children are, of course, far more limited in terms of their attention span and ability to cooperate. They grow tired more quickly than adults and are quicker to lose interest in their hearing test. Failure to respond does not, therefore, necessarily mean a failure to hear. It is thus very important to create a testing environment in which audiometric examinations can be carried out successfully even under difficult conditions. This means, for example, dealing flexibly with the rhythms of children’s phases of sleep and wakefulness. Babies and toddlers can, with appropriate educational know-how, be motivated to become attentive and focused on hearing, at least for short periods. Children have to ‘learn’ how to perform in hearing tests: the hearing threshold can very rarely be determined in a single session, and frequent repeats are usually necessary.
Both the conducting of the tests and the evaluation of the results require thorough knowledge of the child’s level of mental and physical development (see Module 8), as well as a profound understanding of how hearing function develops. This function continues to mature during the first years of life, so it is vital that age-dependent auditory responses be taken into account when interpreting audiometric data. For example, the stimulus-response threshold for air-conducted sound in free field (FF) is about 80 dB for a newborn baby; at 3 months this has decreased to about 60 dB; at 6 months it is around 40-50 dB; at 1 year, around 30-40 dB; and after 3 years, around 20 dB. Children’s hearing and response thresholds are not the same as those of adults until they reach the age of about 6.
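As a rough aide-mémoire, the age-dependent milestone values quoted above can be captured in a small lookup (a purely illustrative sketch; midpoints are assumed where the text gives a range):

```python
# Approximate free-field (FF) stimulus-response thresholds by age, as quoted
# in the text; ages in months, thresholds in dB (midpoints where a range is given).
MILESTONES = [
    (0, 80),   # newborn: ~80 dB
    (3, 60),   # 3 months: ~60 dB
    (6, 45),   # 6 months: ~40-50 dB
    (12, 35),  # 1 year: ~30-40 dB
    (36, 20),  # after 3 years: ~20 dB
]

def expected_response_threshold(age_months):
    """Return the most recently reached milestone value for a given age."""
    value = MILESTONES[0][1]
    for months, db in MILESTONES:
        if age_months >= months:
            value = db
    return value

print(expected_response_threshold(4))  # a 4-month-old: 60 (3-month milestone)
```

Such a lookup only illustrates why an elevated response threshold in a baby is not, by itself, evidence of hearing loss; interpretation always requires the developmental context described above.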
As long as the child is unable to provide any verbal feedback about their auditory sensations, diagnosis must necessarily be a process that – tailored to the child’s age and stage of development – is ongoing rather than episodic in nature. Although ‘hard facts’ can be delivered by objective audiometric methods such as the measurement of otoacoustic emissions (OAE) and automated brainstem audiometry (AABR) (see Module 1), these provide only part of the data required. At present, both techniques deliver only limited information about the actual hearing threshold.
Subjective audiometry is also used and is, at this age, based largely on behavioural observations. Techniques here include reflex audiometry as well as behavioural and observational audiometry (with and without visual reinforcement).
It is, in this connection, important to know that unconditioned reflexes are triggered only at very high sound levels of 70-90 dB. Since loud sounds are perceived at practically normal loudness by those with sensorineural hearing loss, a lower level of hearing loss cannot be detected using this method; in other words, normal reflexes in response to loud sounds do not allow conclusions to be drawn about the hearing threshold. Hearing ability is assessed by observing the child’s different responses to acoustic stimulation. Various reflexes can be triggered by acoustic stimuli and observed or measured:
Moro startle reflex (the child flings their arms and legs away from their body and then pulls them back in);
Cochleo-palpebral reflex (the child tenses their eyelids if they are closed, or quickly closes their eyes if they are open);
Breathing reflex (deep inhalation followed by brief respiratory arrest for 5-10 seconds, after which breathing normalizes again);
Stapedius reflex.

The unconditioned reflexes in newborns are reduced between the ages of 2 and 4 months. The first orientation responses then develop. From this point, behavioural and observational audiometry involves determining whether there are reproducible reactions to acoustic signals in the form of behavioural changes. Possible responses may include:
change in facial expression;
turning or moving the head;
moving the eyes or eyebrows;
feeding activity: pausing or sucking harder;
changes in breathing;
moving the arms and/or legs.

Interpreting the responses obtained with this technique requires a great deal of experience. Behavioural and observational audiometry, too, does not give a full picture. The hearing curve obtained in this way represents merely a response threshold, as children at this early age cannot yet indicate when they have only just heard a sound; they tend to react only if the sound is somewhat louder and clearly audible. A positive response does not automatically mean that normal hearing is present. By the same token, an absent or greatly delayed reaction need not necessarily mean that the child is hearing impaired. The responses derived from both reflex and behavioural-observation audiometry at this age are, therefore, insufficient grounds for fitting hearing aids and do not amount to an indication for cochlear implantation.
In order that this information can attain the status of ‘hard facts’, there is a need during this early phase to incorporate ‘soft data’ – such as systematized behavioural observation – in different situations. To this end, techniques have emerged such as video-aided analysis of interaction processes or questionnaire batteries. Stimulus- and situation-dependent hearing performance must be recorded and documented both using audiometry and in the child’s everyday life (as they interact with childminders and within the family, playgroup, crèche, etc.). These findings should be obtained, documented, used and evaluated (see Section 4) to serve as the underlying data for hearing-aid optimization and for creating an individual intervention programme.
With young children, it is especially important to ascertain how well the residual hearing ability can be exploited for language perception. The way in which the child hears speech and perceives it auditorily must be investigated. This is, however, possible only if the child is already so far advanced in their language development that they have incorporated the linguistic material from speech audiometry into their vocabulary, as they need to repeat these items or indicate what they have heard by pointing to picture-cards lying in front of them. One possible alternative here is the method developed by Eargroup Antwerpen (http://www.eargroup.net/), namely A§E (The Auditory Speech Sound Evaluation). Here, test material for discovering, discriminating between and identifying speech sounds has been put together, incorporating isolated speech sounds (phonemes) that occur in many languages, and which can be used irrespective of language development level and native language.
Another option is what is known as the Ling test, named after the Canadian audiologist Daniel Ling. Although very simple to conduct, this test delivers highly reliable information. Six phonemes – given the international phonetic symbols [u] (pronounced ‘oo’), [ɑ] (pronounced ‘ah’), [i] (pronounced ‘ee’), [ʃ] (pronounced ‘sh’), [m] (pronounced ‘mm’) and [s] (pronounced ‘ss’) – are used; these differ in frequency and together cover the entire spectrum of speech, i.e. those frequencies that one must be able to hear in order to understand spoken language. The child should be able to distinguish acoustically between these six sounds. The sounds are presented to the child, one at a time, at normal speaking volume. The distance between child and sound source can be set on an individual basis. Depending on their age and language skills, the child either indicates whether they have heard the sound, simply imitates it or points to the relevant picture. If the child is well able to discriminate between all these sounds, it can be assumed that they are – audiologically speaking – able to hear the most important auditory elements of spoken language.
Figure 2: Phoneme distribution in Daniel Ling’s Six Sound Test
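As an illustration of how the responses from a Ling-test session might be recorded, here is a hypothetical sketch (the data structure and scoring scheme are assumptions for illustration, not part of Ling’s procedure):

```python
# The six Ling sounds; together they span the speech-relevant frequency range.
LING_SOUNDS = ["m", "u", "a", "i", "sh", "s"]

def score_session(responses):
    """Summarize one session; responses maps each sound to
    'identified', 'detected' or 'none', as judged by the tester."""
    identified = sum(1 for s in LING_SOUNDS if responses.get(s) == "identified")
    detected = sum(1 for s in LING_SOUNDS
                   if responses.get(s) in ("identified", "detected"))
    return {"detected": detected, "identified": identified, "of": len(LING_SOUNDS)}

session = {s: "identified" for s in LING_SOUNDS}
session["s"] = "detected"  # e.g. the high-frequency [s] was heard but not identified
print(score_session(session))  # {'detected': 6, 'identified': 5, 'of': 6}
```

Repeating such a record over many sessions and listening distances is one simple way to document the everyday hearing observations that the following sections call for.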


Fitting hearing aids and cochlear implants (CIs) in the first years of life

An important first prerequisite for successful hearing-aid fitting in children is that the hearing impairment be correctly diagnosed and the hearing threshold determined as accurately as possible. It is precisely this latter point – reliably establishing the threshold – that often proves difficult (see Section 3.1). Hearing aids must initially be fitted on the basis of incomplete information. Moreover, especially in the first years of life, hearing is subject to continuous development. Even if the actual extent of the hearing impairment cannot yet be reliably determined in infants and young children, the process of fitting hearing aids should still be set in motion. Over time, various diagnostic measures must be additionally employed and the findings incorporated in a process of ‘gradual fitting’. Fitting is a process that extends over several months. Initially, the priority is to achieve any kind of hearing response at all, and a highly cautious approach is required. In subsequent fittings an attempt is made to push back the limits and find the best possible setting, in order to create optimal conditions for language acquisition. Thanks to modern, non-linear hearing-aid technology (see Module 3), the risk that a child’s ears will be subjected to an excessively high sound pressure level is lower today, as loud signals are amplified far less than quiet ones. This does, however, raise the question of whether the auditory nerve is stimulated sufficiently to facilitate hearing and language acquisition on a par with that of hearing children.
In the fitting process, too, we have hard data telling us about the properties of the hearing aid or cochlear implant (CI) and how the device is fitted or programmed. These are, however, not sufficient alone. Severely hearing-impaired infants and toddlers are, in this early phase, not yet able to provide feedback concerning the effectiveness of their assistive hearing devices. They have no hearing experience as a comparative frame of reference – on the contrary, it is with these devices that they are to learn to hear for the first time. Here again, the only thing that helps is making observations of everyday behaviour, picking up on many different and detailed aspects, in different hearing environments. Ongoing monitoring, taking into account the observations of parents and other caregivers, is important in order to gradually achieve the best possible hearing-aid setting (see Section 4).
During this phase it is crucial to sensitize the parents to the importance of the technical aids by making them aware of what the child can hear with and without assistive hearing devices (see Section 2.2). They must learn how to manage this technology and, for example, what to do when the hearing aid whistles while they are feeding the baby.
If the hearing aid is rejected by a child, this may be due to the new, unfamiliar situation or have causes that should be urgently dealt with (e.g. the earmould is pressing on the wall of the ear canal, there is an inflammation or allergic reaction in the ear canal, or the hearing aid is wrongly set) (see Module 3). Particularly in the first weeks and months, strict monitoring by the audiologist or the clinic is required here.

Back to Top

Summary

Audiological diagnostics and fitting with assistive hearing devices is necessary right from the first year of life, in order that the opportunity afforded by early detection (thanks to the establishment of newborn hearing screening) is properly exploited. It has become clear that available diagnostic techniques for this stage of life need to be developed further in order to enhance their suitability. It is certain that the testing parameters alone are currently not sufficient and that soft data are highly relevant, especially in early childhood when children are unable to give us any feedback on their auditory sensations. What is crucial is an ongoing diagnostic process in which not only the fitting of assistive hearing devices is optimized, but various aspects of development – hearing, relational, language and social development – also receive ongoing support in terms of educational diagnostics.

Back to Top

Review questions

At what age does the hearing threshold of children match that of adults?
3 years
6 years
Immediately after birth
1 year
Why is the informative value of behavioural and observational audiometry limited?
Because children respond only to a very high level of sound
Because the findings are interpreted by different people
Because babies and infants do not like being watched
Because, even if responses are positive, it cannot automatically be assumed that hearing ability is normal
Which of these are not valid arguments against fitting infants with hearing aids?
Insufficient information about the shape of the hearing-threshold curve
Inadequate cooperation on the part of the child
Different anatomical conditions in the ear canal
The young age of the child (e.g. 3 months)
How can we assess whether the child can hear the most important elements of spoken language?
By means of objective audiometry (OAE and AABR testing)
We cannot, as the child is unable to tell us
By using the ‘aided threshold of hearing’ curve
By using the Ling test
How can the effectiveness of hearing-aid fitting in a 6-month-old baby be assessed?
By using speech development tests
From a read-out of the parameters in the hearing aid’s recording unit
By using batteries of questionnaires to assess hearing development in everyday life
By establishing whether the child can reliably locate a sound source

Back to Top

The recipe for success in helping children with hearing impairment to hear

The main learning goals are:
to appreciate the key milestones in hearing development;
to be aware of criteria for documenting hearing development in hearing-impaired children;
to understand how use can be made of development protocols to evaluate the success of hearing-aid/cochlear implant (CI) fitting and early intervention.

Milestones in hearing development

The way in which hearing function develops, and in which functional hearing leads to ‘comprehending’ hearing, not only involves a succession of maturation processes but primarily depends on the right input for learning being provided within the right timeframe. In order to be able to assess the progression of hearing development in hearing-impaired children, it is important to keep in mind how hearing function develops in children with normal hearing.
In normal-hearing children, hearing development begins as early as the last four months of pregnancy. Prelingually deaf children may lack these early hearing experiences. Especially in the first and second years of life, important processes take place in the physiological maturation of the auditory pathway, which are the prerequisite for the development of normal auditory processing and perception.
As described in 3.1, newborn children initially respond only to very loud sound events. Up to the age of 4 months, there is not necessarily an immediate response to loud speech if this comes from outside the field of vision. It may be that the baby reacts only once or twice to auditory input and later no longer responds to the same cue. At this early stage, the child will perceive and discover that there is a world of sound out there and pay increasing attention to auditory sensations. They may briefly pause, stop moving and listen. They can distinguish between sound and silence. At 3 months they can, increasingly, also perceive quieter sound stimuli. They may intensify or reduce sucking behaviour in response to sound. The voices of the parents are of particular importance throughout this phase: the child will quieten or smile when spoken to. During this early stage, even hearing children do not yet know that noises or speech carry meaning.

At about 6 months, a hearing child is able to move their eyes towards the source of a sound they hear. They begin trying to locate where these sounds are coming from. The ability to localize sound is important for spatial hearing and forms one of the key milestones in the hearing development of a child. At 6 months, the child’s responses become more obvious. They show interest in music. The child discovers their own voice and, at the age of 9 months, can already distinguish between the voices of people they know. They recognize various common everyday noises and respond to them appropriately. They recognize prosodic aspects of speech such as duration, pitch, differences in loudness, rhythm and stress. And they listen when spoken to.
Other steps include the ability to discern whether two or more linguistic utterances are the same or different, i.e. the child must learn to be aware of the order in which these utterances come. Hearing development now goes hand in hand with language development (see Module 8). The child acquires the ability to repeat the name of an object, point to it or follow instructions. They can recognize, and distinguish between, the Ling sounds, and can identify words with different numbers of syllables as well as words with the same vowels but different consonants, and vice versa. They understand instructions that they hear every day and the typical expressions used within the family; they may, for example, be able to point to parts of their body when asked to. They understand simple instructions and simple questions (such as “Roll the ball,” “Kiss the doll,” “Where’s your shoe?”). That a child understands speech through the ear is evident from, among other things, their answering questions (although the child does not have to word their reply in the same way as the questioner). The child wants to tell stories or have stories read to them, and understands these. They point to pictures in a book when these items are named. They become more and more able to carry out multi-step instructions, and grow in skill as a conversational partner. They increasingly also understand speech that is not directed at them, such as when the parents are speaking on the phone (Schmid-Giovannini, no year given). The following table provides a summary of all these points:

Table 1: Stages in hearing development

Development of hearing, from birth to the age of 5:
Perception of auditory sensations
Auditory attention and awareness
Localization of auditory sensations
Discrimination between auditory sensations
Auditory feedback system
Recognition of one’s own voice
Processing of sequential information
Understanding language

By the age of 5, the child can understand what they hear at a sophisticated level.

Back to Top

Hearing development in hearing-impaired children

The hearing-impaired child passes through the same stages as their normal-hearing peers. In order that they are neither overtaxed nor underchallenged, it is important to know at which level of development the child is and which stage can be expected next. Then, and only then, can the question be asked, “How can the child make the transition to this next step?” In this connection, the crucial thing is less the child’s chronological age than their hearing age. Hearing age refers to the time that has elapsed since the initial hearing-aid fitting or the initial programming of the speech processor. For example, a 1-year-old child who received hearing aids or a CI at 6 months has a hearing age of around 6 months. How quickly a child begins to hear, how soon they learn to recognize speech through the aural channel, and how soon they begin to develop these abilities, depends on many individual factors.
If the child has been fitted with hearing aids, the parents often expect them to achieve results quickly. They hope for immediate acceptance and clearly visible responses. Hearing aids or a CI enable the child to – physically – hear well, but do not help them to clearly tell what they are hearing and where it comes from. It is only through hands-on experience, by connecting with the people and objects around them, that the meaning of what is heard becomes clear. It is very important to register even the smallest step forward in hearing development and to positively reinforce each and every one of the child’s hearing responses – when they listen, when they react to sound sources, when they turn round, and when they imitate. Parents and professionals need to have confidence that normal patterns of hearing will develop if all the above factors are taken into account.

Back to Top

How to observe hearing development and make sure it’s on the right track

Newborn hearing screening now enables hearing impairment to be detected and treated at an early stage (see Module 1). Whether a child’s current hearing-aid solution is sufficient, whether it requires optimization, or whether the child should receive cochlear implants (CIs) at an early stage, can be assessed only by carefully documenting hearing development. It is important to give the parents and caregivers criteria by which they can recognize the child’s responses to auditory stimuli, and the conditions on which these responses depend. With a view to recognizing which stage of hearing development the child is at, and whether development is indeed taking place or has stagnated, a number of questionnaire batteries are currently available in several languages. These questionnaires are addressed to parents, childminders, nursery school teachers and early-years practitioners in order to specifically assess the child’s hearing responses and hearing behaviour by reference to key questions, and thus to document specific milestones. While it is not the task of the caregivers to carry out these monitoring protocols themselves, they can make sure that the tools are used carefully by parents and professionals alike.
Examples of these arrays of questionnaires are:

Documenting hearing development
LittlEARS Auditory Questionnaire (MED-EL 2003)
The ‘LittlEARS Auditory Questionnaire’ is a parents’ questionnaire for recording, in the child’s everyday environment, the early hearing development of children following newborn hearing screening, from birth to the age of 24 months, or that of children with a CI or hearing aid who have a hearing age of between 0 and 24 months. It is the first module of the ‘LittlEARS battery’, which is designed for recording and assessing prelingual auditory development in very young children. It consists of 35 ‘yes or no’ questions for parents. The number of questions answered ‘yes’ gives the total score, which correlates with the child’s hearing age. This score is compared with the critical reference values given. If a child achieves a score above the minimum score (i.e. the lowest result that a child of this hearing age should achieve), then it can be assumed that the child’s hearing is developing normally for their age.
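The scoring logic described above can be sketched in a few lines. This is a minimal illustration only: the actual critical reference values ship with the questionnaire and are not reproduced in this Module, so the minimum value used below is a placeholder, and the function names are our own.

```python
def littlears_score(answers):
    """Total score = number of the 35 yes/no questions answered 'yes'."""
    return sum(1 for a in answers if a)

def development_on_track(score, minimum_expected):
    """Development is assumed age-appropriate if the score reaches at least
    the minimum that a child of this hearing age should achieve."""
    return score >= minimum_expected

# 35 answers (True = 'yes'); the minimum of 17 is a placeholder value,
# NOT a real norm -- real critical values accompany the questionnaire.
answers = [True] * 20 + [False] * 15
score = littlears_score(answers)
print(score, development_on_track(score, 17))
```

In practice the comparison is made against the published age-specific norm curve, not a single fixed threshold.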
The ‘LittlEARS Auditory Questionnaire’ is available in the following languages: German, English, French, Spanish, Bulgarian, Dutch, Finnish, Norwegian, Serbian, Slovakian, Slovenian and Turkish.

My LittlEARS Diary (MED-EL 2005)
In the ‘My LittlEARS Diary’, information is collected about early hearing, speech and language development. It is used for recording and assessing the early development of hearing-impaired children with a hearing aid or CI. This information forms the basis for both research and therapy.
It contains:
a diary for the parents with guidance on purposeful monitoring of the child’s behaviour;
a parents’ handbook explaining how to use the diary;
a handbook for the therapist with an overview of the milestones in hearing development in the first two years of life (or the first two years of hearing);
guidance on how to use the diary in therapeutic practice;
‘diary overview sheets’ for documenting questions and observations from the diary and from parents’ reports;
a list of the hearing-impaired child’s ‘first words’.
The ‘My LittlEARS Diary’ is available in both German and English.

Monitoring Protocol for Deaf Babies and Children (Lewis et al. 2006)
The Monitoring Protocol for Deaf Babies and Children is part of the central and well-evaluated Early Support government programme in the United Kingdom (www.earlysupport.org.uk). It is used for identifying and documenting the developmental progress shown by a hearing-impaired child between the ages of 0 and 3, and as a guide indicating which developmental stage can be expected next. It forms the basis for discussion and decision-making between parents and professionals from various specialist disciplines. It consists of a comprehensive booklet on how to use the protocol, as well as the protocol itself with overview tables and highly detailed and specific checklists for monitoring the child’s behaviour in the following categories: communication, attending, listening and vocalization, socio-emotional, play and other developmental milestones. Those aspects that are affected by the hearing impairment are broken down for recording purposes on a particularly detailed basis. Here, again, distinctions are made between attending, listening and vocalization. It also contains summaries of all developmental characteristics that can be used to compare the various aspects of development, as well as colour-coded development profiles. This colour-coding helps make progress visible and highlights where, at any given time, different areas are at different levels of development.

ELF – Early Listening Function (Anderson, K.L. 2002)
http://www.phonak.ch/ccch/professional-2/pediatrics/diagnostic.htm
ELF is designed to provide parents and other caregivers with insight into the functional hearing of infants and toddlers. It contains very precise descriptions of how particular observations can be obtained in everyday life. It has three main objectives:
Active involvement and empowerment of the parents;
Assessing the effectiveness of hearing devices based on everyday observations;
Documenting progress in hearing development.
ELF, too, envisages that the responses will be made available to the early-intervention team in order to gear the intervention programme to the family’s individual needs. The auditory stimuli are not calibrated signals; rather, it is a matter of checking whether various quiet and loud activities – such as ‘Mum singing a song’ – are heard from five different distances, and whether a hearing response takes place. ELF covers understanding both in quiet and in noise. It is not a diagnostic tool or formal screening technique for detecting hearing impairment, and cannot replace acoustic testing methods for verifying hearing-aid fitting. Rather, in conjunction with the parents and nursery school teachers, it is aimed at obtaining information about how the child uses their hearing ability in specific everyday situations.
A second sheet is then completed when a new adjustment takes place, a new ‘map’ (i.e. a CI programme) is created, or additional technology – such as an FM system – is used. Changes in specified hearing responses are to be entered using a five-step scale.
Specific intervention measures are then to be drawn up based on these observations, in consultation with the audiologist / early-years practitioner.
ELF is available in English.

Verifying the success of technical intervention
It is very important to take the child seriously if they complain of problems with the hearing aid or the CI. Difficulties may lie with the technical settings or with auxiliary parts such as earmoulds (see Module 3). Questionnaire batteries are used for assessing hearing responses and acceptance of hearing devices in everyday use. Examples include the parental questionnaires created by an interdisciplinary working group based in Germany, which are available in German and English and can be obtained online or from Widex Deutschland. These are designed both to help log observations on a systematic basis and to highlight possible problem areas. The results of the questionnaire can also be used to compare hearing success with various models of hearing aid on an everyday basis, and to monitor hearing development.
Different inventories are available for ages 0-3 and 3-6:
Questionnaire and instructions for parents, for use with infants and toddlers who are not yet able to speak themselves: questionnaire (English)

Questionnaire for use with children aged 3-6 years who have already begun to communicate using spoken language:
parent questionnaire, part 1 (English)
parent questionnaire, part 2 (English)
These include highly specific questions on the use and acceptance of, and problems with, hearing-aid intervention. A choice of four answers is designed to help parents or nursery school teachers focus their observations when monitoring. The questions are to be answered by the parents at home, and the replies can then be transferred into an evaluation table by the supporting professionals during their discussions with the parents.
As well as hearing development, other aspects of development are also to be observed and documented (see Module 8), such as the development of early-childhood parent-child interaction in language, cognition, motor skills, play and social/emotional behaviour.

Back to Top

Summary

Hearing-impaired children pass through the same stages of hearing development as their hearing peers, although not necessarily in the same chronological order. Knowledge of the developmental sequence is helpful in predicting which learning stage can be expected next. Qualitatively chronicling the development of hearing largely involves collecting information from the child’s everyday life and their hearing behaviour in a wide range of situations. Questionnaire batteries and development protocols make a considerable contribution to evaluating the use of technical aids and early intervention between the ages of 0 and 3.

Back to Top

Review questions

Which hearing functions can a 2-month-old baby be expected to have?
Reliable localization of a sound source
Reliable response to quiet sound events
Clearly visible response to loud sound events
Focused attentiveness to acoustic events
Which hearing functions can a 6-month-old baby be expected to have?
Reliable localization of a sound source
Reliable response to quiet sound events in a noisy environment
Clearly visible response to loud sound events
Distinguishes between words that sound alike
How can one establish at an early stage that hearing development is taking place?
By regularly performing pure-tone audiometry
By means of objective audiometry
By comparison with the child’s language development
By observing and documenting hearing responses in everyday situations using questionnaires or development protocols
What needs to be done when hearing development is stagnating?
Patience is called for, as some children take longer than others
Optimize use of technical aids and review the early-intervention strategy
Initiate intensive auditory training straight away
Send the child to a special institution for the hearing impaired

Back to Top

The listening environment in early childhood

The main learning goals are:
to appreciate the various factors that affect understanding in enclosed spaces;
to understand why poor room acoustics are a much bigger problem for hearing-impaired children than their hearing peers;
to be aware of ways of optimizing the acoustics and to be able to implement these as appropriate to the situation.

Back to Top

Factors influencing speech understanding in enclosed spaces

The environment in which the child receives language input is not always ideal for doing so. Ambient noise can make it very difficult – and sometimes impossible – to understand speech. A room’s acoustics play a crucial part in determining how the child takes in what they hear and how it is presented to the brain. This makes it especially important to create conditions that are as favourable as possible. If noises, music or speech are easily audible and understandable in a particular environment, this indicates that the acoustic conditions here are particularly good. The main factors that greatly hinder and interfere with the hearing and understanding of spoken language in a given environment are background noise, distance and reverberation.

Background noise
Even very young children are very rarely in completely quiet environments. Even if there is no-one in a room, a certain level of sound is always present. If several people are present, the intensity of background noise increases dramatically. And, at home, the problem is compounded by other sources of interfering noise such as CD players, the TV, domestic appliances, heating, toilet flushing, air conditioning or electric fans, as well as noise from outside – such as that of traffic from a busy street – which enters through open or poorly soundproofed windows. If several children are in the home or in a room, the sound level rises rapidly owing to the noise generated by toys and the children themselves, who talk, call out, cry, laugh and so on. The situation is, of course, similar in a nursery school setting. It is easier for the hearing-impaired child to learn to ignore and tune out unvarying, relatively quiet continuous noise such as air conditioning or electric fans, even though these sounds may still make it harder to hear and understand. It is, however, very disruptive if – while a story is being read to the class, say – the child sitting next to the hearing-impaired child is speaking loudly. And it will be impossible to follow a conversation at meal-times, when several people are speaking loudly at the same time and may be talking to each other across the table.
If speech is to be understood well, the background noise level should not exceed 45 dB. In real-life situations, however, it often lies between 60 and 80 dB (the latter being equivalent to the noise level on a busy road). This situation is especially problematic for hearing-impaired children in that – in order to understand well – they need, more so than normal-hearing people, the ‘signal’ to be very distinct from the ‘noise’. The following example should highlight what this means in practice: if the average background noise level in a room is 55 dB (equivalent to a quiet conversation among adults) and the average signal level (e.g. the voice of the mother or another caregiver) is 65 dB (i.e. the usual level for conversational speech at normal volume), then the signal-noise ratio is +10 dB. This means that the signal is 10 dB louder than the level of interfering noise – and in certain parts of the room the ratio may be as low as +5 or even 0 dB. However, the optimal ratio for hearing-impaired children is at least +15 to +20 dB, i.e. the signal should be 15-20 dB louder than the noise. Hearing people are still able to follow conversation without any difficulty even where the signal-noise ratio is unfavourable, as they possess sufficient linguistic knowledge to fill in any perceptual gaps. It is precisely this that is far harder for the hearing-impaired child. In a noisy environment, their understanding can be fragmentary at best.
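The arithmetic in the example above is simply a difference of two decibel levels. As a minimal sketch (the function name is our own; the +15 to +20 dB target is the figure quoted in the text):

```python
def signal_noise_ratio_db(signal_db, noise_db):
    """Signal-noise ratio in dB: how far the speech signal
    stands out above the background noise."""
    return signal_db - noise_db

# Example from the text: caregiver's voice at 65 dB
# over 55 dB of background noise.
snr = signal_noise_ratio_db(65, 55)
print(snr)  # 10 -> workable for normal-hearing listeners, but below
            # the +15 to +20 dB that hearing-impaired children need
```

The same subtraction explains why lowering the noise floor (e.g. by acoustic treatment) is just as effective as raising the speaker's voice.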

Distance
The greater the distance between a sound source and the listener, the weaker the sound energy will be (i.e. the lower the volume) when it reaches them. A useful rule of thumb is that if the distance is doubled, the intensity of the speaker’s voice is diminished by 6 dB. An example will illustrate just how much the signal is reduced: if someone is speaking at normal volume, the intensity will be about 65 dB at a distance of one metre, 59 dB two metres away, 53 dB at four metres, and so on. Children sitting close to the person speaking will be able to hear around 80 % of what he or she is saying, whereas those who are further away may have to be content with no more than 60 %. This also soon becomes evident at home: a hearing-impaired child sitting on Mum’s lap while they look at a picture-book together will understand every word. However, if the mother is standing in the kitchen with the dishwasher on, while the child is playing in their room at the other end of the home, then the child will not – even assisted by the best digital hearing aids or a CI – be able to understand everything their mother says; they may just about be aware that they are being called. The same is evident at nursery school, for example when the children are sitting round in a circle: those children who are sitting close to the teacher have a far better chance of fully understanding them. Although digital hearing systems and CIs can make many sounds audible again, the range of their microphones is limited – which is why they show their limitations over greater distances and in noisy environments.
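The rule of thumb above (minus 6 dB per doubling of distance) follows from the inverse square law for a free sound field. A short sketch, with the 1-metre reference level and function name chosen for illustration:

```python
import math

def level_at_distance_db(level_at_1m_db, distance_m):
    """Free-field sound level at a given distance, referenced to 1 metre.
    Each doubling of distance costs 20*log10(2), i.e. about 6 dB."""
    return level_at_1m_db - 20 * math.log10(distance_m)

# Matches the example in the text: normal speech, ~65 dB at 1 m.
for d in (1, 2, 4):
    print(f"{d} m: {level_at_distance_db(65, d):.0f} dB")  # 65, 59, 53 dB
```

Real rooms are not free fields (reflections partly offset the drop), so the rule is an approximation, but it makes clear why seating the child close to the speaker matters so much.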

Reverberation
Reverberation, which arises through repeated reflection of sound, indicates how ‘echoey’ a room is. The reverberation time indicates how long it takes for sound to die away in an enclosed space. In rooms with a long reverberation time, sound events remain audible for a sustained period. A proportion of the sound waves reach the listener’s ear directly, but the greater part are – depending on the properties of the walls, ceiling and floor – either reflected back or absorbed. How much sound is reflected and how much absorbed is what determines a room’s acoustics.
If the reverberation levels in a room are too high (or the reverberation time too long), then even normal-hearing people have difficulty understanding speech. Everyone knows how hard it can be to make out announcements over the public-address system at a station, or what the acoustics are like when one moves into a new flat that is completely empty. If the reverberation time is long, the first syllables spoken resonate for too long and mask the ones that follow. Consonants are absorbed more quickly, and are not heard as well as vowels, which tend to be low-frequency sounds and are reflected for longer. The speech signal is distorted, which makes it harder to understand. Furthermore, if the reverberation time is too long, undesirable noises such as the scraping of chairs on the floor, coughing or building bricks being banged together remain in the room for too long, causing the overall noise level to rise further. According to the latest findings, the optimal reverberation time for rooms in which hearing-impaired children are engaging in oral communication is 0.3 to 0.4 seconds. In many nursery schools – in which visual criteria are accorded a higher priority than acoustic considerations – and in older buildings with high walls and smooth, highly reverberating floors, reverberation times are often found to be considerably longer.
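The reverberation time discussed here is conventionally estimated with Sabine's formula, RT60 ≈ 0.161 · V / A, where V is the room volume and A the total equivalent absorption area. The formula is standard room acoustics rather than something given in this Module, and the room dimensions and absorption values below are invented purely for illustration:

```python
def rt60_sabine(volume_m3, absorption_m2_sabins):
    """Sabine's estimate of reverberation time in seconds:
    RT60 ~ 0.161 * V / A, with V in cubic metres and A the total
    equivalent absorption area in square metres (sabins)."""
    return 0.161 * volume_m3 / absorption_m2_sabins

# Hypothetical nursery room, 7 m x 6 m x 3 m = 126 m^3.
# With hard, smooth surfaces (A = 25) the room is too 'echoey';
# adding carpet, curtains and soft furnishings (A = 60) brings
# RT60 near the 0.3-0.4 s target mentioned above.
print(round(rt60_sabine(126, 25), 2))  # ~0.81 s
print(round(rt60_sabine(126, 60), 2))  # ~0.34 s
```

The formula shows why the simple measures listed in the next section work: every square metre of absorbing material added to the room directly shortens the reverberation time.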

Back to Top

Measures aimed at improvement

Everyone in the room – not only the children with hearing impairment – will benefit from measures to enhance its acoustics. The immediate priority is to look at the external environment from the educational standpoint as well and – as far as possible – to optimize it. This means doing everything that can be done to improve the room acoustics, provide good conditions for hearing and, above all, create and maintain interest in hearing. In this connection, the language used by the caregivers is particularly important: it needs to be stimulating and to foster hearing development.

Back to Top

Improving room acoustics

Simple measures
To start with, there are a number of simple, low-cost measures that will help to reduce a room’s reverberation time and minimize noise. The reason why even small rooms can be very resonant is that they have large, smooth surfaces – such as a smooth ceiling, a smooth, highly reverberating floor, smooth walls or a large window surface area. Sound reflected back from these surfaces is highly detrimental. An attempt must therefore be made to break these surfaces up and to furnish the room in such a way that sound is absorbed rather than reflected. This can be achieved by measures such as:
putting up curtains;
putting felt pads under chair and table legs;
laying a carpet;
obtaining cushions for ‘circletime’;
making room dividers out of fabric;
cladding ceilings or walls;
covering notice boards with fabric;
inspecting the furniture as to the noise it makes (e.g. squeaky drawers);
covering toy boxes with fabric, carpet offcuts, etc.;
sealing off doors in order to reduce infiltration of external noise sources.

Overhauling the structural fabric of the room to optimize its acoustics
If the above measures are insufficient (as in an old building, for example), one may need to think about radically overhauling the fabric of the room, i.e. undertaking structural measures such as putting in sound-absorbing wall and ceiling coverings, and soundproof windows. If the room has a highly reverberating floor (e.g. one made of linoleum or PVC), this should be replaced by a wear-resistant carpet. Acoustic requirements should take precedence over cleaning considerations. If a new nursery school is to be built, all the available scope for sound insulation should be exploited at the planning stage. This is cheaper than making alterations after it has been built.

Soundfield system
Another option is to install a soundfield system. This involves the speaker’s voice being picked up by a microphone near their mouth, slightly amplified, and relayed by radio (i.e. a wireless system) to several loudspeakers in the room, thus enabling the sound to be evenly distributed so that the hearing conditions are the same everywhere in the room, irrespective of the distance from the speaker. Everyone listening in the room will benefit from these improvements, whether they are hearing-impaired or not.

FM systems

In order to achieve the best possible acoustic conditions for the hearing-impaired child in larger rooms in which several people are present, an FM system may prove a worthwhile complement to hearing aids or CIs.
FM is short for ‘frequency modulation’. An FM system consists of a transmitter that the speaker wears on their body, and one or two receivers connected to the child’s hearing aid or speech processor via the audio input. The speaker’s voice is picked up by a microphone placed around 15-20 centimetres from their mouth and fed wirelessly into the child’s hearing aid or CI. The child then hears the speaker exactly as if they were speaking directly into the microphone of the hearing aid or CI from only 15-20 centimetres away. Hearing via an FM system is comparable to listening to the radio on an FM set, which likewise involves radio waves being propagated over large distances. An FM system makes it possible to flexibly increase the distance between the speaker and the hearing-impaired child without understanding deteriorating, as the child always hears the speaker’s voice at the same volume. Background noise is not a problem, as the microphone of the hearing aid or CI is either switched off or set much quieter, creating a far more favourable signal-to-noise ratio.
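The advantage of the close-talking microphone can be illustrated with simple free-field acoustics: sound pressure level drops by about 6 dB each time the distance from the talker doubles (the inverse-square law). The sketch below is an illustration of that principle only, not a specification of any FM product; the speech and noise levels are hypothetical.

```python
# Illustrative sketch (hypothetical levels, free-field assumption):
# why a microphone ~0.15 m from the mouth yields a far better
# signal-to-noise ratio than a hearing-aid microphone 2 m away.

import math

def level_drop_db(d_near_m, d_far_m):
    """dB of level lost when moving from d_near to d_far from the source."""
    return 20 * math.log10(d_far_m / d_near_m)

speech_at_mouth_db = 75.0   # hypothetical speech level at 0.15 m
noise_floor_db = 55.0       # hypothetical steady background noise

snr_fm = speech_at_mouth_db - noise_floor_db                       # FM mic
snr_room = (speech_at_mouth_db - level_drop_db(0.15, 2.0)) - noise_floor_db

print(round(snr_fm, 1), "dB")    # → 20.0 dB  (close microphone)
print(round(snr_room, 1), "dB")  # → -2.5 dB  (listener 2 m away)
```

With these invented but plausible figures, the speech signal at 2 m actually falls below the noise floor, whereas the FM microphone preserves a comfortable margin – which is the whole point of keeping the microphone near the speaker’s mouth.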
The FM system should always be used when the hearing-impaired child needs to receive acoustic information reliably and completely – across greater distances and for any length of time – in a noisy environment. The system is most straightforward to use when only one person is speaking, as during ‘circletime’ at nursery school or later at school. It can also be a good idea on trips, so that the hearing-impaired child remains ‘within earshot’, so to speak, even when they are quite a distance away. When driving, or when cycling with the child in a bike trailer, the FM system is also useful, as interfering noise can be masked out and speech can reach the child’s ears in ‘pure, undiluted’ form. Use of the system in other situations, however, calls for careful consideration. Say the mother is in the kitchen and the child is playing in the garden. While it may be convenient for the mother to maintain auditory contact with the child at all times, this is not very beneficial for the child’s learning to hear, as the garden offers a whole other world of sounds that should not be masked out. During free play at the nursery school, for example, it is important that the child can also hear the other children and not just the teacher. And the system must be switched off when the adult is speaking to other people, as otherwise the hearing-impaired child will hear those conversations clearly, but not what is going on immediately around them.
An FM system can bridge distances, bring the speaker’s voice right up close and make it easier for the child, in different situations both at home and in kindergarten or (later) at school, to receive and understand this language input. However, learning to use it properly will take some time.

An aurally stimulating environment

As well as ideally being provided with state-of-the-art technical aids, additional technology such as an FM system, and a room with optimal acoustics, the hearing-impaired child needs input that stimulates them to learn to hear. Their interest in hearing, and their desire to hear, must be aroused in all possible situations. Research into language acquisition has shown that, in many cultures, child-directed speech (CDS) has special characteristics that help the child get used to language. Chief among these are prosodic features: an expressive, rhythmic, very melodic speech pattern. Sounds higher up the frequency spectrum are used, and the intonation structures are clearly emphasized. Vowels are stressed and/or lengthened. Among the names given to this form of speech are ‘baby talk’ and ‘motherese’; it provides fascinating input that captivates the infant’s attention and stimulates hearing. There are good reasons for assuming that hearing-impaired children have a special need for this.
The best instrument here is the human voice. It is therefore necessary that parents, childminders, teachers and therapists learn to be aware of their own voice – whether, for example, it is too loud or too quiet, whether their diction is clear, and whether their speech is rhythmic and melodious and employs a certain degree of ‘dramatic tension’, rather than narrating in a monotone that is unlikely to arouse much enthusiasm. Particularly in difficult situations, the hearing-impaired child may not fully understand what is said to them first time round. Here it is important not to fall back immediately on visual aids but – stressing speech appropriately, and in a natural voice – to repeat (several times if necessary) what has just been said.
It is also important that conversation with the hearing-impaired child is by its very nature dialogue-based. This means that both the parents and the professionals involved see each other as equal partners and allow space for the hearing-impaired child to contribute frequently; it means that genuine ‘turn-taking’ can take place (see Module 5). When conversations are taking place in a larger group – at meal-times, for example – it is important to make sure that only one person speaks at a time. When sharing news, during story-time, etc., it is important to create a quiet environment – the TV and radio should not be on in the background all day.

Summary

Learning to hear must be possible for the hearing-impaired child in any and every situation. A noisy environment makes this vastly more difficult. Caregivers must therefore be aware of the child’s ‘hearing situation’ at any given moment, and recognize that, when the priority is for the child to understand speech unhindered and undisturbed, this situation can be optimized – for example by using an FM system.

Review questions

Which factors make it harder to understand speech in enclosed spaces?
Bright lighting
Lots of windows
Soft carpeting
Having the radio on all day
What does an FM system achieve?
It amplifies speech at nursery school
It means that distance is no longer a factor in understanding speech
It ensures that the hearing-impaired child can hear the other children well
It reduces background noise by 15 dB
In which situations is an FM system useful?
When speech input has to be received without any interference for any length of time
When several people are speaking at the same time
When the child is playing in the garden
When the environment is especially quiet
What can help improve the acoustics of a room?
Frequent airing
Using sound-absorbent materials
Improving the lighting
Laying parquet flooring
Why do poor room acoustics place such a strain on the hearing-impaired child?
Because the proportion of sound ‘lost’ is too great
Because the hearing-impaired child needs the ‘signal’ to be very distinct from the background noise
Because the hearing-aid fitting is wrong
Because the child does not have a CI

Internet: useful resources
http://www.asha.org/public/speech/development/chart.htm (30.01.2009)
http://www.jtc.org/downloads/index.php (30.01.2009)
