Digital Hearing Aids: Past, Present, and Future

by Harry Levitt, PhD
Center for Research in Speech and Hearing Sciences, Graduate School and University of the City University of New York, New York, NY 10036

HARRY LEVITT, PhD is a Distinguished Professor at the City University of New York. Dr. Levitt received his Doctoral degree in electrical engineering from the Imperial College of Science and Technology, London, UK. In 1969, he set up the Communication Science Laboratory for the City University of New York. He has received regional and national awards for his work in speech and hearing science, and its application to hearing aids and communication.

  The major trend in hearing aid development has, until now, been that of miniaturization. The trend has been consistent and has closely followed the latest advances in electronic miniaturization, from tabletop electrical devices to bodyworn units using vacuum tubes, to smaller units using miniature tubes, to even smaller transistorized units, to behind-the-ear (BTE) instruments employing integrated circuits, to microminiature in-the-canal (ITC) hearing aids, to the latest completely-in-the-canal (CIC) instruments. The dominant force driving this trend has been that of cosmetics: the less visible the hearing aid, the better. With the introduction of the CIC hearing aid, however, the trend may have reached its zenith. CIC instruments are virtually invisible (unless one peers directly into the ear canal); hence, there is no additional cosmetic advantage to be gained from further reductions in size.

  Miniaturization of electronic components is an ongoing process driven by forces far more powerful than those governing the hearing aid industry. As a consequence, the components used in hearing aids are likely to become even smaller in the years to come. Reductions in component size can be utilized either to reduce the physical size of the hearing aid, or to increase the signal processing capabilities of the hearing aid within the limited space available. The latter option is likely to grow increasingly more important as the demand for further reductions in hearing aid size lessens.

  This new trend toward more advanced signal processing capabilities (rather than miniaturization) has become evident. Programmable hearing aids were introduced only recently and have already found a firm and growing niche in the market. These are similar in size to traditional hearing aids, but embody important new features, such as programmability, in addition to more advanced signal processing.

  The development of practical programmable hearing aids has resulted largely from the introduction of digital technology. Although some degree of programmability is possible using traditional analog circuits, digital technology provides a level of programmability that is substantially greater and much easier to implement than that possible with analog techniques, and offers additional substantive advantages, such as memory, logical and arithmetic operations, the capability of storing and transferring information without error, and compatibility with other digital devices, such as personal computers. There are also limitations to the use of digital technology in hearing aids that need to be understood.

  The purpose of this editorial is to review the extent to which digital techniques are used in modern hearing aids and to discuss the implications of this new technology for the future development of the field.


Quasi-Digital, Hybrid Analog-Digital and True-Digital Hearing Aids

  The introduction of digital technology to hearing aids has not been an all-or-none process. There are gradations in the extent to which it has been used in hearing aids and in acoustic amplification in general.

  The first application was the simulation of a hearing aid on a general purpose digital computer. These early simulations were done off-line; that is, the audio signals to be processed were first converted to digital form and stored on digital tape, which was then read into the computer. Several hours later, a second tape was generated containing the digitized audio signal as processed by the simulated hearing aid. This computer simulation of a hearing aid served as a useful, though cumbersome, research tool. The first practical application of this technique took place at Bell Laboratories in the mid 1960s and involved the development and evaluation of a high-gain handset for use in pay telephones. It is significant to note that this early simulation combined both analog and digital techniques in order to achieve a practical result (1).

  Advances in technology brought about dramatic increases in the speed of digital computers, and within a decade real-time (where the output is virtually simultaneous with the input) simulation of hearing aids by a general-purpose computer (an array processor) became a practical reality (2). The first such simulation of a hearing aid took tens of milliseconds for the computer to process the audio signal. The processing time depended on the complexity of the processing algorithms that were used. Here again, the system was not entirely digital in that an analog preamplifier and analog power amplifier were used at the input and output, respectively. Other important developments during this period were the development of an experimental programmable hearing aid using analog components (3) and preliminary work on the use of digital technology in acoustic amplification (4,5).

  The first digital hearing aid using computer-simulation techniques was designed as a research tool for laboratory-based investigations. It soon proved its worth in a series of experiments exploring the potential of digital signal processing techniques in acoustic amplification (1,6-11). The limitations of applying digital technology to hearing aids also became evident during these and related experimental evaluations. The most serious of these was the large physical size and high power consumption of the hardware.

  Size and power consumption are crucial considerations for personal hearing aids. Although the first digital hearing aid was a rack-mounted experimental device, it was recognized that the technology was advancing rapidly and a wearable digital hearing aid would be feasible in the not-too-distant future. A major breakthrough came with the development of specialized digital signal processing (DSP) chips, designed specifically for high-speed signal processing and allowing for real-time processing of audio signals in a unit of small size. Experimental, wearable digital hearing aids of small size were soon developed (12-14). Although these experimental instruments could be worn on the body, they were far too large and consumed too much power to be a practical alternative to conventional analog hearing aids; however, they provided an impressive demonstration of what could be achieved using digital technology, and hearing aid manufacturers began to incorporate the technology into their products.

  It is instructive to look at the basic design of the first digital hearing aid and how this design has been modified to accommodate the requirements of small size and low power consumption. An understanding of the engineering compromises that have been made will not only provide a clearer picture of what can be achieved in practice, but will also identify the likely direction of future advances in these devices.

  Figure 1 shows a block diagram of the first digital hearing aid. Note that it consisted of two computers, a high-speed array processor dedicated to signal processing, and a smaller, slower computer for controlling the operation of that processor. Of these two units, the array processor was by far the larger and required substantially more electrical power.

Figure 1. Array processor digital master hearing aid.

  The introduction of high-speed DSP chips led to the replacement of the array processor by a much smaller microprocessor configured around such a chip, as shown in Figure 2. Although the smaller microprocessor could not perform many of the advanced signal processing algorithms that had been implemented in the array processor, it did allow for many of the advantages of digital technology to be implemented in a wearable unit. The first commercial digital hearing aid, the Phoenix, used technology of this type. Although small enough to be wearable, the instrument was a body aid and relatively large compared to the BTE and ITE analog hearing aids available at the time. The Phoenix was an impressive technological achievement, but not a commercial success. Its development, however, did much to pave the way for the future application of digital technology in hearing aids.

Figure 2. Digital master hearing aid consisting of a microprocessor-controlled digital filter.

  The problem of high power consumption was alleviated, but not solved, by the introduction of DSP chips. Another engineering compromise would be necessary in order to develop a digital hearing aid that could compete effectively with analog hearing aids in terms of size and power consumption. Of the various operations in a digital hearing aid, the unit converting the incoming audio signal to digital form, the analog-to-digital (A/D) converter, drew the most power. A good quality A/D converter for this application should sample the incoming signal at a rate of 20,000 samples/sec, or higher, and convert each sample with at least 12-bit precision (i.e., one in which each sample of the audio waveform is represented by 12 binary digits). Traditional A/D conversion is particularly costly in terms of power consumption, since a separate comparison between the sampled voltage and a reference voltage is needed for each binary digit; in other words, sampling with 12-bit precision requires 12 comparisons of this type for each sample. Even if each comparison draws a very small amount of power, the total power consumption is still relatively high, since 12 × 20,000 such comparisons per second are required. Non-traditional methods of A/D conversion must be used in order to reduce power consumption.
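The comparison count behind this arithmetic can be illustrated with a sketch of successive-approximation conversion, one common traditional A/D method. The code is an illustration of the counting argument, not a hardware model.

```python
# Sketch of successive-approximation A/D conversion (12-bit, as assumed in
# the text).  Each binary digit of the result costs one comparison against
# a reference level, so a 12-bit converter performs 12 comparisons per
# sample -- and 12 x 20,000 comparisons per second at a 20 kHz sampling rate.

def sar_convert(voltage, vref=1.0, bits=12):
    """Convert a voltage in [0, vref) to a binary code, counting comparisons."""
    code = 0
    comparisons = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)          # propose setting this bit
        comparisons += 1                   # one comparison per binary digit
        if voltage >= trial * vref / (1 << bits):
            code = trial                   # keep the bit if the level fits
    return code, comparisons

code, n = sar_convert(0.5)                 # mid-scale input
comparisons_per_second = n * 20_000        # 12 x 20,000 = 240,000
```

The point of the sketch is simply that power consumption scales with both the sampling rate and the bit precision, which is why non-traditional conversion methods are attractive for hearing aids.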

Reducing Power Consumption
  There are at least three ways in which power consumption can be reduced, leading to three different types of digital hearing aids. Each method also introduces limitations to the capabilities of the instrument.

  The largest reduction in power consumption and, concomitantly, the most severe constraints on the signal processing capabilities, are obtained using a hybrid approach in which digital techniques are used for the controller but the audio signal is not digitized (i.e., the audio pathway remains in analog form). Figure 3 shows a device of this type. The audio pathway from microphone to output transducer consists of analog components only (preamplifier, filters, limiter, and output power amplifier). The operating characteristics of these components (gain, filter cut-off frequencies, output limiting level, and compression characteristics) are, however, controlled by a digital unit. Since the digital controlling signals have a low sampling rate with a precision that is considerably less than that needed for audio signals, the power consumption of the digital controller is small, while that of the analog components is comparable to that of a conventional analog hearing aid. This hybrid analog-digital hearing aid has several very useful features. These include programmability, memory, and signal-dependent dynamic forms of amplification, such as multiband compression and adaptive filtering. Most of the digital hearing aids currently available for clinical use are of this type.
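The division of labor in a hybrid instrument can be sketched as follows. All names and parameter values here are hypothetical illustrations, not any product's actual interface: the point is that the digital side holds programs in memory and writes low-rate parameter settings to an analog audio path.

```python
from dataclasses import dataclass

# Illustrative sketch of the hybrid analog-digital idea: the audio path
# stays analog, while a digital controller stores named parameter sets
# (programs) and selects which one drives the analog components.

@dataclass
class AnalogSettings:
    gain_db: float       # preamplifier gain
    lowcut_hz: float     # filter low-frequency cut-off
    highcut_hz: float    # filter high-frequency cut-off
    limit_dbspl: float   # output limiting level

class DigitalController:
    """Low-rate digital side: memory plus program selection."""
    def __init__(self):
        self.programs = {}                 # memory: named parameter sets

    def store(self, name, settings):
        self.programs[name] = settings

    def select(self, name):
        return self.programs[name]         # settings sent to the analog path

controller = DigitalController()
controller.store("quiet", AnalogSettings(25.0, 100.0, 6000.0, 110.0))
controller.store("noisy", AnalogSettings(20.0, 400.0, 6000.0, 105.0))
active = controller.select("noisy")        # e.g., raise the low cut in noise
```

Because only these slowly changing settings are digital, the controller's power consumption stays small, as the text notes.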

Figure 3. Hybrid analog-digital hearing aid.

  Another method of reducing power consumption is to sample, but not digitize, the audio signals. The electrical signal representing the audio waveform is sampled at regular intervals, but these samples remain in analog form: they are not converted to strings of binary digits, as they would be in a digital computer.

  It is possible to store a sampled electrical waveform as a series of electrical charges in a series of capacitors (one charge per capacitor). These electrical charges can then be switched to other capacitors so as to perform operations, such as addition, subtraction, and delay (by one or more sampling intervals). The use of switched capacitor technology allows for sampled waveforms to be processed in ways similar to those used in digital computers.
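The addition, subtraction, and delay operations just described are enough to build discrete-time filters. The following is an idealized software sketch (ignoring switching noise, and operating on numbers rather than capacitor charges): a two-point moving average built from one delay and one addition.

```python
# Idealized sketch of sampled-analog processing: using only the operations
# switched capacitors provide (delay by one sampling interval, addition,
# and scaling), a simple smoothing filter can be built.  No conversion to
# binary digits is involved -- only operations on sample values.

def two_point_average(samples):
    out = []
    previous = 0.0                          # charge held from prior interval
    for s in samples:
        out.append(0.5 * (s + previous))    # add current and delayed samples
        previous = s                        # delay by one sampling interval
    return out

smoothed = two_point_average([0.0, 1.0, 1.0, 0.0])
# -> [0.0, 0.5, 1.0, 0.5]
```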

  Hearing aids employing switched capacitor technology are referred to as quasi-digital hearing aids. These instruments have many of the advantages of an all-digital system; in addition to programmability and memory, the audio signal can be amplified and filtered with great precision, including precise and convenient processing of the phase characteristics of the audio signal. This feature is very useful for reduction of acoustic feedback, where both the amplitude and phase of the amplified audio signal must be adjusted so as to cancel the feedback component. Manipulation of the phase spectrum is both difficult and imprecise with traditional analog circuits.

  A shortcoming of switched capacitor technology is that the switching and storing operations are not perfect and typically introduce some background noise. As a consequence, there are limitations to the amount of processing that can be achieved and still maintain an acceptable signal-to-noise ratio. The relative level of the background noise compared to the signal strength is also poorer for low operating voltages, such as those provided by the small batteries used in hearing aids.

  A third approach to the problem of reducing power consumption is to modify the form of the A/D conversion process. It is possible, for example, to use an approximate method of conversion in which the effect of the approximation is not perceptible: true-digital aids have recently been introduced using these alternative techniques.

  The distinguishing characteristic of a true-digital hearing aid is that the audio signal is converted to strings of binary numbers. The term "all-digital" is sometimes used, but this terminology is not entirely accurate, since hearing aids of this type still include one or more analog components, such as the microphone and associated low noise preamplifier.

  True-digital hearing aids have all of the advantages of hybrid and quasi-digital instruments, plus the potential for advanced signal processing to varying degrees, depending on the engineering philosophy underlying the design of the hearing aid as well as practical considerations, such as chip size, power consumption, and battery life. True-digital hearing aids, at present, have signal processing capabilities only a step beyond those of other digital hearing aids, but the potential exists for implementing substantially more advanced methods in the future.

  It is interesting to note that of the two converters needed for a true-digital hearing aid, major engineering difficulties were encountered with A/D conversion, whereas digital-to-analog (D/A) conversion was not only easily accomplished, but the method used for this process led to the development of a more efficient type of power amplifier, the Class D amplifier.

  In a Class D amplifier, pulses of varying width are used to drive the receiver. The acoustic signal is encoded by modulating the width of these pulses. Since the amplitude of the pulse does not convey any information, it can be as high (and hence as powerful) as the battery voltage will allow. In a conventional analog power amplifier, the amplitude of the signal being processed is limited to the linear operating range of the amplifier, which is significantly less than what the Class D amplifier can achieve for the same battery voltage and power consumption. As a consequence, the digital Class D amplifier has a wider dynamic range (more headroom) than a conventional analog power amplifier. This feature is of great practical value in low-voltage amplifiers, such as those used in hearing aids (15).
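The width-encoding idea can be sketched as follows. This is an idealized software model: a real Class D stage switches at a high fixed rate and recovers the audio with an analog low-pass filter, but the principle is the same.

```python
# Sketch of pulse-width modulation as used in a Class D output stage.
# Every pulse sits at the full supply (battery) voltage; the audio sample
# value is carried entirely by the pulse width, and a low-pass filter
# (modeled here as a simple average) recovers the signal.

def pwm_encode(sample, steps=100):
    """Encode a sample in [0, 1] as a train of full-amplitude pulses."""
    high = round(sample * steps)            # duty cycle carries the value
    return [1.0] * high + [0.0] * (steps - high)

def pwm_decode(pulses):
    """Averaging the pulse train (a crude low-pass filter) recovers the sample."""
    return sum(pulses) / len(pulses)

pulses = pwm_encode(0.73)
recovered = pwm_decode(pulses)              # approximately 0.73
```

Note that every output level uses pulses at full supply amplitude, which is why the Class D stage can deliver more headroom than a linear amplifier running from the same battery voltage.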

Capabilities of Modern Digital Hearing Aids
  Several engineering compromises have been necessary in order to develop practical digital hearing aids. These compromises, in turn, impose practical constraints on the capabilities of the device. It would be useful to know what capabilities can reasonably be expected for the different types; Table 1 provides a summary of "reasonable expectations" for modern digital hearing aids. It is recognized that as digital technology continues to improve (that is, as smaller chips with lower power consumption and greater signal processing capabilities become available) these reasonable expectations will change.

  As is evident from Table 1, even the simplest type of digital hearing aid, the hybrid analog-digital, has many of the features of the most advanced types, although in limited form. For example, such instruments are programmable, but the flexibility of the programming routines and the number of variables that can be programmed are not nearly as great as for a true-digital hearing aid. This is also true of other features, such as memory, frequency shaping (i.e., fixed frequency filtering), adaptive filtering, compression amplification, and limited noise reduction. The issue of noise reduction, because of its importance, is discussed separately in the next section.

Table 1.
Expected capabilities of different types of digital hearing aids

                                  Hybrid          Quasi-     True-      True-digital
Capability                        analog-digital  digital    digital    (bodyworn)
Programmability                   *               **         ***        ***
Memory                            **              **         ***        ***
Detailed frequency shaping        *               **         ***        ***
Adaptive filtering                *               **         **         ***
Multi-channel compression         **              **         **         ***
Acoustic feedback cancellation    --              **         **         ***
Noise reduction                   *               *          *          **
Advanced signal processing        --              --         *          **

* = limited capability; ** = good capability; *** = excellent capability; -- = no capability

  Most of the digital hearing aids in current use are hybrid analog-digital instruments. A smaller number of quasi-digital hearing aids are also in use. Several of the larger manufacturers have recently introduced "all-digital" instruments, and the use of these is expected to grow rapidly.

  An advantage of both quasi-digital and true-digital designs is that they allow for convenient, detailed control of the frequency response of the hearing aid, including the phase response. This allows for the possibility of canceling acoustic feedback by introducing a cancellation signal that is equal in amplitude but opposite in phase. These designs also have the potential for providing more effective forms of adaptive filtering (e.g., frequency shaping determined by the level and spectral shape of the incoming audio signal) than the simpler hybrid analog-digital instruments.
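The cancellation principle just described can be sketched numerically. This is an idealized model that assumes the estimate of the feedback path is perfect; in a real instrument that estimate must itself be obtained adaptively.

```python
import math

# Minimal sketch of acoustic feedback cancellation: a copy of the fed-back
# signal, equal in amplitude but opposite in phase (sign-inverted), is
# added to the microphone signal, removing the feedback component and
# leaving only the external sound.

def cancel_feedback(mic, feedback_estimate):
    return [m + (-f) for m, f in zip(mic, feedback_estimate)]

external = [math.sin(0.2 * k) for k in range(100)]                # wanted sound
feedback = [0.3 * math.sin(0.2 * k + 0.7) for k in range(100)]    # leaked output
mic = [e + f for e, f in zip(external, feedback)]                 # what the mic hears

cleaned = cancel_feedback(mic, feedback)          # assumes a perfect estimate
residual = max(abs(c - e) for c, e in zip(cleaned, external))     # essentially 0
```

The sketch shows why control of both amplitude and phase matters: the cancellation signal only removes the feedback if it matches the leaked component in both respects.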

  At present, quasi-digital hearing aids have capabilities similar to, but not as good as, true-digital hearing aids. It is also important to bear in mind that the true-digital era has only just begun, and the capabilities of this type are likely to improve significantly with continuing advances in technology. The potential for implementing more advanced methods of signal processing is far greater for true-digital hearing aids than for any other type.

  Of the hearing aids shown in Table 1, the type with the most advanced signal processing capabilities is the bodyworn, true-digital hearing aid. It is not cosmetically attractive and is used primarily as an experimental unit for investigating new forms of signal processing. Even more advanced signal processing capabilities are possible with desk-mounted devices. Experimental hearing aids of this type can be implemented relatively easily today using a high-speed DSP board with A/D and D/A capabilities mounted in a personal computer. Such experimental devices have proven to be invaluable in exploring new methods of signal processing for acoustic amplification (16-19).

  The most serious limitation of the bodyworn or desk-mounted true-digital hearing aid lies not in the technology but in our lack of understanding of the most effective way of processing signals for people with hearing loss. Extremely sophisticated signal processing techniques can now be implemented using digital technology, yet despite this capability we have yet to determine a method of signal processing that will improve significantly the ability of hearing aid users to hear and understand speech. A case in point is that of improving speech intelligibility in noise.

Digital Hearing Aids and Noise Reduction
  One of the most common complaints of hearing aid users is that speech is particularly difficult to understand in noise. Although methods for improving speech intelligibility in noise have been investigated by many researchers over many years, a satisfactory solution has yet to be found. This is true for persons with both impaired and unimpaired hearing. The most difficult case is that in which both speech and noise are picked up by a single microphone, as in the vast majority of hearing aids. Our understanding of this problem is so limited that we have not only been unsuccessful in finding a solution, but we do not even know whether it is possible to improve the intelligibility of speech in noise by any significant amount.

  While it is known that there is an inherent limit to the amount by which signal-to-noise ratio can be increased for statistically stationary signals, such as a tone in random noise, we do not know if there is a similar inherent limitation for speech in noise. Neither speech nor most everyday noises are statistically stationary. Further, an increase in speech-to-noise ratio does not necessarily produce a corresponding improvement in speech intelligibility (7,20).

  The most common result in experimental investigations of noise-reducing techniques for single-microphone systems is that the speech-to-noise ratio can be improved substantially (e.g., by as much as 12 dB), but without any corresponding improvement in speech intelligibility (7,21). In many cases, the improvement in speech-to-noise ratio is accompanied by a reduction in intelligibility. Distortions of the speech signal introduced by the method of signal processing can often be more damaging to intelligibility than background noise.

  The experimental evidence is not all negative. Small improvements in intelligibility have been observed for some persons with hearing-impairment under special conditions (7,11,20), such as signal processing to reduce the spread of masking. A classic illustrative example is that of speech partially masked by a very intense low-frequency noise. A high-pass filter, or frequency-dependent amplitude compression, can be used to reduce the level of the low-frequency noise, thereby reducing the upward spread of masking (22). Although this simple example illustrates a technique that, in principle, should improve speech intelligibility, its implementations in practical hearing aids have seldom produced the anticipated improvements (23-25). As in other forms of noise reduction, lowering the relative intensity of the low-frequency noise produced an improvement in overall sound quality, but without a corresponding improvement in speech intelligibility.

  It is instructive to examine why attempts at improving speech intelligibility in noise by reducing upward spread of masking have met with little success in practice. To begin with, a substantial amount of upward spread of masking is needed in order to produce a significant reduction in speech intelligibility over and above the reduction in intelligibility produced by the noise without upward spread of masking. If there is no upward spread of masking, reduction of the noise level by either low-band compression or high-pass filtering will reduce the speech signal by the same amount as the noise: the effective speech-to-noise ratio will remain unchanged, and there will be no change in intelligibility.
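The argument above can be checked with simple decibel arithmetic. The levels below are illustrative, not measured values: attenuating a band attenuates the speech and the noise in that band by the same number of decibels, so the in-band speech-to-noise ratio is unchanged.

```python
import math

# Numeric sketch: if speech and noise occupy the same low-frequency band,
# a high-pass filter (or low-band compression) reduces both by the same
# factor, leaving the speech-to-noise ratio unchanged.

def db(power_ratio):
    return 10 * math.log10(power_ratio)

speech_power = 1.0        # arbitrary linear power units in the band
noise_power = 4.0         # noise about 6 dB above the speech in the band
snr_before = db(speech_power / noise_power)      # about -6 dB

attenuation = 0.1         # the filter cuts the band by 10 dB
snr_after = db((speech_power * attenuation) / (noise_power * attenuation))

# snr_after equals snr_before: without upward spread of masking to remove,
# filtering alone cannot improve the in-band speech-to-noise ratio.
```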

  In order for a low-frequency noise to produce substantial upward spread of masking, it is necessary for the noise to be very intense with a spectrum that falls off rapidly with increasing frequency. The ambient noises typically encountered by hearing aid users seldom have these exact characteristics. With the exception of certain industrial noises (now subject to noise abatement regulations), the ambient noises encountered by most users are not extreme and typically have spectra that fall off gradually with frequency. As a consequence, upward spread of masking for most ambient noises is not sufficiently large to cause a substantial reduction in speech intelligibility.

  Another factor to be considered is that the fixed frequency response of a properly prescribed hearing aid typically has more gain in the high frequencies than in the low, thereby reducing any upward spread of masking that may exist. Thus, there is less room for further improvement using adaptive filtering. A further important practical consideration is that upward spread of masking is greatest in the frequency region immediately above the noise. Therefore, in order to effectively reduce upward spread of masking, the shape of the adaptive filter must match the spectrum of the interfering low-frequency noise, otherwise frequency components of the speech signal that are not masked by noise will be attenuated by the adaptive filter, leading to a possible reduction in intelligibility.

  The above discussion does not rule out the possibility of improving speech intelligibility in noise by reducing spread of masking effects, but it rather explains why relatively simple methods of attacking this problem, constrained by the limitations of analog technology (e.g., relatively crude frequency filtering), have, thus far, met with little success. Improvements in speech intelligibility resulting from a reduction in upward spread of masking using high-pass filtering have been demonstrated under carefully controlled laboratory conditions using an intense low-frequency noise band with a high-frequency roll-off in excess of 120 dB/octave (26).

  It is also important to recognize that spread of masking can take several different forms and that upward spread of masking is only part of the problem. Temporal spread of masking and intra-component masking within the amplified speech signal are other forms of masking that need to be addressed using more advanced forms of signal processing.

  The potential for improving speech intelligibility in noise is substantially greater in systems using more than one microphone. One approach to the problem is to use an array of microphones with their outputs combined, so as to focus on sound coming from a given direction. If the speech and noise come from different directions, the array can be focused on the speech while attenuating the noise. The amount by which the noise can be attenuated depends on the spacing of the microphones and the way in which the microphone outputs are combined. Substantial improvements in speech-to-noise ratio (11 dB or larger) with concomitant improvements in speech intelligibility have been obtained using sophisticated adaptive signal processing techniques (27,28). Relatively simple processing techniques have also been developed yielding improvements in speech-to-noise ratio in excess of 7 dB (29,30).
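A minimal sketch of the delay-and-sum idea follows, under simplifying assumptions (ideal omnidirectional microphones, a single tone, and steering delays already applied): copies of a target arriving in phase reinforce one another, while an off-axis tone, whose copies arrive with a phase shift from microphone to microphone, largely cancels.

```python
import math

# Simplified model of a delay-and-sum microphone array.  For a sinusoid,
# the array output is the sum of n phase-shifted copies; after steering,
# the target direction has zero inter-microphone phase shift.

def array_gain(n_mics, phase_step):
    """Normalized magnitude response of an n-microphone sum for a tone
    whose residual inter-microphone phase shift is phase_step (radians)."""
    re = sum(math.cos(k * phase_step) for k in range(n_mics))
    im = sum(math.sin(k * phase_step) for k in range(n_mics))
    return math.hypot(re, im) / n_mics      # 1.0 = full (on-axis) gain

on_axis = array_gain(4, 0.0)                # target direction: gain 1.0
off_axis = array_gain(4, math.pi / 2)       # interferer: copies cancel
```

The achievable phase shift between microphones grows with their spacing and with frequency, which is why, as noted below, closely spaced arrays give only limited directionality at low frequencies.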

  An important requirement for these directional hearing aids is that the speech and noise come from different directions. If the speech and noise come from a common source (e.g., a single loudspeaker), the advantage of directionality is lost. These hearing aids will also be less effective in a reverberant room. A major practical limitation with the use of microphone arrays to achieve directionality is the large spacing needed for the microphones. Very good directionality can be achieved with large spacing, but for the kind of spacing that could be practical, such as mounting the microphones on the frame of a pair of eyeglasses (29,30), the improvement in directionality is limited, particularly at low frequencies. Nevertheless, the improvement in intelligibility that can be obtained using microphone arrays of this type is still significant.

  A major hurdle that still remains is the cosmetic acceptability of advanced signal-processing hearing aids using microphone arrays. As noted above, microphone arrays could be mounted on the frame of a pair of eyeglasses, thereby reducing the inconvenience and other shortcomings of a bodyworn instrument, but it remains to be seen whether this type of hearing aid will receive wide acceptance. Cosmetic appearance is not an overriding concern of all hearing aid users, and it is hoped that the advantages to be gained from the use of advanced signal-processing instruments embodying superior directional characteristics will result in sufficient demand for instruments of this type to be a viable alternative to traditional nondirectional hearing aids.

  An alternative method of multimicrophone noise reduction is for one or more microphones to provide information on the nature of the noise and then to use this information to reduce the noise in the output of the remaining microphone(s). Consider the following example of a two-microphone noise cancellation system. One microphone, the primary, picks up speech plus noise. The second microphone, the reference, is placed as close as possible to the noise source and picks up noise only.

  The output of the reference microphone is filtered to adjust for any phase or amplitude difference that may exist between its input and the noise in the primary. Once this is accomplished, the filtered noise from the reference microphone is subtracted from the signal of the primary, leaving just the speech: the noise is canceled.

  Since the characteristics of the noise filter are not known in advance, an adaptive filter is used that is adjusted systematically until the noise is canceled. Efficient adaptive algorithms for noise cancellation using this technique have been developed by Widrow et al. (31), and this type of device is sometimes referred to as a Widrow filter.
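A toy, single-weight version of the LMS scheme is sketched below. The signals are synthetic and the filter is reduced to one adaptive gain; this illustrates the adaptation principle only, not any practical implementation.

```python
import math

# Toy single-tap LMS noise canceller in the spirit of the Widrow filter
# described above.  The reference microphone picks up a scaled copy of
# the noise; the adaptive weight learns that scale, and the filtered
# reference is subtracted from the primary, leaving the cleaned signal.

def lms_cancel(primary, reference, mu=0.05):
    w = 0.0                        # single adaptive weight (gain on reference)
    output = []
    for p, r in zip(primary, reference):
        y = w * r                  # filtered reference = noise estimate
        e = p - y                  # error = primary minus noise estimate
        output.append(e)           # the error IS the cleaned signal
        w += 2 * mu * e * r        # LMS weight update
    return output, w

# Synthetic example: the noise is a sinusoid; the primary microphone
# receives speech (here silence, so the residual shows the cancellation)
# plus the noise scaled by an unknown factor of 0.8.
noise = [math.sin(0.3 * k) for k in range(2000)]
primary = [0.8 * r for r in noise]
cleaned, w = lms_cancel(primary, noise)
# After adaptation w is close to 0.8 and the residual is strongly attenuated.
```

Because the weight is adjusted from the error signal itself, the filter needs no advance knowledge of the path between the noise source and the two microphones, which is the practical appeal of the approach.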

  The above method of noise cancellation has been used effectively with many persons, both with and without hearing impairment (32,33). A practical limitation of the technique for hearing aid applications is that the reference microphone needs to be placed at the noise source. If, however, the speech and noise sources are separate and there is flexibility as to the placement of microphones, it would be both easier and more efficient to place the primary microphone at the speech source so that there is very little, if any, noise to begin with. The latter approach is, of course, the basic philosophy underlying the use of FM and infrared systems. These approaches, however, are not always practical for typical hearing aid use.

  It is possible to use a Widrow filter under conditions typical of hearing aid use and still obtain some benefit. Both microphones, for example, can be mounted on the head. Although both pick up speech and noise, the one picking up more noise than speech can be used as the reference, and its output can be filtered and subtracted from the output of the primary so as to maximize the speech-to-noise ratio.

  An experimental evaluation of this concept showed that with a head-mounted directional microphone pointing toward the noise source as the reference microphone, an improvement in speech-to-noise ratio on the order of 7 dB was obtained (17). A correspondingly large improvement in speech intelligibility was also obtained.

  There are, however, limitations with respect to the implementation of this two-microphone technique in a practical hearing aid. Although both can be mounted on the same side of the head (a useful practical advantage), the use of a microphone small enough to fit on a typical BTE or ITE hearing aid will result in a much smaller gain in speech-to-noise ratio. The technique will also not work well in a highly reverberant room or if more than one noise is present (18). An additional, suitably placed reference microphone is needed to cancel each additional noise source.

  The processing algorithms for multiple-microphone systems designed for noise cancellation may, in some cases, be similar to the algorithms used for microphone arrays with adaptive directional characteristics (34). Whichever approach is used, the practical placement of the microphones remains a problem.


Digital Devices and Electromagnetic Pollution

  The digital revolution has affected almost every aspect of our daily lives from microprocessor-controlled domestic appliances to large-scale computerization in commerce, industry, communications, and entertainment. The wide-scale use of digital technology has many advantages. Digital chip development, driven by powerful economic forces, has advanced at an astounding pace, leading to a plethora of new devices as well as dramatic improvements in older devices. All of this has occurred with remarkable reductions in both the size and cost of the devices themselves.

  The ubiquity of this technology has led to a growth in the standardization of digital codes and methods of interconnection. Thus, it is possible for a personal computer not only to communicate with another personal computer, but also to monitor or control other devices. Hearing aid manufacturers have taken advantage of this trend and have cooperated with each other in developing common hardware and software platforms (HI-PRO and NOAH) for programming digital hearing aids. This development greatly reduces the cost of developing programming systems for hearing aids, since relatively inexpensive mass-produced personal computers (or the basic elements thereof) can be used to program a digital hearing aid.

  There are, however, growing problems resulting from the use of digital technology. Digital signals are inherently pulsive. Pulse-like electrical signals are more likely to produce high-frequency electromagnetic radiation than steady-state nonpulsive signals. As a consequence, many digital devices generate signals that interfere with other electronic devices.

  With the inexorable trend toward miniaturization, many new digital devices, such as laptop computers, handheld computer games, and cellular telephones, are designed to be portable, and this portability introduces another complicating factor. The widespread use of these devices, their mobility, and the use of radio transmitters in many of them have substantially increased unwanted electromagnetic signals, creating a new form of pollution: electromagnetic pollution.

  Many hearing aid users have already experienced audible interference from electromagnetic pollution (e.g., the buzz produced by computer monitors or flickering fluorescent lights). A major new source of interference is the digital cellular telephone. While analog cellular telephones were known to produce some interference in hearing aids, the amount produced by digital phones is much greater, because digital phones typically use a highly pulsive modulation scheme.

  The wires and other metal connectors in a hearing aid serve as miniature antennae that pick up high-frequency electromagnetic signals. Any nonlinearities in the hearing aid circuit will demodulate these electromagnetic signals, and demodulated signals from a digital cellular radio transmission contain significant components in the audio frequency range. These components are amplified by the hearing aid and are heard as a buzz. This interference can, in some cases, be more intense than the amplified speech signal.
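  The demodulation mechanism can be illustrated with a short simulation. The numbers below are illustrative assumptions, not values from the article: a 40 kHz stand-in carrier gated on and off at 217 Hz (roughly the frame rate of GSM-style digital phones), and a simple square-law term standing in for a weak circuit nonlinearity. Real carriers and circuits differ, but the effect is the same: the nonlinearity shifts energy from the radio band down to the pulse rate, squarely inside the audio range.

```python
import numpy as np

# Illustrative parameters (assumed): carrier and pulse rate chosen only to
# make the effect easy to see at this sample rate.
fs = 200_000
t = np.arange(fs) / fs                              # one second of signal
carrier = np.sin(2 * np.pi * 40_000 * t)            # stand-in RF carrier
burst = (np.sin(2 * np.pi * 217 * t) > 0).astype(float)  # 217 Hz on/off gating
rf = burst * carrier                                # pulsive transmission

demodulated = rf ** 2                               # square-law nonlinearity

# Spectrum of the demodulated signal: energy now appears at the pulse rate
# and its harmonics -- the audible buzz.
spectrum = np.abs(np.fft.rfft(demodulated))
freqs = np.fft.rfftfreq(len(demodulated), 1 / fs)
audio = (freqs > 50) & (freqs < 2000)
buzz_freq = freqs[audio][np.argmax(spectrum[audio])]
```

  Before the nonlinearity, the signal has essentially no energy in the audio band; after squaring, the strongest audio-band component sits at the 217 Hz pulse rate, which is what the hearing aid then amplifies.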

  There are three areas of concern relating to the interaction between hearing aids and cellular telephones: bystander interference, user interference, and hearing aid compatibility. Bystander interference occurs when someone using a cellular telephone is close to a person using a hearing aid. User interference occurs when a person wearing a hearing aid uses a cellular telephone. In addition to addressing these two sources of interference, it is also necessary for cellular telephones to be hearing aid compatible (HAC).

  A telecoil is commonly used with telephones in order to convey the audio signal from the telephone to the hearing aid. The mode of signal transmission used in a telecoil is quite different from the mode of electromagnetic transmission causing audible interference in the hearing aid. As a consequence, the electromagnetic interference in a hearing aid is essentially independent of whether the hearing aid has a telecoil or not. At the same time, if steps are taken to reduce electromagnetic interference in a hearing aid, these methods should not reduce or modify the desired audio signals picked up by the telecoil. It should be noted that in addition to interference produced by the radio frequency electromagnetic field, there is also the possibility of low-frequency interference from magnetic fields being picked up by the telecoil.

  There are several ways in which electromagnetic interference can be reduced. One method is to maximize the distance between the source of interference and the hearing aid. This may not be too difficult in the case of bystander interference, although there will be situations in which sufficient distance cannot be achieved (e.g., two passengers sitting next to each other on a bus, one of whom is using a cellular telephone, the other using a hearing aid).

  The problem of user interference is much more difficult, since the usual way of using a telephone is to place the handset next to the ear. It is possible to increase the distance between the telephone handset and the hearing aid by using an external device with an extension cord that plugs into the handset. Such a device conveys the audio signal by means of a telecoil, a headset that fits over the hearing aid, or a special transducer that delivers the acoustic signal directly to the ear canal instead of to the hearing aid. These external devices also include a microphone, so there is no need to talk into the mouthpiece of the cellular telephone.

  It is also possible to reduce interference by increasing the immunity of the hearing aid to unwanted electromagnetic signals, a process known as "hardening" the hearing aid. There are several ways to achieve this. One is to coat the hearing aid with a metallic substance that acts as an electromagnetic shield. Another is to eliminate the interference electronically by using capacitors that have no effect on the audio signals but effectively short-circuit very high-frequency signals, such as the carrier used by cellular telephones. A third is to modify the wiring and physical structure of the hearing aid. These methods can be combined to maximize immunity to electromagnetic interference, and hearing aid manufacturers are actively engaged in developing such combinations for hardening their products.
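  The bypass-capacitor approach works because a capacitor's impedance falls with frequency, |Z| = 1/(2&#960;fC). The values below are illustrative assumptions (a 100 pF capacitor, a 1 kHz audio tone, a 900 MHz carrier), not figures from the article, but they show the orders of magnitude involved: the same component is effectively an open circuit at audio frequencies and a near-short at the carrier frequency.

```python
import math

def cap_impedance(freq_hz: float, cap_farads: float) -> float:
    """Magnitude of a capacitor's impedance: |Z| = 1 / (2*pi*f*C)."""
    return 1.0 / (2 * math.pi * freq_hz * cap_farads)

C = 100e-12                        # assumed 100 pF bypass capacitor
z_audio = cap_impedance(1e3, C)    # ~1.6 Mohm at 1 kHz: invisible to audio
z_rf = cap_impedance(900e6, C)     # ~1.8 ohm at 900 MHz: shorts the carrier
```

  A six-orders-of-magnitude difference in impedance is what lets a single small component pass the audio signal untouched while shunting the radio-frequency pickup to ground.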

  Digital cellular telephones have only just been introduced to the United States; therefore, the problem is not as severe in this country as it is in Europe and other countries that have had them for several years. The Federal Communications Commission is concerned about electromagnetic interference in hearing aids caused by cellular telephones, particularly the new digital cellular telephones, and has initiated a process whereby hearing aid manufacturers, wireless telephone manufacturers, and consumers are working together to find practical solutions to the problem.

  In conclusion, the use of digital techniques in acoustic amplification has a much longer history than may at first be apparent. Although the advantages of digital techniques were recognized some time ago, it is only recently that practical digital hearing aids became a reality. Even then, the introduction of digital technology has followed a graduated sequence, from hybrid analog-digital hearing aids, to quasi-digital instruments, to true-digital instruments.

  The era of the digital hearing aid has only just begun and further significant advances in the application of digital technology to hearing aids are to be expected. One prediction is that the signal-processing capabilities of digital hearing aids will increase significantly. Another is that advances in other areas using digital technology will find applications in acoustic amplification.

  One possible development is that some aspects of automatic speech recognition (ASR) will be incorporated into the more advanced digital hearing aids of the future. For example, ASR techniques could be used to recognize speech in noise and then the speech could be resynthesized without any background noise (20,34). This approach would provide an innovative solution to the very difficult problem of single-microphone noise reduction. In order for it to succeed, however, it is necessary to develop ASR systems that work reasonably well in a noisy environment. Even if automatic speech recognition in noise proves to be as intractable a problem as single-microphone noise reduction, other aspects of ASR technology could still be of value in digital hearing aids of the future.

  Another possibility is that of improving the intelligibility of speech for people with severe hearing impairment by automatically recognizing phonetic cues that would otherwise be missed and then exaggerating the acoustic correlates of these cues to improve their recognition by the listener. The feasibility of improving speech recognition in a severely hearing-impaired population using exaggerated acoustic cues has been demonstrated (35,36). What remains is to develop a practical system for automating this technique.

  The most serious limitation in the successful application of advanced digital techniques to hearing aids, however, is not technological. It is a fundamental lack of understanding as to how to process speech (in quiet or in noise) so as to make it more intelligible to individuals with hearing impairment. Existing digital technology has almost unlimited capabilities for processing speech signals, but much basic research is still needed to develop more effective methods of processing speech for listeners with hearing impairment.

  In basic research of this type, the investigator should not be constrained by practical limitations of size and power consumption; large desk-mounted experimental hearing aids with substantial computational capabilities may be needed for these fundamental studies. Once an effective new method of signal processing has been developed, however, the problem of implementing it, or a reasonable approximation of it, in a practical hearing aid would then need to be addressed. At this stage, the perennial conflict between cosmetics and performance will reappear. This is likely to happen if hearing aids with relatively large microphone arrays are introduced. It remains to be seen whether these larger hearing aids, which are clearly superior in terms of processing speech in noise, could become a viable alternative to smaller, cosmetically more acceptable instruments.

  A problem of immediate concern to hearing aid dispensers is that of fitting modern digital hearing aids. Most fitting procedures, until now, have focused on the problem of choosing the right frequency response and output level. Research on how to fit hearing aids with compression amplification, or other dynamic forms of amplification, is still in its infancy. There is a danger that if dispensing audiologists do not know how to adjust the many variables of modern digital hearing aids, manufacturers will reduce the flexibility of their products (i.e., reduce the number of variables under the dispenser's control) so as to make these instruments easier to fit. This would forfeit one of the inherent advantages of digital hearing aids: greater flexibility in dealing with individual differences. As the signal processing techniques used in modern hearing aids increase in complexity, there will be a greater need to take individual differences into account. For example, dispensers will need to consider individual differences not only in the selection of the most appropriate frequency response and output level, but also in the choice of compression characteristics and other variables. Loss of flexibility in dealing with individual differences would be a sad loss indeed.


  This guest editorial was based upon work supported by the National Institute on Disability and Rehabilitation Research (NIDRR), United States Department of Education.


  1. Levitt H. Digital hearing aids: a tutorial review. J Rehabil Res Dev 1987;24(4):7-20.
  2. Levitt H. An array-processor computer hearing aid. ASHA 1982;24(10):760.
  3. Mangold S, Leijon A. Programmable hearing aid with multichannel compression. Scand Audiol 1979;8:121-6.
  4. Graupe D, Causey GD. Development of a hearing aid system with independently adjustable subranges of its spectrum using microprocessor hardware. Bull Prosthet Res 1975;12:241-2.
  5. Moser LM. Hearing aid with digital processing for: correlation of signals from plural microphones, dynamic range control, or filtering using an erasable memory. US Patent #4,187,413, February 5, 1980.
  6. Levitt H, Neuman A, Mills R, Schwander T. A digital master hearing aid. J Rehabil Res Dev 1986;23(1):79-87.
  7. Levitt H, Neuman A, Sullivan J. Studies with digital hearing aids. Acta Otolaryngol 1990;Suppl 469:57-69.
  8. Levitt H. Future directions in hearing aid research. J Speech Lang Pathol Audiol 1993;Monograph Suppl 1:107-24.
  9. Levitt H, Neuman A. Evaluating orthogonal polynomial compression. J Acoust Soc Am 1991;90:241-52.
  10. Neuman AC, Levitt H, Mills R, Schwander T. An evaluation of three adaptive hearing aid selection strategies. J Acoust Soc Am 1987;82:1967-76.
  11. Neuman AC, Schwander TJ. The effect of filtering on the intelligibility and quality of speech in noise. J Rehabil Res Dev 1987;24(4):127-34.
  12. Nunley J, Staab W, Steadman J, Wechsler P, Spencer B. A wearable digital hearing aid. Hear J 1983;October:29-31, 34-35.
  13. Cummins KL, Hecox KE. Ambulatory testing of digital hearing aid algorithms. In: Proceedings of the 10th Annual RESNA Conference; 1987, San Jose, CA. Washington, DC: RESNA Press; 1987. p. 398-400.
  14. Engebretson AM, Morley RE, Popelka GR. Development of an ear-level digital hearing aid and computer-assisted fitting procedure: an interim report. J Rehabil Res Dev 1987;24(4):55-64.
  15. Sammeth CA, Preves DA, Bratt GW, Peek BF, Bess FH. Achieving prescribed gain/frequency responses with advances in hearing aid technology. J Rehabil Res Dev 1993;30(1):1-7.
  16. Levitt H, Sullivan J, Hwang JY. A computerized hearing aid/measurement system. Hear Instrum 1986;37(2).
  17. Schwander TJ, Levitt H. Effect of two-microphone noise reduction on speech recognition by normal-hearing listeners. J Rehabil Res Dev 1987;24(4):87-92.
  18. Weiss M. Use of an adaptive noise canceller as an input preprocessor for a hearing aid. J Rehabil Res Dev 1987;24(4):93-102.
  19. Sullivan JA, Levitt H, Hwang J, Hennessey A. An experimental comparison of four hearing aid prescription methods. Ear Hear 1988;9:23-32.
  20. Levitt H, Bakke M, Kates J, Neuman A, Weiss M. Advanced signal processing hearing aids. In: Beilen J, Jensen GR, editors. Recent developments in hearing instrument technology: proceedings of the 15th Danavox Symposium. Copenhagen: Stougaard Jensen; 1993. p. 247-54.
  21. Lim JS, Oppenheim AV. Enhancement and bandwidth compression of noisy speech. Proc IEEE 1979;67:1586-604.
  22. Ono H, Kanzaki J, Mizoi K. Clinical results of hearing aid with noise-level-controlled selective amplification. Audiology 1983;22:494-515.
  23. Van Tasell DJ, Larsen SY, Fabry DA. Effects of an adaptive filter hearing aid on speech recognition in noise by hearing-impaired subjects. Ear Hear 1988;9:15-21.
  24. Van Tasell DJ, Thomas R, Crain MA. Noise reduction hearing aids: release from masking and release from distortion. Ear Hear 1992;13:114-21.
  25. Fabry DA. Programmable and automatic noise reduction in existing hearing aids. In: Studebaker GA, Bess FH, Beck LB, editors. The Vanderbilt hearing aid report II. Parkton, MD: York Press; 1991. p. 65-78.
  26. Fabry DA, Leek MR, Walden BE, Cord M. Do adaptive frequency response (AFR) hearing aids reduce 'upward spread' of masking? J Rehabil Res Dev 1993;30(3):318-25.
  27. Peterson PM, Durlach NI, Rabinowitz WM, Zurek PM. Multimicrophone adaptive beamforming for interference reduction in hearing aids. J Rehabil Res Dev 1987;24(4):103-10.
  28. Kates JM, Weiss MR. A comparison of hearing-aid array-processing techniques. J Acoust Soc Am. In press 1997.
  29. Soede W, Berkhout AJ, Bilsen FA. Development of a directional hearing instrument based on array technology. J Acoust Soc Am 1993;94:785-98.
  30. Soede W, Bilsen FA, Berkhout AJ. Assessment of a directional microphone array for hearing-impaired listeners. J Acoust Soc Am 1993;94:799-808.
  31. Widrow B, Glover JR, McCool JM, et al. Adaptive noise canceling: principles and application. Proc IEEE 1975;63:1692-716.
  32. Brey RH, Robinette MS, Chabries DM, Christiansen RW. Improvement in speech intelligibility in noise employing an adaptive filter with normal and hearing-impaired subjects. J Rehabil Res Dev 1987;24(4):75-86.
  33. Chabries DM, Christiansen RW, Brey RH, Robinette MS, Harris RW. Application of adaptive digital signal processing to speech enhancement for the hearing impaired. J Rehabil Res Dev 1987;24(4):65-74.
  34. Greenberg JE, Zurek PM. Evaluation of an adaptive beamforming method for hearing aids. J Acoust Soc Am 1992;91:1662-76.
  35. Levitt H. Processing of speech signals for physical and sensory disabilities. Proc Natl Acad Sci USA 1995;92(22):9999-10006.
  36. Revoile SG, Holden-Pitt LD, Edward DM, Pickett JM. Some rehabilitative considerations for future speech-processing hearing aids. J Rehabil Res Dev 1986;23(1):89-94.
  37. Revoile SG, Holden-Pitt LD, Pickett JM, Brandt F. Speech cue enhancement for the hearing impaired: I. Altered vowel durations for perception of final fricative voicing. J Speech Hear Res 1986;29:240-55.

