
Journal of Rehabilitation Research and Development
Volume 42 Number 4, July/August 2005, Supplement 2
Pages vii–xiv

Guest Editorial


Some Interesting Analogies

Introduction

The study of hearing has many facets and, like a sparkling diamond, each facet provides a different view of an intricate and intriguing center. The brilliance of a diamond is largely a reflection of the light we shine on it. Similarly, the remarkable findings that characterize our study of hearing reflect, in no small measure, the tools we have used in this pursuit. Perhaps more importantly, these tools have influenced our way of thinking. The electronic amplifier, for example, provided new ways of measuring hearing and improved methods of intervention, such as the personal hearing aid. Engineering analyses of amplifiers, filters, and other electronic devices advanced the theory of linear systems, which, not surprisingly, was also applied to the analysis and modeling of the auditory system. As more advanced technological tools were developed (compression amplification, modulation techniques, signal detection theory), so, too, were our models of hearing upgraded with the concepts underlying these new tools.

A more subtle analogy lies in the many striking contrasts found in the physical properties of diamonds and in the psychoacoustic properties of hearing. A diamond is one of the hardest substances known, yet it is easily splintered by a light tap with a well-positioned chisel. Similarly, the ear is an incredibly sensitive organ that can detect acoustic vibrations only marginally more intense than the Brownian motion of air molecules, yet it can process sounds a billion times more powerful, sounds powerful enough to shatter delicate wineglasses. Monaural perception of phase differences is extremely poor, yet binaural perception of interaural phase differences is incredibly good; interaural time differences as small as 10 μs can be detected.
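
To appreciate just how fine this binaural resolution is, consider the classical Woodworth spherical-head approximation, which relates the interaural time difference (ITD) to the azimuth of a distant source. The sketch below is purely illustrative (both the choice of model and the assumed head radius are conventions of this illustration, not values from the articles in this issue): even the largest physically possible ITD is well under a millisecond, so a 10 μs threshold corresponds to an azimuth change of roughly one degree.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, air at roughly 20 °C
HEAD_RADIUS = 0.0875     # m, a commonly assumed average head radius

def woodworth_itd(azimuth_deg: float) -> float:
    """Approximate ITD (seconds) for a distant source at the given azimuth,
    using the Woodworth spherical-head model: ITD = (r/c)(theta + sin theta)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# The largest possible ITD (source at 90 degrees) is only ~0.66 ms ...
print(f"ITD at 90 deg: {woodworth_itd(90) * 1e6:5.0f} us")
# ... so a 10-us threshold resolves azimuth changes of about one degree.
print(f"ITD at  1 deg: {woodworth_itd(1) * 1e6:5.1f} us")
```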

Why does the ear, like the diamond, exhibit such striking contrasts in its properties? The answer may lie in the dual role of hearing as an extremely sensitive alerting system and as a communication system that is closely linked to the vocal mechanism. From this perspective, it is not surprising that the ear is incredibly sensitive in detecting and localizing the direction of sound since these are key requirements for an efficient alerting system. Likewise, it is not surprising that the auditory system is less sensitive to many of the cues used for sound detection and localization when processing sound for recognition and communication.

An appreciation of the dual role of hearing is important in developing a deeper understanding of the hearing mechanism. In particular, an understanding of the ear's remarkable strengths (and equally remarkable weaknesses) and ways to capitalize on these strengths and guard against these weaknesses is essential to the development of more effective methods for the prevention and treatment of hearing disorders.

The instruments we use to measure hearing are, obviously, of considerable importance in the study of hearing and its disorders. Of even greater importance, however, is the extent to which our thinking is influenced by the capabilities of our measuring tools. For example, the power of an acoustic signal is determined by the product of its pressure and volume velocity (intensity being the corresponding power per unit area). Pressure is relatively easy to measure, whereas volume velocity is particularly difficult. Consequently, most of our knowledge of signal levels in the auditory system is based on pressure measurements. Over the years, a substantial body of data on normal and impaired hearing has been gathered in terms of sound pressure levels, but little is known about normal and abnormal volume velocities. In this case, ignorance is not bliss.

Power flow in an acoustic system can also be determined from measurements of both the pressure and impedance of the acoustic system. Various methods of measuring the acoustic impedance of the ear have been developed over the years. However, these measurements are relatively difficult, and until recently, instruments for the clinical measurement of acoustic impedance were severely limited in both bandwidth and accuracy. Nevertheless, the use of these instruments has led to important advances in the assessment of outer and middle ear function, but only with respect to low-frequency effects.
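
For concreteness, the sketch below shows the arithmetic linking these quantities for a sinusoidal signal: with complex pressure amplitude p and acoustic impedance Z = p/U (U being the volume velocity), the time-averaged power is W = (1/2)|p|^2 Re(1/Z). The numerical values are invented for illustration and represent no measured ear.

```python
def average_power_flow(p: complex, z: complex) -> float:
    """Time-averaged acoustic power (W) at a point where the complex
    pressure amplitude is p (Pa) and the acoustic impedance is
    Z = p/U (Pa*s/m^3), U being the volume velocity:
        W = 1/2 * Re(p * conj(U)) = 1/2 * |p|^2 * Re(1/Z).
    """
    return 0.5 * abs(p) ** 2 * (1.0 / z).real

# Invented values, for illustration only: a 1 Pa pressure amplitude
# (about 94 dB SPL) against a partly reactive acoustic impedance.
z = 1.0e8 + 0.5e8j   # Pa*s/m^3, hypothetical
print(f"Power flow: {average_power_flow(1.0, z):.2e} W")
```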

The measurement of reflected pressure signals resulting from discontinuities in the sound path from ear canal to cochlea provides useful information about the functioning of the outer and middle ear, as well as about mechanical sound transmission within the cochlea. However, these reflected pressure signals are difficult to measure and did not attract much clinical interest until the discovery that what appeared to be acoustic reflections from the inner ear were, in fact, signals generated by the ear itself (otoacoustic emissions). This discovery not only opened up a new field of investigation but also resulted in important clinical advances, such as more efficient methods of hearing screening using otoacoustic emissions.

The preceding analogies and illustrative examples are intended to convey the following messages:

· Our understanding of hearing reflects, in a complex way, the light (i.e., the technological tools) we have used to study hearing.
· The dual role of hearing as an alerting system and a means of communication has resulted in some surprising dualities in the properties of the auditory system.

The articles in this special issue illustrate how an understanding of these concepts has led to significant advances in the field.

Commentary on this Issue

This special issue on hearing and hearing loss addresses topics at the cutting edge of basic and clinical research. The topics studied and the many important advances made over the past few years are too numerous to be included in a single issue. For example, the area of implanted auditory prostheses has experienced explosive growth in the past decade. Therefore, a separate special issue will focus on cochlear implants and other implantable prostheses and relevant advances in electrophysiological acoustics.

The articles in the current issue are grouped according to the topics Overview, Prevention, Evaluation, Intervention, and Future Directions.

Overview

In the first article, Maurice H. Miller and Jerome D. Schein (p. 1) present an overview of auditory disorders of particular relevance to the veteran population, such as noise-induced hearing loss, idiopathic sudden sensorineural hearing loss, otosclerosis, and Ménière's disease. All of these disorders involve sensorineural hearing loss combined with other significant symptoms. This article provides relevant background information for the articles that follow.

The next two articles review the effects of aging on speech perception. The health of the aging veteran is of particular concern to the Department of Veterans Affairs (VA) because of the extremely high proportion of older veterans. Older people are recognized to have greater difficulty understanding speech than younger adults. An important question is whether this age-related communication deficit is primarily a result of reduced auditory sensitivity with advancing age or whether more central factors are involved. In the first of these two articles, Sandra Gordon-Salant (p. 9) reports on a substantial body of evidence showing that temporal processing slows significantly with age and may account for the relatively poor speech recognition of seniors even after the effect of hearing loss is considered. Other researchers argue that peripheral effects, such as reduced auditory sensitivity, are primarily responsible for seniors' relatively poor speech understanding (Wilson and McArdle, p. 79).

In the second article on aging effects, Jeffrey S. Martin and James F. Jerger (p. 25) argue that "while loss in peripheral hearing sensitivity explains many of the listening problems of elderly persons, age-related declines in general cognitive skill and central auditory processing also appear to contribute." In their review of the topic, they explain that although auditory temporal processing deficits are a major factor contributing to age-related deficits in speech understanding, other central processing factors need to be considered. Martin and Jerger's research adds a new dimension to the problem by showing that age-related deficits in interhemispheric information processing may also contribute to the decline in speech understanding with age. An important clinical implication of these data is that bilateral amplification may be contraindicated for an elderly person with this type of central auditory processing disorder. VA clinicians must be sensitive to a possible age-related central disorder while remaining cognizant of the advantages of bilateral amplification for an individual with symmetric peripheral hearing loss (Simon's article also discusses bilateral amplification, p. 117).

Prevention

The most efficient method of dealing with hearing loss is to prevent it in the first place. As indicated earlier, the ear is remarkably sensitive, yet also extremely fragile. In their article, Stephen A. Fausti, Debra J. Wilmington, Patrick V. Helt, Wendy J. Helt, and Dawn Konrad-Martin (p. 45) review the prevalence, incidence, and causes of hearing loss. Noise exposure and the side effects of ototoxic medication are two major sources of hearing impairment.

The ear's dual role provides both a means of communication and a very sensitive alerting system. The protective mechanisms of the ear are well suited for speech communication at a comfortable listening level (including protection from the intense vibration of one's own voice) as well as for endurance of a moderately intense alerting signal. The ear, however, is fragile with respect to long-term exposure to high-intensity sounds. Although each individual exposure may not appear harmful (brief sounds of sufficient intensity can, however, cause immediate damage), cumulative damage from repeated noise exposure may result in long-term impairment. Prevention is preferable to rehabilitation, and as noted by Fausti et al., a hearing loss prevention program focused on education in the appropriate use of ear protection and reduction of noise exposure can effectively reduce the incidence of noise-induced hearing loss.

Treatment with certain chemotherapeutic medications or aminoglycoside antibiotics can also cause hearing loss. Individual susceptibility to hearing loss varies depending on age, concomitant exposure to certain medications or chemicals, and prior exposure to noise. Therefore, clinicians must recognize when precautions are needed to protect the hearing mechanism and carefully monitor the hearing of patients taking potentially ototoxic medications. Fausti et al. describe methods for efficient monitoring of hearing under these conditions.

Evaluation

The next two articles address issues of measurement and evaluation. Middle ear status should be ascertained at the outset of an audiological evaluation because the middle ear is involved in most hearing tests. In the first article, Jont B. Allen, Patricia S. Jeng, and Harry Levitt (p. 63) describe a recent advance in the evaluation of middle ear function: an instrument that measures acoustic power flow in the ear canal and the proportion of this power entering the middle ear. A valuable feature of the instrument is that this information is obtained rapidly (within a few minutes) and conveniently with a bandwidth of about 10 kHz. Other clinical instruments for evaluating middle ear function are limited to measurements in the low frequencies (below 1.0 kHz).

The status of the middle ear is particularly important in the measurement of otoacoustic emissions because the external signal traveling toward the cochlea and the evoked otoacoustic emission traveling away from the cochlea are both subject to attenuation due to abnormal middle ear function. The hardware (ear insert, computerized instrumentation) for the instrument described by Allen et al. is the same as that for measuring otoacoustic emissions, allowing the tests for evaluating middle ear function (power flow measurements) and for screening inner ear function (otoacoustic emissions) to be combined in a single instrument and administered in rapid succession. This new instrument may prove to be extremely useful since middle ear disorders are a major cause of false positives in hearing screening.

The analogy of the facets of a diamond is particularly apt with respect to measuring middle ear function. The acoustic impedance and acoustic power reflectance of the middle ear are different facets of the same underlying set of physical properties. Thus, acoustic impedance can be derived from power reflectance, and vice versa, by means of a mathematical transformation. Allen et al. examined six possible ways of measuring middle ear function (three power-based measurements: power reflectance, power absorption, and transmittance; and three impedance-based measurements: normalized acoustic resistance, normalized acoustic reactance, and normalized absolute impedance). Each measure provided a useful view of middle ear function; transmittance, however, was judged to be the most useful because it "specifies power absorption on a decibel scale and in so doing provides a useful link to other widely used audiological measurements, such as hearing level." An additional advantage of transmittance (and of the power-based measurements generally) is that the exact location of the insert in the ear canal is not critical: power flow does not vary with distance along the ear canal because frictional losses are extremely small and, for all practical purposes, negligible. In contrast, traditional impedance-based measurements depend critically on the exact location of the ear insert.
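
These transformations are compact enough to state directly. The sketch below is a minimal illustration (not the authors' implementation; the impedance values and the characteristic impedance of the ear canal are invented), deriving all six measures from a single measured complex impedance via the standard pressure-reflectance relation gamma = (Z - Z0)/(Z + Z0).

```python
import math

def middle_ear_measures(z: complex, z0: float) -> dict:
    """Derive the six measures discussed by Allen et al. from a measured
    complex acoustic impedance z, given the characteristic impedance z0
    of the ear canal (z0 depends on canal area; values here are invented)."""
    gamma = (z - z0) / (z + z0)            # complex pressure reflectance
    power_reflectance = abs(gamma) ** 2
    power_absorption = 1.0 - power_reflectance
    return {
        "power_reflectance": power_reflectance,
        "power_absorption": power_absorption,
        "transmittance_dB": 10.0 * math.log10(power_absorption),
        "normalized_resistance": z.real / z0,
        "normalized_reactance": z.imag / z0,
        "normalized_abs_impedance": abs(z) / z0,
    }

# Hypothetical ear absorbing roughly 70 percent of the incident power.
for name, value in middle_ear_measures(2.0e7 + 1.5e7j, 1.0e7).items():
    print(f"{name:26s} {value:8.3f}")
```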

In the second article on issues of measurement, Richard H. Wilson and Rachel McArdle (p. 79) examine the tricky problem of using speech signals to evaluate the functional status of the auditory system. The article briefly reviews the evolution of audiometry, then continues with a discussion of the key problems in the area. Of central importance is the two-component character of hearing loss: "loss of acuity" and "deficiency in the clarity with which speech is received." The former can be predicted from the audiogram and corrected by acoustic amplification, whereas the latter involves perceptual distortions. As a consequence, deficiency in clarity is not readily predicted from the audiogram, and amplification, although helpful, cannot restore normal hearing.

A closely related problem is the relatively poor speech recognition in noise exhibited by people with hearing loss. In comparison with normal-hearing listeners, individuals with hearing loss require substantially higher speech-to-interference ratios to function at a comparable level. Wilson and McArdle's data show that listeners with hearing loss who have good speech recognition in quiet perform relatively poorly in noise, and listeners with poor speech recognition in quiet perform much more poorly in noise.

Intelligibility for normal-hearing listeners is determined by the signal-to-noise ratio (SNR) and is independent of presentation level, except at very high levels. For listeners with hearing loss, however, the SNR required for a given percent intelligibility increases with increasing presentation level. Wilson and McArdle concisely summarize this effect for listeners with different degrees of hearing loss. In assessing the joint effects of SNR and presentation level, one must bear in mind that for listeners with a severe hearing loss, not all the available speech signal (i.e., that portion of the speech signal above the noise) will be audible at lower presentation levels.
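
The audibility effect noted in the last sentence can be made concrete with a crude, articulation-index-style count of audible frequency bands (a sketch only, with invented band levels and thresholds; real intelligibility indices weight bands by importance): speech contributes in a band only where it exceeds both the noise and the listener's threshold, so at a low presentation level some of the speech that clears the noise still falls below an elevated threshold.

```python
def audible_bands(speech_db, noise_db, threshold_db):
    """Count frequency bands in which speech is audible, i.e., above both
    the masking noise and the listener's threshold. A crude stand-in for
    an articulation-index-style calculation, for illustration only."""
    return sum(1 for s, n, t in zip(speech_db, noise_db, threshold_db)
               if s > n and s > t)

# Invented example: five bands, a fixed 6 dB SNR, and severely elevated
# high-frequency thresholds.
noise = [44, 44, 44, 44, 44]        # dB SPL per band
speech = [n + 6 for n in noise]     # speech 6 dB above the noise everywhere
thresholds = [30, 40, 55, 70, 85]   # elevated thresholds, low to high bands

print(audible_bands(speech, noise, thresholds))  # -> 2 bands audible

# Raising the presentation level by 20 dB leaves the SNR unchanged but
# lifts more of the speech above threshold.
print(audible_bands([s + 20 for s in speech],
                    [n + 20 for n in noise], thresholds))  # -> 3 bands
```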

Wilson and McArdle also address the effects of age and hearing loss on speech recognition. Their findings show that the age-related decrease in word recognition in noise is almost entirely accounted for by the concomitant reduction in hearing sensitivity. In addition to the effects of peripheral hearing loss, they note that the recognition of more complex speech forms (e.g., sentences) involves central auditory processing and that such factors as age-related decrements in working memory and processing speed result in poorer recognition of everyday speech (more on central auditory processing can be found in Gordon-Salant, p. 9; Martin and Jerger, p. 25; and Neuman, p. 169).

Intervention

In the first article on intervention, James A. Henry, Martin A. Schechter, Carl L. Loovis, Tara L. Zaugg, Christine Kaelin, and Melissa Montero (p. 95) address the common hearing problem of tinnitus, or ringing in the ears. Although everybody experiences tinnitus at some time, a significant proportion of the population (10%-15%) experiences chronic tinnitus, and about one in five of these individuals requires clinical intervention. The prevalence of tinnitus is even greater in the veteran population since the condition increases with age and hearing loss and is more common in men. At present, no accepted standard of practice exists for the clinical management of tinnitus. Henry et al. describe their research-based, five-level "progressive intervention" approach to the management of tinnitus. They also provide preliminary data from their clinical trials showing significant reductions in tinnitus severity after 12 months of treatment, with greater improvement demonstrated with Tinnitus Retraining Therapy than with Tinnitus Masking. Education and counseling were particularly important for effective treatment.

In the second article, Helen J. Simon (p. 117) reviews the history and relative merits of bilateral amplification. The overall conclusion is that bilateral amplification has significant advantages over unilateral (monaural) amplification, except in a few cases where bilateral amplification is contraindicated (Martin and Jerger, p. 25). In addition to the merits of bilateral amplification, Simon points out a previously unrecognized negative effect of monaural amplification: long-term amplification in one ear can reduce localization ability significantly, even when listening is unaided.

Simon also provides revealing data on the accuracy and precision of localization by people with hearing loss. Localization accuracy near the median plane is remarkably good, about 1° or 2° for normal-hearing listeners. Simon shows that, in comparison with normal-hearing listeners, accuracy of localization is almost as good in listeners with symmetric hearing loss and long-term bilateral amplification but poor in listeners with similar symmetric hearing loss and long-term monaural amplification (note that the measurements were obtained unaided). Further, the difference between accuracy and precision of localization increases with angle of azimuth, i.e., for sounds coming from an off-center location: precision of localization decreases only slightly with increasing angle of azimuth, while accuracy decreases significantly.
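
The accuracy/precision distinction here is the usual statistical one, and a small sketch (with invented response data) makes it explicit: accuracy is the mean signed error relative to the target (systematic bias), while precision is the spread of the responses about their own mean.

```python
import statistics

def localization_stats(target_deg, responses_deg):
    """Accuracy = mean signed error relative to the target (systematic bias);
    precision = standard deviation of the responses (spread).
    The response data below are invented."""
    bias = statistics.mean(r - target_deg for r in responses_deg)
    spread = statistics.stdev(responses_deg)
    return bias, spread

# A hypothetical listener at a lateral azimuth: the responses cluster
# tightly (good precision) but are systematically displaced from the
# target (poor accuracy), consistent with the dissociation described above.
bias, spread = localization_stats(60, [52, 54, 53, 51, 55])
print(f"accuracy (bias): {bias:+.1f} deg, precision (sd): {spread:.1f} deg")
```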

The ear is remarkably sensitive to interaural differences, resulting in a high degree of accuracy in auditory localization. Another remarkable facet of auditory localization is that even when there are errors, the errors show a surprisingly high degree of symmetry on either side of the midline. For example, localization errors to the right of the midline are roughly the same in magnitude as (but opposite in sign to) errors to the left of the midline. Simon observed this high degree of symmetry for normal-hearing listeners and for listeners with symmetric hearing loss and long-term bilateral amplification.

Note that the term bilateral amplification has been used in this discussion instead of the more commonly used binaural amplification because binaural hearing aids are not truly binaural: the left and right ear instruments are not usually matched appropriately. One company recently introduced a digital hearing aid that uses an interaural wireless connection to match the signals at the two ears. Although the degree of matching is limited to overall gain, it represents the beginning of true binaural amplification in modern hearing aids. Whether this form of amplification will yield significant benefits over current implementations of bilateral amplification remains to be seen.

Todd A. Ricketts' article on directional hearing aids (p. 133) is a logical follow-on to Simon's article. Bilateral amplification not only provides directional information resulting in a remarkably high degree of localization ability but also provides improved speech intelligibility in noise, provided the speech and noise come from different directions. Similarly, directional hearing aids provide improved speech intelligibility in noise under the same conditions but with the important difference that directional information is reduced for sounds coming from directions attenuated by the directional input. This ability of the auditory system to improve speech intelligibility in noise in a manner analogous to a directional input, yet still maintain a high level of sensitivity to cues from all directions, is another example of its remarkable dual functioning.
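
The directional tradeoff can be seen in the polar response of a first-order directional microphone, computed below from the standard textbook formula R(theta) = alpha + (1 - alpha)cos(theta) (an illustrative sketch, not any particular hearing aid's design): sounds from the rear are strongly attenuated, which improves SNR when the noise arrives from behind, but the same attenuation reduces access to alerting sounds from those directions.

```python
import math

def first_order_response(theta_deg: float, alpha: float = 0.5) -> float:
    """Normalized magnitude response of a first-order directional microphone:
    R(theta) = alpha + (1 - alpha) * cos(theta).
    alpha = 0.5 gives the classic cardioid; alpha = 1.0 is omnidirectional."""
    return abs(alpha + (1.0 - alpha) * math.cos(math.radians(theta_deg)))

# Cardioid: full sensitivity in front, a deep null directly behind.
for az in (0, 90, 180):
    r = first_order_response(az)
    level = 20.0 * math.log10(r) if r > 0 else float("-inf")
    print(f"{az:3d} deg: {level:6.1f} dB re front")
```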

Although the importance of improving SNR has long been recognized and directional microphones were developed more than 80 years ago, directional hearing aids have only recently gained acceptance as a viable option. Ricketts provides an excellent historical review of the development of directional hearing aids and the practical constraints that impeded the implementation of this powerful method for improving SNR. He also explains why the large improvements in SNR obtained under laboratory conditions are reduced substantially in the everyday use of hearing aids.

Factors contributing to the loss of directionality and, concomitantly, to smaller improvements in SNR under real-world listening conditions include environmental factors (e.g., room reverberation, location of listener relative to sound source) and fitting factors (e.g., vent size, microphone opening azimuth). Clinicians must understand the interplay among these factors so as to maximize the benefit of directional hearing aids for each individual. Directional hearing aids also have limitations, as well as possible detriments, such as reduced sensitivity to alerting signals arriving from outside the coverage of the directional input. Ricketts addresses these issues clearly and offers useful advice on selecting and fitting directional hearing aids.

One of the great advantages of digital technology is the tremendous flexibility that it provides in the development and implementation of new methods of signal processing. One outcome has been the development of novel and substantially improved directional inputs for hearing aids. These technological advances offer the possibility of developing instruments that are significantly more effective than the current generation of directional hearing aids, a prospect that has spurred major new research efforts on directional hearing aids. The last section of Ricketts' article describes state-of-the-art research in this area and its clinical implications.

In the next article, Linda Kozma-Spytek and Judith Harkins (p. 145) address a new problem facing hearing aid wearers. Digital cellular telephones transmit electromagnetic carrier signals at very high frequencies (in the gigahertz range). Very high-frequency signals have very short wavelengths, and as a result, short pieces of metal in a hearing aid (or any other electronic device) close to the digital cellular telephone can act as antennae and pick up the transmitted carrier signal. Although high-frequency carrier signals are well beyond the audio frequency range, the digital modulations of these signals are within the audio range and nonlinear circuit components in the hearing aid can demodulate the high-frequency carrier signal, resulting in an audible interference or buzz.
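
The demodulation mechanism is elementary and easy to demonstrate numerically. In the sketch below, a square-law nonlinearity (a stand-in for a transistor junction or other nonlinear component) is applied to an amplitude-modulated carrier; all parameters are invented, and a 20 kHz stand-in carrier is used so the simulation stays small (real cellular carriers are in the gigahertz range). The 217 Hz modulation rate mirrors the GSM TDMA frame rate commonly associated with the audible buzz.

```python
import math

FS = 200_000          # sample rate, Hz (invented simulation parameter)
F_CARRIER = 20_000    # Hz, scaled-down stand-in for a GHz cellular carrier
F_MOD = 217           # Hz, audio-rate modulation (GSM TDMA frame rate)

n = int(0.05 * FS)    # 50 ms of signal
t = [i / FS for i in range(n)]
modulated = [(1 + 0.8 * math.cos(2 * math.pi * F_MOD * ti))
             * math.cos(2 * math.pi * F_CARRIER * ti) for ti in t]

# A square-law nonlinearity demodulates the carrier:
squared = [x * x for x in modulated]

# A crude moving-average low-pass filter removes the carrier components,
# leaving an audio-band component at F_MOD (the audible buzz).
WIN = 50
baseband = [sum(squared[i:i + WIN]) / WIN for i in range(n - WIN)]
print(f"Audio-band ripple after squaring: {max(baseband) - min(baseband):.2f}")
```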

Kozma-Spytek and Harkins review audible interference from digital cellular telephones and the steps being taken by the Federal Communications Commission, involved industries, and consumer organizations. This problem is difficult and multidimensional because of different types of signal transmission technologies, different types of hearing aids, different types of hearing aid inputs (microphone, telecoil), and significant individual differences among hearing aid users with respect to the audibility of the interference. These issues are addressed incisively in the review, followed by the results of an experiment at a national conference of Self-Help for Hard of Hearing People, Inc., in which typical hearing aid users rated the interference generated by several representative transmission technologies. The results indicate that certain transmission technologies create more annoyance from interference than others and that the use of a microphone or telecoil input to the hearing aid also affects the wearer's susceptibility to the interference. The data also showed that when annoyance from interference is high, it is the dominant factor affecting the usability of the handset. However, other factors play an increasingly important role in handset usability at lower interference levels.

The problem of electromagnetic interference in hearing aids is rapidly expanding and growing in complexity as digital wireless technologies become increasingly ubiquitous. Audiologists, service providers, and other concerned individuals need to familiarize themselves with the issues and methods of addressing this new problem.

In the last article on intervention methods and outcome measurement, Gabrielle H. Saunders, Teresa H. Chisolm, and Harvey B. Abrams (p. 157) deal with an issue of particular importance to the field: assessing the cost-effectiveness of audiological intervention. The VA dispensed more than 300,000 hearing aids in 2004 at a cost approaching $120 million. In view of the substantial costs involved, cost-effectiveness is a major concern. Saunders et al. describe how the efficacy of acoustic amplification can be measured in terms that will allow for a sensible cost-benefit analysis.

The first step in the analysis is to define hearing aid outcome in a way that permits comparison of the benefits obtained from acoustic amplification with the costs of obtaining these benefits. An effective way of achieving this objective is to use generic health status instruments, which compare treatment effects and costs across interventions for different diseases and disorders.

One useful generic health status instrument is the World Health Organization (WHO) Disability Assessment Schedule, developed jointly by the WHO and the National Institutes of Health. This tool assesses multiple domains associated with quality of life, including speech understanding and communication with others. Another useful generic measure sensitive to hearing aid intervention is the Psychosocial Impact of Assistive Devices Scale, which assesses the way in which assistive devices affect subjective perceptions of psychological well-being and quality of life.

The next step in the cost-benefit analysis process is to choose an appropriate measure of hearing aid outcome. The WHO's International Classification of Functioning, Disability and Health (ICF) is a useful conceptual framework for delineating the goals of hearing aid intervention and subsequently selecting instruments to measure outcomes related to those goals. Saunders et al. examine several existing outcome measures in the context of the ICF and discuss their advantages and limitations.

A key step in developing a cost-benefit analysis is to compare the cost of the intervention with the improvement in quality of life, as measured in terms of a universal generic standard. The numerical index used for this purpose is the cost of intervention per quality-adjusted life year (QALY) gained, which is defined in the article. An alternative approach is to use utility measures, where utility is essentially a gauge of health-state preference on a universal scale from 0 (least desirable health state) to 1 (most desirable health state). The authors conclude with a discussion of factors that can affect the results of a cost-benefit analysis.
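
In its simplest form, the index divides the cost of the intervention by the quality-of-life gain accumulated over time. The sketch below uses entirely invented numbers and shows only the basic arithmetic; actual analyses discount future years and account for uncertainty.

```python
def cost_per_qaly(cost, utility_before, utility_after, years):
    """Cost per quality-adjusted life year gained, in its simplest form:
    QALYs gained = (gain in utility) x (years over which it is enjoyed),
    with utility on the 0 (worst) to 1 (best) health-state scale."""
    return cost / ((utility_after - utility_before) * years)

# Invented example: a $2,000 intervention that raises utility from 0.70
# to 0.75 for 10 years yields 0.5 QALYs, i.e., $4,000 per QALY gained.
print(f"${cost_per_qaly(2000, 0.70, 0.75, 10):,.0f} per QALY gained")
```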

Saunders et al. have attacked a complex problem of crucial importance to the field. In an age when cost-effectiveness is a dominant mantra, the cost-effectiveness of audiological intervention must be established logically and reliably.

Future Directions

The remaining section provides a glimpse of the future. In the first of the two articles, Arlene C. Neuman (p. 169) examines the neurophysiological evidence of plasticity in the central auditory system. A substantial body of human behavioral data shows the effects of auditory deprivation, auditory enhancement, and auditory training on the identification and/or discrimination of speech and nonspeech stimuli. In addition, substantial electrophysiological data provide evidence of changes in the auditory cortex of mature animals with acquired sensorineural hearing loss. A fundamentally important question for auditory rehabilitation is whether behavioral observations of deprivation and/or acclimatization in humans are purely perceptual or are a result of deeper electrophysiological changes in the mature central auditory system. Recent advances in noninvasive auditory evoked potentials (AEPs) and functional brain-imaging techniques, such as magnetoencephalography and functional magnetic resonance imaging, have allowed the study of plasticity in the human central auditory system. Studies of AEPs, including the mismatch negativity technique, have yielded significant new evidence of plasticity in the central auditory system. Neuman provides an insightful review of recent electrophysiological evidence of central plasticity. She focuses on evidence of plasticity due to acquired hearing loss and on whether amplification and auditory training interventions in adults lead to plastic changes and, if so, the perceptual significance of these changes. The review also includes evidence of changes in the auditory cortex associated with auditory training in persons with normal hearing.

The development of effective auditory rehabilitation methods requires an in-depth understanding of the plasticity of the auditory system. The evidence of central plasticity obtained from noninvasive electrophysiological measurements combined with information obtained from behavioral studies represents an important step toward developing a unified physiological and behavioral framework for the study of auditory plasticity. This deeper understanding can be used effectively to develop rehabilitation procedures that facilitate useful functional changes in the processing of impoverished signals by the central auditory system.

In the second article, Jonathan I. Matsui and Brenda M. Ryals (p. 187) provide perhaps the most exciting peek into the future with their discussion of hair cell regeneration. As they note, "The discovery that hair cells can regenerate in birds and other nonmammalian vertebrates has fueled a wide range of studies that are designed to find ways of restoring hearing and balance after such damage." Matsui and Ryals review key studies on sensory hair cell regeneration and their clinical implications. The review begins with a brief description of the different types of supporting and sensory cells in the mammalian inner ear. A major difference between the auditory and vestibular systems is that mammalian auditory hair cells do not regenerate, whereas vestibular hair cells of both mammalian and nonmammalian species can regenerate at low levels.

As noted by Matsui and Ryals, "when hair cell damage or death occurs in birds, some signal from the dying hair cell triggers the neighboring supporting cells to either proliferate or transdifferentiate into immature hair cells. The cells then need environmental, molecular, or genetic cues to differentiate into hair cells. Finally, nerve fibers from the auditory nerve reconnect the hair cells to the central nervous system so the bird can process the sensory information." Unfortunately for human hearing, proliferation or transdifferentiation of cochlear supporting cells to replace damaged or dying hair cells does not occur spontaneously in mammals. Much of the ongoing research in hair cell regeneration has focused on the nature of the differences between mammalian and avian cells and on whether regeneration in the mammalian cochlea can be stimulated, for instance, with the use of stem cells. Matsui and Ryals provide an incisive, detailed review of current research in the area and draw several important conclusions.

A crucial question from the clinician's perspective is, Will research on cell regeneration substantially impact clinical practice in the foreseeable future? The answer is a definite "maybe." Although evidence exists that several cell types can be regenerated in the mammalian cochlea by means of stem cells, the differentiation of these cells into supporting and sensory cells with functional capabilities is still a long way off. Consequently, restoring a fully functioning cochlea through cell regeneration is highly unlikely in the immediate future. On the other hand, partial restoration of some cells could be of great clinical significance since the difference between profound deafness and even a small amount of hearing is substantial.

Harry Levitt, PhD
Distinguished Professor Emeritus, The City University of New York, and Investigator, National Center for Rehabilitative Auditory Research, Portland, Oregon; harrylevitt@earthlink.net
Stephen A. Fausti, PhD
Director, National Center for Rehabilitative Auditory Research, Portland, Oregon; stephen.fausti@med.va.gov
Jerome D. Schein, PhD
Professor Emeritus, New York University, and Professor, University of Alberta, Canada; scheinej@aol.com
DOI: 10.1682/JRRD.2005.09.0148
