Journal of Rehabilitation Research & Development
Volume 43, Number 4, July/August 2006
Pages 537-552


Perceptual training improves syllable identification in new and experienced hearing aid users

G. Christopher Stecker, PhD;1,2 Glen A. Bowman, BA;1 E. William Yund, PhD;1 Timothy J. Herron, MA;1 Christina M. Roup, PhD;1,3 David L. Woods, PhD1,4*

1Human Cognitive Neurophysiology Laboratory, Department of Veterans Affairs Northern California Health Care System, Martinez, CA; 2Department of Speech and Hearing Sciences, University of Washington, Seattle, WA; 3Department of Speech and Hearing Science, Ohio State University, Columbus, OH; 4Department of Neurology, University of California Davis, Sacramento, CA
Abstract — We assessed the effects of perceptual training of syllable identification in noise on nonsense syllable test (NST) performance of new (Experiment 1) and experienced (Experiment 2) hearing aid (HA) users with sensorineural hearing loss. In Experiment 1, new HA users were randomly assigned to either immediate training (IT) or delayed training (DT) groups. IT subjects underwent 8 weeks of at-home syllable identification training and in-laboratory testing, whereas DT subjects underwent identical in-laboratory testing without training. Training produced large improvements in syllable identification in IT subjects, whereas spontaneous improvement was minimal in DT subjects. DT subjects then underwent training and showed performance improvements comparable with those of the IT group. Training-related improvement in NST scores significantly exceeded improvements due to amplification. In Experiment 2, experienced HA users underwent training and testing procedures identical to those used in Experiment 1. These experienced users also showed significant training benefit. Training-related improvements generalized to untrained voices and were maintained on retention tests. Perceptual training appears to be a promising tool for improving speech perception in new and experienced HA users.
Key words: auditory, hearing aid, hearing loss, learning, masking noise, nonsense syllables, perception, perceptual training, personal computer, phonemes, presbycusis, rehabilitation, speech.

Abbreviations: CV = consonant-vowel, DT = delayed training, HA = hearing aid, IT = immediate training, NS = nonsignificant, NST = nonsense syllable test, PC = personal computer, R-SPIN = Revised Speech Perception in Noise (test), SNHL = sensorineural hearing loss, SNR = signal-to-noise ratio, SPL = sound pressure level, VA = Department of Veterans Affairs, VC = vowel-consonant.
*Address all correspondence to David L. Woods, Professor of Neurology, University of California Davis, and Chief of Clinical Neurophysiology and of Research fMRI; Neurology Service (127E), Building R4, VA Northern California Health Care System, 150 Muir Road, Martinez, CA 94553; 925-372-2571; fax: 925-229-2315. Email: dlwoods@ucdavis.edu
DOI: 10.1682/JRRD.2005.11.0171
INTRODUCTION

Progressive high-frequency sensorineural hearing loss (SNHL) alters auditory processing at multiple levels of the auditory system. Most obviously, it alters cochlear function and deprives listeners of high-frequency speech cues that are critical in discriminating consonants [1]. In addition, gradual high-frequency SNHL results in a widespread reorganization of central auditory connections, diminishing high-frequency inputs and enhancing connections from nearby zones with intact cochlear function [2]. Phoneme processing strategies are correspondingly altered, with hearing-impaired subjects depending more on phonetic cues conveyed by low frequencies [3-6]. For example, hearing-impaired subjects rely disproportionately on vowel duration to discriminate voiced and voiceless fricative pairs such as /v/-/f/ and /z/-/s/ [4].

While hearing aids (HAs) can partially compensate for cochlear deficits by amplifying high-frequency sounds, long-standing peripheral hearing loss will also produce neuroplastic alterations within the central auditory system, including changes in synaptic connections and dendritic arborization [7]. While the newly amplified auditory inputs provided by HAs may enhance functional neuroplasticity [8], abnormal synaptic connections will not instantly renormalize. Normalization may be reflected in the gradual perceptual changes occurring during acclimatization [9-11]. However, acclimatization effects are generally small in magnitude and inconsistently obtained [12-15]. This minimal acclimatization benefit suggests that the neuroplastic changes needed to normalize auditory perception often may not occur. In the current study, we investigated the capability of adaptive perceptual training to force reorganization and consequent perceptual improvement in a high-level auditory task: syllable identification.

Cortical Plasticity and Perceptual Learning

Research over the last decade has revealed extensive neuroplasticity in the auditory system that optimizes neuronal responses to the behaviorally relevant acoustic features present in the environment [16-18]. Changes in cortical organization occur reliably following both selective stimulation and deprivation. For example, explicit exposure to particular sound frequencies can enhance the number of cortical neurons driven by those stimuli and alter neuronal tuning properties and response latencies to favor behaviorally relevant sounds [19]. Environmental enrichment can also sharpen neuronal tuning curves in auditory cortex [20], while noisy environments impair the development of normal tuning [21].

In typical SNHL, responses to high-frequency sounds are gradually reduced. In cats with high-frequency hearing loss, neurons in auditory cortex originally tuned to high frequencies alter their tuning to process inputs from intact nearby portions of the cochlea [2,22]. Similar neuroplastic changes occur in humans with SNHL: difference limens for frequency are altered in a manner consistent with a neuroplastic expansion of the representation of sounds below the high-frequency hearing loss and the correspondingly reduced representation of higher frequencies [23]. Some of these changes occur in auditory cortex and can be imaged electrophysiologically [24]. The magnitude of auditory cortex reorganization following hearing loss depends critically on the nature of the posttraumatic acoustic experience. For example, recent evidence suggests that the reorganization of the tonotopic map of primary auditory cortex following sudden high-frequency hearing loss can be reduced by exposure to an auditorily enriched environment with an overrepresentation of high-intensity sounds at frequencies in the expected range of hearing loss [25].

Since synaptic connections within the central nervous system are in continuous flux, neuronal reorganization that has occurred as a consequence of gradual hearing loss would continue to impair hearing even if normal cochlear function were completely restored by an HA. With typical SNHL, high-frequency inputs are gradually lost, altering the acoustic cues accessible to phonetic analysis. Some phonetic cues (e.g., those carried by the fundamental and first formant) will be minimally affected, whereas others (e.g., high-frequency fricative energy, plosive bursts, and second and higher formant transitions) will be attenuated to varying degrees depending on the listening situation, the phoneme articulated, and the audiometric configuration. One would think that restoring speech cues with an HA would be sufficient to reestablish normal connections during acclimatization. However, acclimatization effects for most types of HAs are small in magnitude and difficult to distinguish from improvement due to increased familiarity with test materials [26]. Acclimatization may fail for several reasons. First, many HAs do not present consistent high-frequency speech cues across a broad range of speech intensities, thus complicating the perceptual learning process. Second, the information in everyday communication is highly redundant. Visual, semantic, and redundant low-frequency phonetic cues often allow listeners to understand speech without relying on normal high-frequency phonetic cues. Indeed, hearing-impaired individuals may try to avoid difficult high-noise situations where high-frequency cues become more essential. Finally, feedback about phoneme-processing errors is delayed and inconsistent in normal conversation, so subjects lack the critical information needed for low-level perceptual changes to occur.

Perceptual learning can be optimized with adaptive training paradigms that are targeted to enhance selected sensory discriminations. A wide array of recent research has demonstrated that perceptual learning paradigms can drive neuronal reorganization in visual [27-29] and auditory [30-31] modalities. Perceptual learning results in cortical reorganization [16-17] that is reflected in altered response properties of single neurons [19] as well as large-scale alterations seen with functional magnetic resonance imaging [32] and event-related brain potentials [33-36]. Here, we test the capability of focused perceptual training to enhance syllable discrimination in new and experienced HA users.

Perceptual Training of Hearing Aid Users

Currently, audiological rehabilitation is largely limited to HA dispensing [37-38]. Existing rehabilitation training usually consists of group discussions that include other HA users and their families [39]. While this training helps patients adjust their behavior for successful HA use [40], it is not designed to enhance their low-level perceptual use of the speech cues that the HA has restored.

Recently, Sweetow and Palmer reviewed the capability of perceptual training to improve hearing in patients with SNHL [41]. Several studies of adaptive auditory training have shown performance improvements in subjects with hearing loss [42-43], particularly when visual cues are present [44]. However, these studies typically required the intervention of a suitably trained clinician, which limited their practical application. In contrast, at-home personal computer (PC)-based perceptual training has received much recent attention as a possible treatment for central auditory processing disorder and dyslexia [45-50]. Although the mechanism by which PC-based training enhances performance in these disorders is not well understood [51], training effects can be substantial. In this study, we evaluated the effect of at-home PC-based auditory training on the syllable identification capabilities of patients with SNHL who were new (Experiment 1) or experienced (Experiment 2) HA users.

METHODS
Experiment 1: New Hearing Aid Users
Subjects

Experiment 1 investigated training effects in subjects with newly prescribed HAs. Twenty-three subjects (all male) with mild-to-moderate SNHL were recruited from the Department of Veterans Affairs (VA) Audiology Service (Northern California Health Care System, Martinez, California). The subjects ranged in age from 50 to 80 (mean = 69), had no history of neurological or psychiatric disorder, and were in good health. Subjects were paid for their participation during both training and testing. They were randomly assigned to one of two conditions: 12 subjects were assigned to the immediate training (IT) group and received training during the initial 8 weeks of HA use after a 1-week acclimatization period. The remaining 11 subjects were assigned to the delayed training (DT) group as untrained controls during the first 8 weeks of testing and then received training during the subsequent 8 weeks.

Participation was limited to patients with high-frequency SNHL (thresholds at 4000 Hz at least 20 dB worse than at 500 Hz with maximal thresholds of 40 dB at 500 Hz and 80 dB at 4000 Hz) that was bilaterally symmetric (left and right ear thresholds differed by less than 20 dB from 250 to 8000 Hz). Bone conduction thresholds were within 10 dB of air-conduction thresholds. Average audiograms for the two groups are plotted in Figure 1. Hearing losses did not differ significantly between groups at any tested frequency. Subjects had been issued bilateral digital HAs mostly with 2, 3, or 4 channels and low-to-moderate compression ratios (range 1.0-2.0) as prescribed by the VA Audiology Service. HAs were fitted using the NAL-NL1 (National Acoustic Laboratories' nonlinear fitting procedure, version 1 [Australia]) target [52] and verified by probe microphone measurements.


Figure 1. Average pure tone audiograms showing hearing loss in decibels at tested pure tone frequencies for immediate training (IT) (n = 12) and delayed training (DT) (n = 11) groups of new hearing aid users.
Design

Figure 2 depicts the time line of the training and testing sessions. Both groups were first familiarized with the nonsense syllable test (NST) [1] and then tested in two separate sessions before HA fitting to measure unaided performance. Immediately after HA fitting, they were retested twice. We used the difference between the initial aided and unaided scores to quantify the immediate benefit of hearing amplification. Postfitting performance scores also provided the baseline against which training-related changes (IT group) and acclimatization (DT group) were assessed. Beginning 1 week after HA fitting, PCs were installed in the homes of IT subjects, who then underwent adaptive training (1 h/day, 5 days a week) for the next 8 weeks. The DT group received no training during the initial 8-week period and served as controls for the IT group. Both groups underwent identical testing in the laboratory at weeks 1, 2, 4, and 8. Following the delay period, subjects in the DT group underwent an identical training regimen.1 For the DT group, week 8 scores were used both as end point measures of acclimatization and as baseline measures for evaluating training effects. In-laboratory testing of the DT group was repeated at the same intervals as for the IT group (i.e., at weeks 9, 10, 12, and 16). Finally, both groups were tested for retention 8 weeks after the end of training (i.e., week 16 for the IT group and week 24 for the DT group).


Figure 2. Time line of Experiment 1.
Stimuli

Stimuli for both at-home training and in-laboratory testing consisted of 27 consonant-vowel (CV) and 27 vowel-consonant (VC) syllables selected from the NST. The set of CVs was composed of all combinations of nine unvoiced consonants (/ch/, /f/, /h/, /k/, /p/, /s/, /sh/, /t/, /th/) and three vowels (/a/, /i/, /u/), while the set of VCs combined the same three vowels (/a/, /i/, /u/) with nine voiced consonants (/b/, /d/, /g/, /m/, /n/, /ng/, /TH/, /v/, /z/). Four phonetically trained speakers (two male and two female) recorded the stimuli. To incorporate natural speech variability, each speaker recorded six exemplars of each of the 54 syllables. Syllables were selected randomly from among different exemplars during both training and testing. Two voices, one female and one male, were selected for training of each subject, with the selected voices counterbalanced over subjects. We examined generalization by testing performance with all four voices (both trained and untrained).
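The CV and VC inventories are simple combinations of the consonant and vowel sets listed above. The sketch below is illustrative only (the phoneme spellings and variable names are ours, not the authors'); it enumerates the 54 syllable types, omitting exemplar and talker-voice handling.

```python
# Illustrative enumeration of the 27 CV and 27 VC syllable types described above.
from itertools import product

VOWELS = ["a", "i", "u"]
UNVOICED = ["ch", "f", "h", "k", "p", "s", "sh", "t", "th"]  # consonants used in CVs
VOICED = ["b", "d", "g", "m", "n", "ng", "TH", "v", "z"]     # consonants used in VCs

cv_syllables = [c + v for c, v in product(UNVOICED, VOWELS)]  # e.g., "cha", "chi", "chu"
vc_syllables = [v + c for v, c in product(VOWELS, VOICED)]    # e.g., "ab", "ib", "ub"

assert len(cv_syllables) == 27 and len(vc_syllables) == 27
```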

Computers with off-the-shelf multimedia components were used for stimulus delivery. They included a Creative Labs SB0240 sound card (Singapore) and a pair of Boston Acoustics satellite speakers (Model BA745) (Peabody, Massachusetts) positioned approximately 2 feet in front and 20° to the left and right of the listener's position. Subjects who had HAs with volume controls were instructed to set the gain to a comfortable level and to not change the settings throughout training and testing. Training and testing procedures were implemented using Presentation software (Neurobehavioral Systems Inc, Albany, California).2

In-Laboratory Testing

During the NST, stimuli were mixed with speech-spectrum noise at 10 and 0 dB signal-to-noise ratios (SNRs). Syllable durations ranged from 280 to 850 ms. Noise samples were 1,200 ms in duration, starting 200 to 290 ms before the onset of each syllable and extending beyond its offset. One of 100 different noise samples was randomly selected on each trial. We jittered syllable onset by ±45 ms on each trial to reduce the predictability of syllable timing with respect to noise onset. Stimuli were presented at comfortable listening levels, ranging from 65 to 81 dB sound pressure level (SPL) at 10 dB SNR and from 61 to 74 dB SPL at 0 dB SNR.
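As a rough illustration of how a syllable can be embedded in a longer noise sample at a nominal SNR, the sketch below scales the syllable to a target RMS level relative to the noise. The RMS-based level matching, function names, and sample values are assumptions for illustration; the paper does not specify its exact calibration procedure.

```python
import numpy as np

def embed_in_noise(syllable, noise, snr_db, onset_samples):
    """Place a syllable inside a longer noise sample at a given onset,
    scaling the syllable so the syllable-to-noise RMS ratio equals snr_db.
    Purely illustrative; not the study's actual calibration routine."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    target_rms = rms(noise) * 10.0 ** (snr_db / 20.0)   # desired syllable RMS
    scaled = syllable * (target_rms / rms(syllable))
    mixed = noise.copy()
    mixed[onset_samples:onset_samples + len(scaled)] += scaled
    return mixed

# Hypothetical usage at 44.1 kHz: 1,200 ms noise, 400 ms syllable,
# syllable onset 250 ms into the noise, 10 dB SNR.
fs = 44100
noise = np.random.randn(int(1.2 * fs)) * 0.05
syllable = np.random.randn(int(0.4 * fs)) * 0.05
stimulus = embed_in_noise(syllable, noise, snr_db=10, onset_samples=int(0.25 * fs))
```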

Laboratory testing was performed in 8 × 8 ft single-walled, sound-attenuating testing rooms. The interior walls of the rooms were covered by 1-inch acoustic foam, with ambient third-octave noise levels less than 20 dB SPL from 250 to 4000 Hz. Subjects performed a phoneme identification task using a one-interval, nine-alternative identification procedure. At the start of each trial, nine syllables were displayed on a liquid crystal display monitor. The response choices included all possible consonants paired with the vowel that had been presented. Following a 300 ms delay, the syllable and speech-spectrum noise were presented. Subjects selected the response using a numeric keypad. After the response had been selected, the next trial began after a 200 ms delay. Each session of laboratory testing consisted of four blocks of 324 randomized trials (3 trials of each syllable at each SNR). Both trained and untrained voices were tested, with the talker's voice fixed for the duration of each block.

At-Home Training

A preconfigured multimedia PC was installed in a quiet location of each subject's home. The subject selected a comfortable chair to be used in training, and the position of the chair, table, and equipment on the table were marked to assure consistent stimulus delivery. Additional components needed for PC installation (power strips, computer table, etc.) were provided when necessary. A PC modem was connected (through a splitter) to the subject's telephone line for daily uploading of training results to a laboratory computer. Instruction in PC use was also provided when necessary: 29 percent of the patients had not used PCs before and 32 percent had only minimal familiarity.

The stimuli were identical to those used in laboratory testing, with the exception that only two voices (one female and one male, counterbalanced across subjects) were used for training. The computer display contained a single icon for starting the training sequence. The training task was similar to that used in laboratory testing but with three exceptions: (1) visual feedback (indicating the syllable actually presented, colored green if correct or red if incorrect) was given after the response on each trial, (2) stimuli were blocked by vowel (i.e., each combination of CV/VC type and vowel was presented in its own block), and (3) SNRs were varied adaptively with a 1-up, 1-down procedure that decreased the SNR by 1 dB following each correct response and increased it by 1 dB following each incorrect response. Six blocks of 108 trials each (one per list) were completed on each training day, followed by two additional blocks that presented stimuli from all lists in random order. Depending on response speed, daily training duration ranged from approximately 35 to 70 minutes. Once training was completed, the PC automatically connected to the Internet with Windows scripting tools and the training data were uploaded to the laboratory computer. We monitored uploaded data daily to assure that the training of each subject was proceeding according to schedule. Subjects were contacted by telephone if problems became evident. The subjects were requested to train for 5 days a week for 8 weeks. Trainees completed 29 to 44 days of training in the 8-week period, with a median of 37 days and a mode of 40 days.
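The adaptive SNR rule is a standard 1-up, 1-down staircase. A minimal sketch follows; the function name and the starting SNR in the example are illustrative, not taken from the paper.

```python
def update_snr(snr_db, correct, step_db=1.0):
    """1-up, 1-down rule: lower the SNR by 1 dB after a correct response and
    raise it by 1 dB after an error, so the task tracks the SNR at which the
    listener identifies syllables correctly about half the time."""
    return snr_db - step_db if correct else snr_db + step_db

# Hypothetical run starting at 10 dB SNR.
snr = 10.0
for correct in [True, True, False, True, False, False]:
    snr = update_snr(snr, correct)
print(snr)  # 10.0 for this particular response sequence
```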

Statistical Methods

We analyzed data from both in-laboratory testing and at-home training by tracking a subject's percent correct on a class of test items relative to the subject's performance on the same class of items during the baseline test. These difference scores increased sensitivity to training and acclimatization effects by removing the considerable between-subject variation in test and training performance. The use of simple subtraction to track performance also appears reasonable because all subjects performed well below ceiling on the majority of test and training item types. We assessed statistical significance primarily with multifactorial analysis of variance but also used multifactorial regression and correlational analyses where appropriate. In all cases, the large amount of data from the NST and training paradigm permitted us to average together sufficient numbers of binary responses so that the resulting scores had approximately Gaussian error distributions.
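A minimal sketch of the difference-score computation described above; the data in the example are invented for illustration.

```python
def difference_scores(percent_correct, baseline_week=0):
    """percent_correct maps test week -> percent correct for one subject on one
    class of items; returns each week's score minus that subject's baseline."""
    baseline = percent_correct[baseline_week]
    return {week: score - baseline for week, score in percent_correct.items()}

# Hypothetical subject: 40% correct at the post-fitting baseline, improving with training.
print(difference_scores({0: 40.0, 1: 44.5, 2: 46.0, 4: 48.0, 8: 50.5}))
# {0: 0.0, 1: 4.5, 2: 6.0, 4: 8.0, 8: 10.5}
```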

Experiment 2: Experienced Hearing Aid Users
Subjects

Eight experienced HA users (mean HA experience = 16 months, range 10-21 months) were recruited from a group of subjects who had participated in a previous study that evaluated acclimatization with 9- and 16-channel HAs with wide dynamic range compression [26]. As part of that study, these subjects had participated in more than 60 hours of in-laboratory NSTs with two of the four voices used in the present study. Recruitment criteria, including audiometric configurations and etiology of hearing loss, were identical to those criteria used in Experiment 1. The subjects ranged in age from 61 to 75 (mean = 67.7).

Procedures

Training and testing procedures were identical to those used in Experiment 1, except that measurements of unaided performance had been obtained earlier, before HA fitting, as part of the previous study. The 8-week training period began 10 to 21 months after HA fitting.

RESULTS
Experiment 1: New Hearing Aid Users

Figure 3 shows the changes of NST scores relative to the initial performance measured after HA fitting (week 0). The improvement due to the amplification is reflected in the difference between unaided and week 0 scores. The effects of acclimatization to the HAs and the test procedures in the absence of training can be seen in weeks 1 to 8 for the DT control group (dashed line). The effects of the training (solid lines) are seen in weeks 1 to 8 for the IT group and weeks 9 to 16 for the DT group.


Figure 3. Differences in nonsense syllable test scores relative to post-hearing aid-fitting baseline
Performance Improvement Due to Hearing Aid Fitting

Unaided performance on the in-laboratory phoneme-recognition task averaged 35.1 and 33.5 percent for the IT and DT groups, respectively. Hearing aid fitting resulted in a significant improvement in NST scores that averaged 6.0 percent (5.8% for IT and 6.3% for DT; F(1,11) = 31.1, p < 0.001, and F(1,10) = 14.0, p < 0.01, respectively). These values are consistent with improvements in NST scores seen in other studies that used similar stimuli and amplification algorithms [53]. The IT and DT groups did not differ significantly in aided or unaided performance or in their improvements following amplification.

Acclimatization

Small performance improvements were seen in NST scores in the absence of training (Figure 3). These exceeded 3 percent by week 2 and declined to 2.4 percent by week 8, but performance remained significantly higher at week 8 than at week 0 (F(1,10) = 15.5, p < 0.01). A regression analysis for weeks 1 to 8 showed a nonsignificant (NS) negative performance slope during this time, indicating that virtually all the improvement occurred during the first week. These small improvements are similar to those of a recent acclimatization study in which linear HAs produced an average improvement of 2.2 percent in NST scores in the first 8 weeks of use, a result attributed to procedural learning of the task rather than acclimatization [26].

Training

The mean change in phoneme-recognition performance following 8 weeks of training was 10.6 percent in the IT group and 8.8 percent in the DT group. Training-related improvements were significantly greater than the performance gains during the acclimatization period (IT group: F(1,21) = 40.5, p < 0.001; DT group: F(1,18) = 33.4, p < 0.001). Although the small difference between IT and DT training effects (1.8%) did not reach significance, it was similar in size to the gains observed during the acclimatization period, which would have been expected to contribute to IT training effects. The fact that the IT and DT effects did not differ significantly suggests that training need not be delivered immediately after HA fitting to be effective.

Performance improvements continued throughout the training period for both groups. The combined regression analysis on log-linear coordinates showed a positive correlation of 0.47 and an improvement slope of nearly 2 percent for each doubling of training duration (F(1,82) = 23.3, p < 0.0001; standard error of the regression coefficient = 0.14%). The overall improvement in NST scores from training was significantly greater than the improvement from amplification (F(1,19) = 6.6, p < 0.05, for IT and DT groups combined).
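Expressed as a formula (our paraphrase of the log-linear fit described above, not the authors' exact model), the regression relates NST improvement to the base-2 logarithm of training duration t, in weeks:

```latex
\Delta_{\mathrm{NST}}(t) \;\approx\; \beta_0 + \beta_1 \log_2 t,
\qquad \beta_1 \approx 2 \text{ percentage points per doubling of } t .
```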

Nature of Benefits

Figure 4 plots improvement in NST scores over time for stimuli presented at 0 and 10 dB SNR. Improvements related to amplification were significantly greater for the 0 dB SNR condition (7.4% improvement) than for the 10 dB SNR condition (5.5%; F(1,20) = 10.4, p < 0.01), consistent with the view that amplification primarily improves the audibility of lower-intensity speech sounds [54]. In contrast, training-related improvements in performance were greater at 10 dB SNR (11.0%) than at 0 dB SNR (8.7%; F(1,20) = 7.8, p < 0.05). Greater improvement for more audible stimuli is consistent with the view that perceptual training enables HA users to improve their perception of speech cues made audible at the higher SNR.


Figure 4. Percentage improvement in nonsense syllable test scores

Further understanding of HA amplification and training effects can be derived from the analysis of phonetic confusions. Stimulus-response matrices are shown in Figure 5 for aided responses before training for each stimulus syllable collapsed across the three vowels and averaged across subjects and groups. In these plots, completely accurate performance would produce large circles falling on the diagonal, while random guessing would produce uniform small circles throughout the matrix and specific confusions would produce larger circles off the diagonal. The most accurate performance is seen in Figure 5(b) for CVs with /p/, /t/, /k/, /ch/, /s/, and /sh/. In contrast, /f/ and /th/ were much more difficult and generated similar numbers of /f/, /th/, /s/, /p/, and /t/ responses. For the VCs (Figure 5(a)), /z/ shows the best performance, while subjects had particular difficulty with /b/, /TH/, and /m/. They also showed very similar patterns of responses to /v/ and /TH/, indicating that they had difficulty discriminating these voiced consonants distinguished by high-frequency frication. The data also show a large number of within-manner confusions (e.g., the nasals /m/, /n/, and /ng/ were often confused, as were the plosives /p/, /t/, and /k/).


Figure 5. Stimulus-response matrices showing the probability distribution of aided responses at week 0 following presentation of each (a) vowel-consonant and (b) consonant-vowel syllable.
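A stimulus-response matrix of this kind (Figure 5) can be tallied directly from trial records. The sketch below is illustrative; the labels and trial data are invented. Each stimulus row is normalized to response probabilities.

```python
import numpy as np

def stimulus_response_matrix(stimuli, responses, labels):
    """Rows index the presented consonant, columns the reported consonant;
    each row is normalized to response probabilities."""
    index = {label: i for i, label in enumerate(labels)}
    counts = np.zeros((len(labels), len(labels)))
    for s, r in zip(stimuli, responses):
        counts[index[s], index[r]] += 1
    row_totals = counts.sum(axis=1, keepdims=True)
    return counts / np.where(row_totals == 0, 1, row_totals)

# Invented trials in which /f/ is often confused with /th/ and /s/.
labels = ["f", "th", "s"]
print(stimulus_response_matrix(["f", "f", "f", "th"], ["f", "th", "s", "th"], labels))
```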

Changes in the stimulus-response matrices are shown in Figure 6 for (a) amplification (before training) and (b) perceptual training (after 8 weeks of training). Increases in response frequency are shown in black and decreases in gray, with the magnitude of change reflected in the size of the circle. HA fitting produced an overall increase in accurate recognition and a decrease in inaccurate responses across all phoneme types (the matrix is mainly black on the diagonal and gray away from it), although only small improvements were noted for difficult consonants, particularly those distinguished by high-frequency frication (e.g., CVs /f-/ and /th-/ and VCs /-v/ and /-TH/). In contrast, adaptive training altered the stimulus-response matrices by increasing correct responses and shifting random responses toward correct-manner responses. This finding was particularly true for difficult consonants, including those normally distinguished by high frequencies. For example, /t-/ responses to /f-/ and /th-/ stimuli decreased while correct /f-/ and /th-/ responses increased. Similarly, /-z/ and /-d/ responses to /-TH/ tokens decreased, while correct /-TH/ responses increased. The response criteria also changed: for example, /-TH/ responses to /-z/ stimuli increased along with accurate responses, as did /th-/ responses to /s-/ and /f-/.


Figure 6. Changes in stimulus-response matrices for (a) aided versus unaided listening before training and for (b) aided listening after 8 weeks of training versus before training.

Differences between improvements due to amplification and training appeared to reflect the difficulty of phoneme discrimination. Based on unaided discrimination scores, we partitioned the 18 consonants (9 initial and 9 trailing) into three sets of easy (/z/, /k/, /s/, /t/, /ch/, /sh/), medium (/d/, /n/, /f/, /g/, /v/, /p/), and difficult (/TH/, /th/, /b/, /h/, /ng/, /m/) consonants. Initial aided performance preserved this grouping (although not the rankings for individual consonants within each group) and revealed greater HA-related improvement for easy consonants (9.9%) than for medium (4.9%) or difficult consonants (4.6%; main effect of phoneme difficulty: F(2,40) = 4.7, p < 0.05, Geisser-Greenhouse corrected). In contrast, training-related improvements favored difficult consonants (12.8%) over medium (9.3%) or easy consonants (7.4%; F(2,40) = 3.1, p = 0.06, Geisser-Greenhouse corrected). In the combined analysis of variance, phoneme difficulty and treatment type (HA use vs training) thus showed a significant crossover interaction (F(2,40) = 7.4, p < 0.01, Geisser-Greenhouse corrected). Together with the effects of SNR, these results suggest that amplification improved performance on easily discriminated speech sounds occurring near threshold, while training improved the discrimination of difficult consonants distinguished by high-frequency speech cues.
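The easy/medium/difficult grouping amounts to ranking the 18 consonants by unaided percent correct and splitting them into thirds. A minimal sketch follows; the scores in the example are invented.

```python
def partition_by_difficulty(unaided_scores):
    """Rank consonants from best to worst unaided identification and split
    them into three equal groups: easy, medium, and difficult."""
    ranked = sorted(unaided_scores, key=unaided_scores.get, reverse=True)
    third = len(ranked) // 3
    return {"easy": ranked[:third],
            "medium": ranked[third:2 * third],
            "difficult": ranked[2 * third:]}

# Invented unaided scores (percent correct) for six consonants.
print(partition_by_difficulty({"z": 80, "k": 75, "d": 60, "f": 55, "TH": 30, "m": 25}))
# {'easy': ['z', 'k'], 'medium': ['d', 'f'], 'difficult': ['TH', 'm']}
```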

Generalization to Untrained Voices

An important issue in assessing the efficacy of perceptual training is the extent to which training-related improvements generalize to items that were not specifically trained. We therefore tested generalization to untrained voices. Figure 7 plots improvement in NST scores, with the black line showing performance on the trained voices and the gray lines plotting performance for the untrained voices heard only during in-laboratory testing. Performance on trained voices improved significantly more than performance on untrained voices (overall, F(1,20) = 8.1; IT group, F(1,11) = 6.0, p < 0.05; DT group, F(1,8) = 2.1, p < 0.2). However, training produced highly significant improvements in NST scores for untrained voices compared with the control group's improvement of 2.4 percent (overall, F(1,20) = 19.4, p < 0.001; IT group, F(1,21) = 14.6, p < 0.01; DT group, F(1,18) = 26.0, p < 0.001).


Figure 7. Improvements in nonsense syllable test for voices used in training (black) and voices not used in training (solid gray).
Retention

We tested retention with the NST administered 8 weeks after training ended. No significant decrement was found on retention testing compared with test results obtained immediately after training ended (8.7% vs 9.8% relative to pretraining baseline levels), and no significant differences in NST scores were found on comparisons of the two retention tests. However, some evidence of procedural forgetting was found: response times were significantly prolonged during the first test after the 8-week delay compared with the final test during the training period (F(1,16) = 11.4, p < 0.01), whereas response speed returned to posttraining levels on the second retention test (F(1,16) = 1.0, NS).

Experiment 2: Experienced Hearing Aid Users

Figure 8 (solid gray line) plots the mean change in phoneme-recognition performance over the training period for the experienced HA users in Experiment 2. Measures of the amplification benefit had been made previously with the same NST stimuli presented at 5 dB SNR through ear inserts incorporating signal amplification [26]. Performance averaged 36.0 and 45.8 percent correct for these subjects' initial unaided and aided conditions, respectively, indicating a robust HA amplification effect of 9.8 percent (F(1,7) = 10.6, p = 0.01). Baseline aided performance measured in the current study 1 week before training began (as in Experiment 1) averaged 52.7 percent correct, significantly higher than the 40.4 percent baseline of the new HA users tested in Experiment 1 (F(1,29) = 7.50, p < 0.01). Several factors may explain the higher baseline performance of these experienced subjects compared with the new HA user groups in Experiment 1: (1) Experiment 2 subjects had extensive experience (>60 h) with the NST in a previous experiment, (2) they had 10 to 21 months of experience with their HAs, and (3) their HAs were fully programmable digital devices capable of multichannel compression processing with more than eight independent compression channels; none of the new HA users had this type of HA.


Figure 8. Training effects for new (black) (Experiment 1) and experienced (solid gray) hearing aid users (n = 8) (Experiment 2).

The critical question was whether subjects who already had extensive experience with their HAs and with the NST procedures would show further benefit from adaptive training. If mere experience with the NST were the critical factor, subjects in Experiment 2 might be expected to show limited improvement because their previous experience with the NST (>60 h) exceeded the combined training and laboratory testing experience of the trained subjects in Experiment 1. Despite this extensive experience, training significantly improved performance among the experienced HA users (F(1,7) = 19.9, p < 0.01) (Figure 8). While the magnitude of improvement was less than that observed in Experiment 1 (F(1,27) = 6.9, p < 0.02), the difference primarily reflected improvements during the first several weeks of training. The week-by-group interaction was not significant (F < 1.0), and this lack of interaction is also apparent in the similar slopes of improvement over time for the new and experienced HA user groups in Figure 8. Other aspects of training (e.g., effects of intensity, confusion patterns, and dependence on phoneme difficulty) produced effects that were similar to those found in Experiment 1, but because of the small number of subjects, these effects did not reach statistical significance. As with the subjects tested in Experiment 1, retention testing 8 weeks after training ended showed no performance decrement.

DISCUSSION
Effectiveness of Perceptual Training

The current experiments demonstrate that daily focused training in adaptive phoneme identification produces substantial benefits in syllable discrimination for HA users, a benefit that significantly exceeded the initial benefit of amplification. The results suggest that (1) chronic alterations of speech-cue use play a major role in the speech-processing deficits exhibited by individuals with hearing loss and (2) focused adaptive training is an effective rehabilitation strategy that can significantly enhance NST scores beyond the benefits obtained with HA use alone.

Factors that Influence Training Effects

Each subject who underwent training in either Experiment 1 or 2 showed some improvement in NST scores. We found that increased age in our subject population was associated with poorer performance on the NST (r = 0.48, p < 0.05), consistent with other recent studies [55]. Surprisingly, older subjects tended to improve more with training than did younger subjects, although the effect failed to reach statistical significance (r = 0.30, p > 0.10). This trend may have reflected the fact that subjects with lower NST scores at training onset showed significantly greater overall improvement (r = -0.66, p < 0.001). In contrast, no significant correlation was found between training-related improvements and the magnitude of initial hearing loss (e.g., at 1000 Hz, r = 0.26, NS). Thus, training appeared to be particularly effective in subjects who had disproportionate impairments in syllable discrimination relative to their audiograms. Most subjects completed 35 to 40 days of training, and no correlation was found between the minor variation in the number of days of at-home training and the magnitude of in-laboratory NST improvement. However, the type of HA did appear to influence the magnitude of training effects: improvements were greater in IT and DT patients wearing 3- or 4-channel HAs (11.4%, n = 10) than in patients wearing 2-channel or linear HAs (8.0%, n = 11; t(19) = 2.23, p < 0.05), even though these groups showed similar performance immediately after HA fitting (t(19) = 1.19, NS). These results are consistent with studies reporting that spontaneous perceptual learning is enhanced with multichannel HAs [26].

Optimal Duration of Training

Subjects showed continued improvement over the entire 8-week duration of training (Figure 3), with performance improving most rapidly at training onset followed by an additional improvement of approximately 2 percent for each doubling of training duration between 1 and 8 weeks. Thus, increasing the duration of training from 8 to 16 weeks would be expected to result in another 2 percent improvement in performance. While this is a nontrivial change, it is much less than the cumulative 9.8 percent improvement seen during the initial 8 weeks of training. The optimal duration of training would thus appear to depend on subject motivation and speech discrimination requirements. However, training durations longer than 16 weeks would likely have limited appeal. Of course, in the current experiments, we trained only a small proportion of English CVs and VCs. The duration of perceptual learning appears to increase with the complexity of the training material [56]. Thus, training benefits may be more substantial and increases may continue for longer training periods with more complex phonetic material.

Timing of Training

With respect to the time between HA fitting and rehabilitative training, the current results showed NS differences between immediate training and training delayed by 8 weeks. Training was also effective in the small group of experienced HA users studied in Experiment 2. Although differences in subject groups and pretraining experience complicate direct comparisons of the magnitude of training effects in Experiment 1 and Experiment 2, clearly, perceptual training was highly effective in all HA users regardless of HA experience.

Procedural and Perceptual Learning

Research on perceptual learning has identified two major components of the improvement in performance seen with adaptive training: procedural learning and perceptual learning [43]. Procedural learning involves increasing familiarity with the task and particular stimuli used in testing. Procedural learning occurs rapidly, but the improvement does not generalize effectively to different tasks [57]. In addition, procedural learning is associated with learning arbitrary procedures of the training task and shows rapid reductions in performance once practice on the task ceases [19,43]. Perceptual learning, on the other hand, involves increasing sensitivity to task-relevant distinctions in the stimuli themselves and is thought to reflect a reorganization of the neural representations that underlie discrimination and recognition. Auditory perceptual learning evolves over a course of weeks or months [56,58]. The performance improvements that we obtained showed these characteristics of perceptual learning: they emerged gradually over time and were well maintained on retention testing performed 2 months after training ended.

As in other studies of perceptual training, we expected the results of the current study to include contributions from both procedural and perceptual learning. Rapid procedural learning appeared to contribute to the early performance increases of 2 to 3 percent that were seen in the control group (Figure 3). These changes likely reflected increasing familiarity with the NST following the extensive testing (6 h) that occurred at the beginning of the control period. Procedural learning of 2 to 3 percent during the first week would be consistent with our previous findings of procedural learning effects during repeated NSTs [26]. Most likely, procedural learning effects occur in parallel with perceptual learning and thus would have also contributed to the rapid improvements seen in the IT group during the first several weeks of training.

Although studies of perceptual learning have emphasized the power of focused training, alterations of the sensory environment can also drive cortical reorganization [20-21,59]. Thus, some degree of reorganization and perceptual learning might be expected to result from the amplification of high-frequency sounds by new HAs, even without explicit training. Some controversy exists regarding the extent to which new HA users show such acclimatization. Some studies suggest a very rapid adjustment of speech perception (with a time course similar to that of procedural learning), with performance asymptoting soon after the receipt of a new HA [12-13,60]. Other studies suggest that speech perception continues to improve for months after an HA fitting [9-11]. Work in our laboratory supports long-term acclimatization, but only for certain types of HAs [26]. In the current study, the performance of control subjects increased rapidly and then declined slightly from weeks 2 to 8. An intriguing possibility is that such acclimatization failures reflect the inadequacy of perceptual learning during everyday conversations. Limited learning would be expected under these conditions because the majority of syllables are either too easy (in which case no errors are made) or too difficult (in which case the stimulus may not be heard at all) and performance feedback is variable and inconsistent. Moreover, with most HAs, the acoustic features that characterize syllables will vary with the intensity of the speaker's voice and be masked to varying degrees depending on background noise levels.

Effect of Training on Sentence Processing

A central question in assessing the effect of adaptive training is the extent of generalization of training effects to a wider range of listening situations, including different voices, different tasks (e.g., sentence comprehension, multimodal conversational speech), and different acoustic environments (e.g., multitalker situations). Results of the current study were limited to demonstrating that benefits of phoneme-recognition training on one set of voices generalized to improved recognition of syllables spoken by untrained voices. This generalization suggests that subjects are not primarily learning specific acoustic features of a particular stimulus set but rather the abstract features that define phonetic categories across talkers.

Improvement in the discrimination of isolated syllables correlates with improved discrimination of the same syllables in sentence contexts [61]. Since the perception of words and sentences begins with syllable recognition, improved syllable discrimination would be expected to improve sentence processing, although comprehension may be limited by other factors (e.g., informational masking and semantic processing) or improved by the adoption of "top-down" listening strategies [62]. Nevertheless, higher-order processing operations depend on the extraction of syllables from the acoustic signal based on analysis of the acoustic features that make up each utterance. Indeed, evidence from older subjects suggests that syllable discrimination skills correlate directly with verbal memory performance when cognitive factors are controlled [63-64]. This finding supports the notion that the perceptual success of HA users comes at the cost of extra effort needed to analyze phonemes, effort that might otherwise be available for analyzing sentence content. The implication is that improved syllable discrimination would result in not only improved syllable recognition but also improved sentence comprehension and memory.

The evaluation of generalization of training effects to improvement in sentence processing was not originally planned. However, once the magnitude of NST-score improvement became evident (i.e., after data from the first 12 subjects had been analyzed), we decided to assess the generalization of training-related improvements in the remaining subjects who had not yet started training (five subjects from the IT group and six subjects from the DT group). We used the Revised Speech Perception in Noise (R-SPIN) test [65]. The R-SPIN test requires the subject to listen to sentences in a background of competing multitalker babble and repeat the final word of each sentence. It consists of eight lists of 50 sentences each, divided into equal numbers of high- and low-context sentences. The high-context sentences are used primarily in evaluating the cognitive abilities of subjects. We scored only the low-context sentences in which the final word is minimally constrained by the content of the sentence. To limit item repetition, we tested with subsets of three lists (a total of 75 low-context sentences) per test. Before data collection, we tested each subject using sentences from two of the R-SPIN lists, with individual sentences presented at 8, 12, 16, and 20 dB SNR. Subsequent R-SPIN testing was done at the SNR that produced performance nearest to 50 percent correct for each subject. Nonoverlapping sets of three R-SPIN lists were used for testing at weeks 0 and 8, while a third test at week 16 reused the lists from testing of week 0. Each list provided a single score based on the percentage of low-context words repeated correctly. Each subject's score was computed as the mean of scores for the three lists on each test day.

Baseline scores ranged from 27 to 69 percent correct (mean = 51%). Training improved R-SPIN test scores by an average of 3.3 percent (range = -6.7% to 13.3%), but training effects failed to reach statistical significance because of a lack of statistical power. The improvement may have been limited by the fact that only a small portion of the phonemes appearing in the target words of the R-SPIN test had been used in perceptual training. Because we scored only low-context sentences, scores would have been minimally influenced by improvements in context provided by improved perception of trained phonemes in other portions of the sentence. Further study with more sensitive tests is needed to evaluate the impact of syllable identification training on sentence processing. Recently, more complex adaptive training sequences have shown efficacy in improving sentence test scores [66].

CONCLUSIONS

The results of this study demonstrate that phoneme recognition by HA users can be significantly enhanced through focused perceptual training. The benefit of such training in difficult listening situations exceeds the benefit provided by HAs only, generalizes to untrained voices, and persists for at least 8 weeks (the longest period over which we assessed performance). We found that immediate and delayed training produced similar benefits, and we also found benefits in experienced HA users. Further study is needed to define the optimal parameters of training, but our results suggest that training can benefit all HA users, including those who have been wearing HAs for years.

Although the improvement from both HA fitting and perceptual training was substantial, the benefits appear to reflect complementary processes. Our analyses of performance on stimuli differing in difficulty or presented at different SNRs indicate that HAs provided the most benefit for easily discriminable phonemes presented near threshold, while training provided more benefit for difficult discriminations in more audible speech. Training benefits were most pronounced for HA users with poor speech discrimination abilities. This finding suggests a complementary role in which HAs bring acoustic information above threshold and therefore make it accessible to the hearing-impaired individual, whereas training facilitates the optimal use of that information, particularly for HA users who have difficulty processing speech.

ACKNOWLEDGMENTS

This material was based on work supported by the VA Rehabilitation Research and Development Service, grant C2975C.

The authors have declared that no competing interests exist.

REFERENCES
1. Dubno JR, Levitt H. Predicting consonant confusions from acoustic analysis. J Acoust Soc Am. 1981;69(1):249-61. [PMID: 7217523]
2. Syka J. Plastic changes in the central auditory system after hearing loss, restoration of function, and during learning. Physiol Rev. 2002;82(3):601-36. [PMID: 12087130]
3. Kewley-Port D, Luce PA. Time-varying features of initial stop consonants in auditory running spectra: A first report. Percept Psychophys. 1984;35(4):353-60. [PMID: 6739270]
4. Revoile SG, Holden-Pitt L, Pickett JM. Perceptual cues to the voiced-voiceless distinction of final fricatives for listeners with impaired or with normal hearing. J Acoust Soc Am. 1985;77(3):1263-65. [PMID: 3980876]
5. Revoile SG, Kozma-Spytek L, Holden-Pitt L, Pickett JM, Droge J. VCVs vs CVCs for stop/fricative distinctions by hearing-impaired and normal-hearing listeners. J Acoust Soc Am. 1991;89(1):457-60. [PMID: 2002178]
6. Revoile SG, Pickett JM, Kozma-Spytek L. Spectral cues to perception of /d, n, l/ by normal- and impaired-hearing listeners. J Acoust Soc Am. 1991;90(2 Pt 1):787-98. [PMID: 1939885]
7. Biernaskie J, Corbett D. Enriched rehabilitative training promotes improved forelimb motor function and enhanced dendritic growth after focal ischemic injury. J Neurosci. 2001;21(14):5272-80. [PMID: 11438602]
8. Willott JF. Physiological plasticity in the auditory system and its possible relevance to hearing aid use, deprivation effects, and acclimatization. Ear Hear. 1996;17(3 Suppl): 66S-77. [PMID: 8807277]
9. Cox RM, Alexander GC. Maturation of hearing aid benefit: Objective and subjective measurements. Ear Hear. 1992; 13(3):131-41. [PMID: 1397752]
10. Gatehouse S. The time course and magnitude of perceptual acclimatization to frequency responses: Evidence from monaural fitting of hearing aids. J Acoust Soc Am. 1992; 92(3):1258-68. [PMID: 1401514]
11. Gatehouse S. Role of perceptual acclimatization in the selection of frequency responses for hearing aids. J Am Acad Audiol. 1993;4(5):296-306. [PMID: 8219296]
12. Bentler RA, Niebuhr DP, Getta JP, Anderson CV. Longitudinal study of hearing aid effectiveness. II: Subjective measures. J Speech Hear Res. 1993;36(4):820-31. [PMID: 8377494]
13. Bentler RA, Niebuhr DP, Getta JP, Anderson CV. Longitudinal study of hearing aid effectiveness. I: Objective measures. J Speech Hear Res. 1993;36(4):808-19. [PMID: 8377493]
14. Turner CW, Humes LE, Bentler RA, Cox RM. A review of past research on changes in hearing aid benefit over time. Ear Hear. 1996;17(3 Suppl):14S-25. [PMID: 8807271]
15. Surr RK, Cord MT, Walden BE. Long-term versus short-term hearing aid benefit. J Am Acad Audiol. 1998;9(3): 165-71. [PMID: 9644613] Erratum in: J Am Acad Audiol. 1998;9(5):398.
16. Buonomano DV, Merzenich MM. Cortical plasticity: From synapses to maps. Ann Rev Neurosci. 1998;21:149-86. [PMID: 9530495]
17. Gilbert CD, Sigman M, Crist RE. The neural basis of perceptual learning. Neuron. 2001;31(5):681-97. [PMID: 11567610]
18. Rauschecker JP. Cortical map plasticity in animals and humans. Prog Brain Res. 2002;138:73-88. [PMID: 12432764]
19. Recanzone GH, Schreiner CE, Merzenich MM. Plasticity in the frequency representation of primary auditory cortex following discrimination training in adult owl monkeys. J Neurosci. 1993;13(1):87-103. [PMID: 8423485]
20. Engineer ND, Percaccio CR, Pandya PK, Moucha R, Rathbun DL, Kilgard MP. Environmental enrichment improves response strength, threshold, selectivity, and latency of auditory cortex neurons. J Neurophysiol. 2004;92(1):73-82. [PMID: 15014105]
21. Chang EF, Merzenich MM. Environmental noise retards auditory cortical development. Science. 2003;300(5618): 498-502. [PMID: 12702879]
22. Irvine DR, Rajan R, Brown M. Injury- and use-related plasticity in adult auditory cortex. Audiol Neurootol. 2001; 6(4):192-95. [PMID: 11694726]
23. McDermott HJ, Lech M, Kornblum MS, Irvine DR. Loudness perception and frequency discrimination in subjects with steeply sloping hearing loss: Possible correlates of neural plasticity. J Acoust Soc Am. 1998;104(4):2314-25. [PMID: 10491696]
24. Pantev C, Lutkenhoner B. Magnetoencephalographic studies of functional organization and plasticity of the human auditory cortex. J Clin Neurophysiol. 2000;17(2):130-42. [PMID: 10831105]
25. Norena AJ, Eggermont JJ. Enriched acoustic environment after noise trauma reduces hearing loss and prevents cortical map reorganization. J Neurosci. 2005;25(3):699-705. [PMID: 15659607]
26. Yund EW, Roup CM, Simon HJ, Bowman GA. Acclimatization in wide dynamic range multichannel compression and linear amplification hearing aids. J Rehabil Res Dev. 2006; 43(4):517-36.
27. Karni A, Sagi D. Where practice makes perfect in texture discrimination: Evidence for primary visual cortex plasticity. Proc Natl Acad Sci USA. 1991;88(11):4966-70. [PMID: 2052578]
28. Karni A, Sagi D. The time course of learning a visual skill. Nature. 1993;365(6443):250-52. [PMID: 8371779]
29. Yund EW, Efron R. Guided search: The effects of learning. Brain Cogn. 1996;31(3):369-86. [PMID: 8812015]
30. Wright BA, Buonomano DV, Mahncke HW, Merzenich MM. Learning and generalization of auditory temporal-interval discrimination in humans. J Neurosci. 1997;17(10): 3956-63. [PMID: 9133413]
31. Karmarkar UR, Buonomano DV. Temporal specificity of perceptual learning in an auditory discrimination task. Learn Mem. 2003;10(2):141-47. [PMID: 12663752]
32. Callan DE, Tajima K, Callan AM, Kubo R, Masaki S, Akahane-Yamada R. Learning-induced neural plasticity associated with improved identification performance after training of a difficult second-language phonetic contrast. Neuroimage. 2003;19(1):113-24. [PMID: 12781731]
33. Tremblay KL, Kraus N, Carrell TD, McGee T. Central auditory system plasticity: Generalization to novel stimuli following listening training. J Acoust Soc Am. 1997;102(6): 3762-73. [PMID: 9407668]
34. Tremblay KL, Kraus N, McGee T. The time course of auditory perceptual learning: Neurophysiological changes during speech-sound training. Neuroreport. 1998;9(16):3557-60. [PMID: 9858359]
35. Tremblay KL, Kraus N. Auditory training induces asymmetrical changes in cortical neural activity. J Speech Lang Hear Res. 2002;45(3):564-72. [PMID: 12069008]
36. Shahin A, Bosnyak DJ, Trainor LJ, Roberts LE. Enhancement of neuroplastic P2 and N1c auditory evoked potentials in musicians. J Neurosci. 2003;23(13):5545-52. [PMID: 12843255]
37. Schow RL, Balsara NR, Smedley TC, Whitcomb CJ. Aural rehabilitation by ASHA audiologists: 1980-1990. Am J Audiol. 1993;2:28-37.
38. Tye-Murray N, Witt S, Schum L, Kelsay D, Schum DJ. Feasible aural rehabilitation services for busy clinical settings. Am J Audiol. 1994;3:33-37.
39. Chisolm TH, Abrams HB. Short- and long-term outcomes of adult audiological rehabilitation. Ear Hear. 2004;25(5): 464-77. [PMID: 15599193]
40. Abrams H, Chisolm TH, McArdle R. A cost-utility analysis of adult group audiologic rehabilitation: Are the benefits worth the cost? J Rehabil Res Dev. 2002;39(5):549-58.
41. Sweetow R, Palmer CV. Efficacy of individual auditory training in adults: A systematic review of the evidence. J Am Acad Audiol. 2005;16(7):494-504. [PMID: 16295236]
42. Kricos PB, Holmes AE. Efficacy of audiologic rehabilitation for older adults. J Am Acad Audiol. 1996;7(4):219-29. [PMID: 8827916]
43. Robinson K, Summerfield AQ. Adult auditory learning and training. Ear Hear. 1996;17(3 Suppl):51S-65. [PMID: 8807276]
44. Walden BE, Erdman SA, Montgomery AA, Schwartz DM, Prosek RA. Some effects of training on speech recognition by hearing-impaired adults. J Speech Hear Res. 1981; 24(2):207-16. [PMID: 7265936]
45. Merzenich MM, Jenkins WM, Johnston P, Schreiner C, Miller SL, Tallal P. Temporal processing deficits of language-learning impaired children ameliorated by training. Science. 1996;271(5245):77-81. [PMID: 8539603]
46. Tallal P, Miller SL, Bedi G, Byma G, Wang X, Nagarajan SS, Schreiner C, Jenkins WM, Merzenich MM. Language comprehension in language-learning impaired children improved with acoustically modified speech. Science. 1996;271(5245):81-84. [PMID: 8539604]
47. Musiek F, Shinn J, Hare C. Plasticity, auditory training, and auditory processing disorders. Semin Hear. 2002;23:263-76.
48. Tallal P. Improving language and literacy is a matter of time. Nat Rev Neurosci. 2004;5(9):721-28. [PMID: 15322530]
49. Diehl SF. Listen and learn? A software review of Earobics. Lang Speech Hear Serv Sch. 1999;30:108-16.
50. Cohen W, Hodson A, O'Hare A, Boyle J, Durrani T, McCartney E, Mattey M, Naftalin L, Watson J. Effects of computer-based intervention through acoustically modified speech (FastForWord) in severe mixed receptive-expressive language impairment: Outcomes from a randomized controlled trial. J Speech Lang Hear Res. 2005;48(3):715-29. [PMID: 16197283]
51. Wittmann M, Fink M. Time and language-Critical remarks on diagnosis and training methods of temporal-order judgment. Acta Neurobiol Exp (Wars). 2004;64(3):341-48. [PMID: 15283477]
52. Byrne D, Dillon H, Ching T, Katsch R, Keidser G. The NAL-NL1 procedure for fitting non-linear hearing aids: Characteristics and comparisons with other procedures. J Am Acad Audiol. 2001;12(1):37-51. [PMID: 11214977]
53. Yund EW, Buckles KM. Discrimination of multichannel-compressed speech in noise: Long-term learning in hearing-impaired subjects. Ear Hear. 1995;16(4):417-27. [PMID: 8549897]
54. Larson VD, Williams DW, Henderson WG, Luethke LE, Beck LB, Noffsinger D, Bratt GW, Dobie RA, Fausti SA, Haskell GB, Rappaport BZ, Shanks JE, Wilson RH. A multi-center, double blind clinical trial comparing benefit from three commonly used hearing aid circuits. Ear Hear. 2002;23(4):269-76. [PMID: 12195168]
55. Divenyi PL, Stark PB, Haupt KM. Decline of speech understanding and auditory thresholds in the elderly. J Acoust Soc Am. 2005;118(2):1089-1100. [PMID: 16158663]
56. Watson CS. Time course of auditory perceptual learning. Ann Otol Rhinol Laryngol Suppl. 1980;89(5 Pt 2):96-102. [PMID: 6786201]
57. Hawkey DJ, Amitay S, Moore DR. Early and rapid perceptual learning. Nat Neurosci. 2004;7(10):1055-56. [PMID: 15361880]
58. Fahle M. Perceptual learning: Specificity versus generalization. Curr Opin Neurobiol. 2005;15(2):154-60. [PMID: 15831396]
59. Eggermont JJ, Komiya H. Moderate noise trauma in juvenile cats results in profound cortical topographic map changes in adulthood. Hear Res. 2000;142(1-2):89-101. [PMID: 10748332]
60. Turner CW, Bentler RA. Does hearing aid benefit increase over time? J Acoust Soc Am. 1998;104(6):3673-74. [PMID: 9857524]
61. Hirata Y. Training native English speakers to perceive Japanese length contrasts in word versus sentence contexts. J Acoust Soc Am. 2004;116(4 Pt 1):2384-94. [PMID: 15532669]
62. Sweetow R. Training the adult brain to listen. Hear J. 2005;58(6):10-17.
63. McCoy SL, Tun PA, Cox LC, Colangelo M, Stewart RA, Wingfield A. Hearing loss and perceptual effort: Downstream effects on older adults' memory for speech. Q J Exp Psychol A. 2005;58(1):22-33. [PMID: 15881289]
64. Wingfield A, Tun PA, McCoy SL. Hearing loss in older adulthood: What it is and how it interacts with cognitive performance. Curr Dir Psychol Sci. 2005;14(3):144-48.
65. Bilger RC, Nuetzel JM, Rabinowitz WM, Rzeczkowski C. Standardization of a test of speech perception in noise. J Speech Hear Res. 1984;27(1):32-48. [PMID: 6717005]
66. Sweetow R, Henderson Sabes J. The need for and development of an adaptive listening and communication enhancement (LACE™) program. J Am Acad Audiol. 2006;17:538-58.
Submitted for publication November 7, 2005. Accepted in revised form February 23, 2006.
1Two DT subjects withdrew from the study after the delay period and so did not undergo training. Data from these subjects were included in the delay-period data for between-group comparisons but not for within-subject comparisons of training effects.
2Materials can be downloaded at http://www.neuroexpt.com/ex_files/expt_view?id=145
