Volume 49 Number 7, 2012
Pages 1005–1024
Abstract — Thirty-six blast-exposed patients and twenty-nine non-blast-exposed control subjects were tested on a battery of behavioral and electrophysiological tests that have been shown to be sensitive to central auditory processing deficits. Abnormal performance among the blast-exposed patients was assessed with reference to normative values established as the mean performance on each test by the control subjects plus or minus two standard deviations. Blast-exposed patients performed abnormally at rates significantly above that which would occur by chance on three of the behavioral tests of central auditory processing: the Gaps-In-Noise, Masking Level Difference, and Staggered Spondaic Words tests. The proportion of blast-exposed patients performing abnormally on a speech-in-noise test (Quick Speech-In-Noise) was also significantly above that expected by chance. These results suggest that, for some patients, blast exposure may lead to difficulties with hearing in complex auditory environments, even when peripheral hearing sensitivity is near normal limits.
Keywords: audiometric evaluation, auditory dysfunction, auditory processing disorder, blast, central auditory processing, evoked potential, hearing loss, rehabilitation, traumatic brain injury, Veterans.
The recent conflicts in Afghanistan and Iraq (Operation Iraqi Freedom/Operation Enduring Freedom/Operation New Dawn [OIF/OEF/OND]) have resulted in unprecedented rates of exposure to high-intensity blasts, frequently causing traumatic brain injury (TBI) among members of the U.S. military. The Department of Veterans Affairs (VA) 2011 TBI Comprehensive Evaluation Summary estimated the prevalence of TBI in the OIF/OEF/OND Veteran population at 7.8 percent. While the typical focus of auditory evaluation is on damage to the peripheral auditory system, the prevalence of brain injury among those exposed to high-intensity blasts suggests that damage to the central auditory system is an equally important concern for blast-exposed persons. Discussions with clinical audiologists and OIF/OEF/OND Veterans Service Office personnel suggest that a common complaint voiced by blast-exposed Veterans is an inability to understand speech in noisy environments, even when peripheral hearing is within normal or near-normal limits. Such complaints are consistent with damage to neural networks responsible for higher-order auditory processing.
The auditory structures most vulnerable to axonal injury are the lower- and mid-brainstem nuclei, the thalamus, and the corpus callosum. Damage may include swelling, stretching, and shearing of neural connections, as well as inflammatory changes in response to tissue injury. There also may be a loss of synaptic structures connecting nuclei in the central auditory system, resulting in distorted or missing information transmitted to cortical centers [4-5]. The interhemispheric pathways connecting auditory areas of the two cerebral hemispheres run through the posterior half of the corpus callosum. The corpus callosum may be particularly vulnerable, as it has been shown to be damaged even in non-blast-related head injury [7-8]. Axonal damage to this part of the corpus callosum would be expected to interfere with auditory and speech processing, as well as other bilaterally represented auditory cortical functions. Furthermore, recent modeling work has revealed that the blast wave itself can exert stress and strain forces on the brain that are likely to cause widespread axonal and blood vessel damage. Such impacts would not necessarily create changes visible on a medical image but could still impair function by degrading the timing, efficiency, or precision of neural connectivity. This wide diversity of potential damage and sites of injury also suggests that the profile of central auditory damage is likely to vary considerably among patients. For this reason, the first step in the diagnosis and treatment of blast-related dysfunction is the identification of which brain functions have been impaired.
Behavioral tests are mainstays of central auditory test batteries, and many have been shown to be both sensitive and specific to particular brain injuries. It may also be important, however, to include evoked potential (EP) measures (electrophysiological tests) of neural function to complement the behavioral tests. The Auditory Brainstem Response (ABR) is a commonly used test that evaluates the integrity of the auditory nerve and brainstem structures, whereas measures from the auditory evoked late response reflect cortical processing. Long latency responses (LLRs), which are sensitive to impaired neuronal firing and desynchronization of auditory information, are useful tools for the assessment of cognitive capability. Prolonged LLR latencies would suggest interruptions in neural transmission within or between cortical networks, which could be due to reduced cortical neuron availability or diminished neural firing intensity. In addition, longer neural refractory periods can result in reduced amplitudes of event-related potentials.
The purpose of the current study was to determine whether performance on a battery of behavioral and electrophysiological tests of central auditory function differs between individuals who have recently experienced a high-explosive blast and those who have not. The study involved five behavioral and two electrophysiological tests designed to encompass aspects of central processing from the brainstem to the cortex. The selection of tests was based on the need to assess several important and potentially vulnerable aspects of auditory processing of complex sounds. These functions include the precise coding and preservation of temporal firing patterns that support speech understanding, pitch perception, and localization of sounds in the environment.
The blast-exposed group was drawn from a group of patients who, upon returning to the (former) Walter Reed Army Medical Center (WRAMC) for medical treatment after deployment in Iraq or Afghanistan, were identified by medical staff as being exposed to at least one high-explosive blast within 1 year preceding study enrollment. All participants in this group had a notation in their medical record of exposure to a blast. A subject interview was conducted in order to obtain standard demographic information, a medical history, and an audiological history (including exposures to potentially damaging noise). No participants with greater than a mild TBI (mTBI) diagnosis were approached for enrollment, and no participants with hearing losses greater than 50 dB hearing level (HL) (pure-tone average of 0.5, 1.0, and 2.0 kHz) were included. All testing of blast-exposed patients was carried out in the Army Audiology and Speech Center at WRAMC.
A control group of subjects who had not been exposed to a blast were recruited and tested at the National Center for Rehabilitative Auditory Research (NCRAR), Portland VA Medical Center (Oregon). The goal of testing this group was to establish normative data and statistical cutoffs for abnormal performance on these specific tests in a group of appropriately age- and hearing-matched control subjects. Recruitment of the control group followed testing of the patient group, which allowed statistical matching of the groups with respect to age, sex, and audiometric configuration.
In addition to the subject interview, medical records of participants were reviewed by the research team at WRAMC. Each military servicemember admitted to WRAMC from deployment who has been exposed to a blast is evaluated by the TBI team to assess the presence and severity of TBI. This evaluation consists of screening tests and subsequent detailed neuropsychological testing, if indicated. The diagnosis of TBI, and severity, are based on these tests as well as information concerning loss of consciousness, duration of any posttraumatic amnesia, alteration of consciousness, and imaging studies, when appropriate. The diagnoses of mTBI or no TBI were extracted from the medical records for each experimental subject in this study. No patients were included in this study with a diagnosis of moderate or severe TBI.
Each subject underwent a comprehensive audiometric evaluation to establish configuration, severity, and probable site of lesion of any hearing loss. Pure-tone air- and bone-conduction audiometry as well as clinical speech assessments (Quick Speech-In-Noise [QuickSIN] sentence recognition and NU-6 word recognition tests) were measured using a Grason-Stadler (GSI) GSI-61 audiometer (Eden Prairie, Minnesota) and Sennheiser HDA 200 headphones (Old Lyme, Connecticut). Immittance audiometry was conducted with the GSI Tympstar, and distortion product otoacoustic emissions (DPOAEs) were collected at WRAMC using the GSI Audera and at NCRAR using the Mimosa Acoustics HearID system (Champaign, Illinois). DPOAEs were collected at frequencies between 0.5 and 12.0 kHz. All test equipment at the two data collection sites was identical, with the exception of the otoacoustic emission systems used in the audiometric evaluation. All testing was carried out with the subject seated or reclining comfortably in a quiet room or a sound-treated audiometric booth.
The behavioral tests used in the study were recorded versions played over a Sony CD player (Tokyo, Japan) connected to the GSI-61 clinical audiometer [12-13]. Before presentation to subjects, the test levels were calibrated using the recorded calibration tones on each test. Tests were presented at 50 dB sensation level (i.e., 50 dB above the level at which speech is detectable) unless the subject indicated discomfort at the prescribed levels, in which case small adjustments in level were permitted. Responses were made verbally or by button press, depending on the requirements of the test. Subjects were given frequent rest breaks as needed. The behavioral testing took approximately 2 hours and was carried out over two experimental sessions.
Delay or disruption in the transmission of auditory information throughout the auditory pathways would likely result in temporal processing deficits [14-15]. Musiek et al. investigated patients' temporal patterning abilities using the Frequency Patterns (FP) test, which is thought to be sensitive to lesions in the right cortex, the corpus callosum, and the brainstem [16-17]. It is resistant to mild hearing loss.
Musiek and Pinheiro developed this test of the ability to report sequences of three tone bursts presented to each ear independently, and their procedures were followed in this study. In each sequence, two tone bursts are the same frequency while the third is a different frequency. Subjects were instructed to verbally repeat back the words "high" and "low" for the test items and were not allowed to hum or sing the responses. Three practice items were presented prior to the test. The right ear was tested first, followed by the left ear. If a subject incorrectly labeled any of the three tone bursts, or reversed the order of an item, that item was considered incorrect. Each ear was tested with 15 items; if the subject responded correctly (or incorrectly) to all but one of the first 15 items, testing stopped. Otherwise, the full 30-item test was administered.
Temporal resolution was tested using the Gaps-In-Noise (GIN) gap-detection task, which produces an estimate of the briefest temporal gap a listener can detect in a continuous noise stimulus. This test is also sensitive to lesions of the cortex and corpus callosum. Previous studies have found that up to 40 percent of patients with brain damage had abnormal gap-detection thresholds [15,19]. This test has been shown to have moderate sensitivity for identifying subjects with central auditory lesions and has high test-retest reliability.
The GIN test consists of a series of 6-second broadband noise segments. Each noise segment contains zero to three silent intervals (gaps). These gap durations are 2, 3, 4, 5, 6, 8, 10, 12, 15, and 20 ms and are pseudorandomized in occurrence and location within the noise segment. Following the protocol of Musiek et al., subjects were instructed to listen for tiny "pops" or "clicks" that may or may not occur during the noise segments and to push a button immediately each time they heard a gap. Late responses were scored as misses. If there was any question about how many times a subject pressed the response button during a noise segment, the test was paused so the tester could verify the number of responses with the subject. A short practice list, with gaps ranging from 8 to 20 ms, was presented to one ear only. Each ear was then tested separately, with the right ear first. The test score was the percentage of correct responses at each gap duration, and a threshold was estimated as the smallest gap for which the subject scored greater than 50 percent (gap detected on at least four of the six presentations), with all longer durations also receiving a score greater than 50 percent.
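The threshold rule just described can be expressed compactly. The following sketch is a hypothetical helper (not part of the published protocol) that assumes per-gap hit counts out of six presentations:

```python
def gin_threshold(hits_per_gap, n_presentations=6):
    """Approximate GIN gap threshold: the smallest gap duration (ms) detected
    on more than 50 percent of presentations, with every longer gap also
    exceeding 50 percent. `hits_per_gap` maps gap duration to hit count."""
    gaps = sorted(hits_per_gap)
    for g in gaps:
        # g qualifies only if g and all longer gaps exceed 50 percent detection
        if all(hits_per_gap[h] / n_presentations > 0.5 for h in gaps if h >= g):
            return g
    return None  # no measurable threshold
```

For example, a listener who detects 5 of 6 gaps at 6 and 8 ms and all gaps of 10 ms and longer, but 3 or fewer at shorter durations, would receive an approximate threshold of 6 ms.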
Temporal precision of neural firing is also involved in binaural processing and localization of sound in space. The Masking Level Difference (MLD) test evaluates the integrity of the earliest sites of binaural comparison and sensitivity to interaural phase in the brainstem, as well as cortical areas sensitive to spatial representations. The MLD is a well-established psychophysical measure that has also been developed as a clinical test [20-21]. This measure of brainstem integrity is obtained by comparing the ability to detect a signal in the presence of noise that is either in phase or out of phase at the two ears. Without an intact binaural comparison system (which is located at the level of the brainstem), the two conditions are functionally equivalent, but with an intact system, threshold differences >12 dB are typically observed for low-frequency pure tones.
Binaural thresholds for a 500 Hz pure tone presented either in phase or out of phase between the two ears were determined in the presence of a binaural masking noise presented in phase. Several different signal-to-noise ratios (SNRs) were tested for the signal in phase with noise (N0S0), the signal out of phase with noise (N0Sπ), and the noise with no tone present (catch trial). Subjects were instructed to listen for the beeps in the presence of the noise and say "yes" if they heard the beeps and "no" if they did not. This test does not contain practice items, but the test was paused if the subject needed additional time to respond. The MLD is the difference in decibel SNR between signal thresholds for the in-phase and out-of-phase conditions.
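Numerically, the MLD amounts to a difference of two thresholds in dB SNR. The threshold values in this sketch are hypothetical, chosen to be in the range typical for normal-hearing listeners:

```python
def masking_level_difference(n0s0_threshold_db, n0spi_threshold_db):
    """MLD in dB: release from masking when the tone is phase-inverted
    between the ears (N0Spi) relative to the diotic condition (N0S0).
    Thresholds are in dB SNR; more negative means better detection."""
    return n0s0_threshold_db - n0spi_threshold_db
```

A listener with an N0S0 threshold of −11 dB SNR and an N0Sπ threshold of −24 dB SNR would thus show an MLD of 13 dB, consistent with the >12 dB values typically observed for low-frequency pure tones.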
Because of potentially compromised axonal and synaptic structures and diminished neural conduction, higher-level tasks such as dichotic listening may be affected in individuals exposed to high-explosive blasts. The Dichotic Digits (DD) test assesses dichotic listening ability using number identification with dichotic presentation of the stimuli. The test is sensitive to lesions in the primary (left) cortex and in the corpus callosum. Musiek et al. found laterality effects when using this test, mostly in the left ears of subjects with brain pathology. This test has good sensitivity to central auditory nervous system pathology, is relatively resistant to mild-to-moderate high-frequency cochlear hearing loss, and has high test-retest reliability.
Following the procedures outlined by Musiek, two digits were presented to one ear while two digits were presented to the other ear. The test began with three practice items. The subject was instructed to repeat back all digits; order was not scored. The test contained 20 items of four digits each, and individual digits were marked as a miss if the subject gave an incorrect response or failed to respond. Subjects were encouraged to guess rather than not respond at all.
The Staggered Spondaic Words (SSW) test examines the ability to segregate and interpret competing speech presented to the two ears. It is thought to be sensitive to lesions of the corpus callosum and cortex. Deficits on the SSW test, including diminished left-ear responses, would suggest interhemispheric transfer deficits at the level of the corpus callosum. This test is useful in this patient population because it is resistant to the effects of peripheral hearing impairment and has evidence of strong reliability and validity.
The SSW test consists of 40 pairs of "spondaic words"; each spondaic word ("spondee") contains two syllables spoken with equal emphasis, and each syllable is itself a complete word (e.g., "hotdog"). On each of the 40 trials, one spondee is presented to the left ear and one to the right ear in such a way that the second syllable of the first spondee overlaps in time with the first syllable of the second spondee presented to the other ear. Scoring is based on identification of the parts of the words presented in isolation and in competition, as well as the total number of errors. Each SSW item thus creates four test conditions: (1) right noncompeting syllables, (2) right competing syllables, (3) left competing syllables, and (4) left noncompeting syllables. Four practice items are presented prior to beginning the test. The test began with the first spondee presented to the right ear and the second to the left ear; subsequent items alternated between right-ear-first and left-ear-first presentation throughout the 40 test items. Subjects were instructed to repeat back the words in the exact order they heard them. To score the test, we marked any incorrect word in the four test conditions as a miss and tallied misses by test condition. If a subject repeated all of the words but in an incorrect order, the item was marked as a reversal, but the individual words were counted as correct in the per-condition totals. The subject was allowed as much time as needed to respond to each item. Scores consisted of total errors as well as the number of errors for each competing and noncompeting condition.
Both testing sites used the same equipment and the same protocols that, once established, were loaded at both sites so that the protocol settings would be exactly matched to minimize errors. Verification of the equipment was accomplished by site visits of NCRAR audiologists to the WRAMC group. This enabled two individuals to be tested at both sites, further confirming the equivalence of measurement in the two locations. The EP testing took approximately 2 hours, including setup time.
The ABR has been used to help estimate the integrity of cochlear structures and central auditory pathways from the auditory nerve to the superior olivary complex. The ABR is characterized by five or six peaks in the response waveform that occur at particular times after sound presentation. Both latency and amplitude of the peaks and relationships among the peaks are important measures of brainstem function.
The ABR was elicited using 100 µs rarefaction clicks presented through ER-3A (300) insert earphones at 80 dB normal HL at a rate of 17 clicks per second for each ear independently (Etymotic Research, Inc; Elk Grove Village, Illinois). Using a Cadwell Sierra Wave EP system (Kennewick, Washington), ABRs were recorded from scalp electrode Cz (top of head) referenced to contralateral and ipsilateral mastoid electrodes, with the ground electrode placed at Fpz (forehead). Electrode placements were measured according to the International 10-20 system; impedances were maintained at <5 kΩ (at 30 Hz), with interelectrode impedance differences of ≤2 kΩ. At least 2,000 accepted trials for each run were averaged, and each run was then repeated. If needed, a third waveform was collected to ensure repeatability. The bandpass filter settings were 100 and 3,000 Hz. ABR waveforms were averaged and analyzed by selecting visible waves I through V. The following components of the ABR were recorded, measured, and analyzed: (1) absolute latencies of one or more of the five constituent waves, (2) interpeak latencies, (3) interaural latency differences for all peaks, (4) absolute amplitudes of all visible waves, and (5) wave V/I amplitude ratios.
In general, an averaged LLR wave is composed of contributions from multiple structures and can reflect attention, cognitive, discriminative, and integrative functions of the brain. Putative neural generators for auditory LLRs include thalamic projections to the auditory cortex, primary auditory cortex, supratemporal plane, temporal-parietal association cortex, frontal cortex, reticulothalamus, and medial septal area. LLRs consist of a series of positive and negative peaks that occur from 70 to 500 ms after onset of the stimulus. For this study, the following major peaks and troughs of the LLR were measured and evaluated: in order of latency, these components are labeled N100 (N1), P160/200 (P2), N200 (N2), and P300 (P3). The first three LLR peaks are primarily exogenous: they are affected strongly by characteristics of the auditory stimuli. In contrast, the event-related P300 is an endogenous potential that is related to cognition, attention, auditory discrimination, memory, and semantic expectancy.
The four-channel Cadwell Sierra Wave EP system was used to record LLRs from gold-cup or silver-silver chloride surface electrodes affixed to the subject's head at Cz (top of head), C3 (left side of head, approximately midway between ear and top of head), and C4 (right side of head, approximately midway between ear and top of head). A reference electrode was placed on the subject's nose. An electrode placed at Fpz (forehead) served as the common (ground) for all preamplifiers. Eye-blink (electrooculographic or myographic) activity was monitored on the fourth channel using electrodes located superior and inferior to the left eye. Ocular artifacts greater than 60 µV were rejected automatically. The gain of each electroencephalography channel was 100,000. The bandpass filter for all LLR recordings had a low-frequency cutoff of 1 Hz and a high-frequency cutoff of 30 Hz.
The LLR was elicited from each ear independently using an “oddball paradigm.” During each averaging epoch, the “frequent” (standard or common) signal, a 500 Hz tone, was presented 80 percent of the time at 75 dB sound pressure level (SPL). A 1,000 Hz “oddball” (deviant or rare) signal was presented at pseudorandom intervals 20 percent of the time at 75 dB SPL. Subjects were asked to count the higher-pitched (1,000 Hz) tones silently to themselves. The rise time of tonal stimuli was 10 ms, the plateau 50 ms, and the fall time 10 ms, using a Blackman envelope. Each test run was terminated when 50 to 60 acceptable oddball responses had been averaged. The final LLR comprised an average of 2 repeatable runs for a total of at least 100 individual responses to the deviant signal. The latency and amplitude from baseline of N100, P160/P200, N200, and P300 waves were analyzed.
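The 80/20 frequent/oddball structure of the stimulus train could be generated along the following lines. This is only a sketch of the paradigm's trial logic; the randomization actually used by the Cadwell system is not specified here, and real implementations often add constraints (e.g., no consecutive deviants):

```python
import random

def oddball_sequence(n_trials, p_deviant=0.2, seed=None):
    """Return a shuffled list of 'standard' (e.g., 500 Hz) and 'deviant'
    (e.g., 1,000 Hz) trial labels, with deviants making up p_deviant
    of the trials at pseudorandom positions."""
    n_deviant = round(n_trials * p_deviant)
    trials = ['standard'] * (n_trials - n_deviant) + ['deviant'] * n_deviant
    random.Random(seed).shuffle(trials)
    return trials
```

With 250 to 300 trials per run, a 20 percent deviant rate yields the 50 to 60 acceptable oddball responses required before a run was terminated.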
Of the 55 blast-exposed patients who were consented at the Army Audiology and Speech Center at WRAMC, 36 completed all of the behavioral testing and 19 completed all of the electrophysiological testing. The smaller number of participants in the electrophysiological testing was due to persistent equipment failures (which resulted in a complete loss of data) rather than fatigue or discomfort. All but two of the patients who completed the electrophysiology testing also completed the behavioral testing. Each of the blast-exposed patients had been examined by the WRAMC TBI team; 17 were diagnosed as having no TBI, while 19 were diagnosed with mTBI. The blast-exposed patients who completed the behavioral testing ranged in age from 20 to 54 (mean 32.8 years) and included 32 males and 4 females.
Twenty-nine control subjects who had not been exposed to a blast were recruited and tested at the NCRAR. All control subjects completed both the behavioral and electrophysiological test batteries. The control group ranged in age from 19 to 54 (mean 32.1 years) and consisted of 26 males and 3 females.
All but five of those blast-exposed patients who completed the behavioral testing also completed the medical history survey. Of the patients who completed the survey, 41.9 percent (13/31) reported having been exposed to more than 1 blast and 29 percent (9/31) reported being exposed to more than 10 blasts. Fifty-five percent (17/31) had serious bodily injuries as a direct result of the blast, suggesting that they were in close proximity to the blast, which is consistent with the reports that those patients gave of their blast exposure. Because the experience was too recent for an official diagnosis to be made, it was not possible to determine which of the blast-exposed participants had posttraumatic stress disorder (PTSD). Nonetheless, 13 of the 31 blast-exposed patients who completed the medical history interview answered that they had PTSD.
Two of the questions concerned whether or not participants had experienced any hearing changes following blast exposure. While only 39 percent (12/31) of those surveyed reported greater difficulties hearing in quiet after their exposure, 78 percent (25/32) reported greater difficulties hearing speech in noisy environments.
Each subject completed a full audiometric evaluation. Figure 1 shows the mean pure-tone air-conduction thresholds for the two groups of subjects. All subjects had acoustic reflex thresholds, acoustic reflex decay test results, tympanograms, and DPOAEs within normal limits (at least 6 dB above the noise floor). A repeated-measures analysis of variance was carried out on the audiometric thresholds (in decibels HL) with frequency (0.5, 1.0, 2.0, 4.0, and 8.0 kHz) and ear (left, right) as within-subjects factors and group membership (blast-exposed vs control) as a between-subjects factor. Greenhouse-Geisser corrections were applied where indicated to account for violations of the sphericity assumption, resulting in noninteger degrees of freedom in some cases.
Mean pure-tone air-conduction thresholds for (a) blast-exposed (n = 36) and (b) non-blast-exposed (control) subjects (n = 29). HL = hearing level.
Test ear was not a significant factor (F(1,63) = 2.81, p = 0.10), but test frequency was significant (F(2.27,252) = 14.08, p < 0.001), as was group membership (F(1,63) = 15.10, p < 0.001). There was also a significant interaction between test frequency and group membership (F(4,252) = 3.07, p = 0.04). Examination of the mean audiometric data at each frequency revealed that the threshold differences across groups were usually in the range of 3 to 6 dB, with the largest differences, 12 and 13 dB, occurring at 4.0 and 8.0 kHz in the left ear. In all cases, the differences were consistently in the direction of greater impairment for the blast-exposed group. Although statistically significant, these small differences seldom exceeded the range of test-retest reliability for clinical audiograms and do not reach the level generally thought to account for differences between groups on the behavioral and electrophysiological tests used [20,24].
Table 1 shows the mean word recognition scores (WRSs) for the left and right ears, as well as the SNR values for the QuickSIN measurements made with left-ear stimulation, right-ear stimulation, and presentation of the identical QuickSIN stimuli to both ears. Three of the blast-exposed patients did not complete the WRS testing, and four did not complete the QuickSIN. To determine whether abnormal performance was statistically more likely in the blast-exposed group, we categorized individuals as performing “normally” or “abnormally” based on comparison of each score with a cutoff value calculated from the scores of the control group. Cutoff values, shown in Table 1, correspond to two standard deviations (SDs) above the mean of the control group.
Using the control data to determine the range of normal performance for a group of subjects with this age, hearing, and sex composition, we found that 12 percent (4/33) of the blast-exposed group and 10 percent (3/29) of the control group were "abnormal" on the WRS when tested at either the left or right ear. On the basis of a nonparametric chi-square test, this difference did not differ significantly from a distribution that would arise by chance (χ2 = 0.049, df = 1, p = 0.83).
A similar analysis of the QuickSIN, however, found that 39 percent (12/31) of the blast-exposed patients performed abnormally on one or more of the measures, while only one control subject was outside the normal range. These rates of abnormal performance are significantly different from expected based on random variation (χ2 = 10.98, df = 1, p < 0.01).
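For a 2×2 table of abnormal/normal counts by group, the Pearson chi-square statistic (df = 1) has a simple closed form. The sketch below is one common, uncorrected form of the statistic (the article does not specify its exact computation); applied to the QuickSIN counts above (12/31 blast-exposed vs 1/29 controls abnormal), it yields a value close to the reported χ2 of 10.98:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (df = 1, no continuity correction)
    for the 2x2 table [[a, b], [c, d]], e.g. rows = group and
    columns = abnormal/normal counts."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator
```

For example, `chi_square_2x2(12, 19, 1, 28)` evaluates the blast-exposed (12 abnormal, 19 normal) versus control (1 abnormal, 28 normal) QuickSIN comparison.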
The lack of a significant difference on the WRS suggests that speech understanding per se was unaffected (as was the ability to undergo basic auditory testing), whereas the significant difference on the QuickSIN suggests that the ability to understand speech in complex auditory environments (in this case, multitalker babble) may be impaired for at least some of the blast-exposed participants. Further light was shed on this difference between the groups by examining the results of the central auditory test battery.
Table 2 shows means, SDs, and ranges of performance on the five behavioral tests by the blast-exposed and control subjects. Abnormal performance was determined by comparing each subject’s score to a cutoff defined as plus (or minus, where appropriate) two SDs from the mean of the control group. Figure 2(a) displays the percentage of subjects within each group who performed abnormally on one or more of the subtests listed in Table 2. Nonparametric chi-square tests were used to determine whether or not the proportion of the blast-exposed group performing outside the normal range was statistically different from the proportion of control subjects performing outside the normal range.
(a) Percentage of blast-exposed and control subjects who demonstrated abnormal performance on at least one subtest or component of each of the behavioral tests. (b) Percentage of subjects in two groups (blast-exposed and control) who performed abnormally on from 0 to 5 behavioral tests. DD = Dichotic Digits, FP = Frequency Patterns, GIN = Gaps-In-Noise, MLD = Masking Level Difference, SSW = Staggered Spondaic Words.
Table 2 shows mean accuracy, SD, and range of performance for both test groups on the FP test for the left and right ears. For the right ear, mean accuracy was 93.10 ± 14.14 percent for the control subjects and 85.18 ± 20.89 percent for the blast-exposed group. For the left ear, mean accuracy was 93.67 ± 11.21 percent for the control subjects and 84.35 ± 23.54 percent for the blast-exposed group. The group means did not differ significantly for either ear (p > 0.05). Using a cutoff of 71 percent for the left-ear test and 65 percent for the right-ear test, 19 percent (7/36) of the blast-exposed patients and 7 percent (2/29) of the control subjects exhibited abnormal performance when tested at either the right or left ear. This difference was not statistically significant (χ2 = 2.12, df = 1, p = 0.15).
Table 2 shows approximate gap thresholds estimated from performance on the GIN test, for the left and right ears. For the right-ear test, mean threshold for the control group was 3.79 ± 1.29 ms and mean threshold for the blast-exposed group was 6.03 ± 3.20 ms, which was a statistically significant difference (F(1,64) = 12.39, p = 0.001). A cutoff of 6 ms indicated that 31 percent (11/36) of the blast-exposed participants had approximate gap thresholds in the abnormal range, and none of the control subjects performed abnormally (χ2 = 10.66, df = 1, p = 0.001). For the left ear, mean threshold for the control group was 4.28 ± 2.10 ms and mean threshold for the blast-exposed group was 6.44 ± 3.12 ms, which was a statistically significant difference (F(1,64) = 10.33, p < 0.01). A cutoff of 8 ms indicated that 22 percent (8/36) of the blast-exposed participants had approximate gap thresholds in the abnormal range, and one of the control subjects performed abnormally (χ2 = 4.75, df = 1, p < 0.05). Figure 3(a) shows that 39 percent (14/36) of the blast-exposed group had abnormal performance on the GIN test for either ear, as compared with 3 percent for the control group (1/29). The difference was statistically significant (χ2 = 11.37, df = 1, p < 0.01).
Grand average waveforms for Auditory Brainstem Response recordings. (a) Control subjects and (b) blast-exposed subjects. Both ears and both contralateral (Contra) and ipsilateral (Ipsi) presentations are shown. LE = left ear, RE = right ear.
Average data for the MLD test are shown in Table 2. The average MLD for the control group was 13.59 ± 2.80 dB, and the average MLD for the blast-exposed group was 13.28 ± 3.74 dB. This difference was not statistically significant (F(1,64) = 0.136, p = 0.71). In the N0Sπ condition, in which threshold is the SNR needed to detect a signal presented with a 180° phase reversal between the two ears, the average threshold was −24.83 ± 2.70 dB for the control subjects and −23.00 ± 3.66 dB for the blast-exposed group. This difference was statistically significant (F(1,64) = 5.03, p < 0.05). In the N0S0 condition, in which threshold is the SNR needed to detect a signal presented diotically (no binaural differences), the average threshold was −11.24 ± 2.36 dB for the control subjects and −9.72 ± 3.54 dB for the blast-exposed subjects. This difference was also statistically significant (F(1,64) = 3.97, p = 0.05).
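The MLD itself is simply the binaural advantage: the N0S0 detection threshold minus the N0Sπ threshold. A quick arithmetic check against the group means above confirms the reported MLDs:

```python
# MLD = N0S0 threshold minus N0Spi threshold (dB SNR, group means from Table 2).
control_n0s0, control_n0spi = -11.24, -24.83
blast_n0s0, blast_n0spi = -9.72, -23.00

control_mld = control_n0s0 - control_n0spi
blast_mld = blast_n0s0 - blast_n0spi
print(round(control_mld, 2), round(blast_mld, 2))  # 13.59 13.28, matching Table 2
```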
Based on the SDs, cutoff values were set at 8 dB for the MLD, −7 dB for N0S0, and −19 dB for N0Sπ. None of the control subjects had thresholds outside the normal range on either of the component measures, while 17 percent (6/36) of the blast-exposed subjects were abnormal on N0S0 (χ2 = 5.33, df = 1, p < 0.05) and 11 percent (4/36) were abnormal on N0Sπ (χ2 = 3.43, df = 1, p = 0.06). For the MLD, 20 percent (7/36) of the blast-exposed subjects had MLD values of ≤8 dB, while only 3 percent (1/29) of the control subjects had MLDs in the abnormal range (χ2 = 3.81, df = 1, p = 0.05). Considering both the component subtests and the MLD score, 33 percent (12/36) of the blast-exposed subjects were abnormal on one or more of the measures, while only 3 percent (1/29) of the control subjects had one or more scores in the abnormal range (χ2 = 8.97, df = 1, p = 0.003).
Performance on the DD test was well within the normal range for nearly all of the subjects and at or near perfect performance for many. As shown in Table 2, for the right ear, the mean accuracy for the control subjects was 97.67 ± 3.47 percent and mean accuracy for the blast-exposed group was 96.60 ± 3.97 percent. For the left ear, the mean accuracy for the control subjects was 94.31 ± 5.97 percent and mean accuracy for the blast-exposed group was 94.24 ± 5.17 percent. These differences were not statistically significant (p > 0.25). Using a cutoff of 91 percent for the right ear and 82 percent for the left ear, 11 percent (4/36) of the experimental subjects performed abnormally for either the right or left ear, as opposed to 14 percent (4/29) of control subjects. This difference was not statistically significant (χ2 = 0.11, df = 1, p = 0.74).
Table 2 shows that the mean number of total errors for the control group on the SSW was 4.14 ± 3.03, while the mean for the blast-exposed group was 10.44 ± 6.52. This difference was statistically significant (F(1,64) = 23.05, p < 0.001). The cutoff value for normal performance was 10 errors, based on the mean and SD of the control group. Only 3 percent (1/29) of the control subjects had ≥10 errors, while 36 percent (13/36) of the blast-exposed subjects had between 10 and 29 total errors. In the left competing condition, 31 percent (11/36) of the blast-exposed subjects performed abnormally (≥8 errors) compared with 3 percent (1/29) of the control subjects. In the right competing condition, 39 percent (14/36) of the blast-exposed subjects performed abnormally compared with 0 percent (0/29) of the control subjects. Overall, 3 percent (1/29) of the control subjects and 44 percent (16/36) of the blast-exposed subjects performed abnormally on one or more of the subtests of the SSW. This difference was statistically significant (χ2 = 13.98, df = 1, p < 0.001).
Figure 2(b) shows the proportion of subjects in each group who produced abnormal performance on any of the five behavioral tests. Of the blast-exposed subjects, 75 percent (27/36) had abnormal performance on at least one test, as opposed to only 24 percent (7/29) of the control subjects. Of the 36 blast-exposed subjects, 17 (44%) showed abnormal performance on two or more of the behavioral tests, while 3 of 29 (10%) control subjects were abnormal on two or more tests and only one was abnormal on three tests. No subject had abnormal performance on all five behavioral tests. There was a statistically significant difference between groups in terms of the number of abnormal test results (χ2 = 17.50, df = 4, p = 0.002).
EP measurements were carried out on 19 blast-exposed subjects and 29 control subjects. Two repeatable ABR and LLR recordings were collected and averaged for each subject. Traces were analyzed independently by three audiologists, using the software provided with the Cadwell Sierra Wave equipment. Disagreements concerning the peak locations within a trace were reconciled either by agreement of two of the three audiologists or by consultation with a highly experienced EP researcher (R. L. Folmer). In the final analysis, two blast-exposed subjects were excluded from the ABR and LLR results and one control subject was excluded from the ABR results because of unacceptable levels of artifact during the EP recording, which made the waveforms indistinguishable. All traces were baseline corrected. As the equipment was not designed for audiological testing, ABR latencies also needed to be corrected to account for the length of the insert earphone tubes used in order to correspond to standard clinical latencies (a shift of ~0.9 ms).
Average peak latencies and amplitudes for the ABR are shown in Table 3 for the blast-exposed and control groups. Peak-to-peak amplitudes were calculated as the amplitude difference between the highest peak and the following valley. If either peak or valley for waves I, III, and V were indistinguishable, values from that wave were excluded from the calculations in Table 3, as indicated by the variable numbers of subjects included in each average (n). Consistent with previous reports in the EP literature, amplitude data were more variable than latencies.
Grand averaged waveforms for the ABR measurements are shown in Figure 3 for both right and left ears as either ipsilateral or contralateral to the stimulus ear. Figure 3(a) includes the average of the control subjects (n = 29), while Figure 3(b) displays waveforms from the blast-exposed subjects (n = 19). These waveforms and the values in Table 3 were quite similar between blast-exposed and control groups, and the average peak latencies and amplitudes did not differ significantly between groups.
In contrast to the ABR waveforms, the later auditory components demonstrated different results for the two groups. The grand averaged waveforms shown in Figure 4 indicate responses to the right and to the left ear, for both the rare and the common stimuli. The mean data and SDs shown in Table 4 reflect the peak latencies and amplitudes in the grand average waveforms. Recall that baseline correction was applied to the amplitude to show the similarities between ears and the common and rare test conditions. Amplitude values are displayed in microvolts from baseline. A significant difference was seen between groups for the P300 latency in the right ear (t(42) = 2.65, p = 0.01). There was also a significant group difference for the P300 amplitude in the right ear (t(42) = −2.26, p = 0.03) and the N100 amplitude in the left ear (t(38) = 2.21, p < 0.05) for the rare stimulus condition.
Viewed as a whole, results from the EP testing indicate similar performance between the blast-exposed and control groups (as well as within groups) for the earlier components (ABR and N100), with significant differences emerging for later components (P300, particularly the right ear), reflecting higher processing stages in the auditory system and cognitive centers of the brain. These findings are consistent with Segalowitz et al.  and Alberti et al. , who also reported lower amplitudes and longer latencies for auditory P300 components in subjects with mTBI. Lew et al. also observed smaller amplitude and longer latency auditory P300s from patients with histories of severe brain injury compared with nondisabled control subjects . When the individual values for the blast-exposed patients were compared with the range of expected values based on the control data, however, significantly abnormal latencies and/or amplitudes were not observed, based on the criterion of plus or minus two SDs from the mean.
Recall that some of the experimental subjects reported they had experienced multiple blast events, while others only reported one such event. Of the 31 subjects who were willing to respond to questions about their blast-exposure experience(s), 13 reported more than one blast exposure and 18 reported only one exposure. An analysis of the rate of abnormal performances comparing these two subgroups across all of the subtests examined, as well as the total number of abnormal test results, did not result in any significant differences.
Similarly, a medically driven diagnosis of mTBI (by the WRAMC TBI team) was also examined as a potential additional factor beyond blast exposure and did not reveal any significant differences between the 19 subjects who were diagnosed with mTBI and the 17 who were not. If such a diagnosis can be assumed to indicate injury severity among the blast-exposed experimental subjects tested here, that diagnosis was not reflected in significant correlations with abnormal performance on these central auditory tests. The diagnosis of TBI also was not significantly correlated with age, pure-tone average (low frequency) (PTALF) thresholds at 0.5, 1.0, and 2.0 kHz, or with pure-tone average (high frequency) (PTAHF) thresholds at 1.0, 2.0, and 4.0 kHz. Nor was there a significant correlation between a diagnosis of mTBI and the number of tests on which performance was abnormal.
Those who completed the questionnaire and reported PTSD (13/31) were no more likely to perform abnormally on the GIN, MLD, DD, or FP tests (p > 0.50), nor was the total number of abnormal test results significantly associated with a report of PTSD. Performance on the SSW, on the other hand, was significantly associated with a report of PTSD, with 85 percent (11/13) of those reporting PTSD performing abnormally on at least one of the subtests compared with 22 percent (5/23) of those not reporting PTSD (χ2 = 13.30, df = 1, p < 0.001). While the MLD and the N0Sπ subtest of the MLD were not significantly related to a report of PTSD, the same was not true of the N0S0 subtest, on which 38 percent of those reporting PTSD (5/13) performed abnormally, but only 4 percent (1/23) of those not reporting PTSD performed outside the normal range (χ2 = 6.96, df = 1, p < 0.01). The relationship between a report of PTSD and difficulty detecting signals in noise was further supported by the results of the QuickSIN test, on which 62 percent (8/13) of those reporting PTSD had SNR loss values in the abnormal range, as opposed to only 5 percent (1/19) of those who did not. Note that four participants did not complete the QuickSIN, none of whom reported PTSD.
Although the incidence of hearing loss among people exposed to a high-explosive blast varies considerably (see Helfer et al.  for review), the most recent published estimate based on military medical records is about 52 percent with permanent sensorineural loss [32-33]. In order to be eligible to participate, however, subjects in this study all had to have average PTALF thresholds at 0.5, 1.0, and 2.0 kHz of <50 dB HL. Indeed, those tested generally had only mild losses (except sometimes at 4.0 kHz). PTALF thresholds were 23.33 dB or better in both ears for all blast-exposed subjects, and WRS scores were 88 percent or better at the right ear and 80 percent or better at the left ear. The fact that we were able to identify subjects who had such minor documented hearing loss is perhaps remarkable given the noise levels from a blast, as well as other likely noise exposures associated with military service. It is not totally unexpected, however, given that earplugs are issued and may be worn in environments in which blasts are encountered. Other mitigating factors can include the type of helmet worn, the physical environment (reverberant vs open field), the type of explosive, and the orientation of the ear to the blast wave.
Correlations among the behavioral test results, age, and the average PTAHF thresholds at 1.0, 2.0, and 4.0 kHz were conducted on the combined subject pool (both control and blast-exposed) in order to examine the potential impacts of these factors on performance. Age and hearing loss (as measured by the PTAHF) were significantly correlated (R(65) = 0.30, p = 0.01). PTAHF was significantly negatively correlated with the MLD test (R(65) = −0.276, p = 0.03) and with the total number of errors on the SSW test (R(65) = 0.398, p = 0.001).
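The correlations reported in this section are Pearson product-moment coefficients computed over the combined pool of 65 subjects. A self-contained sketch of the computation follows; the individual ages and thresholds were not published, so the data below are purely illustrative (hypothetical values chosen only to show the mechanics):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ages (yr) and high-frequency pure-tone averages (dB HL), for
# illustration only; the study's raw subject-level data are not available.
ages = [24, 28, 31, 35, 40, 45]
ptahf = [5, 8, 10, 12, 18, 22]
print(round(pearson_r(ages, ptahf), 2))  # 0.99 for these strongly age-related values
```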
To examine other factors that might explain the relationships between PTSD and abnormal test performance, PTALF for the left and right ears, PTAHF for the left and right ears, WRS for the left and right ears, and age were all compared for those participants exposed to blasts who did and did not report PTSD. Age was not significantly different between the groups (F(1,35) = 0.622, p = 0.44), nor was PTAHF for either ear (right ear: F(1,35) = 1.29, p = 0.26; left ear: F(1,35) = 0.494, p = 0.49). WRS was marginally significantly different for both ears, however, as was PTALF (right ear PTALF: F(1,35) = 11.56, p < 0.01; right ear WRS: F(1,31) = 4.03, p = 0.05; left ear PTALF: F(1,35) = 6.47, p < 0.05; left ear WRS: F(1,35) = 4.01, p = 0.05).
Total number of errors on the SSW test was significantly correlated with average estimated threshold on the GIN test (R(65) = 0.338, p = 0.006) and negatively correlated with average performance on the FP test (R(65) = −0.276, p = 0.03). Performance on the FP test was negatively correlated with average estimated threshold on the GIN test (R(65) = −0.391, p = 0.001). No other score correlations were significant.
Correlations were also calculated for abnormal versus normal performance across the five behavioral tests. Abnormal performance on the GIN test was significantly correlated with abnormal performance on the SSW test (R(65) = 0.29, p = 0.03) and with abnormal performance on the FP test (R(65) = 0.27, p = 0.04). No other correlations reflecting abnormal performance between tests were significant. The generally low correlations among the central auditory processing tests confirm that these tests, selected specifically to try to test various levels of the auditory system, do reflect separate auditory functions.
The behavioral tests used in this study were developed to assess central auditory processing abilities in a number of different groups of patients. The normative studies typically included fairly young nondisabled subjects from many different backgrounds. Etiologies considered in many of those studies included concussive head injuries incurred during sports activities and motor vehicle crashes. People with other brain injuries/pathologies, such as strokes or tumors, have also been included as experimental groups in these studies. In contrast to the experimental group studied in this research, those subjects typically had known, localized regions of brain injury. The experimental subjects evaluated in this study did not have confirmed pathologies that could be identified, but had in common exposure to the debilitating effects of a high-explosive blast during their deployments in Iraq and/or Afghanistan. Their suspected brain injuries were evaluated functionally, using criteria such as length of unconsciousness. Some were wearing protective gear when they were exposed to a blast, and they would likely have received rapid, extensive, and capable posttraumatic care.
The control group was included in this study because of possible differences in demographics between subjects who provided the published normative values and the experimental subjects studied here--in particular, hearing loss extent and configuration and age. By using age-, sex-, and hearing-loss matched control subjects, some potentially important differences between the groups, unrelated to blast-exposure history, were avoided. It should be noted that, because performance by the control group served as the “norm” against which performance of the experimental subjects was judged, a definition of a cutoff score determined as the mean plus or minus two SDs would result in one or two control subjects identified as performing abnormally. However, the relevant comparison is how many of the experimental group performed outside the normative values established in this study. This was a more conservative comparison than if the published norms had been used. For all tests, the control group of subjects assessed here provided a more stringent requirement for labeling performance as abnormal.
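The normative criterion described above can be sketched in a few lines. This is a minimal illustration using the control group's SSW summary statistics from Table 2, assuming a test scored in errors (higher is worse); for accuracy-scored tests the cutoff would be the mean minus two SDs instead:

```python
def abnormal_cutoff(mean, sd, n_sd=2.0):
    """Normative cutoff for an error-scored test: control-group mean
    plus two control-group standard deviations."""
    return mean + n_sd * sd

# Control-group SSW total errors (Table 2): mean 4.14, SD 3.03.
cutoff = abnormal_cutoff(4.14, 3.03)
print(round(cutoff, 2))            # 10.2, consistent with the paper's cutoff of 10 errors
print(12 >= cutoff, 9 >= cutoff)   # True False: 12 errors is abnormal, 9 is not
```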
Figure 2(a) shows graphically which tests of central auditory function were most likely to reveal abnormal performance in subjects who had been exposed to a blast. The GIN test and the SSW both revealed substantial rates of abnormal performance relative to the age- and hearing-matched controls. These results suggest that one or both of these two tests could be useful in determining whether or not central auditory functions were impaired in a given blast-exposed patient. The MLD test also showed a significant rate of abnormal performance among the blast-exposed patients, especially when the component subtests are also considered. Finally, the QuickSIN, although not formally included in the central auditory test battery, appears to provide useful data for the clinician interested in identifying nonperipheral factors affecting hearing ability.
Although none of the tests of auditory EPs used in this study revealed significant rates of abnormal performance for the blast-exposed patients, there were several late potentials that revealed significant differences between the mean values associated with the two groups. Future work should examine whether or not it would be possible to develop stimuli that could be used to reveal abnormal values for an individual patient.
The tests used in this study were selected to assess at least two brain areas that are heavily involved in auditory processing of simple and complex sounds and that are likely to reflect damage to auditory structures and neural projections. The three tests that revealed the largest effects of blast exposure were the GIN test, the SSW test, and the MLD test. The first two have been shown to reflect damage to the cortex and corpus callosum, and the latter tests the function of the auditory brainstem. The significant correlation between blast exposure and abnormal performance on the SSW and GIN tests suggests that the cortex and corpus callosum may be involved when there is exposure to a high-intensity blast. Differences in the mean values for the late electrophysiological responses but not for the ABR also support the involvement of cortical rather than early brainstem structures. The patterns of abnormal performance revealed for the MLD and its component subtests suggest that later brainstem structures may be involved, but the data are too sparse to draw strong inferences. These data do suggest that it is at least possible, however, that blast exposure may affect either brainstem or cortical level function, with variability as to the degree to which each area is involved. As mentioned previously, it is also essential to remember that the configuration of damage is almost certain to vary among blast-exposed individuals based on the specific conditions of the exposure (or exposures).
Within the blast-exposed group, individuals who had been diagnosed with mTBI demonstrated very little difference on these tests from those without such a diagnosis. This finding indicates that impairments in central processing, whether due to auditory or other factors, should be suspected after blast exposure, even in the absence of a formal diagnosis of mTBI. In addition, subjects who had been exposed multiple times to a blast did not show significantly different performance from subjects who only reported one blast exposure. This result should be taken with a degree of skepticism, however, as number of blast events was not a major classification factor in this experiment. Further work on the effects of multiple versus single blast exposures is obviously an important target for future research. Finally, significant correlations were revealed between abnormal performance on central auditory tests and a report of PTSD. This potentially important relationship should be examined in a study specifically designed to examine PTSD as a factor in auditory function.
While it is tempting to interpret the results of this study in terms of the clinical diagnosis of central auditory processing disorder, we would strongly advise against doing so. The purpose of the study was to determine whether patients who were recently exposed to high-explosive blasts showed evidence of central auditory dysfunction as determined from a battery of established central tests, and to a significant extent, they did. However, the patients tested were all hospitalized for major injuries to areas other than the brain. The potential importance of this fact should not be ignored, as it is associated with other factors that could have affected performance, most notably the possibility that the subjects were taking significant doses of medications for non-auditory-related injuries. Attempts to determine the extent of medications used by these subjects that might affect performance on auditory tests were largely unsuccessful.
Despite this note of caution, however, it is also the case that none of the experimental subjects performed poorly on all tests, suggesting that impairment was not global across the behavioral testing, but rather confined to several specific measures. If performance were impaired by overall vigilance or memory deficits, then it would be expected that two tests sharing similar presentation methods and response characteristics would produce similar results. It is instructive to note, then, that the DD and SSW tests produced markedly different rates of abnormal performance despite quite similar task demands. Specifically, both required the participant to hear and repeat back four items, two of which were presented to each ear. The difference is that the DD uses digits, which are closed-set and practiced from an early age, while the SSW uses an open set of compound words that require the participant to switch from attending to one ear to dividing attention between ears, and then to switch to attending to the other ear. The data collected here are not sufficient to determine which of these factors was driving performance and how these factors relate to blast exposure. This pattern is a strong indication, however, that a general inability to perform auditory tasks was not responsible for the high rate of abnormal performance among the blast-exposed patients.
Performance on these central tests was not strongly correlated with a diagnosis of mTBI, suggesting that blast exposure is, of itself, a separate disordered state, not necessarily dependent on TBI. That is, the effects of a high-explosive blast on the central systems--in this case, the auditory system--may be quite different, and potentially more subtle, than the impaired functions of the brain observed in those with TBI. In light of the many complaints of auditory difficulties expressed to audiologists by Veterans who report an exposure to blasts during their deployments, it is critical to determine how central auditory function, and performance on tests such as those used here, is affected after some amount of healing time has passed for these individuals.
The blast-exposed individuals tested here were mostly young (mean age 32.8 yr) and, because of their status as warfighters, were likely to be highly physically fit. Except for their injuries from war, these were very healthy individuals. Many of these servicemembers were exposed to more than one high-explosive blast during their tours of duty, and they often experienced other injuries, such as amputation, which for many was the immediate reason for their treatment at WRAMC. Individuals who had been diagnosed with mTBI, and therefore might be considered more seriously impaired by the blast exposure, did not on the whole perform worse on these tests than those who were not diagnosed with TBI. Even without obvious significant brain involvement resulting from their blast exposure, the performance of these young warfighters on central auditory tests indicates that a substantial number of them might be suffering from disorders associated with central auditory processing. Because of the frequency of auditory complaints voiced by these individuals, it might be prudent to include one or more tests of central auditory function in their postdeployment screenings and perhaps also for all preseparation screenings for soldiers leaving military service.
Clinic time is valuable, and it is unlikely that several tests of central auditory processing could be included in routine audiometric evaluations. However, the results of this study indicate that performing the SSW, GIN, and/or MLD tests, either alone or in combination, can provide valuable insight into the likelihood of impairment to central auditory functions and may alert clinicians to the need for further assessments.
Funding/Support: This material was based on work supported by the VA Rehabilitation Research and Development Service (Merit Review awards B5067R [PIs: Leek, Fausti] and C7755I [PIs: Gallun, Leek]), a Senior Research Career Scientist award (grant C4042L [PI: Leek]), VA Career Development II awards (grants C4963W [PI: Gallun] and C7067W [PI: Lewis]), and the VA Rehabilitation Research and Development NCRAR (Center of Excellence award C4844C).
Additional Contributions: We particularly express our gratitude and admiration to the warfighters who participated in this research at the (former) WRAMC. Drs. Frank Musiek and Richard Wilson generously provided essential testing materials. Dr. David Lilly contributed to the design of the study.
Disclaimer: The opinions and assertions presented are private views of the authors and are not to be construed as official or as necessarily reflecting the views of the VA or the Department of Defense.
This article and any supplemental material should be cited as follows:
Gallun FJ, Diedesch AC, Kubli LR, Walden TC, Folmer RL, Lewis MS, McDermott DJ, Fausti SA, Leek MR. Performance on tests of central auditory processing by individuals exposed to high-intensity blasts. J Rehabil Res Dev. 2012;49(7):1005-24.
ResearcherID: Frederick J. Gallun, PhD: G-3792-2012