Journal of Rehabilitation Research & Development (JRRD)

Volume 49, Number 9, 2012; Pages 1411–1420

Can structured data fields accurately measure quality of care? The example of falls

David A. Ganz, MD, PhD;1–3* Shone Almeida, MD;4 Carol P. Roth, RN, MPH;3 David B. Reuben, MD;2 Neil S. Wenger, MD, MPH3,5

1Department of Veterans Affairs Greater Los Angeles Healthcare System, Los Angeles, CA; 2Multicampus Program in Geriatric Medicine and Gerontology, David Geffen School of Medicine at University of California, Los Angeles (UCLA), Los Angeles, CA; 3RAND Health, Santa Monica, CA; 4David Geffen School of Medicine at UCLA, Los Angeles, CA; 5Division of General Internal Medicine and Health Services Research, David Geffen School of Medicine at UCLA, Los Angeles, CA

Abstract: By automating collection of data elements, electronic health records may simplify the process of measuring the quality of medical care. Using data from a quality improvement initiative in primary care medical groups, we sought to determine whether the quality of care for falls and fear of falling in outpatients aged 75 and older could be accurately measured solely from codable (non-free-text) data in a structured visit note. A traditional medical record review by trained abstractors served as the criterion standard. Among 215 patient records reviewed, we found a structured visit note in 54% of charts within 3 mo of the date patients had been identified as having falls or fear of falling. The reliability of an algorithm based on codable data was at least good (kappa of at least 0.61) compared with full medical record review for three care processes recommended for patients with two falls or one fall with injury in the past year: orthostatic vital signs, vision test/eye examination, and home safety evaluation. However, the automated algorithm routinely underestimated quality of care. Performance standards based on automated measurement of quality of care from electronic health records need to account for documentation occurring in nonstructured form.

Key words: automated data collection, electronic health record, falls, fear of falling, geriatrics, medical record review, primary care, quality improvement, quality measurement, quality of care.

Abbreviations: ACOVE = Assessing Care of Vulnerable Elders; ACOVEprime = ACOVE/Practice Redesign for Improved Medical Care for Elders; EHR = electronic health record; HSR&D = Health Services Research and Development; QI = quality indicator; SVN = structured visit note; UCLA = University of California, Los Angeles; UI = urinary incontinence; VA = Department of Veterans Affairs.
*Address all correspondence to David A. Ganz, MD, PhD; 11301 Wilshire Blvd, Building 220, Room 313 (11G), Los Angeles, CA 90073; 310-268-4110; fax: 310-268-4842.
Email: dganz@mednet.ucla.edu

More than a decade ago, a report commissioned by the National Committee for Quality Assurance outlined a road map for improving routine measurement of healthcare quality, including more efforts at automation of data collection to enable performance measurement and quality improvement [1]. Today, however, the capabilities of automated quality measurement remain limited. A review of one comprehensive quality-measurement system revealed that even in a scenario where all data came from today's typical electronic health records (EHRs), only 28 percent of 482 different quality measures could be captured solely from automated data [2].

Where automated quality measurement is successful, it typically relies on coded data. Examples of successful automated quality measurement from EHRs include determining appropriate medication use for cardiac conditions [3–5]; ascertaining measurement or control of intermediate outcomes such as blood pressure, hemoglobin A1c, and LDL (low-density lipoprotein) cholesterol [4,6–10]; and assessing receipt of preventive services such as cervical cancer screening and mammography [10]. In contrast, automated quality measurement for the acute condition of community-acquired pneumonia is currently limited by the lack of key information in coded form [11]. Until technologies that capture care provision from free-text notes are routinely available, measurement using automated sources must rely on data where eligibility and receipt of specific care processes are available in coded format.

In the field of geriatrics, many quality measures focus less on pharmaceutical treatments and laboratory tests and more on history taking, physical examination, counseling, and care coordination [12]; these are precisely the services that are difficult to capture in coded form. Previous work has shown that the quality of care for geriatric conditions such as falls is significantly poorer than that for general medical conditions such as diabetes or hypertension [13], indicating the importance of developing better measurement systems for geriatric problems.

We recently reported results of a quality-improvement intervention implemented in five medical groups in which we used a full medical record review to judge the quality of care [14]. The intervention included use of a paper or electronic structured visit note (SVN) to prompt clinicians to provide appropriate care. In some cases, the note was also used for medical record documentation of the patient visit. Because the SVN captures data in a structured way, the data within the SVN could potentially be coded to enable automated review in settings where the SVN is electronic. The objective of the current study was to determine whether an accurate estimate of the quality of care can be derived from structured (non-free-text) fields in the SVN when the SVN is used as part of routine documentation.

The quality improvement initiative (Assessing Care of Vulnerable Elders/Practice Redesign for Improved Medical Care for Elders [ACOVEprime]) consisted of an intervention to improve the quality of care for falls and urinary incontinence (UI) provided in primary care to older adults in five community medical groups across the United States [14]. The intervention, which took place between October 30, 2006, and December 31, 2007, included the following components:

Systematic identification of patients aged 75 or older with a history of two or more falls in the past year, one fall with injury since the last visit, fear of falling due to balance and/or gait problems, or bothersome UI.
The SVN, a specialized version of the traditional progress note designed to help clinicians document their patient encounters for falls and UI. The condition-specific SVN relies on check boxes and fill-in spaces to assist the clinician with notation and/or serves as a prompt for necessary history taking, physical examination, diagnostic evaluation, and treatment. At most sites, the upper part of the SVN (above a dotted line) was designed to be filled in by the nurse or medical assistant prior to the patient's seeing the primary care provider, who then filled in the remainder of the note. Some key elements of the fall evaluation were thus delegated from the primary care provider to ancillary staff. Figure 1 shows a sample SVN for falls that is used at one medical group in this study. At another medical group, the SVN was implemented as an EHR template.
Handouts that linked patients and families to community resources (e.g., exercise programs) as well as patient education materials for falls and UI.
Face-to-face clinician and staff education about falls and UI and technical assistance to implement the intervention.

At each of the five medical groups (henceforth referred to as sites A through E), one practice was selected to receive the intervention and one practice served as a control. Eligible patients in both intervention and control practices were identified using a screening process consisting of a questionnaire administered by a member of the practice support staff or filled out by the patient, depending on the medical group. The screening process included four questions:

1. Have you fallen two or more times in the past 12 mo?
2. Have you fallen and hurt yourself since your last visit to the doctor?
3. Are you afraid that you might fall because of balance or walking problems?
4. Do you have a problem with urinary incontinence (can't always hold your urine) that is bothersome enough that you would like to know more about how it could be treated?

At both control and intervention practices, the responses to the screening questions above were presented to the treating clinician. Clinicians at control practices received no further materials or decision support. In contrast, at the intervention practices, an affirmative response to any of the screening questions prompted support staff to attach the appropriate falls or UI SVN to the chart when rooming the patient. At one intervention practice, clinicians had the option of scheduling a future visit to complete the SVN. Also, for the one medical group that had a fully functional EHR, an affirmative response to a screening question at the intervention practice prompted the nurse and clinician to use the appropriate SVN that had been programmed into the EHR as condition-specific nurse and physician templates. However, for the five intervention practices overall, the ACOVEprime protocol did not mandate that physicians use the SVN. Physicians could use the SVN as a progress note or simply as a prompt for appropriate care, at their discretion.

For the intervention practice that used the SVN as part of a fully functional EHR, the SVN was split into two templates: a nurse template and a physician template. The nurse template covered the same elements shown above the dotted line for the paper SVN in Figure 1 and was set up to require completion of all appropriate fields. The nurse template mimicked the content of the paper SVN, except that the three-item recall question was not displayed for patients with known dementia or for patients who had been tested with this question in the past 6 mo.

The physician template was designed to modularly insert into the progress note for the visit and included three tabs: a review of alcohol and medication use, structured physical examination, and treatment options, paralleling the content below the dotted line in Figure 1. However, the section on alcohol and medications differed from the paper SVN in that the physician could administer the CAGE questionnaire [15] to patients drinking more than three drinks per day, and a current problem list and medication list from the EHR were displayed, with an alert if the patient was on psychotropic medications. In addition, the tab dealing with treatment options contained fewer check boxes than the paper SVN: options for printing specific patient education handouts and ordering calcium and vitamin D were present, but instead of a specific check box for referrals (e.g., cardiology) or tests (e.g., Holter monitor), there were generic links to other screens to add/remove medications or write orders/referrals. The treatment tab also offered decision support in the form of links to information on proper management of falls, a grid to help choose the appropriate assistive device for a patient, or a link to Geriatrics at Your Fingertips [16]. Of note, documentation of diagnostic impression and plan was not covered in the physician template but in the main progress note for the visit. Data fields within the physician template were not mandated, but the physician template became part of the medical record regardless of data field completion. Thus, an SVN was present in every medical record, but data field completion could vary for physician documentation.

In this study, we focused on the three of the five intervention practices (sites A, B, and C) that chose to use the SVN as a means of documentation; the remaining two intervention practices (sites D and E) used the SVN as a prompt for appropriate care for falls or UI but did not make the SVN part of the recorded visit. Since our focus was on care for falls and mobility problems, we selected patients who screened positive for falls or fear of falling across the three intervention practices and excluded patients with UI only. We excluded patients whose first fall during the study period came to attention in the course of usual care (rather than screening), because the SVN was not a consistent part of the work flow outside of screening. In addition, we excluded a small number of patients with a history of falls or fear of falling who, based on full medical record review, were not eligible for some or all of the recommended care processes for each condition. The goal of these exclusions was to restrict to a sample in which all patients were eligible for the same care processes based on full medical record review, the criterion standard. Figure 2 shows details of patients included and excluded in this study.

For the evaluation of ACOVEprime, a stratified random sample of medical records was selected for copying (or on-site use) and medical record review; the stratification included a preference for patients with falls over those with fear of falling [14]. Sites A and B had paper medical records that were copied on-site; these records were scanned into electronic form and then deidentified. For site C, which had an EHR, the contents of each record were "printed" in deidentified form to an electronic file. Trained nurse chart abstractors collected data used to determine eligibility for and completion of care processes consistent with selected Assessing Care of Vulnerable Elders (ACOVE)-3 quality indicators (QIs) [17]. For example, one ACOVE-3 falls QI requires that patients with two falls in the past year or one fall with injury receive an appropriate gait/balance examination within 3 mo of the report of falls (see the Appendix, available online only, for a list of selected QIs used in this study). For this analysis, we only include QIs for which eligibility is determined by a positive screen for falls or fear of falling because these QIs had sufficient sample size for analysis. Other QIs for which eligibility depends on medical record content beyond the screener (e.g., offering exercise to a patient with a balance problem detected on examination) had insufficient numbers of patients for review. The medical record review typically covered 2 yr of care, including the study period of about 1 yr and about 10 mo before the study period for historical information; the exact dates of the study period varied by medical group [14].

We electronically browsed each scanned medical record from the three intervention sites for the "Falls/Mobility Problems" SVN (Figure 1) and noted whether an SVN (or EHR equivalent) was present in each patient's medical record within 3 mo of the date the patient had a positive screen for a fall history or fear of falling. We abstracted relevant data fields from the SVN and ignored any data outside the SVN in order to mimic an automated evaluation process. Data collection was carried out by one abstractor (SA). In those few cases where markings or entries on the SVN were unclear, a second abstractor (DAG) reviewed the SVN and a consensus was reached as to the intent of the SVN author. Throughout the abstraction process, the primary abstractor was blinded to quality of care data previously obtained by full medical record review for the parent evaluation of ACOVEprime.


Figure 1. Example of structured visit note for falls.

Using the SVN documentation, we created decision rules to determine whether recommended care processes were completed. Eligibility for these care processes depended on a positive response to the screening questions for falls (patients with falls were eligible for six care processes in this analysis) or fear of falling without falls (patients with fear of falling only were eligible for one care process in this analysis). Although the conditions for which the patient screened positive could be restated on the SVN, we opted to use the screening questions themselves for determination of eligibility, because data were more consistently available from the screen. Fulfillment of each care process was determined by which boxes were checked or structured fields filled in within the SVN. For example, the requirement that patients with two falls or one fall with injury receive vision testing or an eye examination could be satisfied in three ways: by checking "yes" to the question of eye examination within the past year in the "History" section, by filling in numeric results for the left and right eyes for tested visual acuity, or by checking the box "Referral for eye exam" in the "Treatment" section (Figure 1). We conferred credit for an eye examination referral, even though we did not determine whether the referral was completed, in keeping with the approach used in the full medical record review, which is to confer credit for treatments initiated regardless of whether the treatment was ultimately carried out. If no SVN was present in the record or the SVN was present more than 3 mo after the screening date, the patient did not receive credit for any care processes.
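As an illustration, the three-way credit rule described above for the vision test/eye examination can be sketched as a simple predicate. The field names below are hypothetical: the paper describes the scoring logic but not an actual data schema.

```python
# Hypothetical sketch of one SVN-based decision rule (vision test/eye exam).
# Any one of three structured entries confers credit; a missing or late SVN
# means no credit for any care process.
def vision_care_completed(svn):
    """`svn` maps structured-field names to values; None means no SVN was
    found within 3 mo of the screening date."""
    if svn is None:
        return False  # no SVN in record -> no credit
    return (
        svn.get("history_eye_exam_past_year") == "yes"       # "History" checkbox
        or (svn.get("visual_acuity_left") is not None
            and svn.get("visual_acuity_right") is not None)  # tested acuity, both eyes
        or svn.get("referral_eye_exam", False)               # "Treatment" referral box
    )
```

Note that, mirroring the full medical record review, the referral checkbox alone earns credit even though completion of the referral is not verified.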

Our analyses focused on two samples: the entire intervention sample from the three sites reviewed in this study (regardless of presence/absence of SVN in the chart) and a restricted sample focusing on just those patients for whom an SVN was found in the medical record. The former sample is more generalizable (since no selection bias is associated with the decision to document using the SVN), but the latter sample is more similar to what would be expected in scenarios where SVN use is mandated in response to a positive screen, such as in an EHR. In this latter scenario, a complete set of screening results and SVNs would be available in all cases.

The sensitivity of the SVN-based scoring algorithm was compared with the criterion standard of full medical record review. Sensitivity was defined as the percentage of patients who had a specific care process completed according to the full medical record review and who also had that process completed according to the SVN. High sensitivity suggests that the SVN appropriately reflects care provided. A limitation of computing sensitivity in this way was that the SVN was reviewed as part of the full medical record review, and sensitivity may thus be inflated as a result of incorporation bias [18]. We did not compute specificity because the SVN is part of the larger medical record, and therefore, absence of care in the full medical record implies absence of care in the SVN.

We computed the kappa statistic as a measure of chance-corrected agreement between methods. Because kappa is known to be sensitive to the underlying prevalence of the item being observed (in our case the frequency with which a care process is performed according to full medical record review), the kappa statistic should be interpreted cautiously. For this study, we followed the convention that kappa results ≤0.20 are poor, between 0.21 and 0.40 are fair, between 0.41 and 0.60 are moderate, between 0.61 and 0.80 are good, and between 0.81 and 1.00 are very good [19]. Finally, we compared the percentage of care processes completed according to the SVN scoring algorithm with the percentage of care processes completed according to the criterion standard medical record scoring algorithm. Stata 11 (StataCorp; College Station, Texas) was used for all analyses.
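As a rough sketch of the two statistics above (the study itself used Stata 11), sensitivity and Cohen's kappa for one care process can be computed from per-patient (SVN, chart-review) pass/fail pairs; the pairs in the usage example are illustrative, not study data.

```python
def sensitivity(pairs):
    """Share of chart-review passes also credited by the SVN algorithm.
    `pairs` is a list of (svn_pass, chart_pass) booleans."""
    chart_passes = [svn for svn, chart in pairs if chart]
    return sum(chart_passes) / len(chart_passes)

def cohen_kappa(pairs):
    """Chance-corrected agreement between the two scoring methods."""
    n = len(pairs)
    po = sum(svn == chart for svn, chart in pairs) / n   # observed agreement
    p1 = sum(svn for svn, _ in pairs) / n                # SVN "pass" rate
    p2 = sum(chart for _, chart in pairs) / n            # chart-review "pass" rate
    pe = p1 * p2 + (1 - p1) * (1 - p2)                   # agreement expected by chance
    return (po - pe) / (1 - pe)

def interpret_kappa(k):
    """Bands from the convention cited in the paper [19]."""
    for cutoff, label in [(0.20, "poor"), (0.40, "fair"), (0.60, "moderate"),
                          (0.80, "good"), (1.00, "very good")]:
        if k <= cutoff:
            return label
```

For example, if 10 of 20 patients pass by chart review but only 8 of those also pass by the SVN (and the other 10 fail by both methods), sensitivity is 0.8 and kappa is 0.8 ("good").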

The analysis included 215 patients (site A: 98, site B: 59, site C: 58). Patients' mean age was 83 yr (standard deviation 5), and 67 percent were women. Forty-seven percent of patients had a history of falls documented at screening; the remainder had fear of falling without a fall. An SVN was found in 54 percent of patients' charts within 3 mo of the screening date. The percentage of charts with an SVN varied by site. At sites A and B, the percentage of charts with an SVN within 3 mo of the screening date was 36 and 39 percent, respectively. At site C, where the SVN was automatically present through EHR templates for patients screening positive for intervention conditions, all patients had an SVN.

The SVN-based algorithm underestimated (or, in one instance, exactly matched) the completion of recommended care processes determined by full medical record review (Table 1). For the full sample of patients, the SVN-based method and the full medical record review were within 10 percentage points of one another for only one care process, performance of orthostatic vital signs. For the subsample of patients with an SVN in the medical record, the SVN-based method came within 10 percentage points of the full medical record review for performance of a fall history, receipt of a vision test/documentation of an eye examination, and cognitive evaluation.


Table 1. [Table not reproduced; columns report percentage completion by structured data vs. full medical record review; rows include "Gait, Balance, & Strength Examination for Patients with Fear of Falling Due to Balance or Walking Problems."]
In the full sample, the SVN-based scoring algorithm was least sensitive for gait, balance, and strength examination (both for people who fell and those with fear of falling, with sensitivities of 45% and 51%, respectively) and most sensitive for orthostatic vital signs (sensitivity of 83%) (Table 2). In the sample restricted to patients with an SVN in the medical record, sensitivity remained lowest for gait, balance, and strength examination (sensitivities of 52% and 69% for those who fell and had fear of falling, respectively), but was upwards of 90 percent for vision test/eye examination, cognitive evaluation, and performance of a fall history.

Kappa statistics varied widely in both the full sample and the subsample with an SVN in the medical record (Table 2). For the full sample, agreement between SVN-based scoring and scoring with full medical record review was "good" (kappa between 0.61 and 0.80) for vision test/eye examination and home hazard evaluation for fall and "very good" (kappa between 0.81 and 1.00) for orthostatic vital signs. For the subsample with an SVN in the record, agreement was good for vision test/eye examination, home hazard evaluation, orthostatic vital signs, and cognitive evaluation.


Table 2. [Table not reproduced; columns include "% Correctly Classified"; rows include "Gait, Balance, & Strength Examination for Patients with Fear of Falling Due to Balance or Walking Problems." *Kappa not computable.]

In this sample of patients participating in an intervention to improve care for falls, we discern several findings relevant to the field of quality measurement and improvement. First, in the two paper-based sites, a minority of clinicians actually documented care by using the SVN. This finding could imply a lack of consistency in placing the SVN on the chart or may relate to the clinician choosing not to use the SVN as a documentation tool, especially as clinicians learned SVN content and incorporated it into their own dictations or progress notes. In addition, one of the two sites had planned for patients to return for a dedicated falls/fear of falling visit if they screened positive; however, this dedicated follow-up visit (at which the SVN would have been used) did not always occur. At the site with a fully functional EHR, although the SVN structure was present in all cases in the records we reviewed, this did not necessarily indicate that the SVN was filled out by the physician.

Mandated use of the SVN or forms to collect coded data faces considerable obstacles in light of the complex workflow that clinicians face when dealing with older patients who may have multiple diseases and complaints. Previous work has shown that an EHR designed for structured data entry may deter use by individuals without extensive computer experience, because of difficulties in adapting to new workflows [20]. In the case of the SVNs used in the current study, a modular approach that allows for specific history, examination, and treatment elements to be selectively incorporated into a more general electronic progress note may be a more realistic approach. Such an approach could first be prototyped and then undergo careful usability testing [21] and, ultimately, user training.

Second, key elements of care for falls and fear of falling were commonly documented outside the SVN. As a result, the SVN routinely underestimated completion of recommended care compared with viewing the entire medical record. This finding was especially true for gait and balance examination, one of the cornerstones of an evaluation for falls and mobility disorders. Of note, performing the gait and balance examination is one element that none of the three sites delegated to support staff, in contrast to taking orthostatic vital signs, completing vision testing/eye examination, and asking for more details about the fall, which were sometimes delegated. This finding confirms work by other investigators who have shown that clinicians will sometimes use free-text entry even when a structured data alternative is available [22].

Dresser and colleagues compared automated data extraction approaches from an EHR with full chart review for selected QIs in a managed care setting [10]. They found that automated data extraction underestimated quality of care for mammography, pediatric immunizations, and prenatal care, but slightly overestimated quality of care for cholesterol checks and cervical cancer screening. Underestimations tended to occur mainly as a result of capitation and bundled payment decreasing the amount of coded documentation for specific clinical services. The reasons for overestimation were not explored. Baker and colleagues found that rates of appropriate medication use for heart failure measured by automated review closely tracked the results of a hybrid automated/manual chart review, except for warfarin use in patients with heart failure and atrial fibrillation [5]. In this latter case, the automated method failed to pick up appropriate reasons for not using warfarin in a patient who was otherwise eligible. Although our study and these previous studies highlight different specific reasons for automated approaches underestimating recommended care, the common theme is that documenting information outside the coded fields captured by the automated algorithm will result in an underestimation of quality of care.

Third, restricting the comparison of SVN-based scoring and full medical record review to those patients who had an SVN in the medical record resulted in an increase in sensitivity, when compared with the full sample. Sensitivity increases because recommended care can only be recorded as given in the SVN-based scoring strategy when an SVN is present in the record. However, sensitivity may also increase because of selection bias associated with the decision to use the SVN, with patients who are more in need of care (and whose visits to address their condition were not deferred) more likely to have SVN-based documentation. These measurement problems can be addressed in EHRs by mandating use of the SVN for patients with a positive screen.

This study has important limitations. First, it was conducted only for a subset of quality measures related to falls and fear of falling; results may not generalize to other scenarios. Second, participating clinicians belonged to medical groups selected for their motivation to adopt the intervention, which may have caused differentially high use of intervention materials, including the SVN, compared with a more representative sample. Third, sites differed in their work flows for implementing the intervention, with varying degrees of delegation to support staff and different documentation systems (paper vs electronic). Thus, results are a composite of heterogeneous work processes. We chose not to present between-site comparisons because of small sample sizes.

We conclude that structured data should be used cautiously for measurement of quality of care and will likely underestimate actual care delivery. Furthermore, the degree to which structured data accurately represent the underlying care delivered varies in ways that are very specific to the type of care process being completed. Quality measurement systems using automated data will thus continue to benefit from validation against a full medical record review.

Study design: D. A. Ganz, S. Almeida, C. P. Roth, D. B. Reuben, N. S. Wenger.
Data collection: S. Almeida (overseen by D. A. Ganz).
Developed analysis plan: S. Almeida, C. P. Roth, D. A. Ganz.
Drafted manuscript: D. A. Ganz, S. Almeida.
Critical revision of manuscript: D. A. Ganz, S. Almeida, C. P. Roth, D. B. Reuben, N. S. Wenger.
Read and approved final manuscript: D. A. Ganz, S. Almeida, C. P. Roth, D. B. Reuben, N. S. Wenger.
Financial Disclosures: The authors have declared that no competing interests exist.
Funding/Support: D. A. Ganz was supported by the Department of Veterans Affairs (VA), Veterans Health Administration, VA Health Services Research and Development (HSR&D) Service through the VA Greater Los Angeles HSR&D Center of Excellence (project VA CD2 08–012–1) and the VA/Robert Wood Johnson Foundation Physician Faculty Scholars Program. S. Almeida was supported by a University of California, Los Angeles (UCLA) Department of Medicine Chief's Summer Fellowship. The ACOVEprime project was supported by Atlantic Philanthropies (grants 11719 and 16367).
Additional Contributions: The authors thank Robin Beckman for programming assistance, Patty Smith for administrative support, and the two anonymous peer reviewers for their comments. A previous version of this work was presented in abstract form at the Western Student Medical Research Forum (Carmel, California) on January 29, 2010.
Institutional Review: This study conforms to the ethical principles in the Helsinki Declaration and was approved by the UCLA Institutional Review Board (UCLA G06–05–089–03B).
Participant Follow-Up: The authors do not plan to notify subjects of the publication of this study because contact information is unavailable.
References
1. Schneider EC, Riehl V, Courte-Wienecke S, Eddy DM, Sennett C; National Committee for Quality Assurance. Enhancing performance measurement: NCQA's road map for a health information framework. JAMA. 1999;282(12):1184–90. [PMID:10501126]
http://dx.doi.org/10.1001/jama.282.12.1184
2. Roth CP, Lim YW, Pevnick JM, Asch SM, McGlynn EA. The challenge of measuring quality of care from the electronic health record. Am J Med Qual. 2009;24(5):385–94. [PMID:19482968]
http://dx.doi.org/10.1177/1062860609336627
3. Weiner M, Stump TE, Callahan CM, Lewis JN, McDonald CJ. Pursuing integration of performance measures into electronic medical records: beta-adrenergic receptor antagonist medications. Qual Saf Health Care. 2005;14(2):99–106. [PMID:15805454]
http://dx.doi.org/10.1136/qshc.2004.011049
4. Persell SD, Wright JM, Thompson JA, Kmetik KS, Baker DW. Assessing the validity of national quality measures for coronary artery disease using an electronic health record. Arch Intern Med. 2006;166(20):2272–77. [PMID:17101947]
http://dx.doi.org/10.1001/archinte.166.20.2272
5. Baker DW, Persell SD, Thompson JA, Soman NS, Burgner KM, Liss D, Kmetik KS. Automated review of electronic health records to assess quality of care for outpatients with heart failure. Ann Intern Med. 2007;146(4):270–77. [PMID:17310051]
6. Goulet JL, Erdos J, Kancir S, Levin FL, Wright SM, Daniels SM, Nilan L, Justice AC. Measuring performance directly using the Veterans Health Administration electronic medical record: a comparison with external peer review. Med Care. 2007;45(1):73–79. [PMID:17279023]
http://dx.doi.org/10.1097/01.mlr.0000244510.09001.e5
7. Borzecki AM, Wong AT, Hickey EC, Ash AS, Berlowitz DR. Can we use automated data to assess quality of hypertension care? Am J Manag Care. 2004;10(7 Pt 2):473–79. [PMID:15298233]
8. Kerr EA, Smith DM, Hogan MM, Krein SL, Pogach L, Hofer TP, Hayward RA. Comparing clinical automated, medical record, and hybrid data sources for diabetes quality measures. Jt Comm J Qual Improv. 2002;28(10):555–65. [PMID:12369158]
9. Persell SD, Kho AN, Thompson JA, Baker DW. Improving hypertension quality measurement using electronic health records. Med Care. 2009;47(4):388–94. [PMID:19330887]
http://dx.doi.org/10.1097/MLR.0b013e31818b070c
10. Dresser MV, Feingold L, Rosenkranz SL, Coltin KL. Clinical quality measurement. Comparing chart review and automated methodologies. Med Care. 1997;35(6):539–52. [PMID:9191700]
http://dx.doi.org/10.1097/00005650-199706000-00001
11. Linder JA, Kaleba EO, Kmetik KS. Using electronic health records to measure physician performance for acute conditions in primary care: empirical evaluation of the community-acquired pneumonia clinical quality measure set. Med Care. 2009;47(2):208–16. [PMID:19169122]
http://dx.doi.org/10.1097/MLR.0b013e318189375f
12. Wenger NS, Roth CP, Shekelle P; ACOVE Investigators. Introduction to the Assessing Care of Vulnerable Elders-3 quality indicator measurement set. J Am Geriatr Soc. 2007;55(Suppl 2):S247–52. [PMID:17910544]
http://dx.doi.org/10.1111/j.1532-5415.2007.01328.x
13. Wenger NS, Solomon DH, Roth CP, MacLean CH, Saliba D, Kamberg CJ, Rubenstein LZ, Young RT, Sloss EM, Louie R, Adams J, Chang JT, Venus PJ, Schnelle JF, Shekelle PG. The quality of medical care provided to vulnerable community-dwelling older patients. Ann Intern Med. 2003;139(9):740–47. [PMID:14597458]
14. Wenger NS, Roth CP, Hall WJ, Ganz DA, Snow V, Byrkit J, Dzielak E, Gullen DJ, Loepfe TR, Sahler C, Snooks Q, Beckman R, Adams J, Rosen M, Reuben DB. Practice redesign to improve care for falls and urinary incontinence: primary care intervention for older patients. Arch Intern Med. 2010;170(19):1765–72. [PMID:20975026]
http://dx.doi.org/10.1001/archinternmed.2010.387
15. Mayfield D, McLeod G, Hall P. The CAGE questionnaire: validation of a new alcoholism screening instrument. Am J Psychiatry. 1974;131(10):1121–23. [PMID:4416585]
16. Reuben DB. Geriatrics at your fingertips, 2006. 8th ed. New York (NY): American Geriatrics Society; 2006.
17. Chang JT, Ganz DA. Quality indicators for falls and mobility problems in vulnerable elders. J Am Geriatr Soc. 2007;55(Suppl 2):S327–34. [PMID:17910554]
http://dx.doi.org/10.1111/j.1532-5415.2007.01339.x
18. Ransohoff DF, Feinstein AR. Problems of spectrum and bias in evaluating the efficacy of diagnostic tests. N Engl J Med. 1978;299(17):926–30. [PMID:692598]
http://dx.doi.org/10.1056/NEJM197810262991705
19. Brennan P, Silman A. Statistical methods for assessing observer variability in clinical measures. BMJ. 1992;304(6840):1491–94. [PMID:1611375]
http://dx.doi.org/10.1136/bmj.304.6840.1491
20. Patel VL, Kushniruk AW, Yang S, Yale JF. Impact of a computer-based patient record system on data collection, knowledge organization, and reasoning. J Am Med Inform Assoc. 2000;7(6):569–85. [PMID:11062231]
http://dx.doi.org/10.1136/jamia.2000.0070569
21. Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform. 2004;37(1):56–76. [PMID:15016386]
http://dx.doi.org/10.1016/j.jbi.2004.01.003
22. Roukema J, Los RK, Bleeker SE, van Ginneken AM, van der Lei J, Moll HA. Paper versus computer: feasibility of an electronic medical record in general pediatrics. Pediatrics. 2006;117(1):15–21. [PMID:16396855]
http://dx.doi.org/10.1542/peds.2004-2741
This article and any supplementary material should be cited as follows:
Ganz DA, Almeida S, Roth CP, Reuben DB, Wenger NS. Can structured data fields accurately measure quality of care? The example of falls. J Rehabil Res Dev. 2012; 49(9):1411–20.
http://dx.doi.org/10.1682/JRRD.2011.09.0184