As I write this, the 2002 Winter Olympic Games are underway. The events surrounding the judging of the ice skating competitions will forever remind us of the need for objectivity when assessing human performance and of how difficult it can be to be objective.

     Sometimes objective measurement is easy: there is little to dispute in a timed event like bobsledding. It is harder when an artistic performance evokes an emotional or intuitive response, or when even a deeply honest referee proves susceptible to the pressures that create the "home court advantage." The analogy to rehabilitation medicine is obvious. Assessing the effects of rehabilitation therapies can be similarly challenging as we strive to improve human performance in a different arena and to ascertain the impact of the treatments we provide. The clinician is also the advocate; the clinical researcher has a preconceived hypothesis.

     In rehabilitation, as in all clinical disciplines, decisions should be evidence-based. Clinical decisions should rest on a foundation of data gleaned from the gold standard of trials: the randomized, controlled, double-blind study, with predetermined outcome measures and a sample size calculated on the basis of a power analysis.

[Photo: Mindy L. Aisen, MD]

Mindy L. Aisen, MD
Director, Rehabilitation R&D
Department of Veterans Affairs
Washington, DC

      As rehabilitation researchers, we must constantly recommit ourselves to objective, critical review. VA Rehabilitation Research and Development has striven to reflect this in the makeup of, and instructions to, our study sections, and the review of manuscripts for the Journal of Rehabilitation Research and Development (JRRD) has followed suit. We are committed to supporting and disseminating significant rehabilitation discoveries, and to do so we must remain ever vigilant in maintaining the highest standards of objective, rigorous merit review.

      Obvious, one might think, but it is not as simple as it sounds. Often we receive a paper in a highly relevant area that reports results verifying what we intuitively believe from our practice, suggests a new approach to an old problem, or covers a subject underserved in the literature. But if the methods by which the study was conducted are substandard, whom are we serving by publishing the work?

      For instance, a study may use sound physiologic measures and data analysis techniques to conclude that a well-designed exercise program can benefit a manual wheelchair user. However, if closer scrutiny reveals that the study lacked a control group or was not sufficiently powered to detect a statistically significant effect, this conclusion, while intuitively appealing, cannot be justified.
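      To make the arithmetic concrete, consider a prospective power analysis of the kind mentioned above. The sketch below is illustrative only: it assumes a two-arm trial analyzed with an independent-samples t-test and uses the Python statsmodels library; the effect size, significance level, and power are assumed values, not recommendations.

    # Illustrative power analysis for a hypothetical two-arm trial,
    # assuming group means will be compared with an independent-samples
    # t-test. All numeric inputs below are assumptions for this example.
    from statsmodels.stats.power import TTestIndPower

    effect_size = 0.5   # smallest effect worth detecting, as Cohen's d
    alpha = 0.05        # two-sided significance level
    power = 0.80        # desired probability of detecting the effect

    # Solve for the number of subjects required in each group.
    n_per_group = TTestIndPower().solve_power(
        effect_size=effect_size, alpha=alpha, power=power,
        alternative='two-sided')
    print(f"Subjects required per group: {n_per_group:.0f}")  # about 64

Under these assumptions, roughly 64 subjects per group are needed. A study that enrolls, say, ten subjects per group cannot distinguish an ineffective treatment from an inadequately sized sample, which is precisely why findings from such a study cannot be taken at face value.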

      Similarly, new engineering designs are often "tested" on a few subjects over a short period, with feedback collected and reported as "data." Consumer feedback is critical to design, and engineers who interact with the clinical environment to learn about the impact of their designs there are to be encouraged. However, if conclusions are to be disseminated, they should derive from longer-term evaluation using recognized outcome measures. Such studies have limitations because of the small number of available subjects, but those limitations should be acknowledged in any conclusions submitted for publication.

      Yet another common dilemma is the poorly conceived study in a subject area about which little literature exists. For instance, sports adapted for athletes with disabilities are supported by many organizations, yet the sports science literature says remarkably little about how the serious athletes this movement has produced can optimize their performance. For that matter, little scientific evidence exists to indicate whether participants in these sports are helping or harming themselves. Anecdotal evidence suggests that the emotional boost is powerful, but what are the long-term consequences? We don't know.

      So why do we publish these papers? We publish them because we are rooting for the home team. Our intuition, which is, after all, shaped by our experience, suggests that a new engineering design will improve quality of life. We want to encourage the athlete. We want to validate our observation that certain therapeutic protocols will prevent injury in wheelchair users. We are too compassionate to be objective. And like the referee, we throw the score, just a bit.

      Ultimately, this serves neither the field nor the consumer. As a profession, we need to hold ourselves to the highest possible standards and accept only evidence that can stand up to the closest scrutiny. No doubt one could point to several published papers in past issues (and possibly even in future issues) that do not meet this gold standard of evidence. However, as we begin this new year and new volume of JRRD, our intent is to set a new standard for rehabilitation research. As we review manuscripts, we are looking for "the best of the best," seeking to disseminate reliable, reproducible findings for the benefit of all.

      As our readers, you can help. I encourage you to scrutinize our published papers and to submit letters to the editor. Only as these discussions are aired can we move closer to true objectivity and to optimal care for persons with disabilities.

Mindy L. Aisen, MD, Editor-in-Chief