Journal of Rehabilitation Research and Development
Vol. 39 No. 6, November/December 2002, Supplement
Pages 39-53


A journey through early augmentative communication and computer access
Gregg C. Vanderheiden, PhD
Professor, Industrial Engineering and Biomedical Engineering; Director, Trace R&D Center, University of Wisconsin–Madison
Introduction

When asked to write an account of the early days of augmentative communication and computer access, I first set out to create a documented history of all the early work and workers. I soon found that the task was beyond the time and resources I had available. For much of the very early work that I could remember, I soon realized I was unable to accurately date or document it, and the principals have died or are dispersed. After a couple of attempts I chose another approach, a personal narrative, which allowed me to both capture some important behind-the-scenes observations and to acknowledge some of those who were critical to this pioneer’s entry, survival, and credited accomplishments.

This narrative consists of three parts, and each part tells a different dimension of the story. The first deals with my entry into the field and early support from others. Part 2 discusses the three different fields that converged to form the field of Augmentative Communication. Part 3 touches on some of the early aspects of Computer Access, particularly those that grew out of Augmentative Communication.

Although the narrative does capture some of the early history in these areas, its primary purpose is to show the role of individual action and initiative. While many advances came from organized programs over an extended period of time, much of what has happened is the result of decisions by individuals who caused things to move forward or allowed others to do so. Often progress has been a chain of such individual events. This narrative seeks to highlight the importance of those individual decisions and actions, often by people outside of this field. Without them, the field would not have progressed as rapidly, and in some cases it would perhaps have developed quite differently.

Part 1: Tricked

Some people enter this field through an early decision to dedicate and serve. I was tricked. A fellow student, David Lamers, had heard about a young boy with athetoid cerebral palsy at a local school who could not speak, write, or type. Searching for ideas, Dave stopped by the Behavioral Cybernetics Lab at the University of Wisconsin (where I worked as a student technician) looking for another researcher. When I suggested some approaches to solving the boy's communication problems, he talked me into walking out of my job in the middle of the day and going to the school, “just quickly, so I could show him what I meant, because he didn’t understand.”

The rest of the story resembles the story of many colleagues who came to this field by accident rather than design. What I found was not a “handicapped boy” that I might help, but an intriguing, clever (and not always polite) young lad who had severe athetoid cerebral palsy. He had been home schooled until the previous year. His only means of communication was to slowly, and with great effort, point to letters of the alphabet and a few words that had been wood-burned into a twelve- by sixteen-inch piece of plywood. The boy, whose name was Lydell, was able to communicate in this way, but only if someone could provide undivided attention for an extended period of time. As a result, he was unable to participate in class or do any real amount of homework or, importantly, any independent work. Because I was struck by the boy's enthusiasm and also by the challenge of finding an effective interface approach for him, I quit my job and, with David Lamers, formed a volunteer group of undergraduate students to try to develop a solution for him. (Two of the ideas I had suggested previously would not work, and the third, scanning, was even slower for him than his already slow pointing because of his severe athetoid cerebral palsy.) Andrew Volk (now with Intel), Jim Pazaris (now with Rockwell International), Candace Hill-Vegter (now with St. Paul Children's Hospital), Bob Norton (now at Physical Science Lab), and a half dozen others (names unfortunately lost to time and the lack of any records) joined with us. David Lamers, who had started it all, had a family of three to feed and so had to leave six months later when he graduated, as did a few others. But the student group continued, grew, came to the attention of still more individuals similar to Lydell, grew some more, and eventually developed into what is now the Trace R&D Center at the University of Wisconsin–Madison.

Critical Contributors and Unsung Heroes

There were several individuals, many of whom were not directly involved in technology and disability, without whom there would not be a Trace Center. Without their support and contributions, I would not have been able to get traction or stay in the field long enough to make a contribution beyond the typical undergraduate project level.

The first unsung hero was Professor Richard Marleau, an amazing electrical engineering professor who provided the early support and encouragement for our group. Although this had nothing to do with his area (power systems and control theory), he took all of the analog computers out of his analog computing laboratory and arranged them in the back of a storeroom. He then taught his classes out of that storeroom in order to allow our group of undergraduate students to use his (previous) teaching laboratory as a place to meet and work. Having a stable place to meet and work (along with a small budget from the department for surplus parts) gave our interdisciplinary group the location, stability, and permanence needed to carry on for the extended period it took to address this difficult problem. There is no question that Trace would not be here today, that we would not have coalesced or been able to hold the group together, without this selfless action.

The second critical contributor was Professor Daniel Geisler, who gave up a major portion of several years of his professional career to help mentor and provide the necessary university supervision for a ragtag group of students from across the campus. The group included students from occupational therapy, physical therapy, rehabilitation counseling, communicative disorders, special education, psychology, journalism, math, and applied physics, as well as electrical engineering and mechanical engineering. Other key players were Professors Leo Jedynak and David Yoder, who took turns with the baton after Dr. Geisler; and Professors Rideout (department chair), Marshall (Dean, College of Engineering), and Young (Chancellor), all of whom “flexed” university policy extensively at one point or another to allow this student group to continue and extend its work over the years. This included providing support from their own personal funds, as well as locating a largely unknown provision in the state regulations that allowed them to permit a student (the author) to act as principal investigator on grants at a time when this was reserved for tenured faculty at the University of Wisconsin (UW). I also recently discovered that a fellow author in this publication (Dr. Dudley Childress from Northwestern University) played an important role as well. Early in the process, the UW administration (via Dr. Geisler) asked Dr. Childress to visit the student group and give the administration a reading as to whether the efforts were novel, useful, and safe to let continue, since there were no faculty on the UW campus at that time with sufficient background to evaluate the student group's work.

Another key player was Dr. Max Ward, who ran the student-originated studies program at the National Science Foundation (NSF). That program provided the critical early support to our group in its second year. Dr. Ward's personal involvement, support, and encouragement of the group were responsible for gaining the attention of the head of the NSF Directorate for Education, who allowed our student group to submit a regular peer-reviewed proposal to NSF at a time when the NSF did not have a disability-related research program.

I take the time and space to relate this much detail because it is important to realize how much is owed, not just to early mentors in our field, but to many outside the field as well. Such people stepped up and provided the environment and the support that allowed many of us to succeed. It is also important to note that many of the opportunities that were available in the past no longer seem to be there for today's younger explorers and innovators. The student-originated studies program at NSF is gone, and many funding programs seem to be more focused on exploring and expanding existing areas than on supporting riskier adventures into new areas. It is important to preserve those programs that continue to exist and to develop new programs that give students, even very young ones, the support and opportunities to explore new areas independently and to develop into new leaders. This was but one story of initiation, but it shares many aspects with those of other early workers in other fields. Are the same opportunities available today? Are they increasing or decreasing?

Part 2: A Tale in Three Threads

In plotting the course of early augmentative communication, it is important to follow three different threads of development. The first is the development of early electromechanical communication and writing systems. The second is research on typical child language development (in children without disabilities), and the third is communication and language boards. These three threads developed largely independently until the 1960s and 1970s, when they merged to form what we now know as Augmentative and Alternative Communication (AAC). Computer access then evolved out of the interface portion of this work, particularly the thread dealing with the human-machine interface (with its roots in the conversation and writing machines).

Communication and Writing Devices

When our student group first set out to develop a solution for Lydell, one of the early tasks the group took on was to gather information about systems that had been developed elsewhere. Led by Candace Hill, the group gathered, and then over the years maintained, a registry of all of the communication technology [1] (until it was later merged into AbleData). What we found was that most of the early interface technologies first appeared in Europe. They took the form of either environmental control systems or special systems to control a typewriter. Relays and solenoids were used to control power for appliances or to activate the keys on keyboards. Stepping relays and lights were used to create scanning and encoding selection mechanisms.

Perhaps the earliest electric communication device was the POSM (Patient Operated Selector Mechanism), a sip-and-puff typewriter controller first prototyped by Reg Maling in 1960 (one of several he eventually created) (Figure 1). Reg was a volunteer visitor at Stoke Mandeville Hospital (a hospital for people with paralysis) and noticed that patients had only a bell with which to communicate. This inspired him to develop the very first POSM for them. A Communications System for the Handicapped (Comhandi) was developed in 1964, consisting of a scanning teletypewriter controller with an illuminated display. Other devices were operated by pointing light beams at photoelectric cells, such as the Patient Initiated Lightspot Operated Typewriter, or PILOT (1967) (Figure 2), and the Lightspot Operated Typewriter, or LOT, developed in 1973. Morse code was also used for typewriter control, through voiced or sip-and-puff Morse code in the VOTEM system (1969) as well as by oral muscular control (1972). The best single source of information on many of these early systems is the 1974 book Aids for the Severely Handicapped, edited by Keith Copeland [2].

Figure 1. The Patient Operated Selector Mechanism (POSM) provided multiple ways to control a standard typewriter. Shown here is one that allowed control through sip and puff on a gooseneck mounted mouthpiece.
 
Figure 2. The Patient Initiated Light Operated Telecontrol (PILOT) allowed people to control typewriters by simply pointing a beam of light.
 

These early systems gave way to transistorized systems, including systems operated through EMG signals (the General Man-Machine Interface, or GMMI) (Figure 3) and even a pocket display operated via a hand-held keyboard, called “The Talking Brooch” (Newell, 1973) (Figure 4). The Talking Brooch and the Lightwriter, a communication device created by Toby Churchill, were probably the first portable communication aids, and the Talking Brooch is almost certainly the first wearable communication aid. Because it was designed for individuals with better motor control, the Talking Brooch was also the first to allow face-to-face communication with near eye contact. Although portable devices like these began appearing in the 1970s, the earlier systems were all stationary and were essentially special typewriting systems.

Figure 3. The General Man-Machine Interface (GMMI) allowed users to control a typewriter and other devices using only the electrical signals given off from muscles without requiring actual limb movement.
 

Figure 4. "The Talking Brooch," a wearable communication aid, was designed for individuals who could not talk but could type on a keyboard held in the hand.
 

Interface systems crossed the Atlantic via Canada, where they began appearing in the late 1960s and early 1970s. The Comhandi was an early stationary communication aid developed in Canada. In the United States, the development of communication systems began in 1971 with two students who, completely independently (one in Wisconsin and one in Massachusetts), were drawn to the field by a particular youngster for whom they were trying to develop a communication and writing system. One effort was led by Richard Foulds at Tufts University, and the other was our work at the University of Wisconsin. The Tufts efforts led to the development of the Tufts Interactive Communicator (TIC) (Figure 5), a scanning communication aid, and later the ANTIC, the first communication aid to change its scanning order in anticipation of the next most probable letter to be typed. The work at the University of Wisconsin led to the development of the Auto Monitoring Communication Board (AutoCom) (Figure 6), a direct selection communication system that was picked up by Telesensory Systems Inc. and the Prentke Romich Company, as well as to the Trine and a number of other commercial communication, writing, and computer access aids. The AutoCom was the first portable communication (and later, computer access) aid that gave users the freedom to change their own vocabulary and rearrange the letters, words, and phrases on their aid to meet their needs.
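To make the ANTIC's innovation concrete, the short sketch below illustrates frequency-based predictive scanning. It is an illustration of the general technique only, not the ANTIC's actual algorithm; the class name, the digram model, and the sample corpus are all assumptions of mine.

    from collections import defaultdict

    # In linear scanning, a highlight steps through the letters and the
    # user hits a single switch when the wanted letter is lit, so the
    # cost of a letter is its position in the scan order. The ANTIC's
    # innovation was reordering the scan so that the letters most likely
    # to come next are offered first.

    class PredictiveScanner:
        def __init__(self, corpus):
            # Digram counts built from a sample corpus (assumed available).
            self.counts = defaultdict(lambda: defaultdict(int))
            for a, b in zip(corpus, corpus[1:]):
                self.counts[a][b] += 1

        def scan_order(self, last_char):
            """Scan the letters most likely to follow last_char first."""
            alphabet = "abcdefghijklmnopqrstuvwxyz "
            freq = self.counts[last_char]
            return sorted(alphabet, key=lambda c: -freq[c])

    scanner = PredictiveScanner("the quick brown fox jumps over the lazy dog")
    # After typing "t", the scan presents "h" early, so selecting it takes
    # far fewer scan steps than waiting through a fixed a-to-z order.
    print(scanner.scan_order("t")[:5])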

Figure 5. The Tufts Interactive Communicator (TIC) used scanning to provide access to displays and printers, and later introduced dynamically changing the scanning letter arrangement based on the most probable next characters to be selected.
 
Figure 6. The Auto Monitoring Communication Board (AutoCom) was a user programmable communication/control aid designed to allow users with severe motor impairments to directly select words and phrases to communicate and write.
 

Other notable communication aids that came out around this time included the Tracy from Trace Inc. (no relation to the Trace Center), the Portaprinter, and the Canon Communicator. The Tracy was the first device to use a binary tree display to allow individuals to send Morse code without having to learn Morse code. The Portaprinter was the first portable printing communication aid; it was the size of a thick briefcase, with a built-in scanning display and strip printer (Figure 7). The Canon Communicator from Canon Inc. was notable because it was the only communication aid to come from a major corporation and was by far the smallest, with a palm-sized keyboard unit and a strip printer all in a 3.3 × 5.2 × 1.2-inch package (Figure 8). As the field evolved, an increasing number of electronic aids became available with expanding capabilities.
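The Tracy's binary tree idea can be illustrated with a short sketch. This is my own reconstruction of the general technique, not the Tracy's actual design; the sample code table and display function are assumptions. At each step the display shows which letters lie down the dot branch and the dash branch, so choosing branches produces correct Morse code without the user ever memorizing it.

    # A small sample of the international Morse code table.
    MORSE = {"e": ".", "t": "-", "i": "..", "a": ".-", "n": "-.",
             "m": "--", "s": "...", "d": "-..", "b": "-...", "c": "-.-."}

    def branch_letters(prefix):
        """What the display shows: the letters reachable down this branch."""
        return sorted(ch for ch, code in MORSE.items() if code.startswith(prefix))

    # To send "n", the user never recalls "-."; they simply watch the
    # display and pick whichever branch still contains "n".
    prefix = ""
    for choice in ("-", "."):
        prefix += choice
        print(prefix, "->", branch_letters(prefix))
    # prefix is now "-.", the correct Morse for "n", produced by sight alone.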

Figure 7. The Portaprinter was a portable printing communication aid built into a thick, briefcase-like box with a scanning display and strip printer.
Figure 8. The Canon Communicator was the world's smallest printing communication aid and was particularly useful to people who were ambulatory.

A number of researchers experimented with speech synthesis for individuals with disabilities. Initial efforts involved stationary computers based on research from Sweden, MIT, and others. The first use of a synthetic voice by someone who could not speak to order a pizza (without prior warning to the restaurant) probably came out of the Artificial Language Laboratory at Michigan State University, run by John Eulenberg. In the mid-1970s, our group at the Trace Center created a “portable” voice synthesizer by taking a commercial DECtalk synthesizer and, working with Digital Equipment Corporation (DEC), modifying the motherboard circuitry so that it could turn on and off instantly without running through a full power-up routine. Powered with gel cell batteries, the unit weighed 20 pounds and mounted to the back of a wheelchair. This first “portable” speech synthesizer later gave way to smaller and lighter systems based on the Votrax synthesizer from Federal Screw Works in Michigan. The first commercial mass-marketed communication aid with speech synthesis was probably the Handivoice from Federal Screw Works and Phonic Ear (1978) (Figure 9). For additional information on early augmentative communication aids, see “Providing a Child with a Means to Indicate” [3], the Trace Resourcebook series [1,4], and Electronic Devices for Rehabilitation [5].

Figure 9. The Handivoice, a portable communication aid with voice output, was available in both a direct selection version (shown) and a version that used number codes for words.

Development of Language Intervention Programs

The second important thread in the development of the current field of augmentative communication was work in the area of language intervention programs in general. This research and development was carried out independently of augmentative communication but provided an important underlying set of principles and concepts. It included research on normal language acquisition as well as work on the development of communication systems for primates. Although there is some debate as to the direct applicability to humans of some of the techniques developed for use with primates, their influence on, and contribution to, the understanding of language can be clearly seen. (Many of the symbol systems developed for primate research were not designed to be easy to learn or to optimize communication. Often the primary objective was to prove that the primates could acquire language. As a result, the symbol systems were designed specifically to be difficult to learn, with no natural meaning or iconicity, so that abstract language acquisition could be demonstrated in primates.) A complete treatment of this area is not possible here. However, a good overview of the work in language development as it related to and affected the development of augmentative communication can be found in the two books by Richard L. Schiefelbusch (Nonspeech language and communication: Analysis and intervention [6] and Language intervention from ape to child [7]).

Communication Boards, Symbol Systems, and Acceleration Techniques

Independent of the electromechanical developments and the language development activities that were occurring in parallel was a series of efforts by people who were trying to provide communication to children and adults who were not able to use speech to communicate. In some cases they were not able to communicate because of a physical inability to control the speech musculature. In other cases, the work was aimed at providing communication to individuals who were unable to speak due to severe mental retardation that could affect both motor control and language. This early work was key to allowing our group, and most programs and companies working on augmentative communication aids, to develop aids that were practical and effective as daily communication aids for people with severe speech, language, and motor disabilities.

The earliest known generally available communication board was the F. Hall Roe communication board, developed for and with F. Hall Roe, who had cerebral palsy. The board became famous because the Ghora Kahn Grotto (a benevolent men's group in Minneapolis) replicated and manufactured copies of it (Figure 10). The boards were printed on Masonite and had notches that allowed them to be mounted between the arms of a wheelchair. They included both letters and commonly used words and were used both by individuals with cerebral palsy and by others in hospitals who were temporarily unable to speak. These communication boards predated much of the other work in this area; the first boards, it is believed, were distributed in the 1920s.

Figure 10. F. Hall Roe's communication board consisted of letters and words printed on Masonite; it was the first widely available communication aid.

Most of what we know today about the use of communication boards for individuals with a variety of language skill levels grew out of two independent and parallel efforts carried out between 1960 and 1973. One effort was led by Dr. Eugene McDonald and Adeline Schultz at the Home for the Merciful Savior for Crippled Children in Philadelphia. The second effort was carried out at the same time by Beverly Vicker, a speech and language pathologist at the University of Iowa State Hospital-School. Both programs documented efforts to create communication boards for a wide variety of individuals. These included communication boards with pictures only, pictures and words, letters of the alphabet only, and combinations. Communication boards ranged from a few symbols to hundreds of words (some as many as 800 words). People used fingers, head sticks, pointers, and a variety of other mechanisms to communicate. Both groups recognized seating and positioning as critical to effective communication aid use. The McDonald/Schultz work first appeared in the Journal of Speech and Hearing Disorders in 1973 [8]. Although Beverly Vicker was discouraged from formally publishing her work as a speech pathologist, she was allowed to print up a report of her work and sell it through the local bookstore [9]. Her book, Nonoral communication system project 1964/1973, was an early mainstay and resource in this area, along with the publications of McDonald and Schultz.

Around this same time, beginning in 1971, a group in Toronto, Canada, led by schoolteacher Shirley McNaughton, began exploring the use of a symbol system developed by Charles Bliss to allow children to communicate more effectively. The Bliss symbols were selected to convey general concepts that could be combined to form words. For example, the symbols for “long” and “food” were used by the children to talk about spaghetti. Another early pioneer, Arlene Kraat, a speech pathologist, exemplifies how a single clinician, working alone and sharing her observations and insights with the field, can have a profound impact on both the clinical practice and the research directions of a field.

The work of these early pioneers was later combined with the work being done using sign language with individuals who had mental retardation, and this combined body of work formed the underpinnings for the communication symbol and communication rate acceleration systems that followed [10]. These included a number of different types of symbol systems, picture sets, and vocabularies designed to make it easier for individuals with lower cognitive and/or communication skills to get started. These systems also made it easier for individuals with more advanced communication skills to communicate faster. The last significantly different and widely applied communication strategy was Minspeak. This communication system uses a relatively small set of symbols and the alphabet to encode a large number of words and phrases in a way that makes the codes easy to remember and quick to call up [11]. A summary of much of this early work and a more complete listing of the early leaders in this field can be found in the 1986 book Augmentative Communication: An Introduction [12].
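As a rough illustration of this kind of encoding, the sketch below maps short sequences of multi-meaning icons to stored words and phrases. The icon names and sequences are invented for illustration; they are not Minspeak's actual vocabulary or code structure.

    # Hypothetical icon sequences (not Minspeak's actual vocabulary):
    # each icon carries several meanings, and short, memorable sequences
    # of icons recall whole stored words or phrases.
    PHRASES = {
        ("apple",): "food",                # an icon used alone
        ("apple", "sun"): "breakfast",     # food + morning -> breakfast
        ("apple", "verb"): "eat",          # food + action  -> eat
        ("sun", "verb"): "shine",
    }

    def recall(icons):
        """Look up the stored utterance for a short icon sequence."""
        return PHRASES.get(tuple(icons), "<no entry stored>")

    # Two keystrokes retrieve a whole word the user would otherwise have
    # had to spell out letter by letter.
    print(recall(["apple", "sun"]))   # -> breakfast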

Origin of “Augmentative Communication” and the International Society for Augmentative and Alternative Communication (ISAAC)

The term Augmentative Communication was coined by this author in the chapter by Harris and Vanderheiden in the Schiefelbusch book, Nonspeech language and communication. The term augmentative was chosen to help offset a perception that communication boards and other similar devices were only to be used when there was no hope for speech. Speech pathologists at the time, in fact, were sometimes fired if it was discovered that they had provided a communication board to one of their speech therapy clients, even if the individual had been in speech therapy in excess of a year and still was unable to utter a single intelligible word. There was a fear that the introduction of alternative mechanisms for communication would cause the individuals to cease trying to speak. Semiformal clinical research at the time, however, had shown the opposite to be true. Individuals who were given communication boards were not found to decrease in either their vocalizations or their intelligibility. Many individuals increased both their vocalizations and their intelligibility after having been provided with a communication board. This was attributed to their becoming addicted to communication, to their wanting to use speech whenever it worked because it was faster, and to the fact that some were able to speak more easily when they had a back-up communication system, because there was less pressure on them when they tried to speak. The term augmentative was chosen to indicate that these nonspeech communication systems were being used by the children to augment their speech. Whenever their speech worked (and with whomever it worked), they would use speech. When it did not work, they would turn to their communication systems. (Historical note: the term in the chapter started as augmentive communication. However, after a debate I had with a copy editor, who pointed out that there was no such word as augmentive, it was changed to augmentative.)

The field really became organized with the formation of the International Society for Augmentative and Alternative Communication (ISAAC), an international, interdisciplinary organization that included engineers, therapists, teachers, researchers, and users. The name of the society and of the journal Augmentative and Alternative Communication (AAC) stemmed from discussions among clinicians regarding the fact that, although these systems were augmentative for some individuals, for others they were the only means of communication (that is, “alternative” mechanisms for communication). In the end, it was decided to embrace both, the term “Augmentative and Alternative Communication” was coined, and the Society was named.

Early work in augmentative communication was chronicled by John Eulenberg at Michigan State University in his newsletter Communication Outlook, which was the primary means of communicating and disseminating information about this field, especially its nonpublication aspects, in the early years. He also sponsored the meeting at which ISAAC was formed and named. The journal AAC, spearheaded by Lyle Lloyd and then David Beukelman, documented the research base for this field. David Yoder led the early efforts to gain legitimacy for AAC in the speech and hearing profession.

Part 3: Computer Access

The history of computer adaptation to allow disability access is a major story in itself, befitting an article of its own. The parallels with regard to the role of individuals and individual contributions, however, are noteworthy, as is the circuitous route that development took.

Efforts at computer access began in earnest with the introduction of the Apple II, the first widely available, easily programmable computer. The Apple II was immediately embraced by individuals working on special communication systems and teaching mechanisms for people with disabilities. Special software was written to allow the computers to be used by people who had disabilities. This included both special education training programs and education programs that paralleled regular education programs, except that they had special interfaces that could be operated by people with physical disabilities. A number of the computers were also programmed to function as special assistive technologies. These included systems for individuals who were blind, used as writing systems and as mechanisms for creating and printing braille.

Later, as it became clear that computers were being integrated into the classroom and the workplace, a second line of effort was spawned. This line of research focused on developing mechanisms that would allow people who were unable to use the standard input and output devices of computers to use special input and output mechanisms to access the standard educational and work software. Providing access to the standard software programs was important in order to allow people with disabilities to participate in regular education and employment settings. This was first formally proposed in 1979 and published in 1980 [13] and in Computer in 1981 [14].

Special “transparent” access techniques were developed. These techniques allowed individuals to use communication aids, scanning input devices, sip-and-puff Morse code devices, and other specialized input systems to substitute for the keyboard (and later the mouse) on standard computers, in such a way that the computer and software could not tell that the user was not using the standard input devices [15]. Without doubt, the most ingenious of these was developed by a cab driver (Paul Schwejda) to impress a regular fare of his (Judy McDonald). He was successful both at impressing his fare and at advancing this new approach to computer access adaptations. Together the couple formed Adaptive Peripherals to market and further develop the Adaptive Firmware Card, which did near-magical things to provide transparent access to Apple computers, even with games and other programs that directly read the hardware keyboard registers [16]. At the same time, special screen reading software was developed, which allowed individuals who were blind to have the contents of the screen read to them without interfering with the normal operation of the computer.
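The core idea of transparent access can be sketched in a few lines. Everything here is my own stand-in, not the Adaptive Firmware Card's or SerialKeys' actual design: the byte protocol and the os_inject_key stub are hypothetical. An external aid sends ordinary characters, and a small translator turns them into keystrokes that software cannot distinguish from the real keyboard.

    def os_inject_key(char):
        # Stand-in for a keystroke-injection hook. (Hypothetical: the real
        # implementations patched keyboard hardware/BIOS interfaces so the
        # injected keys were indistinguishable from physical ones.)
        print(f"application sees keystroke: {char!r}")

    def serial_to_keystrokes(serial_bytes):
        """Translate characters arriving from an external aid into
        ordinary keystrokes for whatever software happens to be running."""
        for byte in serial_bytes:
            os_inject_key(chr(byte))

    # A scanning or sip-and-puff aid sending "hi" over the serial port:
    serial_to_keystrokes(b"hi")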

Over time, these techniques advanced. A number of the physical-access features developed by the Trace R&D Center became incorporated directly into computers and standard operating systems (the Macintosh OS since 1987, OS/2 and the UNIX X Window system since 1993, and Windows 95, 98, NT, ME, 2000, and XP) [15,17–22]. The programming efforts for these were led first by Charles Lee and later by Mark Novak. The history of this work is interesting, and the process is important to note. The original technique (StickyKeys, then called 1-Finger) was developed at Trace for DOS. Although we had many discussions with industry about incorporating it directly into DOS (and industry initiated some of them), nothing happened. (Later, an add-on was developed for DOS, but DOS never did get built-in access features, although its successors all did.)

Around this time, Microsoft was promoting MS Windows, and it instituted a program that provided free computers to any academic program that would port its work to Windows. We applied and were accepted into the program. We worked with a Windows program manager at Microsoft (Greg Lowney), who pirated time to help us as we developed StickyKeys, MouseKeys, and SerialKeys (a keyboard-emulating interface via the serial port) for Windows 2.0. The result was an add-on, however, that users could download from Microsoft along with printer drivers. It was not yet possible to get the features built into Windows itself.

Interestingly, Apple was the first company to actually build access features directly into its products. And this work was based on the work we had begun for Windows. Apple brought in Alan Brightman, PhD, to better address the disability area. Alan arranged for several of us working in disability access to present to some managers and the president of Apple (John Sculley). After the presentation, the head of product development, Randy Battat, simply said, “we should do that.” He then arranged for me to visit Apple every 3 months to review all Apple products and to generate and track recommendations on how to improve the design of Apple's products to make them more accessible. He also, from the outset, set aside 10 K of the extremely valuable system disk real estate (and more later). These two moves, as well as the completely unfettered access to all research and development at Apple, were unheard of and surprised (and impressed) Apple engineers. However, even then, since there was no legal mandate to do anything, it took a while to actually get programmers' time. Soon StickyKeys and MouseKeys (and CloseView, an enlargement program developed by Berkeley Systems) were built into every Macintosh shipped and installed as part of the default install. The features followed later on the Apple IIGS and IIe (where they had to be built directly into the hardware).

In 1988 we developed a similar package for OS/2, working with IBM. This was only partially picked up (StickyKeys), and that was only partially implemented. (A full implementation with multiple access features was completed in later years by IBM, but that version of OS/2 was never released.) In 1990 IBM approached the Trace Center and asked if we could make a package of features that would work with PC-DOS. With funding from IBM, we created AccessDOS, which had StickyKeys (for one-finger and headstick access), MouseKeys (keyboard control of the mouse pointer), RepeatKeys (adjusted and/or turned off the key repeat rate), BounceKeys (filtered out hand tremor), SlowKeys (ignored errant key presses), ToggleKeys (provided audio equivalents to the Caps, Scroll, and Num Lock keys), SerialKeys (allowed users to connect their assistive technologies to the serial port and create “fake” keystrokes and mouse movements), and ShowSounds (gave different visual indicators for output from the speaker). This was the most comprehensive package of access features yet. Although AccessDOS was available from IBM as a commercial product, it was not built into PC-DOS but rather provided as a separate (free) product that could be ordered from IBM's order center.
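Of these features, StickyKeys is the easiest to illustrate. The sketch below is a minimal reconstruction of the behavior described above, not the AccessDOS implementation: a modifier press is latched so that a one-finger or headstick typist can press Shift, release it, and then press a letter, rather than holding both keys down at once.

    MODIFIERS = {"shift", "ctrl", "alt"}

    class StickyKeys:
        """Latch modifiers so key combinations can be typed one key at a time."""
        def __init__(self):
            self.latched = set()

        def key_press(self, key):
            if key in MODIFIERS:
                self.latched.add(key)     # latch rather than require a hold
                return None               # nothing is sent yet
            chord = "+".join(sorted(self.latched) + [key])
            self.latched.clear()          # the latch releases after one keystroke
            return chord

    sk = StickyKeys()
    for k in ("shift", "a", "b"):
        chord = sk.key_press(k)
        if chord:
            print(chord)                  # -> "shift+a", then plain "b"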

In 1993 Microsoft asked for (and got) IBM's blessing and our permission to also ship AccessDOS from its site for MS-DOS. We also ported the new features to Windows 3.1, but again as an add-on. With Windows NT, access was implemented in a two-stage process. First, the core components of the access pack were built into Windows NT with the support of Paul Maritz and David McBride. However, the control panel to turn them on and operate them was not completed and therefore not shipped with the product. Thus the access features appeared and functioned like an add-on, even though much of their functionality was built into the OS. (It was not until after Windows 95 that the functionality in NT would be fully built in and available without special installation.)

In 1995 Microsoft was launching a major rewrite of its operating system and its human interface. Greg Lowney, then a one-person accessibility program at Microsoft, lobbied hard along with us to get the access features built into the operating system as standard components. Progress was on again and off again until, with increasing pressure from the disability community, the features made their way into the system. However, fear that the access features would interfere with the operation of Windows 95 by its other customers almost caused the features to again be add-ons, not shipped on the precious system disks (but OEM only). Interestingly, Apple Computer was responsible for allaying the fears. Since Microsoft writes many applications for the Macintosh, there were Macs throughout the Mac applications side of the company (though not the OS side). When it was pointed out to Microsoft decision makers that all of the Macs at Microsoft had the same features they were worried about, and that the features had been there for over 5 years without being noticed (much less interfering with the computers' operation), the concerns abated. Even then, however, it took much discussion and the presentation of a business case before the features made it into the OS as default installed features. Once in Windows 95, the features were carried across to NT and forward to all subsequent versions as built-in features installed by default.

Today these accessibility features are found in or on the Mac, Windows, OS/2, UNIX, Linux, and Solaris operating systems, as well as being implemented in some non-OS-based products. They are also found in the ANSI/HFES 200 Standard for Software User Interfaces, currently out for ballot. The path, though, was long, with many deaths and resurrections. It also included people who made bold commitments, such as Randy Battat at Apple; people like Alan Brightman and Greg Lowney, and many others, who repeatedly and constantly pushed systems and people when others would have eased up; and a couple of key programmers who defied their supervisors and, at risk of their jobs, worked on code to create critical first implementations without which the initial versions would not have occurred. Some of their names are known to a small circle. Others are not from our field and are unknown to all but a few. It is not possible to know what would have happened if these people had not stepped up. But from 20 years of trying to get these things to happen, I can say that once an opportunity is lost, an idea can be delayed 5 years or more. And even then, progress occurs only when someone else steps up.

Graphic Interfaces

As character-based systems gave way to graphical user interfaces, computers became easier to understand and use for many people who had disabilities. However, the graphical user interface tremendously complicated the problem of providing screen reader access for individuals who are blind. Initially, many individuals who were blind lost their jobs when their companies (or universities) switched to a graphical user interface for which screen readers were not available. Later, however, screen readers started becoming available for graphic systems, though for many years each release of a new version of an OS meant broken screen readers, followed by the cost of upgrading to new (compatible) screen readers when they became available. Today there is a wide variety of specialized input, screen reader, and screen magnification systems available, either as extensions to the standard operating system or as special assistive technologies. This also is a story of opportunities lost and of delays that cost people dearly for many years. Again, individual efforts played a key role in getting us to where we are now. Screen reader access is an area that I only followed and supported from the periphery, however, so it is not one I can tell in detail accurately and without serious omissions.

Rules Change

The recent passage of Section 508 of the Rehabilitation Act, which directs the government to give preference in purchasing to information technologies that are accessible, has provided tremendous market incentive for companies to create accessible information technologies and to ensure that their standard software will work with assistive technologies. This has completely changed the chemistry of the situation. In the past, it was rare that assistive access technologies existed or would work with new software and operating systems when they were released. Assistive technology vendors did not have the resources, and mainstream companies rarely allowed advance access to their products to make this possible. As a result, each new release caused people who had access to software on the job or in the classroom to lose it. The government policy of purchasing products that are accessible (including compatible with assistive technologies) when available has now begun to put compatibility with assistive technologies on a par with compatibility with printers and other show-stopping issues. This has allowed the AT vendors to get the access and support they need if they are to have any chance of keeping up with advancing information technologies. It has also made it much easier for those within companies who are advancing access issues to make a clear, sales-based business case for access. The UK Disability Discrimination Act and the European Human Rights Act are raising the awareness of European companies as well.

Conclusion

Both the field of augmentative communication and the field of computer access seem extremely mature, widespread, and well supported today as compared to their years of early development. However, the increasing pace of innovation in our society and the increasing reliance on technologies is making both of these fields extremely challenging. Researchers and practitioners strive to develop and apply new techniques to provide individuals with disabilities better ways to communicate, write, and access computers so that they can participate and be competitive with their nondisabled peers in education, employment, and daily life. The advances to date have been the result of key people committed to ideals and persisting over time. But these advances are just as dependent on others who are not from this field, people who step up, take risks, provide the support, and make the allowances necessary for those in the field to succeed. Perhaps the greatest credit should go to those unsung heroes who make this possible when it isn't their area and it doesn't benefit them a whit in the long run. Taking risks for no personal or professional gain (and sometimes professional loss) is much harder than working hard and taking risks that advance your own cause and career. I believe we need to be aware of these people, recognize and thank them, and personally be willing to do the same outside of our field or profession when called.

Acknowledgment

All work credited to me over the years was the result of a team of individuals, both staff and students at Trace and colleagues in the field. This article is dedicated to all those without whom we would not have been able to contribute as much as we have, especially those we did not even know about but who used up personal capital to forward our causes.

References
1. Vanderheiden GC, editor. Non-vocal communication resource book. Baltimore: University Park Press; 1978.
2. Copeland K, editor. Aids for the severely handicapped. London: Sector Publishing Limited; 1974. (Contact Trace R&D Center for more information.)
3. Vanderheiden G. Providing a child with a means to indicate. In: Vanderheiden G, Grilley K, editors. Non-vocal communication techniques and aids for the severely physically handicapped. Baltimore, MD: University Park Press; 1976 (and later published by Pro-Ed, Austin TX).
4. Bower R, Kaull J, Sheikh N, Vanderheiden G, editors. Trace Resourcebook: Assistive Technologies for Communication, Control & Computer Access. Madison: University of Wisconsin Trace R&D Center; 1988.
5. Webster JG, Cook AM, Tompkins WJ, Vanderheiden GC, editors. Electronic devices for rehabilitation. London: Chapman and Hall Ltd; 1985.
6. Schiefelbusch RL, editor. Nonspeech language and communication: Analysis and intervention. Baltimore: University Park Press; 1980.
7. Schiefelbusch RL, Hollis JH. Language intervention from ape to child. Baltimore: University Park Press; 1979.
8. McDonald E, Schultz A. Communication boards for cerebral palsied children. J Speech Hear Disord 1973 Feb;38:72–88.
9. Vicker B, editor. Nonoral communication system project 1964/1973. Iowa City, IA: The University of Iowa; 1974.
10. Vanderheiden GC, Lloyd LL. Communication systems and their components. In: Blackstone SW, Bruskin DM, editors. Augmentative communication: An introduction. Rockville, MD: American Speech-Language-Hearing Association; 1986. p. 186–202.
11. Baker B. Minspeak. Byte 1982 Sep; p. 186–202.
12. Blackstone SW, Bruskin DM, editors. Augmentative communication: An introduction. Rockville, MD: American Speech-Language-Hearing Association; 1986.
13. Vanderheiden G. Microcomputer aids for individuals with severe or multiple handicaps: Barriers and approaches. In: Proceedings of the IEEE Computer Society Workshop on the Application of Personal Computing to Aid the Handicapped; 1980 Apr 2–3; Johns Hopkins University; 1980.
14. Vanderheiden G. Practical applications of microcomputers to aid the handicapped. Computer, IEEE Computer Society; 1981 Jan.
15. Lee C, Vanderheiden G, Rasmussen A. One finger operation of the IBM family of personal computers. In: Proceedings of the Ninth Annual Conference on Rehabilitation Technology; 1986; Minneapolis, MN. Washington, DC: RESNA: The Association for the Advancement of Rehabilitation Technology; 1986.
16. Schwejda P, Vanderheiden G. Adaptive firmware card for the Apple II. Byte 1982 Sep;276–314.
17. Lee CC, Vanderheiden GC. Keyboard equivalent for mouse input. In: Proceedings of the Tenth Annual Conference on Rehabilitation Technology; 1987 Jun; San Jose, CA.
18. Lee CC, Vanderheiden GC. Accessibility of OS/2 for individuals with movement impairments: Strategies for the implementation of 1-finger, MouseKeys, and software keyboard emulating interfaces using device drivers and monitors. In: Proceedings of the International Conference of the Association for the Advancement of Rehabilitation Technology; 1988; Montreal, Quebec.
19. Schauer J, Rodgers BL, Kelso DP, Vanderheiden GC, Lee CC. Keyboard emulating interface (KEI) compatibility standard. Madison, WI: Trace Reprint Service, Trace R&D Center, University of Wisconsin-Madison; 1988.
20. Novak M, Schauer J, Hinkens J, Vanderheiden G. AccessDOS: Providing computer access features under DOS. Presented at the Closing the Gap Conference; 1991; Minneapolis, MN.
21. Novak M, Schauer J, Hinkens J, Vanderheiden G. Providing computer access features under DOS. Proceedings of the RESNA 14th Annual Conference; 1991. p. 163–65.
22. Novak M, Vanderheiden G. Extending the user interface for X Windows to include persons with disabilities. Proceedings of the 16th Annual RESNA Conference; 1993. p. 435–36.

