by Bruce R. Baker (January 22nd, 1943 – May 7th, 2020)
Modified in 2009 and 2017 from an original article that appeared in Current Expressions – A Newsletter for Friends of Prentke Romich Company (PRC), Spring 1988.
Minspeak™ is perhaps the most widely used software in the augmentative and alternative communication (AAC) world today. When first introduced, however, it represented a substantial departure from the traditional way of looking at symbols – for computer systems in general and for language representation in particular.
Minspeak was conceived while I, Bruce Baker, was working on a doctoral program in modern linguistics and language teaching at the Middlebury Language Center in Vermont. In casting around for a dissertation topic, I was investigating the language used to describe and to interact with people who have disabilities. At a certain point in my investigations, I conducted some interviews with people who had disabilities.
The first few individuals I talked to had difficulty communicating with me, so I tried to construct a coding system to facilitate our information exchange. This led me to an investigation of AAC systems available at the time. I found these systems to be, in my opinion, linguistically primitive. They seemed to take very little advantage of the substantial and growing field of knowledge about linguistics and computational linguistics. I determined to build a system – or to have some role in building a system – that would take advantage of these modern linguistic insights.
My personal background in linguistics was from a discipline called Classical Philology. This is the study of the Greek and Latin languages and literatures. My introduction to modern linguistic theory occurred in graduate school. It was there that I learned about the newer generative grammars and the systematic, modern approach to language categorization and structure. It was there, as well, that I first learned about computational linguistics and how computers might some day master the art of translating one language into another.
As I saw the communication aids that were built in the 1970s, it became plain to me that none of them seemed to be taking advantage of the possibilities of generating language for people with disabilities by using any of the artificial intelligence techniques that were being developed for language translation. The communication aids seemed to assume that the letter, the phoneme, or the picture with a single meaning was the only way of handling language creation.
The language translation techniques that had been developed to that point had not really fulfilled their desired goal. The problem had been that language generation was viewed as occurring simply according to rules and divorced from its real world context. A typical story about this problem is reflected in a joke: A computer mistranslates the sentence, “The spirit was willing, but the flesh was weak” into “The vodka was good, but the meat was rotten.” Although accurate machine translation remains an elusive dream even today, much knowledge has been gained about how to analyze and generate language using computer technology.
Parsers are computer programs which analyze language. There are semantic, syntactic, and even pragmatic parsers. These programs can divide language into its constituent elements and factors. Is there an implication here that one can generate language with a computer program simply by supplying the constituent factors necessary? This indeed has been attempted with some success.
These attempts, however, never tried to generate language with a reduced number of constituent factors. In fact, in most programs the assemblage of constituent factors was greater in number than the letters it would have taken to spell the same message.
My dream was to reduce the number of constituent factors necessary by the use of artificial intelligence. To do this, I knew it was necessary to represent language on the keyboard of the computer in some other way than by the simple use of letters. Thus I turned to the use of small pictures or icons. This was the birth of Minspeak™. It occurred in the summer of 1980.
Multi-Meaning Pictures on Computer Systems
I thought of pictures because of work I had done on the hieroglyphic writing systems of ancient peoples. One picture, while not literally worth a thousand words, can represent several different ideas rather easily. Hieroglyphic systems often represent a group of ideas by a single picture. Which idea is represented by a picture is determined by the context or the sequence in which the picture is used. For instance, the Maya Indians of Central America used a picture of a shark to represent not only “shark” but the ocean as well. Because these people regarded the ocean as green, the shark glyph was used to represent both the color green and the stone jade.
Hieroglyphic systems were basically shorthands for oral cultures. They are very economical and, to the people living in the culture, relatively simple. I reasoned that the application of such multi-meaning pictures on computer systems would make possible the representation of many different constituent elements in language without recourse to hundreds and hundreds of symbols. I further reasoned that computer systems could use inputs from these multi-meaning pictures in a variety of ways to produce voice output for people with disabilities.
In the preliminary work, I featured the use of about 40 icons. The first icon was the picture of an ear. I took this to represent phatic or feedback utterances. Other icons were chosen to represent different pragmatic aspects of speech. Through combining these icons in sequences with each other, I began designing the front end of a computer system that I hoped would generate language with great efficiency.
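The core idea of sequenced multi-meaning icons can be sketched in a few lines of code. This is a minimal illustration only, not Minspeak's actual implementation; the icon names and vocabulary below are hypothetical, chosen to show how one icon carries several meanings and the sequence it appears in selects which meaning is spoken.

```python
# A minimal sketch of sequenced multi-meaning icons (illustrative only).
# Each icon has several potential meanings; the sequence disambiguates.
# Icon names and word mappings here are hypothetical examples.
ICON_SEQUENCES = {
    ("APPLE",): "eat",            # APPLE alone: the activity of eating
    ("APPLE", "NOUN"): "food",    # APPLE + part-of-speech icon: a noun
    ("APPLE", "COLOR"): "red",    # APPLE + category icon: its color
    ("SUN", "NOUN"): "day",
    ("SUN", "COLOR"): "yellow",
}

def speak(*icons):
    """Return the word selected by an icon sequence, if any."""
    return ICON_SEQUENCES.get(tuple(icons), "<no entry>")

print(speak("APPLE", "COLOR"))  # -> red
print(speak("SUN", "NOUN"))     # -> day
```

With five icons generating five distinct words here, the point scales: a small, fixed set of icons combined in short sequences can index a large vocabulary, which is what makes the approach far more economical than spelling.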
The system evolved rapidly throughout that summer. By August 10, 1980 I was ready to initiate the process that eventually resulted in patents for the Minspeak™ technique.
Minspeak Excitement Grows
In September of 1980, I attended the President’s Conference on Non-Speaking People at the National Institutes of Health in Washington, D.C. I went as Richard Creech’s chauffeur. Rick’s accomplishments in AAC had already attracted a good deal of attention. He was one of the first and finest users of commercially available speech output communication aids. By 1980, he had appeared as a guest presenter at many AAC conferences.
Rick Creech and I had become acquainted by telephone in March or April of that year. We had many long conversations by phone, and toward the end of the summer, I traveled to North Carolina so we could meet personally. It was Rick’s enthusiasm about the semantic compaction technique that kept me involved in the project at first. He told me about the number encoding system he was using, describing in detail its various aspects. He assured me that no one else was doing anything like Minspeak – that systems exploiting the artificial intelligence possibilities inherent in modern syntax and semantics were not available.
I was living not far from Washington, D.C. at the time of the President’s Conference, so I volunteered to supply Rick and his father with transportation and be their guest at the meeting. I met many people who had played important roles in the field of AAC and who are still doing so today. The field was quite small at the time, but there was great excitement.
What I found was just what Rick had described. No one else was doing anything remotely similar to what I was now calling Minspeak. The use of multi-meaning, sequenced icons as a user interface for a language data base seemed new, and not just a little far out, to many of the participants in the conference. Its power, however, was becoming clearer to me and to Rick.
I knew that computers had great potential for handling language, but my own hands-on experience was limited. I enlisted the help of Kenneth Smith, a former student of mine, who was an actuary (professional mathematician) and computer programmer. Kenneth and I conferred extensively, and in the spring of 1981, Kenneth implemented a Minspeak program on an AIM 65 computer. We used a Votrax SC01 voice synthesizer. The system was primitive, but deeply gratifying.
At this time, my vision about how to proceed was inspired by Dr. Lois Schwab of the University of Nebraska. Dr. Schwab was involved with a concept called “service learning.” One computer science student, Mark Dahmke, with funding she had organized, built a voice-output computer system for another student, Bill Rush.
I conceived, somewhat naively, of a program where one student would receive a master’s degree in engineering for designing a custom-tailored computer for a communicatively impaired individual. Another student would receive a master’s degree in speech-language pathology for designing the vocabulary and icon sequences. The device user would volunteer his or her time as well as that of the family and would receive a voice-output Minspeak system for his or her contributions. It was all very idealistic and exciting.
As I began trying to lay the groundwork for such a system, good friends in the university community advised me that universities do not have finished products as their goal. Their goals are education and research, and the internal rhythms of the university – semesters, quarters, finals, graduation, publications, and tenure – would always take precedence over products. Maintenance would become difficult. Consistency and quality assurance would suffer. As these realities became clearer to me, I began to explore the possibility of commercial manufacture.
Minspeak Meets PRC
My primary concerns were that Minspeak be embodied in a computer system that would be worthy of it, with a program able to reflect Minspeak’s ability to map syntax and semantics, and that this system be available to large numbers of people with communication impairments. Only commercial manufacturers, I began to feel, could meet my concerns. I presented Minspeak to a series of them. There was interest from several, but I was drawn to Prentke Romich because I sensed their commitment to the highest possible achievement for the individual user.
I felt I needed to know on a first-hand basis what communication disability was really all about, so I invited people with communication disabilities, who had also written articles about this subject, to visit me at my home. I accompanied Bill Rush to a weeklong writers’ conference in Vermont. I was a guest of Michael Williams and Carole Krezman at their home in California. My life took a dramatic change. I quit my job, sold my house, cashed in my retirement insurance, and plunged into developing the Minspeak concept full time. On several occasions, for a week at a time, I served as a solo aide to individuals with communication disabilities.
After this period of orientation, I began to write about the Minspeak concept in order to show others that the approach was sound. The first article about semantic compaction appeared in Communication Outlook. Other articles followed. I attended seminars and conventions concerning disability and had the opportunity to see the wide range of commercially available communication aids. After several conversations with Barry Romich at these conventions, I was invited to PRC in October of 1981. In December of that year, Barry and I signed our first commercial agreement.
The following summer (1982) development of a program for Minspeak on the Express III hardware (one of PRC’s devices) began. In August 1982, an article in Byte magazine appeared explaining the technique and its possible hardware embodiments. The first week of September, when we received a call from Landsburg Productions in Los Angeles concerning a segment devoted to Minspeak, development proceeded in earnest. Within two weeks, Neil Russell, the chief programmer at PRC, had a workable Minspeak program. Rick Creech came to Ohio for the taping and was “the star.”
That September began a period of intense alpha testing of the system. Two prototypes existed; Rick used one and I had the other. The program was revised and refined. At the ASHA Convention in 1983 in Cincinnati, Ohio, Minspeak became commercially available. At that time, our knowledge of how to apply it to different population groups was limited. My own experience with communication disability centered on very high-functioning users. My professional experience was in linguistics and second language education. These perspectives flavored not only the articles about Minspeak but how we presented it as well.
How the Minspeak system has evolved is a story that should be told not just by one person, but also by some of the many people who contributed to its evolution. Early contributors to the semantic compaction approach came from a variety of fields and interest areas. Some key contributors (as of 1988) were the following.

Robert Stump of the Pennsylvania Office of Vocational Rehabilitation helped develop the first Minspeak Application Program, called Words Strategy™, and implement it with an adult in an employment setting.

Joan Bruno, Ph.D., CCC-SLP, then working as a Senior Research Associate of the Prentke Romich Company, first showed me how to implement Minspeak with very young children. She authored another Minspeak Application Program called Interaction, Education, and Play.

Sue Sansone, MS, LSP, of the AHRC of Suffolk County, New York, pioneered the use of Minspeak systems for adults with cognitive disabilities.

Gail Van Tatenhove, MS, CCC-SLP, then working at the Communication Systems Evaluation Center, a state-wide AAC assessment center in Orlando, FL, broke more new ground for me by showing how she was using Minspeak with children and adults with multiple cognitive and physical disabilities. She authored another Minspeak Application Program, called Power ‘n Play.

Tracy Kovach, Ph.D., CCC-SLP, began to spearhead ways to implement Minspeak with children with vision challenges who would access their Minspeak icons with auditory scanning. She also authored a Minspeak Application Program called Stories and Strategies for Communication.
As more and more Minspeak Application Programs were developed, Arlene Badman, MS, CCC-SLP, led a team of people to unify all the different Minspeak Application Programs. They created the Unity® Minspeak Application Program, which remains the primary language system in all English-based PRC devices today. A variation of the Unity program, called LAMP Words for Life, was developed by John Halloran, MS, CCC-SLP, for people with autism spectrum disorders.
Through the contributions of these individuals, and many others, the application of Minspeak for individuals with a range of abilities and disabilities remains as vital today as it was in 1980!