Tooltips—Progress or Failure?
Making good decisions about adopting innovations
Once upon a time, not too long ago (1983 to be exact), to get a computer to do anything, a would-be computer user had to learn what instructions the computer understood and then type them on a keyboard at just the right moment without making any mistakes. “Eager adopters”1 were thrilled and responded to error messages such as the infamous “Abort, Retry, Fail” with renewed enthusiasm and determination to get the thing to work. The rest of the populace said, “No thanks!”
Then, along with the appearance of graphical user interfaces, came the device of issuing instructions by making a choice from a menu. You could press a key or use a mouse to click a word at the top of the screen to see a list of the available instructions. If the words were well chosen, they let you predict reliably what the result of your choice would be. You no longer had to remember what the computer understood; you could read every instruction onscreen. This obviousness of use opened the door to the millions of “hesitant prove-its,” the people who adopt an innovation only if it can be demonstrated to be useful to them. Of course, rudimentary but perfectly serviceable menus had been implemented from time to time as part of text-based, command-line interfaces. But until they appeared as a feature in object-oriented graphical user interfaces, they hadn’t had much impact.
You can make a choice from this hierarchical menu in one swooping mouse action
Another important feature of the new interfaces was the icon. Originally from the Greek eikon, meaning an image or figure, the word had long been associated with the religious art in miniature of the Orthodox Eastern Churches, so icon came to mean a small picture. Icon also came to mean something done in a fixed or conventional style and, being conventional, therefore familiar and easily recognized. In this sense, Coke bottles and Andy Warhol’s images of Marilyn Monroe are icons.
Do you know what this icon means?
At first, GUI operating systems depended upon a limited set of icons to represent physical objects onscreen such as folders, pieces of paper, and trash cans. Some programs also used icons to represent frequently issued commands. Word-processing programs employed icons to let writers quickly set text justification. Drawing and painting programs let artists switch between onscreen cognates of physical tools such as paintbrushes and pencils, or choose a new type of tool that let them create lines, rectangles, or ellipses with a single drawing gesture. Users were not intimidated by the small number of icons and, since many of them quickly became industry standards, the catchword became “learn once, use many.” So far, so good!
These icons map functions to the intentions of users
As the complexity of programs ballooned, designers noticed that icons improved ease of use and increased productivity. But then, using the type of reasoning that proposes that if taking a few aspirin is helpful, swallowing a whole bottleful would be proportionately more helpful, some developers concluded that more icons would be better. Within a few years, swarms of icons began eating up screen real estate, squeezing the area in which the work was actually done. The proliferation of icons steepened the learning curve to Everest-like proportions because most of them represented the commands they triggered in ways that were not obvious to new users. Not only did users have to learn how to work the program, they now also had to learn the equivalent of a hieroglyphic alphabet, surely a step backwards.
I just wanted to write a letter.
Could the burden imposed by the additional learning task be eased? The “more is better” line of reasoning then proposed that a user could be helped if a prompt were given about the function invoked by the icon. What if, when the cursor passed over an icon, up popped a tiny label with some informative text? And so, tooltips were born. Hurrah, success!
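The mechanism is simple to describe: when the pointer lingers over an icon, a short delay elapses and a label pops up; when the pointer leaves, the label disappears. As a minimal sketch of that hover logic, independent of any particular toolkit (the class name and half-second delay are illustrative assumptions, not taken from a real API):

```python
# A minimal, toolkit-independent sketch of tooltip hover logic.
# Time is passed in explicitly so the behavior is easy to follow;
# a real GUI toolkit would drive these calls from its event loop.

class Tooltip:
    def __init__(self, text, delay=0.5):
        self.text = text          # the label to pop up
        self.delay = delay        # seconds the pointer must linger
        self._entered_at = None   # when the pointer entered the icon
        self.visible = False

    def pointer_enter(self, now):
        """Pointer moved over the icon; start the linger timer."""
        self._entered_at = now

    def pointer_leave(self):
        """Pointer left the icon; hide the tip and reset the timer."""
        self._entered_at = None
        self.visible = False

    def tick(self, now):
        """Called periodically; show the tip once the delay has elapsed."""
        if self._entered_at is not None and now - self._entered_at >= self.delay:
            self.visible = True
        return self.visible
```

Walking through it: entering at time 0 and ticking at 0.6 seconds shows the tip; leaving hides it again. The delay is the essential design element, since it keeps tips from flickering as the pointer sweeps across a toolbar.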
Or is it? If the icons were genuinely iconic in the second sense of the word, not just little pictures but widely known and easily recognized images, would the additional word labels have been needed? A closer examination reveals that the pop-up text is most often the word or words that would have appeared, or still do appear, on a pull-down menu. So it’s not really the little image that makes use obvious; it’s the text.
For an immediate confirmation of this, try a simple experiment the next time you’re sitting at your computer. Open your Web browser. Does the toolbar at the top of the browser window display icons with or without their text labels? If you’re like ninety-nine percent of Internet surfers, you’ll see labeled icons. The default display for the dominant browsers is icons plus words, an implicit recognition by the software developers that icons alone don’t work. If they did work, surely the default display for the most widely used programs on earth would be textless. And if the constant display of text were bothersome, surely you would have turned it off. Most of us don’t, because rapid recognition of the desired item requires the combination of image and text.
As we have seen, although icons have been touted as “intuitive,” a buzzword implying a mixture of obviousness of purpose and speed of selection, too many icons are not obvious at all. Designing and programming a grab-bag of icons in the hope that some of them will ease use escalates both the production-side costs of development and maintenance and the customer-side costs of training and frustration. A better choice is to make the effort to present users with a limited, well-chosen set of easily recognized images.
If you do need to provide prompts, don’t just tack on a label with the name of the tool or function; use the occasion to provide a hint about how to make use of the icon. Or, if you have just a few icons on permanent display, make sure their labels are always turned on so users have a double cue to aid recognition in times of stress or haste.
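One way to act on this advice is to store a short usage hint alongside each icon’s name and compose the tip from both, rather than echoing the bare menu word. A sketch under stated assumptions (the icon identifiers, names, and hint wording below are invented for illustration):

```python
# Sketch: tooltip text that teaches, not just names.
# The icon ids, names, and hints are invented examples.

ICON_HELP = {
    "paintbrush": ("Paintbrush", "Drag to paint freehand strokes."),
    "ellipse":    ("Ellipse",    "Drag diagonally to draw an ellipse; "
                                 "hold Shift for a circle."),
}

def tooltip_text(icon_id):
    """Compose a tip that pairs the tool's name with a usage hint."""
    name, hint = ICON_HELP[icon_id]
    return f"{name}: {hint}"
```

For example, `tooltip_text("ellipse")` yields a tip that both names the tool and explains the drawing gesture, giving a hurried user something to act on rather than a label to decode.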
This icon isn’t even named! But its use is made clear.
Controls for turning pop-up labels on or off are usually buried inconveniently in an options dialog box. To turn the labels on when a prompt is needed, or off once the icons have become familiar, a user must interrupt the flow of the current work. Instead, you can make the labels accessible and easy to show or hide by providing an on/off control in every window and dialog box.
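One way to keep such a control cheap to offer in every window is to back it with a single shared setting that each window observes, so toggling it anywhere updates all of them at once. A minimal sketch of that wiring (the class names and observer scheme are illustrative assumptions, not any toolkit’s API):

```python
# Sketch: one shared "show labels" setting observed by every window.

class LabelSetting:
    def __init__(self, show=True):
        self._show = show
        self._listeners = []      # one callback per window/toolbar

    def watch(self, callback):
        """Register a window and sync it to the current state at once."""
        self._listeners.append(callback)
        callback(self._show)

    def toggle(self):
        """Flip the setting and notify every registered window."""
        self._show = not self._show
        for notify in self._listeners:
            notify(self._show)

class Toolbar:
    """A stand-in for a window's toolbar that tracks label visibility."""
    def __init__(self, setting):
        self.labels_visible = None
        setting.watch(lambda show: setattr(self, "labels_visible", show))
```

With this arrangement, the per-window on/off control is just a button that calls `toggle()`; no window needs its own copy of the preference, and none can fall out of step.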
Tips are always available, one click away.
Finally, viewed from the widest perspective, falling into the reactionary pattern of responding to problems sequentially by making incremental changes is likely to lead to unexpected, costly consequences. Whenever we find ourselves attempting to fix a problem created by an earlier “solution,” we should step back to see the problem in a wider context and investigate its history. Often, as has been the case for our examination of tooltips, we will uncover an earlier set of misguided assumptions that need to be called into question. Doing so initiates an opportunity for innovation that can lead to satisfied users and a competitive advantage.
1. Rosen, Larry D., and Weil, Michelle M. “Helping Clinical Staff Transition from Paper to Computers.” Paper in publication, 1996. Rosen and Weil delineated a technophobia dimension with three broad groups: Eager Adopters, who were among the first wave of technology users, expected problems, and believed they could solve them; Hesitant Prove-Its, who were certain that technology has lots of problems and that they would need help to solve them; and Resisters, who avoided technology at almost all costs because they were certain that every mistake was their fault, and who wouldn’t ask for help because they felt stupid, foolish, and embarrassed.