CIDM


Establishing Quality and Usability Benchmarks for Information Products


JoAnn T. Hackos, Center Director
Introduction

Traditionally, benchmarking has been defined as the process of comparing the performance of one company's products with the products of its competitors. For example, benchmarking provides companies with a method for determining if semiconductors perform equally fast and accurately, if application software products include the same functions, or if one's automatic camera focuses as quickly and accurately as the competition's. Benchmarking also provides a method for studying internal processes to determine whether a company operates its assembly lines as efficiently as its competitors, produces the same output from the same machinery, or processes paperwork as quickly. Benchmarks are the baseline measurements, the numbers to beat.

With traditional benchmarking, most information development organizations attempt to find quantifiable characteristics to measure. While it is possible to do benchmarking without metrics, in most instances we find it easier to justify product changes to management when we can demonstrate through quantitative measurements that a problem exists.

What then does the benchmarking process mean for information products? How do we compare our publications with those of our competitors? How do we define quantifiable characteristics that predict our publications' success or define their level of quality? What is the relationship between quantifiable characteristics and the ability of our publications to meet the needs of our customers?

The danger for publications benchmarking in relying upon quantitative measures is that characteristics of documents that we can easily measure may be trivial in terms of their impact on the user.

Recently, an information developer informed me that the publications manager in her company had been asked to measure improvement in the quality of their publications. The manager had announced that the entire department would concentrate during the next year on quality improvement. To measure their improvement in quality, they would attempt to decrease the number of typing errors in their publications.

In an effort to select a benchmark for the company's announced quality improvement efforts, the publications manager had identified a publication characteristic that could be easily measured. Unfortunately, while improvements to the selected benchmark, the number of typing errors, will be easy to quantify, it will be difficult to measure the effect on the customer. As a result, the publications manager will be hard pressed to prove that eliminating or reducing typing errors will improve customer satisfaction with the documentation, decrease the company's cost of doing business, or increase the company's sales.

It's not that we believe that publications are not improved when they have fewer typing errors. Unfortunately, unless a typing error is so egregious that the intended word is obscured, it doesn't make much difference to the user's understanding, productivity, or performance. Typing errors, while embarrassing to the writer and the publications department, make little difference to customers who are concentrating on getting the information they need to do a job.

This publications manager selected typing errors as a quality benchmark because the number of errors is very easy to determine. More meaningful benchmarks, such as the usefulness of the document in facilitating learning, are much more difficult to measure.

The root problem, of course, is confusion about how to measure the quality of information products. Should we use quantitative measures that lead us to count the number of typing errors? It's tempting to select quality benchmarks that are inexpensive to measure and easy to correct. Or, should we attempt to find more meaningful measures, even if they are more difficult to analyze? We must recognize that the more meaningful the measure, the more important it will be to the publication's and the product's success.

As part of their companies' initiatives to provide higher quality goods and services to customers, information-development organizations are being asked to undertake quality programs and find ways to improve customer performance and satisfaction. These initiatives have left many information developers and their managers stymied. Just how does one measure the quality of information products? Do we concentrate on factors that we can easily and inexpensively quantify? We can certainly count spelling mistakes. We can eliminate typing errors and announce to the world that we have improved the quality of our information products. If we decrease the number of typing errors per hundred or thousand words, have we not increased quality? In one small way, undoubtedly we have increased our credibility in the eyes of our peers and, perhaps, our customers. However, we must also ask: Is the resultant increase in quality worth the cost of measuring and correcting the problem?

Many readers may have noted that in the past few years the number of typing errors has increased dramatically in books produced by major publishing houses. Could it be that book publishers have decided that the cost of decreasing the number of typing errors is too high in relationship to its importance to readers and that readers will continue to purchase their books whether they have some typing errors or not?

We hope that the managers of publishing houses have conducted a value analysis to help them decide how to measure the quality of their publications. In a value analysis, we compare the cost of performing a task (such as thorough proofreading) with its relative value to the customer. If it is more important to the customer that the information we provide be accurate than that it be typographically perfect, then we should put more resources into guaranteeing accuracy than in proofreading. And, if we have time to perform only one quality-assurance task, that task should be accuracy checking rather than proofreading.

Complicating the task of analyzing value is the need to compare our own quality values with the values of our customers. Most information developers value the mechanical correctness of their publications. They dislike typing errors. Typing errors make them appear less competent to themselves and to their peers. They argue that typing errors can cause problems for readers. Misreadings and resultant misinterpretations might have potential negative economic repercussions for a product manufacturer or service provider. Only by demonstrating the potential economic repercussions of typing errors, however, can communicators effectively argue for the value of performing careful proofreading.

In many instances, communicators find that making an economic argument, a business case, for the activities they perform is difficult. It may be difficult to prove that someone made an unnecessary customer-service call or refused to buy the company's products because of typing errors. In fact, making a business case becomes more difficult the farther the selected characteristic is from having a direct and noticeable impact on the customer's needs and the company's costs and profits. For that reason, we focus in this discussion on establishing quality benchmarks for our publications that are closely related to customers' needs.

Quantitative Versus Qualitative Benchmarks

To relate publications benchmarks to customer needs requires the ability to quantify the relationship between a publication's quality and a customer's requirement. For example, to quantify a publications benchmark, we may look for a relationship between an excellent table of contents and index and the ability of customers to find information they need quickly. We may discover that a table of contents containing two levels of informative headings, together with an index that has three levels of index entries, no more than two page references per item, and frequent synonyms as cross-references, permits customers to locate the answers to their questions in two minutes or less. Not only can we test for this quality, but we can also show the relationship between accessibility and a reduction in customer-service calls.
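To make a retrieval benchmark of this kind concrete, the following sketch (in Python, using invented look-up times and an assumed pass criterion that is not part of the example above) checks observed times against the two-minute target.

```python
# A minimal sketch (illustrative data, assumed pass criterion) for checking
# observed look-up times against the two-minute retrieval benchmark.

BENCHMARK_SECONDS = 120          # "two minutes or less" target
REQUIRED_SHARE = 0.8             # assumed: 80% of participants must meet it

# Observed times, in seconds, for participants locating answers through the
# table of contents and index; values are invented for illustration.
observed_times = [95, 140, 80, 118, 200, 75, 110]

within = sum(1 for t in observed_times if t <= BENCHMARK_SECONDS)
share = within / len(observed_times)

print(f"{share:.0%} of participants found the answer within 2 minutes")
print("Benchmark met" if share >= REQUIRED_SHARE else "Benchmark not met")
```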

A quantifiable measurement of this sort is not only testable; we can also associate it with a potential for cost savings. However, benchmarks need not be quantitative. Although most traditional benchmarking has focused on quantitative measurements, benchmarking can include qualitative measurements. If industry experts (pacesetters) agree that our product is easier to use than a competitor's product, we will have a positive perception to communicate to prospective customers. In this case, the benchmark is based on subjective opinion; nonetheless, it is just as relevant to our success in the market as quantitative benchmarks (see What is a benchmark?).

Measuring Customer Satisfaction with Publications

Companies interested in quality benchmarks have long measured customer satisfaction with their products and publications. These companies regularly conduct customer surveys by mail, telephone, and customer-site visits. They ask customers for their opinions about the products and services provided. They often provide the customers with a rating system that differentiates among customers who are "satisfied," "dissatisfied," or have "no opinion." Periodically after an initial customer-satisfaction benchmark has been established, they reassess their customers' opinions to see if their ratings have improved.

In addition to general questions about documentation, customer surveys can also be designed to ask specific questions about individual characteristics of the publications.

Despite the specificity of the questions, the answers are still based upon customers' opinions of the issues rather than their actual performance. As a result, customer surveys, while establishing significant quality benchmarks and measurements, rarely identify information specific enough to lead to dramatic changes in the documentation. Other benchmark techniques, described later, are more suited to discover the specific and detailed information needed to initiate change.

The Relationship of Benchmarking to Quality

If we define the quality of our information products as the extent to which we meet our customers' needs, then a benchmark based on customer satisfaction is an appropriate place to begin our process of improving quality. Unfortunately, customer satisfaction ratings, whether good or poor, provide us with little information on which to base our improvement efforts. If we subscribe to a process of improving the quality of our publications, we must establish more specific benchmarks than overall customer satisfaction.

In addition to asking our customers to rate their level of satisfaction with our publications, we can institute other methods of measuring the degree to which our publications appear to help customers accomplish their objectives or solve problems. Quality benchmarks that relate to customer satisfaction include measuring customer complaints and calls for assistance, meeting customer expectations, setting performance benchmarks through usability testing, studying customer productivity in the workplace, and measuring customer mistakes, each of which is discussed in the sections that follow.

Measuring Customer Complaints and Calls for Assistance

Customer problems arising from information products may be gathered from a number of sources, but the chief sources of complaint data are calls to customer service and complaints registered informally through sales and field support personnel. Customers who actively register complaints about information products have frequently encountered serious problems in accessibility and usability.

To begin a quality improvement program, information developers can request that those who answer customer calls record complaints about information products and problems in using information products, as well as the total number of assistance calls that might have been handled by better information products. A high number of complaints indicates significant problems with the publications. A decrease in the number of complaints should indicate an increase in publications quality.

In addition to specific complaints about publications quality, you should benchmark the number and duration of calls for assistance, as well as the type of questions that customers ask (see Customer support calls decrease significantly).

In a study of customer satisfaction, the Allen-Bradley Company reported that customer-service calls decreased from 50 a day to 2 a month after new documentation was designed and published.1

We particularly recommend that information developers conduct benchmarking studies that relate to customer-service calls because of the obvious connection between decreased number and duration of customer service calls and the cost of servicing customers. If a company receives 100 customer service calls per day at an average cost of $25 per call for 300 days a year, a reduction of 10 percent in the number of calls means a potential cost savings of $75,000 a year. A strong economic argument for decreased costs associated with improved information products is one of the most convincing arguments to make to senior management.
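The savings figure follows directly from the numbers in the paragraph above; this short sketch simply restates the arithmetic so that the assumptions are easy to vary.

```python
# Worked version of the cost-savings arithmetic above, using the figures
# given in the text.
calls_per_day = 100
cost_per_call = 25        # dollars
days_per_year = 300
reduction = 0.10          # a 10 percent decrease in calls

annual_call_cost = calls_per_day * days_per_year * cost_per_call   # $750,000
annual_savings = annual_call_cost * reduction                      # $75,000

print(f"Annual cost of customer-service calls: ${annual_call_cost:,.0f}")
print(f"Potential savings from a 10% reduction: ${annual_savings:,.0f} per year")
```

Even at these modest assumptions, the annual savings rival the cost of a documentation improvement effort, which is what makes the argument persuasive to senior management.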

Meeting Customer Expectations

Many information-development organizations design user documentation in a vacuum. Information developers lack direct access to customers, have little opportunity to conduct customer studies, and often must guess, based on their own experience and learning styles, the type and extent of the information customers need to accomplish their objectives and solve problems. As a result, they may fail to meet customer expectations.

If information products fail to meet customer expectations, the result will be increased calls to customer services, increased customer complaints about publications, reduced referrals and follow-up sales, and other product-specific problems.

By studying customers and analyzing their needs in detail, the information-development organization can better decide the minimal set of information that will be useful. In the past, we have often felt compelled to include all possible information in information products, hoping that some of it would help the customer. We can no longer afford such a "supermarket" approach to document design (see Minimizing information increases customer performance).

By avoiding a supermarket approach, we can reduce publications costs. By providing just the right information to our customers, we reduce the volume of information that must be created, reviewed, printed, translated, packaged, shipped, and more. We also increase the accessibility of the information for the customer by eliminating information that is not needed and highlighting the information that is.

However, to target our communications effectively, we need to know much more than we tend to know today about who our customers are and how they use information. We need to visit customers in their "natural environments," observing their use of information when they are learning a new task or remembering how to perform a familiar task. We need to study how they use information in a crisis or when they are "getting started." Our initial visits provide a benchmark for improvements in publications quality. We discover areas where information delivery and understandability might be improved. Once we establish our initial benchmark of customer needs, we can subsequently measure our success in improving our publications through return visits and new observations.

Setting Performance Benchmarks through Usability Testing

While we can gain considerable information about customer requirements from surveys, one of the most comprehensive methods we have available for analyzing the success of our information products is usability testing. During a usability test, we directly observe customers consulting the documentation to learn and use the product. We can discover where and why they have problems using and understanding the documentation, which enables us to decide upon ways to change the documentation and eliminate the problems.

Usability testing is a powerful benchmarking tool because it allows us to establish measurable performance objectives for our documentation. In the development of a usability test plan, we must clearly state measurable hypotheses about the performance of the user in completing useful tasks. For example, we may hypothesize that a typical user should be able to install the hardware in 30 minutes by following the instructions in the documentation and the labeling on the hardware itself. If typical users are unable to complete this task in the required time during the usability test, we will use test observations to discover why the task took longer than anticipated and find ways to shorten task-completion time by providing better instructions.
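A minimal sketch of how such test hypotheses might be evaluated appears below. Only the 30-minute installation target comes from the example above; the second task, its target, and all observed times are invented for illustration.

```python
# Evaluate usability-test hypotheses against observed completion times.
# Task names, targets (beyond the 30-minute install), and data are hypothetical.

hypotheses = {
    "install hardware": 30,     # target completion time in minutes
    "configure network": 15,
}

observations = {                # observed completion times per participant
    "install hardware": [28, 41, 33, 25],
    "configure network": [12, 14, 19],
}

for task, target in hypotheses.items():
    times = observations[task]
    met = sum(1 for t in times if t <= target)
    print(f"{task}: {met}/{len(times)} participants met the {target}-minute "
          f"objective (observed {min(times)}-{max(times)} minutes)")
```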

In addition to studying our own documentation and products in terms of performance hypotheses, we can also conduct benchmarking usability studies of our competitors' documentation and products. If we discover in a comparative usability test that our competitors' products take less time to install, then we have a benchmark upon which to improve.

At Comtech, we conduct competitive usability tests to help our customers learn how to improve their products' performance in terms of customer usability. The most difficult aspect of these tests is to uncover the characteristics of product and documentation that impede user performance and encourage errors. Once we have thoroughly and carefully analyzed test results, we can make useful recommendations for improving product and documentation to meet or exceed competitor performance.

In measuring customer performance on specific tasks through usability testing, we concentrate on several important benchmarks, including the time customers need to complete representative tasks and the number and types of errors they make along the way.

Interestingly enough, when we attempt to establish benchmarks about time to complete tasks or number of errors for our test hypotheses, the information developers frequently can give us little benchmark information. They often do not know how long a task should take when performed by either a novice or an expert. We then use the usability test to set the performance benchmark early in the information-development life cycle. With an early benchmark in place, we can perform iterative tests to measure improvements in the quality and usability of the documentation.

When we discover that documentation experts have not established usability objectives for the documentation, as is often the case, it may indicate that performance and usability objectives have not been considered in the design of either the product or the documentation. This lack of performance benchmarks points to a considerable problem we face in improving the quality of customer documentation (see Trainers need to meet performance objectives).

Those of us involved in the design of documentation often have not had the same degree of accountability for the performance of our information products. As a result, we are often surprised when we are asked to define measurable performance objectives for usability testing.

We find that the very act of preparing a test plan for documentation often leads information development, as well as product design groups, to reevaluate their design processes. Information developers often begin to think differently about their processes of preparing documentation when they learn that their users must perform tasks successfully and within acceptable limits of time and error rates. Thus, benchmarking user performance can immediately have a significant effect on the perceptions and, we hope, the development processes of information-development organizations.

Studying Customer Productivity in the Workplace

While usability testing is a powerful tool for measuring task performance and establishing performance benchmarks, it generally examines user performance in a laboratory setting rather than in a real workplace environment. A laboratory setting that mimics the work environment is appropriate for studying individual task performance and can certainly be designed to provide substantial information about learning and use, but direct study in the workplace also allows us to establish comprehensive benchmarks of customer performance unavailable in a more cloistered setting. Perhaps one of the most valuable measures for technical information is referred to as "mean time to productivity."

Mean time to productivity refers to the time it takes the average user to learn how to use a product effectively and to experience the productivity gains promised in product marketing. For example, a software manufacturer may contend that the productivity of an architecture firm will increase if they acquire a computer-aided design system to prepare their architectural drawings. In evaluating such a product, the buyer is, or should be, concerned with the amount of time and money it will take for its staff members to learn to use the product effectively. Will they have to send staff to training classes? Will staff be able to learn on their own? Will staff take substantially more time to complete their work during the learning process?

User-interface design, documentation, and training are the primary tools that support the learning process among the customer's staff. If the learning time is too long or the productivity gains unacceptable, the product's reputation may suffer. By studying the use of the product and its documentation in the workplace, information developers and product designers can establish benchmarks for mean time to productivity and monitor the customer's process to ensure that the benchmarks are being realized after the new technology and documentation are introduced.
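As a rough illustration, a mean-time-to-productivity benchmark might be tracked with a calculation as simple as the following; the target and the observed learning times are hypothetical.

```python
# A minimal sketch (hypothetical figures) of a mean-time-to-productivity
# benchmark: how long, on average, users take to reach a defined level of
# proficiency after the product is introduced.

from statistics import mean

TARGET_DAYS = 10   # assumed benchmark set when the product was rolled out

# Days each observed user needed before meeting the proficiency criterion,
# for example, completing routine work at their pre-adoption pace.
days_to_productivity = [7, 12, 9, 15, 8, 11]

mttp = mean(days_to_productivity)
print(f"Mean time to productivity: {mttp:.1f} days (target: {TARGET_DAYS} days)")
print("Within benchmark" if mttp <= TARGET_DAYS else "Benchmark exceeded")
```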

Productivity benchmarks can be established through customer-site studies as well as through laboratory usability testing. The customer-site studies may, however, provide more realistic measures of the time it takes for the user to learn the product and master its capabilities. A customer-site study might be designed to develop an initial benchmark of customer characteristics at the time the product is introduced. Subsequent analysis of the learning process and its success or problems will further provide information about the customer (see Percentage of product use increases).

Considerable attention has been paid in the press to the productivity gains claimed by the computer industry for the office workplace. It has been argued that the computer has not increased productivity in the workplace to the extent promised. As information developers, we should be interested in discovering whether or not our new products actually improve productivity over previous manual processes or other automated systems. If they do not, we have an important challenge in terms of user performance objectives.

Measuring Customer Mistakes

In some instances, particularly when computer-based products are being used internally by company staff members, it may be possible to record types and numbers of errors made by users. In fact, we may even have opportunities to record errors made by customers outside our information-development organizations. Software systems can be developed that record all keystrokes or mouse movements made by users and permit the analyst to "play back" the users' actions. Through a careful examination of keystrokes, information developers may be able to discover the types and numbers of errors made by users. In particular, they can measure the decrease in keystroke errors made following a learning period with a new system to determine the effectiveness of the interface and documentation.

In measuring mistakes made by customers, we may be able to analyze the types of errors made, how frequently they occur, and how quickly the error rate falls as users learn the product and its documentation.
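A minimal sketch of such an analysis follows; the log format and error categories are invented, and real keystroke-capture tools record far more detail.

```python
# Summarize logged user errors before and after a learning period.
# Log format and error categories are illustrative only.

from collections import Counter

# Each record pairs a phase with an error type.
error_log = [
    ("before", "wrong menu"), ("before", "typo in field"),
    ("before", "wrong menu"), ("before", "cancelled dialog"),
    ("after", "typo in field"), ("after", "wrong menu"),
]

for phase in ("before", "after"):
    counts = Counter(error for p, error in error_log if p == phase)
    print(f"{phase} learning period: {sum(counts.values())} errors {dict(counts)}")
```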

Using Expert Analysis

To enhance data gathered through direct studies of information users in the laboratory, in the workplace, or through surveys and interviews, information-development organizations often employ the services of documentation experts to evaluate documentation. Such expert analysis is referred to as heuristic evaluation (see What is heuristic evaluation?).

When documentation experts perform a heuristic evaluation of documentation, they often provide a checklist or description of the criteria they have used to examine the documentation and a complete report of their findings and recommendations. Document designers may then use the findings and recommendations to make both immediate and long-term changes in the information products.

The success of a heuristic evaluation depends upon the experience and expertise of the evaluators. The more they know about the effective design of documents, the better the evaluation is likely to be. Evaluators who have extensive experience performing usability tests of documentation may be the best prepared because they have experienced the correlation between user problems and specific documentation flaws.

While significant in examining the details of a large sample of documentation that would be impossible to evaluate through usability tests or workplace studies, expert heuristic evaluation has limits in its usefulness. Experts can observe only the external characteristics of a technical document: page design, organization of information into chapters and sections, adherence to standards and guidelines, sentence structure, and more. Experts cannot determine if the information contained in the documentation is accurate or appropriate for the users. Experts are not users and cannot duplicate many of the performance problems users experience trying to use and learn from the documentation.

In addition to examining your organization's documentation products, expert evaluators will also be able to conduct analyses of your leading competitors. Such competitive evaluations can provide useful benchmarks from which to begin a documentation improvement process in your organization. If experts find your documentation to lack important usability features available in your competitors' documentation, you have incentive to change.

Using Usability Inspections

In a heuristic evaluation, experts bring a rich source of experience and knowledge to the evaluation. This background also allows them to weigh the importance of individual problems in a particular publication with other characteristics of that publication. An expert might decide, for example, that a high-quality index is less significant in a very brief document that includes other accessibility tools than in a long document that is poorly organized.

What an expert evaluator cannot tell in such a study, however, is if the index terms selected are the best terms for the user community. Without intimate knowledge of the user community, the expert cannot provide information on the ultimate usability of the index or other document characteristics. Expert evaluation provides benchmarks only in terms of observable document characteristics, not in terms of performance objectives. It is possible, unfortunately, to have a document that meets all observable documentation standards and still is not useful for its audience.

A few years ago a major manufacturer in the US sponsored the development of a software tool to help information-development organizations determine the quality and usability of product documentation. The program consisted of a set of questions that reviewers used to guide their examination of a finished document. Answers to the questions, a series of choices that rate the success of a document in meeting particular usability characteristics, were then fed into a software package. The software contained an algorithm based upon expert evaluations of documentation characteristics. The output of the software was a report rating the document in several key areas, including ease of access, task orientation, index quality, graphics, heading levels, and others.

To prepare the set of questions, the range of answers, and the weighting system used in the software, the designers solicited the evaluations of experts in judging the success of sample documents, conducted usability testing on particular document characteristics, and studied the relative importance of the characteristics designated as critical for high-tech documentation. Behind the metrics, therefore, is a set of guidelines familiar to most experienced information developers. Communicators know, for example, that manuals with indexes are more useful than manuals without indexes, that task-oriented headings help users find information, and that illustrations add to the usefulness of most manuals, especially for visually oriented users. These guidelines and standards were incorporated into the software. Such measurement software, like the guidelines and documentation standards published by many corporate information-development organizations, provides a set of benchmarks for information developers.
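Although the article does not describe the tool's actual algorithm, a weighted scoring scheme of this general kind can be sketched as follows; the characteristics, weights, and reviewer ratings are invented for illustration.

```python
# A sketch of the general kind of weighted scoring such a tool might apply.
# Characteristics, weights, and ratings are hypothetical.

weights = {                       # relative importance of each characteristic
    "task orientation": 0.30,
    "index quality": 0.25,
    "ease of access": 0.25,
    "graphics and examples": 0.20,
}

ratings = {                       # reviewer's answers on a 0-5 scale
    "task orientation": 4,
    "index quality": 3,
    "ease of access": 5,
    "graphics and examples": 2,
}

score = sum(weights[c] * ratings[c] for c in weights) / 5 * 100
print(f"Overall document rating: {score:.0f} out of 100")
```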

Unfortunately, as with heuristic evaluation, it is possible to "score high" in the measurement software and still fail to meet the needs of the users. The measurement software addresses easily observable characteristics, such as the number of index items, the ratio of headings to text, the presence of task-oriented information, and the use of examples and illustrations. The expert evaluator may take into account similar characteristics that are observable on the surface of the document design. Such observable characteristics are important aspects of good design. Documents that fail to demonstrate these characteristics are unlikely to be satisfactory to users. We know from observations and experience that many users find information more useful if it is task-oriented, heavily illustrated, well indexed, and full of good examples. Thus, the processes of expert evaluation and guided inspections of finished documents can ensure that the technical communications show no known usability defects.

Nevertheless, documents can pass inspection and still be unsatisfactory. They can contain errors in the accuracy of the information, be written at a level inappropriate for the users, contain examples that are too trite or esoteric, or be organized in a way that makes it difficult for users to find what they want. To the extent that expert evaluators or document inspection teams are not like actual users, they may miss significant problems with the documentation that will not show up until the documents are used by actual users in real situations. We have all known documents that look great, pass all the technical reviews, and are prepared with great care by talented designers and writers. Yet, the users of these documents find them impossible to use, full of misinformation and misleading advice.

While we should continue to include expert evaluations and usability inspections as part of our quality and usability metrics, we must be aware of the limitations of these techniques in finding the most egregious usability errors. Not until we test our documents with real users will we learn about the show-stopping problems that we never dreamed existed.

Referring to References in Other Publications: Technical Reviews

For publications in some industries, especially the computer hardware and software industries, reviews in industry publications (magazines, trade reports, and others) may provide a tool for benchmarking. In many industry publications, technical manuals represent one category that is rated as part of an overall product review. For example, when a magazine reviews accounting software, the reviewer may include a rating of the documentation provided. Such a reviewer's rating provides a point of comparison among competitors' documentation that can be used as a potential benchmark. If your organization's documentation is rated lower than the competitors', then you can act to improve your rating.

Not only may your documentation be rated in a table comparing features and functions of competitive products, the reviewer may choose to include an analysis of your particular documentation. In one case, when the product we had documented won a "product of the month" award, the full-page review article about the product included a discussion of the "first-class" documentation. There is nothing like a very favorable mention by an independent reviewer to convince your management that you are doing an excellent (or a not-so-excellent) job.

For the most part, product reviewers rate documentation on the basis of the presence or absence of certain features and functions. Reviewers scan for tables of contents, indexes, glossaries, illustrations, error messages, and other criteria that are easy to identify.

In standard product reviews of this sort, criteria are often published that explain the features that the reviewers are looking for in the documentation. By reviewing such a criteria list, we learned in another case that reviewers of statistical software valued the presence of an appendix that listed the equations used in the statistical calculations. Without the comment in the review criteria, we might not have included such an appendix in the software documentation we were authoring at the time.

Reviews, however, do not ordinarily evaluate either the product or its documentation on usability. In fact, many users are amazed to read favorable reviews of both products and documentation that they find difficult to use. To better serve their readers' interest in usability, in addition to features and functions, a few of the leading software product magazines have instituted competitive usability testing. That means that they subject all the competitive products to the same usability tests so that they can better report on this significant aspect of quality. The test criteria, like the criteria for the more subjective competitive review articles, are published and can be a source of information to guide development strategies.

A positive documentation review, especially one based on usability testing that includes the documentation in the test, is a welcome independent judgment and potential benchmark measuring the quality of your work. However, even a negative review can provide a valuable basis for improvement. Like any quality metric, a reviewer's measure is useful. Sometimes, it leads to specific improvements like adding an appendix on equations or developing a better index. In other instances, it can be used to support other quality measurements to determine exactly which attributes of the technical information need to be improved.

Applying Industry Standards and Guidelines

Many of the quality benchmarks and metrics discussed above take place during the information-development life cycle or after the product has been released to the customer. They allow us to measure what customers need, how satisfied they are with our products and services, and the extent to which they are experiencing problems. Many information-development professionals hope, however, to generalize from these collective studies of customers to practices and standards that they can apply to documents during development.

We would like to be able to claim that good practices in information design and writing will make a difference to our customers. For example, we believe that customers prefer step-by-step, task-oriented instruction to descriptive information about the product or process. Usability testing results often demonstrate that customers have difficulty discovering what to do from long descriptions and perform more effectively when they can follow a set of well-written, briefly stated steps. In customer satisfaction studies, customers often state that they prefer to use simple step-by-step instructions rather than reading long, detailed explanations. Consequently, many information-development organizations have put standards in place that require their writers to create task-oriented rather than descriptive text.

Once such a standard is in place, we can inspect a draft text to ensure that the standard is well understood and implemented. The usability editor can examine the organization of the text to ensure that user tasks are prominent in the table of contents and heading levels. The review teams can ensure that the task-oriented instructions are complete and correct.

With a standard in place and an inspection method available that ensures that the standard is being implemented, an information development team can create an in-process quality metric. One might measure a writer's document plan, deciding if it is organized according to a thorough understanding of user tasks (a top score of 1) or if it needs to be revised to meet the standards (a second-level score of 2). During a usability edit, the editor might also rate the writer's success in producing a task-oriented text, using a 1- or 2-point system or a more complicated scoring method. During the review inspection, a similar scoring system might measure the success of the document in meeting the standards.
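As a rough illustration, such an in-process metric might be tallied as simply as this; the checkpoint names and scores are hypothetical.

```python
# A minimal sketch of the in-process metric described above: each checkpoint
# rates the draft 1 (meets the task-orientation standard) or 2 (needs revision).

checkpoint_scores = {
    "document plan review": 1,
    "usability edit": 2,
    "review inspection": 1,
}

passed = [name for name, score in checkpoint_scores.items() if score == 1]
needs_revision = [name for name, score in checkpoint_scores.items() if score == 2]

print(f"Passed on first evaluation: {len(passed)}/{len(checkpoint_scores)}")
if needs_revision:
    print("Revise and re-evaluate after:", ", ".join(needs_revision))
```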

A quality metrics system implemented through editing and reviews might even become an integral part of an individual's performance evaluation. However, it is vitally important that the evaluation system measure qualities that are truly customer driven. We have spent too much time in the past measuring only what we believe is important to a document. We look at issues such as grammatical correctness, accuracy in desktop publishing techniques, or attractiveness of layout. Unfortunately, we often end up measuring and evaluating what we value and neglecting what the customers really believe is important to achieving quality. A judge in a local technical-manual competition once eliminated a manual from contention for an award based solely on a single typing error. In all other respects, the manual appeared to be a well-designed and highly effective document. Such concentration on personal or "writer"-related values often makes the interests of information developers appear trivial and costly to our more business-oriented management and certainly to those customers whose needs are not being met.

In short, we need to ensure that what we measure is relevant and high on the customer's priority list. We need to avoid measuring something because it is easy to measure. When we choose to measure a quality and establish a benchmark, we make a very strong statement about our value system. It has been a truism in quality information-development organizations that "what we do not measure does not get done." We need to be extraordinarily careful that we choose to measure the right things.

Who is Responsible for Quality?

Everyone in an organization should be responsible for the quality of the work products they create. Individual information developers must understand the quality and usability requirements of the information they create for customers. Project managers are responsible for overseeing the work of individual contributors to ensure that company quality standards are enforced. Department and other functional managers are responsible for ensuring that their staff members have the training, resources, and support they need to achieve the level of quality demanded by customers and upper management. Without all levels of the organization working together to achieve quality, the efforts of small groups will often go unrecognized and unappreciated.

Quality and usability benchmarks enable us to focus on what our customers believe to be important about our publications. They provide us with both a place to start improving quality and a method of measuring the success of our efforts. Any publications organization that is being challenged or challenges itself to become more responsive to user needs must institute a benchmark process. Without an initial measurement, few real gains are likely to occur. We are much more likely simply to change something about the documentation rather than to make it more successful in meeting user needs.

1. Jereb, Barry. "Plain English on the Plant Floor." Visible Language, Vol. 20, No. 2 (Spring 1986), pp. 219-225.


Center for Information-Development Management
http://www.infomanagementcenter.com
Voice: (303) 232-7586
Fax: (303) 232-0659