The First Step in Benchmarking: Establish Your Own Quality Measurements


CIDM

June 1999



When we first announced our new benchmarking study, “Using Metrics to Manage Information Development,” we were inundated with queries about performance and productivity metrics: “how many hours per page is the industry standard?”, “how many hours per online help topic or per hour of delivered training should we figure into our budgets?”, and so on. If you follow other organizations’ management listservs, you’ve seen these queries yourself.

While this type of quantifiable data will often appease senior managers who want to make sure your group is in line with industry standards, we all know that countless variables influence these measures. For example, if your group produces more information and training for new products than for new releases of existing products, these measures will, of course, be higher. If you produce information for a technologically savvy audience, your information-development metrics may be lower than those of a group producing information products for a popular consumer product. In short, we can always explain away the variations between our own productivity metrics and what we might label an “industry standard.”

Further, while they assess the performance of our documentation and training teams, simplistic productivity metrics do not begin to address the quality of our information products.

The question then becomes: what type of metrics, or simply put, measurements, will better convey the success of our managerial and group efforts while also shedding light on the quality of our products?

Many organizations have tried to create their own quality benchmarks. Some groups we have consulted have measured the number of copyedit errors per page; they assume that a reduction in the number of these errors will provide evidence that their overall publication quality has improved. In truth, we know that this measure has little to do with the effectiveness (the true quality measure) of our information products.

Companies interested in quality benchmarks often use customer surveys to measure customer satisfaction. Usually, customers are given a rating scale (“satisfied,” “dissatisfied,” “no opinion”) with which to express their opinions about a product. At times, direct questions are geared toward the usability and completeness of the manuals. These answers are still a measure of the customers’ opinions rather than their actual performance, and they provide us with little information to focus our improvement efforts. However, they do provide us with a good benchmark against which to gauge our future improvement efforts.

Along with customer satisfaction rating systems and questions, we can implement other methods for measuring the extent to which our publications help users complete their tasks or solve their problems. Quality benchmarks that relate to customer satisfaction include

  • Counting the number of customer complaints regarding the technical publications they use
  • Counting the number, type, and complexity of customer requests for assistance, when the information they need is available or should have been available in the publications

A decrease in the number of complaint calls should indicate an increase in publication quality. Additionally, we might measure the duration (quantitative) and nature (qualitative) of assistance calls. Benchmarking this information is imperative because the number and duration of calls are tied directly to the cost of customer service operations; it also allows us to demonstrate to senior management how investing in information development can more than offset costs in other areas.
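
To make that cost argument concrete, the sketch below (not part of the original study; all figures, call records, and names are hypothetical) shows one way a group might tally documentation-related assistance calls before and after a publication improvement and translate the difference into a support-cost figure for senior management.

    from statistics import mean

    # Each record: (duration_minutes, topic, answer_was_in_docs)
    # All values below are illustrative only.
    calls_before = [
        (12, "installation",  True),
        (25, "configuration", True),
        (8,  "billing",       False),
        (18, "installation",  True),
    ]
    calls_after = [
        (10, "configuration", True),
        (7,  "billing",       False),
    ]

    COST_PER_MINUTE = 1.50  # hypothetical fully loaded support cost per call-minute

    def doc_related(calls):
        # Keep only calls whose answer was, or should have been, in the publications
        return [c for c in calls if c[2]]

    def support_cost(calls):
        return sum(minutes for minutes, _, _ in calls) * COST_PER_MINUTE

    before, after = doc_related(calls_before), doc_related(calls_after)
    print(f"Documentation-related calls: {len(before)} -> {len(after)}")
    print(f"Average duration (min): {mean(m for m, _, _ in before):.1f} -> "
          f"{mean(m for m, _, _ in after):.1f}")
    print(f"Estimated support cost avoided: ${support_cost(before) - support_cost(after):.2f}")

The same counts and durations can be reported release over release, which gives senior management a trend line rather than a single snapshot.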

By analyzing customer needs in detail, an organization can better determine the minimum amount of information that will actually help the user. In this case, more is not always better.

One way to gather this information is to visit customers in their natural environments. In doing so, we can determine how they use information, whether just to “get started” or to look up how to perform a certain task. Our initial visits provide yet another benchmark against which to measure improvements in publication quality. Once we have made that initial assessment of customer needs, we can measure our success at improving publications through return visits and new observations.

However, the best way to determine users’ understanding of the product, and any problems they encounter, is comprehensive usability testing. Testing is a powerful benchmarking tool because it allows us to observe the user from “start-up” to task completion and to discover when and where they have problems with the documentation, again providing measurable performance objectives for our information products.

All this is not to say that performance or productivity metrics are of no use. Rather, performance and productivity metrics only have meaning or significance when you have established that you are, indeed, producing quality documentation and training that meets your users’ needs and goals. Then, once you are confident of the quality of your own work through your own internal quality and effectiveness measurements and improvements, you can begin to benchmark performance metrics with other companies that have also studied and improved their effectiveness. When comparing these quantitative performance measures, you will still have to look carefully at the nature of the work being done by the other groups, but at least you will have eliminated the ever-complex variable of quality from your comparison.