
CIDM

April 2012


Measuring the Immeasurable—Documentation Quality


Volker Oemisch, Alcatel-Lucent

Customer documentation leaders and their management have to make complex and far-reaching decisions and need data to validate those decisions or evaluate the outcome of pilot projects. “We outsourced our documentation development for product X to a vendor and reduced overall cost by 15 percent without impact on timelines,” may be a typical statement summarizing the findings of such a project.

Many documentation managers would agree that there are at least three interdependent and competing success criteria: cost, timeliness, and quality, assuming that the quantity is given for a specific project. Typically, we have clear measures of timeliness and cost or productivity. See also the article by Mike Eleder, “The Illusive Writing Productivity Metric: Making Unit Cost a Competitive Advantage.” But what about quality? Wouldn’t it be great if we could assess management decisions with quality scores such as “We trialed the new development process with product line A, and the quality of documentation increased from 75 to 82”? Or if we could provide clear quality data on the typical trade-offs of business decisions, such as “By cutting corners in the development process, we reduced the cost of customer documentation by 7 percent in the last release, but quality dropped by 24 percent”?

Can we indeed measure the quality of documentation like we can measure the temperature with a thermometer? While documentation quality is often thought of as something “intangible” or at least very difficult to measure, we can get pretty meaningful quantitative quality measures if we focus on measuring the right things. Instead of trying to capture all aspects of documentation quality, which would result in complex, overly complicated measurements, it is critical to understand the major stakeholders and what really matters to them.

To use an example, there are many factors that determine the quality of a steak served at a restaurant, including the origin of the meat, the recipe, and the cooking process. However, if the chef knows that the only thing guests are really interested in is whether the steak is perfectly “medium rare,” then it does not make much sense to measure and manage all potential quality aspects. In this case, the focus of quality measurement should rather be on making sure that the steak is indeed “medium rare.” We will get back to this example later.

What is Quality?

But again, isn’t documentation quality too vague to measure? To find an answer to that question, let us have a look at two definitions for “Quality.” ISO 9000 defines quality as the “degree to which a set of inherent characteristics fulfills requirements.” This definition focuses on the data perspective, meaning that you can measure quality by comparing actual data with requirement data.

Peter Drucker states: “Quality in a product or service is not what the supplier puts in. It is what the customer gets out and is willing to pay for.” Drucker adds the user perspective, which is obviously very relevant for user documentation.

The ISO definition and Peter Drucker’s statement don’t contradict each other but they take different perspectives. To reconcile these two perspectives when defining documentation quality metrics, we need to reach an agreement with customers and internal stakeholders (or at least a common view) about the requirements for user documentation. In addition to the perspectives, there are other dimensions to consider, which makes measuring quality multi-dimensional:

  • Data perspective vs. user perspective
  • Internal requirements vs. customer/user requirements
  • Leading vs. real-time vs. lagging metrics

Metrics are considered leading when they are measured before the product is delivered, real-time when they are measured while the product is used, and lagging when they become available only after the product has been delivered and deployed. For example, customer survey data is lagging data.

In summary, quality indicates the degree to which a deliverable meets agreed requirements. This makes it impossible to define “quality” in the absence of clear requirements. Since documentation quality is multi-dimensional, no single metric will provide a comprehensive view of quality. However, there is typically a small number of quality attributes that are highly relevant to our internal and external stakeholders, which helps to focus on a manageable number of quality metrics.

Understanding Stakeholders—Customer and Internal Requirements

The need to understand critical quality attributes leads again to the question of how well we understand who our stakeholders are and what requirements they have. Documentation managers should at least look at two different stakeholder groups:

  • Customers /users
  • Internal stakeholders

Both stakeholder groups can have different and even contradictory requirements. For example, customers are typically not interested in the effort that is spent to meet their requirements while internal stakeholders may want to contain documentation cost as much as possible. However, there are also requirements that both stakeholder groups have in common, such as technical accuracy or complete and concise documents. See Figure 1.


Figure 1: Example of Internal Stakeholder and Customer/User Requirements for Documentation

Leading and Lagging Metrics

Many companies use customer satisfaction surveys to derive metrics (satisfaction scores) on how well they meet customer requirements with respect to certain aspects of information (examples for customer documentation include ease of use, accuracy, and relevance). Customer satisfaction scores are lagging indicators, since customer feedback is collected after the product has been deployed. We should keep in mind that using average customer satisfaction scores as the only quality measure is not very useful for quality management and quality improvement, especially if the scores are not linked to specific deliverables. Not knowing customer requirements upfront and relying solely on customer satisfaction measurement after the product has been delivered is little more than a “trial and error approach.” For example, if average scores are acceptable, all we could conclude is “We don’t know what the customer needed but we met these needs fairly well.”

Measuring and correlating leading and lagging indicators, however, can be a great approach to establishing better predictability of documentation quality. When selecting the relevant leading and lagging quality measures, it is important to focus on the quality attributes that reflect critical requirements of internal and external stakeholders.

Getting back to the example used in the beginning, a cook may find out that grilling a steak for 5 minutes (data perspective, leading metric) leads to customer feedback that the steak is perfectly medium rare (user perspective, lagging metric). As a result, measuring the grilling time is used as a key metric to predict and manage the overall quality.

In the documentation world, a documentation team may find that accuracy and consistency of deliverables are critical stakeholder requirements. The documentation manager would decide to measure the completeness of procedure testing and the compliance with the documentation process for a number of products and compare the metrics with customer documentation satisfaction scores for the same products. If analysis shows that following a consistent information development process and a high degree of procedure testing of the documentation result in consistently high customer satisfaction, the documentation team may use process compliance (percent key process steps completed) and procedure testing (percent content independently tested) as leading indicators to manage quality and predict customer feedback on documentation.
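To make the analysis step more concrete, here is a minimal Python sketch of such a correlation check. The per-product figures, products, and metric names are hypothetical, illustrative values only, not data from this article.

# A minimal sketch of correlating leading and lagging documentation metrics.
# Requires Python 3.10+ for statistics.correlation (Pearson's r).
from statistics import correlation

# Hypothetical measurements for five products (illustrative values only).
process_compliance = [95, 80, 70, 90, 60]   # leading: % key process steps completed
test_coverage      = [90, 75, 65, 85, 55]   # leading: % content independently tested
satisfaction       = [82, 74, 70, 79, 63]   # lagging: documentation satisfaction score

# Correlate each leading metric with the lagging satisfaction score.
print("compliance vs. satisfaction:", round(correlation(process_compliance, satisfaction), 2))
print("test coverage vs. satisfaction:", round(correlation(test_coverage, satisfaction), 2))

# A strong positive correlation would support using the leading metrics
# to predict customer feedback on documentation.

A documentation team would, of course, need data from enough products and releases before treating such a correlation as meaningful.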

By analyzing and correlating leading and lagging metrics, we can make documentation quality not only measurable but also more predictable and manageable.

Making Quality Metrics Manageable

When quality attributes are clearly defined, measuring quality is no longer vague. However, it can be complex since we have to consider multiple dimensions. To make quality metrics manageable and meaningful, it is helpful to

  • focus on the most relevant requirements of the stakeholders
  • determine the relevant attributes to measure how we meet the requirements
  • use leading and lagging metrics to make quality more predictable

When selecting attributes and metrics, it is most important to focus on the critical few that reflect the most relevant stakeholder requirements to make quality measurement both meaningful and manageable.

Closing the Loop with the End User

As shown before, it is important to reconcile the data perspective and the user perspective and validate the assumptions about quality metrics, especially the expectation that the selected attributes correspond with user requirements.

The traditional textbook approach is a closed-loop quality process starting with user and task analysis to determine the relevant end-user requirements that then guide the design of the user information (Figure 2). Usability testing at the end of the quality process ensures that the quality process delivers the desired result: highly effective end-user information.


Figure 2: Closed-Loop Documentation Quality Process

One of the issues with this approach is that time and budget constraints can prevent documentation managers from performing usability testing with end users or in-depth customer feedback analysis of documentation. However, not managing the quality of documentation effectively when budgets are under scrutiny may lead to additional budget pressure when executives do not see the value of quality documentation, or when they do not see the need to improve documentation because clear quality indicators are lacking.

Documentation managers need to find ways out of that dilemma in order to avoid a situation where cost reduction and subsequent reduction of quality management lead to a downward spiral, with executives adopting views like “documentation is not that good anyway, why should we spend a lot of money on documentation development?”

There is no silver bullet for the way out of this dilemma, but fortunately there are many successful approaches to quality measurement and quality management. As an example, let us look into one approach closely tied to the delivery of user information that can be very effective: moving away from the “book paradigm” in favor of topic-based, more continuous content delivery.

What does topic-based, continuous delivery mean?

Abandoning the “book paradigm” is related to developing and delivering content as information topics rather than as complete books. If we consider topics the deliverables, then we can focus the development and quality improvement effort on the “most important” topics. Also, by delivering content as topics, documentation teams can collect customer feedback and calculate metrics per topic so that the feedback is very specific. Most web-based delivery systems can be enhanced to track the use of individual topics, so that more and less frequently used topics can be determined. Finally, if customers use the search functionality of the delivery interface to find relevant topics (rather than browsing complete books), search analytics can be applied to identify what customers really need and search for. Search terms can be analyzed even for content that is not available or not identified the way customers look for it.
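As a rough illustration, the following Python sketch shows how per-topic usage counts and unanswered searches could be derived from a web delivery log. The log format, field names, and topic identifiers are assumptions made for the example, not the interface of any particular delivery system.

# A minimal sketch of per-topic usage tracking and search analytics, assuming
# a hypothetical delivery log where each record is a topic view or a search.
from collections import Counter

log = [
    {"event": "view",   "topic_id": "install-node"},
    {"event": "view",   "topic_id": "install-node"},
    {"event": "view",   "topic_id": "upgrade-firmware"},
    {"event": "search", "query": "rollback upgrade", "results": 0},
    {"event": "search", "query": "install node", "results": 12},
]

# Usage: how often each topic is accessed.
usage = Counter(e["topic_id"] for e in log if e["event"] == "view")

# Search analytics: terms users looked for without finding content,
# i.e. candidate gaps in the topic set.
failed_searches = Counter(e["query"] for e in log
                          if e["event"] == "search" and e["results"] == 0)

print(usage.most_common())            # e.g. [('install-node', 2), ('upgrade-firmware', 1)]
print(failed_searches.most_common())  # e.g. [('rollback upgrade', 1)]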

The second part of the approach, “continuous content delivery,” leverages the fact that there is no such thing as an “incomplete book” when delivering topics: some of the topics for a release can be delivered at a later point in time, even after the product release date.

How does this approach help measure and improve quality, even when documentation development is under budget and time constraints?

Information development managers can leverage the following key benefits of topic-based, continuous delivery to improve the quality and deliver the right content:

  • Ability to obtain user feedback for individual topics (usefulness)
  • Ability to track how often certain topics are accessed (usage)
  • Search analytics (tracking the terms that users search for)
  • Real-time feedback (available when topics are used)
  • Possibility of delivering less critical topics later

Even a very simple request for feedback on individual topics (“Was this information useful?”) results in data that can be used to score the usefulness of topics. Together with data on the usage of topics, managers get two meaningful data points per topic, which allows them to categorize the topics in an analysis chart.

Figure 3 below provides a basic analysis chart showing usefulness (for instance, the percentage of users rating a topic “useful”) on one axis and usage (for instance, the frequency of user accesses) on the other axis. Only two values (“High” and “Low”) are assigned per dimension, so every topic falls into one of the four quadrants of the chart. Even this simple analysis of usefulness and usage enables informed decisions about where to focus scarce resources when supporting user information for a current or future release.


Figure 3: Topic Usage vs. Usefulness

Possible findings and conclusions could be as follows:

  • High usage, low usefulness: Focus development and improvement effort on these topics. They will have the biggest impact in improving usability of the content and user satisfaction.
  • High usage, high usefulness: Great feedback! Maintain these topics and make sure they stay accurate when updating the information.
  • Low usage, low usefulness: Either improve these topics to make them more useful or stop supporting rarely used topics—and possibly replace some of them with more relevant topics. More relevant topics might be found by reviewing the terms that users have searched for in vain.
  • Low usage, high usefulness: Rarely used topics in this area could be delivered later, if required to “stretch the development timelines.” The extra time would help deliver the topics with decent quality.
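
A minimal Python sketch of how topics could be bucketed into these four quadrants follows. The topic names, counters, and “High”/“Low” thresholds are hypothetical and would have to be agreed on per product and release.

# A minimal sketch of the usage/usefulness analysis shown in Figure 3,
# assuming hypothetical per-topic counters collected by the delivery system.

topics = {
    "install-node":     {"views": 480, "responses": 60, "useful_votes": 21},
    "upgrade-firmware": {"views": 510, "responses": 55, "useful_votes": 49},
    "legacy-cli-ref":   {"views": 12,  "responses": 4,  "useful_votes": 1},
    "backup-restore":   {"views": 25,  "responses": 6,  "useful_votes": 6},
}

USAGE_THRESHOLD = 100        # views per release period (illustrative)
USEFULNESS_THRESHOLD = 0.60  # share of "useful" ratings (illustrative)

def quadrant(stats):
    usage = "High" if stats["views"] >= USAGE_THRESHOLD else "Low"
    usefulness_score = stats["useful_votes"] / stats["responses"]
    usefulness = "High" if usefulness_score >= USEFULNESS_THRESHOLD else "Low"
    return f"{usage} usage, {usefulness} usefulness"

for name, stats in topics.items():
    print(f"{name}: {quadrant(stats)}")
# In this made-up data set, "install-node" lands in "High usage, Low usefulness"
# and would therefore be a priority for improvement, as described above.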

Even in this basic example, the delivery approach and the data collected in the delivery process allow documentation managers to find an alternative to compromising quality when under budget and time constraints. With access to real-time user feedback and the flexibility of delivering individual topics, the focus can be reset on quality. As a consequence, documentation quantity, rather than quality, would become the variable element of the competing success factors.

It Can Be Done!

The example in Table 1 shows that even two straightforward quality metrics (usage and usefulness scores) can provide clear information for effective quality management. Add two easy-to-measure leading metrics that reflect internal and external stakeholder requirements, such as process compliance and degree of test coverage, and you have a powerful set of four quality metrics that cover the main quality dimensions and allow you to effectively predict and manage the quality of documentation.


Table 1: Example Set of Documentation Quality Metrics
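
As a rough sketch of how such a set of metrics could be recorded per product and checked against agreed targets, consider the following Python snippet. The metric names, values, and targets are illustrative assumptions, not figures from Table 1.

# A minimal sketch of a per-product record combining four example quality metrics.
from dataclasses import dataclass

@dataclass
class DocQualityMetrics:
    process_compliance: float   # leading: % key process steps completed
    test_coverage: float        # leading: % content independently tested
    topic_usage: float          # real-time: average accesses per topic per period
    usefulness: float           # real-time/lagging: % "useful" ratings

    def below_target(self, targets):
        """Return the metrics that fall below their agreed targets."""
        return [name for name, target in targets.items()
                if getattr(self, name) < target]

product_a = DocQualityMetrics(process_compliance=92, test_coverage=70,
                              topic_usage=140, usefulness=58)
targets = {"process_compliance": 90, "test_coverage": 80, "usefulness": 65}
print(product_a.below_target(targets))  # e.g. ['test_coverage', 'usefulness']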

Pretty concrete and tangible, right?

References

Mike Eleder, “The Illusive Writing Productivity Metric: Making Unit Cost a Competitive Advantage,” Best Practices, Vol. 12, Issue 1, February 2011.

Peter Drucker, Innovation and Entrepreneurship, Harper Collins, New York, NY, 1985. ISBN: 9780060913601.


Volker Oemisch

Alcatel-Lucent

volker.oemisch@alcatel-lucent.com

Volker Oemisch has 25 years of experience in managing global customer documentation and training teams and leading the company-wide standardization and re-engineering of information development frameworks. At Alcatel-Lucent, he is responsible for global information development services and managing the Alcatel-Lucent documentation program that was recognized with the 2009 CIDM Rare Bird Award.

 
