
CIDM Best Practices Newsletter, February 2013


Improving Product Usability with Task Complexity Metrics

Ben Colborn, Nutanix

Introduction

“Responding to business needs” is often on the list of reasons to migrate to structured authoring. Typically the focus is improving efficiency in production and localization, but the opportunities are much wider. Without any knowledge of or interest in structured content, the director of engineering at Nutanix made a request of information development: to assess product usability by calculating complexity metrics for each procedure and tracking these metrics from release to release. By treating content as a database (rather than merely content in a database) and taking inspiration from the field of software metrics, each procedure can be analyzed for characteristics that indicate difficulty of use. Based on automated analysis, each procedure is assigned a complexity score. The complexity scores of all procedures in the company content set are used to prioritize enhancement efforts.

The Product Usability Problem

While product usability has long been a focus of consumer-oriented companies, the consumerization trend has led users of all types of technology to expect better usability, fueling growth in the user experience field. For usability professionals, surveys, focus groups, observations, and other direct user feedback, while important, are difficult to scale and to keep consistent over time. A variety of uncontrollable factors—experience with the product, preconceived opinions about the product, observational interference, the expense of organizing studies, and so on—make it difficult to gather data that can be reliably compared over time. Ideally, a product team would have some proxy for user experience that would form a stable basis for comparison from version to version. Such data could add a quantitative aspect to user experience research that would otherwise be out of reach for all but the largest consumer-focused organizations.

Where could such a database be found? It would have to be a comprehensive, organized description of the things that people do with a product. Doesn’t that sound like documentation?

Mark Baker distinguishes “content in a database” from “content as a database”:1

  • Content in a database: “An object that can be retrieved from its indexed location, like locating a dining room chair in an IKEA warehouse.”
  • Content as a database: “A record that can be examined and presented from different angles based on different properties, which can be selected based on any of these properties, and which can be related to other records based on common properties.”

For documentation to serve as a database of user experience data, it must meet certain criteria:

  • Structured: organized by elements that describe the semantic or rhetorical function of the content.
  • Consistent: follows defined authoring standards.
  • Task oriented: focused on use of the system rather than the design of the system.
  • Topic based: divided into discrete units that can be handled independently.

If these requirements are met, we can consider analyzing the documentation to derive user experience metrics.

In typical discussions of user focus in information development, the product and tasks are taken as facts to be researched and the information development job is to develop useful and relevant procedures within that constraint. Using metrics derived from documentation adds a dimension to the equation, where documentation can influence future states of the product and tasks and not just accept them. With this approach, we can retain the focus on human engagement while also remaining scalable and consistent. Developing relevant, usable documentation is expected—and information developers have the opportunity to add more value to an organization in the form of data that can directly influence product usability.

“It would be nice for the doc team to have a metric for complexity of procedures. … That will bring objectivity to the whole topic, and we can track progress as well.”

— Binny Gill, Director of Engineering2

From Software Metrics to Documentation Metrics

When I was an undergraduate, I worked for some software metrics researchers at the University of Idaho Software Engineering Test Lab and at the Idaho National Engineering Laboratory. The mantra of these researchers was “you cannot control what you cannot measure.” Although I haven’t been paying any attention to that world for many years now, that idea has stuck with me, as have some of their perspectives on measurement.

Some well-established software metrics are defined below.

Table 1: Well-established software metrics

At some abstract level, programs and procedures have something in common: they are both sequences of instructions, one for machines and the other for humans. With the shift to structured, topic-based information, technical communication has adopted a number of software engineering approaches, such as modularity, reuse, and compilation. Because of these commonalities, it is possible to create some documentation metrics that are roughly analogous to software metrics.

Table 2: Documentation metrics analogous to the software metrics above

A different starting point would have been to ask: What makes a procedure difficult for someone to perform? Asking that question leads to a very similar set of documentation metrics.

By taking the view of content as a database, we can treat procedural information as a database of business information in addition to its value as human-consumable information.

Gathering and Tracking Task Metrics

The foundation of tracking complexity metrics over time is developing accurate, consistent, and useful information while following good structured authoring practices within a well-defined information architecture. The technique is applicable to semantically rich, standards-based, topic-oriented structured authoring environments. As long as an organization can define what makes a procedure difficult and can express the difficulties in terms of structural elements, gathering the metrics is easily achieved.

If the documentation is developed according to an XML schema such as DITA or DocBook, the metrics can be described in terms of XPath expressions. Once you have decided what you want to measure, the XPath expressions define how to measure.
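As a minimal sketch of that idea, assuming stock DITA task elements and topics that parse cleanly, the “what to measure” can be written down as a small mapping from metric to XPath expression. The metric set, the expressions, and the topic file name below are illustrative assumptions, not the exact set Nutanix uses:

```powershell
# Illustrative metric-to-XPath mapping for DITA task topics.
# Element names come from the DITA vocabulary; the metric set itself is an example.
$metricXPaths = [ordered]@{
    Steps     = '//step'              # steps the user must perform
    Substeps  = '//substeps/substep'  # nested steps add depth
    Branches  = '//step//xref'        # cross-references acting as branch points
    UiActions = '//uicontrol'         # user interface elements the user must touch
    UserInput = '//userinput'         # values the user must supply
    CliBlocks = '//codeblock'         # command-line interactions
}

# Count one metric in a topic file (the file name is hypothetical).
@(Select-Xml -Path 'configure_cluster.dita' -XPath $metricXPaths['Steps']).Count
```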

The following table shows a subset of the metrics that Nutanix gathers. Some metrics map neatly to a single element, such as branches mapping to cross-reference (xref) elements. Others are more complex: they map either to combinations of elements or to text inside an element. In the latter case, clear authoring standards (whether enforced by software such as Schematron or Acrolinx or by a human editor) are required. For example, I want to measure the different types of command-line interfaces. These are designated by different strings inside the codeblock element, which are defined by authoring standards, not by the XML schema. Still other metrics cannot be gathered at the level of granularity I would like; for example, the schema lumps all user interface elements into a single uicontrol element.

Table 3: A subset of the documentation metrics gathered at Nutanix

Some of these metrics—such as number of steps and user-supplied information—are fairly widely applicable, while others—such as those related to the different user interfaces—are relevant only in my environment.
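For example, a check for a particular command-line interface can look for a marker string inside codeblock. The prompt markers below are hypothetical stand-ins for whatever the local authoring standard prescribes, as is the topic file name:

```powershell
# Classify command-line steps by marker strings inside <codeblock>.
# 'ncli' and 'shell$' are hypothetical prompt markers defined by authoring
# standards, not by the DITA schema.
[xml]$topic = Get-Content -Path 'configure_cluster.dita' -Raw
[pscustomobject]@{
    ProductCliBlocks = $topic.SelectNodes('//codeblock[contains(., "ncli")]').Count
    ShellBlocks      = $topic.SelectNodes('//codeblock[contains(., "shell$")]').Count
}
```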

While up-front deep thinking is required, the tools to analyze the procedure topics are widely available and carry little or no licensing cost; the only real cost is development time. I chose to implement the checks with PowerShell, which has good XML support. XSLT (whether or not applied within a DITA Open Toolkit plugin) or Schematron would have been other options.
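A minimal driver along these lines might walk every task topic and emit one record per topic. The folder layout, metric set, and score weights below are illustrative assumptions, not the production script:

```powershell
# Sketch of a driver: walk all task topics and emit one metrics record per topic.
# The folder layout, metric set, and score weights are illustrative assumptions.
$records = Get-ChildItem -Path '.\topics' -Filter '*.dita' -Recurse | ForEach-Object {
    [xml]$topic = Get-Content -Path $_.FullName -Raw
    if ($topic.DocumentElement.Name -ne 'task') { return }   # analyze task topics only

    $steps    = $topic.SelectNodes('//step').Count
    $branches = $topic.SelectNodes('//step//xref').Count
    $ui       = $topic.SelectNodes('//uicontrol').Count
    $cli      = $topic.SelectNodes('//codeblock').Count

    [pscustomobject]@{
        Id       = $topic.task.id                     # track by topic id, not file name
        Steps    = $steps
        Branches = $branches
        Ui       = $ui
        Cli      = $cli
        Score    = $steps + 2 * $branches + $ui + $cli   # example weights only
    }
}
```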

Because the ultimate purpose of the metrics is to track improvement over time, it is necessary to design an information structure that records the metrics for each topic across releases. I chose to key the records on the id attribute of the task because file names and topic titles are more prone to change. Relying on the IDs requires information developers to exercise a bit more care in the course of their work but is not intrusive (see Figure 1).

Figure 1: Per-topic metrics information structure

This structure is easily translated to CSV using XSLT. The resulting CSV file is imported into a spreadsheet program for graphing and other analysis.
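The article performs this step with XSLT against the stored per-topic structure. Purely as an alternative sketch, the in-memory records from the driver sketched earlier could be exported directly from PowerShell:

```powershell
# Write the per-topic records to CSV for import into a spreadsheet.
$records | Export-Csv -Path 'task_metrics.csv' -NoTypeInformation
```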

As the metrics model is refined over time, data on previous versions must be re-gathered so that the results remain comparable. By following good archiving and version control practices, reanalyzing previous versions is a simple matter of retrieving the files and running the program to gather the revised metrics.
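As a sketch of what that reanalysis might look like, assuming releases are tagged in Git and a gather script exists (the article names neither, so both are placeholders):

```powershell
# Re-run the metrics gathering against each tagged release.
# Tag naming, branch handling, and Gather-TaskMetrics.ps1 are placeholders.
$branch = git rev-parse --abbrev-ref HEAD          # remember the current branch
foreach ($tag in (git tag)) {
    git checkout $tag --quiet
    .\Gather-TaskMetrics.ps1 -OutFile "metrics_$tag.csv"
}
git checkout $branch --quiet
```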

Using Task Metrics

“While the engineers are focused on their individual sub-areas, the complexity of the overall workflows is not always clear to them. Using these metrics has converted a subjective and often contentious topic into an objective topic which now encourages more constructive discussions in my team. It also helps me demonstrate to upper management and stakeholders the improvements being made in an otherwise fuzzy and often overlooked aspect of system design.”

— Binny Gill, Director of Engineering3

The information development approach allows the business to derive additional value from the information. These metrics are used to prioritize user workflow improvements and allocate developer time. The director of engineering reports that the metrics have benefited his organization: his team now has systematic input on the interfaces it owns. The metrics indicate the degree of improvement for each procedure and therefore the effectiveness of resource allocation. Engineering and information development review the metrics periodically, and they are presented at the monthly company-wide metrics meeting as well as to the board of directors. In addition, anecdotal comments from support engineers, systems engineers, partners, and customers can be correlated with the documentation metrics (see Figure 2).

Figure 2: Documentation Metrics

Within information development, the metrics can be used to identify procedures with abnormally high or low complexity; these are candidates for revision.
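One rough way to surface such candidates, assuming the CSV produced earlier with its illustrative Score column, is simply to list the highest-scoring procedures:

```powershell
# List the procedures with the highest composite scores as revision candidates.
# The CSV name, the Score column, and the cutoff of ten are illustrative assumptions.
Import-Csv -Path 'task_metrics.csv' |
    Sort-Object { [double]$_.Score } -Descending |
    Select-Object -First 10 Id, Score
```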

This model does, of course, have limitations. Changes in authoring standards over time may degrade the consistency of the metrics. There is no way to distinguish whether reduction in complexity comes from improvement in the documentation or improvement in the product. And as mentioned previously, these metrics assess a proxy for user experience, not the user experience itself. They provide one additional perspective, not a replacement for other user experience investigations. A small company like mine, however, does not have a dedicated user experience research group at this stage. These metrics are the most reliable indication of how we are doing with the usability of the product.

With the current focus on product design across industries, this technique is an opportunity for information development to demonstrate greater value to the organization by helping to improve user experience. One of the recent shifts in technical communication has been to treat information development not as a necessary evil but as the cultivation of a business asset. By applying insights from other disciplines, information as a business asset can have use and impact beyond its intended domain.

1 Baker, M. “The difference between content in a database and content as a database.” spfe.info, Feb. 5, 2012.

2 Personal communication, April 4, 2012.

3 Personal communication, September 2012.

 
