CIDM

June 2013

 


Document Assessment for Quality Improvement


David Skolnick, Analog Devices, Inc.

Introduction—Quality and DITA

When Analog Devices (ADI) started its Darwin Information Typing Architecture (DITA) publishing pilot, with plans to move all processor books to a DITA publishing process, I made our customers a promise:

The ADI books produced with DITA will match or exceed the content quality and usability of the books previously authored with DocBook. And, after the DITA process is established, it will yield higher quality.

Not a big promise, right? Setting a goal that quality will not slip does not seem to be a challenge. But, a number of characteristics of DITA publishing (with a large group of authors) differ from DocBook publishing (with a small group of authors). These differences led ADI’s DITA publishing project team to develop its document assessment method.

The goals of this method are to manage consistent document quality and guide continuous document quality improvement.

Document Process Changes

The most fundamental difference between ADI’s previous (DocBook-based) and current (DITA-based) publishing processes is the number of authors directly contributing to document sources. Changing to a DITA publishing process with a DITA-aware content management system (CMS) permitted growing the pool of authors from fewer than ten professional writers to more than one hundred contributors (a mixture of professional writers and engineers).

The upside of the change came from encouraging authoring among the non-professional writers. Engaging these subject matter experts in the authoring process accelerated the growth and accuracy of technical content.

The downside of this change came from the effort of developing collaborative authoring skills in this community. This larger group increased the need for standards, process, and training to make these authors effective.

Other fundamental differences between ADI’s previous (DocBook-based) and current (DITA-based) publishing processes are content re-use and the inclusion of DITA content generated from a design database. ADI did not use rigorously managed content re-use in books before changing from DocBook to DITA. This process change grew from the many DITA features (conrefs, keyrefs, and others) that ease managed content re-use.

Including generated DITA content from a design database presented additional challenges for maintaining document content quality. Guidelines for data entry and DITA element tagging were needed to promote consistent output. Also, developing consensus on these standards (which are shared among DITA authoring and non-authoring groups) was not straightforward. The resulting document field standardization has led to improved content.
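
For readers new to these re-use features, the sketch below shows both mechanisms in minimal form; the file names, IDs, key name, and part number are hypothetical examples rather than ADI content. A conref pulls a shared element in by direct address, while a keyref resolves indirectly through a key defined once in the map.

  <!-- shared/notices.dita (hypothetical library topic holding re-usable elements) -->
  <topic id="shared_notices">
    <title>Shared notices</title>
    <body>
      <note id="esd_note" type="caution">Observe ESD precautions when handling the device.</note>
    </body>
  </topic>

  <!-- In a chapter topic, conref pulls the shared note in by direct address -->
  <note conref="shared/notices.dita#shared_notices/esd_note"/>

  <!-- In the map, a keydef defines the product name once... -->
  <keydef keys="prodname">
    <topicmeta><keywords><keyword>ADSP-XXXXX</keyword></keywords></topicmeta>
  </keydef>

  <!-- ...and topics reference it indirectly, so a name change is made in one place -->
  <p>The <keyword keyref="prodname"/> processor supports the peripherals described in this chapter.</p>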

The publishing process changes created the need for a new quality management method. The method needed to create a common understanding of document quality and to address the goals of all process stakeholders.

Document Assessment Metrics

The process of developing metrics differs for every authoring community and document type. This discussion focuses on the metrics developed by ADI for digital signal processor (DSP) hardware reference books. While metrics should vary based on the community and document type, the method of developing and applying metrics for document quality management can be applied to any DITA publishing process. Figure 1 provides an overview of the metric development process.

Figure 1: Overview of the metric development process

There are probably as many systems for analyzing what matters to stakeholders as there are stakeholders. From ADI’s experience, choosing a simple process for identifying what matters is a great aid to success. Figure 2 compares two techniques. It is important to analyze stakeholder value with more than one approach, because there are multiple facets of value to consider.

Figure 2: Comparison of two techniques (funnel and silo) for analyzing stakeholder value

The funnel method examines the values that contribute to the nature of the document (Truth & Beauty). This approach identifies issues that add value to the preparation and the presentation of the content.

The silo method examines the values that readers and authors want in the document. Each stakeholder group has a unique perspective on what makes the document fit their purpose. No one reads technical documentation for fun. It is this sense of the document matching their needs that makes the document useful to its readers.

For this discussion, assume that a collaborative writing community has analyzed the values that the stakeholders prize in their documentation. The community used a number of analysis methods to reach consensus on characteristics that add value. The resulting value list follows:

  • Accurate Content—The content provides correct product information that helps the customers attain their goals. The content is never misleading or inaccurate.
  • Complete Content—The content provides complete product information on all product features that the customer will use or will find interesting. There are no missing levels of detail in the content.
  • Readable Content—The content is written using a clear, accessible style. There is a good match between the reading level of the content and the customers using the content.
  • Graphical Content—Where useful, the content contains graphics that aid understanding and support the text. The graphics are consistent, professional, and used to enhance the content. The graphics are never trivial, confusing, or a distraction from the written content.
  • Balanced Content—The content is balanced in a number of ways. Topics are provided for product concepts (what is it?), tasks (how is it used?), and reference (what are the facts?). Also, the same level of detail and structure is provided across all areas of the product content. The customers never encounter a situation in which they wish that all parts of the content were as good as one particular portion of the content. All portions of the content should be good and cover material in a balanced manner.
  • Producible Content—The content can be produced and delivered in a timely manner to support the product’s success with the customers. Wonderful content that cannot be produced when needed is NOT wonderful content.
  • Re-usable Content—The portions of the content that are identical for multiple products should be easily and reliably re-used for each applicable product. It is valuable both to customers and to content developers to re-use content in a way that accelerates customer success with a number of products.

Identify Metrics from Values

With the values defined, the next step is to identify corresponding metrics. The two (values/metrics) differ in that

  • values frame your vision (what success looks like) for the content
  • metrics quantify your incremental progress (steps on the path to success) for the content

In some ways, it is much easier to identify values to which the content should aspire than it is to identify metrics that measure progress toward those values. Figure 3 identifies a list of metrics and shows how they relate to the previous list of values. It is very important to note that (1) the relationships are not one-to-one and (2) values tend to span multiple metrics.

Figure 3: Metrics and their relationship to the values

Some detail on each of these metrics and their relationship to values is useful.

  • Database Review—Approximately 50 percent of the content in the example book is DITA content and SVG graphics that are generated directly from a database. This metric assesses how well the content that is output from the database meets standards for accurate, complete, readable, graphical, and balanced content.
  • Apps Review and Design Review—Application Engineers (who support the product) provide the most customer-focused content review. Design Engineers (who design the product) provide the most product-focused content review. These metrics assess how well the content reviewed by these groups meets standards for accurate and complete content.
  • Content Comply, Text Comply, and Figure Comply—Technical Writers (in their communications professional role) work with the content to bring it into compliance with the community’s style guide and grammar reference, producing a professional result. These metrics assess how well the content written/edited by this group meets standards for complete, readable, graphical, and balanced content.
  • Content Comply, Phrase Re-use, DITA Comply, and Content Re-use—Technical Writers (in their DITA expert role) work with the content to bring it into compliance with the community’s DITA information model, producing well-structured, valid DITA content. The information model for this document type both provides guidelines for DITA usage (the community’s agreed “best practices”) and identifies requirements for content re-use. Note that the Content Comply metric overlaps between style and DITA structure; this overlap stems from how content completeness relies both on the way content is presented (style) and the way content is ordered (structure). These metrics assess how well the content meets standards for accurate, complete, readable, graphical, balanced, producible, and re-usable content.
  • Spec Covered—Application Engineers, Design Engineers, and Technical Writers work together to verify that the information in the product specification is completely covered by the product content. Where needed, proprietary (internal-use-only) content is marked with a value in its DITA audience attribute to permit conditional publication of public (for customers) and private (for company only) versions of the content (see the sketch after this list). It is not a straightforward task to make sure that specification content (often not produced in DITA) is completely represented in the product content. For explicitly stated information from the specification, the task requires editing and appropriate DITA element tagging (is this private or public information?). For indirectly implied information from the specification, the task requires that subject matter experts carefully review the product content to make sure the intent of the specification is completely represented in the resulting product content.
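
The sketch below shows how such conditional publication can work in DITA: an audience attribute marks internal-only material, and a DITAVAL filter excludes it from the customer build. The attribute value, file name, and sample text are hypothetical examples, not ADI’s actual conventions.

  <!-- In a topic: the audience attribute flags internal-only material
       (the value "internal" is a hypothetical example) -->
  <section>
    <p>The device provides four DMA channels.</p>
    <p audience="internal">Channels 2 and 3 share a test-mode register that is
      not documented for customers.</p>
  </section>

  <!-- customer.ditaval: passed to the publishing build so that internal-only
       content is excluded from the public version of the book -->
  <val>
    <prop att="audience" val="internal" action="exclude"/>
  </val>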

Identify Scale for Metric Assessments

It is not enough to identify metrics related to values. It is critical to develop a scale of compliance within each metric. As identified in Figure 1, document assessment is an iterative improvement process. The compliance scale for each metric permits incremental improvement of the content from one published version to the next, each published at a point in the product’s life cycle.

Table 1 shows example values, related metrics, and a compliance scale for each. Note the following abbreviated terms in this table:

  • Tech Pubs—the Technical Publications (professional writers) group
  • Apps—the Application Engineers
  • Design—the Design Engineers
  • Author/Editor—authors are from all three groups; editors are typically from Tech Pubs

Table 1: Example values, related metrics, and a compliance scale for each metric

Reach Consensus on Assessment Results

At each point in the product’s life cycle (typically: proposal acceptance, initial design acceptance, design completion, preliminary (limited) production, and release to full production), the content editors complete a document assessment of the content, reach consensus with the content authors on metric values in the assessment, and use the assessment result to plan the next content development goals. If the content has reached an agreed level of “quality” (for example, an assessment overall score of 70 out of 100), the content goes to the content control board to be reviewed for content “acceptance” (ready to be re-used across multiple products; no further major content development needed; only minor improvements planned for following content revisions).

To simplify the document assessment methodology, the assessment uses 10 metrics with a 10-point scale of acceptance within each metric. This approach provides a 100-point scale, which is comfortably familiar to the members of the writing community.

The content editor populates an MS Excel spreadsheet with the score for each document assessment metric. The spreadsheet is set up to automatically produce a data-driven chart of the metric scores. It is important to consider the content structure when designing this spreadsheet. For the model document from ADI, it worked well to provide a row in the data for each chapter in the document, then produce a document assessment on a chapter-by-chapter basis. Figure 4 shows an example chart of document assessment data for one chapter.

Figure 4: Example chart of document assessment data for one chapter

When the editor and author(s) reach consensus on the document assessment for the chapter, the overall score is recorded in the revision history for the chapter. Also, a link is provided to an online posting for the document assessment for the full publication. This combination eases planning of content development tasks for the next document revision by establishing common expectations among the editors and authors.

Note: For those who attended the 2013 CMS/DITA North America conference, a copy of the example spreadsheet was provided with the conference material. For those who missed the conference, this paper, the presentation slides, and a copy of the spreadsheet are posted to the DITA users group at <http://tech.groups.yahoo.com/group/dita-users/>.

Conclusion—Not a Panacea, but Part of the Process

Using the document assessment process as part of your DITA content development process does NOT solve any content development problems or challenges. It is not intended to do so. This methodology provides one useful tool for your “DITA development toolbox”; it helps you ferret out weak, poorly developed, inconsistent content within your DITA documents. Then (its most useful feature), it provides you with the means to communicate about these issues and plan to resolve them.

To use this methodology, a content development community must have the following (or similar) resources to support it:

  • A style guide that all content developers agree to use—The technical publications group at Analog Devices uses the ReadMeFirst! style guide. To supplement this guide, the group developed an “exceptions and additions” guide that specifies (on a case-by-case basis) instances where ADI style diverges from ReadMeFirst! or where additional information is needed to define style. <http://www.amazon.com/First-Computer-Industry-Edition-ebook/dp/B0031O40TE/ref=dp_kinw_strp_1>
  • A grammar reference that all content developers agree to use—The technical publications group at Analog Devices uses A Writer’s Reference as its grammar reference. <http://www.amazon.com/A-Writers-Reference-Diana-Hacker/dp/0312450257>
  • An information model that all content developers agree to use—The technical publications group at Analog Devices developed the ADI information model using the guidance of Introduction to DITA—A User Guide to the Darwin Information Typing Architecture. This text provides a step-by-step guide for developing such a model. <http://comtech-serv.com/url/2nd>

One of the most powerful features of DITA content development is its compatibility with incremental content quality improvement. The document assessment methodology provides an analysis tool that helps you identify what is valuable to you and your customers in your content, measure how well your content supports those values, and plan incremental improvement of content quality to achieve those values.

About the Author:


David Skolnick
Analog Devices, Inc.
david.skolnick@analog.com

Since 1992, David Skolnick has been a Technical Writer with ADI’s Digital Signal Processor Division, Norwood, MA, where he writes and illustrates manuals and data sheets and has written or edited a number of DSP application notes. He holds a Bachelor’s degree in Electrical Engineering Technology and a Master of Technical and Professional Writing degree, both from Northeastern University. For more information, visit David’s profile on LinkedIn at: http://www.linkedin.com/in/davidskolnick
