Metrics on Metrics

CIDM

December 2018




Dawn Stevens, Kathy Madison & Sabine Ocker, Comtech Services

Each year the Center for Information-Development Management (CIDM) conducts a benchmark survey of its members to identify industry trends in key areas of technical product documentation development. In 2018, we explored the amount and types of data that departments gather and monitor to evaluate their overall effectiveness and prove their value to their corporations. Thirty-five members participated in the study, which consisted of an online survey and follow-up phone interviews.

Our research focused on critical metrics that are typically associated with the overall health and well-being of a content organization, including:

  • Customer and employee satisfaction
  • Content quality
  • Department and employee productivity
  • Development and production costs

Because many of our members have adopted DITA, we also examined metrics specifically designed to track content reuse.

The survey questions and interviews explored in detail the data members are tracking, the way in which they track this data, how often they gather and review the data, who they share it with, and what they do with it.

Overall Trends

Metrics Available
Although members acknowledge the importance of gathering and analyzing metrics, most reported that they formally track only data that is relatively easy to collect, such as the cost of translation or their on-time delivery percentages. Fewer than half of the respondents have formal measures in place to gather the other metrics we asked about (Figure 1).

These low tracking percentages don’t necessarily mean no measures are in place to capture data, but often simply indicate there is not a formal, regular, consistent approach to gathering this information. Instead, a significant portion of data is gathered on an ad hoc, anecdotal basis. For example, an organization may not have a formal user feedback mechanism regarding the quality and usefulness of technical documentation, but the group often reaches out to the sales and marketing team to gather any second-hand information available.

There are many reasons why members do not have formal metrics gathering practices in place:

  • Forty-six percent report that they don’t have an appropriate tool or mechanism to collect data.
  • Twenty percent admit that they actually don’t know how to gather the data they need.
  • Six percent indicate they don’t have enough time available to implement a data gathering process and analyze the resulting data.

Nevertheless, the vast majority of our members (82 percent) would like access to more metrics than they are currently gathering. For example, just under half want more meaningful customer satisfaction data that they can get on a repeatable basis.

“I have a hard time getting customer data to drive our design and help us reduce the amount of content we deliver.”

In fact, when we combine the number of groups that have a formal process in place for measuring customer satisfaction with the number that wish they did, we find nearly unanimous agreement that this metric is critical for establishing the value of the department.

Similarly, the majority of respondents want to understand the true cost of producing documentation. Although most know how much they spend on a yearly basis to create content (that is, their overall department budget), they don't know how those costs are allocated to an individual project or, even more specifically, a single deliverable.

The lack of both customer satisfaction data and documentation cost data makes it impossible to provide any sort of cost/benefit analysis to establish the overall value of the team. Yet in an outsourcing climate, managers must have these metrics readily available to protect their team and their resources. Unfortunately, this example is just one part of a broader mismatch between the data that is available and the data that is needed. Figure 2 shows the significant disconnect between the metrics members have available or can measure and the things their department is held accountable for. For example, most managers are held accountable for content quality, while only about 60 percent of that number actually have ways to measure quality. Conversely, two-thirds of the membership track translation costs, but only half are held accountable for them. The disconnect illustrates that, as one member stated, companies often gather “metrics of convenience rather than significance.”

Only nine percent of members are satisfied with the metrics they are gathering, with an equal number hesitant to gather more metrics until they learn to use the ones they already have more effectively.

“What I wish I had is more time to delve into the data and determine what other data might be helpful.”

Data Gathering Methodologies
The largest group of members (45 percent) gathers the data they do collect manually, typically tracking data and trends in spreadsheets or SharePoint forms. However, automated tools, such as Acrolinx, web analytics, and Wufoo, are in place at more than a third (37 percent) of the companies, and many respondents expressed a desire for more data automation, believing that spending less time collecting data would give them more time to analyze it and use it to drive decisions.

“I want the data to be easier to get so I can use my time to put it to use.”

Data Gathering Frequency
During the interviews, most members felt they were gathering and reviewing their data at the right cadence. However, survey results show that data is actually gathered more frequently than it is reviewed. The majority of respondents indicated they review data on a quarterly basis, while much of the data is gathered daily, weekly, or monthly. Even more striking, the number of people reporting that they never review or analyze data is larger than the number saying they never collect it. In other words, it is not always an issue of not having the data; sometimes the data simply goes unreviewed and unused, a disturbing counterpoint to the expectation that data would be analyzed more often if collection were automated.

Use of Metrics
Members use the data they collect for multiple purposes (Figure 3), with no one purpose being clearly more important. Surprisingly, only 21 percent of respondents indicated they use metrics specifically to educate management on the value of the organization, although 56 percent did report that they share their data with their immediate and senior managers. Data does not flow downward as frequently, however; only 25 percent of respondents said that the data collected is actually available to and shared with the entire team, making them only slightly more informed than teams outside their group who receive the data from 20 percent of the respondents.

On-Time Delivery
Number collecting data: 53%

Collection method:
Manual: 63%
Automated: 30%
Anecdotal: 7%

Used to:
Evaluate staff: 31%
Educate management: 21%
Compare to previous years: 19%
Set/change priorities: 14%
Plan projects: 10%
Justify resources: 5%

Shared with:
Management: 53%
Team: 25%
Others: 22%

Fifty-three percent of respondents report that they track on-time delivery, making it the second most common metric gathered. From our interviews, it seems that the metric’s popularity stems not from its usefulness but from the fact that it is easy to track. In fact, those who don’t track this metric are not looking for ways to gather the data effectively; instead, many seem to feel that tracking it is pointless. For them, on-time delivery is non-negotiable. As a result, they always deliver “something” on time. Further, at some companies, the adoption of Agile development practices has also affected the measurement of on-time delivery, as many teams simply redefine their sprint scope to ensure they deliver on time. In other words, many members see on-time delivery as a meaningless metric because it sits unmoving at 100 percent. It is the quality of the document or the wear and tear on the writers who must meet the deadline that suffers, and those are the metrics these managers feel they should be tracking.
This insight provides some clarity on how this data is used by those who collect it. If this metric is always 100 percent, it provides no insight for planning and prioritizing activities and resources; in fact, with only five percent of companies reporting that they use it to justify resources, it is the least applicable metric researched for helping in that area, and it has below-average applicability to planning as well. On the other hand, it is one of the metrics most used for evaluating the performance of staff.

Productivity
Number collecting data: 38%

Collection method:
Manual: 57%
Automated: 24%
Anecdotal: 19%

Used to:
Educate management: 24%
Justify resources: 22%
Compare to previous years: 17%
Evaluate staff: 17%
Change priorities: 11%
Plan projects: 9%

Shared with:
Management: 57%
Team: 28%
Others: 15%

Only 38 percent of members are measuring the productivity of their product documentation teams, which may be directly related to the fact that it is the least likely metric in this report to have an automated collection method. To gather this information, members manually track measures such as the following (Figure 4), with a rough per-writer normalization of the examples shown after the list:
  • The number of deliverables created in a year. For example, members shared metrics such as “completed 104 projects with six technical writers” and “completed 200 releases with ten writers”.
  • The number of topics or pages created during a specific time period. For example, “developing 2 job aids per week at 5-7 pages each, per person”.
  • The actual time taken to create a specific deliverable (sometimes compared to an original time budget).
  • The team velocity in an Agile implementation, measured by the number of completed story points in a given development sprint.
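As a rough illustration only (the survey reported the raw figures above, not these derived rates), such numbers can be normalized to per-writer rates so they can be compared across teams and across years:

  104 projects / 6 writers ≈ 17 projects per writer per year
  200 releases / 10 writers = 20 releases per writer per year
  2 job aids per week × 5-7 pages each ≈ 10-14 pages per writer per week

Normalization of this kind does not address the complexity and quality caveats members raise below, but it does make trend comparisons possible when headcount changes.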

When discussing productivity measures, members alluded to the risk of “getting what you measure.” For example, when writers are evaluated by how much they produce in a given amount of time, they are motivated to create a lot of material. The question, however, is how much of that material is necessary and how much of it is well written. As a result, members caution that productivity cannot be the only thing tracked; it must be balanced with other metrics:
“Number of deliverables is not a good measure as it does not take into account the complexity of each project.”
“Productivity must be considered in tandem with quality. Creating high quality content takes more time.”
“We don’t have a unified metric to calculate productivity. It depends on the project expectations and the skill sets of the content creators.”

Content Quality
Number collecting data: 44%

Collection method:
Automated: 50%
Manual: 44%
Anecdotal: 6%

Used to:
Evaluate staff: 21%
Educate management: 20%
Justify resources: 17%
Compare to previous years: 16%
Plan projects: 14%
Change priority: 12%

Shared with:
Management: 48%
Team: 30%
Others: 22%
While members might agree that productivity must be balanced with quality measurements, only 44 percent actually track quality metrics, and among those companies, the notion of what constitutes quality content differs: members report gathering statistics ranging from customer satisfaction to readability scores to the error density of pages submitted for editing. Nevertheless, the majority of those measuring quality cite technical accuracy as their number one concern, with clarity and completeness of the content as a secondary focus (Figure 5).

Content quality is the least likely metric to be gathered on a purely anecdotal basis and is almost as likely to be tracked through automation as manually. As a result, having the resources required to monitor and measure quality is a struggle for many: managers report that they are constantly balancing the power of a human review against the reality of limited time and resources. With many teams losing editor resources, it is not surprising that peer reviews are the most common method for determining content quality (Figure 6), nor that only 5 to 20 percent of project time is spent on editing.

Although automated tools can help in some areas, such as conformance to editorial standards, they can’t evaluate relevance or whether writers are creating the right level of depth for their audience. As a result, 60 percent of the respondents are starting to turn to web analytics, such as views per topic or view time, to provide additional insight into content quality. However, many admit that this data can be misleading or too general. For example, one group reported that the number of views for similar content ranged between 23 and 105 for the same time period. A low view count per topic could indicate low relevancy of the content, but it could also indicate that the content is hard to find via search or navigation. Both may be considered quality issues, but the root causes are different; managers need to separate them to determine how best to address each one.
Given that quality issues can only be addressed by the team itself, these metrics are the ones shared most regularly with the team as a whole. However, content quality is the least likely metric in the study to be compared year over year, leaving those team members to figure out on their own whether or not they are improving.

Customer Satisfaction
Number collecting data: 41%

Collection method:
Automated: 43%
Anecdotal: 30%
Manual: 27%

Used to:
Educate management: 26%
Plan projects: 18%
Compare to previous years: 17%
Justify resources: 15%
Change priority: 12%
Evaluate staff: 12%

Shared with:
Management: 53%
Team: 27%
Others: 20%

Obviously, the best approach to measuring customer satisfaction is through direct interaction, rather than trying to draw conclusions from web analytics. Unfortunately, direct interaction appears to be difficult for members to gain. Many members told us they are in the initial stages of customer satisfaction projects, but others said such efforts are hindered by their marketing teams. One member summed it up this way, “Measuring customer satisfaction is tricky. We have tried to infiltrate a continuous customer survey, but we were turned down by our marketing department (who feared that additional questions would be frowned upon by the customers). The existing survey only covers access to competitiveness of products, service attitude, etc.”
As a result, many groups are forced to rely on anecdotal information from marketing or technical support rather than direct feedback. In fact, anecdotal data collection is used more often in determining customer satisfaction than any other metric included in the research. Even among participants who reported they are tracking customer satisfaction data, anecdotal information is equal to formal survey data and significantly more prevalent than user comments, topic ratings, and user studies (Figure 7 on page 130).
Members with direct user research shared lukewarm statistics, with user satisfaction ratings ranging from 60 to 75 percent. These unremarkable scores, however, empower groups to take action. For example, despite user data showing a preference for PDF over HTML content of almost six to one, teams are making the move to electronic delivery of content, perhaps assuming they don’t have much to lose in this area but much to gain in others.

Content Development Costs
Number collecting data: 29%

Collection method:
Manual: 55%
Automated: 27%
Anecdotal: 18%

Used to:
Justify resources: 25%
Plan projects: 23%
Compare to previous years: 17%
Educate management: 17%
Change priorities: 10%
Evaluate staff: 8%

Shared with:
Management: 75%
Others: 14%
Team: 11%
One metric we expected would be a focus for tracking and improvement is the cost of content development. By far, it is the metric most often shared with upper management, with 75 percent of our members reporting that this data is visible up the management ladder. However, our research uncovered that nearly half of the respondents do not actually know the cost of their deliverables (Figure 8).
The fact that development costs are neither commonly known nor tracked is surprising, as monetizing content creation is an important means for a product documentation manager to justify requests for additional writer resources; in fact, it is the metric in this study most used for that purpose, as well as for planning projects.
As seen in Figure 8, people who are tracking development costs seem to know equally well how those costs divide among the weighted hourly rates of their staff, the costs of producing the deliverables (such as printing, web hosting, and so on), tool costs (such as licensing), and translation costs. One company was even willing to share a composite amount for others to benchmark against: $336/page.
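As a simple sketch of how such a composite figure might be assembled (the component amounts below are hypothetical and are not from the survey), a fully loaded cost per page divides all of these cost elements by the pages produced in the same period:

  (staff cost + production cost + tool licensing + translation cost) / pages produced
  for example, ($500,000 + $50,000 + $50,000 + $72,000) / 2,000 pages = $336 per page

The same arithmetic applied at the project or deliverable level is what would let managers answer the cost-per-deliverable question that nearly half of respondents currently cannot.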
Development costs are the metric least shared with the team itself and among the least often used to evaluate the team. This lack of downward visibility makes sense: although the team’s productivity certainly influences these costs, many of them are out of the team’s direct control and cost savings is not a frequent motivator for improvement. Team members often care less about saving the company money and more about saving themselves effort (through reuse). Fortunately, these two metrics complement each other, and can be used together to the manager’s advantage.

Translation Costs
Number collecting data: 65%

Collection method:
Manual: 60%
Automated: 36%
Anecdotal: 4%

Used to:
Compare to previous years: 24%
Plan projects: 22%
Justify resources: 20%
Change priorities: 15%
Educate management: 15%
Evaluate staff: 4%

Shared with:
Management: 57%
Others: 23%
Team: 20%
Although total development costs are not always known, we found that the cost of translation, tracked on its own, is the most commonly tracked metric in the study, with roughly two-thirds of our members tracking it. The metric may be so popular because it is easy to track (being a separate line item in the budget) or because it is tracked by someone else and simply reported to the documentation group. Its popularity is likely not due to its importance: only half of our respondents indicated that they are held accountable for these costs, and it is the metric least used to educate management about the value of the documentation department.
The lack of accountability for this metric could be because translation costs are often owned by a group outside of documentation. However, documentation managers may be interested in tracking these numbers because reduced translation costs were a primary business driver for a move to DITA.

Content Reuse
Number collecting data: 24%

Collection method:
Manual: 58%
Automated: 34%
Anecdotal: 8%

Used to:
Educate management: 30%
Plan projects: 23%
Compare to previous years: 19%
Evaluate staff: 15%
Justify resources: 8%
Change priorities: 4%

Shared with:
Management: 46%
Team: 27%
Others: 27%
Another important driver for moving to DITA, and an influencing factor on translation costs, is content reuse; in fact, reuse is the metric reportedly most used to educate management (likely as reassurance that the investment in DITA is paying off). However, while 65 percent of departments track translation costs, fewer than a quarter actually gather reuse metrics, and, in apparent contradiction to its primary purpose of educating managers, these metrics are also the least likely to be shared up the management chain.
A significant reason for the low number of people measuring content reuse is that it seems to be the most difficult metric for which to gather meaningful measurements. There are many accepted methods (Figure 9), but only one is “easy”: the report generated by the authoring environment. Further, each of these methods can give vastly different values, making it difficult to report a consistent, meaningful metric. In fact, every single person who said they were measuring reuse also said they neither have the volume of data they need nor feel comfortable with the accuracy of the data they do have.
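As an illustration of why the numbers diverge (these formulas are our own shorthand, not definitions used in the survey), two common ways to express reuse can give very different answers for the same content library:

  reuse by reference = reused topic references / total topic references in published maps
    for example, 40 shared references out of 100 total = 40 percent
  reuse by topic = topics appearing in more than one deliverable / total unique topics
    for example, 60 multi-use topics out of 300 unique topics = 20 percent

Unless a department states which denominator it is using, a reported reuse percentage is difficult to compare against targets or against other teams.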
It is possible to get meaningful reuse metrics, however. We heard from one person, for example, who had so much success measuring reuse that the company stopped tracking it after two years of clear data showing they were exceeding their targets. Similarly, another person found that their hours per deliverable decreased more than 40 percent after their move to DITA, apparently due to content reuse.

Team Satisfaction
Number collecting data: 35%

Collection method:
Automated: 52%
Manual: 24%
Anecdotal: 24%

Used to:
Compare to previous years: 32%
Evaluate staff: 32%
Educate management: 16%
Change priorities: 6%
Justify resources: 6%
Plan projects: 6%

Shared with:
Management: 65%
Team: 27%
Others: 8%

Although most people seem focused on product metrics, such as quality and efficiency, we found that companies also place value on ensuring employee satisfaction. Hiring and training new people is a significant investment, so it is important not to overlook the overall health of the team creating the product. While most product-related metrics are tracked by more people, it is interesting to note that employee satisfaction is still more frequently tracked than development costs.
Thirty-five percent of members said they are tracking employee satisfaction, largely through automated data collection methods (employee satisfaction had the highest level of automation of any metric in the study). Many companies conduct an annual employee survey, which provides year-over-year comparisons, making employee satisfaction the metric most often compared with previous years’ performance.

The Importance of Gathering Data
There is no question that accurate data in a variety of areas together paints a clear picture of the strengths and weaknesses of a department and influences management decisions in important areas such as:
  • Hiring and training of team members
  • Tool acquisition
  • Outsourcing and consulting services
  • The number and types of deliverables created
  • Schedules and project plans

Arguably, metrics play an even more important role in establishing the value of the department to upper management.
The CIDM Metrics survey revealed that growth opportunities exist for our members to further mature in data collection and analysis. Establishing and following industry best practices for metrics will enable members to close the gap between the data they gather and the data they need. Fortunately, the vast majority of participants in the study recognize the need and are eager to establish a “data culture”, one where metrics play an important role in decision making.
Look for events in 2019 designed to help mature these processes, including the Best Practices conference in September, which will focus on metrics as its primary theme, and our new process maturity learning series, which will provide practical tools and suggestions for what data to gather, how to gather it, and what to do with it once you have it.