Chona Shumate, Cymer, Inc.
As companies work with tighter margins and increasing pressure from stakeholders for operational efficiency, being data-driven has become essential. Knowing your numbers is especially critical for technical publication organizations, which must continue to demonstrate value and results lest they become victims of outsourcing because their impact on the bottom line isn't quantified. Tech Pubs groups have always been challenged to develop meaningful metrics that techcom-illiterate executives can understand and associate with company success.
In a previous article in the Best Practices newsletter, “Influencing Change: Negotiating vs. Building a Vision,” December 2008, Vol. 10, Issue 6, pgs. 137, 140-150, I wrote about how to develop data to demonstrate need and build compelling arguments for additional resources. Fortunately, using that data, my team succeeded in securing a content management system (CMS) to fulfill demand and commitments. But I learned that securing a CMS is only half the story. Data to demonstrate the return on that investment is even more important. It is no longer good enough to be merely data-driven; companies now look for managers to be results-driven, with solid metrics to quantify those results. We should remember that data is a valuable tool in establishing trust, so that the next time funding is needed, there is a proven record to build confidence in funding decisions. In this article, I take a comparative look at some of the metrics we used over the years and how we reassured management that their investment was not only sound but delivered positive results, way positive results!
Measuring Upon a Promise When Life Was Good
A year ago, we had 10 people in our tech pubs group, and we were successful in securing funding for a DITA-compliant, integrated CMS. That was no small investment for my executive management to make in a small department. The data presented was compelling, but so, too, was our commitment. If we were funded for a CMS, we promised to eliminate manual processes, streamline our workflow, and produce more information faster and with improved quality, all with, at most, two additional contractors.
In Q3 2009, I was asked to summarize our accomplishments using the new CMS. Yet, earlier this year, when the economy tanked, a series of reductions in force (RIFs) cut the team by 50 percent, to five people. The RIF posed what I had thought was a tough challenge in measuring the success of our new authoring environment, given that we were down to two writers instead of five, with no contractors. As the data demonstrates, we were not only able to leverage our new system to meet current demand, but we were also able to take on the daunting task of a new, massive flagship product while still supporting our install-base products, with 50 percent fewer people! As it turned out, my company got more bang for its buck from our CMS tool than any of us had imagined. I learned that data captured before a change can be invaluable after the change, when your management wants comparative analysis to validate your claims.
Comparative Metrics – The Surprises of Before and After
In my previous article, I wrote about the years-long struggle in deriving data to convince my management of our needs. The following series of comparative metrics shows how, over the years, collected data can be used to demonstrate clear achievement when investment is made in response to need. To upper management, comparative metrics are key to building confidence in making those investments. To Tech Pubs managers, demonstrating this return on investment should be the second half of the equation, just as critical as the first set of data when seeking funding.
The illustration in Figure 1 is from 2006, showing the progression of productivity levels with decreasing headcount from 1997 through 2006. The most important metric was the production cycle, or throughput: how long it takes us to release an information product. The rate of change and turn-around had become critical in our work. At the time of this illustration, we had improved our processes (not yet our tools) to achieve a reduction from 10 days to 3 days with fewer people. As you’ll see later, this metric was blown out of the water in successive rounds of measurement: we eventually reduced our production cycle time down to minutes. Note that the team later grew to 10 people in 2008. But, read on! The story gets even better.
In 2008, I used the data in Table 1 to give my management options for what could be done with a CMS and without one, coupled with varying degrees of additional headcount. Our forecasted metrics told us that by implementing a CMS, with four additional people, we could tackle the new products coming on board, kill the backlog of projects, and take on numerous un-resourced projects sitting in the queue. Once our integrated CMS was fully implemented, we found that it actually allowed us to work on our new emerging-technology product as well as our backlog of current products. We even completed some of the more critical pending projects. In 2009, this work was being done with 50 percent fewer resources!
In Table 2 (Q4 2008), another forecasted comparison showed what was achievable given our workload, with or without a CMS. Obviously, with the RIF, we did not secure the two resources we had forecasted, yet our forecasted improvement from 2 days to 2 hours of production time actually netted a per-topic cycle time measured in minutes, not hours.
We exceeded estimated production cycle time, which is now approximately 12 minutes per topic.
The 2008 illustration in Figure 2 shows the calculation of this metric in dollars, estimating a 75 percent reduction in production time based on 8 hours down to 2 hours. As mentioned previously, our current production cycle time resulted in approximately 12 minutes, not 2 hours.
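For readers who want to reproduce the arithmetic behind these figures, here is a minimal sketch. The function name is mine, not from our spreadsheet; the numbers are the ones cited above (8 hours forecast down to 2 hours, and an actual result of roughly 12 minutes).

```python
# Back-of-the-envelope check of the production-time reductions cited
# in the article. All times are expressed in minutes for consistency.

def percent_reduction(before_minutes, after_minutes):
    """Percentage reduction from a baseline time to an improved time."""
    return 100 * (before_minutes - after_minutes) / before_minutes

forecast = percent_reduction(8 * 60, 2 * 60)  # forecast: 8 hours -> 2 hours
actual = percent_reduction(8 * 60, 12)        # actual: 8 hours -> ~12 minutes

print(forecast)  # 75.0  (the 75 percent reduction estimated in Figure 2)
print(actual)    # 97.5  (the reduction we actually realized)
```

The gap between the 75 percent forecast and the roughly 97.5 percent actual reduction is exactly why the before-and-after comparison proved so persuasive.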
In Figure 3, in our forecasted data, we estimated that without a CMS, headcount had to grow exponentially with each new product to support. Starting with seven people in 2007, we projected a minor increase in headcount needed with a CMS. The resulting headcount for “with a CMS” was considerably less than what we anticipated since we were cut down to five people. Yet, we were also able to manage current products and new products.
Finally, in Figure 4, we forecasted the resources needed to support our new product coming on line. After the RIF, my concern was how to quantify what two writers could achieve over what period of time.
Based on our earlier, non-CMS system, compiled data told us that each writer averaged three procedures per week. This included research, interviews, draft development, and technical review. The content is highly technical and dense, ranging from 3 to 30 pages (or more). We estimated that 126 new procedures would be required for the initial doc set to support field servicing of the product. The graph in Figure 4 uses this metric to demonstrate how long it would take two to five writers to complete 126 procedures. My intent was to show why we needed additional contractors if the documentation was due by the end of the year. With my current two writers, the formula calculated six months of development. Yet, three months into the project, our database confirms that these two writers have developed 103 of the 126 procedures. That is 82 percent of the total procedures either completed or in draft, only 50 percent of the way through the project! In other words, with our CMS, two writers achieved what our non-CMS data estimated would require four writers.
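The forecast behind Figure 4 can be sketched as a simple throughput formula. This is an illustrative reconstruction, not our actual planning tool; the inputs are the article's numbers (3 procedures per writer per week, 126 procedures required).

```python
# Forecasting documentation effort from a measured baseline throughput,
# as in the pre-CMS planning data behind Figure 4.

def weeks_to_complete(total_procedures, writers, procs_per_writer_per_week=3):
    """Forecasted weeks to finish the doc set at the baseline rate."""
    weekly_output = writers * procs_per_writer_per_week
    return total_procedures / weekly_output

# Two writers at the pre-CMS rate: 126 / (2 * 3) = 21 weeks, i.e. roughly
# five to six months, matching the six-month forecast in the article.
print(weeks_to_complete(126, 2))  # 21.0
# Four writers halve the schedule, which is why contractors were requested.
print(weeks_to_complete(126, 4))  # 10.5
```

The actual post-CMS result, 103 procedures in three months from two writers, is what broke this linear model: the CMS roughly doubled per-writer throughput over the non-CMS baseline.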
Our “before and after” data is impressive, given that we had not anticipated a 50 percent reduction in force when we compiled the data prior to implementing our CMS tool. The RIF actually provided conclusive proof that 50 percent of the resources achieved almost 100 percent of the required productivity. What is key here is that although we severely underestimated the operational efficiency gains from our CMS, we could not have known that without the original metrics.
The Positive Results “By-Products”
The results “by-products” for my team actually went beyond the data, into less quantitative areas of improvement. Roles and responsibilities are now very focused and specific. Our metrics are more accurate and better automated. ROI can be calculated when needed. We can measure our processes, workflows, and output down to a fine granularity. We transitioned hours formerly spent on manual processes to document redesign, strategic roadmapping of our tools and technology, and new venues for information delivery. In addition, we’ve raised our visibility and value to other organizations. We now support manufacturing, delivering unconventional new information products. Even IT has become a closer ally, and better understands our technology and the productivity potential it unlocked for us. These results, combined with our comparative post-CMS data, presented an irrefutable conclusion of success.
As a manager, a bottom-line goal is not only making good on promises but confirming the trust and confidence I asked of my executives. Headcount is the highest cost to a company. Data that demonstrates improved throughput with fewer people is a powerful message. Although we did not expect the extent of resource savings and productivity gains that we ultimately achieved, we did prove that we knew where our constraints were and that our proposed solution worked. Our integrity is proven not only as a data-driven organization but as a results-driven one. Capturing data on as-is processes and tools early in the process, prior to securing new resources, can be invaluable in demonstrating positive, results-driven proof of success.