Productivity Counts: Measuring the effectiveness and productivity of your team members



June 2010


JoAnn Hackos, Comtech Services, Inc.

Revised and updated from the original in 1999.

“I have the most productive technical publications group in the valley,” Sam boasted. “We produced documents at a rate of one hour per page.” “How so?” I asked. “That’s way below industry average.” “Well, don’t tell anyone, but we counted all the pages we reproduced and shipped that came directly from the engineering group.”

Not only was Sam cheating by counting pages his writers had merely reproduced, not written, he was digging his department into a hole. He needed to improve the productivity of his writing staff. They were well known in the company for producing masses of useless information, much of it full of errors. But any effort to improve the research and writing done by his department would make his productivity numbers look worse.

Measuring productivity means figuring out how many goods and services are produced and how many people it takes to produce them. If the number of people working increases, the number of goods and services they produce should increase as well.

How do we bring this concept home to technical communication? Technical writers produce documents—lots of them. We could measure how much they produce by simply counting the documents. If Harry creates three new manuals in a year and Sally creates five, that means their average output is four new manuals each. If Harry produces four new manuals next year but Sally only produces three, that might indicate that their average productivity has decreased from four manuals to three and a half.

We all recognize that simplistic measurements like counting the number of manuals produced give us some information about our department’s productivity but not much. So we start looking for more meaningful measurements.

Measurement One—the number of documents produced by our organization per year, including new and revised documents. The goal might be to produce more successful documents with the same staff.

We have traditionally counted pages rather than documents, believing that by counting pages, we even out the productivity measure. Sally may write five new manuals and Harry three in a year, but if Sally’s manuals have 50 pages each and Harry’s have 150 pages, Harry is clearly producing more pages and more words than Sally. That makes Harry more productive.

“Wait just a minute,” argues Sally, “my pages are a lot more difficult to write than Harry’s. It’s not fair to count all pages as if they were the same.” If we want to be fair to Sally, we have to add a complexity measure, giving her extra credit for producing difficult-to-write pages.

Measurement Two—the number of pages produced by our organization per year, new and revised, with a complexity weighting factor added in. The goal might be to increase the number of weighted pages produced by the same number of people.
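To make the weighting concrete, here is a minimal sketch in Python. The complexity categories and weights are illustrative assumptions, not a standard; substitute whatever categories fit your own content.

  # Sketch of a complexity-weighted page count.
  # Categories and weights are illustrative assumptions, not a standard.
  COMPLEXITY_WEIGHTS = {
      "reference": 1.0,   # straightforward reference pages
      "procedural": 1.5,  # task-oriented pages that need research
      "conceptual": 2.0,  # difficult explanatory material
  }

  def weighted_pages(page_counts):
      """page_counts maps a complexity category to pages produced."""
      return sum(COMPLEXITY_WEIGHTS[kind] * pages
                 for kind, pages in page_counts.items())

  # Using the article's numbers: Sally's five 50-page manuals of hard
  # material outscore Harry's three 150-page manuals of easy material.
  print(weighted_pages({"conceptual": 250}))  # Sally: 500.0
  print(weighted_pages({"reference": 450}))   # Harry: 450.0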

Software developers face the same dilemma when they count lines of code produced per person per year. A sloppy, disorganized programmer might produce more lines of code than a well-organized, effective programmer, even though they both produced the same number of functions. As a result, software development measures functions produced instead of lines of code. Functions refer to parts of the program that do something useful, such as sending text to a printer, drawing a graphic dynamically, performing a mathematical calculation, or searching a database.

In technical communication, what can we measure that might be similar to functions in software? We could measure how many user tasks are successfully explained by our writers. Counting user tasks in the documentation is easy; measuring success is likely to be more difficult. We might begin with “errors.” Errors are reasonably easy to detect if we verify the accuracy of the documentation by testing it against the product. Aside from outright mistakes, however, measurement becomes more difficult. We might use usability testing of a writer’s work to see if typical users find the task-oriented information easy to follow. The more problems the users have interpreting the written instructions and background information, the less productive the writer has been.
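To make “success” measurable, one plausible scoring scheme (my assumption, not a standard) counts a documented task as successful only if it verified against the product without errors and survived usability testing:

  # Sketch: count successfully explained user tasks per writer.
  # The success criteria (no verification errors, passed a usability
  # test) are one possible definition, not an industry standard.
  from dataclasses import dataclass

  @dataclass
  class DocumentedTask:
      writer: str
      verification_errors: int    # errors found testing docs against product
      passed_usability_test: bool

  def successful_tasks(tasks):
      counts = {}
      for t in tasks:
          if t.verification_errors == 0 and t.passed_usability_test:
              counts[t.writer] = counts.get(t.writer, 0) + 1
      return counts

  tasks = [
      DocumentedTask("Harry", 0, True),
      DocumentedTask("Harry", 2, True),   # accurate prose, but errors found
      DocumentedTask("Sally", 0, True),
      DocumentedTask("Sally", 0, False),  # users could not follow it
  ]
  print(successful_tasks(tasks))  # {'Harry': 1, 'Sally': 1}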

“Well,” I already hear you arguing, “what about the complexity of the information and the level of knowledge and experience the user brings to the task? And what about the product? What if it’s so difficult to use that no one can explain it?” You’re right! But if we spend a lot of time producing information that is not useful (or products that aren’t easy to use), we certainly aren’t going to get any rewards in the long run.

Measurement Three—the number of successful sets of functional information delivered to the users each year. The goal might be to deliver more successful functionality each year in proportion to the number of people on the staff.

We can also measure writer productivity by looking at the amount of extra work for others in the department that an individual writer generates. For example, Dan is an experienced and highly competent senior writer. He follows templates scrupulously and knows the department’s style standards by heart. When his work goes to Mike in the editing group, Mike knows he can plan a quick review, handing minor comments back to Dan. When the production team gets Dan’s books and help topics, they know they’ll have only minimal cleanup to get them ready for printing and web publishing. Everyone in the internal support groups benefits from Dan’s work habits. They would call him “very productive.”

In contrast, when the editing team receives documents from Joan, they’re almost afraid to open them. They know that Joan’s work will be full of spelling mistakes (Joan can’t even be bothered to run the spell check program) and grammar errors. They also know that they’ll have to “bleed all over” Joan’s work because she has a difficult time organizing her writing and communicating ideas simply and straightforwardly. The production team also fears Joan because her documents are full of stylistic variations. She changes the standard styles constantly, which means that her files don’t convert cleanly. Joan and Dan produce the same number of pages per year, but Joan is far less productive (and everyone but Joan’s manager knows it).

Measurement Four—the amount of rework staff members generate for themselves and everyone else on the support team. An organizational goal might be to reduce the amount of rework done.
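One simple way to put numbers on rework is to log the downstream hours each writer’s work generates in editing and production and normalize by output. In this sketch, all hour and page figures are invented for illustration:

  # Sketch: downstream rework hours generated per page delivered.
  # All hour and page figures are invented for illustration.
  downstream_hours = {
      # writer: (editing hours, production-cleanup hours, pages delivered)
      "Dan":  (10, 5, 500),
      "Joan": (60, 45, 500),  # same page count, far more rework
  }

  for writer, (edit_h, prod_h, pages) in downstream_hours.items():
      print(f"{writer}: {(edit_h + prod_h) / pages:.2f} rework hours per page")
  # Dan: 0.03 rework hours per page
  # Joan: 0.21 rework hours per page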

Rework could also apply to the relationship between the writers and the developers. Since we generally ask product developers to review draft documents for the accuracy of the information, the amount of rework they generate is another potential measure of our organization’s productivity.

The list of productivity measurements quickly becomes long. We might measure, for example,

  • the total number of hours it takes to move a document through the information development life cycle
  • the percentage of time spent on each phase of development activities (planning, organizing, writing, researching, editing, illustrating, and so on)
  • the number of technical communicators compared with the number of product developers in an organization

Productivity Metrics

Now let’s turn our attention to the information-development process itself and find ways to measure productivity before we have documents to count.

You might recall that our friend Sam boasted of the number of pages his staff turned out in record time. He was down somewhere around one hour per page. But he achieved this remarkable productivity by cheating. He counted work that his documentation team didn’t do.

Sam would have been much better off trying to make his staff more productive as they produced their work rather than focusing only on the output. If he had, he might have found a number of potentially fatal flaws.

Sam’s staff members love to use their desktop publishing tools. They spend many hours creating more and more elaborate page layout schemes. They add cute graphics to spice up the look of the pages and individually craft each two-page spread. In fact, if Sam looked at their overall time expenditures, he would discover that nearly 40 percent of their total project time is consumed by desktop publishing tasks.

To begin, however, Sam needs to measure the total time it takes someone in his department to complete an information-development project. Once he knows more about the total time (taking into account information type, project complexity, and total work volume with new and revised pages), he can begin to break total time down into its components.

Measuring Time on Task

We can obviously measure productivity by looking at the output of our work effort. We can also measure the efficiency with which we perform our tasks. If we can produce the same results with less effort or better results with the same effort, we have also increased our productivity.

Measurement Five—record the time it takes to develop a document or complete a project from the beginning to the end of the information-development life cycle. Total project time gives us a starting point for other measurements that help us improve the time we spend on individual tasks. We begin with total time, and then we break that time into milestones and individual tasks.
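A simple time log is enough to get started. Here is a minimal sketch that rolls task-level entries up into phase and project totals; the phases, tasks, and hours are all hypothetical:

  # Sketch: roll a task-level time log up into phase and project totals.
  # Entries are (phase, task, hours); all values are illustrative.
  from collections import defaultdict

  time_log = [
      ("planning",    "content outline",      16),
      ("design",      "topic specifications", 40),
      ("development", "drafting",            120),
      ("development", "technical review",     24),
      ("production",  "page layout",          80),
  ]

  phase_hours = defaultdict(float)
  for phase, _task, hours in time_log:
      phase_hours[phase] += hours

  total = sum(phase_hours.values())
  for phase, hours in phase_hours.items():
      print(f"{phase}: {hours:.0f} h ({hours / total:.0%} of project)")
  print(f"total project time: {total:.0f} h")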

My recommendation for the percentage of total project time that should be devoted to each phase milestone is shown in Figure 1. These percentages represent a traditional project, one that relies on traditional desktop publishing tools.


Figure 1: Phase milestones

With these guidelines as a starting point, Sam and his team leaders begin looking closely at the time spent reaching each milestone. Right away they realize that the time spent on Phase 4 is much too high. Forty percent of total time for production tasks means that little time is left for information design, content development, and validation testing.

Measurement Six—the percentage of total project time spent on each milestone (outlined in Figure 2). You may want to measure the actual percentage of time spent on each milestone compared to this (or your own) model. We have found, for example, that projects in which the writers spent less than 20 percent of total time on detailed design had problems later. Because detailed design (Phase 2) is so important to ensuring well-planned, well-designed information, cutting its percentage meant design problems during implementation and testing, and the overall project was less successful.


Figure 2: Total project development time
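Once the log exists, comparing actual milestone percentages with the model is mechanical. In the sketch below, the model values are placeholders standing in for Figure 1’s recommendations (only the 20 percent detailed-design floor comes from the discussion above), and Sam’s actuals are illustrative:

  # Sketch: flag milestones that deviate from the model by more than
  # five percentage points. Model values are placeholders for Figure 1's
  # recommendations; the actuals are illustrative.
  model = {
      "Phase 1: planning":        0.10,
      "Phase 2: detailed design": 0.20,
      "Phase 3: development":     0.50,
      "Phase 4: production":      0.20,
  }
  actual = {
      "Phase 1: planning":        0.05,
      "Phase 2: detailed design": 0.10,
      "Phase 3: development":     0.45,
      "Phase 4: production":      0.40,  # Sam's problem area
  }
  for phase, target in model.items():
      gap = actual[phase] - target
      flag = "  <-- investigate" if abs(gap) > 0.05 else ""
      print(f"{phase}: actual {actual[phase]:.0%} vs model {target:.0%}{flag}")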

Since 40 percent of total development time in Sam’s group is spent on Phase 4 tasks, his writers spend almost no time on planning. In fact, Helen is typical of the writers in the group. She claims that she has no time for planning. She just has to get started writing, even though her manager has asked her to turn in a content outline. But Helen feels that she can be more productive if she gets started. She’s been heard to argue that she can have the whole project done in the amount of time it would take her to create a plan.

Dan is perfectly willing to develop content plans; he just bases them entirely on the product specifications. He talks about getting out to user sites—the company even promotes doing so. He’s just too busy. He can take the programmer’s specs and whip out the documentation in record time. In fact, getting the content out of the way quickly means that he can spend more time with his real love—putting everything into a help system. He is an expert at playing with the help tools. He knows all about secondary windows, hypertext links, pop-ups, and so on. If anyone looked closely, they’d discover that Dan spent 25 percent of his time on content and 75 percent on converting the text into a help system.

If Sam moved his department to an XML-based authoring and production system using the DITA standard, he could virtually eliminate all of the production time, leaving sufficient time for planning, content development, and understanding the user. On such projects, teams typically find that the overall time decreases even with the addition of new, value-added tasks to the project life cycle.

Measurement Seven—the percentage of time spent on each key project task. You may also want to measure the percentage of time spent on key tasks. For example, if you have established a percentage for validation testing, you should ensure that this time is actually used for testing. If testing is cut, you are likely to have customer satisfaction problems later.

Reclaiming Writing Time

Several years ago, Cadence Design Systems instituted a practice it called “Reclaiming Writing Time.” The company asked every team to evaluate how much time it spent on activities that did not add value for the customer. The focus of the evaluation was to increase the percentage of time spent on developing content that customers needed. At the same time, teams had to decrease the time spent on other activities, because the total project time could not change.

The project goal was to reduce the time spent on non-value-added activities by 5 percent per quarter. Teams decreased the amount of time spent attending meetings, asked for and received faster equipment, reduced production time, and so on. They found ways to increase the percentage of their time spent producing value for the customer.
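The arithmetic compounds nicely over a year. Reading the goal as five percentage points per quarter and assuming a starting point of 40 percent non-value-added time (both readings are mine, for illustration):

  # Sketch: a 5-point-per-quarter reduction from an assumed 40%
  # non-value-added baseline. Both numbers are illustrative.
  non_value_share = 0.40
  for quarter in range(1, 5):
      non_value_share -= 0.05
      print(f"Q{quarter}: {non_value_share:.0%} non-value-added, "
            f"{1 - non_value_share:.0%} on customer value")
  # By Q4: 20% non-value-added, 80% spent on customer value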

In another case, the information-development team had conducted an extensive study of its process in an effort to find ways to streamline it. They were able to reduce the total number of steps in the information-development process by eliminating redundant and unnecessarily time-consuming activities.

Today we have technology that helps us reclaim writing time: XML-based authoring, which eliminates desktop publishing from the life cycle. As a result, we have organizations that have demonstrated remarkable productivity increases. They have measurements that track the reuse of topics among projects so that previously developed, approved, and translated content does not need to be recreated. Writers plan their projects as collaborative teams with the assistance of their information architects. During the planning phase, the team knows which topics already exist and need no change, which topics already exist and need major or minor modifications, and which topics need to be written from scratch. Productivity within such collaborative teams is remarkable.
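As a rough illustration of the planning-phase arithmetic that such reuse tracking enables (the topic counts and categories below are hypothetical):

  # Sketch: classify planned topics by how much work they need and
  # compute the reuse share. Counts and categories are hypothetical.
  plan = {
      "reuse as-is":       120,  # approved (and translated), unchanged
      "minor revision":     30,
      "major revision":     15,
      "write from scratch": 35,
  }
  total = sum(plan.values())
  reused = plan["reuse as-is"] + plan["minor revision"]
  print(f"{reused / total:.0%} of topics need little or no new writing")
  # 75% of topics need little or no new writing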

 JoAnn Hackos

Comtech Services, Inc.

joann.hackos@comtech-serv.com

Dr. JoAnn Hackos is President of Comtech Services Inc., a content management and information-development consultancy she founded in 1978. She is Director of The Center for Information-Development Management (CIDM), a membership organization focused on best practices in content management and information development. Dr. Hackos is a founding member with IBM of the Technical Committee for DITA at OASIS and co-editor of the DITA specification. She has been a leader in content management for technical information for more than 30 years, helping organizations move to structured authoring, minimalism, and single sourcing. She introduced content management and single sourcing to the Society for Technical Communication (STC) in 1996 and has been instrumental in developing awareness worldwide of the DITA initiative. She hosts or keynotes numerous industry conferences and workshops in the field.
