CIDM

June 2012


Measuring Productivity


Pam Swanwick and Juliet Wells Leckenby, McKesson Inc.

Every manager struggles to balance writer workload and project capacity. A simple spreadsheet-based system can help you objectively evaluate assigned tasks, task time and complexity, special projects, and even writer experience levels to more accurately assess individual workload and capacity. The result is a simple, but useful, representational graph.

In addition to measuring current team capacity and productivity, this method also provides objective metrics to better estimate future project capacity and to support performance evaluations for individual writers.

Productivity is Relative

Metrics are a necessary part of a manager’s job. We need to be able to identify high- and low-performing writers, realistically balance workloads, prove our productivity to upper management, and justify requests for additional headcount.

As a manager of a team of writers, what metrics can you use to realistically project your team’s capacity? How can you evaluate your team’s productivity rate? How can you assess the productivity of an individual writer compared to the rest of your team?

Research indicates that no industry standards are available for technical writer productivity rates. Some practices, such as page counts, have proven to be counter-productive in our experience. If a writer is evaluated by number of pages, page counts may tend to increase to the detriment of quality. In many projects, reducing page count should be the goal. Page counts also do not take into account the varying complexity levels of different deliverables; realistically, it takes longer to produce a page of highly technical material, compared to simple user-based help topics.

Measuring time spent on projects is also not a good practice. Writers might put in long hours, but how do you measure how productive they are? How do you identify a writer who is handling twice the work in half the time?

In reality, all performance evaluations of technical writing are subjective. However, if your team is working on related projects with similar outputs, it is possible to develop standard metrics to evaluate writer productivity, relative to a project’s standard deliverables and to other team members.

For this method to succeed, managers must carefully evaluate the values used in this measurement system and customize the inputs and calculations as appropriate for their specific groups.

This method does not evaluate the quality or usefulness of documentation.

Relative Variables Within Our Team

Some teams create such diverse deliverables that no objective measurement is possible. However, our team of 20 writers produces standardized and consistent deliverables that can be reasonably compared.

  • Our team uses standard templates so that we can compare like to like.
  • Our team’s deliverables are limited and consistent across products (online help in HTML format, technical references in PDF format, quick start guides in Word format, and release notes in PDF format).
  • We trust the team to provide accurate assessments of deliverable size, complexity, and percentage of new or updated content. In some cases, we independently verify their numbers.

Relative Productivity Can Be Measured

Using the methods below, we can reasonably assess productivity in three areas:

  • Current writer workload relative to the team
  • Past performance of a writer
  • Future team capacity

In this article, we discuss evaluating writers’ current workloads relative to the team. However, you can adjust the spreadsheet formulas to measure past individual performance or future team capacity.

To measure productivity, we perform these tasks:

  1. Gather data
  2. Calculate work units
  3. Normalize the data
  4. Account for special projects
  5. Normalize the data again
  6. Account for job grade

We use this basic formula to calculate workload:

(# topics or pages) x (complexity of deliverable) x (% of change)

+ (% time spent on special projects)

x (job grade)

Let’s break it down.

Gathering Data

The team maintains a tracking spreadsheet for several reasons. The main reason is to track the progress of deliverables against established milestones. Among the inputs to the tracking spreadsheet are the following data points, entered by the writers and verified by the manager, if necessary:

  • Number of topics (for a help project) or pages (for a Word document or PDF). Early in the document lifecycle, this number is an estimate.
  • Complexity of the deliverable. Our team assigns a numeric value from 1 to 3, although you might develop more nuanced values. For example, we might assign release notes a value of 1 and a technical reference manual a value of 3.
  • Percentage of new or substantially revised content. For example, we assign a value of 100 percent to a document that must be written from scratch; we might estimate a value of 10 percent for minimal updates to an existing document.
  • Special projects. The writers record the percentage of time they spend on special projects. Special projects are an optional measure. Most of our writers volunteer for special projects in addition to their assigned deliverables (for example, updating standards and style guides).

Figure 1 is an example of the spreadsheet in which the writers enter the data for their projects.


Figure 1: Spreadsheet Example
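
For readers who prefer to see these data points in code form, the following is a minimal sketch of how the entries might be represented. All writer names, deliverable names, sizes, and percentages are hypothetical and are not taken from our actual spreadsheet:

```python
# Hypothetical data points for two writers; all names and numbers are illustrative.
deliverables = [
    # writer, deliverable, size (topics or pages), complexity (1-3), fraction new or changed
    {"writer": "Writer A", "name": "Online help",         "size": 400, "complexity": 2, "pct_change": 0.50},
    {"writer": "Writer A", "name": "Release notes",       "size": 20,  "complexity": 1, "pct_change": 1.00},
    {"writer": "Writer B", "name": "Technical reference", "size": 300, "complexity": 3, "pct_change": 0.25},
]

# Percentage of time each writer spends on special projects, recorded separately.
special_projects = {"Writer A": 10.0, "Writer B": 0.0}
```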

The final data point is equally important, but not entered by the writers in the spreadsheet:

  • Job grade (for example, entry level, mid-level, senior). Expectations are (and should be) different for each of these levels.

Calculating Work Units

For each deliverable, we multiply these inputs as shown in Figure 2:

(# of topics or pages) x (complexity of deliverable) x (% of new or changed content)


Figure 2: Multiplied Inputs

We call the resulting number a work unit. We total each writer’s work units so that all individual deliverables are included. Now each writer has a number that reflects his or her total workload from all deliverables as shown in Figure 3.


Figure 3: Work Units
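
As an illustration only, here is the same calculation in a short Python sketch. The sizes, complexities, and percentages echo the hypothetical data shown earlier, not our actual projects:

```python
from collections import defaultdict

# Work units per deliverable: size x complexity x fraction of new or changed content.
deliverables = [
    # (writer, size in topics or pages, complexity 1-3, fraction new or changed)
    ("Writer A", 400, 2, 0.50),
    ("Writer A", 20, 1, 1.00),
    ("Writer B", 300, 3, 0.25),
]

totals = defaultdict(float)
for writer, size, complexity, pct_change in deliverables:
    totals[writer] += size * complexity * pct_change

print(dict(totals))  # {'Writer A': 420.0, 'Writer B': 225.0}
```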

Normalizing the Data

The next step is to calibrate the team’s average productivity in terms of total work units. You can derive this number using several methods, such as adding the team’s total work units and dividing by the number of writers.

However, we prefer a more subjective approach that takes into consideration the productivity level we want our writers to achieve as a team. For example, if we have 12 writers, we identify three or four writers who consistently meet the average level of productivity we expect from the team and then average their work units.

Yes, this is subjective, but in this way we can adjust for current working conditions, such as an atypical sprint to meet a tight deadline or a lull in company activity.

Determining the Productivity Factor

We take the total work units for those three or four writers and determine what number we need to divide by to make their numbers close to 100; in other words, the expected productivity is 100 percent. If Writer X has a total workload number of 1,400, dividing by 14 gets us to 100 (1400 / 14 = 100). Thus 14 becomes the productivity factor by which we divide all writers’ total work units.
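
A minimal sketch of that calibration step, assuming three hypothetical benchmark writers whose work-unit totals represent the productivity we expect:

```python
# Total work units for the three or four writers chosen as the benchmark.
# The numbers are illustrative.
benchmark_totals = [1400, 1350, 1450]

# Find the divisor that brings the benchmark average to roughly 100 (that is, 100 percent).
productivity_factor = sum(benchmark_totals) / len(benchmark_totals) / 100
print(productivity_factor)  # 14.0, as in the 1400 / 14 = 100 example above
```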

Applying the Productivity Factor

The next step is to divide each writer’s total work units by the productivity factor you have established:

(writer’s total work units) / (productivity factor)

The resulting number is each writer’s initial workload, as shown in Figure 4. A competent, mid-level writer’s workload should be around 100 percent; if it is not, reassess your calibration numbers.


Figure 4: Gauging Initial Workloads
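
Expressed as code, the division looks like this. The totals here are illustrative and follow the 1,400 example above rather than the small two-deliverable sketch:

```python
# Initial workload (as a percentage) = total work units / productivity factor.
totals = {"Writer A": 1400.0, "Writer B": 1120.0}  # illustrative totals
productivity_factor = 14.0

initial_workload = {w: units / productivity_factor for w, units in totals.items()}
print(initial_workload)  # {'Writer A': 100.0, 'Writer B': 80.0}
```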

Accounting for Special Projects

Next, we add the special projects percentage to the writer’s initial workload percentage as shown in Figure 5:

(initial workload %) + (special projects %)


Figure 5: Accounting for Special Projects
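
Continuing the same illustrative numbers, adding the special-projects percentage is a simple sum:

```python
# Add each writer's special-projects time (in percent) to the initial workload.
initial_workload = {"Writer A": 100.0, "Writer B": 80.0}
special_projects = {"Writer A": 10.0, "Writer B": 0.0}

workload = {w: pct + special_projects.get(w, 0.0) for w, pct in initial_workload.items()}
print(workload)  # {'Writer A': 110.0, 'Writer B': 80.0}
```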

Normalizing the Data Again

At this point, we usually normalize the numbers again to bring the average back to near 100, as shown in Figure 6. In this case, we multiply all numbers by 0.8.


Figure 6: Data Normalization
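
In code, the second normalization is just another multiplication. The workload figures below are fresh illustrative numbers whose average sits above 100, and the 0.8 multiplier matches the example in Figure 6:

```python
# Rescale all workloads so the team average lands back near 100.
workload = {"Writer A": 140.0, "Writer B": 110.0}  # illustrative, averaging 125
normalization = 0.8

normalized = {w: round(pct * normalization, 1) for w, pct in workload.items()}
print(normalized)  # {'Writer A': 112.0, 'Writer B': 88.0}; the average is now 100
```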

Accounting for Job Grade

Job grade is the final metric we factor in. We assign a multiplier value to each job grade to quantify the assumption that senior writers are expected to be more productive and maintain a heavier workload than junior writers. For a junior writer, we set the multiplier at 1.0; the mid-level writer multiplier is 0.9, and the senior writer multiplier is 0.8.

The final calculation is:

(total workload) x (job grade multiplier)
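
A small sketch of this final step, using the multipliers above and hypothetical grade assignments:

```python
# Multipliers by job grade; seniors carry the smallest multiplier
# because more output is expected of them.
grade_multipliers = {"junior": 1.0, "mid": 0.9, "senior": 0.8}

workload = {"Writer A": 112.0, "Writer B": 88.0}       # illustrative workloads
grades = {"Writer A": "senior", "Writer B": "junior"}  # illustrative grades

final_workload = {w: round(workload[w] * grade_multipliers[grades[w]], 1) for w in workload}
print(final_workload)  # {'Writer A': 89.6, 'Writer B': 88.0}
```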

Graphing the Results

We plot the resulting value for each writer on a bar chart, as shown in Figure 7. We consider values within 90-110 percent to be an acceptable range.


Figure 7: Graphing the Results
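
A minimal plotting sketch, assuming matplotlib is available and using illustrative final workload numbers; the dashed lines mark the 90-110 percent band we consider acceptable:

```python
import matplotlib.pyplot as plt

# Illustrative final workload percentages for four writers.
final_workload = {"Writer A": 89.6, "Writer B": 88.0, "Writer C": 104.0, "Writer D": 123.0}

plt.bar(list(final_workload), list(final_workload.values()))
plt.axhline(90, linestyle="--", color="gray")   # lower bound of the acceptable range
plt.axhline(110, linestyle="--", color="gray")  # upper bound of the acceptable range
plt.ylabel("Workload (%)")
plt.title("Relative workload by writer")
plt.show()
```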

Balancing Workload

What do you do about writers who are significantly above or below 100 percent? There are three adjustments you can make to change the percentage:

  • Shift deliverables from an overloaded writer to an under-loaded one
  • Increase or decrease a writer’s participation in special projects
  • Promote an overloaded junior or mid-level writer

The spreadsheet is very useful as a simulation tool to test various workload scenarios. Move a project to a different writer and check the graph again. Decrease someone’s participation in a special project, or evaluate how the balance changes if a junior writer is promoted. The graph immediately shows how individual adjustments impact the entire team.
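
For example, here is a hypothetical what-if in the same spirit: move a 200-work-unit deliverable from an overloaded writer to an under-loaded one and recompute the workload percentages. Special projects and job grade are omitted for brevity, and the numbers are illustrative:

```python
# Reassign one deliverable and recompute workloads with the same productivity factor.
productivity_factor = 14.0
totals = {"Writer D": 1720.0, "Writer B": 1120.0}  # illustrative work-unit totals

totals["Writer D"] -= 200.0  # remove the deliverable from the overloaded writer...
totals["Writer B"] += 200.0  # ...and give it to the writer with spare capacity

workload = {w: round(units / productivity_factor, 1) for w, units in totals.items()}
print(workload)  # {'Writer D': 108.6, 'Writer B': 94.3}
```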

How Do Writers React?

We have experienced a range of reactions from writers when presented with their metrics. (We show each writer only his or her own percentage compared to 100 percent and to the team average, not compared to other writers.) Some writers are doing well and are pleased or impressed to see their suspicions confirmed! (“I *knew* I was doing the work of one and a half people!”) Writers with low numbers have occasionally expressed appreciation for the tangible nature of the metrics. Writers who disagree with the numbers or are dissatisfied with the process find the quantifiable nature of the productivity metrics difficult to argue with. In several cases, consistently poor performance numbers have prompted writers to leave the company on their own, sparing us the time, expense, and legal issues associated with terminating an underperforming employee.

Caveat Emptor

We have used variations of the above metrics and calculations for the past several years to accurately and consistently estimate (a) past performance of a writer relative to his or her peers, (b) current workload of each writer relative to the team, and (c) future team capacity. However, we cannot overemphasize that what we have described is ultimately a subjective process. It must be tailored by each documentation manager to suit the needs and conditions of the specific team. Perhaps you have other factors to consider: Should different work products be evaluated using different criteria? Should writer location be a factor? Are there other metrics to consider, such as compliance requirements?

We have proven that this system works for our team:

  • We never miss deadlines.
  • Our productivity rate is very high compared to similar documentation teams working on similar products with similar deliverables. This claim is based on the fact that our team has expanded to encompass three existing products and associated writers from within our company. By using this system to periodically evaluate the added writers, we have reasonably objective evidence of each writer’s performance levels and their workloads over time.
  • We have used the resulting metrics to successfully identify and cull low performers, set reasonable workload expectations for all writers, and identify and promote top performers.
  • We usually have adequate staffing because we can provide reasonably objective metrics to management when requesting staffing adjustments.

By objectively measuring what we can and consistently comparing what can be compared but not easily measured, we have made this system for measuring productivity work for us. We hope you can make it work for you, too.

NOTE: For business use, organizations may download and modify the spreadsheet application we’re showcasing in this article, as long as the following information is stated in the spreadsheet properties: Source: Pam Swanwick and Juliet Leckenby, McKesson Inc.

To view and/or download the spreadsheet, visit <http://www.infomanagementcenter.com/Resources/ProductivitySpreadsheet-2012.xlsx>


Pam Swanwick

McKesson Inc. 

pam.swanwick@mckesson.com

Pam Swanwick has worked as a technical writer for more than 20 years, primarily in technology industries. For the past dozen or so years, she has focused on medical software at McKesson Inc. She managed one of McKesson’s product documentation teams for five years.


Juliet Wells Leckenby

McKesson Inc.

juliet.leckenby@mckesson.com

Juliet Wells Leckenby has worked as a technical writer for almost 20 years, the last five at McKesson. She served as a team lead under Pam and is now the manager of the documentation team.
