Bill Hackos, Comtech Services, Inc.
The 13th annual Best Practices conference is fast approaching. This year’s conference is in San Antonio, Texas, September 12, 13, and 14, with a welcoming reception on Sunday evening, the 11th. As you may know, each year we select a conference theme and theme book. The theme for the San Antonio conference is measurement, although a significant portion of the conference will cover other topics. The theme book is How to Measure Anything: Finding the Value of “Intangibles” in Business by Douglas W. Hubbard (Wiley, 2010). This year we are also privileged to have Mr. Hubbard as our keynote speaker. We urge all conference participants to read the book in advance. You can purchase the book from Amazon in either paper or Kindle format. It is my duty each year to write a review of the theme book.
Most of us in business bring to our careers an education that is not in science or engineering. At the same time, we recognize that the more accurate and precise the data and information we bring to our business decisions, the better those decisions can be. As publications managers, we are middle managers and must win approval from our upper managers in the form of business cases. Too often, however, we fall back on the excuse that many questions in the technical writing discipline are beyond measurement. So we rely on gut instinct, or we hire experts in the field to help us measure our process, progress, and results.
As publications managers, we have many decisions to make in the course of our jobs. Should we add staff? What about offshoring? What are the best software tools for our staff? Do we need training? What kind of training? Should we embrace the new technologies? Structured writing? DITA? Do we need a content management system? If so, which one is best for us? What are our requirements for acquiring new technologies? How do we best interface with other customer-facing organizations within our company? Are we more efficient with contract writers or permanent employees? Are we keeping sufficient data to measure our return on investment as we make management changes or adopt new technologies?
As managers, we need to know quantitatively where we are now and where we are headed so we can develop a sound business case for our bosses. Do we have the quantitative information for a business case? Hubbard demonstrates that anything can be measured. While we might not be able to find precise answers to our questions, we can always lessen our uncertainty about a quantity and, as a result, improve the quality of our decisions.
Hubbard avoids the term “metrics,” which is so popular among publications managers, because it implies that measurement has some intrinsic benefit. Measurement is not a goal in itself; the goal is the information gained from measurement. Hubbard defines measurement as “a quantitatively expressed reduction of uncertainty based on one or more observations.” Just as a weather forecast of a “30% chance of rain” may prompt you to carry an umbrella, a measurement has value if it reduces our uncertainty about a quantity.
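Hubbard’s definition can be made concrete with a small sketch, which is mine and not from the book: a crude Bayesian update showing how even a handful of observations shrinks an initially wide uncertainty range. The scenario (estimating what fraction of documentation topics contain errors) and all numbers are hypothetical.

```python
import random

random.seed(0)

# Prior: complete uncertainty about the error rate p, anywhere in (0, 1),
# so the prior 90% range is roughly 0.05 to 0.95.
# Observation: 5 of 20 randomly sampled topics had errors (hypothetical data).
k, n = 5, 20

# Crude Bayesian update by weighted resampling: draw candidate rates,
# weight each by how well it explains the observation (binomial likelihood).
samples = [random.random() for _ in range(100_000)]
weights = [p**k * (1 - p) ** (n - k) for p in samples]
posterior = sorted(random.choices(samples, weights=weights, k=100_000))

# The middle 90% of the posterior is the new, much narrower, uncertainty range.
lo = posterior[int(0.05 * len(posterior))]
hi = posterior[int(0.95 * len(posterior))]
print(f"90% range after sampling 20 topics: {lo:.2f} to {hi:.2f}")
```

Twenty observations do not pin the error rate down exactly, but they cut the uncertainty range to a fraction of its prior width, which is exactly Hubbard’s point: measurement reduces uncertainty rather than eliminating it.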
Another measurement issue is the value of information. If the value of knowing something is less than the cost of collecting the relevant information, the measurement is not worth making. One issue for publications managers is deciding whether using contractors rather than permanent staff will have a positive or negative result. The problem becomes more complex when we consider offshoring to developing countries. What is the likely benefit? What is the risk that costs will increase? Management often looks only at relative salaries. Should you do the research to determine the actual risks before proceeding?
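This trade-off can be sketched using the expected value of information, a concept Hubbard develops in the book: the expected cost of making the wrong choice is the most any measurement could be worth. The offshoring probabilities and dollar figures below are hypothetical, chosen only for illustration.

```python
# Hypothetical offshoring decision (all numbers invented for illustration).
p_success = 0.6            # estimated chance offshoring saves money
gain_if_success = 200_000  # annual savings if it works out
loss_if_failure = 100_000  # extra annual cost if it does not

# Expected value of offshoring without any further measurement.
ev_offshore = p_success * gain_if_success - (1 - p_success) * loss_if_failure

# If we offshore, we are wrong with probability (1 - p_success), and being
# wrong costs loss_if_failure. That expected loss is the ceiling on what a
# perfect measurement would be worth.
evpi = (1 - p_success) * loss_if_failure

print(f"Expected value of offshoring:     ${ev_offshore:,.0f}")  # $80,000
print(f"Most worth paying for information: ${evpi:,.0f}")        # $40,000
```

Under these made-up numbers, research costing less than $40,000 that meaningfully reduced the uncertainty would be worth doing; a study costing more than that would cost more than the information is worth.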
Hubbard spends a lot of time discussing the concept of calibration. We know that a new thermometer may give erroneous temperature readings. Before we can trust it, we must calibrate it against a standard, a thermometer that we know is accurate. In his experience, Hubbard finds that most people are too optimistic in their estimates of quantities they are uncertain about. This overconfidence leads to costly decisions based on inaccurate estimates. Think of what happened the last time you asked writers to estimate how long specific writing tasks would take. As managers, we’ve become skeptical of our writers’ estimates.
People can be trained to produce more realistic estimates through the process of calibration. In a training situation, Hubbard gives participants a series of tests in which they are unlikely to know the exact value asked for but can give an educated guess (estimate). He then asks participants how confident they are in their estimates.
For example: “In what year was William Shakespeare born? Give a date range that you are 90% confident the actual date lies within.”
Another example: “True or false: the ancient Romans were conquered by the ancient Greeks. How confident are you that you are correct: 50%, 60%, 70%, 80%, 90%, or 100%?”
After the first tests, when they see the correct answers, participants usually find that they are overconfident. However, after a series of tests, most participants learn how to be more accurate in identifying their level of confidence. Successful participants are certified as “Calibrated.” Hubbard finds that calibrated people can make better estimates of their confidence in real business-related estimates.
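The scoring behind such a calibration test can be sketched in a few lines. Apart from Shakespeare’s actual birth year (1564), the questions, intervals, and answers below are hypothetical examples.

```python
# Each entry: the respondent's 90% confidence interval and the true value.
# All data hypothetical except Shakespeare's birth year, 1564.
answers = [
    ((1500, 1600), 1564),  # Shakespeare's birth year: interval contains it
    ((100, 400), 632),     # hypothetical question: interval missed
    ((10, 50), 37),
    ((1900, 1950), 1912),
    ((5, 15), 20),         # missed again
]

hits = sum(low <= truth <= high for (low, high), truth in answers)
hit_rate = hits / len(answers)

# A well-calibrated estimator's 90% intervals should contain the true value
# about 90% of the time; a much lower hit rate signals overconfidence.
print(f"Intervals containing the true value: {hit_rate:.0%}")  # 60%
```

A respondent scoring 60% on 90% confidence intervals, as here, is overconfident; with repeated rounds of feedback, Hubbard reports, most people learn to widen their intervals until the hit rate approaches the stated confidence.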
Hubbard spends a lot of effort describing a variety of measurement techniques. All of these have value for making business decisions, but not all of them may be useful to you as a publications manager. I recommend that you skim through the chapters so that you know which techniques Hubbard describes and then go back and read in detail the ones useful to you.
Publications managers are interested not only in the efficiency of their documentation but also in its quality, as judged by the satisfaction of their users. Hubbard ends his book by discussing techniques for measuring some of these softer areas, including preferences and attitudes.
Finally, Hubbard describes some newer techniques that can be used effectively to make business-oriented measurements. These include the Internet, GPS, and prediction markets (the wisdom of crowds).
How to Measure Anything is crammed with ideas. At times it can be slow going, especially for publications managers who may not be comfortable thinking quantitatively. However, for middle managers, quantitative analysis has great value, not only in decision making but also in demonstrating to your management that you know what you’re doing and have carefully weighed the benefits and risks of your decisions and recommendations.
Overall, How to Measure Anything is an important book for managers to have on their bookshelves. But it’s not intended to be read cover to cover in one sitting. I recommend that you carefully read sections I, Measurement: The Solution Exists, and II, Before You Measure. These sections contain Hubbard’s discussion of the concepts of measurement. Then scan section III, Measurement Methods, so you have an idea of the measurement methodologies that Hubbard describes. Finally, carefully read section IV, Beyond the Basics, in which Hubbard describes many of the softer areas of measurement that would be of interest to you as a publications manager.
I’m looking forward to hearing Douglas Hubbard as the Best Practices keynote, and I’m looking forward to seeing you in San Antonio.