Inside Microsoft Office Online Assistance


Jonathan Price
The Communication Circle, LLC


Microsoft’s customer-driven publishing team recently won the CIDM Rare Bird award, which recognized their approach to user assistance as an inspiring model for other teams. But what lies behind their approach and how did they develop it? I recently talked by phone with Janet Williams Hepler, Group Manager, Office User Assistance, and Rob Ashby, User Education Manager for Microsoft Office User Assistance—Data and Discovery team.

Rob Ashby and his team were monitoring customer searches on the Microsoft Web site back before the turn of the millennium. A user would type a question into the Answer Wizard in a Microsoft Office application and get a set of possible answers. If the user clicked “None of the Above,” the software would jump to a page on the Microsoft site. Each visit to that page signaled a query that had somehow failed. Operating on a shoestring budget, Ashby’s team analyzed the “perceived failed queries” to spot holes—places where content was missing.
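
Ashby’s team has not published the mechanics of that analysis, but the basic idea is easy to picture: count how often the same unanswered question recurs and treat the frequent ones as candidate topics. A minimal Python sketch along those lines, assuming a hypothetical log of the failed query strings (the file name and cutoff are invented):

    from collections import Counter

    def find_content_gaps(failed_queries, min_count=25):
        """Count "perceived failed queries" and surface the ones that recur often.

        failed_queries: an iterable of query strings logged when a user clicked
        "None of the Above" and landed on the catch-all page.
        """
        counts = Counter(q.strip().lower() for q in failed_queries if q.strip())
        return [(query, n) for query, n in counts.most_common() if n >= min_count]

    # Hypothetical usage: the most frequent unanswered questions in last week's log.
    with open("failed_queries.log", encoding="utf-8") as log:
        for query, n in find_content_gaps(log)[:10]:
            print(f"{n:5d}  {query}")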

From the search team’s perspective, users were trying to start a conversation. “They wanted to have a conversation with us, and some of their questions we could answer, some not,” Ashby recalls.

In preparing for Office 2003, Microsoft moved to the idea of a “connected state.” The team assumed that the user would be continually, or at least frequently, connected to the Web*. Having collected and mined the data about customer queries (and sales inquiries), Ashby’s team realized that they might now use the Web to learn more about what customers were trying to do—and to write content that would help these customers do what they wanted.

Assuming that the customers could be taken unobtrusively from their application to Web content, the development team decided to do “continuous publishing.” Feedback from the Web would provide clues as to what to add, subtract, and change in help articles. As Hepler says, “In the past, we published content before release, without getting much customer data. Now, with customer data, we can learn to target their needs so the answers can become more reliable. Our content gets richer as the product takes on a life of its own in the community.”

Hepler points out that the move toward continuous publishing over the Web was not a grand plan or clear in every detail up front. “It started as a vision; we developed practices as we saw the feedback and experimented with how we could analyze data and respond. We found that 11 percent of people who look at our articles online are leaving comments. People really do want to connect with us!”

There was no moment at which someone had to make a pitch to executives. Hepler says, “We have embarked on a ten-year sea change and we are somewhere in the middle. If we had been completely cost constrained, we could not have had such a disruptive innovation on behalf of the customer. This is a big bet, undertaken partly on faith. And, yes, it is a disruptive innovation.”

Writing the invitations

The team struggled to come up with the invitations for feedback. Originally, they asked you what your intent was in coming to the site (a rather demanding question to put to a visitor) and whether you had succeeded at whatever you wanted to do. It was all pretty open-ended, somewhat like the old reader-response cards at the back of a manual. If you were unhappy, you were not prodded to say what should be fixed.

But the team wanted to elicit a feeling of conversation, so they revised the initial question to ask, “Was this information helpful?” Now, if you click Yes, you get a thank you underneath the actual article, which is preserved above, in case you want to take another look (nice touch). If you click No, you still see the article, but you are asked “Why?”

“We suspected that people would not have time to give us detailed comments, so they could just pick one reason.” The team offered customers three possible categories of complaint:

  • Information is wrong
  • Needs more information
  • Not what I expected

Essentially, the users are doing the clustering for the team—identifying which type of problem they have experienced.

Once you make your choice, you can go Back or Next. If you click the Back button, you return to the first question (“Was this information helpful?”), which unfortunately suggests that the reason you just gave for answering No may never be recorded (disappointing). If you click Next, though, you go to a final question based on your choice. For instance, if you claim that the information is wrong, you are asked, “What is the error? (750 characters maximum)”. A big empty box beckons invitingly for your comments and, at last, you see a Submit button.
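
Microsoft has not described how these responses are stored, but the flow maps naturally onto a small record: a yes/no rating, one of the three reasons when the answer is No, and an optional verbatim comment capped at 750 characters. A rough Python sketch, with field names of my own choosing rather than anything Microsoft has documented:

    from dataclasses import dataclass
    from typing import Optional

    REASONS = ("Information is wrong", "Needs more information", "Not what I expected")
    MAX_VERBATIM = 750  # the comment limit quoted in the article

    @dataclass
    class Feedback:
        article_id: str
        helpful: bool
        reason: Optional[str] = None    # one of REASONS, required only when helpful is False
        verbatim: Optional[str] = None  # free-text answer to the final question

        def __post_init__(self):
            if not self.helpful and self.reason not in REASONS:
                raise ValueError("an unhelpful rating needs one of the three reasons")
            if self.verbatim and len(self.verbatim) > MAX_VERBATIM:
                raise ValueError(f"verbatim comments are capped at {MAX_VERBATIM} characters")

    # A "No" vote with a reason and a short verbatim comment.
    fb = Feedback("example-article", helpful=False, reason="Information is wrong",
                  verbatim="Step 3 refers to a menu that does not exist in my version.")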

Ashby calls these comments “verbatim,” because they are exactly what the users say. “What we thought would be important would be the verbatim, where people can actually type stuff in. Verbatim is where the value is—real qualitative information compared to ratings and page views.”

When you submit your comments, you get a brief flash saying “Your feedback is being submitted,” followed by the thank you and the disclaimer that “We cannot respond to all comments individually.”

So there is no attempt to respond by email or chat. The idea is still to weigh all the comments and come up with changes based on volume. Any individual comment may be the one that triggers the light bulb for the writer, but whoever wrote that comment may never know. And what first drives a writer to prioritize revising a particular article is a low overall rating with a rash of comments.

So the users are getting a chance to vent—and to help others—without getting any personal benefit. They are taking part in the commons, fostering the collective work of improving the content. They see a real request for feedback and respond. So far, two million ratings and verbatim comments have poured in, and the number keeps increasing every week.

Collecting the data

The ratings and the comments (identified by type of complaint) are fed into a SQL Server database during the day; at night, a summary database is produced, tying each article to a particular writer. That summary database is what the writers use through an interface called Content Watson. If you’re a writer, you type in your name and see a list of all the articles assigned to you, with the average rating for a specific time period, the change in that rating compared to the prior period, the number of page views (compared to the previous period), and the number of comments.
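
The interview does not reveal the actual schema or the rollup job, but the nightly summary it describes amounts to a grouped aggregation. Here is an illustrative pandas sketch with invented table and column names:

    import pandas as pd

    def nightly_summary(feedback: pd.DataFrame, views: pd.DataFrame,
                        assignments: pd.DataFrame) -> pd.DataFrame:
        """Roll up one period's ratings and page views per article and attach the writer.

        feedback:    one row per rating    (columns: article_id, rating, comment)
        views:       one row per page view (column:  article_id)
        assignments: one row per article   (columns: article_id, writer)
        """
        ratings = feedback.groupby("article_id").agg(
            avg_rating=("rating", "mean"),
            comments=("comment", lambda c: int(c.notna().sum())),
        )
        page_views = views.groupby("article_id").size().rename("page_views")
        summary = ratings.join(page_views, how="outer").fillna({"page_views": 0})
        # Comparing this table with the previous period's gives the change figures
        # that Content Watson shows next to each article.
        return summary.join(assignments.set_index("article_id"), how="left")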

You can also merge your ratings and page views to see which articles are still above water. When you spot several articles that have sunk well below the satisfaction threshold, you apply triage, figuring out which ones are most critical to repair right away.
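
That triage can be expressed as a simple filter and sort over the same summary table; the rating cutoff below is invented for illustration, since the article does not give the team’s actual threshold:

    import pandas as pd

    LOW_RATING = 2.5  # hypothetical cutoff on a 1-to-5 scale

    def triage(summary: pd.DataFrame, threshold: float = LOW_RATING) -> pd.DataFrame:
        """List the articles most worth fixing first: poorly rated and heavily read."""
        sinking = summary[summary["avg_rating"] < threshold]
        return sinking.sort_values("page_views", ascending=False)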

When you click the article title, you see a page showing the data for that article with graphs for:

  • Page hits, last 31 days
  • Page hits history
  • Rating numbers, last 31 days
  • Average rating history

At that point, you can choose to view the comments. Want more information on what people were looking for? You can find out what pages they came from or, if they ran a search, what their query terms were.

Then you can analyze the article itself, figuring out where it failed for these folks. Common problems: No examples, no “See Also” links, title not matching the terms that customers use, no focus on areas where customers get in trouble, and so on. After you rewrite and post the article, you can check the results. Hopefully, your satisfaction rating goes up.

At first, writers could spot low hanging fruit pretty easily. During the beta testing, for instance, it became clear there needed to be a topic about the blind carbon copy (BCC) line in Outlook, and that article was modified drastically to meet customer expectations. But now that the more obvious problems have been solved, the teams are noticing the less visible ones. “What was noise at the beginning of the project becomes more visible as a result of increased traffic from customers,” Ashby says.

At least once a week, a team member does some data mining to look for gaps, problems, and trends, sharing the results with the team. Ashby says, “We have more data than you can shake a stick at.” Customers have responded so enthusiastically that his team has faced the challenges of scale. “We bring on people to help us with the data, help us understand the scale, look for help on campus, and help with larger data sets and clustering. The more data you get, the clearer picture you have of the customer.”
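
The interview does not say which clustering techniques the team uses. As one conventional way to group verbatim comments into rough themes for a weekly review, a TF-IDF plus k-means sketch with scikit-learn might look like this:

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    def cluster_comments(comments, n_clusters=10):
        """Group free-text verbatim comments into rough themes for review."""
        vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
        X = vectorizer.fit_transform(comments)
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
        return labels  # labels[i] is the theme number assigned to comments[i]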

Occasionally, all this data prompts questions about the structure of the content. “Is there a better way, a better structure, to match the way people want to read this stuff? For instance, articles get better ratings than individual chunks.” To resolve these issues, there is a Content Model Review Board that contemplates the tradeoff between disrupting the existing DTD and matching customers’ conceptual models. But now the arguments are backed up with data—not just passionate appeals.

Managers track the satisfaction rating for the product, but do not rate individual writers because a writer may have inherited some topics with big problems. The teams work together to improve their content, rather than competing with each other for high rankings. And satisfaction ratings do not tell the whole story (some problems cannot be solved with content).

At first, some writers felt disoriented by new tools and new ways of thinking. To feed their content into the content management system, they had to give up their word-processing and desktop publishing tools, learn an XML editor (XMetaL), and follow a DTD created from the needs of the content set. After a while, their complaints simmered down.

Schedules changed radically as well. Instead of working for two years toward an absolutely perfect manual or help system to be released with the product, the writers must now work within a much shorter time frame—weeks instead of months or years. And they are involved in constantly improving the content, rather than publishing it and forgetting it as they move on to some other project.

But the most pivotal change has been the recognition that they can now work with customers. In a field that preaches audience analysis, the irony has always been that writers have so often been prevented from talking to actual customers. Now that the Microsoft writers can hear what customers are saying, the writers are motivated to respond more quickly and imaginatively. Hepler says, “Not a soul wants to go back. Even the staunchest resistors say they would never go back. The power of the customer data is really strong.” Now editorial decisions can be made based on actual feedback. “We want people to base decisions on data. We have been thrilled. One writer keeps a list called Top Ten Things We Have Learned from Customers, and shares that so people have a broader view.”

Looking at the numbers across all “assets”—articles, templates, and training materials—the managers review monthly scorecards, showing how many assets are being revised, how many new assets are in queue, and what some customers are saying in their comments. Another monthly report tracks trends in the data.

The team is looking into ways to allow positive feedback. (Right now, if you like the article, you are simply thanked for your feedback, but you have no opportunity to write a comment.) And there are plans to aggregate satisfaction by application and feature, to encourage Marketing, Planning, and Program Management to improve the features and the interface in those applications.
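
Rolling satisfaction up that way is, at bottom, another grouped average over the same per-article data. A short sketch shows the idea; the application and feature tags are an assumption for illustration, not part of anything the article describes:

    import pandas as pd

    def satisfaction_by_feature(summary: pd.DataFrame) -> pd.Series:
        """Average per-article satisfaction rolled up by application and feature.

        Assumes each row of the nightly summary has been tagged with
        "application" and "feature" columns.
        """
        return (summary
                .groupby(["application", "feature"])["avg_rating"]
                .mean()
                .sort_values())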

* Within Microsoft, the connected state was a large and expanding vision, known in-house as Conversation 43, a discussion that led to many other conversations addressing questions such as, “So what do we do if the user is not connected?” or “How do we handle firewalls?”