Creating a Customer Survey
Too many organizations today are simply writing content for their next scheduled release without any sense of who their customers are, where they’re trying to retrieve content from, and the type of content that they’re searching for. Without this knowledge, an organization’s content can quickly head down the wrong track, become a source of frustration for the customers, and become an exercise in futility for the writers. The purpose of this article is to discuss how to implement a customer e-survey (or email survey) that will help you obtain a clearer picture of your customers so that you can better shape and target your content for them.
The following points are discussed in this article:
- Selecting an e-survey software company
- Creating the survey
- Testing the survey
- Sending out the survey
- Parsing feedback
- Making post-survey decisions
Selecting an e-survey software company
You probably don’t have the time or tools to create, send, and track the results of an e-survey. Fortunately, you can find a survey software company that does (such as Confirmit). Unfortunately, not all survey software companies are created equal; each has its own strengths and weaknesses. Depending on your needs, there are several issues that you should consider when selecting one:
Most survey software companies base their costs on some combination of the following:
- Publishing the survey to one or more online portals
- Completing the survey online as a respondent
- Refreshing and delivering the survey data to you on a recurring basis (for example, once a week)
Make sure that the survey software company’s costs fit within your budget, including any translation costs. These fees are usually nominal, however, and well worth the expense.
You may want the e-survey to look and feel as if it came from your company. If company logos, colors, and so on, are important to you, make sure that the survey software company can generate the right look for your e-survey.
E-surveys are composed of a number of very specific components of the data that you are trying to capture. When analyzing this data, you may want to aggregate it based on some of the individual components themselves (for instance, company size, region, and so on). Make sure that the survey software company’s reporting capabilities are adept at parsing data as you will need it.
In addition to providing reporting capabilities, most survey software companies also deliver the raw data to you in the form of a spreadsheet, allowing you to create your own reports.
Think about the types of questions that you want your survey to include. The nature of these questions may require that the survey is adept at handling check boxes, radio buttons, forced and optional questions, conditions, embedded links, drop-down lists, comment fields, and so on. Make sure that the survey software company can not only handle these design elements, but can do so in a user-friendly way.
Creating the survey
You need to target your e-survey to a group of customers that proportionately represents your overall customer base. Among other things, the representation may include geographical region and company size. Don’t limit your e-survey to just English-speaking or North American customers if you have a high enough percentage of Asia-Pacific and European customers as well. An imbalance in demographics will skew your results and invalidate the reason for doing the survey in the first place. You may need to pay a fee to have both the survey and its results translated into the necessary languages, but this fee is usually small and well worth it.
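The proportional selection described above can be done mechanically. The following is a minimal sketch in Python; the customer records, field layout, and the `proportional_sample` helper are all illustrative assumptions, not part of any survey vendor's tooling:

```python
import random
from collections import defaultdict

def proportional_sample(customers, sample_size, seed=0):
    """Draw a recipient list whose regional mix mirrors the full
    customer base, so that no one demographic skews the results."""
    random.seed(seed)  # reproducible selection
    by_region = defaultdict(list)
    for email, region in customers:
        by_region[region].append(email)
    total = len(customers)
    recipients = []
    for region, members in by_region.items():
        # Each region contributes in proportion to its share of the base.
        quota = round(sample_size * len(members) / total)
        recipients.extend(random.sample(members, min(quota, len(members))))
    return recipients
```

The same idea extends to company size: stratify on a (size, region) pair instead of region alone. Note that per-region rounding can shift the final count by one or two; adjust the quotas if an exact total matters.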
Furthermore, when selecting a sampling of actual customers, it’s a good idea to get recommendations from your Sales Department. Sales people tend to have good relationships with their customers, and those relationships establish trust in your company. As such, a recommended customer in good standing who receives an e-survey from your company is more apt to respond to it than not.
Finally, know your customers’ culture since that knowledge will help you to better understand their responses. For example, Japanese customers tend to be more critical when asked to rate your documentation (followed by the Australians, Indians, Chinese, Germans, and French); however, they are more apt to respond to an e-survey. Conversely, Latin Americans will be more positive (followed by the Italians, British, and Americans). Given these habits, you may think that you won’t get an accurate rating of your documentation based on multi-cultural responses. However, the differences in culture won’t bias your results if you pay close attention to your response rates over time. For example, if both your Japanese and Latin American customers become more positive in subsequent surveys, then you know you are doing something right; if both become more negative, then you know you are doing something wrong.
- Keep the survey short.
A survey should be no more than 3 pages and take no more than 5 minutes to complete. The longer the survey, the lower your response rate will be. In fact, it’s a good idea to state up front that the survey will only take 5 minutes. Without this assurance, customers will become suspicious and shy away from taking it. Additionally, if a survey is taking longer than expected, customers will abandon it.
- Ask very precise questions.
Asking precise questions is often the most time-consuming part of creating a survey and usually requires several iterations with multiple stakeholders. Remember that you have very little time and space to articulate what it is that you want to know. Don’t ask vague questions such as “Are you happy with our documentation?” Negative responses to this question will not tell you where your problems lie. Instead, determine what it is that would make a customer happy with your documentation, and then phrase the questions accordingly. For example, ask them to rate very specific portions of your documentation on a scale of 1 to 10. Or ask them, “Is the documentation usable?” and “Is the documentation useful?” If you ask Yes or No questions, make sure that you provide the customer with an opportunity to explain a negative answer.
- Ask questions that you can act upon if the responses are negative.
Remember, one goal of the feedback is to help identify areas where the customer’s experience with your documentation can be improved. As such, ask questions that fit within the realm of your ability to make such improvements. For example, don’t ask customers if they would like to see your documentation web-published if you have no time, budget, resources, or intentions of doing so.
- Don’t give customers too many multiple choice options.
Too many options to choose from can confuse and frustrate the customer and make it difficult for you to aggregate the responses. Again, be precise. When asking our customers to select the role that they played in their companies, we listed a dozen standard industry options from which to choose. However, we realized too late that these roles were defined too technically for the types of companies that we were surveying. In the end, we combined some of these roles together at the expense of the accuracy of what we were trying to capture in the first place.
- Request permission for a follow-up.
To obtain more granular feedback and to measure the success of any documentation improvements that you make as a result of your survey, you will need to follow up with additional surveys. These surveys may include a phone survey or a subsequent e-survey. In either case, ask your customers if you can contact them. Remember, some customers enjoy taking part in this type of activity and will be more than willing to engage in further outreach. For affirmative answers, make sure that you provide the customer with an opportunity to indicate the best time and phone number at which they can be reached.
Testing the survey
Before sending the e-survey out to your customers, it’s critical that you test every aspect of it. Do not rely on the survey company that you are employing to ensure that the survey experience is flawless. The survey will have your company’s name on it, and it will reflect poorly on you if the customer does not have a good experience taking it. Instead, request the services of your own QA department (or allow your writers to be QA engineers for a day) to test the following features:
- Optional questions vs. forced-answer questions
Ensure that customers can bypass optional questions without answering them, but cannot bypass forced-answer questions.
- Radio buttons vs. check boxes
Ensure that customers can select only one radio button from a multiple choice list, but can select more than one check box.
- Embedded links
Ensure that any embedded links are live.
- Comment boxes
Ensure that customer comments are being saved.
- Conditions
Ensure that conditions are working properly. For example, if a customer selects Yes to the question “May we contact you?” ensure that a dialog box appears, prompting the customer for a name and phone number. Conversely, ensure that this box does not appear if the customer selects No.
- Navigation
Ensure that the customer can navigate up and down on the same page and back and forth between multiple pages. When doing so, ensure that the data that was previously entered is still present.
- Wording
Ensure that the wording on all the pages is exactly as you requested it.
Sending out the survey
The bulk of the email solicitation should be in the form of an official invitation from your company. This invitation should include the following pieces of information:
- The purpose of the survey
- How long it will take to complete the survey
- The date the survey closes
- A request to forward the survey to a more appropriate respondent if the recipient is not the right individual to take it
- A link to the survey
Additionally, the Subject Line of the email should be inviting (e.g., “Your input on <Your Company’s> documentation”; see Figure 1).
Figure 1: Invitation
The day on which to send the e-survey
Send the e-survey out on a Tuesday, which is considered to be the most neutral day of the week. Customers are usually too busy on a Friday or Monday to respond to (what is essentially) a favor that you are asking of them. Or, they are apt to quickly perceive the email as spam and will likely delete it. In any event, your response rate will be much lower if the survey invitation is received on any day other than Tuesday.
Stay true to the deadline that you stated in your invitation, which should be no more than 3 weeks from the date that you send it. It is then acceptable to send up to 2 subsequent reminders (again, on a Tuesday) to those customers who did not respond. Some of those customers may have simply been too busy to respond to the first invitation and need only a friendly nudge to convince them to take it.
A good response rate is anywhere between 7 and 10 percent, though this percentage can be even lower (as will be discussed later). Based on your initial invitation and subsequent reminders, you can expect the following:
- 40 percent of your responses will come in during the week following your initial invitation
- 30 percent of your responses will come in during the week following your first reminder, and another 30 percent following your second
- 80 percent of your responses will come in within the first 48 hours of your initial invitation or subsequent reminders
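Those rules of thumb are easy to turn into a rough planning projection. A small Python sketch follows; the 700-invitation figure and the `project_responses` helper are purely hypothetical inputs for illustration:

```python
def project_responses(invited, response_rate=0.10):
    """Estimate total responses and when they arrive, using the
    rule-of-thumb split: 40% after the initial invitation and 30%
    after each of two reminders."""
    total = round(invited * response_rate)
    waves = {"initial invitation": 0.40,
             "first reminder": 0.30,
             "second reminder": 0.30}
    return total, {wave: round(total * share) for wave, share in waves.items()}

total, schedule = project_responses(700)
# 700 invitations at a 10% rate: 70 responses in all --
# 28 after the invitation and 21 after each reminder.
```

A projection like this is handy when deciding whether your invitation list is large enough to reach your leveler (discussed below) even at the low end of the response-rate range.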
Determining when enough feedback is enough feedback (to tell me what I want to know)
After spending the time, effort, and money to ensure that your survey is properly designed, and then carefully selecting a sampling of customers to represent your customer base, you’ll want to garner enough feedback that paints an accurate picture of your customers’ overall thoughts. This need then raises the question: When do you have enough feedback to paint that picture? The answer to this question is simple: When your customers start telling you the same thing.
What is your leveler?
When you start aggregating your incoming responses using percentages, at some point you will see those percentages peak and then level off. The number of customers at which this peak occurs is what I refer to as the leveler. It’s at this point that you’ll know that you have enough feedback.
For us, the leveler was 25. We sent out our first e-survey to 725 customers and received feedback from 85 respondents. However, we noticed that after the first 25 respondents, our customers were no longer telling us anything new. So, while we were initially overjoyed to receive an impressive 12 percent response rate, we came to realize that a 4 percent response rate would have sufficed.
This is not to say that the percentages were identical between the first 25 respondents and all 85 respondents; however, they were close enough for us to start telling our story. This outcome can be exemplified in the following question that we posed: By which method do you typically retrieve our content? The results for the first 25 respondents were: Printed documentation 26.6 percent, Internet 43.3 percent, and Online Help 30.0 percent. The results for all respondents were: Printed documentation 24.8 percent, Internet 42.7 percent, and Online Help 32.3 percent.
Even though the results varied by a percentage point or two, we were nevertheless able to make the following claim:
- About a quarter of our customers use the printed documentation
- A little less than half of our customers use the Internet
- Approximately one-third of our customers use the Online Help
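The leveler can be detected mechanically: recompute each answer’s running percentage after every additional respondent and note where the numbers stop moving. A minimal sketch in Python follows; the `find_leveler` helper and its tolerance and window values are illustrative choices, not a standard method:

```python
def find_leveler(answers, tolerance=2.0, window=10):
    """Return the respondent count after which every answer option's
    running percentage stays within `tolerance` points for `window`
    consecutive respondents, or None if it never settles."""
    options = sorted(set(answers))
    counts = {opt: 0 for opt in options}
    history = []  # running percentages after each respondent
    for i, ans in enumerate(answers, start=1):
        counts[ans] += 1
        history.append({opt: 100.0 * counts[opt] / i for opt in options})
    for i in range(len(history) - window):
        base = history[i]
        # Stable if the next `window` respondents barely move any option.
        if all(abs(later[opt] - base[opt]) <= tolerance
               for later in history[i + 1 : i + 1 + window]
               for opt in options):
            return i + 1
    return None
```

In practice you would feed in one column of the raw survey export per question; if several key questions level off around the same respondent count, that count is your leveler.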
Identifying our leveler proved invaluable to us. When we sent out our second e-survey a few months later to the same customers, our response rate was unexpectedly much lower than the 12 percent we originally achieved. Still, it was higher than our leveler, giving us the assurance that the results were accurate.
Parsing feedback
As mentioned earlier, feedback from an e-survey is typically delivered in the form of raw data contained within a spreadsheet. While there may be some aggregation done for you based on basic survey questions (for instance, company size, region), with some simple spreadsheet solutions you can aggregate this data yourself in any number of more useful ways. For example, we aggregated the results of our data by company size, only to learn that large companies and small to medium business companies were retrieving and using our documentation in vastly different ways. Individual examples of this analysis included:
- Large companies primarily use Online Help, while Small to Medium Businesses use web-published topics.
- Large companies primarily rely on configuration topics, while Small to Medium Businesses rely on troubleshooting topics.
- Large companies have multiple individuals to satisfy the various roles needed to support our product, while Small to Medium Businesses typically have just one.
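Aggregation like this is a few minutes of spreadsheet work, or a few lines of code against the raw export. A minimal Python sketch follows, assuming hypothetical column names (`company_size`, `retrieval`); your vendor’s export will name its columns differently:

```python
from collections import Counter, defaultdict

def aggregate_by(rows, group_field, answer_field):
    """Tally answers to one survey question, broken out by a grouping
    field such as company size or region, as in-group percentages."""
    tallies = defaultdict(Counter)
    for row in rows:
        tallies[row[group_field]][row[answer_field]] += 1
    return {
        group: {answer: round(100.0 * n / sum(counts.values()), 1)
                for answer, n in counts.items()}
        for group, counts in tallies.items()
    }

# Hypothetical rows, as loaded from the vendor's CSV export
# (e.g., via csv.DictReader): one dict per respondent.
rows = [
    {"company_size": "Large", "retrieval": "Online Help"},
    {"company_size": "Large", "retrieval": "Online Help"},
    {"company_size": "SMB",   "retrieval": "Internet"},
    {"company_size": "SMB",   "retrieval": "Internet"},
    {"company_size": "SMB",   "retrieval": "Printed"},
]
```

Swapping `group_field` for a region column gives you the regional breakdown with no extra work, which is exactly the kind of re-slicing that surfaced the differences described above.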
While the results of this analysis may seem more like common sense than startling revelation, it was only when we began to put the individual pieces of this puzzle together that we started to see a clear picture of who our customers are. Additionally, we were happy to see that the survey data supported itself in so many ways. For example:
- The large company customer is one who plays a more defined role within the organization. As such, he is able to take the time to learn some of the more sophisticated features of our product. In doing so, he needs to spend the necessary time configuring them when in our Web User Interface. As such, his primary source of information is the Online Help.
- The small to medium business customer is one who wears many hats within the organization. As such, he has only so much time to devote to maintaining our product on a daily basis. In doing so, he is in a position to deal only with what is necessary: post-installation policy changes and (more importantly) error messages. As such, he copies and pastes any error message that he sees within our product into an Internet search engine in his attempt to find a quick solution.
Aggregating the results of our data by region also yielded its share of surprises.
Making post-survey decisions
Of course, any process changes or documentation-related decisions that you make based on the results of your e-survey will be strictly up to you. Nevertheless, you will most likely be surprised by those results and quickly learn that many of the assumptions that you once had about documentation use are simply not true. Or perhaps you will discover that what you once knew to be true has since changed. Expect the unexpected, and be aware that what you discover may be a little unsettling at first.
As stated previously, we discovered that our large companies and small to medium business companies were using our documentation in different ways. Given our limited set of resources, we initially asked ourselves: Do we focus on the larger customers who make up only 15 percent of our customer base, but use more of our documentation? Or do we focus on the smaller customers who make up 85 percent of our customer base, but use only a concentrated subset of our documentation? After analyzing their needs more carefully, we came to the following decision: we would focus on both.
The discovery process here was not so much the result of having analyzed individual line item survey data. Instead, it was having achieved an almost crystal clear mental image of our typical large and small to medium business customer. Once our customers came to life for us, we changed the way in which we think about, plan, write, review, architect, and publish our documentation for them. Luckily, neither we nor our customers became the victims of our own survey.
Happily, we are able to report that both sets of customers have responded favorably to these changes so far, allowing us to draw what may sound like a rather trite conclusion: When we listen to our customers, they will truly lead us to better ways in which we can serve them.
Additionally, we have been able to share some of these changes with other Information Development product groups within the company, as well as with the arm of the company that is responsible for company standards, guidelines, and content models. In doing so, we have been able to add input into a company-wide initiative for creating a better customer experience.
Creating an e-survey is only one way to reach out to your customers. Phone surveys, usability studies, customer visits, and social media are additional means by which you can glean even more granular information. However, an e-survey is the best place to start to get an overall pulse.
And don’t stop after just one survey. Remember to follow up at least yearly with the same survey so that you can measure the success (or failure) of your efforts. Additionally, you may need the results of these surveys to support any business cases that you make to upper management for process changes that you want to implement.
In this age of quick information, it is safe to say that customers’ expectations are changing in accordance with the new technologies to which they are exposed. As information developers, it is our job and responsibility to learn what those expectations are, and then adapt our business processes and technologies to meet our customers’ needs. After all, it’s all about the customer!
Bill Tilley is an Information Development Manager with Symantec Corporation, where he has worked since 2005. Bill’s team is responsible for developing and publishing both print and online content for one of Symantec’s largest enterprise mail security applications. Bill has over 20 years of experience in the software product life cycle, including program management and quality assurance. A firm believer in the old adage that “the customer is always right,” Bill is engaged in a number of customer partnering projects within Symantec to elicit content-related feedback directly from the customers themselves.