Jess Caldwell, Jack Henry & Associates

Completing a round of usability testing is exciting, especially as a beginner. You’ve been planning, scheduling, testing, and poring over data for months on end. The time has finally come: the last user has finished their scenarios! You tap the stopwatch on your phone, jot down the number, and perform one final debriefing. You enter the data for the day, pack up your equipment, and breathe a sigh of relief. No more tweaking scenarios for you! You have all the answers.

You walk out the door and feel the tension finally slip from your shoulders. But then, the realization hits. How do you get the data in your spreadsheet to translate into improvements in the user interface? You imagine nightmare scenarios where your data sits, ignored, while users continue to struggle.

Unless you happen to be the project manager or development manager, you are going to need some buy-in to improve the product. It can be done with careful analysis, recommendations, and reporting.

Analyze Your Data

All the data in the world means nothing to the people whose buy-in you need until you analyze it. You can’t put the numbers, which make perfect sense to you in their raw state, in front of a manager and shout “Aha!” It doesn’t matter that 80 percent of tasks failed; why did they fail? You need to find the patterns in the data to make sense of the problems. Once you have a clear idea of the issues and their causes, you can start drafting recommendations.

Unless you are a glutton for punishment, or enjoy listening to hours of test recordings in one sitting, I suggest you analyze as you go. After each test session, build time into your schedule to sit down with the data for an hour or two. Look for patterns in the user’s performance. Can their struggles in multiple tasks be linked back to one cause? For example, did a user fail to find several topics because they are grouped under the same, vague heading? Then, look for patterns between multiple users. Are they all failing a particular scenario because a topic title is confusing?
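As a rough illustration of this kind of tally (the session notes, task names, and cause labels below are entirely hypothetical, not from any real study), a few lines of Python can surface which suspected cause shows up across the most sessions:

```python
from collections import Counter

# Hypothetical session notes: (user, task, outcome, suspected cause)
observations = [
    ("user1", "find-metadata-topic", "fail", "keyword missing from title"),
    ("user2", "find-metadata-topic", "fail", "keyword missing from title"),
    ("user1", "find-export-topic", "fail", "vague section heading"),
    ("user3", "find-metadata-topic", "pass", None),
]

# Count how often each suspected cause appears among the failures
cause_counts = Counter()
for user, task, outcome, cause in observations:
    if outcome == "fail":
        cause_counts[cause] += 1

# Most frequent causes first — these are the patterns worth chasing
for cause, count in cause_counts.most_common():
    print(f"{cause}: {count} failure(s)")
```

The point is not the code itself but the habit: recording a suspected cause alongside each failure while the session is fresh makes the cross-user patterns fall out almost automatically.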

Once you find the patterns, you find the real problem. If 90 percent of users failed a scenario because the topic and its short description did not contain a keyword they were looking for, that is something you can fix. Along with the percentages and time on tasks, include some qualitative data. Did a user hint at the problem, saying something like “I could not find the topic on adding metadata because the term metadata did not appear in the title”? Some people are won over by numbers, but quotes from actual users can really drive the message home.

Once you have the issues, rank them. Only in a perfect world can you fix every single problem a user finds. Depending on your test, you could uncover dozens of issues. You can either rank them by importance or group them by severity. An issue could be considered critical if it causes a work stoppage, severe if it causes the user to abandon the task, or minor if it’s a simple roadblock or hiccup. Pick the issues that would have the most impact if they were fixed, and put them at the top of the list. Then, figure out how to fix them.
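A minimal sketch of that ranking, assuming the three-level severity scale above and some made-up issue names, might sort by severity first and then by how many users each issue affected:

```python
# Hypothetical issues from a test, each with a severity and a user count
issues = [
    {"name": "vague heading hides topics", "severity": "severe", "users_affected": 5},
    {"name": "missing keyword in title", "severity": "critical", "users_affected": 9},
    {"name": "extra click on search page", "severity": "minor", "users_affected": 3},
]

# Lower rank number = more urgent (the scale from the text)
SEVERITY_RANK = {"critical": 0, "severe": 1, "minor": 2}

# Sort by severity, breaking ties by how many users were affected
ranked = sorted(
    issues,
    key=lambda i: (SEVERITY_RANK[i["severity"]], -i["users_affected"]),
)

for issue in ranked:
    print(issue["severity"], "-", issue["name"])
```

Whatever tool you use, the useful part is making the ranking criteria explicit, so stakeholders can see why one fix is at the top of the list and another near the bottom.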

Create Recommendations

It is not quite enough to expertly describe the problem with quantitative and qualitative data. The issue might be ignored or even worsened if you do not provide recommendations on how to fix it. Even if you don’t feel qualified to make recommendations about something you do not do yourself, such as coding, remember that the recommendations come from customers and carry serious weight.

Recommendations are hidden in the data and in customer comments. The user might have a workaround to avoid a bug or problem. The user might also voice solutions, so it is vital to have them talk through their processes. The more patterns you find, the easier it becomes to shape them into recommendations.

Another option is to perform further testing or round tables with users. Once you have identified and organized the problems your users have experienced, conduct a test to see how they would fix the problems. Gather the users together and have them talk amongst themselves while you sit back and observe. Then, make prototypes of their solutions and conduct another round of tests. Recommendations are easier to implement when they are specific and supported by proof of improvement.

Report Your Findings

You write a final, formal report, and it is glorious. Your pages are full of precise methodology and analysis. You have multiple appendixes with raw data: the real meat of your project. If you are lucky, one person other than yourself reads it front to back. The truth is, if you are seeking buy-in from managers, they are not going to have time to read that report.

You want to hit them with a one-two punch: a high-level summary of your recommendations and a presentation. Create a single sheet with the main problems and the solutions. Include graphs and images. Then, schedule a meeting with your most important stakeholders. Create a presentation, and go over the high-level information.

Include positive feedback with the negative to defuse any defensiveness. If you have a highlights reel that shows customers struggling with the product, this is the next best thing to having stakeholders observe testing while it happens. Be prepared, clear, and concise, and be ready for questions. Provide the report as reference material should anyone want it.

If all goes well, you can get approval right then and there.

Implement and Test Again

A tester’s work is never done, but, after a short recharge, you will be surprised to find yourself ready for another round of testing. You’ve seen the positive impact you can make on the product, and that is what keeps you coming back to test and test again. Your presentation created some interest in user testing from others in the company.

Now is the time to plan the next test. Ideally, you should strive for iterative testing with the product. The end goal—the dream really—is to make testing part of the normal sprint. Test new functionality and known issues often. Each test will generate improvements. You might not get there right away, but just keep testing.