How to Get the Most Out of Your Usability Study – Part 2
I recently worked on a series of usability testing projects for a client who was launching a new online tool for its customers. They were smart to make usability testing an integral part of the project well before launch. Through the study, they uncovered a lot of useful insights that enabled them to dodge some bullets and that ultimately guided the final design of the tool.
So—if you’re ready to make the investment in a usability study, what do you have to do to make sure you get the data you need? In Part 1 of this series, I focused on the logistics of putting together a usability study and why it’s a good investment. In this post, I’ll focus on tips to create a flawless facilitator’s guide:
Understand your audience. It’s important to understand who your users are and what mindsets or perspectives will influence how they approach the tasks. For example, older users may not be as digitally savvy as younger users; but if both younger and older users are your target users, the tool must be appropriate for both. Ask background questions that will help you interpret their responses later: How comfortable are they with the Internet? How often do they use it? How familiar are they with the tool or process you are testing? Put them at ease by reminding them that you are testing the tool, not them.
Understand what you want to get out of your study. If you are testing a transaction process, for example, design your tasks to reveal how well participants understand the instructions, whether they need or use help, and whether they are able to complete all the steps in the process. Ask them how they would accomplish certain tasks or where they would look for certain features. Ask whether they found each task easy or difficult, and why. Ask for suggestions.
Set up the scenario. How will you quickly inform your participants what the test is about and what they will be doing? Prepare an introduction to the test and tasks, and script out all instructions and tasks to make sure you don’t miss important points and to ensure consistency between your sessions.
Have a plan B. Consider the range of reactions your tasks might provoke and account for them in your script. This will enable you to continue smoothly without scrambling to get back on track, and it keeps your participant focused so your data isn’t compromised.
Ask participants to think out loud as they work through the tasks. Hearing their reasoning in the moment is invaluable, and it’s a great way to capture quotes for your report.
Watch your timing. Assuming your one-on-one user-testing session is an hour, limit yourself to three to five tasks. Set a time limit for each task by testing it out prior to your actual sessions, and adjust to fit the time frame. If you are on a strict schedule and a participant runs long, know ahead of time which tasks you will skip to make up the time.
Note anything out of the ordinary that occurs during the test that might influence the results, so you can take it into account when putting together your report.
Include a post-task questionnaire to capture the participant’s overall feelings and attitudes about the tasks. Did they find them easy or difficult? Comfortable or frustrating?
Organize your data around the problems you are trying to solve. Based on the findings, provide visual examples of how the tool or process could be improved.
SOUND OFF: What tips do you have for creating a flawless facilitator’s guide?