How do you know if a report, toolkit, or data dashboard works?
If you’re like most organizations, you probably copyedit your reports and have multiple people read them through. But how often do you actually put your reports into your users’ hands?
Today’s post is about usability testing, a.k.a. “how to assess if the thing you created is actually useful for the purpose for which it was created?” Oh, and also, “how to get the feedback you need to make the thing you created better?”
What is User Testing/Usability Testing?
So as far as this article is concerned, we’ll use the two terms “User Testing” and “Usability Testing” interchangeably.
Basically, usability testing is a method for observing how people use something so that you can improve it. But let me pull the definition from what I think is one of the best resources for this kind of thing on the web:
In a usability-testing session, a researcher (called a “facilitator” or a “moderator”) asks a participant to perform tasks, usually using one or more specific user interfaces. While the participant completes each task, the researcher observes the participant’s behavior and listens for feedback.
From the Nielsen Norman Group’s fantastic Guide to Usability Testing
If you’re really interested in usability testing from a user experience perspective, that guide is a really good rabbit hole to fall into. This post is specifically focused on usability testing reports, toolkits, and dashboards. The kind of stuff we create all the time as evaluators and researchers.
How is Usability Testing different from Cognitive Testing?
If you have spent a significant amount of time in social science research or program evaluation, usability testing might feel pretty similar to the cognitive testing methods used when developing surveys. And yes, the methods are very similar and there is a lot of overlap.
Cognitive testing is built around the cognitive process respondents use when answering questions. Here is a nice short focused guide on cognitive testing from the Harvard University Program on Survey Research.
If you try to answer a question, you’re going to need to do four things.
- First, you need to comprehend the question.
- Second, you need to retrieve from memory the information you need to answer the question.
- Third, you’ll need to summarize the information so that you can answer the question.
- Fourth, you’ll need to actually answer the question.
Understanding where that process breaks down can help you figure out why a survey question does or doesn’t work, so that you can fix it. Cognitive testing methods are designed around this cognitive process, which may be close to what you are evaluating in a run-of-the-mill usability test, but also might not be super relevant.
What is the Purpose of a Report, Toolkit, or Dashboard?
It’s really easy to badmouth a report that’s too long or really ugly. But I think the biggest problem in reporting is when a report is just plain useless. I will take a long, ugly, useful report over a short, pretty, useless report any day.
Useless reports are just collections of information someone thought should be catalogued in a PDF. Not because that information has a purpose, but because the writer just felt like it should be included.
In order to properly usability test anything, you have to understand how someone would use that thing.
For example…
- a non-profit executive director might use an evaluation report to decide whether they should continue funding a particular project.
- a school board might use a COVID-19 case report to decide whether schools in their system should go virtual.
- a program director might use a needs assessment report to better tailor their programming to meet community needs.
- project staff might use a step-by-step guidance document to enter information about their program into a data system.
- a program officer might use a data dashboard to ensure project sites are following through on their commitments.
Try to write your own sentences.
A [insert type of person in your audience] will use [insert product] to [insert purpose of using the product].
What are the Basic Steps in Usability Testing?
Let’s not overcomplicate this (which is always really easy to do) and just go with the basics.
You will need…
- Someone to facilitate the usability test (a role usually filled by a good qualitative interviewer).
- A participant to use the product (should ideally be someone from your target audience).
- A set of tasks for the user to perform.
One of the easiest ways to go about usability testing is to ask your participant to perform a task while thinking out loud. The job of the facilitator is to observe, listen, and prompt the participant but mostly stay quiet.
Definition: In a thinking aloud test, you ask test participants to use the system while continuously thinking out loud — that is, simply verbalizing their thoughts as they move through the user interface.
For the prompts, ask questions like, “why did you go straight to that page?” or “what were you thinking when you clicked on that link?” Try to ask neutral questions and try not to influence their decisions.
For a lot of us, all of this nowadays is likely to happen over Zoom or a similar tool. If that’s the case, ask your user to share their screen and, ideally, turn on their webcam. You want to see when they get confused, and being able to see their face helps.
You don’t have to test with a lot of people; you can usually get a good amount of information from just 5-10 users.
Depending on your needs, you can make usability testing a really formal process or keep it informal. To formalize the process, consider writing a testing protocol and training your interviewers. You might also record the session and/or have your project team on the call (in listen-only mode).
What NOT to do when Usability Testing?
Stuff you don’t want to do.
- Talk too much (listen and watch instead);
- Guide their actions (you are facilitating the usability test, not the use);
- Answer all their questions (you want your tester to work through problems as though you weren’t there, as much as possible);
- Have multiple team members asking questions (everyone other than the facilitator should be completely silent, in listen-only mode).
Listening to a user openly critique your work can be hard. Not all of their comments will be useful, but try to keep an open mind.