Usability Testing
A brief overview of my strategy and process
I have been responsible for all phases of thousands of usability sessions in the following formats:
- Onsite moderated (1:1)
- Remote moderated (1:1)
- Remote unmoderated
My general process is as follows:
- Familiarize myself with what is to be tested
- Meet with stakeholders to understand:
- The purpose of what we are going to test
- Which tasks are important to accomplish
- Whether any tasks are more or less important than others
- If this is a revision, familiarize myself with any existing test data
- If this is a revision, interview a small subset of users about the existing version
- Identify the following:
- Whether this is formative or summative testing
- Sample size
- Participant demographics
- Draft (for stakeholder approval):
- Screener
- Tasks
- Session introduction with context and goals
- Assessment survey (if appropriate)
- Wrap-up questions
- Build a task matrix to control for sequence bias if possible (see the sketch below)
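To illustrate what I mean by a task matrix, here is a minimal sketch of one common approach: rotating task order in a simple Latin square so that each task appears in each serial position roughly equally often across participants. The task names and participant assignment below are hypothetical, and the right counterbalancing scheme in practice depends on the sample size and number of tasks.

```python
def task_order_matrix(tasks):
    # Rotate the task list so each task appears in each serial
    # position once across the set of orders (a simple Latin square).
    n = len(tasks)
    return [[tasks[(start + offset) % n] for offset in range(n)]
            for start in range(n)]

# Hypothetical tasks; participants are assigned rotation rows in turn.
tasks = ["Find pricing", "Create account", "Update profile", "Cancel order"]
for participant, order in enumerate(task_order_matrix(tasks), start=1):
    print(f"P{participant}: {' -> '.join(order)}")
```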
Here's the most important thing to me at this stage: I want to be convinced that the stakeholder understands what we will and will not know as a result of this testing. The last thing I want is for a stakeholder to think we are testing for something we are not, or testing for something that is not important. I take great pains to explain our level of confidence for everything in the test. Because this is qualitative or formative testing, we can only report on the strength of the patterns I see at this point in development. I try hard to set and manage stakeholder expectations. I'm not always successful, but I try.
- Pilot test (make adjustments if needed)
- Recruit, compensate, and schedule participants
- Arrange for facilities if onsite 1:1
- Remind participants
- Check testing equipment
- Analyze data for the following, as appropriate (a brief analysis sketch follows this list):
- Success
- Failure
- Level of difficulty
- Confidence
- Task attrition rate
- Efficiency (clicks, timing)
- Assessment survey responses
- Present findings to stakeholders along with any recommendations
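To give a concrete sense of the per-task numbers behind that analysis, here is a minimal sketch. The session records, field names, and values are purely hypothetical, and this is not my actual tooling; it only shows the simple arithmetic behind success, attrition, self-reported difficulty, and a timing-based efficiency measure for one task.

```python
from statistics import mean

# Hypothetical per-attempt records; field names are illustrative only.
attempts = [
    {"task": "Find pricing", "completed": True,  "abandoned": False, "difficulty": 2, "seconds": 48},
    {"task": "Find pricing", "completed": False, "abandoned": True,  "difficulty": 5, "seconds": 121},
    {"task": "Find pricing", "completed": True,  "abandoned": False, "difficulty": 3, "seconds": 66},
]

def summarize(task, records):
    # Filter to one task and compute the per-task measures listed above.
    rows = [r for r in records if r["task"] == task]
    return {
        "success_rate": sum(r["completed"] for r in rows) / len(rows),
        "attrition_rate": sum(r["abandoned"] for r in rows) / len(rows),
        "mean_difficulty": mean(r["difficulty"] for r in rows),  # e.g. 1-5 self-report
        "mean_seconds": mean(r["seconds"] for r in rows),        # efficiency proxy
    }

print(summarize("Find pricing", attempts))
```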
Here's my perspective on presenting findings: I keep the results of the data separate from my analysis. Let me explain. I make it clear to the stakeholders what the users actually did. There are times when I'm challenged about my conclusions about what was done, usually as a defense mechanism from designers, developers, and stakeholders not having their expectations (or fantasies) met. I get it; this isn't my first rodeo. If the pushback is especially strong, I ask if they would like more participants. Sometimes they do, and after three more participants they "frequently" see what the users are doing. When that happens, I keep my mouth shut.
Having said that, it is also important that they hear my overall assessment of what I observed. I make it clear which findings are critical, which are nice-to-haves, and which are unlikely to have much impact at all, or will affect only a small group of users. Not every user difficulty I observe requires a change, especially in formative testing. I don't test pacemakers; those have to be perfect.
I have also conducted low-budget usability testing with employees for some tasks, with a moderate level of confidence in the results.
Example of online interactive expert review
If you made it to this point, I am encouraged. You now know more about how I think and what I bring to the table. Let's talk about what I can do for you: 1.714.357.7578, Art Zippel.