I follow two tenets when testing products with people:
- Any testing is better than no testing
- Bad testing is worse than no testing
Even when you get the opportunity to test, messing up is terribly easy. Whether you’re doing usability testing, business analysis, ethnography, or any research where you’re tasked with extracting deep insights about people, you’ve got to put as much work into setting up your tests as you do into conducting and analyzing them.
Here are some things to keep in mind when doing your testing, along with some usability horror stories.
Get the right people
Remember when I said that any testing is better than none? I lied. You have to get several people and the right people to make your testing work.
A software consultant friend of mine had a client who wouldn’t let any of his employees talk with my friend, because the client “knew exactly” what the employees needed. I can’t think of a worse situation for starting your work (except not talking to anyone at all).
Not convincing? Consider the OLPC laptop. It was designed for children in developing countries, yet someone tested it with a computer-literate British child. Obviously, getting kids in third-world countries for usability testing is difficult. But testing with a kid unrepresentative of the end users? Why bother testing at all…
What can you do to make your research better? Always talk to relevant people. In my examples above, the client and the British kid are not the appropriate people to work with. Talk to the right people — the people who will actually use the software and who represent the real user base. Don’t let anyone else stand in for them.
Also, avoid an N of 1. Unless you’re designing an executive dashboard for an executive (and you’re talking with that executive), you need to talk to several people. If you’re not sure how many you need to speak with, keep interviewing until you don’t learn anything new.
Ask the right questions
Even if you have the right people for your research, it’s trivially easy to bias your results by asking the wrong questions, not following up when someone drops a hint about a related problem, or not having a goal for your testing.
For example, take Girlfriend Usability. Someone was interested in doing usability testing with Ubuntu Linux, so he got his girlfriend to sit down with it and test a few scenarios. Yes, he committed the “N of 1” error (see above), but this example also exposes other common problems you’ll encounter in your research.
First, you must have a target for your testing. Asking if Ubuntu can be “used easily by the mainstream” is far too broad to test, especially with only one subject. By the look of the tests performed, they seemed appropriate for a college student’s computer use but not appropriate for a business user or an IT professional. Figure out your target group, and build your interview/test/survey around the target users and their uses.
Once you have a target in mind, test the things that will help you draw valid, usable conclusions. Girlfriend Usability was a scattershot test of Ubuntu, OpenOffice, Pidgin, and an assortment of other applications and dialogs. I can’t see how such a scattershot approach will help you draw any conclusions about the Ubuntu user experience.
A more appropriate test would be to get a group of people, give each a laptop with Ubuntu on it, and ask them to use it as their primary machine for a month. Record the desktop and their interactions, give them support as they need it, and have them note whenever they feel they have problems. Then review those moments with the person who encountered them and dig into the issues underlying the problems. Absolutely give time for those people to adjust to Ubuntu. Don’t expect someone sitting down with a new operating system to “get it” right away. (Can you say “faulty test design?” I knew you could.)
Discover underlying problems
After you’ve done your testing, you need to draw conclusions from your research. In the case of usability testing, your conclusions should be actionable improvements that will solve the problems underlying the ones you saw in your research.
Drawing any conclusions is impossible if you didn’t ask the right questions or test the right people. Say you tested only one person: your conclusions may rest on an outlier, someone unrepresentative of the group you’re interested in. Your conclusions will be wrong; save yourself the embarrassment and find more testers.
But even if you do your testing correctly, you can still sink your test by drawing the wrong conclusions and taking the wrong actions. For the best example of drawing the wrong conclusions, you need look no further than Windows Vista. Vista is a limp forward from Windows XP. That’s as far as most people get in their analysis of Vista’s poor reception.
Listen to Bill G on The Daily Show describing how Microsoft determined Vista’s improvements.
After spending time with 50 families around the world and 5 million testers, I cannot believe that Microsoft could create a product as uncompelling as Vista. Actually, I take that back. They could make an uncompelling product if they completely disregarded their research and built whatever they felt like instead. Or maybe if all 5 million testers colluded to sink Vista…
So if you did your research right, how do you avoid drawing poor conclusions? Look for the themes across your test results. When I tested the Meez Facebook application, every person mentioned “friends,” but in different contexts — play with them, play against them, find them, etc. It’s a dead giveaway that friends are the secret sauce for the success (or failure) of that (and every) Facebook application.
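If you have your testers’ quotes transcribed, you can even rough out theme-hunting in a few lines of code. Here’s a minimal sketch in Python — the quotes are invented for illustration, and real qualitative coding takes far more judgment than word counting — that flags any term mentioned by a majority of testers:

```python
from collections import Counter
import re

# Hypothetical quotes from four testers (invented for illustration).
quotes = [
    "I want to play with my friends",
    "Can I play against friends from school?",
    "How do I find my friends on here?",
    "The avatars are cute, I guess",
]

# Count how many distinct testers mention each word.
mentions = Counter()
for quote in quotes:
    for word in set(re.findall(r"[a-z]+", quote.lower())):
        mentions[word] += 1

# A word mentioned by more than half the testers is a candidate theme
# (short filler words are skipped).
themes = sorted(w for w, n in mentions.items()
                if n > len(quotes) / 2 and len(w) > 3)
print(themes)
```

Here “friends” surfaces as the lone candidate theme, since three of the four testers mention it. A tally like this is only a starting point — it tells you where to look, not what the underlying problem is; for that, go back to the full quotes and contexts.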
Also, preserve your testers’ words. Direct quotes from the testers are the best source for your analysis, and they’re the most powerful weapon you have to convince others that your conclusions are right. Record all of your conversations with your testers to get the quotes right. If you have to edit quotes, never rewrite them in a way that changes their meaning. Changing the quotes will bias your results and lead you down the wrong path.
Even if you dismiss all my other advice, remember these two things:
- Do your research poorly and you’ll end up with poor results
- Do your research right and you might still mess it up in the end
The “right” research is different for every case. Do your homework to figure out who the right testers are and what the appropriate tests are. With that done, you’ll be well on your way to discovering valid, actionable conclusions that everyone will love.