Yes, UX research is valid: How to handle assertions it’s not real research
— By Kayla J Heffernan & Caylie Panuccio (AKA KayCay)
You’ve watched 5 participants all struggle to use a component in usability research. The research shows it needs to change. But the person whose idea it was tells you “it’s just 5 people’s opinions” and pushes back on changes.
Exploratory research with 12 targeted participants has shown that 8 out of the 12 check their bank balances immediately upon logging into their internet banking, so you recommend exploring this further or making design tweaks. The stakeholder says no, it’s not real research, and goes with what they wanted to do anyway. Or does nothing.
Another oft-repeated lament from UX researchers (and designers) is that no one takes the opportunities uncovered through qualitative research seriously.
Sound familiar?
Yeah, us too. That’s why we wrote this post — so you can combat these arguments and move forward with opportunities and designs.
Oh, the phrases you’ll hear!
Anchor linking no longer seems to work on Medium (even with the old hack), so you’ll have to scroll. Sorry!
- It’s only 5 / 10 / 12 people
- That’s an assumption/generalisation/are you sure?
- It’s just people’s opinions
- But the say-do gap
- It’s all subjective anyway
- But that’s not my experience
- I can’t make a decision based on this
- It’s not real / valid research
Other phrases you might hear that we haven’t covered include:
- These weren’t the right users (they were)
- They just need more training (they don’t)
- So what, we’re still making money / getting lots of daily active users / people doing the thing we want them to do (because it’s the right thing to do).
“It’s only 5 / 10 / 12 people”
In usability research we recruit specific participants who are reflective of the intended audience. If we have two distinct audience groups, we do two studies. We’re not just picking five random people. Five targeted participants find about 85% of the usability problems that the group in question experiences. After that, you’re spending money for diminishing returns. We know this from a peer-reviewed academic study published at the most prestigious venue in human-computer interaction research (Nielsen and Landauer, 1993). They used math to prove this and everything!
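If anyone wants to see that math, here’s a minimal sketch of the model from the paper. The only input is the probability that a single participant encounters any given problem; Nielsen and Landauer report roughly 0.31 on average across their projects, which is what we assume below.

```python
# Back-of-the-envelope version of the Nielsen & Landauer (1993) model.
# Proportion of usability problems found with n participants:
#     found(n) = 1 - (1 - L)**n
# where L is the probability that any single participant hits a given
# problem (~0.31 on average across the projects in their paper).

L = 0.31

for n in range(1, 11):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} participants -> ~{found:.0%} of problems found")

# 5 participants -> ~84%: the famous "about 85%" figure, with sharply
# diminishing returns for every participant after that.
```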
Once you throw usability metrics in there, the number increases to 20–30. You can get away with 15 (speaking from personal experience, the margins of error sit around 10%).
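Why the jump once you’re measuring? Because now you care about confidence intervals, not just problem discovery. As a rough illustration (the 80% success rate below is invented, and your exact margins depend on the metric and its variability), here’s how the margin of error on a task-success rate shrinks with sample size, using the adjusted-Wald interval that’s common in quantitative usability work:

```python
# Illustrative sketch: margin of error on a task-success rate using the
# adjusted-Wald (Agresti-Coull) interval. All numbers are invented.
import math

def adjusted_wald_moe(successes, n, z=1.96):
    # Add z^2 trials and z^2/2 successes, then apply the Wald formula.
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    return z * math.sqrt(p_adj * (1 - p_adj) / n_adj)

for n in (5, 15, 30):
    successes = round(0.8 * n)  # pretend 8 in 10 participants succeed
    print(f"n={n:2d}: 80% success +/- {adjusted_wald_moe(successes, n):.0%}")
```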
For other types of UX research, it is well known that 12 interviews suffice for understanding the common views and experiences of a homogeneous group (Guest, Bunce & Johnson, 2006). Any more than that, and you’re gonna start hearing the same things over and over again.
Dr. John Latham has also written about saturation in research (“the amount where additional participants don’t provide any additional insights”) and recommends that 12–15 participants from the same group is enough.
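You can even watch saturation happen in your own data. A made-up sketch: after each interview, tag the transcript with codes and count how many are genuinely new; when the count of new codes flatlines, you’ve hit saturation. (The codes below are invented banking examples.)

```python
# Made-up sketch of tracking saturation across interviews.
interviews = [
    {"fees", "balance_check", "trust"},    # interview 1
    {"balance_check", "payees", "trust"},  # interview 2
    {"fees", "statements"},                # interview 3
    {"balance_check", "trust"},            # interview 4: nothing new
    {"statements", "payees"},              # interview 5: nothing new
]

seen = set()
for i, codes in enumerate(interviews, start=1):
    new = codes - seen
    seen |= codes
    print(f"Interview {i}: {len(new)} new code(s) {sorted(new)}")
```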
If they still don’t ‘believe’ this, they’re now the one who doesn’t think research is valid.
Of course, your time, resources, and the focus of the research may shape how many participants you recruit from each homogeneous group. For example, if the main focus of the research is small-to-medium business owners in cities, they might make up 10 of the 12, plus 2 from regional locations. That ratio also handily reflects the Australian metro/regional divide.
“That’s an assumption/generalisation/are you sure?”
No, it’s something several people told us that they do. If it’s come up fairly frequently, that’s probably worth exploring further.
To combat this, refer to participants as “participants”, rather than “business owners”, “chocolate-chip cookie eaters”, etc. This avoids over-generalising your findings.
When your participants are representative of the population, findings do generalise better from the sample (the people you spoke to) to the population (all the people like your users).
Also — you don’t need to shoot it down. Ponder it. Chew on it a little. Then decide what you’re going to do about it.
“It’s just people’s opinions”
In positivism we believe that facts are facts and that research is uncovering the laws of the universe. That’s great for determining the laws of physics (gravity, after all, doesn’t care whether you believe in it; either way, you’re not leaving the Earth no matter how high you jump).
That’s not what we’re doing in UX and design research. We are exploring human behaviour to inform design and uncover potential problems to be solved, not emphatically stating something to be true of every single person on the planet. All social research is based on understanding opinions; this is interpretivism, where knowledge is understood to be subjective and shaped by individual experiences.
These “opinions” help us understand the behaviour, motivations, needs and problems of our targeted users, and let us explore, in a structured way, a problem we don’t know much about. The results of such research allow us to identify gaps and opportunities for potential solutions.
“But the say-do gap!”
The say-do gap refers to a behaviour exhibited by research participants where they’ll say they do something, but won’t actually do it in practice (and, let’s be honest, most of us do this).
Let’s look at some examples of this.
- “I pay off my credit card each month.” This may not be true for the participant, but they might want you to think they’re responsible and money-savvy. That’s the social desirability and conformity bias popping up here.
- “I wouldn’t use a resume builder, because I know how to write a good resume.” If a participant says something like this, it could be that they’re trying to show you they’re capable.
Another form of this is where a participant may use a product differently in different contexts. This is where contextual research comes in handy. Seeing a participant use a product in their own environment means they’re much more likely to use it naturally. Having participants interact with a product ‘in the moment’ and capturing their responses, as opposed to asking what they think via a survey or during a usability test, gets us a much more natural response.
Anyway: yes, the say-do gap exists. However, we need to recognise that participants still *think* these things, or want us to believe that they do. That’s compelling in itself. If it’s front of mind enough for them to bring it up, it’s likely important, or at least aspirational. And if it’s aspirational, maybe we can help it become a reality.
As outlined above, you can get around the say-do gap by coupling your interviews with in-context observation, or tracking user behaviour on-site.
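If you do have behavioural data, even a crude comparison of stated versus actual behaviour is persuasive. A hypothetical sketch (the participants and values are invented):

```python
# Cross-checking what participants *said* against what analytics show
# they *did*. All participants and values here are invented.
said_they_do = {"p1": True, "p2": True, "p3": False, "p4": True}
actually_did = {"p1": True, "p2": False, "p3": False, "p4": False}

for p in sorted(said_they_do):
    flag = "  <- say-do gap" if said_they_do[p] and not actually_did[p] else ""
    print(f"{p}: said={said_they_do[p]}, did={actually_did[p]}{flag}")
```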
A good researcher understands the say-do gap. We are not requirements gathering and building whatever a user says. We are watching them struggle and then say “that was easy” when we know it wasn’t. A good researcher is able to recognise social desirability bias, and probe to counteract this (through repeating what the participant has said, or questioning when they appear to be contradicting themselves).
In exploratory research we are examining something we don’t know about to develop preliminary ideas. This research describes a phenomenon (or phenomena) and comes to conclusions based on the data. We’re not asking 12 random people their favourite colour and using that to make a decision. We are running 12 recruited participants through structured questions designed to answer stated research questions. An experienced researcher then uses data-analysis techniques to reach conclusions supported by the data. We use inferences from the data to understand why people behave as they do, to guide design, and to surface further opportunities to consider. This is qualitative research.
Even if they were just ‘opinions’ that’s 12 additional opinions, from real users, to help make better decisions. We’re never trying to tell you exactly how the world is — we’re trying to de-risk, and help you understand your users, what they’re doing, and what that could mean.
If 12 people at a BBQ told you an online shopping site experience sucks, you’d probably believe them. Would you shop there after that?
If you placed a chair in the middle of a hallway, and you saw 3 out of 5 people walk into the chair, swerve around it, or otherwise look a bit disgruntled that it was there, would you move the chair? Well, you’d hope so. Bit psychopathic if you didn’t. (Thanks Chris Marmo for that example!)
“It’s all subjective anyway”
Yup: interpretivism again. People are complex. UX is all subjective; every single user’s experience with your product is influenced by their previous knowledge, feelings, tastes, opinions, and other experiences.
That’s why you can’t be the definitive test of whether your product is ‘good’ or ‘bad’: your views are subjective too. UX is quantum: your product is both good and bad, usable and unusable (alive and dead) until you observe your users interacting with it.
We study what occurs in a particular context. We study how participants explain their statements and actions and ask what analytic sense we can make of them. We research to qualify this subjectivity.
“But that’s not my experience”
Variants of this include “my mum / husband / neighbour’s-best-friend’s-boss’s dog uses our product and doesn’t have the problems these users did”, “it’s the user’s fault” or “these aren’t the right users”.
Cool story. Also, irrelevant. We’ve never experienced prostate cancer, racial discrimination, naturally straight hair, etc.; those things don’t stop existing just because they don’t occur for one user group. Secondly, would you seriously discount someone’s experience of one of those things? No? So why are you doing it here?
Another variant is “I didn’t see it, so it didn’t happen”
Sometimes people simply refuse to believe what you’ve observed in user research to be true, because they weren’t there and well-this-design-is-awesome-and-I-meant-for-it-be-understood-differently-no-one-understands-my-brilliance.
Welp: you are not the user. You are likely more tech-savvy than the person who will be using the thing. Just because it makes sense to you doesn’t mean it’s going to make sense to someone else. You also work in, and are familiar with, the company you’re designing for. The word “beneficiary” might make sense to you, but it certainly doesn’t make sense to the person using the platform once a fortnight.
“I can’t make a decision based on this”
Well, that depends. If you’re deciding on:
- Opportunities to pursue and/or explore further
- Whether the design you are testing is usable
- Whether the design/idea you are testing has legs (this is that old friend, value proposition testing).
You’ll have a pretty good idea, after 5–12 people, of which direction to go in.
If you are deciding on:
- Whether this cool new thing you found out from a round of exploratory research is a thing that’s worth pursuing investment-wise
- How big that opportunity is.
You might need some quant data to back that up.
Lastly, if you are mapping the experience of your users — 12 participants is fine (as per above).
“It’s not real / valid research”
Qualitative research is real research. We follow the scientific method; we have rigour. We’ve either learned it at uni, or on the job working with folks who’ve been doing it since before it was called UX. We can get very academic if you want, but that’s not going to serve our purpose either; you’ll switch off. We follow a similar process to academic research, just modified to work faster and leaner in agile, corporate contexts. User interviews are a hugely popular method for UXers working in agile environments because you can learn quite a bit, very quickly.
For argument’s sake, let’s get academic.
UX research has roots in human factors, human computer interaction, psychology and ethnography.
- We may or may not have a hypothesis we are trying to prove or disprove, but we always have a research question. Even in inductive research, where we start with data rather than a hypothesis, we still have a broad research question we are exploring, such as “How do job seekers understand their Profile?”
- We evaluate and consider different research methods we could use to answer said research question(s).
- We design questions to answer the research question, ensuring they are not biasing or priming participants.
- We recruit specific participants from a defined population to take part in the research.
- We conduct the research, collecting data through observations, interactions and interviews or surveys.
- We separate, sort and synthesise the data through qualitative coding. Coding distils the data into a frame through which we can make comparisons. We usually affinity-map these codes into broader groups (there’s a small sketch of what this looks like after this list).
- We compare and contrast the data.
- We analyse the data to make inferences and conclusions.
- When reporting on these findings, we refer to ‘participants’ instead of ‘[user group name]’ to avoid generalising.
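That coding-and-affinity-mapping step is less mysterious than it sounds. Here’s a minimal, hypothetical sketch; the excerpts, codes and themes are all invented, and real coding is iterative and interpretive rather than mechanical:

```python
# Hypothetical sketch of coded excerpts rolling up into broader themes.
from collections import defaultdict

coded_excerpts = [
    ("I check my balance first thing", "balance_check"),
    ("I never know what the fees are", "fee_confusion"),
    ("I look at my balance before paying anyone", "balance_check"),
    ("What does 'beneficiary' even mean?", "jargon"),
]

# Affinity map: code -> broader theme (a researcher judgement call).
themes = {
    "balance_check": "Staying in control of money",
    "fee_confusion": "Opaque product language",
    "jargon": "Opaque product language",
}

grouped = defaultdict(list)
for excerpt, code in coded_excerpts:
    grouped[themes[code]].append(f"[{code}] {excerpt}")

for theme, items in grouped.items():
    print(f"{theme}:")
    for item in items:
        print(f"  {item}")
```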
There are literally hundreds, maybe thousands, of books written on qualitative research; university courses are taught on the subject; and countless doctoral dissertations have been accepted using qualitative research methods. It is real research.
Findings from qualitative research are more useful for guiding decisions than a stakeholder emphatically denying what the research is showing us. What’s more scientific: some research, or your gut feel?
The bottom line
If you’re a UX researcher, hopefully this article will help you fight some battles. If you’re not, hopefully you understand the validity of qualitative research methods a little better. Quantitative research, of course, does have its place in UX: to quantify the experience through UX metrics (loyalty, usability, credibility, appearance, etc.), to quantify the size of an opportunity, or to determine a statistically significant winner in a multivariate test. No one really doubts the validity of quantitative research; that’s the “more is better” mentality at work.
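For the curious, here’s what “statistically significant winner” boils down to in the simplest two-variant case: a two-proportion z-test with invented numbers (real multivariate tests also need corrections for multiple comparisons and sensible stopping rules):

```python
# Hypothetical sketch: two-proportion z-test for an A/B winner.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (conv_b / n_b - conv_a / n_a) / se

z = two_proportion_z(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 95% level
```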
And one more thing…
Sometimes, no matter how hard you try, how late you work, or how much you cajole, you will just not get the right number of participants, the right majority/minority mix, or the right kinds of users. That’s not ideal, but often we have to make do (thanks to timeframes, budget constraints, or hard-to-find user groups). The important thing is being able to justify why you’ve made the choices you’ve made in your recruit.
Thanks to Kayla’s Twitter friends who helped us brainstorm excuses they’ve heard about UX / Qual research not being valid.
References
Nielsen, Jakob, and Landauer, Thomas K.: “A mathematical model of the finding of usability problems,” Proceedings of ACM INTERCHI ’93 Conference (Amsterdam, The Netherlands, 24–29 April 1993), pp. 206–213.
Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18(1), 59–82. http://dx.doi.org/10.1177/1525822X05279903
Want to learn more? Here are some good books
Creswell, John W., and Cheryl N. Poth. Qualitative inquiry and research design: Choosing among five approaches. Sage Publications, 2017.
Charmaz, Kathy. Constructing grounded theory. Sage, 2014.
Bernard, H. Russell. Social research methods: Qualitative and quantitative approaches. Sage, 2013.
Williamson, Kirsty, and Graeme Johanson, eds. Research methods: Information, systems, and contexts. Chandos Publishing, 2017.
Bryman, Alan. Social research methods. Oxford University Press, 2016.
Who are we
Kayla Heffernan is the UX Design Lead at Seer (previously SEEK, Apparel21, IBM and Unisys). She is passionate about research, with a decade of UX research experience and a couple of theses to boot. She has 3 cats, all of whom are very friendly. Even Caylie’s partner likes them, and he hates cats.
Caylie Panuccio is a Senior UX Researcher at SEEK, previously at NAB. She is the founder of the Design Research Melbourne meetup which aims to improve the standard of design research in Melbourne, and promote researching with integrity. She’s a big believer in democratising research, but doing it well. She is currently a foster mum to a blue-tongue lizard named Banjo.