Some common reasons teams give for avoiding research studies, why those reasons are misguided, and some tactics for overcoming them.
User research studies uncover information about users or products that helps teams make better decisions. This is rarely unhelpful. Even so, product and design teams can be reluctant to run user research. Sometimes this is for very human reasons – people may worry that discovering issues will reflect badly on the quality of their work. This is understandable, and it takes time and empathy to help colleagues overcome it.
Other reluctance stems from not understanding how research can help. In this article, we’ll look at some examples of where this can occur, and some techniques for educating people and overcoming these fears.
“It’s not ready to test yet”
Some teams assume that software needs to be nearly finished before it’s worth running user research studies. This assumes that any issue can be fixed with ‘tweaks’, and that there is no value in research until something representative of the final product exists. It’s misguided for a couple of reasons.
The first problem is that waiting until it’s ‘ready’ usually means waiting until it’s too late to fix issues. The cost of making changes increases over time – it’s much cheaper to change an ‘idea’ than an almost completed app. If the core idea of the app is misguided (and it frequently is), learning that just before launch is far too late. Early research studies – looking at how users currently perform similar tasks, and what they find hard – can confirm that the idea itself is worth pursuing, without wasting expensive development time.
Another problem with waiting is that people become more invested in their idea the longer they spend with it. Putting weeks of development work into a feature, only to be told it needs to change, can be hard to hear. This makes people more likely to dismiss research findings, which reduces the impact of research studies.
Teams also assume that researchers can’t work around the bits that ‘aren’t ready’ yet. However, carefully scoped research objectives, combined with an understanding of the intended experience, allow researchers to recreate the missing parts of the experience where required, so the study can go ahead before everything is complete.
To overcome this misconception, start by educating the team about the points above, so that members are aware of the risks of waiting too long. Time spent talking with the team to understand their current priorities can also help identify relevant research objectives at every stage of development – a researcher can suggest potential objectives the team wouldn’t have thought of on their own. Strong communication with development teams will identify which high-impact research objectives can be addressed immediately, and which ones moderators will need to mock up or work around in their study designs.
“User research is too expensive”
It is true that running user research studies costs money. Finding the right participants takes time, and incentivising them to actually turn up means paying them. Similarly, inviting team members to view sessions costs the company money, because it takes up those team members’ time.
As user researchers, we believe these activities save the company money further down the line. This can be hard to measure – although being hard to measure doesn’t mean the saving isn’t real. Studies save money by helping teams get to the best implementation of their idea sooner, reducing the time it takes to build a useful and usable product. Acting on research findings also helps earn the company money by increasing customer retention. I touched on some techniques for measuring the Return On Investment (ROI) of research in a previous article for the Leading Research community.
Working out the ROI of early, low-commitment studies can help build confidence that research is useful and saves money. Researchers should work out which metrics matter to their company – finding more customers, keeping the customers they have, releasing sooner – and describe how their studies move those metrics. This makes a strong case for the financial benefit of running research studies.
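As a purely illustrative sketch (the figures below are hypothetical, not drawn from a real study), the basic calculation is:

ROI = (estimated benefit − cost of research) ÷ cost of research

For example, a £3,000 study that prevents two weeks of misdirected development work worth roughly £10,000 gives (10,000 − 3,000) ÷ 3,000 ≈ 2.3 – about £2.30 returned for every £1 spent. Even rough numbers like these, stated alongside their assumptions, are easier to take to stakeholders than an unquantified claim that research is valuable.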
“We know what’s wrong already”
Most of the time, teams already know some things that are wrong with their software. Prioritisation and deadlines mean that many known issues have to be ignored for now. This can lead teams to worry that they will learn nothing new from a research study – that it will just repeat issues they are already aware of, and so won’t be useful.
To deal with this, the first step is to understand why the known issues aren’t being fixed. Is it a lack of evidence, which research can help with? Or is it internal politics, such as friction between teams? Uncovering this makes it possible to judge whether a research study would add value, and to prioritise its objectives.
Even if some problems are already known, a research study can still provide value. Understanding an issue’s impact on users helps set its priority. Understanding users’ existing experience can also inspire better fixes that need fewer iterations, by bringing “how it works” and “how people think it works” into closer alignment earlier.
Another aspect to consider in these conversations is defining appropriate research objectives: how will you work with the team to decide what the next study should learn? Objectives should be informed by upcoming decisions the team has to make. By setting appropriate objectives (and leaving out those tied to already-understood issues), a research study can uncover new and interesting problems rather than restate things the team already knows.
“User research takes too long”
Research studies take time to run. Reliably answering research objectives requires participants, and recruiting the right kind of participant can take a week or more. This can make teams worry that findings will arrive too late to act on – they will have moved on to other priorities, making the study pointless.
For researchers, the actual data collection, analysis, and debriefing can be reasonably quick – in the book Building User Research Teams I describe how to do this in two days, so the study itself fits easily into a team’s schedule. The real challenge is having enough warning to schedule and plan the right study for the team’s current priorities.
Overcoming this is again a communication issue, and requires a researcher to anticipate what the team will want to know in one to two weeks’ time. This can be done by attending the team’s planning rituals, meeting with product managers, and keeping track of what the development team is working on while recruitment is underway, so that objectives can be adjusted closer to the time. Close collaboration with teams makes it possible to run studies exactly when they are needed.
As with the financial cost, running user research studies actually saves development time, by identifying problems earlier in development so they can be prioritised and resolved promptly. It’s much harder for a development team to react to problems close to launch, and doing so means more development work has to be thrown away. Research throughout development shortens the time it takes to reach a high-quality product.
“User research is just people’s opinions, not real data like analytics”
Teams that are new to working with user researchers, or that have worked with disappointing researchers before, may not immediately recognise the value of qualitative research. Analytics and A/B testing produce trustworthy-looking numbers and numerical proof of impact, which makes them easy to measure and justify. In contrast, qualitative studies give rich information that requires analysis before it can indicate product directions or inspire solutions, and it is harder to turn into numbers. However, as we’ve seen, not everything that matters can be measured.
When it comes to actually deciding what to do, qualitative research findings are hugely valuable and give a much stronger steer on “what should we do about this problem” than quantitative research. Discovering that “39% of people click on this page versus 20% on that page” doesn’t give rich hints about how to increase the number of people who click, or whether the page is achieving its goals. In contrast, understanding “what were people trying to do when they arrived on the page”, “did they manage to do it”, and “why/why not” will inspire many informed ideas about how to increase that 39% figure (or reveal whether increasing it is even the right thing to do).
This lack of trust in qualitative data can also result from a misunderstanding about what a user researcher is doing when they talk to, or observe, people. A researcher should not just report what people say. They probe users’ thoughts and behaviour so that they can share the users’ understanding of the situation. This insight allows sensible decisions to be made about how to change what users think or do. Combined with a quantitative understanding of how representative each behaviour is, it can be very powerful for making good product decisions.
When you shouldn’t run a research study
There are times when it truly isn’t appropriate to run research studies. Sometimes the decision has already been made, and there won’t be a chance to react to the findings of a study. Other times decisions aren’t being informed by evidence at all, and research findings will be ignored. I’ve written about how to recognise and overcome those situations in the book Building User Research Teams.
Challenging teams who are reluctant to run research is a key part of increasing people’s awareness of the ways in which research studies can help, and helps raise the research maturity of a company.