HOW TO

Unmoderated research is great for collecting a large volume of data over a short time period, giving researchers quantifiable numbers on which to base actionable decisions. However, before you dive into the world of surveys and unmoderated research, it’s important to understand its potential flaws and why they exist.

As a project ends and the data is shared or moved around, it can be hard to draw conclusions based solely on the numbers, without the right context. You have never met these people and you might not know their motivations for giving you feedback, but they have done so. Understanding every step in a participant’s decision to take a survey is important if you are to properly analyse the results and make the right decisions.

Because of this, surveys can be a blunt instrument, but that doesn’t mean they don’t hold value. One of the ways to get the most out of a survey is to ensure potential unmoderated research bias is reduced to a minimum. This is the first in a three-article series covering how to overcome bias when designing surveys.

Mis-specifications in setting objectives

Setting objectives can go either way: done well, they anchor the whole study; done badly, they undermine it. Every piece of research will (or should) have an objective at the start of the process, but there are three glaring mistakes to avoid in the context of unmoderated research.

▪️ Not aligning questions with objectives
Start of the survey = lovely objectives. Three weeks later = pointless results. Ask the right questions off the back of your objectives so you get the answers you need. Define your overall specifications and create category sections that each focus on an individual goal. Too many surveys ask questions unrelated to the original objectives, and it’s confusing for everyone involved.

▪️ Using one survey or task to answer too many questions
Surveys and other unmoderated tasks work best when they have a clear focus and path. All too often, inexperienced quantitative researchers cram their surveys full of nice-to-know questions, rather than solving one problem at a time. Identify your audience, ask only the essential questions, answer the points relevant to your objectives and analyse the results.

▪️ Asking everyone everything
Not every question is right for everyone. Sure, you might want to target two demographics and compare two sets of results, but beyond that, asking the same question of too many different groups will skew your stats and make comparisons between data sets unusable. Focus on smaller splits of participants to get the right data, implement proper conditional logic (see the sketch below) and then split the data out from there.
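To illustrate, here is a minimal sketch of conditional (skip) logic in Python. The question IDs and routing rules are hypothetical and would normally live inside your survey tool, but the principle is the same: an answer determines which question a participant sees next.

# Minimal sketch of survey skip logic: each rule maps (question, answer)
# to the next question a participant should see. IDs are hypothetical.
ROUTES = {
    ("q1_owns_car", "Yes"): "q2_car_brand",        # only car owners see brand questions
    ("q1_owns_car", "No"): "q5_public_transport",  # everyone else skips ahead
}

def next_question(question_id, answer, default):
    """Return the next question for this participant, falling back to the default order."""
    return ROUTES.get((question_id, answer), default)

print(next_question("q1_owns_car", "No", "q2_car_brand"))  # -> q5_public_transport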

Bias introduced via design, content, development and assumptions

When running unmoderated research, our biases will inevitably be reflected in the survey, in our communication with participants, and sometimes in ways beyond the researcher’s control. Let’s start with an interesting consideration.

🎨 Should you brand your survey/task?

If people are familiar with your brand, this might skew your results. Brand familiarity can be a positive in certain types of surveys or customer feedback questionnaires. But if there were a Pepsi logo at the top of the screen and participants were asked whether they preferred Pepsi or Coke, they would lean towards Pepsi – this is known as ‘demand characteristics bias’, which we will cover in more detail in the second blog.

The branding of the survey will impact your responses, so be mindful of whether or not you want to give away your organisation’s identity in unmoderated research.

🧴 Showcasing prototypes

When showing prototypes and asking for feedback, ensure the prototype is in a state you are happy with. Otherwise, its rough edges are all the participants will notice.

We have previously run a prototype test where some of the links didn’t work. This was highlighted in the instructions; however, all the feedback we got from the survey was about the broken links and not about the other elements in the prototype – the ones people were supposed to feed back on. As a result, participants missed a lot of the other features and potential improvements.

If you are doing extensive prototype testing, the questions off the back of it are incredibly important. They need to be specific and objective to avoid ambiguous answers. For example, instead of asking “What was the best thing about the prototype?”, move towards questions like “Do you understand your next steps after reading this section?”.

These types of questions allow you to measure how well your prototype communicates information and conveys its purpose, and how its design and functionality are interpreted by your audience.

🙊 Words in questions and answers

Language is a limiting factor in human communication, as different individuals will perceive the same words in different ways. This is not limited to generational differences and slang: words like wicked, economical, challenge and unique all carry different meanings depending on the context.

How we use language is at the core of leading questions, which unfortunately are frequently used in surveys. Here are the five core types of leading questions to watch out for:

▪️ Assumption
“How much did you enjoy the product?”
That’s an obvious one, but it appears a lot, often in the form of asking about ‘the best thing’.

▪️ Interlinked statements
“Most people like the design. What do you think about it?”
The respondent is being set up here to say they like the design – good for an ego boost, bad for a survey.

▪️ Direct implication
“If you liked this prototype, would you pay to use this service in future?”
This can be a hard one to get around; it’s best to stick to actual experiences in the survey before adding future hypotheticals.

▪️ Coerciveness
“You liked the product, right?”
Asking participants forcefully is a no-no.

▪️ Questions as statements
“Don’t you think…” Or “Would you agree that…” questions.

Avoid these types of questions, as they add an assumption. We’ll touch on the acquiescence bias this encourages a bit later on. For a rough automated safeguard against leading phrasing, see the sketch below.
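You can scan draft questions for common leading phrases before a survey goes live. A minimal sketch in Python follows; the phrase list is illustrative and nowhere near exhaustive, so treat it as a prompt for human review rather than a substitute for it.

# Minimal sketch: flag draft questions that contain common leading phrases.
# The phrase list is illustrative, not exhaustive.
LEADING_PHRASES = [
    "don't you think", "would you agree", "how much did you enjoy",
    "most people", ", right?", "the best thing",
]

def flag_leading(question):
    """Return any leading phrases found in the question text."""
    lowered = question.lower()
    return [phrase for phrase in LEADING_PHRASES if phrase in lowered]

for q in ["How much did you enjoy the product?", "How often do you use the product?"]:
    print(q, "->", flag_leading(q) or "looks neutral")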

Framing questions without positive or negative sentiment removes assumptions and allows participants to think about and elaborate on their own opinions. The same can be said for multiple-choice answers. Using objective, quantifiable language in answers gives participants more clarity on exactly where they fit. Let’s take a look at a few examples.

How often do you actively work out for longer than 15 minutes?
Very often | Quite often | Fairly often | Not that often | Not at all often

This is very open to the participant’s interpretation: if I used to exercise daily and now exercise every other day, I might choose ‘fairly often’ – but a researcher wouldn’t know the context behind my answer. Instead, we should provide options such as the following:

Daily | Multiple times per week | Weekly | Less than weekly | Never

This is by no means perfect, but quantifiable options translate into better data.
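One practical payoff of quantifiable options is that they can be mapped to numbers during analysis, which is not defensible for a ‘very often’-style scale. A minimal sketch, where the numeric values per option are illustrative assumptions rather than measured facts:

# Minimal sketch: map quantifiable answer options to approximate workout
# sessions per week so responses can be analysed numerically.
# The numeric values are illustrative assumptions.
SESSIONS_PER_WEEK = {
    "Daily": 7.0,
    "Multiple times per week": 3.5,
    "Weekly": 1.0,
    "Less than weekly": 0.5,
    "Never": 0.0,
}

answers = ["Daily", "Weekly", "Never", "Multiple times per week"]  # hypothetical responses
average = sum(SESSIONS_PER_WEEK[a] for a in answers) / len(answers)
print(f"Average sessions per week: {average:.2f}")  # -> 2.88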

Your language choices and the way you communicate throughout an unmoderated task heavily influence the participants’ experience. Instructions and expectations give the users context about what state of mind they need to be in for the survey and what is expected of them: this is especially important when looking at hypothetical scenarios such as a purchase decision.

✅ Acquiescence bias

Acquiescence is a posh word for agreement. This is a little more out of your control, but you are able to filter out individuals who only select options that agree with what you want to hear/read.

One way around this bias is to avoid leading questions and answers in surveys and to vary the question types throughout, so participants see a mix of response options and question formats. Everyone has seen a survey with 14 consecutive ‘very important’ to ‘not at all important’ scales: it’s easy to go through unconsciously and click ‘very important’ on everything.

Get around this bias by following the practices stated above: avoid subjective, leading questions and give participants the freedom to think for themselves. If you can export raw responses, you can also flag likely ‘straight-liners’, as the sketch below shows.
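A simple check for straight-lining (one answer repeated for nearly every question) can help you spot acquiescent participants before analysis. A minimal sketch; the 90% threshold is an illustrative assumption, not an industry standard.

# Minimal sketch: flag participants who gave the same Likert answer to
# almost every question, a common sign of acquiescence or straight-lining.
from collections import Counter

def is_straight_liner(answers, threshold=0.9):  # threshold is an illustrative assumption
    """Return True if a single answer makes up more than `threshold` of responses."""
    if not answers:
        return False
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / len(answers) > threshold

responses = ["Very important"] * 13 + ["Quite important"]
print(is_straight_liner(responses))  # -> True: review before including in analysis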

🔨 Question order

We get asked a lot whether there is a sweet spot for question order in surveys and, from experience working on thousands of surveys, we can confirm there is.

In many cases, yours won’t be the first survey an individual has ever completed, so participants will be familiar with a specific format: demographics and background information come first; following these should be the core questions that answer your original survey hypothesis; and after those come further follow-up questions going into more detail on participants’ answers, if they are required.

Simple questions at the start set the scene for the survey and get participants in the right frame of mind to go a bit deeper. Because participants suffer from survey fatigue after about five minutes, any detailed questions that require deep thought and considered opinions need to come towards the start of the survey rather than the end.

💌 Social desirability bias

There are two ways social desirability feeds into survey bias. The first is that people think they are above average through illusory superiority – a cognitive bias where individuals overestimate their own abilities.

If we look at driving ability as an example, most people think they are above-average drivers. Because of this, when assessing knowledge-based information, participants are likely to rate themselves better than they actually are, something worth checking after the survey.
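One way to run that check is to compare self-rated ability against measured performance, where both are collected in the study. A minimal sketch with hypothetical data and an illustrative gap threshold:

# Minimal sketch: compare self-rated ability (1-10) with a measured test
# score (1-10) to spot possible illusory superiority. Data is hypothetical.
participants = [
    {"id": "p1", "self_rating": 9, "test_score": 6},
    {"id": "p2", "self_rating": 8, "test_score": 8},
    {"id": "p3", "self_rating": 7, "test_score": 4},
]

for p in participants:
    gap = p["self_rating"] - p["test_score"]
    if gap >= 2:  # illustrative threshold for "worth a closer look"
        print(f"{p['id']}: self-rating exceeds score by {gap} points")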

As well as illusory superiority, people will not always disclose information they perceive as embarrassing or unethical in a survey. Questions regarding health, alcohol consumption, gambling, smoking and sexual experiences should always be caveated with the purpose of the research and a reassurance that the information will not be used for anything beyond the original research.

What other types of bias can be overcome via design and content? We’d love to know your thoughts! To read more on the topic, check out the next two parts in this series.

PART 2: DATA COLLECTION – NON-RESPONSES, SAMPLING ERRORS
PART 3: ANALYSIS & MISINTERPRETATION OF RESULTS

Jason Stockwell, Digital Insight Lead

If you would like to find out more about our in-house participant recruitment service for user research or usability testing get in touch on 0117 921 0008 or info@peopleforresearch.co.uk.

At People for Research, we recruit participants for UX and usability testing and market research. We work with award-winning UX agencies across the UK and partner with a number of end clients who are leading the way with in-house user experience and insight.