
This is the second blog in our unmoderated research bias series. If you didn’t read part one, click here. In it, we talk about designing, writing and building surveys, and how to eliminate bias at that stage.

In this second part, we’re exploring biases introduced by the participants themselves and how to overcome them, which is a wider issue when it comes to survey responses. The reason? There isn’t a neat checklist to follow.

I’m going to go out on a limb here and say it’s impossible to eliminate all bias from user research. The way around bias is to recognise where it will creep in and calculate the impact it will have on the research. Once you know what the punches are and where the blows will land, you can at least look at implementing preventative measures to keep any unmoderated research bias to a minimum.

Beyond the design, other forms of bias are introduced by the selection of respondents and by the participants themselves.

🧪 Sampling bias

Sampling bias is one of the most obvious types of bias, yet it is often overlooked. Essentially, it means not using a representative sample of your target audience in user research, either by having too many or too few participants from specific demographics or by ignoring some groups altogether.

To borrow an example from history, in the 1948 US presidential election the Chicago Tribune famously announced that Dewey had beaten Truman off the back of a phone survey. At the time, phones were not in every household, so the sample was not representative of all voters in the country, and Truman went on to win the election.

What this means for user researchers is that some segments of your intended population will be over- or under-represented depending on your selection. This becomes problematic when researchers start to make decisions based on skewed results that suit one audience, while neglecting the needs of another.

Sampling bias isn’t always easy to spot. With a target audience, you may need to get a representative sample of that audience rather than the wider population.
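If you collect basic demographic data alongside your responses, a quick comparison against your target quotas can surface sampling bias before you start analysing answers. Below is a minimal sketch in Python with pandas; the file name, column name and quota figures are all hypothetical and would need to match your own screener data.

```python
import pandas as pd

# Hypothetical screener export: one row per respondent, with an "age_group" column
responses = pd.read_csv("responses.csv")

# Target quotas for the audience you intended to recruit (hypothetical figures)
target_quota = {"18-24": 0.20, "25-34": 0.30, "35-54": 0.30, "55+": 0.20}

# Share of each group actually present in the sample
actual_share = responses["age_group"].value_counts(normalize=True)

# Flag groups that are noticeably over- or under-represented
for group, target in target_quota.items():
    actual = actual_share.get(group, 0.0)
    gap = actual - target
    if abs(gap) > 0.05:  # 5 percentage point tolerance; adjust to taste
        print(f"{group}: target {target:.0%}, actual {actual:.0%} ({gap:+.0%})")
```

Anything flagged here is a prompt to tweak your recruitment or weight your analysis, not a verdict on the data.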

👩‍🏫 Demand characteristics bias

Demand characteristics bias means participants cotton on to the purpose of the research and tailor their responses to give the answers they think the researchers want to hear.

It is important to give participants context by all means, but as soon as a participant knows you’re looking for feedback on a specific design, they will recognise that and strive to give you the answers you want to hear, rather than the honest feedback you need.

Before you start a research project, it is worth giving the participant context of what you want to improve, but don’t give away the exact purpose of the research output.

A good example would be testing accessible design on a banking app. Say that you want to improve the app and that feedback is welcome, but don’t spend too long going into the background of the research or ask too many questions on the same subject. Participants are smart; don’t let their intuition skew your results.

🪂 Extreme responding bias

Extreme responding is a form of bias where users only select the most extreme answers and options available. It is commonly seen with Likert scales, where respondents have options between 1 and 5 but only ever give answers of 1 or 5.

There are ways of avoiding this before the survey goes out. During the design stage, it is worth adding as much variation to your questions as possible, so that similar questions aren’t grouped in one big batch that makes it easy for respondents to race through and select all the extremes.

Another way is to quantify responses and use objective options rather than subjective ones. What we mean by this is that rather than offering Never | Sometimes | Often | Always as options, you add concrete data that answers the question, such as Every hour | Every 6 hours | Every 12 hours | Every day | Less than once a day.

If you are reviewing data in the post-analysis stage and you are conscious of this bias creeping in, you can always add filters to exclude responses that only choose extreme scenarios. This can be done before exporting the original file (within your preferred platform) or on your spreadsheet in the analysis phase of the research. We’ll talk more about the analysis in part three of this series!
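As a rough illustration of that filtering step, here is one way it could look if you pull the export into Python with pandas. The file and column names are hypothetical; the idea is simply to flag respondents whose Likert answers are all 1s or 5s so you can review or exclude them.

```python
import pandas as pd

# Hypothetical export: one row per respondent, Likert questions scored 1-5
df = pd.read_csv("survey_export.csv")
likert_cols = ["q1", "q2", "q3", "q4", "q5"]  # adjust to your own question columns

# True where every Likert answer the respondent gave is a 1 or a 5
extreme_only = df[likert_cols].isin([1, 5]).all(axis=1)

print(f"{extreme_only.sum()} of {len(df)} respondents only chose extreme answers")

# Set them aside for review rather than deleting them outright
flagged = df[extreme_only]
cleaned = df[~extreme_only]
```

The same logic can be replicated with a formula and a filter in a spreadsheet if that is where your analysis lives.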

😎 Finding the right audience

This is one of the simpler elements of survey bias, and it comes from not understanding the objectives and brief of the survey in the first place. The overarching statement in any product work is “you are not your customer”. With that in mind, it’s worth focusing your survey on people who are your customers.

A good thing to think about here is how niche your ideal participant would be. For example, if you are running a survey on road works, you may want people who live locally to the area, but you may also want individuals who commute through the area or regularly drive the route on a variety of days and times. As long as these groups are defined beforehand, you or a third party like People for Research will be able to find the right people within the right splits. This is exactly what PFR’s unmoderated user recruitment focuses on; you can find out more about the service here.

🤷 Non-response bias

This is an interesting element to cover and it often doesn’t get much airtime because of the ambiguity surrounding it. Non-response bias looks at the differences in potential answers between the people who responded to a survey and those who did not complete it.

Participants can drop off during a survey for a number of reasons, and this drop-off rate should be taken into account just as much as anything else. There was a recent example of a government accessibility survey that caused a lot of participants to drop off as it was not accessible itself (seriously, read about it here).

Essentially, you want to know where participants are dropping off during your survey and which questions may have triggered them to do so. Sure, some drop-offs will come down to time constraints, but if you are covering personal or sensitive information, some participants might not want to share it in a survey.
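If your survey platform exports partial responses, a simple per-question completion count will show where people are dropping off. A minimal sketch, assuming a pandas DataFrame where unanswered questions are left blank and the question columns are listed in the order they appeared in the survey (file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical export that includes partial/incomplete responses
df = pd.read_csv("all_responses.csv")
question_cols = ["q1", "q2", "q3", "q4", "q5"]  # in the order they appear in the survey

started = len(df)
for col in question_cols:
    answered = df[col].notna().sum()
    print(f"{col}: {answered}/{started} answered ({answered / started:.0%})")
```

A sharp fall between two consecutive questions points at the question where people gave up, which is usually the one worth rewording, shortening or making optional.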

What other types of bias can come up when recruiting or finding participants? We’d love to know your thoughts. Here are the other two articles in the series:

PART 1: DESIGN & CONTENT ASSUMPTIONS
PART 3: ANALYSIS & MISINTERPRETATION OF RESULTS

Jason Stockwell, Digital Insight Lead

If you would like to find out more about our in-house participant recruitment service for user research or usability testing, get in touch on 0117 921 0008 or info@peopleforresearch.co.uk.

At People for Research, we recruit participants for UX and usability testing and market research. We work with award-winning UX agencies across the UK and partner up with a number of end clients who are leading the way with in-house user experience and insight.