HOW TO

Onto the final article in the series! Congrats, you made it. Unless, that is, you’ve just landed on this article and want to read the two that came before it: click here for part one and here for part two of this series.

Today we’re looking at an element of survey bias that is “in the researchers’ hands”.

Let me clarify that statement. If individuals fill out a survey improperly, it feels like something that could have been avoided. But even with the best design, incredible content, sound development, thorough instructions, a survey that is as simple as it can be and an even representation of the population, a small percentage of participants will still fall foul of some of the bias elements covered in the first two blogs of this series, and that is no fault of the researcher. We are dealing with people, after all.

What I mean by “in the researchers’ hands” is that, even if you have perfect (whatever that means for you) results, it does not necessarily mean that you are going to get perfect analysis, as bias can solely impact this final stage of your research. Data analysis and survey design are two completely different specialities and should be treated as such.

🧮 Correlation and causation

There are obvious examples of correlation not being causation. For example, the number of Nicolas Cage films released each year correlates with drownings in swimming pools. No, really! Here is the data:

[Chart: Nicolas Cage film releases plotted against swimming pool drownings, by year]

But this insight, if misread, might mean Nicolas Cage never gets an acting role again (and I don’t know what I would have done without Face/Off). You can find more ridiculous examples here.

This doesn’t mean that all correlation stats should be overlooked, but they require an in-depth look and context to understand why participants behave a certain way.
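To make that concrete, here’s a tiny sketch in Python (with invented numbers, not the real Cage/drownings figures) showing how strongly two series can correlate with no causal link between them:

```python
# Two invented yearly series: films released per year and pool drownings per year.
films = [1, 2, 2, 3, 1, 2, 4, 3]
drownings = [95, 100, 102, 110, 96, 101, 120, 109]

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson_r(films, drownings)
print(round(r, 2))  # a very strong correlation, yet no plausible mechanism
```

A coefficient this close to 1 looks impressive on a slide, but without a mechanism connecting the two variables it tells you nothing about cause.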

🍉 Segmenting to the point of no return

Let’s say a survey contains logic where you want to look for people who are considering buying an electric car in the next 12 months. You want to know what decisions are important to the individual ahead of this purchase. Here’s our scenario:

▪️ The survey gets 1,000 responses.
▪️ 15% (150 people) want to buy a car in the next 12 months.
▪️ Of those, 66.6% (100) were looking to buy a new car, while 33.3% (50) were looking to buy a used car.
▪️ Of the new car buyers, 20% (20 people) are considering an electric car.

If you then ask further questions about driveways, electric charging units and whether or not they use green electricity at home, you eventually reach a segment of one or two people, where the insights might lead you to conclude that 0% of electric car buyers with a driveway and 2+ children want a red car. Even if only one participant says they want a red car, that could be 50% (or even 100%) of an audience.
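The funnel above is easy to sketch as arithmetic (Python, using the numbers from the scenario), and it shows how quickly the base shrinks and how much weight a single response carries at the bottom:

```python
total = 1000                                # survey responses
car_buyers = round(total * 0.15)            # 150 want to buy a car
new_car_buyers = round(car_buyers * 2 / 3)  # 100 looking at new cars
electric = round(new_car_buyers * 0.20)     # 20 considering electric

# At n=20, a single respondent moves any percentage by 5 points.
swing_per_response = 100 / electric

# Filter by driveway and 2+ children and you might be down to 2 people,
# where a single answer is worth 50 percentage points.
niche_segment = 2
swing_in_niche = 100 / niche_segment
```

One response out of twenty is already a five-point swing; one response out of two is the difference between “nobody wants this” and “half our audience wants this”.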

This feels like an obvious element of data analysis, but it often gets missed when segmenting large numbers down to smaller ones. As user recruitment and research professionals, we are used to dealing with large participant numbers to get the insights we need, but once a single response can sway a percentage by more than 3%, any decision built on that segment becomes fragile.

The key thing to note here: do not make decisions based on quantitative research where the final segment is below 50 participants or too niche compared to your initial sample.

💭 Decision-based trends

In the social media world, Vine (similar to TikTok) was discontinued by Twitter in 2016. At the time, I worked in a marketing agency and we had just released a round of short videos and GIFs for a content campaign. You can see where this is going: trends like this can change very quickly. Social platforms like Facebook, Twitter and Instagram might seem like giant, immovable powerhouses, but that could change in the next 2-3 years.

When scoping the market, you may be asking which news sources or people individuals rely on for industry information, so be aware of the Milkshake Duck effect. This is where an individual gains popularity based on a wholesome moment on the internet, but is later revealed to have a distasteful history or behaviour. The term was popularised after this tweet:

[Embedded tweet that popularised the term “Milkshake Duck”]

The point is, aligning yourself with an individual or brand off the back of a survey or trend might be the wrong step for you to take due to the volatile times we live in.

The third trend to look out for involves current affairs. Over the course of the pandemic, we ran a series of COVID-19 surveys where we looked at how the nation was dealing with the pandemic and some of the new behaviours people were becoming familiar with. During the course of this research, the government announced some widespread changes that enforced closures of public spaces, made face masks compulsory in closed spaces and put bans on foreign travel, all of which impacted responses to the surveys.

When collecting results from surveys that look at current affairs such as recent events, politics and sport, it’s worth noting that things can change very quickly. When performing quantitative research, think about content that will still be relevant in a month’s time, as that gives you an opportunity to act on the results of your unmoderated research.

😈 Not considering ANY bias in a survey

There will never be a survey, unmoderated task or any other kind of research without bias; that’s a given.

The mistake people make in analysis is failing to discount results where obvious bias has occurred. This can include:

▪️ removing responses that do not match the original screening criteria;
▪️ removing responses that only select extreme options;
▪️ removing open-text answers that do not address the question asked.
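As a rough illustration, a cleaning pass like the one described can be expressed in a few lines of Python (the field names and the screening rule here are invented for the example):

```python
# Hypothetical responses: an 18+ screener, four rating questions, one open-text box.
responses = [
    {"age": 34, "ratings": [4, 2, 5, 3], "feedback": "Easy to navigate."},
    {"age": 16, "ratings": [3, 4, 2, 5], "feedback": "Liked it."},   # fails the screener
    {"age": 41, "ratings": [5, 5, 5, 5], "feedback": "Great."},      # only extreme answers
    {"age": 29, "ratings": [2, 3, 4, 2], "feedback": ""},            # empty open-text answer
]

def passes_screener(r):
    return r["age"] >= 18  # assumed screening criterion: adults only

def only_extremes(r):
    vals = set(r["ratings"])
    return len(vals) == 1 and vals.pop() in (1, 5)

def clean(responses):
    return [
        r for r in responses
        if passes_screener(r) and not only_extremes(r) and r["feedback"].strip()
    ]

cleaned = clean(responses)
print(len(cleaned))  # only the first response survives the pass
```

The rules themselves will vary from survey to survey; the point is to make them explicit before you start reading percentages.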

If you can spot types of bias or trends in responses, you’ll get more accurate results and better insights from quantitative research.

🌌 Internal research panels and bias – finding sceptics

Sending surveys to clients is a great way of getting customer feedback, testimonials and ideas to improve your service. But sending surveys only to clients is naïve.

Expand your audience and compare responses across clients, prospects, leads, suppliers and unknowns to understand your audience’s opinions of your business, and even whether or not people know what your business does.

This type of research extends your discovery and consideration phase marketing to the people your business wants to reach in the future, and shows you how to attract them through your design, message, location and timing.

To find out more about unmoderated research, talk to the PFR unmoderated research team about your next project. Here are the links to the other two blogs in this series, in case you’d like to go back and read them again.

PART 1: DESIGN & CONTENT ASSUMPTIONS
PART 2: DATA, SAMPLING ERRORS & MORE

Jason Stockwell, Digital Insight Lead

If you would like to find out more about our in-house participant recruitment service for user research or usability testing get in touch on 0117 921 0008 or info@peopleforresearch.co.uk.

At People for Research, we recruit participants for UX and usability testing and market research. We work with award winning UX agencies across the UK and partner up with a number of end clients who are leading the way with in-house user experience and insight.