
Data validity is one of my favourite subjects (yes, I’m a nerd), so let’s go straight to the point of what survey data validity means and how to achieve it.

Validity is the extent to which your survey (both in its entirety and at the individual question level) actually answers its intended hypothesis, objectives or business questions. By way of an extreme example, you wouldn’t use a question about someone’s happiness to precisely gauge their likelihood of buying a new car – the link is tenuous and the results will, therefore, be invalid.

In other words, you want your survey results to be bulletproof and you want the data collected to either prove or disprove your beliefs or assumptions. Let’s jump in and make sure you get the best results from your next survey.

🎱 Identify business objectives

It might feel like we keep repeating ourselves… but we keep saying it because it’s true: clearly identify and plan your business objectives and, more specifically, your survey’s goals at the beginning of the process.

The objectives must come first so that the questions are relevant and built with the purpose of answering them.

🎱 Choose your questions carefully

This is such an important (and lengthy) point, I re-wrote it about seven times before publishing to make sure I was communicating it clearly. Choosing your questions could be a blog in itself, but let’s focus on these two steps for now.

👉  Which question types should you use?

Will a 1 to 10 scale clearly answer a yes/no question? Even if you’re still reading that and thinking “yes, yes it will”, think of the poor data analyst who has to comb through and organise this data to help you make sense of the results.

The question type is crucial when it comes to the validity of your survey data. A topic could be rendered useless if the question type is not suited to your goals, as the right insights won’t be captured in an appropriate format.

👉  Capturing opinions vs. facts

Another differentiation is whether you are using subjective or objective questions and answer choices. For example, are you using scales (e.g. how often do you do this, from quite often to not a lot) when a more straightforward option (e.g. how many times a week do you do this) may be better suited?

When using scales, the notion of regularity or preference varies from person to person. So, unless you want to gauge how people feel about something subjectively, using clear measurables is the way to go.

Here are two examples of how you can go from scales to more defined options.

+ How often do you eat five portions of fruit and vegetables a day?
Very often/Quite often/Sometimes/Rarely/Never

This is extremely subjective, as you don’t know what each person considers to be quite often or sometimes. Change the answers to produce quantifiable data: rather than a scale, ask people to choose from 6-7 days a week / 4-5 days a week / 1-3 days a week / Never.

If you want to be even more specific and add another layer to it, add a timeframe to the question and ask people how often they ate fruit and vegetables during the previous week. However, anchoring the question to one specific week could skew the data if that week was atypical, so we would simply add ‘in an average week’ to the question.
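To make the difference concrete, here is a minimal sketch of how the quantifiable version can be analysed directly, while the subjective scale leaves nothing to calculate (the responses and bucket midpoints below are made up for illustration):

```python
# Subjective scale: no defensible way to turn these labels into numbers.
subjective_options = ["Very often", "Quite often", "Sometimes", "Rarely", "Never"]

# Quantifiable buckets: each option maps to an approximate days-per-week
# midpoint, so an analyst can compute averages and comparisons directly.
bucket_midpoints = {
    "6-7 days a week": 6.5,
    "4-5 days a week": 4.5,
    "1-3 days a week": 2.0,
    "Never": 0.0,
}

# Illustrative responses from four participants.
responses = ["4-5 days a week", "Never", "6-7 days a week", "1-3 days a week"]
average_days = sum(bucket_midpoints[r] for r in responses) / len(responses)
print(f"Average: {average_days:.2f} days a week")  # Average: 3.25 days a week
```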

+ If buying a new car, how important is the colour?
Very/Quite/Neutral/Not a lot/Not at all

Analysing the data from this question is possible, but there’s a better way to ask it. If you are trying to assess buyers’ priorities, we would advise turning this question into a preference grid and asking participants to rate the importance of different factors against each other.

Or, if you don’t need all of this information, simply change it to a yes or no question: does colour matter to you when choosing a car?

🎱 Use content that’s relatable

If people do not understand a question, the results are void and hold no meaning. For example, did you know that, in the UK, the average reading age is 9 years old? This means that your average participant processes content at the same level as someone currently in year 4 or 5.

Here are some dos and don’ts:

👉  Try to avoid USOW. Oh, you haven’t heard of that? It means “unnecessary shortening of words”. We all use abbreviations that are familiar to us (and the ones around us) and save us time every day, but remember that a lot of your users won’t be familiar with your industry’s or product’s acronyms.

👉  The same goes for internal language and jargon. Avoid using this technical wording in a survey!

👉  In general, choose straightforward words, go for simple structures when building sentences, and provide user-friendly instructions or guidance (even if you think something is obvious to you or your colleagues). A quick automated readability check is sketched below.
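If you want an automated sanity check on top of human judgement, readability formulas can flag copy that is pitched too high. Here’s a sketch using the third-party textstat package (the example question is deliberately jargon-heavy):

```python
# Readability check for survey copy using the third-party `textstat`
# package (pip install textstat).
import textstat

question = (
    "Please rate the extent to which our omnichannel provisioning "
    "aligns with your anticipated utilisation patterns."
)

# Flesch-Kincaid estimates the US school grade needed to read the text;
# a UK reading age of around 9 corresponds roughly to grade 4-5.
grade = textstat.flesch_kincaid_grade(question)
if grade > 5:
    print(f"Grade level {grade:.1f} - consider simplifying this question.")
else:
    print(f"Grade level {grade:.1f} - readable.")
```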

🎱 Be objective

We’ve moved on from having objectives to being objective. Here’s an obvious (and slightly over-the-top) example of what not to do.

+ How amazing is the Sugababes’ classic song “Overload”?
Probably the best song of all time | It’s definitely in my top 50 songs | I only listen to it once a month

Survey 101: do not lead your users! Let’s try again with something more subtle.

+ How often do you listen to music?
Often | Sometimes | Never

Still not good enough in my book.

+ How often do you listen to music?
Daily | Weekly | Monthly | Never

Or even better.

+ When did you last listen to music?
Today | Yesterday | This week | In the last month

Questions about behaviour can be effective and produce unbiased, objective data as long as they are properly designed: ask about the last time participants did something, or about a typical occasion when they do it, especially if you are targeting sensitive topics like gambling, health or religion.

Inaccurate and biased data is the result of non-specific and/or hypothetical questions. Making questions quantifiable and objective is important to get the right results.

🎱 Maintain consistency

This is imperative, whether we are talking about content across one survey or a diary study / survey series. Use the same language and terminology throughout the research to avoid confusion.

Take the word ‘diary’, for example. If users are asked to keep a diary throughout the study and that language switches to ‘journal’ in some places, participants might think this is a separate piece of work they have to complete.

When reading through content and guidance, we don’t want it to feel repetitive; but, at its core, consistency should be the priority to minimise confusion.
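A simple script can catch terminology drift before a survey goes out. This is a rough sketch (the synonym groups and survey copy are illustrative) that flags copy mixing terms which should be standardised:

```python
# Rough consistency scan: flag survey copy that mixes terms which
# should be standardised, e.g. "diary" vs "journal".
synonym_groups = [
    {"diary", "journal"},
    {"survey", "questionnaire"},
]

survey_copy = [
    "Please complete your diary entry each evening.",
    "Remember to upload your journal before Friday.",
]

text = " ".join(survey_copy).lower()
for group in synonym_groups:
    used = {term for term in group if term in text}
    if len(used) > 1:
        print(f"Mixed terminology: {sorted(used)} - pick one and stick to it.")
```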

🎱 The importance of pilot studies

We have recently written a separate article on the importance of pilot studies. Pilot studies allow you to test your survey with a smaller group of participants before running it at full scale. Here are the highlights:

+ Pilot studies support survey data validity by helping you spot and eliminate misleading questions.

+ If the answers from your pilot study seem skewed, you can do further research with this small batch of participants and try to understand where it went wrong (see the sketch below). This ensures the content in the final version of the quantitative research is clear.
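One practical way to use pilot data is to flag questions where a single answer dominates, which can hint at a leading or unclear question (or one that tells you nothing new). A sketch with made-up pilot data and an arbitrary 80% threshold:

```python
from collections import Counter

# Made-up pilot responses for one question.
pilot_answers = {
    "How often do you listen to music?":
        ["Daily"] * 18 + ["Weekly"] + ["Never"],
}

SKEW_THRESHOLD = 0.8  # arbitrary: review anything above 80% on one option

for question, answers in pilot_answers.items():
    option, count = Counter(answers).most_common(1)[0]
    share = count / len(answers)
    if share >= SKEW_THRESHOLD:
        print(f"Review '{question}': {share:.0%} chose '{option}'.")
```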

🎱 Find the right timing

The timing of a survey plays a big part in the results when looking at reliability and validity. Let’s use another example: if you run a survey right after an extremely controversial or evocative change, this will inevitably affect results.

We noticed this recently, having run a series of COVID-19 surveys over the past nine months. There are a couple of elements within the pandemic specifically that I want to focus on here:

👉  People haven’t adapted to this being their “usual routine”

What I mean by this is: if you are asking about habits, that comes with the caveat that a lot of people are not leaving the house as often, or are experiencing feelings of anxiety and depression for the first time. These are things worth considering during quantitative research.

👉  Changes in the world change participants’ answers

During our COVID-19 study, I remember asking a question along the lines of “would you welcome a stricter lockdown?”. We were interested in finding out what the public expected and how they wanted the government to deal with the pandemic. Lo and behold, the government enforced a stricter lockdown days later, meaning half the respondents didn’t know we were heading this way, while the other half wondered how much stricter the lockdown could get.

🎱 Project scope

Documenting your initial steps will help you get the right data out of your survey; that’s where your initial project scope comes in.

Scoping for survey data validity starts with answering a few core questions:

+ Who do we want to answer the survey?
+ How many responses do we need?
+ What do we want to find out?

By exploring these questions around the profiles of target audiences, we can ensure the data we get matches the effort we put in. For the second question, a quick sample size calculation is sketched below.
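For ‘how many responses do we need?’, the standard margin-of-error formula is a useful starting point. This is a sketch assuming simple random sampling and the worst-case split (p = 0.5); real studies may also need to adjust for population size and expected response rates:

```python
import math

def sample_size(z: float = 1.96, margin_of_error: float = 0.05,
                p: float = 0.5) -> int:
    """n = z^2 * p * (1 - p) / e^2, rounded up."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# 95% confidence (z = 1.96) with a +/-5% margin of error:
print(sample_size())  # 385
```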

Poor data validity is one of the biggest problems with surveys, and it’s something we need to work to eliminate. If you want to talk to People for Research about your next quantitative research project, get in touch.

Jason Stockwell, Digital Insight Lead

If you would like to find out more about our in-house participant recruitment service for user research or usability testing, get in touch on 0117 921 0008 or info@peopleforresearch.co.uk.

At People for Research, we recruit participants for UX and usability testing and market research. We work with award-winning UX agencies across the UK and partner with a number of end clients who are leading the way with in-house user experience and insight.