11th November 2021
It’s an undeniable fact: behavioural questions are critical to surveys. We’d go as far as to say they are the most important questions in surveys – these are the questions that get into the audience’s real-life experiences and the participants’ decision-making processes. Any good survey should include behavioural questions, and while we usually equate behavioural ‘free text’ responses with quality information, this is also the most challenging type of data to work with at the analysis stage.
Classifying and analysing free text responses can be a huge challenge when running quantitative research, but good surveys can’t (usually) be exclusively made up of checklists and radio buttons. Depending on the topic, this can be extremely restrictive for the participants, not allowing the researcher to capture all the necessary data or even just the juicy details that give colour to black-and-white sets of data.
Behavioural questions are, indeed, the cornerstone of any important research piece. Instead of asking people the ‘what’, they focus on the ‘why’. These questions are invaluable because, done well, they (should) provide clear answers without ambiguity.
Despite the wealth of information they add to quantitative research, there is a glaring problem with behavioural questions. The opportunity for variety in language and how people interpret things often leads to inconsistency, which can cause misinterpretation and confusion.
We do surveys to get the truth, and if participants rush through the task, misread information, don’t understand questions and give inaccurate answers, then the results are less likely to be reliable.
That’s not to say there won’t be quality responses in a survey; we just want to separate the wheat from the chaff with the right question types and language.
There are standard questions used in surveys: when (time-based), what/how (action-based), why (purpose-based) and who (people-based). These types of questions aren’t always asked in the same manner or order, and the language used in the questions can produce different results.
When looking at timings and frequency, a participant’s ‘last’ or ‘most common’ experiences can be very different things. If we were to look at exercise, the question ‘when did you last go for a run?’ might lead to a different answer compared with ‘how often do you run in a typical week?’.
That doesn’t mean either is bad; they’re both going to generate perfectly acceptable answers. It might mean we get different responses based on participants who consider themselves serious runners vs. casual joggers. When looking at a timeframe – for example, ‘how many times in the last 6 months have you gone for a run?’ – you may get a different answer again. The participant’s context will heavily dictate how they answer different types of questions.
In this case, the final example might be the right question to gauge how much of a runner a participant is. The recency, frequency and action required – as well as the purpose of the research – will shape how the questions are asked. Questions that focus on a ‘when’ also prompt participants to remember specific experiences.
This is the core difference between asking about typical and specific behaviour. Typical behaviour is more ambiguous and open to interpretation, whereas specific behaviour is more concrete. Both question types have advantages and disadvantages in research; regardless, one important consideration is to ensure consistency in the question types and language used. This is especially important when moving beyond timings into the actual experiences and actions participants went through to get to that point.
Some behaviours are hard to remember, and humans are widely known to be unreliable witnesses – not only because time degrades our memories of events, but also because our biases affect how we perceive things.
Every time a story is retold, words are missed, facts are glossed over and details are overlooked. That is a natural part of human storytelling. We tend to leave out information that isn’t interesting to others and dwell on the things we feel are important, and this all affects our memory and recollection of events.
Another element that changes memory is the telescoping effect. Have you ever heard someone say “wow, that feels like it happened just yesterday” when really it was years ago? How about “I can’t believe it was only a month ago” about a wedding or celebration? That is the telescoping effect.
Telescoping refers to inaccurate perceptions of time that cause participants to misplace when events happened. The idea is that time seems to shrink towards the present, in the way that distant objects appear closer through a telescope.
When asked to place past events in time, people have a systematic tendency to recall distant events as more recent than they actually were – and sometimes recent events as more remote. An example would be 9/11: people are always surprised to hear how many years have passed since the tragedy. Conversely, going back to the first few months of COVID-19, it might feel like a lot more time has passed than actually has. Already it feels like pubs have been back open for ages, when realistically it has only been a few months.
These effects cause participants to misjudge time and the frequency of events. To counter them, anchor questions to concrete events and facts – for example, ‘when did you receive this information?’ or ‘when did you start your latest job?’.
If we go back to the original point of this blog – whether behavioural questions are reliable – the further behaviours get drilled down into, the less truthful and reliable the answers become. If you’ve ever watched a police interrogation on TV, the more niche the questions get, the more likely they are to catch the interviewee out in a lie.
I’m not saying surveys are as rock-and-roll as a police interrogation, but by going from broad questions to specific categories, followed by brand recall, then future behaviour, reliability in surveys decreases significantly.
So, when should you use each of these:
– When did you last watch a movie at the cinema?
This question format usually gets more accurate responses.
– How often do you watch movies at the cinema?
This question format is more subjective, but informs the researcher on potential behavioural patterns.
There isn’t one right way to ask your participants behavioural questions, but there is one rule we advise you to follow: keep it consistent. If you pick a question format, try to stick with it throughout the quantitative task.
Certain behaviours are also less likely to get honest and reliable answers due to social desirability bias: drinking is often under-reported and exercise over-reported by participants. Even I’m guilty of this.
Depending on what topics you’re asking about, participants will react differently, but using appropriate timeframes allows you to qualify participants’ behaviours in a more meaningful way. Here is what the Market Research Society (MRS) recommends for each set of behavioural questions.
▪️ Daily / Weekly
Things that are very common, like media consumption, everyday items and relationships with close friends and family.
▪️ Weekly / Monthly
Less frequent consumer behaviours, like food shopping, or hobbies like exercise or watching a movie.
▪️ Six-monthly / Yearly
Banking interactions, insurance decisions, household energy suppliers – behaviours and decisions that typically happen every six months to a year.
Ambiguous answers inevitably lead to guessing. If a participant cannot recall a specific situation – for example, if choosing ‘I’m unsure’ in a survey leads the participant into more questions about the same topic they are unsure about – it’s not always useful to drill down further. Sometimes it’s best to exit that line of questioning so as not to skew the remaining answers.
How do you feel about using behaviour questions in surveys? We have all probably used them before, and we’re surely going to keep using them! Hopefully, we can think about the best ways to ask them in every situation to get the best results for every online task.
Jason Stockwell, Digital Insight Lead
If you would like to find out more about our in-house participant recruitment service for user research or usability testing get in touch on 0117 921 0008 or firstname.lastname@example.org.
At People for Research, we recruit participants for UX and usability testing and market research. We work with award-winning UX agencies across the UK and partner up with many end clients who are leading the way with in-house user experience and insight.