14th January 2020
Validity in quantitative research is a measure of how accurately a study answers the questions and hypotheses it was commissioned to answer. For research to be deemed credible, and to leave no doubt about the integrity of the data, it is essential to achieve high validity.
Put simply, research isn't helpful at all if it doesn't answer the questions you intend it to. In fact, it's a waste of both time and budget if this is the case.
Of course, there are ways to avoid this and ensure your quantitative research gets the thumbs up from both the wider industry you operate in and the stakeholders commissioning or approving the project. Keep scrolling to read our advice.
Choosing the right research method is fundamental to securing valid results, as it sets the tone for the entire project. The method you select needs to accurately reflect the type, format and depth of data you need to capture in order to properly answer your questions.
As an example, if you are running research with participants who are less digitally savvy and aren't confident online, I would advise against incorporating complex question types, such as large grids, into your survey. Chances are the participant will reach this type of question and:
+ struggle and feel frustrated
+ input dud data just to get it over with
+ skip it entirely
None of these potential outcomes are ideal, and all severely affect the validity of the overall results.
It sounds obvious, but the question type and wording themselves truly steer the validity of quantitative research. As a rule, quantitative research is unmoderated, so if your questions are ambiguous or do not accurately reflect what you intend to ask, there is no opportunity to provide further explanation or for participants to ask questions.
Questions must be straightforward, free of jargon, and must mean the same thing to everyone who reads them. Getting people who are entirely removed from your research to test the survey is a great safeguard – this will also let you check that their responses do indeed answer or confirm the underlying hypothesis.
At PFR, as part of our remote unmoderated task service, we regularly offer our clients the chance to test their surveys or card sorts with a small number of participants before sending it to a large group of people.
Avoiding bias means approaching your quantitative research from an entirely objective and unassuming standpoint – which can be really challenging, since unintentional bias often creeps into quantitative studies. For example, asking a participant how frequently they bank online assumes that they do; they may in fact prefer banking in branch or over the telephone.
To avoid guiding participants, you should camouflage the true intent of your questions, particularly when asking about brand loyalty. This can be done by simply asking what experience they have had with multiple brands, or by asking about general purchasing habits. Again, if your questionnaire is designed in a way that encourages participants to respond in a certain manner, your results are more likely to be invalid.
Sampling focuses on whether the group taking part in your research is representative of your users, and whether you have enough responses to provide sound answers to your questions. Quantitative research is usually done at scale for good reason: otherwise you run the risk of narrow results that damage the overall validity of your study.
When asked about the biggest challenges faced in quantitative research, 37% of UX practitioners interviewed by the Nielsen Norman Group said that recruiting large samples of participants was the most difficult task of all.
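As a rough illustration of what "an adequate number of responses" can mean, the standard sample-size formula for estimating a proportion gives a useful ballpark. The sketch below is our own illustration (the function name and defaults are not from any specific tool) and assumes a large population and simple random sampling:

```python
import math

def required_sample_size(margin_of_error=0.05, z=1.96, proportion=0.5):
    """Minimum number of responses for a given margin of error.

    Uses the standard formula n = z^2 * p * (1 - p) / e^2, where z is the
    z-score for the desired confidence level (1.96 for 95%) and p = 0.5 is
    the most conservative assumption about the true proportion.
    Assumes a large (effectively infinite) population.
    """
    n = (z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    return math.ceil(n)

# A ±5% margin of error at 95% confidence needs 385 responses;
# tightening to ±3% pushes that to 1,068.
print(required_sample_size(0.05))  # 385
print(required_sample_size(0.03))  # 1068
```

Real studies often need more than this baseline – for instance, when you want to compare sub-groups of users, each sub-group needs an adequate sample of its own.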
At People for Research, we have clients who come to us with varying degrees of experience with quantitative studies, and even the most experienced benefit from our consultancy on securing valid data. We are trained by the Market Research Society in best practice and understand the importance of capturing actionable insights, so our full support is included in the service when you partner up with us.
If you have a quantitative project in mind or would simply like some consultancy on best practice advice, please do get in touch by emailing our Data Insights Analyst Vicky Karran – email@example.com.
Vicky Karran, Data Insights Analyst
If you would like to find out more about our in-house participant recruitment service for user research or usability testing, get in touch on 0117 921 0008 or firstname.lastname@example.org.
At People for Research, we recruit participants for UX and usability testing and market research. We work with award winning UX agencies across the UK and partner up with a number of end clients who are leading the way with in-house user experience and insight.