Gearing up to run a large quantitative research project? Although often overlooked, pilot studies are one of the quickest and most efficient ways to get the most out of unmoderated research and avoid wasting time and resources. In this blog we cover why you should run a pilot study for any new quantitative project and what you need to know to set one up successfully.
Let’s start at the beginning. Running a pilot study means testing your survey or task with a smaller audience that reflects your desired group of participants before sending it to hundreds or thousands of people. For example, you might get four or five external people to complete a survey before rolling it out to a wider community – note we said external: your colleagues, or anyone else too close to the action, are not the right audience for a pilot study.
Sure, internal testing does help, but only a pilot study can confirm that instructions and language are clear and that no internal terminology has crept into the research. Testing internally might save you a bit of money initially, but it could damage your whole project in the long term.
For remote, unmoderated quantitative testing with a large group of participants, running a pilot study with a smaller group will iron out issues with questionnaires/tasks and identify the ‘must fixes’, such as the user-friendliness of the research task, navigation issues and, most importantly, bias and assumptions that are likely to undermine the end result.
Spelling mistakes, missing options, missing links… These are mistakes that anyone can make when designing a survey, a card sort or a tree test. Not only does this affect the user’s experience and the results you will collect, but it also looks sloppy.
Take this example: the difference between “our children” and “your children” is a really small error that is unlikely to be picked up by a tool like Grammarly, but would definitely be noticed by your users. This small mistake slipped through the survey design and internal testing stages, and it took the pilot for someone to reach out and say “we don’t have children together”.
Assumptions shouldn’t be shrugged off, as they’re a big part of research and you can fall into dangerous areas by leading participants down accidental paths that just end up proving your own assumptions. A really simple (and somewhat extreme) example is asking a question such as “how much did you like the prototype?”, which will surely give you different answers when compared to “what did you think of the prototype?”.
Sounds obvious, but when you are too close to the action and the product, you tend to unknowingly guide the participants.
Another issue that pilot studies help you avoid is jargon or language unfamiliar to the participants, both in the questions and in the instructions. In our experience, it’s an easy mistake to make.
A pilot study will highlight this issue by showing you where participants took longer to complete specific sections, where their answers might have been skewed by the language used, or where the instructions were unclear. It’s also useful to include free-text boxes so that pilot participants can express their opinions and feelings.
If you have ever designed a complex survey or unmoderated task, you know conditional logic can become a nightmare when you have numerous paths available to the participants. This can cause major issues if not tested properly.
The solution is not to limit the conditional options or simplify the task, but to get at least a couple of people to test each conditional path and see where they end up. This is essential for finding broken logic paths or misdirected conditional questions – an issue that, in our experience, crops up more often than you would expect. A pilot will show you whether participants are hitting the right spots during the research task.
Hopefully, nothing too surprising. This means you either did a great job setting up your survey or task (give yourself a pat on the back) or… you are not paying enough attention to the results of the pilot. At this stage, it’s useful to ask a colleague – ideally someone who is not involved in the project – to look at the results and share their feedback with you.
When you spot a small issue in the pilot results, you may still be tempted to think it’s extremely unlikely a participant would notice it, or that no user would go down that ‘hidden’ conditional path. But as design researcher Doug Collins recently tweeted: “if it can be done by a user, someone will”.
The lesson is: don’t discard any data from the pilot study, however small it seems.
You may run the first pilot with colleagues, friends or family, but their views will inevitably bring bias into the mix; a proper pilot study will need to be done with people who represent your desired audience. For example, when setting up surveys, card sorts and tree tests for our clients, we always test the unmoderated task in-house and share our feedback, but also offer the option to test with a panel of three to six participants. Even if you are not working with a third-party agency like People for Research, you can easily use one of the many online platforms that offer user recruitment services.
Finding the right participants for quantitative research projects can be a massive challenge, so if you are planning to work with a supplier, choose a partner who understands your needs and requirements and can find exactly the types of participants you are after. Let them handle the headache of user recruitment.
Drop me an email (email@example.com) if you have any questions about pilot studies or unmoderated user research.
Jason Stockwell, Insights Marketing Manager
If you would like to find out more about our in-house participant recruitment service for user research or usability testing get in touch on 0117 921 0008 or firstname.lastname@example.org.
At People for Research, we recruit participants for UX and usability testing and market research. We work with award-winning UX agencies across the UK and partner with a number of end clients who are leading the way with in-house user experience and insight.