Unmoderated usability testing is the perfect tool for fast and efficient testing. Let’s take a closer look at how it works and how to find participants.
Have you ever taken a class to learn a new skill? It might be a pottery class, a cooking class, or even a martial arts class. In class, you can ask questions at any time – and your pottery wheel never fails you, because the instructor shows you the ropes.
Now, imagine you go home and purchase your own pottery wheel. You have no instructions, and no one to help you when it inevitably goes wrong. The first time you use the pottery wheel, it’s likely that your clay will end up all over the floor.
This is what unmoderated usability testing is like. But don't leave yet – it's not as disastrous as it sounds. In fact, there are plenty of benefits to having your test subjects fail.
Let's take a look at unmoderated user testing, what it means, and how it complements your other qualitative testing methods.
User testing, or usability testing, is our way of understanding how users interact with our designs. We can watch them work, listen to their thoughts as they work, and probe them for more information.
A lot of the time, we use moderated user testing. We're physically or remotely 'in the room', asking questions, giving feedback, and guiding the test subject through the design.
Unmoderated usability testing, on the other hand, is a totally hands-free process. Users are sent a link to your design, and they're free to explore it in their own time, in their own way. They might have a list of pre-written prompts or questions to answer, but they're otherwise left to their own devices.
There are plenty of reasons to test users without moderation, just as there are many reasons for the opposite. Let’s take a look.
Moderated user testing can be an enormous burden on both your money and time. With all the preparations that need to be made – recruiting participants, arranging a meeting space, having the necessary tools and materials ready – it can take weeks or even months to get started.
Unmoderated usability testing, on the other hand, can be conducted very quickly and with considerably less effort. According to the Nielsen Norman Group, unmoderated testing can cost anywhere from 20 to 40 percent less than moderated testing and saves around 20 hours.
This is largely because there is no need to coordinate schedules or find a common meeting place. Participants can complete tasks at their own convenience and on their own devices, with results being automatically compiled for you.
One of the main ways that moderated and unmoderated testing differ is the level of control. Moderated tests give you the ability to guide participants and probe them for more information; however, that same control can introduce bias and limit the scope of your results.
Unmoderated tests, on the other hand, are often less biased due to the low level of tester influence. What this means is that the data collected is more likely to be consistent and reflect the user’s actual experience.
This is especially important when you’re testing a new or updated version of your product, as moderated tests may introduce bias based on the moderator’s opinions. Swaying the outcome of usability tests, whether consciously or subconsciously, can lead to inaccurate data and misguided design decisions.
User testing comes with a number of challenges, and one of the biggest is scale. The time and effort it takes to recruit and test users in a controlled setting means that only a limited number of tests can be run. Not only that, but more users means more paid facilitators – which costs money that might not be available.
This is where unmoderated testing shines. By using a platform that can automate the process of recruiting and testing users, large numbers can be tested relatively easily and cheaply. Unmoderated testing is set up perfectly for remote testing, which makes it possible to test users from all over the globe.
If your budget for both time and money is limited, you may not have the resources to conduct traditional user testing in all of the relevant locations. This can become a problem when you are testing a product or service targeted at a global audience; without the ability to test users in all relevant geographies, you may be making decisions based on inaccurate data.
Unmoderated usability testing can help you overcome this hurdle by allowing you to test users from all over the world. Remote testing has its own complications, of course, but it certainly broadens your reach and brings you closer to a truly global perspective.
In a moderated user test, the moderator can steer the conversation and probe for more complex behaviors. With unmoderated user testing, you are relying on participants to discover these complexities on their own. This can be a disadvantage if your product is complex or if you are targeting a sophisticated audience.
Unmoderated user testing can be an incredibly effective tool – but only if you know how to use it correctly. Here are a few tips to get you started.
When it comes to user testing, it's rare that a company will choose to go it alone. It takes a lot of time, money, and expertise to create a user testing portal, design tests, recruit participants, and analyze results.
For that reason, companies turn to tools for help – online tools designed specifically for building tests, recruiting participants, and analyzing results.
The tool you choose will, of course, come down to the specific needs of your company; however, it will also depend upon the type of data you are trying to collect. Qualitative data relates to the feelings and reactions of users, while quantitative data looks at numerical values, such as how many people completed a task or how long it took them to do so.
The Nielsen Norman Group has a useful Venn diagram showing which tools are best suited to collecting each type of data.
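To make that split concrete, here's a minimal sketch (in TypeScript, with entirely hypothetical field names, not tied to any particular testing tool) of how a single participant's result for one task might keep quantitative measurements and qualitative feedback side by side:

```typescript
// Hypothetical shape of one participant's result for one task.
interface TaskResult {
  participantId: string;
  taskId: string;

  // Quantitative data: numerical values you can aggregate across participants.
  completed: boolean;        // did the participant finish the task?
  timeOnTaskSeconds: number; // how long it took them
  misclicks: number;         // clicks outside the expected flow

  // Qualitative data: feelings, reactions, and open-ended comments.
  verbalComments: string[];  // think-aloud or written remarks
  satisfactionNote?: string; // optional free-text impression
}

// Quantitative questions ("how many people completed the task?") aggregate easily:
function completionRate(results: TaskResult[]): number {
  if (results.length === 0) return 0;
  return results.filter((r) => r.completed).length / results.length;
}
```

The quantitative fields are the ones you can aggregate across dozens or hundreds of unmoderated sessions; the qualitative fields are the ones you'll read and code by hand.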
Whichever tool you use, pairing it with our services at Respondent.io means you'll be accessing the very best panel of participants for your tests.
Plus, our API integration with unmoderated research tools like Hubble means that customers of these tools can access Respondent's panel of 3 million research participants directly from their platform.
One of the biggest dangers of unmoderated user testing is that participants can misinterpret the task they’ve been given, or simply not understand it. This can lead to inaccurate feedback and a skewed test result.
To avoid this, it's important to set clear expectations before the test begins: what participants are expected to do, roughly how long the test should take, and what kind of feedback you're looking for.
It's also crucial to give participants an overview of your company and the product you're testing. Context is everything – it will help them frame their feedback and provide more meaningful insights.
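One way to keep those expectations consistent across every participant is to write each task as a small, structured brief. Here's a minimal sketch, assuming a hypothetical structure (the field names are illustrative, not a feature of any specific platform):

```typescript
// Hypothetical structure for an unmoderated task brief.
interface TaskBrief {
  title: string;            // short name shown to the participant
  scenario: string;         // the situation they should imagine themselves in
  instruction: string;      // what they're asked to do, in plain language
  successCriteria: string;  // how your team will judge completion
  estimatedMinutes: number; // sets expectations about the time commitment
}

const giftCardTask: TaskBrief = {
  title: "Buy a gift card",
  scenario: "You want to send a $25 digital gift card to a friend for their birthday.",
  instruction: "Starting from the home page, purchase and send a digital gift card.",
  successCriteria: "Participant reaches the order confirmation screen.",
  estimatedMinutes: 5,
};
```

Whatever format you use, the point is that every participant sees the same scenario, the same instruction, and the same time expectation.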
The period immediately after participants complete their tasks is a critical opportunity to gather feedback. Make sure someone is ready to take notes and capture reactions as soon as users finish, and go in-depth with probing questions.
It's also important to give feedback in return. Don't just critique participants – offer suggestions and solutions, too. This will help keep the conversation flowing and make the process more beneficial for everyone involved.
The feedback you collect might include first impressions, points of confusion, moments of frustration, and suggestions for improvement.
Have a thorough documentation process in place for storing and sorting this feedback. Categorize feedback by task, user type, feature, or any other criteria that will be helpful for future product iterations, and make sure to track which changes were made as a result of user feedback.
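As a rough illustration of that kind of documentation, here's a hedged sketch of how feedback items might be tagged and then grouped by any of those categories. The schema is an assumption for illustration, not a prescribed format:

```typescript
// Hypothetical feedback record using the categories mentioned above.
interface FeedbackItem {
  id: string;
  taskId: string;
  userType: "new user" | "returning user";
  feature: string;          // e.g. "checkout", "search"
  comment: string;
  resultingChange?: string; // track which change (if any) this feedback led to
}

// Group feedback by any field, e.g. by feature, task, or user type.
function groupBy<K extends keyof FeedbackItem>(
  items: FeedbackItem[],
  key: K
): Map<FeedbackItem[K], FeedbackItem[]> {
  const groups = new Map<FeedbackItem[K], FeedbackItem[]>();
  for (const item of items) {
    const bucket = groups.get(item[key]) ?? [];
    bucket.push(item);
    groups.set(item[key], bucket);
  }
  return groups;
}
```

For example, `groupBy(items, "feature")` returns a map from feature name to its feedback, ready for the prioritization step that comes next.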
After you've collected feedback and sorted it, you've reached the crux of the process – using the data to improve your product. The first step is to identify and prioritize the issues that need to be fixed.
If you have a lot of feedback, start by grouping it together by theme. Then, figure out which are the most urgent and important issues to fix. Different metrics can help you decide – for example, how many participants ran into the issue, how severely it affected task completion, and how much effort a fix would require.
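If it helps to make that prioritization concrete, here's a minimal scoring sketch under the assumption that you weigh how many participants hit an issue against how severe it was. The 1–3 severity scale and the scoring formula are illustrative choices, not a standard:

```typescript
// Hypothetical issue record built from grouped feedback.
interface Issue {
  theme: string;
  affectedParticipants: number; // how many testers hit this issue
  totalParticipants: number;    // how many testers saw the relevant task
  severity: 1 | 2 | 3;          // 1 = cosmetic, 3 = blocks task completion
}

// A simple priority score: issues that block more people rank higher.
function priorityScore(issue: Issue): number {
  const frequency = issue.affectedParticipants / issue.totalParticipants;
  return frequency * issue.severity;
}

// Sort the backlog so the most urgent issues come first.
function prioritize(issues: Issue[]): Issue[] {
  return [...issues].sort((a, b) => priorityScore(b) - priorityScore(a));
}
```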
Once you've identified the key issues, you can start creating a plan for fixing them. This might include designing and running more tests to gather more data, or working with your development team to implement specific changes.
No matter what, don't forget to follow up with your users after you've made changes. This will help you determine whether the fixes were successful and continue to improve your product.
The final step in your unmoderated user testing journey is to monitor the outcomes. This means setting up some sort of tracking system to see how users interact with your site or app once you've made changes. You can use a variety of software programs and tools for this, including Google Analytics, Hotjar, and Mixpanel.
Make sure you track both quantitative (how many people are using a certain feature, how long they stay on the site, etc.) and qualitative (what people are saying about your site or app) data. This will give you a well-rounded view of how your users are interacting with your product.
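As one hedged example of the quantitative side, this is roughly what tracking a redesigned flow with Google Analytics 4's `gtag` event call might look like. The event and parameter names here are assumptions for illustration, not a recommended taxonomy:

```typescript
// gtag is provided globally by the Google Analytics snippet on the page;
// it's declared here only so the example is self-contained.
declare function gtag(
  command: "event",
  eventName: string,
  params?: Record<string, string | number | boolean>
): void;

// Quantitative: fired when a user completes the flow you redesigned after testing.
function trackCheckoutCompleted(durationSeconds: number): void {
  gtag("event", "checkout_completed", {
    flow_version: "post_redesign", // illustrative parameter, not a GA convention
    duration_seconds: durationSeconds,
  });
}

// Qualitative signals can ride along as counts, e.g. submissions from an
// on-page feedback widget (store the free text itself in your own backend).
function trackFeedbackSubmitted(sentiment: "positive" | "negative"): void {
  gtag("event", "feedback_submitted", { sentiment });
}
```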
Compare the data you collect along the way with information from the user testing sessions you conducted earlier. You should be able to see which changes had a positive or negative impact on the user experience. Use this data to continue making improvements to your site or app.
Remember that unmoderated usability testing is just one tool in your arsenal. It's important to use it in conjunction with other methods, such as moderated user testing and user surveys.
As with any user testing tool, there are many ways to get the process wrong when using unmoderated user testing. And, since you are investing time and energy into getting it right, the last thing you want is to make avoidable mistakes – so let's take a look at some of the potential pitfalls:
When tests are moderated, there's the added pressure of a live person on the other side of the screen watching and judging you – in some cases, a facilitator will even be in the room, leading the participant through the test. That pressure tends to make users take the test more seriously.
In unmoderated user testing, there's no such pressure. Users may be less likely to take the test seriously if they know that there's nobody watching them and that their results will just be collected and analyzed later on.
This could lead to inaccurate results – so it's important to make sure that your test instructions are clear and that you're only recruiting participants who are serious about taking the test.
At Respondent.io, we erase this risk altogether by sourcing participants who are already interested in participating in your survey or study.
It's true that unmoderated user testing can open users up to exploring a design more deeply than they would in a moderated test. They have the reins, and they are free to take their time going through the test and exploring all of the different design options.
However, this can also be a disadvantage: there's no guarantee that users will explore the design fully. Some participants simply won't dig into the design as much as you'd like them to, and without a moderator, you can't prod them in the right direction.
Just because someone takes a user test doesn't mean they're going to provide quality feedback. In fact, it's quite possible that some users will simply not be bothered to take the time to give you useful information.
The outcome is wasted time and money, all because of inaccurate results that could have been avoided with better participant selection.
Finally, there's the risk that the users you test with might not be representative of your target audience. This usually only happens if you've sent out tests to random people or if you've recruited participants through online ads.
Random people who take your test might not be users of your product, which could mean that their feedback isn't useful. And, if you recruit participants through online ads, there's always the risk that they won't be representative of your target audience – after all, not everyone who sees your ad will want to take the test.
These are fairly common pitfalls when using unmoderated usability testing, but with a bit of caution and forethought, you can avoid them and get the most out of your user tests.
Ready to make the most of all the advantages unmoderated usability testing offers? Avoid those pitfalls, and you'll be well on your way to a successful, valuable test.
Often, user testing goes wrong when testers are asked to find things that aren't necessarily there. This can be avoided by writing comprehensive questions that cover all corners of your design. By doing so, you're ensuring that testers won't miss anything, and you'll get more useful feedback as a result.
Avoid being too controlling and prescriptive in your questions, as this may prevent testers from looking at all aspects of the design. Instead, ask testers to explore and provide feedback on specific areas or features that you're interested in.
While unmoderated usability testing isn't as prone to bias as other forms of user testing, it's still important to be aware of your personal biases and blind spots. These can have a significant impact on how you interpret feedback, so it's important to remain as objective as possible when reviewing results.
Common biases to be aware of include confirmation bias (favoring feedback that supports your existing assumptions), recency bias (overweighting the most recent sessions you reviewed), and selection bias in who you recruit.
If you're not sure how to test without bias, consider involving external stakeholders in the process. By getting feedback from people who aren't as close to the project, you'll be able to get a more objective perspective on how it's performing.
Sometimes, it isn't sufficient to rely on user comments and answers alone. In order to get the most accurate understanding of how users are interacting with your design, you need to be able to see it for yourself. This is where screen recording tools come in handy.
By using a tool that records user interactions, you'll be able to see exactly what they're doing on your site or app. This can help you to identify areas that need improvement, and it can also help to confirm or disprove your hypotheses about how users are using your design.
Some tools will even create a heatmap so that you can see which parts of the design are being used most frequently.
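Under the hood, a click heatmap is essentially a 2D histogram of interaction coordinates. Here's a minimal sketch of that idea – not any vendor's actual implementation – that bins recorded clicks into a coarse grid so frequently clicked regions stand out:

```typescript
// Bin recorded click coordinates into a coarse grid so that frequently
// clicked regions of the page stand out.
interface Click {
  x: number; // pixels from the left edge of the page
  y: number; // pixels from the top of the page
}

function buildHeatmap(
  clicks: Click[],
  pageWidth: number,
  pageHeight: number,
  gridSize = 20 // number of cells along each axis
): number[][] {
  const grid: number[][] = Array.from({ length: gridSize }, () =>
    new Array<number>(gridSize).fill(0)
  );
  for (const { x, y } of clicks) {
    const col = Math.min(gridSize - 1, Math.floor((x / pageWidth) * gridSize));
    const row = Math.min(gridSize - 1, Math.floor((y / pageHeight) * gridSize));
    grid[row][col] += 1; // higher counts = "hotter" regions of the design
  }
  return grid;
}
```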
Most of all, remember that unmoderated user testing works best in conjunction with other research methods.
Why not use unmoderated testing for global results, and then use moderated sessions for more in-depth exploration? Or use unmoderated testing to get a broad understanding of how people interact with your design, and then use interviews or focus groups to probe further into user motivations and preferences?
The possibilities are endless, so experiment and find the methods that work best for you. With a well-run unmoderated usability testing session, you'll get the insights you need to make your designs even better.
If you’re ready to take on some unmoderated usability testing, but you aren’t sure where to begin sourcing participants, we can help. Make the most of our carefully vetted and compensated participants by signing up for free here.