Launching UX tests is now 3,000%* easier (*OK, the exact number is unknowable. But it's definitely better)
If you’ve logged into your WhatUsersDo account today, you might have noticed that the page behind the big red Start a Test button looks a little bit different (don’t feel left out if you don’t have an account—you can get a free one now and join in the fun).
That’s because we’ve been working hard to make it easier and faster for you to write and launch tests that get you useful results.
Why? Well, most of us aren’t running UX tests just because we really love writing test designs (if you do, then more power to you. You’ll get a kick out of this post). Most of us want the insights that a great test design brings.
So, we listened to your feedback and streamlined the whole process to enable you to spend less time launching tests, and more time luxuriating in the sublime glow of unadulterated customer insights.
Here’s how it works:
Step 1: Choosing your users
In our previous process, we started you off writing tasks before you chose who you wanted to test with.
This wasn’t necessarily bad in itself, but we found that when starting a test, our customers generally have a clear idea of who their audience is from the get-go—before they get into the nitty-gritty of which tasks they’d like those users to do.
So because we believe so strongly in putting users first, we did just that 😉
As before, you can select the device you wish to test on (desktop, smartphone or tablet) at this stage. You also still have access to your user demographics (country, gender, age range and socio-economic group).
Step 2: Writing your test
The biggest and most powerful update lies within this innocuous-looking tab. We’ve made some improvements to the test-writing process to give you more control over your task designs, and ensure you get better results at the end of it.
Let’s break it down:
Our task templates have been completely reviewed and overhauled to better reflect the types of tests you guys run on a regular basis.
Not only have we edited down the sheer volume of templates we had before (many of which were rarely, if ever, used), but we’ve also restructured and rewritten them to be more effective at getting you the insights you’re looking for.
Wait, what? The tasks are now little boxes instead of one big box?
Yes indeed. You can now duplicate, remove and reorder your tasks to customise your tests in a modular fashion, giving you more control and better organisation than the previous Über Box of Monolithic Templatery™.
You can, as ever, also start from a blank canvas and build your own test from scratch.
Not only have we broken out each task into its own boxy little unit, we’ve also categorised them by type:
- Task – this is a standard usability task, e.g., “Find a TV that you would genuinely consider buying.”
- Set a Scenario – use this to set a scene as a poet would an epic ballad, e.g., “Imagine that you are interested in buying a new TV, and you’ve come across the following website.”
- Link to Visit – this can be used to add a URL to your test. If you’re running competitor benchmarking, for example, you may want your users to visit a different site at some point during the test.
- Verbal Response – want a spoken opinion from your users about something? Use one of these.
- Special Requirements – oooh, such an enigmatic air about this one. But it could be used for something as banal as asking your users to use Firefox for your test, or to remind them to speak in French while completing it.
Why? First, here’s some insider information—we’re doing this to pave the way for some more updates in our next release, so you can look forward to some additional analytics features based around this new system.
Second: we’ve designed the system to give you enough task types to choose from without overwhelming your users with a 108-step, Neverending Story of a test.
Because while the Neverending Story did in fact end (disappointingly, after just 1h 47m), a feature-length test is neither good for our testers (fatigue from doing too much stuff!) nor for your videos (no time to actually watch them!).
This should give you better results when your videos come back in!
As before, we still have optional Pre-Screener questions (to help you hone the types of user who take your tests) and Exit questions (to get some additional, post-test data such as NPS or other feedback). Just as they do in real life, these book-end the tasks in the launch process.
Step 3: Launch
This hasn’t changed too much, but should give you a better summary of the test you’ve just designed (well done you, have some apple pie, unless you’d rather not).
As ever, you can at this point save your test as a draft, or schedule it for later.
Or—and let’s bask for a second in the freeing uncertainty that the unfolding future brings—throw caution to the wind, and hit that sumptuous green Launch button with the audacity deserving of the user-centric pro you are and always will be.
Log in to your account now, and give it a whirl!