Highlights from our AMA with Lee Duddell, UX Director and Founder of WhatUsersDo

Last week, on our UX community Slack channel, we hosted a live AMA session with Lee Duddell, our illustrious UX Director and founder of WhatUsersDo.

For this hour-long chat, Lee fielded questions covering absolutely anything on the subject of UX, from building a business case for user testing, to user research methods, to his opinion on expert reviews.

We’ll be running these AMAs regularly on the WhatUsersDo Slack channel, which is also the perfect place for UX professionals and newbies alike to mix it up and thrash out any UX subject they like. If you’d like to join in, follow the link and sign up to our Slack group!

Here are some of the highlights from Lee’s AMA. First of all, an icebreaker from our own team. (Please note: some edits have been made for clarity and spelling.)

Why did you get into UX testing and why should we listen to you?

Ages ago I used to be a digital manager – project, product, website, herding cats! – then I stumbled across this thing called usability testing and was astonished at how:

  1. It was a blindingly obvious way to improve experiences
  2. It empowered and motivated teams to want to change things
  3. It was pretty cumbersome to do (lab hire, costs, hassle, participants not turning up!)

So I started WhatUsersDo to sort all that out. And here we are nine years later!

I’ve worked with large and small brands to design research and embed it in their internal processes, so I know a fair amount about research and UX (and cycling).

If I were reviewing existing mobile apps, do you think user research is still relevant? Is there any particular method of research you can suggest?

User research (which tends to identify needs) could well be useful to determine how well the apps you’re reviewing meet those needs. The best methods for identifying user needs are:

  1. Interviews with users to get them to share a problem, discover a challenge or uncover an aim that an app could solve
  2. Some ethnographic research observing what people do and why.

The former tends to be more common than the latter.

What would you say is the recommended number of participants when testing a live site or prototype to inform your insights?

Rule of thumb is up to 10 participants per device, per key journey – but I’d need to see the site first as some journeys are simpler than others. Probably the biggest factor is how often you can test. With smaller participant numbers you can test, iterate, and then test again.

Do you have any go-to statistics or numbers that are 1) valid and 2) persuasive in communicating the ROI of user research? 

This linked article has many useful UX testing stats, and I’ve also found that A/B testing tools are useful for proving the ROI of usability testing and research. You can do this on your own site by running some free or cheap usability tests and using those to inform experiments that you then run on (for example) Optimizely. Then you have data from your own site to prove the case.

Example from our mobile usability report
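To make that last step concrete, here’s a minimal sketch (not part of Lee’s answer) of how you might check whether an experiment informed by usability findings shows a statistically significant lift. It assumes you’ve exported raw visitor and conversion counts from your A/B testing tool; the numbers and function name are hypothetical.

```python
# Minimal sketch: two-proportion z-test on exported A/B test counts.
# The figures below are made up for illustration only.
from math import sqrt
from statistics import NormalDist

def lift_significance(visitors_a, conversions_a, visitors_b, conversions_b):
    """Return (relative lift of B over A, two-sided p-value)."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return (rate_b - rate_a) / rate_a, p_value

# Hypothetical example: control vs. a variant that fixes a usability issue
lift, p = lift_significance(10_000, 320, 10_000, 368)
print(f"Relative lift: {lift:.1%}, p-value: {p:.3f}")
```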

I also like talking about companies like ShopDirect and Amazon, the UK’s two leading pure-play retailers. Both have mature optimisation processes and experiment a lot. Both, and particularly ShopDirect, have invested in qualitative, small-scale research to drive their A/B testing programmes and get significant results.

In this link there’s a video of Sam Barton talking about the ShopDirect journey: how they approach A/B testing and drive it with qualitative insight.

Is there a method for sharing research outcomes with marketing and sales teams that you’ve found particularly effective? 

I guess the generic advice is to tie your recommendations, insights, and findings back to the metrics that marketing uses. Although it might be hard to say, “We’ll increase acquisition by x% if we fix this UX issue,” you should be able to identify the marketing metric it relates to.

Is there any guidance you provide on distinguishing between end user personas and marketing targets (if not the same)? 

I find I have to remind marketers that their personas are stereotypes and may not even exist… and that many usability findings are universal.

I’m prepping for organization strategy and marketing planning meetings later. What piece of advice would you give me as the UX voice at that table? 

My advice is similar to my answer to your earlier question: tie it back to the metrics the marketing team uses, share how UX activities can help drive them and, of course, remove the guesswork.

Your presentation could start: “In the next 20 minutes I am going to explain how we maximise X for the new feature, by ensuring it is easy to use and meets real users’ needs. Here’s how…”

I’m studying for a master’s in HCI. What’s your opinion of expert reviews?

Expert reviews are pointless. The time and money spent with an expert is better spent getting some real people to give feedback. If you’re using an expert, get them to look at your analytics and work out how to design usability tests to improve conversions.

Join us on the WhatUsersDo Slack channel for even more advice and guidance from the UX community.

Main image by Emily Morter

Christopher Ratcliff
Christopher is the Content Marketing Manager of WhatUsersDo. He’s also the editor of wayward pop culture site Methods Unsound. He used to be the deputy editor of Econsultancy and editor of Search Engine Watch.
