Tag Archives: usability testing

5 ways to increase online donations – Free eBook

15 Jul


We have put together a free ebook to help non-profit organisations increase online donations. Having spent over 100 hours watching users in usability tests on non-profit websites, we have a wealth of useful insights to share. This guide outlines the small changes you can make to a charity website to encourage users to donate, and the pitfalls most likely to make them abandon a donation.

In this post we provide a snippet from the book. You can read Chapter 1 below and then simply Pay with a Tweet to download the full eBook.
Read more

 

 

Liked this article?

Get more specialist retail and finance UX insights straight to your inbox

  • We promise no spam, just straight up great insights from our UX experts!

 

 

Damian Rees

About Damian Rees

Damian has worked as a usability and user experience consultant for over 13 years. He has held senior roles at companies like the BBC and National Air Traffic Services, where he has researched and designed for users in a variety of contexts including web applications, voice recognition, and air traffic control interfaces. Follow Damian on Twitter @damianrees

3 ways to stop driving customers crazy with online quote forms

16 Oct


Stop driving your customers crazy by forcing them to fill in endless forms before they’ve had a chance to decide what they need. If you are a financial services provider doing this, you are likely missing out on conversions. Why? Because customers don’t always understand which product they need, they often have to digest complex concepts before they can make a decision, and they are bombarded with difficult questions they hadn’t even considered!
Read more

 

 


 

 

Damian Rees


Stop waiting for the perfect time to run usability tests

30 Jul

No one is perfect, that's why pencils have erasers

 

“Perfectionism is the enemy of progress.” This is a quote I’ve found to be very true throughout my life. Whenever I hear or read it, I remember to stop trying to do everything perfectly and focus on getting things done once they are good enough. Once I start focusing on good enough, my to-do list shortens, I meet deadlines, and I generally feel like I’m getting somewhere again. So when I see clients falling into the same trap, I encourage them to focus on progress rather than perfection.
Read more

 

 


 

 

Damian Rees


Insight from usability testing – how to get more online donations

6 Sep

When usability testing charity websites we see the same user need being unfulfilled time and again. Before making a decision to donate, volunteer, or fundraise for you, users need to know where the money goes.

They’ve heard about charities eating up all the money themselves and only a small amount getting to the people who need it. They want to know your charity isn’t like that. But you also know that users aren’t going to read your AGM notes and won’t invest time reading about your financial structure. So what do you do? In this article we’ll show you some of the sites doing it well and give you some inspiration on how to fix your site to generate more online donations.

Read more

 

 


 

 

Damian Rees


Playtesting – why you should user test designs early and often

26 Jan

This is a fantastic video from Penny-Arcade about the virtues of testing concepts and designs early and often with users. The video is aimed at games designers but applies to any designer and translates particularly well to web designers.

It’s not often that we share videos on our blog, but this one is too good not to shout about. It delivers the message we tell our clients, but in a much better and more enjoyable medium. If you don’t mind sitting through the annoying advert at the beginning, you’ll find the reward is absolutely worth it. Enjoy!

 

 


 

 

Damian Rees


Loop 11 – Our Review

6 Jan

What is it?

Loop11 is an online, remote usability testing tool. In simple terms, Loop11 lets you create an online survey that includes tasks for users to complete on a website of your choice. You can mix questions and tasks in any order you wish, and for each task you can set a goal for the user to achieve, e.g. find a size 10, blue women’s jumper for under £30. You then set the URL each task starts on. There is a demo on the Loop11 website that walks you through what your users will see when they complete a test.
 
Tasks & Questions

A step by step process of setting up your test

 
Below are two examples showing how the test looks to the user when they are completing a task (Figure 1) or answering a question (Figure 2). On each task the objective is always displayed at the top, along with the progression buttons. When asking a question, Loop11 gives you various question types that let users answer in different ways; the example shown is a standard text-field comment box.
 
Task Example

Figure 1. An example of how a task looks to a user

 
Example Question

Figure 2. You can insert questions before or after tasks; above is an example of a text-field question

 

What are its advantages?

This is the closest tool we have found so far to an actual usability test. Loop11 is one of the only services we have found that really lets you create something similar to the usability tests we regularly carry out. You have control over the design of a proper test script, which is the main reason we have found it so useful.

You don’t have to be in the same room as the user. Loop11 tests are un-moderated, so all you have to do is design the test, set it live, and spread the word by sending a link to your selected participants. Users can then complete the tasks in their own home at their own pace. The main advantage over face-to-face usability testing is that you can test as many users as you want.

Loop11 isn’t free, but it is cheaper than moderated usability testing. You also don’t have to spend much on incentives, as users participate when and where they want (we’ve written more about this on UXbooth). We still included a small incentive as a thank-you for the time users spent completing the test, and since we kept the test short it worked out rather well for everyone.

You can use Loop11 on any device with a web browser. We haven’t verified this on every device, but since Loop11 runs entirely in the browser it shouldn’t matter much which device you use.

View ready-made reports as participants complete your test. After you set the test live and participants start completing it, you can view reports showing how many people succeeded at each task, how long it took them, and how many pages they went through to get there. This information comes in four exportable formats (Excel XML, PDF, CSV, and Excel XML with participants), so you have access to the original data to do with as you please. The PDF export includes graphically presented results for the overall test and for each task. However, we found the Excel document of raw data the most useful, as it allowed us to work with the data to produce a report containing exactly the information we required. We could then brand it and use it within our projects.

 
Sample Report

A sample version of the PDF report that Loop11 exports after a test is finished
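As a rough illustration of working with the raw export, here is a sketch that aggregates a results CSV into per-task success rates and mean completion times. The column names (`task`, `outcome`, `time_seconds`) are assumptions for illustration, not Loop11’s actual export schema; you would adjust them to match the file you download.

```python
import csv
from collections import defaultdict

def summarise_results(path):
    """Aggregate per-task success rates and mean completion times
    from an exported results CSV (column names assumed for this sketch)."""
    stats = defaultdict(lambda: {"attempts": 0, "successes": 0, "seconds": 0.0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            s = stats[row["task"]]
            s["attempts"] += 1
            s["successes"] += row["outcome"].strip().lower() == "success"
            s["seconds"] += float(row["time_seconds"])
    return {
        task: {
            "success_rate": s["successes"] / s["attempts"],
            "mean_seconds": s["seconds"] / s["attempts"],
        }
        for task, s in stats.items()
    }
```

Once the data is in this shape it is easy to brand and reformat for a client report, which is exactly why we preferred the raw export over the ready-made PDF.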

 

What are its limitations?

Once a test is published there is no going back. You can’t edit a test after publishing it; you can only delete it altogether, so you’d better get it right the first time! We recommend thinking carefully about the order of your tasks before adding them to the test, and re-testing and double-checking as many times as possible in preview mode before putting it live.

Judging success or failure can be tricky. Loop11 tests live websites, so for each task you set the URL users start from, and to track success and failure you add the URLs of the pages you consider to count as a success. This can be problematic if a task can be completed in a variety of different ways: if you don’t anticipate them all, you could record a failure even when a user succeeds.
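The multiple-paths problem can be softened a little by normalising URLs before comparing them. The sketch below is our own illustration of the idea (not a Loop11 feature): when post-processing results, count a reached page as a success if it matches any nominated success page, ignoring cosmetic differences like query strings, trailing slashes, and letter case.

```python
from urllib.parse import urlsplit

def normalise(url):
    """Strip the query string, fragment, and trailing slash so that
    cosmetic URL variations don't hide a genuine success."""
    parts = urlsplit(url)
    return (parts.netloc + parts.path.rstrip("/")).lower()

def is_success(final_url, success_urls):
    """True if the page a participant ended on matches any of the
    pages nominated as a successful outcome."""
    targets = {normalise(u) for u in success_urls}
    return normalise(final_url) in targets
```

Even with normalisation you still have to enumerate every legitimate end page yourself, so the underlying caveat stands: a task with many valid completion routes is a poor fit for URL-based success tracking.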

Tasks need to be carefully designed. Task design becomes critical in an un-moderated test, especially with this tool, because it has to be much less fluid than a typical face-to-face usability test, where we would normally give users the freedom to complete a task as they naturally would. With Loop11 you are forced to take a more quantitative approach in order to get a well-defined pass/fail criterion, so we found ourselves designing tasks that were much more basic and definite. For example, in a face-to-face test we might ask users to describe a problem they typically face and then show us how they would solve it using the site. With Loop11 we would design the task around a very specific question we know they can answer on a specific page.

Slow loading times on some websites. Sometimes the website you want to test performs slowly, which could affect how likely people are to complete the test, and we noticed it crashed on a few sites too. We recommend checking how well a site performs in preview mode before you commit to purchase. Loop11 does provide a piece of JavaScript you can insert into the site to improve the speed, but we couldn’t do this on the site we were using, which is a drawback if you are testing a site without access to the backend.

You might get more participants than you bargained for. One quite costly drawback, especially if you incentivise participants, is that although you can set a limit on the number of tests, Loop11 only cuts people off after that number have completed the test. It cannot stop people who had already started while the final participant was finishing, so you may end up paying out to slightly more people than you planned (we set a limit of 20 but had to pay 24 participants).

You don’t know who really completed the test properly. This is probably the most obvious limitation of un-moderated testing. You would like to think that anyone who sat down to do the test would want to do it properly, but some will just rush through the tasks and questions to gain the incentive at the end. You can usually spot them in the results: the people who took the shortest time to complete their tasks and didn’t write any comments. The question you then have to ask yourself is whether to discount them from the results.
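That manual screen (shortest times, no comments) is easy to automate when post-processing exported results. This is a hypothetical sketch with made-up field names; it flags candidates for manual review rather than excluding anyone automatically, since a fast, silent participant is suspicious but not proven careless.

```python
from statistics import median

def flag_suspects(participants, speed_factor=0.5):
    """Flag participants who finished in under half the median time
    AND left no comments -- candidates for manual review, not
    automatic exclusion. Field names are assumed for this sketch."""
    med = median(p["total_seconds"] for p in participants)
    return [
        p["id"]
        for p in participants
        if p["total_seconds"] < med * speed_factor
        and not p.get("comments", "").strip()
    ]
```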
 

When we recommend using it

If you’re looking for a quick, high-level view of how a website is doing, then this is a good solution. It also helps if the users you want to test are geographically spread or too time-poor to meet one-to-one: with Loop11 they can take part anywhere with internet access, at any time they want. Loop11 is a useful tool for benchmarking a site and finding out where users are running into issues, before carrying out some face-to-face usability testing to pinpoint exactly where a problem lies. The scenario where we are most likely to use it is to monitor the usability of sites we have already worked on. However, as the tool has strict success/fail metrics, it is only really suited to carefully designed tasks with a clear answer.
 
See anything I’ve missed? Just let me know!

 

 


 

 

Samantha Harvey

About Samantha Harvey

Sam recently graduated in Visual Communication. She joined our team in April 2011 and has been conducting user research and making sure our user interfaces follow good design principles. She's keen to point out our poor selection of fonts... er, I mean typography (sorry Sam). Follow Samantha on Twitter @samharvey_ux

Three Usability Testing Tips from a Rookie

9 Sep

With A Little Help

Having participated in several usability tests – as an observer only – you would think I’d be more prepared for when the guys turned around and told me what my next challenge would be: to carry out a usability test by myself. You would be wrong!

At this point I had only observed three sets of usability testing, from the planning stage through to producing reports. I had not yet been to any client feedback sessions, but overall I would say I had seen most of the preparation required for a usability test. Upon hearing that it was to be my first time in the driving seat, I was nervous, even though it was not a paid project.

I made a fair few mistakes, things that I could have done better, things I forgot entirely, and in some cases, things I just didn’t think about at all.

Here’s a breakdown of my top learning experiences from carrying out my first usability tests:

 

1. Don’t help the user

One of the main things I underestimated was how hard it would be to NOT help the individual.

Obviously, as a researcher I am not meant to help a participant with their tasks, or lead them in any way. I am purely meant to relay tasks and then observe as the users lead me through what they are doing. However, this isn’t as easy as it sounds. Whenever I was asked a question, I faltered over what to reply. I remember reading a handy list of examples of what to say should such an occasion arise, but it’s hard to cover every eventuality.

So my tip is:

  • Keep it friendly and chatty, but once the test commences, don’t say more than you have to.
  • When asked a question try and answer it with a question, turning it back to the participant.

 

2. Always have back-up tasks in the test script

No matter how many tasks I thought I had, users took me by surprise by approaching them differently and inadvertently completing the next task while working on another. You can’t control what someone will do, and I was not experienced enough to take control of every situation that arose. However, with careful planning and preparation you can undoubtedly prevent most of the things I struggled with in that first test.

Tip:

  • Don’t be too rigid with your test script; no user will do exactly the same thing, so be prepared for them to go ‘off piste’, and let them move onto other tasks if it is part of their natural journey.

  • Always have back up tasks for those participants who fly through the test.

 

3. Have the confidence to stop a user and refocus them on the task

A couple of times during the user tests I became aware that the participant was offering up opinion rather than actually interacting with the website. One particular user was comparing different websites’ colour schemes and not really focusing on the task at hand. I found it hard to interrupt him because I was wary of spoiling the relaxed atmosphere, but at the same time I needed him to focus on the task so I could understand what people would do when I’m not sat next to them. When a user becomes too unfocused in a way that isn’t relevant to the usability test, the researcher needs to intervene.

Tip:

  • Help the user stay focused on the task at hand by politely repeating it, and try to filter the participant’s opinions.

 

More tips will follow as I become familiar with running usability tests. For now, the guys have left me to learn by trial and error (on internal projects) and it’s been a really interesting experience. I’m not sure I would have learned as much if they had just told me what to do.

Are you learning about usability testing? If you’ve got any tips you want to share then it would be great to hear from you!

 

 


 

 

Samantha Harvey


Only five users?

8 Aug

What do you mean we only need to test with five users? Are you mad? We’re used to doing research with hundreds and thousands of users, how can you possibly suggest that five users are enough?

Ok, so I may have dramatised this a little, but we’ve had many similar conversations over the years we’ve been doing usability. It crops up on a regular basis, so we thought it was about time we anticipated the question and dealt with it here on our blog.

 

Usability research is not like other types of market research

It’s easy to see how clients can feel that five users are only a drop in the ocean when they are used to dealing with far higher numbers of respondents. But this is the point: usability research is very different to other forms of market research. Most market research deals with opinion gathering: what do customers think and feel about this brand? How do they feel about the service we are offering? What do they perceive our brand to offer over competitors? When dealing with opinions, five people clearly isn’t going to cut it; you need hundreds to get a meaningful measure.

Most of the usability research we conduct is designed to identify problems with a website. We’re asked ‘Why is my service not converting as well as I was expecting?’, ‘Why do users drop out at this point?’, and ‘What can we do to encourage more users to register?’… We’re asked to find problems our clients know exist somewhere in the site, and to help them fix them. It’s not about what people think or feel, it’s about observing user behaviour and isolating problems with the service.

 

How many times do you need to see a problem occur?

The great thing about searching for problems is that when you spot them you usually know it. If the zip breaks on your trousers it’s obvious there’s a problem. If you spill your coffee, spot a dent in your car… The problems are obvious. Finding problems with a website or service isn’t so clear cut, but when you watch someone struggle to complete a task because of the way it is designed, irrespective of the user’s level of skill or experience, you usually know you’ve uncovered an issue.

Let’s say we’ve just observed a user get very confused by your online checkout process. You’re pretty sure you have an issue which may be preventing people from buying. To be sure, you test another person, and they struggle in the same place. Then another person hits the same barrier. How many do you need to see before you’re convinced? Another five people? Ten more? A hundred? Do you need it to be statistically significant before you’re convinced?

What about discovering problems elsewhere? Let’s say you’ve noticed a couple of people trip on a kink in the carpet outside your office, or you’ve spotted a couple of people slipping up on some wet marble flooring. How many more people do you need to see hurt themselves before you’re willing to fix it?

 

Testing with more than 5 users results in diminishing returns

Sure, you can test with more people, and there is a chance you won’t uncover any issues until your 6th or 7th user. But that is very unlikely if you’ve designed your tests to cover the core areas of the service. What is more likely is that by your 6th user you’ll start to see the same patterns of behaviour repeat over and over, and the value of the tests starts to diminish.

You may have seen the great diagram put forward by Jakob Nielsen to show the diminishing returns of testing with more than five users.

We agree with this entirely, and what many people forget about the accompanying article (http://www.useit.com/alertbox/20000319.html) is that he advocates testing with five people, fixing the issues, and then testing again. This is very much the way we like to work. We encourage clients who come to us with plans to recruit 15–20 people to reduce their scope and break the project into chunks that allow design changes to be made before new users are brought in.
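The curve in Nielsen’s diagram follows a simple model: if each user uncovers a proportion L of the problems (about 31% in Nielsen and Landauer’s data), then n users between them uncover 1 − (1 − L)^n of the problems. A quick sketch of the arithmetic:

```python
def problems_found(n_users, p_detect=0.31):
    """Expected proportion of usability problems uncovered by n users,
    using the 1 - (1 - L)^n model (L ~= 0.31 in Nielsen's data)."""
    return 1 - (1 - p_detect) ** n_users
```

With L = 0.31, five users already uncover roughly 85% of the problems, while going from 10 to 15 users adds only a couple of percentage points, which is the diminishing return the diagram shows.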

You don’t need high numbers of users to identify usability problems. Don’t confuse usability research with other types of market research. If you want to gauge opinions about competing brands, then five people isn’t anywhere near enough. But if you want to identify barriers to purchasing on your website five users is usually enough.

So, are you convinced? Are five users enough?

 

 


 

 

Damian Rees
