I don’t mind completing surveys; I even do those phone surveys. Having worked with several different marketing teams and conducted countless UX information-gathering surveys over the years, I understand the difficulties of getting a good response from people. So I don’t mind taking the time to complete the odd survey.
Still, I sometimes have to wonder whether the teams behind the surveys really understand the audience completing them in the first place.
A few weeks back our fence was blown over in a storm. We put in an insurance claim, it was processed, and we got the fence repaired. No issue, good service all round.
Then I got an email request from my insurance company to complete a customer satisfaction survey.
What Is Dissatisfied?
The survey seemed very standard, besides being inaccessible in parts if you only use a keyboard.
All was good until we (my partner and I) reached a question that asked us to rate the service from 1–10 (10 being outstanding). We gave them a 7. The survey responded by asking why we were dissatisfied. We weren’t; we had simply rated it 7/10. Not perfect, but not dissatisfied by our reckoning. But the survey considered a rating of 7/10 to mean dissatisfied, and it changed its questioning to suit.
This assumption, that anything below a 9 or 10 means the customer is displeased, doesn’t in any way take into account the customer’s personal rating scale. We may never give a score of 10 or 1. We could be very happy with a score of 7/10, as we were.
The lesson to be learnt here is that you can’t assume a 7/10, or even a 6/10, indicates a negative emotion or dissatisfaction from the customer. This type of survey leans towards a negative bias, or over-inflates towards the extreme positive.
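The survey’s behaviour looked like Net Promoter Score-style bucketing applied without thought. A minimal sketch of the kind of branching logic it seemed to use, where the threshold of 8 is my assumption rather than anything the insurer published:

```python
def follow_up_question(score: int) -> str:
    """Return the follow-up question for a 1-10 satisfaction rating.

    Assumed threshold: the survey appeared to treat anything below
    8 as dissatisfaction, ignoring that a 7 can be a happy customer.
    """
    if not 1 <= score <= 10:
        raise ValueError("score must be between 1 and 10")
    if score >= 8:
        return "What did we do well?"
    return "Why were you dissatisfied?"

# A 7/10 from a satisfied customer still gets the negative branch.
print(follow_up_question(7))   # "Why were you dissatisfied?"
print(follow_up_question(9))   # "What did we do well?"
```

Hard-coding a single cut-off like this bakes the designer’s rating scale into the survey, rather than asking the customer directly whether they were dissatisfied.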
Now that was a minor issue compared to the next one.
Doing the Likert Scale
The presentation of a Likert scale question is never easy. As UX professionals, we are always looking for new ways to present a question or interface without introducing bias.
However, when we were presented with the following question, we both stared at the screen for a good minute before we could jointly work out what was required.
What you are meant to do, and it took us a few goes to work this out, is drag each card (on the left) to the response boxes (on the right) and drop it there. The card then appears as a box with text inside the response area.
There are a number of issues with this interaction:
- It’s very different to the traditional layout, completely outside what we were expecting.
- Initially you can read the page question as “How would you rate your experience in terms of:” Answer – “Extremely, Very, Reasonably…”
- You may ignore the small help text “Please drag each item to a category”
- You may ignore the cards completely
- There is no indication of what an “item” is or what a “category” is; they mean the cards (the questions) and the possible answers (on the right)
- The process of reviewing or moving categories isn’t as smooth as it could be.
- There is a bias towards dragging the cards (items) to the responses (categories) at the top of the page.
- It’s only usable with a pointing device.
- I just don’t even want to think about the accessibility of this.
- I still don’t know what the “+” buttons on the responses do.
Yes, it’s fine once you work out what to do. But most people aren’t that invested in the survey and are likely to leave if the question layout breaks their mental model.
This is a classic example of a new interaction technique not being the best delivery method. Sometimes the cool tech is just not the best way.
The concept is still valid; it just needs a little more refinement, and maybe some proper user testing, and it could be an innovative interface.
Still, sometimes I do wonder whether these insurance firms employ anyone to consider the user experience of their customers.