Bad Interfaces – Technology Leading the Way


[Image: gold wall of hanging gold cylinders with people behind it]

I don’t mind completing surveys; I even do those phone surveys. Having worked with several different marketing teams and conducted countless UX information-gathering surveys over the years, I can understand the difficulties of getting a good response from people. So I don’t mind taking the time to complete the odd survey.

Still, I sometimes have to wonder whether the teams behind these surveys really understand the audience completing them in the first place.

A few weeks back our fence was blown over in a storm. We put in an insurance claim; it was processed, and we got the fence repaired. No issues, good service all round.

Then I got an email from my insurance company asking me to complete a customer satisfaction survey.

What is Dissatisfied

The survey seemed very standard, apart from being inaccessible in parts if you only use a keyboard.

All was good until we (my partner and I) completed a question that asked us to rate the service from 1–10 (10 being outstanding). We gave them a 7. The survey responded by asking why we were dissatisfied. We weren’t; we just rated it 7/10. Not perfect, but not dissatisfied by our reckoning. Yet the survey considered a rating of 7/10 as dissatisfied, and changed its questioning to suit.

This assumption, that if it’s not a 9 or 10 then the customer must be displeased, doesn’t in any way take into account the customer’s personal rating scale. We may never give a score of 10 or 1. We could be very happy with a score of 7/10, as we were.

The lesson to be learnt here is that you can’t assume a 7/10, or even a 6/10, indicates a negative emotion or dissatisfaction from the customer. This style of survey leans towards a negative bias, or inflates responses towards the extreme positive.
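
To make the problem concrete, here is a rough sketch, in JavaScript (since that seems to be what the survey runs on), of the branching logic the survey appears to apply, alongside a fairer alternative. The function names, question wording and thresholds are my assumptions for illustration, not the insurer’s actual code.

    // Hypothetical sketch of the survey's follow-up branching.
    // Treating everything below 9 as "dissatisfied" lumps a happy 7/10
    // in with genuinely unhappy customers.
    function followUpQuestion(score) {
      if (score >= 9) {
        return "What did we do so well?";
      }
      return "Why were you dissatisfied?"; // a 7/10 lands here
    }

    // A fairer branch keeps a neutral middle band, so a 7/10 is asked
    // what could be improved rather than being told it was dissatisfied.
    function fairerFollowUp(score) {
      if (score >= 9) return "What did we do so well?";
      if (score >= 5) return "What could we improve?";
      return "What went wrong?";
    }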

Now that was a minor issue compared to the next one.

Doing the Likert Scale

The presentation of a Likert scale question is never easy. As UX professionals we are always looking for new ways to present a question or interface without promoting any bias.

However, when we were presented with the following question, we both stared at the screen for a good minute before we could jointly work out what was required.

[Image: alternative Likert question layout in the feedback survey]

What you are meant to do, and it took us a few goes to work this out, is drag each card (on the left) to one of the response boxes (on the right) and drop it there. The card then appears as a box of text within the response area.

There are a number of issues with this interaction:

  • It’s very different to the traditional layout; it was completely outside what we were expecting.
  • Initially you can misread the page as asking “How would you rate your experience in terms of:” with the answers being “Extremely, Very, Reasonably…”
  • You may ignore the small help text “Please drag each item to a category”
  • You may ignore the cards completely.
  • There is no indication of what an “item” is or what a “category” is; they mean the cards (the questions) and the possible answers (on the right).
  • The process of reviewing or moving categories isn’t as smooth as it could be.
  • There is a bias towards dragging the cards (items) to the responses (categories) at the top of the page.
  • It’s only usable with a pointing device (see the keyboard-fallback sketch after this list).
  • I just don’t even want to think about the accessibility of this.
  • I still don’t know what the “+” buttons on the responses do.
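
To be fair, the pointing-device problem is fixable. Here is a minimal sketch of how a keyboard fallback could work, assuming each card is a plain button and each response area is a focusable element. All the selectors and markup below are hypothetical; they are not taken from the actual survey.

    // Hypothetical keyboard fallback: select a card with Enter/Space,
    // then press Enter on a category to place it there.
    // Assumed markup: <button class="card">Claims handling</button> and
    // <div class="category" data-category="Very satisfied">...</div>.
    let selectedCard = null;

    document.querySelectorAll('.card').forEach(function (card) {
      card.addEventListener('click', function () {
        // Enter/Space on a native button fires click, so this path
        // works from the keyboard as well as the mouse.
        selectedCard = card;
        card.setAttribute('aria-pressed', 'true');
      });
    });

    document.querySelectorAll('.category').forEach(function (slot) {
      slot.tabIndex = 0; // make the drop target focusable
      slot.addEventListener('keydown', function (event) {
        if (event.key === 'Enter' && selectedCard) {
          slot.appendChild(selectedCard); // "drop" the selected card here
          selectedCard.removeAttribute('aria-pressed');
          selectedCard = null;
        }
      });
    });

Even then, as the comments below suggest, a plain radio-button layout would almost certainly have been quicker for respondents in the first place.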

Yes, it’s fine once you work out what to do. But most people aren’t that concerned about the survey, and they are likely to leave if the question layout breaks their mental model.

Overall it just seems to be a fancy “cool” JavaScript insert that, frankly, should have been killed off or tweaked to make it usable.

This is a classic example of a new interaction technique not being the best delivery method.  Sometimes the cool tech is just not the best way.

Now, the concept is still valid; with a little more refinement, and some proper user testing, it could be an innovative interface.

Still, sometimes I do wonder whether these insurance firms employ anyone to consider the user experience of their customers.




  1. Yeah I see this all the time. Technology used for the sake of technology, instead of thinking about how it affects the user experience and accessibility.

    I’m almost certain these sorts of decisions come from further up the food chain, but the developer either doesn’t have the capacity to explain everything that’s wrong (with this form in this case), or doesn’t realise it’s wrong.

    Where are the form police when you need them?

  2. @mike The issue wouldn’t happen if the interaction design were left to design professionals, rather than people who lack the broad range of skill sets required to see both the big picture and the fine technical detail.

  3. @gary Agreed. My experience with developers is most of them don’t have that training, with the exception of a couple I know who truly do understand the consequences of poor UI, UX etc.

    And my experience with some of the bigger companies I’ve worked for is they assume it’s the developer’s role to tackle these things. Clearly to people like us it’s not their role, but the other problem is there is no-one around to put up their hand and say “Hang on a sec, let’s think about this. Where’s our designer?”.

  4. Yea, like you, I was expecting the typical select-a-radio-button approach, and it took a couple of seconds to figure out the need to drag and drop.

    Drag and drop is definitely OTT for something like this. And if we compare what a respondent needs to do we have to ask ‘what were they thinking?’:

    Normal questionnaire:

    1. read the question and identify response needed
    2. click on radio button representing best response
    3. move to next question

    This drag and drop version:

    1. get to question and wonder what to do.
    2. read the question and figure out response
    3. click on card
    4. hold down mouse button drag card to drop area of choice
    5. let go of mouse button to drop card
      repeat 2-5 for next card

    A simple read + respond would have been far quicker and more efficient, but in this case they have doubled the effort of getting the responses.

  5. How about they just make it simple and get a rating from 1-3

    1. Shithouse
    2. Prefer not to say
    3. Superb

    Just go back to basics is what I want surveyors to do. A 1–10 scale in marketing 101 translates to “I have no idea what to do with the data, but our charter states that we must constantly get 9 or 10 out of 10, and if we don’t we must find out why” – all without understanding what we are asking the respondents to do!

    Bet you marketing at your insurance company has paid an external consultant heaps to come up with this “clever questioning” workflow.

  6. Hi Gary,
    Regarding the ‘What is Dissatisfied’, your experience confirms what I’ve read recently in a comment made to this post: Are Your Surveys Worth Your Customers’ Time?
    “When a customer gives a 4 (out of a scale of 5), management interprets this as a zero to the employee.” (from the first comment)
    If they are treating the responses as black or white, what is the point of offering the shades of grey?

  7. I think the little ‘+’ sign is to display all the cards in the category when there are cards in it? I agree that some developers do things without really thinking how useful they are (I’ve done that before). I think it would be interesting to see the number of partially completed surveys on this one and where they leave the survey.

Comments are now closed, move along, nothing to see here.