One of the irritations of being a user experience designer is that everyone believes they are a user on some level…even me. This means you are constantly bombarded with suggestions in meetings and in passing. This doesn’t happen to developers, for example. I’ve never sat in a meeting where the project manager looks a developer in the eye and says, “It might be better if you just pass a variable.” But I have sat in meetings where every single participant has advised me on the design. This isn’t always a bad thing per se, because the ideas are often fuel for another iteration…and they are often good ideas as well. Even the best design is not immune from random critiques over menu options, button sizes, or ambiguous labels. All of this is important. But the meeting participants are not users, and they don’t have graduate degrees in HCI or human factors. Essentially, what you end up with is a series of opinions from non-expert non-users.

I can get past all of this and find it a minor annoyance in most meetings. The way I figure it, when the little things in a design dominate your meetings, you are usually at the end of your iteration and ready for testing. But what I have noticed over years of meetings – something that is truly becoming more and more of a pain point for me – is the assumption that a design must be “dummy-proof,” or that our users will not be able to tell a radio button from a drop-down menu. It isn’t just happening in meetings I attend. This is becoming a pervasive thought in the UX community and beyond, and it illustrates a true lack of insight into how humans think and use our designs.

Take, for example, a current argument taking place amongst UX professionals concerning the hamburger icon. The argument essentially employs user testing to indicate that no one understands the hamburger icon and that one should not use it (though some arguments, to their credit, do note that context is important). The essence of the argument cites low discoverability, lack of recognition, and lack of engagement in A/B testing to make the case that users just don’t understand or use the hamburger icon. When I read these posts, I picture a user scratching their head, unable to locate the menu, and abandoning the app or website. I think this is unlikely, though, and the argument ignores a certain amount of sense-making a user must employ in order to use a piece of software that, theoretically, will make their life or job easier.

Here are some of the arguments:

Why We Banished the Hamburger Menu From Our iPhone App

Why and How to Avoid Hamburger Menus

Hamburger vs Menu: The Final AB Test

Ultimately, the bastardization of the hamburger icon is a bit of a boring argument, though, and it’s based on some pretty weak studies. The perpetrators of this argument need to ask themselves two questions. The first: How does an icon become an icon – one that we recognize and use daily? It becomes that way through its pervasive presence. The second: Are these truly the problems we should be working on in UX – the problems we went to grad school for? Yes, labels and icons are important, but to think a single icon is responsible for a sustained doubling in usage (note that these are pretty small studies and fail to track usage over longer periods of time)…I’m not buying it. “Oh, wait a minute, you mean if I just change this one icon, all my users will be happy and my app is now usable?!” User behavior is more complex than that. And users are often smarter than we give them credit for.
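The small-sample concern can be made concrete. Below is a minimal sketch – the counts are hypothetical, not taken from the studies linked above – of a 95% confidence interval for the difference between two engagement rates in an A/B test. With only 100 users per arm, even an apparent doubling of engagement produces an interval whose lower bound barely clears zero.

```python
import math

def two_prop_ci(x1, n1, x2, n2, z=1.96):
    """95% confidence interval for the difference in proportions (arm B minus arm A)."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p2 - p1
    # Standard error of the difference between two independent proportions
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff - z * se, diff + z * se

# Hypothetical result: 10/100 users engaged with the hamburger icon,
# 20/100 with a labeled "Menu" button -- engagement "doubled".
low, high = two_prop_ci(10, 100, 20, 100)
print(f"95% CI for the difference: ({low:.3f}, {high:.3f})")
# -> 95% CI for the difference: (0.002, 0.198)
```

At this sample size the true difference could plausibly be near zero or near twenty percentage points – far too wide a range to credit a single icon with a steady doubling of usage.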

I have made the case over and over that a user will often adapt to even a bad design. The bad design will eventually become convention. Think of a TV remote. Most are poorly designed. But let me take the remote you have been using for two years, switch all the buttons around to what I think is a better arrangement, and see how you like it. The change in and of itself will be enough to exasperate most users. If users can adapt to poor conventions, that is no excuse to ignore good design. However, users will adapt, and small changes such as Apple’s move to a flat design are little more than a bump in the road for most users. Many will tout a system as lacking usability – over-thinking the UX – based on design elements that have little impact on usability, often ignoring the idea that a user needs an adaptation period with any new interface. One more example: You trade your car in today and buy a new one. How much time will you spend adapting to the new and improved features? And is it worth it to get the benefit of those features?

Back to my original point on something as simple as a hamburger icon: How does an icon become recognizable? Through pervasive use. An icon is a language. It’s a symbol representing a concept, just like a letter represents a sound. Those who have studied linguistics will know exactly what I am referring to. For those who haven’t, a quick lesson: The letter A represents a sound. That sound could be represented by any other symbol. A letter is interchangeable with any other symbol to represent that sound (as long as there is agreement on the sound it represents) – just like an icon. That’s why combinations such as STR8 (straight), 4U (for you), and 2CUTE all have meaning to us in English.

The overall point of this article can be found in my question above: How much time should we expect a user to invest in order to use a piece of software that, theoretically, will make their life easier? Perhaps this is less of an issue with simpler systems or smaller sites. But I have worked in healthcare for a decade now, and the systems are robust and complex. Theoretically, they allow you to complete tasks more quickly and to achieve what you could not achieve before. A piece of software is a tool that augments our natural abilities as humans – not much different from a steam shovel, a calculator, or pencil and paper. How much of an investment are you willing to make to learn something that will save you time and energy? That, of course, depends on the upfront investment. A word processor is a good example. Curse Microsoft Word all you will. But are you willing to go back to a typewriter? (Someone will read this someday who has never used a typewriter.)

All of this presupposes we can agree on what a “bad design” is. But can we? Is a bad design one that is not simple to use? That has steadily become our definition over the past decade as computers and smart devices have moved toward simpler models. But this ignores the expert or “power user” versus the user who only cares about simplicity and completing a few simple tasks. Expert users often wish to use more complex features and scenarios in a software platform or interface. I recently had a discussion with a colleague who believed an interface should immediately make sense to the user. I emphatically disagreed, as this depends largely on the complexity of the system and the tasks the user may wish to accomplish. Of course, I believe there is a place for the simple interface that allows the user to accomplish a few small tasks. But we were not discussing a simple piece of software such as a to-do list application. Rather, we were discussing a robust healthcare system capable of performing hundreds of tasks.

I recently came across an episode of the 99% Invisible podcast titled Of Mice and Men. It briefly details the life of Doug Engelbart – the inventor of the mouse – and provides insight into his thoughts on augmentation. Engelbart invented a complement to the mouse called the keyset. The keyset allows a user to operate the mouse with one hand and control all keyboard actions (via the keyset) with the other. He didn’t design it for simplicity, and usability wasn’t the primary concern. His intention was to allow a human to do more, and to do it more efficiently, once they had learned how to use the device. Engelbart believed humans – with the right investment of time – could do so much more with a computer than they could at the time. The upfront investment would give the user a high ROI. This is an idea that has intrigued me for years. And it raises the question of how much time a designer can expect the user to invest in learning a new interface or technology that will significantly augment their life.

I don’t think the movement toward simpler design is bad at all. But I know you can do more with Microsoft Word or Excel. I know your TV probably has great features you haven’t bothered to figure out. Even a new car today will often let you manipulate its settings and customize it to your preferences. It’s sometimes difficult to figure these things out. But that doesn’t mean the devices lack usability. They are complex devices capable of more and more each year. The question is: How much effort are you, as a user, willing to expend to maximize the benefits of these technologies and ultimately save yourself time and effort in the long run?

Image courtesy of: Harold Copping [Public domain], via Wikimedia Commons
