
Are we shortchanging users with simplistic designs?

Apr 20, 2016 | Design, User Experience / UX

I initially thought it was an April Fool’s Day joke. The Nielsen Norman Group recently published an article titled Difficult Designs Are Better (for Humanity) by Kara Pernice. I read it. Then I read it again. I reflected on it for a few days and thought they might be serious. Then I read it a third time looking for some sort of big reveal. Was this a joke for April first? There was no overt indication the piece was posted in jest, but I have since revised my initial opinion and am left with a firm assumption that this is, indeed, an April Fool’s Day joke, written in a tongue-in-cheek tone with a subtle reference to April first at the end.

If you haven’t read the article or don’t care to follow the link above, Pernice essentially argues that users can receive physical and cognitive benefits from designs that are not too simplistic and are, in fact, more difficult to use. And though I suspect there is more than a bit of humor behind the piece, I also believe a subtle point is being made. It’s similar to a point I made in an article last year – Do We Over-Think UX Design…And Underestimate the User?

In that article, I argue a few points. The first is that we shouldn’t underestimate the user’s intelligence or the human ability to make sense of an interface. Essentially, users experiment with an interface – they play with it and get to know it. Usability testing often misses this aspect of familiarity, and that is one of the primary problems I have with relying on this method alone to test a design. Just because a user doesn’t immediately understand an icon or how an interface works doesn’t mean they will abandon it or forever miss key pieces of functionality. Humans have a unique ability to understand and use tools to augment their abilities – whether the tool is a rock used by early hominids to smash nuts or an advanced interface used to manage a patient’s health record.

The second point concerns human augmentation and how much time a user should be willing to invest in learning a new tool that will ultimately save them an immense amount of time. The example I use (and there are many like it) is the word processor versus the typewriter. A word processor takes some time to learn if you have never used one. However, the upfront time it takes to learn will save you hours upon hours compared to using a typewriter. I also use the example of Douglas Engelbart’s keyset, which had a tremendous learning curve but significantly augmented human-computer interaction.

What I find most intriguing about this second point is our relationship with technology: how quickly we adapt to a new technology and forget how difficult a given task or workflow was before the innovation arrived. We, in essence, take technology for granted.

Louis C.K. has a comic routine on this very aspect of human behavior.

Human augmentation and the technologies that enable it are not just things we take for granted. They are also, quite often, the foundation of our undoing. Consider automation bias, where we become so reliant on a machine performing a task that we fail to adequately compensate for error or for any deviation from the automated workflow. This reliance on technology can have catastrophic, exponential effects, because automation allows us to duplicate those errors more quickly (and unthinkingly).

An example: in many hospitals, the distribution of medication is controlled through a series of automations using computers and electronic medicine cabinets. There are numerous stories of incorrect medications being administered as a result of automation. In one instance, six newborn babies were injected with an adult dose of blood thinner (killing three) when nurses used an automated system to administer routine doses. The automation allowed doses to be given more quickly, duplicating the error.

Another, less tragic, example is how the software I am using to write this article continually auto-corrects Pernice’s name to “Bernice” each time I type it. If I fix this through the settings, I potentially compromise the spell-correction feature and may create more problems than I solve.

In the Nielsen Norman article, Pernice humorously makes a number of points – one of which relates to Google returning top search results to address a user’s information needs. Pernice asks whether this really serves the user or whether it would be better to force them to search harder for information. Whether the article is tongue-in-cheek or serious, there is a point to be made here beyond humans simply taking technology for granted.

Does technology make us dumber? The question brings to mind a favorite article I have used in many classrooms over the years as a teaching tool – Is Google Making Us Stupid? Its primary focus is on how the web has reshaped our reading and thinking patterns – not necessarily made us dumber. This change in our cognition has been corroborated in various studies, and I have written a little about the phenomenon before.

But here is the real question: do we lose abilities when we augment ourselves with technologies? A basic example is the use of power equipment. Digging a ditch by hand makes us strong. Sitting on a backhoe does not. A more complex example might be the ability to read a map – the old-fashioned kind we used in the 20th century, before the proliferation of Google Maps. It’s just a hunch, but I’d be willing to bet there is a whole generation that cannot read a paper map today.

Google is one of the best examples of our over-reliance on technology. Using the search engine alone exposes how much it reshapes our thinking. Most users, it’s often noted, rarely move beyond the first page of results. So an information seeker may conclude there are no available sources on their topic because none appear within the first 10 results (or decide it will simply take more effort to find the answer than they are willing to expend) – even though a Google search often returns millions of results beyond that first page. Let’s not forget, as well, that this is only a single search engine, and the seeker will likely never try an alternate search engine, a meta-search engine or a sophisticated database.

Let’s suppose a person conducts a Google search and does find a source. They read it and, perhaps, it answers their question. But how fully do they understand the issue? If it was a simple question, such as the day’s weather, there is little need to dig deeper. But if they are seeking an answer to anything with a modicum of depth, they could easily miss the holistic nature of the information they seek.

Wikipedia will give you an overview of a topic. But how many users bother to investigate the sources it cites? Truly understanding a topic often means consulting more than a single source, or finding the root study or research that is foundational to the topic. I remember when Wikipedia and Google were lambasted by my librarian and academic colleagues many years ago. They believed these sources were inaccurate (and probably felt threatened by them). I think Wikipedia and Google are great tools and can be highly accurate when used correctly. They are only a small part of the problem when it comes to our lack of information literacy; human laziness coupled with ignorance of research methods is a much larger problem. In fact, the only real problem with tools like Wikipedia and Google is that they induce laziness.

This is part of what Pernice is getting at in her article, and some of what I wrote about in my own. However, I think the question – does technology make us dumber? – is the wrong question to ask. At the very least, it is the wrong way to look at the problem, and it ignores some basic aspects of human behavior and cognition. It is certain that technology changes the way we think. But in many instances, it improves our abilities. And brain plasticity research illustrates how quickly we can adapt to either a new technology or the disappearance of an existing one.

Let’s consider smartphones. I used to have a dozen or more phone numbers memorized. Today, I couldn’t tell you my own daughter’s phone number. It’s stored in the cloud and I don’t need to remember it. But suppose I went back to an old touchtone phone. I would quickly begin remembering the numbers I used on a regular basis. Electronic maps offer a similar example. If Google Maps disappeared tomorrow, I’d be able to go back to paper maps (grudgingly). The upshot of having these tools is that I free my cognitive capacity for other tasks…in theory. Assuming I don’t just spend the time I save with technology sitting on the couch watching reality shows, I should theoretically be able to reallocate some of my cognition (and my time) to more useful tasks than memorizing numbers, using a typewriter or going to the library to look up basic information.

So when you look at technologies that have appeared over the past few decades, they aren’t really making us dumber. You aren’t dumb simply because you don’t remember phone numbers or use paper maps. You aren’t dumb because you struggle to read long posts like this. You have just adapted and are no longer required to remember useless tidbits of information. I have faith humans could adapt to not using technology in the same fashion we have adapted to using technology.

Is it wrong to make an interface too simple? Is it wrong to write shorter posts because you know people might not read a longer one? Is it a disservice to humans when we make technology more usable? I would propose it is not. What is wrong in all of this are our assumptions – both as users and as designers.

As designers, assuming our users are stupid or incapable of managing a complex interface is a disservice to them. Humans adapt and are often ingeniously clever in how they use the products we design. If there is an easier way to design something, by all means we should make it simpler. Give users an extra few minutes in their day to do something other than navigate your complex interface. Maybe they’ll do something great with the time they save.

As users, our assumptions are troublesome in two respects. First, how much time is learning a new technology worth if it saves you hours, days, months or years over a lifetime? Do you owe it to yourself to spend the extra time learning a word processor to write your next book, rather than using a typewriter or writing it by hand? Second, our assumptions about technology are often incorrect and lead to problems such as automation bias and overestimating the authority of a technology.

I’ve written about automation bias above, and it ties directly into our reliance on technology as an authority. BJ Fogg has conducted research on how humans often assume computers are authoritative. We project authority onto technologies. You assume, for example, that the flight prices you receive from Expedia are the best available and not manipulated. You trust the authority of Google to answer a basic question.

But as I mention above, it may not be that technology makes us dumber or even lazier. It may be that technology simply changes our cognition, making us think we are smarter than we really are. David McRaney recently featured this topic on his podcast You Are Not So Smart. He interviews Matthew Fisher, who has conducted research on how we use Google. Fisher’s research suggests, “the side effect of a familiarity with search engines is an inflated sense of internal knowledge. Habitual googling leads us to mistakenly believe we know more than we actually do about any given subject – and here is the crazy part – that intuition persists even in moments in which we no longer have access to the internet. The more you use Google, it seems, the smarter you feel without it.”

The idea that technology somehow makes us dumber or lazier, or has some other disastrous consequence, is not a new one. It goes back at least as far as the Greek philosophers: Plato worried that reliance on the written word, fixed as it is, would somehow make us less smart or lesser humans. We certainly have not become less smart as a species since the time of the Greeks – at least not when all things are considered.

The greatest risk our use of technology poses is the very problem Fisher’s research underscores: that we will think we are smarter than we are. When we rely on technologies and assume we are more intelligent than we really are, we create blind spots. These blind spots foster self-delusion and open the door to error.

Mark Twain once wrote: “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” I always say: I am always uncertain about those who are so very certain.

We should strive for a state of healthy skepticism in our relationships with technologies – whether that means questioning Google, an interface, or the feedback we receive from a computer program. Technologies are created by humans, and, as James Reason’s research found, we have the ability to build errors into the systems we create; those errors become dangerous when we use the systems unquestioningly.

And as for the Luddite view that we will become less human, or even dumber, for using these technologies? I have more faith in human ingenuity and the plasticity of our brains than perhaps I should. However, as Albert Einstein reportedly said: “Only two things are infinite, the universe and human stupidity, and I’m not sure about the former.”