I recently came across an article concerning Facebook and the loading animations they use in their iOS app, The Psychology of Waiting, Loading Animations, and Facebook. Facebook uses a custom loading animation distinct to their app. The article stated that the choice of loading animation changed users’ perceptions of whether the app or the iOS system was to blame for excessive loading time. That is, when the native iOS spinner was used to indicate load time, users were more likely to blame their phone for the slowness (because that symbol represents Apple’s method of indicating load time). But when the custom loading animation was displayed, users were more likely to blame the app for excessive load times. The source is supposedly an A/B test done by Facebook; however, the author was unable to confirm the accuracy of the sources.
Let’s suppose this story is true and users will blame the system if you use the native iOS controls. After all, this sounds pretty reasonable. The ultimate question, then, is what the ethical implications of this are. As UX professionals, is it ethical to design a feature that blatantly misleads the end user?
That’s the question a colleague of mine asked when I sent the article link to him. I was impressed with him and less impressed with myself since I hadn’t even remotely thought about this. At first glance, I thought the technique was ingenious. My colleague’s response opened my eyes to another way of seeing this.
We don’t often think too much about ethics as UX professionals. But there are a variety of reasons we should, and areas of our profession where ethics become pertinent. In the case of the Facebook example above – assuming it is true – it seems this is only unethical once you discover how to mislead the user and intentionally devise a means to do so. Moreover, there is a question as to how much responsibility a designer bears for users’ poor mental models (or users simply not understanding the inner workings of some technologies). The counterpoint, of course, is that we should strive to design technologies that do not require elaborate mental models, or that are at least a bit more transparent. I am not sure how realistic that is given the complexity of many UX-designed technologies such as websites, mobile devices, and apps of all flavors.
The primary question my colleague and I had was whether there are justifiable instances in which we can trick or mislead users. A lot of the writing I will do on this site in future months involves the perception of time, and there are many instances in which we play “shell games” with users to change their perception of how much time has elapsed. Is this ethical? We are, in fact, creating a better experience for the user, with less discomfort – which is part of our jobs: to create better user experiences. But in the case of time perception and using means to distract users, we are intentionally misleading the user.
In my recent writing on the perception of time, I have used several examples involving elevators to illustrate the difference between waiting and occupied time. One example I have yet to write about is the “Close Door” button in elevators. My colleague brought up pedestrian walk-sign buttons in our conversation as another example. Many claim these buttons are illusions. Some argue the close-door button doesn’t work because it would violate the Americans with Disabilities Act if it did. Others contend that a functioning close-door button is required by local and state fire codes. I’m quite certain not all elevators meet federal, state, and local standards – the one in my building, with its perpetual malfunctions and an inspection certificate that expired two years ago, surely doesn’t meet any of them. So the debate may continue forever.
Don’t Be Dumb has an entertaining 3-minute video on the topic.
One thing is certain: there exist controls commonly referred to as placebo buttons. In the case of pedestrian walk signs, many were installed before modern electronic traffic controls and signals, and thus they no longer work; some cities simply have not removed them. Many office thermostats are placebo controls placed to quell the cubicle masses without actually allowing them to adjust the temperature. These devices give users a sense of control, which seems to ease the anxiety people feel in situations where they must wait for more than a few seconds.
David McRaney wrote a great piece on placebo buttons in his book, You Are Now Less Dumb, for those who may be interested. However, the question lies in the ethics of such devices. Is it okay to trick people by using placebo buttons? The pharmaceutical industry has long known about the placebo effect, and it is one that plagues many of their randomized controlled trials. If the pill makes you feel less depressed or in less pain, does it matter? It does if the person selling it to you knows it is snake oil and is profiting from it. Placebo buttons don’t necessarily fall into the same category. First, they don’t actually do any real harm. And for the most part, they are not profit-driven devices (though one could make a case that the placebo thermostat is a money-saving concept).
Incidentally: in the case of crosswalks, as Malcolm Gladwell outlines in his book What the Dog Saw, people are more likely to be hit by motor vehicles in crosswalks with pedestrian lights. This is because crosswalk lights allow us to offload cognition, and in so doing, we don’t look left and right as often. Thus, when a driver runs a red light (which happens more often than most of us would care to believe), we may step into the vehicle’s path, focused only on the light telling us it is safe to walk. It is our mistaken belief that this light indicates a safe situation when it does not. The light is merely automated to change when the signal for cross-traffic changes; it does nothing to prevent cross-traffic from creating an unsafe situation. In this light, placebo buttons may be a small ethical dilemma by comparison.
The question with placebo buttons and other methods of misleading users is how comfortable you are as a designer employing such tactics. My undergraduate degree was a double major, with philosophy as one of the two. My studies in philosophy emphasized post-modern ethics, and ethical theories rarely delineate a clear right from wrong in hazy areas such as these. That isn’t to say there are not areas of UX where ethics can clearly delineate between right and wrong. Stephen P. Anderson outlines some clear cases where ethics, UX, and design don’t mix so well: designing an ad or website for a tobacco company, employing unscrupulous advertising techniques on websites, or designing for a company that clearly ignores the user experience in favor of more profitable areas of its websites or systems.
One such story Anderson detailed:
“during the redesign of their website, huge amounts of time and money were dedicated to the sales portions of the site. We designed a more engaging homepage, improved how services were explained and promoted on the site, and made the enrollment forms much easier to use. But, when we got to the support side of the site—answering FAQs about itemized vs. non-itemized receipts, clearly explaining the process associated with filing a claim, and so on—the client was content to leave these pages untouched. By leaving these pages difficult to use, they were able to reduce the number of claims processed. They knew that customers frustrated by the process would often give up hope of ever seeing their money reimbursed.”
Anderson also notes ethical issues in persuasive design raised by BJ Fogg in his book, Persuasive Technology: Using Computers to Change What We Think and Do. Fogg devotes an entire section of the book, as well as parts of individual chapters, to issues resulting from the use of persuasive techniques in design.
Things are not always as they seem – especially not in UX, where we often blend psychology, marketing techniques, and design to provide our users with an optimal experience. We shouldn’t expect to reach any clear answers through this column or any other on the web. However, initiating a dialogue and considering such issues creates, at the very least, an awareness of design ethics we may not have held before.
Towards an Ethics of Persuasion, Stephen P. Anderson