Bottlenecks: Aligning UX Design with User Psychology

About this ebook

Learn the psychological constrictions of attention, perception, memory, disposition, motivation, and social influence that determine whether customers will be receptive to your digital innovations.

Bottlenecks: Aligning UX Design with User Psychology fills a need for entrepreneurs, designers, and marketing professionals in the application of foundational psychology to user-experience design. The first generation of books on the topic focused on web pages and cognitive psychology. This book covers apps, social media, in-car infotainment, and multiplayer video games, and it explores the crucial roles played by behaviorism, development, personality, and social psychology. Author David Evans is an experimental psychology Ph.D. and senior manager of consumer research at Microsoft who recounts high-stakes case studies in which behavioral theory aligned digital designs with the bottlenecks in human nature to the benefit of users and businesses alike.

Innovators in design and students of psychology will learn:

  • The psychological processes determining users’ perception of, engagement with, and recommendation of digital innovations
  • Examples of interfaces before and after simple psychological alignments that vastly enhanced their effectiveness
  • Strategies for marketing and product development in an age of social media and behavioral targeting
  • Hypotheses for research that both academics and enterprises can perform to better meet users’ needs

Who This Book Is For

Designers and entrepreneurs will use this book to give their innovations an edge on increasingly competitive platforms such as apps, bots, in-car apps, and augmented reality content. Usability researchers and market researchers will leverage it to enhance their consulting and reporting. Students and lecturers in psychology departments will want it to help land employment in the private sector.

Praise

“Bottlenecks is a tight and eminently actionable read for business leaders in startups and enterprises alike. Evans gives us a rich sense of key psychological processes and even richer examples of them in action.” - Nir Eyal, Author of Hooked: How to Build Habit-Forming Products

 “Clients frequently ask our UX researchers and designers for deeper truths about why certain designs work and others fail. Bottlenecks offers practical explanations and evidence based on the idea that human cognition did not begin with the digital age.” - John Dirks, UX Director and Partner, Blink UX

 “Bottlenecks brings together two very important aspects of user experience design: understanding users and translating this into business impact. A must-read for anyone who wants to learn both.” - Josh Lamar, Sr. UX Lead, Microsoft Outlook


Language: English
Publisher: Apress
Release date: Feb 11, 2017
ISBN: 9781484225806

    Book preview

    Bottlenecks - David C. Evans

    Part I: The Bottlenecks of Attention

    ©  David C. Evans 2017

    David C. Evans, Bottlenecks, https://doi.org/10.1007/978-1-4842-2580-6_1

    1. Foveal Acuity

    David C. Evans, Kenmore, Washington, USA

    You worked hard to digitize your ideas and send them our way in the form of light and sound. But they must be encoded into neural impulses for your app to work and your business model to succeed. From a business perspective, a meme that never enters a brain is the tree that falls in the proverbial empty forest—it doesn’t exist.

    To cross the organic boundary into our nervous systems, the first requirement is that your meme must fall in our line of sight. That statement may be painfully obvious to you, but it is an even bigger pain point for us. Your meme will fail if the light from it only reaches our peripheral vision, where we can neither read nor see color.

    Several billion-dollar examples instantly leap to mind: navigating while driving, video calling, and seeing display ads on web sites. To illustrate, most if not all of the point-of-interest icons designed for this dashboard navigational display are difficult or impossible to see or appreciate while driving (Figure 1-1). It was someone’s job (maybe yours) to make these memes, like the hamburger icon or the Korean, Italian, and American flags for different ethnic restaurants, but they may never actually enter a driver’s brain in the moment when they might be of use.

    Figure 1-1. Dashboard display.

    Consider closely the back wall of our eyeballs and you’ll understand what you’re up against. Our retinae have a lot of neurons to catch the light, but the cone-shaped neurons that let us see color and the detailed edges of characters are concentrated in one tiny area, called the fovea, which is opposite our pupils (Figure 1-2). i Our fovea are amazingly sensitive when we point them your way: we can detect whether or not you’re holding a quarter from 90 yards off. But if we’re looking just to the right or left of you, our acuity plummets to only 30% of what it is when we look straight at you. A little further off, our acuity drops to 10%. ii

    Figure 1-2. Diagram of the fovea. iii

    What that means for your meme is that we cannot read it if we’re looking a mere six degrees to the left or right. At the typical distance to a screen, we’re blind to symbolic information a mere five characters away from where we are focused. Stare at the period at the end of this sentence and count the number of words you can make out past it. Not too many. Perhaps we could read your meme in our peripheral vision if you increased your font size. But you’d have to increase it 400% if we’re looking six degrees off, 3000% if we’re looking 20 degrees off, and 9000% if we’re looking 30 degrees off. Good luck doing that on a smartphone screen or a dashboard display.
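    If it helps to make these figures concrete, here is a minimal sketch in TypeScript that treats them as size multipliers (reading “increase it 400%” as roughly a 4x multiplier) and linearly interpolates between them. The data points come straight from this paragraph; the interpolation, the constant, and the function name are illustrative only and no substitute for a real acuity chart such as Anstis (1974).

    // Magnification factors quoted above: a meme readable at the fovea (1x)
    // needs roughly 4x the size at 6 degrees of eccentricity, 30x at 20
    // degrees, and 90x at 30 degrees. Values between these points are
    // interpolated linearly here purely as an illustration.
    const MAGNIFICATION_BY_ECCENTRICITY: Array<[number, number]> = [
      [0, 1], // [degrees off the fovea, required size multiplier]
      [6, 4],
      [20, 30],
      [30, 90],
    ];

    function requiredMagnification(eccentricityDeg: number): number {
      if (eccentricityDeg <= 0) return 1;
      const pts = MAGNIFICATION_BY_ECCENTRICITY;
      for (let i = 1; i < pts.length; i++) {
        const [x0, y0] = pts[i - 1];
        const [x1, y1] = pts[i];
        if (eccentricityDeg <= x1) {
          // Linear interpolation between the two nearest quoted points.
          return y0 + ((eccentricityDeg - x0) / (x1 - x0)) * (y1 - y0);
        }
      }
      return pts[pts.length - 1][1]; // beyond 30 degrees: at least 90x
    }

    // Example: a 12px label glanced at from 15 degrees off the fovea would
    // need to be roughly 12 * 20.7 ≈ 250px tall to stand a chance of being read.
    console.log(requiredMagnification(15).toFixed(1)); // ≈ 20.7

    Even without the exact curve, the lesson is in the numbers: anything more than a few degrees from where we are fixating has to be absurdly large to be read, which is rarely an option on a dashboard or a phone.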

    Key Point

    Your meme will fail if the light from it only reaches our peripheral vision, where we can neither read nor see color.

    But even useful memes are impeded by our anatomy if they are not designed in harmony with it. For example, many human-factors experts consider video calling to be among the slowest-spreading memes in the history of tech inventions (Figure 1-3). We’ve had video calling technology longer than we’ve had microwave ovens or camcorders. And yet, while the penetration of those other inventions is all but complete in developed markets, as is that of other forms of communication like texting, still only a fraction of us use video calling daily or monthly, if at all.

    Figure 1-3. Technology adoption rates. iv

    Why would this be? Many factors could be to blame, but the offset of 20 degrees or more between the webcam and the eyes of the person we are talking with might be one. Because no one has yet invented a webcam, native or peripheral, that sits right behind the monitor rather than on top of or beside it, we never get to look directly into the gaze of our friends and family members while we talk (Figure 1-4). Nor do they look into ours, because to do so, we’d both have to look directly into the cameras, at which point we could no longer make out each other’s faces. The problem persists even on smaller devices because our foveal acuity is so narrow (Figure 1-5).

    Thus we’ve had face-to-face calling for over 85 years, but never quite eye-to-eye calling. The best that the top video-calling applications have ever given us is a view of our friends’ eyes looking away from us as we look at them (although interestingly, their ads never show it this way). This mismatch with human nature has kept the rate of adoption very slow, far slower than that of voice-to-voice calling before it.

    Figure 1-4. Sensitivity to gaze direction from Chen (2002). Original caption: The contour curves mark how far away in degrees of visual angle the looker could look above, below, to the left, and to the right of the camera without losing eye contact. The three curves indicate where eye contact was maintained more than 10%, 50%, and 90% of the time. The percentiles are the average of sixteen observers. The camera is at the graph origin. v

    Figure 1-5. (a) The typical experience with video calls in which, when we look at others’ faces, we see them looking away. (b) Looking into the camera directs our gaze appropriately, but now we can no longer make out each other’s faces. This artificial view is shown in most advertising for video-calling services.

    Or consider ads on web sites. By 2016, U.S. companies alone were spending over $30 billion on internet display ads, vi over half of which didn’t display on a screen long enough to be viewable (half of their pixels were rendered for less than a second). vii And of those that did, a vast majority were hitting our peripheral retina, where we can’t make them out as we read the content elsewhere on the page. We need only point our fovea five characters to the left or right, doing whatever it is we came to do, and your ads are lost on us. Let the sheer waste and the lack of ROI caused by this incredibly powerful psychological bottleneck sink in. Not to mention the inaccuracy of reach statistics, which only measure whether the ad was queried from a server, ignoring whether it landed on a fovea or even a peripheral retina. This is not the path to memetic fitness, let alone marketing success and profitability.

    And then there are our cars, the next big battleground for tech dominance. Whoever prevails in this context must find design solutions to accommodate the fact that we must point our fovea forward out of the windshield while we drive. This is because our fovea are also required for depth perception, something our peripheral vision is incapable of, and thus many states mandate we keep them on the stop-and-go traffic ahead. The problem is that you need to rethink the traditional monitor. Positioned currently where the radio traditionally sits, or on a smartphone held in a driver’s hand, it is so far away from our foveal vision that we expose ourselves to real danger in trying to view any of your memes shown there (Figure 1-6). In a 2014 report, the U.S. National Transportation Safety Board listed visual and manual distractions on their most wanted list of critical changes to reduce accidents and save lives (in addition to cognitive distractions, which we’ll return to later). They specifically referenced the growing development of integrated technologies in vehicles and its potential to contribute to a disturbing growth in the number of accidents due to distracted operators. viii

    Figure 1-6. Dashboard display challenge. Most design elements on dashboard displays will be unreadable by drivers focusing on the traffic ahead unless they are projected onto the windshield. ix

    Certainly, if self-driving cars proliferate, then the entire interior of cars can be redesigned and turned into a media room or a productivity center (which will spark its own platform for competing memes). The speed with which this technology proliferates will depend on the incidence of fatal crashes, like the 2016 crash of a Tesla operating on Autopilot, and on whether drivers will legally be allowed to let their attention wander. x

    But for those of us who continue to drive, whether out of economics or the pace of change, our retinal anatomy would predict that our windshields will become our monitors, where your digital memes will be displayed. Clearly, they must not compete with the things we need to see outside the car, but instead augment them. The first memes to warrant display on windshields will make road hazards like falling rocks and crossing deer more visible, forewarn us of tight curves, and signal slowdowns in traffic. After the first wave of safety memes is established, next will come convenience memes: those that enhance street signs and outline upcoming freeway exits. Finally, after considerable testing (providing jobs for memetic engineers), the third wave of commercial memes will arise on our windshields: digital billboards pointing the way to gas stations and restaurants, specially adapted for the windshield. Commercial logos have been displayed on GPS units and heads-up displays already for some time; maybe on windshields they will finally hit our fovea and enter our brains.

    Is there a limit to the content that can be projected on a windshield? Of course there is. But scarcity is the foundation of value, so this only drives up the price for a placement. How can legislators help? Not by banning windshield displays altogether, but by establishing a data-driven regulatory agency, in the United States perhaps under the National Transportation Safety Board (NTSB) or the National Highway Traffic Safety Administration (NHTSA), which approves memes like the FDA approves drugs. Broadly, windshield memes must be shown empirically to…

    Increase drivers’ safety, not imperil it

    Improve our driving, not impair it

    Augment reality, not distract from it

    As you see, our psychological constrictions matter, starting with the nerves in our eyeballs. But this is only the beginning, since our attentional capacity is just as laser-thin.

    Notes

    i.

    Jonas, J. B., Schneider, U., Naumann, G.O.H. (1992). Count and density of human retinal photoreceptors. Graefe’s Archive for Clinical and Experimental Ophthalmology, 230 (6), 505–510.

    ii.

    Anstis, S. M. (1974). A chart demonstrating variations in acuity with retinal position. Vision Research, 14, 589–592. Retrieved from http://anstislab.ucsd.edu/2012/11/20/peripheral-acuity/.

    iii.

    Used with permission from Cellfield Canada Inc.

    iv.

    Adapted from Felton, N. (2008, February 10). Consumption spreads faster today (graphic). The New York Times. Retrieved from http://www.nytimes.com/imagepages/2008/02/10/opinion/10op.graphic.ready.html.

    See also Rainie, L. & Zickuhr, K. (2010). Video calling and video chat. Pew Research Center’s Internet & American Life Project. Retrieved from http://www.pewinternet.org/2010/10/13/video-calling-and-video-chat/.

    See also Poltrock, S. (2012, October 19). Why has workplace video conferencing been so slow to catch on? Presentation for Psychster Labs.

    v.

    Chen, M. (2002, April 20). Leveraging the asymmetric sensitivity of eye contact for videoconferencing. Presentation given at CHI, Minneapolis, MN. Retrieved from http://dl.acm.org/citation.cfm?id=503386&CFID=864400319&CFTOKEN=50601798. Copyright ACM Inc. Used with permission.

    vi.

    eMarketer. (2016, January 11). US digital display ad spending to surpass search ad spending in 2016. Retrieved from http://www.emarketer.com/Article/US-Digital-Display-Ad-Spending-Surpass-Search-Ad-Spending-2016/1013442.

    vii.

    Loechner, T. (2013, Oct 30). 54% of digital ads aren’t viewable, and even ‘viewability’ is in a black box. MediaPost. Retrieved from http://www.mediapost.com/publications/article/212460/54-of-digital-ads-arent-viewable-and-even-view.html.

    viii.

    National Transportation Safety Board. (2014). NTSB most wanted list: Critical changes needed to reduce transportation accidents and save lives. Retrieved from http://www.ntsb.gov/safety/mwl/Documents/2014/03_MWL_EliminateDistraction.pdf.

    ix.

    Evans, G. A. (Photographer). (2016, November).

    x.

    Stoll, J. D. (2016, July 22). Tesla autopilot crash shouldn’t slow self-driving development, regulator says. The Wall Street Journal. Retrieved from http://www.wsj.com/articles/tesla-autopilot-crash-shouldnt-slow-self-driving-development-regulator-says-1469200956.

    ©  David C. Evans 2017

    David C. Evans, Bottlenecks, https://doi.org/10.1007/978-1-4842-2580-6_2

    2. Task Orientation

    David C. Evans, Kenmore, Washington, USA

    To place your meme precisely where we will be directing our fovea, and thus our attention, the first idea that likely occurs to you is to learn our goals, and you would not be wrong. “Goals serve a directive function,” psychologists Locke and Latham wrote in 2002, summarizing 35 years of research on the topic; “[t]hey direct attention and effort toward goal-relevant activities and away from goal-irrelevant activities.” i But we want you to take a step back even from that. The first thing you must do is learn whether or not we even have a goal. If we do, then any meme that interrupts us will be ignored as a frustrating distraction. If we do not, we will be receptive to unsolicited and unexpected memes, although we will resist any effortful concentration required to engage with you.

    Key Point

    To meet our goals as users of your meme, the first thing you must do is learn whether or not we even have a goal.

    In 1991, psychologists were given a research instrument as important to them as the telescope was to Galileo: functional magnetic resonance imaging. fMRI allowed them to see small changes in cerebral blood flow as we think or feel different things. For the first time, neuroscientists could examine the brain while we were awake and alive rather than anaesthetized or dead. So they began asking us to perform specific tasks to learn which areas of the brain were responsible for executing them.

    By 2014, Daniel Levitin, a neuroscientist on the front lines of the imaging revolution, summarized one of the biggest neuroscientific discoveries of the last twenty years. ii This was the existence of two fundamental patterns of activity in the cortex: the task-positive network and the task-negative network (Figure 2-1). According to Levitin…

    the task-positive network is

    the state you’re in when you’re intensely focused on a task such as doing your taxes, writing a report, or navigating through an unfamiliar city. This stay-on-task mode is… [one] dominant mode of attention, and it is responsible for so many high-level things we do that researchers have named it ‘the central executive.’

    the task-negative network is

    the mind-wandering mode…a special brain network that supports a more fluid and nonlinear mode of thinking…[in which] thoughts seem to move seamlessly from one to another, there’s a merging of ideas, visual images, and sounds of past, present, and future.

    Figure 2-1. Brain regions in the (a) task-positive and (b) task-negative networks. iii

    Essentially, fMRI studies showed first that no matter what problem they asked us to solve or task they asked us to perform, a similar network of pathways was activated. This was the task-positive network, including the pre-frontal, medial, and occipital lobes and other loci involved in processing language, symbols, and mental models.

    The task-negative network was discovered more or less by accident, according to neuroscientist Matthew Lieberman. iv During most neuroimaging sessions, the researchers didn’t slide us in and out of the fMRI between tasks, but instead, they left us in there with the machine running. During the downtime, when we were listening to the hum of the electromagnets and solving no specific problem, the task-negative network appeared in our brains, and it too was remarkably consistent in its pattern. Our brains were defaulting to a state in which the medial areas deep in our cortex were at work as well as the hippocampus. When asked what we were thinking about, we typically replied we were daydreaming, remembering, and pondering over social situations.

    Neuroscientists realized that, just like our hearts and our kidneys, the brain has no resting state. We are either solving an advanced symbolic problem as only our species can, or daydreaming to consolidate memories, see new connections, and try to understand the people around us.

    As an inventor and promoter of digital media and experiences, your first objective is to understand whether we come to you in a task-positive or a task-negative mindset.

    This is precisely what Allrecipes.com, the largest community cooking web site in the world, did (Figure 2-2). They fielded a survey via a popup window with one question:

    If you had to choose just one, which statement below best describes your visit today?

    I had a specific goal. I knew what I was looking for or hoping to accomplish.

    I did not have a specific goal. I was exploring and just looking for interesting information rather than something specific.

    Figure 2-2. Task orientation in visits to Allrecipes.com. v

    Allrecipes.com found that about 73% of us were task-positive and 27% were task-negative (Figure 2-2). As task-positive users, we were trying to make progress toward a known outcome, and so we wanted to be efficient and productive. We might be trying to figure out, for example, how to make a pomegranate reduction sauce for a lamb roast. As task-negative users, we were passing time, open to ideas, and just being a part of the community. We might be, say, getting new ideas in advance of a holiday, triggering memories of dishes we once loved but forgot, or looking to see what others were talking about.

    But the more important lesson that Allrecipes learned was that task-positive and task-negative visitors used totally different navigation features of their site (Figure 2-3). Task-positive users among us tended to use a search field. This was attentionally the most economical way for us to get our reward. By contrast, task-negative users tended to browse the body of the site, clicking the pictures, links, and graphics in the hope that we would discover what we didn’t know enough to search for.

    The realization that most of its users were task-positive helped Allrecipes.com make much smarter decisions about which features to invest in. They knew they had to have a very smart search algorithm and excellent search returns. However, they could not ignore the task-negative minority who were clicking links and going down rabbit holes, because these visitors were spending more time on the site and loading many more pages, and hence more ads.

    Figure 2-3. Navigational preferences for Allrecipes.com visitors with a goal or no goal.

    Allrecipes also understood that the mix of task-positive and task-negative users on its site was a function of its acquisition strategy. At the time this survey was conducted, Allrecipes engaged primarily in search marketing, and so the majority of us were coming in from Google. This predisposed us to being task-positive. We knew what we were looking for and we had begun looking for it well before arriving on Allrecipes’ domain. But later, Allrecipes put out a print magazine at the checkout aisle in grocery stores. This helped bring in more task-negative users who weren’t looking for anything in particular, but just wanted to browse cooking content. Balancing the task orientation of its users against its acquisition marketing was an important way for Allrecipes to both meet our needs and overtake Foodnetwork.com as the largest global cooking community.

    Key Point

    When we are task-positive, we resist intrusions and find them distracting. When we are task-negative, we resist effortful tasks and welcome intrusions.

    In essence, when we are in a task-positive mode, we point our fovea where we consciously choose to, in the service of trying to reach our goals, and we actively avoid everything else, which we treat as an unwanted distraction. If a majority of your users are oriented this way, as with many productivity platforms (Microsoft Office, Slack), you should avoid ad support as a business model and instead steer toward a subscription model. If you display any ads at all, only do so in search results, because interstitial ads will perform poorly and be an annoyance. Your essential design strategy is to help us find what we’re looking for. Discard “if you build it, we will come” thinking in favor of “learn where we’re looking and be waiting.”

    But the strategy reverses when we are in a task-negative mode. Now we welcome the attentional intrusions, including advertisements, and we resist expending the effort and concentration needed to solve things. If you’re building a mind-wandering app or web site like Flipboard, Reddit, Funny or Die, or most news aggregators, fill the real estate with thumbnails, headlines, and links, and load this content endlessly as we scroll down so there is effectively no end to the page. (This is the place for “if you build it, we will come” thinking.) Use machine learning to suggest content that is similar in category to what we’ve previously clicked (e.g., sports, election coverage). Adopt a social-marketing strategy where you post this same content elsewhere on the web. If you do use search marketing, optimize for search terms that are more general (e.g., “holiday recipes”) rather than specific (e.g., “balsamic reduction sauce”). In your own interface or on your own domain, do not in any way puzzle us with difficult navigation, advanced search forms, lengthy registrations, or anything that would require task-positive mental effort. On these sites you should follow the advice of usability guru Steve Krug and “don’t make us think.”
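    For the endless-page pattern in particular, here is a minimal sketch in TypeScript of one common way to implement it: a sentinel element at the bottom of the feed is watched with the browser’s IntersectionObserver, and another page of content is fetched before we ever reach a hard stop. The /api/feed endpoint, the item fields, and the renderCard helper are hypothetical stand-ins for whatever your platform actually provides.

    // Endless scrolling: keep appending content so a task-negative visitor
    // never reaches the end of the page. The endpoint and item fields are
    // placeholders; IntersectionObserver and fetch are standard browser APIs.
    interface FeedItem {
      title: string;
      imageUrl: string;
      href: string;
    }

    const feed = document.querySelector<HTMLElement>("#feed")!;
    const sentinel = document.querySelector<HTMLElement>("#feed-end")!;
    let nextPage = 1;
    let loading = false;

    function renderCard(item: FeedItem): HTMLElement {
      const card = document.createElement("a");
      card.href = item.href;
      card.innerHTML = `<img src="${item.imageUrl}" alt=""><h3>${item.title}</h3>`;
      return card;
    }

    async function loadMore(): Promise<void> {
      if (loading) return;
      loading = true;
      const res = await fetch(`/api/feed?page=${nextPage}`); // hypothetical paged endpoint
      const items: FeedItem[] = await res.json();
      items.forEach((item) => feed.appendChild(renderCard(item)));
      nextPage += 1;
      loading = false;
    }

    // Start loading the next page while the sentinel is still a full viewport
    // away, so the scroll never visibly bottoms out.
    new IntersectionObserver(
      (entries) => entries.forEach((e) => { if (e.isIntersecting) void loadMore(); }),
      { rootMargin: "100% 0px" }
    ).observe(sentinel);

    The choice to prefetch early, rather than waiting for the true bottom of the page, is what keeps the mind-wandering visitor from ever hitting the task-positive moment of deciding whether to ask for a next page.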
