Luke Wroblewski has some good advice on getting users to do what you want without driving them nuts. In any successful app, users develop muscle memory around common tasks. This means it’s difficult to push users to complete secondary tasks — like adding a user photo or connecting with friends — that will improve their experience but are rarely the reason you open the app.
His solution is to style these tasks like other items in a user’s feed and insert them seamlessly. He didn’t give numbers, but said the “use of the Find Friends feature shot up dramatically” after implementing this change.
It’s a simple observation, but a powerful one for me. Changing someone’s behavior is difficult and there’s no reason to take on this task when it’s unnecessary. I would just be careful to respect the user’s intentions and avoid polluting a stream with too much unexpected content.
I’m glad someone catalogued these. Tricking your audience is never the right approach.
Normally when you think of “bad design”, you think of laziness or mistakes. These are known as design anti-patterns. Dark Patterns are different - they are not mistakes, they are carefully crafted with a solid understanding of human psychology, and they do not have the user’s interests in mind.
This is good advice and I’m going to liberally quote Mr. Hockenberry.
Even folks who have access to the latest and greatest technology prefer the iPad over more complex devices. Initial statistics also show that the iPad skews toward an older demographic.
Of course, 300,000 geeks like you and me don’t fall into that category. We’re the first ones standing in line at the Apple store, and the first ones to use all this cool new software. And we know all the things that apps “used to do”. And we want all sorts of other bells and whistles. And we’re wrong.
Craig goes on to explain a good example of simplicity.
In the case of Instapaper, the feature doesn’t even show up in the UI until you go into the Settings app and add your account information. My mom will never see it.
Remember that talk I gave at SVA last month? Me too! They’ve now posted the video online, so you can watch me talking if you don’t like reading or pictures. If you prefer those, you can check out the post and see the slides.
SVA has also posted all of the videos online. I recommend you watch the other two; they were great.
I shouldn’t be surprised by how much people still use bookmarks, but I am. I also think their decision to put cut/copy/paste “directly to the right of the Firefox button” might not be necessary. People who use the menu items probably use them in every application, so moving them to a custom place is unnecessary. [via Ben Fry]
As discussed, I gave a talk last night about The Tablet. Thanks very much to Liz for organizing the event. When I began planning my talk, I found it was easier to write it out as a blog post so I could find the narrative. I did just that.
What follows is the blog post and some of the imagery attached. At the very end, I included my slides from the talk, which have some additional imagery. (If you’re more of a visual person, skip to my slides on Slideshare.)
Up until now, we’ve done most of our reading using a single layer of data. This works well when you have abundant space, but breaks down when you try to work on a smaller device. As we pack more and more data into smaller spaces, we need to consider how this data is presented. The answer that provides the best compromise of accessibility and usability is to layer our data using modal dialogs. And now, a story.
During college, I oftentimes bought my textbooks used, primarily because they were cheaper. The cheapest books were thoroughly marked up, with notes in the margins and important phrases highlighted. Sometimes, it was great to already have the important bits noted for me, but most of the time I just wanted to read. My wish was to be able to remove that layer of data only temporarily. Little did I know that 10 years later, that would be possible.
When data is presented in a single layer, ancillary data exists separately from the primary text. When you’re studying, you write down the important parts in a notepad and create study tools with flash cards. When you’re watching a film, the credits appear at the end of the film and the deleted scenes are accessed in another menu entirely. When you’re reading a novel, contextual content is often in appendices and definitions are, well, in your dictionary.
The iPhone and other smartphones have improved the situation. Instead of having to make a note during a movie or keep your finger on the current page while flipping to the appendix, you can pull out your phone (or laptop or whatever) and look up the information. Of course, that is still two information sources in the same plane.
It’s also gotten a lot easier on the web. Sites like the New York Times offer the ability to double-click a word and get the definition. Flickr lets you annotate photos with text. The Definitive Guide to Django provides an online version of the book that lets you comment on each paragraph. As the ultimate example, Google lets you overlay a variety of information on top of a map.
Since we still do most of our reading on paper, we’ve been stuck with just a single layer of data. The best we’ve got are footnotes and notes in the margin. The Kindle has provided a suitable replacement for paper. Having thousands of books in your hand is wonderful, but the Kindle only provides two layers of data: text and definitions. And without a touch screen, getting a definition is tough. You have to navigate to the word with the thumb nubbin before the definition pops up. It takes you out of the flow of reading far more than clicking a mouse or tapping the word would.
How This Would Work
Bringing the multitouch interface to such a large surface area will let us attach far more layers of data to a document. Let’s come back to our studying example. My wife is taking an Anatomy & Physiology class and has a test coming up. She has out her textbook, flash cards, a notebook, and a reading guide. While going over her notes, she might want to refer back to the source text for additional information. She has to find the right page, then the right paragraph, and then look for context.
Now, let’s say Apple or some inventive fellow builds an iPad application meant for studying. You can download your textbook and, as you read, tap on a paragraph to open up a modal dialog for taking notes. Or maybe you just select some text and copy it into your notebook. Next time you go through the book, you’ll see a little speech bubble, like the Django book example, alongside the text. When it’s time to study, tap the ‘View notes’ button and you’ll see a version of just the text you’ve highlighted and your notes. Back to the book. If something you’re reading is confusing, selecting text could let you define or Google it. If that doesn’t pan out, you can add a public note. Your friends in the class would be notified and could answer your question. The answer would show up in context. Taking the social element further, being able to view your study partner’s notes overlaid on your page could answer questions you didn’t know you had.
Below are some design explorations I put together to illustrate the example.
These types of interaction could carry over to a work of literature. If you’re in a book club, the reading questions could become visible at the relevant points. You could make notes in the margin that the rest of your book club could see. There’s also an opportunity for authors to provide something like a director’s commentary. When you find out SPOILER ALERT that Bella chooses Edward over Jacob, Stephenie Meyer could add a note explaining that it took her months to make this decision, and it was only after talking to a bellhop at the Paris in Las Vegas that she settled on it. Or, possibly more interesting to some of you, how Malcolm Gladwell did his research about Hush Puppies.
As a final example, adding a touch interface to films means one of my personal dreams can be fulfilled. When you want to know more about an actor, pause the movie and tap his face. Using iPhoto’s facial recognition software and a partnership with IMDb, the actor’s name and his last five films would pop up in a modal dialog. There would also be a link to any relevant extras that include that actor.
What We’ve Learned
There is nothing wrong with the old way of studying or reading, so long as you have all of your information around you. The challenge of bringing a comparable or better experience to the iPad is finding a way to improve portability without sacrificing accessibility.
Using modal dialogs and layering data lets you display ancillary content without taking away from the source text. The source text is why people come to your content, so it should keep the focus. Presenting the rest of the data should feel seamless and natural. Finding that balance will lead to an engaging (and hopefully unforgettable) experience.
Update (4/2/10): Video!
SVA has kindly posted a video of my talk online. Please enjoy, if you prefer moving pictures:
I’d heard about this for a while, but it’s quite a treat. You should absolutely read the whole thing, but this was my favorite excerpt.
Exploratory learning can be engineered into repeatable systems: Moments of delight and skill acquisition are highly reproducible. All you need is a well designed and balanced system of interconnected feedback loops that helps guide and encourage the formation of new skills.
I just purchased Heavy Rain for my PS3. It’s been getting fantastic reviews, but I’ll talk more about the gameplay after I make some more progress.
In order to play the game, you need to copy a multi-gigabyte installation package to your PS3’s hard drive. It takes about 15 minutes. Quantic Dream, the game’s developer, obviously knew this, so they included a sheet of origami paper with the game and, during the install, displayed instructions for folding the origami figure featured prominently in the game.
When people talk about user experience, this is the kind of thing I like to bring up. Quantic Dream knew you couldn’t do anything else and you’d likely be annoyed at the wait, so they gave you a fun task to while away the time. Before I even played the game, I loved it.
One of the tenets of the GTD philosophy is pushing anything you can’t do in a couple of minutes into a queue. It’s one of the few pieces of advice that’s really stuck with me. As a result, I’ve grown accustomed to estimating an article’s length based on a quick glance.
Unfortunately, this is difficult to do without some scrolling. The easiest method is to look at the scroll bar to gauge the length, but that’s not reliable: there’s no way to know the number of comments or how long a sidebar might be. To the right, you can see an illustration of what I mean. The top screenshot shows the site as it is now. Below it are versions that hide the comments and then the sidebar items.
I have no answer, just a wish.
This is the point in the post where you would normally expect to find a wonderfully crafted solution, but I don’t have one. I’ve considered JS bookmarklets that would automagically show an indicator next to the scroll bar that marks the end of the article, but there would be a ton of edge cases. It also doesn’t handle page counts.
The best case scenario is the universal adoption of the HTML5 article element. And while I’m dreaming big, I’d love some metadata as parameters: character count, number of pages, etc. (and a coal-burning pizza oven, please).
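To make the daydream concrete, here’s a rough sketch of how the bookmarklet idea might work if pages wrapped their body text in an `<article>` element. Everything here is hypothetical: the `data-word-count` attribute, the helper names, and the ~250-words-per-minute reading speed are all my own assumptions, not anything a real site exposes today.

```javascript
// Hypothetical sketch: assumes the page marks its body text with an
// <article> element and (dreaming big) a word count, e.g.
// <article data-word-count="1200">.

// Where along the scroll bar the article ends, as a fraction of the
// page's total height, clamped to [0, 1].
function markerFraction(articleBottom, pageHeight) {
  return Math.min(1, Math.max(0, articleBottom / pageHeight));
}

// Rough reading time from a word count, assuming ~250 words per minute.
function minutesToRead(wordCount, wordsPerMinute = 250) {
  return Math.ceil(wordCount / wordsPerMinute);
}

// In a browser, the bookmarklet body might then do something like:
//   const art = document.querySelector('article');
//   const bottom = art.getBoundingClientRect().bottom + window.scrollY;
//   const frac = markerFraction(bottom, document.documentElement.scrollHeight);
//   ...then absolutely position a small tick at (frac * 100) + '%' down the
//   right edge, and show minutesToRead(+art.dataset.wordCount) beside it.
```

Even this sketch runs into the edge cases mentioned above: pages with multiple articles, paginated posts, and every site that doesn’t use the element at all.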
In the end, I’ve outsourced my standardization of content and read articles via Instapaper on my iPhone. It standardizes article layout, making it easy to determine length (the scroll bar works perfectly here). If they started displaying word counts on the website, I’d squee. Thusly, I’m really looking forward to Marco’s iPad version of Instapaper. It’s going to solve all of my problems.
One of these classes, held in a room called the Elvis Presley, happened to contain many bleary-eyed C.L.T. members who had just come from the graveyard shift, where Zappos’s basic assumption of human beings’ essential good nature sometimes rubs up against some uglier truths…[T]here’s the caller who, Zuniga said, will just “breathe kind of hard.” It turns out that there are limits to Zappos’s customer service: callers who truly overstep boundaries are sent to a top-secret eternal hold loop known internally as the Abyss.
That’s funny, but I wonder how they determined that this is the best way to deal with difficult customers. I’m sure they’ve tried a variety of methods, but it seems that admitting this infinite loop exists goes against their experience-based brand. If I were to call them and be put on hold by someone, I’d certainly fear they’ve given up hope for me.
Are there other companies with similar policies? Once you’ve been Abyssed, are you blacklisted?1 I’m sure the law of diminishing returns applies here, but I’d like to think that I’d be given another chance down the road, even if I’d abused my privilege.
Recently, I watched an episode of House M.D. in which a patient had Munchausen syndrome, a condition where someone fakes an illness to receive attention. Once diagnosed, patients have a difficult time receiving care. ↩