Today, Khoi and Scott are launching a fantastic new iPad app and I want you to know about it! Mixel allows you to create and remix collages of images using your fingers and your iPad. It’s easy to do and a lot of fun, even if you just want to consume others’ work. I’ll let Khoi explain it in his own words.
But we chose collage for a very important reason: it makes art easy. Photos, the component pieces of every collage, are among the most social and viral content on the Web, and allowing people to combine them into new, highly specific expressions of who they are and what they’re interested in is powerful. Collage also has a wonderfully accessible quality; few people are comfortable with a brush or a drawing implement, but almost everyone is comfortable cutting up images and recombining them in new, expressive, surprising or hilarious ways. We all used to do this as kids.
I feel like I’ve harped on this before, but I think David succinctly describes what print companies are doing wrong with their digital magazines.
I think it’s a stretch to say “printed magazines work better in every way.” Print doesn’t work better for breaking news, interactive visualizations, collaborative real-time problem solving, blogging whimsical ideas that aren’t print-worthy (a la The New Yorker), community, e-commerce, location-aware behavior or searching archives.
Boom. Tablets have the potential to bring the best of the web and print together, but no one is going to buy a more expensive, harder-to-read version of a print magazine. But if you bring in social and interactive elements, it becomes something really interesting.
Craig Mod’s list of things an eReader should not do. Some of them are painfully obvious, and this one is my favorite.
Does a PDF export of your content provide a basically identical reading experience as your ereader? Would a PDF actually provide a better reading experience (zoomable, searchable, real text)? Then your ereader’s plagued by confused incompetence.
The New Yorker is now on the iPad (no subscriber discount) and that’s cool, but this video is waaaay cooler. In addition to starring Jason Schwartzman, it is directed by Roman Coppola. They’re friends.
Ideo considers three aspects of the digital book’s future. It’s a nicely done video and it takes some features I looked at in my Multi-Layered iPad talk to a whole new world (a dazzling place, for you and me). Below is the video and some interesting points about each of the interfaces.
It provides a view into what your friends and other important people think about the content you’re reading.
I like how easy it is to scan through a book and find passages that are important. This would be a great research tool.
Showing debates that started from a particular passage is interesting. It’d be nice if you could actually respond to the discussion inside the app. Bringing in outside content is a great start, but no one has built a reader that lets you discuss the media inside the app.
It’s designed to show you what your colleagues, or social network, are reading.
In theory, you can have a discussion about something inside the app, but you’re not actually reading the app here (I think), so it’s still not a direct link between consuming and creating.
A more fully-featured (and vaporware) television companion concept that would be lovely. In theory, you are watching the show on your television and watching a simulcast on your iPad with modal dialogs over the video (e.g., to buy an album, see some trivia, etc.). I dreamt of this feature 7 years ago and I’d love to see it come to fruition.
I’m all for making the television experience more engaging, but it seems like this app is pretty standard with “polls, trivia and other content timed to be relevant to what is transpiring in the ‘Generation’ storylines.” I won’t judge it until I try it, but I’m much more interested in how it’s displaying information.
The application works by using the iPad’s microphone to pick up on audio cues embedded within the TV episode itself, allowing the application to sync up with what the viewer is watching.
Pretty smart. I hope the content strategists are as smart as the technologists.
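For the curious, here’s a minimal sketch of how that kind of audio-cue syncing might work. It assumes the cue is a single near-ultrasonic marker tone at a made-up frequency; real products use far more robust audio fingerprinting, but the basic idea of looking for known energy in the microphone’s spectrum is the same.

```python
import numpy as np

SAMPLE_RATE = 44100   # samples per second
MARKER_FREQ = 19000   # hypothetical near-ultrasonic cue frequency, in Hz

def contains_cue(buffer, threshold=10.0):
    """Return True if the marker frequency stands well above the average spectrum."""
    spectrum = np.abs(np.fft.rfft(buffer))
    freqs = np.fft.rfftfreq(len(buffer), d=1.0 / SAMPLE_RATE)
    marker_idx = np.argmin(np.abs(freqs - MARKER_FREQ))
    return spectrum[marker_idx] > threshold * spectrum.mean()

# Simulate one second of quiet room noise with the cue mixed in
rng = np.random.default_rng(0)
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
signal = 0.01 * rng.standard_normal(SAMPLE_RATE) + 0.5 * np.sin(2 * np.pi * MARKER_FREQ * t)
print(contains_cue(signal))
```

A cue near 19 kHz is inaudible to most adults but still within the range a typical microphone picks up, which is presumably why this trick works in a noisy living room.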
It’s no surprise I like this — it’s essentially Canabalt on skis with tricks. Also, I’m mostly posting this because the iOS app was having trouble tweeting my current high score of 8,341,511. Considering the world leader is over 3 billion, I’ve got a ways to go.
Update: Okay, it’s 5 minutes later and I just got 27.6 million. Maybe I’ll stop posting scores for a bit.
Remember that talk I gave at SVA last month? Me too! They’ve now posted the video online, so you can watch me talking if you don’t like reading or pictures. If you prefer those, you can check out the post and see the slides.
SVA has also posted all of the videos online. I recommend you watch the other two; they were great.
On the one hand, it’s awesome that the iPad is providing a platform for a million interfaces. The ability to have dozens of simple games in your backpack is liberating for the same reason the iPod and Kindle were liberating. Packing for vacation with your kids is going to be so easy. On the other hand, do we need a $650 Scrabble machine? [via Cameron Moll]
Like many of my fellow geeks, I broke down and ordered an iPad. I ended up going with the wifi version and not just because I can get it faster. Primarily, it comes down to when I use 3G data.
I have wifi at work and wifi at home. Most hotels I visit have wifi. Most conferences I attend have it, even if it’s oftentimes slow. The only time I regularly use 3G is in a car and when I’m about town. I’m pretty confident I won’t be pulling out my iPad on the street corner while I’m looking for a nearby dry cleaner. I might miss having it in the car (big maps and GPS are tempting), but I came to a seemingly obvious conclusion: I have an iPhone.
My guess is that 70-80% of the first round of iPad buyers also have an iPhone and 90+% have a 3G-capable smartphone. In many cases, the iPhone is capable of what we need and is oftentimes better suited. The smaller form factor is more discreet and makes it much easier to fit in your pocket. People don’t pull out their laptops at dinner when they’re looking for a delicious ice cream spot, but they definitely pull out their Blackberry or iPhone.
As discussed, I gave a talk last night about The Tablet. Thanks very much to Liz for organizing the event. When I began planning my talk, I found it was easier to write it out as a blog post so I could find the narrative. I did just that.
What follows is the blog post and some of the imagery attached. At the very end, I included my slides from the talk, which have some additional imagery. (If you’re more of a visual person, skip to my slides on Slideshare.)
Up until now, we’ve done most of our reading using a single layer of data. This works well when you have abundant space, but breaks down when you try to work on a smaller device. As we pack more and more data into smaller spaces, we need to consider how this data is presented. The answer that provides the best compromise of accessibility and usability is to layer our data using modal dialogs. And now, a story.
During college, I oftentimes bought my textbooks used, primarily because they were cheaper. The cheapest books were thoroughly marked up, with notes in the margins and important phrases highlighted. Sometimes, it was great to already have the important bits noted for me, but most of the time I just wanted to read. My wish was to be able to temporarily remove that layer of data. Little did I know that 10 years later, that would be possible.
When data is presented in a single layer, ancillary data exists separately from the primary text. When you’re studying, you write down the important parts in a notepad and create study tools with flash cards. When you’re watching a film, the credits appear at the end of the film and the deleted scenes are accessed in another menu entirely. When you’re reading a novel, contextual content is often in appendices and definitions are, well, in your dictionary.
The iPhone and other smartphones have improved the situation. Instead of having to make a note during a movie or keep your finger on the current page while flipping to the appendix, you can pull out your phone (or laptop or whatever) and look up the information. Of course, that is still two information sources in the same plane.
It’s also gotten a lot easier on the web. Sites like the New York Times offer the ability to double-click a word and get the definition. Flickr lets you annotate photos with text. The Definitive Guide to Django provides an online version of the book that lets you comment on each paragraph. As the ultimate example, Google lets you overlay a variety of information on top of a map.
Since we still do most of our reading on paper, we’ve been stuck with just a single layer of data. The best we’ve got are footnotes and notes in the margin. The introduction of the Kindle has given us a suitable digital replacement for paper. Having 1,000s of books in your hand is wonderful, but the Kindle only provides two layers of data: text and definitions. And without a touch screen, trying to get a definition is tough. You have to navigate to the word with the thumb nubbin before the definition pops up. It takes you out of the flow of reading a lot more than clicking a mouse or tapping the word.
How This Would Work
Bringing the multitouch interface to such a large surface area will allow us to bring far more layers of data to a document. Let’s come back to our studying example. My wife is taking an Anatomy & Physiology class and has a test coming up. She has out her textbook, flash cards, a notebook and a reading guide. While going over her notes, she might want to refer back to the source text for some additional information. She has to find the right page, then find the right paragraph and look for context.
Now, let’s say Apple or some inventive fellow builds an iPad application meant for studying. You can download your textbook and, as you read, tap on a paragraph to open up a modal dialog for taking notes. Or maybe you just select some text and copy the text into your notebook. Next time you go through the book, you’ll see a little speech bubble, like the Django book example, alongside the text. When it’s time to study, tap the ‘View notes’ button and you’ll see a version of just the text you’ve highlighted and your notes. Back to the book. If something you’re reading is confusing, selecting text could let you define or Google it. If that doesn’t pan out, you can add a public note. Your friends in the class would be notified and could answer your question. The answer will show up in context. Taking the social element further, being able to view your study partner’s notes overlaid on your page could answer questions you didn’t know you had.
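The data model behind an app like that could be surprisingly simple: notes and highlights anchored to paragraphs, with a flag for whether a note is shared. This is a hypothetical sketch, with all names invented for illustration, not a real API.

```python
# Hypothetical data model for the studying app described above:
# highlights and notes anchored to paragraphs, optionally shared.
from dataclasses import dataclass, field

@dataclass
class Note:
    author: str
    text: str
    public: bool = False   # public notes would notify friends in the class

@dataclass
class Paragraph:
    body: str
    highlights: list = field(default_factory=list)  # highlighted substrings
    notes: list = field(default_factory=list)

    def add_note(self, author, text, public=False):
        self.notes.append(Note(author, text, public))

def study_view(paragraphs):
    """'View notes' mode: just the highlighted text and notes, no full text."""
    view = []
    for p in paragraphs:
        if p.highlights or p.notes:
            view.append({
                "highlights": list(p.highlights),
                "notes": [(n.author, n.text) for n in p.notes],
            })
    return view

book = [Paragraph("The heart has four chambers."),
        Paragraph("Blood enters through the atria.")]
book[0].highlights.append("four chambers")
book[0].add_note("me", "likely on the test")
print(study_view(book))
```

Because the notes live in a layer separate from the text, the app can show or hide them at will, which is exactly the “remove that layer temporarily” wish from the used-textbook story.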
Below are some design explorations I put together to illustrate the example.
These types of interaction could be carried over to a work of literature. If you’re in a book club, the reading questions could be visible at the relevant points. You could make notes in the margin that the rest of your book club could see. There’s also an opportunity for authors to provide something like a director’s commentary. When you find out SPOILER ALERT that Bella chooses Edward over Jacob, Stephenie Meyer could put in a note explaining that it took her months to decide, and that it was only after talking to a bellhop at the Paris in Las Vegas that she made her choice. Or, possibly more interesting to some of you, how Malcolm Gladwell did his research about Hush Puppies.
As a final example, adding a touch interface to films means one of my personal dreams can be fulfilled. When you want to know more about an actor, pause the movie and tap his face. Using iPhoto’s facial recognition software and a partnership with IMDb, the actor’s name and his last 5 films will pop up in a modal dialog. There will also be a link to any relevant extras that include that actor.
What We’ve Learned
There is nothing wrong with the old way of studying or reading, so long as you have all of your information around you. The challenge of bringing a comparable or better experience to the iPad is finding a way to improve portability without sacrificing accessibility.
Using modal dialogs and layering data lets you display ancillary content without taking away from the source text. Since the source text is why people are coming to your content, it should keep the focus. Providing the rest of the data should feel seamless and natural. Finding that balance will lead to an engaging (and hopefully unforgettable) experience.
Update (4/2/10): Video!
SVA has kindly posted a video of my talk online. Please enjoy, if you prefer moving pictures:
Typical consumer family:
1 iMac for everyone, 1 MacBook for travel, 1 iPad for the couch and 2 iPhones
Professional user family:
1 Mac Pro for home/office, 1 MacBook Pro for the road, 1 iPad for the couch and/or clients and 2 iPhones
And when they're better, an AppleTV in every room.
My friend Adam, and others I'm sure, are concerned about this being a peripheral. You clearly need a computer to sync the device, but I don't think there's reason for concern. If it's not true already, most homes will have one computer that acts as home base. You'll keep your music, movies, contacts, etc. stored there and everything else will sync with it. If you already have a laptop and a desktop, the laptop is as much a peripheral as an iPad would be.
Sure, the iPad is more geared towards consumption, but so what? As I've read a few times lately, the vast majority of time that most users spend on a computer is consumption, so it only makes sense to optimize towards that. If you don't want a consumption device, there are plenty of other options at your disposal.