User Experience Interface Design

Welcome to the UX/UI Design Fundamentals Prep Course for the Bitmaker Labs User Experience & Interface Design 12 week immersive.

This course is intended to develop you as a strong Product Designer with front-end development knowledge. You’ll examine the fundamentals of user-centred design, including user research, information architecture, interaction design, UI design, and usability testing. By the end of the course, you will be able to design and build basic prototypes using Adobe Illustrator, HTML, CSS, basic JavaScript and jQuery.

Structure of this Prep Course

There are five main parts to this prep course. They are not intended to replace the actual curriculum, but are here to get you started. It’s essential that you complete each part in order:

Part 1: An Introduction

Part 2: Tools of the Trade

Part 3: Researching Design

Part 4: Learning the Basics of Code

Part 5: Design and Code Your First Homepage

Part 1: An Introduction

Once the course gets started, we’ll have a lot to cover, so it’s important to get a good understanding of the fundamentals to start strong.

While you complete this prep course, you might have questions and may need a hand — that’s awesome. Here are three resources to help you out.

Your most important support resource: each other

Get to know your classmates, and work through designing things together. That’s what the entire course is about. Connect through Twitter, Facebook, or Google+. Open communication is key.

Feel free to reach out to @bitmakerlabs or @tailoredUX on Twitter with the hashtag #DesignPrep for questions about this prep course. We’ll encourage you to use your resources first, but we’re happy to point you in the right direction and help out along the way.

If you have questions about enrolling in the course, or find yourself having a hard time with the prep course, don’t hesitate to reach out to Bitmaker Labs or tailoredUX with the subject “Design Prep”.

Part 2: Tools of the Trade

This section of the tutorial is broken down by the tools you’ll need to know, so you’re well equipped to jump into design & development from Day 1.

Part 3: Researching Design

So what is UX?

The definition of User Experience Design (UX, or Experience Design) can be really controversial. UX is a way of thinking; it can involve (but doesn’t necessarily have to) “Design Thinking” (which is a major topic in and of itself), and can be approached in a lot of different ways.

During the course, we’re going to focus on practical applications of “UX” to web and mobile apps, and cut through the fuzziness of the meta conversation around UX. We define UX through the lens of its application in designing software, or digital products.

To prepare, we will research and reflect on our own definitions of User Experience, as well as of the surrounding disciplines that are practiced in their own right:

Section 1: User Experience Design

Section 2: User Research

Section 3: Information Architecture

Section 4: Interaction Design

Section 5: User Interface Design

Section 6: Usability

Your first task is to research and write about these elements of UX Design:

Research each of these topics, and structure your findings in any way you’d like.

But do write them down on a piece of paper, or post-its, or a notebook.

Then write a series of blog posts on Medium with your findings. Take pictures of your research and include them in your posts.

To help you with structuring your topics, and get you thinking about questioning design, here are the topics of your blog posts. The ultimate point is to take away the ambiguity so that we’ll all be talking about the same things when we kick off the course, and can share tangible conclusions with each other.

Blog post #1: User Experience Design

What is UX? Who defines UX? Whose job is “UX Design”? Are there “UX Designer” jobs out there? What are the differences between them? Use images in your post talking about this, and write about them.

See Gube, Jacob (2010). What is User Experience Design? Overview, Tools and Resources. Published in Smashing Magazine.

Blog post #2: User Research

What is User Research? Are there any jobs dedicated to solely User Research? What does a User Researcher do for product design?

See Kuniavsky, Mike (2003). Crafting a User Research Plan. Published in Adaptive Path.

Blog post #3: Information Architecture

What is Information Architecture? What’s a job description for an Information Architect? What does an Information Architect do day to day? What does Information Architecture look like? Use images in your post talking about this, and write about them.

See Wodtke, Christina (2009). The Elements of Social Architecture. Published in A List Apart.

Blog post #4: Interaction Design

What is an interaction Designer? What’s the job description of an Interaction Designer? What does an Interaction Designer do day to day? What does interaction design look like? Use images in your post talking about this, and write about them.

See Maier, Andrew (2009). Complete Beginner’s Guide to Interaction Design. Published in UX Booth.

Blog post #5: UI Design

What is UI Design? What does a UI Designer do day to day? What does UI Design look like? Find examples of 5 beautiful home pages for design agencies. Find examples of 5 beautiful home pages of designers. Use images in your post to reflect on why you liked them.

See User Interface Design Basics (2011). Published on Usability.gov.

Blog post #6: Usability

What is Usability? Whose job is it to manage the usability of an application? What’s the difference between “User Research” and “Usability”? What are the different types of usability testing you can carry out? Hint: start with Heuristic Testing.

See Nielsen, Jakob (2012). Usability 101: Introduction to Usability. Published by Nielsen Norman Group.

Note: your research skills, and the way you approach thinking about design, are the #1 thing that will help you be successful in this course. These blog posts have one real purpose: to challenge your research skills and how you communicate your work.

We’d like you to publish each blog post on Medium and share with your peers, and with us over Twitter.

Part 4: HTML & CSS

We don’t like to think of things in terms of hours, or tasks. We’d just like you to take the time to really dive into, and have fun with a web environment that teaches HTML & CSS.

Our UX/UI Fundamentals course will cover the same concepts that are introduced in this Codecademy course. The magic will be in applying them to real, hands-on projects with a design focus.

To prepare, Codecademy has a great interactive tutorial to run you through the basics:

Completing this Codecademy prep course is mandatory to start our program. Send section completion screenshots to

Part 5: Design & Code Your First Homepage

After you’ve completed the HTML/CSS course on Codecademy, you should be familiar with the basics of how to build a profile page for yourself.

1. Create a narrative

Your profile page should answer the following questions:

i. Who are you? Where are you from? 
ii. What do you currently do? What do you want to do? 
iii. Let’s see you! Do you have an avatar / picture?

iv. Have you done any type of design related work? 
v. Have you built anything lately?
vi. Have you completed a non-design related project?

Answer these questions for that work:

i. Show us a picture / image of it 
ii. Give it a title 
iii. Describe what it is. 
iv. What did you learn from this project?
v. How do you think it relates to UX Design?

2. Sketch a rough layout on paper

Sketch how you’d lay this content out on a single-page site. Use your blog post #5 as inspiration and look for examples on Dribbble of designers’ home pages. Mine, for example, is pretty old right now (I can’t wait to design a new one with you when the course starts!), but it still tells a pretty good story.

3. Design directly in code

Once you’re done sketching, use your layout sketches and the inspiration from your research in blog post #5 to develop a basic one-page layout with your answers from task 1. Pay attention to details like font size for headers, sub-headers, and body copy, and font weight for each of these as well.
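If you’re not sure where to begin, here is one possible skeleton for the page (a rough sketch only; the file names, headings, and copy are placeholders for your own content):

```html
<!-- A rough starting skeleton for the one-page profile site.
     File names and copy are placeholders; replace them with your own. -->
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Your Name</title>
  <style>
    body { font-family: Helvetica, Arial, sans-serif; color: #333; }
    h1   { font-size: 36px; font-weight: 700; }  /* header */
    h2   { font-size: 24px; font-weight: 400; }  /* sub-header */
    p    { font-size: 16px; line-height: 1.5; }  /* body copy */
    .work img { max-width: 100%; }
  </style>
</head>
<body>
  <header>
    <img src="avatar.jpg" alt="Your avatar">
    <h1>Your Name</h1>
    <p>Who you are, where you’re from, what you do, and what you want to do.</p>
  </header>
  <section class="work">
    <h2>Project Title</h2>
    <img src="project.jpg" alt="A picture of the project">
    <p>What it is, what you learned, and how it relates to UX design.</p>
  </section>
</body>
</html>
```

Notice how the style block deliberately varies font size and weight between headers, sub-headers, and body copy, which is exactly what this task asks you to pay attention to.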

We don’t want you to dwell too much on the prettiness of the colours, so when deciding on a colour scheme, use this for inspiration (and actual colours!)

4. Publish your site on Dropbox

Follow the instructions on this page to host your site on Dropbox. It’s the fastest way to get your page out there. Share it on Twitter with the hashtag #helloworldUX

That’s it!

This completes your requirements for the User Experience & Interface Design Prep Course.

If you have any questions about the full course, feel free to

If you’d like to speak to the designers, you can find us on Twitter here: @tailoredUX, @mustefaJ, and @sidramatik. We’re happy to chat, and look forward to seeing you in July.


The Year of Interaction Design Tools

A brief survey on the state of interaction + prototyping

We’re six months into 2014, and designers have the widest selection of interaction and prototyping tools to date. The community is blossoming, and bridging the gap from mockup to working prototype is now easier than ever.
Inspired by the upcoming Pixate, I’ve compiled a brief breakdown of the state of interaction design tools in 2014.

Webflow
$16+ a month | Web app
Webflow is a web app that lets users design websites in the browser that are fully coded and live. Webflow’s friendly WYSIWYG editor gives designers full control over the output, including its appearance on mobile devices.

Webflow is continuously adding new features — including web fonts, video support, continued W3C compliance, interactive states — and it even includes hosting, too.

Marvel
Free | Web app
Marvel is a free web app for prototyping web and mobile designs. With Marvel, work is done entirely online, but it syncs with your Dropbox, allowing you to easily pull in mockups from your private or company files. Marvel also supports PSD files, so you don’t have to convert files before creating prototypes.

Macaw
$99 | Mac & Windows
Macaw is a desktop WYSIWYG design tool that outputs working, live code. Its particular strength lies in creating responsive designs: Macaw’s built-in breakpoint editor makes it easy to create pixel-perfect designs at any screen size.
Although no coding knowledge is required, basic experience with HTML and CSS certainly enhances the final product.

InVision
Free | Web app
InVision is not strictly an interaction design tool. InVision is used for prototyping and productivity. For prototyping, InVision allows you to add interactive events to static images to demo user interaction. At the same time, InVision’s project management features allow both stakeholders and designers to interact with prototypes before development begins.

Flinto
$20 a month | Web app

Flinto allows you to create interactive prototypes that are usable both on the web and on a mobile device. Flinto lets designers take static images and create prototypes that can be scrolled, rotated, and interacted with however you want.
One particularly enticing feature: Flinto prototypes can be used natively on Android and iOS devices — so, you can interact with your latest Instacart-for-Cats prototype on your own iPhone or Android device.

Quartz Composer

Quartz Composer is a developer tool created by Apple for use in motion graphics. Although not its original purpose, Quartz Composer has been adopted by the interaction design community as a code-free option for showing animation and state change in designs.
Quartz Composer can be daunting at first glance —with a significant learning curve. Luckily, support is growing. Facebook and IDEO have recently released new Quartz Composer libraries to make the program more accessible to first time users.

Origami
Free | Mac only
Origami is a Quartz Composer library created by the Facebook Design team to help prototype interaction on mobile devices. It allows designers to easily replicate common animation and interactions that occur on mobile devices — think animating image transitions or button presses.
Like Quartz Composer, Origami is particularly useful for prototyping, but it doesn’t output useable code. Its claim to fame: Origami was instrumental in designing Facebook’s most recent mobile app, Paper.

Avocado
Free | Mac only
Like Origami, Avocado aims to improve Quartz Composer by adding a library that mimics common interaction on mobile devices.
While Origami focuses on interaction and animation, Avocado focuses on replicating common UI elements for iOS — for example, a working iOS keyboard — allowing users to prototype ideas for iOS without using any code.

Framer.js
Free | JavaScript framework
Framer.js is a JavaScript framework for prototyping event-triggered animation. Framer boasts many features — one being a built-in generator that processes layer groups from your own PSD files and outputs each group into layers that form the basis of your project.
Framer is a JavaScript framework — so unlike other options here, it does require an understanding of HTML, CSS, and JavaScript. However, it’s not bound to any specific program. You can host it and interact with it anywhere and everywhere.
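The core idea (an event comes in, a timed animation plays out) can be sketched in a few lines of plain JavaScript. To be clear, this is not Framer’s API, just an illustration of the pattern such frameworks package up, with made-up names:

```javascript
// Illustrative sketch of event-triggered animation (not Framer's API;
// all names here are invented for the example).
function easeOutQuad(t) { return t * (2 - t); } // maps 0..1 to 0..1, decelerating

// Value of a property animating from `from` to `to` over `duration` ms,
// sampled at `elapsed` ms after the triggering event.
function tween(from, to, duration, elapsed) {
  const t = Math.min(elapsed / duration, 1); // clamp so we stop at the end
  return from + (to - from) * easeOutQuad(t);
}

// An "event" (say, a tap) just kicks off a timed loop that calls tween()
// each frame and writes the result onto a layer object.
function animateOnEvent(layer, prop, to, duration, now = Date.now) {
  const from = layer[prop];
  const start = now();
  const frame = () => {
    layer[prop] = tween(from, to, duration, now() - start);
    if (layer[prop] !== to) setTimeout(frame, 16); // ~60 fps; rAF in a browser
  };
  frame();
}
```

Framer layers on top of this kind of loop: PSD import, gesture events, spring curves, and device preview.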


Mobile Nav could be costing you half your user engagement

So, you have a mobile app where there are more pages or sections than can fit on a mobile screen at once. Your first thought might be to create a tabbed design, with a row of tabs along the top or buttons along the bottom.

But wait… that extra row of tabs or buttons wastes a lot of valuable real estate on a small mobile display, so let’s not do that. Instead, let’s move the options into a side menu, or side drawer, as our Android team keep reminding me it’s called.

If your mobile app has multiple views then I would be surprised if this subject has not been vigorously debated by your team:

  1. Persist all the navigation options on screen at all times so your users have clear visibility of all the main app views and single-click access to them.
  2. Or, free up screen real estate by moving the options into a side menu.

The side menu has become fashionable on Android but not yet taken off on iPhone… and so another factor that enters the discussion is the desire for your Android and iOS apps to have similar navigation and user journeys, or not.

I thought it worth sharing our experience.

Usability vs. clean design

[Image: side drawer navigation vs. top navigation designs]

When we first started zeebox, we began with a tabbed design with a row of buttons along the top. Our reasoning was simple: “Out of sight, out of mind” – i.e. if you don’t see the set of available options then you’re not going to know that they exist.

For example, in the above images, if you don’t see a GUIDE option then how would you know to go to the menu to look for it? And if you discovered it once, would you remember that each time you returned to zeebox? Even if you did, it would be two clicks to get to the guide rather than one.

On the other hand, the design looks so much cleaner without that ugly row of buttons along the top, moving the navigation into a side menu really lets the content breathe.

The idea of moving app navigation off-screen into a side menu – also known as a hamburger menu or navigation drawer – seems to have originated about 18 months ago.

Around Sep 2013 Facebook switched to a new side menu design – or at least my Facebook app did as part of its A/B test. Surely if Facebook was doing this, then it had to be good… right?

The friendly and wonderfully helpful Google Play team suggested that navigation drawers (which I’m referring to here as a side menu or side navigation) were the new way to go and would be the preferred design pattern for our Android app.

And so about six months ago, we decided to take the plunge and switch to a side navigation. To make sure people knew about all the available views and options we had the app start up by showing the navigation drawer open:

[Image: the zeebox app starting up with the navigation drawer open]

When we launched the new version the user reviews were great (“Love the new design, 5 stars”).

But when we looked at our analytics, it was a disaster! Engagement time was halved!

It looked like “out of sight, out of mind” really was the case.

The surprising truth

After realizing the gravity of the situation, we rushed out an update two weeks later that restored the top navigation as the default. We also provided a settings option that let users turn the side navigation back on, so as not to upset those who had loved the new side menu.

Anyway, cut to six months later.

The zeebox app has really come a long way in those months. We have a new My TV page that’s a constantly updated personal feed of news, TV shows starting for you, and posts about shows and from people you’re following. My TV is the page our users most want to see. But we wanted another go at letting the content breathe, so it was time to try that side navigation experiment again…

However, having learned our lesson, this time we’re going to do it the smart way: we’re going to A/B test it.

Our favorite A/B test tools and methodology

Lately we’ve become big fans of A/B testing, both with users coming into the office to test interactive Flinto prototypes and with A/B configuration built into our production app.

We start by creating mock-ups of various design concepts. We use Flinto to turn those into interactive prototypes that look just like the real thing, but which are built and iterated in minutes or hours.

You can see a couple of our Flinto prototypes here and here – click the links on an iPhone for best effect. Tap and hold anywhere on the page to see where all the interactive hotspots are, then tap on a hotspot as if you’re using the real app.

We advertise for users who love TV, anything from The Voice to Downton Abbey. Twice a week we have four to five people come by our office lounge, where they try out the various concepts and prototypes we’ve prepared.

Sometimes you’re able to get a clear design winner from that small user sample. But in other cases, like for side navigation, you really need to sample thousands of people using the real app. And for that you need A/B testing.

For mobile app A/B testing, we use Swrve – it’s the most sophisticated A/B testing product I’ve found. It provides not just useful features like Goal Seeking (the A/B test server can automatically switch all users to the best option once a clear winner has been determined) but Swrve also lets you serve customised experiences for every individual user.

For example, if you’re a Comcast subscriber and we notice that you haven’t yet discovered that zeebox can act as a remote control for your Xfinity box, then Swrve could instruct the zeebox app to pop up a message telling you about that, with the timing of the message adjusted on a daily basis for optimal effect.

Anyway, we decided to go with a 15/85 test, where 15 percent of users were served the side navigation and 85 percent got the top navigation.
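For illustration, a 15/85 split like this is usually done with deterministic bucketing, so each user always sees the same variant across sessions. The sketch below shows the general idea only (the function names are made up; in practice Swrve assigns and tracks variants for you):

```javascript
// Illustrative sketch of a deterministic 15/85 variant split.
// Names here are invented for the example; Swrve handles assignment
// and reporting in a real deployment.

// FNV-1a 32-bit hash: maps a user ID string to a stable number.
function fnv1a(str) {
  let hash = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // keep it in 32-bit range
  }
  return hash;
}

// The same user ID always lands in the same bucket, so the variant is
// stable across sessions without storing any per-user state.
function variantFor(userId, sideNavPercent = 15) {
  return fnv1a(userId) % 100 < sideNavPercent ? "side-nav" : "top-nav";
}
```

Run `variantFor` over a large set of user IDs and roughly 15 percent land in "side-nav", while any given user keeps their variant forever.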

We launched the new version, waited 48 hours, checked the stats… would things be different this time…

The answer was a resounding No.

[Image: A/B test results comparing engagement for side and top navigation]

Weekly frequency was down. Daily frequency was down. Time spent in app was down. The side nav was as big a disaster as the first time round.

The good news is that, thanks to A/B testing, this time we could simply flip a switch on the server and set 100 percent of users to top navigation.

Given that the discussions about top or side nav are very likely a topic of debate in your company, I thought it worth sharing our experience.

Around the time we ran our A/B test above and concluded that the side nav was not for us, Facebook launched its new navigation on iPhone: a persistent bottom navigation bar on every page.

However, on Android it’s, well… variable. Looking at Facebook on my Android phone (below left) vs. on my colleague’s phone (below right), Facebook must be A/B testing this right now as some people are seeing top navigation and others side navigation. I’d love to know what Facebook are seeing in terms of engagement with each…

[Image: Facebook for Android showing top navigation on one phone and side navigation on another]

When does side navigation ever come into play?

My take-away from all of this is that if most of the user experience takes place in a single view, and it’s only things like user settings and options that need to be accessed in separate screens, then keeping the main UI nice and clean by burying those in a side menu is the way to go.

On the other hand, if your app has multiple views that users will engage with somewhat equally, then side navigation could be costing you a great deal of your potential user engagement, and interaction with those parts of the app accessed via the side menu.


Hamburger vs Menu: The Final AB Test

In case you missed this post about an AB test on mobile menu icons, make sure you check out the comments. There are some very interesting insights about A/B testing and its shortcomings.

The post went a tiny bit viral, and suddenly it wasn’t just my mother reading this blog.

Three things I learned:

1. This icon (the “hamburger”) has lots of names: hamburger, sandwich, and even hotdog?! What it actually is, is a list icon. We’ve just co-opted it to mean a navigation menu.

2. When something gets noticed, some people get a little mean (source)

3. One commenter said I was the Dunning–Kruger effect in action. This phenomenon is when you try to sound clever but are actually a dumbass.

Thanks for the vote of no-confidence.

In this hyper-connected world full of rockstar developers and super-smart designers, I’m humbled on a minute-by-minute basis. I might need to start attaching positive affirmation stickers on my laptop.

The Final Hamburger A/B Test

I do enjoy A/B testing, but conclude what you want from the results. I’m not an expert, nor am I advising anything; I’m just sharing what happened on a single website.

Using a commercial A/B testing service can get very expensive very quickly, and well beyond the budgets of small-time web designers and developers. So, hopefully, these posts are helpful for some of you.


Variation 1

Bordered list icon (hamburger).


Variation 2

Bordered word menu.



240,000 unique mobile visitors were served the A/B test.

Variation    Unique Visitors    Unique Clicks
Hamburger    120,543            1,211
Menu         121,152            1,455


The test was large enough to achieve statistical significance.

The MENU button was clicked by 20% more unique visitors than the HAMBURGER button.
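As a rough sanity check on that claim, a standard two-proportion z-test can be re-derived from the published totals (a sketch only; the original test used the author’s own in-house code):

```javascript
// Two-proportion z-test on the published A/B totals (an illustrative
// re-derivation, not the author's original analysis code).
function twoProportionZ(clicksA, visitorsA, clicksB, visitorsB) {
  const pA = clicksA / visitorsA;   // click-through rate, variant A
  const pB = clicksB / visitorsB;   // click-through rate, variant B
  const pooled = (clicksA + clicksB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  return (pB - pA) / se;
}

const z = twoProportionZ(1211, 120543, 1455, 121152); // hamburger vs. menu
const lift = (1455 / 121152) / (1211 / 120543) - 1;

console.log(z.toFixed(2));                  // comfortably above 1.96, so p < 0.05
console.log((lift * 100).toFixed(1) + "%"); // close to the 20% quoted above
```

The z-score lands well past the 1.96 cutoff for 95% confidence, which is consistent with the claim that the test reached statistical significance.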

Where things get interesting is when we break down the data a little:

           Unique Visitors    Hamburger Clicks    Menu Clicks
iOS        148,097            906 (0.61%)         1,143 (0.77%)
Android    87,245             216 (0.25%)         237 (0.27%)

There is very little difference in the Android user preference, but their lack of engagement is disturbing.


Hamburger icons may appear to be ubiquitous, but they are not the only option.

There is an issue that is much more important:

Android users are almost 3x less likely to click a navigation button than iOS users.



  • These are the results from one website (see more about demographics here).
  • The test was done using some in-house code, so I cannot guarantee perfect execution across all devices. I don’t have the time or capacity to rigorously test code like the big commercial A/B testing services such as Optimizely do. Bear in mind that running this test with Optimizely would have cost $859 (I kid you not).
  • I can’t measure intent with this test. I’m measuring clicks on a webpage. Maybe the user thought “menu” meant a list of food to order. Maybe they wondered what the hamburger icon was and tapped it. Who knows. A/B testing cannot tell you this.


How Do Users Really Hold Mobile Devices?

As UX professionals, we all pay a lot of attention to users’ needs. When designing for mobile devices, we’re aware that there are some additional things that we must consider—such as how the context in which users employ their devices changes their interactions or usage patterns. [1] However, some time ago, I noticed a gap in our understanding: How do people actually carry and hold their mobile devices? These devices are not like computers that sit on people’s tables or desks. Instead, people can use mobile devices when they’re standing, walking, riding a bus, or doing just about anything. Users have to hold a device in a way that lets them view its screen, while providing input.

In the past year or so, there have been many discussions about how users hold their mobile devices—most notably Josh Clark’s. [2] But I suspect that some of what we’ve been reading may not be on track. First, we see a lot of assumptions—for example, that all people hold mobile devices with one hand because they’re the right size for that—well, at least the iPhone is. [3] Many of these discussions have assumed that people are all the same and do not adapt to different situations, which is not my experience in any area involving real people—much less with the unexpected ways in which people use mobile devices.

For years, I’ve been referring to my own research and observations on mobile device use, which indicate that people grasp their mobile phones in many ways—not always one handed. But some of my data was getting very old, so it included a lot of information about hardware input methods on keyboard- and keypad-driven devices that accommodate the limited reach of fingers or thumbs. These old mobile phones differ greatly from the touchscreen devices that many are now using.

Modern Mobile Phones Are Different

“I’ve carried out a fresh study of the way people naturally hold and interact with their mobile devices.”

Everything changes with touchscreens. On today’s smartphones, almost the entire front surface is a screen. Users need to be able to see the whole screen, and may also need to touch any part of it to provide input. Since my old data was mostly from observations of users in the lab—using keyboard-centric devices in too many cases—I needed to do some new research on current devices. My data needed to be more unimpeachable, both in terms of its scale and the testing environment of my research.

So, I’ve carried out a fresh study of the way people naturally hold and interact with their mobile devices. For two months, ending on January 8, 2013, I—and a few other researchers—made 1,333 observations of people using mobile devices on the street, in airports, at bus stops, in cafes, on trains and busses—wherever we might see them. Of these people, 780 were touching the screen to scroll or to type, tap, or use other gestures to enter data. The rest were just listening to, looking at, or talking on their mobile devices.

What My Data Does Not Tell You

Before I get too far, I want to emphasize what the data from this study is not. I did not record what individuals were doing because that would have been too intrusive. Similarly, there is no demographic data about the users, and I did not try to identify their devices.

Most important, there is no count of the total number of people that we encountered. Please do not take the total number of our observations and surmise that n% of people are typing on their phone at any one moment. While we can assume that a huge percentage of all people have a mobile device, many of these devices were not visible and people weren’t interacting with them during our observations, so we could not capture this data.

Since we made our observations in public, we encountered very few tablets, so these are not part of the data set. The largest device that we captured in the data set was the Samsung Galaxy Note 2.

What We Do Know

“In over 40% of our observations, a user was interacting with a mobile phone without inputting any data via key or screen.”

In over 40% of our observations, a user was interacting with a mobile phone without inputting any data via key or screen. Figure 1 provides a visual breakdown of the data from our observations.

Figure 1—Summary of how people hold and interact with mobile phones

Summary of how people hold and interact with mobile phones

To see the complete data set:

Voice calls occupied 22% of the users, while 18.9% were engaged in passive activities—most listening to audio and some watching a video. We considered interactions to be voice calls only if users were holding their phone to their ear, so we undoubtedly counted some calls as apparent passive use.

The users who we observed touching their phone’s screens or buttons held their phones in three basic ways:

  • one handed—49%
  • cradled—36%
  • two handed—15%

While most of the people that we observed touching their screen used one hand, very large numbers also used other methods. Even the least-used case, two-handed use, is large enough that you should consider it during design.

In the following sections, I’ll describe and show a diagram of each of these methods of holding a mobile phone, along with providing some more detailed data and general observations about why I believe people hold a mobile phone in a particular way.

In Figures 2–4, the diagrams that appear on the mobile phones’ screens are approximate reach charts, in which the colors indicate what areas a user can reach with the finger or thumb to interact with the screen. Green indicates the area a user can reach easily; yellow, an area that requires a stretch; and red, an area that requires users to shift the way in which they’re holding a device. Of course, these areas are only approximate and vary for different individuals, as well as according to the specific way in which a user is holding a phone and the phone’s size.

Users Switch How They Hold a Mobile Phone

“The way in which users hold their phone is not a static state. Users change the way they’re holding their phone very often—sometimes every few seconds.”

Before I get to the details, I want to point out one more limitation of the data-gathering method that we used. The way in which users hold their phone is not a static state. Users change the way they’re holding their phone very often—sometimes every few seconds. Users’ changing the way they held their phone seemed to relate to their switching tasks. While I couldn’t always tell exactly what users were doing when they shifted the way they were holding their phone, I sometimes could look over their shoulder or see the types of gestures they were performing. Tapping, scrolling, and typing behaviors look very different from one another, so were easy to differentiate.

I have repeatedly observed cases such as individuals casually scrolling with one hand, then using their other hand to get additional reach, then switching to two-handed use to type, switching back to cradling the phone with two hands—just by not using their left hand to type anymore—tapping a few more keys, then going back to one-handed use and scrolling. Similar interactions are common.

One-Handed Use

“The 49% of users who use just one hand typically hold their phone in a variety of positions.”

While I originally expected holding and using a mobile phone with one hand to be a simple case, the 49% of users who use just one hand typically hold their phone in a variety of positions. Two of these are illustrated in Figure 2, but other positions and ways of holding a mobile phone with one hand are possible. Left-handers do the opposite.

Figure 2—Two methods of holding a touchscreen phone with one hand

Two methods of holding a touchscreen phone with one hand

Note—The thumb joint is higher in the image on the right. Some users seemed to position their hand by considering the reach they would need. For example, they would hold the phone so they could easily reach the top of the screen rather than the bottom.

One-handed use—with the

  • right thumb on the screen—67%
  • left thumb on the screen—33%

I am not sure what to make of these handedness figures. The rate of left-handedness for one-handed use doesn’t seem to correlate with the rate of left-handedness in the general population—about 10%—especially in comparison to the very different left-handed rate for cradling—21%. Other needs such as using the dominant hand—or, more specifically, the right hand—for other tasks may drive handedness. [4]

One-handed use seems to be highly correlated with users’ simultaneously performing other tasks. Many of those using one hand to hold their phone were carrying out other tasks such as carrying bags, steadying themselves when in transit, climbing stairs, opening doors, holding babies, and so on.

Cradling in Two Hands

“Cradling is my term for using two hands to hold a mobile phone, but using only one hand to touch the screen or buttons.”

Cradling is my term for using two hands to hold a mobile phone, but using only one hand to touch the screen or buttons, as shown in Figure 3. The 36% of users who cradle their mobile phone use it in two different ways: with their thumb or finger. Cradling a phone in two hands gives more support than one-handed use and allows users to interact freely with their phone using either their thumb or finger.

Figure 3—The two methods of cradling a mobile phone

The two methods of cradling a mobile phone

Cradling—with a

  • thumb on the screen—72%
  • finger on the screen—28%

With thumb usage, users merely added a hand to stabilize the phone for one-handed use. A smaller percentage of users employed a second type of cradling, in which they held the phone with one hand and used a finger to interact with the screen. This is similar to the way people use pens with their mobile devices. (We observed so few people using pens with their mobile devices—only about six—that I have not included them as a separate category in the data set.)

Cradling—in the

  • left hand—79%
  • right hand—21%

Anecdotally, people often switched between one-handed use and cradling. I believe this was sometimes for situational security—such as while stepping off a curb or when being jostled by passersby—but sometimes to gain extra reach for on-screen controls outside the normal reach.

Two-Handed Use

“We traditionally associate two-handed use with typing on the QWERTY thumbboards of devices like the classic Blackberry or on slide-out keyboards.”

We traditionally associate two-handed use with typing on the QWERTY thumbboards of devices like the classic Blackberry or on slide-out keyboards. Two-handed use is prevalent among 15% of mobile phone users. In two-handed use, as shown in Figure 4, users cradle their mobile phone in their fingers and use both thumbs to provide input—much as they would on a desktop keyboard.

Two-handed use—when holding a phone

  • vertically, in portrait mode—90%
  • horizontally, in landscape mode—10%

Figure 4—Two-handed use when holding a phone vertically or horizontally

Two-handed use when holding a phone vertically or horizontally

People often switched between two-handed use and cradling, with users typing with both thumbs, then simply no longer using one hand for input and reverting to using just one of the thumbs consistently for interacting with the screen.

However, not all thumb use was for typing. Some users seemed to be adept at tapping the screen with both thumbs or just one thumb. For example, a user might scroll with the right thumb, then tap a link with the left thumb moments later.

Also notable is the overwhelming use of devices in their vertical orientation, or portrait mode—despite theories about the ease of typing with a larger keyboard area. However, a large percentage of slide-out keyboards force landscape use. [5] All ways of holding a phone typically orient the device vertically, but for two-handed use in particular, use of landscape mode was unexpectedly low, even though several of my clients have received numerous customer complaints in app store reviews for not supporting landscape mode.

What Do These Findings Mean?

“Some designers may interpret charts of one-handed use to mean that they should place low-priority or dangerous functions in the hard-to-reach area in the upper-left corner of the screen. But I wouldn’t recommend that.”

I expect some to argue that one-handed use is the ideal—and that assuming one-handed use is a safe bet when designing for almost half of all users. But I see more complexity.

Some designers may interpret charts of one-handed use to mean that they should place low-priority or dangerous functions in the hard-to-reach area in the upper-left corner of the screen. [6] But I wouldn’t recommend that. What if a user sees buttons at the top, so switches to cradling his phone to more easily reach all functionality on the screen—or just prefers holding it that way all the time?

Even if we don’t understand why there are such large percentages for handedness, we cannot assume that people will hold their phone in their right or left hand. When targeting browsers or mobile-device operating systems, I am always uncomfortable ignoring anything with a market share over 5%. That’s a general baseline for me, though I adjust it for individual clients or products. But I would never, ever ignore 20 to 30% of my user base. While I am personally very right handed, now that I have these numbers, I am spending a lot more time paying attention to how interactions might work when using the left hand.

Another factor that I had not adequately considered until putting together these diagrams is how much of the screen a finger may obscure when holding a mobile phone in any of these ways. With the display occupying so much of the device’s surface, this may explain part of the reason for a user’s shifting of his or her grasp. As designers, we should always be aware of what content a person’s fingers might obscure anywhere across the whole screen. Just remembering that a tapping finger or thumb hides a button’s label is not enough.

Now, my inclination to test my user interface designs on devices is stronger than ever. Whether I’ve created a working prototype, screen images, or just a paper prototype that I’ve printed at scale, I put it on a mobile device or an object with similar dimensions and hold it in all of the ways that users would be likely to hold it to ensure that my fingers don’t obscure essential content and that buttons users would need to reach aren’t difficult to reach.

Next Steps

“With clear correlations between tasks and ways of holding a phone, we could surmise likely ways of holding devices for particular types of interactions rather than making possibly false assumptions….”

I don’t consider this the ultimate study on how users hold mobile devices, and I would like to see someone do more work on it, even if I’m not the one to carry it out. It would be very helpful to get some solid figures on how much people switch the ways they’re holding their mobile phone—from one-handed use to cradling to two-handed use. Having accurate percentages for how many users prefer each way of holding a phone would be useful. Do all users hold their phones in all three of these ways at different times? This is not entirely clear. It would also be helpful to determine which ways of holding a mobile phone are appropriate for specific tasks. With clear correlations between tasks and ways of holding a phone, we could surmise likely ways of holding devices for particular types of interactions rather than making possibly false assumptions based on our own behavior and preferences.



Premium vs Freemium vs Subscription

For mobile apps, there are three dominant pricing strategies: Premium, Subscription and Freemium.

According to a report by the app-store analytics company Distimo, freemium now accounts for 71% of Apple App Store revenues in the US, up from somewhere around 50% last year, and rising. In Asia, freemium is 90% of App Store revenues.

71% app revenue from freemium

Is freemium always optimal? What factors should you consider when choosing a pricing strategy?

Firstly, here is what these different pricing models mean, as applied to mobile apps:

Premium apps (or paid apps) have an upfront price that must be paid before they can even be downloaded. This is similar to licensed software, except that the App Store makes all future upgrades free once the app is purchased.

Contrast this with freemium (a portmanteau of “free” and “premium”), where the app is free to download and use. However, some features inside the app are unavailable until you pay for them. App stores make it dead simple for developers to charge small amounts of money inside the app.

Subscriptions are a regular fixed fee the user is charged automatically via the App Store for using the app. Magazines in the iOS Newsstand are usually subscription-based. Subscriptions can actually overlap with either premium or freemium models. For example, Spotify requires you to have a subscription to even use the app (premium), while Pandora is closer to freemium where you can pay a subscription to get ad-free and unlimited hours of music.
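To make the three models concrete, here is a small sketch in the basic JavaScript this course teaches, comparing revenue under each model. Every number in it is invented for illustration; real prices, conversion rates, and user counts vary wildly by app and market.

```javascript
// Illustrative-only revenue comparison of the three pricing models.
// All numbers (user counts, prices, conversion rates) are invented,
// not taken from the article.

function premiumRevenue(downloads, price) {
  // Every user pays the same fixed amount, up front.
  return downloads * price;
}

function freemiumRevenue(downloads, conversionRate, avgPurchase) {
  // The app is free; only a fraction of users ever pay anything.
  return downloads * conversionRate * avgPurchase;
}

function subscriptionRevenue(subscribers, monthlyFee, months) {
  // A recurring fee guarantees repeat transactions.
  return subscribers * monthlyFee * months;
}

// Freemium typically reaches far more users than premium:
var premium = premiumRevenue(10000, 3);              // 10k paid downloads at $3
var freemium = freemiumRevenue(500000, 0.02, 4);     // 500k free users, 2% convert, $4 avg
var subscription = subscriptionRevenue(5000, 1, 12); // 5k subscribers at $1/month for a year

console.log(premium, freemium, subscription); // 30000 40000 60000
```

Notice how the freemium line reaches fifty times as many users as the premium line while producing comparable revenue; that reach is the lever the rest of the article discusses.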

How do you decide which to choose?

Premium has very limited use today

Perception is a large factor in how high a price is acceptable, and going premium helps create that perception. With premium, every user pays upfront, but the amount each user pays is fixed, regardless of how much utility each gets. Also, users have come to expect apps in the $2 – $5 range (a few niche apps can charge $10-$20) and there is no way to get higher ARPU.

In general, premium works in the following situations:

1. There is a strong demand for your app – niche areas are good candidates here.

2. You have a strong brand already and can establish trust with users where they are willing to pay before they download the app.

3. There is not a lot of competition, which would almost certainly drive the price down.

4. You don’t care much about reach – which will be much smaller because of the “pay gate” into your app.

5. There are no ongoing feature or content costs that can drive the average cost of supporting a user above what the user paid for the app.

Freemium helped create those million-dollars-per-day games

Freemium was popularized by casual games in Japan and Korea and has quickly become the winning model in mobile apps. It works really well in the following situations:

1. There is enough competition that users invariably have other cheaper or free options (low barrier to entry). This is partly why most mobile apps today are moving to freemium.

2. When reach is important. You need a large number of users quickly to create network effects, for example to drive virality or gather significant data.

3. When a small percentage of power users is willing to pay significantly more than the other, lighter users. The pay-as-you-go model facilitates this: hardcore users can drive 100 to 1,000 times more revenue than other users, which is why some free-to-play games are reaching a $1MM-per-day revenue run rate.

chart of freemium vs fixed price

More users in freemium, but a small percentage of them drive astronomical amounts of revenue.
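The chart's point can be sketched numerically. The segment sizes and spending figures below are invented; the article only claims that hardcore users can drive 100 to 1,000 times more revenue than other users.

```javascript
// A toy model of freemium revenue concentration.
// Every number here is made up for illustration.

var segments = [
  { name: "whales",  count: 100,    monthlySpend: 500 }, // hardcore payers
  { name: "regular", count: 5000,   monthlySpend: 5 },
  { name: "free",    count: 495000, monthlySpend: 0 }
];

// Total monthly revenue across all segments.
var total = segments.reduce(function (sum, s) {
  return sum + s.count * s.monthlySpend;
}, 0);

// Share of revenue coming from the tiny whale segment.
var whaleShare = (100 * 500) / total;

console.log(total);                 // 75000
console.log(whaleShare.toFixed(2)); // "0.67": 0.02% of users drive two-thirds of revenue
```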

The mobile app economy has progressed to a point today where all of the above usually hold true. Freemium does require some operational effort in managing your free and paying users, converting the former to the latter, and driving repeat purchases. However, this helps create a sustainable business rather than a one-time hit.

Subscriptions: the Holy Grail

A subscription provides a guarantee of repeat transactions and hence, businesses with subscription revenues tend to be valued far higher. The subscription price is usually smaller than the one-time price to incentivize the user into a longer term commitment. As the seller, you make up for the discount by being guaranteed future transactions.

A single issue of The New Yorker costs $6, but a year’s subscription of 52 issues is $70. That is a 77% discount and is still better, revenue-wise, than selling individual copies. Of course, they make a lot of money through advertising, so a guaranteed customer in the future is far more important to them. This may not be the case in all businesses. Amazon gives you a ~15% discount to “subscribe” to certain household items.
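The New Yorker figures above can be checked with a couple of lines:

```javascript
// A subscription trades a per-unit discount for guaranteed future
// transactions. This verifies the article's New Yorker arithmetic.

function subscriptionDiscount(unitPrice, units, subscriptionPrice) {
  var fullPrice = unitPrice * units; // what buying every issue would cost
  return (fullPrice - subscriptionPrice) / fullPrice;
}

// 52 issues at $6 each would cost $312 at the newsstand;
// the $70 annual subscription is roughly a 77% discount.
var d = subscriptionDiscount(6, 52, 70);
console.log((d * 100).toFixed(1) + "%"); // "77.6%"
```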

The subscription model does have a few drawbacks:

1. Like premium, there is a single price for all users, regardless of how much they use. Money is left on the table at both ends: hardcore users willing to pay a lot more are capped at the fixed price, while light users for whom the price and commitment are too high never sign up.

2. Tiered subscriptions do let you set different prices, but they still create ceilings on both price and consumption. Netflix has a 1-DVD plan for $8, 2 for $12, all the way up to 8 for $44. The lost opportunity? (a) A user can rent at most 8 DVDs; a few subscribers may want much more. (b) A $44 price tag for DVD rental causes sticker shock. It is easier to charge $5.50 eight times than to charge $44 once.

So when should you use subscriptions? Whenever you can. Subscriptions are considered the holy grail of revenue models. But it is important to make sure you are not leaving money on the table just to get a subscription commitment from a customer.

The ideal pricing strategy

loaded grocery shopping cart

Escalators are faster than stairs, even faster if you run up them.

At the risk of over-generalizing, a combination of freemium + subscription is the ideal for most apps.

Not everything being sold lends itself to subscriptions. Impulse purchases typically don’t. For example, a power-up that will help you cross this level in a game, or an Instagram filter that makes this photo of yours look exceptional. In such cases, it is best to start with a freemium model and then offer subscriptions to your regular customers to drive purchases further. Content (magazines, music streaming, movie streaming) has more commonly been a subscription business. For content, it is a good idea to start with the accepted subscription model, but remove any ceiling on purchases and consumption by offering additional content that can be purchased in-app on top of the subscription.

The mobile app economy is already large and growing even bigger. Get all the value you can out of it.

Drop in a comment, or shoot us an email if you want to chat about this for your app. We’re happy to help!


Some design trend data

Great post and data deconstruction. Best viewed from Source

It took a surprisingly long time and about fifteen million GET requests to scrape metadata for every upload (as of the end of August ‘13) on Dribbble, the popular design community. I ended up lopping off the first half-year-or-so of activity on the site as the community was growing in fits and spurts and the data were inconsistent and basically all over the place. In sum, my little Heroku-deployed Go app collected information about 638,271 uploads and 3,479,698 taggings, and stored it all in a PostgreSQL database.1


In the interest of being a good internet citizen, I rate-limited the hell out of it. Strangely, the scraper looped through the full range of unique upload IDs about a dozen times, and it seemed to pick up a few more uploads each run (albeit only a handful the final time through), which would indicate there was some kind of silent failure occurring. If these numbers seem off or my method seems flawed, I’m interested in hearing about it.

I’ve included the taggings-per-upload ratio in Fig. A to add meaning to the trend data that follows. Since 2010, the average Dribbble upload went from having fewer than 4 tags to over 6, so an increase in incidence of a particular tag could indicate nothing more than users’ increasingly prolific tagging (i.e. a change in Dribbble usage behaviour, not a trend in web design). I considered normalising the later spark-lines against that increase, but my eyeballing of the data suggests that the increase in tagging coincides with an increase in tags (the size of the tag set increased). There might be a clever way to properly normalise the data to diminish the change in Dribbbler usage and highlight actual data but until I figure that out or somebody emails it to me, be aware you’re looking through a glass darkly.2
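For what it's worth, the normalisation being considered might look something like the following sketch: divide a tag's raw incidence by that period's average tags-per-upload, so that "people tag more overall" doesn't masquerade as "this style is trending". The figures below are invented, not drawn from the actual data set.

```javascript
// Normalise a tag's incidence against overall tagging behaviour.
// All input figures are invented for illustration.

function normalisedIncidence(tagCount, uploads, avgTagsPerUpload) {
  var rawIncidence = tagCount / uploads;  // share of uploads carrying the tag
  return rawIncidence / avgTagsPerUpload; // corrected for tag inflation
}

// Raw incidence rises from 4% to 6% of uploads, but average tagging also
// rose from 4 to 6 tags per upload, so the normalised trend is flat:
var early = normalisedIncidence(400, 10000, 4);
var late = normalisedIncidence(900, 15000, 6);

console.log(early.toFixed(3), late.toFixed(3)); // 0.010 0.010
```

That is, under this correction the apparent 50% rise in raw incidence would disappear entirely, which is exactly the "glass darkly" risk the author is flagging.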

The rise of ‘flat design’

Dark glass or not, there are a few striking vistas. The most spectacular is the rise of flat design. The community’s usage of the ‘flat’ tag was a rounding-error above zero until the late autumn of 2012 when it began a rapid ascent. (It’s probably not worth noting that Scott Forstall departed Apple on 29 October 2012 — but you don’t need to be Tim Cook to know which way the wind blows.)

Fig. B: Rise of ‘flat’ on Dribbble

In August of 2013, more than one in every ten Dribbble uploads was tagged ‘flat’. But Fig. B charts two trends, really. One is the rise of a style of visual design that’s been called flat, typified by a lack of texture, gradients, and drop shadows. And the other trend is the propagation of the term ‘flat’ to describe this style of design. Charting related stylistic tags (‘minimal’, ‘simple’, ‘metro’) suggests a trend that’s no less obvious, but perhaps a little less dramatic.

If the conversation within the software design community over the past year is any indication, we might expect the trend in Fig. B to be mirrored by a precipitous decline in the use of skeuomorphism. Of course the word skeuomorphism was really only introduced as a shibboleth among the proponents of so-called ‘flat design’, as a pejorative description of an outmoded look.

Fig. C: ‘Skeuomorphism’ on Dribbble

The word ‘skeuomorphism’ was first used as a tag in June of 2011 by Eugene Cheporov (and ‘skeuomorphic’ predated ‘skeuomorphism’ by a couple of months: Joshua Blankenship tagged an upload with it in September of that year). Its use since then has been extremely rare (peaking in January 2013 with a meagre fourteen taggings out of more than 20,000 uploads), and often self-conscious (the first upload tagged ‘skeuomorphic’ was titled Because there aren’t enough skeuomorphic shots on Dribbble…).

Fig. D: Google Trends

But uploads of the style that came to be pejoratively called ‘skeuomorphism’ were far more prevalent far earlier. Take a look through some Dribbble uploads from 2011: there’s a lot of carefully stitched upholstery, glossy leather, and slickly rendered machinery.3 So I tried to find some related tags to use as a proxy. ‘Noise’ and ‘texture’ both overlap with the style that I would call ‘skeuomorphic’ (not in its proper sense, but the gaudy, drop-shadow-y style it’s come to mean). They also constitute trends of their own (and even overlap with ‘flat’ more than never), so, as ever, it’s a dark glass.

Fig. E: Mobile computing platforms on Dribbble

It is, I am sure, an extremely significant indicator of the developer ecosystems of the major mobile computing platforms of the moment that iOS is about ten times as popular as Android within the Dribbble design community. In Android’s best month, about one in a hundred uploads were so tagged; in iOS’s, about one in ten. And woe betide Windows Phone, for which there isn’t even enough data to mock.

Read from this what you will; I don’t have the heart for it. But the data is pretty definitive: iOS has the critical mass of designer mind-share.

The reason I began this little-big-data adventure was a hunch that we’d see Mark Simonson’s Proxima Nova suffer a dramatic decline in usage the moment Hoefler & Frere-Jones made available for Web use the typeface Proxima Nova has so often been called on to mimic: Tobias Frere-Jones’s Gotham. I thought a) the entire project would be reasonably quick; b) it’d make a very satisfying graph; c) glory. Proxima Nova’s use on the web has often struck me as we-want-a-web-font-but-also-want-Gotham. I admit I’ve done the very same once or twice. It’s a poor imitation, but bold and in caps, it ticks a lot of the same boxes.

Fig. F: Sans-serifs on Dribbble

Of course: the data are too subtle for my ham-fisted brain. Gotham’s usage since 2010 is basically in decline. I suppose it’s probably in decline since 4th November 2008. And, again, its decline here is not necessarily meaningful: this may be highlighting changes in the Dribbble community, changes in Dribbble usage patterns, or actual changes in the typeface’s relative popularity. And Proxima Nova does follow the sort of trend we might have expected. It rises, presumably roughly with Typekit’s own usage, and then tapers off much like Gotham, whose coat-tails it supposedly rides.

I’ve included a few other popular sans-serifs to round out the picture. Poor Helvetica, presumably in decline since its Hustwit-driven revival in 2007. Myriad gets surprisingly little attention on Dribbble, considering how much real-world usage it sees. That humanist Open Sans looks like it’s on the way up. There isn’t much data yet, but I imagine in another couple of years we’ll be able to chart it satisfyingly against Freight Sans and extrapolate some far-fetched conclusions about their respective platforms (Typekit vs Google Fonts).

Fig. G: Serifs on Dribbble

My attempt to reproduce Fig. F for serif type families was a complete failure. Fig. G demonstrates just how unsatisfying the data are in this regard. Is serif usage more fragmented? Or less exciting to tag? I suppose people have been (this is just my own hunch, not based on any formal data) using serifs for text and sans-serifs for display type, and Dribbble tends to highlight display text more than nicely typeset paragraphs. Nevertheless, it’s curious.

Pardon our progress

This data took far too long to collect, and far too long to (very amateurishly) analyse. There’s a lot to be improved about the analysis, most of which I’ve mentioned as it’s come up above. Dribbble itself is a limited sample, and the data I’ve collected isn’t normalised against changes in Dribbble’s community or usage patterns. Similarly, I’ve collected all tags on all uploads to Dribbble. It might make more sense to limit our data to uploads or uploaders that have a modicum of credibility (i.e. somehow incorporate the ‘like’ or ‘follower’ counts), or even somehow weigh data-points correspondingly. It might be interesting to chart the spread of tags across the community. (Do tags begin in the periphery and spread inward, or do they begin with the dribbblerati and spread outward?) And finally: is there any way to address the fact that Dribbble uploaders don’t consistently tag in a useful or elucidatory way?

Fig. H: Big (American) holidays on Dribbble

I can nowise claim any kind of even passing statistical know-how, so I’ve uploaded a ~70 MB CSV to GitHub. Please do fork, analyse, graph, chart, blog, correct, augment.4 That beautiful and meaningful chart I was fantasising about (with the rise of ‘flat’, decline of ‘skeuomorphism’, resurgence of ‘Gotham’, and short-lived reign of ‘Proxima Nova’) didn’t quite materialise. But I’m reasonably happy with Fig. H.


Best Bus Stop Ever. We wanted to make everyday life better with mobile, so we brought in a few surprises to a bus stop. We put up a poster featuring a URL. We waited for people to visit the mobile site. When they pressed the button, the fun began.  Source

Multi-Device Web Design: An Evolution

As mobile devices have continued to evolve and spread, so has the process of designing and developing Web sites and services that work across a diverse range of devices. From responsive Web design to future friendly thinking, here’s how I’ve seen things evolve over the past year and a half.

Device Love

If you haven’t been keeping up with all the detailed conversations about multi-device Web design, I hope this overview and set of resources can quickly bring you up to speed. I’m only covering the last 18 months because it has been a very exciting time with lots of new ideas and voices. Prior to these developments, most multi-device Web design problems were solved with device detection and many still are. But the introduction of Responsive Web Design really stirred things up.

Responsive Web Design

Responsive Web Design is a combination of fluid grids and images with media queries to change layout based on the size of a device viewport. It uses feature detection (mostly on the client) to determine available screen capabilities and adapt accordingly. RWD is most useful for layout but some have extended it to interactive elements as well (although this often requires JavaScript).

Responsive Web Design allows you to use a single URL structure for a site, thereby removing the need for separate mobile, tablet, desktop, etc. sites.

For a short overview read Ethan Marcotte’s original article. For the full story read Ethan Marcotte’s book. For a deeper dive into the philosophy behind RWD, read over Jeremy Keith’s supporting arguments. To see a lot of responsive layout examples, browse around the site.
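As a taste of the fluid-grid half of RWD, the conversion at its core is often summarized as "target ÷ context = result": a pixel measurement is expressed as a percentage of its containing element, so the layout flexes with the viewport. A sketch in the basic JavaScript this course covers:

```javascript
// Convert a fixed pixel width into a fluid percentage of its container,
// the "target ÷ context = result" calculation behind fluid grids.

function toFluidWidth(targetPx, contextPx) {
  return (targetPx / contextPx) * 100 + "%";
}

// A 300px sidebar designed inside a 960px page becomes a proportional
// width that a stylesheet can use instead of the fixed pixel value:
console.log(toFluidWidth(300, 960)); // "31.25%"
```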


Responsive Web Design isn’t a silver bullet for mobile Web experiences. Not only does client-side adaptation require a careful approach, but it can also be difficult to optimize source order, media, third-party widgets, URL structure, and application design within a RWD solution.

Jason Grigsby has written up many of the reasons RWD doesn’t instantly provide a mobile solution, especially for images. I’ve documented (with concrete examples) why we opted for separate mobile and desktop templates at my last startup, a technique that’s also employed by many Web companies like Facebook, Twitter, and Google. In short, separation tends to give greater ability to optimize specifically for mobile.

Mobile First Responsive Design

Mobile First Responsive Design

Mobile First Responsive Design takes Responsive Web Design and flips the process around to address some of the media query challenges outlined above. Instead of starting with a desktop site, you start with the mobile site and then progressively enhance to devices with larger screens.

The Yiibu team was one of the first to apply this approach and wrote about how they did it. Jason Grigsby has put together an overview and analysis of where Mobile First Responsive Design is being applied. Brad Frost has a more high-level write-up of the approach. For a more in-depth technical discussion, check out the thread about mobile-first media queries on the HTML5 Boilerplate project.
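The mobile-first idea can be sketched in a few lines of JavaScript: start from the smallest-screen layout and progressively enhance as the viewport grows, rather than starting wide and subtracting. In real CSS this would be min-width media queries; the breakpoint values below are arbitrary examples, not recommendations.

```javascript
// Mobile-first in miniature: the baseline is the small-screen layout,
// and each breakpoint only *adds* enhancements on top of it.

function chooseLayout(viewportWidth) {
  var layout = { columns: 1, navigation: "collapsed" }; // mobile baseline

  if (viewportWidth >= 600) {  // enhance for mid-size screens
    layout.columns = 2;
  }
  if (viewportWidth >= 1024) { // enhance again for large screens
    layout.columns = 3;
    layout.navigation = "expanded";
  }
  return layout;
}

console.log(chooseLayout(320).columns);     // 1
console.log(chooseLayout(1280).navigation); // "expanded"
```

The point of the ordering is that a device that matches no breakpoint still gets a complete, working layout, which is exactly the property that makes the approach friendly to small and unknown devices.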


Many folks are working through the challenges of designing Web sites for multiple devices. This includes detailed overviews of how to set up Mobile First Responsive Design markup, style sheet, and JavaScript solutions.

Ethan Marcotte has shared what it takes for teams of developers and designers to collaborate on a responsive workflow based on lessons learned on the Boston Globe redesign. Scott Jehl outlined what JavaScript is doing (PDF) behind the scenes of the Globe redesign (hint: a lot!).

Stephanie Rieger assembled a detailed overview (PDF) of a real-world mobile first responsive design solution for hundreds of devices. Stephan Hay put together a pragmatic overview of designing with media queries.

Media adaptation remains a big challenge for cross-device design. In particular, images, videos, data tables, fonts, and many other “widgets” need special care. Jason Grigsby has written up the situation with images and compiled many approaches for making images responsive. A number of solutions have also emerged for handling things like videos and data tables.

Server Side Components

Combining Mobile First Responsive Design with server side component (not full page) optimization is a way to extend client-side only solutions. With this technique, a single set of page templates define an entire Web site for all devices but key components within that site have device-class specific implementations that are rendered server side. Done right, this technique can deliver the best of both worlds without the challenges that can hamper each.

I’ve put together an overview of how a Responsive Design + Server Side Components structure can work, with concrete examples. Bryan Rieger has outlined an extensive set of thoughts on server-side adaptation techniques and Lyza Gardner has a complete overview of how all these techniques can work together. After analyzing many client-side solutions to dynamic images, Jason Grigsby outlined why using a server-side solution is probably the most future friendly.
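A minimal sketch of the idea, with hypothetical names throughout: one shared page template, but a key component (here, a hero image) has device-class specific implementations chosen on the server.

```javascript
// Hypothetical server-side component selection. The shared template
// never changes; only the component implementation varies by device class.

var heroImage = {
  mobile: function (src) { return '<img src="' + src + '?w=320">'; },
  desktop: function (src) { return '<img src="' + src + '?w=1200">'; }
};

function renderPage(deviceClass, src) {
  // Fall back to the mobile implementation for unknown device classes.
  var component = heroImage[deviceClass] || heroImage.mobile;
  return "<header>Site</header>" + component(src) + "<footer></footer>";
}

console.log(renderPage("mobile", "/hero.jpg"));
// <header>Site</header><img src="/hero.jpg?w=320"><footer></footer>
```

The payoff is that the mobile client never downloads the 1200px image in the first place, which is the optimization that pure client-side RWD struggles to deliver.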

Future Friendly

Future Thinking

If all the considerations above seem like a lot to take in to create a Web site, they are. We are in a period of transition and still figuring things out. So expect to be learning and iterating a lot. That’s both exciting and daunting.

It also prepares you for what’s ahead. We’ve just begun to see the onset of cheap networked devices of every shape and size. The zombie apocalypse of devices is coming. And while we can’t know exactly what the future will bring, we can strive to design and develop in a future-friendly way so we are better prepared for what’s next.


I referenced lots of great multi-device Web design resources above. Here they are in one list. Read them in order and rock the future Web!


Data by Luke ~ Mobile Devices Per Day

Data Monday: Mobile Devices Per Day

Last year, I illustrated the importance of mobile by highlighting how many devices enter the World each day compared to the number of people born per day on our planet. The ratio was striking. Looking at the same figures today is even more sobering.

The End of 2011

The total of smartphones entering the World per day was about 1.45M devices, compared to 371,124 births.

number of mobile devices in 2011

The End of 2012

The total of smartphones entering the World per day was about 3.6M devices, again compared to the same 371,124 births.

number of mobile devices in 2012
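The underlying arithmetic is a one-liner, using the figures quoted above:

```javascript
// Smartphones entering the World per day, divided by births per day.

function devicesPerBirth(devicesPerDay, birthsPerDay) {
  return devicesPerDay / birthsPerDay;
}

console.log(devicesPerBirth(1450000, 371124).toFixed(1)); // "3.9" at the end of 2011
console.log(devicesPerBirth(3600000, 371124).toFixed(1)); // "9.7" at the end of 2012
```

In other words, the ratio of new smartphones to new people more than doubled in a single year.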

Hope you’ve got your multi-device design strategy in order.


Why Mobile Matters

When I initially proposed the idea of Mobile First over three years ago, there were a lot of skeptics. The situation today has a lot more people convinced that taking mobile seriously matters. But just in case some people remain unconvinced, here’s a really vivid way of explaining the situation.

Number of Mobile Devices

number of mobile devices

Every day 371,124 children are born across the World.

number of mobile devices

Every day 377,900 iPhones are sold across the World.

number of mobile devices

Every day 700,000 Android devices are activated across the World.

number of mobile devices

Adding iPads and iPod Touches to iPhones brings the total of Apple mobile devices sold per day to 562,000. Together with Android devices, that’s 1.27M mobile devices sold or activated per day, compared to 371,124 children born.

number of mobile devices

But there’s more. Nokia sold 200,000+ smartphones a day (and 958k feature phones). RIM sold 143,000 Blackberries a day at the end of 2011. This brings the total of smartphones entering the World per day to about 1.45M devices, again compared to 371,124 births.

Share of Personal Computing

Clearly there are a lot of mobile devices coming into the World. That’s having a huge impact on the personal computing market. Looking at data compiled by Asymco, the first 15 years of personal computing consisted of a few manufacturers trying to figure things out (Amiga, Atari, Apple).


The next 15 years were completely dominated by Microsoft’s WinTel platform with Apple barely hanging on.


Fast forward to the past 3 years and you can see a huge shift underway. Apple and Android are eating into personal computing in a massive way. That’s because today’s mobile devices aren’t just phones; they’re the most personal form of computer we have: always with us, always connected, and highly capable.


Real Opportunity

As mobile devices take over personal computing, a lot of opportunity is created for software companies and services. Consider mobile payments on PayPal. In 2009, mobile payments totaled $141 million. At the end of 2011, that number had grown to $4 billion. You read that right: from $141 million to $4 billion over the course of three years.


eBay has seen a similar trend. eBay reached $5 billion in mobile GMV (gross merchandise volume) in 2011, more than doubling 2010’s GMV of $2 billion. This was up from $600 million in 2009.


Hopefully this sample of data points helps some mobile skeptics see the opportunity we’re facing. If you need even more convincing, check out my ongoing series of data posts for a deeper look at the mobile market and beyond.


Mobile first: Luke Wroblewski interview

The Mobile First philosophy has radically changed how professionals approach Web design and become the way companies as diverse as Facebook and IBM build their products.

The Mobile First approach is to start designing for mobile devices, which typically have smaller screens and fewer capabilities than desktops, and then progressively enhance the product, so that desktops get an enhanced site experience rather than mobiles getting a pared-down one.
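As a rough sketch of that enhancement logic (the breakpoints and feature names here are hypothetical, purely to illustrate the direction of enhancement):

```javascript
// Mobile First sketch: the base experience works everywhere,
// and wider viewports opt IN to enhancements, instead of a
// desktop design being stripped DOWN for small screens.
function enhancementsFor(viewportWidth) {
  const features = ['single-column-layout']; // base: designed for the smallest screen
  if (viewportWidth >= 600) features.push('two-column-layout');
  if (viewportWidth >= 1024) features.push('sidebar', 'hover-previews');
  return features;
}

console.log(enhancementsFor(320));  // phones get the focused base experience
console.log(enhancementsFor(1280)); // desktops get the enhanced experience
```

In practice this is usually expressed with `min-width` CSS media queries rather than script, but the principle is the same: enhancements are added on top of a working mobile core.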

We grabbed the opportunity to interview Luke Wroblewski, who first defined the Mobile First concept back in 2009, about how he used the principle to create Polar at his latest startup, Input Factory Inc. where he is co-founder and CEO.

How did you get started building apps and what kept you fascinated?

The first mobile apps I worked on were during my time at Yahoo! I joined the Yahoo! Search team back in 2005 and a bit later was heading up a small “tiger team” focused on ideas for products that were 3 to 5 years out. At that time I was fascinated with where the web was going and in particular with mobile.

We started out building some experiences for newer Nokia devices, as Nokia was the big player back then. Soon after though, Apple announced the app store for iOS and we jumped into iOS applications as well. At the time we were experimenting with services that connected mobile apps to networked TVs and more traditional computers like laptops and desktops. It was a really great opportunity to explore what was coming and we came up with a bunch of concepts that I’m still passionate about today.

I think that’s what keeps me interested in this space. You can see the future: more connected devices of every shape and size; interactions between those devices; more real-time access to useful information, services, and people; it’s all coming. But these things don’t just appear out of thin air. They take years of effort, trial and error to make real. So I keep at it because I keep seeing a more and more exciting future ahead.

When developing a product, how do you identify a gap in such a saturated market?

I don’t. If a market is saturated I think that’s a great sign it’s interesting on a number of levels. For me, it’s much more important to focus on problems I can understand and actually do something about. When you have deep experience in an area, you can often see a future other people can’t.

For instance, I’ve dug really deep into web form and mobile design. I even wrote two popular books on these topics. So I feel like I’ve increased my ability to see problems in these areas. And when I look across the Internet I see lots of people eager to share their opinions and get the opinions of others. But the solutions out there are just really bad.

You’ve got surveys that basically consist of multiple pages of form elements: checkboxes, radio buttons, text fields, and so on. Because they’re so painful to complete, companies are paying people to take these surveys and even then the participation rates are really low. I look at this and think: we can do better.

It doesn’t really matter if there are a lot of companies out there with apps for making surveys or soliciting feedback. If you see a problem and think you can do a better job solving it, that’s the on-ramp. That may sound overly confident but I think you need confidence to get out there and start doing your own thing. You have to believe in it or no one else will.

Why is the Mobile First approach a better way to do things?

Well the reasons have been stacking up over the years. But when I outlined the idea originally I pointed to three main reasons: growth, focus, and innovation.

Growth is pretty obvious these days. More than 2 million modern mobile devices enter the world each day. Compare that to the 371,000 children born per day and you can quickly see how these numbers add up. All of these network-connected devices add up to a huge opportunity, and many companies are now feeling it in their stats as mobile begins to overtake other kinds of devices in usage.

Focus comes from the natural constraints of mobile. These devices need to be portable, so their screens are small, they connect to networks anywhere and everywhere with varying success, and they get used in very diverse environments often full of distractions. These constraints push you toward more focused, simplified solutions. You can only fit so much on the screen, people often have to wait for it, and they’re unlikely to give you their full attention. So make it easy to understand and use and focus on the important things first. Mobile is a great forcing function for simplicity.

But mobile isn’t just about constraining yourself; quite the opposite. There are lots of things that make mobile experiences more powerful and engaging. Not least of which is the fact that mobile devices can be used all the time and just about anywhere. That not only creates new uses but also means people can be connected throughout the entire day.

If that weren’t enough, due to the capabilities of mobile devices, we have new ways of creating experiences. Thanks to location detection technology, we know where people are down to 10 meters. Thanks to integrated cameras and microphones we can take in visual and audio input, process it, change it, and share it. Thanks to motion sensors we can tell where a device is in three-dimensional space, and the list goes on.

It’s easy to dismiss these capabilities as technology for technology’s sake. Instead think of them as new techniques or paints on your design palette. With them, you can paint a totally unique user experience that allows you to innovate and move beyond existing solutions that came about before these technologies were available to us.

How do you know your design is hitting the mark, especially when aiming for speed and simplicity?

Well, you can test for both. In fact, that’s exactly what we did for our app, Polar. We designed Polar for mobile first and foremost so speed and simplicity were, of course, top of my mind. Earlier I mentioned that we thought we could make collecting and sharing opinions fast, easy, and fun, almost the exact opposite of what most survey tools are like online today.

Polar is our first attempt to do that. The most important interactions in Polar are collecting and sharing opinions and, as a result, we spent a lot of time trying to get those interactions right. To make sure they were fast and easy, we used one-handed, timed tests. Our goal was to allow anyone to vote on 10 polls or create a new poll in under 60 seconds.

If you design for the extremes, the middle usually works itself out. To quote Dan Formosa at Smart Design: When they designed garden shears, they tested them with people who had arthritis. If this “extreme” case could use the garden shears, then anyone could. That’s the same approach we take with timed, one-thumb use. If you can make it work for that extreme it will work for everyone.

Is there anything you can do from a design perspective to make sure that people who download the app actually use it?

Sure. Let them actually use it. In all seriousness, so many apps start off the process by wanting to tell you all about themselves and having you tell them all about you. Fill in this form, give us your phone number, take this intro tour, and so on. All this instead of just letting you get in and start using the app. So you’re spending all your time memorizing which gestures the app has and connecting to Facebook.

No, actually, you’re skipping past all that, trying to actually use the app. So my approach is to just get to the good stuff. Let me say, however, that I know this approach is controversial. There are a number of examples out there that show forcing registration up front increases your sign-up numbers. Which means you increased the number of people who filled out a form. But they’ve never used your service. They might not even know what it is, so how valuable are these users?

I’m biased toward people who’ve seen and used the app, then decided to sign-up as a result. That means they liked what they saw enough to take the next step. The total number of sign-ups may be lower but the number of qualified users may end up being higher. At least that’s our approach!

Have you heard horror stories of people screwing up their signup process in their products?

The best one I saw recently was published by Greg Nudelman. It was an app for finding nearby restrooms made by Charmin. The intro process was so labor-intensive that Greg very appropriately titled his write-up: Let Them Pee!

You’ve used the term “gradual engagement” a lot in the past when you talk about sign-up processes. Can you elaborate on what that is?

Sure. Gradual engagement is an alternative to the sign-up form issue I just described. Through gradual engagement, we can communicate what apps do and why people should care by allowing them to actually interact with them in gradual ways.

For example, Polar is all about sharing and collecting opinions. So we allow anyone opening the app for the first time to vote on the polls they see. 88% of people who download the app do just that. We hold on to all their votes locally, so if they ever take the next step on the pathway (when they want to leave a comment or create a poll of their own), all their votes carry over to their account. This is what I mean by gradually getting people to understand and use your service. It’s about creating a clear and welcoming “pathway” vs. putting up walls.
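The carry-over mechanic Luke describes can be sketched schematically (the names and shapes here are hypothetical, not Polar’s actual code):

```javascript
// Gradual engagement sketch: visitors act first, sign up later.
// Votes are held locally and migrate to the account on sign-up.
const localVotes = [];

function vote(pollId, choice) {
  // No sign-up wall: record the vote locally right away.
  localVotes.push({ pollId, choice });
}

function signUp(userId) {
  // The next step on the pathway: earlier activity carries over.
  return { userId, votes: localVotes.splice(0) };
}

vote('poll-1', 'agree');
vote('poll-2', 'disagree');
const newAccount = signUp('first-time-user');
console.log(newAccount.votes.length); // both pre-sign-up votes carried over
```

The design choice is that nothing the visitor did before registering is thrown away, which is exactly what makes the eventual sign-up feel like a step forward rather than a wall.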

There has been much debate recently over whether improved device capabilities will render mobile-first obsolete, where do you stand on this?

I certainly hope all our devices keep getting better and that we develop new ways of interacting with information and with each other. So I’m not building a moat around mobile or anything. That said, the idea of having a connected device with you anywhere and everywhere is really powerful.

For proof, just look at Flurry’s recent analysis of user sessions and activity across all phone and tablet sizes. The clear winner for both was the “medium” sized (3.5”-5”) phone. I think this is a testament to the value of a portable computer that you can turn to at any moment for answers, conversations, and frankly just about anything. That kind of mobility and its importance does not show any signs of letting up in the near future. So I’m still really bullish on mobile.

We’d like to thank Luke for taking the time to answer these questions.

Do you take a Mobile First approach? What stops you from using an app or mobile site? Let us know in the comments.



Go Big by Going Home

Launch Day: the culmination of thousands of hours of focused, dedicated work; hundreds of scrapped ideas that will never see the light of day; dozens of sleepless nights; a single burning desire that united a team to build something for the world with their hands and with their minds.

It is also, as we like to say at Facebook, a marker in a journey that is 1% finished.

Less than a week ago, we announced Facebook Home to the world, a people-centric family of apps for Android that elevates the way you share and connect with the people you care about. It will be downloadable for free via Google Play and bundled with the new HTC First phone available this Friday. Already, the team is looking ahead to the next iteration, to the improvements that will make Facebook Home that much smoother or more intuitive, to the seedlings of ideas that will blossom into the next big evolution of the product. There is no time for rest; Launch Day is but one of many milestones on the march towards making the world more open and connected.

At the same time, milestones are a useful time to reflect, to pop up a level from the dizzying pace on the ground and see if there are any lessons, any kernels of truths, to file away for the future. As design manager for Facebook Home, these are the three things that I am taking away from the experience of building our first version of the product.

A big project must have a big vision.

Facebook Home isn’t the kind of thing you approach by looking at what you’ve already built and saying, “Hmm, how can we make some improvements?” On the contrary, this was a project with a strong vision put forth by Mark to “make the content that people want to see—new messages and notifications and updates about the people around you—as accessible as possible.” Eventually, this came to mean “a news feed-like experience on the lock screen” and “the lock screen and the home screen are one and the same.”

I can with a very straight face say that these last two statements seemed crazy at first. Like, really, really ridiculously crazy. There’s not a single other example of a lock screen and a home screen being the same thing. That’s because the two serve different functions. A lock screen has to prevent accidental taps, show you the time, make sure you know which phone is yours, give you quick access to your last app, support a notifications system, etc. A home screen has to be extremely efficient at launching apps. We worried about what it would mean to muddle and combine the two. For instance, app-launching would have to be behind a swipe gesture in our model, which meant it wouldn’t be instantly accessible from tapping the home button. Would that be a problem? Not to mention, if News Feed was your lock screen, how on earth could we also cram notifications on there? And make the News Feed ambient enough to not distract you from getting to your apps? AND support all the functionality of stories such as liking, commenting, ‘continue reading…’, displaying photos and status updates and check-ins, advancing to the next story, tapping on links, etc? AND ensure that your phone wasn’t vulnerable to a slew of butt-typed ‘ghgggehgghg’-esque comments?

It’s likely, as a v1 product, that we didn’t get all of those things right. At the same time, ten months later, looking at Home and where we ended up, the solutions to some of the problems listed above seem obvious. I take that as a good sign. (My first boss once told me, “The sign of a successful design is that it seems obvious in retrospect. But those are usually the hardest solutions to come up with.”) For that I give Joey, Francis, Justin and Mac—the four phenomenal product designers on Facebook Home—every iota of credit. But it was Mark’s vision that laid the groundwork, that pushed the team to achieve what at first seemed an unlikely and somewhat audacious task. I would be lying if I said that no skepticism ever presented itself. It did, and quite strongly. There were periods when the vision seemed to demand too much, seemed to place too many constraints on the design.

But then again, that’s also the sign of a powerful vision.

Give designers the room to dream.

I’ll let you in on a little secret. The Chat Heads feature was actually conceived of originally by Joey and Brandon prior to Facebook Home, in the context of something else they were working on at the time. Nobody asked them to design Chat Heads. Nobody went up to them and said, “Hey, please put together some design ideas for how we might build a lightweight, simple chatting interface on mobile.” Nobody handed them a spec or rattled off some guidelines for what they should do. No, the idea for Chat Heads came about because those designers saw a problem—chatting on mobile devices is hard—and they had the space and freedom to do something about it. Being in the early phases of their respective projects helped—theirs was an environment of exploration, when things were still ambiguous and crazy blue-sky ideas were encouraged. Having periods like that when designers can have the freedom to explore and dream up kind-of-out-there solutions is essential for good design ideas to flourish. If you are always executing on a week-by-week roadmap and running the product development process like a bootcamp, it’s likely you will get some optimization wins, but full-blown new concepts are not usually born from those environments. There needs to be time for both an execute-and-optimize strategy in design, as well as room and space for more creative, bigger-picture solutions.

You don’t design something like Facebook Home using Photoshop.

I touched on this point earlier in How to Survive in Design (and in a Zombie Apocalypse), but something like Facebook Home is completely beyond the abilities of Photoshop as a design tool. How can we talk about physics-based UIs and panels and bubbles that can be flung across the screen if we’re sitting around looking at static mocks? (Hint: we can’t.) It’s no secret that many of us on the Facebook Design team are avid users of QuartzComposer, a visual prototyping tool that lets you create hi-fidelity demos that look and feel like exactly what you want the end product to be. We’ve given a few talks on QC in the past, and its presence at Facebook (introduced by Mike Matas a few years back) has changed the way we design. Not only does QC make working with engineers much easier, it’s also incredibly effective at telling the story of a design. When you see a live, polished, interactive demo, you can instantly understand how something is meant to work and feel, in a way that words or long descriptions or wireframes will never be able to achieve. And that leads to better feedback, and better iterations, and ultimately a better end product. When you are working on something for which the interactions matter so greatly—in this case, a gesture-rich, heavily physics-based UI—anything less simply will not do.

I want to end by shining the spotlight on the designers that made Facebook Home what it is. Now, of course, Facebook Home is far more than just design—it is top-notch engineering and strong leadership and talented content and research and partnerships and marketing. But as this is a post about design, this too, is a highlight about design.

So often at big companies, the work of individuals is blended into one big, faceless corporation. And yet, at the heart of any product are the people who made it their life’s work. So let me just say this: I could not be prouder to work with a design team so talented, so dedicated, and so unwavering in their desire for quality. Joey Flynn—thank you for bauble, for cover feed, and for setting the bar so high in everything you see and touch. Francis Luu—your positivity is a ray of light, and so is your work on notifications, blues clues, and the end-to-end install flow. Justin Stahl—you started at Facebook and six months later you emerged with a launcher, an app, a preso, and more. Mac Tyler—your energy is an inexplicable well of awesomeness, thank you for pouring yourself into saving the day again and again. Skyler Vander Molen—the experience site for Facebook Home is one of the best we’ve ever built. Thank you. Thank you. It’s been an honor and a pleasure working with you. Now onwards, onto v2, v3, and the many other milestones on a journey that’s just begun.


NEO Window Shopping. [adidas NEO] is taking window shopping to a new level with an interactive digital window concept that connects to your smartphone. Now it is possible to shop at our store after hours without an app or scanning a QR code. Article: Source

Mobile First and Foundation 4

Article: Source

The other day, our friend and advisor Luke Wroblewski stopped by for a chat with Jonathan, Chris, and me. We’re in the midst of putting the finishing touches on Foundation 4, polishing the chrome and making her seaworthy. And Luke’s visit was a pleasant distraction.


Luke turned us on to Mobile First and his work has greatly influenced how we’re approaching Foundation 4, which we talked about during our conversation with him. While we talked, Chris was furiously pounding away on the code — he can pat his head and rub his tummy at the same time.

Some good stuff was said and we didn’t want you to feel left out. Here are a few snippets from our conversation:

Mobile First and Responsive

Luke: Step out to any street corner and people have their face in a smartphone. That trend doesn’t show any signs of letting up. In fact, it’s constantly growing. I think the whole idea of Mobile First is reaching all these audiences anywhere and everywhere. ‘Cause you can pull out your mobile device anywhere and everywhere. All the kinds of things people are doing are all the kinds of things we used to make websites about — buying things, looking up information, talking to their friends, killing some downtime, anything and everything is now mobile.

Mobile First, in a responsive paradigm, for me, forces you to focus on the stuff that matters, front and center. So what you see is people designing things desktop-site down. What you end up with is that they cram everything and the kitchen sink into the site. They make it huge and bloated in terms of performance, in terms of content, in terms of features. Then, to get down to a mobile view, they stack everything into one long list. It’s huge and it takes forever. It basically creates a crappy mobile experience.

Shifting Paradigms

Jonathan: We’re doing something different with 4 than we did with 3. When we did 3, we said “2 is dead.” With 4, 3 is still there. Because even with our clients, it’s going to be another year of us beating the drum as much as we can to get our clients to sign up doing things Mobile First.

The nice thing about Foundation is we’ve always built Foundation so that it’s probably six months to a year ahead of where we are.

Luke: That’s an interesting philosophy. Sorta building ahead of where your clients are, bringing them there over time, and learning the lessons.

Jonathan: We have to drag them kicking and screaming. On the way, we get there ourselves.

Luke: I talk with a lot of companies around this sort of stuff. All of them know the terms. They know responsive web design. They know Mobile First. They know that they should be acting on it. But what’s really holding them back is their existing properties and processes. To be clear, what makes people uncomfortable is that it’s a different way of working. It’s different than what we’ve been doing for 20-plus years.

My counter-argument to that is that it’s a pretty different web, a pretty different world than it was 20 years ago. If you’re expecting things that worked for you 20 years ago to work today, I don’t think that’s a viable way to run a business.

The other argument that I hear is that it costs more, takes more time. My response is: OK, so you can keep making a desktop and laptop site and just not have all these new audiences on tablets, on smartphones and all that stuff. If you want more usage on these devices, more online time, you have to invest a little more. It’s not going to come for free. Nobody just comes and hands you a pile of money or customers if you do nothing.

Jonathan: At some point, it’s just going to be the cost of doing business.

Forward the (Mobile First) Foundation

Jonathan: Luke got us turned on to the whole thing. We had lunch … how long ago?

Chris: Back in November … maybe September …

Jonathan: About six months ago, we had lunch with Luke. And Luke was like beating us over the head with “Foundation ought to go Mobile First”. We had talked about it before, but that was the first conversation where we got to the end of it and were like, “OK, that makes some sense.”

Chris: He made us look at Zepto too.

Jonathan: He turned us on to Zepto. So that was a good conversation. I think it was a confluence of — he made a pretty good case for it. Honestly, I think, at least to me, the best case so far. Since we’re doing things Mobile First, technically, we have the capability with Foundation to build experiences that don’t suck for like really old phones and feature phones. We’re not going to inherit all the styles we try to cram in there. It will actually be a mobile site.

So we can broaden our appeal by simplifying what we present for devices like that or older browsers like IE6 or 7. You could reasonably say you can build a site for IE6 using Foundation 4, which wasn’t the case with Foundation 3. That was a win.

Luke: To build on that. The promise of tomorrow, for me, is more and more multi-device web. There’s no shortage of devices.

Toward Tomorrow and Beyond

Luke: I think that it’s encouraging to see that more and more people are understanding this opportunity and jumping on it. You guys, potentially working with clients, using Foundation — it’s a great vehicle for understanding what they’re inevitably going to have to do on the Web. I appreciate that you guys are moving it forward and pushing it past where the clients are right now. In the end, I think it’s going to be good for you and for them. It’s not a negative thing for me. I do agree that change is hard. But it’s impossible to deny that the mobile thing is here. And you’re going to have to deal with it. And eventually deal with it in a good way.

Jonathan: Pretty stoked about where Foundation is going. We wouldn’t have taken the direction we did if you hadn’t badgered us for the last year and a half.

A crazier prediction: iPhone Plus is real, and huge

Article: Source

So far, I’m betting on an A5X-powered Retina iPad Mini by this fall. While I’m making semi-crazy predictions about future iOS products so I can look back on this in a year and probably feel like an idiot for being so wrong, here’s one more.

The recently rumored, larger-screened “iPhone Math”, or more likely “iPhone Plus”, is plausible as an additional model (not a replacement) alongside the 4” iPhone. And there’s a good chance that it would have a 4.94”, 16:9 screen.

The theory is easy to understand: perform John Gruber’s Mini-predicting math backwards. The iPad Mini uses iPhone 3GS-density screens at iPad resolution. What if an iPhone Plus used Retina iPad screens with iPhone 5 resolution, keeping the rest of the design sized like an iPhone 5?
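Working that math through is a two-line calculation (a quick sketch using the figures from this piece):

```javascript
// iPhone 5 pixel dimensions at the Retina iPad's pixel density.
const widthPx = 640, heightPx = 1136, ppi = 264;
const diagonalPx = Math.sqrt(widthPx ** 2 + heightPx ** 2); // Pythagorean diagonal in pixels
const diagonalInches = diagonalPx / ppi;                    // divide by density to get inches
console.log(diagonalInches.toFixed(2)); // "4.94"
```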

Its 640 × 1136, 264 DPI screen would measure 4.94” diagonally, and it would look roughly like this next to an iPhone 5:


(Please pardon the flaws caused by my amateur Photoshop skills.)

It looks a bit crazy, but it’s not that implausible. To see it at scale:

By keeping the pixel dimensions the same as the iPhone 5, no app changes would be necessary. While the larger screen would hinder one-handed use, two-handed use would actually be easier because the touch targets would all be larger, and UIKit’s standard metrics and controls still work well at that physical size.
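To make the touch-target point concrete: a standard 44-point target (88 pixels at Retina scale, the commonly cited iOS minimum) would be physically larger on the lower-density screen, since the iPhone 5’s display is 326 PPI versus the mockup’s 264 PPI.

```javascript
// Physical size of a 44 pt (88 px at 2x Retina scale) touch target.
const targetPx = 44 * 2;
const onIphone5 = targetPx / 326;    // iPhone 5 density, in inches
const onIphonePlus = targetPx / 264; // hypothetical iPhone Plus density, in inches
console.log(onIphone5.toFixed(2));    // "0.27"
console.log(onIphonePlus.toFixed(2)); // "0.33"
```

Every control grows by the same ratio, roughly 23%, without a single app change.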

Here’s how it would look in Apple’s lineup:


Why would Apple release this?

First and foremost, there’s significant demand for larger-screened phones. As much as we make fun of the Galaxy Note, it sells surprisingly well, especially outside of the United States. Other large Android phones sell very well almost everywhere.

The iPhone has lost a significant number of sales to buyers who either wanted a larger screen or were drawn to how much better the large screens look in stores. Here’s how this theoretical iPhone Plus looks next to the large-screened competition:

From left: iPhone 5, Galaxy S III, iPhone Plus mockup, Galaxy Note II.

Now, imagine that lineup without the iPhone Plus mockup. That’s how the shelf looks today when a buyer goes into a phone store. See the problem?

An iPhone Plus almost as big as a Galaxy Note isn’t ideal for many people, but it doesn’t need to be quite that large to accommodate a 4.94” screen. It’s clear that other manufacturers have found designs and techniques that give larger-screened phones smaller bezels. Apple could achieve similar results and shrink the “forehead” and “chin” even further, limited primarily by the size of the Home button and the desire to keep the forehead and chin equal height.

A 4.94”-screened iPhone doesn’t sound too ridiculous these days.

Buyers wanting a small phone or better one-handed operation could still buy a 4” iPhone, and people who want a large screen would finally have an iPhone as an option.

iFrame. We convert your presentation into an iPad application with nothing extra: no buttons, document pickers, or network indicators, only your presentation. We follow the rule: the simpler, the better. Simpler does not mean poorer: an iFrame presentation can include pictures, galleries, video, and 3D views. Simpler also means easier and more convenient. Source