We were asked by the Museum of Contemporary Art Antwerp to create a digital wall in their newly renovated wing. The entire ground floor has been repurposed to showcase the vast collection of artworks curated by the Museum. Our touch wall pulls in data from the Ensembles.org archive, more than 10 years of carefully collected information about all the owned artworks. By applying a recognisable interactive element, we invite museum visitors, young and old, to play and discover.
These days museums keep large parts of their collections locked away in huge archives. The Fashion Museum of Antwerp (MoMu) turns this unfortunate situation into an asset by digitally showcasing the hidden treasures of its archive to the public. A single computer provides this dynamic view into the collections of MoMu on an enormous 10-square-meter multitouch surface, rendering eight HD tiles at a combined resolution of 7680x2160 at 60 fps. All heavy image rendering and video processing is done by a node.js CMS in combination with phantom.js running on a Linux server. The whole setup supports 80 simultaneous touches.
Welcome to Generation IP:2025 by Virgin Media Business, an in-depth study carried out in conjunction with The Future Laboratory, which provides an exciting glimpse into a hyper-connected Britain in just thirteen years' time. Find out more about what the future holds in 2025.
Virgin Media Business http://www.virginmediabusiness.co.uk/generationip/
RSA Insurance Business - End of Year Party 2013. For six years we have developed digital brand strategy with RSA Group, covering institutional solutions, promotions, loyalty and incentive programs, and management tools for different areas of the company. For the year-end event we developed a digital accreditation system, games for touch tables, interactive kiosks, and an interactive wall. Source
DISPLAX · Skin Ultra · 40 Touch. On February 4 DISPLAX announced the launch of the SKIN ULTRA, a new high-performance 40-touch technology. SKIN ULTRA can be used in flat, curved, and folding touch screen designs and demonstrates performance comparable to the sensitivity of tablets and smartphones. Sizes start at 47” and will scale up to 84” later in 2014. Source
Tactile Rendering of 3D Features on Touch Surfaces. In this project, we develop and apply a tactile rendering algorithm to simulate rich 3D geometric features (such as bumps, ridges, edges, protrusions, texture, etc.) on touch screen surfaces. The underlying hypothesis is that when a finger slides on an object, minute surface variations are sensed by friction-sensitive mechanoreceptors in the skin. Thus, modulating the friction forces between the fingertip and the touch surface creates the illusion of surface variations. We propose that the percept of a 3D “bump” is created when local gradients of the virtual bump are mapped to lateral friction forces. Source
Oqtopus Table LCD. DISPLAX Skin Multitouch is the first large format through glass multitouch foil that offers 20, 10 or 2 independent touches. With multiple independent touches the DISPLAX Skin Multitouch can turn most any surface into an interactive playground for high performance games and manipulating images, or create a serious workspace for demanding tasks. DISPLAX is ideal for combining with an ordinary LCD or any GlassVu screen to make your storefront window interactive at all hours of the day. Source
Multitouch Skin LCD. The same DISPLAX Skin Multitouch through-glass foil described above. Source
Mirror Screen LCD. The same DISPLAX Skin Multitouch through-glass foil described above. Source
LG Transparent LCD. The same DISPLAX Skin Multitouch through-glass foil described above. Source
This topic describes the touch interactions for Windows 8 and provides guidelines for designing good touch interactions. For a handy downloadable version of this topic, go here.
- Use the Windows 8 touch language.
Windows 8 provides a concise set of touch interactions used consistently throughout the system. Applying this language consistently makes your app feel familiar to what users already know. This increases user confidence by making your app easier to learn and use.
- Use fingers for what they’re good at.
A mouse and pen are precise, while fingers aren’t, and small targets require precision. Use large targets that support direct manipulation and provide rich touch interaction data. Swiping down on a large item is quick and easy because the entire item is a target for selection.
- Browse content with touch.
Semantic Zoom and panning make navigation fast and fluid. Instead of putting content in multiple tabs or pages, use large canvases that support panning and Semantic Zoom.
- Provide feedback.
Increase user confidence by providing immediate visual feedback whenever the screen is touched. Interactive elements should react by changing color, changing size, or by moving. Items that are not interactive should show system touch visuals only when the screen is touched.
- Content follows finger.
Elements that can be moved or dragged by a user, such as a canvas or a slider, should follow the user’s finger when moving. Buttons and other elements that do not move should return to their default state when the user slides or lifts their finger off the element.
- Keep interactions reversible.
If you pick up a book, you can put it back down where you found it. Touch interactions should behave in a similar way—they should be reversible. Provide visual feedback to indicate what will happen when the user lifts their finger. This will make your app safe to explore using touch.
- Allow any number of fingers.
People often touch with more than one finger and don’t even realize it. That’s why touch interactions shouldn’t change radically based on the number of fingers touching the screen. Just like the real world, sliding something with one or three fingers shouldn’t make a difference.
- Keep interactions untimed.
Interactions that require compound gestures such as double tap or press and hold need to be performed within a certain amount of time. Avoid timed interactions like these because they are often triggered accidentally and are difficult to time correctly.
This list describes the standard touch-related terms used in Windows 8.
Important To avoid confusing users, please do not create custom interactions that duplicate or redefine existing, standard interactions.
- Press and hold to learn.
This touch interaction causes detailed information or teaching visuals (for example, a tooltip or context menu) to be displayed without a commitment to an action. Anything displayed this way should not prevent users from panning if they begin sliding their finger.
- Tap for primary action.
Tapping on an element invokes its primary action, for instance launching an app or executing a command.
- Slide to pan.
Slide is used primarily for panning interactions but can also be used for moving, drawing, or writing. Slide can also be used to target small, densely packed elements by scrubbing (sliding the finger over related objects such as radio buttons).
- Swipe to select, command, and move.
Sliding the finger a short distance, perpendicular to the panning direction, selects objects in a list or grid (ListView and GridView controls). Display the app bar with relevant commands when objects are selected.
- Pinch and stretch to zoom.
While the pinch and stretch gestures are commonly used for resizing, they also enable jumping to the beginning, end, or anywhere within the content with Semantic Zoom. A SemanticZoom control provides a zoomed out view for showing groups of items and quick ways to dive back into them.
- Turn to rotate.
Rotating with two or more fingers causes an object to rotate. Rotate the device itself to rotate the entire screen.
- Swipe from edge for app commands.
App commands are revealed by swiping from the bottom or top edge of the screen. Use the app bar to display app commands.
- Swipe from edge for system commands.
Swiping from the right edge of the screen reveals the charms that expose system commands.
Swiping from the left edge cycles through currently running apps.
Sliding from the top edge toward the bottom edge of the screen closes the current app.
Sliding from the top edge down and to the left or right edge snaps the current app to that side of the screen.
Note Users can perform direct manipulations like the slide-to-pan, pinch-to-zoom, and turn-to-rotate interactions simultaneously and with any number of touch points.
Designing for touch is more than designing what’s displayed on the screen. It requires designing for how the device will be held (grip).
Typically, different people have a few favorite grips when holding a tablet.
The current task and how it’s presented usually determines which grip is used. However, the immediate environment and physical comfort also affect how long a grip is used and how often it’s changed.
Try optimizing your app for different kinds of grips. But if an interaction naturally lends itself to a specific grip, optimize for that.
Interaction areas: Because slates are most often held along the side, the bottom corners and sides are ideal locations for interactive elements.
Reading areas: Content in the top half of the screen is easier to see than content in the bottom half, which is often blocked by the hands or ignored.
Four most common grips: While there are many ways to hold a tablet, these four grips are most commonly used.
One hand holding, one hand interacting, with light to medium interaction
- Right or bottom edges offer quick interaction.
- Lower right corner might be occluded by hand and wrist.
- Limited reaching makes touching more accurate.
- Reading, browsing, email, and light typing.
Two hands holding, thumbs interacting with light to medium interaction
- Lower left and right corners offer quick interaction.
- Anchored thumbs increase touching accuracy.
- Anything in the middle of the screen is difficult to reach.
- Touching middle of screen requires changing posture.
- Reading, browsing, light typing, gaming.
Device rests on table or legs, two hands interacting with light to heavy interaction
- Bottom of the screen offers quick interaction.
- Lower corners might be occluded by hands and wrists.
- Reduced need for reaching makes touching more accurate.
- Reading, browsing, email, heavy typing.
Device rests on table or stand, with or without interaction
- Bottom of screen offers quick interaction.
- Touching top of the screen occludes content.
- Touching top of screen might knock a docked device off balance.
- Interaction at a distance reduces readability and accuracy.
- Increase target size to improve readability and precision.
- Watching a movie, listening to music.
Size vs. efficiency: Target size influences error rate
There’s no perfect size for touch targets. Different sizes work for different situations. Actions with severe consequences (such as delete and close) or frequently used actions should use large touch targets. Infrequently used actions with minor consequences can use small targets.
People often blame themselves for having “fat fingers.” But even baby fingers are wider than most touch targets.
The average adult finger is about 11 millimeters (mm) wide, a baby’s is 8 mm, and some basketball players have fingers wider than 19 mm!
Target size guidelines: Here are some guidelines for deciding how large or small to make your touch targets.
7x7 mm: Recommended minimum size
7x7 mm is a good minimum size if touching the wrong target can be corrected in one or two gestures or within five seconds. Padding between targets is just as important as target size.
When accuracy matters
Close, delete, and other actions with severe consequences can’t afford accidental taps. Use 9x9 mm targets if touching the wrong target requires more than two gestures, five seconds, or a major context change to correct.
When it just won’t fit
If you find yourself cramming things to fit, it’s okay to use 5x5 mm targets as long as touching the wrong target can be corrected with one gesture. Using 2 mm of padding between targets is extremely important in this case.
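The three size tiers above collapse into a simple decision rule. Here is a minimal sketch of that rule; the function name, parameters, and `cramped` flag are our own illustration, not part of the Windows guidelines:

```python
def recommended_target_mm(severe_consequence: bool,
                          gestures_to_correct: int,
                          seconds_to_correct: float,
                          cramped: bool = False) -> int:
    """Pick a square touch-target size in mm from the guidelines above.

    9x9 mm: severe consequences (close, delete), or errors that take more
            than two gestures or five seconds to correct.
    7x7 mm: the recommended minimum for cheaply correctable errors.
    5x5 mm: only when space is truly scarce and a wrong touch is fixed
            in a single gesture (keep 2 mm of padding between targets).
    """
    if severe_consequence or gestures_to_correct > 2 or seconds_to_correct > 5:
        return 9
    if cramped and gestures_to_correct <= 1:
        return 5
    return 7

# A delete button whose mistakes are costly gets the large target.
assert recommended_target_mm(True, 1, 1.0) == 9
# An ordinary, easily corrected action gets the 7 mm minimum.
assert recommended_target_mm(False, 1, 2.0) == 7
```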
Most people are right handed
Most people hold a slate with their left hand and touch it with their right. In general, elements placed on the right side are easier to touch, and putting them on the right prevents occlusion of the main area of the screen.
As you plan the UI and the interactions supported by your app, always keep in mind the wide range of abilities, disabilities, and preferences of your users. Following accessible design principles from the beginning helps make your app accessible to the widest possible audience. For more info on planning for accessibility, see Design for accessibility.
Learn how to organize the content in your Windows Store app so your users can navigate easily and intuitively. Using the right navigation patterns helps you limit the controls that are persistently on screen, such as tabs. This lets people focus on the current content.
Most Windows Store apps in Windows 8 will use a hierarchical system of navigation. This pattern is common and will be familiar to people, but is made even better by the Hub navigation pattern. This pattern makes Windows Store apps fast and fluid while still being easy to use.
This pattern is best for apps with large content collections or many distinct sections of content for a user to explore.
The essence of Hub design is the separation of content into different sections and different levels of detail.
Hub pages are the user’s entry point to the app. Here content is displayed in a rich horizontally panning view allowing users to get a glimpse of what’s new and available.
The Hub consists of different categories of content, each of which maps to the app’s Section pages. Each Section should bubble up content or functionality. The Hub should offer a lot of visual variety, engage users, and draw them in to different parts of the app.
Section pages are the second level of an app. Here content can be displayed in any form that best represents the scenario and content the Section contains.
The Section page consists of individual items, each of which has its own Detail page. Section pages may also take advantage of grouping and a panorama style layout.
Detail pages are the third level of an app. Here the details of individual items are displayed, the format of which may vary tremendously depending upon the particular type of content.
The Detail page consists of item details or functionality. Detail pages may contain a lot of information or may contain a single object, such as a picture or video.
Many Windows Store apps in Windows 8 use a flat system of navigation. This pattern is often seen in games, browsers, or document creation apps, where the user moves between pages, tabs, or modes that all reside at the same hierarchical level.
This pattern is best when the core scenario involves fast switching between a small number of pages or tabs.
The essence of the Flat system is the separation of content into different pages.
Top app bar
The top app bar is great for switching between multiple contexts. Examples include tabs, documents, and messaging or game sessions.
This bar is a transient element that resides at the top of the screen, and is made visible when users swipe from the top or bottom edge. While formatting of items in the bar can vary, a typical treatment is the use of a simple thumbnail.
Unlike the hierarchical system, there is typically no persistent back button or navigation stack in the flat system, so moving between pages is usually done through direct links within the content or the top app bar.
You can choose to include other functionality within the top app bar, such as adding a ‘+’ button to create a new tab, page, or session.
The following sections show the anatomy of navigating between sections in an app, between different levels in the hierarchy, and within a single app page.
Header and Back button
The header labels the current page and is useful for wayfinding. The Back button makes it fast to get back to where you were.
The Hub page pulls information from different areas of the application onto one screen. It gives the user a bird’s-eye view of everything available in the app.
Content sections, or categories
Content sections can be formatted to best display the functionality or items they promote.
Semantic zoom: navigating between levels in a hierarchy
Semantic zoom makes scanning and moving around a view fast and fluid, especially when the view is a long panning list.
Top app bar
The top app bar contains transient access to navigation controls or to other areas of the app.
The header menu is available from anywhere in the app, and allows users to quickly jump from one section of the app to another.
The home link, located at the bottom of the header menu, is a quick way to get back to the root of the app.
Bottom app bar
The bottom app bar contains transient access to commands relevant to a particular view.
These commands change the way in which content is displayed within a specific view. The best place for them to reside is in the app bar.
Swiping from the edge of the screen is what makes the app bars and charms appear.
Users can navigate within apps and throughout the system by swiping a finger or thumb from an edge. In order to use Windows Store apps efficiently, users learn what each of the following edge swipes does:
- Swiping from the bottom or top edge of the screen reveals the navigation and command app bars.
- Swiping from the right edge of the screen reveals the charms that expose system commands.
- Swiping from the left edge cycles through currently running apps.
- Sliding from the top edge toward the bottom edge of the screen closes the current app.
- Sliding from the top edge down and to the left or right edge snaps the current app to that side of the screen.
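The mapping from start edge to behavior can be sketched as a small classifier. This is purely illustrative: the `EDGE_MARGIN` threshold and the label strings are assumptions of ours, not the actual Windows 8 edge-detection logic or API:

```python
EDGE_MARGIN = 20  # px; how close to an edge a touch must start (assumed value)

def classify_edge_swipe(x: int, y: int, width: int, height: int):
    """Map a touch-down point to the edge gesture it could begin.

    A top-edge start may become either the app bars (short swipe) or an
    app close/snap (slide to the bottom or a side); distinguishing those
    requires tracking the full stroke, which this sketch omits.
    """
    if x <= EDGE_MARGIN:
        return "switch-app"   # left edge: cycle through running apps
    if x >= width - EDGE_MARGIN:
        return "charms"       # right edge: system commands
    if y <= EDGE_MARGIN or y >= height - EDGE_MARGIN:
        return "app-bars"     # top/bottom edge: app commands
    return None               # interior touch: not an edge gesture

assert classify_edge_swipe(5, 400, 1366, 768) == "switch-app"
assert classify_edge_swipe(1360, 400, 1366, 768) == "charms"
```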
We will use a sample app called Food with Friends to illustrate a pattern for using the back button, header menu, and content sections to navigate a Windows Store app.
The header menu contains a link to each section page (level 2) as well as a link back to the hub (level 1), enabling users to move around the app quickly. The menu appears at each level and on every page of the app, making it an efficient and reliable way for users to get where they want to go.
Users can tap on the section label to drill in to the corresponding page for that section. Provide a visual cue, like View all (x), to indicate to users that there are more items in this section than what is shown in the hub. Using this pattern avoids the need to use a tile space or place a link within the content.
Using this pattern, this is what the navigation diagram would look like for the Food with Friends example. This is a simplified diagram showing only canonical examples of navigation elements, used as representatives of everything that’s interactive.
Another part of app navigation is determining when, where, and how to give users more control over the way they experience content. Filters, pivots, sorts and view switchers are all things to consider in your app design.
| Term | Definition | Example |
| --- | --- | --- |
| Filter | Removing or hiding content within a data set, based on some criteria. | When looking for a game to play, you might choose to view only those games categorized as “adventure.” |
| Pivot | Reorganizing content within a data set, based on some criteria. | When looking at a music collection, you might choose to organize songs by artist, album, or genre. |
| Sort | Changing the order in which content is displayed within a data set. | When browsing for an article to read in a news app, you might choose to see the most recent articles listed first. |
| View | Changing the style or method in which content is displayed. | When browsing for a place to eat in a restaurant-finding app, you might choose to view restaurants on a map instead of in a list. |
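Three of these four techniques operate directly on the underlying data set (a view switch only changes presentation). A minimal sketch, using an invented toy music library to show how filter, sort, and pivot differ:

```python
# Toy music library; the records and field names are invented for this sketch.
songs = [
    {"title": "Aurora", "artist": "Nova", "genre": "rock", "year": 2011},
    {"title": "Basalt", "artist": "Nova", "genre": "rock", "year": 2009},
    {"title": "Cirrus", "artist": "Echo", "genre": "jazz", "year": 2012},
]

# Filter: remove or hide content based on some criteria.
rock_only = [s for s in songs if s["genre"] == "rock"]

# Sort: change the order in which content is displayed.
newest_first = sorted(songs, key=lambda s: s["year"], reverse=True)

# Pivot: reorganize the same content around a different axis.
by_artist = {}
for s in songs:
    by_artist.setdefault(s["artist"], []).append(s["title"])

assert [s["title"] for s in rock_only] == ["Aurora", "Basalt"]
assert newest_first[0]["title"] == "Cirrus"
```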
Use on-canvas controls for filtering, pivoting, or sorting when finding an item is a primary task, like in a collection or search result page.
Controls should go in the app bar if the focus of the app is on browsing content, like a magazine or shopping app.
For filtering and sorting content within a collection view, filter and sort commands can be placed in a row between the header and content. In the following example, the view is filtered to show only TV episodes, sorted and grouped by series.
In this example for a marketplace app, drop-down selection controls filter the content for the current view. As the menus show, the currently active filter appears selected in the drop-down list.
The top app bar is used primarily for navigating sections or pages of an app that use the Flat navigation pattern. Sometimes called a navigation app bar, it can also be used along with the Hierarchical pattern, in lieu of the header menu, to provide global navigation controls. The top app bar should show up on every page and at all levels of the app to provide users with a convenient, consistent way to get around.
In this finance app example, the hub (L1) promotes sections of the app (Headlines, Watchlist) to the hub, and the section headers link in to them. At the section level (L2), when the top app bar is invoked by swiping the top or bottom edge, the user has access to the root and all other sections of the app.
The app bar is used primarily as a commanding surface, but it can also be used to alter the way in which content is being viewed. Switching views, pivoting, filtering and sorting can all be done by using the app bar. Don’t use the app bar for navigating from one place in the app to another. All app bar items should act on the content currently in view.
In this calendar app example, the view defaults to a month view, which this app has optimized for. Commands to choose other calendar views are in the app bar, accessed by swiping from the top or bottom edge. Other commands, such as making a new appointment, may appear in the bar as well.
In the All Restaurants page of the Food with Friends example, options for viewing items as a list or map are available, as are filtering and sorting the view based on certain criteria such as cost, location, and rating. Here, filtering options are exposed as controls in a menu Flyout.
For UNIQLO’s advertising campaign of the famous UT t-shirts series we created the interactive booth which has been located in the heart of “Mega Belaya Dacha” shopping mall for the whole month. Using multi-touch displays with virtual catalog visitors could browse numerous UT t-shirts designs. Source
Source. KRAFT Macaroni & Cheese, delivers an iPad app that stops the wastage (at the hands of Kids) of tonnes of pasta pieces the world over, and creates a new, super cool way for those same kids to create all the best Macaroni Art they can possibly imagine.
This Grolsch multi-screen campaign is an interesting example of extending a TVC into an interactive online video and mobile experience that, in turn, drives retail foot traffic. Starting with a TVC introducing a bold character, the ad then challenges you to go online to continue the conversation.
The next great gadget might be one you don’t even touch. Here are five experts’ thoughts on what it means, and what the future might look like.
Much of the current crop of gadgetry runs on touchscreens, but it won’t always be that way. We’re already seeing a generation of gadgets that do away with screens entirely, starting with the early success of the Kinect. A more precise gesture-tracking module, the Leap Motion controller, is shipping out to nearly 30,000 developers this fall, planting seeds for a post-touch takeover in the next few years. In an interview this summer, Valve’s Gabe Newell put it this way:
You have to look at what’s going to happen post-tablet. If you look at the mouse and keyboard, it was stable for about 25 years. I think touch will be stable for about 10 years. I think post-touch, and we’ll be stable for a really long time — for another 25 years.
But one big question still hasn’t been answered: what is it good for? Post-touch hasn’t found the killer use case that the mouse found with GUIs and the touchscreen found with mobile web browsing and apps — but it’s not for lack of trying. We’ve had a flood of prototypes, demos and art projects, any one of which could flourish into an industry — that is, once every laptop comes with a near-field depth camera. As for which will take off…it’s anyone’s guess. But some guesses are better than others:
Post-Touch Means Smaller Gestures
Michael Buckwald, CEO of Leap Motion
This technology is a fundamental transformation akin to the mouse because, if done correctly, it can be just an unambiguously better way of doing a large number of things. It’s everything from the way people interact with their social graph and see their Facebook connections, to the way surgeons interact with things in the operating room, to how engineers build and interact with 3D models. We expect all those things to change.
What’s bad about touch is that it has to be one-to-one to make sense, so if I want to move something from the top right corner to the bottom left corner, I have to move my finger that distance. Even on a tablet that starts to feel a little inefficient, and when you get to a giant touchscreen like a 22-inch monitor or a touch-TV, it’s radically inefficient and extremely tiring. What we’re able to do because the user is back from the screen and not physically touching it, is have that same feeling of connectedness. We envision people moving their fingers just a couple of millimeters really, and moving the cursor across the entire screen based on those movements.
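Buckwald's point can be made concrete: a touchscreen is locked to a one-to-one mapping, while an in-air sensor can apply an arbitrary gain, so a few millimeters of finger travel can cross the whole display. A hypothetical sketch (the gain value is an assumption for illustration, not a Leap Motion parameter):

```python
def cursor_delta_px(finger_delta_mm: float, gain: float = 400.0) -> float:
    """Scale finger movement (mm) to cursor movement (px).

    A touchscreen is forced to a 1:1 mapping in device units; an in-air
    sensor is free to choose any gain, so small motions can be enough.
    """
    return finger_delta_mm * gain

# At this (assumed) gain, ~5 mm of finger travel spans a 1920 px display.
assert cursor_delta_px(4.8) == 1920.0
```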
Post-Touch Cameras Will Come With Every Laptop
Doug Carmean, Researcher-at-Large for Intel Labs
As soon as next year, you’ll be able to see Logitech near-field depth cameras integrated everywhere there’s a standard webcam in laptops. And they’ll have the capabilities to do fine-motor control detection. Think about what you could do with that. You can use them for feature recognition. You can start doing emotion detection. Those are all things that I’ve seen that are in R&D today, that you could project forward.
Another aspect of that is that with Kinect, people are going beyond skeletal tracking and doing full-on 3D maps of bodies and they’re projecting them into space. And the 3D mapping stuff allows you to create much more compelling systems for both augmented reality and virtual reality than we’ve seen in the past.
Post-Touch Is Better At 3D
James Alliban, interaction designer
I think post-touch will be best for creative software – like 3D packages and Photoshop. Navigating around a 3D environment and tweaking vertices and polygons makes far more sense in a gesture enabled space over the standard 2D-only input devices. I suspect it will also make sense for casual web browsing. I’ve been banging on about Augmented Reality eyewear and HUDs for a couple of years now so I’m fascinated to see how Google gets on with Project Glass. I’m fairly certain the first iteration will be disappointing (at least for what I have in mind), but I’m looking forward to 5-10 years down the line, when we have embedded depth-sensing tech that allows for gesture controlled interfaces, when the digital layer is seamlessly integrated into your surroundings and the high resolution image and wide field of vision allows for a fully immersive experience.
Whenever we see gesture enabled interface demos they tend to be computer science guys moving and zooming stock photos, waving frantically at a large screen. This isn’t a great look for the future of the interface. There’s a term for the physical effects that long term exposure to gestural input can have on a person — Gorilla Arm. Tom Cruise apparently suffered terribly from this while filming Minority Report. The argument against most gesture-enabled computing is that it looks exhausting. Great for blasting zombies but complete overkill for updating a spreadsheet.
Post-Touch Could Use Your Voice, Or Your Eyes
Andrew Hudson-Smith, Director of the Centre for Advanced Spatial Analysis
Post-touch has the potential for instant information retrieval based on eye tracking, voice recognition and augmented reality display technologies. By simply ‘looking’ at an object for a set amount of time — say three seconds — information can be retrieved and displayed. You could compare prices in supermarkets by “eye-scanning” objects, for instance.
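The dwell-to-select idea Hudson-Smith describes reduces to a small state machine: reset a timer whenever the gaze target changes, and fire once the same object has been looked at long enough. A minimal sketch under those assumptions (class and method names are ours):

```python
class DwellSelector:
    """Trigger selection after gazing at one object for `dwell_s` seconds."""

    def __init__(self, dwell_s: float = 3.0):
        self.dwell_s = dwell_s
        self.target = None
        self.elapsed = 0.0

    def update(self, gazed_object, dt: float):
        """Feed the currently gazed object once per frame.

        Returns the object once it has been gazed at for `dwell_s`
        seconds without interruption, otherwise None.
        """
        if gazed_object != self.target:
            # Gaze moved to a new object: restart the dwell timer.
            self.target, self.elapsed = gazed_object, 0.0
            return None
        self.elapsed += dt
        if self.target is not None and self.elapsed >= self.dwell_s:
            self.elapsed = 0.0
            return self.target
        return None

sel = DwellSelector(dwell_s=3.0)
picked = None
for _ in range(7):                       # 7 frames at 0.5 s each
    picked = picked or sel.update("cereal-box", 0.5)
assert picked == "cereal-box"            # selected after 3 s of steady gaze
```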
Post-Touch Can Capture The Whole Body
Myron Krueger’s Videoplace, 1989
Casey Reas, co-creator of Processing
The history of human-computer interaction moves toward interfaces that respond to our complete bodies. The new class of touch screens is an extraordinary step after decades of the keyboard and mouse as the primary interfaces, but they utilize only a narrow part of what hands can do.
The Videoplace installation (1985) by Myron Krueger set us on this new path decades ago, controlled by the full silhouette of a body in motion. I have no idea where full-body and gestural interfaces will lead us, but I do know that artists using Processing, Cinder, OpenFrameworks and other related frameworks are discovering what it will be.
Where does that leave us? Well, depth cameras are available, but whether you’ll be training them on your fingertips, your eyeballs or your whole body is up for debate, just like the question of whether you’ll be using it to make art, play games or retouch photos.
The only thing they need is momentum, the kind of inevitability touchscreens got after the first iPhone launch. It could come from Microsoft, Leap, or somewhere we haven’t even heard of — but however it happens, it’s going to take some getting used to.
LG Music Apps. Sosolimited created a series of music apps for the LG booth at the CTIA Conference. Inspired by musical features on LG’s new line of phones, the apps are played with touchscreens embedded in turntable interfaces.
DeltaZone at Madison Square Garden. Delta Air Lines’ Touch the Future of Travel arrives at a newly refreshed yet still iconic Madison Square Garden: a personalized, curated way for travelers to discover new destinations, collecting content from around the globe and enjoying fantastic vistas that transport them into the magic of destination travel and discovery.