A travel to a deranged city. A city immersed in chaos, bad publicity and broken souls. Directed by Nathaniel Brown & Gustavo Torres. 2014.
The art of motion control. More at http://www.taomc.com/sisyphus
"Dreamscape" is an interpretation of vision and a representation of the 'hypercube' via explorations of architectural landscape. This lighting sculpture was inspired by the characteristics of the 'hypercube' and its fourth dimension. It is an affair of finding the attraction and hidden messages within this cubical structure. Each layer of the structure creates a new dimension with a different depth of field. Seen through its layers from different angles, the sculpture offers a visual attraction and an aesthetic experience of light in motion, illusion and dimension.
This blog is a quick look at some simple UI motion design principles. There isn't much documented about this area of mobile UI design, and I thought there would be some value in expressing my views. From what is documented already, I urge anyone interested in UI motion to check out Pasquale D'Silva http://psql.me/ and Johannes Tonollo's Meaningful Transitions http://www.ui-transitions.com/#home.
These basic principles I've outlined focus more on the what and why, rather than the how-to of motion / animation. With the increasing emphasis on motion (largely thanks to the more pared-back design of iOS 7), it's important that it is implemented with the same integrity and purpose as all the other aspects of UI design. With the exclusion of skeuomorphic design, there is now a freedom for content to behave in an unrestricted manner. Gone are the awkward and sometimes absurd transitions that appeared to break all laws of their pre-defined physical environment. Now the space has opened for a much richer identity and defined language or landscape for mobile UI, and motion is very much an integral part of that.
The most obvious principle is that any motion or animation should be of the highest standard possible. Apps should look at going beyond "out of the box" motion solutions and making something bespoke and truly engaging. Within the app the motion should convey a distinct character, whilst keeping a clear consistency throughout. Behaving in an expected manner will help maintain a stable relationship with the user, keeping them engaged with the experience.
Motion should help ease the user through the experience. It should establish the “physical space” of the app by the way objects come on and off the screen or into focus. It should aid the flow of actions, giving clear guidance before, during and after. It should serve as a guide, keeping the user orientated and preventing them from feeling lost, reducing the need for additional graphics explaining where they are or have been.
Motion should give context to the content on screen by detailing the physical state of those assets and the environment they reside in. Without the restrictions of skeuomorphism, the UI is free to behave in any manner without seeming contradictory to its pre-defined environment. Adding a stretch or deform to an object, or applying inertia to a simple list scroll, can make the experience much more playful and engaging.
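The inertia mentioned above is easy to sketch in code. What follows is a minimal, framework-agnostic model (not any real platform's API — the function and parameters are illustrative assumptions): each frame, the scroll position advances by the current velocity, and the velocity decays by a friction factor until it settles.

```python
def scroll_with_inertia(position, velocity, friction=0.95, min_velocity=0.5):
    """Advance one frame of an inertial scroll: the position moves by the
    current velocity, and the velocity decays by a friction factor."""
    position += velocity
    velocity *= friction
    if abs(velocity) < min_velocity:
        velocity = 0.0  # settle once movement is imperceptible
    return position, velocity

# Simulate a flick: release the list at 40 px/frame and let it coast.
pos, vel = 0.0, 40.0
frames = 0
while vel != 0.0:
    pos, vel = scroll_with_inertia(pos, vel)
    frames += 1
# The scroll glides a long way past the release point, easing to a stop.
```

Tuning the friction factor is exactly the kind of character decision the principle above is about: closer to 1.0 feels slick and slippery, lower values feel heavy and damped.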
Motion should be responsive and intuitive. It should react in a way that reassures the user, rather than surprises or confuses them. The reaction of the UI to a user's actions should be complementary and comprehensible.
It should evoke a positive emotional response, whether that be comfort from a smooth scroll or excitement because of a well executed animation transition.
Steer clear of distracting or even confusing the user with too much animation; subtlety is key. Motion should be used to maintain or help focus, not take it away. Also, don't over-elaborate aspects such as screen transitions: this becomes increasingly frustrating to the user over time, especially if they are simply left waiting for what seems like "forever". (There are no examples of this, as it's a not-to-do, and I'm not in the habit of kicking people's work in the nuts.)
Based in Canada, designer Thibault Sld explores the realm where “geometry, light, mechanisms and interaction collide,” by creating interactive displays and lights that respond to exterior input. One of his most captivating ideas is Hexi, an interactive array of 60 hexagonal modules embedded with mechanical servos that use data from a nearby depth camera to physically respond to nearby motion. It would be amazing to see an entire room or hallway covered in something like this. You can learn more over on his website, or watch the video above to see it in motion. Source
TECH SESSION 002: LEAP MOTION CONTROLLED 3D VIEWER. Using Leap Motion, 3D objects can be viewed and controlled with different hand gestures - swiping, tilting and zooming. The Motion Controlled 3D Viewer is ideal for showcasing products such as real estate, cars and more, giving your customers a unique way to engage with your brand. Source
Corpo Elettrico 3.0 - Performed At The Invisible Dog Art Center, Brooklyn NY (Long Version). People can change the sound through a simple hand gesture. Users can select four kinds of sounds, which can then be modified through hand movements above the table. By opening your hands, you may exit the sound selection and move on to modify another sound. A molecule is placed at the centre of the graphic interface, and every sound is represented by an atom. The molecule represents our planet, and the hand gestures represent the modifications made by people. Every sound is obtained from the recording of a natural sound, and the users' hand gestures render this sound more or less artificial. Source
The FinRay structure is a prototype for an emotive wall. The emotive FinRay wall is composed of seven separate wall pieces, which can swing their body back and forth. Embedded in the skin of the FinRay are arrays of LED luminaries, which can be programmed individually to give personalized information or just to create an ambient luminescent atmosphere. While the primary synchronous behavior of the firefly is flashing light, the primary synchronous behavior of the FinRay is movement. In the installation, the FinRays are aligned in a row in 7 wall pieces, herein referred to as nodes. The synchronous behavior between the FinRay nodes contrasts with the motion produced by the presence of the participant. The result is a series of complex wave patterns that propagate through the FinRay structure as a whole. The propagating movements of the FinRay are expressed in the changing patterns of light and sound. The LED skins respond directly to user presence by glowing brighter when users are near, and glowing dimmer as they move away. The synchronous and asynchronous behaviors are reflected in the sound design as changes in intensity in response to the FinRay movement. Moments of synchronous behavior are represented by calmer sounds, while asynchronous behavior results in more intense sound. Source
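The description doesn't say how the FinRay nodes actually synchronize, but firefly-style synchrony is classically modeled with coupled phase oscillators. Here is a minimal Kuramoto-model sketch, with seven identical-frequency nodes standing in for the seven wall pieces (the function names and parameters are illustrative assumptions, not the installation's real control code):

```python
import math

def kuramoto_step(phases, coupling=0.5, dt=0.1):
    """One Euler step of the Kuramoto model with identical natural
    frequencies: each node's phase is nudged toward the group's."""
    n = len(phases)
    return [p + dt * coupling * sum(math.sin(q - p) for q in phases) / n
            for p in phases]

def coherence(phases):
    """Order parameter r in [0, 1]; r = 1 means fully synchronized."""
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

# Seven nodes, like the seven wall pieces, starting out of phase.
phases = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
for _ in range(500):
    phases = kuramoto_step(phases)
# After enough steps the phases pull together and coherence approaches 1.
```

A participant's presence could then be modeled as a perturbation injected into one node's phase, which would ripple outward through the coupling exactly like the wave patterns the piece describes.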
It sounds like the computer system on a Bird of Prey on Star Trek, but Kling-Net is at the heart of where VJ and media server maker ArKaos – and a lot of the live visual world – is headed. Forget the phrase “projection mapping,” and think “video mapping” and “pixel mapping.” Yes, projectors will still be a big part of live visuals. But they’ll be just one outlet for visual performance, alongside increasingly-sophisticated LED lighting. And LED lighting is increasingly looking like a major choice in installations.
Let me put that in less technical terms: more bright shiny, shiny from the LEDs.
So, LED is amazing technology for visualization. But it can also be amazingly hard to set up. The race now becomes about how to make that technology easier to access. ArKaos’ play is doing just that with Kling-Net. (Kahplah! Sorry, I’ll stop that now.)
It’s not rocket science to do cool stuff like this with pricey media servers sold for expensive applications. But what about the independent visual artists, the experimental creators pushing the envelope? The good news is, ArKaos is bringing the same media server tech back to GrandVJ, an app that’s far more affordable.
Marco Hinic is engine architect and CEO of ArKaos and creator of Kling-Net. (I love that, in this business, the engineers typically run the show.) Marco tells CDM about what their goals are for the tech:
Kling-Net has been designed to create plug and play LED devices and we are converting more and more manufacturers to work with us. There is nothing else like that on the market and we hope it will enable a new level of integration for many users.
The good news is that we will also support Kling-Net in GrandVJ. Indeed, this is why we are late with releasing GrandVJ: while working on the video mapper in MediaMaster, we added the functionality where a layer can be sent to a mapping surface and / or to Kling-Net – ArtNet. To support that in GrandVJ we need to rework its core engine, and that's what we are doing now… We are now looking forward to presenting that at NAMM with ADJ. Ed.: NAMM is the music trade show in California held in January. Mark your calendars. -PK
The other side of the equation, apart from ArKaos’ software, is the lighting fixtures. Recently, ArKaos announced Chauvet is bringing Kling-Net support to their lighting, which you can see in the video and image below.
Kling-Net is a complex technology, so I’ll do two things. First, I’ll direct you to ArKaos’ site that explains how this works:
Kling-Net @ ArKaos
Second, I ask you to tell us what your experience is with these kinds of applications, and what you’d like to know. We’re fortunate enough to get to talk directly to Marco about the technology he engineered, so, seriously, ask anything. I look forward to your questions.
The future of mapping as a technique is doing more than just mapping. And so, having seen how ArKaos targeted LED lighting, here's the popular MadMapper working with a wide array of lighting, via DMX.
Aptly-dubbed "MadLight," lighting is the banner feature of MadMapper 1.3. With the use of the ArtNet protocol – in turn converted to DMX – you can mix mapping with lighting and stage rigs. The implementation is unique: the light itself, in color and intensity, becomes a "fixture" in your mapping setup, so that the light itself is controlled by the video source. There's an editor and library for the fixtures, too.
Other highlights of this release:
- Automatic channel numbering
- 7 types of pixel configurations (RGB, RGBL, RGBW, LRGBW, RGBWL, L, CMY)
- ArtNet support with up to 131,072 channels of DMX, and real-time fixture preview.
- Real-time pixel mapping setup
- Numerical input for parameters (at last – seriously annoying in previous releases, great to see this fixed)
- Syphon improvements
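As a rough illustration of what a pixel-mapping engine has to do under the hood (this is not MadMapper's or ArKaos' actual code — the function and its layout assumptions are hypothetical), here's a sketch that flattens a frame of RGB pixels into DMX universes of 512 channels each. Note that the 131,072-channel figure above corresponds to 256 such universes.

```python
def pixels_to_dmx(pixels, channels_per_universe=512):
    """Flatten a list of (r, g, b) pixel tuples into DMX universes.
    Each pixel consumes 3 consecutive channels; a universe holds 512
    channels, so whole pixels that don't fit spill into the next one."""
    pixels_per_universe = channels_per_universe // 3  # 170 whole pixels
    universes = []
    for start in range(0, len(pixels), pixels_per_universe):
        chunk = pixels[start:start + pixels_per_universe]
        data = [0] * channels_per_universe
        for i, (r, g, b) in enumerate(chunk):
            data[i * 3:i * 3 + 3] = [r, g, b]
        universes.append(data)
    return universes

# A frame of 200 red pixels needs two universes (170 fit in the first).
frame = [(255, 0, 0)] * 200
out = pixels_to_dmx(frame)
```

Real fixture libraries add per-fixture channel layouts (the RGBW, LRGBW and similar configurations listed above), but the core job is this same flattening of video pixels onto numbered channels.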
As with Modul8, signing means no nagging when launching on Mountain Lion. I know many artists are going nuts over that change from Apple, but it really seems no cause for alarm – signing is very different from the App Store, in that it's a basic way of verifying software that's common across operating systems. And as with GarageCUBE's updates, it can all continue working.
The lighting stuff is great, but I expect many CDM readers will be more excited about the 1.4 beta, thanks to complete Quartz Composer integration.
Also in that version, among other enhancements:
- Lock and unlock presets, for editing
- Improved render performance via the Display Link high priority thread
- Tab and Shift-Tab through Surfaces, Handles (again, finally!)
- Copy and paste between presets or MadMapper instances.
- Zoom to fit current surface selection
And, if you find one of the first five unknown bugs, they give you a t-shirt. Cute.
Minority Report. The sci-fi flick was released in cinemas over a decade ago, but viewers are still captivated by the idea of accessing and moving data with their hands.
The idea still feels like a pipe-dream however, given that many of us still work with a traditional keyboard and mouse/trackpad setup in our office and home. The dream of swiping through the air or talking to a personal assistant like Jarvis from Iron Man feels exactly that – a dream, but nothing more.
Many firms are exploring these ideas, however, and pushing what's possible with our present technological capabilities. The results are fascinating and often deviate from the input methods and interfaces that we've come to long for in Star Trek (holodeck, please!)
These are useful, experimental platforms that will influence the way people engage and work with technology. Better yet, some of them are available right now.
It’s refreshing to point or swipe at a screen and see content react accordingly. There’s an immediate connection of both action and reaction that clearly reflects how the user would interact with a physical object in the ‘real’ world.
Leap Motion is this and more. The device, announced last year, is unique because it's tiny and unobtrusive. A small metallic bar sits between the keyboard and monitor, which anyone can then approach and start using simply by waving their hand.
It’s incredibly accurate, offering detailed handwriting, precision pinch-to-zoom and all sorts of intuitive hand gestures that are both natural and concise. No standing up or arm-flapping required. Just quick and effortless interaction.
A casual onlooker might not see the advantages of MYO straight away. The user attaches an inconspicuous armband, which measures muscle activity as they wave and point at the screen.
The clear advantage over Leap Motion is that it isn’t location-specific. Promotional videos have shown the user walking away from a desktop computer and then altering the volume on the other side of the room, simply by moving his wrist in a circular motion.
Professionals delivering a presentation can forego a remote and simply swipe two fingers in the air to move to the next slide. The use-cases are almost endless, although the clear limitation is that it’s following only one portion of the body. One arm, one control input.
The applications for hardware-enabled virtual reality experiences are mouth-watering. Being able to walk through a digitally rendered field and look left and right, at will, to see what’s around lends an entirely new level of depth and immersion. The opportunity to combine this with sound and touch feedback also hints at fully realized worlds for the user to explore.
It’s still early days and in truth, the hardware is far from perfect. Yet the promise of building an immersive, one-to-one first person perspective has been realized and that’s fascinating in its own right. These futuristic goggles are being aimed at gamers in particular, but the opportunity to use it for personal computing is also plain to see.
From the moment Google formally unveiled Glass at its I/O developer conference in 2012, everyone couldn’t stop talking about it. The idea of wearing a pair of perfectly normal glasses, fitted with a powerful computer and head-mounted display seemed impossible.
Yet Google seems to have nailed it. Even better, the company plans to release the device to the public in the not-so-distant future. It offers a point-of-view camera capable of shooting photos and 720p HD video, as well as a small touchpad for navigating menus and the onscreen interface.
Glass appears to be the epitome of mobile computing. The device can be taken anywhere and is accessible at any time. It's also small and relatively inconspicuous, which means the device is out of the way when the user wants to focus on their surroundings.
Kinect, version 2.0
Motion controls have had a tough old time in the video game industry. When the Nintendo Wii was launched, it heralded a new age of remote waggling in the living room. Third-party developers struggled to take advantage of the technology in a meaningful way, however, and had to compete with counter-offers from both Microsoft and Sony.
Kinect, a motion sensor that uses an infrared projector and camera to analyze the player’s movements, was a novel idea when it launched in 2010. The ability to track the entire body produced a couple of memorable experiences such as Dance Central and Child of Eden, but it suffered from frequent accuracy issues.
Microsoft unveiled the next version of Kinect simultaneously with the Xbox One, its new video game console launching this year. The resolution has been upped to 1080p, and an ultra wide-angle lens means that it can be used in even the smallest apartments.
The kicker, however, is that like the original Kinect, Microsoft will also be launching it for Windows next year. The previous version was embraced by the modding community and resulted in a number of innovative and off-the-wall experiments. Even better hardware should produce more of the same.
Still looking to re-enact Minority Report? G-speak, built by Oblong Industries, is the closest working product to realizing that dream. Users don a pair of specialized gloves that can then be used to interact with data through various arm movements and hand gestures.
It integrates with large screens and multiple surfaces, encouraging large-scale collaborative projects and a more direct approach to problem-solving.
The gloves themselves are a little unattractive, but it means that anyone can use the system without re-calibrating the hardware. It’s a little way off Tony Stark’s personal lab in Iron Man 3, but the groundwork is there to realize this pioneering form of interaction.
Google Talking Shoe
At South by Southwest (SXSW) 2013, Google showed up with an interactive playground and a pair of talking sneakers, nicknamed 'The Talking Shoe'.
It’s not a consumer product – which is probably a good choice – but it does highlight the sort of wacky, off the wall thinking that even high-profile companies such as Google are coming up with.
This high-tech pair of trainers comes equipped with a pressure sensor, accelerometer and gyroscope, which track the user's movements to deliver progress reports, advice and general abuse, such as: "Sitting down on the job, are we?"
It’s all a bit silly, but isn’t that what experimental projects are all about?
Microsoft: Live, Work, Play
Microsoft has built what it likes to call an ‘Envisioning Center’, where employees develop and prototype ideas that could be used by consumers in the next five to ten years. Back in March, the company released a promotional video offering a glimpse of the future, which involved an awful lot of touchscreens that are connected with one another.
Forget wallpaper, as some of these screens will take up the entire wall in your living room, kitchen or bedroom. Users are seen clamping a Surface tablet into a large desk – similar to what an architect might use – which is equipped with a huge touchscreen for cross-platform creation.
The same device, but fitted in the kitchen, expands on the concept of pinning artwork and notes to the refrigerator door, enabling users to bring up photos and Word documents created on any device around the home.
Each wall screen is also fitted with a Kinect-style webcam for analyzing objects in the room. One demonstration has the user raising an ingredient and asking what he should cook with it; cue a series of recipes and step-by-step instructions, displayed on a kitchen table-top.
Touch screen devices are nothing new, but the idea of seamlessly combining them into one unified surface, alongside huge mounted wall screens, could easily change users’ behavior and workflow around the house.
Remember those amazing wristwatches that James Bond used to wear? The Rolex with a small laser beam in Never Say Never Again, or the Seiko Quartz watch with a built-in telex for sending mission critical messages? Well, unfortunately those don’t exist.
What we do have, however, is Pebble. It’s the first truly successful smartwatch, combining expansive functionality with attractive, robust hardware. Funded via Kickstarter – and breaking a few records in the process – Pebble offers a small e-ink display that communicates with an Android or iOS device over Bluetooth.
The Pebble comes with a few apps pre-installed, but the company’s open SDK means that anyone should be able to push the platform forward with new and interesting software.
One of the more interesting trends in the last few years has been the development of voice recognition software for various hardware ecosystems. Siri is one of the most notable, having been launched by Apple in October 2011 as a personal assistant for the iPhone and iPad. Users can use everyday words and phrases to execute tasks for a number of different applications, including reminders, email and weather.
Google has introduced its own interpretation as part of its standalone Google Search app for iOS and Android. It’s fast, accurate and intuitive, to the point where Google has also decided to introduce it as part of the Chrome web browser on the desktop.
Barking commands at a nearby smartphone or tablet can feel a little jarring at first, but the applications are numerous. Being able to prepare a dish in the kitchen and ask for the next instruction, for instance, without washing your hands and swiping across the screen is rather helpful.
The aforementioned Kinect controller is also starting to use this technology for the living room; users will be able to simply say "Xbox, ESPN" to switch over to a live sports game without rooting around for the controller.
Touch, speech, gestures. It’s a brave new world
The emergence of all these platforms points to a future where a traditional keyboard and mouse might be the exception, rather than the rule, for interacting with technology.
That’s not to say we’ll stop using laptops in the next 12 months, or we’ll all be shouting at our monitors in the office for nine hours straight, but there’s a clear opportunity to try new, experimental ideas with our current rate of technological advancements.
The future of user interface is therefore bright, unknown and also pretty darn exciting.
3Gear SDK Demo: Add gestures to your applications. Our technology enables the Kinect to reconstruct a finger-precise representation of what the hands are doing. This allows us to build simple and intuitive interactions that leverage small, comfortable gestures: pinching and small wrist movements instead of sweeping arm motions, for example. Source
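Finger-precise tracking makes gestures like pinching straightforward to detect in application code. The following is a hedged sketch assuming Leap/3Gear-style fingertip positions in millimetres; the function name and threshold are hypothetical, not part of either SDK:

```python
import math

def is_pinching(thumb_tip, index_tip, threshold_mm=30.0):
    """Detect a pinch by thresholding the 3D distance between the
    thumb and index fingertip positions (coordinates in millimetres,
    as hand-tracking SDKs typically report them)."""
    return math.dist(thumb_tip, index_tip) < threshold_mm

# Hypothetical fingertip positions in mm.
assert is_pinching((0, 100, 0), (10, 110, 5))       # ~15 mm apart: pinch
assert not is_pinching((0, 100, 0), (60, 140, 20))  # far apart: open hand
```

In practice you would also debounce the boolean over a few frames so a trembling hand doesn't rapid-fire pinch events, which is part of what makes small, comfortable gestures feel reliable.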
Multitouch Experience Cube is an eye-catching interactive trade show installation. The walkable LED cube for the GRASS GmbH showcase was the highlight at the Interzum 2011 trade show in Cologne, the leading international fair for suppliers to the furniture industry.
V Motion Project: Music Video powered by Kinect. By hacking the Kinect motion tracking software and integrating it with audio production software, The V Motion Project created a tool that could transform the body's movements into music. It's great to see the Kinect technology being used in innovative and creative ways like this, and it helps that the music track is pretty cool.
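The video doesn't detail how V Motion mapped movement to sound, but a common approach with Kinect skeleton data is to quantize a tracked joint position onto a musical scale. Here is a purely hypothetical sketch (the function, ranges and scale are all illustrative assumptions) mapping hand height to MIDI notes on a major pentatonic scale:

```python
def height_to_midi_note(hand_y, y_min=0.5, y_max=2.0,
                        scale=(0, 2, 4, 7, 9), base_note=60):
    """Map a tracked hand height (metres, as a Kinect skeleton might
    report it) onto a two-octave pentatonic scale from middle C."""
    # Clamp and normalize the height into [0, 1).
    t = max(0.0, min(0.999, (hand_y - y_min) / (y_max - y_min)))
    step = int(t * len(scale) * 2)           # two octaves of range
    octave, degree = divmod(step, len(scale))
    return base_note + 12 * octave + scale[degree]
```

Snapping to a scale rather than raw pitch is what keeps full-body performance from sounding like noise: any movement lands on a note that fits the track.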