Experimentation and education are a continual fascination for me especially when it comes to future interfaces. Here is some FUI inspiration and a vision into what our near future interactions will look like.
Key themes: reasons to be hopeful
Key themes: reasons to be concerned
Read the full article at http://pewrsr.ch/1oCzpWE
We live in an era of accelerating change, when scientific and technological advancements are arriving rapidly. As a result, we are developing a new language to describe our civilization as it evolves. Here are 20 terms and concepts that you’ll need to navigate our future.
Back in 2007 I put together a list of terms every self-respecting futurist should be familiar with. But now, some seven years later, it’s time for an update. I reached out to several futurists, asking them which terms or phrases have emerged or gained relevance since that time. These forward-looking thinkers provided me with some fascinating and provocative suggestions — some familiar to me, others completely new, and some a refinement of earlier conceptions. Here are their submissions, including a few of my own.
Futurist and scifi novelist David Brin suggested this one. It’s kind of a mash-up between Steve Mann’s sousveillance and Jamais Cascio’s Participatory Panopticon, and a furtherance of his own Transparent Society concept. Brin describes it as: “reciprocal vision and supervision, combining surveillance with aggressively effective sousveillance.” He says it’s “scrutiny from below.” As Brin told io9:
Folks are rightfully worried about surveillance powers that expand every day. Cameras grow quicker, better, smaller, more numerous and mobile at a rate much faster than Moore’s Law (i.e. Brin’s corollary). Liberals foresee Big Brother arising from an oligarchy and faceless corporations, while conservatives fret that Orwellian masters will take over from academia and faceless bureaucrats. Which fear has some validity? All of the above. While millions take Orwell’s warning seriously, the normal reflex is to whine: “Stop looking at us!” It cannot work. But what if, instead of whining, we all looked back? Countering surveillance with aggressively effective sousveillance — or scrutiny from below? Say by having citizen-access cameras in the camera control rooms, letting us watch the watchers?
Brin says that reciprocal vision and supervision will be hard to enact and establish, but that it has one advantage over “don’t look at us” laws, namely that it actually has a chance of working. (Image credit: 24Novembers/Shutterstock)
This particular meme — suggested to me by the Institute for the Future's Distinguished Fellow Jamais Cascio — has only recently hit the radar. “It’s in-vitro fertilization,” he says, “but with a germline-genetic mod twist.” Recently sanctioned by the UK, this is the biotechnological advance where a baby can have three genetic parents via sperm, egg, and (separately) mitochondria. It’s meant as a way to flush out debilitating genetic diseases. But it could also be used for the practice of human trait selection, or so-called “designer babies”. The procedure is currently being reviewed for use in the United States. The era of multiplex parents has all but arrived.
Futurist and scifi novelist Ramez Naam says we should be aware of the potential for “technological unemployment.” He describes it as unemployment created by the deployment of technology that can replace human labor. As he told io9,
For example, the potential unemployment of taxi drivers, truck drivers, and so on created by self-driving cars. The phenomenon is an old one, dating back for centuries, and spurred the original Luddite movement, as Ned Ludd is said to have destroyed knitting frames for fear that they would replace human weavers. Technological unemployment in the past has been clearly outpaced (in the long term) by the creation of new wealth from automation and the opening of new job niches for humans, higher in levels of abstraction. The question in the modern age is whether the higher-than-ever speed of such displacement of humans can be matched by the pace of humans developing new skills, and/or by changes in social systems to spread the wealth created.
Indeed, the potential for robotics and AI to replace workers of all stripes is significant, leading to worries of massive rates of unemployment and subsequent social upheaval. These concerns have given rise to another must-know term that could serve as a potential antidote: guaranteed minimum income. (Image credit: Ociacia/Shutterstock)
In the future, people won’t be confined to their meatspace bodies. This is what futurist and transhumanist Natasha Vita-More describes as the “Substrate-Autonomous Person.” Eventually, she says, people will be able to form identities in numerous substrates, such as using a “platform diverse body” (a future body that is wearable/usable in the physical/material world — but also exists in computational environments and virtual systems) to route their identity across the biosphere, cybersphere, and virtual environments.
"This person would form identities," she told me. "But they would consider their personhood, or sense of identity, to be associated with the environment rather than one exclusive body." Depending on the platform, the substrate-autonomous person would upload and download into a form or shape (body) that conforms to the environment. So, for a biospheric environment, the person would use a biological body, for the Metaverse, a person would use an avatar, and for virtual reality, the person would use a digital form.
It’s time to retire the term ‘Technological Singularity.’ The reason, says the Future of Humanity Institute's Stuart Armstrong, is that it has accumulated far too much baggage, including quasi-religious connotations. It's not a good description of what might happen when artificial intelligence matches and then exceeds human capacities, he says. What's more, different people interpret it differently, and it only describes a limited aspect of a much broader concept. In its place, Armstrong says we should use a term devised by the computer scientist I. J. Good back in 1965: the “Intelligence explosion.” As Armstrong told io9,
It describes the apparent sudden increase in the intelligence of an artificial system such as an AI. There are several scenarios for this: it could be that the system radically self-improves, finding that as it becomes more intelligent, it’s easier for it to become more intelligent still. But it could also be that human intelligence clusters pretty close in mindspace, so a slowly improving AI could shoot rapidly across the distance that separates the village idiot from Einstein. Or it could just be that there are strong skill returns to intelligence, so that an entity need only be slightly more intelligent than humans to become vastly more powerful. In all cases, the fate of life on Earth is likely to be shaped mainly by such “super-intelligences”.
Image credit: sakkmesterke/Shutterstock.
While many futurists extol radical life extension on humanitarian grounds, few consider the astounding fiscal benefits that are to be had through the advent of anti-aging biotechnologies. The Longevity Dividend, as suggested to me by bioethicist James Hughes of the IEET, is the “assertion by biogerontologists that the savings to society of extending healthy life expectancy with therapies that slow the aging process would far exceed the cost of developing and providing them, or of providing additional years of old age assistance.” Longer healthy life expectancy would reduce medical and nursing expenditures, argues Hughes, while allowing more seniors to remain independent and in the labor force. No doubt, the corporate race to prolong life is heating up in recognition of the tremendous amounts of money to be made — and saved — through preventative medicines.
This concept was suggested by our very own Annalee Newitz, editor-in-chief of io9 and author of Scatter, Adapt and Remember. The idea of repressive desublimation was first developed by political philosopher Herbert Marcuse in his groundbreaking book Eros and Civilization. Newitz says:
It refers to the kind of soft authoritarianism preferred by wealthy, consumer culture societies that want to repress political dissent. In such societies, pop culture encourages people to desublimate or express their desires, whether those are for sex, drugs or violent video games. At the same time, they’re discouraged from questioning corporate and government authorities. As a result, people feel as if they live in a free society even though they may be under constant surveillance and forced to work at mind-numbing jobs. Basically, consumerism and so-called liberal values distract people from social repression.
Sometimes referred to as IA, this is a specific subset of human enhancement — the augmentation of human intellectual capabilities via technology. “It is often positioned as either a complement to or a competitor to the creation of Artificial Intelligence,” says Ramez Naam. “In reality there is no mutual exclusion between these technologies.” Interestingly, Naam says IA could be a partial solution to the problem of technological unemployment — as a way for humans, or posthumans, to “keep up” with advancing AI and to stay in the loop.
This is another term suggested by Stuart Armstrong. He describes it as
the application of cost-effectiveness to charity and other altruistic pursuits. Just as some engineering approaches can be thousands of times more effective at solving problems than others, some charities are thousands of times more effective than others, and some altruistic career paths are thousands of times more effective than others. And increased efficiency translates into many more lives saved, many more people given better outcomes and opportunities throughout the world. It is argued that when charity can be made more effective in this way, it is a moral duty to do so: inefficiency is akin to letting people die.
On a somewhat related note, James Hughes says moral enhancement is another must-know term for futurists of the 21st Century. Also known as virtue engineering, it’s the use of drugs and wearable or implanted devices to enhance self-control, empathy, fairness, mindfulness, intelligence and spiritual experiences.
This one comes via Max More, president and CEO of the Alcor Life Extension Foundation. It’s an interesting and obverse take on the precautionary principle. “Our freedom to innovate technologically is highly valuable — even critical — to humanity,” he told io9. “This implies several imperatives when restrictive measures are proposed: Assess risks and opportunities according to available science, not popular perception. Account for both the costs of the restrictions themselves, and those of opportunities foregone. Favor measures that are proportionate to the probability and magnitude of impacts, and that have a high expectation value. Protect people’s freedom to experiment, innovate, and progress.”
Jamais Cascio suggested this term, though he admits it’s not widely used. Mules are unexpected events — a parallel to Black Swans — that aren’t just outside of our knowledge, but outside of our understanding of how the world works. It’s named after Asimov’s Mule from the Foundation series.
Another must-know term submitted by Cascio, described as “the current geologic age, characterized by substantial alterations of ecosystems through human activity.” (Image credit: NASA/NOAA).
Unlike Moore’s Law, where things are speeding up, Eroom’s Law describes — at least in the pharmaceutical industry — things that are slowing down (which is why it’s Moore’s Law spelled backwards). Ramez Naam says the rate of new drugs developed per dollar spent by the industry has dropped by roughly a factor of 100 over the last 60 years. “Many reasons are proposed for this, including over-regulation, the plucking of low-hanging fruit, diminishing returns of understanding more and more complex systems, and so on,” he told io9.
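As a back-of-the-envelope check (assuming a smooth exponential decline, which the real data only roughly follows), a 100-fold drop over 60 years implies that output per dollar halves, or equivalently that the inflation-adjusted cost per approved drug doubles, roughly every nine years:

```latex
t_{\text{double}} = 60\ \text{years} \times \frac{\ln 2}{\ln 100} \approx 9\ \text{years}
```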
Natasha Vita-More describes this as the ability of a species to produce variants more apt or powerful than those currently existing within a species:
One way of looking at evolvability is to consider any system — a society or culture, for example, that has evolvable characteristics. Incidentally, it seems that today’s culture is more emergent and mutable than physiological changes occurring in human biology. In the course of a few thousand years, human tools, language, and culture have evolved manifold. The use of tools within a culture has been shaped by the culture and shows observable evolvability, from stones to computers, while human physiology has remained nearly the same.
"This is any device, whether biological or technological, that allows humans to reproduce without using a woman’s uterus,” says Annalee Newitz. Sometimes called a “uterine replicator,” she says these devices would liberate women from the biological difficulties of pregnancy, and free the very act of reproduction from traditional male-female pairings. “Artificial wombs might develop alongside social structures that support families with more than two parents, as well as gay marriage,” says Newitz.
Whole brain emulations, says Stuart Armstrong, are human brains that have been copied into a computer, and that are then run according to the laws of physics, aiming to reproduce the behaviour of human minds within a digital form. As he told io9,
They are dependent on certain (mild) assumptions about how the brain works, and require certain enabling technologies, such as scanning devices to make the original brain model, a good understanding of biochemistry to run it properly, and sufficiently powerful computers to run it in the first place. There are plausible technology paths that could allow such emulations around 2070 or so, with some large uncertainties. If such emulations are developed, they would revolutionise health, society and economics. For instance, they would allow people to survive in digital form, and create the possibility of “copyable human capital”: skilled, trained and effective workers that can be copied as needed to serve any business purpose.
Armstrong says this also raises great concern over wages, and over the eventual deletion of such copies.
Ramez Naam says this term has gone somewhat out of favor, but it’s still a very important one. It refers to the vast majority of all ‘artificial intelligence’ work that produces useful pattern matching or information processing capabilities, but with no bearing on creating a self-aware sentient being. “Google Search, IBM’s Watson, self-driving cars, autonomous drones, face recognition, some medical diagnostics, and algorithmic stock market traders are all examples of ‘weak AI’,” says Naam. “The large majority of all commercial and research work in AI, machine learning, and related fields is in ‘weak AI’.”
Naam argues that this trend — and the motivations for it — is one of the arguments for the Singularity being further off than it appears.
Imagine the fantastic prospect of creating interfaces that connect the brains of two (or more) humans. Already today, scientists have created interfaces that allow humans to move the limb — or in this case, the tail — of another animal. At first, these technologies will be used for therapeutic purposes; they could be used to help people relearn how to use previously paralyzed limbs. More radically, they could eventually be used for recreational purposes. Humans could voluntarily couple themselves and move each other’s body parts.
This refers to any situation in which new algorithms can suddenly and dramatically exploit existing computational power far more efficiently than before. This is likely to happen when tons of computational power remains untapped, and when previously used algorithms were suboptimal. This is an important concept as far as the development of AGI (artificial general intelligence) is concerned. As noted by Less Wrong, it
signifies a situation where it becomes possible to create AGIs that can be run using only a small fraction of the easily available hardware resources. This could lead to an intelligence explosion, or to a massive increase in the number of AGIs, as they could be easily copied to run on countless computers. This could make AGIs much more powerful than before, and present an existential risk.
Luke Muehlhauser from the Machine Intelligence Research Institute (MIRI) describes it this way:
Suppose that computing power continues to double according to Moore’s law, but figuring out the algorithms for human-like general intelligence proves to be fiendishly difficult. When the software for general intelligence is finally realized, there could exist a ‘computing overhang’: tremendous amounts of cheap computing power available to run [AIs]. AIs could be copied across the hardware base, causing the AI population to quickly surpass the human population.
I’m sure we missed many must-know terms. Please add your own suggestions in the comments.
Anti Facial Recognition Visor
Interesting approach to avoid identification from cameras by lighting key areas of the face (video embedded below, via the great DigInfo):
This is the world’s first pair of glasses that prevents facial recognition by cameras. The glasses are currently under development by Japan’s National Institute of Informatics.
Photos taken without people’s knowledge can violate privacy. For example, photos may be posted online, along with metadata including the time and location. But by wearing this device, you can stop your privacy from being infringed in such ways.
"You can try wearing sunglasses. But sunglasses alone can’t prevent face detection. Because face detection uses features like the eyes and nose, it’s hard to prevent just by concealing your eyes. This is the privacy visor I have developed, which uses 11 near-infrared LEDs. I’m switching it on now. It prevents face detection, like this."
"Light from these near-infrared LEDs can’t be seen by the human eye, but when it passes through a camera’s imaging device, it appears bright. The LEDs are installed in these locations because, a feature of face detection is, the eyes and part of the nose appear dark, while another part of the nose appears bright. So, by placing light sources mostly near dark parts of the face, we’ve succeeded in canceling face detection characteristics, making face detection fail."
Compared with previous ways of physically hiding the face, this technology can protect privacy without obstructing communication, as all users need to do is wear a pair of glasses.
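The detection being defeated here is, in most consumer cameras, a variant of the classic Viola-Jones approach, which matches dark/bright contrast patterns such as a dark eye band above a brighter nose. A minimal sketch of that detector using OpenCV's bundled pretrained cascade (the input filename is hypothetical) shows what the visor is up against:

```python
# A minimal sketch (not the NII system): Viola-Jones face detection with OpenCV.
# The Haar features it matches are contrast patterns -- e.g. a dark eye band
# above a brighter nose/cheek band -- which is exactly what the visor's
# near-infrared LEDs wash out on the camera's sensor, making detection fail.
import cv2

# Pretrained frontal-face cascade shipped with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("crowd.jpg")            # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

print(f"{len(faces)} face(s) detected")  # 0 if the eye/nose contrast is gone
```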
A 3D Printing system that can create forms without the hindrance of gravity - video embedded below:
A brand new method of additive manufacturing. This patent-pending method allows for creating 3D objects on any given working surface independently of its inclination and smoothness, and without a need for additional support structures. Conventional methods of additive manufacturing have been affected both by gravity and printing environment: creation of 3D objects on irregular or non-horizontal surfaces has so far been treated as impossible. By using innovative extrusion technology we are now able to neutralize the effect of gravity during the course of the printing process. This method gives us the flexibility to create truly natural objects by making 3D curves instead of 2D layers. Unlike 2D layers, which are ignorant of the structure of the object, the 3D curves can follow the exact stress lines of a custom shape. Finally, our new out-of-the-box printing method can help manufacture structures of almost any size and shape.
More at the project’s website here
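To make the "3D curves instead of 2D layers" idea concrete, here is a toy sketch — in no way the project's actual software — that samples a continuous 3D curve (a helix) into extruder waypoints rather than slicing a model into flat layers; the G-code-style output is purely illustrative:

```python
# Toy illustration of nonplanar toolpaths: instead of slicing a model into
# flat 2D layers, describe the deposition path directly as a 3D parametric
# curve -- here a helix, though the axis could follow a stress line.
import math

def helix_path(radius=10.0, pitch=2.0, turns=5, steps_per_turn=100):
    """Sample (x, y, z) waypoints along a helix for the extruder to follow."""
    points = []
    for i in range(turns * steps_per_turn + 1):
        t = 2 * math.pi * i / steps_per_turn       # angle swept so far
        points.append((radius * math.cos(t),
                       radius * math.sin(t),
                       pitch * t / (2 * math.pi)))  # z rises continuously
    return points

for x, y, z in helix_path()[:3]:
    print(f"G1 X{x:.2f} Y{y:.2f} Z{z:.2f}")  # G-code-style move commands
```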
Google’s SCHAFT Takes Home Gold in DARPA Robot Olympics
The DARPA Robotics Challenge Trials 2013 concluded this weekend in Florida, with 16 teams all vying for the top prize of $2 million.
According to DARPA, “The DRC is a competition of robot systems and software teams vying to develop robots capable of assisting humans in responding to natural and man-made disasters. Technologies resulting from the DRC will transform the field of robotics and catapult forward development of robots featuring task-level autonomy that can operate in the hazardous, degraded conditions common in disaster zones”.
All 16 robots were required to complete eight tasks as part of the challenge.
Gizmodo reports that “with 27 out of a possible 32 points in eight challenges, SCHAFT pulled out a decisive victory”.
SCHAFT is a 4-foot-11, two-legged robot that was developed by a spin-off from the University of Tokyo’s Jouhou System Kougaku lab, a startup that Google recently revealed it had acquired.
Black Phoenix is a fictional military corporation that manufactures robots in a not-so-distant future. The idea is to create an album full of designs that could represent a whole line of products, from utility and semi-civilian drones to multi-purpose mobile weaponry systems and vehicles.
“Black Phoenix Project” is a collaboration with photographer Maria Skotnikova, who is responsible for creating the HDR Environment Maps that I used as lighting sources as well as backplates. Visit Maria’s website here.
The images below represent the “10 Days of Mech” session. The goal of this exercise was to create one mech design every day in 3D, from start to finish, without creating preliminary 2D sketches, over a non-stop 10-day period. The first 8 designs followed this rule; the 9th design, “Ambulance Mech,” took 2 days because I wanted to show an “open cockpit” version of it, which took an extra day. So after the exercise was over, I decided to make an extra design (with another 2 days) as a bonus entry, just to bring the total to 10 robots.
Before starting this exercise I spent some R&D time establishing the overall workflow for speed-modeling and tried different techniques that enabled me to accelerate the design process in 3D. The workflow included re-using premade kit-bash parts, graphics/decals, non-subdivision-based concept modeling, and image-based lighting for the final rendering. Click here to read more about the workflow. Click here to visit the online store where you can purchase the original kit-bash sets that were used for the “Black Phoenix” Project designs.
Wal-Mart Canada has launched digital signs at bus stops where customers can use their mobile devices to scan products on posters and have the goods delivered to their homes for free. The campaign will last four weeks. Since consumers are typically pressed for time, this is one way of adding value, Simon Rodrigue, vice president of e-commerce for Wal-Mart Canada, said in a statement. “This campaign allows us to help Torontonians shop for essentials on the go, anywhere, at any time.”
“What we used to have before is, here is something we have on sale; please come to our store and buy it. Now what we’re saying is we have this product on sale; buy it right this instant,” says David Elsner, manager, retail consulting services at PwC. “That cash register is in their hand. They can make that purchase.”
Virtual supermarkets are popping up in subway stations in South Korea, where commuters can virtually shop for items while waiting for the train to come. Customers simply scan an item’s QR code using the free "Homeplus" app and can have it delivered to their doorstep before they even get home. Ranked as the second most hard-working country in the world after Japan, South Korea is rewarding its workers with this timesaving gem.
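The poster side of such scan-to-buy campaigns is technically simple; here is a minimal sketch using the third-party Python qrcode library (pip install qrcode[pil]), with a made-up URL and SKU for illustration:

```python
# A minimal sketch of the poster side of a scan-to-buy campaign: encode a
# product URL in a QR code that a shopper's phone app can decode into an
# "add to cart" action. The URL and SKU below are hypothetical.
import qrcode

sku = "MILK-2L-001"                               # hypothetical product code
img = qrcode.make(f"https://shop.example.com/buy?sku={sku}")
img.save(f"poster_{sku}.png")                     # print this onto the poster
```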
The IBM “5 in 5” marks the eighth year in a row that IBM has made predictions about technology, and this year’s prognostications are sure to get people talking. We discussed them with Bernie Meyerson, the vice president of innovation at IBM, and he told us that the goal of the predictions is to better marshal the company’s resources in order to make them come true.
“We try to get a sense of where the world is going because that focuses where we put our efforts,” Meyerson said. “The harder part is nailing down what you want to focus on. Unless you stick your neck out and say this is where the world is going, it’s hard to turn around and say you will get there first. These are seminal shifts. We want to be there, enabling them.”
(See our complete interview with Meyerson here).
In a nutshell, IBM says: the classroom will learn you, buying local will beat online shopping, doctors will routinely use your DNA to keep you well, a digital guardian will protect you online, and the city will help you live in it.
Meyerson said that this year’s ideas are based on the fact that everything will learn. Machines will learn about us, reason, and engage in a much more natural and personalized way. IBM can already figure out your personality by deciphering 200 of your tweets, and its capability to read your wishes will only get better. The innovations are being enabled by cloud computing, big data analytics (the company recently formed its own customer-focused big data analytics lab), and adaptive learning technologies. IBM believes the technologies will be developed with the appropriate safeguards for privacy and security, but each of these predictions raises additional privacy and security issues.
As computers get smarter and more compact, they will be built into more devices that help us do things when we need them done. IBM believes that these breakthroughs in computing will amplify our human abilities. The company came up with the predictions by querying its 220,000 technical people in a bottom-up fashion and tapping the leadership of its vast research labs in a top-down effort.
Here’s a more detailed description and analysis of the predictions.
Globally, two out of three adults haven’t gotten the equivalent of a high school education. But IBM believes the classrooms of the future will give educators the tools to learn about every student, providing them with a tailored curriculum from kindergarten to high school.
“Your teacher spends time getting to know you every year,” Meyerson said. “What if they already knew everything about how you learn?”
In the next five years, IBM believes teachers will use “longitudinal data” such as test scores, attendance, and student behavior on electronic learning platforms — and not just the results of aptitude tests. Sophisticated analytics delivered over the cloud will help teachers make decisions about which students are at risk, their roadblocks, and the way to help them. IBM is working on a research project with the Gwinnett County Public Schools in Georgia, the 14th largest school district in the U.S. with 170,000 students. The goal is to increase the district’s graduation rate. And after a $10 billion investment in analytics, IBM believes it can harness big data to help students out.
“You’ll be able to pick up problems like dyslexia instantly,” Meyerson said. “If a child has extraordinary abilities, they can be recognized. With 30 kids in a class, a teacher cannot do it themselves. This doesn’t replace them. It allows them to be far more effective. Right now, the experience in a big box store doesn’t resemble this, but it will get there.”
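As a toy illustration of the kind of model such an analytics platform might run — not IBM's actual system — here is a minimal classifier that flags at-risk students from longitudinal features like test scores, attendance, and behavior incidents (all data synthetic):

```python
# A toy sketch of at-risk prediction from longitudinal student data
# (synthetic numbers, not IBM's system or the Gwinnett County data).
from sklearn.linear_model import LogisticRegression

# Features per student: [avg test score, attendance rate, behavior incidents]
X = [[62, 0.78, 5], [88, 0.97, 0], [55, 0.64, 9], [91, 0.99, 1],
     [70, 0.85, 2], [48, 0.55, 12], [83, 0.93, 0], [66, 0.71, 6]]
y = [1, 0, 1, 0, 0, 1, 0, 1]   # 1 = left without graduating (historical label)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Flag current students whose predicted risk exceeds a review threshold
current = [[59, 0.70, 4], [90, 0.98, 0]]
for student, p in zip(current, model.predict_proba(current)[:, 1]):
    if p > 0.5:
        print(f"review: {student} (risk {p:.0%})")
```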
Online sales topped $1 trillion worldwide last year, and many physical retailers have gone out of business as they fail to compete on price with the likes of Amazon. But innovations for physical stores will make buying local more attractive. Retailers will use the immediacy of the store and proximity to customers to create experiences that online-only retail can’t replicate. The innovations will bring the power of the Web right to where the shopper can touch it. Retailers could rely on artificial intelligence akin to IBM’s Watson, which played Jeopardy better than many human competitors. The Web can make sales associates smarter, and augmented reality can deliver more information to the store shelves. With these technologies, stores will be able to anticipate what a shopper most wants and needs.
And they won’t have to wait two days for shipping.
“The store will ask if you would like to see a certain camera and have a salesperson meet you in a certain aisle where it is located,” Meyerson said. “The ability to do this painlessly, without the normal hassle of trying to find help, is very powerful.”
This technology will get so good that online retailers are likely to set up retail showrooms to help their own sales.
“It has been physical against online,” Meyerson said. “But in this case, it is combining them. What that enables you to do is that mom-and-pop stores can offer the same services as the big online retailers. The tech they have to serve you is as good as anything in online shopping. It is an interesting evolution but it is coming.”
Global cancer rates are expected to jump by 75 percent by 2030. IBM wants computers to help doctors understand how a tumor affects a patient down to their DNA. They could then figure out which medications will work best against the cancer and deliver a personalized cancer treatment plan. The hope is that genomic insights will reduce the time it takes to find a treatment from weeks to minutes.
“The ability to correlate a person’s DNA against the results of treatment with a certain protocol could be a huge breakthrough,” Meyerson said. It’ll be able to scan your DNA and find out if any magic bullet treatments exist that will address your particular ailment.
IBM recently made a breakthrough with a nanomedicine that it can engineer to latch on to fungal cells in the body and attack them by piercing their cell membranes. The fungi won’t be able to adapt to these kinds of physical attacks easily. That sort of advance, where the attack is tailored against particular kinds of cells, will be more common in the future.
We have more passwords, identifications, and devices than ever before, but security across them is highly fragmented. In 2012, 12 million people were victims of identity fraud in the U.S. In five years, IBM envisions a digital guardian that will be trained to focus on the people and items it’s entrusted with. This smart guardian will sort through contextual, situational, and historical data to verify a person’s identity on different devices. The guardian can learn about a user and make an inference about behavior that is out of the norm and may be the result of someone stealing that person’s identity. With 360 degrees of data about someone, it will be much harder to steal an identity.
“In this case, you don’t look for the signature of an attack,” Meyerson said. “It looks at your behavior with a device and spots something anomalous. It screams when there is something out of the norm.”
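A crude sketch of the "scream when something is out of the norm" idea — not any actual IBM product — can be built from per-feature z-scores over a user's own login history:

```python
# Toy behavioral anomaly detection: flag a login whose features sit far
# outside this user's own history, using a simple per-feature z-score.
from statistics import mean, stdev

# Historical logins: (hour of day, session minutes) for one user
history = [(9, 42), (10, 38), (9, 45), (11, 40), (10, 44), (9, 39)]

def is_anomalous(event, history, threshold=3.0):
    """True if any feature of `event` is over `threshold` sigmas from the mean."""
    for i, value in enumerate(event):
        column = [h[i] for h in history]
        mu, sigma = mean(column), stdev(column)
        if sigma > 0 and abs(value - mu) / sigma > threshold:
            return True
    return False

print(is_anomalous((3, 240), history))   # 3 a.m., 4-hour session -> True
print(is_anomalous((10, 41), history))   # typical login -> False
```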
IBM says that, by 2030, the towns and cities of the developing world will make up 80 percent of urban humanity, and by 2050, seven out of every 10 people will be city dwellers. The only way cities can manage that growth is through automation: smarter cities will understand, in real time, how billions of events occur, as computers learn what people need, what they like, what they do, and how they move from place to place.
IBM predicts that cities will digest information freely provided by citizens to place resources where they are needed. Mobile devices and social engagement will help citizens strike up a conversation with their city leaders. Such a concept is already in motion in Brazil, where IBM researchers are working with a crowdsourcing tool that people can use to report accessibility problems, via their mobile phones, to help those with disabilities better navigate urban streets.
Of course, as in the upcoming video game Watch Dogs from Ubisoft, a bad guy could hack into the city and use its monitoring systems in nefarious ways. But Meyerson said, “I’d rather have the city linked. Then I can protect it. You have an agent that looks over the city. If some wise guy wants to make the sewage pumps run backwards, the system will shut that down.”
The advantage of the ultraconnected city is that feedback is instantaneous and the city government can be much more responsive.
Prototype Real / Digital Info Interface System
Using projection and gestures to create an interactive relationship with information - video embedded below:
Fujitsu Laboratories has developed a next-generation user interface which can accurately detect the user's finger and what it is touching, creating an interactive touchscreen-like system using objects in the real world.
"We think paper and many other objects could be manipulated by touching them, as with a touchscreen. This system doesn’t use any special hardware; it consists of just a device like an ordinary webcam, plus a commercial projector. Its capabilities are achieved by image processing technology."
Using this technology, information can be imported from a document as data by selecting the necessary parts with your finger.
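Fujitsu hasn't published the algorithm, but a crude sketch of the general image-processing idea — skin segmentation plus contour analysis, with a hypothetical input frame — looks like this:

```python
# A crude sketch of the general idea (not Fujitsu's algorithm): segment
# skin-colored pixels from the webcam frame, take the largest contour as the
# hand, and treat its extreme point as the fingertip the projector UI tracks.
import cv2
import numpy as np

frame = cv2.imread("desk_frame.jpg")              # hypothetical webcam frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Rough skin-tone range in HSV; real systems calibrate per scene and lighting
mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 180, 255]))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    hand = max(contours, key=cv2.contourArea)     # assume biggest blob = hand
    x, y = hand[hand[:, :, 1].argmin()][0]        # topmost point ~ fingertip
    print(f"fingertip at pixel ({x}, {y})")
```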
More at DigInfo here
RELATED: This is very similar to a concept developed in 1991 called ‘The Digital Desk’ [link]
Advancements in robotics are continually taking place in the fields of space exploration, health care, public safety, entertainment, defense, and more. These machines — some fully autonomous, some requiring human input — extend our grasp, enhance our capabilities, and travel as our surrogates to places too dangerous or difficult for us to go. Gathered here are recent images of robotic technology at the beginning of the 21st century, including robotic insurgents, NASA’s Juno spacecraft on its way to Jupiter, and a machine inside an archaeological dig in Mexico. [32 photos]
I’m a massive Star Trek fan. So I’m super-excited that Jorge Almeida took some time to discuss his work on Star Trek Into Darkness — for which he was the lead designer of the UI elements. (If you’re paying attention, you’ll remember this previous post with Jorge on his work for MI:4 and The Dark Knight Rises).
Q: How did you get involved with Star Trek Into Darkness?
OOOii (pronounced “ooh-wee”) created all of the user interfaces for the first film, so we were brought on to continue our work on the second. I had done some UI work on “Star Trek”, and was asked to take the lead on “Star Trek Into Darkness.” I got a chance to see the movie on Sunday. Just a great ride. I am really proud to have been a part of this film. Hopefully fans will like what we did.
Q: What was your role? Were there a lot of others involved in the design and production? What software did you use?
I was lead designer for OOOii. I oversaw the look and animation style for all of the UI in the film. We had a great team, with major contributions from Blaise Hossain, David Schoneveld, Paul Luna, and Andrew Tomandl. I also need to single out Rudy Vessup, who was my right-hand man on this job. Just a fantastic motion graphics artist and a real pro.
Everything we created was done using some combination of Adobe Illustrator, Photoshop, and After Effects. Additional 3D elements were created using Maya.
Q: Was there a general design brief or design direction that you were given? What were your design influences?
For the Enterprise, production already had the full set of interface animations we created from the first film, so we were only responsible for additional UI specific to the story. It was therefore important that I maintain the style and the spirit of what was done in the first film.
Scott Chambliss was the production designer, and I loved what he did with “Star Trek.” The look of that film reminded me of some of Frank Frazetta’s classic Buck Rogers illustrations. I would always keep that style in mind when designing. I’m also a fan of the classic LCARS interface from “The Next Generation.” While production wasn’t looking for a revision of LCARS, the curved corners and elegance of those interfaces definitely had an influence on my work.
We also had the advantage of having seen the first film and how it was cut. The action often moves quickly, so the UI had to communicate story points clearly and efficiently. When you’re spending days or weeks on a shot, it’s easy to forget that it may only be onscreen for less than two seconds.
Q: Can you describe the work that went into the UI development for the starship Vengeance?
Early on, there was a focus placed on the starship “Vengeance.” They were shooting the Vengeance towards the end of the schedule, but Scott wanted to get a clear direction before production started and other priorities took over. He provided us with some imagery to use as inspiration, most of it pretty abstract, but the shapes definitely felt interstellar. There were many overlapping circles and cloud-like clusters. They reminded me of some of the space station research I had done. I presented him with ideas and he started to narrow it down from there.
The “Vengeance,” like the “Enterprise,” featured four sets of monitors that wrapped around the top half of the bridge walls and acted as a 360º radar monitor. Some of the images Scott had provided us felt like nautical maps, so I kept that in mind when coming up with ideas. Thinking of the monitors as windows of a submarine, I tried to make what was happening outside feel slightly ominous and alive.
Once we started testing the animations on set, Scott asked us to desaturate them quite a bit so that they would blend in better with the black interior. I really liked the effect. Here are some of the finals (the viewscreen was done in post):
Q: There’s some really interesting heads-up display work. What was involved in their design?
All of the heads-up display shots were obviously done during post-production, so we worked under the direction of Visual Effects Supervisor Roger Guyett. We presented our work regularly to Roger and VFX Producer Ron Ames for comments, and eventually they would present our work to JJ.
The entire space jump sequence was definitely a highlight for me. It was obvious from the first edit I saw that this scene was going to be a lot of fun. We were asked to create the UI for the viewscreen, the glass panel display, and for the helmet heads-up display.
My goal with the HUD was to minimize the interface as much as possible. I wanted to frame it around the actor’s face in a way that didn’t feel too tech. I was trying to make it feel soothing, with a steady pulse; that way the animation had somewhere to go when things get dangerous.
The projected flightpath was something they had as a rough concept in their original edit, so I just took it from there. I had seen some POV video of an Olympic luger, and thought it had the right rhythm and movement to use as a starting point for the animation. I showed our 3D artist the videos, as well as some sketches I had done, and he started building elements in Maya. He rendered a variety of frames and I started combining them in Photoshop until we came up with a style that production liked.
From there, it was a matter of animating the individual shots. I animated all of the shots using After Effects. I would create the animation, then put together rough comps so Roger and JJ could see the graphics in context. Once approved, I provided the flat HUD graphics as separate passes for ILM so that they could have flexibility when doing final compositing. The whole process went pretty smoothly.
Q: How did you approach the Enterprise viewscreens?
One of the major challenges in post was designing the Enterprise viewscreen interface. There were only one or two viewscreen interfaces in the first film, but in “Star Trek Into Darkness” there were several. The obvious challenge was keeping the look consistent with the rest of the bridge. Like Scott, Roger also wanted to avoid any design that felt too grid-like or text-heavy.
I don’t really have a set process for how I work. Sometimes I draw thumbnails; sometimes I just start throwing elements onto a Photoshop or After Effects file and start mixing and matching. Generally my philosophy is to keep fixing it until it breaks, then take it back a step. I heard Iain McCaig say that in a video once. Made sense to me.
In practical UI, you are trying to give the user an elegant way to make choices. With film UI, I am trying to give the viewer the illusion of choice. I am trying to deliberately direct the viewer’s eye to whatever story point the director wants revealed at the time he wants it revealed. The job becomes more about illustration, especially in post where we can see how the interface is framed within the shot. We paint a small part of a much bigger picture, and our work needs to visually support what’s on screen so that we don’t disrupt the rhythm of the viewing experience.
One technique that I often use is to design in greyscale (using an adjustment layer). It reduces the composition to its basic values so that I can design without being distracted by color. We also often use Adobe Bridge to review various concepts and composites at thumbnail size. It’s an easy way to see which designs are the most effective.
The viewscreen for the volcano sequence was one of the first priorities we had, so the developmental process took place with that interface. I began with thumbnail sketches and tried to work out compositions both on paper and in Photoshop.
The volcano viewscreen quickly exposed an issue with trying to make the design too nonlinear: we might lose the distinction between what was being projected on the glass and what was floating behind it. The viewscreen needed some type of framing to visually attach it to the ship and easily distinguish it from the environment. We had used translucent glass panels as border elements in the first film, so I started enlarging and reconfiguring them to break up the shape of the viewscreen. I then added and rearranged graphic elements within that framework until the interface had a balance between design and functionality that everyone was happy with.
Once the first couple of viewscreens were approved, the look took off from there. We provided the elements to ILM in separate passes so they could make adjustments and dial in the final composites with Roger and JJ. ILM, as always, did a fantastic job. I couldn’t be happier with how our graphics looked onscreen.
Q: Any final thoughts?
“Star Trek Into Darkness” did a lot of shooting in Los Angeles, so I was much closer to this production than I have been on any film in a while. We were on set a lot, so I was reminded first-hand of just what an enormous operation film production is. Multiple sets being built simultaneously. Trees being painted red on one stage, and a giant Starfleet shuttle on the next. I was humbled by the tireless efforts of our producer, Jennifer Simms, as well as the playback producer Cindy Jones. They took on many of the headaches of the job and helped facilitate the constant flow of information between our team and production. This is not easy when you’re talking about creative notes one second, detailed technical issues the next, and budget issues in between, all while this giant train is in motion.
I was also reminded of just how much we depend on the playback crew on set to make our animations work within a scene. We’ve worked with Monte and the guys at Cygnet Video for years. Aside from technical issues, they are also responsible for cueing our animations in sync with the actor’s movements. Ultimately, what you see on screen is an elaborate dance between a large number of people both onscreen and off. It’s pretty amazing to watch it all come together so effectively.
Thanks for the interest in our work. Hopefully people enjoy the movie as much as I did.
Kickstarter: Developer kit for the Oculus Rift - the first truly immersive virtual reality headset for video games. Oculus Rift is a new virtual reality (VR) headset designed specifically for video games that will change the way you think about gaming forever. With an incredibly wide field of view, high resolution display, and ultra-low latency head tracking, the Rift provides a truly immersive experience that allows you to step inside your favorite game and explore new worlds like never before.
Sight Systems: a short futuristic film by Eran May-raz and Daniel Lazo.
I presented For a Future-Friendly Web, which covered how we as web creators can think and act in a more future-friendly way. Here are the slides, video, and notes from my talk.
I’m truly honored to have been part of such an amazing conference. I saw a lot of old friends and met a lot of new ones too. I’m already excited for next year’s Mobilism!