The way we interact with technology has always been relatively consistent. If you wanted to make a phone call, you pressed a few buttons. Is the room a bit too dark? Easy, flip on a light switch. Navigating tactile interfaces is now wired into how we function; it’s the default way people get feedback from the objects around them. Touch is a powerful thing. Poking, dragging, swiping, pinching and drawing can provoke countless responses from all sorts of devices, but especially the one that’s probably in your pocket right now.
The way we interact with other people is profoundly different from how we interact with machines. Regardless of what I’m able to accomplish with touch, speech will always yield a greater, more detailed response. Unfortunately, comprehending language has been a gift reserved almost exclusively for humans and a few other animals (I’m looking at you, Koko). Let’s face it: I’ll end up with a lot more toast if I ask my friend to make some than if I try to strike up a conversation with my toaster.
Forget about buttons. People really started to maximize the effectiveness of touch with the advent of the touch screen. Having an entire surface to interact with opened up a world of possibilities, especially starting in 2007, when everyone, everywhere started carrying around the touch-enabled witchcraft device known as the iPhone. Shortly after, the entire cellular phone industry jumped on the design bandwagon driven by Steve Jobs (except for one company that’s now paying the price). Now, whether you run iOS, Android or even Windows Phone, you have a four-inch rectangle with a touch-screen face in your pocket.
For its intended purpose, there is really no better way to design a smartphone. Manufacturers converged on similar smartphone, and now tablet, designs because there is nowhere left to go (although some are apparently cooler than others).
Instead of changing smartphone designs, big companies like Apple, Google and Microsoft are working on input innovation. Mankind’s desire to talk to machines can be seen throughout popular culture, from the 1960s series Lost in Space and The Jetsons to an eerie concept video made by Apple in 1987. We have had some consumer technology on the cusp of responding to language, but nothing very useful or reliable. My visual voicemail service butchers every message I receive and has confused me to the point that I ignore it.
In 2011, with the introduction of Siri as the iPhone’s major selling point, the shift from touch to speech began. Initially, the chatty virtual assistant faced some criticism. She seemed a lot more useful helping Samuel L. Jackson cook dinner in the commercial than she was in real life. Apple fell back on the often poorly received “Siri’s still in beta” defense. Even though Siri needed real-world use for Apple to improve her, making a beta feature the face of the product might not have been the best strategy. Luckily, with the recent operating system update, Siri has put what she learned to good use and become much more accurate and useful. With continued iteration, asking Siri to do something for you might become the default over doing it yourself.
Apple isn’t the only one getting in on voice recognition. At this year’s Google I/O developer conference, one of the crowd’s favorite demos was Google Now. With less personality than Siri, Now pulls up information at your command, presenting it in a series of cards covering flights, sports, weather, appointments and more. It is a much more straightforward approach than a virtual personal assistant, and it looked to have significantly greater functionality. Microsoft is also improving its voice recognition offering. With the debut of Windows Phone 8, it showed simple, intuitive voice commands for everything from pausing a movie to rerouting navigation while driving. Voice recognition was even opened up so that developers could work with it in their own applications.
Is There Still A Place For Touch?
Smartphones and tablets won’t be getting rid of touch screens anytime soon. However, they will continue to change and adapt with new materials and features. Thinner, flexible screens are being developed that could save energy and improve battery life. A promising company called Tactus is creating a product that brings the response of physical buttons to a touch screen. Using so-called microfluidics, the Tactus layer allows keys, buttons and other interactive shapes to rise up anywhere on a touch screen and then recede when they’re not needed. Imagine being able to type on a physical QWERTY keyboard that pops up on the screen of your iPad, or actually feel the fruit you’re slicing in a game of Fruit Ninja.
What do you think? Will voice overtake touch as the preferred method of input anytime soon? What do you see as some of the obstacles? Share with us in the comments.