Microsoft on Tuesday demonstrated new hand-gesture functionality for its Kinect technology, which is being adapted to let users not only wave their hands but also clench their fists and pinch their fingers to pan and scroll through an on-screen app.
The company showed the latest version of Kinect during its annual TechFest conference, which showcases projects under development at Microsoft Research.
“Ultimately you would want to get to the ‘Minority Report’ approach where you’re flipping your way through data,” Bob O’Donnell, a program vice president at IDC, told TechNewsWorld. “The trick is to get the software to work the way you want it to.”
Using hand gestures “would be great for content creation and simulation,” Jim McGregor, principal analyst at Tirias Research, suggested. “Imagine the potential for analyzing motions or simulating those motions into computer-generated characters.” The technology could “bring us further towards making animation very close to reality.”
Kinect Hardware and Software
The Kinect contains a camera, audio sensors and motion-sensing technology that tracks 48 points of movement on the human body, reportedly performing full-motion tracking at 30 frames per second.
The device uses facial and voice recognition as well as gesture recognition.
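In developer terms, that tracking amounts to a stream of per-frame joint positions that an application polls or subscribes to. The sketch below illustrates the pattern only; the stub generator and the “head” joint name are hypothetical stand-ins, since the actual Kinect for Windows SDK exposes skeletal data through C++ and .NET APIs rather than Python.

```python
import random
from dataclasses import dataclass

@dataclass
class Joint:
    x: float  # meters, relative to the sensor
    y: float
    z: float

def skeleton_frames():
    """Hypothetical stand-in for the sensor's ~30 fps skeletal stream."""
    while True:
        # A real sensor reports dozens of tracked points per body;
        # this stub fabricates a single "head" joint for illustration.
        yield {"head": Joint(random.uniform(-1, 1),
                             random.uniform(-1, 1),
                             random.uniform(0.8, 4.0))}

for i, joints in enumerate(skeleton_frames()):
    head = joints["head"]
    print(f"frame {i}: head at ({head.x:.2f}, {head.y:.2f}, {head.z:.2f})")
    if i == 2:  # sample a few frames, then stop
        break
```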
It’s not clear whether Microsoft has beefed up the Kinect’s hardware for the enhanced gesture project. However, it’s reported that the updated software uses machine learning to recognize when users open or close their hands.
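Whatever the recognizer looks like internally, the interaction it enables is straightforward to sketch: a closed fist “grips” the view, and moving the gripped hand pans or scrolls it. The controller below is an illustration of that model only, not Microsoft’s implementation, and all of its names are hypothetical.

```python
class GripPanController:
    """Illustrative fist-to-pan state machine; not Microsoft's code."""

    def __init__(self):
        self.gripping = False
        self.last_pos = None
        self.offset_x = 0.0  # accumulated pan, in the hand's coordinate units
        self.offset_y = 0.0

    def update(self, hand_open: bool, x: float, y: float):
        """Feed one frame of recognizer output (hand state plus position)."""
        if hand_open:
            self.gripping = False  # an open hand releases the view
            self.last_pos = None
        elif not self.gripping:
            self.gripping = True   # the fist just closed: start a drag
            self.last_pos = (x, y)
        else:
            self.offset_x += x - self.last_pos[0]  # pan with the hand
            self.offset_y += y - self.last_pos[1]
            self.last_pos = (x, y)

ctrl = GripPanController()
ctrl.update(hand_open=False, x=0.10, y=0.50)  # fist closes
ctrl.update(hand_open=False, x=0.25, y=0.50)  # hand moves right while closed
print(ctrl.offset_x, ctrl.offset_y)           # 0.15 0.0: the view pans right
```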
This is one of 15 projects being demonstrated at TechFest. Another is the Kinect Fusion project, where Microsoft is “in real time fusing together multiple snapshots at varying levels from a moving Kinect for Windows sensor to create a full 3D model,” Microsoft spokesperson Jerry Mischel told TechNewsWorld.
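The published approach behind Kinect Fusion integrates each incoming depth frame into a voxel grid that stores a truncated signed distance to the nearest surface; averaging those distances across many frames smooths out sensor noise and yields the 3D model. The fragment below sketches only the per-voxel running-average update, with the camera geometry simplified away, so treat it as an illustration of the idea rather than the actual implementation.

```python
import numpy as np

TRUNC = 0.03  # truncation band in meters (an illustrative value)

def integrate(tsdf, weight, measured_dist):
    """Fold one frame's observation into a voxel's running TSDF average."""
    d = np.clip(measured_dist / TRUNC, -1.0, 1.0)  # truncate and normalize
    new_weight = weight + 1.0
    tsdf = (tsdf * weight + d) / new_weight        # weighted running mean
    return tsdf, new_weight

# One voxel observed over three frames: noisy distances to the surface.
tsdf, w = 0.0, 0.0
for obs in (0.010, 0.012, 0.008):
    tsdf, w = integrate(tsdf, w, obs)
print(tsdf * TRUNC)  # ~0.010 m: the fused, denoised surface distance
```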
The projects involve two technology trends, Mischel said: more natural human interfaces and big data.
Possible Uses for Enhanced Kinect
One obvious use for enhanced gesture control technology is in user interfaces, where it will “replace the mouse, remote control and so on,” McGregor told TechNewsWorld. However, “the problem with gesturing is that it’s really only a good UI for large screens and for very limited applications.”
Enhanced gesture technology “allows the computer to see in 3D real time, and we are just scraping the surface of what that will allow,” Rob Enderle, principal analyst at the Enderle Group, told TechNewsWorld. The most obvious use for such technology is in robotics, he added.
However, “we’re in the early days (of gesture control) and it’s still quite rough,” Wes Miller, an analyst at Directions on Microsoft, told TechNewsWorld. “I’m a pretty firm believer that it’s not a replacement for keyboards and mice, especially with regard to long-form text entry, editing or precise movements.”
Problems and Issues
It will take some time before a user could call up apps and manipulate data on screens solely with gestures, as was done in the movie “Minority Report,” O’Donnell said. “The hard part is to get (the technology) to the point where the application can take advantage of it.”
High-end solutions in niche applications like gaming, content creation or simulation will adopt enhanced gesture controls first, McGregor noted. However, they’ll be overtaken by lower-cost solutions that use the webcams already built into computing and consumer electronics devices. A few such solutions will be available this year, and gesture control will eventually become another standard UI option for applications that can take advantage of it. EyeSight Technology, for example, demonstrated such a capability at the International CES in Las Vegas in January, he said.
“These are Microsoft Research projects only,” Mischel cautioned. Still, some of the research and projects demoed at TechFest “have been evaluated for potential inclusion in a forthcoming release of the Kinect for Windows SDK.”