Saturday, October 29, 2011

Gesture user interfaces

Many people "speak" with their hands, using gestures to reinforce the context of what they are actually saying. Recent innovations such as the Kinect system from Microsoft could enable these gestures to be parsed and used as a form of user input. Generally, such gestures are subtle cues directed at the peripheral vision of the observer, the listener in the conversation.

Recent moves toward more overt gestural input, such as drawing shapes in thin air or waving one's arms about in "Minority Report" style, are interesting in principle but will undoubtedly present problems for users, in the form of repetitive strain injuries sustained from attempting to make precise control movements in a 3D input space.

I recently fitted a light in an awkward place, and the strain of trying to do up small screws while reaching in front of me into a restricted space was considerable. That is effectively the kind of movement a CAD operator might have to repeat all day while creating a precision drawing of a physical object in a 3D design space.

I think that gesture input will be most useful when used to reinforce the nuance of verbal commands, somewhat similar to the innate gesture recognition we humans use every day.
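As a concrete illustration, here is a minimal sketch of that idea in Python, with entirely hypothetical types and a made-up fuse function (nothing here is a real Kinect or speech API): the spoken command carries the precise intent, while a coarse, low-effort pointing gesture merely resolves the ambiguous "that", so no fine mid-air manipulation is required.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VoiceCommand:
    verb: str                  # e.g. "rotate"
    target: str                # e.g. "that": an ambiguous deictic reference
    timestamp: float           # seconds since session start

@dataclass
class Gesture:
    kind: str                  # e.g. "point"
    target_id: Optional[str]   # object the hand is roughly aimed at, if any
    timestamp: float

def fuse(voice: VoiceCommand, gesture: Gesture,
         window_s: float = 1.5) -> Optional[dict]:
    """Resolve an ambiguous spoken command with a near-simultaneous gesture."""
    if abs(voice.timestamp - gesture.timestamp) > window_s:
        return None  # too far apart in time to count as one utterance
    # The gesture only disambiguates; the voice channel does the precise work.
    if voice.target in ("that", "this") and gesture.target_id:
        return {"action": voice.verb, "object": gesture.target_id}
    return {"action": voice.verb, "object": voice.target}

# "Rotate that", spoken at t=10.0s, plus a point at part_42 at t=10.4s
print(fuse(VoiceCommand("rotate", "that", 10.0),
           Gesture("point", "part_42", 10.4)))
# -> {'action': 'rotate', 'object': 'part_42'}
```

The appeal of this arrangement is that the gesture only needs to be roughly right, within a generous time window, which is far less taxing on the arms than drawing precise shapes in the air.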


- Posted using BlogPress from my iPad
