Gesture recognition
From Wikipedia, the free encyclopedia
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hands. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Many approaches have been developed using cameras and computer vision algorithms to interpret sign language. However, the identification and recognition of posture, gait, proxemics, and human behaviors are also the subject of gesture recognition techniques.[1]
Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even graphical user interfaces (GUIs), which still limit the majority of input to the keyboard and mouse.
Gesture recognition enables humans to interface with a machine (HMI) and interact naturally without any mechanical devices. Using gesture recognition, it is possible to point a finger at the computer screen so that the cursor moves accordingly. This could potentially make conventional input devices such as mice, keyboards, and even touch-screens redundant.
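As a concrete illustration of the pointing scenario above, the following is a minimal Python sketch that maps a tracked fingertip position onto screen coordinates. It assumes some hand tracker already reports the fingertip in normalized (0 to 1) camera coordinates; the screen dimensions and function names are illustrative, not any particular product's API.

```python
# Minimal sketch: map a normalized fingertip position onto screen pixels.
# A hypothetical tracker is assumed to supply (x_norm, y_norm) in [0, 1].

SCREEN_W, SCREEN_H = 1920, 1080  # illustrative display resolution

def fingertip_to_cursor(x_norm: float, y_norm: float) -> tuple[int, int]:
    """Clamp the normalized fingertip position and scale it to pixels."""
    x_norm = min(max(x_norm, 0.0), 1.0)
    y_norm = min(max(y_norm, 0.0), 1.0)
    return int(x_norm * (SCREEN_W - 1)), int(y_norm * (SCREEN_H - 1))

# A fingertip detected at the center of the camera frame:
print(fingertip_to_cursor(0.5, 0.5))  # -> (959, 539)
```

Real systems additionally smooth the trajectory, for example with a low-pass or Kalman filter, so the cursor does not jitter along with the hand.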
Gesture Recognition can be conducted with techniques from computer vision and image processing.
Gesture recognition and pen computing:
- In some literature, the term gesture recognition has been used to refer specifically to handwriting gestures, such as inking on a graphics tablet or mouse gesture recognition. This is computer interaction through the drawing of symbols with a pointing device cursor (see the discussion at Pen computing). Strictly speaking, the term mouse strokes could be used instead of mouse gestures, since this implies written communication: making a mark to represent a symbol. The literature in this area, however, does not appear to be aware of the ongoing work in the computer vision literature on capturing gestures in more general human poses and movements by cameras connected to a computer.[2][3][4][5]
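One common way to implement such stroke recognition is to quantize the pointer trail into compass directions and match the resulting string against gesture templates. The Python sketch below shows this approach; the gesture bindings and the jitter threshold are invented for illustration.

```python
# Minimal sketch of stroke-based mouse-gesture recognition: quantize the
# pointer trail into four compass directions and match the resulting
# string against known templates.

import math

def encode_stroke(points, min_dist=10.0):
    """Turn a list of (x, y) pointer samples into a direction string."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if math.hypot(dx, dy) < min_dist:   # ignore small jitter
            continue
        d = 'R' if abs(dx) >= abs(dy) and dx > 0 else \
            'L' if abs(dx) >= abs(dy) else \
            'D' if dy > 0 else 'U'          # screen y grows downward
        if not dirs or dirs[-1] != d:       # collapse repeated directions
            dirs.append(d)
    return ''.join(dirs)

GESTURES = {'DR': 'close-tab', 'UD': 'reload'}  # hypothetical bindings

trail = [(0, 0), (0, 40), (0, 80), (40, 80), (80, 80)]  # down, then right
print(encode_stroke(trail), '->', GESTURES.get(encode_stroke(trail)))
# prints: DR -> close-tab
```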
Uses of gesture recognition
Gesture recognition is useful for processing information from humans that is not conveyed through speech or typing. There are various types of gestures that can be identified by computers:
- Sign language recognition. Just as speech recognition can transcribe speech to text, certain types of gesture recognition software can transcribe the symbols represented through sign language into text.[6] A sketch of one such approach follows this list.
- Socially assistive robotics. By using proper sensors (accelerometers and gyroscopes) worn on the body of a patient, and by reading the values from those sensors, robots can assist in patient rehabilitation. The best example is stroke rehabilitation; a repetition-counting sketch of this idea also follows this list.
- Directional indication through pointing. Pointing has a very specific purpose in our society: to reference an object or location based on its position relative to ourselves. The use of gesture recognition to determine where a person is pointing is useful for identifying the context of statements or instructions. This application is of particular interest in the field of robotics;[7] a geometric sketch appears after this list.
- Control through facial gestures. Controlling a computer through facial gestures is a useful application of gesture recognition for users who may not physically be able to use a mouse or keyboard. Eye tracking in particular may be of use for controlling cursor motion or focusing on elements of a display.
- Alternative computer interfaces. Forgoing the traditional keyboard-and-mouse setup to interact with a computer, strong gesture recognition could allow users to accomplish frequent or common tasks using hand or face gestures to a camera.[8][9]
- Immersive game technology. Gestures can be used to control interactions within video games, making the game player's experience more interactive or immersive.
- Virtual controllers. For systems where the act of finding or acquiring a physical controller could require too much time, gestures can be used as an alternative control mechanism. Controlling secondary devices in a car and controlling a television set are examples of such usage.[10]
- Affective computing. In affective computing, gesture recognition is used in the process of identifying emotional expression through computer systems.
- Remote control. Through the use of gesture recognition, "remote control with the wave of a hand" of various devices is possible. The signal must not only indicate the desired response but also which device is to be controlled.[11][12][13]
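For the sign language item above, hidden Markov models are the classic tool (compare reference [6]): one model is trained per sign on sequences of hand-feature vectors, and a new sequence is labelled by the best-scoring model. The sketch below uses the freely available hmmlearn library and random noise as a stand-in for real tracked hand features, so it shows only the shape of the approach, not a working recognizer.

```python
# Hedged sketch of HMM-based sign recognition: one Gaussian HMM per sign,
# classification by maximum log-likelihood. Feature data is random noise
# standing in for real tracked hand features.

import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def train_sign_model(sequences):
    """Fit a 3-state Gaussian HMM to a list of (T, D) feature sequences."""
    model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    model.fit(np.vstack(sequences), lengths=[len(s) for s in sequences])
    return model

# Stand-in training data: two "signs", each with 5 example sequences.
models = {name: train_sign_model([rng.normal(loc=mu, size=(30, 4))
                                  for _ in range(5)])
          for name, mu in [("HELLO", 0.0), ("THANKS", 3.0)]}

test = rng.normal(loc=3.0, size=(30, 4))          # resembles "THANKS"
print(max(models, key=lambda name: models[name].score(test)))  # -> THANKS
```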
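For the socially assistive robotics item, a minimal version of reading body-worn sensors is to count movement repetitions from the accelerometer alone, by thresholding how far the acceleration magnitude deviates from gravity. The threshold and sample values below are invented for illustration; a real rehabilitation system would also fuse the gyroscope data.

```python
# Hedged sketch: count exercise repetitions from a body-worn accelerometer
# by thresholding the deviation of |a| from gravity (~9.81 m/s^2).

import math

GRAVITY, THRESHOLD = 9.81, 2.0   # m/s^2; threshold chosen for illustration

def count_repetitions(samples):
    """Count rising edges where | |a| - g | exceeds the threshold."""
    reps, active = 0, False
    for x, y, z in samples:
        deviation = abs(math.sqrt(x*x + y*y + z*z) - GRAVITY)
        if deviation > THRESHOLD and not active:
            reps, active = reps + 1, True
        elif deviation <= THRESHOLD:
            active = False
    return reps

# Invented (x, y, z) readings: rest, movement, rest, movement, rest.
stream = [(0, 0, 9.8), (1, 2, 13.0), (0, 0, 9.8), (2, 1, 13.5), (0, 0, 9.8)]
print(count_repetitions(stream))  # -> 2
```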
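For the pointing item, a standard geometric formulation extends the ray from the tracked head position through the tracked hand position and intersects it with a target plane, here the floor at z = 0. The coordinates below are illustrative 3-D points in metres, as a stereo or motion-capture system might supply.

```python
# Sketch: where on the floor is a person pointing? Intersect the
# head->hand ray with the horizontal plane z = plane_z.

import numpy as np

def pointing_target(head, hand, plane_z=0.0):
    """Return the intersection point, or None if the ray misses the plane."""
    head, hand = np.asarray(head, float), np.asarray(hand, float)
    direction = hand - head
    if abs(direction[2]) < 1e-9:          # ray parallel to the plane
        return None
    t = (plane_z - head[2]) / direction[2]
    if t <= 0:                            # plane lies behind the pointer
        return None
    return head + t * direction

# Head at 1.7 m, hand at 1.2 m, arm angled forward: floor target?
print(pointing_target([0.0, 0.0, 1.7], [0.3, 0.0, 1.2]))
# -> [1.02 0.   0.  ]
```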
Input devices
The ability to track a person's movements and determine what gestures they may be performing can be achieved through various tools. Although a large amount of research has been done in image- and video-based gesture recognition, there is some variation in the tools and environments used between implementations:
- Depth-aware cameras. Using specialized cameras such as time-of-flight cameras, one can generate a depth map of what is being seen through the camera at short range, and use this data to approximate a 3D representation of the scene. These can be effective for detection of hand gestures due to their short-range capabilities.[14] A back-projection sketch follows this list.
- Stereo cameras. Using two cameras whose relations to one another are known, a 3D representation can be approximated from the output of the cameras. This method uses more traditional cameras and thus does not share the distance limitations of current depth-aware cameras. To establish the cameras' relations, one can use a positioning reference such as a lexian-stripe or infrared emitters.[15]
- Controller-based gestures. These controllers act as an extension of the body, so that when gestures are performed some of their motion can be conveniently captured by software. Mouse gestures are one such example, where the motion of the mouse is correlated with a symbol being drawn by a person's hand; another is the Wii Remote, which can analyze changes in acceleration over time to represent gestures.[16][17]
- Single camera. A normal camera can be used for gesture recognition where the resources or environment would not be suitable for other forms of image-based recognition. Although not necessarily as effective as stereo or depth-aware cameras, a single camera makes the technology accessible to a wider audience.[18] A skin-color segmentation sketch also follows this list.
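For the depth-aware camera item above, the step from depth map to 3D representation is pinhole back-projection: X = (u - cx)·Z/fx and Y = (v - cy)·Z/fy for each pixel (u, v) with depth Z. A sketch follows; the intrinsics are made-up values, since a real time-of-flight camera ships with its own calibration.

```python
# Sketch: back-project a depth map into a 3-D point cloud with the
# pinhole camera model. Intrinsics below are hypothetical.

import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 160.0, 120.0   # hypothetical calibration

def depth_to_points(depth):
    """depth: (H, W) array of Z values in metres; returns (H*W, 3) points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.dstack([x, y, depth]).reshape(-1, 3)

depth = np.full((240, 320), 0.8)      # fake flat surface 0.8 m away
print(depth_to_points(depth).shape)   # -> (76800, 3)
```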
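For the single-camera item, one classic low-cost technique is skin-color segmentation: threshold skin-like hues in HSV space and keep the largest blob as the hand candidate. The OpenCV sketch below illustrates this; the HSV bounds are rough illustrative values that real systems calibrate per user and lighting.

```python
# Sketch: find a hand candidate in a webcam frame by skin-color
# thresholding in HSV space and taking the largest contour.

import cv2
import numpy as np

SKIN_LO = np.array([0, 40, 60], np.uint8)     # rough, hypothetical bounds
SKIN_HI = np.array([25, 180, 255], np.uint8)

def find_hand(frame_bgr):
    """Return the bounding box (x, y, w, h) of the largest skin blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LO, SKIN_HI)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

cap = cv2.VideoCapture(0)        # default webcam
ok, frame = cap.read()
if ok:
    print(find_hand(frame))
cap.release()
```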
Challenges of gesture recognition
There are many challenges associated with the accuracy and usefulness of gesture recognition software. For image-based gesture recognition there are limitations imposed by the equipment used and by image noise. Images or video may not be under consistent lighting or shot in the same location. Items in the background or distinct features of the users may make recognition more difficult.
The variety of implementations for image-based gesture recognition may also raise issues for the viability of the technology for general usage. For example, an algorithm calibrated for one camera may not work for a different camera. The amount of background noise also causes tracking and recognition difficulties, especially when occlusions (partial and full) occur. Furthermore, the distance from the camera, and the camera's resolution and quality, also cause variations in recognition accuracy.
To capture human gestures with visual sensors, robust computer vision methods are also required, for example for hand tracking and hand posture recognition[19][20][21][22][23] or for capturing movements of the head, facial expressions, or gaze direction.
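As one example of the hand-tracking methods referred to above, color-histogram tracking with CAMShift is a classical approach available in OpenCV. The sketch below is illustrative only: the initial hand window is a hard-coded placeholder that a detector, such as the skin-color segmentation sketched earlier, would normally supply.

```python
# Sketch: track a hand by its hue histogram with CAMShift (OpenCV).

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                    # default webcam
ok, frame = cap.read()
assert ok, "camera not available"
x, y, w, h = 200, 150, 100, 100              # placeholder initial hand box

# Build a hue histogram of the initial hand region to track.
roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi], [0], None, [16], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

track_window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rect, track_window = cv2.CamShift(backproj, track_window, criteria)
    pts = cv2.boxPoints(rect).astype(np.int32)  # rotated box around the hand
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow('hand tracking', frame)
    if cv2.waitKey(30) == 27:                # Esc quits
        break
cap.release()
```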
[edit] "Gorilla arm"
"Gorilla arm" was a side-effect that destroyed vertically-oriented touch-screens as a mainstream input technology despite a promising start in the early 1980s.[24]
Designers of touch-menu systems failed to notice that humans are not designed to hold their arms in front of their faces making small motions. After more than a very few selections, the arm begins to feel sore, cramped, and oversized; the operator looks like a gorilla while using the touch screen and feels like one afterwards. This is now considered a classic cautionary tale for human-factors designers; "Remember the gorilla arm!" is shorthand for "How is this going to fly in real use?"
Gorilla arm is not a problem for specialist short-term uses, since these involve only brief interactions that do not last long enough to cause fatigue.
See also
- Pen computing – discussion of gesture recognition for tablet computers
- Mouse gesture
- Computer vision
- Gestures
- Hidden Markov Model
- Language Technology
External links
- SignWiiver – a gesture recognition system using a Nintendo Wii controller
- Cybernet Systems Corporation – commercial gesture recognition products
- iGesture Framework for Pen and Mouse-based Gesture Recognition – free Java-based gesture recognition framework
- A Gesture Recognition Review – compendium of references
- HandVu – open-source vision-based hand gesture interface
- The future, it is all a Gesture – gesture interfaces and video gaming
- Interactive Projection Systems – commercial motion tracking and gesture recognition products
- AME Patterns library – a free C++ library for pattern analysis and recognition, currently implemented for gesture recognition
References
- ^ Matthias Rehm, Nikolaus Bee, Elisabeth André, Wave Like an Egyptian - Accelerometer Based Gesture Recognition for Culture Specific Interactions, British Computer Society, 2007
- ^ Pavlovic, V., Sharma, R. & Huang, T. (1997), "Visual interpretation of hand gestures for human-computer interaction: A review", IEEE Trans. Pattern Analysis and Machine Intelligence, July 1997, Vol. 19(7), pp. 677-695.
- ^ R. Cipolla and A. Pentland, Computer Vision for Human-Machine Interaction, Cambridge University Press, 1998, ISBN 978-0521622530
- ^ Ying Wu and Thomas S. Huang, "Vision-Based Gesture Recognition: A Review", In: Gesture-Based Communication in Human-Computer Interaction, Volume 1739 of Springer Lecture Notes in Computer Science, pages 103-115, 1999, ISBN 978-3-540-66935-7, doi 10.1007/3-540-46616-9
- ^ Alejandro Jaimes and Nicu Sebe, Multimodal human–computer interaction: A survey, Computer Vision and Image Understanding, Volume 108, Issues 1-2, October-November 2007, Pages 116-134, Special Issue on Vision for Human-Computer Interaction, doi:10.1016/j.cviu.2006.10.019
- ^ Thad Starner, Alex Pentland, Visual Recognition of American Sign Language Using Hidden Markov Models, Massachusetts Institute of Technology
- ^ Kai Nickel, Rainer Stiefelhagen, Visual recognition of pointing gestures for human-robot interaction, Image and Vision Computing, vol 25, Issue 12, December 2007, pp 1875-1884
- ^ Matthew Turk and Mathias Kölsch, "Perceptual Interfaces", University of California, Santa Barbara UCSB Technical Report 2003-33
- ^ M Porta "Vision-based user interfaces: methods and applications", International Journal of Human-Computer Studies, 57:11, 27-73, 2002.
- ^ William Freeman, Craig Weissman, Television control by hand gestures, Mitsubishi Electric Research Lab, 1995
- ^ Do Jun-Hyeong, Jung Jin-Woo, Sung hoon Jung, Jang Hyoyoung, Bien Zeungnam, Advanced soft remote control system using hand gesture, Mexican International Conference on Artificial Intelligence, 2006
- ^ K. Ouchi, N. Esaka, Y. Tamura, M. Hirahara, M. Doi, Magic Wand: an intuitive gesture remote control for home appliances, International Conference on Active Media Technology, 2005 (AMT 2005), 2005
- ^ Lars Bretzner, Ivan Laptev, Tony Lindeberg, Sören Lenman, Yngve Sundblad "A Prototype System for Computer Vision Based Human Computer Interaction", Technical report CVAP251, ISRN KTH NA/P--01/09--SE. Department of Numerical Analysis and Computer Science, KTH (Royal Institute of Technology), SE-100 44 Stockholm, Sweden, April 23-25, 2001.
- ^ Yang Liu, Yunde Jia, A Robust Hand Tracking and Gesture Recognition Method for Wearable Visual Interfaces and Its Applications, Proceedings of the Third International Conference on Image and Graphics (ICIG’04), 2004
- ^ Kue-Bum Lee, Jung-Hyun Kim, Kwang-Seok Hong, An Implementation of Multi-Modal Game Interface Based on PDAs, Fifth International Conference on Software Engineering Research, Management and Applications, 2007
- ^ Per Malmestig, Sofie Sundberg, SignWiiver - implementation of sign language technology
- ^ Thomas Schlömer, Benjamin Poppinga, Niels Henze, Susanne Boll, Gesture Recognition with a Wii Controller, Proceedings of the 2nd International Conference on Tangible and Embedded Interaction, 2008
- ^ Wei Du, Hua Li, Vision based gesture recognition system with single camera, 5th International Conference on Signal Processing Proceedings, 2000
- ^ Ivan Laptev and Tony Lindeberg "Tracking of Multi-state Hand Models Using Particle Filtering and a Hierarchy of Multi-scale Image Features", Proceedings Scale-Space and Morphology in Computer Vision, Volume 2106 of Springer Lecture Notes in Computer Science, pages 63-74, Vancouver, BC, 2001. ISBN 978-3-540-42317-1, doi 10.1007/3-540-47778-0
- ^ Lars Bretzner, Ivan Laptev, Tony Lindeberg "Hand gesture recognition using multi-scale colour features, hierarchical models and particle filtering", Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, Washington, DC, USA, 21-21 May 2002, pages 423-428. ISBN 0-7695-1602-5, doi 10.1109/AFGR.2002.1004190
- ^ M. Kolsch and M. Turk "Fast 2D Hand Tracking with Flocks of Features and Multi-Cue Integration", CVPRW '04. Proceedings Computer Vision and Pattern Recognition Workshop, May 27-June 2, 2004, doi 10.1109/CVPR.2004.71
- ^ Stenger B, Thayananthan A, Torr PH, Cipolla R: "Model-based hand tracking using a hierarchical Bayesian filter", IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(9):1372-84, Sep 2006.
- ^ A Erol, G Bebis, M Nicolescu, RD Boyle, X Twombly, "Vision-based hand pose estimation: A review", Computer Vision and Image Understanding Volume 108, Issues 1-2, October-November 2007, Pages 52-73 Special Issue on Vision for Human-Computer Interaction, doi:10.1016/j.cviu.2006.10.012.
- ^ Windows 7? No arm in it - Mixed Signals - Rupert Goodwins's Blog at ZDNet.co.uk Community