Extraction of spatial-temporal features for vision-based gesture recognition

Authors: Yu Huang, Guangyou Xu, Yuanxin Zhu

Affiliation: (1) Department of Computer Science and Technology, Tsinghua University, 100084 Beijing, P. R. China

Abstract: One of the key problems in a vision-based gesture recognition system is the extraction of the spatial-temporal features of gesturing. In this paper, an approach based on motion segmentation is proposed to accomplish this task. A direct method, combined with a robust M-estimator, is used to estimate the affine parameters of the gesturing motion; based on the dominant motion model, the gesturing region (the dominant object) is then extracted, from which the spatial-temporal features of gestures are obtained. Finally, the dynamic time warping (DTW) method is applied directly to match 12 control gestures (6 for "translation" commands, 6 for "rotation" commands). A small demonstration system has been set up to verify the method, in which a panorama image viewer (built by mosaicing a sequence of standard "Garden" images) is controlled by recognized gestures instead of a 3-D mouse tool.

Keywords: gesture recognition; dominant motion model; M-estimator; affine transform model
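
The recognition step described in the abstract matches an input gesture's feature sequence against stored templates with dynamic time warping. Below is a minimal sketch of that matching step, assuming per-frame feature vectors and a Euclidean local cost; the actual feature definition and template set used by the authors are not given here, so the names and distances are illustrative only.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping distance between two feature sequences.

    seq_a, seq_b: arrays of shape (T, d), one d-dimensional feature vector
    per frame (a stand-in for the paper's spatial-temporal gesture features).
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # local cost
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify(query, templates):
    """Nearest-template classification, e.g. over 12 gesture templates."""
    return min(templates, key=lambda name: dtw_distance(query, templates[name]))
```

For the 12-gesture vocabulary, `templates` would map each command name (6 "translation", 6 "rotation") to a recorded feature sequence, and `classify` returns the command whose template warps onto the query at lowest cost.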
This article is indexed by CNKI, VIP (Weipu), Wanfang Data, SpringerLink, and other databases. The original abstract and full text are available from the Journal of Computer Science and Technology (《计算机科学技术学报》).