Model-based segmentation and recognition of dynamic gestures in continuous video streams

Authors:

Highlights:

Abstract

Segmentation and recognition of continuous gestures are challenging due to spatio-temporal variations and endpoint localization issues. A novel multi-scale Gesture Model is presented here as a set of 3D spatio-temporal surfaces of a time-varying contour. Three approaches, which differ mainly in endpoint localization, are proposed: the first uses a motion detection strategy and a multi-scale search to find the endpoints; the second uses Dynamic Time Warping to roughly locate the endpoints before a fine search is carried out; the last is based on Dynamic Programming. Experimental results on two-arm and single-hand gestures show that all three methods achieve high recognition rates, ranging from 88% to 96% on the two-arm test, with the last method performing best.
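To illustrate the idea behind the second approach (rough endpoint localization via Dynamic Time Warping), the sketch below aligns a gesture template against candidate windows of a continuous stream and keeps the lowest-cost window as a coarse endpoint estimate. It is a minimal illustration only: the 1-D feature sequences, the function names `dtw_distance` and `rough_endpoints`, and the window/step parameters are hypothetical and do not reproduce the paper's multi-scale Gesture Model.

```python
import numpy as np

def dtw_distance(template, segment):
    """Classic DTW alignment cost between two 1-D feature sequences."""
    n, m = len(template), len(segment)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(template[i - 1] - segment[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def rough_endpoints(stream, template, step=5):
    """Coarsely locate a gesture in `stream` by sliding candidate end frames
    and testing a few window lengths around the template length.
    (Hypothetical helper; a fine search would refine the result.)"""
    best = (np.inf, 0, len(template))  # (normalized cost, start, end)
    for end in range(len(template) // 2, len(stream) + 1, step):
        for length in (len(template) // 2, len(template), 2 * len(template)):
            start = max(0, end - length)
            d = dtw_distance(template, stream[start:end]) / max(1, end - start)
            if d < best[0]:
                best = (d, start, end)
    return best[1], best[2]
```

In this toy setting, `rough_endpoints(stream, template)` returns a coarse (start, end) frame pair that a subsequent fine search could refine, mirroring the two-stage localization described in the abstract.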

Keywords: Continuous gesture recognition, Gesture segmentation, Motion Signature, Gesture Model, Dynamic Programming, Dynamic Time Warping

Article history: Received 23 December 2009, Revised 24 May 2010, Accepted 21 December 2010, Available online 7 January 2011.

DOI: https://doi.org/10.1016/j.patcog.2010.12.014