Head gestures for perceptual interfaces: The role of context in improving recognition

Authors:

Highlights:

Abstract

Head pose and gesture offer several conversational grounding cues and are used extensively in face-to-face interaction among people. To accurately recognize visual feedback, humans often use contextual knowledge from previous and current events to anticipate when feedback is most likely to occur. In this paper we describe how contextual information can be used to predict visual feedback and improve recognition of head gestures in human–computer interfaces. Lexical, prosodic, timing, and gesture features can be used to predict a user's visual feedback during conversational dialog with a robotic or virtual agent. In non-conversational interfaces, context features based on user–interface system events can improve detection of head gestures for dialog box confirmation or document browsing. Our user study with prototype gesture-based components indicates quantitative and qualitative benefits of gesture-based confirmation over conventional alternatives. Using a discriminative approach to contextual prediction and multi-modal integration, performance of head gesture detection was improved with context features even when the topic of the test set was significantly different from that of the training set.
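The fusion idea described above can be illustrated with a minimal sketch (not the authors' implementation): the output of a vision-based head-nod recognizer is combined with contextual cues, such as the system having just asked a yes/no question or shown a dialog box, through a simple discriminative (logistic) model. All feature names and weights here are illustrative assumptions.

```python
# Minimal sketch of context-augmented head-gesture detection: a vision-based
# nod score is fused with contextual features via a hand-set logistic model.
# Feature names and weights are illustrative, not the paper's learned values.
import math

def detect_nod(vision_score, context_features, weights, bias=-1.5):
    """Return (probability, decision) that a head nod occurred.

    vision_score: output of a vision-based gesture recognizer in [0, 1].
    context_features: dict of contextual cues (0/1 or scaled values).
    weights: per-feature weights, including one for the vision score.
    """
    z = bias + weights["vision"] * vision_score
    for name, value in context_features.items():
        z += weights.get(name, 0.0) * value
    prob = 1.0 / (1.0 + math.exp(-z))  # logistic squashing
    return prob, prob > 0.5

# Illustrative weights: contextual events raise confidence in a nod.
W = {"vision": 3.0, "yes_no_question": 1.5, "dialog_box_shown": 1.0}

# Ambiguous vision evidence alone falls below threshold...
p_alone, nod_alone = detect_nod(0.4, {}, W)
# ...but the same evidence right after a yes/no question is accepted.
p_ctx, nod_ctx = detect_nod(0.4, {"yes_no_question": 1}, W)
```

The point of the sketch is that identical visual evidence can be rejected or accepted depending on context, which mirrors how context features let the detector tolerate weaker or noisier vision scores.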

Keywords: Visual feedback, Head gesture recognition, Contextual information, Context-based recognition

Article history: Received 2 June 2006, Revised 16 March 2007, Accepted 9 April 2007, Available online 19 April 2007.

DOI: https://doi.org/10.1016/j.artint.2007.04.003