Visual estimation of pointed targets for robot guidance via fusion of face pose and hand orientation

Authors:

Highlights:

Abstract

In this paper we address an important issue in human–robot interaction, that of accurately deriving pointing information from a corresponding gesture. Based on the fact that in most applications it is the pointed object rather than the actual pointing direction which is important, we formulate a novel approach which takes into account prior information about the location of possible pointed targets. To decide about the pointed object, the proposed approach uses the Dempster–Shafer theory of evidence to fuse information from two different input streams: head pose, estimated by visually tracking the off-plane rotations of the face, and hand pointing orientation. Detailed experimental results are presented that validate the effectiveness of the method in realistic application setups.
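To make the fusion step concrete, below is a minimal Dempster–Shafer sketch in Python: two mass functions, one derived from head pose and one from hand orientation, are combined with Dempster's rule over a small set of candidate targets, and the target with the highest belief is selected. The target names, mass values, and helper functions are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of Dempster–Shafer evidence fusion over candidate pointed targets.
# All names and numbers below are hypothetical, for illustration only.
from itertools import chain, combinations

TARGETS = ("lamp", "door", "window")  # frame of discernment (assumed targets)

def powerset(frame):
    """All non-empty subsets of the frame, as frozensets."""
    return [frozenset(s) for s in chain.from_iterable(
        combinations(frame, r) for r in range(1, len(frame) + 1))]

def combine(m1, m2, frame):
    """Dempster's rule of combination for two mass functions."""
    fused = {s: 0.0 for s in powerset(frame)}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] += ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the two sources are incompatible")
    return {s: v / (1.0 - conflict) for s, v in fused.items()}

def belief(m, hypothesis):
    """Bel(A): sum of masses of all subsets of A."""
    return sum(v for s, v in m.items() if s <= hypothesis)

# Hypothetical mass functions: the head-pose cue favours the lamp,
# the hand-orientation cue is less certain; residual mass models ignorance.
m_head = {frozenset({"lamp"}): 0.6,
          frozenset({"lamp", "door"}): 0.2,
          frozenset(TARGETS): 0.2}
m_hand = {frozenset({"lamp"}): 0.5,
          frozenset({"door"}): 0.3,
          frozenset(TARGETS): 0.2}

m_fused = combine(m_head, m_hand, TARGETS)
pointed = max(TARGETS, key=lambda t: belief(m_fused, frozenset({t})))
print("estimated pointed target:", pointed)
```

In this sketch the prior knowledge of possible targets enters through the frame of discernment, and disagreement between the two cues shows up as conflict mass that is normalized away by Dempster's rule.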

Keywords:

Article history: Received 9 August 2011, Accepted 16 December 2013, Available online 24 December 2013.

Paper URL: https://doi.org/10.1016/j.cviu.2013.12.006