A belief-based sequential fusion approach for fusing manual signs and non-manual signals

Abstract:

Most of the research on sign language recognition concentrates on recognizing only manual signs (hand gestures and shapes), discarding a very important component: the non-manual signals (facial expressions and head/shoulder motion). We address the recognition of signs with both manual and non-manual components using a sequential belief-based fusion technique. The manual components, which carry information of primary importance, are utilized in the first stage. The second stage, which makes use of non-manual components, is only employed if there is hesitation in the decision of the first stage. We employ belief formalism both to model the hesitation and to determine the sign clusters within which the discrimination takes place in the second stage. We have implemented this technique in a sign tutor application. Our results on the eNTERFACE’06 ASL database show an improvement over the baseline system which uses parallel or feature fusion of manual and non-manual features: we achieve an accuracy of 81.6%.
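The two-stage scheme described above can be sketched in a few lines: a first stage decides from the manual-sign scores alone; only when its belief masses show hesitation (the top candidates are too close) does a second stage discriminate within that cluster using the non-manual scores. This is a hypothetical illustration, not the paper's implementation: the function names, the dictionary-based score representation, and the `hesitation_margin` threshold are all assumptions.

```python
def normalize(scores):
    """Turn raw likelihoods into belief-like masses that sum to 1."""
    total = sum(scores.values())
    return {sign: s / total for sign, s in scores.items()}

def sequential_fusion(manual_scores, nonmanual_scores, hesitation_margin=0.1):
    """Two-stage fusion sketch (illustrative, not the paper's exact rule).

    manual_scores / nonmanual_scores: dict mapping sign label -> positive
    score (e.g. an HMM likelihood) from the manual and non-manual models.
    """
    beliefs = normalize(manual_scores)
    ranked = sorted(beliefs, key=beliefs.get, reverse=True)
    best, second = ranked[0], ranked[1]

    # Stage 1: accept the manual-only decision if it is confident enough.
    if beliefs[best] - beliefs[second] >= hesitation_margin:
        return best

    # Stage 2 (hesitation): form the cluster of close candidates and
    # discriminate within it using the non-manual scores.
    cluster = [s for s in ranked if beliefs[best] - beliefs[s] < hesitation_margin]
    return max(cluster, key=lambda s: nonmanual_scores[s])
```

For example, when two signs share nearly identical hand gestures but differ in facial expression, stage 1 hesitates and the non-manual scores break the tie; a sign with a clearly dominant manual score is accepted without consulting stage 2.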

Keywords: Sign language recognition, Manual signs and non-manual signals, Hidden Markov models, Data fusion, Belief functions

Article history: Received 3 August 2007, Revised 4 June 2008, Accepted 21 September 2008, Available online 8 October 2008.

DOI: https://doi.org/10.1016/j.patcog.2008.09.010