Learning temporal structure for task based control

Abstract

We present an extension of variable length Markov models (VLMMs) that allows modelling of continuous input data, and show that the generative properties of these VLMMs are a powerful tool for dealing with real-world tracking issues. We explore methods for addressing the temporal correspondence problem in the context of a practical hand tracker, which is essential to support expectation in task-based control using these behavioural models. The hand tracker forms part of a larger multi-component distributed system, providing 3-D hand position data to a gesture recogniser client. We show how the performance of such a hand tracker can be improved by using feedback from the gesture recogniser client. In particular, feedback based on the generative extrapolation of the recogniser's internal models is shown to help the tracker deal with mid-term occlusion. We also show that VLMMs can be used to inform the prior in an expectation maximisation (EM) process used for joint spatial and temporal learning of image features.
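To make the central idea concrete, the following is a minimal sketch of a discrete VLMM of the kind the paper extends: next-symbol counts are stored for contexts up to a maximum depth, and prediction uses the longest context suffix seen in training. All names here are illustrative; the paper's actual formulation (including the continuous-input extension and generative extrapolation) is not reproduced.

```python
from collections import defaultdict

def train_vlmm(seq, max_depth=3):
    """Count next-symbol frequencies for every context up to max_depth."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(seq)):
        for d in range(max_depth + 1):
            if i - d < 0:
                break
            ctx = tuple(seq[i - d:i])  # the d symbols preceding seq[i]
            counts[ctx][seq[i]] += 1
    return counts

def predict(counts, history, max_depth=3):
    """Predict the next-symbol distribution using the longest known context."""
    for d in range(min(max_depth, len(history)), -1, -1):
        ctx = tuple(history[len(history) - d:])
        if ctx in counts:
            dist = counts[ctx]
            total = sum(dist.values())
            return {s: c / total for s, c in dist.items()}
    return {}

# Example: a strictly alternating sequence is predicted deterministically.
model = train_vlmm(list("ababab"))
print(predict(model, list("a")))  # {'b': 1.0}
```

Generative extrapolation of the sort used for occlusion handling amounts to repeatedly sampling (or taking the mode of) `predict` and appending the result to the history, so the model can continue a behaviour pattern while observations are missing.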

Keywords: Variable length Markov models, Temporal learning, 3-D tracking, Data association, Task-based control

Article history: Received 16 July 2004, Revised 10 July 2005, Accepted 5 August 2005, Available online 17 April 2006.

DOI: https://doi.org/10.1016/j.imavis.2005.08.010