An expressive three-mode principal components model of human action style

Authors:

Highlights:

Abstract

We present a three-mode expressive-feature model for representing and recognizing performance styles of human actions. A set of style variations for an action is initially arranged into a three-mode data representation (body pose, time, style) and factored into its three-mode principal components to reduce the data dimensionality. We next embed tunable weights on trajectories within the sub-space model to enable different context-based style estimations. We outline physical and perceptual parameterization methods for choosing style labels for the training data, from which we automatically learn the necessary expressive weights using a gradient descent procedure. Experiments are presented examining several motion-capture walking variations corresponding to carrying load, gender, and pace. Results demonstrate a greater flexibility of the expressive three-mode model, over standard squared-error style estimation, to adapt to different style matching criteria.
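The core dimensionality-reduction step described above can be illustrated with a higher-order SVD (Tucker-style) factorization of a (pose, time, style) data tensor. The sketch below is a minimal illustration under assumed tensor dimensions, ranks, and variable names; it is not the paper's exact formulation or learning procedure.

```python
# Minimal sketch of a three-mode principal components reduction (HOSVD/Tucker
# style) applied to a (pose x time x style) data tensor. Shapes, ranks, and
# names are illustrative assumptions.
import numpy as np

def unfold(tensor, mode):
    """Matricize the tensor along the given mode (mode-n unfolding)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def three_mode_pca(X, ranks):
    """Return per-mode basis matrices and the reduced core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        # Leading left singular vectors of each unfolding give the mode basis.
        U, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
        factors.append(U[:, :r])
    # Project the data onto all three bases to obtain the reduced core tensor.
    core = X
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return factors, core

# Example: 30 pose parameters, 60 time samples, 12 style variations.
X = np.random.randn(30, 60, 12)
(U_pose, U_time, U_style), core = three_mode_pca(X, ranks=(10, 15, 4))
print(core.shape)  # (10, 15, 4)
```

New style examples would then be represented by coordinates in this reduced sub-space; the paper's expressive weighting of trajectories and its gradient-descent weight learning are applied on top of such a representation.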

Keywords: Action recognition, Style, Three-mode principal components, Motion analysis, Gesture recognition

Article history: Received 13 September 2002, Revised 11 June 2003, Accepted 26 June 2003, Available online 27 August 2003.

Paper link: https://doi.org/10.1016/S0262-8856(03)00138-0