Viewpoint projection based deep feature learning for single and dyadic action recognition

Authors:

Highlights:

• A method that can be used for single and dyadic action recognition is proposed.

• Depth sequences are concatenated to construct a 3D isosurface.

• Different views of the 3D volume are mapped to 2D deep features with a pre-trained CNN (a sketch of this pipeline follows the highlights).

• Experiments are carried out on datasets commonly used by the community.

• Results obtained with deep features from different layers of the CNN are compared.
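
The highlights outline a projection-then-CNN pipeline: depth frames are stacked into a 3D volume, viewpoints of that volume are rendered as 2D images, a pre-trained CNN turns each view into a feature vector, and a random forest classifies the action. The following is a minimal sketch of that idea, not the authors' implementation: it assumes a torchvision ResNet-18 as the pre-trained CNN, axis-aligned max-projections as a simple stand-in for rendering isosurface viewpoints, and a scikit-learn random forest; the helpers `project_volume` and `sequence_to_feature` are hypothetical names.

```python
# Sketch of a viewpoint-projection + pre-trained-CNN + random-forest pipeline.
# All concrete choices (ResNet-18, max-projection views, 3 axes) are assumptions.
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.ensemble import RandomForestClassifier

def project_volume(volume, axis):
    """Collapse the 3D depth volume along one axis: a simple stand-in for
    rendering one viewpoint of the isosurface."""
    img = volume.max(axis=axis)                        # 2D projection
    img = (img - img.min()) / (np.ptp(img) + 1e-8)     # normalize to [0, 1]
    return np.stack([img] * 3, axis=-1)                # 3-channel image for the CNN

# Pre-trained CNN used as a fixed feature extractor (assumption: ResNet-18).
cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
cnn.fc = torch.nn.Identity()                           # drop the classification head
cnn.eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
])

def sequence_to_feature(depth_frames):
    """depth_frames: (T, H, W) array of depth maps for one action clip."""
    volume = np.stack(depth_frames, axis=0)            # concatenate frames into a 3D volume
    views = [project_volume(volume, axis=a) for a in range(3)]
    feats = []
    with torch.no_grad():
        for v in views:
            x = preprocess(v.astype(np.float32)).unsqueeze(0)
            feats.append(cnn(x).squeeze(0).numpy())    # 512-D descriptor per view
    return np.concatenate(feats)                       # one descriptor per clip

# Hypothetical usage with lists of clips and integer action labels:
# X = np.stack([sequence_to_feature(clip) for clip in clips])
# clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```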

Keywords: Action recognition, Deep learning, Convolutional neural networks, Random forest, Depth maps

Article history: Received 6 October 2017, Revised 2 March 2018, Accepted 23 March 2018, Available online 26 March 2018, Version of Record 3 April 2018.

DOI: https://doi.org/10.1016/j.eswa.2018.03.047