Video annotation using hierarchical Dirichlet process mixture model

Authors:

Highlights:

Abstract

Video annotation has become an important topic in multimedia information retrieval. Video content analysis using low-level features alone cannot bridge the gap between low-level features and high-level semantic concepts. In this study, we propose an approach that combines visual features extracted from the visual track of a video with keywords extracted from the speech transcripts of its audio track. We construct a predictive model using a hierarchical Dirichlet process mixture model. In the hierarchical model, an additional layer is added to exploit the sharing of visual feature distributions among frames and to use this shared information to enhance model learning. At the top level, visual features in the groups are shared appropriately by imposing a prior correlation. At the bottom level, each visual feature and its associated annotation are modeled with mixture distributions. The learned predictive model allows us to compute a conditional likelihood over words, which is used to predict the most likely annotation words for a test sample. The model achieves higher annotation accuracy than the same model without the hierarchy.
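The prediction step described in the abstract — clustering visual features with a Dirichlet process prior and scoring annotation words through the clusters — can be illustrated with a minimal sketch. This is not the authors' implementation: it uses a single (non-hierarchical) DP mixture fitted by one greedy Chinese-restaurant-process sweep, spherical Gaussians for the visual features, and hypothetical names (`DPMixtureAnnotator`, `alpha`); a new-cluster prior centered at the origin with broad variance is an added assumption. It only shows the shape of the word-likelihood computation p(w | v) ∝ Σ_k p(k | v) · p(w | k).

```python
import math

def gaussian_loglik(x, mean, var=1.0):
    """Log-likelihood of feature vector x under a spherical Gaussian."""
    return sum(-0.5 * math.log(2 * math.pi * var) - (xi - mi) ** 2 / (2 * var)
               for xi, mi in zip(x, mean))

class DPMixtureAnnotator:
    """Toy DP mixture annotator (hypothetical class, not the paper's model)."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha      # CRP concentration parameter (assumed value)
        self.clusters = []      # list of (feature_vectors, word_counts)

    def fit(self, data):
        """data: list of (feature_vector, word). One greedy CRP sweep:
        each sample joins the cluster maximizing CRP prior x likelihood."""
        for x, w in data:
            n = sum(len(feats) for feats, _ in self.clusters)
            scores = []
            for feats, _ in self.clusters:
                mean = [sum(col) / len(feats) for col in zip(*feats)]
                scores.append(math.log(len(feats) / (n + self.alpha))
                              + gaussian_loglik(x, mean))
            # New-cluster option: broad prior at the origin (sketch assumption).
            scores.append(math.log(self.alpha / (n + self.alpha))
                          + gaussian_loglik(x, [0.0] * len(x), var=10.0))
            k = scores.index(max(scores))
            if k == len(self.clusters):
                self.clusters.append(([], {}))
            feats, counts = self.clusters[k]
            feats.append(x)
            counts[w] = counts.get(w, 0) + 1

    def predict_word(self, x):
        """Most likely annotation word: argmax_w sum_k p(k|x) * p(w|k)."""
        liks = []
        for feats, _ in self.clusters:
            mean = [sum(col) / len(feats) for col in zip(*feats)]
            liks.append(len(feats) * math.exp(gaussian_loglik(x, mean)))
        z = sum(liks) or 1.0
        word_scores = {}
        for (feats, counts), lik in zip(self.clusters, liks):
            total = sum(counts.values())
            for w, c in counts.items():
                word_scores[w] = word_scores.get(w, 0.0) + (lik / z) * c / total
        return max(word_scores, key=word_scores.get)
```

A hierarchical DP would additionally tie the cluster atoms across per-frame groups so feature distributions are shared between frames; this sketch collapses that to a single group for brevity.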

Keywords: Video annotation, Visual feature, Speech transcripts, Dirichlet process, Hierarchical Dirichlet process mixture model

Article history: Available online 16 October 2010.

Paper link: https://doi.org/10.1016/j.eswa.2010.08.094