A survey on multi-modal social event detection

Authors:

Highlights:

Abstract

Owing to the prevalence of social media sites, users can conveniently share their ideas and activities anytime and anywhere. As a result, these sites hold substantial real-world event-related data. Unlike traditional social event detection methods, which mainly focus on a single medium, multi-modal social event detection aims at discovering events in vast heterogeneous data such as texts, images, and video clips. These data describe real-world events from multiple dimensions simultaneously and can therefore provide a comprehensive and complementary understanding of social events. In recent years, multi-modal social event detection has attracted intensive attention. This paper provides a comprehensive survey of existing work. Two main lines of research in this field are first reviewed: event feature learning and event inference. In particular, event feature learning is a prerequisite because it translates social media data into a computer-friendly numerical form. Event inference aims at deciding whether a sample belongs to a social event. Then, several public datasets in the community are introduced and comparison results are provided. Finally, a general discussion of the insights is presented to promote the development of multi-modal social event detection.

Keywords: Multi-modality, Social event detection, Feature learning, Event inference

Article history: Received 24 October 2019, Revised 21 February 2020, Accepted 22 February 2020, Available online 27 February 2020, Version of Record 4 April 2020.

Paper URL: https://doi.org/10.1016/j.knosys.2020.105695