MMED: A multi-domain and Multi-modality event dataset

Authors:

Highlights:

Abstract

In this work, we release a multi-domain and multi-modality event dataset (MMED), containing 25,052 textual news articles collected from hundreds of news media sites (e.g., Yahoo News, BBC News) and 75,884 image posts shared on Flickr by thousands of social media users. The articles, contributed by professional journalists, and the images, shared by amateur users, are annotated according to 410 real-world events, covering emergencies, natural disasters, sports, ceremonies, elections, protests, military interventions, economic crises, etc. The MMED dataset was collected by following these principles: high relevance to the application needs, a wide range of event types, non-ambiguity of the event labels, imbalanced event clusters, and difficulty in discriminating between event labels. The dataset can stimulate innovative research on related challenging problems, such as (weakly aligned) cross-modal retrieval and cross-domain event discovery, and inspire visual relation mining and reasoning. For comparison, 15 baselines for two scenarios have been evaluated quantitatively and qualitatively on the dataset.

Keywords: Cross-modal retrieval, Real-world event detection, Benchmark dataset

Article history: Received 22 December 2019, Revised 23 May 2020, Accepted 27 May 2020, Available online 18 June 2020, Version of Record 18 June 2020.

DOI: https://doi.org/10.1016/j.ipm.2020.102315