Integrating character networks for extracting narratives from multimodal data

Authors:

Highlights:

Abstract

This study aims to integrate the diverse data within narrative multimedia (i.e., artworks that contain stories and are distributed through multimedia) into a unified character network (i.e., a social network between the characters that appear in a story). By combining multiple data sources (e.g., text, video, and audio), we attempt to enhance the accuracy and semantic richness of existing character networks, which confine themselves to a single data source. To merge the various data, we propose story synchronization, which (i) improves the accuracy of data extracted from the narrative multimedia and (ii) integrates the data into the unified character network. Story synchronization consists of three main steps: synchronizing (i) scenes, (ii) characters, and (iii) character networks. First, we synchronize dialogues in the text and audio to discover the speakers and timing of each dialogue. This enables us to segment scenes using the time periods in which neither dialogues (in the text and audio) nor characters (in the video) occur. Through scene segmentation, we can discretize the story of a narrative work. By comparing the occurrence of dialogues and characters in each scene, we synchronize the identities of characters across the text and video (e.g., matching characters' names to their faces). Thereby, we can more accurately estimate the participants and timing of each conversation between characters (i.e., a set of connected dialogues). Based on these conversations, the existing character networks are refined and integrated into a unified character network. Finally, we verified the efficacy of the proposed methods on real-world movies, which are among the most accessible and popular forms of narrative multimedia.
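The abstract does not give implementation details, but two of its core ideas can be illustrated concretely: segmenting scenes at time periods where no dialogue or character appearance occurs, and matching textual character names to video face tracks by comparing their per-scene occurrence patterns. The following is a minimal Python sketch under stated assumptions: dialogue and appearance events arrive pre-extracted as time intervals, and the function names (`segment_scenes`, `match_identities`), the gap threshold, and the cosine-similarity-plus-assignment matching criterion are illustrative choices, not the paper's exact method.

```python
from typing import Dict, List, Set, Tuple

import numpy as np
from scipy.optimize import linear_sum_assignment


def segment_scenes(events: List[Tuple[float, float]],
                   gap: float = 5.0) -> List[Tuple[float, float]]:
    """Merge dialogue/appearance intervals; any silence longer than `gap`
    seconds (no dialogue in text/audio, no character in video) becomes a
    scene boundary. Returns the merged (start, end) intervals as scenes."""
    merged: List[Tuple[float, float]] = []
    for start, end in sorted(events):
        if merged and start - merged[-1][1] <= gap:
            # Overlapping or nearly adjacent: extend the current scene.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged


def match_identities(name_scenes: Dict[str, Set[int]],
                     face_scenes: Dict[int, Set[int]],
                     n_scenes: int) -> Dict[str, int]:
    """Pair each name (from the text) with a face track (from the video)
    whose per-scene occurrence vector is most similar, using cosine
    similarity and an optimal one-to-one assignment."""
    names, faces = list(name_scenes), list(face_scenes)
    N = np.zeros((len(names), n_scenes))
    F = np.zeros((len(faces), n_scenes))
    for i, name in enumerate(names):
        N[i, list(name_scenes[name])] = 1.0
    for j, face in enumerate(faces):
        F[j, list(face_scenes[face])] = 1.0
    # Cosine similarity between occurrence vectors; eps avoids 0-division.
    sim = (N @ F.T) / (np.linalg.norm(N, axis=1)[:, None]
                       * np.linalg.norm(F, axis=1)[None, :] + 1e-9)
    rows, cols = linear_sum_assignment(-sim)  # maximize total similarity
    return {names[r]: faces[c] for r, c in zip(rows, cols)}
```

The one-to-one (Hungarian) assignment reflects the reasonable assumption that each named character corresponds to at most one face track; if the two modalities detect different character sets, the rectangular assignment simply leaves the surplus identities unmatched.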

Keywords: Story analytics, Character network, Computational narrative, Story synchronization, 68R10, 68T30, 68T35, 68T05

Article history: Received 8 June 2018, Revised 24 January 2019, Accepted 9 February 2019, Available online 21 February 2019, Version of Record 20 June 2019.

DOI: https://doi.org/10.1016/j.ipm.2019.02.005