Multimodal joint learning for personal knowledge base construction from Twitter-based lifelogs

Abstract

People are accustomed to logging their lives on social media platforms. In this paper, we aim to extract life events by leveraging both the visual and textual information shared on Twitter, and to construct personal knowledge bases of individuals. The issues to be tackled include: (1) not all text descriptions are related to life events, (2) life events in a text description can be expressed explicitly or implicitly, (3) the predicates in implicit life events are often absent, and (4) the mapping from natural language predicates to knowledge base relations may be ambiguous. We propose a multimodal joint learning approach, trained on both the text and the images of social media posts shared on Twitter, to detect life events in tweets and extract event components including subjects, predicates, objects, and time expressions. Finally, the extracted information is transformed into knowledge base facts. The evaluation is performed on a collection of lifelogs from 18 Twitter users. Experimental results show that our proposed system is effective at life event extraction, and the constructed personal knowledge bases are expected to be useful for memory-recall applications.
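The abstract outlines a two-stage pipeline: a multimodal joint model first detects life-event tweets and tags their components, and the extracted components are then mapped to knowledge base facts. The sketch below is a minimal, hypothetical illustration of such a joint model in PyTorch; the class name, dimensions, tag inventory, and fusion strategy (concatenating a projected image vector with BiLSTM token states) are assumptions made for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MultimodalLifeEventModel(nn.Module):
    """Hypothetical joint model: fuses a tweet-text encoding with an image
    encoding, then jointly predicts (a) whether the tweet describes a life
    event and (b) per-token tags for event components (subject, predicate,
    object, time expression)."""

    def __init__(self, vocab_size, text_dim=128, img_dim=512, hidden=256, n_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, text_dim)
        # BiLSTM over tweet tokens; hidden//2 per direction -> hidden total
        self.text_enc = nn.LSTM(text_dim, hidden // 2,
                                bidirectional=True, batch_first=True)
        # Image features assumed to come from a pretrained vision encoder
        self.img_proj = nn.Linear(img_dim, hidden)
        self.event_clf = nn.Linear(hidden * 2, 2)    # life event vs. not
        # 9 tags = BIO over 4 component types (S/P/O/time) plus O
        self.tagger = nn.Linear(hidden * 2, n_tags)

    def forward(self, token_ids, img_feat):
        tok = self.embed(token_ids)                   # (B, T, text_dim)
        h, _ = self.text_enc(tok)                     # (B, T, hidden)
        img = self.img_proj(img_feat)                 # (B, hidden)
        # Broadcast the image vector to every token position and concatenate
        img_rep = img.unsqueeze(1).expand(-1, h.size(1), -1)
        fused = torch.cat([h, img_rep], dim=-1)       # (B, T, hidden*2)
        event_logits = self.event_clf(fused.mean(dim=1))  # tweet-level decision
        tag_logits = self.tagger(fused)                   # token-level tags
        return event_logits, tag_logits

# Usage with dummy inputs:
model = MultimodalLifeEventModel(vocab_size=30000)
tokens = torch.randint(0, 30000, (2, 16))     # batch of 2 tweets, 16 tokens each
img = torch.randn(2, 512)                     # pooled image features per tweet
event_logits, tag_logits = model(tokens, img) # shapes (2, 2) and (2, 16, 9)
```

A tagged (subject, predicate, object, time) tuple extracted this way would then be serialized as knowledge base facts, e.g. a triple such as (user, attend, concert) with an attached time qualifier, which is the final step the abstract describes.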

Keywords: Lifelogging, Life event extraction, Personal knowledge base construction, Social media

Article history: Received 6 April 2019; Revised 21 August 2019; Accepted 16 October 2019; Available online 12 November 2019; Version of Record 20 October 2020.

DOI: https://doi.org/10.1016/j.ipm.2019.102148