Learning Image Representations Tied to Egomotion from Unlabeled Video

Authors: Dinesh Jayaraman, Kristen Grauman

Abstract

Understanding how images of objects and scenes behave in response to specific egomotions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose a new “embodied” visual learning paradigm, exploiting proprioceptive motor signals to train visual representations from egocentric video with no manual supervision. Specifically, we enforce that our learned features exhibit equivariance, i.e., they respond predictably to transformations associated with distinct egomotions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.
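To make the equivariance constraint in the abstract concrete: the idea is to train a feature extractor f so that, for each discrete egomotion class g, there is a learned map M_g with f(x_after) ≈ M_g f(x_before) for frame pairs related by egomotion g. The following is a minimal PyTorch sketch of such an objective, not the authors' released implementation; the class name, the linear form of the per-motion maps, and all shapes are illustrative assumptions.

```python
# Minimal sketch of an egomotion-equivariance loss, assuming a discrete set
# of egomotion classes, each with its own learned linear map M_g.
# Not the authors' code; names and shapes are hypothetical.
import torch
import torch.nn as nn


class EquivariantFeatures(nn.Module):
    def __init__(self, feature_net: nn.Module, feat_dim: int, num_motions: int):
        super().__init__()
        self.feature_net = feature_net  # any CNN mapping images to (B, feat_dim)
        # One learned linear map M_g per discrete egomotion class g.
        self.motion_maps = nn.ModuleList(
            nn.Linear(feat_dim, feat_dim, bias=False) for _ in range(num_motions)
        )

    def equivariance_loss(self, x_before, x_after, motion_ids):
        """motion_ids: list of ints, the egomotion class for each frame pair."""
        z_before = self.feature_net(x_before)  # (B, D) features of first frames
        z_after = self.feature_net(x_after)    # (B, D) features of second frames
        # Apply the egomotion-specific map M_g to each "before" feature.
        z_pred = torch.stack(
            [self.motion_maps[g](z) for z, g in zip(z_before, motion_ids)]
        )
        # Penalize mismatch between predicted and observed "after" features,
        # enforcing that features respond predictably to each egomotion.
        return ((z_pred - z_after) ** 2).sum(dim=1).mean()
```

In the paper's setting, training pairs come from egocentric video in which the proprioceptive motor signal is quantized into a small set of egomotion classes; this sketch shows only the equivariance term, omitting the additional objectives the full method combines it with.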

Keywords: Feature Space, Convolutional Neural Network, Feature Learning, Temporal Coherence, Scene Recognition

DOI: https://doi.org/10.1007/s11263-017-1001-2