Semantic labeling for prosthetic vision

Authors:

Highlights:

Abstract:

Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from limited resolution and dynamic range of induced visual percepts. This can make navigating complex environments difficult for users. We introduce semantic labeling as a technique to improve navigation outcomes for prosthetic vision users. We produce a novel egocentric vision dataset to demonstrate how semantic labeling can be applied to this problem. We also improve the speed of semantic labeling with sparse computation of unary potentials, enabling its use in real-time wearable assistive devices. We use simulated prosthetic vision to demonstrate the results of our technique. Our approach allows a prosthetic vision system to selectively highlight specific classes of objects in the user’s field of view, improving the user’s situational awareness.
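The selective-highlighting idea in the abstract can be illustrated with a minimal sketch: given a per-pixel semantic label map, keep only the pixels belonging to user-selected classes and pool them down to a coarse grid that stands in for a low-resolution phosphene display. Everything here (function name, grid size, class IDs) is illustrative and not taken from the paper; the pooling is a simple average, not the authors' rendering method.

```python
import numpy as np

def phosphene_highlight(seg, target_classes, grid=(32, 32)):
    """Sketch of class-selective rendering for simulated prosthetic vision.

    seg            : (H, W) integer array of per-pixel semantic labels
    target_classes : iterable of class IDs to highlight (e.g. obstacles)
    grid           : coarse phosphene grid shape; H, W must be divisible by it

    Returns a (grid) float array in [0, 1] giving, per phosphene, the
    fraction of its image cell covered by the target classes.
    """
    H, W = seg.shape
    gh, gw = grid
    assert H % gh == 0 and W % gw == 0, "image must tile evenly into the grid"
    # Binary mask of the classes the user wants emphasized.
    keep = np.isin(seg, list(target_classes)).astype(np.float32)
    # Average-pool each (H/gh, W/gw) cell into one phosphene intensity.
    return keep.reshape(gh, H // gh, gw, W // gw).mean(axis=(1, 3))
```

For example, a 4x4 label map with class 1 in its top-left quadrant, pooled to a 2x2 grid, lights only the top-left phosphene at full intensity; all other classes are suppressed, which is the situational-awareness effect the abstract describes.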

Keywords:

Review timeline: Received 16 April 2015, Revised 13 December 2015, Accepted 25 February 2016, Available online 4 March 2016, Version of Record 7 June 2016.

DOI: https://doi.org/10.1016/j.cviu.2016.02.015