3-D motion estimation by integrating visual cues in 2-D multi-modal opti-acoustic stereo sequences

Authors:

Highlights:

Abstract:

Object reconstruction and target-based positioning are among the critical capabilities in deploying submersible platforms for a range of underwater applications, e.g., search and inspection missions. Optical cameras provide high resolution and target details, but their utility is constrained by the visibility range. In comparison, high-frequency (MHz) 2-D sonar imaging systems introduced to the commercial market in recent years can image targets at distances of tens of meters in highly turbid waters. Where fair visibility permits optical imaging at reasonable quality, integration with 2-D sonar data can enable better performance than deploying either system alone, thus enabling automated operation over a wider range of conditions. We investigate the estimation of 3-D motion by exploiting the visual cues in optical and sonar video for vision-based navigation and 3-D positioning of submersible platforms. The application of the structure-from-motion paradigm in this multi-modal imaging scenario also enables the 3-D reconstruction of scene features. Our method does not require establishing multi-modal associations between corresponding optical and sonar features, but rather relies on tracking features in the sonar and optical motion sequences independently. In addition to improving motion estimation accuracy, the proposed method overcomes the inherent ambiguities of monocular vision, e.g., the scale-factor ambiguity and the dual interpretation of motion relative to planar scenes. We discuss how our solution can also provide an effective strategy to address the complex opti-acoustic stereo matching problem. Experiments with synthetic and real data demonstrate the advantages of our technical contribution.
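As a hedged illustration of the scale-factor point above, the sketch below is not taken from the paper: the function names, the near-planar-motion assumption, and the post-hoc scale fix are all our own simplifications. It only shows the general idea that metric range information from independently tracked sonar features can resolve the unknown scale of a monocular, up-to-scale translation estimate, here by fitting a 2-D rigid (Kabsch/Procrustes) motion between two sonar pings and using its metric translation magnitude.

```python
# Hypothetical sketch (not the authors' estimator). Assumes near-planar sensor
# motion, sonar features expressed as metric x-y points in the zero-elevation
# plane and tracked between two pings, and a unit-norm translation direction
# from monocular structure from motion.
import numpy as np

def rigid_fit_2d(src, dst):
    """Least-squares 2-D rotation R and translation t such that dst ≈ src @ R.T + t."""
    src_c, dst_c = src - src.mean(axis=0), dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)        # Kabsch/Procrustes alignment
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def metric_scale(t_hat_xy, sonar_pts_t0, sonar_pts_t1):
    """Metric scale for the in-plane part of a unit-norm monocular translation."""
    _, t_scene = rigid_fit_2d(sonar_pts_t0, sonar_pts_t1)
    # The apparent shift of stationary scene features has the same magnitude as
    # the (opposite) sensor translation, which the sonar measures metrically.
    return np.linalg.norm(t_scene) / np.linalg.norm(t_hat_xy)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts_t0 = rng.uniform(-5.0, 5.0, size=(20, 2))    # stationary sonar features (m)
    true_t = np.array([0.6, 0.2])                    # true metric sensor translation (m)
    pts_t1 = pts_t0 - true_t                         # features shift opposite the sensor
    t_hat_xy = true_t / np.linalg.norm(true_t)       # up-to-scale optical direction
    print(metric_scale(t_hat_xy, pts_t0, pts_t1))    # ~0.632, i.e., |true_t|
```

The paper itself integrates the optical and sonar cues within a joint motion-estimation framework rather than as a separate scale correction; the snippet is only meant to make the ambiguity and its sonar-based resolution concrete.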

Keywords:

Article history: Received 21 July 2009, Accepted 22 April 2010, Available online 25 May 2010.

Article URL: https://doi.org/10.1016/j.cviu.2010.04.005