MODS: Fast and robust method for two-view matching

Authors:

Highlights:

Abstract

A novel algorithm for wide-baseline matching called MODS (matching on demand with view synthesis) is presented. The MODS algorithm is experimentally shown to solve a broader range of wide-baseline problems than the state of the art while being nearly as fast as standard matchers on simple problems. The apparent robustness-versus-speed trade-off is finessed by the use of progressively more time-consuming feature detectors and by on-demand generation of synthesized images, performed until a reliable estimate of the geometry is obtained. We introduce an improved method for tentative correspondence selection, applicable both with and without view synthesis. A modification of the standard first-to-second nearest distance rule increases the number of correct matches by 5–20% at no additional computational cost. Performance of the MODS algorithm is evaluated on several standard publicly available datasets and on a new set of geometrically challenging wide-baseline problems that is made public together with the ground truth. Experiments show that MODS outperforms the state of the art in robustness and speed. Moreover, MODS performs well on other classes of difficult two-view problems, such as matching images from different modalities, across a wide temporal baseline, or under significant lighting changes.
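The abstract describes an on-demand strategy: cheap detectors and no view synthesis are tried first, and progressively more expensive detectors and synthesized views are added only until the two-view geometry can be estimated reliably. The sketch below illustrates that control flow only; it is not the authors' implementation. The tier list, the tilt schedule in `synthesize_views`, the `MIN_INLIERS` threshold, and the use of a homography for verification are all assumptions made for this example.

```python
# Illustrative sketch of an on-demand, tiered two-view matcher in the spirit of
# MODS (not the authors' code).  Detector tiers, tilts and thresholds are
# assumptions chosen for brevity.
import cv2
import numpy as np

MIN_INLIERS = 15  # assumed threshold for declaring the geometry reliable


def synthesize_views(img, tilts):
    """Return (warped_image, warp_matrix) pairs simulating viewpoint change.

    A tilt t is approximated by shrinking one image axis (ASIFT-style);
    the synthesis schedule used by MODS is richer than this.
    """
    h, w = img.shape[:2]
    views = [(img, np.eye(3))]
    for t in tilts:
        A = np.array([[1.0 / t, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
        views.append((cv2.warpPerspective(img, A, (int(w / t), h)), A))
    return views


def match_tier(img1, img2, detector, norm, tilts):
    """Detect on all synthesized views, match descriptors, verify geometry."""
    bf = cv2.BFMatcher(norm)
    pts1, pts2 = [], []
    for v1, A1 in synthesize_views(img1, tilts):
        k1, d1 = detector.detectAndCompute(v1, None)
        for v2, A2 in synthesize_views(img2, tilts):
            k2, d2 = detector.detectAndCompute(v2, None)
            if d1 is None or d2 is None:
                continue
            for pair in bf.knnMatch(d1, d2, k=2):
                if len(pair) < 2:
                    continue
                m, n = pair
                if m.distance < 0.8 * n.distance:  # plain ratio test
                    p1 = np.linalg.inv(A1) @ np.array([*k1[m.queryIdx].pt, 1.0])
                    p2 = np.linalg.inv(A2) @ np.array([*k2[m.trainIdx].pt, 1.0])
                    pts1.append(p1[:2] / p1[2])  # back to original coordinates
                    pts2.append(p2[:2] / p2[2])
    if len(pts1) < 8:
        return None, 0
    H, mask = cv2.findHomography(np.float32(pts1), np.float32(pts2), cv2.RANSAC, 3.0)
    return H, int(mask.sum()) if mask is not None else 0


def mods_like_match(img1, img2):
    """Escalate through progressively more expensive matching tiers."""
    tiers = [
        (cv2.ORB_create(2000), cv2.NORM_HAMMING, []),        # fast, no synthesis
        (cv2.SIFT_create(), cv2.NORM_L2, [2.0]),             # slower, mild synthesis
        (cv2.SIFT_create(), cv2.NORM_L2, [2.0, 4.0, 8.0]),   # heavy synthesis last
    ]
    for detector, norm, tilts in tiers:
        H, inliers = match_tier(img1, img2, detector, norm, tilts)
        if inliers >= MIN_INLIERS:
            return H, inliers  # stop as soon as the estimate looks reliable
    return None, 0
```

The design point mirrored here is that the expensive tiers are never run on easy image pairs, which is why an on-demand matcher can stay close to a standard matcher in speed while still handling harder cases.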
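The abstract also mentions a modification of the standard first-to-second nearest distance rule for selecting tentative correspondences. One plausible reading, sketched below, is to take the ratio not against the raw second nearest descriptor but against the closest descriptor whose keypoint is geometrically distinct from the first one, so that near-duplicate detections of the same region do not suppress a correct match. The exact rule in the paper may differ; the 10-pixel radius, the `k=10` candidate list, and the function name `modified_ratio_matches` are assumptions for this illustration.

```python
# Sketch of a modified first-to-second nearest-neighbour ratio test (one
# possible reading of the rule described in the abstract, not the paper's
# exact formulation).
import numpy as np


def modified_ratio_matches(desc1, desc2, kps2_xy, ratio=0.8, min_dist_px=10.0, k=10):
    """Return (query_index, train_index) tentative correspondences.

    desc1, desc2 : (N, D) and (M, D) float descriptor arrays
    kps2_xy      : (M, 2) keypoint coordinates in the second image
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)[:k]          # k closest candidates
        first = order[0]
        # Use as the "second" neighbour the closest candidate that is far
        # enough in the image from the first one.
        second_dist = None
        for j in order[1:]:
            if np.linalg.norm(kps2_xy[j] - kps2_xy[first]) > min_dist_px:
                second_dist = dists[j]
                break
        if second_dist is not None and dists[first] < ratio * second_dist:
            matches.append((i, int(first)))
    return matches
```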

Keywords:

Article history: Received 21 January 2015, Revised 24 May 2015, Accepted 14 August 2015, Available online 1 September 2015, Version of Record 1 November 2015.

DOI: https://doi.org/10.1016/j.cviu.2015.08.005