LF-Net: Learning Local Features from Images

We present a novel deep architecture and a training strategy to learn a local feature pipeline from scratch, using collections of images without the need for human supervision. To do so we exploit depth and relative camera pose cues to create a virtual target that the network should achieve on one image, provided the outputs of the network for the other image. While this process is inherently non-differentiable, we show that we can optimize the network in a two-branch setup by confining it to one branch, while preserving differentiability in the other. We train our method on both indoor and outdoor datasets, with depth data from 3D sensors for the former, and depth estimates from an off-the-shelf Structure-from-Motion solution for the latter. Our models outperform the state of the art on sparse feature matching on both datasets, while running at 60+ fps for QVGA images.
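The two-branch training idea from the abstract can be sketched in a few lines. This is a minimal, hypothetical illustration, not LF-Net's actual architecture: a toy linear map stands in for the feature network, and for simplicity the target branch is frozen at its initial weights, so its outputs act as the non-differentiable "virtual target" while gradients flow only through the other branch.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 2)) * 0.5  # weights of the trainable branch
w_target = w.copy()                # target branch: never receives gradients

def features(x, w):
    """Stand-in for the feature extractor (toy linear model, assumption)."""
    return x @ w

x_a = rng.normal(size=(8, 4))                # patches from image A
x_b = x_a + 0.05 * rng.normal(size=(8, 4))   # warped patches from image B

lr, losses = 0.05, []
for _ in range(100):
    target = features(x_a, w_target)  # virtual target, treated as a constant
    pred = features(x_b, w)           # differentiable branch
    diff = pred - target
    losses.append(float(np.mean(diff ** 2)))
    # Gradient of the loss w.r.t. w flows through the prediction branch only.
    w -= lr * (2.0 / diff.size) * x_b.T @ diff
```

In the paper itself, the target is built by warping one branch's keypoint and descriptor outputs through the known depth and relative camera pose, and that branch is simply excluded from backpropagation; the frozen-weight target here is a simplification of that gradient-blocking step.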
References in zbMATH (referenced in 4 articles)
- Ma, Jiayi; Jiang, Xingyu; Fan, Aoxiang; Jiang, Junjun; Yan, Junchi: Image matching from handcrafted to deep features: a survey (2021)
- Qin, Zixuan; Yin, Mengxiao; Li, Guiqing; Yang, Feng: SP-Flow: self-supervised optical flow correspondence point prediction for real-time SLAM (2020)
- Barroso-Laguna, Axel; Riba, Edgar; Ponsa, Daniel; Mikolajczyk, Krystian: Key.Net: keypoint detection by handcrafted and learned CNN filters (2019) arXiv
- Farhan, Erez: Highly accurate matching of weakly localized features (2019)