DTAM: Dense tracking and mapping in real-time.

DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but on dense, every-pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non-convex optimisation framework. Interleaved with mapping, we track the camera’s 6DOF motion precisely by frame-rate whole-image alignment against the entire dense model. Our algorithms are highly parallelisable throughout, and DTAM achieves real-time performance using current commodity GPU hardware. We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state-of-the-art method using features, and we also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application.
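To make the two core ingredients of the abstract concrete, here is a minimal NumPy sketch of (a) a dense photometric data term evaluated over every pixel rather than sparse features, and (b) a spatially regularised (total-variation style) energy over an inverse-depth map. This is an illustrative assumption, not DTAM's actual implementation: the real system accumulates the data cost over many video frames in a cost volume and minimises the energy with a primal-dual, non-convex optimisation on the GPU; the function names and the simple L1 penalties here are hypothetical stand-ins.

```python
import numpy as np

def photometric_error(i_ref, i_cur_warped):
    """Dense photometric data term: absolute intensity residual
    summed over every pixel of the whole image (no feature
    extraction). Both inputs are grayscale images of equal shape;
    i_cur_warped is assumed already warped into the reference view."""
    return np.abs(i_ref.astype(float) - i_cur_warped.astype(float)).sum()

def regularised_energy(inv_depth, data_cost, lam=0.1):
    """Spatially regularised energy in the spirit of DTAM's mapping
    objective: a total-variation smoothness term on the inverse-depth
    map plus a weighted per-pixel data cost. Simplified sketch:
    DTAM uses a (Huber-)TV regulariser and a multi-frame cost volume."""
    gx = np.diff(inv_depth, axis=1)          # horizontal gradient
    gy = np.diff(inv_depth, axis=0)          # vertical gradient
    tv = np.abs(gx).sum() + np.abs(gy).sum() # TV smoothness term
    return tv + lam * data_cost.sum()

# Toy usage: identical images give zero data cost, and a constant
# inverse-depth map with zero data cost has zero energy.
img = np.random.default_rng(0).random((8, 8))
flat = np.ones((8, 8))
e_photo = photometric_error(img, img)
e_total = regularised_energy(flat, np.zeros((8, 8)))
```

The design point the sketch captures is that every pixel contributes independently to both terms, which is exactly what makes the approach trivially parallelisable on a GPU.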
References in zbMATH (referenced in 5 articles)
- Riba, Edgar; Mishkin, Dmytro; Ponsa, Daniel; Rublee, Ethan; Bradski, Gary: Kornia: an open source differentiable computer vision library for PyTorch (2019) arXiv
- Yu, Fangwen; Shang, Jianga; Hu, Youjian; Milford, Michael: NeuroSLAM: a brain-inspired SLAM system for 3D environments (2019)
- Hollósi, Gergely; Lukovszki, Csaba; Moldován, István; Plósz, Sándor; Harasztos, Frigyes: Monocular indoor localization techniques for smartphones (2016)
- Becker, Florian; Lenzen, Frank; Kappes, Jörg H.; Schnörr, Christoph: Variational recursive joint estimation of dense scene structure and camera motion from monocular high speed traffic sequences (2013)
- Garg, Ravi; Roussos, Anastasios; Agapito, Lourdes: A variational approach to video registration with subspace constraints (2013)