KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera. KinectFusion enables a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. Only the depth data from the Kinect is used to track the 3D pose of the sensor and to reconstruct geometrically precise 3D models of the physical scene in real time. The capabilities of KinectFusion, as well as the novel GPU-based pipeline, are described in full. Uses of the core system for low-cost handheld scanning, geometry-aware augmented reality, and physics-based interactions are shown. Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor without degrading camera tracking or reconstruction. These extensions enable real-time multi-touch interactions anywhere, allowing any planar or non-planar reconstructed physical surface to be appropriated for touch.
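The reconstruction step described above fuses each incoming depth frame into a volumetric truncated signed distance function (TSDF) using a weighted running average, the representation KinectFusion is known for. A minimal NumPy sketch of that per-frame integration is given below; the function name, the identity camera pose (camera at the volume origin looking along +z), and the normalised distance values are simplifications for illustration, not the GPU implementation described in the entry.

```python
import numpy as np

def integrate_frame(tsdf, weights, depth, fx, fy, cx, cy, voxel_size, trunc):
    """Fuse one depth frame into a TSDF volume by a weighted running average.

    Assumes the camera sits at the volume origin with identity rotation,
    looking along +z. Stored TSDF values are normalised to [-1, 1].
    """
    nx, ny, nz = tsdf.shape
    # Voxel-centre coordinates in camera space.
    xs, ys, zs = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    px = (xs + 0.5) * voxel_size
    py = (ys + 0.5) * voxel_size
    pz = (zs + 0.5) * voxel_size
    # Project voxel centres into the depth image (pinhole model).
    u = np.round(fx * px / pz + cx).astype(int)
    v = np.round(fy * py / pz + cy).astype(int)
    h, w = depth.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(pz)
    d[inside] = depth[v[inside], u[inside]]
    # Signed distance along the viewing ray, truncated and normalised.
    sdf = d - pz
    keep = inside & (d > 0) & (sdf > -trunc)  # skip voxels far behind surface
    sdf = np.clip(sdf, -trunc, trunc) / trunc
    # Weighted running average: new weight is old weight + 1 where observed.
    w_new = weights + keep
    tsdf_new = np.where(keep, (tsdf * weights + sdf) / np.maximum(w_new, 1),
                        tsdf)
    return tsdf_new, w_new
```

After several frames the zero-level set of the volume is the reconstructed surface; a running average of this kind is what lets noisy per-frame depth converge to a smooth model.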
References in zbMATH (referenced in 12 articles)
- Lu, Wuyue; Liu, Ligang: Surface reconstruction via cooperative evolutions (2020)
- Kleinschmidt, Sebastian P.; Wagner, Bernardo: Spatial fusion of different imaging technologies using a virtual multimodal camera (2018)
- Lu, Feixiang; Zhou, Bin; Lu, Feng; Zhang, Yu; Chen, Xiaowu; Zhao, Qinping: Reconstructing non-rigid object with large movement using a single depth camera (2018)
- Slavcheva, Miroslava; Kehl, Wadim; Navab, Nassir; Ilic, Slobodan: SDF-2-SDF registration for real-time 3D reconstruction from RGB-D data (2018)
- Kadambi, Achuta; Taamazyan, Vage; Shi, Boxin; Raskar, Ramesh: Depth sensing using geometrically constrained polarization normals (2017)
- Khan, Salman H.; Bennamoun, Mohammed; Sohel, Ferdous; Togneri, Roberto; Naseem, Imran: Integrating geometrical context for semantic labeling of indoor scenes using RGBD images (2016)
- Kraft, Marek; Nowicki, Michał; Penne, Rudi; Schmidt, Adam; Skrzypczyński, Piotr: Efficient RGB-D data processing for feature-based self-localization of mobile robots (2016)
- Wang, Jun; Xie, Qian; Xu, Yabin; Zhou, Laishui; Ye, Nan: Cluttered indoor scene modeling via functional part-guided graph matching (2016)
- Wilkowski, Artur; Kornuta, Tomasz; Stefańczyk, Maciej; Kasprzak, Włodzimierz: Efficient generation of 3D surfel maps using RGB-D sensors (2016)
- Zollhöfer, Michael; Dai, Angela; Innmann, Matthias; Wu, Chenglei; Stamminger, Marc; Theobalt, Christian; Nießner, Matthias: Shading-based refinement on volumetric signed distance functions (2015)
- Huang, Xiangsheng; Chen, Xinghao; Tang, Tao; Huang, Ziling: Marching cubes algorithm for fast 3D modeling of human face by incremental data fusion (2013)
- Bošnak, Matevž; Matko, Drago; Blažič, Sašo: Quadrocopter hovering using position-estimation information from inertial sensors and a high-delay video system (2012)