ShapeNet

ShapeNet: An Information-Rich 3D Model Repository. We present ShapeNet: a richly annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model, such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, and keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, of which 220,000 are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans.
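To make the synset-based organization concrete, here is a minimal Python sketch that indexes a ShapeNet-style directory layout in which models are grouped into one folder per WordNet synset ID, with one subfolder per model. The exact on-disk layout and names are assumptions for illustration, not a specification from this report:

```python
import os
from collections import defaultdict

def index_shapenet(root):
    """Index a ShapeNet-style tree: root/<synset_id>/<model_id>/...

    Returns a dict mapping each WordNet synset ID (taken from the
    top-level folder name) to a sorted list of model IDs under it.
    """
    index = defaultdict(list)
    for synset_id in sorted(os.listdir(root)):
        synset_dir = os.path.join(root, synset_id)
        if not os.path.isdir(synset_dir):
            continue  # skip stray files at the top level
        for model_id in sorted(os.listdir(synset_dir)):
            if os.path.isdir(os.path.join(synset_dir, model_id)):
                index[synset_id].append(model_id)
    return dict(index)
```

Grouping by synset ID like this is what lets category-level queries (e.g. "all chairs") reduce to a single dictionary lookup, which is the practical benefit of anchoring the repository to the WordNet taxonomy.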


References in zbMATH (referenced in 17 articles)

Showing results 1 to 17 of 17.
Sorted by year (citations)

  1. Hu, Lan; Kneip, Laurent: Globally optimal point set registration by joint symmetry plane fitting (2021)
  2. Ping, Yuhan; Wei, Guodong; Yang, Lei; Cui, Zhiming; Wang, Wenping: Self-attention implicit function networks for 3D dental data completion (2021)
  3. Gadelha, Matheus; Rai, Aartika; Maji, Subhransu; Wang, Rui: Inferring 3D shapes from image collections using adversarial networks (2020)
  4. Henderson, Paul; Ferrari, Vittorio: Learning single-image 3D reconstruction by generative modelling of shape, pose and shading (2020)
  5. Lang, Xufeng; Sun, Zhengxing: Structure-aware shape correspondence network for 3D shape synthesis (2020)
  6. Maggiordomo, Andrea; Ponchio, Federico; Cignoni, Paolo; Tarini, Marco: Real-World Textured Things: a repository of textured models generated with modern photo-reconstruction tools (2020)
  7. Nikhila Ravi, Jeremy Reizenstein, David Novotny, Taylor Gordon, Wan-Yen Lo, Justin Johnson, Georgia Gkioxari: Accelerating 3D Deep Learning with PyTorch3D (2020) arXiv
  8. Rajeswar, Sai; Mannan, Fahim; Golemo, Florian; Parent-Lévesque, Jérôme; Vazquez, David; Nowrouzezahrai, Derek; Courville, Aaron: Pix2Shape: towards unsupervised learning of 3D scenes from images using a view-based representation (2020)
  9. Stutz, David; Geiger, Andreas: Learning 3D shape completion under weak supervision (2020)
  10. Sun, Xiao; Lian, Zhouhui: EasyMesh: an efficient method to reconstruct 3D mesh from a single image (2020)
  11. Yang, Bo; Wang, Sen; Markham, Andrew; Trigoni, Niki: Robust attentional aggregation of deep feature sets for multi-view 3D reconstruction (2020)
  12. Hu, Siyu; Chen, Xuejin: Preventing self-intersection with cycle regularization in neural networks for mesh reconstruction from a single RGB image (2019)
  13. Jens Behley, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Cyrill Stachniss, Juergen Gall: SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences (2019) arXiv
  14. Zhu, Jie; Zhang, Yunfeng; Guo, Jie; Liu, Huikun; Liu, Mingming; Liu, Yang; Guo, Yanwen: Label transfer between images and 3D shapes via local correspondence encoding (2019)
  15. Santana, José M.; Trujillo, Agustín; Suárez, José P.: A physical model for screen space distribution of 3D marks on geographical information systems (2018)
  16. Christopher B. Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, Silvio Savarese: 3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction (2016) arXiv
  17. Weichao Qiu, Alan Yuille: UnrealCV: Connecting Computer Vision to Unreal Engine (2016) arXiv