LEADER 04440nam a22003613i 4500
006 m d
007 cr un
008 180828t20182018cau om 000 0 eng d
a| CSt b| eng e| rda c| CSt
a| Qi, Ruizhongtai, e| author. ?| UNAUTHORIZED
a| Deep learning on point clouds for 3D scene understanding / c| Ruizhongtai Qi.
a| [Stanford, California] : b| [Stanford University], c| 2018.
a| 1 online resource.
a| text 2| rdacontent
a| computer 2| rdamedia
a| online resource 2| rdacarrier
a| Submitted to the Department of Electrical Engineering.
g| Thesis b| Ph.D. c| Stanford University d| 2018.
a| Point clouds are a commonly used geometric data type with many applications in computer vision, computer graphics, and robotics. The availability of inexpensive 3D sensors has made point cloud data widely available, and the current interest in self-driving vehicles has highlighted the importance of reliable and efficient point cloud processing. Due to their irregular format, however, point clouds cannot be directly consumed by current convolutional deep learning methods. Most researchers transform such data to regular 3D voxel grids or collections of images, which renders the data unnecessarily voluminous and causes quantization and other issues. In this thesis, we present novel types of neural networks (PointNet and PointNet++) that directly consume point clouds, in ways that respect the permutation invariance of points in the input. Our network provides a unified architecture for applications ranging from object classification and part segmentation to semantic scene parsing, while being efficient and robust against various input perturbations and data corruption. We provide a theoretical analysis of our approach, showing that our network can approximate any continuous set function, and explain its robustness. In PointNet++, we further exploit local contexts in point clouds, investigate the challenge of non-uniform sampling density in common 3D scans, and design new layers that learn to adapt to varying sampling densities. The proposed architectures have opened doors to new 3D-centric approaches to scene understanding. We show how we can adapt and apply PointNets to two important perception problems in robotics: 3D object detection and 3D scene flow estimation. In 3D object detection, we propose a new frustum-based detection framework that achieves 3D instance segmentation and 3D amodal box estimation in point clouds.
Our model, called Frustum PointNets, benefits from the accurate geometry provided by 3D points and is able to canonicalize the learning problem by applying both non-parametric and data-driven geometric transformations to the inputs. Evaluated on large-scale indoor and outdoor datasets, our real-time detector significantly advances the state of the art. In scene flow estimation, we propose a new deep network called FlowNet3D that learns to recover 3D motion flow from two frames of point clouds. Compared with previous work that focuses on 2D representations and optimizes for optical flow, our model directly optimizes 3D scene flow and shows great advantages in evaluations on real LiDAR scans. As point clouds are prevalent, our architectures are not restricted to the above two applications, or even to 3D scene understanding. This thesis concludes with a discussion of other potential application domains and directions for future research.
a| Guibas, Leonidas J., e| degree supervisor. 4| ths 0| http://id.loc.gov/authorities/names/n86109709 =| ^A1673175
a| Girod, Bernd, e| degree committee member. 4| ths 0| http://id.loc.gov/authorities/names/n90727529 =| ^A765295
a| Savarese, Silvio, e| degree committee member. 4| ths 0| http://id.loc.gov/authorities/names/no2011143935 =| ^A2703901
a| Stanford University. b| Department of Electrical Engineering. 0| http://id.loc.gov/authorities/names/nr2002030762 =| ^A1600680
a| 21 22
u| http://purl.stanford.edu/xm943cz7043 x| SDR-PURL x| item
a| DATE CATALOGED b| 20180829
a| 3781 2018 Q w| ALPHANUM c| 1 i| 36105227908170 d| 8/12/2019 l| UARCH-30 m| SPEC-COLL r| Y s| Y t| NONCIRC u| 8/29/2018
a| INTERNET RESOURCE w| ASIS c| 1 i| 12741586-2001 l| INTERNET m| SUL r| Y s| Y t| SUL u| 8/29/2018 x| E-THESIS