Shape As Points
A Differentiable Poisson Solver

NeurIPS 2021 (Oral)


Songyou Peng1,2     Chiyu "Max" Jiang     Yiyi Liao2,3     Michael Niemeyer2,3    
Marc Pollefeys1,4         Andreas Geiger2,3
1ETH Zurich   2Max Planck Institute for Intelligent Systems   3University of Tübingen   4Microsoft



Shape-As-Points (SAP) efficiently and differentiably bridges oriented point clouds and meshes.

Abstract

TL;DR: SAP is a differentiable version of classic Poisson surface reconstruction, and a hybrid shape representation that unifies implicit and explicit representations.

In recent years, neural implicit representations have gained popularity in 3D reconstruction due to their expressiveness and flexibility. However, the implicit nature of neural implicit representations results in slow inference time and requires careful initialization. In this paper, we revisit the classic yet ubiquitous point cloud representation and introduce a differentiable point-to-mesh layer using a differentiable formulation of Poisson Surface Reconstruction (PSR) that allows for a GPU-accelerated fast solution of the indicator function given an oriented point cloud. The differentiable PSR layer allows us to efficiently and differentiably bridge the explicit 3D point representation with the 3D mesh via the implicit indicator field, enabling end-to-end optimization of surface reconstruction metrics such as Chamfer distance. This duality between points and meshes hence allows us to represent shapes as oriented point clouds, which are explicit, lightweight and expressive. Compared to neural implicit representations, our Shape-As-Points (SAP) model is more interpretable, lightweight, and accelerates inference time by one order of magnitude. Compared to other explicit representations such as points, patches, and meshes, SAP produces topology-agnostic, watertight manifold surfaces. We demonstrate the effectiveness of SAP on the task of surface reconstruction from unoriented point clouds and learning-based reconstruction.

Video


Different Shape Representations



Traditional explicit shape representations (e.g. voxels, point clouds or meshes) are usually very efficient during inference, but all suffer from discretization to some extent.

Neural implicit representations produce smooth and high-quality shapes, but their inference time is typically very slow due to numerous network evaluations in 3D space.

Shape-As-Points (SAP) unifies implicit and explicit shape representations. SAP is interpretable, lightweight, topology agnostic, yields high-quality watertight meshes at low inference times and can be initialized from noisy or incomplete observations.


Intuition of Poisson Equation


Solving the Poisson equation is the cornerstone of our SAP representation. A shape (we use a circle as an example) can be represented as an implicit indicator function. The premise behind the Poisson equation is that:

The point normals are an approximation of the gradient of the indicator function.
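
In variational form, this is the standard Poisson surface reconstruction statement (written here in LaTeX notation as a brief sketch; v denotes the smoothed normal field built from the oriented points):

\min_{\chi} \; \big\| \nabla \chi - \mathbf{v} \big\|^2
\quad \Longrightarrow \quad
\Delta \chi = \nabla \cdot \mathbf{v}

Seeking the indicator function whose gradient best matches the normal field thus reduces to solving a Poisson equation.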

We use spectral methods to solve the Poisson equation. The spectral method is highly optimized on GPUs/TPUs, is extremely simple, and can be implemented in about 25 lines of code; a minimal sketch is given below.
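
To make this concrete, here is a minimal sketch of an FFT-based solver in PyTorch. It assumes the oriented points have already been rasterized into a vector field v on a periodic grid, and the Gaussian smoothing kernel (controlled by the sigma parameter) is an assumption made for illustration; this is a sketch of the idea, not the authors' exact implementation.

import torch

def spectral_poisson_solve(v, sigma=2.0):
    """Solve Laplacian(chi) = div(v) on a periodic grid via the FFT.

    v: (3, R, R, R) vector field rasterized from the oriented point cloud.
    Returns chi: (R, R, R) indicator grid; the surface is its zero level set.
    """
    res = v.shape[-1]
    freq = torch.fft.fftfreq(res, d=1.0 / res)            # integer grid frequencies
    kx, ky, kz = torch.meshgrid(freq, freq, freq, indexing='ij')
    k = torch.stack([kx, ky, kz])                         # (3, R, R, R)
    k_sq = (k ** 2).sum(dim=0)
    k_sq[0, 0, 0] = 1.0                                   # avoid dividing by zero at the DC term

    v_hat = torch.fft.fftn(v, dim=(1, 2, 3))              # FFT of each vector component
    div_hat = (2j * torch.pi * k * v_hat).sum(dim=0)      # divergence in the Fourier domain (zero at DC)
    smooth = torch.exp(-2.0 * (torch.pi * sigma / res) ** 2 * k_sq)  # Gaussian smoothing (assumed kernel)
    chi_hat = -smooth * div_hat / (4.0 * torch.pi ** 2 * k_sq)       # invert the Laplacian spectrally
    chi = torch.fft.ifftn(chi_hat).real
    return chi - chi.mean()                               # zero-center; threshold at 0 for the surface

Running Marching Cubes on the resulting indicator grid then yields a watertight mesh.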


Applications

Optimization-based 3D Reconstruction

With SAP, you can reconstruct 3D surfaces from nothing more than noisy, unoriented point clouds or scans; a sketch of the optimization loop follows the examples below.

Wheel (Genus > 0)

Dragon (Real Scan)
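
The optimization loop can be summarized by the following sketch. It assumes two hypothetical helpers, point_rasterize (scattering oriented points into a grid vector field) and sample_surface (differentiably sampling points from the zero level set of the indicator grid), and reuses spectral_poisson_solve from the sketch above; treat it as an illustration of the gradient flow rather than the authors' code.

import torch

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a: (N, 3) and b: (M, 3)."""
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def fit_sap(target_pts, n_points=5000, res=128, iters=1000, lr=2e-3):
    # The shape is parameterized directly as an oriented point cloud.
    pts = torch.rand(n_points, 3, requires_grad=True)
    normals = torch.randn(n_points, 3, requires_grad=True)
    opt = torch.optim.Adam([pts, normals], lr=lr)

    for _ in range(iters):
        v = point_rasterize(pts, normals, res)      # hypothetical: points -> (3, res, res, res) field
        chi = spectral_poisson_solve(v)             # differentiable Poisson solve (see above)
        surface_pts = sample_surface(chi)           # hypothetical: sample the reconstructed surface
        loss = chamfer(surface_pts, target_pts)     # compare against the noisy, unoriented scan
        opt.zero_grad()
        loss.backward()                             # gradients flow back to the points and normals
        opt.step()
    return pts.detach(), normals.detach()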

Learning-based 3D Reconstruction

You can also use SAP to learn the parameters of a deep neural network. The learned network is robust to large noise and outliers; a sketch of such a predictor follows the examples below.



Input Point Clouds with Large Noise

Input Point Clouds with 50% Outliers
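
As an illustration, a minimal predictor for this setting could look like the sketch below: a small PointNet-style network (an assumption made for brevity, not the paper's architecture) that maps each noisy input point to a displaced point plus a normal, which can then be pushed through the rasterize-and-Poisson-solve pipeline above and supervised end-to-end.

import torch
import torch.nn as nn

class NoisyPointsToSAP(nn.Module):
    """Maps a noisy, outlier-ridden cloud to clean oriented points for the Poisson layer."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 6))                   # 3 offset dims + 3 normal dims per point

    def forward(self, noisy_pts):                     # noisy_pts: (N, 3)
        f = self.point_mlp(noisy_pts)                 # per-point features (N, F)
        g = f.max(dim=0, keepdim=True).values         # global max-pooled feature (1, F)
        h = torch.cat([f, g.expand_as(f)], dim=-1)    # concatenate local and global context
        out = self.head(h)                            # (N, 6)
        offsets, normals = out[:, :3], out[:, 3:]
        pts = noisy_pts + 0.1 * torch.tanh(offsets)   # bounded per-point displacement
        return pts, normals                           # feed into the differentiable Poisson layer

The predicted oriented points are then supervised end-to-end through the differentiable Poisson layer, e.g. with a loss on the resulting indicator grid or a Chamfer-style surface loss.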

BibTeX

@inproceedings{Peng2021SAP,
      author    = {Peng, Songyou and Jiang, Chiyu "Max" and Liao, Yiyi and Niemeyer, Michael and Pollefeys, Marc and Geiger, Andreas},
      title     = {Shape As Points: A Differentiable Poisson Solver},
      booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
      year      = {2021}
}

Acknowledgements

Andreas Geiger was supported by the ERC Starting Grant LEGO-3D (850533) and the DFG EXC number 2064/1 - project number 390727645. The authors thank the Max Planck ETH Center for Learning Systems (CLS) for supporting Songyou Peng and the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Michael Niemeyer. This work was supported by an NVIDIA research gift. We thank Matthias Niessner, Thomas Funkhouser, Hugues Hoppe, and Yue Wang for helpful discussions in the early stages of this project. We also thank Xu Chen, Christian Reiser, and Rémi Pautrat for proofreading.