Improved Point Transformation Methods For Self-Supervised Depth Prediction


This is the authors' accepted manuscript for presentation at the 2021 18th Conference on Robots and Vision (CRV).

Given stereo or egomotion image pairs, a popular and successful method for unsupervised learning of monocular depth estimation is to measure the quality of image reconstructions resulting from the learned depth predictions. Continued research has improved the overall approach in recent years, yet the common framework still suffers from several important limitations, particularly when dealing with points occluded after transformation to a novel viewpoint. While prior work has addressed the problem heuristically, this paper introduces a z-buffering algorithm that correctly and efficiently handles occluded points. Because our algorithm is implemented with operators typical of machine learning libraries, it can be incorporated into any existing unsupervised depth learning framework with automatic support for differentiation. Additionally, because points having negative depth after transformation often signify erroneously shallow depth predictions, we introduce a loss function to explicitly penalize this undesirable behavior. Experimental results on the KITTI dataset show that the z-buffer and negative depth loss both improve the performance of a state-of-the-art depth prediction network. The code is available at and archived at
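To illustrate the two ideas the abstract describes, the sketch below shows a scatter-min z-buffer (keeping only the nearest point when several project to the same pixel) and a hinge-style penalty on negative post-transformation depths. This is a minimal NumPy sketch under assumed conventions, not the paper's implementation: the function names are hypothetical, and the paper's version is built from differentiable machine-learning-library operators rather than NumPy's in-place scatter.

```python
import numpy as np

def zbuffer_resolve(us, vs, zs, h, w):
    """Scatter-min z-buffer (hypothetical sketch).

    For points projected to the same pixel (us[i], vs[i]) with depth zs[i],
    keep only the smallest depth, i.e. the visible surface; occluded points
    are discarded. Pixels hit by no point remain at +inf.
    """
    depth = np.full((h, w), np.inf)
    flat = depth.ravel()                 # view into depth; scatter writes through
    idx = vs * w + us                    # flatten 2-D pixel coordinates
    np.minimum.at(flat, idx, zs)         # unbuffered scatter-min per pixel
    return depth

def negative_depth_loss(zs):
    """Hinge penalty on negative depths after the viewpoint transformation
    (hypothetical form of such a loss): zero for z >= 0, linear in -z otherwise."""
    return np.mean(np.maximum(-zs, 0.0))

# Two points land on pixel (row=2, col=1); the nearer one (z=3) wins.
depth = zbuffer_resolve(np.array([1, 1]), np.array([2, 2]),
                        np.array([5.0, 3.0]), 4, 4)
print(depth[2, 1])                       # 3.0
print(negative_depth_loss(np.array([1.0, -2.0])))  # mean(0, 2) = 1.0
```

In a differentiable setting, the same scatter-min can be expressed with operators such as a segment- or scatter-reduce, which autodiff frameworks support, so the occlusion handling slots into an existing reconstruction loss without custom gradients.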