
Open Access

To Complete or to Estimate, That is the Question: A Multi-Task Approach to Depth Completion and Monocular Depth Estimation

Lookup NU author(s): Dr Amir Atapour Abarghouei

Licence

This is the authors' accepted manuscript of a conference proceedings (inc. abstract) that has been published in its final definitive form by IEEE, 2019.

For re-use rights please refer to the publisher's terms and conditions.


Abstract

Robust three-dimensional scene understanding is an ever-growing area of research highly relevant to many real-world applications, such as autonomous driving and robotic navigation. In this paper, we propose a multi-task learning-based model capable of performing two tasks: sparse depth completion (i.e. generating a complete dense scene depth image given a sparse depth image as input) and monocular depth estimation (i.e. predicting scene depth from a single RGB image). The model comprises two sub-networks jointly trained end-to-end using data randomly sampled from a publicly available corpus of synthetic and real-world images. The first sub-network generates a sparse depth image by learning lower-level features from the scene, and the second predicts a full dense depth image of the entire scene, leading to better geometric and contextual understanding of the scene and, as a result, superior performance. The entire model can be used to infer complete scene depth from a single RGB image, or the second network can be used alone to perform depth completion given a sparse depth input. Using adversarial training, a robust objective function, a deep architecture relying on skip connections, and a blend of synthetic and real-world training data, our approach produces superior, high-quality scene depth. Extensive experimental evaluation demonstrates the efficacy of our approach compared to contemporary state-of-the-art techniques across both problem domains.
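The abstract describes a two-stage pipeline: a first sub-network maps an RGB image to a sparse depth image, and a second completes it into dense depth, with the second stage also usable on its own. The sketch below illustrates only that composition logic with toy NumPy stand-ins; the function names and the intensity/mean-fill heuristics are hypothetical placeholders, not the paper's learned CNNs with skip connections and adversarial training.

```python
import numpy as np

def estimate_sparse_depth(rgb):
    # Stand-in for sub-network 1 (RGB -> sparse depth): here, a toy
    # intensity-based proxy keeping only ~5% of pixels as "valid" samples.
    gray = rgb.mean(axis=2)
    mask = np.random.default_rng(0).random(gray.shape) < 0.05
    return np.where(mask, gray, 0.0)  # 0.0 marks missing depth

def complete_depth(sparse):
    # Stand-in for sub-network 2 (sparse -> dense depth): here, a crude
    # hole fill using the mean of valid depths instead of learned completion.
    dense = sparse.copy()
    valid = sparse > 0
    if valid.any():
        dense[~valid] = sparse[valid].mean()
    return dense

def full_pipeline(rgb):
    # Monocular depth estimation: both sub-networks chained end-to-end.
    return complete_depth(estimate_sparse_depth(rgb))

rgb = np.random.default_rng(1).random((8, 8, 3))
dense = full_pipeline(rgb)              # full model: RGB -> dense depth
sparse_input = estimate_sparse_depth(rgb)
dense_alone = complete_depth(sparse_input)  # second stage used alone
```

The point of the sketch is structural: the completion stage accepts any sparse depth input, so it serves both as the back half of the monocular pipeline and as a standalone depth-completion module.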


Publication metadata

Author(s): Atapour-Abarghouei A, Breckon TP

Publication type: Conference Proceedings (inc. Abstract)

Publication status: Published

Conference Name: International Conference on 3D Vision (3DV 2019)

Year of Conference: 2019

Pages: 183-193

Online publication date: 31/10/2019

Acceptance date: 30/07/2019

Date deposited: 06/02/2021

ISSN: 2475-7888

Publisher: IEEE

URL: https://doi.org/10.1109/3DV.2019.00029

DOI: 10.1109/3DV.2019.00029


ISBN: 9781728131313
