
Depth Embedded Recurrent Predictive Parsing Network for Video Scenes

Lookup NU author(s): Dr Yang Long

Downloads

Full text for this publication is not currently held within this repository. Alternative links are provided below where available.


Abstract

Semantic segmentation-based scene parsing plays an important role in automatic driving and autonomous navigation. However, most previous models consider only static images and fail to parse sequential images, because they do not take the spatial-temporal continuity between consecutive frames of a video into account. In this paper, we propose a depth embedded recurrent predictive parsing network (RPPNet), which analyzes preceding consecutive stereo pairs to produce parsing results. In this way, RPPNet effectively learns the dynamic information from historical stereo pairs, so as to correctly predict the representations of the next frame. The other contribution of this paper is to systematically study the video scene parsing (VSP) task, in which we use RPPNet to facilitate conventional image parsing features by adding spatial-temporal information. The experimental results show that the proposed RPPNet achieves fine predictive parsing results on Cityscapes, and that the predictive features of RPPNet can significantly improve conventional image parsing networks in the VSP task.
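To illustrate the general idea described in the abstract, the sketch below shows a minimal recurrent predictive parser in PyTorch: it encodes preceding RGB-D (stereo-derived depth) frames, aggregates them over time with a recurrent unit, and decodes per-pixel class scores for the next frame. All module choices, channel sizes, and the GRU-based temporal aggregation are illustrative assumptions, not the authors' published RPPNet architecture.

```python
# Hypothetical sketch of recurrent predictive parsing over preceding RGB-D frames.
# Shapes, layers, and the GRU temporal module are assumptions for illustration only.
import torch
import torch.nn as nn

class RecurrentPredictiveParser(nn.Module):
    def __init__(self, in_channels=4, feat_channels=64, num_classes=19):
        super().__init__()
        # Encode each frame (3 RGB channels + 1 depth/disparity channel).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1), nn.ReLU(),
        )
        # Per-pixel temporal aggregation over the preceding frames.
        self.rnn = nn.GRU(feat_channels, feat_channels, batch_first=True)
        # Decode the predicted next-frame representation into class scores.
        self.classifier = nn.Conv2d(feat_channels, num_classes, 1)

    def forward(self, frames):
        # frames: (batch, time, 4, H, W) -- preceding RGB-D frames.
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, c, h, w))        # (b*t, F, H, W)
        feats = feats.reshape(b, t, -1, h * w).permute(0, 3, 1, 2)  # (b, H*W, t, F)
        feats = feats.reshape(b * h * w, t, -1)
        out, _ = self.rnn(feats)                                    # aggregate dynamics over time
        pred = out[:, -1].reshape(b, h, w, -1).permute(0, 3, 1, 2)  # predicted next-frame features
        return self.classifier(pred)                                # (b, num_classes, H, W)

# Usage: predict next-frame parsing logits from 3 preceding RGB-D frames.
logits = RecurrentPredictiveParser()(torch.randn(2, 3, 4, 64, 128))
```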


Publication metadata

Author(s): Zhou L, Zhang H, Long Y, Shao L, Yang J

Publication type: Article

Publication status: Published

Journal: IEEE Transactions on Intelligent Transportation Systems

Year: 2019

Volume: 20

Issue: 12

Pages: 4643-4654

Print publication date: 01/12/2019

Online publication date: 15/04/2019

Acceptance date: 31/03/2019

ISSN (print): 1524-9050

ISSN (electronic): 1558-0016

Publisher: IEEE

URL: https://doi.org/10.1109/TITS.2019.2909053

DOI: 10.1109/TITS.2019.2909053

