Lookup NU author(s): Dr Jiabin Wang, Dr Bingzhang Hu, Dr Yang Long, Dr Yu Guan
Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
© 2019. The copyright of this document resides with its authors.

Abstract: Predicting future frames in natural video sequences is a new challenge that is receiving increasing attention in the computer vision community. However, existing models suffer from severe loss of temporal information when the predicted sequence is long. Whereas previous methods focus on generating more realistic content, this paper extensively studies the importance of sequential order information for video generation. A novel Shuffling sEquence gEneration network (SEE-Net) is proposed that learns to discriminate between natural and unnatural sequential orders by shuffling the video frames and comparing them to the real video sequences. Systematic experiments on three datasets with both synthetic and real-world videos demonstrate the effectiveness of shuffling sequence generation for video prediction in the proposed model and show state-of-the-art performance in both qualitative and quantitative evaluations. The source code is available at https://github.com/andrewjywang/SEENet.
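The abstract's core idea, shuffling the frames of a clip and training a discriminator to tell natural from unnatural temporal orders, can be illustrated with a minimal sketch. This is not the authors' implementation (see the GitHub link above for SEE-Net itself); the network shapes, layer sizes, and helper names below are illustrative assumptions only.

```python
# Minimal sketch of an order-discrimination objective, assuming a small
# per-frame CNN encoder and a GRU over time. Not the SEE-Net architecture.
import torch
import torch.nn as nn

class OrderDiscriminator(nn.Module):
    """Scores whether a clip's frames are in their natural temporal order."""
    def __init__(self, channels=3, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(            # per-frame feature extractor
            nn.Conv2d(channels, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)  # temporal reasoning
        self.head = nn.Linear(hidden, 1)         # natural (1) vs. shuffled (0)

    def forward(self, clip):                     # clip: (B, T, C, H, W)
        b, t = clip.shape[:2]
        feats = self.encoder(clip.flatten(0, 1)).view(b, t, -1)
        _, h = self.rnn(feats)
        return self.head(h[-1])                  # logits, shape (B, 1)

def shuffle_frames(clip):
    """Return the clip with its frames in a random (unnatural) order."""
    perm = torch.randperm(clip.size(1))
    return clip[:, perm]

# Toy usage: one step of the natural-vs-shuffled discrimination loss.
disc = OrderDiscriminator()
bce = nn.BCEWithLogitsLoss()
real = torch.randn(2, 8, 3, 64, 64)              # batch of two 8-frame clips
fake = shuffle_frames(real)
loss = bce(disc(real), torch.ones(2, 1)) + bce(disc(fake), torch.zeros(2, 1))
loss.backward()
```

In SEE-Net this kind of order signal is used to preserve temporal information during long-range frame prediction; the sketch only shows the discrimination objective in isolation.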
Author(s): Wang J, Hu B, Long Y, Guan Y
Publication type: Conference Proceedings (inc. Abstract)
Publication status: Published
Conference Name: 30th British Machine Vision Conference 2019, BMVC 2019
Year of Conference: 2019
Pages: 1-13
Online publication date: 09/09/2019
Acceptance date: 02/04/2019
Publisher: BMVA Press
URL: https://bmvc2019.org/