Lookup NU author(s): Dr Varun Ojha (ORCID)
This is the final published version of a conference proceedings (inc. abstract) that has been published in its final definitive form by British Machine Vision Association (BMVA), 2024.
For re-use rights please refer to the publisher's terms and conditions.
© 2024. The copyright of this document resides with its authors.

Abstract: Trash screens are used to prevent floating debris from damaging critical assets (e.g. pipes, pumping stations) in rivers. However, debris accumulates at the trash screen and can contribute to flooding. Here we develop a novel application of deep learning that uses cameras to automatically monitor the presence and amount of trash on trash screens. We manually annotated debris in 575 trash screen images from 54 cameras and used this dataset to train and evaluate several semantic segmentation networks. The best-performing model, SegVit, based on a Vision Transformer architecture, reaches a segmentation accuracy above 95% MIoU. We show that this approach can accurately monitor the state of trash screens during flood events, detecting the build-up of trash to guide preventative maintenance. This research is an important step towards the automation of trash screen monitoring, an application of great importance for environmental monitoring and better flood management.
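The abstract reports segmentation accuracy as MIoU (mean Intersection-over-Union). As a minimal sketch of how that metric is computed, not the authors' actual evaluation code, the following compares a predicted class mask against a ground-truth mask and averages per-class IoU (the toy masks and class labels here are illustrative assumptions):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union over classes present in either mask."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both masks; skip it
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy 2x2 example with two classes: background (0) vs. debris (1)
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, num_classes=2))  # → 0.5833... (mean of 1/2 and 2/3)
```

In practice the per-class intersections and unions would be accumulated over the whole test set before dividing, rather than averaging per-image scores.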
Author(s): Vandaele R, Dance SL, Williams HTP, Ojha V
Publication type: Conference Proceedings (inc. Abstract)
Publication status: Published
Conference Name: 35th British Machine Vision Conference Workshop Proceedings (BMVC 2024)
Year of Conference: 2024
Online publication date: 28/11/2024
Acceptance date: 02/04/2018
Date deposited: 17/02/2026
Publisher: British Machine Vision Association (BMVA)
URL: https://bmva-archive.org.uk/bmvc/2024/workshops/MVEO/paper4.pdf