
Multi-Granularity Canonical Appearance Pooling for Remote Sensing Scene Classification

Lookup NU author(s): Dr Shidong Wang, Dr Yu Guan


Full text for this publication is not currently held within this repository. Alternative links are provided below where available.


Recognising remote sensing scene images remains challenging due to large visual-semantic discrepancies, which mainly arise from the lack of detailed annotations that could be used to align pixel-level representations with high-level semantic labels. As the tagging process is labour-intensive and subjective, we propose a novel Multi-Granularity Canonical Appearance Pooling (MG-CAP) to automatically capture the latent ontological structure of remote sensing datasets. We design a granular framework that progressively crops the input image to learn multi-grained features. For each granularity, we discover the canonical appearance from a set of pre-defined transformations and learn the corresponding CNN features through a maxout-based Siamese-style architecture. We then replace the standard CNN features with Gaussian covariance matrices and apply suitable matrix normalisations to improve their discriminative power. In addition, we provide a stable GPU implementation for training the eigenvalue decomposition (EIG) function and derive the corresponding back-propagation using matrix calculus. Extensive experiments show that our framework achieves promising results on public remote sensing scene datasets.
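The pooling pipeline the abstract describes — maxout over transformed feature branches, covariance pooling of local descriptors, and matrix normalisation via eigenvalue decomposition — can be sketched in plain NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the `eps` regulariser, and the power-normalisation exponent are assumptions, and the real method trains the EIG step end-to-end on GPU with a hand-derived backward pass.

```python
import numpy as np

def canonical_maxout(branches):
    """Element-wise maxout across feature maps computed from a set of
    pre-defined transformations (the Siamese branches). Each element of
    `branches` is an (N, D) array; the result keeps, per element, the
    strongest response over all transformed copies."""
    return np.maximum.reduce(branches)

def covariance_pooling(features, eps=1e-5):
    """Replace first-order CNN features with a covariance matrix.
    `features`: (N, D) array of N local descriptors of dimension D.
    Returns a (D, D) symmetric matrix; eps*I is added so the matrix
    is strictly positive-definite (a common stabilisation trick)."""
    centred = features - features.mean(axis=0, keepdims=True)
    cov = centred.T @ centred / (features.shape[0] - 1)
    return cov + eps * np.eye(cov.shape[0])

def matrix_power_normalise(cov, alpha=0.5):
    """Matrix power normalisation via symmetric EIG:
    C^alpha = U diag(lambda^alpha) U^T. alpha = 0.5 yields the matrix
    square root often used to sharpen the discriminative power of
    second-order features."""
    lam, U = np.linalg.eigh(cov)                 # symmetric eigendecomposition
    lam = np.clip(lam, 1e-12, None)              # guard against round-off
    return (U * lam**alpha) @ U.T
```

For example, pooling 64 hypothetical 8-dimensional descriptors gives an 8x8 normalised covariance descriptor; with `alpha=0.5`, squaring the result recovers the original covariance, which is a quick sanity check on the EIG step.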

Publication metadata

Author(s): Wang S, Guan Y, Shao L

Publication type: Article

Publication status: Published

Journal: IEEE Transactions on Image Processing

Year: 2020

Volume: 29

Pages: 5396-5407

Online publication date: 06/04/2020

Acceptance date: 18/03/2020

ISSN (print): 1057-7149

ISSN (electronic): 1941-0042

Publisher: IEEE


DOI: 10.1109/TIP.2020.2983560
