
On the application of quantization for mobile optimized convolutional neural networks as a predictor of realtime ageing biomarkers

Lookup NU author(s): Scott Stainton, Professor Mike Catt, Emeritus Professor Satnam Dlay


Full text for this publication is not currently held within this repository.


© 2018 IEEE. In this paper we propose a mobile-optimized deep learning network based on the VGG16 architecture. In contrast to the classical approach, after training the model is converted to a quantized equivalent in which 32-bit floating-point operations are exchanged for 8-bit ones. This reduces the strain on mobile memory and local caches while simultaneously reducing the computational complexity and energy requirement of the entire deep learning model. Aggregated testing has been performed to validate the complexity hypothesis, and the quantized model has been compared to the original model in terms of accuracy. The results show that, for a modest decrease in accuracy, the quantized model takes up 75% less disk space and, through the 8-bit operations, reduces computational complexity, with loading and inference 3-4 times faster than the original model.
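The paper itself is not held here, but the core idea in the abstract (replacing 32-bit floating-point weights with 8-bit integers after training) can be illustrated with a minimal sketch of affine post-training quantization. This is a generic, illustrative implementation using NumPy, not the authors' actual pipeline; the function names and the per-tensor min/max scheme are assumptions for demonstration.

```python
import numpy as np

def quantize_affine(x, num_bits=8):
    """Affine (asymmetric) quantization of a float32 tensor to uint8.

    Maps the observed [min, max] range of x linearly onto [0, 255],
    returning the quantized tensor plus the scale and zero-point
    needed to map values back to floating point.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / (qmax - qmin) or 1.0  # guard against zero range
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize_affine(q, scale, zero_point):
    """Recover approximate float32 values from the 8-bit representation."""
    return scale * (q.astype(np.float32) - zero_point)

# Example: a random float32 weight matrix standing in for a VGG16 layer.
weights = np.random.randn(64, 64).astype(np.float32)
q, s, zp = quantize_affine(weights)
recon = dequantize_affine(q, s, zp)

# 8-bit storage is one quarter the size of float32, matching the
# roughly 75% disk-space reduction reported in the abstract.
max_err = np.abs(weights - recon).max()
```

Each float is stored in one byte instead of four, and the worst-case reconstruction error is on the order of one quantization step (the scale), which is the source of the "modest decrease in accuracy" the abstract describes.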

Publication metadata

Author(s): Stainton S, Barney S, Catt M, Dlay S

Publication type: Conference Proceedings (inc. Abstract)

Publication status: Published

Conference Name: 2018 11th International Symposium on Communication Systems, Networks and Digital Signal Processing, CSNDSP 2018

Year of Conference: 2018

Online publication date: 27/09/2018

Acceptance date: 18/07/2018

Publisher: IEEE


DOI: 10.1109/CSNDSP.2018.8471792


ISBN: 9781538613351