Open Access

REDRESS: Generating Compressed Models for Edge Inference Using Tsetlin Machines

Lookup NU author(s): Dr Sidharth Maheshwari, Tousif Rahman, Dr Rishad Shafik, Professor Alex Yakovlev, Dr Ashur Rafiev


Licence

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).


Abstract

Inference at the edge using embedded machine learning models involves challenging trade-offs between resource metrics, such as energy and memory footprint, and performance metrics, such as computation time and accuracy. In this work, we go beyond conventional neural-network-based approaches to explore the Tsetlin Machine (TM), an emerging machine learning algorithm that uses learning automata to create propositional logic for classification. We use algorithm-hardware co-design to propose a novel methodology for training and inference of TMs. The methodology, called REDRESS, comprises independent TM training and inference techniques that reduce the memory footprint of the resulting automata to target low- and ultra-low-power applications. The array of Tsetlin Automata (TA) holds learned information in binary form as bits $\{0,1\}$, called excludes and includes, respectively. REDRESS proposes a lossless TA compression method, called include-encoding, that stores only the information associated with includes, achieving over 99% compression. This is enabled by a novel, computationally minimal training procedure, called Tsetlin Automata Re-profiling, which improves accuracy and increases the sparsity of the TA to reduce the number of includes and, hence, the memory footprint. Finally, REDRESS includes an inherently bit-parallel inference algorithm that operates on the optimally trained TA in the compressed domain, requiring no decompression at runtime, to obtain high speedups compared with state-of-the-art Binary Neural Network (BNN) models. We demonstrate that, using the REDRESS approach, TMs outperform BNN models on all design metrics for five benchmark datasets, viz. MNIST, CIFAR2, KWS6, Fashion-MNIST, and Kuzushiji-MNIST. When implemented on an STM32F746G-DISCO microcontroller, REDRESS obtained speedups and energy savings ranging from 5× to 5700× compared with different BNN models.
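The include-encoding and compressed-domain, bit-parallel inference described in the abstract can be sketched in a few lines. The Python sketch below is an illustration under stated assumptions, not the paper's implementation: the function names (compress_includes, clause_fires, clause_fires_bitparallel) are hypothetical, and it models the standard TM conjunctive clause, which outputs 1 only if every included literal is 1.

# Hypothetical sketch of include-encoding: instead of one include/exclude
# bit per literal, store only the indices of included literals (the 1s).
# When includes are sparse, this is far smaller than the dense bit array.

def compress_includes(ta_bits):
    """Dense 0/1 exclude/include vector -> sparse list of include indices."""
    return [i for i, b in enumerate(ta_bits) if b == 1]

def clause_fires(include_idx, literals):
    """A conjunctive clause outputs 1 iff every included literal is 1.
    Evaluates directly on the compressed form: no decompression needed."""
    return all(literals[i] for i in include_idx)

def clause_fires_bitparallel(include_mask, x_bits):
    """Bit-parallel variant: literals packed into an integer word.
    The clause fires iff all included positions are set in x_bits."""
    return (x_bits & include_mask) == include_mask

# Example: 12 literals (features and their negations), only 2 included.
dense = [0] * 12
dense[3] = dense[7] = 1
sparse = compress_includes(dense)            # [3, 7]

x = [1] * 12
print(clause_fires(sparse, x))               # True
x[7] = 0
print(clause_fires(sparse, x))               # False

mask = sum(1 << i for i in sparse)           # bits 3 and 7 set
print(clause_fires_bitparallel(mask, 0b111111111111))  # True
print(clause_fires_bitparallel(mask, 0b111101111111))  # False (bit 7 clear)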


Publication metadata

Author(s): Maheshwari S, Rahman T, Shafik R, Yakovlev A, Rafiev A, Jiao L, Granmo O

Publication type: Article

Publication status: Published

Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence

Year: 2023

Pages: epub ahead of print

Online publication date: 19/04/2023

Acceptance date: 02/04/2023

Date deposited: 09/05/2023

ISSN (print): 0162-8828

ISSN (electronic): 1939-3539

Publisher: IEEE Computer Society

URL: https://doi.org/10.1109/TPAMI.2023.3268415

DOI: 10.1109/TPAMI.2023.3268415




Funding

Funder reference / Funder name
5thICON-12
AIEverywhere project
NACCF 220
