Training Progressively Binarizing Deep Networks Using FPGAs
Lammie, Corey, Xiang, Wei, and Rahimi Azghadi, Mostafa (2020) Training Progressively Binarizing Deep Networks Using FPGAs. In: Proceedings of the 2020 IEEE International Symposium on Circuits and Systems. From: ISCAS: 2020 IEEE International Symposium on Circuits and Systems, 10-21 October 2020, Seville, Spain.
PDF (Accepted Author Version): Restricted to Repository staff only
Abstract
While hardware implementations of inference routines for Binarized Neural Networks (BNNs) are plentiful, current realizations of efficient BNN hardware training accelerators, suitable for Internet of Things (IoT) edge devices, leave much to be desired. Conventional BNN hardware training accelerators perform forward and backward propagations with parameters adopting binary representations, and optimization using parameters adopting floating- or fixed-point real-valued representations, requiring two distinct sets of network parameters. In this paper, we propose a hardware-friendly training method that, contrary to conventional methods, progressively binarizes a singular set of fixed-point network parameters, yielding notable reductions in power and resource utilization. We use the Intel FPGA SDK for OpenCL development environment to train our progressively binarizing DNNs on an OpenVINO FPGA. We benchmark our training approach on both GPUs and FPGAs using CIFAR-10 and compare it to conventional BNNs.
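The core idea of maintaining a single set of parameters that is gradually pushed toward binary values can be illustrated with a minimal sketch. The snippet below is not the paper's exact formulation: it assumes a common soft-binarization scheme in which a `tanh` with a sharpness (temperature) parameter `tau` is applied to the real-valued weights, and `tau` is increased over training so the weights converge to {-1, +1}.

```python
import numpy as np

def progressively_binarize(w, tau):
    """Softly binarize real-valued weights w.

    tau is a sharpness parameter that is ramped up during training:
    as tau grows, tanh(tau * w) approaches sign(w), so the single set
    of parameters converges to binary values without keeping a second,
    separate binary copy. (Illustrative scheme, not the paper's exact one.)
    """
    return np.tanh(tau * w)

# Hypothetical schedule: nearly real-valued early on, nearly binary late.
w = np.array([-0.8, -0.1, 0.05, 0.6])
for tau in (1.0, 5.0, 50.0):
    print(tau, np.round(progressively_binarize(w, tau), 3))
```

At `tau = 50.0` every output already lies close to -1 or +1 while preserving each weight's sign, which is the property a progressive scheme relies on to avoid storing two distinct parameter sets.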
Item ID: 65714
Item Type: Conference Item (Research - E1)
ISBN: 978-1-7281-3320-1
Copyright Information: © IEEE
Additional Information: This was an online event.
Date Deposited: 04 Feb 2021 00:56
FoR Codes: 40 ENGINEERING > 4009 Electronics, sensors and digital hardware > 400902 Digital electronic devices @ 100%
SEO Codes: 97 EXPANDING KNOWLEDGE > 970110 Expanding Knowledge in Technology @ 100%