Abstract
With the increasingly wide application of neural networks, there has been significant focus on accelerating this class of computations. Larger, more complex networks are being proposed across a variety of domains, requiring more powerful computation platforms. The inherent parallelism and regularity of neural network structures mean that custom architectures can be adopted for this purpose. FPGAs have been widely used to implement such accelerators because of their flexibility, achievable performance, efficiency, and abundant peripherals. While platforms that utilize multicore CPUs and GPUs are also competitive, FPGAs offer superior energy efficiency and a wider space of optimisations for enhancing performance and efficiency. FPGAs are also more suitable for performing such computations at the edge, where multicore CPUs and GPUs are less likely to be used and energy efficiency is paramount.
| Original language | English (US) |
|---|---|
| Title of host publication | Proceedings - 29th International Conference on Field-Programmable Logic and Applications, FPL 2019 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 252-253 |
| Number of pages | 2 |
| ISBN (Print) | 9781728148847 |
| DOIs | |
| State | Published - Sep 1 2019 |
| Externally published | Yes |