Abstract:
In this paper, an FPGA-based floating-point multiply-accumulate operator is designed for neural network computation. A custom 32-bit floating-point data format is used, reducing the amount of computation by changing the overall structure of the data and thereby optimizing the operator's performance. Finally, FPGA simulation results are given to verify the correctness of the design. Compared with common arithmetic on IEEE-standard 32-bit floating-point data, the design saves resources. © 2018 IEEE.
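As a hedged illustration of what such a multiply-accumulate (MAC) operator computes, the C sketch below unpacks a 32-bit word into sign, exponent, and mantissa fields and performs acc += a * b in software. The 1/8/23 field widths follow the IEEE-754 layout purely for illustration; the paper's custom format, its field widths, and its hardware pipeline are not specified in this abstract, and the names fp32_unpack and fp32_mac are hypothetical.

#include <stdint.h>
#include <stdio.h>

/* Reinterpret a float's bits as a 32-bit word (illustrative only;
   the paper's custom format is not described in this abstract). */
typedef union {
    float    f;
    uint32_t u;
} fp32_t;

/* Split a 32-bit word into sign, biased exponent, and mantissa,
   assuming an IEEE-754-like 1/8/23 layout. */
static void fp32_unpack(uint32_t x, uint32_t *sign, uint32_t *exp,
                        uint32_t *man)
{
    *sign = (x >> 31) & 0x1u;
    *exp  = (x >> 23) & 0xFFu;
    *man  =  x        & 0x7FFFFFu;
}

/* Multiply-accumulate acc += a * b, done in software here to
   mirror the operation the hardware MAC pipeline performs. */
static float fp32_mac(float acc, float a, float b)
{
    return acc + a * b;
}

int main(void)
{
    fp32_t x = { .f = 1.5f };
    uint32_t s, e, m;
    fp32_unpack(x.u, &s, &e, &m);
    printf("sign=%u exp=%u man=0x%06X\n", s, e, m);  /* 0 127 0x400000 */
    printf("mac: %f\n", fp32_mac(0.0f, 1.5f, 2.0f)); /* 3.000000 */
    return 0;
}

In hardware, the same decomposition lets the multiplier handle exponents (addition) and mantissas (integer multiplication) in separate, narrower datapaths, which is where a non-standard field split can trade precision for FPGA resources.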
Year: 2018
Pages: 282-285
Language: English
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0