Data Orchestration in Deep Learning Accelerators


Author: Tushar Krishna
Pub Date: 2020
ISBN: 9781681738697
Pages: 164
Language: English
Format: PDF / Paperback, 164 pages
Size: 13 Mb
Publisher: Morgan & Claypool
The book description below was collected from Amazon and arranged by Finelybook.
Data Orchestration in Deep Learning Accelerators (Synthesis Lectures on Computer Architecture)
This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of parameters and involve billions of computations; this necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; therefore, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with data orchestration challenges for compressed and sparse DNNs and with future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
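
As a rough, illustrative sketch of the data-movement argument above (not taken from the book), the following Python snippet counts DRAM accesses for a single GEMM-style DNN layer under a naive dataflow with no on-chip reuse versus an output-stationary tiled dataflow; the layer dimensions and the tile size are assumptions chosen only to make the arithmetic concrete.

# Illustrative sketch (not from the book): a back-of-the-envelope model of why
# dataflow choice matters. We count DRAM accesses for a GEMM-like DNN layer
# (C = A x B, with A of shape MxK and B of shape KxN) under two policies:
#   1. no reuse: every operand is fetched from DRAM for every multiply-accumulate
#   2. output-stationary tiling: a tile of C is kept in an on-chip buffer, so the
#      inputs feeding that tile are streamed in once and reused across the tile
# All shapes and the tile size below are assumptions, not values from the book.

def dram_accesses_no_reuse(M, K, N):
    # Each of the M*N*K multiply-accumulates reads one element of A and one of B.
    return 2 * M * N * K

def dram_accesses_output_stationary(M, K, N, tile):
    tiles_m = -(-M // tile)          # ceil division: rows of output tiles
    tiles_n = -(-N // tile)          # ceil division: columns of output tiles
    a_reads = tiles_n * M * K        # A is re-fetched once per column of tiles
    b_reads = tiles_m * K * N        # B is re-fetched once per row of tiles
    c_writes = M * N                 # each output element is written back once
    return a_reads + b_reads + c_writes

if __name__ == "__main__":
    M, K, N = 1024, 1024, 1024       # assumed layer dimensions
    naive = dram_accesses_no_reuse(M, K, N)
    tiled = dram_accesses_output_stationary(M, K, N, tile=64)
    print(f"naive DRAM accesses : {naive:.3e}")
    print(f"tiled DRAM accesses : {tiled:.3e}")
    print(f"reduction factor    : {naive / tiled:.1f}x")

With these assumed numbers, keeping a 64x64 output tile on chip cuts DRAM traffic by roughly two orders of magnitude, which is the kind of trade-off the book's dataflow and buffer-hierarchy chapters explore systematically.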


Download

Data Orchestration in Deep Learning Accelerators 9781681738697.pdf
