Building Energy-Efficient Accelerators for Deep Learning
As deep learning becomes increasingly ubiquitous in our lives, we need better hardware infrastructure to support the large amount of computation it will foreseeably demand. In particular, the high energy and power consumption of current CPU and GPU systems prevents the deployment of deep learning at larger scale, and dedicated deep learning accelerators will be key to solving this problem. In this talk, I will give an overview of our work on Eyeriss, an energy-efficient accelerator for deep convolutional neural networks (CNNs), which are currently the cornerstone of many deep learning algorithms. Eyeriss is reconfigurable to support state-of-the-art deep CNNs. By focusing on minimizing data movement, both between the accelerator and main memory and within the accelerator's computation fabric, we achieve 10 times higher energy efficiency than modern mobile GPUs.
Yu-Hsin Chen is currently a PhD candidate at MIT working on architecture design for deep learning accelerators, co-advised by Prof. Vivienne Sze and Prof. Joel Emer. His research interests include energy-efficient VLSI system design, computer vision, and digital signal processing. He received the B.S. and M.S. degrees, both in electrical engineering and computer science, from National Taiwan University and MIT, respectively. He is also a recipient of the 2015 NVIDIA Graduate Fellowship and the 2015 ADI Outstanding Student Designer Award.