Vivienne Sze

Building Energy-Efficient Accelerators for Deep Learning

As deep learning becomes more ubiquitous in our lives, we need better hardware infrastructure to support the large amount of computation it will demand. In particular, the high energy and power consumption of current CPU and GPU systems prevents the deployment of deep learning at larger scale, and dedicated deep learning accelerators will be key to solving this problem. In this talk, I will give an overview of our work on Eyeriss, an energy-efficient accelerator for deep convolutional neural networks (CNNs), which are currently the cornerstone of many deep learning algorithms. Eyeriss is reconfigurable to support state-of-the-art deep CNNs. By focusing on minimizing data movement, both between the accelerator and main memory and within the accelerator's computation fabric, we achieve 10 times higher energy efficiency than modern mobile GPUs.
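To make the data-movement argument concrete, here is a toy back-of-the-envelope sketch (not Eyeriss's actual dataflow or energy model). The energy-per-access figures are illustrative relative costs only, reflecting the widely cited observation that an off-chip DRAM access costs orders of magnitude more energy than an on-chip arithmetic operation; all numbers and the `conv_energy` helper are assumptions made for this example.

```python
# Illustrative relative energy costs (picojoules, normalized to one MAC).
# These are rough, assumed values, not measured Eyeriss numbers.
ENERGY_PJ = {
    "alu_op": 1.0,        # one multiply-accumulate (normalized)
    "local_buffer": 2.0,  # small on-chip scratchpad read
    "dram": 200.0,        # off-chip DRAM read
}

def conv_energy(num_macs, dram_accesses, buffer_accesses):
    """Total energy (pJ) for a layer, given operation and access counts."""
    return (num_macs * ENERGY_PJ["alu_op"]
            + dram_accesses * ENERGY_PJ["dram"]
            + buffer_accesses * ENERGY_PJ["local_buffer"])

# Suppose a layer performs 1M MACs over 100K unique data words.
# Without reuse, every operand read goes to DRAM (2 reads per MAC):
naive = conv_energy(1_000_000, dram_accesses=2_000_000, buffer_accesses=0)
# With reuse, each word is fetched from DRAM once and re-read locally:
reused = conv_energy(1_000_000, dram_accesses=100_000, buffer_accesses=1_900_000)

print(f"naive:   {naive / 1e6:.1f} uJ")    # 401.0 uJ
print(f"reused:  {reused / 1e6:.1f} uJ")   # 24.8 uJ
print(f"savings: {naive / reused:.1f}x")
```

Even with these crude numbers, keeping data in on-chip buffers cuts total energy by more than an order of magnitude, which is the intuition behind optimizing an accelerator's dataflow rather than its raw compute.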

Vivienne Sze is an Assistant Professor at MIT in the Electrical Engineering and Computer Science Department. Her research interests include energy-aware signal processing algorithms, and low-power circuit and system design for multimedia applications. In 2011, she was awarded the Jin-Au Kong Outstanding Doctoral Thesis Prize in electrical engineering at MIT for her thesis on “Parallel Algorithms and Architectures for Low Power Video Decoding”. She is a recipient of the 2016 3M Non-tenured Faculty Award, the 2014 DARPA Young Faculty Award, the 2007 DAC/ISSCC Student Design Contest Award and a co-recipient of the 2008 A-SSCC Outstanding Design Award.
