Presentation Type
Lecture

A Cross-Layer Perspective for Energy Efficient Processing - From Beyond-CMOS Devices to Deep Learning

Abstract

As device scaling driven by Moore’s Law and the accompanying performance gains slow down, there is increasing interest in new technologies and computational models for faster and more energy-efficient information processing. Meanwhile, there is growing evidence that it will be challenging for beyond-CMOS devices to compete with CMOS technology in traditional Boolean circuits and von Neumann processors. Nevertheless, some beyond-CMOS devices exhibit unique characteristics such as ambipolarity, negative differential resistance, hysteresis, and oscillatory behavior. Exploiting these characteristics, especially in the context of alternative circuit and architectural paradigms, has the potential to offer orders-of-magnitude improvements in power, performance, and capability.

To take full advantage of beyond-CMOS devices, however, it is no longer sufficient to develop algorithms, architectures, and circuits independently of one another. Cross-layer efforts spanning devices, circuits, architectures, and algorithms are indispensable. This talk will examine energy-efficient neural network accelerators for embedded applications in this context. Several deep neural network accelerator designs based on cross-layer efforts spanning alternative device technologies, circuit styles, and architectures will be highlighted, and a comprehensive application-level benchmarking study on the MNIST dataset will be presented. The discussion will demonstrate that cross-layer efforts can indeed deliver orders-of-magnitude gains toward extreme-scale energy-efficient processing.