Fri, January 13, 2023
In this lecture, I will talk about two overarching research goals we have been pursuing for several years. The first goal is to explore the limits of energy per operation when running AI algorithms such as deep learning (DL). In-memory computing (IMC) is a non-von Neumann compute paradigm that keeps alive the promise of 1 fJ/operation for DL. Attributes such as synaptic efficacy and plasticity can be implemented in place by exploiting the physical properties of memory devices such as phase-change memory. I will provide an overview of the most advanced IMC chips based on phase-change memory integrated in a 14 nm CMOS technology node. The second goal is to develop algorithmic and architectural building blocks for a more efficient and general AI. I will introduce the paradigm of the neuro-vector-symbolic architecture (NVSA), which could address problems such as continual learning and visual abstract reasoning. I will also showcase the role of IMC in realizing some of the critical compute blocks for NVSA.
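The "in place" computation mentioned above can be illustrated with a minimal numerical sketch, under common assumptions about IMC crossbars (all names, sizes, and device values here are hypothetical, not taken from the chips described in the talk): weights are stored as device conductances, and a matrix-vector multiply emerges from Ohm's and Kirchhoff's laws when input voltages are applied to the array.

```python
import numpy as np

# Hypothetical sketch of analog in-memory matrix-vector multiplication.
# A weight matrix W is stored as conductances G; applying voltages x to
# the rows yields column currents i = x @ G, computed "in place".

rng = np.random.default_rng(0)
W = rng.uniform(-1.0, 1.0, size=(4, 3))   # synaptic weights (4 inputs, 3 outputs)
x = rng.uniform(0.0, 1.0, size=4)         # input activations, encoded as voltages

g_max = 1e-6  # assumed maximum device conductance in siemens (illustrative value)

# Differential encoding: each signed weight maps to a pair of non-negative
# conductances (G+, G-) on two crossbar columns.
G_pos = np.where(W > 0, W, 0.0) * g_max
G_neg = np.where(W < 0, -W, 0.0) * g_max

# Column currents from the two arrays; their difference encodes x @ W.
i_out = x @ G_pos - x @ G_neg

# The analog result matches the digital product up to the conductance scale.
print(np.allclose(i_out / g_max, x @ W))
```

In a physical phase-change memory array the conductances would additionally carry programming noise and drift, which is part of what makes the 14 nm chip demonstrations discussed in the lecture nontrivial.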