Distinguished Lecturer

In-Memory Computing with Associative Memories — A Cross-Layer Perspective

X. Sharon Hu

Date & Time

Thu, August 19, 2021

Abstract

Data transfer between processors and memory is a major bottleneck in improving application-level performance. This is particularly true for data-intensive tasks such as many machine learning and security applications. In-memory computing, where certain data processing is performed directly in the memory array, can be an effective solution to this bottleneck. Associative memory (AM), a type of memory that can efficiently “associate” an input query with the appropriate data words/locations in the memory, is a powerful in-memory computing core. Nonetheless, harnessing the benefits of AM requires cross-layer efforts spanning from devices and circuits to architectures and systems. In this talk, I will showcase several representative cross-layer, AM-based design efforts. In particular, I will highlight how different non-volatile memory technologies (such as RRAM, FeFET memory, and Flash) can be exploited to implement various types of AM (e.g., exact and approximate match, ternary and multi-bit data representation, and different distance functions). I will use several popular machine learning and security applications to demonstrate how they can profit from these different AM designs. End-to-end (from device to application) evaluations will be analyzed to reveal the benefits contributed by each design layer, which can serve as a guide for future research efforts.
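To make the lookup semantics concrete, the sketch below is a minimal software model (not from the talk, and not any specific hardware design) of an associative memory supporting ternary entries, exact match, and approximate match under a Hamming-like distance; all names are hypothetical.

```python
# Illustrative software model of an associative memory (AM).
# A ternary entry stores 0, 1, or a "don't care" bit, as in a TCAM cell.
WILDCARD = None

def mismatches(entry, query):
    """Count query bits that mismatch the stored ternary entry;
    wildcard bits never mismatch."""
    return sum(1 for e, q in zip(entry, query)
               if e is not WILDCARD and e != q)

def am_search(table, query, max_distance=0):
    """Return indices of entries within max_distance mismatches.
    max_distance=0 gives exact (TCAM-style) match; a larger value
    gives approximate match under a Hamming-like distance."""
    return [i for i, entry in enumerate(table)
            if mismatches(entry, query) <= max_distance]

# Tiny usage example: 4-bit entries, one with a wildcard bit.
table = [
    [1, 0, 1, 1],
    [1, WILDCARD, 1, 0],  # matches 1?10 for any middle bit
    [0, 0, 0, 0],
]
print(am_search(table, [1, 1, 1, 0]))                  # exact match
print(am_search(table, [1, 0, 1, 0], max_distance=1))  # approximate match
```

In a hardware AM, this search is performed in parallel across all rows of the memory array rather than sequentially as above; the distance function itself is one of the design choices the talk surveys.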


Presenter

X. Sharon Hu

University of Notre Dame
United States
4 (Central U.S.)