CEDA Virtual DL Program

Virtual Distinguished Lecturer Program

The Virtual Distinguished Lecturer Program (VDLP) allows us to continue to offer CEDA participants and the broader electronic design automation community the opportunity to hear from our respected Distinguished Lecturers.

Registration is free for all webinars. If you are unable to attend the live virtual events, the presentations will be available in our Presentation Library and on the CEDA YouTube channel after each event.

Virtual Webinar Schedule

Date/Time (ET) | Title | Distinguished Lecturer | Registration
31 October 2022, 10:00-11:00 AM | Electronic Design Automation for Emerging Technologies | Anupam Chattopadhyay | Zoom
Date TBA, 10:00-11:00 AM | Title TBA | Sheldon Tan | Zoom
Date TBA, 10:00-11:00 AM | Title TBA | Mohammad Al Faruque | Zoom


DL Program Manager

Hai (Helen) Li

Duke University
United States
Region 3 (Southeastern U.S.)

Virtual DL Talks


Anupam Chattopadhyay

Distinguished Lecturer 2022 - 2023

Electronic Design Automation for Emerging Technologies

31 October 2022 at 10:00 AM ET

The continued scaling of horizontal and vertical physical features of silicon-based complementary metal-oxide-semiconductor (CMOS) transistors, termed “More Moore,” has a limited runway and will eventually be replaced by “Beyond CMOS” technologies. There has been a tremendous effort to follow Moore’s law, but scaling is now approaching atomistic and quantum-mechanical limits. This has led to active research in non-CMOS technologies such as memristive devices, carbon nanotube field-effect transistors, and quantum computing. Several of these technologies have been realized in practical devices with promising gains in yield, integration density, runtime performance, and energy efficiency. Their eventual adoption relies largely on continued research into Electronic Design Automation (EDA) tools catering to these specific technologies. Indeed, some of these technologies present new challenges to the EDA research community, which are being addressed through a series of innovative tools and techniques. This tutorial covers two phases of the EDA flow, logic synthesis and technology mapping, for two types of emerging technologies: in-memory computing and quantum computing.



Vijaykrishnan Narayanan

Distinguished Lecturer 2018 - 2021

Distributed Visual Analytics

April 22, 2021 at 10:00 AM ET

Visual content continues to be on the rise. Increasingly, visual analytics is part of computational pipelines supporting latency-sensitive and interactive applications such as situational awareness, which combine content dynamically aggregated from sensors and end-user devices. The talk will highlight assistive visual systems for persons with visual impairments and a pollinator-tracking system as examples of such end uses. Using these application contexts, the design space of distributed visual sensors will be explored with a focus on communication costs. Next, the talk will turn to video query processing, which is evolving from applications that query pre-extracted metadata using traditional query-processing techniques to applications that directly analyze geometry, semantics, and content in the video bitstream. The talk will showcase ongoing efforts in system-level design. Finally, we will show new opportunities opened by the shift from 2D to 3D sensors. The talk leverages work by collaborators on the SRC JUMP Visual Analytics team and the NSF Expeditions in Computing Visual Cortex on Silicon program.


Yiran Chen

Distinguished Lecturer 2018 - 2021

An EDA Researcher’s Journey Into AI

May 20, 2021 at 11:00 AM ET

Artificial Intelligence (AI) ubiquitously impacts almost all research communities, including electronic design automation (EDA). Many scholars with mathematics and modeling backgrounds have shifted their focus to applying AI technologies to their research or to working directly on AI problems. As a researcher with Ph.D. training in EDA and circuit design, I started my AI-related research in the late 2000s with neuromorphic computing, which implements hardware to accelerate the computation of biologically plausible learning models. In this talk, I will review the development of my research from neuromorphic computing to a broader scope of AI, including machine learning accelerator design, neural network quantization and pruning, neural architecture search, federated learning, and neural network robustness, privacy, and security, and how I have benefited from my EDA background.


David Pan

Distinguished Lecturer 2019 - 2021

Closing the Virtuous Cycle of AI for IC and IC for AI

June 24, 2021 at 11:00 AM ET

The recent artificial intelligence (AI) boom has been driven primarily by the confluence of three forces: algorithms, data, and the computing power enabled by modern integrated circuits (ICs), including specialized AI accelerators. This talk will present a closed-loop perspective on synergistic AI and agile IC design with two main themes, AI for IC and IC for AI. As semiconductor technology enters the era of extreme scaling, IC design and manufacturing complexity has become extremely high. More intelligent and agile IC design technologies are needed than ever to optimize performance, power, manufacturability, and design cost, and to deliver scaling equivalent to Moore’s Law. This talk will present recent results that leverage modern AI and machine learning advances, with domain-specific customizations, for agile IC design and manufacturing closure. Meanwhile, customized ICs, including those built with beyond-CMOS technologies, can improve AI performance and energy efficiency by orders of magnitude. I will present recent results on hardware/software co-design for high-performance, energy-efficient optical neural networks. Closing the virtuous cycle between AI and IC holds great potential to significantly advance the state of the art of both.


Rajiv Joshi

Distinguished Lecturer 2019 - 2021

Predictive Analytics in Machine Learning for VLSI Circuits

July 15, 2021 at 11:00 AM ET

As semiconductor technology enters the sub-14nm era, geometry, process, voltage, and temperature (PVT) variability in devices can affect the performance, functionality, and power of circuits, especially in new Artificial Intelligence (AI) accelerators. This is where predictive failure analytics is extremely critical: it can identify failure issues in logic and memory circuits and drive those circuits into an energy-efficient operating region.

This talk describes how key statistical techniques and new algorithms can be effectively used to analyze and build robust circuits. These algorithms can be applied to decoders, latches, and volatile as well as non-volatile memories. In addition, it demonstrates how these methodologies can be extended to “reliability prediction” and “hardware corroboration”. Logistic regression-based machine learning techniques are employed to model the circuit response and speed up importance-sampling simulations. To avoid overfitting, a cross-validation-based regularization framework for ordered feature selection is demonstrated.

Also, techniques to generate accurate parasitic-capacitance models, along with PVT variations, for sub-22nm technologies, and their incorporation into a physics-based statistical analysis methodology for accurate Vmin analysis, are described. In addition, extensions of these techniques based on machine learning, e.g., k-nearest neighbors (KNN), are highlighted. Finally, the talk summarizes important open issues in this field.
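The statistical flow sketched in this abstract can be illustrated with a minimal, hypothetical example (not code from the talk; all names and data are invented): logistic regression models a circuit's pass/fail response as a function of synthetic PVT-style parameters, and k-fold cross-validation selects the L2 regularization strength to avoid overfitting.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, l2, iters=500, lr=0.1):
    """Gradient-descent logistic regression with L2 regularization."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y) + l2 * w
        w -= lr * grad
    return w

def cv_select_l2(X, y, l2_grid, k=5):
    """Pick the L2 strength with the best k-fold held-out log-likelihood."""
    folds = np.array_split(rng.permutation(len(y)), k)
    best_l2, best_ll = None, -np.inf
    for l2 in l2_grid:
        ll = 0.0
        for i in range(k):
            test = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            w = fit_logreg(X[train], y[train], l2)
            p = np.clip(sigmoid(X[test] @ w), 1e-9, 1 - 1e-9)
            ll += np.sum(y[test] * np.log(p) + (1 - y[test]) * np.log(1 - p))
        if ll > best_ll:
            best_l2, best_ll = l2, ll
    return best_l2

# Synthetic "PVT" features (e.g., threshold-voltage shift, temperature,
# supply droop) and a noisy failure label drawn from a known model.
X = rng.normal(size=(400, 3))
true_w = np.array([2.0, -1.0, 1.5])
y = (sigmoid(X @ true_w) > rng.uniform(size=400)).astype(float)

l2 = cv_select_l2(X, y, [0.001, 0.01, 0.1, 1.0])
w = fit_logreg(X, y, l2)
p_fail = sigmoid(X @ w)  # per-sample failure probability
```

In an actual yield-analysis flow, such a fitted model would steer importance sampling toward the failure region rather than score random samples, but the model-fit and regularization-selection steps are the same.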


Xiaobo Sharon Hu

Distinguished Lecturer 2018 - 2021

In-Memory Computing with Associative Memories — A Cross-Layer Perspective

August 19, 2021 at 11:00 AM ET

Data transfer between processors and memory is a major bottleneck in improving application-level performance. This is particularly true for data-intensive tasks such as many machine learning and security applications. In-memory computing, where certain data processing is performed directly in the memory array, can be an effective solution to this bottleneck. Associative memory (AM), a type of memory that can efficiently “associate” an input query with matching data words/locations in the memory, is a powerful in-memory computing core. Nonetheless, harnessing the benefits of AM requires cross-layer efforts spanning devices and circuits to architectures and systems. In this talk, I will showcase several representative cross-layer AM-based design efforts. In particular, I will highlight how different non-volatile memory technologies (such as RRAM, FeFET memory, and Flash) can be exploited to implement various types of AM (e.g., exact and approximate match, ternary and multi-bit data representation, and different distance functions). I will use several popular machine learning and security applications to demonstrate how they can profit from these different AM designs. End-to-end (device-to-application) evaluations will be analyzed to reveal the benefits contributed by each design layer and serve as a guide for future research efforts.
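As a hypothetical software illustration of the match operations this abstract mentions (the encoding and function names are my own, not from the talk), the sketch below models a ternary content-addressable memory: each stored word is a string over {'0', '1', 'X'}, where an 'X' cell matches either query bit, and an approximate-match variant uses Hamming distance as the distance function.

```python
def tcam_match(query, entries):
    """Ternary exact match: indices of stored words matching the query.

    Each entry is a string over {'0', '1', 'X'}; an 'X' cell matches
    either query bit, mimicking a TCAM's don't-care state.
    """
    return [i for i, word in enumerate(entries)
            if all(w in ('X', q) for q, w in zip(query, word))]

def nearest_match(query, entries):
    """Approximate match: index of the entry with the minimum
    Hamming distance to the query ('X' cells contribute distance 0)."""
    def dist(word):
        return sum(w not in ('X', q) for q, w in zip(query, word))
    return min(range(len(entries)), key=lambda i: dist(entries[i]))

table = ["1011", "10XX", "0011"]
hits = tcam_match("1011", table)     # entries 0 and 1 both match
near = nearest_match("0111", table)  # entry 2 is only one bit away
```

A hardware AM performs these comparisons against all rows in parallel inside the memory array; the loop here only captures the functional behavior.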


Sachin Sapatnekar

Distinguished Lecturer 2018 - 2021

Automating Analog Layout in the 21st Century

September 23, 2021 at 11:00 AM ET

EDA tools have been used routinely in the digital design flow for decades, but despite valiant efforts from the research community, analog design has stubbornly resisted automation. Several recent developments are helping turn the tide, driving wider adoption of automation tools within the analog design flow.  This talk explains the reasons for this change, and then describes recent efforts in analog layout automation with particular focus on the ALIGN (Analog Layout, Intelligently Generated from Netlists) project. ALIGN is a joint university-industry effort that is developing an open-source analog layout flow, leveraging a blend of traditional algorithmic methods with machine learning based approaches.  ALIGN targets a wide variety of designs – low frequency analog circuits, wireline circuits for high-speed links, RF/wireless circuits, and power delivery circuits – under a single framework.  The flow is structured modularly and is being built to cater to a wide range of designer expertise: the novice designer could use it in “push-button” mode, automatically generating GDSII layout from a SPICE netlist, while users with greater levels of expertise could bypass parts of the flow to incorporate their preferences and constraints. 

The talk will present an overview of both the technical challenges and the logistical barriers to building an open-source tool flow while respecting the confidentiality requirements of protected IP. Finally, the application of ALIGN to a variety of designs will be demonstrated.


Yier Jin

Distinguished Lecturer 2019 - 2021

Hardware Supported Cybersecurity for IoT

November 18, 2021 at 11:00 AM ET

Within the past decade, the number of IoT devices on the market has increased dramatically, and this trend is expected to continue at a rapid pace. However, the massive deployment of IoT devices has led to significant security and privacy concerns, given that security is often treated as an afterthought in IoT systems. Security issues arise at different levels, from deployment mistakes that leave devices exposed to the internet with default credentials, to implementation flaws where manufacturers incorrectly employ existing protocols or develop proprietary communication protocols that have never been properly vetted. While existing cybersecurity and network security solutions can help protect IoT, they are often constrained by the limited on-board/on-chip resources of IoT devices. To mitigate this problem, researchers have developed solutions that are either top-down (relying on the cloud for IoT data processing and authentication) or bottom-up (leveraging hardware modifications for efficient cybersecurity protection). In this talk, I will first introduce the emerging security and privacy challenges in the IoT domain. I will then focus on bottom-up solutions for IoT protection and present our recent research on microarchitecture-supported IoT runtime attack detection and device attestation. The developed methods lead toward a design-for-security flow for trusted IoT devices and their applications.

About CEDA's DL Program

The IEEE CEDA Distinguished Lecturer Program promotes the field of electronic design automation to the scientific community and the public at large. The goal of the program is to increase awareness about topics relevant to CEDA by creating a pool of subject matter experts and scholars to present to IEEE and CEDA Chapters, Sections and other venues such as universities and companies.