The recent artificial intelligence (AI) boom has been driven primarily by the confluence of three forces: algorithms, data, and the computing power enabled by modern integrated circuits (ICs), including specialized AI accelerators. This talk will present a closed-loop perspective on synergistic AI and agile IC design with two main themes: AI for IC and IC for AI. As semiconductor technology enters the era of extreme scaling, IC design and manufacturing complexity has become extremely high. More intelligent and agile IC design technologies are needed than ever to optimize performance, power, manufacturability, design cost, etc., and to deliver scaling equivalent to Moore’s Law. This talk will present recent results that leverage modern AI and machine learning advances, with domain-specific customizations, for agile IC design and manufacturing closure. Meanwhile, customized ICs, including those built with beyond-CMOS technologies, can improve AI performance and energy efficiency by orders of magnitude. I will present recent results on hardware/software co-design for high-performance, energy-efficient optical neural networks. Closing the virtuous cycle between AI and IC holds great potential to significantly advance the state of the art of both.
Side-Channel Analysis - Debdeep Mukhopadhyay (IIT Kharagpur)
30-min demo with a 10-min Q&A
CAD Tool: https://cadforassurance.org/tools/sca/exp-fault/
MIMI - Patanjali SLPSK and Jonathan Cruz (UF)
30-min demo with a 10-min Q&A
CAD Tool: https://cadforassurance.org/tools/ic-trust-verification/mimi/
The CAD for Trust and Assurance website is an academic dissemination effort by researchers in the field of hardware security. The goal is to assemble information on all CAD for trust/assurance activities in academia and industry in one place and share it with the broader community of researchers and practitioners in a timely manner, with an easy-to-search, easy-to-access interface. We’re including information on many major CAD tools the research community has developed over the past decade, including open-source license-free or ready-for-licensing tools, associated metrics, relevant publications, and video demos. We are also delighted to announce a series of virtual CAD for Assurance tool training webinars starting in February 2021.
Additional information on these webinars is available at the CAD for Assurance website at https://cadforassurance.org/.
Artificial intelligence (AI) ubiquitously impacts almost all research communities, including electronic design automation (EDA). Many scholars with mathematical and modeling backgrounds have shifted their focus to applying AI technologies to their research or to working directly on AI problems. As a researcher with Ph.D. training in EDA and circuit design, I started my AI-related research in the late 2000s with neuromorphic computing, which implements hardware to accelerate the computation of biologically plausible learning models. In this talk, I will review the development of my research from neuromorphic computing to a broader scope of AI, including machine learning accelerator design, neural network quantization and pruning, neural architecture search, federated learning, and neural network robustness, privacy, and security, and how I benefit from my EDA background.
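Among the topics listed above, neural network quantization is easy to illustrate concretely. The sketch below shows minimal post-training symmetric uniform quantization of a weight tensor to 8-bit integers; it is an illustration of the general technique, not any specific accelerator flow, and real systems also handle activations, per-channel scales, and calibration.

```python
import numpy as np

def quantize_uniform(weights, num_bits=8):
    """Uniformly quantize a float weight tensor to signed num_bits integers.

    A minimal sketch of post-training symmetric quantization: one scale
    factor for the whole tensor, chosen so the largest-magnitude weight
    maps to the integer extreme.
    """
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for 8 bits
    scale = np.max(np.abs(weights)) / qmax    # single per-tensor scale
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer representation."""
    return q.astype(np.float32) * scale

# toy example: the quantization error is bounded by half the scale step
w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, s = quantize_uniform(w)
w_hat = dequantize(q, s)
```

Pruning composes naturally with the same representation: zeroed weights quantize to integer zero and can be skipped by a sparse accelerator datapath.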
NEOS Toolset by Kaveh Shamsi (UT-Dallas) and Deep Learning-Based Model Building Attacks on Arbiter PUF Compositions by Rajat Subhra Chakraborty (IIT Kharagpur) and Pranesh Santikellur (IIT Kharagpur).
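The model-building attacks on arbiter PUFs referenced above build on the classical additive delay model, in which the response is a sign function of a linear combination of parity-transformed challenge bits. As a minimal illustration of that idea (this is not the authors' tool, and the stage count and learner are arbitrary choices), a simulated arbiter PUF can be cloned from challenge-response pairs with even a simple perceptron:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stages = 32

def parity_features(challenges):
    """Map 0/1 challenge bits to the parity feature vector of the
    standard additive delay model: Phi_i = prod_{j>=i} (1 - 2*c_j),
    plus a constant bias term."""
    c = 1 - 2 * challenges                            # {0,1} -> {+1,-1}
    phi = np.cumprod(c[:, ::-1], axis=1)[:, ::-1]     # suffix products
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

# "secret" delay parameters of a simulated arbiter PUF
w_true = rng.normal(size=n_stages + 1)

def puf_response(challenges):
    return (parity_features(challenges) @ w_true > 0).astype(int)

# model-building attack: fit a linear model to observed CRPs
train_c = rng.integers(0, 2, size=(5000, n_stages))
X, y = parity_features(train_c), 2 * puf_response(train_c) - 1
w_model = np.zeros(n_stages + 1)
for _ in range(20):                                   # a few perceptron epochs
    for xi, yi in zip(X, y):
        if yi * (w_model @ xi) <= 0:
            w_model += yi * xi

# the learned model predicts fresh challenges with high accuracy
test_c = rng.integers(0, 2, size=(2000, n_stages))
acc = np.mean((parity_features(test_c) @ w_model > 0).astype(int)
              == puf_response(test_c))
```

Compositions such as XOR arbiter PUFs break this linearity, which is why the webinar's deep-learning-based attacks are needed there.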
Visual content continues to be on the rise. Increasingly, visual analytics are part of computational pipelines supporting latency-sensitive and interactive applications such as situational awareness, which combine content dynamically aggregated from sensors and end-user devices. The talk will highlight assistive visual systems for persons with visual impairments and a pollinator-tracking system as examples of such end uses. Using these application contexts, the design space of these distributed sensors will be explored with a focus on communication costs. Next, the talk will turn to video query processing, which is evolving from applications that query pre-extracted metadata using traditional query processing techniques to applications that directly analyze geometry, semantics, and content in the video bitstream. This talk will showcase ongoing efforts in system-level design. Finally, we will show some new opportunities arising from the shift from 2D to 3D sensors. The talk leverages efforts from collaborators on the SRC JUMP Visual Analytics team and the NSF Expeditions in Computing Visual Cortex on Silicon program.
CAD for Assurance tool training virtual webinars
The second Design Automation WebiNar (DAWN) on Career Development for Scholars in EDA Research will be held Tuesday, June 23rd from 9:00-10:30 am CDT / 10-11:30 pm HKT / 4-5:30 pm CEST.
Recent years have seen an explosion in the cost and time required to design advanced systems-on-chip (SoCs), systems-in-package (SiPs), and PCBs. As part of the $1.5B Electronics Resurgence Initiative (ERI), DARPA is building the world's first general-purpose silicon compilers. The effort involves two distinct research programs: the Intelligent Design of Electronic Assets (IDEA) program, aiming to create a no-human-in-the-loop layout generator for digital and analog circuits, and the Posh Open Source Hardware (POSH) program, aiming to create a high-quality, trustable open-source ecosystem. Together, the efforts will create a universal hardware compiler capable of automatically generating production-ready GDSII layouts directly from a rich catalog of trustable source code and schematics for digital as well as analog circuits. Achieving this ambitious goal will require advancing the state of the art in machine learning, optimization algorithms, expert systems, and verification technology. This talk will discuss the technical challenges associated with building a universal hardware compiler and provide an analysis of its potential economic and societal impacts.
Learning and adaptation are key to natural and artificial intelligence in complex and variable environments. Neural computation and communication in the brain are partitioned into the grey matter of dense local synaptic connectivity in tightly knit neuronal networks, and the white matter of sparse long-range connectivity over axonal fiber bundles across distant brain regions. This exquisite distributed multiscale organization provides inspiration to the design of scalable neuromorphic systems for deep learning and inference, with hierarchical address event-routing of neural spike events and multiscale synaptic connectivity and plasticity, and their efficient implementation in silicon low-power mixed-signal very-large-scale-integrated circuits. Advances in machine learning and system-on-chip integration have led to the development of massively parallel silicon learning machines with pervasive real-time adaptive intelligence at nanoscale that begin to approach the efficacy and resilience of biological neural systems, and already exceed the nominal energy efficiency of synaptic transmission in the mammalian brain. I will highlight examples of neuromorphic learning systems-on-chips with applications in template-based pattern recognition, vision processing, and human-computer interfaces, and outline emerging scientific directions and engineering challenges in their large-scale deployment.
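The address-event routing mentioned above transmits spikes as discrete events tagged with a source address, and a routing table fans each event out to its synaptic targets. The sketch below is a minimal software illustration of that scheme, assuming toy neuron addresses and weights (not taken from any specific neuromorphic chip).

```python
from collections import defaultdict

# Hypothetical routing table for address-event representation (AER):
# each source neuron address maps to (target address, synaptic weight) pairs.
routing_table = {
    0: [(2, 0.5), (3, -0.2)],   # neuron 0 fans out to neurons 2 and 3
    1: [(2, 0.8)],              # neuron 1 projects to neuron 2
}

def route_events(events):
    """Deliver a list of (timestamp, source address) spike events.

    Returns the accumulated synaptic input per target neuron; in hardware
    this delivery happens asynchronously, one event at a time.
    """
    inputs = defaultdict(float)
    for t, src in events:
        for dst, weight in routing_table.get(src, []):
            inputs[dst] += weight      # accumulate synaptic contributions
    return dict(inputs)

# three spikes: neuron 0 fires twice, neuron 1 once
totals = route_events([(0.0, 0), (0.1, 1), (0.2, 0)])
```

Because only addresses of neurons that actually fire are transmitted, communication cost scales with spike activity rather than network size, which is what makes the hierarchical multiscale routing described above efficient.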