Rajiv Joshi
IBM, T. J. Watson Research Center, United States

Talk: Predictive Analytics in Machine Learning

As semiconductor technology enters the sub-14nm era, geometry, process, voltage, and temperature (PVT) variability in devices can affect the performance, functionality, and power of circuits, especially in new Artificial Intelligence (AI) accelerators. This is where predictive failure analytics becomes critical: it can identify failure issues in logic and memory circuits and drive circuit operation into the energy-efficient region. This talk describes how key statistical techniques and new algorithms can be used effectively to analyze and build robust circuits. These algorithms can be applied to decoders, latches, and volatile as well as non-volatile memories. In addition, the talk demonstrates how these methodologies can be extended to "reliability prediction" and "hardware corroboration." Logistic regression-based machine learning techniques are employed to model the circuit response and speed up simulation of the important sample points. To avoid overfitting, a cross-validation-based regularization framework for ordered feature selection is demonstrated. Techniques are also described for generating accurate parasitic capacitance models, along with PVT variations, for sub-22nm technologies, and for incorporating them into a physics-based statistical analysis methodology for accurate Vmin analysis. The extension of these techniques with machine learning methods such as K-nearest neighbors (KNN) is highlighted. Finally, the talk summarizes important open issues in this field.

Talk: From Deep Scaling To Deep Intelligence

Moore's law, which has driven the advancement of the semiconductor industry for decades, has been coming to a screeching halt, and many researchers are convinced that it is almost dead.
With the revival and promise of artificial intelligence (AI), enabled by the increased computational performance and memory bandwidth that Moore's law delivered, there is overwhelming enthusiasm among researchers for sustaining the pace of the VLSI industry. AI relies on many neural-network techniques whose computation involves training and inference. Advancement in AI requires energy-efficient, low-power hardware systems, all the more so for servers, main processors, Internet of Things (IoT) and system-on-chip (SoC) applications, and newer applications in cognitive computing. In light of AI, this talk focuses on important circuit techniques for lowering power and improving performance and functionality in nanoscale VLSI design in the midst of variability. The same techniques can be used for AI-specific accelerators. Accelerator development for power reduction and throughput improvement, for both edge and data-centric accelerators, compared to the GPUs used for convolutional neural networks (CNNs) and deep neural networks (DNNs), is described. The talk covers memory (volatile and non-volatile) solutions for CNN/DNN applications at extremely low Vmin, and also focuses on in-memory computation. Binary and analog applications using non-volatile memories (NVM) are illustrated. Accelerator architectures for bitwise convolution that feature massive parallelism with high energy efficiency are described for both binary and analog memories. In our earlier work, numerical experiments show that the binary CNN accelerator on a digital ReRAM crossbar achieves a peak throughput of 792 GOPS at a power consumption of 4.5 mW, which is 1.61 times faster and 296 times more energy-efficient than a high-end GPU. Finally, the talk summarizes challenges and future directions for circuit applications in edge and data-centric accelerators.
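The bitwise convolution that such binary accelerators parallelize reduces each dot product to an XNOR followed by a popcount once activations and weights are constrained to {-1, +1}. A minimal NumPy sketch of that reduction (illustrative only; the function name and the sign-based binarization convention are assumptions, and a real crossbar performs this in memory rather than in software):

```python
import numpy as np

def binary_conv2d(activations, weights):
    """Bitwise 2-D convolution: both operands are binarized to {-1, +1},
    so each dot product becomes XNOR (count of agreeing bits) + popcount."""
    H, W = activations.shape
    k, _ = weights.shape
    # Binarize to bit planes: bit 1 stands for +1, bit 0 for -1.
    a_bits = (activations >= 0).astype(np.uint8)
    w_bits = (weights >= 0).astype(np.uint8)
    n = k * k
    out = np.empty((H - k + 1, W - k + 1), dtype=np.int32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = a_bits[i:i + k, j:j + k]
            # XNOR marks positions where the bits agree; summing them
            # is the popcount.
            agree = int(np.sum(patch == w_bits))
            # Map back to the {-1, +1} dot product: agreements minus
            # disagreements.
            out[i, j] = 2 * agree - n
    return out
```

Because 2*popcount(XNOR(a, w)) - n equals the {-1, +1} dot product, a crossbar that evaluates XNOR and popcount for many windows in parallel gets the massive parallelism and energy efficiency the abstract refers to.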
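The logistic-regression-guided importance sampling mentioned in the first abstract can be sketched as follows. This is a toy illustration under loud assumptions: the threshold "circuit" is hypothetical, scikit-learn is assumed available, and a real flow would call a SPICE-level simulator rather than a closed-form pass/fail function:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # assumption: scikit-learn available

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pass/fail circuit simulation: the cell
# "fails" when two process-variation parameters together exceed a threshold.
def circuit_fails(x):
    return (x[:, 0] + x[:, 1] > 2.0).astype(int)

# Pilot Monte Carlo run in the variation space.
X = rng.normal(size=(2000, 2))
y = circuit_fails(X)

# Fit a logistic-regression surrogate of the circuit's pass/fail response.
surrogate = LogisticRegression().fit(X, y)

# Rank fresh candidates by predicted failure probability and keep only the
# points nearest the decision boundary -- the "important" sample points
# worth the cost of a full simulation.
candidates = rng.normal(size=(20000, 2))
p_fail = surrogate.predict_proba(candidates)[:, 1]
important = candidates[np.argsort(np.abs(p_fail - 0.5))[:200]]
```

The abstract's cross-validation-based regularization would correspond to selecting the regularization strength by cross-validation (e.g. scikit-learn's LogisticRegressionCV) before trusting the surrogate's ranking, so the surrogate does not overfit the pilot run.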