How Machine Learning Reshapes VLSI Interconnect Reliability Modeling, Optimization, and Management
Machine learning, and deep learning in particular, has proved effective at capturing spatial and temporal dynamic behaviors, which opens new opportunities for these difficult reliability tasks. In this talk, I will survey emerging machine learning and deep learning approaches to VLSI interconnect reliability modeling, optimization, and dynamic management.

I will first present a machine learning approach, based on generative adversarial learning, for modeling hydrostatic stress in multi-segment interconnects. We treat stress modeling as a time-varying 2D image-to-image translation problem; the resulting solution is orders of magnitude faster than existing numerical methods and about 10x faster than the state-of-the-art semi-analytic method.

Second, based on the observation that VLSI multi-segment interconnect trees can be naturally viewed as graphs, I will present a new graph convolutional network (GCN) model, called EMgraph, that considers both node and edge embedding features to estimate the transient electromigration (EM) stress of interconnect trees. The new method delivers a 10x speedup over GAN-based EM analysis, and its learned knowledge transfers to predicting stress on new interconnect trees.

Finally, to mitigate long-term aging effects due to NBTI, EM, and HCI, I will present an accuracy-reconfigurable stochastic computing (ARSC) framework for dynamic reliability and power management. Unlike existing stochastic computing work, in which the accuracy versus power/energy trade-off is fixed at design time, the ARSC design can change the accuracy, or bit-width, of the data at run time: it accommodates long-term aging by slowing the system clock frequency at the cost of accuracy, while maintaining computing throughput.
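To make the image-to-image framing concrete, the following is a minimal sketch (hypothetical, in PyTorch) of a conditional generator that maps a rasterized interconnect layout, plus a query-time channel, to a 2D stress map. The network shape, feature channels, and data layout here are illustrative assumptions, not the model from the talk; a pix2pix-style setup would pair such a generator with a discriminator during training.

```python
import torch
import torch.nn as nn

class StressGenerator(nn.Module):
    """Maps a rasterized interconnect layout (plus a time channel) to a 2D
    hydrostatic-stress map. Only the generator idea is sketched here."""
    def __init__(self, in_ch=2, base=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(base, 1, 4, stride=2, padding=1),
        )

    def forward(self, layout, t):
        # Broadcast the query time t as an extra input channel so one network
        # can predict the stress map at any time point.
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *layout.shape[-2:])
        x = torch.cat([layout, t_map], dim=1)
        return self.decode(self.encode(x))

gen = StressGenerator()
layout = torch.rand(4, 1, 64, 64)   # batch of rasterized interconnect images
t = torch.rand(4)                   # normalized aging times
stress = gen(layout, t)             # predicted stress maps, shape (4, 1, 64, 64)
```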
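The core EMgraph idea of mixing node and edge embeddings can likewise be sketched as a single message-passing layer over an interconnect tree. The sketch below (hypothetical, in plain PyTorch) assumes illustrative node features such as junction geometry and edge features such as segment length and current density; the actual EMgraph architecture and features may differ.

```python
import torch
import torch.nn as nn

class EdgeAwareGCNLayer(nn.Module):
    """One message-passing step that conditions messages on edge features."""
    def __init__(self, node_dim, edge_dim, out_dim):
        super().__init__()
        self.msg = nn.Linear(node_dim + edge_dim, out_dim)
        self.upd = nn.Linear(node_dim + out_dim, out_dim)

    def forward(self, x, edge_index, edge_attr):
        src, dst = edge_index                       # sender / receiver node ids
        # Message per edge: sender embedding concatenated with edge features.
        m = torch.relu(self.msg(torch.cat([x[src], edge_attr], dim=-1)))
        # Sum incoming messages at each destination node.
        agg = torch.zeros(x.size(0), m.size(-1))
        agg.index_add_(0, dst, m)
        return torch.relu(self.upd(torch.cat([x, agg], dim=-1)))

# Tiny 3-node interconnect tree: segments 0-1 and 1-2, both directions.
x = torch.rand(3, 4)                                # node features
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
edge_attr = torch.rand(4, 2)                        # per-edge features
layer = EdgeAwareGCNLayer(node_dim=4, edge_dim=2, out_dim=8)
h = layer(x, edge_index, edge_attr)                 # per-node stress embeddings
```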
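Finally, the accuracy/run-time trade-off that ARSC exploits can be illustrated with the textbook stochastic-computing primitive: multiplying two unipolar values by ANDing random bitstreams, where the stream length plays the role of the bit-width and sets the accuracy. The snippet below (a hypothetical NumPy sketch, not the ARSC hardware) shows how shortening the stream at run time trades accuracy for work per result, which is what lets a slowed, aged clock preserve throughput.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_bitstream(p, length):
    """Encode a value p in [0, 1] as a random bitstream of the given length."""
    return rng.random(length) < p

def sc_multiply(a, b, length):
    """Stochastic multiply: AND two independent bitstreams, then average."""
    return np.mean(to_bitstream(a, length) & to_bitstream(b, length))

a, b = 0.6, 0.7                                  # exact product: 0.42
for length in (1024, 256, 64):                   # reconfigure accuracy at run time
    est = sc_multiply(a, b, length)
    print(f"stream length {length:4d}: product ~ {est:.3f}")
```

Shorter streams finish in proportionally fewer cycles, so under a degraded clock the same result rate can be held by accepting the larger estimation error of the short stream.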