Title: Runtime Co-optimisation of Cores, Caches, and On-chip Network
Speaker: Preeti Ranjan Panda (IIT Delhi, India)
Modern multi-core systems provide large computational capability that can be used to run multiple processes concurrently. To achieve the best possible performance within a limited power budget, the various system resources must be allocated effectively. Choosing among multiple optimizations at runtime is complex because their effects are non-additive, making the scenario well suited to machine learning techniques. We present a novel method, Machine Learned Machines (MLM), which uses online reinforcement learning to dynamically partition caches at multiple levels while applying dynamic voltage and frequency scaling (DVFS) to the cores and the interconnection network. On a mix of 30 workloads drawn from the SPEC CPU2006 benchmarks, we show that the co-optimization yields a much lower energy-delay product (EDP) than any of the techniques applied individually: an average EDP improvement of 25%, with limited degradation of system throughput and fairness.
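To make the co-optimization idea concrete, the sketch below shows the general shape of an online Q-learning agent that jointly picks a cache partition, a core V/F level, and a NoC V/F level, with reward derived from EDP. This is a minimal illustration only, not the MLM algorithm from the talk: the knob values, state encoding, and hyperparameters are all hypothetical.

```python
import random

# Hypothetical knob settings (illustrative, not from the talk): the agent
# jointly chooses a last-level-cache partition, a core V/F level, and a
# NoC V/F level; the joint action space captures the non-additive effects.
CACHE_PARTITIONS = [(8, 8), (12, 4), (4, 12)]   # LLC ways per core group
CORE_VF_LEVELS = [0, 1, 2]                      # 0 = lowest voltage/frequency
NOC_VF_LEVELS = [0, 1]

ACTIONS = [(c, v, n) for c in range(len(CACHE_PARTITIONS))
           for v in CORE_VF_LEVELS for n in NOC_VF_LEVELS]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def epsilon_greedy(q, state):
    """Pick an action index: explore with probability EPSILON, else exploit."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    row = q.setdefault(state, [0.0] * len(ACTIONS))
    return row.index(max(row))

def update(q, state, action, reward, next_state):
    """Standard one-step Q-learning update on a tabular Q function."""
    row = q.setdefault(state, [0.0] * len(ACTIONS))
    nxt = q.setdefault(next_state, [0.0] * len(ACTIONS))
    row[action] += ALPHA * (reward + GAMMA * max(nxt) - row[action])

def reward_from_edp(energy_joules, delay_seconds):
    """Lower energy-delay product maps to higher reward."""
    return -energy_joules * delay_seconds
```

In a real system, `state` would summarize performance counters (e.g. miss rates, network utilization) sampled each epoch, and the chosen action would be applied to the hardware knobs before the next measurement interval.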
Speaker’s bio: Preeti Ranjan Panda received his B.Tech. in CSE from IIT Madras, and his MS and PhD degrees from the University of California, Irvine. He is currently a Professor in the CSE Department at IIT Delhi. He has previously worked at Texas Instruments and Synopsys. His research interests are Embedded Systems, Design Automation, Post-silicon Validation, Memory Architectures and Optimisations, and Low Power Computing. He is the author of two books: Memory Issues in Embedded Systems-on-Chip: Optimizations and Exploration and Power-efficient System Design. Prof. Panda has served as a member of the editorial boards of IEEE TCAD, ACM TODAES, and IJPP, and as Program and Organizing Co-chair of CODES+ISSS and ESWeek. He has also served on the program committees and chaired sessions at several conferences in the areas of Embedded Systems and EDA, including DAC, ICCAD, and DATE.