Design Automation WebiNar (DAWN)

Due to the outbreak of coronavirus (COVID-19), almost all conferences and symposia in the design automation community have been canceled or postponed. People cannot meet one another, learn about recent advances, or discuss research ideas; the whole community is quarantined, as if shrouded in a darkness cast by COVID-19. It is therefore time to bring in light from inspired scholars in our community, and so we are thrilled to announce the Design Automation WebiNar (DAWN) to drive research momentum and ensure our community remains at the cutting edge. Different from conventional keynote- and single-speaker-style webinars, DAWN is a special-session-style webinar: each event is formed by multiple presentations on a focused topic by leading experts in our community.

"The Best of EDA Research in 2021" invites researchers to give talks about their papers that received best paper awards from EDA-related journals (e.g., IEEE TCAD) and conferences (including MICRO, DAC, ICCAD, DATE, ASP-DAC, and ESWEEK). This is a two-day webinar consisting of 20-minute talks (a 15-minute presentation followed by a 5-minute Q&A).

April 11, 2022 (UTC-4)
10:00 AM-10:20 AM - Hardware/Software Co-Exploration of Neural Architectures
10:20 AM-10:40 AM - BOOM-Explorer: RISC-V BOOM Microarchitecture Design Space Exploration Framework
10:40 AM-11:00 AM - APOLLO: An Automated Power Modeling Framework for Runtime Power Introspection in High-Volume Commercial Microprocessors
11:00 AM-11:20 AM - Gemmini: Enabling Systematic Deep-Learning Architecture Evaluation via Full-Stack Integration

April 12, 2022 (UTC-4)
10:00 AM-10:20 AM - DREAMPlace: Deep Learning Toolkit-Enabled GPU Acceleration for Modern VLSI Placement
10:20 AM-10:40 AM - TreeNet: Deep Point Cloud Embedding for Routing Tree Construction
10:40 AM-11:00 AM - Intermittent-Aware Neural Architecture Search
11:00 AM-11:20 AM - A GPU-accelerated Deep Stereo-LiDAR Fusion for Real-time High-precision Dense Depth Sensing
11:20 AM-11:40 AM - An Energy-aware Online Learning Framework for Resource Management in Heterogeneous Platforms

Click each talk title for talk abstracts, speakers, and speaker bios.

2022 Webinars

Hardware/Software Co-Exploration of Neural Architectures
Weiwen Jiang

Abstract: We propose a novel hardware and software co-exploration framework for efficient neural architecture search (NAS). Different from existing hardware-aware NAS, which assumes a fixed hardware design and explores only the architecture search space, our framework simultaneously explores both the architecture search space and the hardware design space to identify the best neural architecture and hardware pairs that maximize both test accuracy and hardware efficiency. Such a practice greatly opens up the design freedom and pushes forward the Pareto frontier between hardware efficiency and test accuracy for better design tradeoffs. The framework iteratively performs a two-level (fast and slow) exploration. Without lengthy training, the fast exploration can effectively fine-tune hyperparameters and prune inferior architectures with respect to hardware specifications, which significantly accelerates the NAS process. The slow exploration then trains candidates on a validation set and updates a controller using reinforcement learning to maximize the expected accuracy together with the hardware efficiency. We demonstrate that the co-exploration framework can effectively expand the search space to incorporate models with high accuracy, and we theoretically show that the proposed two-level optimization can efficiently prune inferior solutions to better explore the search space. Experimental results on ImageNet show that the co-exploration NAS can find solutions with the same accuracy, 35.24% higher throughput, and 54.05% higher energy efficiency compared with hardware-aware NAS.
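For readers who want a concrete picture of the two-level idea in this abstract, the short Python sketch below prunes candidate architecture/accelerator pairs with toy analytical hardware estimates and then scores the survivors with a reward that mixes a mock accuracy proxy and hardware efficiency. The candidate encoding, the latency/energy formulas, and all constants are invented placeholders for illustration, not the framework's actual search space, models, or API.

```python
# Rough sketch of the two-level (fast and slow) exploration described above.
# Everything here is a made-up placeholder, not the framework's actual code.
import random

random.seed(0)

# Toy candidate: (network depth, layer width, accelerator PE count).
candidates = [(random.choice([8, 12, 16]),
               random.choice([32, 64, 128]),
               random.choice([64, 256, 1024])) for _ in range(50)]

LATENCY_BUDGET_MS = 3.0
ALPHA = 0.7                                   # weight between accuracy and efficiency

def fast_hw_estimate(depth, width, pes):
    """Fast level: analytical hardware estimates only, no training involved."""
    macs = depth * width * width
    latency_ms = macs / (pes * 1e3)           # toy roofline-style latency
    energy_mj = macs * 1e-4 + pes * 1e-3      # toy energy model
    return latency_ms, energy_mj

def slow_evaluate(depth, width):
    """Slow-level stand-in: pretend validation accuracy after short training."""
    return 0.5 + 0.02 * depth + 0.001 * width

def reward(cand):
    depth, width, pes = cand
    latency_ms, energy_mj = fast_hw_estimate(depth, width, pes)
    efficiency = 1.0 / (1.0 + latency_ms * energy_mj)
    return ALPHA * slow_evaluate(depth, width) + (1 - ALPHA) * efficiency

# Level 1: prune candidates that violate the hardware specification.
survivors = [c for c in candidates if fast_hw_estimate(*c)[0] <= LATENCY_BUDGET_MS]

# Level 2: score survivors; a real controller would be updated with this reward
# via reinforcement learning rather than taking a one-shot argmax.
best = max(survivors, key=reward)
print(f"{len(survivors)}/{len(candidates)} survived pruning; best candidate: {best}")
```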
BOOM-Explorer: RISC-V BOOM Microarchitecture Design Space Exploration Framework
Bei Yu, Chen Bai

Abstract: The microarchitecture design of a processor has become increasingly difficult due to the large design space and the time-consuming verification flow. Previously, researchers relied on prior knowledge and cycle-accurate simulators to analyze the performance of different microarchitecture designs, but methodologies for striking a good balance between power and performance have not been sufficiently discussed. This work proposes an automatic framework, termed BOOM-Explorer, to explore microarchitecture designs of the RISC-V Berkeley Out-of-Order Machine (BOOM) and achieve a good trade-off between power and performance. First, the framework utilizes an advanced microarchitecture-aware active learning (MicroAL) algorithm to generate a diverse and representative initial design set. Second, a Gaussian process model with deep kernel learning functions (DKL-GP) is built to characterize the design space. Third, correlated multi-objective Bayesian optimization is leveraged to explore Pareto-optimal designs. Experimental results show that BOOM-Explorer can find designs that dominate prior art and designs developed by senior engineers in terms of power and performance, within a much shorter time.
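The Pareto-dominance criterion behind statements like "designs that dominate prior art in power and performance" can be written down in a few lines. The sketch below is a generic, self-contained illustration of that criterion only; BOOM-Explorer itself builds a DKL-GP surrogate and runs correlated multi-objective Bayesian optimization on top of such comparisons, which is not shown here.

```python
# Minimal illustration of Pareto dominance over (power, performance) points.
# Not BOOM-Explorer's code; lower power and higher performance are better here.

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly better in one."""
    power_a, perf_a = a
    power_b, perf_b = b
    return (power_a <= power_b and perf_a >= perf_b) and (power_a < power_b or perf_a > perf_b)

def pareto_front(designs):
    """Return the designs not dominated by any other design."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]

# Toy example: (power in watts, performance in instructions per cycle).
designs = [(2.0, 1.1), (2.5, 1.3), (3.0, 1.25), (1.8, 0.9)]
print(pareto_front(designs))   # (3.0, 1.25) is dominated by (2.5, 1.3)
```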
APOLLO: An Automated Power Modeling Framework for Runtime Power Introspection in High-Volume Commercial Microprocessors
Zhiyao Xie

Abstract: Accurate power modeling is crucial for energy-efficient CPU design and runtime management. An ideal power modeling framework needs to be accurate yet fast, achieve high temporal resolution (ideally cycle-accurate) with low runtime computational overhead, and be easily extensible to diverse designs through automation. Simultaneously satisfying such conflicting objectives is challenging and largely unattained despite significant prior research. In this talk, I will introduce our work APOLLO, which has several key attributes. First, it supports fast and accurate design-time power model simulation, handling benchmarks of millions of cycles in minutes with an emulator. Second, it incorporates an unprecedentedly low-cost runtime on-chip power meter in the CPU RTL for per-cycle power tracing. Third, the development process is fully automated and applies to any given design. The method has been validated on the high-volume commercial microprocessors Neoverse N1 and Cortex-A77.

Gemmini: Enabling Systematic Deep-Learning Architecture Evaluation via Full-Stack Integration
Sophia Shao

Abstract: DNN accelerators are often developed and evaluated in isolation, without considering cross-stack, system-level effects in real-world environments. This makes it difficult to appreciate the impact of system-on-chip (SoC) resource contention, OS overheads, and programming-stack inefficiencies on overall performance and energy efficiency. To address this challenge, we present Gemmini, an open-source, full-stack DNN accelerator generator. Gemmini generates a wide design space of efficient ASIC accelerators from a flexible architectural template, together with flexible programming stacks and full SoCs with shared resources that capture system-level effects. Gemmini-generated accelerators have also been fabricated, delivering up to three orders of magnitude speedup over high-performance CPUs on various DNN benchmarks.

DREAMPlace: Deep Learning Toolkit-Enabled GPU Acceleration for Modern VLSI Placement
Yibo Lin

Abstract: Placement for very-large-scale integrated (VLSI) circuits is one of the most important steps toward design closure. We propose DREAMPlace, a novel GPU-accelerated placement framework, by casting the analytical placement problem equivalently as training a neural network. Implemented on top of the widely adopted deep learning toolkit PyTorch, with customized key kernels for wirelength and density computations, DREAMPlace achieves around 40× speedup in global placement without quality degradation compared with the state-of-the-art multithreaded placer RePlAce. We believe this work will open up new directions for revisiting classical EDA problems with advancements in AI hardware and software.
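To illustrate the "placement as neural-network training" formulation mentioned in this abstract, here is a minimal PyTorch sketch in which cell coordinates are the trainable parameters and a smooth wirelength proxy plus a crude repulsion term serve as the loss. The random netlist, the log-sum-exp wirelength, and the repulsion penalty are simplifications assumed for illustration; DREAMPlace's actual customized wirelength and density kernels are not reproduced here.

```python
# Toy sketch of the "placement as training" idea: cell coordinates are the
# trainable parameters, optimized by gradient descent. Not DREAMPlace's kernels.
import torch

torch.manual_seed(0)
num_cells, num_nets = 64, 32
nets = [torch.randint(0, num_cells, (4,)) for _ in range(num_nets)]  # toy random netlist

pos = torch.nn.Parameter(torch.rand(num_cells, 2))   # (x, y) of every cell: the "weights"
opt = torch.optim.Adam([pos], lr=0.01)
gamma = 0.05                                          # smoothing for the log-sum-exp wirelength

def smooth_hpwl(coords):
    """Differentiable half-perimeter wirelength proxy via log-sum-exp max/min."""
    lse_max = gamma * torch.logsumexp(coords / gamma, dim=0)
    lse_min = -gamma * torch.logsumexp(-coords / gamma, dim=0)
    return (lse_max - lse_min).sum()

for step in range(200):
    opt.zero_grad()
    wirelength = sum(smooth_hpwl(pos[net]) for net in nets)
    diff = pos.unsqueeze(0) - pos.unsqueeze(1)        # pairwise offsets between cells
    dist = (diff.pow(2).sum(-1) + 1e-6).sqrt()        # small eps keeps gradients finite
    spreading = torch.relu(0.05 - dist).sum()         # crude stand-in for a density penalty
    loss = wirelength + 0.1 * spreading
    loss.backward()                                    # autograd supplies placement gradients
    opt.step()

print("final smoothed wirelength:", float(sum(smooth_hpwl(pos[net]) for net in nets)))
```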
TreeNet: Deep Point Cloud Embedding for Routing Tree Construction
Yuzhe Ma

Abstract: In routing tree construction, both wirelength (WL) and pathlength (PL) are important. Among existing methods, PD-II and SALT are the two most prominent, but neither always dominates the other in terms of both WL and PL for all nets. In addition, estimating the best parameters for both algorithms remains an open problem. In this paper, we model the pins of a net as a point cloud and formalize a set of special properties of such point clouds. Considering these properties, we propose a novel deep neural network architecture, TreeNet, to obtain an embedding of the point cloud. Based on the obtained embedding, an adaptive workflow is designed for routing tree construction. Experimental results show that TreeNet is superior to other mainstream point cloud models on classification tasks, and that the proposed adaptive workflow for routing tree construction outperforms SALT and PD-II in terms of both efficiency and effectiveness.

Intermittent-Aware Neural Architecture Search
Pi-Cheng Hsiu, Hashan Roshantha Mendis

Abstract: Intermittently executing deep neural network (DNN) inference powered by ambient energy paves the way for sustainable and intelligent edge applications. Neural architecture search (NAS) has achieved great success in automatically finding highly accurate networks with low latency. However, we observe that NAS attempts to improve inference latency primarily by maximizing data reuse, and the derived solutions can be inefficient when deployed on intermittent systems: the inference may not satisfy an end-to-end latency requirement and, more seriously, may be unsafe given an insufficient energy budget. This work proposes iNAS, which introduces intermittent execution behavior into NAS. To generate accurate neural networks and corresponding intermittent execution designs that are safe and efficient, iNAS finds the right balance between data reuse and the costs of progress preservation and recovery, while ensuring that the power-cycle energy budget is not exceeded. The solutions found by iNAS and an existing HW-NAS were evaluated on a Texas Instruments device under intermittent power, across different datasets, energy budgets, and latency requirements. Experimental results show that in all cases the iNAS solutions safely meet the latency requirements and substantially improve end-to-end inference latency compared with the HW-NAS solutions.
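The safety condition emphasized in the iNAS abstract, that progress within one power cycle must fit the available energy, can be illustrated with a small feasibility-and-latency check. Everything below (the job granularities, energy numbers, and timing model) is an invented toy rather than iNAS's actual cost model, but it shows why a design with more data reuse can be faster under a generous energy budget yet unsafe under a tighter one.

```python
# Toy check of the intermittent-execution constraint: an inference is split into
# atomic jobs; each power cycle pays a recovery cost and must leave room for one
# progress-preservation (checkpoint) before power runs out. Numbers are illustrative.

def schedule(job_energies, budget, e_preserve, e_recover,
             t_per_unit_energy=1.0, t_recharge=5.0):
    """Return (safe, end_to_end_latency) for running all jobs intermittently."""
    cycle_used, cycles = e_recover, 1
    for e in job_energies:
        if e_recover + e + e_preserve > budget:
            return False, float("inf")         # a single job can never fit: unsafe design
        if cycle_used + e + e_preserve > budget:
            cycles += 1                         # checkpoint, lose power, recharge, recover
            cycle_used = e_recover
        cycle_used += e
    total_energy = sum(job_energies) + cycles * (e_recover + e_preserve)
    latency = total_energy * t_per_unit_energy + (cycles - 1) * t_recharge
    return True, latency

# More data reuse -> fewer, larger jobs and lower total energy, but each job needs
# more energy in one go. Under the tighter budget the coarse design becomes unsafe.
fine = [0.75] * 20     # less reuse: 20 small jobs, 15.0 units of compute energy
coarse = [3.5] * 4     # more reuse: 4 large jobs, 14.0 units of compute energy
for budget in (5.0, 4.0):
    for name, jobs in (("fine", fine), ("coarse", coarse)):
        print(budget, name, schedule(jobs, budget, e_preserve=0.5, e_recover=0.25))
```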
A GPU-Accelerated Deep Stereo-LiDAR Fusion for Real-Time High-Precision Dense Depth Sensing
Gang Chen, Haitao Meng

Abstract: Active LiDAR and stereo vision are the most commonly used depth-sensing techniques in autonomous vehicles. Each alone has weaknesses in terms of density and reliability and thus cannot perform well in all practical scenarios. Recent works use deep neural networks (DNNs) to exploit their complementary properties and achieve superior depth sensing. However, these state-of-the-art solutions fall short on real-time responsiveness due to the high computational complexity of DNNs. In this paper, we present FastFusion, a fast deep stereo-LiDAR fusion framework for real-time high-precision depth estimation. FastFusion provides an efficient two-stage fusion strategy that leverages a binary neural network to integrate stereo-LiDAR information as input and uses cross-based LiDAR trust aggregation to further fuse the sparse LiDAR measurements in the back end of stereo matching. More importantly, we present a GPU-based acceleration framework that provides a low-latency implementation of FastFusion, gaining both accuracy improvement and real-time responsiveness. In our experiments, we demonstrate the effectiveness and practicality of FastFusion, which obtains a significant speedup over state-of-the-art baselines while achieving comparable accuracy on depth sensing.

An Energy-Aware Online Learning Framework for Resource Management in Heterogeneous Platforms
Umit Ogras, Sumit K. Mandal

Abstract: Mobile platforms must satisfy the contradictory requirements of fast response time and minimum energy consumption as a function of dynamically changing applications. To address this need, the systems-on-chip (SoCs) at the heart of these devices provide a variety of control knobs, such as the number of active cores and their voltage/frequency levels. Controlling these knobs optimally at runtime is challenging for two reasons. First, the large configuration space prohibits exhaustive solutions. Second, control policies designed offline are at best sub-optimal, since many potential new applications are unknown at design time. We address these challenges by proposing an online imitation learning approach. Our key idea is to construct an offline policy and adapt it online to new applications to optimize a given metric (e.g., energy). The proposed methodology leverages the supervision enabled by power-performance models learned at runtime. We demonstrate its effectiveness on a commercial mobile platform with 16 diverse benchmarks. Our approach successfully adapts the control policy to an unknown application after executing less than 25% of its instructions.
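To make the online-adaptation idea concrete, here is a small, self-contained sketch of a runtime manager that keeps per-configuration running estimates of latency and energy, picks the lowest-energy configuration predicted to meet a latency target, and refines its estimates after every observation. The configuration list, the exponential-moving-average update, and the synthetic workload are illustrative assumptions, not the paper's actual models, policy, or imitation-learning formulation.

```python
import random

# Candidate SoC configurations: (number of active cores, frequency in GHz). Toy values.
CONFIGS = [(2, 0.8), (2, 1.6), (4, 1.2), (4, 2.0), (8, 2.0)]

class OnlineEnergyManager:
    """Toy runtime manager: exponential moving averages of latency/energy per
    configuration stand in for learned power-performance models, and the greedy
    selection stands in for the adapted control policy."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.latency = {c: 1.0 for c in CONFIGS}   # optimistic priors (ms)
        self.energy = {c: 1.0 for c in CONFIGS}    # optimistic priors (mJ)

    def choose(self, latency_target_ms):
        feasible = [c for c in CONFIGS if self.latency[c] <= latency_target_ms]
        pool = feasible or CONFIGS                  # fall back to all configs if none look feasible
        return min(pool, key=lambda c: self.energy[c])

    def observe(self, config, latency_ms, energy_mj):
        a = self.alpha
        self.latency[config] = (1 - a) * self.latency[config] + a * latency_ms
        self.energy[config] = (1 - a) * self.energy[config] + a * energy_mj

# Synthetic "application" used only to exercise the loop (not real measurements).
random.seed(1)
def run_app(cores, ghz):
    latency = 40.0 / (cores * ghz) * random.uniform(0.9, 1.1)
    energy = cores * ghz ** 2 * latency * 0.05
    return latency, energy

mgr = OnlineEnergyManager()
for frame in range(50):
    cfg = mgr.choose(latency_target_ms=10.0)
    mgr.observe(cfg, *run_app(*cfg))
print("converged choice:", mgr.choose(10.0))
```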
2020 DAWN

DAWN: Machine Learning for EDA
Azalia Mirhoseini, Anna Goldie, David Pan, Jiang Hu, Song Han, Shao-Yun Fang
Talk 1 (0'-20'): Reinforcement Learning for Placement Optimization
Talk 2 (20'-35'): AI-Enabled Agile IC Physical Design and Manufacturing
Talk 3 (35'-50'): Plug-in Use of Machine Learning and Beyond
Talk 4 (50'-65'): Efficient AI, TinyML, Model Compression
Talk 5 (65'-80'): Pin Access Optimization Using Machine Learning
Panel (80'-120'): Q&A
More information provided here.

DAWN: Secure Silicon - Recent Developments and Upcoming Challenges
Mark M. Tehranipoor, Gang Qu, Brandon Wang, Ahmad-Reza Sadeghi
Talk 1 (0'-25'): Keys to Hardware Security
Talk 2 (25'-50'): Automated Implementation of Secure Silicon
Talk 3 (50'-75'): Assessment of Hardware Security and Trust
Talk 4 (75'-100'): Enclave Computing on RISC-V: A Brighter Future for Platform Security?
More information provided here.

DAWN: Career Development for Scholars in EDA Research
Diana Marculescu, Kwang-Ting Tim Cheng, Giovanni De Micheli, Ayse Coskun, Phillip Stanley-Marbell, Jeyavijayan Rajendran
The second Design Automation WebiNar (DAWN), on Career Development for Scholars in EDA Research, will be held Tuesday, June 23rd, from 9:00-10:30 am CDT / 10-11:30 pm HKT / 4-5:30 pm CEST.

DAWN: Publishing in EDA Transactions, Journals, and Magazines
Hai (Helen) Li, Rajesh K. Gupta, X. Sharon Hu, Tulika Mitra, Ramesh Karri, Jörg Henkel
The fourth event in this series is a panel on the topic "Publishing in EDA Transactions, Journals, and Magazines".
More information provided here.

DAWN: Quantum Computing & Design Automation - Challenges & Opportunities
Oliver Dial, Ross Duncan, Carmen G. Almudever, Robert Wille
More information provided here.

Organizers

Yuan-Hao Chang, Academia Sinica, Taiwan, 10 (Asia and Pacific) - Website
Tsung-Yi Ho, National Tsing Hua University, Taiwan, 10 (Asia and Pacific) - Email

DAWN Sponsored By