Presentation Type
Lecture

AI Hardware Accelerator for Safety-Critical Applications

Presenter


Abstract

Deep Neural Networks (DNNs) are among the most intensively and widely used predictive models in machine learning. Nonetheless, achieving the full potential of DNNs requires high computation speed, large memory resources, and significant energy consumption. To run DNN algorithms outside the cloud, on distributed Internet-of-Things (IoT) devices, customized HardWare platforms for Artificial Intelligence (HW-AI) are required. However, like traditional computing hardware, HW-AI is subject to hardware faults caused by process variations, aging, and environmental reliability threats. Although HW-AI offers some inherent fault resilience, faults can still lead to prediction failures that seriously affect the application execution. Typical reliability approaches, such as on-line testing and hardware redundancy, or even retraining, are less appropriate for HW-AI due to their prohibitive overhead: DNNs are large architectures with substantial memory requirements and immense training sets. This talk will address these limitations by exploiting the particularities of HW-AI architectures to develop low-cost and efficient reliability strategies.
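As a minimal illustrative sketch (not taken from the talk, and assuming float32 weight storage), the snippet below flips single bits in the binary encoding of a DNN weight. It hints at why HW-AI has some inherent fault resilience yet can still fail: faults in low-order mantissa bits barely change the value, while a fault in a high exponent bit can blow it up by many orders of magnitude and corrupt the prediction.

```python
import numpy as np

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = mantissa LSB, 31 = sign) in the float32 encoding of value."""
    word = np.array(value, dtype=np.float32).view(np.uint32)
    word ^= np.uint32(1 << bit)  # inject a single-bit fault
    return float(word.view(np.float32))

rng = np.random.default_rng(0)
weight = float(np.float32(rng.normal(0.0, 0.1)))  # a typical small DNN weight

# Compare a benign fault (mantissa LSB) with severe ones (exponent bits).
for bit in (0, 10, 23, 30):
    print(f"bit {bit:2d}: {weight:+.6f} -> {flip_bit(weight, bit):+.6e}")
```

Running this shows the fault-free value almost unchanged for low mantissa bits but inflated to an astronomically large magnitude for the exponent MSB, which is the kind of failure the talk's low-cost reliability strategies aim to catch.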

Description