Hardware-aware Machine Learning for Biomedical Applications
With the prevalence of deep neural networks, machine intelligence has recently demonstrated performance comparable with, and in some cases superior to, that of human experts in medical imaging and computer-assisted intervention. Such accomplishments can be largely credited to ever-increasing computing power as well as a growing abundance of medical data. As larger clusters of faster computing nodes become available at lower cost and in smaller form factors, more data can be used to train deeper neural networks with more layers and neurons, which usually translates to higher performance but also higher computational complexity. For example, the widely used 3D U-Net for medical image segmentation has more than 16 million parameters and needs about 4.7×10¹³ floating-point operations to process a 512×512×200 3D image. The large size and high computational complexity of neural networks have brought about various issues that need to be addressed by the joint efforts of hardware designers and medical practitioners towards hardware-aware learning. In this talk, I will present novel solutions for the data acquisition and data processing stages of medical image computing, respectively, using hardware-oriented schemes that achieve lower latency, smaller memory footprint, and higher performance on embedded platforms. I will discuss how our hardware-aware machine learning approaches led to real-time MRI segmentation for prosthetic valve implantation assistance and enabled the world's first AI-assisted telementoring of cardiac surgery on April 3, 2019.