Tue, June 26, 2018
Learning and adaptation are key to natural and artificial intelligence in complex and variable environments. Neural computation and communication in the brain are partitioned into the grey matter of dense local synaptic connectivity in tightly knit neuronal networks, and the white matter of sparse long-range connectivity over axonal fiber bundles spanning distant brain regions. This exquisite distributed multiscale organization inspires the design of scalable neuromorphic systems for deep learning and inference, with hierarchical address-event routing of neural spike events, multiscale synaptic connectivity and plasticity, and their efficient implementation in low-power mixed-signal very-large-scale integrated (VLSI) silicon circuits. Advances in machine learning and system-on-chip integration have led to the development of massively parallel silicon learning machines with pervasive real-time adaptive intelligence at the nanoscale that begin to approach the efficacy and resilience of biological neural systems, and already exceed the nominal energy efficiency of synaptic transmission in the mammalian brain. I will highlight examples of neuromorphic learning systems-on-chips with applications in template-based pattern recognition, vision processing, and human-computer interfaces, and outline emerging scientific directions and engineering challenges in their large-scale deployment.
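The address-event routing mentioned above can be illustrated with a toy sketch: in address-event representation (AER), a neuron that fires does not transmit its analog state; it emits a compact (address, timestamp) event onto a shared bus, which a router then delivers to synaptic targets. The leaky integrate-and-fire dynamics, thresholds, and input currents below are illustrative assumptions, not details from the talk.

```python
# Toy sketch of address-event representation (AER): spiking neurons emit
# (neuron_address, time_step) events rather than streaming analog values.
# All parameters here (leak, threshold, input currents) are hypothetical.

def simulate_aer(currents, threshold=1.0, leak=0.9, steps=20):
    """Simulate leaky integrate-and-fire neurons; return the AER event stream."""
    v = [0.0] * len(currents)            # membrane potentials, one per neuron
    events = []                          # (address, timestamp) tuples
    for t in range(steps):
        for addr, i_in in enumerate(currents):
            v[addr] = leak * v[addr] + i_in   # leaky integration of input current
            if v[addr] >= threshold:          # threshold crossing: emit an event
                events.append((addr, t))
                v[addr] = 0.0                 # reset membrane after the spike
    return events

# Three neurons with different drive; only sufficiently driven ones spike,
# so the bus carries a sparse, asynchronous event stream.
events = simulate_aer([0.3, 0.05, 0.6])
```

The sparsity of the event stream is the point: bandwidth and energy scale with spiking activity rather than with the number of neurons, which is what makes hierarchical AER routing attractive for the multiscale connectivity described above.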