Accelerating Recurrent Networks with Neuromorphic Hardware-Algorithm Co-Design
Neuromorphic computing takes inspiration from the brain to design novel algorithms and hardware platforms. By leveraging sparse, event-driven, and massively parallel processing, these systems achieve lower latency and better energy efficiency on key machine intelligence workloads. This talk will cover the motivation and principles behind modern neuromorphic architectures, focusing on the Intel Loihi 2 research chip as a representative example. Through two case studies, we will discuss the hardware-algorithm co-design approach and how the chip's specific feature set can be used to accelerate recurrent networks. Our results on combinatorial optimization and State Space Models show that this approach holds great promise for accelerating compute-intensive tasks, especially in edge and energy-constrained settings, and argue for more hardware-informed design in the computational sciences.
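For readers unfamiliar with the models mentioned above, the sketch below illustrates the linear recurrence at the heart of a diagonal State Space Model (the kind of recurrent computation referred to in the abstract). It is a minimal NumPy example with hypothetical names and dimensions, not material from the talk or a description of the Loihi 2 implementation.

```python
import numpy as np

# Illustrative sketch (hypothetical parameters, not from the talk):
# the core recurrence of a diagonal linear State Space Model,
#   x_t = A * x_{t-1} + B u_t,   y_t = C x_t
def ssm_scan(A_diag, B, C, u):
    """Run a diagonal linear SSM over an input sequence u of shape (T, d_in)."""
    T = u.shape[0]
    x = np.zeros(A_diag.shape[0])
    ys = []
    for t in range(T):
        x = A_diag * x + B @ u[t]  # per-step recurrent state update
        ys.append(C @ x)           # linear readout of the hidden state
    return np.stack(ys)

# Tiny usage example with random, stable parameters
rng = np.random.default_rng(0)
d_in, d_state, d_out, T = 4, 8, 2, 16
A_diag = rng.uniform(0.5, 0.99, size=d_state)  # diagonal transition, |A| < 1
B = rng.normal(size=(d_state, d_in))
C = rng.normal(size=(d_out, d_state))
y = ssm_scan(A_diag, B, C, rng.normal(size=(T, d_in)))
print(y.shape)  # (16, 2)
```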
Short Bio
Alessandro Pierro is an AI researcher at the Intel Neuromorphic Computing Lab and a doctoral candidate in computer science at the Ludwig-Maximilians-Universität München. His research aims to accelerate machine learning workloads through hardware-algorithm co-design, leveraging the latest advancements in parallel computing architectures. He focuses on energy-efficient inference for linear recurrent networks and State Space Models to enable sequence modeling at the edge with neuromorphic accelerators.
Contact and booking details
- Booking required? No