Real-Time Inference of Neural Networks - A Guide for DSP Engineers - Part II
ADC - Audio Developer Conference via YouTube
Overview
Learn advanced techniques for implementing neural network inference in real-time audio applications through this conference talk from ADC 2024. Explore the development of a comprehensive library that simplifies neural network deployment and integration in audio systems, building upon foundational concepts from Part I.

Discover methods for quantifying real-time violations within inference executions and understand the critical importance of monitoring performance in real-time environments. Examine strategies for integrating inference engines in real-time audio systems, particularly when running multiple instances simultaneously using static thread pools and host-provided threads. Master techniques for achieving minimal latency in neural network inference, including controversial approaches that push the boundaries of performance optimization. Analyze extensive benchmark results comparing different neural network architectures across various inference engines, revealing how factors such as input buffer size, model size, and previously executed inferences impact overall performance.

Gain insights from researchers at Technische Universität Berlin who specialize in applying neural networks to creative audio effects and synthesis in real-time and mixed-signal domains, combining theoretical knowledge with practical implementation strategies for DSP engineers working with modern AI-powered audio applications.
Syllabus
Real-Time Inference of Neural Networks: A Guide for DSP Engineers - Part II - ADC 2024
Taught by
ADC - Audio Developer Conference