GPU-Powered Neural Audio - High-Performance Inference for Real-Time Sound Processing
ADC - Audio Developer Conference via YouTube
Overview
This hands-on workshop from the Audio Developer Conference (ADC) explores GPU-powered neural audio for high-performance, real-time sound processing. Dive into the practical application of neural networks for audio by working with Neural Amp Modeler, an open-source project that uses deep learning to replicate guitar amplifiers and pedals with remarkable accuracy. Learn how to port and scale Neural Amp Modeler plugins to the GPU using the GPU AUDIO technology stack, with a focus on low latency, parallel execution, and flexible model creation. Work within a Jupyter environment to build and test different versions of Neural Amp Modeler, combining neural building blocks into high-performance audio models. After completing the workshop, you gain access to the codebase and environment to continue experimenting on your own machine, whether on NVIDIA, AMD, or Apple silicon (M-series) Mac platforms. This 2-hour 53-minute session, presented by Alexander Talashov and Alexander Prokopchuk, offers an exclusive preview of GPU-powered neural building blocks that unlock new possibilities for real-time, scalable, low-latency audio processing.
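To give a flavor of the "neural building blocks" involved: Neural Amp Modeler's core architecture is based on stacks of dilated causal convolutions (WaveNet-style), which keep processing causal so no future samples are needed, a prerequisite for real-time use. The sketch below is a toy, NumPy-only illustration of that idea, not code from the workshop or the NAM codebase; the function names, layer sizes, and weights are invented for demonstration.

```python
import numpy as np

def causal_dilated_conv(x, weights, dilation):
    """Causal dilated 1-D convolution over a mono signal.

    x: (n_samples,) input audio; weights: (kernel_size,) filter taps;
    dilation: spacing between taps. Each output sample depends only on
    the current and past input samples, which is what makes the block
    usable for low-latency, real-time processing.
    """
    k = len(weights)
    pad = (k - 1) * dilation                # left-pad so output stays causal
    xp = np.concatenate([np.zeros(pad), x])
    y = np.zeros_like(x)
    for i, w in enumerate(weights):
        y += w * xp[i * dilation : i * dilation + len(x)]
    return y

def tiny_wavenet_block(x, layers):
    """Stack of dilated conv layers with tanh nonlinearities and a
    residual connection per layer -- a toy stand-in for the WaveNet-style
    building blocks Neural Amp Modeler models are assembled from."""
    for weights, dilation in layers:
        x = x + np.tanh(causal_dilated_conv(x, weights, dilation))
    return x

# Toy usage: dilations 1, 2, 4 with 3 taps give a receptive field of
# 1 + (3 - 1) * (1 + 2 + 4) = 15 samples.
rng = np.random.default_rng(0)
signal = rng.standard_normal(64)
layers = [(np.array([0.5, 0.3, 0.2]), d) for d in (1, 2, 4)]
out = tiny_wavenet_block(signal, layers)
```

Because every layer is causal, perturbing a future input sample cannot change earlier output samples; the GPU-porting work discussed in the workshop is about executing many such layers in parallel within a real-time audio buffer deadline.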
Syllabus
Workshop: GPU-Powered Neural Audio - High-Performance Inference for Real-Time Sound Processing - ADC
Taught by
ADC - Audio Developer Conference