

GPU Based Audio Processing Platform with AI Audio Effects

ADC - Audio Developer Conference via YouTube

Overview

This conference talk explores the potential of GPUs for real-time audio processing in live sound engineering, presented by Simon Schneider at ADCxGather 2024. Discover how GPU architectures can effectively parallelize audio effects while retaining scheduling flexibility comparable to CPUs. Learn about Schneider's implementation of an embedded GPU-based audio processing framework on an Nvidia Jetson platform that processes audio within periods as small as 32 frames (0.667 ms). The presentation examines how the CUDA graph API improves stability and performance over previous methods, and tests the framework on large-scale applications such as a 64-channel mixing console. Although the framework achieves a 99% success rate when processing complex signal graphs, occasional GPU stalls prevent it from being classified as fully real-time capable. Schneider, a musician and software engineer from Winterthur who combines his passion for music with technical expertise, discusses potential future improvements toward true real-time capability through CUDA scheduler optimization and audio driver enhancements.
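For context on the quoted latency figure: a buffer period is simply the buffer size in frames divided by the sample rate, and 32 frames works out to 0.667 ms assuming the standard 48 kHz sample rate (an assumption; the talk's exact audio configuration is not stated here). A minimal sketch:

```python
def period_ms(frames: int, sample_rate: int = 48_000) -> float:
    """Length of one audio processing period, in milliseconds."""
    return frames / sample_rate * 1_000

# Typical buffer sizes and their per-period processing deadlines at 48 kHz.
for frames in (32, 64, 128, 256):
    print(f"{frames:>3} frames -> {period_ms(frames):.3f} ms")
```

The 0.667 ms deadline at 32 frames illustrates why occasional GPU stalls matter: every period, all effect kernels must finish within that window or the audio stream underruns.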

Syllabus

GPU Based Audio Processing Platform with AI Audio Effects - Simon Schneider - ADCxGather 2024

Taught by

ADC - Audio Developer Conference

Reviews

