Signal Processing Models for Spatial Hearing
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore signal processing models for spatial hearing in this 90-minute lecture from the 2000 CLSP Summer Workshop at Johns Hopkins University. Delve into the computational and mathematical frameworks used to explain how humans localize sound sources. Learn about the signal processing techniques that model binaural hearing, including interaural time differences (ITD), interaural level differences (ILD), and head-related transfer functions (HRTF). Examine how these models can be applied to audio engineering, virtual reality systems, and hearing aid technology. Gain insights into the intersection of auditory neuroscience and digital signal processing as presented by Richard Duda during this workshop session at the Center for Language & Speech Processing.
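To make the interaural time difference concrete, here is a minimal sketch of one classic model, Woodworth's spherical-head approximation. This is illustrative only, not code from the lecture; the head radius and speed of sound are assumed typical values.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius=0.0875, speed_of_sound=343.0):
    """Woodworth's spherical-head approximation of the interaural
    time difference (ITD) in seconds.

    ITD ~= (a / c) * (theta + sin(theta)), with azimuth theta in radians
    measured from straight ahead. head_radius and speed_of_sound are
    assumed typical values (8.75 cm, 343 m/s), not parameters from the talk.
    """
    theta = np.radians(azimuth_deg)
    return (head_radius / speed_of_sound) * (theta + np.sin(theta))

# A source directly ahead produces no delay; a source at 90 degrees
# yields the maximum ITD, roughly 0.65 ms for an average head.
print(woodworth_itd(0.0))
print(woodworth_itd(90.0))
```

The roughly 0.65 ms maximum matches the commonly cited upper bound on human ITDs, which is one reason simple spherical-head models remain a useful baseline before moving to measured HRTFs.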
Syllabus
Richard Duda: Signal Processing Models for Spatial Hearing
Taught by
Center for Language & Speech Processing (CLSP), JHU