Robust Representation of Attended Speech in Human Brain with Implications for ASR
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore how the human brain processes and represents attended speech in this 75-minute conference talk, which examines the neural mechanisms underlying selective auditory attention and their potential applications to automatic speech recognition (ASR) systems. Discover the robust computational principles that let the brain focus on a specific speaker in complex acoustic environments, learn about the neuroscience research methods used to decode speech representations from brain signals, and see how these biological insights can inform the design of more effective ASR technologies. The speaker presents findings on how attended speech is encoded in neural activity and discusses the implications for brain-inspired approaches to speech recognition in noisy environments.
Syllabus
Nima Mesgarani: Robust Representation of Attended Speech in Human Brain with Implications for ASR
Taught by
Center for Language & Speech Processing (CLSP), JHU