Speech and Audio Processing in Non-Invasive Brain-Computer Interfaces at Meta
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore the potential of non-invasive neural interfaces to transform human-computer interaction in this 43-minute talk by Michael Mandel of Reality Labs at Meta. Delve into the development of an interface for controlling augmented reality devices using electromyographic (EMG) signals captured at the wrist. Discover how speech and audio technologies are uniquely suited to unlocking the full potential of these signals and interactions. Learn the neuroscientific background needed to understand these signals, and examine automatic speech recognition-inspired interfaces for generating text and beamforming-inspired interfaces for identifying individual neurons. Gain insight into how these technologies connect with egocentric machine intelligence tasks that could run on augmented reality devices, and understand the potential for effortless, joyful interfaces that give users low-friction, information-rich, always-available input.
Syllabus
Speech and Audio Processing in Non-Invasive Brain-Computer Interfaces at Meta [Michael Mandel]
Taught by
Center for Language & Speech Processing (CLSP), JHU