Overview
Explore groundbreaking research in brain-computer interface technology through this 24-minute webinar presented by Kaylo Littlejohn, a PhD student at UC Berkeley working on the BRAVO clinical trial.

Discover how high-density surface recordings of the speech cortex enable real-time decoding across three complementary speech-related output modalities: text, speech audio, and facial-avatar animation. Learn about the development and evaluation of deep-learning models using neural data collected from a clinical trial participant with severe limb and vocal paralysis attempting to silently speak sentences.

Examine the impressive results achieved, including accurate large-vocabulary text decoding at 78 words per minute with a 25% word error rate, intelligible speech synthesis personalized to the participant's pre-injury voice, and control of virtual orofacial movements for both speech and non-speech communicative gestures. Understand how these multimodal decoders reached high performance with less than two weeks of training, demonstrating substantial promise for restoring full, embodied communication to people living with severe paralysis.

Gain insights into cutting-edge assistive communication device development and real-time speech synthesis applications in neuroprosthetics. The webinar offers PACE credits for registered Labroots members and provides valuable knowledge for professionals interested in brain-computer interfaces, speech technology, and assistive devices for people with paralysis.
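To make the multimodal design concrete, the sketch below shows one way a shared neural-feature encoder could feed three separate output heads (text, speech-audio features, avatar articulator targets). This is a minimal illustration of the architecture pattern only; the layer sizes, weight initialization, and names are hypothetical and are not taken from the BRAVO trial's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

class MultimodalDecoder:
    """Toy shared-encoder, three-head decoder (illustrative, untrained)."""

    def __init__(self, n_channels=253, hidden=64, vocab=1024,
                 n_audio_feats=80, n_articulators=30):
        # Random weights stand in for trained deep-learning models.
        self.W_shared = rng.standard_normal((n_channels, hidden)) * 0.1
        self.W_text = rng.standard_normal((hidden, vocab)) * 0.1
        self.W_audio = rng.standard_normal((hidden, n_audio_feats)) * 0.1
        self.W_avatar = rng.standard_normal((hidden, n_articulators)) * 0.1

    def decode(self, ecog_window):
        # ecog_window: (time, channels) array of cortical surface features.
        h = np.tanh(ecog_window @ self.W_shared)   # shared representation
        return {
            "text_logits": h @ self.W_text,        # per-step vocabulary scores
            "audio_feats": h @ self.W_audio,       # e.g. spectrogram-like frames
            "avatar_targets": h @ self.W_avatar,   # orofacial movement parameters
        }

decoder = MultimodalDecoder()
out = decoder.decode(rng.standard_normal((50, 253)))
print({k: v.shape for k, v in out.items()})
```

The design choice illustrated here is that all three modalities are decoded from one shared feature stream, so the heads stay temporally synchronized with the participant's attempted speech.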
Syllabus
A Multimodal Neuroprosthesis for Speech Decoding and Avatar Control
Taught by
Labroots