Speech Processing for Low Resource Scenarios with SpeechBrain - Day 8 Afternoon
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Attend this 2-hour afternoon tutorial session continuing Dr. Salima Mdhaffar's exploration of speech processing techniques for low-resource scenarios, featuring hands-on laboratory work with SpeechBrain. Engage in practical exercises that build on the morning session's theoretical foundations, focusing on implementing neural end-to-end systems for automatic speech recognition, semantic information extraction, and self-supervised learning approaches designed for languages and domains with limited data. Learn from Dr. Mdhaffar's research experience spanning named entity recognition, spoken language understanding, federated learning, and privacy-preserving ASR systems, developed through the European SELMA H2020 project, the ANR DeepPrivacy project, and collaborations with industry partners including Airbus, Elyadata, and Sonos. Gain insights into methodologies for speech translation and semantic information extraction while working directly with SpeechBrain toolkit implementations. Benefit from the expertise of a senior researcher at Avignon University's LIA laboratory who has contributed to over 35 international conference publications and actively supervises PhD students in low-resource speech processing research.
Syllabus
[camera] Day 8 afternoon - JSALT 2025 - Mdhaffar: Speech Processing for Low Resource scenarios II.
Taught by
Center for Language & Speech Processing (CLSP), JHU