Speech Processing for Low Resource Scenarios I - Day 8 Morning
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore speech processing techniques designed for low-resource language scenarios in this tutorial from JSALT 2025. Learn about semantic tasks on speech that can be applied effectively when linguistic resources are limited, and discover text-to-speech (TTS) synthesis methods tailored for low-resource languages. Gain insights from Dr. Salima Mdhaffar, a senior researcher at Avignon University's LIA laboratory, who brings extensive expertise in automatic speech recognition, semantic information extraction from speech, speech translation, and self-supervised learning. Understand the challenges of, and solutions for, developing speech processing systems when training data is scarce, including approaches to named entity recognition, spoken language understanding, and neural end-to-end systems. Examine practical applications and methodologies developed through research projects on federated learning and privacy-preserving ASR, as well as industrial collaborations. This tutorial provides essential knowledge for researchers and practitioners working on speech technologies for underrepresented languages and resource-constrained environments.
Syllabus
[camera] Day 8 morning - JSALT 2025 - Mdhaffar: Speech Processing for Low Resource scenarios I.
Taught by
Center for Language & Speech Processing (CLSP), JHU