Speech Processing for Low Resource Scenarios I - Day 8 Morning
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore speech processing techniques designed for low-resource language scenarios in this tutorial from JSALT 2025. Learn about semantics-based speech tasks that can be applied effectively when linguistic resources are limited, and discover text-to-speech (TTS) synthesis methods tailored to low-resource languages. Gain insights from Dr. Salima Mdhaffar, a senior researcher at Avignon University's LIA laboratory, who brings extensive expertise in automatic speech recognition, semantic information extraction from speech, speech translation, and self-supervised learning. Understand the challenges of, and solutions for, developing speech processing systems when training data is scarce, including approaches to named entity recognition, spoken language understanding, and neural end-to-end systems. Examine practical applications and methodologies developed through various research projects, including work on federated learning, privacy-preserving ASR systems, and industrial collaborations. This tutorial provides essential knowledge for researchers and practitioners working on speech technologies for underrepresented languages and resource-constrained environments.
Syllabus
Day 8 morning - JSALT 2025 - Mdhaffar: Speech Processing for Low Resource Scenarios I.
Taught by
Center for Language & Speech Processing (CLSP), JHU