Evaluating and Inducing Dialectal Robustness in Large Language Models
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
This seminar talk by Niyati Bafna of the Center for Language & Speech Processing (CLSP) at Johns Hopkins University examines how large language models perform across dialects and language varieties. Learn about the challenges these models face on low-resource dialects despite their strong performance on high-resource languages. Discover a framework, built on artificially generated dialects, for characterizing performance degradation as a function of linguistic distance between related languages. The presentation introduces DialUp, a method for making pretrained machine translation models more robust across dialect continua, including previously unseen dialects. Examine the factors that influence the success of this approach and explore future research directions in dialectal robustness for language models.
Syllabus
Evaluating and Inducing Dialectal Robustness in Large Language Models
Taught by
Center for Language & Speech Processing (CLSP), JHU