Multilingual Representation Distillation with Contrastive Learning
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore a 12-minute conference talk from the European Chapter of the Association for Computational Linguistics (EACL) 2023, presented by Weiting Tan from the Center for Language & Speech Processing (CLSP) at Johns Hopkins University. Dive into an approach that integrates contrastive learning into multilingual representation distillation for quality estimation of parallel sentences. Discover how this method improves the ability to find semantically similar sentences that can serve as translations across different languages. Learn about experimental results showing significant gains over previous sentence encoders such as LASER, LASER3, and LaBSE, particularly in low-resource language scenarios. Gain insights into applications of this technique in multilingual similarity search and corpus filtering, and understand its potential impact on cross-lingual information retrieval and matching.
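The core idea behind contrastive representation distillation can be sketched generically: given a batch of paired sentence embeddings (e.g., a teacher encoder's embedding of a source sentence and a student encoder's embedding of its translation), an InfoNCE-style loss pulls each pair together while pushing the student embedding away from the other in-batch candidates. The NumPy sketch below is a minimal illustration of that generic objective, not the paper's exact training setup; the function name, temperature value, and toy data are all assumptions for demonstration.

```python
import numpy as np

def info_nce_loss(student_emb, teacher_emb, temperature=0.05):
    """Generic InfoNCE sketch: each student embedding should be most
    similar to its paired teacher embedding among in-batch candidates.
    (Illustrative only; not the paper's exact objective.)"""
    # L2-normalize so dot products become cosine similarities
    s = student_emb / np.linalg.norm(student_emb, axis=1, keepdims=True)
    t = teacher_emb / np.linalg.norm(teacher_emb, axis=1, keepdims=True)
    logits = (s @ t.T) / temperature  # (batch, batch) similarity matrix
    # Softmax cross-entropy with the positives on the diagonal
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy batch: 3 "translation pairs" in a 4-dimensional embedding space
rng = np.random.default_rng(0)
teacher = rng.normal(size=(3, 4))
aligned = info_nce_loss(teacher, teacher)        # student matches teacher
mismatched = info_nce_loss(teacher[::-1], teacher)  # pairs scrambled
print(aligned < mismatched)  # aligned embeddings incur lower loss
```

Well-aligned student embeddings score a much lower loss than scrambled ones, which is what makes the same embeddings useful downstream for mining parallel sentences by nearest-neighbor similarity search.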
Syllabus
Multilingual Representation Distillation with Contrastive Learning - EACL 2023
Taught by
Center for Language & Speech Processing (CLSP), JHU