Model Evaluation for Custom Datasets With Open Models - Multi-Model Comparison With Streamlit

Linux Foundation via YouTube

Overview

Learn to evaluate open-source embedding models for specific languages and custom datasets in a conference talk that addresses the significant gap between multilingual models' English benchmark scores and their actual performance in other languages. Discover how to build a Streamlit-based evaluation platform for comparing different types of embedding models, including language-specific models like the Japanese Ruri series, multilingual alternatives such as multilingual-E5 and BGE-M3, and general-purpose models, across real-world tasks such as semantic search. Explore practical methods for assessing model performance on custom data that reflects actual use cases, rather than relying solely on standard benchmarks. Gain insight into building evaluation frameworks that can be adapted to any language or cultural context using entirely open-source tools, with all code and methodology released as open source for the global developer community.
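As a rough illustration of the kind of comparison such a platform performs, the sketch below scores an embedding function on a toy semantic-search dataset using recall@k. The corpus, queries, and the character-frequency "embedding" are all invented stand-ins for illustration; the talk's actual app would call real models (e.g. Ruri or multilingual-E5) and render the scores in Streamlit.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recall_at_k(embed, queries, corpus, relevant, k=3):
    # Fraction of queries whose relevant document ranks in the top-k
    # by cosine similarity -- a common semantic-search metric.
    doc_vecs = {doc: embed(doc) for doc in corpus}
    hits = 0
    for q in queries:
        qv = embed(q)
        ranked = sorted(corpus, key=lambda d: cosine(qv, doc_vecs[d]),
                        reverse=True)
        if relevant[q] in ranked[:k]:
            hits += 1
    return hits / len(queries)

# Toy stand-in for a real embedding model: character counts over a
# tiny alphabet. A real evaluation would swap in model.encode(text).
def char_embed(text, alphabet="abc"):
    return [text.count(ch) for ch in alphabet]

corpus = ["aaa", "bbb", "abc"]
queries = ["aa", "bb"]
relevant = {"aa": "aaa", "bb": "bbb"}

score = recall_at_k(char_embed, queries, corpus, relevant, k=1)
```

Running the same `recall_at_k` loop once per candidate model and showing the resulting score table (for example with Streamlit's `st.dataframe`) yields the side-by-side, multi-model comparison on your own data that the talk describes.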

Syllabus

Model Evaluation for Custom Datasets With Open Models: Multi-Model Comparison With Streamlit - Sho Tanaka

Taught by

Linux Foundation

