Overview
Explore how embedding models fundamentally misunderstand language and contribute to AI hallucinations in this 29-minute conference talk from Conf42 ML 2025. Begin with foundational concepts of embeddings and their evaluation methods, then dive into the embedding process and key underlying concepts. Examine practical use cases for embeddings and learn techniques for comparing different embedding approaches, and work through hands-on examples using OpenAI's embedding models to understand their practical applications.

Investigate the challenges and hallucination phenomena that arise when embedding models process language, uncovering the systematic ways these models can misinterpret meaning and context. Discover fine-tuning techniques for embedding models, including step-by-step approaches to improve model performance and reduce misunderstandings.

Gain insights into the limitations of current embedding technologies and understand why these models struggle with accurate language representation, ultimately learning how to identify and mitigate embedding-related hallucinations in AI systems.
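The talk covers comparing embeddings; the standard technique is cosine similarity between embedding vectors. As a minimal sketch (the toy 3-dimensional vectors below are invented for illustration — real models such as OpenAI's `text-embedding-3-small` return vectors with 1536 dimensions, but the comparison step is the same):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional "embeddings" for three words.
cat = [0.9, 0.1, 0.0]
kitten = [0.8, 0.2, 0.1]
car = [0.1, 0.0, 0.9]

print(cosine_similarity(cat, kitten))  # near 1.0: vectors point the same way
print(cosine_similarity(cat, car))     # much lower: dissimilar vectors
```

Note that cosine similarity only measures geometric closeness in the embedding space; as the talk's hallucination discussion suggests, closeness in that space does not guarantee the model has captured the intended meaning.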
Syllabus
00:00 Introduction to Embeddings
00:16 Understanding Embeddings
01:20 Evaluating Embeddings
01:48 The Embedding Process
03:53 Key Concepts in Embeddings
06:27 Use Cases for Embeddings
07:51 Comparing Embeddings
08:42 Practical Examples with OpenAI
13:27 Challenges and Hallucinations in Embeddings
18:04 Fine-Tuning Embedding Models
24:55 Steps for Fine-Tuning
27:17 Key Takeaways and Conclusion
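The fine-tuning sections (18:04 and 24:55) cover adapting embedding models to a domain. Real fine-tuning updates the model's weights with a contrastive-style loss so that related text pairs embed closer together; the toy sketch below illustrates only that core idea, by directly nudging two vectors toward each other (a simplification invented here, not the talk's procedure):

```python
def nudge_closer(anchor, positive, lr=0.1):
    """One toy contrastive step: move two embedding vectors toward each other.

    In real fine-tuning the loss gradient updates model parameters, not the
    vectors themselves; this just shows the geometric effect on a pair.
    """
    new_anchor = [a + lr * (p - a) for a, p in zip(anchor, positive)]
    new_positive = [p + lr * (a - p) for a, p in zip(anchor, positive)]
    return new_anchor, new_positive

# Two initially orthogonal 2-D vectors standing in for a related text pair.
a, p = [1.0, 0.0], [0.0, 1.0]
for _ in range(20):
    a, p = nudge_closer(a, p)
# After repeated steps the pair converges toward a shared point,
# i.e. the "related" texts now embed near each other.
```

The same geometry explains why fine-tuning can reduce embedding-related misunderstandings: pairs the base model placed far apart can be pulled together using domain-specific training pairs.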
Taught by
Conf42