Overview
Explore advanced AI model optimization techniques in this 51-minute episode featuring Unsloth CEO Daniel Han, who discusses breakthrough approaches to fine-tuning and local AI deployment. Discover how Unsloth achieves 2-3× faster training speeds and significant memory savings through mathematical optimizations rather than specialized hardware, while maintaining model intelligence through dynamic quantization methods.

Learn about the critical behind-the-scenes work of fixing broken chat templates and incorrect tokens in models released by major AI labs, and understand why this quality assurance is essential for practical AI applications. Examine the rapid evolution of local, small, and fine-tuned models and their increasing competitiveness with frontier AI systems, including insights into how developers can achieve meaningful results with minimal training examples.

Gain practical knowledge about implementing Unsloth's training tools and reinforcement learning notebooks through Docker containers, making advanced AI techniques accessible for immediate experimentation. Understand the broader implications of dynamic quantization becoming mainstream and how these optimization techniques are reshaping the open-source AI ecosystem for more efficient and practical AI model deployment.
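To make the quantization idea concrete: the episode discusses how quantization trades precision for memory. The sketch below is a minimal, illustrative symmetric int8 quantize/dequantize round trip in plain NumPy; it is not Unsloth's actual dynamic quantization method, and all function names here are made up for illustration.

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor quantization: one float scale maps the
    # full weight range onto the signed 8-bit range [-127, 127].
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from int8 values.
    return q.astype(np.float32) * scale

weights = np.array([0.5, -1.0, 0.25, 0.75], dtype=np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# `restored` matches `weights` to within one quantization step (= scale),
# while `q` takes 1 byte per value instead of 4.
```

Dynamic schemes like those discussed in the episode go further by choosing precision per layer or per block rather than one scale for the whole tensor, which is how they preserve model quality at low bit widths.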
Syllabus
Faster Fine-Tuning & Smarter Local Models feat. Dan from Unsloth | Docker’s AI Guide to the Galaxy
Taught by
Docker