
Knowledge Distillation: Build Smaller, Faster AI Models

AWS Events via YouTube

Overview

This 36-minute AWS Events talk explores how knowledge distillation transfers capabilities from large language models to smaller, faster models while maintaining performance. Discover how organizations can achieve significant gains in throughput and cost efficiency through distillation. Learn how to implement distillation with Amazon Bedrock or build custom solutions on Amazon SageMaker. Watch Julien Simon demonstrate how Arcee AI uses distillation to develop industry-leading small language models (SLMs) based on open architectures. Get an introduction to the open-source DistillKit library and see demonstrations of newly distilled SLMs from Arcee AI. Featuring insights from AWS experts Laurens van der Maas, Aleksandra Dokic, and Jean Launay Orlanda, this presentation provides practical knowledge for optimizing AI model deployment.
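
As background for the talk: knowledge distillation typically trains the small "student" model on a blend of the teacher's softened output distribution and the ground-truth labels. The snippet below is a minimal sketch of that standard loss in plain PyTorch; it is not code from the talk, Amazon Bedrock, or the DistillKit library, and the temperature and alpha values are illustrative.

# Minimal sketch of the classic (Hinton-style) distillation loss.
# Not taken from the talk or DistillKit; hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soften both distributions with the temperature, then match the
    # student's log-probabilities to the teacher's probabilities via KL.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean")
    kd = kd * (temperature ** 2)  # rescale so gradients match the CE term
    # Standard cross-entropy against the hard ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # Blend the two objectives; alpha weights the teacher's guidance.
    return alpha * kd + (1 - alpha) * ce

In practice, the teacher's logits come from a frozen large model at each training step, while only the student's parameters are updated against this combined loss.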

Syllabus

Knowledge Distillation: Build Smaller, Faster AI Models | AWS Events

Taught by

AWS Events

