Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

From the Lab to the Edge: Post-Training Compression for Deep Neural Networks

EDGE AI FOUNDATION via YouTube

Overview

Watch a 58-minute technical talk exploring how Datakalab tackles the challenge of deploying deep neural networks (DNNs) efficiently on edge devices. Learn about a two-step approach that enables framework-agnostic inference across diverse hardware platforms and applies advanced compression techniques. Discover how post-training quantization, pruning, and context-adaptation methods achieve significant model compression while keeping accuracy within 1% of the original performance. Presented by Edouard Yvinec, a PhD student at Sorbonne Université, the talk offers practical insights into moving DNNs from development frameworks such as TensorFlow and PyTorch to resource-constrained edge devices without intensive cloud computing or model retraining.
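To make the idea of post-training quantization concrete, here is a minimal, hedged sketch of symmetric per-tensor weight quantization with min-max calibration. This is a generic illustration of the technique, not Datakalab's actual method; the function names and the 8-bit setting are assumptions for the example.

```python
import numpy as np

def quantize_tensor(w, num_bits=8):
    """Symmetric per-tensor post-training quantization (min-max calibration).

    Maps the largest weight magnitude to the top of the signed integer
    range, so no retraining is needed -- only the trained weights.
    """
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for int8
    scale = np.max(np.abs(w)) / qmax        # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize_tensor(q, scale):
    """Recover an approximate float tensor to check accuracy degradation."""
    return q.astype(np.float32) * scale

# Example: quantize random "weights" and measure the reconstruction error.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_tensor(w)
w_hat = dequantize_tensor(q, scale)
rel_err = np.abs(w - w_hat).max() / np.abs(w).max()
print(f"max relative error: {rel_err:.4f}")
```

With 8 bits, the worst-case rounding error per weight is half a quantization step, which is why this kind of post-training scheme can often stay close to the original model's accuracy; the talk's methods refine this further (e.g. via pruning and context adaptation) to reach the reported within-1% gap.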

Syllabus

tinyML Talks: From the lab to the edge: Post-Training Compression

Taught by

EDGE AI FOUNDATION

