From the Lab to the Edge: Post-Training Compression for Deep Neural Networks
EDGE AI FOUNDATION via YouTube
Overview
Watch a 58-minute technical talk exploring how Datakalab tackles the challenge of deploying deep neural networks (DNNs) efficiently on edge devices. Learn about a two-step approach that provides framework-agnostic inference support across diverse hardware platforms and applies advanced compression techniques. Discover how post-training quantization, pruning, and context adaptation achieve significant model optimization while keeping accuracy within 1% of the original performance. Presented by Edouard Yvinec, a PhD student at Sorbonne Université, the talk offers practical insights into moving DNNs from development frameworks like TensorFlow and PyTorch to resource-constrained edge devices without intensive cloud computing or model retraining.
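To give a flavor of the post-training quantization the talk covers, the sketch below rounds float weights to 8-bit integers and back using a single per-tensor scale. This is a minimal, generic illustration in NumPy, not the specific method presented in the talk; the function name, bit width, and symmetric scaling scheme are assumptions for the example.

```python
import numpy as np

def quantize_dequantize(weights, num_bits=8):
    """Uniform symmetric post-training quantization (illustrative sketch).

    Maps float weights to num_bits signed integers and back without any
    retraining -- the core idea behind post-training quantization. The
    talk's actual technique may differ in scaling granularity and calibration.
    """
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for int8
    scale = np.max(np.abs(weights)) / qmax    # per-tensor scale factor
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q.astype(np.float32) * scale       # dequantized approximation

# Quantization error is bounded by half the scale step per weight.
np.random.seed(0)
w = np.random.randn(4, 4).astype(np.float32)
w_hat = quantize_dequantize(w)
max_err = np.max(np.abs(w - w_hat))
```

Because each weight moves by at most half a quantization step, networks often lose little accuracy from int8 conversion, which is why post-training quantization can stay within about 1% of the original model's performance without retraining.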
Syllabus
tinyML Talks: From the lab to the edge: Post-Training Compression
Taught by
EDGE AI FOUNDATION