MLOps: OpenVINO Toolkit - Compress and Quantize YOLO Model
The Machine Learning Engineer via YouTube
Overview
Learn how to convert a YOLOv10 model to OpenVINO IR format and quantize it to int8 using NNCF, OpenVINO's Neural Network Compression Framework. Explore the process of model compression and quantization for improved performance and efficiency, and follow along with practical examples of CPU inference using both YOLOv10 and YOLOv8. Gain hands-on experience with MLOps techniques for optimizing deep learning models, with a specific focus on YOLO architectures. Access the accompanying notebook on GitHub to practice and implement the demonstrated techniques in your own projects.
Syllabus
MLOps: OpenVINO Toolkit Compress and Quantize YOLO Model #datascience #machinelearning
Taught by
The Machine Learning Engineer