MLOps: OpenVino Toolkit - Compress and Quantize YOLO Model
The Machine Learning Engineer via YouTube
Overview
Learn how to convert a YOLOv10 model to OpenVINO IR format and quantize it to INT8 using NNCF, OpenVINO's Neural Network Compression Framework. Explore the model compression and quantization workflow for improved performance and efficiency. Follow along with practical examples of CPU inference using both YOLOv10 and YOLOv8. Gain hands-on experience with MLOps techniques for optimizing deep learning models, focusing on YOLO architectures. Access the accompanying notebook on GitHub to practice and implement the demonstrated techniques in your own projects.
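The conversion-and-quantization workflow described above can be sketched as follows. This is a minimal illustration, not the course notebook: the ONNX path, calibration images, and output path are placeholders, and the preprocessing assumes a standard 640x640 RGB input scaled to [0, 1]; check the official OpenVINO and NNCF documentation for the exact APIs in your version.

```python
# Sketch: YOLO (ONNX export) -> OpenVINO IR -> INT8 via NNCF, then CPU inference.
import numpy as np


def preprocess(image: np.ndarray, size: int = 640) -> np.ndarray:
    """Scale HWC uint8 pixels to [0, 1] float32 and reshape to NCHW with batch dim."""
    blob = image.astype(np.float32) / 255.0
    blob = blob.transpose(2, 0, 1)[None, ...]  # HWC -> CHW, then add batch axis
    return blob


def quantize_to_int8(onnx_path: str, calib_images, out_path: str):
    """Convert an ONNX YOLO model to OpenVINO IR and quantize it to INT8 with NNCF."""
    import openvino as ov
    import nncf

    ov_model = ov.convert_model(onnx_path)            # ONNX -> in-memory IR
    calib = nncf.Dataset(calib_images, preprocess)    # calibration samples + transform
    int8_model = nncf.quantize(ov_model, calib)       # post-training INT8 quantization
    ov.save_model(int8_model, out_path)               # writes .xml / .bin IR files


def infer_cpu(ir_path: str, image: np.ndarray):
    """Compile the saved IR for CPU and run a single inference."""
    import openvino as ov

    compiled = ov.Core().compile_model(ir_path, "CPU")
    return compiled(preprocess(image))
```

A typical flow is to export the YOLO model to ONNX first (for example with Ultralytics' `model.export(format="onnx")`), call `quantize_to_int8` with a few hundred representative images, and then compare FP32 and INT8 latency through `infer_cpu`.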
Syllabus
MLOps: OpenVino Toolkit Compress and Quantize YOLO Model #datascience #machinelearning
Taught by
The Machine Learning Engineer