MLOps: Logging and Loading Microsoft Phi3 Mini 128k in GGUF with MLflow
The Machine Learning Engineer via YouTube
Overview
Learn how to log and load a quantized llama.cpp model in MLflow with this 18-minute tutorial video. Create a Python wrapper class, log it to MLflow, and load it back for inference, using a Microsoft Phi-3 Mini 128k model quantized to int8 with llama.cpp in GGUF format. Follow along with the provided notebook to gain hands-on experience implementing MLOps practices in machine learning and data science projects.
Syllabus
MLOps MLflow: Log and Load Microsoft Phi3 Mini 128k in GGUF in MLflow
Taught by
The Machine Learning Engineer