
YouTube

Securing Models - Safeguarding ML Systems in the GenAI Era

MLOps World: Machine Learning in Production via YouTube

Overview

Explore the critical security challenges facing machine learning and generative AI systems in this 28-minute conference talk from MLOps World. Discover why traditional security frameworks are inadequate for protecting ML lifecycles and learn practical strategies for safely building and deploying open-source large language models at scale. Examine the unique security threats posed by AI models, particularly those from community-driven marketplaces like Hugging Face and Ollama that often lack trusted authorship and may contain hidden vulnerabilities. Understand why conventional DevSecOps methods fall short in AI workflows and gain insights into designing safe, scalable practices for AI model governance and validation. Learn to identify and mitigate risks when working with open-source LLMs while building effective guardrails around AI development pipelines to protect your organization's ML systems in the rapidly evolving generative AI landscape.
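The talk's warning about models from community marketplaces can be made concrete: many checkpoints are distributed as Python pickle files, which can execute arbitrary code the moment they are loaded. A minimal sketch of the kind of static scan a model-validation pipeline might run, using only the standard library — the `DANGEROUS_MODULES` denylist and `scan_pickle_bytes` helper are illustrative assumptions, not tooling from the talk; a production guardrail would use a dedicated scanner and prefer non-executable formats such as safetensors.

```python
import pickle
import pickletools

# Illustrative denylist: modules whose appearance inside a pickle stream
# suggests code execution on load. Not exhaustive.
DANGEROUS_MODULES = {"os", "posix", "nt", "subprocess", "sys", "builtins", "socket"}

def scan_pickle_bytes(data: bytes) -> list:
    """Statically flag suspicious global references in a pickle stream,
    without ever unpickling it."""
    findings = []
    strings = []  # recent string constants; STACK_GLOBAL resolves module/name from these
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("GLOBAL", "INST") and isinstance(arg, str):
            # GLOBAL/INST carry "module name" as a single space-joined string.
            module = arg.split()[0]
            if module.split(".")[0] in DANGEROUS_MODULES:
                findings.append(arg.replace(" ", "."))
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # Protocol 4+ pushes module and name as strings first.
            module, name = strings[-2], strings[-1]
            if module.split(".")[0] in DANGEROUS_MODULES:
                findings.append(f"{module}.{name}")
        if isinstance(arg, str):
            strings.append(arg)
    return findings

# Stand-in for a booby-trapped checkpoint: unpickling it would run a shell command.
class MaliciousCheckpoint:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

tainted = pickle.dumps(MaliciousCheckpoint())
clean = pickle.dumps({"weights": [0.1, 0.2, 0.3]})

print(scan_pickle_bytes(tainted))  # flags the system() reference
print(scan_pickle_bytes(clean))    # []
```

Scanning opcodes rather than calling `pickle.load` is the key design choice: the payload only fires during deserialization, so inspection must happen before any load. This is also why the talk's broader point holds — format-level validation belongs in the pipeline, not in the hands of whoever downloads the model.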

Syllabus

Securing Models: Safeguarding ML Systems in the GenAI Era | Hudson Buzby, JFrog

Taught by

MLOps World: Machine Learning in Production
