Securing Models - Safeguarding ML Systems in the GenAI Era
MLOps World: Machine Learning in Production via YouTube
Overview
Explore the critical security challenges facing machine learning and generative AI systems in this 28-minute conference talk from MLOps World. Discover why traditional security frameworks are inadequate for protecting ML lifecycles and learn practical strategies for safely building and deploying open-source large language models at scale. Examine the unique security threats posed by AI models, particularly those from community-driven marketplaces like Hugging Face and Ollama that often lack trusted authorship and may contain hidden vulnerabilities. Understand why conventional DevSecOps methods fall short in AI workflows and gain insights into designing safe, scalable practices for AI model governance and validation. Learn to identify and mitigate risks when working with open-source LLMs while building effective guardrails around AI development pipelines to protect your organization's ML systems in the rapidly evolving generative AI landscape.
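As a hedged illustration of the kind of model-validation guardrail the talk advocates (not code from the talk itself), one basic control before loading an open-source model of unknown authorship is verifying the artifact's checksum against a digest from a trusted manifest. The file name and digest below are hypothetical:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks
    so large model artifacts are never fully loaded into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected_sha256: str) -> bool:
    """Return True only if the artifact matches the trusted digest;
    a pipeline would refuse to load the model on a mismatch."""
    return sha256_of(path) == expected_sha256.lower()
```

Checksum pinning alone does not prove trusted authorship (that requires signing), but it does ensure the artifact your pipeline loads is byte-for-byte the one that was reviewed.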
Syllabus
Securing Models: Safeguarding ML Systems in the GenAI Era | Hudson Buzby, JFrog
Taught by
MLOps World: Machine Learning in Production