Overview
Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
Learn to secure Large Language Models (LLMs) in production environments through a comprehensive shift-left security approach in this 36-minute conference talk from DevConf.IN 2026. Discover the unique security challenges that LLMs introduce beyond traditional SDLC practices, including model tampering, prompt-based attacks, data leakage, hallucinations, and unsecured inference pipelines. Explore a holistic end-to-end security strategy that begins with securing the model itself through signing and verification using Sigstore and Cosign to ensure integrity and provenance, followed by vulnerability scanning with NVIDIA Garak. Master the implementation of guardrails around model interactions including moderation filters, PII detection, hallucination checks, and pre/post prompt screening to prevent unsafe prompts, malicious injections, and harmful outputs. Understand how to secure inference traffic using Envoy as a controlled API gateway for authentication, rate-limiting, and external threat protection, while leveraging Istio to add zero-trust layers within clusters through secure service-to-service communication and enhanced observability. Gain practical knowledge of LLM red teaming techniques that introduce structured adversarial testing with attack corpora including prompt injections, jailbreak attempts, and data-exfiltration prompts for continuous regression testing. Walk away with a clear roadmap for deploying and operating LLMs safely, reliably, and at scale in real-world production environments.
Syllabus
Shift-Left for LLMs: Securing the AI Model Supply Chain- DevConf.IN 2026
Taught by
DevConf