
Logically Securing the Illogically Logical Use of Large Language Models

Linux Foundation via YouTube

Overview

Explore the critical intersection of security and Large Language Models (LLMs) in this 43-minute conference talk presented by Sarah Evans of Dell Technologies and Jay White of Microsoft at a Linux Foundation event. Delve into the potential security risks associated with emerging technologies like LLMs, focusing on a specific scenario: downloading a model from Hugging Face and applying it to internal datasets.

Gain insights into applying established risk management frameworks, such as NIST 800-53 (rev. 5) and the emerging NIST AI RMF 1.0, to LLM development and adoption. Learn about key risk control families, including access control, incident response, configuration management, and supply chain risk management.

Discover how to bridge the gap between traditional security fundamentals and LLM development, enabling more secure design and efficient enterprise implementation. Walk away with practical knowledge of preemptive risk management measures that can be applied directly to LLM projects, ensuring a more secure and robust development process.
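The talk's motivating scenario, pulling a model from a public hub and running it against internal data, maps directly to the supply chain risk management control family mentioned above. One common preemptive control is pinning and verifying an artifact's checksum before use. The sketch below is illustrative only and is not taken from the talk; the function name and digest values are assumptions, and a real workflow would pin the digest published by the model provider (for example, on the model's Hugging Face page).

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Check a downloaded model artifact against a pinned SHA-256 digest.

    Returns True only if the file's digest matches the expected value,
    so an unexpected or tampered artifact is rejected before loading.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in chunks so large model weight files don't exhaust memory.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

In practice this gate sits between the download step and the load step: if `verify_artifact` returns `False`, the pipeline stops and raises an incident rather than deserializing untrusted weights.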

Syllabus

Logically Securing the Illogically Logical Use of Large Language Models - Sarah Evans & Jay White

Taught by

Linux Foundation
