Explore the critical security challenges facing open-source Large Language Models in this 54-minute conference talk. Examine the vulnerabilities that arise from the open nature of these models, including data poisoning attacks, model inference threats, and supply chain compromises. Learn about the attack vectors that specifically target open-source LLMs, and understand why their accessibility, while beneficial for flexibility and innovation, creates distinct security risks. Discover the key security considerations that developers and users must address when working with open-source LLMs, along with mitigation strategies for building and deploying secure LLM applications. By the end of the talk, you will understand the emerging security landscape surrounding these models and have the knowledge needed to leverage open-source LLMs safely and effectively.