Overview
In this 59-minute talk from the Simons Institute, Carnegie Mellon University researcher Aditi Raghunathan explores the critical safety challenges that arise when AI systems encounter out-of-distribution scenarios. Examine how machine learning models can behave unpredictably when faced with inputs that differ from their training data, and understand the implications for AI safety guarantees. Learn about cutting-edge research approaches to creating more robust AI systems that maintain reliable performance even in unfamiliar situations. The presentation is part of the Safety-Guaranteed LLMs series and offers valuable insights for researchers, practitioners, and anyone concerned with the responsible development of artificial intelligence.
Syllabus
Out of Distribution, Out of Control? Understanding Safety Challenges in AI
Taught by
Simons Institute