In this 56-minute talk from the Simons Institute, David Dalrymple of MIT discusses safety-guaranteed large language models (LLMs) and the concept of safeguarded AI workflows. The talk covers methodologies and frameworks for ensuring AI systems operate within safe parameters, with a particular focus on designing workflows that preserve safety guarantees while still leveraging the capabilities of LLMs.