
How Could We Design Aligned and Provably Safe AI?

Inside Livermore Lab via YouTube

Overview

Explore a thought-provoking seminar on designing aligned and provably safe AI systems, presented by Dr. Yoshua Bengio, a Turing Award winner and world-renowned AI expert. Delve into the challenges of evaluating risks in learned AI systems and discover a potential solution through run-time risk assessment. Examine the concept of bounding the probability of harm using Bayesian approaches and neural networks, while considering the importance of capturing epistemic uncertainty. Learn about the research program based on these ideas and the potential application of amortized inference with large neural networks for estimating required quantities. Gain valuable insights into the future of AI safety and alignment from one of the pioneers in deep learning.
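To make the core idea concrete: bounding the probability of harm under epistemic uncertainty can be caricatured by averaging over many plausible world models rather than trusting a single one. The sketch below is purely illustrative and not from the talk; the ensemble-of-theories setup, the sigmoid harm model, and the 95th-percentile bound are all hypothetical stand-ins for a genuine Bayesian posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: approximate a Bayesian posterior over "theories" of the world
# with an ensemble of sampled parameter vectors. Each theory assigns a
# probability of harm to a candidate action.
def harm_probability(theory_params, action):
    # Toy stand-in: harm probability as a sigmoid of a linear score.
    return 1.0 / (1.0 + np.exp(-theory_params @ action))

posterior_samples = rng.normal(size=(200, 3))   # 200 sampled theories
action = np.array([0.5, -1.0, 0.2])             # candidate action features

per_theory = harm_probability(posterior_samples, action)

mean_risk = per_theory.mean()                    # average-case risk
conservative_risk = np.quantile(per_theory, 0.95)  # pessimistic bound:
# even theories we are merely unsure about get a vote.

# A run-time guardrail rejects the action unless the conservative
# bound on harm is below a safety threshold.
SAFETY_THRESHOLD = 0.05
allowed = conservative_risk < SAFETY_THRESHOLD
```

The point of taking a high quantile rather than the mean is that epistemic uncertainty (we do not know which theory is right) should make the system more cautious, not less.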

Syllabus

DSI Seminar Series | How Could We Design Aligned and Provably Safe AI?

Taught by

Inside Livermore Lab

