
Linear Representations of Concepts in Modern AI Models

Simons Institute via YouTube

Overview

Explore how concepts are linearly represented within modern AI models in this 37-minute conference talk by Mikhail Belkin (UCSD), presented at the Simons Institute's "Smale@95: A Conference in Honor of Steve Smale." Learn how trained large language models encode vast amounts of human knowledge, and how many concepts can be recovered from a network's internal activations using linear "probes," which are mathematically equivalent to single-index models. Examine how these probes are constructed from Recursive Feature Machines, a feature-learning kernel method originally developed to extract relevant features from tabular data, and understand their role in interpreting the internal representations of AI systems.
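To make the idea of a linear probe concrete, here is a minimal, hedged sketch (not the method from the talk, which uses Recursive Feature Machines): it fits an ordinary least-squares linear probe to synthetic stand-in "activations" and checks how well a linearly encoded binary concept can be read off. The activation matrix, concept direction, and labels are all illustrative assumptions; in practice the activations would come from a layer of a trained language model.

```python
# Hedged sketch of a linear probe on (synthetic) internal activations.
# All data here is a stand-in; real probes are trained on activations
# extracted from a trained network.
import numpy as np

rng = np.random.default_rng(0)
d = 64                        # activation dimension (assumption)
n = 2000                      # number of probe training examples
w_true = rng.normal(size=d)   # hypothetical "concept direction"

X = rng.normal(size=(n, d))           # stand-in activations
y = (X @ w_true > 0).astype(int)      # concept is linear in activations

# Fit a least-squares linear probe against +/-1 labels.
w_hat = np.linalg.lstsq(X, 2 * y - 1, rcond=None)[0]

# Read the concept back out by thresholding the probe's output.
acc = ((X @ w_hat > 0).astype(int) == y).mean()
```

If the concept really is represented linearly, even this simple probe recovers it with high accuracy; the talk's point is that many human-interpretable concepts in modern models behave this way.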

Syllabus

Linear representations of concepts in modern AI models

Taught by

Simons Institute

