
Massive CoT Problems: Sonnet 3.7 Reasoning - Chain-of-Thought Reliability in AI Models

Discover AI via YouTube

Overview

This video explores the challenges and implications of Chain-of-Thought (CoT) reasoning in advanced AI models such as Claude 3.7 Sonnet. It examines how these models display their reasoning process alongside their answers, offering a transparent window into their problem-solving methods. Learn why this feature has become valuable for AI safety researchers, who use it to detect undesirable behaviors such as deception by comparing what models state in their reasoning against what they omit from final outputs. The presentation raises a critical question about whether Chain-of-Thought processes can be trusted for alignment purposes, referencing Anthropic's April 2025 research "Reasoning models don't always say what they think." Well suited to those interested in AI research, safety, and the technical challenges of aligning advanced reasoning models.

Syllabus

Massive CoT PROBLEMS: Sonnet 3.7 Reasoning

Taught by

Discover AI
