
The Coherence Trap - Why LLMs Feel Smart But Aren't Thinking

AI Engineer via YouTube

Overview

Explore why large language models feel intelligent despite lacking true cognition in this 21-minute conference talk, which introduces coherence reconstruction as a mental model for understanding LLM behavior. Discover how LLMs generate meaning through latent coherence, an internal mechanism that aligns language with context without actual reasoning or awareness. Learn why hallucinations are inevitable and cannot be completely eliminated, how prompts function as force vectors that shape model behavior in structured ways, and what this implies for reasoning tasks, evaluation practices, and agent design. Gain insight into rethinking reliability, cognition, and the nature of understanding when building tools, agents, or workflows with large language models, and into the fundamental disconnect between perceived intelligence and actual thinking in AI systems.

Syllabus

The Coherence Trap: Why LLMs Feel Smart (But Aren’t Thinking) - Travis Frisinger

Taught by

AI Engineer

