Overview
Explore the foundational principles behind modern AI development in this 41-minute conference talk delivered by Anthropic co-founder Jared Kaplan at Y Combinator's AI Startup School. Discover how insights from theoretical physics led to groundbreaking discoveries about AI scaling laws that fundamentally reshaped the path toward human-level artificial intelligence. Learn about the predictable, almost physical nature of intelligence scaling and how this understanding became foundational to large language models. Examine the key phases of AI training including pre-training and reinforcement learning, and understand how scaling laws unlock new AI capabilities as models grow larger. Delve into critical challenges facing AI development such as organizational knowledge management, memory systems, and oversight mechanisms for increasingly nuanced tasks. Gain insights into the future of AI models including Claude 4 and beyond, exploring what remains to be solved as models become smarter and tackle longer-horizon tasks. Understand the evolving landscape of human-AI collaboration and the compute efficiency considerations that drive scaling laws. Benefit from audience Q&A addressing practical questions about AI development and implementation strategies for startups and researchers working in the field.
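The "predictable, almost physical" scaling described above is usually expressed as a power law: loss falls smoothly as model size grows. As an illustrative sketch only (not code or constants from the talk; the values below are roughly in the range reported in the scaling-laws literature), the relationship and how the exponent can be recovered from measurements look like this:

```python
import math

# Illustrative sketch: scaling laws model pre-training loss as a power law
# in parameter count N:  L(N) = (N_c / N) ** alpha.
# N_C and ALPHA here are demonstration values, not definitive constants.
N_C = 8.8e13
ALPHA = 0.076

def loss(n_params: float) -> float:
    """Predicted loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA

# Because log L is linear in log N, the exponent can be recovered from the
# slope between any two model sizes on a log-log plot:
n1, n2 = 1e8, 1e10
slope = (math.log(loss(n2)) - math.log(loss(n1))) / (math.log(n2) - math.log(n1))
estimated_alpha = -slope
print(f"{estimated_alpha:.3f}")  # → 0.076
```

The key property this illustrates is the one the talk leans on: because the loss curve is a straight line in log-log space, performance at larger scales can be extrapolated from smaller, cheaper training runs.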
Syllabus
00:17 - From Physics to AI
01:41 - Initial Skepticism and Shift to AI
02:12 - AI Training Phases
02:32 - Pre-Training
03:16 - Reinforcement Learning
04:19 - Scaling Laws in Training
08:19 - Unlocking AI Capabilities
11:27 - Organizational Knowledge and Memory
12:19 - Oversight and Nuanced Tasks
13:38 - Preparing for the Future
15:48 - Claude 4 and Beyond
21:18 - Human-AI Collaboration
29:50 - Scaling Laws and Compute Efficiency
35:26 - Audience Q&A
Taught by
Y Combinator