Overview
Explore the persistent phenomenon of AI hallucinations in this 11-minute video featuring Gartner Global Chief of Research Chris Howard, who examines why generative AI systems continue to produce inaccurate outputs and what this reveals about how machines reason. Discover how hallucinations stem from the way these systems think: through prediction and probability rather than deterministic logic. Learn about emerging mitigations, including multiagent systems that debate answers collaboratively much like a medical panel, advanced filtering mechanisms, and constrained-data approaches that reduce erroneous outputs. Understand the potential of physics-informed neural networks (PINNs), which build scientific principles directly into a model's training so its predictions stay consistent with known physics, enabling more accurate and reliable decision-making. Examine the shift toward probabilistic thinking in AI systems, where binary answers give way to nuanced probability distributions that better reflect real-world uncertainty. Gain insight into the data investments organizations should make now to prepare for more sophisticated AI deployments, and consider surprising scenarios where hallucinations actually provide value, offering creative solutions and novel perspectives for specific use cases and applications.
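The video itself contains no code, but as a rough illustration of the PINN idea mentioned above, the sketch below shows how a physics constraint can be added to a neural network's training loss. It assumes PyTorch, and the governing equation (the ODE du/dx = -u with u(0) = 1) is an arbitrary illustrative choice, not anything from the video.

import torch

# Minimal PINN-style sketch: the network is penalized both for violating the
# governing equation (physics residual) and for missing the boundary condition,
# so learned predictions stay consistent with the stated physics.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(0.0, 2.0, 64).reshape(-1, 1).requires_grad_(True)  # collocation points
x0 = torch.zeros(1, 1)  # boundary point where u(0) = 1

for step in range(2000):
    u = net(x)
    # du/dx computed with autograd, so the physics term is differentiable
    du_dx = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    physics_loss = ((du_dx + u) ** 2).mean()       # residual of du/dx = -u
    boundary_loss = ((net(x0) - 1.0) ** 2).mean()  # enforce u(0) = 1
    loss = physics_loss + boundary_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

After training, net(x) approximates exp(-x) even though no labeled solution data was provided; the equation itself supplies the supervision, which is the core idea behind embedding scientific principles into AI reasoning.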
Syllabus
00:00 Intro and Why AI Makes Stuff Up
02:50 Solving the Problem: Agents, Filters and Constrained Data
04:53 PINNs and the Future of AI Reasoning
06:45 Probabilistic Thinking: Why Not Every Answer Is Binary
08:16 Invest in Your Data to Get Ready Now
09:43 When Hallucinations Are Actually Helpful
Taught by
Gartner