Overview
This lecture by Nikolay Malkin from the University of Edinburgh explores how Bayesian inference techniques can address challenges in large language models. Discover various probabilistic inference methods—including Monte Carlo, amortised variational inference with deep reinforcement learning, and hybrid approaches—that can solve complex LLM tasks like constrained generation, reasoning, planning, information extraction, and human feedback alignment. Learn how these techniques enable the development of generalisable yet uncertainty-aware reasoners and planners. The talk also examines the challenges of extracting structured knowledge from pretrained language models and proposes that amortised inference techniques could enable more faithful extraction of relational and causal information by creating symbolic structures consistent with language model predictions. The discussion concludes by exploring implications for developing aligned AI systems with probabilistic safety guarantees.
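As a toy illustration (not taken from the lecture itself), constrained generation can be framed as posterior inference: sample from p(x | c) ∝ p(x) · 1[c(x)], where p(x) is the language model's prior over sequences and c is a hard constraint. The sketch below uses a hypothetical four-sequence categorical "language model" and plain self-normalised importance sampling (which, for an indicator constraint, reduces to rejection sampling); the amortised methods discussed in the talk instead train a sampler to match this posterior directly.

```python
import random

random.seed(0)

# Hypothetical stand-in for an LLM: a categorical prior over short sequences.
sequences = ["the cat sat", "the dog ran", "a cat ran", "the cat ran"]
prior = [0.4, 0.3, 0.2, 0.1]  # p(x) under the toy model

def constraint(x):
    """Hard constraint for constrained generation: sequence must mention 'cat'."""
    return "cat" in x

# Monte Carlo estimate of p(x | constraint): draw from the prior,
# keep only constraint-satisfying samples, and normalise the counts.
n = 10_000
counts = {x: 0 for x in sequences}
for _ in range(n):
    x = random.choices(sequences, weights=prior)[0]
    if constraint(x):
        counts[x] += 1

total = sum(counts.values())
posterior = {x: c / total for x, c in counts.items()}

# Exact posterior for comparison: renormalise the prior mass that
# satisfies the constraint.
mass = sum(p for x, p in zip(sequences, prior) if constraint(x))
exact = {x: (p / mass if constraint(x) else 0.0)
         for x, p in zip(sequences, prior)}

print(posterior)
```

With a real LLM the prior cannot be enumerated, which is why the talk's amortised approaches (e.g. training a policy with reinforcement-learning-style objectives to sample the posterior) become attractive.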
Syllabus
Amortised Inference Meets LLMs: Algorithms and Implications for Faithful Knowledge Extraction
Taught by
Simons Institute