Overview
Explore the complex phenomenon of knowledge integration in transformer-based neural network architectures in this 40-minute talk from Discover AI. Delve into how new knowledge permeates large language models and the challenges of data-efficient learning. The presentation examines research from multiple papers, including "How new data permeates LLM knowledge and how to dilute it" from Google DeepMind, "Identifying and Mitigating the Influence of the Prior Distribution in Large Language Models" from Princeton University, and "Memorization vs. Reasoning: Updating LLMs with New Knowledge" from Cornell University. Learn about synthetic continued pretraining, the Model Context Protocol (MCP), Agent-to-Agent (A2A) interactions, and why these systems sometimes fail in unexpected ways. Gain insights into multi-agent reinforcement learning, latency drift, failure modes, and the complexities of agent coordination in AI data pipelines and distributed systems.
Syllabus
MCP & A2A FAIL - not for the reasons you think #ai
Taught by
Discover AI