AI Engineer - Learn how to integrate AI into software applications
Overview
Syllabus
00:00:00 - Introduction: Speaker Samuel Colvin introduces himself as the creator of Pydantic.
00:00:42 - Pydantic Ecosystem: Introduction to Pydantic the company, the Pydantic AI agent framework, and the Logfire observability platform.
00:01:18 - Talk Thesis: Explaining the title "MCP is all you need" and the main argument that MCP simplifies agent communication.
00:02:05 - MCP's Focus: Clarifying that the talk focuses on MCP for autonomous agents and custom code, not its original desktop automation use case.
00:02:48 - Tool Calling Primitive: Highlighting that "tool calling" is the most relevant MCP primitive for this context.
00:03:10 - MCP vs. OpenAPI: Listing the advantages MCP has over a simple OpenAPI specification for tool calls.
00:03:21 - Feature 1: Dynamic Tools: Tools can appear and disappear based on server state.
00:03:26 - Feature 2: Streaming Logs: The ability to return log data to the user while a tool is still executing.
00:03:33 - Feature 3: Sampling: A mechanism for a tool server to request an LLM call back through the agent client.
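The first of these features, dynamic tools, can be sketched in plain Python: a registry whose visible tool list changes with server state. The ToolRegistry class below is purely illustrative, not part of the MCP SDK.

```python
# Conceptual sketch (plain Python, not the MCP SDK) of MCP's dynamic tools:
# the set of tools a client sees depends on current server state.
class ToolRegistry:
    def __init__(self):
        self._tools = {}
        self.authenticated = False  # example piece of server state

    def register(self, name, fn, requires_auth=False):
        self._tools[name] = (fn, requires_auth)

    def list_tools(self):
        # Tools "appear and disappear" as server state changes.
        return [name for name, (_, needs_auth) in self._tools.items()
                if self.authenticated or not needs_auth]

registry = ToolRegistry()
registry.register("search", lambda q: f"results for {q}")
registry.register("run_query", lambda sql: "rows...", requires_auth=True)

print(registry.list_tools())   # only "search" is visible
registry.authenticated = True
print(registry.list_tools())   # now both tools are visible
```

An OpenAPI specification, by contrast, is a static document: it cannot express a tool list that changes while the server runs.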
00:04:01 - MCP Architecture Diagram: Visualizing the basic agent-to-tool communication flow.
00:04:43 - Complex Architecture: Discussing scenarios where tools are themselves agents that need LLM access.
00:05:24 - Explaining Sampling: Detailing how sampling solves the problem of every agent needing its own LLM by allowing tools to "piggyback" on the client's LLM access.
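The sampling flow described here can be sketched in plain Python: the client owns the LLM access, and a tool running on the server asks the client to make the model call on its behalf. All names below (AgentClient, ToolServer, fake_llm) are illustrative, not the real SDK API.

```python
# Conceptual sketch of MCP sampling: the tool server has no LLM key of its
# own and "piggybacks" on the client's LLM access via a callback.
def fake_llm(prompt: str) -> str:
    # Stands in for a real model call made by the client.
    return f"LLM answer to: {prompt}"

class AgentClient:
    """Holds LLM access; fulfils sampling requests from tool servers."""
    def handle_sampling_request(self, prompt: str) -> str:
        return fake_llm(prompt)

class ToolServer:
    """Needs inference but has no model credentials of its own."""
    def __init__(self, sampling_callback):
        self._sample = sampling_callback

    def summarize(self, document: str) -> str:
        # The tool samples back through the client instead of calling
        # a model directly.
        return self._sample(f"Summarize: {document}")

client = AgentClient()
server = ToolServer(sampling_callback=client.handle_sampling_request)
print(server.summarize("a long report"))
```

The key design point is the direction of the dependency: only the client configures a model, so any number of tool servers can run inference without managing their own keys.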
00:06:42 - Pydantic AI's Role in Sampling: How the Pydantic AI library supports sampling on both the client and server side.
00:07:10 - Demo Start: Beginning the demonstration of a research agent that uses an MCP tool to query BigQuery.
00:08:23 - Code Walkthrough: Validation: Showing how Pydantic is used for output validation and automatic retries via ModelRetry.
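The validate-and-retry loop described here can be sketched without the real library. In Pydantic AI, raising ModelRetry sends the validation error back to the model; in this illustrative stand-in a ValueError plays that role and fake_model corrects itself after seeing the feedback.

```python
# Sketch of output validation with automatic retries: invalid model output
# is turned into feedback for the next attempt. All names are illustrative.
def validate_sql(output: str) -> str:
    if not output.strip().upper().startswith("SELECT"):
        raise ValueError("output must be a SELECT statement")
    return output.strip()

def run_with_retries(model, prompt: str, max_retries: int = 2) -> str:
    feedback = None
    for _ in range(max_retries + 1):
        raw = model(prompt, feedback)
        try:
            return validate_sql(raw)
        except ValueError as err:
            feedback = str(err)  # fed back to the model, like ModelRetry
    raise RuntimeError("model never produced valid output")

def fake_model(prompt, feedback):
    # First attempt is invalid; after seeing the feedback it complies.
    return "DROP TABLE users" if feedback is None else "SELECT * FROM users"

print(run_with_retries(fake_model, "list the users"))
```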
00:09:00 - Code Walkthrough: Context Logging: Demonstrating the use of mcp_context.log to send progress updates back to the client.
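The progress-logging pattern can be sketched as follows; the Context class here is an illustration of the idea, not the MCP SDK object the talk uses.

```python
# Sketch of streaming log messages from a tool back to the client while the
# tool is still executing, analogous to the mcp_context.log calls in the talk.
class Context:
    def __init__(self):
        self.messages = []

    def log(self, level: str, message: str):
        # In MCP these would stream to the client; here we just collect them.
        self.messages.append((level, message))

def query_tool(ctx: Context, question: str) -> str:
    ctx.log("info", "generating SQL")
    sql = f"-- SQL for: {question}"
    ctx.log("info", "running query")
    return f"result for {question!r}"

ctx = Context()
result = query_tool(ctx, "monthly downloads")
print(result)
print(ctx.messages)
```

Because the logs arrive before the final result, the client can show progress for a long-running tool instead of a silent wait.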
00:10:51 - MCP Server Setup: Showing the code for setting up an MCP server using FastMCP.
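The registration style FastMCP uses can be imitated in plain Python: tools are ordinary functions registered on a server object via a decorator. The Server class below only illustrates the pattern; the real server class comes from the MCP SDK.

```python
# Minimal stdlib imitation of decorator-based tool registration, the style
# FastMCP uses. This Server class is illustrative, not the real library.
class Server:
    def __init__(self, name: str):
        self.name = name
        self.tools = {}

    def tool(self):
        def register(fn):
            self.tools[fn.__name__] = fn
            return fn
        return register

server = Server("research")

@server.tool()
def query_bigquery(question: str) -> str:
    """Answer a question against the dataset (stubbed here)."""
    return f"answer to {question!r}"

print(sorted(server.tools))
print(server.tools["query_bigquery"]("how many downloads?"))
```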
00:11:54 - Design Pattern: Inference Inside the Tool: Explaining the benefit of having the tool perform its own LLM inference to reduce the context burden on the main agent.
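The "inference inside the tool" pattern can be sketched like this: the tool makes its own model call, runs the query, and returns only a compact answer, so the raw rows never enter the main agent's context. All names and stubbed values below are illustrative.

```python
# Sketch of "inference inside the tool": the tool does its own LLM call and
# query execution, returning a short summary instead of raw data.
def tool_llm(prompt: str) -> str:
    return "SELECT COUNT(*) FROM downloads"   # stand-in for a model call

def run_query(sql: str) -> list:
    return [("2024-01", 10_000), ("2024-02", 12_000)]  # stand-in rows

def research_tool(question: str) -> str:
    sql = tool_llm(f"Write SQL for: {question}")
    rows = run_query(sql)
    # Return a compact summary, not the rows themselves, so the main
    # agent's context stays small.
    return f"{len(rows)} monthly figures found; latest = {rows[-1][1]}"

print(research_tool("downloads per month"))
```

The trade-off is that the main agent sees less detail, but its context window is spent on reasoning rather than on transcribing query results.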
00:12:27 - Main Application Code: Reviewing the client-side code that defines the agent and registers the MCP tool.
00:13:16 - Observability with Logfire: Switching to the Logfire UI to trace the execution of the agent's query.
00:14:09 - Observing Sampling in Action: Pointing out the specific span in the trace that shows the tool making an LLM call back through the client via sampling.
00:14:48 - Inspecting the SQL Query: Showing how the observability tool can be used to see the exact SQL query that was generated by the internal agent.
00:15:15 - Conclusion: Final summary of the talk's points.
Taught by
AI Engineer