

Beyond Copilots - How LinkedIn Scales Multi-Agent Systems

InfoQ via YouTube

Overview

Explore how LinkedIn engineers Daniel Hewlett and Karthik Ramgopal built and scaled the internal "Agent Platform" that powers LinkedIn's Hiring Assistant in this 50-minute conference talk. Learn why simple prompt chains are insufficient for production AI systems, and see how LinkedIn evolved from basic prompt-in/string-out products to multi-agent architectures. Understand the transition from single LLM blocks to hierarchical sub-agents using the Supervisor Pattern, which coordinates specialized agent skills and enables parallel development with independent quality evaluation. Examine LinkedIn's approach to model selection, comparing when to use GPT-4o versus fine-tuned smaller models, and see how they adapt models using the LinkedIn Economic Graph for domain-specific tasks. Dive into the technical architecture, including LLM inference abstractions for managing quotas and GPU limits, distributed messaging platforms for handling non-deterministic AI workloads, and memory management strategies that distinguish working memory from long-term collective memory. Discover how LinkedIn built its Skill Registry before the Model Context Protocol (MCP) existed, and learn about the observability challenges of asynchronous agentic systems. Gain insights into practical decisions about when to use procedural code instead of LLMs, understand the Model Customization Pyramid comparing RAG and fine-tuning approaches, and explore UX design principles for agent interfaces that go beyond simple text boxes. The session closes with a Q&A segment covering security and service principles in skill registry management, providing practical guidance for scaling multi-agent AI systems in production.
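The Supervisor Pattern described above can be sketched in a few lines: a supervisor routes each task to a specialized sub-agent ("skill") registered under a name, so skills can be developed and evaluated independently. This is an illustrative sketch only; the class and method names are hypothetical and not LinkedIn's actual API.

```python
# Minimal sketch of the Supervisor Pattern: a supervisor coordinates
# independently registered, specialized agent skills.
# All names here are illustrative, not LinkedIn's implementation.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Skill:
    name: str
    handler: Callable[[str], str]  # each skill turns a task string into a result


class Supervisor:
    def __init__(self) -> None:
        self.skills: Dict[str, Skill] = {}

    def register(self, skill: Skill) -> None:
        # Skills register independently, which is what enables parallel
        # development and per-skill quality evaluation.
        self.skills[skill.name] = skill

    def route(self, skill_name: str, task: str) -> str:
        # The supervisor decides which specialized sub-agent handles the task.
        if skill_name not in self.skills:
            raise KeyError(f"no skill registered under {skill_name!r}")
        return self.skills[skill_name].handler(task)


# Usage: two specialized sub-agents behind one supervisor.
supervisor = Supervisor()
supervisor.register(Skill("search", lambda t: f"search results for: {t}"))
supervisor.register(Skill("outreach", lambda t: f"drafted message for: {t}"))
print(supervisor.route("search", "staff ML engineers in Dublin"))
```

In a real system the `handler` would wrap an LLM call or procedural code, and routing itself could be delegated to a model; the structure above is only the coordination skeleton.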

Syllabus

– Evolution of Generative AI at LinkedIn: From "Coach" to "Agent"
– The Early Days: Simple prompt-in/string-out products
– Moving to Prompt Chains: Handling memory and online inference
– The "Agent Era": Introducing prompt graphs and task automation
– Deep Dive: The LinkedIn Hiring Assistant problem space
– Why natural language interfaces beat 40+ search filters
– Scaling bottlenecks in single LLM block architectures
– Modular Design: Moving to a Manager/Interpreter pattern
– Transitioning from LLM blocks to hierarchical sub-agents
– The Supervisor Pattern: Coordinating specialized agent skills
– Parallel development and independent quality evaluation
– Model Selection: When to use GPT-4o vs. fine-tuned small models
– Domain Adaptation: Training models on the LinkedIn Economic Graph
– The LinkedIn Agent Platform: Standardizing prompts and namespaces
– LLM Inference Abstractions: Managing quotas and GPU limits
– Scaling non-deterministic workloads with a messaging platform
– Memory Management: Working memory vs. long-term collective memory
– Building a Skill Registry and why it predated MCP
– Observability challenges in asynchronous agentic systems
– Lessons Learned: When to use procedural code instead of an LLM
– The Model Customization Pyramid: RAG vs. Fine-tuning
– UX for Agents: Why text boxes alone aren't enough
– Q&A: Managing security and service principles in a skill registry
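The memory distinction in the syllabus, working memory versus long-term collective memory, can be sketched as two tiers: a per-session scratchpad that is discarded when the conversation ends, and a durable store that survives it. This is a hypothetical illustration of the concept, not LinkedIn's design.

```python
# Illustrative two-tier agent memory: per-session working memory vs.
# long-term memory that persists across sessions. Names are hypothetical.
from collections import defaultdict
from typing import Dict, List


class AgentMemory:
    def __init__(self) -> None:
        self.working: Dict[str, List[str]] = defaultdict(list)  # per-session scratchpad
        self.long_term: List[str] = []  # shared, survives sessions

    def remember(self, session_id: str, fact: str, durable: bool = False) -> None:
        self.working[session_id].append(fact)
        if durable:
            # Promote facts worth keeping beyond this conversation.
            self.long_term.append(fact)

    def end_session(self, session_id: str) -> None:
        # Working memory is discarded with the conversation;
        # only long-term memory remains available afterwards.
        self.working.pop(session_id, None)


mem = AgentMemory()
mem.remember("s1", "candidate prefers remote roles", durable=True)
mem.remember("s1", "currently on results page 2")
mem.end_session("s1")
print(mem.long_term)  # only the durable fact survives
```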

Taught by

InfoQ

