Overview
Explore how to accelerate AI deployment in enterprise environments through strategic gateway architecture in this 30-minute conference presentation from KubeCon North America 2025. Topics include:

- Why decoupling applications from specific AI models at the gateway layer avoids constant refactoring as models rapidly evolve and proliferate
- How Traefik Labs' runtime gateway approach provides operational freedom and leverage by handling authentication, rate limiting, and enterprise policy guardrails that prevent AI misuse, such as a finance agent answering legal questions
- The benefits of consolidating AI logic at the gateway level rather than embedding it in individual applications: improved scalability, performance, unified control, and observability in production environments
- The "triple gate pattern" for agentic workflows, which manages interactions between LLMs, MCP resources, and backend APIs through AI gateways, MCP gateways, and traditional API gateways within a single binary deployment
- Why decoupling the API runtime from the model runtime is essential, given the rapid evolution of AI models and the reality that no single model will dominate the landscape
- Practical strategies for optimizing token consumption through gateway-level caching and for implementing AI governance frameworks that align with enterprise policies and operational requirements
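The token-saving caching idea described in the talk can be sketched in a few lines of Python. This is an illustrative toy, not Traefik's implementation (a real gateway would configure this via middleware rather than application code); the class and function names here are hypothetical:

```python
import hashlib

class CachingAIGateway:
    """Toy gateway that caches model responses keyed by prompt hash,
    so repeated identical requests consume no upstream tokens."""

    def __init__(self, model_client):
        self.model_client = model_client  # callable: prompt -> response
        self._cache = {}

    def _key(self, prompt: str) -> str:
        # Hash the prompt so identical requests map to the same cache entry.
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    def complete(self, prompt: str) -> str:
        key = self._key(prompt)
        if key in self._cache:
            return self._cache[key]  # cache hit: no model call, no tokens spent
        response = self.model_client(prompt)
        self._cache[key] = response
        return response

# Usage: a stub "model" that counts invocations stands in for a real LLM backend.
calls = {"n": 0}

def fake_model(prompt):
    calls["n"] += 1
    return f"answer to: {prompt}"

gw = CachingAIGateway(fake_model)
gw.complete("What is our refund policy?")
gw.complete("What is our refund policy?")  # served from cache
print(calls["n"])  # → 1
```

Because the cache lives at the gateway, every application behind it benefits without changing its own code, which is the consolidation argument the talk makes.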
Syllabus
Accelerating AI In the Enterprise with Traefik Labs
Taught by
Tech Field Day