Highly Scalable AI Search Engine and AI Data Lake With Kubernetes and LanceDB
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Learn how to build highly scalable AI search engines and data lakes using Kubernetes and LanceDB in this conference talk, which addresses the complex retrieval challenges facing modern AI applications. Discover how AI workflows require not just vector database capabilities but also feature-store retrieval and analytical queries, which traditionally forces organizations to store AI data in separate silos across multiple systems, increasing both cost and complexity.

Explore LanceDB's unified approach, which combines vector search, feature retrieval, and SQL-based analytics in a single system built on the open-source Lance columnar format, an emerging standard for AI data storage. Understand how Kubernetes-native autoscaling enables RAG and AI-agent applications embedding LanceDB to scale dynamically, creating an architecture that redefines the performance-scale-cost curve and delivers hyper-scalable AI applications with 10x better cost efficiency. Examine the integration of KServe for model serving, Postgres for metadata caching, and Kubernetes-native data caching solutions that break the traditional "impossible triangle" of performance, scale, and cost in AI infrastructure.
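To make the "unified retrieval" idea concrete, here is a minimal, self-contained sketch of what a single table serving both vector search and feature retrieval looks like. This is a conceptual illustration in pure Python, not LanceDB's actual API; the schema (`vector` plus arbitrary feature columns) and the brute-force cosine-similarity scan are assumptions for illustration only.

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, rows, k=2):
    # rows: list of dicts holding a "vector" plus feature columns,
    # mimicking one table that serves both nearest-neighbor search
    # and feature lookup (hypothetical schema, not LanceDB's API).
    scored = sorted(rows, key=lambda r: cosine(query, r["vector"]), reverse=True)
    return scored[:k]

rows = [
    {"id": 1, "vector": [1.0, 0.0], "category": "a"},
    {"id": 2, "vector": [0.9, 0.1], "category": "b"},
    {"id": 3, "vector": [0.0, 1.0], "category": "a"},
]

hits = top_k([1.0, 0.0], rows, k=2)
print([r["id"] for r in hits])  # → [1, 2]
```

Because the feature columns ride along with the vectors in the same rows, the same retrieval call returns both the nearest neighbors and their features, avoiding a second lookup against a separate feature store, which is the cost-and-complexity point the talk makes.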
Syllabus
Highly Scalable AI Search Engine and AI Data Lake With Kubernetes and Lance... Lu Qiu & Chanchan Mao
Taught by
CNCF [Cloud Native Computing Foundation]