Overview
Learn about BalanceKV, a novel algorithm that reduces the memory required for large language model inference by compressing the key-value (KV) cache while preserving the quality of attention computation. This Google TechTalk presentation explores the geometric structure of KV caches and applies discrepancy theory to obtain theoretical guarantees for memory optimization. It also covers empirical results showing improvements over existing compression methods, the challenges of scaling context length in LLM inference, and how insights from discrepancy theory can be turned into efficient algorithms for large-scale machine learning.
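To make the core idea concrete, here is a minimal, hypothetical sketch of discrepancy-style KV-cache compression: a greedy balancing pass assigns each key a ±1 sign so that the signed running sum of key vectors stays small, then one half of the split is kept and attention is computed over that subset. The talk's actual BalanceKV algorithm is not specified in this listing, so all function names and the greedy signing rule below are illustrative assumptions, not the presented method.

```python
import numpy as np

def greedy_balance_split(K):
    """Assign +1/-1 signs to key vectors, greedily keeping the
    signed running sum small (a simple discrepancy-style heuristic;
    an assumption, not the BalanceKV rule)."""
    signs = np.empty(len(K), dtype=int)
    running = np.zeros(K.shape[1])
    for i, k in enumerate(K):
        # Pick the sign that shrinks (or least grows) the running sum.
        signs[i] = -1 if running @ k > 0 else 1
        running += signs[i] * k
    return signs

def attention(q, K, V):
    """Standard softmax attention for a single query vector."""
    logits = q @ K.T
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ V

rng = np.random.default_rng(0)
K = rng.normal(size=(64, 16))   # cached keys
V = rng.normal(size=(64, 16))   # cached values
q = rng.normal(size=16)         # incoming query

signs = greedy_balance_split(K)
keep = signs == 1               # retain one half of the balanced split
approx = attention(q, K[keep], V[keep])
full = attention(q, K, V)
err = np.linalg.norm(approx - full)
```

The intuition is that a well-balanced split leaves either half representative of the whole cache, so attention over the retained half approximates full attention at roughly half the memory; the real algorithm adds the machinery needed to make that guarantee precise.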
Syllabus
Streaming Attention Approximation via Discrepancy Theory
Taught by
Google TechTalks