

Streaming Attention Approximation via Discrepancy Theory

Google TechTalks via YouTube

Overview

Learn about BalanceKV, a novel algorithm that reduces memory requirements for large language model inference by compressing the key-value cache while maintaining attention computation quality. Discover how this Google TechTalk presentation explores the geometric properties of key-value caches and applies discrepancy theory to achieve theoretical guarantees for memory optimization. Explore the empirical validation showing performance improvements over existing methods, understand the challenges of context length scaling in LLM inference, and examine how theoretical insights from discrepancy theory can be applied to develop efficient algorithms for large-scale machine learning applications.
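To make the core idea concrete, here is a hedged toy sketch (not the talk's actual BalanceKV implementation) of how discrepancy theory can drive KV-cache compression: a randomized self-balancing walk in the spirit of Alweiss, Liew, and Sawhney assigns each key a ±1 sign so the signed sum of keys stays small, and one low-discrepancy half of the key-value pairs is kept with doubled weight so attention-style weighted sums are approximately preserved. All function names and the specific sign-probability rule here are illustrative assumptions.

```python
import numpy as np

def balance_signs(vectors, rng):
    # Toy self-balancing walk: assign each vector a sign in {-1, +1},
    # biased toward the sign that shrinks the running signed sum.
    d = vectors.shape[1]
    running = np.zeros(d)
    signs = np.empty(len(vectors), dtype=int)
    for i, v in enumerate(vectors):
        dot = running @ v
        # Illustrative probability rule (assumption, not the paper's exact one):
        # the more v aligns with the running sum, the likelier we pick -1.
        p_plus = 0.5 - 0.5 * np.clip(dot / (np.linalg.norm(v) ** 2 + 1e-12), -1.0, 1.0)
        signs[i] = 1 if rng.random() < p_plus else -1
        running += signs[i] * v
    return signs

def compress_kv(keys, values, rng):
    # Partition KV pairs into two low-discrepancy halves via the signs,
    # keep one half, and double its weight so soft-attention sums over
    # the kept pairs roughly match sums over the full cache.
    signs = balance_signs(keys, rng)
    keep = signs == 1
    if keep.sum() == 0:  # degenerate case: fall back to keeping everything
        return keys, values, np.ones(len(keys))
    weights = np.full(keep.sum(), 2.0)
    return keys[keep], values[keep], weights

# Usage: compress a random 100-entry cache to roughly half its size.
rng = np.random.default_rng(0)
keys = rng.standard_normal((100, 8))
values = rng.standard_normal((100, 8))
kept_keys, kept_values, weights = compress_kv(keys, values, rng)
```

Repeating the halving step recursively would shrink the cache geometrically, which is the kind of memory saving the talk's theoretical guarantees are about; the sketch above omits the error analysis entirely.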

Syllabus

Streaming Attention Approximation via Discrepancy Theory

Taught by

Google TechTalks

