Overview
Explore application-level priority management techniques for optimizing both throughput and latency within a single application in this Linux Foundation webinar. ScyllaDB CTO and co-founder Avi Kivity shares insights on achieving high performance through strategies such as shard-per-core design, task isolation in threads, and application-managed tasks. Learn about execution timelines, switching queues, preemption techniques, and stall detectors. Compare the challenges of I/O with those of CPU scheduling, understand safe disk space management, and get an operational overview of scheduler basics. Examine dynamic shares adjustment and resource partitioning for providing different quality of service to different users, and gain valuable perspective on balancing the constant tension between throughput and latency in modern applications.
Syllabus
Intro
Comparing throughput and latency
Why mix throughput and latency computing?
Achieving high throughput
Shard per Core
Isolating tasks in threads
Application-level task isolation
Application managed tasks
Execution timeline
Switching queues
Preemption techniques
Stall detector
Comparing I/O to CPU
Challenges with I/O
Safe space for disk
Scheduler basics: operation highlights
Dynamic Shares Adjustment
Resource partitioning (QoS): providing different quality of service to different users
Taught by
Linux Foundation