Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

Linux Foundation

How Application-Level Priority Management Keeps Latency Low and Throughput High

Linux Foundation via YouTube

Overview

Explore application-level priority management techniques for optimizing both throughput and latency within a single application in this Linux Foundation webinar. ScyllaDB CTO and co-founder Avi Kivity shares insights on achieving high performance through strategies such as shard-per-core design, task isolation, and application-managed tasks. Learn about execution timelines, queue switching, preemption techniques, and stall detectors. Compare the challenges of I/O scheduling with those of CPU scheduling, understand safe disk space management, and explore scheduler basics with operation highlights. Examine dynamic shares adjustment and resource partitioning for providing different quality of service to different users. Gain valuable knowledge on balancing the constant tension between throughput and latency in modern applications.
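The stall detector mentioned above can be illustrated with a minimal sketch: a watchdog thread flags any task that holds the CPU past a threshold without reaching a yield point. This is an illustrative model only, not ScyllaDB's actual implementation (which lives inside the Seastar C++ reactor); all class and method names here are hypothetical.

```python
import threading
import time

class StallDetector:
    """Watchdog that counts tasks holding the CPU past a threshold without yielding."""

    def __init__(self, threshold_s=0.05):
        self.threshold_s = threshold_s
        self.last_yield = time.monotonic()
        self.stalls = 0
        self._stop = False
        self._thread = threading.Thread(target=self._watch, daemon=True)

    def start(self):
        self._thread.start()

    def yield_point(self):
        # Cooperative tasks call this at safe points; it resets the watchdog timer.
        self.last_yield = time.monotonic()

    def _watch(self):
        while not self._stop:
            if time.monotonic() - self.last_yield > self.threshold_s:
                self.stalls += 1
                # Reset so each stall is reported once, not on every poll.
                self.last_yield = time.monotonic()
            time.sleep(self.threshold_s / 5)

    def stop(self):
        self._stop = True
        self._thread.join()

detector = StallDetector(threshold_s=0.05)
detector.start()

# Well-behaved task: yields frequently, so no stall is reported.
for _ in range(10):
    time.sleep(0.01)
    detector.yield_point()

# Misbehaving task: busy-loops for 0.2 s without ever yielding.
end = time.monotonic() + 0.2
while time.monotonic() < end:
    pass

detector.stop()
print(detector.stalls)  # at least one stall detected
```

In a real event-driven runtime the "yield points" are the scheduler's preemption checks, and a detected stall typically logs a backtrace of the offending task rather than just incrementing a counter.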

Syllabus

Intro
Comparing throughput and latency
Why mix throughput and latency computing?
Achieving high throughput
Shard per Core
Isolating tasks in threads
Application-level task isolation
Application managed tasks
Execution timeline
Switching queues
Preemption techniques
Stall detector
Comparing I/O to CPU
Challenges with I/O
Safe space for disk
Scheduler Basics - operation highlights
Dynamic Shares Adjustment
Resource partitioning (QoS): providing different quality of service to different users

Taught by

Linux Foundation

