How Application-Level Priority Management Keeps Latency Low and Throughput High

Linux Foundation via YouTube


Classroom Contents


  1. Intro
  2. Comparing throughput and latency
  3. Why mix throughput and latency computing?
  4. Achieving high throughput
  5. Shard per core
  6. Isolating tasks in threads
  7. Application-level task isolation
  8. Application-managed tasks
  9. Execution timeline
  10. Switching queues
  11. Preemption techniques
  12. Stall detector
  13. Comparing I/O to CPU
  14. Challenges with I/O
  15. Safe space for disk
  16. Scheduler basics: operation highlight
  17. Dynamic shares adjustment
  18. Resource partitioning (QoS): providing different quality of service to different users
