
Cache Aware Scheduling

Linux Plumbers Conference via YouTube

Overview

Explore the implementation of cache-aware scheduling in this 21-minute conference talk from the Linux Plumbers Conference, presented by Intel engineers Tim Chen and Yu Chen. Learn about the proposed RFC patch series, which optimizes thread scheduling by keeping data-sharing threads within the same last-level cache (LLC) domain to minimize cache bouncing. Discover the primary use cases motivating the feature and examine current performance metrics demonstrating its effectiveness. Analyze the fundamental approach of the current patches and evaluate whether the feature should extend beyond aggregating threads of a single process to cover processes communicating through pipes, sockets, or shared memory. Investigate the potential use of NUMA balancing's memory-scanning mechanism to identify data-sharing tasks and estimate the extent of shared data. Review the load-aggregation policy implementation and consider possible improvements for scheduling efficiency in cache-aware environments.
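The core idea described above — placing a new thread in the LLC domain where its data-sharing siblings already run — can be illustrated with a small, purely hypothetical sketch. This is not the kernel patch series itself: the domain layout, the `pick_cpu` helper, and the load map are all illustrative assumptions, modeling only the aggregation heuristic in simplified form.

```python
from collections import Counter

# Hypothetical topology: each LLC domain is a list of CPU ids.
LLC_DOMAINS = [[0, 1, 2, 3], [4, 5, 6, 7]]

def llc_of(cpu):
    """Return the index of the LLC domain containing `cpu`."""
    for i, cpus in enumerate(LLC_DOMAINS):
        if cpu in cpus:
            return i
    raise ValueError(f"unknown cpu {cpu}")

def pick_cpu(sibling_cpus, load):
    """Cache-aware placement sketch: prefer the LLC domain where
    most sibling threads of the same process already run, then
    pick the least-loaded CPU there. `load` maps cpu id -> load."""
    if sibling_cpus:
        # Aggregate with siblings to avoid cache bouncing.
        domain = Counter(llc_of(c) for c in sibling_cpus).most_common(1)[0][0]
    else:
        # No siblings yet: fall back to the least-loaded domain.
        domain = min(range(len(LLC_DOMAINS)),
                     key=lambda i: sum(load.get(c, 0) for c in LLC_DOMAINS[i]))
    return min(LLC_DOMAINS[domain], key=lambda c: load.get(c, 0))

# A new thread whose siblings sit on CPUs 1 and 2 lands in the
# same LLC domain, on the idlest CPU there (CPU 3).
print(pick_cpu([1, 2], {0: 3, 1: 5, 2: 5, 3: 1}))  # -> 3
```

The trade-off the talk examines — load aggregation versus spreading — shows up even here: packing siblings into one domain improves cache sharing but can overload it, which is why the aggregation policy and its possible improvements are part of the discussion.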

Syllabus

Cache Aware Scheduling - Mr Tim Chen (Intel), Mr Yu Chen (Intel)

Taught by

Linux Plumbers Conference

