Overview
Explore a cache-aware scheduling implementation in this 21-minute talk from the Linux Plumbers Conference, presented by Intel engineers Tim Chen and Yu Chen. Learn about their proposed RFC patch series, which optimizes thread scheduling by keeping data-sharing threads within the same last-level cache (LLC) domain to minimize cache bouncing. Discover the primary use cases motivating the feature and examine current performance metrics demonstrating its effectiveness. Analyze the fundamental approach of the current patches and evaluate whether the feature should extend beyond aggregating threads of a single process to also cover processes communicating through pipes, sockets, or shared memory. Investigate the potential integration of NUMA balancing's memory scanning mechanism to identify data-sharing tasks and estimate the extent of shared data. Review the load aggregation policy implementation and consider possible improvements to enhance scheduling efficiency in cache-aware environments.
Syllabus
Cache Aware Scheduling - Mr Tim Chen (Intel), Mr Yu Chen (Intel)
Taught by
Linux Plumbers Conference