

Dynamic Data Structures and Memory Management on GPUs

Simons Institute via YouTube

Overview

Learn about the challenges and solutions for implementing dynamic data structures and managing memory efficiently on Graphics Processing Units (GPUs) in this 38-minute conference talk. Explore the fundamental differences between CPU and GPU memory architectures and discover how traditional dynamic data structures must be adapted for parallel computing environments. Examine specific techniques for handling memory allocation, deallocation, and garbage collection in GPU contexts where thousands of threads operate simultaneously. Understand the trade-offs between different approaches to dynamic memory management and their impact on performance in parallel applications. Investigate case studies of successful implementations of dynamic data structures like hash tables, trees, and graphs on GPU platforms. Analyze the synchronization challenges that arise when multiple threads need to modify shared data structures concurrently and learn about lock-free and wait-free algorithms designed for GPU architectures. Gain insights into memory coalescing patterns, bank conflicts, and other GPU-specific optimization strategies that affect the performance of dynamic data structures. Discover emerging research directions in GPU memory management and their potential applications in high-performance computing, machine learning, and data analytics workloads.

Syllabus

Dynamic Data Structures and Memory Management on GPUs

Taught by

Simons Institute
