
University of Colorado Boulder

Parallel Computing with MPI

University of Colorado Boulder via Coursera

Overview

This course is designed for scientists, engineers, students, and professionals looking to develop efficient solutions for high-performance and distributed computing systems. It focuses on parallel programming using the Message Passing Interface (MPI), a standard for scalable communication across multiple processors. Learners should have basic programming experience in C or C++ and familiarity with Linux. No prior knowledge of MPI is required.

This course can be taken for academic credit as part of CU Boulder's Master of Science in Data Science (MS-DS) degree offered on the Coursera platform. The MS-DS is an interdisciplinary degree that brings together faculty from CU Boulder's departments of Applied Mathematics, Computer Science, Information Science, and others. With performance-based admissions and no application process, the MS-DS is ideal for individuals with a broad range of undergraduate education and/or professional experience in computer science, information science, mathematics, and statistics. Learn more about the MS-DS program at https://www.coursera.org/degrees/master-of-science-data-science-boulder.

Syllabus

  • Introduction to Parallel Computing and MPI
    • This module focuses on the key concepts and techniques for transforming serial algorithms into parallel solutions using the Message Passing Interface (MPI). You will explore the principles of message passing, synchronization, and parallel thinking, equipping you with the skills to use parallel computing efficiently in your projects.
  • Advanced Communication Techniques in MPI
    • This module delves into advanced communication techniques in MPI, focusing on transforming serial algorithms into parallel implementations. You will learn about nonblocking communication, point-to-point communication, and the intricacies of blocking sends and receives, along with strategies to avoid deadlock in your parallel applications.
  • Performance Optimization in Parallel Computing
    • This module focuses on enhancing the performance of parallel applications using nonblocking communication and effective load-balancing strategies. You will learn how to implement nonblocking communication, overlap communication with computation, and achieve optimal load distribution to maximize speedup in your MPI programs.
  • Advanced MPI Concepts – Communicators and Derived Datatypes
    • This module explores advanced parallel computing concepts using MPI, focusing on communicator creation, domain decomposition, and derived datatypes. You will learn to create custom communicators for process coordination and effectively divide computational domains. The module covers MPI's derived datatypes, including contiguous, vector, indexed, and struct types, enabling efficient communication for both regular and irregular data patterns in high-performance applications.
  • Parallel I/O in MPI and HDF5 for High-Performance Computing
    • This module focuses on parallel I/O in MPI, emphasizing efficient data management in high-performance computing. You will learn the principles of MPI I/O and explore practical examples of concurrent data operations. The module also introduces HDF5, a widely used data model and file format in scientific computing, highlighting its features for managing large datasets. By the end, you will be equipped to implement effective parallel I/O strategies using MPI and HDF5 in your applications.

Taught by

Shelley Knuth and Thomas Hauser

Reviews

4.5 rating at Coursera based on 13 ratings

