The world's largest supercomputers are used almost exclusively to run applications which are parallelised using Message Passing. The course covers all the basic knowledge required to write parallel programs using this programming model, and is directly applicable to almost every parallel computer architecture.
Parallel programming by definition involves co-operation between processors to solve a common task. The programmer has to define the tasks that will be executed by the processors, and also how these tasks are to synchronise and exchange data with one another. In the message-passing model the tasks are separate processes that communicate and synchronise by explicitly sending each other messages. All these parallel operations are performed via calls to a message-passing library that is entirely responsible for handling the physical communication network linking the actual processors together. This course uses the de facto standard for message passing, the Message Passing Interface (MPI). It covers point-to-point communication, non-blocking operations, derived datatypes, virtual topologies, collective communication and general design issues.
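To give a flavour of what this looks like in practice, here is a minimal illustrative sketch in C (not taken from the course materials) in which process 0 sends a single integer to process 1 using the kind of point-to-point calls covered in the course:

#include <stdio.h>
#include <mpi.h>

/* Minimal point-to-point example: run with at least two processes. */
int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start up the MPI library   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I?        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes in all? */

    if (rank == 0)
    {
        int data = 42;
        /* Send one integer to process 1 with message tag 0. */
        MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    }
    else if (rank == 1)
    {
        int data;
        /* Blocking receive: returns once the message has arrived. */
        MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process %d of %d received %d\n", rank, size, data);
    }

    MPI_Finalize();                        /* shut down the MPI library  */
    return 0;
}

Compile with mpicc and launch with, for example, mpirun -n 2 ./hello (the exact launch command varies between systems; on ARCHER2 parallel jobs are launched with srun).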
This course will be run over two days, slightly shorter than the normal three-day format, using a variety of methods including formal lectures, practical exercises, programming examples and informal tutorial discussions. This enables lecture material to be supported by the tutored practical sessions in order to reinforce the key concepts.
On completion of this course students should be able to:
- Understand the message-passing model in detail.
- Implement standard message-passing algorithms in MPI.
- Debug simple MPI codes.
- Measure and comment on the performance of MPI codes.
- Design and implement efficient parallel programs to solve regular-grid problems.
Pre-requisite Programming Languages:
C, C++ or Fortran. The course does not cover the details of how to use MPI from Python.
Dates: 26th - 27th November 2024, Birmingham
Location: Elm House, Edgbaston Park, 351 Bristol Road
Day 1
- 09:00 Logging on to ARCHER2 (you can skip this if you have successfully logged on already)
- 09:30 Message-Passing Concepts
- 10:15 Practical: Parallel Traffic Modelling
- 10:45 Break
- 11:15 MPI Programs
- 12:00 MPI Programs on ARCHER2
- 12:15 Practical: Hello World
- 12:30 Lunch
- 13:30 Point-to-Point Communication
- 14:15 Practical: Pi
- 15:00 Break
- 15:30 Communicators, Tags and Modes
- 16:15 Practical: Pi / Ping-Pong
- 17:00 Finish
Day 2
- 09:00 Practical: Pi / Ping-Pong (cont)
- 09:30 Pi Solution
- 10:00 Non-Blocking Communication
- 10:30 Practical: Message Round a Ring
- 10:45 Break
- 11:15 Practical: Message Round a Ring (cont)
- 12:00 Collective Communication
- 12:30 Lunch
- 13:30 Practical: Collective Communication
- 14:00 Virtual Topologies
- 14:30 Practical: Message Round a Ring (cont)
- 15:00 Break
- 15:15 Derived Data Types
- 16:00 Practical: Message Round a Ring (cont)
- 16:30 Case Study
- 17:00 Finish
Unless otherwise indicated all material is Copyright © EPCC, The University of Edinburgh, and is only made available for private study.
- Overview of MPI course
- Message-Passing Concepts
- Parallel Traffic Modelling
- Parallel Traffic Modelling: solution
- MPI Programs
- MPI on Cirrus and ARCHER2
- Point-to-Point Communication
- Communicators, Tags and Modes
- Non-Blocking Communication
- Collective Communication
- Virtual Topologies
- Derived Data Types
- Case Study
- MPI Tips and Tricks (includes dynamic memory allocation in C and array syntax issues in Fortran)
- MPI Scaling
- Traffic modelling exercise sheet
- Instructions for logging on, compiling and running MPI jobs on ARCHER2
- Useful files and pieces of code: MPP-templates.tar
- MPI exercise sheet
- Detailed solutions to pi calculation example
- Simple example solutions to all exercises
- Case Study exercise sheet
- Case Study source code
- Simple Case Study solutions (serial)
- Simple Case Study solutions (parallel)
- Code for dynamic array allocation in C
- Serial and parallel solutions to the traffic model
Note that all registered users will be given access to the ARCHER2 system. Although having MPI installed on your laptop may be convenient, do not worry if these instructions do not work for you.
Linux users need to install the GNU compilers and a couple of MPI packages, e.g. for Ubuntu:
user@ubuntu$ sudo apt install gcc
user@ubuntu$ sudo apt install openmpi-bin
user@ubuntu$ sudo apt install libopenmpi-dev
Mac users need to install compilers from the Xcode developer package. It is easiest to install MPI using the Homebrew package manager; see the instructions on how to install Xcode and Homebrew.
Now install Open MPI:
user@mac$ brew install open-mpi
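Whichever platform you are on, a quick way to check that a local installation works is to compile and run a trivial MPI program (the file name hello.c here is just an example):
user@local$ mpicc -o hello hello.c
user@local$ mpirun -n 2 ./hello
Both mpicc and mpirun should be provided by the Open MPI packages installed above.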
Rather than installing MPI locally, we recommend that Windows users access ARCHER2 using MobaXterm.
If you want to try local access to MPI, one solution is to install a Linux virtual machine (e.g. Ubuntu) and follow the Linux installation instructions above.
I know that some users have been able to install MPI compilers and libraries natively on Windows using the Intel® oneAPI HPC Toolkit.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.