Parallel programming in Chapel



June 12th
1:30pm-4:30pm Pacific Time

Chapel is a modern programming language designed for both shared and distributed memory systems, offering high-level, easy-to-use abstractions for task and data parallelism. Its intuitive syntax makes it an excellent choice for novice HPC users learning parallel programming. Chapel supports a wide range of parallel hardware – from multicore processors and multi-node clusters to GPUs – using consistent syntax and concepts across all levels of hardware parallelism.
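To give a taste of what this looks like, here is a minimal sketch (not taken from the course materials) of a data-parallel loop in Chapel: the `forall` statement divides the iterations among all available cores, and the same construct extends to distributed arrays and GPUs later in the course.

```chapel
// fill an array in parallel: the iterations of a forall loop are
// divided among all available processor cores
var A: [1..1000] real;
forall i in 1..1000 do
  A[i] = 2.0 * i;
writeln(A[1..5]);   // print the first five elements
```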

Chapel dramatically reduces the complexity of parallel coding by combining the simplicity of Python-style programming with the performance of compiled languages like C and Fortran. Parallel operations that might require dozens of lines in MPI can often be written in just a few lines of Chapel. As an open-source language, it runs on most Unix-like operating systems and scales from laptops to large HPC systems.
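As an illustration of this claim, here is a sketch of a complete multi-locale Chapel program that sums a block-distributed array, something that would take considerably more code with MPI. It assumes a recent Chapel version (1.32 or newer) in which the BlockDist module provides `blockDist.createDomain`:

```chapel
use BlockDist;                              // distribute data across locales (nodes)

config const n = 1_000_000;
const D = blockDist.createDomain({1..n});   // block-distributed 1D domain
var A: [D] real = 1.0;                      // distributed array, all elements set to 1.0
writeln("sum = ", + reduce A);              // parallel reduction across all locales
```

Launched on, say, four locales with `./sum -nl 4`, each node computes a partial sum over the elements it owns, and the reduction combines them into the final answer.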

This course begins with Chapel fundamentals and then focuses on data parallelism through two numerical examples: one embarrassingly parallel and one tightly coupled. We'll also briefly explore task parallelism (a more complex topic, and not the primary focus of this course). Finally, we'll introduce GPU programming with Chapel.

Instructor: Alex Razoumov (SFU)

Prerequisites: basic understanding of HPC at the introductory level (how to submit jobs with the Slurm scheduler) and basic knowledge of the Linux command line.

Software: For the hands-on exercises, we will use Chapel on our training cluster. To access the training cluster, you will need a remote secure shell (SSH) client installed on your computer. On Windows we recommend the free Home Edition of MobaXterm. On Mac and Linux, SSH is usually pre-installed (type ssh in a terminal to make sure it is there). We will provide guest accounts to all participants. You do not need to install Chapel on your own computer.


Part 1: basic language features  
Introduction to Chapel
Basic syntax and variables – Julia set description
Ranges and arrays
Control flow
Using command-line arguments
Measuring code performance
Writing output
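
As a preview of the Part 1 topics, here is a small sketch combining config variables (set from the command line), ranges, arrays, timing, and output. It assumes Chapel 1.29 or newer, where the Time module provides the stopwatch type:

```chapel
use Time;                        // for the stopwatch timer

config const n = 10;             // can be overridden on the command line
var watch: stopwatch;
watch.start();

var A: [1..n] real;              // array over the range 1..n
for i in 1..n do                 // serial loop for now; parallel versions come in Part 2
  A[i] = i:real / n;

watch.stop();
writeln("A = ", A);
writeln("elapsed time: ", watch.elapsed(), " s");
```

Running the compiled program as `./a.out --n=100` overrides the default value of n without recompiling.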


Part 2: data parallelism  
Intro to parallel computing
Single-locale data parallelism
Multi-locale Chapel
Domains and data parallelism
Parallel Julia set
Heat transfer solver on distributed domains – heat transfer description
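
The following sketch previews these ideas: a block-distributed 2D domain with a forall loop that runs wherever the data lives. As before, the `blockDist.createDomain` form assumes Chapel 1.32 or newer; older versions use the dmapped syntax instead.

```chapel
use BlockDist;

config const n = 8;
const D = blockDist.createDomain({1..n, 1..n});   // 2D domain split across locales
var T: [D] real;
forall (i, j) in D do       // each locale computes the elements it owns
  T[i, j] = here.id;        // mark each element with its owner's locale ID
writeln(T);
```

Launched on several locales (e.g. `./a.out -nl 4`), the printed array shows which node owns each block of elements.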


Part 3: task parallelism (briefly)  
Fire-and-forget tasks
Synchronization of tasks
Task-parallelizing the heat transfer solver
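
For a taste of Part 3, here is a minimal sketch of fire-and-forget tasks: each `begin` statement launches an asynchronous task, and the enclosing `sync` block waits for all of them to finish.

```chapel
sync {                                    // wait for all tasks spawned inside this block
  begin writeln("hello from task 1");     // begin launches a new task
  begin writeln("hello from task 2");     // and returns immediately
}
writeln("all tasks have finished");
```

The two greetings may appear in either order, since the tasks run concurrently.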


Part 4: GPU computing with Chapel  
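
Chapel's GPU support uses the same on-clause and loop constructs as the rest of the language. The sketch below assumes a GPU-enabled Chapel build (CHPL_LOCALE_MODEL=gpu) and offloads a loop to the first GPU of the current node:

```chapel
config const n = 1000;
var AHost: [1..n] int;        // host-side copy of the results
on here.gpus[0] {             // run this block on the node's first GPU sublocale
  var A: [1..n] int;          // allocated in GPU memory
  foreach i in 1..n do        // order-independent loop, compiled into a GPU kernel
    A[i] = i**2;
  AHost = A;                  // copy the results back to host memory
}
writeln(AHost[1..5]);
```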

Solutions

You can find the solutions here.