Parallel Programming Models
The Pacman and Fish systems support multiple parallel programming models.
| Memory model | Method | Description |
|---|---|---|
| Shared-memory node | Auto | Automatic shared-memory parallel executables can be produced by compiling and linking with the compiler's auto-parallelization option. |
| Shared-memory node | OpenMP | This is a form of explicit parallel programming in which the programmer inserts directives into the program to spawn multiple shared-memory threads, typically at the loop level. It is common, portable, and relatively easy. On the downside, it requires shared memory, which limits scaling to the number of processors on a node. To activate OpenMP directives in your code, use the compiler's OpenMP option. |
| Shared-memory node | pthreads | The system supports POSIX threads. |
| Distributed memory system | MPI | This is the most common and portable method for parallelizing codes for scalable distributed-memory systems. MPI is a library of subroutines for message passing, collective operations, and other forms of inter-processor communication. The programmer is responsible for implementing data distribution, synchronization, and reassembly of results using explicit MPI calls. Using MPI, the programmer can largely ignore the physical organization of processors into nodes and simply treat the system as a collection of independent processors. |
The Fish system also supports the GPU and PGAS programming models.
| Memory model | Method | Description |
|---|---|---|
| Node-level GPU | GPU | Fish supports several programming models for interacting with GPU devices. The PGI and Cray compilers support OpenACC directives for offloading work to GPU devices. The PGI compiler supports CUDA Fortran and CUDA C. The NVIDIA nvcc compiler is also available. |
| Distributed memory system | PGAS | The Cray Compiler Environment supports the Partitioned Global Address Space (PGAS) languages: UPC and CoArray Fortran. |