Using Partitioned Global Address Space for Algorithmic Differentiation

Partitioned Global Address Space (PGAS) is a model for parallel programming. It assumes a global memory address space that is logically partitioned, with a portion of it local to each processor. In the PGAS model, threads can access memory local to their processor much faster than the remote parts of the global address space. PGAS therefore helps the programmer write applications that get the highest performance out of a parallel computer by exploiting the locality of memory references.

Unified Parallel C (UPC) is a parallel extension of the C standard that follows the partitioned global address space programming model. UPC is designed for high-performance computing on large-scale parallel machines and supports SMP and NUMA systems (global memory) as well as cluster systems (distributed memory). UPC extends ISO C 99 with a shared address space, synchronization primitives, and a memory consistency model. Version 2 of Berkeley UPC supports building applications that are hybrids of UPC and C++ or that contain Message Passing Interface (MPI) calls.
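As a minimal illustration of these extensions (a generic sketch, not code taken from the thesis project), the following UPC fragment distributes a vector addition and a sum reduction over the threads; shared, upc_forall, MYTHREAD, THREADS, and upc_barrier are standard UPC constructs, while the problem size and the computed values are chosen arbitrarily.

    #include <upc.h>      /* UPC extensions: shared qualifier, upc_forall, barriers */
    #include <stdio.h>

    #define BLOCK 256     /* elements per thread; arbitrary example size */

    /* Shared arrays are distributed cyclically over the threads; each element
     * has affinity to exactly one thread's local partition. */
    shared double a[BLOCK*THREADS], b[BLOCK*THREADS], c[BLOCK*THREADS];
    shared double partial[THREADS];          /* one partial sum per thread */

    int main(void)
    {
        int i;
        double local_sum = 0.0;

        /* Each thread executes only the iterations whose element a[i] has
         * affinity to it (fourth upc_forall argument), exploiting locality. */
        upc_forall (i = 0; i < BLOCK*THREADS; i++; &a[i]) {
            a[i] = i;
            b[i] = 2.0 * i;
            c[i] = a[i] + b[i];
            local_sum += c[i];
        }

        partial[MYTHREAD] = local_sum;
        upc_barrier;                         /* all partial sums are now written */

        if (MYTHREAD == 0) {
            double sum = 0.0;
            for (i = 0; i < THREADS; i++)
                sum += partial[i];           /* reads from other threads' partitions */
            printf("sum = %f\n", sum);
        }
        return 0;
    }

Accesses to elements with local affinity are ordinary memory accesses, while accesses to remote elements may involve communication on a distributed-memory installation.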

Together with the research group of Prof. Andrea Walther (Institut für Mathematik), we are interested in using a PGAS language for algorithmic differentiation (AD) within the software package ADOL-C. ADOL-C is an open-source package for the automatic differentiation of C and C++ programs. The resulting evaluation routines for first- and higher-order derivatives can be called from C, C++, and Fortran. Currently, a version of the tool is implemented that generates C/C++ and MPI code.
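To give an impression of how ADOL-C is used, the following serial C++ sketch tapes a small, arbitrarily chosen function and evaluates its gradient with the gradient driver; it is independent of the MPI and UPC variants discussed here.

    #include <adolc/adolc.h>   // adouble type, trace_on/trace_off, drivers such as gradient()
    #include <cstdio>

    int main()
    {
        const int n = 2;             // number of independent variables
        const short tag = 1;         // tape identifier
        double x[n] = {1.0, 2.0};    // point of evaluation
        double y;                    // function value
        double g[n];                 // gradient

        // Taping phase: record f(x0, x1) = x0*x0*x1 + sin(x1) on tape 'tag'.
        trace_on(tag);
        adouble ax[n], ay;
        for (int i = 0; i < n; i++)
            ax[i] <<= x[i];          // mark independent variables and copy values
        ay = ax[0] * ax[0] * ax[1] + sin(ax[1]);
        ay >>= y;                    // mark dependent variable and extract value
        trace_off();

        // Derivative evaluation: first-order gradient driver on the recorded tape.
        gradient(tag, n, x, g);
        printf("f = %f, df/dx0 = %f, df/dx1 = %f\n", y, g[0], g[1]);
        return 0;
    }

The tag argument selects the recorded tape, so the same trace can be reused for repeated derivative evaluations at different points.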

The task of this thesis is to compare the MPI-based version of ADOL-C with a UPC-based version.

  • Porting of an MPI instance of ADOL-C to UPC (see the sketch after this list)
  • Performance evaluation: UPC instance vs. MPI instance
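To make the porting task more concrete, the sketch below contrasts a typical MPI_Allreduce of locally computed partial results with one possible UPC counterpart based on the standard collective upc_all_reduceD; the variable names are purely illustrative and do not refer to ADOL-C internals.

    #include <upc.h>
    #include <upc_collective.h>   /* standard UPC collectives, e.g. upc_all_reduceD */
    #include <stdio.h>

    /* One partial result per thread, e.g. a locally accumulated derivative value.
     * In the MPI instance the corresponding step would typically read
     *   MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
     */
    shared double partial[THREADS];
    shared double result;             /* reduction target, affinity to thread 0 */

    int main(void)
    {
        /* Placeholder for a thread-local contribution. */
        partial[MYTHREAD] = 1.0 + MYTHREAD;

        /* Sum all partial values into 'result'; blk_size 1 matches the
         * default cyclic layout of 'partial'. */
        upc_all_reduceD(&result, partial, UPC_ADD, THREADS, 1, NULL,
                        UPC_IN_ALLSYNC | UPC_OUT_ALLSYNC);

        /* Unlike MPI_Allreduce, the result lives in one shared location;
         * after the synchronized collective every thread can simply read it. */
        printf("thread %d sees sum = %f\n", MYTHREAD, (double)result);
        return 0;
    }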

The thesis can be written in English or German.

Prerequisites: Skills in C/C++ programming and the Linux operating system. Some basic experience in parallel computing is required. Motivation is assumed.

Resources: www.coin-or.org/projects/ADOL-C.xml


Dr. Jens Simon

Paderborn Center for Parallel Computing (PC2)

Head of HPC Operations

Phone: +49 5251 60-1731