University of Nizhni Novgorod

Faculty of Computational Mathematics & Cybernetics

Teaching Course: CS338. Introduction to Parallel Programming


Overview of Laboratory Works:

·        Parallel Programming with MPI  (4 hours)

Development of test parallel programs. Data communications, synchronization, collective operations. Local and distributed program execution. Evaluating program execution time.
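
As an illustration of point-to-point data communication, a minimal sketch of a test MPI program is given below; the message contents and process roles are chosen purely for illustration and at least two processes are assumed.

#include <mpi.h>
#include <stdio.h>

/* Illustrative sketch of point-to-point data communication:
   process 0 sends one integer to process 1, which prints it.
   At least two MPI processes are assumed. */
int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        int data = 42;                               /* example payload */
        MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int data;
        MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("process 1 received %d from process 0\n", data);
    }
    MPI_Finalize();
    return 0;
}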

Solving test computational problems: scalar product, numerical integration.
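
A minimal sketch of how such a test problem might be organized with a collective operation and execution-time measurement is shown below; vector contents and sizes are illustrative placeholders, not the actual lab code.

#include <mpi.h>
#include <stdio.h>

/* Illustrative sketch: parallel scalar product. Each process computes
   a partial sum over its block of indices, the partial sums are
   combined with the collective operation MPI_Reduce, and the elapsed
   time is measured with MPI_Wtime. */
int main(int argc, char *argv[]) {
    const int N = 1000000;                           /* example vector length */
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* block distribution of the index range among the processes */
    int chunk = N / size;
    int start = rank * chunk;
    int end   = (rank == size - 1) ? N : start + chunk;

    double t = MPI_Wtime();
    double local = 0.0;
    for (int i = start; i < end; i++) {
        double a = 1.0, b = 2.0;                     /* stand-ins for real vector elements */
        local += a * b;
    }
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    t = MPI_Wtime() - t;

    if (rank == 0)
        printf("scalar product = %f, time = %f s\n", global, t);
    MPI_Finalize();
    return 0;
}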

·        Parallel Programming with OpenMP  (4 hours)

Overview of the OpenMP technology: threads, parallel regions, distributing computations among the threads. Shared and private data. Critical sections. Synchronization.
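
The following minimal sketch illustrates a parallel region with shared and private data and a critical section; variable names and output are illustrative only.

#include <omp.h>
#include <stdio.h>

/* Illustrative sketch of an OpenMP parallel region with shared and
   private data and a critical section protecting a shared variable. */
int main(void) {
    int total = 0;                                   /* shared variable */
    #pragma omp parallel
    {
        int id = omp_get_thread_num();               /* private to each thread */
        printf("hello from thread %d of %d\n", id, omp_get_num_threads());
        #pragma omp critical
        total += id;                                 /* protected update of shared data */
    }
    printf("sum of thread ids = %d\n", total);
    return 0;
}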

Solving test computational problems: scalar product, matrix computations.
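
A possible OpenMP sketch of the scalar product using a reduction clause is given below; the vector length and contents are example values, not the actual lab code.

#include <omp.h>
#include <stdio.h>

#define N 1000000                                    /* example vector length */

static double a[N], b[N];

/* Illustrative sketch: scalar product with an OpenMP reduction.
   The loop iterations are distributed among the threads; each thread
   accumulates a private partial sum, and the sums are combined at the
   end of the loop. */
int main(void) {
    for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    double sum = 0.0;
    double t = omp_get_wtime();
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i] * b[i];
    t = omp_get_wtime() - t;

    printf("scalar product = %f, time = %f s, threads = %d\n",
           sum, t, omp_get_max_threads());
    return 0;
}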

·        Laboratory works for estimating the parallel method efficiency with the use of the ParaLab system (2 hours)

Overview of the ParaLab software laboratory as an integrated environment for carrying out computational experiments to estimate parallel method efficiency.

Modeling multiprocessor systems. Stating the problem to be solved and choosing a parallel method for solving it. Carrying out experiments in simulation mode. Analyzing the experimental results and estimating parallel method efficiency. Carrying out experiments in the mode of real parallel computations.

·        Laboratory works for developing the parallel algorithms and programs (14 hours)

Matrix computations: matrix-vector multiplication (Lab01), matrix multiplication (Lab02), solving systems of linear equations (Lab03).
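
One common variant of such a scheme, row-striped matrix-vector multiplication, might look as follows; this is only an illustrative sketch, not the Lab01 implementation, and the matrix order is assumed divisible by the number of processes.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch of row-striped matrix-vector multiplication:
   the matrix rows are distributed among the processes, the vector is
   replicated, and the partial results are gathered on process 0. */
int main(int argc, char *argv[]) {
    const int N = 512;                               /* example matrix order */
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int rows = N / size;                             /* rows per process (N assumed divisible by size) */

    double *stripe  = malloc(rows * N * sizeof(double));
    double *x       = malloc(N * sizeof(double));
    double *y_local = malloc(rows * sizeof(double));
    double *y       = (rank == 0) ? malloc(N * sizeof(double)) : NULL;

    /* fill the local stripe and the replicated vector with example data */
    for (int i = 0; i < rows * N; i++) stripe[i] = 1.0;
    for (int j = 0; j < N; j++) x[j] = 1.0;

    /* multiply the local stripe by the vector */
    for (int i = 0; i < rows; i++) {
        double s = 0.0;
        for (int j = 0; j < N; j++) s += stripe[i * N + j] * x[j];
        y_local[i] = s;
    }

    /* gather the partial result vectors on process 0 */
    MPI_Gather(y_local, rows, MPI_DOUBLE, y, rows, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("y[0] = %f\n", y[0]);

    free(stripe); free(x); free(y_local); free(y);
    MPI_Finalize();
    return 0;
}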

Parallel sorting: bubble sort, Shell sort, and quicksort (Lab04).
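
As an illustration of the parallel bubble sort idea, a sketch of odd-even transposition sort with OpenMP is shown below; the array size and data are example values, not the lab code.

#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch of odd-even transposition sort, a parallel
   variant of bubble sort: on even phases pairs (0,1), (2,3), ... are
   compared and exchanged, on odd phases pairs (1,2), (3,4), ...;
   the compare-exchange operations within one phase are independent
   and can be performed by different threads. */
static void compare_exchange(int *a, int i) {
    if (a[i] > a[i + 1]) { int t = a[i]; a[i] = a[i + 1]; a[i + 1] = t; }
}

int main(void) {
    const int N = 1000;                              /* example array size */
    int *a = malloc(N * sizeof(int));
    for (int i = 0; i < N; i++) a[i] = rand() % 10000;

    for (int phase = 0; phase < N; phase++) {
        int start = phase % 2;                       /* 0 = even phase, 1 = odd phase */
        #pragma omp parallel for
        for (int i = start; i < N - 1; i += 2)
            compare_exchange(a, i);
    }

    printf("a[0] = %d, a[N-1] = %d\n", a[0], a[N - 1]);
    free(a);
    return 0;
}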

Graph computations: finding a minimum spanning tree, searching for shortest paths (Lab05).
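
A sketch of the shortest-path computation, Floyd's all-pairs algorithm with the row loop parallelized by OpenMP, is shown below; the example graph and its edge weights are invented for illustration only.

#include <omp.h>
#include <stdio.h>

#define N 6                                          /* example number of vertices */
#define INF 1000000                                  /* "no edge" marker */

/* Illustrative sketch of Floyd's all-pairs shortest path algorithm:
   the iterations over k stay sequential (each depends on the previous
   one), while the row loop inside each step is safe to parallelize. */
static int d[N][N];

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            d[i][j] = (i == j) ? 0 : INF;
    /* example directed edges with weights */
    d[0][1] = 7; d[1][2] = 2; d[0][3] = 5; d[3][4] = 1; d[4][2] = 3; d[2][5] = 4;

    for (int k = 0; k < N; k++) {
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (d[i][k] + d[k][j] < d[i][j])
                    d[i][j] = d[i][k] + d[k][j];
    }

    printf("shortest path 0 -> 5: %d\n", d[0][5]);
    return 0;
}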

·        Laboratory works for the parallel solution of partial differential equations (4 hours)

General description of the finite difference method.

Step-by-step parallelization of the finite difference method for shared memory systems, demonstrating various issues of parallel computations (mutual exclusion, excessive synchronization, serialization and blocking, race conditions, indeterminacy of parallel calculations, the red/black alternation scheme, wavefront computations, block-oriented data distribution, load balancing, job queue).
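
The following sketch illustrates the red/black alternation scheme for a shared-memory Gauss-Seidel iteration; the grid size, boundary values, and stopping tolerance are example values only, and the max reduction requires OpenMP 3.1 or later.

#include <omp.h>
#include <stdio.h>
#include <math.h>

#define N 128                                        /* example grid size, including boundary points */

/* Illustrative sketch of the red/black alternation scheme for the
   Dirichlet problem: points with (i+j) even ("red") are updated first,
   then points with (i+j) odd ("black"); within each half-step the
   updates are independent, so the loop can be parallelized safely. */
static double u[N][N];

int main(void) {
    /* example boundary condition: u = 100 on the top edge, 0 elsewhere */
    for (int j = 0; j < N; j++) u[0][j] = 100.0;

    double eps = 1e-3, dmax;
    do {
        dmax = 0.0;
        for (int color = 0; color < 2; color++) {
            #pragma omp parallel for reduction(max:dmax)   /* requires OpenMP 3.1+ */
            for (int i = 1; i < N - 1; i++)
                for (int j = 1; j < N - 1; j++) {
                    if ((i + j) % 2 != color) continue;
                    double unew = 0.25 * (u[i-1][j] + u[i+1][j] +
                                          u[i][j-1] + u[i][j+1]);
                    double diff = fabs(unew - u[i][j]);
                    if (diff > dmax) dmax = diff;
                    u[i][j] = unew;
                }
        }
    } while (dmax > eps);

    printf("value at the grid center: %f\n", u[N/2][N/2]);
    return 0;
}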

Step-by-step parallelization of the finite difference method for distributed memory systems (block striped and checkerboard data decomposition schemes, multiple wave computations, data communications) (Lab06).
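
A sketch of the block-striped scheme with ghost-row exchange between neighboring processes is shown below; the grid size and iteration count are illustrative, not the Lab06 code, and the number of grid rows is assumed divisible by the number of processes.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch of block-striped decomposition for distributed
   memory: each process stores a horizontal stripe of the grid plus two
   "ghost" rows, and before every iteration exchanges its boundary rows
   with the neighboring processes. */
int main(int argc, char *argv[]) {
    const int N = 256;                               /* example grid width */
    const int ITER = 100;                            /* fixed iteration count for brevity */
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows = N / size;                             /* rows owned by this process (N assumed divisible by size) */
    double *u = calloc((rows + 2) * N, sizeof(double));  /* stripe plus ghost rows 0 and rows+1 */
    int up   = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int down = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* example boundary condition: top edge of the whole grid set to 100 */
    if (rank == 0)
        for (int j = 0; j < N; j++) u[j] = 100.0;

    for (int it = 0; it < ITER; it++) {
        /* exchange boundary rows with the neighboring processes */
        MPI_Sendrecv(&u[1 * N], N, MPI_DOUBLE, up, 0,
                     &u[(rows + 1) * N], N, MPI_DOUBLE, down, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[rows * N], N, MPI_DOUBLE, down, 1,
                     &u[0 * N], N, MPI_DOUBLE, up, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* in-place update of the interior points of the stripe */
        for (int i = 1; i <= rows; i++)
            for (int j = 1; j < N - 1; j++)
                u[i * N + j] = 0.25 * (u[(i-1) * N + j] + u[(i+1) * N + j] +
                                       u[i * N + j - 1] + u[i * N + j + 1]);
    }

    if (rank == 0)
        printf("u[1][%d] after %d iterations = %f\n", N / 2, ITER, u[N + N / 2]);
    free(u);
    MPI_Finalize();
    return 0;
}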

·        Laboratory works for using Microsoft Compute Cluster Server (4 hours)

Installation of Microsoft Compute Cluster Server 2003 (CCS_Lab01); executing jobs on the cluster under Microsoft Compute Cluster Server 2003 (CCS_Lab02); debugging parallel MPI programs with the help of Microsoft Visual Studio 2005 (CCS_Lab03).

·        Laboratory works for studying parallel method libraries (4 hours)

Overview of the parallel libraries ScaLAPACK, PLAPACK, PETSc, and Aztec. Using the libraries for solving test problems.
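
As one possible illustration of library use, a minimal PETSc sketch computing the dot product of two distributed vectors might look as follows; a real-valued PETSc build is assumed and error-code checking is omitted for brevity.

#include <petscvec.h>

/* Illustrative sketch: dot product of two distributed vectors with
   PETSc. The vectors are partitioned among the MPI processes by PETSc
   itself. */
int main(int argc, char **argv) {
    Vec x, y;
    PetscScalar dot;
    PetscInt n = 1000;                               /* example global vector length */

    PetscInitialize(&argc, &argv, NULL, NULL);
    VecCreate(PETSC_COMM_WORLD, &x);
    VecSetSizes(x, PETSC_DECIDE, n);
    VecSetFromOptions(x);
    VecDuplicate(x, &y);

    VecSet(x, 1.0);
    VecSet(y, 2.0);
    VecDot(x, y, &dot);
    PetscPrintf(PETSC_COMM_WORLD, "dot product = %g\n", (double)dot);  /* assumes real scalars */

    VecDestroy(&x);
    VecDestroy(&y);
    PetscFinalize();
    return 0;
}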

·        Laboratory works for the parallel solution of the multidimensional multiextremal optimization problem (4 hours)

General description of the computational scheme (statement of the global optimization problem, reduction of multidimensional problems to one-dimensional ones, information-statistical global optimization algorithms).

Multiple mappings of the Peano curve type for obtaining various grids in the search region of an optimization problem. Reducing the multidimensional optimization problem to a set of information-compatible one-dimensional problems. Parallel solving of the generated set of problems and the scheme of information exchange among them.
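
For reference, the reduction underlying this scheme can be written in a standard form, assuming the search region D is an N-dimensional box:

\min_{y \in D} \varphi(y) \;=\; \min_{t \in [0,1]} \varphi\bigl(y(t)\bigr),
\qquad D = \{\, y \in \mathbb{R}^{N} : a_i \le y_i \le b_i,\ 1 \le i \le N \,\},

where y(t) is a Peano-type space-filling curve mapping the interval [0,1] continuously onto D.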

Applicability of the approach discussed above to solving a number of time-consuming problems (integration, solving systems of non-linear equations, approximation, multicriteria optimization, etc.).

Laboratory works for solving multiextremal optimization problems with the use of the parallel software system Globalizer (problem setting, computation execution, data visualization, analysis of the computation results, changing parameters, and continuing the computations).

Laboratory Works with the use of the ParaLab system.