Discontinuous Galerkin Library
#include "dg/algorithm.h"
Classes

    struct dg::MPISparseBlockMat< Vector, LocalMatrixInner, LocalMatrixOuter >
        Distributed memory Sparse block matrix class, asynchronous communication.
    struct dg::MPIDistMat< Vector, LocalMatrixInner, LocalMatrixOuter >
        Distributed memory matrix class, asynchronous communication.
    struct dg::MPI_Vector< container >
        A simple wrapper around a container object and an MPI_Comm.
Functions

    template<class real_type, class ConversionPolicyRows, class ConversionPolicyCols>
    auto dg::make_mpi_sparseblockmat( const EllSparseBlockMat< real_type, thrust::host_vector >& src, const ConversionPolicyRows& g_rows, const ConversionPolicyCols& g_cols)
        Split given EllSparseBlockMat into computation and communication part.

    template<class ConversionPolicy, class real_type>
    dg::MIHMatrix_t< real_type > dg::make_mpi_matrix( const dg::IHMatrix_t< real_type >& global_cols, const ConversionPolicy& col_policy)
        Convert a (row-distributed) matrix with local row and global column indices to a row distributed MPI matrix.

    template<class ConversionPolicy, class real_type>
    dg::IHMatrix_t< real_type > dg::convertGlobal2LocalRows( const dg::IHMatrix_t< real_type >& global, const ConversionPolicy& row_policy)
        Convert a (column-distributed) matrix with global row and column indices to a row distributed matrix.

    template<class ConversionPolicy, class real_type>
    void dg::convertLocal2GlobalCols( dg::IHMatrix_t< real_type >& local, const ConversionPolicy& policy)
        Convert a matrix with local column indices to a matrix with global column indices.
Contrary to a vector, a matrix can be distributed among processes in two ways: row-wise and column-wise. When we implement a matrix-vector multiplication, the order of communication and computation depends on the distribution of the matrix.
In a row-distributed matrix each process holds the rows of the matrix that correspond to the portion of the MPI_Vector it holds. When we implement a matrix-vector multiplication, each process first has to gather all the elements of the input vector it needs to compute the elements of its output. In general this requires MPI communication (see MPI distributed gather and scatter operations for more information on how global scatter/gather operations work). After the elements have been gathered into a buffer, the local matrix-vector multiplication can be executed. Formally, the gather operation can be written as a matrix \(G\) of \(1\)s and \(0\)s and we write
\[ M v = R\cdot G v \]
where \(R\) is the row-distributed matrix with modified indices into a buffer vector and \(G\) is the gather matrix, in which the MPI-communication takes place. In this way we achieve a simple split between communication \( w=Gv\) and computation \( Rw\). Since the computation of \( R w\) is entirely local we can reuse the existing implementation for shared memory systems.
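The following self-contained toy program illustrates this split with plain MPI and a simple three-point stencil; it is not the library's implementation and all names are illustrative. The halo values are gathered into a buffer first (the \( w=Gv\) step), then a purely local stencil acts on the buffer (the \( Rw\) step).

    // Toy sketch of M v = R (G v): gather halo values, then multiply locally.
    // Plain MPI for illustration only; not the dg library implementation.
    #include <mpi.h>
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main( int argc, char* argv[])
    {
        MPI_Init( &argc, &argv);
        int rank, size;
        MPI_Comm_rank( MPI_COMM_WORLD, &rank);
        MPI_Comm_size( MPI_COMM_WORLD, &size);

        const int nlocal = 4;                          // local vector elements per process
        std::vector<double> v( nlocal, double(rank));  // local part of the global vector

        // w = G v : gather the two halo elements this process needs (periodic topology)
        int left = (rank + size - 1) % size, right = (rank + 1) % size;
        std::vector<double> w( nlocal + 2);            // buffer = local part + 2 ghost cells
        std::copy( v.begin(), v.end(), w.begin() + 1);
        MPI_Sendrecv( &v[nlocal-1], 1, MPI_DOUBLE, right, 0,
                      &w[0],        1, MPI_DOUBLE, left,  0,
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv( &v[0],        1, MPI_DOUBLE, left,  1,
                      &w[nlocal+1], 1, MPI_DOUBLE, right, 1,
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        // R w : purely local stencil on the gathered buffer, no further communication
        std::vector<double> Mv( nlocal);
        for( int i=0; i<nlocal; i++)
            Mv[i] = w[i] - 2.*w[i+1] + w[i+2];

        printf( "rank %d: Mv[0] = %g\n", rank, Mv[0]);
        MPI_Finalize();
        return 0;
    }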
For a row-distributed matrix we can go one step further and separate the matrix \( M \) into
\[ M v = (M_i + M_o) v = (M_{i} + R_o\cdot G_o) v \]
where \( M_i\) is the inner matrix, which requires no communication, while \( M_o\) is the outer matrix containing all elements that require MPI communication. This enables overlapping communication and computation, which is implemented in the dg::MPIDistMat and dg::MPISparseBlockMat classes; a toy sketch of the idea follows below.
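The sketch below (again plain MPI, only illustrating the idea behind these two classes, with illustrative names) shows how the inner/outer split enables overlap: the halo exchange is started with non-blocking calls, the inner part \( M_i v\) is computed while the messages are in flight, and the outer part is applied once the buffer has arrived.

    // Toy sketch of overlapping communication and computation:
    // M v = M_i v + R_o (G_o v).  Plain MPI, illustrative names only.
    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    int main( int argc, char* argv[])
    {
        MPI_Init( &argc, &argv);
        int rank, size;
        MPI_Comm_rank( MPI_COMM_WORLD, &rank);
        MPI_Comm_size( MPI_COMM_WORLD, &size);

        const int nlocal = 4;
        std::vector<double> v( nlocal, double(rank)), Mv( nlocal, 0.);
        int left = (rank + size - 1) % size, right = (rank + 1) % size;
        double ghost_left = 0., ghost_right = 0.;

        // 1. Start the communication for the outer part ( w = G_o v )
        MPI_Request reqs[4];
        MPI_Irecv( &ghost_left,  1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv( &ghost_right, 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[1]);
        MPI_Isend( &v[nlocal-1], 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[2]);
        MPI_Isend( &v[0],        1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &reqs[3]);

        // 2. Compute the inner part M_i v while the messages are in flight
        for( int i=1; i<nlocal-1; i++)
            Mv[i] = v[i-1] - 2.*v[i] + v[i+1];

        // 3. Wait for the buffer, then apply the outer part R_o w
        MPI_Waitall( 4, reqs, MPI_STATUSES_IGNORE);
        Mv[0]        = ghost_left  - 2.*v[0]        + v[1];
        Mv[nlocal-1] = v[nlocal-2] - 2.*v[nlocal-1] + ghost_right;

        printf( "rank %d: Mv[0] = %g\n", rank, Mv[0]);
        MPI_Finalize();
        return 0;
    }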
You can create a row-distributed MPI matrix, given its local parts on each process with local row and global column indices, with the dg::make_mpi_matrix function. If you have a column-distributed matrix whose local parts on each process carry global row and local column indices, you can use a combination of dg::convertLocal2GlobalCols and dg::convertGlobal2LocalRows to bring it to row-distributed form; the result can then be passed to dg::make_mpi_matrix again. A sketch of the first path is given below.
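The sketch assumes a 2d MPI grid g as the ConversionPolicy and uses an interpolation matrix as the typical example of a locally assembled matrix with global column indices; the grid object and the dg::create::interpolation call are assumptions for illustration and are not taken from this page.

    // Sketch only: g is an MPI grid (the ConversionPolicy); xs, ys hold the
    // interpolation points owned by this process.
    // The local matrix has one row per local point and GLOBAL column indices
    // into the global grid g.global().
    dg::IHMatrix_t<double> local = dg::create::interpolation( xs, ys, g.global());
    dg::MIHMatrix_t<double> mpi_mat = dg::make_mpi_matrix( local, g);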
In a column-distributed matrix each process holds the columns of the matrix that correspond to the portion of the MPI_Vector it holds. Here the local matrix-vector multiplication can be executed first, because each process already has all the vector elements it needs. However, the resulting elements have to be communicated back to the processes they belong to, and each process has to sum all the elements it receives for the same index. This is a scatter-and-reduce operation, and it can be written with a scatter matrix \(S\)
\[ M v= S\cdot C v \]
where \(S\) is the scatter matrix and \(C\) is the column distributed matrix with modified indices. Again, we can reuse our shared memory algorithms to implement the local matrix-vector operation \( w=Cv\) before the communication step \( S w\).
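The following self-contained toy program (plain MPI, illustrative names, not the library's implementation) shows this column-distributed pattern: every process multiplies its columns into a full-length partial result (the \( w=Cv\) step), and MPI_Reduce_scatter_block then sums the contributions and delivers to each process the rows it owns (the \( Sw\) step).

    // Toy sketch of M v = S (C v) for a column-distributed matrix.
    // Plain MPI for illustration only; not the dg library implementation.
    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    int main( int argc, char* argv[])
    {
        MPI_Init( &argc, &argv);
        int rank, size;
        MPI_Comm_rank( MPI_COMM_WORLD, &rank);
        MPI_Comm_size( MPI_COMM_WORLD, &size);

        const int nlocal = 4, nglobal = nlocal*size;
        std::vector<double> v( nlocal, 1.);   // the vector elements owned by this process

        // w = C v : a partial result for ALL global rows, computed from the
        // columns (and vector elements) this process owns.
        std::vector<double> w( nglobal, 0.);
        for( int i=0; i<nglobal; i++)
            for( int k=0; k<nlocal; k++)
                w[i] += 1.0*v[k];             // toy matrix: all entries equal 1

        // S w : sum the partial results over all processes and scatter each
        // block of rows to the process that owns it (scatter and reduce).
        std::vector<double> Mv( nlocal);
        MPI_Reduce_scatter_block( w.data(), Mv.data(), nlocal,
                                  MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        printf( "rank %d: Mv[0] = %g (expect %d)\n", rank, Mv[0], nglobal);
        MPI_Finalize();
        return 0;
    }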
It turns out that a row-distributed matrix can be transposed by transposition of both the local matrix and the gather matrix:
\[ M^\mathrm{T} = G^\mathrm{T} R^\mathrm{T} = S C\]
The result is a column-distributed matrix with \( S = G^\mathrm{T}\) and \( C = R^\mathrm{T}\). Analogously, the transpose of a column-distributed matrix is a row-distributed matrix. It is also possible to convert a column-distributed MPI matrix to a row-distributed MPI matrix. In code:
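The following sketch of this conversion chain uses only the functions documented on this page; the starting matrix mat (the local part of a column-distributed matrix with global row and local column indices) and the grid object used as the ConversionPolicy are assumed to exist.

    // mat: global row indices, local column indices (column distributed)
    dg::convertLocal2GlobalCols( mat, grid);              // columns: local -> global
    // mat: global row indices, global column indices
    auto rows = dg::convertGlobal2LocalRows( mat, grid);  // rows sent to owning process
    // rows: local row indices, global column indices (row distributed)
    dg::MIHMatrix_t<double> mpi_mat = dg::make_mpi_matrix( rows, grid);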
dg::IHMatrix_t< real_type > dg::convertGlobal2LocalRows( const dg::IHMatrix_t< real_type >& global, const ConversionPolicy& row_policy)
Convert a (column-distributed) matrix with global row and column indices to a row distributed matrix.
Send all elements whose global row index does not belong to the calling process to the process where they belong. This can be used to convert a column-distributed matrix to a row-distributed matrix, as in the conversion sketch in the overview above.
Template Parameters
    ConversionPolicy: can be one of the MPI grids; it has to have the index conversion members (e.g. global2localIdx).
Parameters
    global: the row indices and num_rows need to be global
    row_policy: the conversion object
Note: num_cols of the returned matrix is the one from global.
void dg::convertLocal2GlobalCols( dg::IHMatrix_t< real_type >& local, const ConversionPolicy& policy)
Convert a matrix with local column indices to a matrix with global column indices.
Simply call policy.local2globalIdx for every column index, as in the conceptual sketch below.
Template Parameters
    ConversionPolicy: can be one of the MPI grids; it has to provide the local2globalIdx member used above.
Parameters
    local: the column indices and num_cols need to be local; they will be global on output
    policy: the conversion object
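Conceptually the conversion is a loop like the following sketch; this is not the library source, and the exact signature of local2globalIdx (assumed here to take the local index, the calling rank, and an output reference) as well as the communicator() member are assumptions that may differ from the actual grid interface.

    // Conceptual sketch of convertLocal2GlobalCols, NOT the library source.
    // cols is the matrix's column index array, policy the conversion object.
    int rank = 0;
    MPI_Comm_rank( policy.communicator(), &rank);   // communicator() assumed
    for( auto& col : cols)
    {
        int global = 0;
        policy.local2globalIdx( col, rank, global); // assumed signature
        col = global;
    }
    // afterwards num_cols must be overwritten with the global column size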
dg::MIHMatrix_t< real_type > dg::make_mpi_matrix( const dg::IHMatrix_t< real_type >& global_cols, const ConversionPolicy& col_policy)
Convert a (row-distributed) matrix with local row and global column indices to a row distributed MPI matrix.
Template Parameters
    ConversionPolicy: can be one of the MPI grids; it has to have the index conversion members (e.g. global2localIdx).
Parameters
    global_cols: the local part of the matrix (different on each process) with global column indices and num_cols, but local row indices and num_rows
    col_policy: the conversion object
auto dg::make_mpi_sparseblockmat( const EllSparseBlockMat< real_type, thrust::host_vector >& src, const ConversionPolicyRows& g_rows, const ConversionPolicyCols& g_cols)
Split a given EllSparseBlockMat into a computation and a communication part.
Parameters
    src: global rows and global cols; right_size and left_size must have correct local sizes
    g_rows: 1-dimensional grid for the rows; needed for local2globalIdx and global2localIdx
    g_cols: 1-dimensional grid for the columns
Note: the MPI communicators of g_rows and g_cols need to be at least MPI_CONGRUENT.
Returns
    dg::MHMatrix_t< real_type >
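A usage sketch, under the assumption that the source matrix is a 1d derivative assembled on the global grid and that g1d is a suitable 1d MPI grid; the grid type, its global() and bcx() accessors, and the dg::create::dx overload are assumptions for illustration.

    // Sketch only: g1d is assumed to be a 1d MPI grid (the ConversionPolicy).
    // The derivative is assembled with GLOBAL rows and columns on the global grid.
    auto src    = dg::create::dx( g1d.global(), g1d.bcx());     // EllSparseBlockMat
    auto mpi_dx = dg::make_mpi_sparseblockmat( src, g1d, g1d);  // dg::MHMatrix_t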