Discontinuous Galerkin Library
#include "dg/algorithm.h"
MPI utility functions

Topics

 Legacy MPI functions
 

Functions

void dg::mpi_init (int argc, char *argv[])
 Convenience shortcut: Calls MPI_Init or MPI_Init_thread and inits CUDA devices.
 
template<class T >
std::vector< T > dg::mpi_read_as (unsigned num, MPI_Comm comm, std::istream &is=std::cin)
 Read num values from is and broadcast to all processes as type T.
 
void dg::mpi_read_grid (unsigned &n, std::vector< unsigned > &N, MPI_Comm comm, std::istream &is=std::cin, bool verbose=true, std::ostream &os=std::cout)
 Read in grid sizes from is.
 
void dg::mpi_read_grid (unsigned &n, std::vector< unsigned * > N, MPI_Comm comm, std::istream &is=std::cin, bool verbose=true, std::ostream &os=std::cout)
 Convenience shortcut that accepts pointers to individual grid sizes.
 
MPI_Comm dg::mpi_cart_create (MPI_Comm comm_old, std::vector< int > dims, std::vector< int > periods, bool reorder=true)
 Convenience call to MPI_Cart_create preceded by MPI_Dims_create.
 
MPI_Comm dg::mpi_cart_create (std::vector< dg::bc > bcs, std::istream &is=std::cin, MPI_Comm comm_old=MPI_COMM_WORLD, bool reorder=true, bool verbose=true, std::ostream &os=std::cout)
 Convenience call: read in number of processes from istream and create Cartesian MPI communicator.
 
MPI_Comm dg::mpi_cart_sub (MPI_Comm comm, std::vector< int > remain_dims, bool duplicate=false)
 Call and register a call to MPI_Cart_sub with the dg library.
 
MPI_Comm dg::mpi_cart_kron (std::vector< MPI_Comm > comms)
 Form a Kronecker product among Cartesian communicators.
 
template<class Vector >
MPI_Comm dg::mpi_cart_kron (Vector comms)
 Convenience shortcut for return mpi_cart_kron( std::vector<MPI_Comm>(comms.begin(), comms.end()));
 
std::vector< MPI_Comm > dg::mpi_cart_split (MPI_Comm comm)
 Split a Cartesian communicator along each dimension.
 
template<size_t Nd>
std::array< MPI_Comm, Nd > dg::mpi_cart_split_as (MPI_Comm comm)
 Same as mpi_cart_split but with a different return type.
 

Detailed Description

Function Documentation

◆ mpi_cart_create() [1/2]

MPI_Comm dg::mpi_cart_create ( MPI_Comm comm_old,
std::vector< int > dims,
std::vector< int > periods,
bool reorder = true )
inline

Convenience call to MPI_Cart_create preceded by MPI_Dims_create.

This function is equivalent to

int size;
MPI_Comm_size( comm_old, &size);
int ndims = dims.size();
MPI_Dims_create( size, ndims, &dims[0]);
MPI_Comm comm_cart;
MPI_Cart_create( comm_old, ndims, &dims[0], &periods[0], reorder, &comm_cart);
return comm_cart;
Parameters
comm_old: input communicator (handle) (parameter used in MPI_Cart_create)
dims: specifies the number of processes in each dimension (dims.size() determines the ndims parameter used in MPI_Cart_create). Elements can be 0, in which case a distribution is chosen automatically in that direction.
periods: logical array of size ndims specifying whether the grid is periodic (true) or not (false) in each dimension (parameter used in MPI_Cart_create)
reorder: (parameter used in MPI_Cart_create)
Note
most MPI libraries ignore the reorder parameter
Returns
communicator with new Cartesian topology (handle)
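
For example, a 2d Cartesian communicator with an automatically chosen process distribution could be created as follows (a sketch; the dims and periods values are illustrative):

// let MPI choose the process distribution in both dimensions (dims entries are 0);
// periodic in the first dimension, non-periodic in the second
MPI_Comm comm2d = dg::mpi_cart_create( MPI_COMM_WORLD, {0,0}, {1,0});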

◆ mpi_cart_create() [2/2]

MPI_Comm dg::mpi_cart_create ( std::vector< dg::bc > bcs,
std::istream & is = std::cin,
MPI_Comm comm_old = MPI_COMM_WORLD,
bool reorder = true,
bool verbose = true,
std::ostream & os = std::cout )
inline

Convenience call: read in number of processes from istream and create Cartesian MPI communicator.

The intended use for this function is as a setup module for application codes. This function does:

  1. [verbose, rank0] print a query to os
  2. Call auto np = mpi_read_as<int>( bcs.size(), comm_old, is);
  3. [rank0] read bcs.size() integers from is
  4. [verbose, rank0] print read integers back to os
  5. [verbose, rank0, cuda_aware_mpi] print GPU information to os
  6. Call MPI_Cart_create and return Cartesian comm
Parameters
bcs: if bcs[u]==dg::PER then the communicator is periodic in that dimension
is: Input stream from which rank 0 reads the bcs.size() parameters (np[u])
comm_old: input communicator (handle) (parameter used in MPI_Cart_create)
reorder: (parameter used in MPI_Cart_create)
Note
most MPI libraries ignore the reorder parameter
Parameters
verbose: If true, rank 0 prints queries and information to os. In this case bcs.size()<=6 is required.
os: Output stream used if verbose==true
Returns
communicator with new Cartesian topology (handle)
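
A typical setup call in a 2d application might look as follows (a sketch; the boundary conditions are illustrative and rank 0 reads the process numbers from std::cin):

// periodic in the first, non-periodic in the second dimension;
// rank 0 reads np[0] and np[1] from std::cin and broadcasts them
MPI_Comm comm2d = dg::mpi_cart_create( {dg::PER, dg::DIR}, std::cin, MPI_COMM_WORLD);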

◆ mpi_cart_kron() [1/2]

MPI_Comm dg::mpi_cart_kron ( std::vector< MPI_Comm > comms)
inline

Form a Kronecker product among Cartesian communicators.

Find the communicator that hypothetically generated all input comms through MPI_Cart_sub( return_comm, remain_dims[u], comms[u]); for all u < comms.size().

Unless comms is empty or contains only one communicator, all input comms must be registered in the dg library as Cartesian communicators that derive from the same root Cartesian communicator. Furthermore, the comms must be mutually orthogonal, i.e. any true entry in remain_dims can exist in exactly one comm. The resulting remain_dims of the output is then the union of all remain_dims of the inputs.

For example

// Create a Cartesian communicator
MPI_Comm comm = dg::mpi_cart_create( MPI_COMM_WORLD, {0,0,0}, {0,0,0});
// Split into 3 axes
std::array<MPI_Comm,3> axes3d = dg::mpi_cart_split_as<3>( comm);
// Join the first and third axis
MPI_Comm comm_101 = dg::mpi_cart_kron( {axes3d[0], axes3d[2]});
// Split up again
std::array<MPI_Comm,2> axes2d = dg::mpi_cart_split_as<2>( comm_101);
CHECK( axes2d[0] == axes3d[0]);
CHECK( axes2d[1] == axes3d[2]);
Attention
The order of communicators matters. The function will not transpose communicators.
Parameters
comms: input communicators
Returns
Kronecker product of communicators
Note
The reason the dg library provides dg::mpi_cart_sub and dg::mpi_cart_kron is that unfortunately the MPI standard does not provide a way to form the Kronecker product of Cartesian communicators without manually tracking their parent Cartesian communicators. However, this is needed in the dg::blas1::kronecker and dg::kronecker functions.

◆ mpi_cart_kron() [2/2]

template<class Vector >
MPI_Comm dg::mpi_cart_kron ( Vector comms)

Convenience shortcut for return mpi_cart_kron( std::vector<MPI_Comm>(comms.begin(), comms.end()));

◆ mpi_cart_split()

std::vector< MPI_Comm > dg::mpi_cart_split ( MPI_Comm comm)
inline

Split a Cartesian communicator along each dimension.

using repeated calls to dg::mpi_cart_sub.

Parameters
comm: input Cartesian communicator
Returns
Array of 1-dimensional Cartesian communicators. The i-th communicator corresponds to the i-th axis in comm
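
For example, a 3d Cartesian communicator can be split into its three 1d axes (a sketch):

// create a 3d Cartesian communicator
MPI_Comm comm = dg::mpi_cart_create( MPI_COMM_WORLD, {0,0,0}, {0,0,0});
// axes.size() == 3; axes[i] contains the processes along the i-th dimension of comm
std::vector<MPI_Comm> axes = dg::mpi_cart_split( comm);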

◆ mpi_cart_split_as()

template<size_t Nd>
std::array< MPI_Comm, Nd > dg::mpi_cart_split_as ( MPI_Comm comm)

Same as mpi_cart_split but with a different return type.

// Create a Cartesian communicator
MPI_Comm comm = dg::mpi_cart_create( MPI_COMM_WORLD, {0,0,0}, {0,0,0});
// Split into 3 axes
std::array<MPI_Comm,3> axes3d = dg::mpi_cart_split_as<3>( comm);
// Join the first and third axis
MPI_Comm comm_101 = dg::mpi_cart_kron( {axes3d[0], axes3d[2]});
// Split up again
std::array<MPI_Comm,2> axes2d = dg::mpi_cart_split_as<2>( comm_101);
CHECK( axes2d[0] == axes3d[0]);
CHECK( axes2d[1] == axes3d[2]);
Template Parameters
Nd: Number of dimensions to copy from mpi_cart_split
Parameters
comm: input Cartesian communicator ( Nd <= comm.ndims)
Returns
Array of 1-dimensional Cartesian communicators

◆ mpi_cart_sub()

MPI_Comm dg::mpi_cart_sub ( MPI_Comm comm,
std::vector< int > remain_dims,
bool duplicate = false )
inline

Call and register a call to MPI_Cart_sub with the dg library.

Parameters
commcommunicator with Cartesian structure (handle) (parameter used in MPI_Cart_sub)
remain_dimsthe i-th entry of remain_dims specifies whether the i-th dimension is kept in the subgrid (true) or is dropped (false), must have ndims entries. (parameter used in MPI_Cart_sub)
duplicateDetermines what happens in case MPI_Cart_sub was already registered with comm and the same remain_dims. True: call MPI_Cart_sub and generate a novel communicator even if a duplicate exists. False: first check if a communicator that was subbed from comm with remain_dims was previously registered. In case one is found return existing_comm;. Else, call and register MPI_Cart_sub.
Returns
communicator containing the subgrid that includes the calling process (handle) (parameter used in MPI_Cart_sub)
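
For example, keeping only the first dimension of a 3d Cartesian communicator (a sketch):

MPI_Comm comm3d = dg::mpi_cart_create( MPI_COMM_WORLD, {0,0,0}, {0,0,0});
// keep the first dimension, drop the other two
MPI_Comm comm_x = dg::mpi_cart_sub( comm3d, {1,0,0});
// with duplicate == false a second identical call returns the registered communicator
MPI_Comm same = dg::mpi_cart_sub( comm3d, {1,0,0}); // same == comm_x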
Note
The reason the dg library provides dg::mpi_cart_sub and dg::mpi_cart_kron is that unfortunately the MPI standard does not provide a way to form the Kronecker product of Cartesian communicators without manually tracking their parent Cartesian communicators. However, this is needed in the dg::blas1::kronecker and dg::kronecker functions.

◆ mpi_init()

void dg::mpi_init ( int argc,
char * argv[] )
inline

Convenience shortcut: Calls MPI_Init or MPI_Init_thread and inits CUDA devices.

Shortcut for

#ifdef _OPENMP
int provided, error;
error = MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
assert( error == MPI_SUCCESS && "Threaded MPI lib required!\n");
#else
MPI_Init(&argc, &argv);
#endif
//... init cuda devices if THRUST_DEVICE_SYSTEM_CUDA
Note
Also sets the GPU a process should use via cudaSetDevice( rank % num_devices_per_node) if THRUST_DEVICE_SYSTEM == THRUST_DEVICE_SYSTEM_CUDA . We assume that the number of GPUs per node is fixed.
Attention
Aborts the program if _OPENMP is defined but the MPI library does not support threads, or if no CUDA capable devices are found in case of THRUST_DEVICE_SYSTEM_CUDA
Parameters
argc: command line argument number
argv: command line arguments
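
A minimal main program using this shortcut might look as follows (a sketch):

#include <mpi.h>
#include "dg/algorithm.h"

int main( int argc, char* argv[])
{
    dg::mpi_init( argc, argv); // MPI_Init (or MPI_Init_thread) and GPU selection
    // ... set up grids and run the application ...
    MPI_Finalize();
    return 0;
}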

◆ mpi_read_as()

template<class T >
std::vector< T > dg::mpi_read_as ( unsigned num,
MPI_Comm comm,
std::istream & is = std::cin )

Read num values from is and broadcast to all processes as type T.

Only rank 0 in comm reads from is.

Template Parameters
T: the type of value to read from is
Parameters
num: The number of values to read
comm: The communicator to which to broadcast
is: Input stream from which rank 0 reads num values of type T
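
For example, rank 0 can read two integers from std::cin and broadcast them to all processes (a sketch):

// all ranks receive the same two values read by rank 0
std::vector<int> np = dg::mpi_read_as<int>( 2, MPI_COMM_WORLD, std::cin);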

◆ mpi_read_grid() [1/2]

void dg::mpi_read_grid ( unsigned & n,
std::vector< unsigned * > N,
MPI_Comm comm,
std::istream & is = std::cin,
bool verbose = true,
std::ostream & os = std::cout )
inline

Convenience shortcut allowing a call like.

mpi_read_grid( n, {&Nx, &Ny}, comm, is, verbose, os);

◆ mpi_read_grid() [2/2]

void dg::mpi_read_grid ( unsigned & n,
std::vector< unsigned > & N,
MPI_Comm comm,
std::istream & is = std::cin,
bool verbose = true,
std::ostream & os = std::cout )
inline

Read in grid sizes from is.

The intended use for this function is as a setup module for application codes that use our dg::Grid classes. Basically it just does:

  1. [verbose, rank0] print a query to os
  2. Call auto vals = dg::mpi_read_as<unsigned>( 1 + N.size(), comm, is);
  3. [verbose, rank0] print read values back to os
Parameters
n: rank 0 reads it from is and broadcasts it to all processes in MPI_COMM_WORLD
N: rank 0 reads it from is and broadcasts it to all processes in MPI_COMM_WORLD
comm: communicator (handle) to which rank 0 broadcasts
is: Input stream from which rank 0 reads the parameters (n, N)
verbose: If true, rank 0 prints queries and information to os
os: Output stream used if verbose==true
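
For example, reading the polynomial order n and the cell numbers of a 2d grid (a sketch; the variable names are illustrative):

unsigned n;
std::vector<unsigned> N(2); // N[0] = Nx, N[1] = Ny after the call
dg::mpi_read_grid( n, N, MPI_COMM_WORLD, std::cin); // rank 0 reads n, Nx, Ny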
See also
dg::aRealTopologyNd