Discontinuous Galerkin Library
#include "dg/algorithm.h"
Topics
    Legacy MPI functions

Functions
void dg::mpi_init (int argc, char *argv[])
    Convenience shortcut: calls MPI_Init or MPI_Init_thread and initializes CUDA devices.

template<class T>
std::vector<T> dg::mpi_read_as (unsigned num, MPI_Comm comm, std::istream &is = std::cin)
    Read num values from is and broadcast them to all processes as type T.

void dg::mpi_read_grid (unsigned &n, std::vector<unsigned> &N, MPI_Comm comm, std::istream &is = std::cin, bool verbose = true, std::ostream &os = std::cout)
    Read in grid sizes from is.

void dg::mpi_read_grid (unsigned &n, std::vector<unsigned*> N, MPI_Comm comm, std::istream &is = std::cin, bool verbose = true, std::ostream &os = std::cout)
    Convenience shortcut allowing a call with individually named grid sizes.

MPI_Comm dg::mpi_cart_create (MPI_Comm comm_old, std::vector<int> dims, std::vector<int> periods, bool reorder = true)
    Convenience call to MPI_Cart_create preceded by MPI_Dims_create.

MPI_Comm dg::mpi_cart_create (std::vector<dg::bc> bcs, std::istream &is = std::cin, MPI_Comm comm_old = MPI_COMM_WORLD, bool reorder = true, bool verbose = true, std::ostream &os = std::cout)
    Convenience call: read in the number of processes from an istream and create a Cartesian MPI communicator.

MPI_Comm dg::mpi_cart_sub (MPI_Comm comm, std::vector<int> remain_dims, bool duplicate = false)
    Call and register a call to MPI_Cart_sub with the dg library.

MPI_Comm dg::mpi_cart_kron (std::vector<MPI_Comm> comms)
    Form a Kronecker product among Cartesian communicators.

template<class Vector>
MPI_Comm dg::mpi_cart_kron (Vector comms)
    Convenience shortcut for return mpi_cart_kron( std::vector<MPI_Comm>(comms.begin(), comms.end()));

std::vector<MPI_Comm> dg::mpi_cart_split (MPI_Comm comm)
    Split a Cartesian communicator along each dimension.

template<size_t Nd>
std::array<MPI_Comm, Nd> dg::mpi_cart_split_as (MPI_Comm comm)
    Same as mpi_cart_split but with a different return type.
MPI_Comm dg::mpi_cart_create (MPI_Comm comm_old, std::vector<int> dims, std::vector<int> periods, bool reorder = true)    [inline]

Convenience call to MPI_Cart_create preceded by MPI_Dims_create.

This function is equivalent to calling MPI_Dims_create on dims followed by MPI_Cart_create (see the sketch below).

Parameters
    comm_old   input communicator (handle) (parameter used in MPI_Cart_create)
    dims       number of processes in each dimension; dims.size() determines the ndims parameter used in MPI_Cart_create. Elements can be 0, in which case a distribution is chosen automatically in that direction.
    periods    logical array of size ndims specifying whether the grid is periodic (true) or not (false) in each dimension (parameter used in MPI_Cart_create)
    reorder    ranks may be reordered (true) or not (false) (reorder parameter used in MPI_Cart_create)
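A minimal sketch of the equivalent plain MPI calls. The function name is illustrative, error handling is elided, and the registration that the dg library performs internally is omitted:

    #include <vector>
    #include <mpi.h>

    // Sketch: fill in zero entries of dims via MPI_Dims_create, then
    // create the Cartesian topology. Illustrative only; dg::mpi_cart_create
    // additionally registers the result with the dg library.
    MPI_Comm mpi_cart_create_sketch( MPI_Comm comm_old, std::vector<int> dims,
                                     std::vector<int> periods, bool reorder)
    {
        int size;
        MPI_Comm_size( comm_old, &size);
        MPI_Dims_create( size, (int)dims.size(), dims.data());
        MPI_Comm comm_cart;
        MPI_Cart_create( comm_old, (int)dims.size(), dims.data(),
                         periods.data(), reorder, &comm_cart);
        return comm_cart;
    }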
MPI_Comm dg::mpi_cart_create (std::vector<dg::bc> bcs, std::istream &is = std::cin, MPI_Comm comm_old = MPI_COMM_WORLD, bool reorder = true, bool verbose = true, std::ostream &os = std::cout)    [inline]

Convenience call: read in the number of processes from an istream and create a Cartesian MPI communicator.

The intended use for this function is as a setup module for application codes. This function does:

1. If verbose, print a query to os.
2. Read bcs.size() integers from is: auto np = mpi_read_as<int>( bcs.size(), comm_old, is);
3. If verbose, print information to os.
4. Call MPI_Cart_create and return the Cartesian communicator.

Parameters
    bcs        if bcs[u] == dg::PER then the communicator is periodic in that dimension
    is         Input stream; rank 0 reads bcs.size() parameters (np[u])
    comm_old   input communicator (handle) (parameter used in MPI_Cart_create)
    reorder    (reorder parameter used in MPI_Cart_create)
    verbose    If true, rank 0 prints queries and information to os. In this case bcs.size() <= 6
    os         Output stream used if verbose == true
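A hypothetical setup sketch for a two-dimensional application. It assumes dg::DIR is one of the non-periodic dg::bc values; the two process counts are typed on (or piped into) standard input at startup:

    #include <iostream>
    #include "dg/algorithm.h"

    int main( int argc, char* argv[])
    {
        dg::mpi_init( argc, argv);
        // Rank 0 reads 2 process counts from std::cin and broadcasts them;
        // periodic in x, non-periodic in y
        MPI_Comm comm2d = dg::mpi_cart_create( {dg::PER, dg::DIR}, std::cin);
        // ... construct grids and run the simulation on comm2d ...
        MPI_Finalize();
        return 0;
    }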
MPI_Comm dg::mpi_cart_kron (std::vector<MPI_Comm> comms)    [inline]

Form a Kronecker product among Cartesian communicators.

Find the communicator that hypothetically generated all input comms through

    MPI_Cart_sub( return_comm, remain_dims[u], &comms[u]);

for all u < comms.size().

Unless comms is empty or contains only one communicator, all input comms must be registered in the dg library as Cartesian communicators that derive from the same root Cartesian communicator. Furthermore, the comms must be mutually orthogonal, i.e. any true entry in remain_dims can exist in exactly one comm. The resulting remain_dims of the output is then the union of all remain_dims of the inputs. For an example see the sketch below.

Parameters
    comms   input communicators

Note
    The reason for dg::mpi_cart_sub and dg::mpi_cart_kron is that, unfortunately, the MPI standard does not provide a way to form the Kronecker product of Cartesian communicators without manually tracking their parent Cartesian communicators. However, this is needed in the dg::blas1::kronecker and dg::kronecker functions.

template<class Vector>
MPI_Comm dg::mpi_cart_kron (Vector comms)

Convenience shortcut for return mpi_cart_kron( std::vector<MPI_Comm>(comms.begin(), comms.end()));
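A hypothetical sketch of how mpi_cart_split and mpi_cart_kron fit together. The function name is illustrative, and comm3d is assumed to be a three-dimensional Cartesian communicator registered with the dg library (e.g. created through dg::mpi_cart_create):

    #include <vector>
    #include "dg/algorithm.h"

    // Sketch: split a 3d Cartesian communicator into its three 1d axes,
    // then rejoin the x and y axes into a 2d communicator. This works
    // because all pieces derive from the same registered root communicator.
    void kron_example( MPI_Comm comm3d)
    {
        // One orthogonal 1d communicator per dimension
        std::vector<MPI_Comm> axes = dg::mpi_cart_split( comm3d);
        // The result's remain_dims is the union of those of axes[0], axes[1]
        MPI_Comm comm_xy = dg::mpi_cart_kron(
            std::vector<MPI_Comm>{ axes[0], axes[1]});
    }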
std::vector<MPI_Comm> dg::mpi_cart_split (MPI_Comm comm)    [inline]

Split a Cartesian communicator along each dimension.

The split is performed using repeated calls to dg::mpi_cart_sub.

Parameters
    comm   input Cartesian communicator

Returns
    one one-dimensional communicator for each dimension of comm
template<size_t Nd>
std::array<MPI_Comm, Nd> dg::mpi_cart_split_as (MPI_Comm comm)

Same as mpi_cart_split but with a different return type.

Template Parameters
    Nd     Number of dimensions to copy from mpi_cart_split

Parameters
    comm   input Cartesian communicator (Nd <= comm.ndims)
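A hypothetical usage sketch; the fixed-size std::array return is convenient when the dimensionality is known at compile time, since it supports structured bindings:

    #include "dg/algorithm.h"

    void split_as_example( MPI_Comm comm2d) // a registered 2d Cartesian communicator
    {
        // One 1d communicator per dimension, unpacked by name
        auto [comm_x, comm_y] = dg::mpi_cart_split_as<2>( comm2d);
    }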
MPI_Comm dg::mpi_cart_sub (MPI_Comm comm, std::vector<int> remain_dims, bool duplicate = false)    [inline]

Call and register a call to MPI_Cart_sub with the dg library.

Parameters
    comm          communicator with Cartesian structure (handle) (parameter used in MPI_Cart_sub)
    remain_dims   the i-th entry of remain_dims specifies whether the i-th dimension is kept in the subgrid (true) or is dropped (false); must have ndims entries (parameter used in MPI_Cart_sub)
    duplicate     Determines what happens in case MPI_Cart_sub was already registered with comm and the same remain_dims. True: call MPI_Cart_sub and generate a novel communicator even if a duplicate exists. False: first check if a communicator that was subbed from comm with remain_dims was previously registered; in case one is found, return the existing communicator, else call and register MPI_Cart_sub.

Returns
    communicator containing the subgrid (parameter used in MPI_Cart_sub)

Note
    The reason for dg::mpi_cart_sub and dg::mpi_cart_kron is that, unfortunately, the MPI standard does not provide a way to form the Kronecker product of Cartesian communicators without manually tracking their parent Cartesian communicators. However, this is needed in the dg::blas1::kronecker and dg::kronecker functions.
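A hypothetical sketch extracting the x-y plane from a 3d Cartesian communicator; the function name is illustrative and remain_dims follows the MPI_Cart_sub convention:

    #include <vector>
    #include "dg/algorithm.h"

    void sub_example( MPI_Comm comm3d) // a registered 3d Cartesian communicator
    {
        // Keep dimensions 0 and 1, drop dimension 2. Because duplicate ==
        // false (the default), a second identical call returns the already
        // registered communicator instead of creating a new one.
        MPI_Comm comm_xy = dg::mpi_cart_sub( comm3d, {1, 1, 0});
    }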
void dg::mpi_init (int argc, char *argv[])    [inline]

Convenience shortcut: calls MPI_Init or MPI_Init_thread and initializes CUDA devices.

Shortcut for MPI_Init (or MPI_Init_thread if _OPENMP is defined) followed by

    cudaSetDevice( rank % num_devices_per_node)

if THRUST_DEVICE_SYSTEM == THRUST_DEVICE_SYSTEM_CUDA. We assume that the number of GPUs per node is fixed.

Attention
    The program aborts if MPI does not support threads in case _OPENMP is defined, or if no CUDA capable devices are found in case of THRUST_DEVICE_SYSTEM_CUDA.

Parameters
    argc   command line argument number
    argv   command line arguments
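A minimal sketch of a main program using the shortcut:

    #include <iostream>
    #include "dg/algorithm.h"

    int main( int argc, char* argv[])
    {
        // Initializes MPI (thread-aware under OpenMP) and, on CUDA builds,
        // assigns one GPU per rank on each node
        dg::mpi_init( argc, argv);
        int rank;
        MPI_Comm_rank( MPI_COMM_WORLD, &rank);
        if( rank == 0)
            std::cout << "Hello from the dg library\n";
        MPI_Finalize();
        return 0;
    }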
template<class T>
std::vector<T> dg::mpi_read_as (unsigned num, MPI_Comm comm, std::istream &is = std::cin)

Read num values from is and broadcast them to all processes as type T.

Only rank 0 in comm reads from is.

Template Parameters
    T      the type of value to read from is

Parameters
    num    The number of values to read
    comm   The communicator to which to broadcast
    is     Input stream rank 0 reads num parameters of type T from
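A hypothetical usage sketch reading two grid parameters on rank 0 and distributing them; the function and variable names are illustrative:

    #include <iostream>
    #include <vector>
    #include "dg/algorithm.h"

    void read_example( MPI_Comm comm)
    {
        // Rank 0 reads two unsigned values from std::cin; afterwards every
        // rank in comm holds the same vals
        std::vector<unsigned> vals = dg::mpi_read_as<unsigned>( 2, comm, std::cin);
        unsigned Nx = vals[0], Ny = vals[1];
    }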
void dg::mpi_read_grid (unsigned &n, std::vector<unsigned*> N, MPI_Comm comm, std::istream &is = std::cin, bool verbose = true, std::ostream &os = std::cout)    [inline]

Convenience shortcut allowing a call with individually named grid sizes, e.g. mpi_read_grid( n, {&Nx, &Ny}, comm, is).
void dg::mpi_read_grid (unsigned &n, std::vector<unsigned> &N, MPI_Comm comm, std::istream &is = std::cin, bool verbose = true, std::ostream &os = std::cout)    [inline]

Read in grid sizes from is.

The intended use for this function is as a setup module for application codes that use our dg::Grid classes. Basically just:

1. If verbose, print a query to os.
2. Read the values: auto vals = dg::mpi_read_as<unsigned>( 1 + N.size(), comm, is);
3. Assign n and N from vals.
4. If verbose, print information to os.

Parameters
    n         rank 0 reads in from is and broadcasts to all processes in comm
    N         rank 0 reads in from is and broadcasts to all processes in comm
    comm      communicator (handle) to which rank 0 broadcasts
    is        Input stream rank 0 reads the parameters (n, N) from
    verbose   If true, rank 0 prints queries and information to os
    os        Output stream used if verbose == true
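A hypothetical end-to-end setup combining the utilities in this section; parameter names are illustrative, and rank 0 reads all values from standard input:

    #include <iostream>
    #include "dg/algorithm.h"

    int main( int argc, char* argv[])
    {
        dg::mpi_init( argc, argv);
        // Create a 2d Cartesian communicator; rank 0 reads the process counts
        MPI_Comm comm = dg::mpi_cart_create( {dg::PER, dg::PER}, std::cin);
        // Rank 0 reads n, Nx, Ny and broadcasts them over comm
        unsigned n, Nx, Ny;
        dg::mpi_read_grid( n, {&Nx, &Ny}, comm, std::cin);
        // ... construct a dg::Grid from n, Nx, Ny on comm ...
        MPI_Finalize();
        return 0;
    }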