Discontinuous Galerkin Library
#include "dg/algorithm.h"
dg::BijectiveComm< Index, Vector > Struct Template Reference

Perform bijective gather and its transpose (scatter) operation across processes on distributed vectors using MPI.

Inherits dg::aCommunicator< Vector >.

Public Member Functions

 BijectiveComm ()=default
 	no memory allocation; size 0
 
 BijectiveComm (const thrust::host_vector< int > &pids, MPI_Comm comm)
 	Construct from a given scatter map (inverse index map) with respect to the source/data vector.
 
 BijectiveComm (unsigned local_size, thrust::host_vector< int > localIndexMap, thrust::host_vector< int > pidIndexMap, MPI_Comm comm)
 	Construct from local indices and PIDs index map.
 
template<class ConversionPolicy >
 BijectiveComm (const thrust::host_vector< int > &globalIndexMap, const ConversionPolicy &p)
 	Construct from global indices index map.
 
template<class OtherIndex , class OtherVector >
 BijectiveComm (const BijectiveComm< OtherIndex, OtherVector > &src)
 	Reconstruct from another type; if src is empty, same as the default constructor.
 
const thrust::host_vector< int > & get_pids () const
 	These are the pids that were given in the constructor.
 
virtual BijectiveComm * clone () const override final
 	Generic copy method.
- Public Member Functions inherited from dg::aCommunicator< Vector >
Vector allocate_buffer () const
 	Allocate a buffer object of size buffer_size().
 
void global_gather (const value_type *values, Vector &buffer) const
 	\( w = G v\). Globally (across processes) gather data into a buffer.
 
Vector global_gather (const value_type *values) const
 	\( w = G v\). Globally (across processes) gather data into a buffer (memory allocating version).
 
void global_scatter_reduce (const Vector &toScatter, value_type *values) const
 	\( v = G^\mathrm{T} w\). Globally (across processes) scatter data and reduce on multiple indices.
 
unsigned buffer_size () const
 	The local size of the buffer vector w = local map size.
 
unsigned local_size () const
 	The local size of the source vector v = local size of the dg::MPI_Vector.
 
bool isCommunicating () const
 	True if the gather/scatter operation involves actual MPI communication.
 
MPI_Comm communicator () const
 	The internal MPI communicator used.
 
virtual aCommunicator * clone () const=0
 	Generic copy method.
 
virtual ~aCommunicator ()
 	Virtual destructor.
 

Additional Inherited Members

- Public Types inherited from dg::aCommunicator< Vector >
using value_type = get_value_type< Vector >
 	reveal value type
 
using container_type = Vector
 	reveal local container type
 
- Protected Member Functions inherited from dg::aCommunicator< Vector >
 aCommunicator (unsigned local_size=0)
 	only derived classes can construct
 
 aCommunicator (const aCommunicator &src)
 	only derived classes can copy
 
aCommunicator & operator= (const aCommunicator &src)
 	only derived classes can assign
 
void set_local_size (unsigned new_size)
 	Set the local size of the source vector v.
 

Detailed Description

template<class Index, class Vector>
struct dg::BijectiveComm< Index, Vector >

Perform bijective gather and its transpose (scatter) operation across processes on distributed vectors using MPI.

If the index map idx[i] is bijective, each element of the source vector v maps to exactly one location in the buffer vector w. In this case the scatter matrix S is the inverse of G. (see aCommunicator for more details)

int rank; //this example assumes 4 MPI processes
MPI_Comm_rank( MPI_COMM_WORLD, &rank);
double values[8] = {(double)rank,(double)rank,(double)rank,(double)rank, 9,9,9,9};
thrust::host_vector<double> hvalues( values, values+8);
int pids[8] = {0,1,2,3, 0,1,2,3};
thrust::host_vector<int> hpids( pids, pids+8);
dg::BijectiveComm< thrust::host_vector<int>, thrust::host_vector<double> > coll( hpids, MPI_COMM_WORLD);
thrust::host_vector<double> hrecv = coll.global_gather( hvalues.data()); //on process 0 hrecv is now {0,9,1,9,2,9,3,9}
thrust::host_vector<double> hrecv2( hvalues.size());
coll.global_scatter_reduce( hrecv, hrecv2.data()); //hrecv2 now equals hvalues independent of process rank
Template Parameters
	Index	an integer thrust Vector (needs to be int due to the MPI interface)
	Vector	a thrust Vector
Note
	A scatter followed by a gather of the received values restores the original array.
	The received elements are ordered by their process rank of origin (i.e. values from rank 0 appear before values from rank 1) and, within one rank, by their original array index (i.e. a[0] appears before a[1]).

Constructor & Destructor Documentation

◆ BijectiveComm() [1/5]

template<class Index , class Vector >
dg::BijectiveComm< Index, Vector >::BijectiveComm ( )
default

no memory allocation; size 0

◆ BijectiveComm() [2/5]

template<class Index , class Vector >
dg::BijectiveComm< Index, Vector >::BijectiveComm ( const thrust::host_vector< int > &  pids,
MPI_Comm  comm 
)
inline

Construct from a given scatter map (inverse index map) with respect to the source/data vector.

Implicitly construct a bijective index map into the buffer vector. With only the pid map available, there exist in general several index maps that fulfill the required scatter/gather of pids. Which one is selected is undefined, but it can be determined a posteriori through the getLocalIndexMap function. The introductory example above uses this constructor.

Note
This constructor is useful if the only thing you care about is to which PID elements are sent, not necessarily in which order the elements arrive there. This operation is then by construction bijective with the size of the buffer determined to fit all elements.
Parameters
	pids	Gives to every index i of the values/data vector (not the buffer vector!) the rank pids[i] to which to send the data element data[i]. The rank pids[i] needs to be element of the given communicator.
	comm	An MPI communicator that contains the participants of the scatter/gather.
Note
	The actual scatter/gather map is constructed by inverting the given map, so the result behaves like a scatter/gather defined with respect to the buffer.
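
For illustration, a small sketch that continues the introductory example above (it reuses hpids from there; only the documented members buffer_size() and get_pids() are used):

//continue the introductory example (sketch)
dg::BijectiveComm< thrust::host_vector<int>, thrust::host_vector<double> > coll( hpids, MPI_COMM_WORLD);
unsigned size = coll.buffer_size(); //8 in the 4-process example: the buffer fits all received elements
const thrust::host_vector<int>& stored = coll.get_pids(); //the pid map given in the constructor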

◆ BijectiveComm() [3/5]

template<class Index , class Vector >
dg::BijectiveComm< Index, Vector >::BijectiveComm ( unsigned  local_size,
thrust::host_vector< int >  localIndexMap,
thrust::host_vector< int >  pidIndexMap,
MPI_Comm  comm 
)
inline

Construct from local indices and PIDs index map.

The indices in the index map are written with respect to the buffer vector. Each location in the source vector is uniquely specified by a local vector index and the process rank.

Parameters
	local_size	local size of a dg::MPI_Vector (same for all processes)
	localIndexMap	Each element localIndexMap[i] represents a local vector index from (or to) where to take the value buffer[i]. There are local_buffer_size = localIndexMap.size() elements.
	pidIndexMap	Each element pidIndexMap[i] represents the pid/rank to which the corresponding index localIndexMap[i] is local. Same size as localIndexMap. The pid/rank needs to be element of the given communicator.
	comm	The MPI communicator participating in the scatter/gather operations.
Note
	We assume that the index map is bijective and given with respect to the buffer.
Attention
	In fact, localIndexMap is ignored; only pidIndexMap is used. If the order of values matters, use SurjectiveComm.
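
For illustration, a minimal sketch of this constructor; the 4-process setup and all variable names are assumptions for the example, not part of the interface:

//sketch: construct from local indices and PIDs, assuming 4 MPI processes
unsigned local_size = 8; //local size of the source vector v (same on all processes)
thrust::host_vector<int> localIndexMap( 8), pidIndexMap( 8);
for( unsigned u=0; u<8; u++)
{
    localIndexMap[u] = u;   //local index from where to take the value buffer[u]
    pidIndexMap[u]   = u%4; //rank to which localIndexMap[u] is local
}
dg::BijectiveComm< thrust::host_vector<int>, thrust::host_vector<double> > coll(
    local_size, localIndexMap, pidIndexMap, MPI_COMM_WORLD);
//as the Attention above states, localIndexMap is effectively ignored here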

◆ BijectiveComm() [4/5]

template<class Index , class Vector >
template<class ConversionPolicy >
dg::BijectiveComm< Index, Vector >::BijectiveComm ( const thrust::host_vector< int > &  globalIndexMap,
const ConversionPolicy &  p 
)
inline

Construct from global indices index map.

Uses the global2localIdx() member of MPITopology to generate localIndexMap and pidIndexMap.

Parameters
	globalIndexMap	Each element globalIndexMap[i] represents a global vector index from (or to) where to take the value buffer[i]. There are local_buffer_size = globalIndexMap.size() elements.
	p	the conversion object
Template Parameters
	ConversionPolicy	has to have the members:
	  • bool global2localIdx(unsigned, unsigned&, unsigned&) const; where the first parameter is the global index and the other two are the output pair (localIdx, rank). Returns true if successful, false if the global index is not part of the grid.
	  • MPI_Comm communicator() const; returns the communicator to use in the gather/scatter.
	  • local_size(); returns the local vector size.
See also
	the Topology base classes; the MPI grids defined in Level 3 can all be used as a ConversionPolicy
Note
	We assume that the index map is bijective and given with respect to the buffer.
Attention
	In fact, globalIndexMap is only used to produce a pidIndexMap; the resulting localIndexMap is ignored and only pidIndexMap is used. If the order of values matters, use SurjectiveComm.
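
To make the ConversionPolicy requirements concrete, here is a hypothetical policy for a contiguous block partition of the global indices; it is only a sketch of the required interface (the Level 3 MPI grids already provide it):

//hypothetical ConversionPolicy: contiguous blocks of equal local size per rank (sketch)
struct BlockPartition
{
    BlockPartition( unsigned local_size, MPI_Comm comm): m_local(local_size), m_comm(comm){}
    bool global2localIdx( unsigned globalIdx, unsigned& localIdx, unsigned& rank) const
    {
        rank = globalIdx / m_local;     //the block number is the owning rank
        localIdx = globalIdx % m_local; //offset inside that block
        int size; MPI_Comm_size( m_comm, &size);
        return rank < (unsigned)size;   //false if the index lies outside the grid
    }
    MPI_Comm communicator() const { return m_comm;}
    unsigned local_size() const { return m_local;}
    private:
    unsigned m_local; MPI_Comm m_comm;
};
//usage (sketch): dg::BijectiveComm< thrust::host_vector<int>, thrust::host_vector<double> > coll( globalIndexMap, BlockPartition( 8, MPI_COMM_WORLD));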

◆ BijectiveComm() [5/5]

template<class Index , class Vector >
template<class OtherIndex , class OtherVector >
dg::BijectiveComm< Index, Vector >::BijectiveComm ( const BijectiveComm< OtherIndex, OtherVector > &  src)
inline

Reconstruct from another type; if src is empty, this is the same as the default constructor.
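
A sketch of the intended use: converting the host communicator from the introductory example to device containers (assumes a GPU-enabled thrust backend; hpids is reused from above):

//rebuild the same communication pattern with device containers (sketch)
dg::BijectiveComm< thrust::host_vector<int>, thrust::host_vector<double> > host_comm( hpids, MPI_COMM_WORLD);
dg::BijectiveComm< thrust::device_vector<int>, thrust::device_vector<double> > device_comm( host_comm);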

Member Function Documentation

◆ clone()

template<class Index , class Vector >
virtual BijectiveComm * dg::BijectiveComm< Index, Vector >::clone ( ) const
inline final override virtual

Generic copy method.

Returns
pointer to allocated object

Implements dg::aCommunicator< Vector >.
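
Since clone() implements the pure virtual method of dg::aCommunicator, it allows polymorphic copies through a base class pointer; a sketch, with coll as in the introductory example:

//polymorphic copy through the abstract base class (sketch)
const dg::aCommunicator< thrust::host_vector<double> >& base = coll;
dg::aCommunicator< thrust::host_vector<double> >* ptr = base.clone(); //allocates a new BijectiveComm
//... ptr can be used like any communicator ...
delete ptr; //the caller owns the allocated object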

◆ get_pids()

template<class Index , class Vector >
const thrust::host_vector< int > & dg::BijectiveComm< Index, Vector >::get_pids ( ) const
inline

These are the pids that were given in the constructor.

Returns
the vector given in the constructor
