Discontinuous Galerkin Library
#include "dg/algorithm.h"
dg::Average< MPI_Vector< container > > Struct Template Reference

MPI specialized class for average computations.

Public Member Functions

 Average (const aMPITopology2d &g, enum coo2d direction, std::string mode="exact")
 Prepare internal workspace.
 
 Average (const aMPITopology3d &g, enum coo3d direction, std::string mode="exact")
 Prepare internal workspace.
 
void operator() (const MPI_Vector< container > &src, MPI_Vector< container > &res, bool extend=true)
 Compute the average as configured in the constructor.
 

Detailed Description

template<class container>
struct dg::Average< MPI_Vector< container > >

MPI specialized class for average computations.

dg::MPIGrid3d g( 0, lx, 0, ly, 0, lz, n, Nx, Ny, Nz, comm);
dg::Average< dg::MDVec > avg( g, dg::coo3d::z); // average over the z direction
const dg::MDVec vector = dg::evaluate( function, g);
dg::MDVec average_z;
if(rank==0)std::cout << "Averaging z ... \n";
avg( vector, average_z, false);

Constructor & Destructor Documentation

◆ Average() [1/2]

template<class container >
dg::Average< MPI_Vector< container > >::Average ( const aMPITopology2d &  g,
enum coo2d  direction,
std::string  mode = "exact" 
)
inline

Prepare internal workspace.

Parameters
g          the grid from which to take the dimensionality and sizes
direction  the direction or plane over which to average when calling operator() (at the moment cannot be coo3d::xz or coo3d::y)
mode       either "exact" (uses the exact and reproducible dot product for the summation) or "simple" (uses inexact but much faster direct summation); use "simple" if you do not need reproducibility
Note
Computing in "exact" mode is especially demanding if the averaged direction is small compared to the remaining dimensions, and on GPUs in general; expect to gain a factor of 10-1000 (no joke) from switching to "simple" mode in these cases.
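
For illustration, a minimal sketch of constructing an average over the y direction of a 2d MPI grid in "simple" mode; the extents lx, ly, the resolution parameters n, Nx, Ny, and the communicator comm are assumptions, not part of this reference:

dg::MPIGrid2d g2d( 0, lx, 0, ly, n, Nx, Ny, comm); // lx, ly, n, Nx, Ny, comm assumed defined
dg::Average< dg::MDVec > avg_y( g2d, dg::coo2d::y, "simple"); // trade reproducibility for speed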

◆ Average() [2/2]

template<class container >
dg::Average< MPI_Vector< container > >::Average ( const aMPITopology3d &  g,
enum coo3d  direction,
std::string  mode = "exact" 
)
inline

Prepare internal workspace.

Parameters
g          the grid from which to take the dimensionality and sizes
direction  the direction or plane over which to average when calling operator() (at the moment cannot be coo3d::xz or coo3d::y)
mode       either "exact" (uses the exact and reproducible dot product for the summation) or "simple" (uses inexact but much faster direct summation); use "simple" if you do not need reproducibility
Note
Computing in "exact" mode is especially demanding if the averaged direction is small compared to the remaining dimensions, and on GPUs in general; expect to gain a factor of 10-1000 (no joke) from switching to "simple" mode in these cases.
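
A similar sketch for the 3d overload, here averaging over the xy plane (a direction the parameter note above does not exclude); the grid extents, resolution parameters, and comm are again assumed:

dg::MPIGrid3d g3d( 0, lx, 0, ly, 0, lz, n, Nx, Ny, Nz, comm);
dg::Average< dg::MDVec > avg_xy( g3d, dg::coo3d::xy); // mode defaults to "exact"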

Member Function Documentation

◆ operator()()

template<class container >
void dg::Average< MPI_Vector< container > >::operator() ( const MPI_Vector< container > &  src,
MPI_Vector< container > &  res,
bool  extend = true 
)
inline

Compute the average as configured in the constructor.

The computation is based on the exact, reproducible scalar product provided in the dg::exblas library. It proceeds in two steps:

  • average the input field over the direction or plane given in the constructor
  • extend the lower dimensional result back to the original dimensionality
Parameters
src     Source Vector (must have the same size and communicator as the grid given in the constructor)
res     result Vector (if extend==true, res must have the same size and communicator as src, else it gets properly resized; may alias src)
extend  if true, the average is extended back to the original dimensionality and the communicator is the 3d communicator; if false, this step is skipped and each process holds a result vector of reduced dimensionality with a Cartesian communicator only in the remaining dimensions. Note that in any case all processes get the result (since the underlying dot product distributes its result to all processes)
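
A minimal sketch of both call modes, reusing the avg and vector from the example in the detailed description above:

dg::MDVec extended( vector); // with extend==true, res must already have the size and communicator of src
avg( vector, extended, true);  // the average is extended back to the full 3d grid
dg::MDVec reduced;
avg( vector, reduced, false);  // res is resized; it holds the lower dimensional result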

The documentation for this struct was generated from the following file: