Discontinuous Galerkin Library
#include "dg/algorithm.h"
Level 0: Miscellaneous additions

Modules

 Timer class
 t.tic(); t.toc(); t.diff();
 
 Functions and functors
 
 Lowlevel helper functions and classes
 

Classes

class  dg::Message
 small class holding a stringstream More...
 
class  dg::Error
 Class intended for use in throw statements More...
 
struct  dg::Fail
 Indicate failure to converge. More...
 
struct  dg::MultiMatrix< MatrixType, ContainerType >
 Struct that applies given matrices one after the other. More...
 
class  dg::NoRoot1d
 Exception class, that stores boundaries for 1D root finding. More...
 

Functions

template<class ContainerType >
void dg::transpose (unsigned nx, unsigned ny, const ContainerType &in, ContainerType &out)
 Transpose vector. More...
 
template<class ContainerType >
void dg::extend_line (unsigned nx, unsigned ny, const ContainerType &in, ContainerType &out)
 Copy a line into rows of output vector. More...
 
template<class ContainerType >
void dg::extend_column (unsigned nx, unsigned ny, const ContainerType &in, ContainerType &out)
 Copy a line into columns of output vector. More...
 
static void dg::abort_program (int code=-1)
 Abort program (both MPI and non-MPI) More...
 
static void dg::mpi_init (int argc, char *argv[])
 Convenience shortcut: Calls MPI_Init or MPI_Init_thread. More...
 
static void dg::mpi_init2d (dg::bc bcx, dg::bc bcy, MPI_Comm &comm, std::istream &is=std::cin, bool verbose=true)
 Read in number of processes and create Cartesian MPI communicator. More...
 
static void dg::mpi_init2d (unsigned &n, unsigned &Nx, unsigned &Ny, MPI_Comm comm, std::istream &is=std::cin, bool verbose=true)
 Read in grid size and broadcast to process group. More...
 
static void dg::mpi_init2d (dg::bc bcx, dg::bc bcy, unsigned &n, unsigned &Nx, unsigned &Ny, MPI_Comm &comm, std::istream &is=std::cin, bool verbose=true)
 Read in number of processes and grid size and create Cartesian MPI communicator. More...
 
static void dg::mpi_init3d (dg::bc bcx, dg::bc bcy, dg::bc bcz, MPI_Comm &comm, std::istream &is=std::cin, bool verbose=true)
 Read in number of processes and create Cartesian MPI communicator. More...
 
static void dg::mpi_init3d (unsigned &n, unsigned &Nx, unsigned &Ny, unsigned &Nz, MPI_Comm comm, std::istream &is=std::cin, bool verbose=true)
 Read in grid size and broadcast to process group. More...
 
static void dg::mpi_init3d (dg::bc bcx, dg::bc bcy, dg::bc bcz, unsigned &n, unsigned &Nx, unsigned &Ny, unsigned &Nz, MPI_Comm &comm, std::istream &is=std::cin, bool verbose=true)
 Read in number of processes and grid size and create Cartesian MPI communicator. More...
 
template<class real_type >
thrust::host_vector< real_type > dg::forward_transform (const thrust::host_vector< real_type > &in, const aRealTopology2d< real_type > &g)
 Transform a vector from dg::xspace (nodal values) to dg::lspace (modal values) More...
 
template<class UnaryOp , class real_type >
dg::Operator< real_type > dg::create::modal_filter (UnaryOp op, const DLT< real_type > &dlt)
 Create a modal filter block \( V D V^{-1}\). More...
 
template<class T >
T dg::gcd (T a, T b)
 Greatest common divisor. More...
 
template<class T >
T dg::lcm (T a, T b)
 Least common multiple. More...
 
bool dg::is_same (double x, double y, double eps=1e-15)
 
bool dg::is_same (float x, float y, float eps=1e-6)
 
bool dg::is_divisable (double a, double b, double eps=1e-15)
 
bool dg::is_divisable (float a, float b, float eps=1e-6)
 

Detailed Description

Function Documentation

◆ abort_program()

static void dg::abort_program ( int  code = -1)
inlinestatic

Abort program (both MPI and non-MPI)

#ifdef MPI_VERSION
MPI_Abort(MPI_COMM_WORLD, code);
#endif //MPI_VERSION
exit( code);
Parameters
code: exit code that is signalled to the caller of the program
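
A minimal usage sketch (the file name and surrounding context are made up for illustration): abort the whole run, including all MPI processes, when a required input file cannot be opened.

#include <fstream>
#include <iostream>
#include "dg/algorithm.h"

std::ifstream input( "input.json"); // hypothetical input file
if( !input.good())
{
    std::cerr << "ERROR: cannot open input.json\n";
    dg::abort_program(); // MPI_Abort in MPI builds, exit(-1) otherwise
}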

◆ extend_column()

template<class ContainerType >
void dg::extend_column ( unsigned  nx,
unsigned  ny,
const ContainerType &  in,
ContainerType &  out 
)

Copy a line into columns of output vector.

out[i*nx+j] = in[i]

Template Parameters
ContainerType: any class for which a specialization of TensorTraits exists and which fulfills the requirements of the data and execution policies defined there (derived from AnyVectorTag and AnyPolicyTag). Among others
  • dg::HVec (serial), dg::DVec (cuda / omp), dg::MHVec (mpi + serial) or dg::MDVec (mpi + cuda / omp)
  • std::vector<dg::DVec> (vector of shared device vectors), std::array<double, 4> (array of 4 doubles) or std::map < std::string, dg::DVec> ( a map of named vectors)
  • double (scalar) and other primitive types ...
If there are several ContainerTypes in the argument list, then TensorTraits must exist for all of them
See also
The dg dispatch system for a detailed explanation of our type dispatch system
Parameters
nx: number of columns (size of contiguous chunks) in the output vector
ny: size of the input vector, number of rows in the output vector
in: input (size ny)
out: output (size nx*ny; may not alias in)
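
For illustration, a minimal sketch with a host vector (the values are arbitrary):

dg::HVec in( 3);
for( unsigned i=0; i<3; i++)
    in[i] = i;                      // in = {0, 1, 2}, size ny = 3
dg::HVec out( 2*3);
dg::extend_column( 2, 3, in, out);  // nx = 2 columns
// out = {0,0, 1,1, 2,2}, i.e. out[i*nx+j] = in[i]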

◆ extend_line()

template<class ContainerType >
void dg::extend_line ( unsigned  nx,
unsigned  ny,
const ContainerType &  in,
ContainerType &  out 
)

Copy a line into rows of output vector.

The line of size nx is appended repeatedly until the output vector size nx*ny is reached: out[i*nx+j] = in[j]

Template Parameters
ContainerType: any class for which a specialization of TensorTraits exists and which fulfills the requirements of the data and execution policies defined there (derived from AnyVectorTag and AnyPolicyTag). Among others
  • dg::HVec (serial), dg::DVec (cuda / omp), dg::MHVec (mpi + serial) or dg::MDVec (mpi + cuda / omp)
  • std::vector<dg::DVec> (vector of shared device vectors), std::array<double, 4> (array of 4 doubles) or std::map < std::string, dg::DVec> ( a map of named vectors)
  • double (scalar) and other primitive types ...
If there are several ContainerTypes in the argument list, then TensorTraits must exist for all of them
See also
The dg dispatch system for a detailed explanation of our type dispatch system
Parameters
nx: size of the input vector / number of columns (size of contiguous chunks) in the output vector
ny: number of rows in the output vector
in: input (size nx)
out: output (size nx*ny; may not alias in)
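
For illustration, a minimal sketch with a host vector (the values are arbitrary):

dg::HVec in( 3);
for( unsigned j=0; j<3; j++)
    in[j] = j;                      // in = {0, 1, 2}, size nx = 3
dg::HVec out( 3*2);
dg::extend_line( 3, 2, in, out);    // ny = 2 rows
// out = {0,1,2, 0,1,2}, i.e. out[i*nx+j] = in[j]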

◆ forward_transform()

template<class real_type >
thrust::host_vector< real_type > dg::forward_transform ( const thrust::host_vector< real_type > &  in,
const aRealTopology2d< real_type > &  g 
)

Transform a vector from dg::xspace (nodal values) to dg::lspace (modal values)

Parameters
in: input
g: grid
Returns
the vector in dg::lspace
See also
fast_transform
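
A sketch of a typical call; it assumes the concrete grid type dg::Grid2d and the library's dg::evaluate function, and the chosen function and grid sizes are arbitrary:

#include <cmath>
dg::Grid2d g( 0., 2.*M_PI, 0., 2.*M_PI, 3, 20, 20);
// nodal values in dg::xspace
dg::HVec nodal = dg::evaluate( [](double x, double y){ return sin(x)*sin(y);}, g);
// modal coefficients in dg::lspace
dg::HVec modal = dg::forward_transform( nodal, g);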

◆ gcd()

template<class T >
T dg::gcd ( T  a,
T  b 
)

Greatest common divisor.

Parameters
a: first number
b: second number
Returns
greatest common divisor
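
For example:

unsigned g = dg::gcd( 12u, 18u); // g == 6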

◆ is_divisable() [1/2]

bool dg::is_divisable ( double  a,
double  b,
double  eps = 1e-15 
)
inline

Checks whether two numbers are integer divisible, \(a/b \in \mathbb{Z}\), within the accuracy eps

Attention
Does not check for equal sign!

◆ is_divisable() [2/2]

bool dg::is_divisable ( float  a,
float  b,
float  eps = 1e-6 
)
inline

Checks whether two numbers are integer divisible, \(a/b \in \mathbb{Z}\), within the accuracy eps

Attention
Does not check for equal sign!
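
A few illustrative calls with the default tolerances (expected results in the comments):

bool t1 = dg::is_divisable(  7.5, 2.5); // true:  7.5/2.5 == 3
bool t2 = dg::is_divisable( -7.5, 2.5); // also true: the sign is not checked
bool f1 = dg::is_divisable(  7.5, 2.0); // false: 7.5/2.0 == 3.75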

◆ is_same() [1/2]

bool dg::is_same ( double  x,
double  y,
double  eps = 1e-15 
)
inline

Checks if two numbers are the same within the accuracy eps

◆ is_same() [2/2]

bool dg::is_same ( float  x,
float  y,
float  eps = 1e-6 
)
inline

Checks if two numbers are the same within the accuracy eps
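
A few illustrative calls with the default tolerances (expected results in the comments):

bool a = dg::is_same( 1.0, 1.0 + 1e-16); // true
bool b = dg::is_same( 1.0, 1.0 + 1e-13); // false with the default eps = 1e-15
bool c = dg::is_same( 1.f, 1.f + 1e-7f); // true, float overload with eps = 1e-6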

◆ lcm()

template<class T >
T dg::lcm ( T  a,
T  b 
)

Least common multiple.

Parameters
a: first number
b: second number
Returns
Least common multiple
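
For example (note that lcm(a,b)*gcd(a,b) == a*b):

unsigned m = dg::lcm( 4u, 6u); // m == 12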

◆ modal_filter()

template<class UnaryOp , class real_type >
dg::Operator< real_type > dg::create::modal_filter ( UnaryOp  op,
const DLT< real_type > &  dlt 
)

Create a modal filter block \( V D V^{-1}\).

where \( V\) is the Vandermonde matrix (the backward transformation matrix) and \( D \) is a diagonal matrix with \( D_{ii} = \sigma(i)\)

See also
A discussion of the effects of the modal filter on advection schemes can be found here https://mwiesenberger.github.io/advection
Note
The conclusion of that discussion is that it is usually not advantageous to use a modal filter
Template Parameters
UnaryOp: model of unary function real_type sigma(unsigned). The input is the modal number i, where \( i=0,...,n-1\) and n is the number of polynomial coefficients in use; the output is the filter strength for that mode number
Parameters
op: the unary function
dlt: provides the forward and backward transformation and the number of coefficients n
Returns
The product \( V D V^{-1}\)
Note
The idea is to use the result in connection with dg::create::fast_transform() to create a matrix that applies the filter to vectors. For example to create a modal filter that acts in two dimensions:
// create filter (grid, op, x and y are assumed to be defined elsewhere):
auto filter = dg::create::fast_transform(
        dg::create::modal_filter( op, grid.dltx()),
        dg::create::modal_filter( op, grid.dlty()), grid);
// apply filter:
dg::blas2::symv( filter, x, y);

◆ mpi_init()

static void dg::mpi_init ( int  argc,
char *  argv[] 
)
inlinestatic

Convenience shortcut: Calls MPI_Init or MPI_Init_thread.

Shortcut for

#ifdef _OPENMP
int provided, error;
error = MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
assert( error == MPI_SUCCESS && "Threaded MPI lib required!\n");
#else
MPI_Init(&argc, &argv);
#endif
Parameters
argc: command line argument count
argv: command line arguments
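
A sketch of a typical program entry point, assuming the code is compiled with MPI support:

#include "dg/algorithm.h"

int main( int argc, char* argv[])
{
    dg::mpi_init( argc, argv); // MPI_Init, or MPI_Init_thread with OpenMP
    // ... construct communicators, grids and vectors, run the simulation ...
    MPI_Finalize();
    return 0;
}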

◆ mpi_init2d() [1/3]

static void dg::mpi_init2d ( dg::bc  bcx,
dg::bc  bcy,
MPI_Comm &  comm,
std::istream &  is = std::cin,
bool  verbose = true 
)
inlinestatic

Read in number of processes and create Cartesian MPI communicator.

Also sets the GPU a process should use via rank % num_devices_per_node if THRUST_DEVICE_SYSTEM==THRUST_DEVICE_SYSTEM_CUDA

Parameters
bcx: if bcx==dg::PER then the communicator is periodic in x
bcy: if bcy==dg::PER then the communicator is periodic in y
comm: (write only) MPI_COMM_WORLD as a 2d Cartesian MPI communicator
is: input stream from which rank 0 reads the parameters (npx, npy)
verbose: if true, rank 0 prints queries and information to std::cout
Attention
Before creating a second Cartesian communicator consider freeing existing ones with MPI_Comm_free. (Using mpi_init2d and mpi_init3d in the same program has sometimes led to Segmentation faults in the past)
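
A sketch of setting up the communicator right after dg::mpi_init; the boundary conditions are an arbitrary example:

MPI_Comm comm;
// rank 0 reads npx and npy from std::cin and broadcasts them;
// dg::PER makes the communicator periodic in x and y
dg::mpi_init2d( dg::PER, dg::PER, comm);
// ... use comm to construct MPI grids and vectors ...
MPI_Comm_free( &comm); // free before creating another Cartesian communicator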

◆ mpi_init2d() [2/3]

static void dg::mpi_init2d ( dg::bc  bcx,
dg::bc  bcy,
unsigned &  n,
unsigned &  Nx,
unsigned &  Ny,
MPI_Comm &  comm,
std::istream &  is = std::cin,
bool  verbose = true 
)
inlinestatic

Read in number of processes and grid size and create Cartesian MPI communicator.

Also sets the GPU a process should use via rank % num_devices_per_node if THRUST_DEVICE_SYSTEM==THRUST_DEVICE_SYSTEM_CUDA

Parameters
bcx: if bcx==dg::PER then the communicator is periodic in x
bcy: if bcy==dg::PER then the communicator is periodic in y
n: rank 0 reads in from is and broadcasts to all processes in MPI_COMM_WORLD
Nx: rank 0 reads in from is and broadcasts to all processes in MPI_COMM_WORLD
Ny: rank 0 reads in from is and broadcasts to all processes in MPI_COMM_WORLD
comm: (write only) MPI_COMM_WORLD as a 2d Cartesian MPI communicator
is: input stream from which rank 0 reads the parameters (npx, npy, n, Nx, Ny)
verbose: if true, rank 0 prints queries and information to std::cout
Attention
Before creating a second Cartesian communicator consider freeing existing ones with MPI_Comm_free. (Using mpi_init2d and mpi_init3d in the same program has sometimes led to Segmentation faults in the past)

◆ mpi_init2d() [3/3]

static void dg::mpi_init2d ( unsigned &  n,
unsigned &  Nx,
unsigned &  Ny,
MPI_Comm  comm,
std::istream &  is = std::cin,
bool  verbose = true 
)
inlinestatic

Read in grid size and broadcast to process group.

Parameters
n: rank 0 reads in from is and broadcasts to all processes in comm
Nx: rank 0 reads in from is and broadcasts to all processes in comm
Ny: rank 0 reads in from is and broadcasts to all processes in comm
comm: (read only) a 2d Cartesian MPI communicator
is: input stream from which rank 0 reads the parameters (n, Nx, Ny)
verbose: if true, rank 0 prints queries and information to std::cout

◆ mpi_init3d() [1/3]

static void dg::mpi_init3d ( dg::bc  bcx,
dg::bc  bcy,
dg::bc  bcz,
MPI_Comm &  comm,
std::istream &  is = std::cin,
bool  verbose = true 
)
inlinestatic

Read in number of processes and create Cartesian MPI communicator.

Also sets the GPU a process should use via rank % num_devices_per_node if THRUST_DEVICE_SYSTEM==THRUST_DEVICE_SYSTEM_CUDA

Parameters
bcx: if bcx==dg::PER then the communicator is periodic in x
bcy: if bcy==dg::PER then the communicator is periodic in y
bcz: if bcz==dg::PER then the communicator is periodic in z
comm: (write only) MPI_COMM_WORLD as a 3d Cartesian MPI communicator
is: input stream from which rank 0 reads the parameters (npx, npy, npz)
verbose: if true, rank 0 prints queries and information to std::cout
Attention
Before creating a second Cartesian communicator consider freeing existing ones with MPI_Comm_free. (Using mpi_init2d and mpi_init3d in the same program has sometimes led to Segmentation faults in the past)

◆ mpi_init3d() [2/3]

static void dg::mpi_init3d ( dg::bc  bcx,
dg::bc  bcy,
dg::bc  bcz,
unsigned &  n,
unsigned &  Nx,
unsigned &  Ny,
unsigned &  Nz,
MPI_Comm &  comm,
std::istream &  is = std::cin,
bool  verbose = true 
)
inlinestatic

Read in number of processes and grid size and create Cartesian MPI communicator.

Also sets the GPU a process should use via rank % num_devices_per_node if THRUST_DEVICE_SYSTEM==THRUST_DEVICE_SYSTEM_CUDA

Parameters
bcx: if bcx==dg::PER then the communicator is periodic in x
bcy: if bcy==dg::PER then the communicator is periodic in y
bcz: if bcz==dg::PER then the communicator is periodic in z
n: rank 0 reads in from is and broadcasts to all processes in MPI_COMM_WORLD
Nx: rank 0 reads in from is and broadcasts to all processes in MPI_COMM_WORLD
Ny: rank 0 reads in from is and broadcasts to all processes in MPI_COMM_WORLD
Nz: rank 0 reads in from is and broadcasts to all processes in MPI_COMM_WORLD
comm: (write only) MPI_COMM_WORLD as a 3d Cartesian MPI communicator
is: input stream from which rank 0 reads the parameters (npx, npy, npz, n, Nx, Ny, Nz)
verbose: if true, rank 0 prints queries and information to std::cout
Attention
Before creating a second Cartesian communicator consider freeing existing ones with MPI_Comm_free. (Using mpi_init2d and mpi_init3d in the same program has sometimes led to Segmentation faults in the past)

◆ mpi_init3d() [3/3]

static void dg::mpi_init3d ( unsigned &  n,
unsigned &  Nx,
unsigned &  Ny,
unsigned &  Nz,
MPI_Comm  comm,
std::istream &  is = std::cin,
bool  verbose = true 
)
inlinestatic

Read in grid size and broadcast to process group.

Parameters
n: rank 0 reads in from is and broadcasts to all processes in comm
Nx: rank 0 reads in from is and broadcasts to all processes in comm
Ny: rank 0 reads in from is and broadcasts to all processes in comm
Nz: rank 0 reads in from is and broadcasts to all processes in comm
comm: (read only) a 3d Cartesian MPI communicator
is: input stream from which rank 0 reads the parameters (n, Nx, Ny, Nz)
verbose: if true, rank 0 prints queries and information to std::cout

◆ transpose()

template<class ContainerType >
void dg::transpose ( unsigned  nx,
unsigned  ny,
const ContainerType &  in,
ContainerType &  out 
)

Transpose vector.

The equivalent of out[j*ny+i] = in[i*nx+j];

Template Parameters
ContainerType: any class for which a specialization of TensorTraits exists and which fulfills the requirements of the data and execution policies defined there (derived from AnyVectorTag and AnyPolicyTag). Among others
  • dg::HVec (serial), dg::DVec (cuda / omp), dg::MHVec (mpi + serial) or dg::MDVec (mpi + cuda / omp)
  • std::vector<dg::DVec> (vector of shared device vectors), std::array<double, 4> (array of 4 doubles) or std::map < std::string, dg::DVec> ( a map of named vectors)
  • double (scalar) and other primitive types ...
If there are several ContainerTypes in the argument list, then TensorTraits must exist for all of them
See also
The dg dispatch system for a detailed explanation of our type dispatch system
Parameters
nx: number of columns in the input vector (size of contiguous chunks) / rows in the output vector
ny: number of rows in the input vector / columns in the output vector
in: input
out: output (may not alias in)
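
For illustration, a minimal sketch with a host vector: a row-major 2 x 3 input becomes a 3 x 2 output.

dg::HVec in( 2*3), out( 3*2);
for( unsigned i=0; i<2; i++)        // ny = 2 rows
    for( unsigned j=0; j<3; j++)    // nx = 3 columns
        in[i*3+j] = 10*i + j;       // in = {0,1,2, 10,11,12}
dg::transpose( 3, 2, in, out);
// out = {0,10, 1,11, 2,12}, i.e. out[j*ny+i] = in[i*nx+j]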