Extension: Json and NetCDF utilities
#include "dg/file/file.h"

A NetCDF Hyperslab for MPINcFile.

Public Member Functions

 MPINcHyperslab (size_t local_start, size_t local_count, MPI_Comm comm)
 {local_start}, {local_count}
 
 MPINcHyperslab (std::vector< size_t > local_start, std::vector< size_t > local_count, MPI_Comm comm)
 local_start, local_count, comm
 
template<class ContainerType , std::enable_if_t< dg::is_vector_v< ContainerType, dg::MPIVectorTag >, bool > = true>
 MPINcHyperslab (const ContainerType &data)
 {local_start(data) , local_size(data), data.communicator()}
 
template<class MPITopology , std::enable_if_t<!dg::is_vector_v< MPITopology >, bool > = true>
 MPINcHyperslab (const MPITopology &grid)
 grid.start(), grid.count(), grid.communicator()
 
template<class T >
 MPINcHyperslab (size_t start0, const T &param)
 Same as MPINcHyperslab{ start0, 1, param}
 
template<class T >
 MPINcHyperslab (size_t start0, size_t count0, const T &param)
 {start0, MPINcHyperslab(param).start()}, {count0, MPINcHyperslab(param).count()}, MPINcHyperslab(param).communicator()
 
unsigned ndim () const
 
const std::vector< size_t > & start () const
 
const std::vector< size_t > & count () const
 
std::vector< size_t > & start ()
 
std::vector< size_t > & count ()
 
MPI_Comm communicator () const
 
const size_t * startp () const
 
const size_t * countp () const
 

Detailed Description

A NetCDF Hyperslab for MPINcFile.

In MPI the data of an array is usually distributed among processes, and each process needs to know where its chunk of data goes in the global array. It is also possible that fewer ranks than are present in file.communicator() actually hold relevant data.

See also
This is how to specify a hyperslab
Attention
When writing variables, NetCDF-C always assumes that the last dimension of the NetCDF variable varies fastest in the given array. This is in contrast to the default behaviour of our dg::evaluate function, which produces vectors where the first dimension of the given grid varies fastest. Thus, when defining variable dimensions the dimension name of the first grid dimension needs to come last.
Note
The unlimited dimension, if present, must be the first dimension.
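
As an illustration, here is a minimal sketch (not taken from the library documentation) of how the hyperslab's startp() and countp() pointers can be handed to the raw NetCDF-C API; the file and variable ids (ncid, varid) are placeholders for a file created for parallel access elsewhere, e.g. with nc_create_par and nc_def_var.

    #include <vector>
    #include <mpi.h>
    #include <netcdf.h>
    #include "dg/file/file.h"

    // Each rank writes its own contiguous 1d chunk, starting at global_offset
    // in the global array. ncid and varid are placeholders for a NetCDF file
    // and variable created elsewhere for parallel access.
    void write_chunk( int ncid, int varid,
        const std::vector<double>& chunk, size_t global_offset)
    {
        dg::file::MPINcHyperslab slab( global_offset, chunk.size(), MPI_COMM_WORLD);
        nc_put_vara_double( ncid, varid, slab.startp(), slab.countp(), chunk.data());
    }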

Constructor & Destructor Documentation

◆ MPINcHyperslab() [1/6]

dg::file::MPINcHyperslab::MPINcHyperslab ( size_t local_start,
size_t local_count,
MPI_Comm comm )
inline

{local_start}, {local_count}

One dimensional slab

Parameters
local_start: the starting position of a 1d variable
local_count: the count of a 1d variable
comm: communicator of ranks that hold relevant data
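
A hypothetical setup for illustration: every rank holds local_count elements, so rank r's chunk starts at r*local_count in the global 1d array.

    int rank;
    MPI_Comm_rank( MPI_COMM_WORLD, &rank);
    size_t local_count = 100; // hypothetical chunk size per rank
    dg::file::MPINcHyperslab slab( rank*local_count, local_count, MPI_COMM_WORLD);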

◆ MPINcHyperslab() [2/6]

dg::file::MPINcHyperslab::MPINcHyperslab ( std::vector< size_t > local_start,
std::vector< size_t > local_count,
MPI_Comm comm )
inline

local_start, local_count, comm

A local_start.size()-dimensional slab

Parameters
local_start: specific local start vector
local_count: specific local count vector (must have the same size as local_start)
comm: communicator of ranks that hold relevant data
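
A sketch of a hypothetical two-dimensional decomposition, where a global array of 100 columns is split among ranks along the first (slowest varying) dimension; all sizes are made up for illustration.

    int rank;
    MPI_Comm_rank( MPI_COMM_WORLD, &rank);
    size_t local_rows = 50, columns = 100; // hypothetical local shape
    std::vector<size_t> start = { rank*local_rows, 0};
    std::vector<size_t> count = { local_rows, columns};
    dg::file::MPINcHyperslab slab( start, count, MPI_COMM_WORLD);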

◆ MPINcHyperslab() [3/6]

template<class ContainerType , std::enable_if_t< dg::is_vector_v< ContainerType, dg::MPIVectorTag >, bool > = true>
dg::file::MPINcHyperslab::MPINcHyperslab ( const ContainerType & data)
inline

{local_start(data) , local_size(data), data.communicator()}

Infer the local start and count from the size of the data vector. The local size is communicated to all processes in data.communicator(), and from its rank each process can infer the starting position of its local data chunk. This assumes that the data is ordered by rank.

Attention
This only works for one-dimensional data
Template Parameters
ContainerType: ContainerType::size() and ContainerType::communicator() must be callable
Parameters
data: data vector from which the one-dimensional start and count are inferred
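
A hedged sketch, assuming a one-dimensional MPI vector type such as dg::MPI_Vector that provides size() and communicator(); the construction from a plain std::vector is an assumption for illustration.

    // Assumption: dg::MPI_Vector( container, comm) constructs an MPI vector
    // that models dg::MPIVectorTag; here every rank holds 100 local elements.
    dg::MPI_Vector<std::vector<double>> data(
        std::vector<double>( 100, 3.14), MPI_COMM_WORLD);
    dg::file::MPINcHyperslab slab( data);
    // slab.count()[0] == 100, slab.start()[0] == sum of the sizes on lower ranks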

◆ MPINcHyperslab() [4/6]

template<class MPITopology , std::enable_if_t<!dg::is_vector_v< MPITopology >, bool > = true>
dg::file::MPINcHyperslab::MPINcHyperslab ( const MPITopology & grid)
inline

grid.start(), grid.count(), grid.communicator()

Template Parameters
MPITopology: MPITopology::start() and MPITopology::count() must return an iterable that can be used to construct a std::vector<size_t>; MPITopology::communicator() must return the communicator of ranks that hold data
Parameters
grid: explicitly sets start, count, and communicator
Attention
When writing variables, NetCDF-C always assumes that the last dimension of the NetCDF variable varies fastest in the given array. This is in contrast to the default behaviour of our dg::evaluate function, which produces vectors where the first dimension of the given grid varies fastest. Thus, when defining variable dimensions the dimension name of the first grid dimension needs to come last.
Note
The unlimited dimension, if present, must be the first dimension.
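
A hedged sketch, assuming an MPI grid type such as dg::MPIGrid2d that provides start(), count(), and communicator(); the grid constructor arguments and the Cartesian communicator setup are assumptions for illustration.

    int size, dims[2] = {0,0}, periods[2] = {1,1};
    MPI_Comm_size( MPI_COMM_WORLD, &size);
    MPI_Dims_create( size, 2, dims);
    MPI_Comm comm;
    MPI_Cart_create( MPI_COMM_WORLD, 2, dims, periods, 1, &comm);

    dg::MPIGrid2d grid( 0., 1., 0., 1., 3, 48, 48, comm); // assumed signature
    dg::file::MPINcHyperslab slab( grid);
    // Reminder: when defining the NetCDF dimensions, the name of the first
    // (fastest varying) grid dimension must come last.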

◆ MPINcHyperslab() [5/6]

template<class T >
dg::file::MPINcHyperslab::MPINcHyperslab ( size_t start0,
const T & param )
inline

Same as MPINcHyperslab{ start0, 1, param}

◆ MPINcHyperslab() [6/6]

template<class T >
dg::file::MPINcHyperslab::MPINcHyperslab ( size_t start0,
size_t count0,
const T & param )
inline

{start0, MPINcHyperslab(param).start()}, {count0, MPINcHyperslab(param).count()}, MPINcHyperslab(param).communicator()

Template Parameters
MPITopology: MPITopology::start() and MPITopology::count() must return an iterable that can be used to construct a std::vector<size_t>; MPITopology::communicator() must return the communicator of ranks that hold data
Parameters
start0: the start coordinate of the unlimited dimension, prepended to the grid's start()
count0: the count of the unlimited dimension, prepended to the grid's count()
param: explicitly sets the remaining start, count, and communicator
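
For example, when writing time step i of a time-dependent field, the unlimited (time) dimension gets start0 = i and count0 = 1; a hedged helper sketch:

    // Prepend the time index to the grid's hyperslab so that the unlimited
    // (first) dimension covers exactly one time step.
    template<class MPITopology>
    dg::file::MPINcHyperslab time_slab( size_t i, const MPITopology& grid)
    {
        return dg::file::MPINcHyperslab( i, 1, grid);
    }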

Member Function Documentation

◆ communicator()

MPI_Comm dg::file::MPINcHyperslab::communicator ( ) const
inline
Returns
MPI Communicator specifying ranks that participate in reading/writing data

◆ count() [1/2]

std::vector< size_t > & dg::file::MPINcHyperslab::count ( )
inline

Returns
count vector

◆ count() [2/2]

const std::vector< size_t > & dg::file::MPINcHyperslab::count ( ) const
inline

Returns
count vector

◆ countp()

const size_t * dg::file::MPINcHyperslab::countp ( ) const
inline

Returns
pointer to first element of count

◆ ndim()

unsigned dg::file::MPINcHyperslab::ndim ( ) const
inline

Returns
Size of the start and count vectors, i.e. the number of dimensions

◆ start() [1/2]

std::vector< size_t > & dg::file::MPINcHyperslab::start ( )
inline

Returns
start vector

◆ start() [2/2]

const std::vector< size_t > & dg::file::MPINcHyperslab::start ( ) const
inline

Returns
start vector

◆ startp()

const size_t * dg::file::MPINcHyperslab::startp ( ) const
inline

Returns
pointer to first element of start
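
Together with countp(), these pointers are exactly what the NetCDF-C calls expect; a hedged read sketch, with ncid and varid as placeholders for a file and variable opened for parallel access elsewhere:

    #include <vector>
    #include <netcdf.h>
    #include "dg/file/file.h"

    // Read this rank's chunk of a variable into a freshly allocated buffer.
    std::vector<double> read_chunk( int ncid, int varid,
        const dg::file::MPINcHyperslab& slab)
    {
        size_t num = 1;
        for( unsigned i = 0; i < slab.ndim(); i++)
            num *= slab.count()[i];
        std::vector<double> buffer( num);
        nc_get_vara_double( ncid, varid, slab.startp(), slab.countp(), buffer.data());
        return buffer;
    }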
