Extension: Json and NetCDF utilities
#include "dg/file/file.h"
Classes

struct dg::file::NC_Error_Handle
    DEPRECATED Empty utility class that handles return values of NetCDF functions and throws NC_Error(status) if( status != NC_NOERR).
Functions

template<class T>
bool dg::file::check_real_time( int ncid, const char* name, int* dimID, int* tvarID)
    DEPRECATED Check if an unlimited dimension exists as if define_real_time was called.

template<class T>
int dg::file::define_real_time( int ncid, const char* name, int* dimID, int* tvarID, bool full_check = false)
    DEPRECATED Define an unlimited time dimension and coordinate variable.

int dg::file::define_time( int ncid, const char* name, int* dimID, int* tvarID)
    DEPRECATED An alias for define_real_time<double>.

int dg::file::define_limited_time( int ncid, const char* name, int size, int* dimID, int* tvarID)
    DEPRECATED Define a limited time dimension and coordinate variable.

template<class T>
bool dg::file::check_dimension( int ncid, int* dimID, const dg::RealGrid1d<T>& g, std::string name_dim = "x", std::string axis = "X")
    DEPRECATED Check if a dimension exists as if define_dimension was called.

template<class T>
int dg::file::define_dimension( int ncid, int* dimID, const dg::RealGrid1d<T>& g, std::string name_dim = "x", std::string axis = "X", bool full_check = false)
    DEPRECATED Define a 1d dimension and associated coordinate variable.

template<class Topology, std::enable_if_t< dg::is_vector_v< typename Topology::host_vector, dg::SharedVectorTag>, bool> = true>
int dg::file::define_dimensions( int ncid, int* dimsIDs, const Topology& g, std::vector<std::string> name_dims = {}, bool full_check = false)
    DEPRECATED Define dimensions and associated coordinate variables.

template<class Topology, std::enable_if_t< dg::is_vector_v< typename Topology::host_vector, dg::SharedVectorTag>, bool> = true>
int dg::file::define_dimensions( int ncid, int* dimsIDs, int* tvarID, const Topology& g, std::vector<std::string> name_dims = {}, bool full_check = false)
    DEPRECATED Define an unlimited time dimension and grid dimensions together with their coordinate variables.

template<class Topology, std::enable_if_t< dg::is_vector_v< typename Topology::host_vector, dg::SharedVectorTag>, bool> = true>
bool dg::file::check_dimensions( int ncid, int* dimsIDs, const Topology& g, std::vector<std::string> name_dims = {})
    DEPRECATED Check if dimensions exist as if define_dimensions was called.

template<class Topology, std::enable_if_t< dg::is_vector_v< typename Topology::host_vector, dg::SharedVectorTag>, bool> = true>
bool dg::file::check_dimensions( int ncid, int* dimsIDs, int* tvarID, const Topology& g, std::vector<std::string> name_dims = {})
    DEPRECATED Check if dimensions exist as if define_dimensions was called.

template<class T>
int dg::file::define_limtime_xy( int ncid, int* dimsIDs, int size, int* tvarID, const dg::aRealTopology2d<T>& g, std::vector<std::string> name_dims = {"time", "y", "x"})
    DEPRECATED Define a limited time dimension and 2 grid dimensions with associated coordinate variables.

template<class MPITopology, std::enable_if_t< dg::is_vector_v< typename MPITopology::host_vector, dg::MPIVectorTag>, bool> = true>
int dg::file::define_dimensions( int ncid, int* dimsIDs, const MPITopology& g, std::vector<std::string> name_dims = {}, bool full_check = false)
    DEPRECATED All processes may call this but only the master process has to and will execute!! Convenience function that just calls the corresponding serial version with the global grid.

template<class MPITopology, std::enable_if_t< dg::is_vector_v< typename MPITopology::host_vector, dg::MPIVectorTag>, bool> = true>
int dg::file::define_dimensions( int ncid, int* dimsIDs, int* tvarID, const MPITopology& g, std::vector<std::string> name_dims = {}, bool full_check = false)
    DEPRECATED All processes may call this but only the master process has to and will execute!! Convenience function that just calls the corresponding serial version with the global grid.

template<class MPITopology, std::enable_if_t< dg::is_vector_v< typename MPITopology::host_vector, dg::MPIVectorTag>, bool> = true>
bool dg::file::check_dimensions( int ncid, int* dimsIDs, const MPITopology& g, std::vector<std::string> name_dims = {})
    DEPRECATED All processes may call this and all will execute!! Convenience function that just calls the corresponding serial version with the global grid.

template<class MPITopology, std::enable_if_t< dg::is_vector_v< typename MPITopology::host_vector, dg::MPIVectorTag>, bool> = true>
bool dg::file::check_dimensions( int ncid, int* dimsIDs, int* tvarID, const MPITopology& g, std::vector<std::string> name_dims = {})
    DEPRECATED All processes may call this and all will execute!! Convenience function that just calls the corresponding serial version with the global grid.

template<class host_vector, class Topology>
void dg::file::get_var( int ncid, int varid, const Topology& grid, host_vector& data, bool parallel = true)
    DEPRECATED Convenience wrapper around nc_get_var.

template<class host_vector, class Topology>
void dg::file::get_vara( int ncid, int varid, unsigned slice, const Topology& grid, host_vector& data, bool parallel = true)
    DEPRECATED Convenience wrapper around nc_get_vara.

template<class T, class real_type>
void dg::file::get_var( int ncid, int varid, const RealGrid0d<real_type>& grid, T& data, bool parallel = true)
    DEPRECATED Read a scalar from the NetCDF file.

template<class T, class real_type>
void dg::file::get_vara( int ncid, int varid, unsigned slice, const RealGrid0d<real_type>& grid, T& data, bool parallel = true)
    DEPRECATED Read a scalar from the NetCDF file at a given time slice.

template<class host_vector, class Topology>
void dg::file::put_var( int ncid, int varid, const Topology& grid, const host_vector& data, bool parallel = false)
    DEPRECATED Write an array to a NetCDF file.

template<class host_vector, class Topology>
void dg::file::put_vara( int ncid, int varid, unsigned slice, const Topology& grid, const host_vector& data, bool parallel = false)
    DEPRECATED Write an array to a NetCDF file at a given time slice.

template<class T, class real_type>
void dg::file::put_var( int ncid, int varid, const RealGrid0d<real_type>& grid, T data, bool parallel = false)
    DEPRECATED Write a scalar to the NetCDF file.

template<class T, class real_type>
void dg::file::put_vara( int ncid, int varid, unsigned slice, const RealGrid0d<real_type>& grid, T data, bool parallel = false)
    DEPRECATED Write a scalar to the NetCDF file at a given time slice.
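To show how these deprecated helpers fit together, here is a minimal serial write sketch. The file name "test.nc", the variable name "field" and the sampled function are invented for this example, and dg::RealGrid0d<double> is assumed to be default-constructible:

    #include <cmath>
    #include "dg/algorithm.h"
    #include "dg/file/file.h"

    int main()
    {
        dg::Grid2d grid( 0., 2.*M_PI, 0., 2.*M_PI, 3, 20, 20);
        dg::HVec data = dg::evaluate(
            []( double x, double y){ return sin(x)*sin(y);}, grid);

        int ncid, dim_ids[3], tvarID, varID;
        dg::file::NC_Error_Handle err; // throws dg::file::NC_Error on failure
        err = nc_create( "test.nc", NC_NETCDF4|NC_CLOBBER, &ncid);
        // unlimited "time" dimension plus "y", "x" grid dimensions
        err = dg::file::define_dimensions( ncid, dim_ids, &tvarID, grid);
        err = nc_def_var( ncid, "field", NC_DOUBLE, 3, dim_ids, &varID);
        for( unsigned i=0; i<2; i++)
        {
            // write the time value, then the field, at slice i
            dg::file::put_vara( ncid, tvarID, i, dg::RealGrid0d<double>(), (double)i);
            dg::file::put_vara( ncid, varID, i, grid, data);
        }
        err = nc_close( ncid);
        return 0;
    }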
Function Documentation

template<class T>
bool dg::file::check_dimension( int ncid, int* dimID, const dg::RealGrid1d<T>& g, std::string name_dim = "x", std::string axis = "X")
DEPRECATED Check if a dimension exists as if define_dimension was called.

This function returns false if the dimension with the given name does not exist. It throws std::runtime_error or a dg::file::NC_Error if an existing dimension or its coordinate variable is inconsistent with what define_dimension would create.
Parameters
    ncid: NetCDF file or group ID
    dimID: dimension ID (output)
    g: The 1d DG grid from which data points for the coordinate variable are generated using g.abscissas()
    name_dim: Name of dimension and coordinate variable (input)
    axis: The axis attribute (input), one of "X", "Y" or "Z"
Template Parameters
    T: determines the datatype of the dimension variables
template<class Topology>
bool dg::file::check_dimensions( int ncid, int* dimsIDs, const Topology& g, std::vector<std::string> name_dims = {})

DEPRECATED Check if dimensions exist as if define_dimensions was called.

This function checks if the given file contains dimensions and their associated dimension variables in the same way that the corresponding define_dimensions creates them. If anything is amiss, an error will be thrown. In the MPI version all processes may call this and all will execute!! It is a convenience function that just calls the corresponding serial version with the global grid.
Parameters
    ncid: NetCDF file or group ID
    dimsIDs: (write-only) dimension IDs, must be of size g.ndim()
    g: The dG grid from which data points for the coordinate variables are generated using g.abscissas() in each dimension
    name_dims: Names for the dimension and coordinate variables (must have size g.ndim()) in numpy/python ordering, e.g. {"z", "y", "x"} in 3d; if name_dims.empty() then the default names {"z", "y", "x"} are used
Template Parameters
    Topology: typename Topology::value_type determines the datatype of the dimension variables
template<class Topology>
bool dg::file::check_dimensions( int ncid, int* dimsIDs, int* tvarID, const Topology& g, std::vector<std::string> name_dims = {})

DEPRECATED Check if dimensions exist as if define_dimensions was called.

This function checks if the given file contains dimensions and their associated dimension variables in the same way that the corresponding define_dimensions creates them. If anything is amiss, an error will be thrown. In the MPI version all processes may call this and all will execute!! It is a convenience function that just calls the corresponding serial version with the global grid. Semantically it is equivalent to the following:
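Assuming name_dims is non-empty, a plausible rendering in terms of the other helpers on this page reads as follows; treat it as a sketch of the semantics, not the library's verbatim code:

    bool exists =
           dg::file::check_real_time<typename Topology::value_type>(
               ncid, name_dims[0].data(), &dimsIDs[0], tvarID)
        && dg::file::check_dimensions( ncid, &dimsIDs[1], g,
               { name_dims.begin()+1, name_dims.end()});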
Parameters
    ncid: NetCDF file or group ID
    dimsIDs: (write-only) dimension IDs, must be of size g.ndim()+1
    tvarID: (write-only) time coordinate variable ID (unlimited)
    g: The dG grid from which data points for the coordinate variables are generated using g.abscissas()
    name_dims: Names for the dimension and coordinate variables (must have size g.ndim()+1) in numpy/python ordering, e.g. {"time", "z", "y", "x"} in 3d; if name_dims.empty() then the default names {"time", "z", "y", "x"} are used
Template Parameters
    Topology: typename Topology::value_type determines the datatype of the dimension variables
template<class T>
bool dg::file::check_real_time( int ncid, const char* name, int* dimID, int* tvarID)
DEPRECATED Check if an unlimited dimension exists as if define_real_time was called.

This function returns false if the dimension with the given name does not exist. It throws std::runtime_error or a dg::file::NC_Error if an existing dimension or its coordinate variable is inconsistent with what define_real_time would create.
Parameters
    ncid: NetCDF file or group ID
    name: Name of unlimited dimension and associated variable
    dimID: (write-only) time-dimension ID
    tvarID: (write-only) time-variable ID (for a time variable of type T)
Template Parameters
    T: determines the type of the dimension variable
template<class T>
int dg::file::define_dimension( int ncid, int* dimID, const dg::RealGrid1d<T>& g, std::string name_dim = "x", std::string axis = "X", bool full_check = false)

DEPRECATED Define a 1d dimension and associated coordinate variable.

Parameters
    ncid: NetCDF file or group ID
    dimID: dimension ID (output)
    g: The 1d DG grid from which data points for the coordinate variable are generated using g.abscissas()
    name_dim: Name of dimension and coordinate variable (input)
    axis: The axis attribute (input), one of "X", "Y" or "Z"
    full_check: If true, check_dimension is called before the definition
Template Parameters
    T: determines the datatype of the dimension variables
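For illustration, a short usage sketch; it assumes ncid was obtained from a prior nc_create or nc_open and uses the usual 1d grid constructor:

    // Sketch: define an "x" dimension and its coordinate variable from a 1d grid
    dg::RealGrid1d<double> g1d( 0., 2.*M_PI, 3, 10); // [0,2pi], 3 polynomials, 10 cells
    int dim_x;
    dg::file::NC_Error_Handle err;
    err = dg::file::define_dimension( ncid, &dim_x, g1d, "x", "X");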
template<class Topology>
int dg::file::define_dimensions( int ncid, int* dimsIDs, const Topology& g, std::vector<std::string> name_dims = {}, bool full_check = false)

DEPRECATED Define dimensions and associated coordinate variables.

In the MPI version all processes may call this but only the master process has to and will execute!! It is a convenience function that just calls the corresponding serial version with the global grid.

Parameters
    ncid: NetCDF file or group ID
    dimsIDs: (write-only) dimension IDs, must be of size g.ndim()
    g: The dG grid from which data points for the coordinate variables are generated using g.abscissas() in each dimension
    name_dims: Names for the dimension and coordinate variables (must have size g.ndim()) in numpy/python ordering, e.g. {"z", "y", "x"} in 3d; if name_dims.empty() then the default names {"z", "y", "x"} are used
    full_check: If true, check_dimensions is called before the definition. In this case dimensions may already exist in the file and will not trigger a throw (it is also possible for some dimensions to exist while others do not)
Template Parameters
    Topology: typename Topology::value_type determines the datatype of the dimension variables
template<class Topology>
int dg::file::define_dimensions( int ncid, int* dimsIDs, int* tvarID, const Topology& g, std::vector<std::string> name_dims = {}, bool full_check = false)

DEPRECATED Define an unlimited time dimension and grid dimensions together with their coordinate variables.

In the MPI version all processes may call this but only the master process has to and will execute!! It is a convenience function that just calls the corresponding serial version with the global grid. Semantically it is equivalent to the following:
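Assuming name_dims is non-empty, a plausible rendering in terms of define_real_time and the grid-only define_dimensions; a sketch of the semantics, not the library's verbatim code:

    int err = dg::file::define_real_time<typename Topology::value_type>(
                  ncid, name_dims[0].data(), &dimsIDs[0], tvarID, full_check);
    err     = dg::file::define_dimensions( ncid, &dimsIDs[1], g,
                  { name_dims.begin()+1, name_dims.end()}, full_check);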
Parameters
    ncid: NetCDF file or group ID
    dimsIDs: (write-only) dimension IDs, must be of size g.ndim()+1
    tvarID: (write-only) time coordinate variable ID (unlimited)
    g: The dG grid from which data points for the coordinate variables are generated using g.abscissas()
    name_dims: Names for the dimension and coordinate variables (must have size g.ndim()+1) in numpy/python ordering, e.g. {"time", "z", "y", "x"} in 3d; if name_dims.empty() then the default names {"time", "z", "y", "x"} are used
    full_check: If true, check_dimensions is called before the definition. In this case dimensions may already exist in the file and will not trigger a throw (it is also possible for some dimensions to exist while others do not)
Template Parameters
    Topology: typename Topology::value_type determines the datatype of the dimension variables
int dg::file::define_limited_time( int ncid, const char* name, int size, int* dimID, int* tvarID)

DEPRECATED Define a limited time dimension and coordinate variable.

Parameters
    ncid: NetCDF file or group ID
    name: Name of the time variable (usually "time")
    size: The number of timesteps
    dimID: time-dimension ID
    tvarID: time-variable ID (for a time variable of type NC_DOUBLE)
template<class T>
int dg::file::define_limtime_xy( int ncid, int* dimsIDs, int size, int* tvarID, const dg::aRealTopology2d<T>& g, std::vector<std::string> name_dims = {"time", "y", "x"})

DEPRECATED Define a limited time dimension and 2 grid dimensions with associated coordinate variables.

The dimensions have attributes of (time, Y, X). Semantically it is equivalent to the following:
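A plausible rendering in terms of define_limited_time and define_dimension; it assumes the 2d grid exposes its 1d grids as g.gy() and g.gx(), and is a sketch of the semantics, not the library's verbatim code:

    int err = dg::file::define_limited_time( ncid, name_dims[0].data(), size,
                  &dimsIDs[0], tvarID);
    err     = dg::file::define_dimension( ncid, &dimsIDs[1], g.gy(), name_dims[1], "Y");
    err     = dg::file::define_dimension( ncid, &dimsIDs[2], g.gx(), name_dims[2], "X");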
Parameters
    ncid: NetCDF file or group ID
    dimsIDs: (write-only) 3d array of dimension IDs (time, Y, X)
    size: The size of the time variable
    tvarID: (write-only) The ID of the time variable (limited)
    g: The 2d DG grid from which data points for the coordinate variables are generated using g.abscissas()
    name_dims: Names for the dimension variables (time, Y, X)
Template Parameters
    T: determines the datatype of the dimension variables
template<class T>
int dg::file::define_real_time( int ncid, const char* name, int* dimID, int* tvarID, bool full_check = false)

DEPRECATED Define an unlimited time dimension and coordinate variable.

Parameters
    ncid: NetCDF file or group ID
    name: Name of unlimited dimension and associated variable
    dimID: (write-only) time-dimension ID
    tvarID: (write-only) time-variable ID (for a time variable of type T)
    full_check: If true, check_real_time is called before the definition
Template Parameters
    T: determines the type of the dimension variable
int dg::file::define_time( int ncid, const char* name, int* dimID, int* tvarID)

DEPRECATED An alias for define_real_time<double>.
template<class T, class real_type>
void dg::file::get_var( int ncid, int varid, const RealGrid0d<real_type>& grid, T& data, bool parallel = true)
DEPRECATED Read a scalar from the NetCDF file.

Throws a dg::file::NC_Error if an error occurs.

Template Parameters
    T: determines the data type to read
    real_type: ignored
Parameters
    ncid: NetCDF file or group ID
    varid: Variable ID
    grid: a tag to signify scalar output (and help the compiler choose this function over the array input function); can be of type dg::RealMPIGrid0d<real_type>
    data: The (single) datum read from the file
    parallel: This parameter is ignored in both serial and MPI versions. In an MPI program all processes call this function and all processes read.

Note: If the file is opened with parallel NetCDF access, parallel is always true, in which case all processes must have previously opened the file and inquired e.g. the varid.

template<class host_vector, class Topology>
void dg::file::get_var( int ncid, int varid, const Topology& grid, host_vector& data, bool parallel = true)
DEPRECATED Convenience wrapper around nc_get_var.

The purpose of this function is mainly to simplify input in an MPI environment and to provide the same interface in a shared memory system for uniform programming. This version is for a time-independent variable, i.e. it reads a single variable in one go and is equivalent to nc_get_var. The dimensionality is given by the grid.

Throws a dg::file::NC_Error if an error occurs.

Template Parameters
    Topology: One of the dG grids (e.g. dg::RealGrid2d); determines whether the shared memory or the MPI version is called
    host_vector: For a shared Topology: a type with a data() member that returns a pointer to the first element in CPU (host) address space, meaning it cannot be a GPU vector. For an MPI Topology: must be MPI_Vector. host_vector::value_type must match the data type of the variable in the file.
Parameters
    ncid: NetCDF file or group ID
    varid: Variable ID
    grid: The grid from which to construct the start and count variables to forward to nc_get_vara
    data: contains the read data on return (must be of size grid.size())
    parallel: This parameter is ignored in the serial version. In the MPI version it indicates whether each process reads from the file independently in parallel (true) or each process funnels its data through the master rank (false), which involves communication but may be faster than the former method.
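A minimal read sketch; the file name "test.nc" and the time-independent variable "static_field" are invented for this example and must of course match what was actually written:

    dg::Grid2d grid( 0., 2.*M_PI, 0., 2.*M_PI, 3, 20, 20);
    dg::HVec data( grid.size());
    int ncid, varID;
    dg::file::NC_Error_Handle err;
    err = nc_open( "test.nc", NC_NOWRITE, &ncid);
    err = nc_inq_varid( ncid, "static_field", &varID);
    dg::file::get_var( ncid, varID, grid, data); // reads the whole variable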
Note: The grid communicator grid.communicator() does not need to be MPI_COMM_WORLD, e.g. when reading/writing a 2d slice of a 3d vector. In this example case grid.communicator() is only 2d, not 3d. Remember that only the group containing the master process reads/writes its data to the file, while all other processes immediately return. There are two ways to reliably read/write the data in such a case: create the communicator via MPI_Comm_split on MPI_COMM_WORLD followed by MPI_Cart_create, or via MPI_Cart_sub.

Attention: The master process is rank==0 in MPI_COMM_WORLD. The MPI_COMM_WORLD rank of a process is usually the same in a Cartesian communicator of the same size but is not guaranteed, so always check MPI_COMM_WORLD ranks for file write operations.

Note: If the file is opened with parallel NetCDF access, parallel is always true, in which case all processes must have previously opened the file and inquired e.g. the varid.

template<class T, class real_type>
void dg::file::get_vara( int ncid, int varid, unsigned slice, const RealGrid0d<real_type>& grid, T& data, bool parallel = true)
DEPRECATED Read a scalar from the NetCDF file at a given time slice.

Throws a dg::file::NC_Error if an error occurs.

Template Parameters
    T: determines the data type to read
    real_type: ignored
Parameters
    ncid: NetCDF file or group ID
    varid: Variable ID
    slice: The number of the time-slice to read (first element of the startp array in nc_get_vara)
    grid: a tag to signify scalar output (and help the compiler choose this function over the array input function); can be of type dg::RealMPIGrid0d<real_type>
    data: The (single) datum to read
    parallel: This parameter is ignored in both serial and MPI versions. In an MPI program all processes call this function and all processes read.

Attention: The master process is rank==0 in MPI_COMM_WORLD. The MPI_COMM_WORLD rank of a process is usually the same in a Cartesian communicator of the same size but is not guaranteed, so always check MPI_COMM_WORLD ranks for file write operations.

Note: If the file is opened with parallel NetCDF access, parallel is always true, in which case all processes must have previously opened the file and inquired e.g. the varid.

template<class host_vector, class Topology>
void dg::file::get_vara( int ncid, int varid, unsigned slice, const Topology& grid, host_vector& data, bool parallel = true)
DEPRECATED Convenience wrapper around nc_get_vara.

The purpose of this function is mainly to simplify input in an MPI environment and to provide the same interface in a shared memory system for uniform programming. This version is for a time-dependent variable, i.e. it reads a single time-slice from the file. The dimensionality is given by the grid.

Throws a dg::file::NC_Error if an error occurs.

Template Parameters
    Topology: One of the dG grids (e.g. dg::RealGrid2d); determines whether the shared memory or the MPI version is called
    host_vector: For a shared Topology: a type with a data() member that returns a pointer to the first element in CPU (host) address space, meaning it cannot be a GPU vector. For an MPI Topology: must be MPI_Vector. host_vector::value_type must match the data type of the variable in the file.
Parameters
    ncid: NetCDF file or group ID
    varid: Variable ID
    slice: The number of the time-slice to read (first element of the startp array in nc_get_vara)
    grid: The grid from which to construct the start and count variables to forward to nc_get_vara
    data: contains the read data on return (must be of size grid.size())
    parallel: This parameter is ignored in the serial version. In the MPI version it indicates whether each process reads from the file independently in parallel (true) or each process funnels its data through the master rank (false), which involves communication but may be faster than the former method.
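Continuing the read sketch above (same ncid, grid and data), reading one time slice of a time-dependent variable, here with the invented name "field", looks like this:

    int fieldID;
    err = nc_inq_varid( ncid, "field", &fieldID);
    dg::file::get_vara( ncid, fieldID, 0, grid, data); // slice 0, data has grid.size()
    err = nc_close( ncid);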
Note: The grid communicator grid.communicator() does not need to be MPI_COMM_WORLD, e.g. when reading/writing a 2d slice of a 3d vector. In this example case grid.communicator() is only 2d, not 3d. Remember that only the group containing the master process reads/writes its data to the file, while all other processes immediately return. There are two ways to reliably read/write the data in such a case: create the communicator via MPI_Comm_split on MPI_COMM_WORLD followed by MPI_Cart_create, or via MPI_Cart_sub.

Attention: The master process is rank==0 in MPI_COMM_WORLD. The MPI_COMM_WORLD rank of a process is usually the same in a Cartesian communicator of the same size but is not guaranteed, so always check MPI_COMM_WORLD ranks for file write operations.

Note: If the file is opened with parallel NetCDF access, parallel is always true, in which case all processes must have previously opened the file and inquired e.g. the varid.

template<class T, class real_type>
void dg::file::put_var( int ncid, int varid, const RealGrid0d<real_type>& grid, T data, bool parallel = false)
DEPRECATED Write a scalar to the NetCDF file.

Throws a dg::file::NC_Error if an error occurs.

Template Parameters
    T: determines the data type to write
    real_type: ignored
Parameters
    ncid: NetCDF file or group ID
    varid: Variable ID (note that in NetCDF variables without dimensions are scalars)
    grid: a tag to signify scalar output (and help the compiler choose this function over the array output function); can be of type dg::RealMPIGrid<real_type>
    data: The (single) datum to write
    parallel: This parameter is ignored in both serial and MPI versions. In an MPI program all processes can call this function but only the master thread writes.

Attention: The master process is rank==0 in MPI_COMM_WORLD. The MPI_COMM_WORLD rank of a process is usually the same in a Cartesian communicator of the same size but is not guaranteed, so always check MPI_COMM_WORLD ranks for file write operations.

template<class host_vector, class Topology>
void dg::file::put_var( int ncid, int varid, const Topology& grid, const host_vector& data, bool parallel = false)
DEPRECATED Write an array to a NetCDF file.

Convenience wrapper around nc_put_var. The purpose of this function is mainly to simplify output in an MPI environment and to provide the same interface in a shared memory system for uniform programming. This version is for a time-independent variable, i.e. it writes a single variable in one go and is equivalent to nc_put_var. The dimensionality is given by the grid.

Throws a dg::file::NC_Error if an error occurs.

Template Parameters
    Topology: One of the dG grids (e.g. dg::RealGrid2d); determines whether the shared memory or the MPI version is called
    host_vector: For a shared Topology: a type with a data() member that returns a pointer to the first element in CPU (host) address space, meaning it cannot be a GPU vector. For an MPI Topology: must be MPI_Vector. host_vector::value_type must match the data type of the variable in the file.
Parameters
    ncid: NetCDF file or group ID
    varid: Variable ID
    grid: The grid from which to construct the start and count variables to forward to nc_put_vara
    data: data to be written to the NetCDF file
    parallel: This parameter is ignored in the serial version. In the MPI version it indicates whether each process writes to the file independently in parallel (true) or each process funnels its data through the master rank (false), which involves communication but may be faster than the former method.
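Continuing the write sketch at the top of this page (same ncid, grid and data; the variable name "static_field" is invented), a time-independent variable could be added like this; passing full_check=true lets the already defined "y" and "x" dimensions pass without a throw:

    // Sketch: reuse the existing grid dimensions and write a static field
    int dim2_ids[2], var2ID;
    err = dg::file::define_dimensions( ncid, dim2_ids, grid, {}, true);
    err = nc_def_var( ncid, "static_field", NC_DOUBLE, 2, dim2_ids, &var2ID);
    dg::file::put_var( ncid, var2ID, grid, data);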
Note: The grid communicator grid.communicator() does not need to be MPI_COMM_WORLD, e.g. when reading/writing a 2d slice of a 3d vector. In this example case grid.communicator() is only 2d, not 3d. Remember that only the group containing the master process reads/writes its data to the file, while all other processes immediately return. There are two ways to reliably read/write the data in such a case: create the communicator via MPI_Comm_split on MPI_COMM_WORLD followed by MPI_Cart_create, or via MPI_Cart_sub.

Attention: The master process is rank==0 in MPI_COMM_WORLD. The MPI_COMM_WORLD rank of a process is usually the same in a Cartesian communicator of the same size but is not guaranteed, so always check MPI_COMM_WORLD ranks for file write operations.

Note: If parallel should be false, only the master process needs to know the ncid, variable or dimension names, the slice to write etc. If parallel should be true, the file must be opened with the NC_MPIIO flag from the NetCDF_par.h header and the variable must be marked with NC_COLLECTIVE access; furthermore, all processes must know the ncid, variable and dimension names, the slice to write etc.

template<class T, class real_type>
void dg::file::put_vara( int ncid, int varid, unsigned slice, const RealGrid0d<real_type>& grid, T data, bool parallel = false)
DEPRECATED Write a scalar to the NetCDF file at a given time slice.

Throws a dg::file::NC_Error if an error occurs.

Template Parameters
    T: determines the data type to write
    real_type: ignored
Parameters
    ncid: NetCDF file or group ID
    varid: Variable ID (note that in NetCDF variables without dimensions are scalars)
    slice: The number of the time-slice to write (first element of the startp array in nc_put_vara)
    grid: a tag to signify scalar output (and help the compiler choose this function over the array output function); can be of type dg::RealMPIGrid<real_type>
    data: The (single) datum to write
    parallel: This parameter is ignored in both serial and MPI versions. In an MPI program all processes can call this function but only the master thread writes.

Note: It is possible to write data at slice >= size (in which case all variables sharing the unlimited dimension will increase in size) but it is not possible to read data at slice >= size. It is the user's responsibility to manage the slice value across variables.

Attention: The master process is rank==0 in MPI_COMM_WORLD. The MPI_COMM_WORLD rank of a process is usually the same in a Cartesian communicator of the same size but is not guaranteed, so always check MPI_COMM_WORLD ranks for file write operations.

template<class host_vector, class Topology>
void dg::file::put_vara( int ncid, int varid, unsigned slice, const Topology& grid, const host_vector& data, bool parallel = false)
DEPRECATED Write an array to a NetCDF file at a given time slice.

Convenience wrapper around nc_put_vara. The purpose of this function is mainly to simplify output in an MPI environment and to provide the same interface in a shared memory system for uniform programming. This version is for a time-dependent variable, i.e. it writes a single time-slice into the file. The dimensionality is given by the grid.

Throws a dg::file::NC_Error if an error occurs.

Template Parameters
    Topology: One of the dG grids (e.g. dg::RealGrid2d); determines whether the shared memory or the MPI version is called
    host_vector: For a shared Topology: a type with a data() member that returns a pointer to the first element in CPU (host) address space, meaning it cannot be a GPU vector. For an MPI Topology: must be MPI_Vector. host_vector::value_type must match the data type of the variable in the file.
Parameters
    ncid: NetCDF file or group ID
    varid: Variable ID
    slice: The number of the time-slice to write (first element of the startp array in nc_put_vara)
    grid: The grid from which to construct the start and count variables to forward to nc_put_vara
    data: data to be written to the NetCDF file
    parallel: This parameter is ignored in the serial version. In the MPI version it indicates whether each process writes to the file independently in parallel (true) or each process funnels its data through the master rank (false), which involves communication but may be faster than the former method.

Note: It is possible to write data at slice >= size (in which case all variables sharing the unlimited dimension will increase in size) but it is not possible to read data at slice >= size. It is the user's responsibility to manage the slice value across variables.
Note: The grid communicator grid.communicator() does not need to be MPI_COMM_WORLD, e.g. when reading/writing a 2d slice of a 3d vector. In this example case grid.communicator() is only 2d, not 3d. Remember that only the group containing the master process reads/writes its data to the file, while all other processes immediately return. There are two ways to reliably read/write the data in such a case: create the communicator via MPI_Comm_split on MPI_COMM_WORLD followed by MPI_Cart_create, or via MPI_Cart_sub.

Attention: The master process is rank==0 in MPI_COMM_WORLD. The MPI_COMM_WORLD rank of a process is usually the same in a Cartesian communicator of the same size but is not guaranteed, so always check MPI_COMM_WORLD ranks for file write operations.

Note: If parallel should be false, only the master process needs to know the ncid, variable or dimension names, the slice to write etc. If parallel should be true, the file must be opened with the NC_MPIIO flag from the NetCDF_par.h header and the variable must be marked with NC_COLLECTIVE access; furthermore, all processes must know the ncid, variable and dimension names, the slice to write etc.