PCG() = default
    Allocate nothing, call construct method before usage.

PCG( const ContainerType &copyable, unsigned max_iterations)
    Allocate memory for the pcg method.

void set_max( unsigned new_max)
    Set the maximum number of iterations.

unsigned get_max() const
    Get the current maximum number of iterations.

const ContainerType & copyable() const
    Return an object of same size as the object used for construction.

void set_verbose( bool verbose)
    Set or unset debugging output during iterations.

void set_throw_on_fail( bool throw_on_fail)
    Set or unset a throw on failure-to-converge.

template<class ... Params>
void construct( Params &&...ps)
    Perfect-forward parameters to one of the constructors.

template<class MatrixType0, class ContainerType0, class ContainerType1, class MatrixType1, class ContainerType2>
unsigned solve( MatrixType0 &&A, ContainerType0 &x, const ContainerType1 &b, MatrixType1 &&P, const ContainerType2 &W, value_type eps = 1e-12, value_type nrmb_correction = 1, int test_frequency = 1)
    Solve \( Ax = b\) using a preconditioned conjugate gradient method.
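As a minimal sketch of the construction interface (the vector size and iteration limits are arbitrary assumptions, and the umbrella header dg/algorithm.h is assumed to provide dg::PCG):

#include <iostream>
#include "dg/algorithm.h"

int main()
{
    dg::DVec copyable( 100, 0.);    // any vector of the size and type the solver shall work with
    dg::PCG<dg::DVec> pcg;          // default construction allocates nothing
    pcg.construct( copyable, 1000); // perfect-forwards to PCG( copyable, max_iterations)
    pcg.set_max( 500);              // tighten the iteration limit later on
    pcg.set_verbose( true);         // enable debugging output during iterations
    std::cout << "max iterations: "<<pcg.get_max()<<"\n";
    return 0;
}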
template<class ContainerType>
class dg::PCG< ContainerType >
Preconditioned conjugate gradient method to solve \( Ax=b\), where \( A\) is positive definite and self-adjoint in the weighted scalar product defined by the diagonal weights matrix \( W\):
\[ A^\dagger := \frac{1}{W} A^T W = A\]
Note that if \( A\) is self-adjoint then both \( (WA)^T = WA \) and \( \left(A \frac{1}{W}\right)^T = A\frac{1}{W}\) are symmetric. The positive definite, self-adjoint preconditioner \( P \approx A^{-1}\), which approximates the inverse of \( A\) and is fast to apply, is used to solve the left-preconditioned system
\[ PAx=Pb\]
- Note
- Our implementation uses a stopping criterion based on the residual at iteration \( i\): \( || r_i ||_W = ||Ax_i-b||_W < \epsilon( ||b||_W + C) \). However, the real error is bounded by
\[ \frac{ ||e_i||_W}{||x||_W} \leq \kappa(PA) \frac{||r_i||_W}{||b||_W} \]
Thus, if the condition number \( \kappa\) is large, the real error may still be large even if the residual is small; see Ashby et al., "The Role of the Inner Product in Stopping Criteria for Conjugate Gradient Iterations" (2001).
- See also
- This implements the PCG algorithm (applied to \( WA\)) as given in https://en.wikipedia.org/wiki/Conjugate_gradient_method or in the book "Iterative Methods for Sparse Linear Systems", 2nd edition, by Yousef Saad.
- Note
- Conjugate gradients might become unstable for positive semidefinite matrices arising, e.g., in the discretization of the periodic Laplacian.
- Attention
- Beware the sign: a negative definite matrix does not work with conjugate gradient.
const double eps = 1e-6;
unsigned num_iter = pcg.solve( A, x, b, 1., w2d, eps);
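The two code lines above are taken out of a larger example; the following is a minimal, hedged sketch of how the surrounding setup could look. The grid parameters, the right hand side and the use of dg::Elliptic as the self-adjoint operator are illustrative assumptions, and, as in the excerpt, the scalar 1. serves as the (identity) preconditioner:

#include <cmath>
#include <iostream>
#include "dg/algorithm.h"

int main()
{
    // illustrative 2d grid with Dirichlet boundaries (parameters are assumptions)
    dg::CartesianGrid2d grid( 0, M_PI, 0, M_PI, 3, 40, 40, dg::DIR, dg::DIR);
    const dg::DVec w2d = dg::create::weights( grid); // weights W defining the scalar product
    dg::DVec x( grid.size(), 0.);                    // initial guess: zero
    const dg::DVec b = dg::evaluate(
        []( double X, double Y){ return 2.*sin(X)*sin(Y);}, grid);
    // elliptic operator, self-adjoint and positive definite in the weights w2d
    dg::Elliptic<dg::CartesianGrid2d, dg::DMatrix, dg::DVec> A( grid);
    dg::PCG<dg::DVec> pcg( x, grid.size());          // max_iterations = number of grid points
    const double eps = 1e-6;
    unsigned num_iter = pcg.solve( A, x, b, 1., w2d, eps); // P = 1., i.e. no preconditioning
    std::cout << "Converged in "<<num_iter<<" iterations\n";
    return 0;
}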
- Template Parameters
- ContainerType | Any class for which a specialization of TensorTraits exists and which fulfills the requirements of the data and execution policies defined there, derived from AnyVectorTag and AnyPolicyTag. Among others:
    dg::HVec (serial), dg::DVec (cuda / omp), dg::MHVec (mpi + serial) or dg::MDVec (mpi + cuda / omp),
    std::vector<dg::DVec> (a vector of shared device vectors), std::array<double, 4> (an array of 4 doubles) or std::map<std::string, dg::DVec> (a map of named vectors),
    double (scalar) and other primitive types ...
    If there are several ContainerTypes in the argument list, then TensorTraits must exist for all of them.
- See also
- See The dg dispatch system for a detailed explanation of our type dispatch system
template<class ContainerType>
template<class MatrixType0, class ContainerType0, class ContainerType1, class MatrixType1, class ContainerType2>
unsigned dg::PCG< ContainerType >::solve( MatrixType0 && A, ContainerType0 & x, const ContainerType1 & b, MatrixType1 && P, const ContainerType2 & W, value_type eps = 1e-12, value_type nrmb_correction = 1, int test_frequency = 1 )
Solve \( Ax = b\) using a preconditioned conjugate gradient method.
The iteration stops if \( ||Ax-b||_W < \epsilon( ||b||_W + C) \), where \( C\) is the absolute error in units of \( \epsilon\) and \( W\) defines a square norm.
- Parameters
- A | A self-adjoint, positive definite matrix with respect to the weights W
- x | Contains an initial value on input and the solution on output.
- b | The right hand side vector.
- P | The preconditioner to be used (an approximation to the inverse of A that is fast to compute)
- W | Weights that define the scalar product in which A and P are self-adjoint and in which the error norm is computed.
- eps | The relative error to be respected
- nrmb_correction | The absolute error C in units of eps to be respected
- test_frequency | If set to 1, the error norm \( ||Ax-b||_W\) is computed in every iteration to test if the loop can be terminated. Sometimes, especially for small sizes, the dot product is expensive to compute; then it is beneficial to set this parameter to e.g. 10, which means that the error condition is only evaluated every 10th iteration.
- Returns
- Number of iterations used to achieve desired precision
- Note
- The method will throw dg::Fail if the desired accuracy is not reached within max_iterations. You can unset this behaviour with the set_throw_on_fail member.
- Required memops per iteration (P is assumed to be a vector):
    - 15 reads + 4 writes
    - plus the number of memops for A
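A hedged sketch of both ways to handle a failure to converge, reusing pcg, A, x, b and w2d from the earlier example (inside the same context); the tolerances are illustrative and it is an assumption that dg::Fail can be caught by reference and reports through the usual std::exception interface:

// Option 1 (default): solve throws dg::Fail when max_iterations is reached
try{
    pcg.solve( A, x, b, 1., w2d, 1e-12);
}
catch( dg::Fail& fail){
    std::cerr << "PCG failed to converge: "<<fail.what()<<"\n";
}
// Option 2: disable the throw and inspect the returned iteration count yourself;
// test_frequency = 10 evaluates the stopping criterion only every 10th iteration
pcg.set_throw_on_fail( false);
unsigned iter = pcg.solve( A, x, b, 1., w2d, 1e-12, 1., 10);
std::cout << "PCG stopped after "<<iter<<" iterations\n";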
- Template Parameters
- MatrixType | Any class for which a specialization of TensorTraits exists and defines a tensor_category derived from AnyMatrixTag. Furthermore, any functor/lambda type with signature void operator()( const ContainerType0&, ContainerType1&) (for an example see the sketch at the end of this page). In case of SelfMadeMatrixTag only those blas2 functions that have a corresponding member function in the Matrix class (e.g. symv( const ContainerType0&, ContainerType1&);) can be called. If a Container has the RecursiveVectorTag, then the matrix is applied to each of the elements unless the type has the SelfMadeMatrixTag or is a Functor type.
- ContainerType | Any class for which a specialization of TensorTraits exists and which fulfills the requirements of the data and execution policies defined there, derived from AnyVectorTag and AnyPolicyTag. Among others:
    dg::HVec (serial), dg::DVec (cuda / omp), dg::MHVec (mpi + serial) or dg::MDVec (mpi + cuda / omp),
    std::vector<dg::DVec> (a vector of shared device vectors), std::array<double, 4> (an array of 4 doubles) or std::map<std::string, dg::DVec> (a map of named vectors),
    double (scalar) and other primitive types ...
    If there are several ContainerTypes in the argument list, then TensorTraits must exist for all of them.
- See also
- See The dg dispatch system for a detailed explanation of our type dispatch system
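To illustrate the functor/lambda option for MatrixType, the following self-contained sketch uses lambdas for both the matrix and the preconditioner; they implement diagonal operators via dg::blas1::pointwiseDot, and the diagonal values, vector size and tolerance are arbitrary assumptions:

#include <iostream>
#include "dg/algorithm.h"

int main()
{
    // hypothetical diagonal system A = diag(2.) with exact preconditioner P = diag(0.5)
    const unsigned size = 100;
    dg::DVec diag( size, 2.), diag_inv( size, 0.5), weights( size, 1.);
    dg::DVec x( size, 0.), b( size, 1.);
    // any functor/lambda with signature void( const ContainerType0&, ContainerType1&) works as MatrixType
    auto A = [&]( const dg::DVec& y, dg::DVec& Ay){ dg::blas1::pointwiseDot( diag, y, Ay); };
    auto P = [&]( const dg::DVec& y, dg::DVec& Py){ dg::blas1::pointwiseDot( diag_inv, y, Py); };
    dg::PCG<dg::DVec> pcg( x, size);
    unsigned number = pcg.solve( A, x, b, P, weights, 1e-10);
    std::cout << "Converged in "<<number<<" iterations\n";
    return 0;
}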