Approximate inverse Chebyshev Polynomial Preconditioner \( A^{-1} = \frac{c_0}{2} I + \sum_{k=1}^{r}c_kT_k( Z)\).
template<class Matrix, class ContainerType>
struct dg::ModifiedChebyshevPreconditioner< Matrix, ContainerType >
This is the polynomial preconditioner as proposed by Dag and Semlyen, A New Preconditioned Conjugate Gradient Power Flow, IEEE Transactions on Power Systems, 18 (2003). The coefficients are \( c_k = \frac{1}{\sqrt{\lambda_{\min}\lambda_{\max}}}\left(\frac{\sqrt{\lambda_{\min}/\lambda_{\max}}-1}{\sqrt{\lambda_{\min}/\lambda_{\max}}+1}\right)^k\) and \( Z = \left(2A - (\lambda_{\max} + \lambda_{\min})I\right)/(\lambda_{\max}-\lambda_{\min})\).
The authors propose to use \( \lambda_{\min} = \lambda_{\max} / (5r)\), where \( r\) is the degree of the polynomial.
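To make the construction concrete, the following is a minimal, self-contained sketch of applying the truncated series to a vector with the three-term recurrence \( T_{k+1}(Z)x = 2ZT_k(Z)x - T_{k-1}(Z)x\). It is not the actual class interface: the names apply_A and chebyshev_inverse and the plain std::vector container are illustrative assumptions.

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Sketch (not the dg interface): compute y ~= A^{-1} x via
//   A^{-1} ~= c_0/2 I + sum_{k=1}^r c_k T_k(Z),
//   c_k = q^k / sqrt(l_min l_max),
//   q   = (sqrt(l_min/l_max) - 1)/(sqrt(l_min/l_max) + 1).
using Vec = std::vector<double>;
using MatVec = std::function<void( const Vec&, Vec&)>; // y = A x

// y = Z x with Z = (2A - (l_max + l_min) I)/(l_max - l_min)
void apply_Z( const MatVec& apply_A, double lmin, double lmax,
              const Vec& x, Vec& y)
{
    apply_A( x, y);
    for( size_t i = 0; i < x.size(); i++)
        y[i] = (2.*y[i] - (lmax + lmin)*x[i])/(lmax - lmin);
}

// y ~= A^{-1} x, series truncated after degree r
void chebyshev_inverse( const MatVec& apply_A, unsigned r, double lmax,
                        const Vec& x, Vec& y)
{
    double lmin = lmax/(5.*r); // the choice suggested by Dag and Semlyen
    double q = (std::sqrt( lmin/lmax) - 1.)/(std::sqrt( lmin/lmax) + 1.);
    double ck = 1./std::sqrt( lmin*lmax); // c_0
    Vec t0 = x, t1( x.size()), tk( x.size());
    apply_Z( apply_A, lmin, lmax, t0, t1); // t1 = T_1(Z) x = Z x
    y.assign( x.size(), 0.);
    for( size_t i = 0; i < x.size(); i++)
        y[i] = 0.5*ck*t0[i];               // c_0/2 T_0(Z) x
    for( unsigned k = 1; k <= r; k++)
    {
        ck *= q;                           // c_k = q^k c_0
        for( size_t i = 0; i < x.size(); i++)
            y[i] += ck*t1[i];              // += c_k T_k(Z) x
        if( k < r)
        {
            apply_Z( apply_A, lmin, lmax, t1, tk);
            for( size_t i = 0; i < x.size(); i++)
                tk[i] = 2.*tk[i] - t0[i];  // T_{k+1} = 2 Z T_k - T_{k-1}
            t0.swap( t1);
            t1.swap( tk);
        }
    }
}
```

Note that each application of the preconditioner costs r matrix-vector products but, as the sketch shows, no scalar products.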
- Note
    This class can be used as a Preconditioner in the CG algorithm. The CG algorithm forms an approximation to the solution of the form \( x_{k+1} = x_0 + P_k(A) r_0\), where \( P_k\) is a polynomial of degree \( k\) that is optimal in minimizing the A-norm of the error. Consequently, a polynomial preconditioner cannot decrease the number of matrix-vector multiplications needed to achieve a given accuracy. However, since polynomial preconditioners use no scalar products, they may offset the increased overhead when the dot product becomes a bottleneck for performance or scalability (see the usage sketch below).
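As a usage illustration, a hedged sketch of plugging this preconditioner into dg's conjugate gradient solver might look as follows. The constructor arguments of ModifiedChebyshevPreconditioner, the eigenvalue estimate, and the exact dg::PCG::solve signature are assumptions here; consult the dg documentation for the authoritative interface, which may differ between versions.

```cpp
#include "dg/algorithm.h"

int main()
{
    // Hedged sketch: solve A x = b for a 2d elliptic operator
    dg::Grid2d grid( 0., M_PI, 0., M_PI, 3, 64, 64);
    const dg::DVec w2d = dg::create::weights( grid);
    dg::DVec x = dg::evaluate( dg::zero, grid);
    const dg::DVec b = dg::evaluate(
        []( double X, double Y){ return 2.*std::sin(X)*std::sin(Y);}, grid);
    dg::Elliptic<dg::CartesianGrid2d, dg::DMatrix, dg::DVec> A( grid);

    unsigned r = 10;          // polynomial degree
    double lambda_max = 1e3;  // hypothetical estimate of the largest eigenvalue
    // Assumed constructor: operator, eigenvalue estimate, degree;
    // Matrix is passed as a reference type, as the docs recommend
    dg::ModifiedChebyshevPreconditioner<
        dg::Elliptic<dg::CartesianGrid2d, dg::DMatrix, dg::DVec>&, dg::DVec>
        precond( A, lambda_max, r);

    // Each CG iteration now costs r additional matrix-vector products,
    // but no additional scalar products
    dg::PCG<dg::DVec> pcg( x, 1000);
    unsigned iter = pcg.solve( A, x, b, precond, w2d, 1e-6);
    return 0;
}
```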
- Template Parameters
    Matrix        | Preferably a reference type
    ContainerType |