Conjugate gradient optimization

Instead of the previous iteration scheme, which is just some kind of quasi-Newton scheme, it is also possible to optimize the expectation value of the Hamiltonian using a succession of conjugate gradient steps. The first step is equal to the steepest descent step of the single band steepest descent scheme. In all following steps the preconditioned gradient is conjugated to the previous search direction. The resulting conjugate gradient algorithm is almost as efficient as the algorithm given in Single band steepest descent scheme. For further reading see [1][2][3].
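
As an illustration of the procedure described above, the following is a minimal sketch of a preconditioned conjugate gradient minimization of the Rayleigh quotient ⟨ψ|H|ψ⟩/⟨ψ|ψ⟩ for a single band, written in Python with NumPy. It is not the VASP implementation: the Fletcher–Reeves conjugation formula, the two-dimensional subspace line minimization, and the toy Hamiltonian H and diagonal preconditioner K are assumptions made only for this example.

<syntaxhighlight lang="python">
import numpy as np

def lowest_eigenpair_cg(H, K, psi0, maxiter=200, tol=1e-10):
    """Minimize <psi|H|psi>/<psi|psi> by preconditioned conjugate gradient steps.

    H    : Hermitian matrix (toy stand-in for the Hamiltonian)
    K    : preconditioning matrix applied to the gradient (assumed Hermitian)
    psi0 : starting vector for the band
    """
    psi = psi0 / np.linalg.norm(psi0)
    d_old = g_old = pg_old = None
    for _ in range(maxiter):
        Hpsi = H @ psi
        eps = np.vdot(psi, Hpsi).real              # expectation value <psi|H|psi>
        g = Hpsi - eps * psi                       # gradient (residual) of the Rayleigh quotient
        if np.linalg.norm(g) < tol:
            break
        pg = K @ g                                 # preconditioned gradient
        if d_old is None:
            d = -pg                                # first step: plain steepest descent
        else:
            # conjugate the preconditioned gradient to the previous search direction
            gamma = np.vdot(pg, g).real / np.vdot(pg_old, g_old).real  # Fletcher-Reeves (assumed)
            d = -pg + gamma * d_old
        g_old, pg_old, d_old = g, pg, d
        # restrict the search direction to the space orthogonal to psi
        d_perp = d - np.vdot(psi, d) * psi
        d_perp /= np.linalg.norm(d_perp)
        # exact line minimization: Rayleigh-Ritz step in the 2D subspace {psi, d_perp}
        basis = np.column_stack([psi, d_perp])
        w, v = np.linalg.eigh(basis.conj().T @ H @ basis)
        psi = basis @ v[:, 0]                      # lowest Ritz vector becomes the new psi
    return np.vdot(psi, H @ psi).real, psi

# Usage on a small random symmetric matrix with a crude diagonal preconditioner
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
H = 0.5 * (A + A.T)
K = np.diag(1.0 / (np.abs(np.diag(H)) + 1.0))
eps, psi = lowest_eigenpair_cg(H, K, rng.standard_normal(20))
print(eps, np.linalg.eigvalsh(H)[0])               # the two values should nearly coincide
</syntaxhighlight>

A full band-by-band scheme would in addition keep each band orthogonal to the other bands; that constraint is omitted here to keep the sketch short.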

References