Conjugate gradient optimization

Instead of the previous iteration scheme, which is essentially a
quasi-Newton scheme, it is also possible to optimize the
expectation value of the Hamiltonian using a successive number of
conjugate gradient steps.
The first step is equal to the steepest descent step in section {{TAG|Single band steepest descent scheme}}.
In all following steps the preconditioned gradient <math> g^N_{n} </math>
is conjugated to the previous search direction.
The resulting conjugate gradient algorithm is almost as efficient as the algorithm
given in {{TAG|Efficient single band eigenvalue-minimization}}.
For further reading see {{cite|teter:prb:1989}}{{cite|bylander:prb:1990}}{{cite|press:book:1986}}.
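
As an illustrative sketch (the exact expression is not reproduced in this section), the conjugation can be written in the standard preconditioned Fletcher–Reeves form, with <math> f_{n} </math> denoting the search direction and <math> g_{n} </math> the unpreconditioned gradient (notation introduced here for illustration only):

:<math>
f_{n} = g^N_{n} + \gamma_{n} f_{n-1}, \qquad
\gamma_{n} = \frac{\langle g_{n} | g^N_{n} \rangle}{\langle g_{n-1} | g^N_{n-1} \rangle},
</math>

with <math> \gamma_{1} = 0 </math>, so that the first step reduces to the steepest descent step and every subsequent step mixes the preconditioned gradient with the previous search direction.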
<references/>
----
[[Category:Electronic minimization]][[Category:Theory]]