To perform a line minimization from a point $X^{(k)}$ along a certain direction $D^{(k)}$, we wish to find $\lambda_{\rm opt}$, the optimum value of $\lambda$ which minimizes

\begin{displaymath}
f(\lambda) = \sum_{m=1}^{M}
\frac{(X^{(k)}_m + \lambda D^{(k)}_m)^T H (X^{(k)}_m + \lambda D^{(k)}_m)}
     {(X^{(k)}_m + \lambda D^{(k)}_m)^T S (X^{(k)}_m + \lambda D^{(k)}_m)}.
\end{displaymath}
(27)
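As a concrete illustration, the short Python sketch below evaluates this objective numerically. It assumes real symmetric matrices H and S (taking H as the numerator matrix is an inference from the surrounding discussion) and stores the vectors $X^{(k)}_m$ and $D^{(k)}_m$ as the columns of arrays X and D; the function name and argument conventions are illustrative only.

def f_of_lambda(lam, X, D, H, S):
    """Evaluate Eq. (27): a sum over the M bands of generalized Rayleigh
    quotients at the trial point X_m + lam*D_m.

    X, D : NumPy arrays of shape (n, M) whose columns are X_m^(k), D_m^(k)
    H, S : symmetric (n, n) NumPy arrays, with S positive definite
    """
    total = 0.0
    for m in range(X.shape[1]):
        y = X[:, m] + lam * D[:, m]          # point on the search line
        total += (y @ H @ y) / (y @ S @ y)   # Rayleigh quotient for band m
    return total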
This may be achieved in several ways. The first is to calculate the derivative of $f$ at $\lambda = 0$, i.e. $f'(0)$, take a trial step $\lambda_{\rm trial}$ to evaluate $f(\lambda_{\rm trial})$, and make a parabolic fit to determine $\lambda_{\rm opt}$.
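A minimal sketch of this parabolic fit, assuming the slope $f'(0)$ and a nonzero trial step are supplied by the caller, is given below; the fallback when the fitted parabola has no minimum is an illustrative choice rather than part of the method described here.

def parabolic_step(f, df0, lam_trial):
    """Parabolic-fit line search: fit g(lam) = f(0) + df0*lam + beta*lam**2
    through f(0), f'(0) and f(lam_trial), then return its minimizer.

    f         : callable returning f(lambda), e.g. f_of_lambda above
    df0       : derivative of f at lambda = 0
    lam_trial : nonzero trial step length
    """
    f0 = f(0.0)
    f_trial = f(lam_trial)
    beta = (f_trial - f0 - df0 * lam_trial) / lam_trial**2   # curvature of the fit
    if beta <= 0.0:
        return lam_trial          # fitted parabola has no minimum; keep the trial step
    return -df0 / (2.0 * beta)    # stationary point of the fitted parabola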
Alternatively, since

\begin{displaymath}
\frac{{\rm d}f}{{\rm d}\lambda} = \sum_{m=1}^{M}
\frac{2(a_m + \lambda b_m + \lambda^2 c_m)}{(1 + \lambda^2 (D_m)^T S D_m)^2},
\end{displaymath}
(28)
where, with the normalization $(X_m)^T S X_m = 1$ and the orthogonality condition $(X_m)^T S D_m = 0$ implicit in the denominator above,

\begin{displaymath}
a_m = (X_m)^T H D_m,
\end{displaymath}
(29)

\begin{displaymath}
b_m = (D_m)^T H D_m - \left[(X_m)^T H X_m\right]\left[(D_m)^T S D_m\right],
\end{displaymath}
(30)

\begin{displaymath}
c_m = -\left[(X_m)^T H D_m\right]\left[(D_m)^T S D_m\right],
\end{displaymath}
(31)

we find $\lambda_{\rm opt}$ as one of the roots of the quadratic equation
\begin{displaymath}
a + b \lambda + c \lambda^2 = 0,
\end{displaymath}
(32)
where $a = \sum_{m=1}^{M} a_m$, etc.
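A sketch of this second approach follows. It accumulates the coefficients as written above (again taking H as the numerator matrix, an assumption carried over from Eq. 27) and selects the root giving the lower value of the objective; it reuses f_of_lambda from the earlier sketch, and the helper name lambda_opt_quadratic is illustrative.

import numpy as np

def lambda_opt_quadratic(X, D, H, S):
    """Accumulate a, b, c of Eq. (32) from the per-band coefficients of
    Eqs. (29)-(31) and return the root of a + b*lam + c*lam**2 = 0 that
    minimizes f(lambda).

    Assumes the columns of X are S-normalized and S-orthogonal to the
    corresponding columns of D, as implied by the denominator of Eq. (28).
    """
    a = b = c = 0.0
    for m in range(X.shape[1]):
        x, d = X[:, m], D[:, m]
        xHd = x @ H @ d
        dSd = d @ S @ d
        a += xHd
        b += d @ H @ d - (x @ H @ x) * dSd
        c += -xHd * dSd
    roots = np.roots([c, b, a])              # coefficients from highest power down
    roots = roots[np.isreal(roots)].real     # keep real roots only
    # Pick the root at which the objective of Eq. (27) is smallest.
    return min(roots, key=lambda lam: f_of_lambda(lam, X, D, H, S))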
Peter Haynes