Next: 7.7 Practical details Up: 7. Computational implementation Previous: 7.5 Occupation number preconditioning   Contents


7.6 Tensor properties of the gradients

We have already noted the importance of respecting the tensor properties of quantities when non-orthogonal functions are involved. In particular, the gradient of the scalar functional with respect to the contravariant density-kernel is a covariant quantity which should not be added directly to the contravariant density-kernel, but must first be converted into contravariant form using the metric tensor $S^{\alpha \beta} = S_{\alpha \beta}^{-1}$. Thus the correct search direction for the density-kernel variation is ${\mit\Lambda}^{\alpha \beta}$, given by

$\displaystyle {\mit\Lambda}^{\alpha \beta}$ $\textstyle =$ $\displaystyle S^{\alpha i} \frac{\partial Q[\rho;\alpha]}{\partial K^{ji}} S^{j \beta}$
  $\textstyle =$ $\displaystyle 2 S^{-1}_{\alpha i} \left[ H + \alpha SKS(1-KS)(1-2KS) \right]_{ij} S^{-1}_{j \beta}$
  $\textstyle =$ $\displaystyle 2 \left(S^{-1} H S^{-1}\right)^{\alpha \beta} + 2 \alpha \left[K(1-SK)(1-2SK)\right]^{\alpha \beta} .$ (7.68)
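The simplification of the penalty term in equation 7.68 can be checked numerically. The following is a minimal NumPy sketch with hypothetical random matrices (not part of the original implementation): it raises both indices of the covariant gradient with $S^{-1}$ and confirms that this reproduces the simplified final line, in which the penalty term carries no inverse overlap factors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Hypothetical small example: S is a symmetric positive-definite overlap
# matrix, H a symmetric Hamiltonian matrix, K a symmetric density kernel.
A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)          # positive definite, hence invertible
H = rng.standard_normal((n, n)); H = 0.5 * (H + H.T)
K = rng.standard_normal((n, n)); K = 0.5 * (K + K.T)
alpha = 0.3
I = np.eye(n)
Sinv = np.linalg.inv(S)

# Covariant gradient of Q with respect to the contravariant kernel:
# 2 [ H + alpha S K S (1 - K S)(1 - 2 K S) ]
grad_cov = 2 * (H + alpha * S @ K @ S @ (I - K @ S) @ (I - 2 * K @ S))

# Raise both indices with the metric S^{-1} to get the search direction
Lam = Sinv @ grad_cov @ Sinv

# Simplified form: the penalty term has lost both inverse overlap factors
Lam_simplified = (2 * Sinv @ H @ Sinv
                  + 2 * alpha * K @ (I - S @ K) @ (I - 2 * S @ K))

print(np.allclose(Lam, Lam_simplified))  # True
```

The identity holds because $S^{-1}\,SKS(1-KS)(1-2KS)\,S^{-1} = K(1-SK)(1-2SK)$, obtained by commuting each bracket past the overlap matrix.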

While the penalty-functional derivative is simplified, the energy derivative picks up two factors of the inverse overlap matrix which, as in the case of occupation number preconditioning, make this difficult to implement. Neglecting the conversion of the covariant gradient to its contravariant form corresponds to approximating the overlap matrix by the identity: using the covariant gradient directly amounts to retaining only the first term in the series expansion of the overlap matrix inverse in equation 7.66. Again, neglect of this correction may lead to a deterioration in the efficiency of the minimisation procedure as the system size increases.

We now consider the gradient of the functional with respect to the covariant support functions, which is a contravariant first-rank tensor quantity, whereas the density-kernel gradient is a second-rank tensor. The correct covariant search direction is thus $\delta \phi_{\alpha} ({\bf r})$, given by

$\displaystyle \delta \phi_{\alpha} ({\bf r})$ $\textstyle =$ $\displaystyle S_{\alpha \beta} \frac{\delta Q[\rho;\alpha]}{\delta \phi_{\beta}({\bf r})}$
  $\textstyle =$ $\displaystyle 4 S_{\alpha \beta} \left[ K^{\beta \gamma} {\hat H} + \alpha \left[KSK(1-SK)(1-2SK)\right]^{\beta \gamma} \right] \phi_{\gamma}({\bf r})$
  $\textstyle =$ $\displaystyle 4 (SK)_{\alpha}^{~\beta} \left\{ {\hat H} \delta_{\beta}^{~\gamma} + \alpha \left[SK(1-SK)(1-2SK)\right]_{\beta}^{~\gamma} \right\} \phi_{\gamma}({\bf r}) .$ (7.69)
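The index manipulation between the last two lines of equation 7.69 can also be checked numerically. In this hypothetical sketch the support functions are rows of an array sampled on a real-space grid and the Hamiltonian is an arbitrary symmetric grid operator; none of the names below come from the original implementation.

```python
import numpy as np

# Hypothetical grid-based sketch: rows of `phi` are support functions
# sampled on a real-space grid, `Hop` is a symmetric grid representation
# of the Hamiltonian operator acting on them.
rng = np.random.default_rng(2)
n, g = 3, 50
phi = rng.standard_normal((n, g))
Hop = rng.standard_normal((g, g)); Hop = 0.5 * (Hop + Hop.T)
K = rng.standard_normal((n, n)); K = 0.5 * (K + K.T)
alpha = 0.3

S = phi @ phi.T                       # overlap matrix S_{alpha beta}
I = np.eye(n)
M = S @ K @ (I - S @ K) @ (I - 2 * S @ K)   # SK(1-SK)(1-2SK)

# Middle line of (7.69): 4 S [ K (H phi) + alpha KSK(1-SK)(1-2SK) phi ]
grad1 = 4 * (S @ K @ (phi @ Hop)
             + alpha * S @ (K @ S @ K @ (I - S @ K) @ (I - 2 * S @ K)) @ phi)

# Last line of (7.69): 4 (SK) { (H phi) + alpha SK(1-SK)(1-2SK) phi }
grad2 = 4 * (S @ K) @ ((phi @ Hop) + alpha * M @ phi)

print(np.allclose(grad1, grad2))  # True
```

The two forms agree because $S_{\alpha\beta}\left[KSK(1-SK)(1-2SK)\right]^{\beta\gamma}$ factorises as $(SK)_{\alpha}^{~\beta}\left[SK(1-SK)(1-2SK)\right]_{\beta}^{~\gamma}$.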

The covariant preconditioned gradient in particular turns out to be
\begin{displaymath}
\delta \phi_{\alpha} ({\bf r}) = 4 \left\{ {\hat H} \delta_{\alpha}^{~\beta}
+ \alpha \left[SK(1-SK)(1-2SK)\right]_{\alpha}^{~\beta} \right\}
\phi_{\beta}({\bf r})
\end{displaymath} (7.70)

so that the factor of the inverse overlap matrix is now eliminated.
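Comparing equations 7.69 and 7.70, the preconditioned gradient differs from the tensor-corrected one only by removal of the $(SK)$ prefactor. Assuming, as that comparison suggests (the details of section 7.5 are not reproduced here), that the preconditioner acts as left-multiplication by $(SK)^{-1}$, the cancellation can be sketched as follows with hypothetical matrices:

```python
import numpy as np

# Hypothetical sketch: the preconditioner is ASSUMED here to act as
# left-multiplication by (SK)^{-1}, inferred from comparing (7.69), (7.70).
rng = np.random.default_rng(3)
n, g = 3, 40
phi = rng.standard_normal((n, g))     # support functions on a grid (rows)
Hop = rng.standard_normal((g, g)); Hop = 0.5 * (Hop + Hop.T)
K = rng.standard_normal((n, n)); K = 0.5 * (K + K.T) + n * np.eye(n)  # invertible
alpha = 0.3

S = phi @ phi.T
I = np.eye(n)
M = S @ K @ (I - S @ K) @ (I - 2 * S @ K)    # SK(1-SK)(1-2SK)

# Covariant gradient (7.69) and preconditioned covariant gradient (7.70)
grad_cov = 4 * (S @ K) @ (phi @ Hop + alpha * M @ phi)
grad_pre = 4 * (phi @ Hop + alpha * M @ phi)

# grad_pre contains no inverse overlap matrix; applying the assumed
# preconditioner (SK)^{-1} to grad_cov recovers it exactly.
recovered = np.linalg.inv(S @ K) @ grad_cov
print(np.allclose(recovered, grad_pre))  # True
```

The practical point is that `grad_pre` is assembled from $S$, $K$ and ${\hat H}\phi$ alone, so no overlap inversion is needed in the preconditioned support-function update.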
Peter Haynes