The preceding sections have outlined a series of approximations for the evaluation of the terms appearing in the eigenvalue problem represented by the Kohn-Sham equations. We now discuss the solution of these equations to find the ground state energy and corresponding Kohn-Sham orbitals.
The original methods relied on a self-consistent iteration to find the ground state. In this approach a trial charge density is used to construct the effective potential. The resulting effective Hamiltonian is then diagonalised to find its eigenstates, and from these a new charge density is constructed. This is then used as the input to the next iteration of the procedure, and the loop is repeated until the output charge density is consistent with that of the previous iteration. This procedure is very inefficient, as the cost of each iteration is dominated by the diagonalisation of the effective Hamiltonian, which is ${\cal O}(M^3)$, where $M$ is the number of basis functions. This diagonalisation yields $M$ eigenstates, but the calculation of the total energy only requires the lowest $N$ of these states. As $N$ is usually significantly smaller than $M$, this represents a gross inefficiency.
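A minimal sketch of this self-consistent loop is given below, assuming a hypothetical build_hamiltonian routine that constructs the $M\times M$ effective Hamiltonian from the current charge density; its only purpose is to make explicit that every cycle performs a full ${\cal O}(M^3)$ diagonalisation even though only the lowest $N$ eigenstates are retained.

\begin{verbatim}
import numpy as np

def scf_loop(build_hamiltonian, rho_initial, n_occ, tol=1e-8, max_cycles=100):
    """Schematic self-consistent iteration (all names are illustrative).

    build_hamiltonian(rho) -> (M, M) effective Hamiltonian for density rho
    n_occ                  -> number N of occupied (lowest) eigenstates
    """
    rho = rho_initial
    for _ in range(max_cycles):
        h_eff = build_hamiltonian(rho)            # effective Hamiltonian from trial density
        eigvals, eigvecs = np.linalg.eigh(h_eff)  # full diagonalisation: O(M^3)
        occupied = eigvecs[:, :n_occ]             # only the lowest N states are needed
        rho_new = np.sum(np.abs(occupied)**2, axis=1)  # new density from occupied states
        if np.linalg.norm(rho_new - rho) < tol:   # output density consistent with input
            return eigvals[:n_occ], occupied
        rho = rho_new                             # feed the output density back in
    raise RuntimeError("self-consistent loop failed to converge")
\end{verbatim}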
The approach we use is that of direct minimisation of the energy functional. This was first proposed by Car and Parrinello [31], who used a fictitious electron dynamics to perform the minimisation. In contrast, we use a preconditioned conjugate gradient technique [5, 32]. This is an iterative technique which successively `improves' a set of trial wavefunctions by evaluating, at each iteration, a step that will reduce the total energy. This involves evaluating the functional derivative of the energy with respect to each of the wavefunctions $\psi_i$. However, the required derivative is simply $\hat{H}\psi_i$. As the kinetic energy is most efficiently calculated in Fourier space and the potential energy in real space, the cost of evaluating this derivative is dominated by transforming between these representations. The use of fast Fourier transforms [33] implies that the cost of this is ${\cal O}(NM\ln M)$.
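The evaluation of $\hat{H}\psi_i$ may be illustrated by the following sketch, which applies a Hamiltonian consisting of the kinetic operator and a local potential to a single wavefunction sampled on a one-dimensional periodic grid; the one-dimensional cell, atomic units and purely local potential are simplifying assumptions made for brevity.

\begin{verbatim}
import numpy as np

def apply_hamiltonian(psi, v_local, cell_length):
    """Apply H = -(1/2) d^2/dx^2 + V(x) to psi on a periodic grid of M points.

    The kinetic term is diagonal in Fourier space and the local potential is
    diagonal in real space, so the cost is dominated by the two FFTs.
    """
    m = psi.size
    g = 2.0 * np.pi * np.fft.fftfreq(m, d=cell_length / m)  # reciprocal-space grid

    psi_g = np.fft.fft(psi)                    # to Fourier space: O(M ln M)
    kinetic = np.fft.ifft(0.5 * g**2 * psi_g)  # (1/2)|G|^2 psi, back to real space

    return kinetic + v_local * psi             # local potential: pointwise product
\end{verbatim}

Repeating this evaluation for all $N$ wavefunctions gives the ${\cal O}(NM\ln M)$ cost quoted above.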
However, the minimisation must be carried out subject to the constraint that the wavefunctions $\psi_i$ remain orthogonal. Thus, the asymptotic cost of this procedure is dominated by the cost of orthogonalisation, which is ${\cal O}(N^2M)$ using the Gram-Schmidt technique [34]. The smaller prefactor associated with this term means that it only dominates the cost for large system sizes. Nonetheless, it may be seen that this approach offers a significant efficiency saving over self-consistent iterative matrix diagonalisation.
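For completeness, a minimal sketch of the Gram-Schmidt orthogonalisation step is given below, assuming the $N$ trial wavefunctions are stored as the rows of an $N\times M$ array of expansion coefficients; the nested loop over pairs of states, each requiring an $M$-length inner product, makes the ${\cal O}(N^2M)$ scaling explicit.

\begin{verbatim}
import numpy as np

def gram_schmidt(wavefunctions):
    """Orthonormalise N wavefunctions, each expanded in M basis functions.

    wavefunctions : complex array of shape (N, M); an orthonormal set is returned.
    """
    psi = np.array(wavefunctions, dtype=complex)
    n_states = psi.shape[0]
    for i in range(n_states):
        for j in range(i):
            # Project out the component along each previously orthonormalised state.
            psi[i] -= np.vdot(psi[j], psi[i]) * psi[j]
        psi[i] /= np.linalg.norm(psi[i])  # normalise the remainder
    return psi
\end{verbatim}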