Feb 2014

Anyons

After Tuesday’s class a number of people asked about the possibility (mentioned in the handout and I think the Advanced Quantum course) of identical particles that are neither fermions nor bosons: the so-called anyons. I could paraphrase their (perfectly reasonable) questions as follows: if such things are possible, the argument you gave that fermions and bosons are the only possibility must be rubbish. And indeed it is, but only in two spatial dimensions, as I’ll try and show in this post.

I’ll emphasise two things before getting going:

  • You (obviously) don’t need to know this. It’s just fun to think (and write) about.
  • Because particles can always move in (at least!) three dimensions, you may very well ask why we bother with this exotic possibility in two dimensions (space may have more dimensions, but it definitely doesn’t have fewer!). The reason is that, while this option is ruled out for elementary particles, it is very much still on the table for collective excitations or quasiparticles of systems confined to two dimensions. The quasiparticle concept is a difficult one, but the basic idea is that at low energies, it is often possible to describe an interacting system of many particles in terms of certain other excitations that are weakly interacting. Simplest example: oscillations of a crystal lattice can be described in terms of phonons. Even though the ions comprising the lattice may be interacting very strongly with each other, small deformations lead to lattice vibrations with negligible anharmonicity (this is where low energy comes in: higher energies give bigger deformations and anharmonic effects). A phonon can’t exist without a lattice, so a two dimensional lattice has intrinsically two dimensional phonons. Phonons, like photons, are bosons, but there’s no a priori reason why other kinds of collective excitations in two dimensions can’t be anyons. Remarkably enough, this actually happens in two dimensional electron gases in strong perpendicular magnetic fields, and is part of the wacky collection of phenomena associated with the Fractional Quantum Hall effect.

Let’s start in 2D, not only because it is the case of interest for this discussion but also because it is easy to picture. Let’s suppose that we have two particles moving in the plane, and further that we are dealing with a translationally invariant system, so that the two particle wavefunction depends only on the separation of the particles
$$\Psi(\mathbf{r}_1,\mathbf{r}_2)=\psi(\mathbf{r}_1-\mathbf{r}_2).$$
Now, it should be clear that exchanging the two particles corresponds to \(\psi(\mathbf{r})\to \psi(-\mathbf{r})\), and the condition that the probabilities are unaffected by this change leads to
$$|\psi(\mathbf{r})|^2=|\psi(-\mathbf{r})|^2,$$
or
\begin{equation}
\psi(\mathbf{r})=\exp(i\alpha(\mathbf{r}))\psi(-\mathbf{r}),
\label{eq:alpha}
\end{equation}
where the phase \(\alpha(\mathbf{r})\) can in general depend upon position. By switching again we get the condition
\begin{equation}
\alpha(\mathbf{r})+\alpha(-\mathbf{r})=2\pi n,
\label{eq:sum}
\end{equation}
for integer \(n\).
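
(Spelling out the switching-again step: applying \eqref{eq:alpha} twice gives
$$\psi(\mathbf{r})=\exp(i\alpha(\mathbf{r}))\psi(-\mathbf{r})=\exp(i[\alpha(\mathbf{r})+\alpha(-\mathbf{r})])\psi(\mathbf{r}),$$
so the total phase must be a multiple of \(2\pi\).)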

The simplest possibility is then constant \(\alpha=\pi n\), with even and odd integers corresponding to symmetric and antisymmetric wavefunctions respectively. Should we consider the more general case where \(\alpha\) can vary? Let’s suppose we have such an \(\alpha\), which still satisfies \eqref{eq:sum}. Then it’s not hard to see that the following
gauge transformation (see AQP Section 6.3)
\begin{equation}
\psi(\mathbf{r})\longrightarrow \exp(-i[\alpha(\mathbf{r})-\alpha(-\mathbf{r})]/4)\psi(\mathbf{r}),
\label{eq:gauge}
\end{equation}
removes the antisymmetric part of \(\alpha(\mathbf{r})\) from the exchange phase, changing the condition \eqref{eq:alpha} to
\begin{equation}
\psi(\mathbf{r})=\exp(i[\alpha(\mathbf{r})+\alpha(-\mathbf{r})]/2)\psi(-\mathbf{r})=\exp(i\pi n)\psi(-\mathbf{r}).
\label{eq:alphanew}
\end{equation}
This is what we had before, when \(\alpha\) was constant. The effect of this transformation is to introduce a vector potential into the Schrödinger equation of the form
\begin{equation}
\frac{q}{\hbar}\mathbf{A}=-\frac{1}{4}\nabla [\alpha(\mathbf{r})-\alpha(-\mathbf{r})].
\end{equation}
The important thing is that this vector potential has no flux associated with it, and thus there are no observable consequences to the gauge transformation: constant \(\alpha\) was good enough.
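
If you want to see the absence of flux explicitly, here is a minimal numerical sketch (using sympy, with an invented smooth single-valued \(\alpha(x,y)\) chosen purely for illustration) that builds the gauge function appearing in the exponent of \eqref{eq:gauge} and checks that the curl of the resulting vector potential vanishes identically:

import sympy as sp

x, y = sp.symbols('x y', real=True)

def alpha_f(xx, yy):
    # an invented smooth, single-valued alpha(x, y); any such choice would do
    return xx * yy / (1 + xx ** 2 + yy ** 2)

# gauge function in the exponent of the transformation above
beta = -(alpha_f(x, y) - alpha_f(-x, -y)) / 4

# (q/hbar) A is the gradient of beta ...
Ax, Ay = sp.diff(beta, x), sp.diff(beta, y)

# ... so its curl, the flux density through the plane, vanishes identically
print(sp.simplify(sp.diff(Ay, x) - sp.diff(Ax, y)))    # -> 0

Any smooth single-valued choice of \(\alpha\) gives the same answer, which is the point.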

I want to emphasise that the absence of any flux arises because the vector potential is the gradient of a
single-valued function. Contrast this with the \(z\)-component of the curl of the gradient of the angle field \(\theta(\mathbf{r})=\arctan (y/x)\),
$$(\nabla\times\nabla\theta)_z=2\pi\,\delta(x)\delta(y),$$
as one may infer from Stokes’ theorem (I wonder if you have seen this before? Perhaps someone would be good enough to tell me). This is the physics at the heart of the
Aharonov--Bohm effect.
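
If you haven’t met this before, here is a quick numerical check of the Stokes’ theorem statement (a sketch using numpy; the loop centres and radii are arbitrary choices): the circulation of \(\nabla\theta\) is \(2\pi\) around any loop enclosing the origin and zero around one that doesn’t.

import numpy as np

def circulation(cx, cy, radius, n=100000):
    # line integral of grad(theta) around the circle (x, y) = (cx + R cos t, cy + R sin t)
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x = cx + radius * np.cos(t)
    y = cy + radius * np.sin(t)
    r2 = x ** 2 + y ** 2
    gx, gy = -y / r2, x / r2                 # grad(arctan(y/x)) = (-y, x)/r^2
    dxdt = -radius * np.sin(t)
    dydt = radius * np.cos(t)
    return np.sum(gx * dxdt + gy * dydt) * (2.0 * np.pi / n)

print(circulation(0.0, 0.0, 1.0))    # encloses the origin: ~ 2*pi
print(circulation(3.0, 0.0, 1.0))    # misses the origin:   ~ 0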

So the matter would seem to be settled. The argument I gave in the lectures goes through, and the possible position dependence of \(\alpha\) is a red herring with no consequences. Nothing to see here.

Except there is. The whole thing rests on \eqref{eq:sum}, which in turn rests on the idea that the wavefunction itself is single-valued. The fundamental realisation of the Norwegian physicists Leinaas and Myrheim in 1977 was that in two dimensions there is no reason that it has to be. If the wavefunction changes by a phase as we go around the origin of \(\mathbf{r}\) -- which corresponds to exchange of particles -- the quantum mechanical prediction for probabilities is unchanged. Thus we could have a constant \(\alpha\) different from \(0\) or \(\pi\) (the argument that we don’t need to consider spatially varying \(\alpha\) still holds), and correspondingly a continuum of statistics intermediate between bosonic and fermionic.
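
To make the multi-valuedness explicit: with a constant exchange phase \(\alpha=\theta\), exchanging twice -- letting \(\mathbf{r}\) wind once around the origin -- gives
$$\psi(\mathbf{r})\to e^{i\theta}\psi(-\mathbf{r})\to e^{2i\theta}\psi(\mathbf{r}),$$
so the wavefunction comes back multiplied by \(e^{2i\theta}\), which equals one only for \(\theta=0\) (bosons) or \(\theta=\pi\) (fermions). Any other \(\theta\) forces the wavefunction to be multiply valued.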

Let me try and answer some possible objections to this:

  • The wavefunction is the wavefunction: it has to have a definite value at each point in space. Not at all: the probability density depends only on the square modulus, so if there are different values for the phase, there is no consequence.
  • This means the wavefunction is discontinuous. That would be really bad any time I need to take a derivative to find the momentum, energy, probability current, etc. No! The wavefunction is not discontinuous. It is nice and smooth (ignoring any spiky potentials) in the vicinity of any point. You could Taylor expand around any point and get a nice series with a finite radius of convergence.
  • Then how can it be multiply valued? The idea is exactly the same as for analytic continuation in the theory of complex variables. Expanding around any point is fine. Then you pick a point within the domain in which your series converges, and you expand anew around that. Keep shuffling along, creating overlapping domains in which the series converge. What can happen is that, if you work your way around a singularity, you come back to a different value of the function. The standard example is the logarithm function: continue it once around the origin and it returns shifted by \(2\pi i\) (there is a small numerical illustration of exactly this just after this list). In the theory of complex variables, this gives rise to the idea of a Riemann surface, on which the function is single valued.
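
Here is the promised numerical illustration: follow \(\log z\) continuously once around the origin (a sketch using numpy’s phase unwrapping) and it comes back shifted by \(2\pi i\).

import numpy as np

# walk once anticlockwise around the unit circle, starting and ending at z = 1
t = np.linspace(0.0, 2.0 * np.pi, 1001)
z = np.exp(1j * t)

# np.unwrap keeps the phase continuous instead of snapping back into (-pi, pi]
phase = np.unwrap(np.angle(z))
log_z = np.log(np.abs(z)) + 1j * phase

print(log_z[0])     # value at the start:      0
print(log_z[-1])    # value back at z = 1: ~ 2*pi*i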

The final question is: why doesn’t this work in three dimensions? First note that it is the behaviour of the wavefunction as we approach the origin, where the particles are sitting on top of each other, that is crucial. If we could expand the wavefunction starting at the origin, we’d get bosons or fermions depending on whether we had only even or only odd terms. The more exotic possibilities require a singularity at the origin. This indicates that these possibilities are connected with the
topology of the space we obtain by cutting the origin out. In two dimensions, the resulting space is one where a path that loops the origin (as when we analytically continue in the above thought experiment) is topologically different from one that doesn’t. In three dimensions, any loop that avoids the origin can be shrunk continuously to a point without ever crossing it, so there is no such distinction, and we are stuck with bosons and fermions.
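
For those who have met a little topology (entirely optional), this is the statement
$$\pi_1\big(\mathbb{R}^2\setminus\{0\}\big)=\mathbb{Z},\qquad \pi_1\big(\mathbb{R}^3\setminus\{0\}\big)=\{e\}:$$
loops in the punctured plane are classified by an integer winding number around the origin, while in punctured three-dimensional space every loop can be contracted to a point.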

Partition function for N identical particles

The purpose of this post is to provide some of the details that go into the derivation of the partition function for a system of N identical particles given in Eq. (5.46) in the handout. Although I gave a more detailed derivation in Thursday’s lectures, there are perhaps two points worth emphasising:

  • The many particle states $$|\mathbf{r}_{1},\ldots,\mathbf{r}_{N}\rangle_{S/A} = \frac{1}{\sqrt{N!}}\sum_{P} \eta_{S/A}(P) |\mathbf{r}_{P1},\ldots,\mathbf{r}_{PN}\rangle, $$ only depend on the set of points \(\{\mathbf{r}_i\}\) and not on their labelling. Thus, for two bosons $$|\mathbf{r}_1,\mathbf{r}_2\rangle_S=\left(|\mathbf{r}_1,\mathbf{r}_2\rangle+|\mathbf{r}_2,\mathbf{r}_1\rangle\right)/\sqrt{2}=|\mathbf{r}_2,\mathbf{r}_1\rangle_S.$$ This means that when we take a trace over these N particle states, we have to include a factor \(1/N!\) in order to avoid overcounting. For example $$\text{tr}\,\rho_2 = \frac{1}{2}\int d\mathbf{r}_1 d\mathbf{r}_2\,\rho_2(\mathbf{r}_1,\mathbf{r}_2),$$ where $$\rho_2(\mathbf{r}_1,\mathbf{r}_2)={}_S\langle\mathbf{r}_1,\mathbf{r}_2|\exp(-\beta H_2)|\mathbf{r}_1,\mathbf{r}_2\rangle_S$$ is the symmetric two-particle density matrix at thermal equilibrium, and $$H_2 = -\frac{\hbar^2}{2m}\left[\nabla_{\mathbf{r}_1}^2+\nabla_{\mathbf{r}_2}^2\right]$$ is the two-particle free Hamiltonian.
  • After substituting the expression for the single particle density matrix, this gives rise to the expression for the partition function involving a double sum over permutations, one for the bra and one for the ket, with an additional \(1/N!\) for overcounting $$Z_{N} = \frac{1}{(N!)^2\lambda_{\text{dB}}^{3N}} \sum_{P,Q}\int d\mathbf{r}_{1}\cdots d\mathbf{r}_{N}\,\eta_{S/A}(P)\eta_{S/A}(Q)\exp\left(-\frac{\pi}{\lambda_{\text{dB}}^{2}}\sum_{j=1}^{N}|\mathbf{r}_{Pj}-\mathbf{r}_{Qj}|^{2}\right).$$ Now, we can do one of the sums (that labelled by Q, for example) by observing that, for each fixed permutation Q, the sum over permutations P contains exactly the same terms as it does for Q equal to the identity, just in a permuted order; the sum over Q therefore just gives \(N!\) copies of the same sum. This gives the result quoted in Eq. (5.46) $$Z_{N} = \frac{1}{N!\lambda_{\text{dB}}^{3N}} \sum_{P}\int d\mathbf{r}_{1}\cdots d\mathbf{r}_{N}\,\eta_{S/A}(P)\exp\left(-\frac{\pi}{\lambda_{\text{dB}}^{2}}\sum_{j=1}^{N}|\mathbf{r}_{j}-\mathbf{r}_{Pj}|^{2}\right).$$ The signs are unaffected by this relabelling because the product \(\eta_{S/A}(P)\eta_{S/A}(Q)\) depends only on the relative permutation (the transpositions needed to get from Q to P). A small numerical check of this collapse is sketched below.
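
To make the collapse of the double sum concrete, here is a small numerical check (a sketch only: N=3, random positions, and \(\lambda_{\text{dB}}=1\) are arbitrary choices). It verifies, separately for bosons and fermions, that the double sum over P and Q equals \(N!\) times the single sum with Q set to the identity; in fact the identity holds integrand by integrand, before any integration is performed.

import itertools
import math
import numpy as np

rng = np.random.default_rng(0)

N = 3
lam = 1.0                      # thermal de Broglie wavelength, set to 1 for the check
r = rng.normal(size=(N, 3))    # N random positions in three dimensions

def eta(perm, statistics):
    # +1 for bosons; the sign (parity) of the permutation for fermions
    if statistics == 'bosons':
        return 1.0
    inversions = sum(1 for i in range(N) for j in range(i + 1, N) if perm[i] > perm[j])
    return (-1.0) ** inversions

def weight(P, Q):
    # exp(-(pi/lambda^2) * sum_j |r_{Pj} - r_{Qj}|^2)
    d2 = sum(np.sum((r[P[j]] - r[Q[j]]) ** 2) for j in range(N))
    return math.exp(-math.pi * d2 / lam ** 2)

perms = list(itertools.permutations(range(N)))
identity = tuple(range(N))

for statistics in ('bosons', 'fermions'):
    double_sum = sum(eta(P, statistics) * eta(Q, statistics) * weight(P, Q)
                     for P in perms for Q in perms)
    single_sum = sum(eta(P, statistics) * weight(P, identity) for P in perms)
    print(statistics, double_sum, math.factorial(N) * single_sum)   # the two numbers agree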

Solutions to problems

Worked solutions to the problems in the handout are now available here. Please let Mike Payne know of any errors.