On one hand, the Hall effect consists of a voltage that arises when a current flows through a material in the presence of a perpendicular magnetic field. The Lorentz force deflects the charge carriers (electrons or holes) from the path they would follow if no magnetic field were present, and this deflection builds up a measurable transverse voltage. From this voltage one can extract the Hall coefficient $R_H$, whose sign indicates the type of charge carrier (positive for holes, negative for electrons).
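(For concreteness, in a standard Hall-bar geometry the coefficient follows directly from the measured quantities via $R_H = V_H t / (I B)$, with $t$ the sample thickness along the field. Below is a minimal sketch of that conversion; the numbers are illustrative only, roughly what a thin copper strip would give.)

```python
# Minimal sketch: extracting R_H from a Hall-bar measurement, R_H = V_H * t / (I * B).
# The numbers below are illustrative only (roughly what a thin copper strip would give).
V_H = -2.7e-8   # measured Hall voltage (V); its sign is the physically meaningful part
t   = 1.0e-4    # sample thickness along the magnetic field (m)
I   = 0.1       # drive current (A)
B   = 0.5       # perpendicular magnetic field (T)

R_H = V_H * t / (I * B)
carrier = "electrons (R_H < 0)" if R_H < 0 else "holes (R_H > 0)"
print(f"R_H = {R_H:.2e} m^3/C  ->  dominant carriers: {carrier}")
```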
On the other hand, the Seebeck effect takes place in a material(*) subjected to a temperature gradient (e.g. with its ends held at different temperatures). Here too a voltage builds up, because the temperature gradient acts like a force and redistributes the charge carriers. From that voltage one can extract the Seebeck coefficient $S$, whose sign likewise indicates the type of charge carrier. As Wikipedia points out,
the Seebeck coefficient is negative for negatively charged carriers (such as electrons), and positive for positively charged carriers (such as electron holes).
However, there are several metals for which the Seebeck coefficient and Hall coefficient signs do not match. An example is copper near room temperature, which has $R_H <0$ and $S>0$ (omitting the units).
There is therefore an apparent contradiction. Is either of these two coefficients the one that really determines the type of charge carrier? Why isn't the other one always in agreement? When can we expect the sign to fail to indicate the carrier type, and how do we resolve the contradiction?
(*) For some reason, there is a widespread misconception that the Seebeck effect requires two materials to take place. This is wrong. With two materials it is possible to measure a non-zero voltage with a voltmeter (the measured quantity is then $S_A-S_B$) or to drive a current that powers some device, but two materials are not required for the Seebeck effect itself. There exist unconventional ways to measure a voltage that do not involve a voltmeter (think of optical methods), and the absolute Seebeck coefficient of a single material has been measurable, without any closed circuit or second material, for several decades.
Answer
First, let's look at the Drude theory for metals as applied to the Hall effect and the Seebeck effect. Recall that the Drude model is an attempt to apply classical statistical mechanics to an electron gas; Ashcroft and Mermin, Solid State Physics, cover this in Chapter 1. At the heart of the Drude model is a relaxation time $\tau$, essentially the average time between collisions experienced by an electron. The collisions (1) randomize the electron's velocity vector and (2) are the mechanism by which the electrons come to thermal equilibrium with their surroundings.
Now, when applied to a simple model for the Hall effect, one comes up with
$R_{H} = - {1 \over nec}$
with $R_{H}$ the Hall coefficient, $n$ the density of conduction electrons, $e$ the charge of an electron, and $c$ the speed of light (Gaussian units). It is also independent of the relaxation time $\tau$, which is a little surprising. For many materials in low magnetic fields this all works out pretty well (more later though!), although it doesn't account for a variety of things that should enter through $\tau$.
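As a sanity check, here is the same formula evaluated numerically in SI units (where the factor of $c$ drops out and $R_H = -1/ne$), using an assumed textbook conduction-electron density for copper of one electron per atom; it comes out to a few times $10^{-11}\ \mathrm{m^3/C}$, the right order of magnitude for the low-field measurement.

```python
# Free-electron (Drude) Hall coefficient in SI units: R_H = -1/(n e).
# n below is an assumed textbook value for copper's conduction-electron
# density (one electron per atom), used only for illustration.
e = 1.602e-19       # elementary charge (C)
n_Cu = 8.5e28       # assumed conduction-electron density of Cu (m^-3)

R_H = -1.0 / (n_Cu * e)
print(f"Free-electron R_H for Cu ~ {R_H:.2e} m^3/C (negative -> electron-like)")
```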
Next, one can apply the Drude model to thermal conductivity. When Drude did it, he managed to make several errors that cancelled each other out, and then another error of $1/2$ that managed to get the ratio of thermal to electrical conductivity in his model right in line with measurements on a variety of metals. We should all be so lucky.
Continuing onward, one proceeds to apply the Drude model to the thermopower $Q$. By requiring that the drift driven by the temperature gradient be balanced by the drift driven by the electric field (so that no net current flows), one gets
$Q = -{c_{v} \over 3ne}$
and substituting the classical electronic heat capacity $c_{v} = {3 \over 2} n k_{B}$ one arrives at $Q = -{k_{B} \over 2e}$, an estimate that is about 100 times too big. Oops. The Drude model just doesn't work for the Seebeck coefficient; one has to account for the band structure.
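To make the failure concrete, here is that estimate evaluated numerically; note that the carrier density cancels, so the classical Drude model predicts the same thermopower for every metal (a minimal sketch using only $k_B$ and $e$).

```python
# Drude thermopower estimate: Q = -c_v/(3 n e) with c_v = (3/2) n k_B,
# so Q = -k_B/(2 e), independent of the material.
k_B = 1.381e-23     # Boltzmann constant (J/K)
e   = 1.602e-19     # elementary charge (C)

Q_drude = -k_B / (2.0 * e)
print(f"Drude thermopower: {Q_drude * 1e6:.0f} uV/K")
# Prints about -43 uV/K; room-temperature metallic thermopowers are typically
# of order 1 uV/K, i.e. roughly 100 times smaller.
```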
Moving to a semiclassical model of electrons one can again get to a fairly simple thermopower expression. But:
If the energy dependence of the relaxation time is unimportant, then the sign of the thermopower is determined by the sign of the effective mass, averaged over the Fermi surface, i.e., by whether the carriers are electrons or holes...
However, the thermopower is not a very valuable probe of the fundamental electronic properties of a metal; the energy dependence of $\tau$ is not well understood, the validity ... depends on the relaxation time approximation, and, most important, vibrations of the lattice can affect the transport of thermal energy in a way that makes it very difficult to achieve an accurate theory of the thermopower. (p. 258)
(The last bit means that once you take phonons into account, heat isn't transferred just through the electrons so your assumptions on balancing heat and charge fall apart.)
So, a classical or semiclassical model for the Seebeck effect has a really hard time describing real metals. There are too many factors coming into play to get an easy answer.
Now, back to the Hall effect. It turns out that the Drude model is also too simplistic to apply under all conditions. It is generally well known that $R_{H}$ depends on the applied magnetic field. In particular, it is known for aluminum (R. Luck, physica status solidi (b) 18(1), 49-57 (1966)) that as one increases the applied magnetic field, the measured Hall coefficient even changes sign, going from indicating three conduction electrons per atom to one conduction hole per atom. This requires multiple kilogauss to observe, far above what one generally uses for Hall effect measurements. As the field increases, the details of the band structure come into play. One way to picture it: once the cyclotron period becomes comparable to the relaxation time of the theory (i.e. $\omega_{c}\tau$ approaches unity), weird things start happening, as the 'classical' electron path of the low-field picture turns into orbits around the Fermi surface completed between collisions - think de Haas-van Alphen effects.
So, at low fields, the Hall coefficient behaves in a fairly classical manner and can be thought of as the response of a classical charge carrier in the material to the applied field; the motion of a charge carrier covers distances much longer than a lattice constant. At high fields, one has to dig into the details of the band structure and the Fermi surface, but those fields are not normally seen in physics labs.
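A rough way to see where 'low field' ends is the dimensionless product $\omega_c\tau = eB\tau/m$: only when it approaches one do carriers complete orbits between collisions and the Fermi-surface details take over. The sketch below uses an assumed relaxation time of order $10^{-14}$ s (a typical room-temperature Drude value) purely for illustration.

```python
# Rough criterion for the high-field regime: omega_c * tau ~ 1, with the
# cyclotron frequency omega_c = e B / m.  tau is an assumed order-of-magnitude
# room-temperature Drude relaxation time, not a measured value.
e   = 1.602e-19     # elementary charge (C)
m_e = 9.109e-31     # free-electron mass (kg)
tau = 1.0e-14       # assumed relaxation time (s)

for B in (0.1, 1.0, 10.0):                 # field in tesla (1 T = 10 kG)
    omega_c = e * B / m_e                  # cyclotron frequency (rad/s)
    print(f"B = {B:5.1f} T : omega_c * tau = {omega_c * tau:.1e}")
# Even at 10 T the product is ~1e-2 for this tau; reaching omega_c*tau >~ 1
# takes very pure, cold samples (much longer tau) and/or very large fields.
```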
For the Seebeck effect, the thermopower coefficient just doesn't work out well, either classically or semiclassically. The Drude result of agreement was an accident. Any other agreements are known to be fortuitous because of the many factors known to come into play. And, it is even considered really hard in a full-blown band structure based calculation to get an accurate number - too much is going on.
So, at low fields, the Hall coefficient is pretty direct, and agreement with the Seebeck effect is hit or miss. At high magnetic fields, well, all bets are off either way.
Addition:
The OP has pointed out a 2014 paper, "First Principles Explanation of the Positive Seebeck Coefficient of Lithium", Bin Xu and Matthieu J. Verstraete, Physical Review Letters 112, 196603 (2014), which uses first-principles modeling and simulation to look at the Seebeck coefficient of lithium. Lithium is interesting in that, as the simplest alkali metal, one might expect it to have a near-ideal electronic structure, i.e. a nearly free-electron Fermi surface. I'll try to summarize some of this more recent work. In the introduction, the authors note:
Though S can be measured straightforwardly in experiment and calculated theoretically within certain approximations, a complete microscopic understanding and paths for systematic improvement of S are still lacking. The most common approach is to consider a constant averaged relaxation time for the electrons (τ). The relaxation time approximation (RTA) works in a surprisingly large number of cases, but has little formal justification; we expose some more of its limitations below.
In the end, the authors use an energy-dependent lifetime, take the actual band structure of Li into account, and get good agreement between their calculations and the measured values of the Seebeck coefficient across a very broad temperature range. They conclude:
Through a comparison between Li and Na, a detailed analysis reveals that the sign of S is determined by the energy dependence of the electron lifetime (generically proportional to the inverse of the electronic DOS), whereas the quantitative influence of the electron-phonon interaction is not important. In Li, the DOS around the Fermi energy deviates considerably from the free-electron model; our analysis contradicts Robinson’s earlier explanations based on exotic energy variations of the electron-phonon coupling.
Now, this is interesting in that the other three well-known elements with 'anomalous' Seebeck coefficients are Cu, Ag, and Au. These noble metals are known to also have 'nearly' free electron band structures (see Ashcroft & Mermin for example). But, like Li, there are deviations right near the Fermi surface, and this changing DOS may play a similar role. However, I was unable to find any good recent references on the Seebeck coefficient in the noble metals.
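One common way to phrase the role of the lifetime's energy dependence is the Mott formula, $S = -\frac{\pi^2}{3}\frac{k_B^2 T}{e}\,\frac{d\ln\sigma(E)}{dE}\Big|_{E_F}$. The toy sketch below is not the paper's first-principles calculation: it assumes a transport function $\sigma(E)\propto \mathrm{DOS}(E)\,\tau(E)$ with a made-up DOS that rises through $E_F$, and simply shows that a sufficiently strong energy dependence of $\tau$ can flip the sign of $S$ even when the DOS itself looks electron-like.

```python
# Toy illustration (not the paper's calculation) of the Mott formula:
#   S = -(pi^2/3) (k_B^2 T / e) * d ln(sigma(E))/dE at E_F,
# with an assumed sigma(E) ~ DOS(E) * tau(E).
import numpy as np

k_B, e, T = 1.381e-23, 1.602e-19, 300.0
E = np.linspace(-1.0, 1.0, 2001) * e            # energy grid around E_F = 0 (J)
i_F = len(E) // 2                               # index of the Fermi level

dos = 1.0 + 0.8 * np.tanh(E / (0.3 * e))        # made-up DOS rising through E_F

def seebeck(tau):
    sigma = dos * tau                           # toy transport function
    dln = np.gradient(np.log(sigma), E)[i_F]    # d ln(sigma)/dE at E_F
    return -(np.pi**2 / 3.0) * (k_B**2 * T / e) * dln

S_flat  = seebeck(np.ones_like(E))              # energy-independent lifetime
S_steep = seebeck(dos**-2.0)                    # lifetime falling steeply with DOS
print(f"constant tau        : S = {S_flat * 1e6:+.1f} uV/K (negative, electron-like)")
print(f"steeply falling tau : S = {S_steep * 1e6:+.1f} uV/K (sign flipped)")
```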
Addition II:
I finally found some early work on the thermopower of copper: "Calculations Pertaining to the Thermopower of Copper", R. W. Williams and H. L. Davis, Physics Letters 28A(6), 412-413 (1968). As they state in their introduction,
The thermopower of metals appears to be particularly sensitive to changes in both the shape of the Fermi surface and the band structure in the immediate vicinity of the Fermi energy. Consequently, a theoretical investigation of the thermopower of any metal with complex band structure in the vicinity of the Fermi energy is in order.
They calculated the electron velocity at some 20,000 points in the vicinity of the Fermi surface, and ended up with a reasonable value for the thermopower. They conclude with:
We believe that our calculations ... reflect the extreme sensitivity of the thermopower on the band structure of a metal and, consequently, show that only a few allowances for details in the structure might lead to erroneous conclusions. We suppose that the explanation for the positive sign of the thermopower might lie in understanding the dependence of $\tau$ on $k$ and $E$.
From this I conclude that my assumption above about the influence of the Fermi surface for (at least) Cu was correct.