Betavoltaic batteries are devices that create electricity from the beta radiation of a radioactive material. Alphavoltaics operate similarly, using alpha radiation. The concept was invented roughly 50 years ago, and such batteries are safe enough to be used, for example, in pacemakers.
However, the Wikipedia article on them states that they were "phased out as cheaper lithium-ion batteries were developed." I feel, though, that lithium-ion batteries are hardly up to the tasks that consumers want them to perform: for instance, iPhones hold their charge for about a day, and notebooks can sometimes manage no more than four hours. Betavoltaics, on the other hand, can hold their charge for years.
Why, then, are they not used in commercial applications? What are their relative advantages and disadvantages with respect to current solutions, and in particular to lithium-ion batteries?
UPDATE
The amount of electricity is tied to the half-life. For example, since Ni-63 has a half-life of about 100 years, one mole of it will emit Avogadro/2 ≈ 3 * 10^23 electrons over those 100 years. That is about 3 * 10^21 electrons per year, or roughly 10^14 electrons per second.
This corresponds to an electric current of up to about 0.02 mA.
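Written out (a rough sketch, treating the decay rate as constant over the first half-life and assuming every emitted electron is collected):
$$ \frac{N_A/2}{100\,\text{yr}} \approx \frac{3.0\times10^{23}}{3.2\times10^{9}\,\text{s}} \approx 10^{14}\,\text{s}^{-1}, \qquad I \approx 10^{14}\,\text{s}^{-1} \times 1.6\times10^{-19}\,\text{C} \approx 1.6\times10^{-5}\,\text{A} \approx 0.02\,\text{mA}. $$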
The maximum energy of the electrons from nickel-63 decay is 67 keV; that is, each electron carries the energy it would gain being accelerated through 67 kilovolts.
So the electrical power from one mole of Nickel-63 is at most 67000 V * 1.6 * 10^(-5) A ≈ 1 watt.
Another way to calculate: if Nickel-63 produces 10^14 electrons per second, each with up to 67 keV of energy, then the power is 6.7 * 10^4 eV * 10^14 /s ≈ 6.7 * 10^18 * 1.6 * 10^(-19) J/s ≈ 1 watt.
So the two estimates are consistent.
Roughly, then, one mole of Nickel-63 provides about 1 watt of electrical power.
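Both estimates amount to the same one-line formula (a sketch; it credits each electron with its full 67 keV, which overstates the usable energy, as the answer's update below explains):
$$ P \approx \frac{N_A/2}{t_{1/2}}\,E_{\beta,\max} \approx 10^{14}\,\text{s}^{-1} \times 67\,\text{keV} \times 1.6\times10^{-16}\,\tfrac{\text{J}}{\text{keV}} \approx 1\,\text{W}. $$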
This looks sufficient for many applications, including an iPhone's power consumption.
One mole of Nickel-63 weighs 63 grams; an iPhone's battery can weigh more than 100 grams.
So atomic batteries could supersede conventional batteries and serve for years.
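To see why "years" is plausible (a rough comparison; the ~10 Wh figure for a typical phone battery is a ballpark assumption, not a number from the calculation above):
$$ 1\,\text{W}\times 1\,\text{yr} \approx 1\,\text{W}\times 3.2\times10^{7}\,\text{s} \approx 3\times10^{7}\,\text{J} \approx 9\,\text{kWh}, $$
i.e. several hundred times the roughly 10 Wh a phone battery delivers per charge.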
So why don't we use them?
Answer
There are many reasons for this situation.
1. Power produced is non-adjustable. The battery produces power at a nearly constant rate (slowly decaying over time). The output cannot be increased on demand, and if the power is not consumed (or stored), it is lost.
2. Low power density (mentioned by DumpsterDoofus). ${}^{63}\text{Ni}$, for instance, produces ~5 W/kg (and the kilogram here is just the mass of the radioactive material; the actual battery would be at least an order of magnitude heavier); a rough cross-check of this figure appears right after this list. There are, of course, isotopes with much higher power densities, but they run into other problems.
3. Semiconductor damage. If we try to increase the power by using isotopes with higher decay energies, we find that the high-energy electrons damage semiconductors, reducing the service life of the battery to times much shorter than the isotope's half-life. Alpha particles in particular damage the p-n junctions, so even though (for instance) ${}^{238}\text{Pu}$ produces 0.55 W/g of alpha radiation, it is mainly used in thermoelectric schemes rather than in direct energy converters.
4. Gamma radiation. Many isotopes have gamma emission as a secondary decay mode. Since this type of radiation is difficult to shield, the selection of isotopes usable for batteries is effectively limited to pure beta emitters.
5. Bremsstrahlung. Decelerating electrons produce bremsstrahlung, which also has to be shielded. Again, this limits the selection to isotopes with relatively low decay energies.
6. Low volume of production / economics. Many isotopes simply cost too much to be practical in a wide array of applications. This is partly explained by the low volume of production and partly by the production process itself, which will be costly at any volume because it requires energy-intensive isotope separation and special facilities for working with radioactive materials. For instance, tritium (one of the materials used for betavoltaics) costs about $30,000 per gram, and its world annual production is about 400 g (per Wikipedia).
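As a rough cross-check of the ~5 W/kg figure in reason 2 (a sketch using the ~100-year half-life of ${}^{63}\text{Ni}$ and the ~17 keV mean beta energy discussed in the update below):
$$ \frac{\ln 2}{t_{1/2}}\cdot\frac{N_A}{63\,\text{g/mol}} \approx \frac{0.693}{3.2\times10^{9}\,\text{s}}\times 9.6\times10^{21}\,\text{g}^{-1} \approx 2.1\times10^{12}\,\tfrac{\text{decays}}{\text{s}\cdot\text{g}}, $$
$$ 2.1\times10^{12}\,\tfrac{1}{\text{s}\cdot\text{g}} \times 17\,\text{keV} \times 1.6\times10^{-16}\,\tfrac{\text{J}}{\text{keV}} \approx 5.7\times10^{-3}\,\tfrac{\text{W}}{\text{g}} \approx 6\,\tfrac{\text{W}}{\text{kg}}, $$
in reasonable agreement with the quoted ~5 W/kg.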
All this means that nuclear batteries are limited to a selection of niche applications, typically those with low power and long autonomous-lifetime requirements. That is not to say that there can't be innovations expanding their use or reducing costs.
Update. Your updated calculation of the power output from ${}^{63}\text{Ni}$ is essentially correct, with one crucial distinction: 67 keV is the total decay energy, and approximately the maximum energy of the emitted electron. But since the decay also produces an antineutrino, the mean electron energy is much smaller: about 17 keV (see this NUDAT reference, or this Java applet for the electron spectrum). So the usable power from 1 mole of ${}^{63}\text{Ni}$ is: $$ W = \left({}^{63}\text{Ni specific activity}\right) \times 17\,\text{keV} \times 63\,\text{g} \approx 0.36\,\text{W}, $$ where the specific activity can, for instance, be taken from Wolfram Alpha. This is not sufficient to cover an iPhone's peak power consumption, which is about 1.5 W (see my reason 1).
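Unpacking the numbers (a sketch; the specific activity of ${}^{63}\text{Ni}$ follows from its ~100-year half-life and is roughly $2.1\times10^{12}$ Bq/g, i.e. about 57 Ci/g):
$$ W \approx 2.1\times10^{12}\,\tfrac{\text{Bq}}{\text{g}} \times 63\,\text{g} \times 17\,\text{keV} \times 1.6\times10^{-16}\,\tfrac{\text{J}}{\text{keV}} \approx 0.36\,\text{W}. $$
The same specific activity also reproduces the activity quoted in the last reason below: $57\,\text{Ci/g} \times 63\,\text{g} \approx 3600\,\text{Ci}$.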
Incidentally, we come to one more reason (though not, strictly speaking, related to physics):
7. Safety / regulations / perception. 63 grams of ${}^{63}\text{Ni}$ amounts to more than 3500 curies of radioactivity, which would definitely require regulated handling and probably would not be allowed inside a single unit for unrestricted civilian use. We know that, when properly used, betavoltaics are safe. But what about improper use, improper disposal, or the potential for abuse? At any rate, the current perception of nuclear power by the general public is not that good, so marketing nuclear batteries would present a certain challenge.