Dirac once said the following about renormalization in quantum field theory:
> Renormalization is just a stop-gap procedure. There must be some fundamental change in our ideas, probably a change just as fundamental as the passage from Bohr's orbit theory to quantum mechanics. When you get a number turning out to be infinite which ought to be finite, you should admit that there is something wrong with your equations, and not hope that you can get a good theory just by doctoring up that number.
Has this fundamental change since come along, and if so, what is its nature? Is it the attempt to unify quantum mechanics with general relativity (whose two main streams are String Theory and Loop Quantum Gravity, neither of which, I think, corresponds with reality, but that aside)? Is it something more exotic? Or was Dirac simply wrong to assume that the procedure is just a "stop-gap"?
Answer
There are a lot of projects going on, and I'll try to sum them up with pithy one-liners that are as accurate as my own (admittedly limited) understanding of them allows. The proposed solutions include:
- Classical renormalization: it's the predictions that matter, and renormalization is just the only (admittedly complicated) way of taking the continuum limit we have.
- Wilsonian renormalization: it's simply not possible to construct a non-trivial theory that is not a low energy effective theory, and the non-renormalizable couplings are precisely the ones whose effects fade away at low energies (a dimensional-analysis sketch follows the list).
- String theory: this whole 4-d space-time is an illusion, built up from interacting 2-d space-times (string worldsheets). Because all interactions are renormalizable in 2-d, the problems go away (though the price is several compactified space-like dimensions that we have yet to see).
- Loop quantum gravity: the problem comes from taking the continuum limit in space-time, so let's just throw out the idea of a continuum altogether.
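To make the Wilsonian one-liner concrete, here is the standard dimensional-analysis estimate (a schematic sketch; the generic operator $\mathcal{O}_n$, its dimension $D_n$, and the cutoff $\Lambda$ are my notation, not tied to any particular theory):

```latex
% Wilsonian power counting (schematic): an operator \mathcal{O}_n of
% mass dimension D_n enters the action with a coupling of mass
% dimension 4 - D_n.
\[
  S \supset \int d^4x \, \frac{c_n}{\Lambda^{D_n - 4}} \, \mathcal{O}_n,
  \qquad c_n \ \text{dimensionless}.
\]
% Its contribution to an amplitude at energy E << \Lambda scales as
\[
  c_n \left( \frac{E}{\Lambda} \right)^{D_n - 4},
\]
% so the non-renormalizable couplings (D_n > 4) are exactly the ones
% whose effects fade away in the low-energy effective theory.
```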
I don't find any of these approaches particularly satisfying. My own inclination is to favor the "more derivatives" approach because it involves the fewest technical changes, but it requires an enormous philosophical change. That philosophical change is forced by the requirement that the theory be Lorentz invariant: it would, in principle, be possible to make theories not just renormalizable but UV finite by adding more spatial derivatives. Because of Lorentz invariance, though, adding more space derivatives necessarily entails adding more time derivatives, and Ostrogradsky showed, in classical physics alone, that more than two time derivatives necessarily entails a Hamiltonian with no lower bound (good technical overviews are Woodard (2007) and Woodard (2015)).
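For concreteness, here is the core of Ostrogradsky's construction in its simplest setting (the standard textbook argument, in notation along the lines of Woodard's reviews; nothing here is specific to any particular theory):

```latex
% Ostrogradsky's construction for a Lagrangian L(q, \dot q, \ddot q)
% that depends nondegenerately on \ddot q. Canonical variables:
\[
  Q_1 = q, \quad Q_2 = \dot q, \quad
  P_1 = \frac{\partial L}{\partial \dot q}
        - \frac{d}{dt} \frac{\partial L}{\partial \ddot q}, \quad
  P_2 = \frac{\partial L}{\partial \ddot q}.
\]
% Nondegeneracy lets us invert P_2 for the acceleration,
% \ddot q = a(Q_1, Q_2, P_2), and the generator of time translations is
\[
  H = P_1 Q_2 + P_2 \, a(Q_1, Q_2, P_2)
      - L\big(Q_1, Q_2, a(Q_1, Q_2, P_2)\big).
\]
% H is linear in P_1, so it is unbounded below for any choice of L:
% that linearity is the instability.
```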
The Hamiltonian's role in confining the theory to a finite volume of phase space is considered so important that it makes up half of one of the axioms that goes into QFT; in sum:
- there exists an operator, the Hamiltonian, that generates time translations (and is the Noether charge conserved because the laws of physics are time-translation invariant), and
- the eigenvalues of the generator of time translations are positive semi-definite (or at least have a lower bound).
The content of the Källén-Lehmann representation (see Wikipedia; also section 10.7 of Weinberg's "The Quantum Theory of Fields", Vol. I) is that the above postulate, combined with Lorentz invariance, necessarily implies no more than two derivatives in the inverse of the propagator.
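For reference, the representation in question writes the full two-point function of a scalar field as a positivity-weighted sum of free propagators (conventions roughly those of Weinberg Vol. I; signs depend on the metric signature):

```latex
% Källén-Lehmann spectral representation for a scalar field:
\[
  \Delta'(p) = \int_0^\infty d\mu^2 \,
               \frac{\rho(\mu^2)}{p^2 + \mu^2 - i\epsilon},
  \qquad \rho(\mu^2) \ge 0.
\]
% The positivity of \rho follows from a positive-norm state space and
% a bounded-below generator of time translations, plus Lorentz
% invariance. It forces \Delta'(p) to fall off no faster than 1/p^2,
% i.e. the inverse propagator grows no faster than two powers of p.
```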
The combination of Ostrogradsky and Källén-Lehmann seems insuperable, but only if you insist on maintaining that "Hamiltonian = energy" (here I use "Hamiltonian" as shorthand for the generator of time translations, and "energy" as shorthand for the conserved charge that has a lower bound and confines the fields in phase space). I suspect that if you're willing to split those two jobs up, the difficulties in higher-derivative theories disappear. The new version of the energy/time-translation postulate would be something like:
1. the generators of space-time translations (the Hamiltonian and the rest of the 4-momentum) are conserved,
2. there exists a conserved 4-vector operator that takes on values in the forward light cone, and
3. the operators in 1 and 2 coincide at low frequency (the classical correspondence limit).
A key paper in this direction is Kaparulin, Lyakhovich, and Sharapov (2014), "Classical and quantum stability of higher-derivative dynamics" (and the papers that cite it, especially those by the same authors), which shows that the instability of the Pais-Uhlenbeck oscillator only becomes a problem when you couple the higher-derivative sector to other sectors in certain ways, and that it remains stable when the couplings are restricted appropriately.
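The Pais-Uhlenbeck oscillator also makes the proposed division of labor concrete (a standard example; I assume unequal frequencies $\omega_1 \neq \omega_2$ so the normal modes decouple):

```latex
% Pais-Uhlenbeck oscillator (fourth-order equations of motion):
\[
  L = \tfrac{1}{2} \left[ \ddot q^{\,2}
      - (\omega_1^2 + \omega_2^2) \, \dot q^{\,2}
      + \omega_1^2 \omega_2^2 \, q^2 \right].
\]
% In normal modes this splits into two ordinary oscillators with
% non-negative mode energies E_1 and E_2. Up to positive rescalings
% of the modes, the Ostrogradsky Hamiltonian is the difference
\[
  H = E_1 - E_2,
\]
% which generates time translations but is unbounded below, while the
% sum E_1 + E_2 is also conserved and bounded below: two distinct
% conserved quantities doing the two distinct jobs described above.
```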
All of that said, more derivatives wouldn't be a panacea. If you try to remove the divergences of a gauge theory by adding more derivatives, for instance, gauge invariance forces you to add interaction terms with more derivatives in just such a way as to keep the theory as divergent as it was to begin with. Note that "more derivatives" is mathematically equivalent to Pauli-Villars (PV) regularization, by partial fraction decomposition of the Fourier transform of the propagator. PV is known not to play well with gauge theories precisely because of this issue, although that is usually worded as PV violating gauge invariance, because the higher-derivative couplings required to maintain gauge invariance are left out.
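The claimed equivalence is just a partial-fraction identity on a higher-derivative propagator (schematic, Euclidean signature, with $\Lambda$ the scale set by the extra derivatives):

```latex
% A propagator with two extra derivatives, decomposed by partial
% fractions:
\[
  \frac{1}{p^2 \left( 1 + p^2/\Lambda^2 \right)}
    = \frac{1}{p^2} - \frac{1}{p^2 + \Lambda^2}.
\]
% The right-hand side is the Pauli-Villars prescription: the original
% propagator minus a wrong-sign (ghost) propagator of mass \Lambda.
% "More derivatives" and PV are the same modification of the
% two-point function, viewed in two different ways.
```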