From what I understand, especially from reading arguments on Physics.SE, the different (length) scales of a system are extremely important. It's clear that if there are several scales $\delta, d, D, \Delta$ with, say, $$\delta < d \ll D < \Delta,$$ then effects which happen on the length scale $d$ might be neglected if one is interested in effects on the length scale $D$.
However, I don't see how these points justify arguments involving the length scales per se. As far as I can see, the existence of a quantity itself doesn't imply that physical effects are of that size. If I have a natural length scale $l$ and problems with geometries involving a length $L$, then things could end up depending on $$l\propto\frac{1}{16^{\pi^2}}L$$ or on $$2^{\text{dim}}L\gg L.$$ For example, people often argue that if some characteristic action $H$ is close to $\hbar$ then things will get problematic. I don't see why this would be a priori justified at all. Conversely, why does a model with some small quantity automatically have to be suppressed at some point? And if I have a small and a big scale, why are the scales in between relevant, given that their value is just some real number times one of the other values?
So how is the apparent predominance of reasoning with length scales to be explained? I'm specifically thinking of field theories here, but not exclusively.
Answer
The reason is that computations producing large dimensionless numbers are generally complex, in a computational sense: you have to have a plan, or a specific intent, to produce such numbers. Physical systems are generally dumb, relying on very simple computations at their core, and these simple computations do not produce large dimensionless numbers easily.
This is only true for exponentially large numbers; polynomially large dimensionless numbers can be generated relatively naturally. For example, when considering atoms, you might believe that the wavelength of visible light, which is about 10,000 times bigger than the radius of the atom, is too big to have appreciable scattering off an atom. But this is not so if the atom has an absorption peak at a nearby wavelength. In this case, the resonance for frequencies close to the peak frequency can amplify the scattering by a factor which goes as $f_0/(f-f_0)$, where $f_0$ is the resonant frequency and $f$ is the frequency of the light. This is a simple polynomial amplification factor, but it can easily give you a factor of 10,000 if $f$ is tuned to within 1 part in 10,000 of the resonant frequency.
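To make the arithmetic explicit (a minimal worked example; the numbers are chosen purely for illustration), write the detuning as a fraction $\epsilon$ of the resonant frequency, $f - f_0 = \epsilon f_0$. The amplification factor then becomes $$\frac{f_0}{f-f_0} = \frac{1}{\epsilon},$$ so a detuning of $\epsilon = 10^{-4}$, i.e. light tuned to within 1 part in 10,000 of the resonance, gives an amplification of $10^4$, precisely the ratio between the optical wavelength and the atomic radius.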
In this case, which is typical, there is a small dimensionless parameter, the detuning parameter of the resonance, which serves to amplify the scattering cross section so that it can be many orders of magnitude bigger than the geometric, atomic-sized cross section. But absent this fine tuning, the general argument that light is bigger than an atom is correct, so the light scatters off the atom only a small amount.
To get an exponentially enormous dimensionless number, like $10^{10^2}$, you need a machine that computes exponentials, and it is difficult to think up a physical system which computes exponentials. You are constrained by conservation of energy, so you generally can't have replication like bacterial growth.
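To see just how cheap exponentials are computationally, and therefore how special a physical system would have to be to compute one, here is a minimal sketch (in Python, purely for illustration; the doubling process stands in for unconstrained bacterial-style replication):

```python
import math

def generations_to_reach(target: float) -> int:
    """Doublings needed for a population 2**n to exceed `target`."""
    return math.ceil(math.log2(target))

# A replicator that doubles each generation computes an exponential:
# only ~333 doublings already exceed 10**100 (i.e. 10**(10**2))...
print(generations_to_reach(1e100))   # 333

# ...whereas a polynomially growing quantity (~ n**2, say) would need
# about 10**50 steps to reach the same dimensionless number.
print(math.isqrt(10**100))           # 10**50
```

The catch is exactly the conservation constraint mentioned above: each doubling doubles the resources consumed, so reaching $2^{333}$ replicas would require roughly $10^{100}$ times the initial energy budget, which is why dumb physical systems don't get to run this computation.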