Tuesday, February 2, 2016

quantum mechanics - Heisenberg's uncertainty principle for mean deviation?


The Heisenberg uncertainty principle states that


$$\sigma_x\,\sigma_p\ge\frac{\hbar}{2}$$


However, this is only for the standard deviation. What is the inequality if the mean deviation, defined as


$$\bar\sigma_x=\int|x-\bar x|\,\rho(x)\,\mathrm dx=\int|x-\bar x|\,|\Psi(x)|^2\,\mathrm dx$$


is used as the measure of dispersion? This measure of dispersion generally gives values less than the standard deviation.



Is there a positive number λ such that


$$\bar\sigma_x\,\bar\sigma_p\ge\lambda\hbar$$


holds in general?



Answer



We can assume WLOG that $\bar x=\bar p=0$ and $\hbar=1$. We don't assume that the wave-functions are normalised.


Let $$\sigma_x\equiv\frac{\int_{\mathbb R}|x|\,|\psi(x)|^2\,\mathrm dx}{\int_{\mathbb R}|\psi(x)|^2\,\mathrm dx}\qquad\text{and}\qquad\sigma_p\equiv\frac{\int_{\mathbb R}|p|\,|\tilde\psi(p)|^2\,\mathrm dp}{\int_{\mathbb R}|\tilde\psi(p)|^2\,\mathrm dp}$$


Using $\int_{\mathbb R}|p|\,\mathrm e^{ipx}\,\mathrm dp=-\frac{2}{x^2}$ we can prove that$^1$
$$\sigma_x\sigma_p=-\frac{1}{\pi}\,\frac{\int_{\mathbb R^3}|\psi(z)|^2\,\psi^*(x)\,\psi(y)\,\frac{|z|}{(x-y)^2}\,\mathrm dx\,\mathrm dy\,\mathrm dz}{\left[\int_{\mathbb R}|\psi(x)|^2\,\mathrm dx\right]^2}\equiv\frac{1}{\pi}F[\psi]$$


In the case of Gaussian wave packets it is easy to check that $F=1$, that is, $\sigma_x\sigma_p=\frac{1}{\pi}$. We know that Gaussian wave-functions have the minimum possible spread, so we might conjecture that $\lambda=1/\pi$. I haven't been able to prove that $F[\psi]\ge1$ for all $\psi$, but it seems reasonable to expect that $F$ is minimised for Gaussian functions. The reader could try to prove this claim by using the Euler-Lagrange equations for $F[\psi]$, because after all $F$ is just a functional of $\psi$.
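As a quick numerical sanity check of the Gaussian value, here is a minimal sketch (my own construction, assuming NumPy; the helper name `mean_dev_product` and the grid parameters are arbitrary choices) that evaluates $\sigma_x\sigma_p$ on a grid, using the FFT for the momentum-space density:

```python
import numpy as np

def mean_dev_product(psi, x):
    """sigma_x * sigma_p (mean absolute deviations, hbar = 1) for a wave-function
    psi sampled on the uniform grid x, assuming x-bar = p-bar = 0."""
    dx = x[1] - x[0]
    rho_x = np.abs(psi) ** 2
    sigma_x = np.sum(np.abs(x) * rho_x) / np.sum(rho_x)

    # momentum-space density from the DFT; only |psi-tilde|^2 enters,
    # so the overall phase and normalisation of the FFT are irrelevant
    p = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)
    rho_p = np.abs(np.fft.fft(psi)) ** 2
    sigma_p = np.sum(np.abs(p) * rho_p) / np.sum(rho_p)

    return sigma_x * sigma_p

x = np.linspace(-30, 30, 2 ** 14)
print(np.pi * mean_dev_product(np.exp(-x ** 2), x))   # F for a Gaussian: should be ~1
```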




Testing the conjecture



I evaluated $F[\psi]$ for a few test functions $\psi$ (here $\Pi$ is the rectangular function, $\Lambda$ the triangular function, and $J_n$ the Bessel functions of the first kind):
$$
\begin{aligned}
F\left[\exp(-ax^2)\right]&=1\\
F\left[\Pi\left(\tfrac xa\right)\cos\left(\tfrac{\pi x}{a}\right)\right]&=\frac{\pi^2-4}{2\pi^2}\left(\pi\,\mathrm{Si}(\pi)-2\right)\approx1.13532\\
F\left[\Pi\left(\tfrac xa\right)\cos^2\left(\tfrac{\pi x}{a}\right)\right]&=\frac{3\pi^2-16}{9\pi^2}\left(\pi\,\mathrm{Si}(2\pi)+\log(2\pi)+\gamma-\mathrm{Ci}(2\pi)\right)\approx1.05604\\
F\left[\Lambda\left(\tfrac xa\right)\right]&=\frac{3\log2}{2}\approx1.03972\\
F\left[\frac{J_1(ax)}{x}\right]&=\frac{9\pi^2}{64}\approx1.38791\\
F\left[\frac{J_2(ax)}{x}\right]&=\frac{75\pi^2}{128}\approx5.78297
\end{aligned}
$$
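Reusing the grid and the `mean_dev_product` helper from the sketch above (the `rect` and `tri` helper names are mine), a few of these values can be reproduced numerically and should roughly match the closed forms quoted:

```python
rect = lambda t: (np.abs(t) < 0.5).astype(float)    # Pi(x): rectangle of unit width
tri  = lambda t: np.clip(1 - np.abs(t), 0, None)    # Lambda(x): unit triangle

for label, psi in [("Pi * cos",   rect(x) * np.cos(np.pi * x)),
                   ("Pi * cos^2", rect(x) * np.cos(np.pi * x) ** 2),
                   ("Lambda",     tri(x))]:
    print(label, np.pi * mean_dev_product(psi, x))
```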


As pointed out by knzhou, any function that depends on a single dimensionful parameter $a$ has an $F$ that is independent of that parameter (as the examples above confirm). If we take instead functions that depend on a dimensionless parameter $n$, then $F$ will depend on it, and we may try to minimise $F$ with respect to that parameter. For example, if we take $\psi_n(x)=\Pi(x)\cos^n(\pi x)$ then we find $F[\psi_n]>1$, decreasing towards $1+\frac{1}{12n}$ for large $n$, so that $F[\psi_n]$ is minimised as $n\to\infty$, where $F\to1$.
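With the same helper, one can scan this dimensionless family and watch $F$ creep down towards the Gaussian value (again an illustrative sketch, not a proof):

```python
# scan the family psi_n = Pi(x) cos^n(pi x); the printed values should decrease towards 1
for n in (1, 2, 4, 8, 16, 32):
    psi_n = rect(x) * np.cos(np.pi * x) ** n
    print(n, np.pi * mean_dev_product(psi_n, x))
```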


Similarly, if we take $\psi_n(x)=\frac{J_{2n+1}(x)}{x}$ we get $$F[\psi_n]=\frac{(4n+1)^2(4n+3)^2\pi^2}{64(2n+1)^3}\ge\frac{9\pi^2}{64}\approx1.38791,$$ which is, again, consistent with our conjecture.


The function $\psi_n(x)=\frac{1}{(x^2+1)^n}$ has $$F[\psi_n]=\frac{\Gamma(2n)^2\,\Gamma\!\left(n+\frac12\right)^2}{(2n-1)\,n!\,\Gamma(n)\,\Gamma\!\left(2n-\frac12\right)^2}\ge1,$$ which satisfies our conjecture.


As a final example, note that $\psi_n(x)=x^n\,\mathrm e^{-x^2}$ has $$F[\psi_n]=\frac{2^n\,n!\,\Gamma\!\left(\frac{n+1}{2}\right)^2}{\Gamma\!\left(n+\frac12\right)^2}\ge1,$$ as required.


We could do the same for other families of functions so as to be more confident about the conjecture.


Conjecture's wrong! (2018-03-04)


User Frédéric Grosshans has found a counter-example to the conjecture. Here we extend their analysis a bit.


We note that the functions $\psi_n(x)=H_n(x)\,\mathrm e^{-\frac12x^2}$, with $H_n$ the Hermite polynomials, form a basis of $L^2(\mathbb R)$. We may therefore write any function as $$\psi(x)=\sum_{j=0}^\infty a_j\,H_j(x)\,\mathrm e^{-\frac12x^2}$$


Truncating the sum to $j\le N$ and minimising with respect to $\{a_j\}_{j\in[1,N]}$ yields the minimum of $F$ when restricted to that subspace, $$\min_{\{a_j\}_{j\in[1,N]}}F\!\left[\sum_{j=0}^N a_j\,H_j(x)\,\mathrm e^{-\frac12x^2}\right].$$



Taking the limit $N\to\infty$ yields the infimum of $F$ over $L^2(\mathbb R)$. I don't know how to carry out this minimisation analytically, but it is rather simple to do so numerically:
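For reference, here is a self-contained sketch of this truncated minimisation (my own construction: the FFT-based evaluation of $F$, the choice of fixing $a_0=1$ by scale invariance, the optimiser, and the grid sizes are all assumptions; NumPy and SciPy are required):

```python
import numpy as np
from scipy.optimize import minimize
from numpy.polynomial.hermite import hermval

# uniform grid for the FFT-based evaluation of F
x = np.linspace(-25, 25, 2 ** 13)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)

def F(a):
    """F[psi] for psi = sum_j a_j H_j(x) exp(-x^2/2), with a_0 fixed to 1
    (allowed because F is invariant under rescaling of psi)."""
    psi = hermval(x, np.concatenate(([1.0], a))) * np.exp(-0.5 * x ** 2)
    rho_x, rho_p = np.abs(psi) ** 2, np.abs(np.fft.fft(psi)) ** 2
    sigma_x = np.sum(np.abs(x) * rho_x) / np.sum(rho_x)
    sigma_p = np.sum(np.abs(p) * rho_p) / np.sum(rho_p)
    return np.pi * sigma_x * sigma_p

Ns, minima = range(1, 9), []
for N in Ns:
    res = minimize(F, np.zeros(N), method="Nelder-Mead",
                   options={"maxiter": 50000, "xatol": 1e-9, "fatol": 1e-12})
    minima.append(res.fun)
    print(N, res.fun)

# extrapolate with the model a + b / N^2; 'a' estimates the N -> infinity infimum
b, a = np.polyfit([1.0 / N ** 2 for N in Ns], minima, 1)
print("asymptotic estimate:", a)
```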


[Figure: numerical minimum of $F$ as a function of the truncation order $N$, together with the conjectured, fitted, and counter-example bounds.]


The upper and lower dashed lines represent the conjectured bound $F\ge 1$ and Frédéric's bound $F\ge \pi^2/4\mathrm e$. The solid line is a fit of the numerical results to the model $a+b/N^2$, which yields the asymptotic estimate $F\gtrsim 0.9574$, represented by the middle dashed line.


If these numerical results are reliable, we would conclude that the true bound is around $F[\psi]\gtrsim 0.9574$, which is close to the Gaussian result and above Frédéric's. This seems to confirm their analysis. A rigorous proof is lacking, but the numerics are indeed very suggestive. I guess at this point we should ask our friends the mathematicians to come and help us; the problem seems interesting in and of itself, so I'm sure they'd be happy to help.




Other moments


If we use $$\sigma_x(\nu)=\int\mathrm dx\ |x|^\nu\,|\psi(x)|^2,\qquad \nu\in\mathbb N$$ (for normalised $\psi$) to measure the dispersion, we find that, for Gaussian functions, $$\sigma_x(\nu)\,\sigma_p(\nu)=\frac{1}{\pi}\,\Gamma\!\left(\frac{1+\nu}{2}\right)^2$$


In this case we get $\sigma_x\sigma_p=1/\pi$ for $\nu=1$ and $\sigma_x\sigma_p=1/4$ for $\nu=2$, as expected. It's interesting to note that $\sigma_x(\nu)\,\sigma_p(\nu)$ is minimised for $\nu=2$, that is, for the usual HUR.
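As a cross-check of the Gaussian formula, the following sketch (assumptions as before: the NumPy grid and FFT conventions are my own choices) compares the numerically evaluated $\sigma_x(\nu)\,\sigma_p(\nu)$ with $\frac1\pi\Gamma\left(\frac{1+\nu}{2}\right)^2$:

```python
import numpy as np
from math import gamma, pi

x = np.linspace(-30, 30, 2 ** 14)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)
dp = 2 * pi / (len(x) * dx)

psi = np.exp(-x ** 2)                      # Gaussian test function
rho_x = np.abs(psi) ** 2
rho_x /= rho_x.sum() * dx                  # normalised position density
rho_p = np.abs(np.fft.fft(psi)) ** 2
rho_p /= rho_p.sum() * dp                  # normalised momentum density

for nu in range(1, 5):
    s_x = np.sum(np.abs(x) ** nu * rho_x) * dx
    s_p = np.sum(np.abs(p) ** nu * rho_p) * dp
    print(nu, s_x * s_p, gamma((1 + nu) / 2) ** 2 / pi)   # the two numbers should agree
```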




$^1$ We might need to introduce a small imaginary part in the denominator, $x-y-i\epsilon$, to make the integrals converge.


