In Feynman's book "Quantum Mechanics and Path Integrals" Feynman states that
the probability $P(b,a)$ to go from point $x_a$ at time $t_a$ to the point $x_b$ at the time $t_b$ is the absolute square $P(b,a) = |K(b,a)|^2$ of an amplitude $K(b,a)$ to go from $a$ to $b$. This amplitude is the sum of contributions $\phi[x(t)]$ from each path: $$ K(b,a) = \sum_{\text{paths from $a$ to $b$}} \phi[x(t)].$$ The contribution of a path has a phase proportional to the action $S$: $$ \phi[x(t)] = \text{const}\; e^{(i/\hbar)S[x(t)]}.$$
Why must the contribution of a path be $\sim e^{(i/\hbar)S[x(t)]}$? Can this be somehow derived or explained? Why can't the contribution of a path be something else e.g. $\sim \frac{S}{\hbar}$, $\sim \cos(S/\hbar)$, $\log(S/\hbar)$ or $e^{- (S[x(t)]/\hbar)^2}$ ?
Edit: I have to admit that in the first version of this question I didn't exclude the possibility of deriving the contribution of a path directly from Schrödinger's equation, so answers along this line are valid, although not so interesting. I think when Feynman developed his formalism, his goal was to find a way to quantize systems that cannot be treated by Schrödinger's equation because they cannot be described in terms of a Hamiltonian (e.g. the Wheeler-Feynman absorber theory). So I think a good answer would explain Feynman's Ansatz without referring to Schrödinger's equation, because Schrödinger's equation can only handle a specific subset of the systems that can be treated by Feynman's more general principle.
Answer
There are already several good answers. Here I will only answer the very last question, i.e., if the Boltzmann factor in the path integral is $f(S(t_f,t_i))$, with action $S(t_f,t_i)=\int_{t_i}^{t_f} dt \ L(t)$, why is the function $f:\mathbb{R}\to\mathbb{C}$ an exponential function, and not something else?
Well, since the Feynman "sum over histories" propagator should have the group property
$$ K(x_3,t_3;x_1,t_1) = \int_{-\infty}^{\infty}\mathrm{d}x_2 \ K(x_3,t_3;x_2,t_2) K(x_2,t_2;x_1,t_1),$$
one must demand that
$$f(S(t_3,t_2))f(S(t_2,t_1)) = f(S(t_3,t_1)) = f(S(t_3,t_2)+S(t_2,t_1)),$$
$$f(S(t_1,t_1)) = 1.$$
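As a quick sanity check (not part of the original argument), the composition rule can be verified numerically for the Euclidean, i.e. imaginary-time, free-particle kernel, which obeys the same group property as the real-time propagator but is non-oscillatory and therefore easy to integrate; the diffusion constant `D` below is an illustrative stand-in for $\hbar/2m$:

```python
import numpy as np

def K(xb, tb, xa, ta, D=0.5):
    """Euclidean free-particle kernel (heat kernel); D plays the role of hbar/2m."""
    dt = tb - ta
    return np.exp(-(xb - xa) ** 2 / (4.0 * D * dt)) / np.sqrt(4.0 * np.pi * D * dt)

x1, t1 = 0.0, 0.0
x3, t3 = 1.3, 2.0
t2 = 0.7  # intermediate time

# integrate over the intermediate point x2 on a wide, fine grid
x2 = np.linspace(-30.0, 30.0, 20001)
dx = x2[1] - x2[0]
lhs = np.sum(K(x3, t3, x2, t2) * K(x2, t2, x1, t1)) * dx
rhs = K(x3, t3, x1, t1)
print(lhs, rhs)  # the two values agree to high accuracy
```

The real-time quantum kernel satisfies the same identity; the Wick rotation is used here only to make the intermediate integral numerically tame.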
So the question boils down to: How many continuous functions $f:\mathbb{R}\to\mathbb{C}$ satisfy $f(s)f(s^{\prime})=f(s+s^{\prime})$ and $f(0)=1$?
Answer: The exponential function!
Proof (ignoring some mathematical technicalities): If $s$ is infinitesimally small, then one may Taylor expand
$$f(s) = f(0) + f^{\prime}(0)s +{\cal O}(s^{2}) = 1+cs+{\cal O}(s^{2}), $$
with some constant $c:=f^{\prime}(0)$. Then one calculates
$$ f(s)=\lim_{n\to\infty}f\!\left(\frac{s}{n}\right)^{n} =\lim_{n\to\infty}\left(1+\frac{cs}{n}+o\!\left(\frac{1}{n}\right)\right)^{n} =e^{cs}, $$
i.e., the exponential function.
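Both facts used above can be illustrated numerically (a sketch with the illustrative choice $\hbar=1$, so $c=i$): the exponential satisfies the functional equation, one of the alternatives from the question, $\cos(S/\hbar)$, does not, and the product from the proof converges to $e^{cs}$:

```python
import cmath
import math

c = 1j  # c = i/hbar with hbar = 1 (illustrative choice)
s, sp = 0.7, 1.9

f = lambda x: cmath.exp(c * x)

# the composition property f(s) f(s') = f(s + s') holds for the exponential
print(abs(f(s) * f(sp) - f(s + sp)))  # ~ 0 (machine precision)

# ...but fails for, e.g., cos(S), one of the alternatives in the question
print(abs(math.cos(s) * math.cos(sp) - math.cos(s + sp)))  # clearly nonzero (~0.61)

# the limit (1 + cs/n)^n -> e^{cs} from the proof
n = 10**6
print(abs((1 + c * s / n) ** n - f(s)))  # ~ 0
```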