Tuesday, August 12, 2014

quantum mechanics - General wavefunction and Schrödinger Equation


I'm starting out with quantum mechanics, and the book I follow (Griffiths) first introduces the wavefunction for a single spin-0 particle, whose squared modulus is the probability density of the particle's position. Later on I've realized that there is a bigger picture: the state of a system can be expressed as a superposition, i.e. a linear combination of the vectors of a particular basis (depending on the observable we choose), with a (possibly continuous) set of coefficients that are the probability amplitudes of the eigenvalues. If the chosen observable happens to be position, we obtain the wavefunction.


So now I'm wondering the following: is this the definition of a wavefunction, or is there a wavefunction for every observable, this wavefunction being just a map from the possible values of the observable to their probability amplitudes?


If so: is the time dependence of this wavefunction always governed by the Schrödinger equation? For example, consider a 2-dimensional system with eigenstates |0> and |1> of a certain observable, with respective eigenvalues 0 and 1. We have the superposed state f(0,t)|0> + f(1,t)|1>, where f(state, time) is the "wavefunction" that assigns a probability amplitude to each eigenstate. Does this function f have to satisfy the Schrödinger equation, or are there other constraints?



Answer





So now I'm wondering the following: is this the definition of a wavefunction, or is there a wavefunction for every observable, this wavefunction being just a map from the possible values of the observable to their probability amplitudes?



There is indeed a wavefunction for every observable. The state $|\psi\rangle$ is a vector in a complex Hilbert space. The wavefunction consists of its components along the eigenstates of some arbitrary observable $\hat A$: $$\psi(a) = \langle a|\psi\rangle\text{,}$$ where $a$ is an eigenvalue of $\hat{A}$ and $|a\rangle$ is the corresponding eigenvector. In position-space, the eigenvectors of the position operator $\hat{x}$ look like Dirac deltas, so this is obviously true for the usual position-space wavefunction just from the definition of the inner product. However, thinking of them as Dirac deltas obscures the fact that there is nothing intrinsically special about the position operator: in fact any observable will do, although a little more care should be taken in the case of degeneracy.
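For a finite-dimensional system this is easy to see concretely. Here is a minimal sketch (the observable `A` and state `psi` are made up for illustration): diagonalizing the observable and projecting the state onto its eigenvectors yields the "wavefunction" $\psi(a) = \langle a|\psi\rangle$, whose squared moduli are the Born-rule probabilities.

```python
import numpy as np

# Hypothetical 3-level system: a Hermitian observable A and a state |psi>.
A = np.array([[1.0, 0.5, 0.0],
              [0.5, 2.0, 0.3],
              [0.0, 0.3, 3.0]])
psi = np.array([1.0, 1.0j, 0.5])
psi = psi / np.linalg.norm(psi)      # normalize the state

# Eigendecomposition of A: the columns of V are the eigenvectors |a>.
eigvals, V = np.linalg.eigh(A)

# "Wavefunction" in the A-basis: psi(a) = <a|psi> for each eigenvalue a.
psi_a = V.conj().T @ psi

# Born rule: |psi(a)|^2 are the measurement probabilities; they sum to 1
# because the eigenvectors form an orthonormal basis.
probs = np.abs(psi_a) ** 2
print(probs.sum())                   # -> 1.0 (up to rounding)
```

Note that the same state `psi` gives a different list of amplitudes for each choice of observable, which is exactly the point: the wavefunction is basis-dependent, the state vector is not.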


Notice that $\psi^*(a)\psi(a) = \langle\psi|a\rangle\langle a|\psi\rangle$, where $|a\rangle\langle a|$ is a projection operator to the corresponding eigenspace. This is exactly what the Born rule says.


For example, we could use a momentum-space wavefunction $\Phi(p,t) = \langle p|\psi\rangle$, just to use a different symbol, and it would be related to the position-space wavefunction $\Psi(x,t) = \langle x|\psi\rangle$ by: $$\begin{eqnarray*} \Phi(p,t) &=& \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^\infty e^{-ipx/\hbar}\Psi(x,t)\,\mathrm{d}x\\ \Psi(x,t) &=& \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^\infty e^{+ipx/\hbar}\Phi(p,t)\,\mathrm{d}p \end{eqnarray*}$$ i.e., Fourier transforms; cf. Griffiths, chapter 3. You can think of the first as doing an inner product with a momentum eigenstate in the position basis, and of the second as doing an inner product with a position eigenstate in the momentum basis.
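The first of these transforms can be checked numerically. The sketch below (the Gaussian wavepacket and mean momentum `p0` are illustrative choices, not from the text) discretizes the integral for $\Phi(p,t)$ on a grid and verifies that the resulting momentum-space distribution is normalized and peaked at the packet's mean momentum.

```python
import numpy as np

hbar = 1.0
x = np.linspace(-20, 20, 2048)
dx = x[1] - x[0]

# Gaussian wavepacket with mean momentum p0 (illustrative choice).
p0 = 2.0
Psi = np.exp(-x**2 / 2) * np.exp(1j * p0 * x / hbar)
Psi /= np.sqrt(np.sum(np.abs(Psi)**2) * dx)       # normalize in x

# Discretize Phi(p) = (1/sqrt(2*pi*hbar)) * integral e^{-ipx/hbar} Psi(x) dx
p = np.linspace(-10, 10, 1024)
dp = p[1] - p[0]
kernel = np.exp(-1j * np.outer(p, x) / hbar)
Phi = kernel @ Psi * dx / np.sqrt(2 * np.pi * hbar)

# Parseval: the momentum-space distribution is also normalized,
# and |Phi|^2 peaks near p0.
print(np.sum(np.abs(Phi)**2) * dp)   # ≈ 1
print(p[np.argmax(np.abs(Phi))])     # ≈ p0
```

A direct Riemann sum is used instead of an FFT to keep the correspondence with the continuous integral transparent; for real work one would use `numpy.fft` with the appropriate shifting and scaling.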



If so: is the time-dependence of this wavefunction always restricted by the Schrödinger Equation?



The time-evolution is given by the Schrödinger equation, yes. The Hamiltonian is an operator and is therefore not intrinsically tied to any particular basis. Of course, as a matter of practicality, some bases are nicer to work in than others.



In fact, although Griffiths does not emphasize this, when you are solving the Schrödinger equation in the situations typically encountered in that book, what you usually end up with is the state written in terms of the energy eigenstates, and hence you are writing its "energy-space" representation. This is just as above, with the observable being the Hamiltonian itself.
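Returning to the two-level example in the question, this is how the Schrödinger equation constrains f(0,t) and f(1,t) in practice. A minimal sketch (the Hamiltonian `H` here is a made-up 2x2 Hermitian matrix, not from the text): diagonalize $\hat H$, let each energy component pick up its phase $e^{-iE_n t/\hbar}$, and transform back to the original basis.

```python
import numpy as np

hbar = 1.0

# Hypothetical two-level Hamiltonian, not diagonal in the |0>, |1> basis.
H = np.array([[1.0, 0.5],
              [0.5, 2.0]])

# Initial amplitudes f(0,0) and f(1,0) in the chosen observable's basis.
c0 = np.array([1.0, 0.0], dtype=complex)

# Schrödinger equation i*hbar dc/dt = H c: in the energy eigenbasis each
# component just picks up a phase exp(-i E_n t / hbar); transform back
# to recover f(0,t) and f(1,t).
E, V = np.linalg.eigh(H)
t = 3.0
c_t = V @ (np.exp(-1j * E * t / hbar) * (V.conj().T @ c0))

# The evolution is unitary: |f(0,t)|^2 + |f(1,t)|^2 stays equal to 1.
print(np.sum(np.abs(c_t)**2))        # -> 1.0 (up to rounding)
```

If the chosen observable's basis happens to be the energy eigenbasis, `V` is the identity and each amplitude simply rotates in phase, which is the familiar "stationary states times $e^{-iE_nt/\hbar}$" picture from Griffiths.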





I didn't get this, though: " You can think of the first as doing an inner product with a momentum eigenstate in the position basis, and of the latter as doing an inner product with a position eigenstate in the momentum basis." Could you (or anyone else) explain that a little more? An inner product of what?



In the context of Griffiths' presentation, remember that he defines the inner product of two position-space functions $$\langle f(x)|g(x)\rangle = \int_D f(x)^*g(x)\,\mathrm{d}x\text{,}$$ where $D$ is the domain. So you can solve for the eigenfunctions of the momentum operator and (say) $D = (-\infty,+\infty)$: $$\begin{eqnarray*} \hat{p} F_p(x) = -i\hbar\frac{\mathrm{d}F_p(x)}{\mathrm{d}x} = pF_p(x) &\Longleftrightarrow& F_p(x) = \frac{1}{\sqrt{2\pi\hbar}}e^{+ipx/\hbar}\text{.}\end{eqnarray*}$$ Thus you can see that the first of the above Fourier transforms is simply performing an inner product $$\Phi(p,t) = \langle F_p(x)|\Psi(x,t)\rangle$$ of the momentum eigenfunction with the usual, position-space wavefunction $\Psi(x,t)$. In other words, it's an inner product of the momentum eigenstate $F_p(x)$ and the wavefunction $\Psi(x,t)$, with both written in position-space.


The other Fourier transform is just the reverse: written in terms of momentum, $\hat{x} = +i\hbar\frac{\mathrm{d}}{\mathrm{d}p}$ and the position eigenfunctions are $$\begin{eqnarray*} \hat{x}G_x(p) = +i\hbar\frac{\mathrm{d}G_x(p)}{\mathrm{d}p} = xG_x(p) &\Longleftrightarrow& G_x(p) = \frac{1}{\sqrt{2\pi\hbar}}e^{-ipx/\hbar}\text{.}\end{eqnarray*}$$


There's nothing special about the position-space definition of the inner product except that it's sometimes convenient. The inner product works just the same in any orthonormal basis. It's probably worthwhile to revisit a Euclidean analogy: if you're in $\mathrm{E}^n$ with some orthonormal basis $\{\hat{e}_k\}$, you can write any vector $\hat\psi$ in terms of its components in that basis using the dot product: $$\begin{eqnarray*}\hat\psi = \sum_k \hat{e}_k(\hat{e}_k\cdot\hat\psi) &\leadsto& |\psi\rangle = \sum_k |k\rangle\langle k|\psi\rangle = \sum_k \psi(k) |k\rangle\text{,}\end{eqnarray*}$$ where $\{|k\rangle\}$ is some orthonormal basis of our Hilbert space. And so an inner product of two states is: $$\langle\psi|\phi\rangle = \sum_k\langle\psi|k\rangle\langle k|\phi\rangle = \sum_k \psi(k)^*\phi(k)\text{.}$$ Use an integral instead where appropriate. Though you have to be careful to make sure that you span the whole Hilbert space if you're doing an inner product using eigenstates of some observable (even position, e.g., a particle with spin).
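The basis-independence of the inner product is easy to verify numerically. A small sketch (the basis and the two states are randomly generated for illustration): take the orthonormal eigenvectors of an arbitrary Hermitian matrix as $\{|k\rangle\}$ and check that $\sum_k \psi(k)^*\phi(k)$ reproduces the direct inner product.

```python
import numpy as np

# Any orthonormal basis works: take the eigenvectors of a random
# Hermitian matrix as {|k>} in a 4-dimensional Hilbert space.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
_, V = np.linalg.eigh(M + M.conj().T)        # columns = orthonormal |k>

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)

# Components psi(k) = <k|psi> and phi(k) = <k|phi>.
psi_k = V.conj().T @ psi
phi_k = V.conj().T @ phi

# <psi|phi> computed directly equals sum_k psi(k)* phi(k),
# because sum_k |k><k| is the identity (completeness).
direct = np.vdot(psi, phi)
via_basis = np.sum(psi_k.conj() * phi_k)
print(np.allclose(direct, via_basis))        # -> True
```

Dropping any of the basis vectors from the sum breaks the equality, which is the finite-dimensional version of the warning above about spanning the whole Hilbert space.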

