I'm starting with quantum mechanics, and the book I follow (Griffiths) first introduces the wavefunction via the probability density of the position of a single spin-0 particle. Later on I realized that there is a bigger picture: the state of a system can be expressed as a superposition, i.e. a linear combination of the vectors of a particular basis (depending on the observable we choose), with a (possibly continuous) set of coefficients that are the probability amplitudes of the eigenvalues. If the chosen observable happens to be position, we obtain the wavefunction.
So now I'm wondering the following: is this the definition of a wavefunction, or is there a wavefunction for every observable, this wavefunction being just a map from the possible values of the observable to their probability amplitudes?
If so: is the time-dependence of this wavefunction always restricted by the Schrödinger equation? For example, take a 2-dimensional system with eigenstates |0⟩ and |1⟩ of a certain observable, with respective eigenvalues 0 and 1. We have the superposed state f(0,t)|0⟩ + f(1,t)|1⟩, where f(state, time) is the "wavefunction" that assigns a probability amplitude to each eigenstate. Does this f have to satisfy the Schrödinger equation, or other constraints?
Answer
"So now I'm wondering the following: is this the definition of a wavefunction, or is there a wavefunction for every observable, this wavefunction being just a map from the possible values of the observable to their probability amplitudes?"
There is indeed a wavefunction for every observable. The state |ψ⟩ is a vector in a complex Hilbert space. The wavefunction consists of its components along the eigenstates of some arbitrary observable Â:

ψ(a) = ⟨a|ψ⟩.
Notice that ψ*(a)ψ(a) = ⟨ψ|a⟩⟨a|ψ⟩, where |a⟩⟨a| is the projection operator onto the corresponding eigenspace. This is exactly what the Born rule says: |ψ(a)|² is the probability (or probability density, for a continuous spectrum) of measuring the value a.
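As a quick numerical illustration (a sketch of my own, not from the answer above): in a finite-dimensional Hilbert space you can compute the amplitudes ⟨a|ψ⟩ explicitly and check that the Born-rule probabilities sum to 1. All the specific numbers here are arbitrary.

```python
import numpy as np

# Sketch of the Born rule in a finite-dimensional Hilbert space.
# |psi> is a complex vector; the columns of `basis` are the orthonormal
# eigenvectors |a> of some Hermitian observable A-hat.
rng = np.random.default_rng(0)

# Random normalized state in C^3 (illustrative choice)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)

# Random Hermitian observable and its orthonormal eigenbasis
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = A + A.conj().T
eigvals, basis = np.linalg.eigh(A)

amplitudes = basis.conj().T @ psi   # psi(a) = <a|psi>
probs = np.abs(amplitudes) ** 2     # psi*(a) psi(a)

print(probs.sum())                  # probabilities sum to 1
```

Because the eigenbasis is orthonormal, switching the basis is a unitary change of components, so the total probability is 1 in any observable's basis.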
For example, we could use a momentum-space wavefunction Φ(p) = ⟨p|ψ⟩, just to use a different symbol, and it would be related to the position-space wavefunction Ψ(x) = ⟨x|ψ⟩ by

Φ(p,t) = (1/√(2πℏ)) ∫₋∞^∞ e^(−ipx/ℏ) Ψ(x,t) dx,
Ψ(x,t) = (1/√(2πℏ)) ∫₋∞^∞ e^(+ipx/ℏ) Φ(p,t) dp.
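The transform pair can be checked numerically. Here is a minimal sketch (my own example, with ℏ = 1 and an arbitrary Gaussian wavepacket) verifying the first integral against the known closed-form momentum-space Gaussian:

```python
import numpy as np

# Check Phi(p) = (1/sqrt(2*pi*hbar)) * integral of e^{-ipx/hbar} Psi(x) dx
# for a Gaussian, whose momentum-space form is known exactly.
# All numbers (hbar = 1, sigma, grid sizes) are illustrative choices.
hbar = 1.0
sigma = 1.3

x = np.linspace(-30.0, 30.0, 8001)
dx = x[1] - x[0]
psi_x = (np.pi * sigma**2) ** -0.25 * np.exp(-(x**2) / (2 * sigma**2))

p = np.linspace(-3.0, 3.0, 61)
kernel = np.exp(-1j * np.outer(p, x) / hbar)       # e^{-ipx/hbar}
phi_num = kernel @ psi_x * dx / np.sqrt(2 * np.pi * hbar)

# Closed-form Fourier transform of the Gaussian wavepacket
phi_exact = (sigma**2 / (np.pi * hbar**2)) ** 0.25 * np.exp(
    -(sigma**2) * p**2 / (2 * hbar**2)
)
print(np.max(np.abs(phi_num - phi_exact)))         # tiny (quadrature error)
```

The quadrature error is negligible here because the Gaussian decays far inside the integration window.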
"If so: is the time-dependence of this wavefunction always restricted by the Schrödinger Equation?"
The time-evolution is given by the Schrödinger equation, yes. The Hamiltonian is an operator and is therefore not intrinsically tied to any particular basis. Of course, as a matter of practicality, some bases are nicer to work in than others.
In fact, although Griffiths does not emphasize this, when you are solving the Schrödinger equation in the situations typically encountered in that book, what you usually end up with is the state written in terms of the energy eigenstates, and hence you are writing its "energy-space" representation. This is just as above, with the observable being the Hamiltonian itself.
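To make this concrete with the two-level example from the question (a sketch of my own, with ℏ = 1 and illustrative numbers): if |0⟩ and |1⟩ are energy eigenstates with energies E₀ and E₁, the Schrödinger equation in that basis reduces to iℏ ∂f(n,t)/∂t = Eₙ f(n,t), so each amplitude just picks up a phase.

```python
import numpy as np

# Two-level system in the energy eigenbasis, hbar = 1 (illustrative values).
# The Schrodinger equation i * df/dt = E_n * f(n, t) forces
# f(n, t) = f(n, 0) * exp(-i * E_n * t).
E = np.array([0.0, 1.0])                  # energies of |0> and |1>
f0 = np.array([0.6, 0.8], dtype=complex)  # initial amplitudes, sum |f|^2 = 1

t = 2.5
f_t = f0 * np.exp(-1j * E * t)

# Verify i * df/dt = E * f with a central-difference derivative
dt = 1e-6
fdot = (f0 * np.exp(-1j * E * (t + dt)) - f0 * np.exp(-1j * E * (t - dt))) / (2 * dt)
print(np.allclose(1j * fdot, E * f_t))    # True

# Measurement probabilities are constant in time; only phases evolve
print(np.abs(f_t) ** 2)                   # [0.36, 0.64]
```

Because the basis states are energy eigenstates, the probabilities |f(n,t)|² never change; in a basis that does not diagonalize the Hamiltonian, the amplitudes would mix and the probabilities would oscillate.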
I didn't get this, though: " You can think of the first as doing an inner product with a momentum eigenstate in the position basis, and of the latter as doing an inner product with a position eigenstate in the momentum basis." Could you (or anyone else) explain that a little more? An inner product of what?
In the context of Griffiths' presentation, remember that he defines the inner product of two position-space functions as

⟨f|g⟩ = ∫_D f(x)* g(x) dx.

In the position basis the momentum operator is p̂ = −iℏ d/dx, and its eigenfunctions satisfy

p̂ f_p(x) = p f_p(x) ⟺ f_p(x) = (1/√(2πℏ)) e^(+ipx/ℏ).

Taking the inner product of Ψ with such an eigenfunction, ⟨f_p|Ψ⟩ = (1/√(2πℏ)) ∫ e^(−ipx/ℏ) Ψ(x,t) dx, is exactly the first Fourier transform: it gives Φ(p,t).
The other Fourier transform is just the reverse: written in terms of momentum, x̂ = +iℏ d/dp, and the position eigenfunctions are

x̂ G_x(p) = x G_x(p) ⟺ G_x(p) = (1/√(2πℏ)) e^(−ipx/ℏ).
There's nothing special about the position-space definition of the inner product except that it's sometimes convenient; the inner product works just the same in any orthonormal basis. It's probably worthwhile to revisit a Euclidean analogy: if you're in Eⁿ with some orthonormal basis {ê_k}, you can write any vector ψ in terms of its components in that basis using the dot product:

ψ = Σ_k ê_k (ê_k · ψ) ⇝ |ψ⟩ = Σ_k |k⟩⟨k|ψ⟩ = Σ_k ψ(k)|k⟩.
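The analogy can be checked numerically (a small sketch of my own, using an arbitrary orthonormal basis of R³):

```python
import numpy as np

# Euclidean analogy: expand a vector in an orthonormal basis via dot
# products, mirroring |psi> = sum_k |k><k|psi>. Arbitrary example data.
rng = np.random.default_rng(1)
v = rng.normal(size=3)

# Orthonormal basis {e_k} of R^3 from a QR decomposition (illustrative)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

components = Q.T @ v             # "wavefunction" psi(k) = e_k . v
reconstructed = Q @ components   # sum_k psi(k) e_k

print(np.allclose(reconstructed, v))  # True
```

The columns of Q play the role of the basis kets |k⟩, and the array `components` is the "wavefunction" in that basis.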