Background (just for context, you can skip it if you're familiar with Lorentz representations)
A Lorentz transformation can be represented by the matrix $M(\Lambda)=\exp\left(\tfrac{i}{2}\omega_{\mu\nu}J^{\mu\nu}\right)$, where the $J^{\mu\nu}$ are the 6 Lorentz generators, which satisfy the Lorentz commutator algebra. From these generators we can build both the boosts $K_i=J^{0i}$ and the rotations $J_i=\tfrac{1}{2}\epsilon_{ijk}J^{jk}$ (here $i,j=1,2,3$, while $\mu,\nu=0,1,2,3$).
In particular, we can form two independent linear combinations:
$$\vec J_I=\tfrac{1}{2}\left(\vec J-i\vec K\right),\qquad \vec J_D=\tfrac{1}{2}\left(\vec J+i\vec K\right)$$
These each satisfy the SU(2) algebra, $[J^i_{I,D},J^j_{I,D}]=i\epsilon^{ijk}J^k_{I,D}$, and moreover commute with one another.
This is extremely useful, as we can build any Lorentz representation knowing only how to represent SU(2). We know how to do this from QM courses: we can build the SU(2) matrices $\vec J^{[j]}$ of dimension $(2j+1)$ for any spin $j=0,1/2,1,\dots$ (i.e. for $j=0$, $\vec J^{[0]}=0$; for $j=1/2$, $\vec J^{[1/2]}=\vec\sigma/2$; and so on).
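For concreteness, here is a minimal numpy sketch (my own, not from Weinberg) of the usual ladder-operator construction of these matrices; the helper name `spin_matrices` is just mine:

```python
import numpy as np

def spin_matrices(j):
    """Spin-j generators (Jx, Jy, Jz) as (2j+1) x (2j+1) matrices,
    in the basis |j, m> ordered m = j, j-1, ..., -j."""
    dim = int(round(2 * j)) + 1
    m = j - np.arange(dim)                    # m values down the diagonal
    Jz = np.diag(m).astype(complex)
    Jp = np.zeros((dim, dim), dtype=complex)  # raising operator J_+
    for k in range(1, dim):                   # <m+1|J_+|m> = sqrt(j(j+1) - m(m+1))
        Jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    Jm = Jp.conj().T                          # lowering operator J_-
    return (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j), Jz

# quick checks: spin 1/2 reproduces sigma/2, and the su(2) algebra closes
Jx, Jy, Jz = spin_matrices(1 / 2)
print(np.allclose(2 * Jx, [[0, 1], [1, 0]]))        # 2*Jx = sigma_x
print(np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz))      # [Jx, Jy] = i Jz
```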
From the above, we can then express the rotation generators as $\vec J=\vec J_I+\vec J_D$. However, $\vec J_I$ and $\vec J_D$ can carry different spins $j_I$ and $j_D$, in general $j_I\neq j_D$, so the two sets of matrices will have different dimensions. We can fix this by taking "reducible" representations in which $\vec J_I$ has the irreducible block $\vec J^{[j_I]}$ repeated $(2j_D+1)$ times on its diagonal, and similarly for $\vec J_D$. Now both matrices have dimension $(2j_I+1)(2j_D+1)$ and can be written, with $l=(l_I,l_D)$, as:
$$(\vec J_I)_{l'l}=\vec J^{[j_I]}_{l'_I l_I}\,\delta_{l'_D l_D},\qquad (\vec J_D)_{l'l}=\vec J^{[j_D]}_{l'_D l_D}\,\delta_{l'_I l_I}$$
Therefore, the rotation generators are $\vec J=\vec J_I+\vec J_D$, so the Lorentz representation $(j_I,j_D)$, of dimension $(2j_I+1)(2j_D+1)$, is in general reducible with respect to the rotation subgroup and contains the total spins $|j_I-j_D|,\dots,j_I+j_D$ obtained by combining spins $j_I$ and $j_D$.
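As a concrete sanity check on this block structure (again my own numerics, continuing the sketch above), one can build $\vec J_I$ and $\vec J_D$ as Kronecker products with the identity, verify that they commute, and read off the rotation content from the eigenvalues of $\vec J^{\,2}$:

```python
import numpy as np
# uses spin_matrices() from the sketch above

jI, jD = 1 / 2, 1          # e.g. the (1/2, 1) representation
dI, dD = int(2 * jI + 1), int(2 * jD + 1)
JI_blocks = spin_matrices(jI)
JD_blocks = spin_matrices(jD)

# Block form on the (2jI+1)(2jD+1)-dim space, index l = (lI, lD) in row-major order
JI = [np.kron(A, np.eye(dD)) for A in JI_blocks]   # J^[jI] repeated (2jD+1) times
JD = [np.kron(np.eye(dI), B) for B in JD_blocks]   # J^[jD] repeated (2jI+1) times

# the two sets of generators commute with each other
print(all(np.allclose(A @ B, B @ A) for A in JI for B in JD))   # True

# rotation content: eigenvalues of J^2 are j(j+1) for j = |jI-jD|, ..., jI+jD
J = [A + B for A, B in zip(JI, JD)]
J2 = sum(Ji @ Ji for Ji in J)
evals = np.round(np.linalg.eigvalsh(J2), 6)
print(sorted(set(evals.tolist())))   # [0.75, 3.75]  ->  j = 1/2 and 3/2
```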
Problem
We can rebuild the Lorentz transformation $M(\Lambda)$ in terms of $\vec J_I$ and $\vec J_D$ (it's more convenient to write $M(\Lambda)=\exp[i\vec\theta\cdot\vec J+i\vec\alpha\cdot\vec K]$).
I want to show that M(Λ) decomposes into a product:
$$M_{l'l}=M^I_{l'_I l_I}\,M^D_{l'_D l_D}$$
I began by replacing $\vec J=\vec J_I+\vec J_D$ and $\vec K=i(\vec J_I-\vec J_D)$, and separating the exponential into two exponentials:
$$M(\Lambda)=\exp[i\vec\theta\cdot\vec J+i\vec\alpha\cdot\vec K]=\exp[\vec J_I\cdot(i\vec\theta-\vec\alpha)+\vec J_D\cdot(i\vec\theta+\vec\alpha)]$$
Since by definition $\vec J_I$ and $\vec J_D$ commute, we can indeed separate them (by the BCH theorem):
$$M(\Lambda)=\exp[\vec J_I\cdot(i\vec\theta-\vec\alpha)]\,\exp[\vec J_D\cdot(i\vec\theta+\vec\alpha)]$$
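This step only uses the fact that $e^{X+Y}=e^X e^Y$ when $[X,Y]=0$; here is a quick numerical check of it (my own, using scipy's `expm` and the block generators `JI`, `JD` built in the background sketch, with arbitrarily chosen parameters):

```python
import numpy as np
from scipy.linalg import expm
# JI, JD are the block generators built in the background sketch above

theta = np.array([0.3, -0.1, 0.7])   # arbitrary rotation parameters
alpha = np.array([0.2, 0.5, -0.4])   # arbitrary boost parameters

X = sum((1j * theta[k] - alpha[k]) * JI[k] for k in range(3))   # J_I . (i theta - alpha)
Y = sum((1j * theta[k] + alpha[k]) * JD[k] for k in range(3))   # J_D . (i theta + alpha)
print(np.allclose(expm(X + Y), expm(X) @ expm(Y)))              # True, since [X, Y] = 0
```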
Substituting the definitions of $\vec J_I$ and $\vec J_D$ in terms of $\vec J^{[j_I]}$ and $\vec J^{[j_D]}$, we get:
$$M(\Lambda)=\exp\left[\vec J^{[j_I]}_{l'_I l_I}\delta_{l'_D l_D}\cdot(i\vec\theta-\vec\alpha)\right]\exp\left[\vec J^{[j_D]}_{l'_D l_D}\delta_{l'_I l_I}\cdot(i\vec\theta+\vec\alpha)\right]$$
And defining:
$$M^I_{l'_I l_I}=\exp\left[\vec J^{[j_I]}_{l'_I l_I}\delta_{l'_D l_D}\cdot(i\vec\theta-\vec\alpha)\right];\qquad M^D_{l'_D l_D}=\exp\left[\vec J^{[j_D]}_{l'_D l_D}\delta_{l'_I l_I}\cdot(i\vec\theta+\vec\alpha)\right]$$
It seems the problem is complete, and both $M^I_{l'_I l_I}$ and $M^D_{l'_D l_D}$ are square matrices of dimension $(2j_I+1)(2j_D+1)$. However, in a later exercise where I use these matrices, for it to be solvable I need $M^I$ and $M^D$ to instead have dimensions $(2j_I+1)$ and $(2j_D+1)$ respectively! So either my definitions are incorrect and the matrices have the wrong dimensions, or the other exercise is wrong (or I'm interpreting it incorrectly).
Reference: S. Weinberg (1995), The Quantum Theory of Fields, Vol. I, p. 229.
Additional information:
In case it's relevant, the next exercise (which seems to contradict my results here) asks the following:
The vectors $\phi$ on which the matrices $M$ act have components $\phi_l=\phi_{(l_I,l_D)}$, so they can be thought of as rectangular matrices of size $(2j_I+1)\times(2j_D+1)$. We need to show that these "vectors" transform as $\phi\to M^I\phi\,(M^D)^T$.
Since I calculated that $M^I$ is a square matrix of size $(2j_I+1)(2j_D+1)$, and $\phi$ is a rectangular matrix of size $(2j_I+1)\times(2j_D+1)$, we can't even form the first product.
Progress
I read the Wikipedia article proposed in one of the comments, and while the notation is a bit complex, I managed to extract the most important information about the tensor products from there and other articles.
After redoing some algebra, let's start with the previous equation:
$$M(\Lambda)=\exp[\vec J_I\cdot(i\vec\theta-\vec\alpha)]\,\exp[\vec J_D\cdot(i\vec\theta+\vec\alpha)]$$
While it's true that the matrices $\vec J_I$ and $\vec J_D$ were redefined to have dimension $N=(2j_I+1)(2j_D+1)$, we're looking for a tensor-product decomposition of the matrix $M$. Instead of substituting directly, let's treat them as acting on two separate tensor factors, of dimensions $(2j_I+1)$ and $(2j_D+1)$. In this way, we're looking for a decomposition of the form:
$$M_{[(2j_I+1)(2j_D+1)]}=M^I_{[(2j_I+1)]}\otimes M^D_{[(2j_D+1)]}$$
Defining now:
$$M^I=\exp[\vec J^{[j_I]}\cdot(i\vec\theta-\vec\alpha)]$$
$$M^D=\exp[\vec J^{[j_D]}\cdot(i\vec\theta+\vec\alpha)]$$
We recover the tensor product as:
$$M=M^I\otimes M^D$$
So now $M^I$ has dimension $(2j_I+1)$ and $M^D$ has dimension $(2j_D+1)$. I checked this is correct from the third reference. However, this seems to be inconsistent, as they should actually be rectangular matrices of dimension $(2j_I+1)\times(2j_D+1)$, so there seems to be a problem still.
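For what it's worth, the factorization itself checks out numerically (my own check with scipy's `expm`, reusing `spin_matrices` and the parameters `theta`, `alpha` from the sketches above): the exponential of the full $N\times N$ generator really equals the Kronecker product of a $(2j_I+1)$-dimensional and a $(2j_D+1)$-dimensional exponential.

```python
import numpy as np
from scipy.linalg import expm
# spin_matrices() from the first sketch; theta, alpha as above

jI, jD = 1 / 2, 1
dI, dD = int(2 * jI + 1), int(2 * jD + 1)
JA, JB = spin_matrices(jI), spin_matrices(jD)    # small (2jI+1)- and (2jD+1)-dim blocks

cI = [1j * t - a for t, a in zip(theta, alpha)]  # i*theta - alpha
cD = [1j * t + a for t, a in zip(theta, alpha)]  # i*theta + alpha

# full N x N transformation built from the block (coproduct) generators
M = expm(sum(cI[k] * np.kron(JA[k], np.eye(dD)) +
             cD[k] * np.kron(np.eye(dI), JB[k]) for k in range(3)))

# small square factors, of dimensions (2jI+1) and (2jD+1)
MI = expm(sum(cI[k] * JA[k] for k in range(3)))
MD = expm(sum(cD[k] * JB[k] for k in range(3)))

print(np.allclose(M, np.kron(MI, MD)))           # True: M = M^I (x) M^D
```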
References
1. http://maths.dur.ac.uk/users/philipp.b.lampe/LorentzBadDriburg.pdf
2. http://www.math.mcgill.ca/walcher/phys580/PoincareFieldsMarcAntoine.pdf
3. http://www.int.washington.edu/users/dbkaplan/571_12/Lorentz.pdf
Answer
I think your issues would disappear if only you utilized your knowledge of Kronecker multiplication of angular momentum reps, which is what Weinberg assumes in his turgid abstract generalization.
To make things as simple as possible, choose by way of example the doublet representation, $j_I=1/2$, and the triplet representation, $j_D=1$, so their tensor product acts on a $2\times 3=6$-dimensional vector space and covers the Rarita-Schwinger rep $(1/2,1)$ you are apparently interested in.
The angular-momentum $6\times6$ square matrices acting on this 6d space are the celebrated coproducts,
$$\vec J=\vec J_I+\vec J_D=\vec J^{[1/2]}\otimes\mathbb{1}+\mathbb{1}\otimes\vec J^{[1]}.$$
Consequently,
$$M(\Lambda)=\exp[i\vec\theta\cdot\vec J+i\vec\alpha\cdot\vec K]=\exp[\vec J_I\cdot(i\vec\theta-\vec\alpha)+\vec J_D\cdot(i\vec\theta+\vec\alpha)]=\exp\left[\vec J^{[1/2]}\otimes\mathbb{1}\cdot(i\vec\theta-\vec\alpha)+\mathbb{1}\otimes\vec J^{[1]}\cdot(i\vec\theta+\vec\alpha)\right].$$
This trivially amounts to
$$\exp\left[\vec J^{[1/2]}\otimes\mathbb{1}\cdot(i\vec\theta-\vec\alpha)\right]\exp\left[\mathbb{1}\otimes\vec J^{[1]}\cdot(i\vec\theta+\vec\alpha)\right]=\left(\exp\left[\vec J^{[1/2]}\cdot(i\vec\theta-\vec\alpha)\right]\otimes\mathbb{1}\right)\left(\mathbb{1}\otimes\exp\left[\vec J^{[1]}\cdot(i\vec\theta+\vec\alpha)\right]\right)=\exp\left[\vec J^{[1/2]}\cdot(i\vec\theta-\vec\alpha)\right]\otimes\exp\left[\vec J^{[1]}\cdot(i\vec\theta+\vec\alpha)\right].$$
The 6d vectors these matrices act on tensor-resolve to $v\otimes w$. So, naturally, if you compact your notation by acting on a $2\times3$ rectangular matrix $\phi$, the posited left-right (transpose) multiplication obtains: the $2\times2$ factor $\exp[\vec J^{[1/2]}\cdot(i\vec\theta-\vec\alpha)]$ acts from the left on its 2d columns, while the $3\times3$ factor $\exp[\vec J^{[1]}\cdot(i\vec\theta+\vec\alpha)]$, transposed so as to act from the right, acts on its 3d rows.
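A short numerical illustration of that last statement (my own sketch, with numpy's row-major `reshape` so the flat index is $l=(l_I,l_D)$): the identity $(M^I\otimes M^D)\,\mathrm{vec}(\phi)=\mathrm{vec}\!\left(M^I\phi\,(M^D)^T\right)$ holds for arbitrary $2\times2$ and $3\times3$ matrices, so random ones suffice for the check, and it applies in particular to the Lorentz factors above.

```python
import numpy as np

rng = np.random.default_rng(0)
MI = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))   # any 2x2 M^I
MD = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))   # any 3x3 M^D
phi = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))  # components phi_{lI lD}

lhs = (np.kron(MI, MD) @ phi.reshape(-1)).reshape(2, 3)   # act on the flattened 6d vector
rhs = MI @ phi @ MD.T                                     # left-right (transpose) action
print(np.allclose(lhs, rhs))                              # True
```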