Sunday, August 9, 2015

homework and exercises - General Irreducible Representation of Lorentz Group


Background (just for context, you can skip it if you're familiar with Lorentz representations)



A Lorentz transformation can be represented by the matrix $M(\Lambda)=\exp(\frac{i}{2}\omega_{\mu\nu}J^{\mu\nu})$, where the $J^{\mu\nu}$ are the 6 independent Lorentz generators, which satisfy the Lorentz commutator algebra. From these generators we can form both the boosts $K^i=J^{0i}$ and the rotations $J^i=\epsilon^{ijk}J^{jk}/2$ (here $i,j=1,2,3$, while $\mu,\nu=0,1,2,3$).
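
For later use: in the conventions adopted here, this algebra in terms of $\vec{J}$ and $\vec{K}$ reads

$[J^i,J^j]=i\epsilon^{ijk}J^k \qquad [J^i,K^j]=i\epsilon^{ijk}K^k \qquad [K^i,K^j]=-i\epsilon^{ijk}J^k$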


In particular, we can form two independent linear combinations:


$\vec{J_I}=\frac{1}{2}(\vec{J}-i\vec{K}) \qquad \vec{J_D}=\frac{1}{2}(\vec{J}+i\vec{K})$


These satisfy two independent SU(2) algebras (i.e. $[J^i_{I,D},J^j_{I,D}]=i\epsilon^{ijk}J^k_{I,D}$), and moreover commute with each other: $[J^i_I,J^j_D]=0$.
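
Indeed, the mutual commutativity follows in one line from the $(\vec{J},\vec{K})$ algebra quoted above:

$[J^i_I,J^j_D]=\frac{1}{4}\left([J^i,J^j]+i[J^i,K^j]-i[K^i,J^j]+[K^i,K^j]\right)=\frac{1}{4}\left(i\epsilon^{ijk}J^k-\epsilon^{ijk}K^k+\epsilon^{ijk}K^k-i\epsilon^{ijk}J^k\right)=0$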


This is extremely useful, as it means we can build any Lorentz representation knowing only how to represent SU(2). We know how to do this from QM courses: for each spin $j=0,1/2,1,\dots$ we can build the SU(2) matrices $\vec{J}^{[j]}$ of dimension $(2j+1)$ (i.e. for $j=0$, $\vec{J}^{[0]}=0$; for $j=1/2$, $\vec{J}^{[1/2]}=\vec{\sigma}/2$; and so on).
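
As a sanity check, here is a minimal numpy sketch (the function name `spin_matrices` is my own, not from any reference) that builds the $\vec{J}^{[j]}$ for arbitrary $j$ from the standard ladder-operator matrix elements:

```python
import numpy as np

def spin_matrices(j):
    """Spin-j generators (J1, J2, J3) of dimension 2j+1, in the basis
    m = j, j-1, ..., -j, from <j,m+1|J+|j,m> = sqrt(j(j+1) - m(m+1))."""
    dim = int(2 * j + 1)
    m = j - np.arange(dim)                    # magnetic quantum numbers
    Jp = np.zeros((dim, dim), dtype=complex)  # raising operator J+
    Jp[np.arange(dim - 1), np.arange(1, dim)] = np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))
    J1 = (Jp + Jp.conj().T) / 2               # (J+ + J-)/2
    J2 = (Jp - Jp.conj().T) / (2 * 1j)        # (J+ - J-)/(2i)
    J3 = np.diag(m).astype(complex)
    return J1, J2, J3

# j = 1/2 reproduces the Pauli matrices over 2, and the su(2) algebra holds
J1, J2, J3 = spin_matrices(0.5)
assert np.allclose(2 * J1, [[0, 1], [1, 0]])
assert np.allclose(J1 @ J2 - J2 @ J1, 1j * J3)
```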


From the above, we can then express the rotation generators as $\vec{J}=\vec{J_I}+\vec{J_D}$. However, $\vec{J_I}$ and $\vec{J_D}$ can carry different spins $j_I$ and $j_D$, in general $j_I\neq j_D$, so the two matrices would have different dimensions. We can fix this by taking "reducible" representations, where $\vec{J_I}$ has the irreducible blocks $\vec{J}^{[j_I]}$ repeated $(2j_D+1)$ times on its diagonal, and similarly for $\vec{J_D}$. Now both matrices have dimension $(2j_I+1)(2j_D+1)$ and can be written, with $l=(l_I,l_D)$, as:


$(\vec{J_I})_{l'l}=\vec{J}^{[j_I]}_{l'_Il_I}\delta_{l'_Dl_D} \qquad (\vec{J_D})_{l'l}=\vec{J}^{[j_D]}_{l'_Dl_D}\delta_{l'_Il_I}$


Therefore the rotation generators are $\vec{J}=\vec{J_I}+\vec{J_D}$, and the Lorentz representation $(j_I,j_D)$ of dimension $(2j_I+1)(2j_D+1)$ is in general reducible with respect to the rotation subgroup: it contains the total spins $|j_I-j_D|,\dots,j_I+j_D$ obtained by combining spins $j_I$ and $j_D$.
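
In index-free form this block repetition is just a Kronecker product, $\vec{J_I}=\vec{J}^{[j_I]}\otimes 1$ and $\vec{J_D}=1\otimes\vec{J}^{[j_D]}$ (the answer below uses the same identification). A small numpy check, taking $(j_I,j_D)=(1/2,1)$ with the spin matrices written out explicitly:

```python
import numpy as np

# Spin-1/2 and spin-1 generators, in the basis m = j, ..., -j
half = [np.array(g, dtype=complex) / 2 for g in
        ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
r = 1 / np.sqrt(2)
one = [np.array(g, dtype=complex) for g in
       ([[0, r, 0], [r, 0, r], [0, r, 0]],
        [[0, -1j * r, 0], [1j * r, 0, -1j * r], [0, 1j * r, 0]],
        [[1, 0, 0], [0, 0, 0], [0, 0, -1]])]

I2, I3 = np.eye(2), np.eye(3)
JI = [np.kron(g, I3) for g in half]  # (J_I)_{l'l} = J^[1/2]_{l'I lI} delta_{l'D lD}
JD = [np.kron(I2, g) for g in one]   # (J_D)_{l'l} = J^[1]_{l'D lD} delta_{l'I lI}

# The two sets commute, and each still satisfies the su(2) algebra
for a in range(3):
    for b in range(3):
        assert np.allclose(JI[a] @ JD[b], JD[b] @ JI[a])
assert np.allclose(JI[0] @ JI[1] - JI[1] @ JI[0], 1j * JI[2])
assert np.allclose(JD[0] @ JD[1] - JD[1] @ JD[0], 1j * JD[2])
```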


Problem


We can rebuild the Lorentz transformation $M(\Lambda)$ in terms of $\vec{J_I}$ and $\vec{J_D}$ (it's more convenient to write $M(\Lambda)=\exp[i\vec{\theta}\cdot\vec{J}+i\vec{\alpha}\cdot\vec{K}]$).



I want to show that $M(\Lambda)$ decomposes into a product:


$M_{l'l}=M^I_{l'_Il_I}M^D_{l'_Dl_D}$


I began by substituting $\vec{J}=\vec{J_I}+\vec{J_D}$ and $\vec{K}=i(\vec{J_I}-\vec{J_D})$ and separating the exponential into two exponentials:


$M(\Lambda)=\exp[i\vec{\theta}\cdot\vec{J}+i\vec{\alpha}\cdot\vec{K}]=\exp[\vec{J_I}\cdot(i\vec{\theta}-\vec{\alpha})+\vec{J_D}\cdot(i\vec{\theta}+\vec{\alpha})]$


Since by definition $\vec{J_I}$ and $\vec{J_D}$ commute, we can indeed factorize the exponential (by the Baker-Campbell-Hausdorff theorem):


$M(\Lambda)=\exp[\vec{J_I}\cdot(i\vec{\theta}-\vec{\alpha})]\exp[\vec{J_D}\cdot(i\vec{\theta}+\vec{\alpha})]$
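
This factorization is easy to spot-check numerically; a minimal sketch (numpy/scipy assumed, taking $j_I=j_D=1/2$ so both spin matrices are just Pauli matrices over two):

```python
import numpy as np
from scipy.linalg import expm

S = [np.array(g, dtype=complex) / 2 for g in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
I2 = np.eye(2)
JI = [np.kron(g, I2) for g in S]  # J_I = J^[1/2] (x) 1
JD = [np.kron(I2, g) for g in S]  # J_D = 1 (x) J^[1/2]

rng = np.random.default_rng(0)
theta, alpha = rng.normal(size=3), rng.normal(size=3)
A = sum((1j * t - a) * g for t, a, g in zip(theta, alpha, JI))
B = sum((1j * t + a) * g for t, a, g in zip(theta, alpha, JD))

assert np.allclose(A @ B, B @ A)                    # [A, B] = 0 ...
assert np.allclose(expm(A + B), expm(A) @ expm(B))  # ... so exp(A+B) = exp(A)exp(B)
```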


Substituting the definitions of $J_I$ and $J_D$ in terms of $J^{[j_I]}$ and $J^{[j_D]}$, we get:


$M(\Lambda)=\exp[\vec{J}^{[j_I]}_{l'_Il_I}\delta_{l'_Dl_D}\cdot(i\vec{\theta}-\vec{\alpha})]\exp[\vec{J}^{[j_D]}_{l'_Dl_D}\delta_{l'_Il_I}\cdot(i\vec{\theta}+\vec{\alpha})]$


And defining:


$M^I_{l'_Il_I}=\exp[\vec{J}^{[j_I]}_{l'_Il_I}\delta_{l'_Dl_D}\cdot(i\vec{\theta}-\vec{\alpha})] \qquad M^D_{l'_Dl_D}=\exp[\vec{J}^{[j_D]}_{l'_Dl_D}\delta_{l'_Il_I}\cdot(i\vec{\theta}+\vec{\alpha})]$



It seems the problem is complete, and both $M^I$ and $M^D$ are square matrices of dimension $(2j_I+1)(2j_D+1)$. However, in a later exercise where I use these matrices, for it to be solvable I need $M^I$ and $M^D$ to instead have dimensions $(2j_I+1)$ and $(2j_D+1)$ respectively! So either my definitions are incorrect and the matrices have the wrong dimensions, or the other exercise is wrong (or I'm interpreting it incorrectly).


Reference: S. Weinberg (1995), The Quantum Theory of Fields, Vol. I, p. 229




Additional information:


In case it's relevant, the next exercise (which seems to contradict my results here) asks the following:


The vectors $\phi$ on which the matrices $M$ act have components $\phi_l=\phi(l_I,l_D)$, so they can be thought of as rectangular $(2j_I+1)\times(2j_D+1)$ matrices. We need to show that these "vectors" transform as $\phi\rightarrow M^I\phi\,(M^D)^T$.


Since I calculated that $M^I$ is a square matrix of size $(2j_I+1)(2j_D+1)$ while $\phi$ is a rectangular matrix of size $(2j_I+1)\times(2j_D+1)$, the product $M^I\phi$ is not even defined.


Progress


I read the Wikipedia article proposed in one of the comments, and while the notation is a bit complex, I managed to extract the most important information about the tensor products from there and other articles.



After redoing some algebra, let me start again from the equation derived earlier:


$M(\Lambda)=\exp[\vec{J_I}\cdot(i\vec{\theta}-\vec{\alpha})]\exp[\vec{J_D}\cdot(i\vec{\theta}+\vec{\alpha})]$


While it's true that the matrices $\vec{J_I}$ and $\vec{J_D}$ were redefined to have dimension $N=(2j_I+1)(2j_D+1)$, what we're looking for is a tensor-product decomposition of the matrix $M$. Instead of substituting directly, let's treat the two factors as separate tensor spaces of dimensions $(2j_I+1)$ and $(2j_D+1)$. In this way, we're looking for a decomposition of the form (dimensions in square brackets):


$M[(2j_I+1)(2j_D+1)]=M^I[(2j_I+1)]\otimes M^D[(2j_D+1)]$


Defining now:


$M^I=\exp[\vec{J}^{[j_I]}\cdot(i\vec{\theta}-\vec{\alpha})]$


$M^D=\exp[\vec{J}^{[j_D]}\cdot(i\vec{\theta}+\vec{\alpha})]$


We recover the tensor product as:


$M=M^I\otimes M^D$


So now $M^I$ is a square matrix of dimension $(2j_I+1)$ and $M^D$ one of dimension $(2j_D+1)$; I checked this against the third reference. However, this seems inconsistent with the next exercise, which I read as requiring rectangular matrices of dimension $(2j_I+1)\times(2j_D+1)$, so there seems to be a problem still.
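
The dimension bookkeeping can be verified directly; a sketch (numpy/scipy assumed) for the simplest case $j_I=j_D=1/2$, where $M^I$ and $M^D$ are 2×2 and their Kronecker product reproduces the full 4×4 matrix built from the big block generators:

```python
import numpy as np
from scipy.linalg import expm

S = [np.array(g, dtype=complex) / 2 for g in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
I2 = np.eye(2)

theta = np.array([0.3, -0.7, 0.2])
alpha = np.array([0.5, 0.1, -0.4])
MI = expm(sum((1j * t - a) * g for t, a, g in zip(theta, alpha, S)))  # (2j_I+1) = 2
MD = expm(sum((1j * t + a) * g for t, a, g in zip(theta, alpha, S)))  # (2j_D+1) = 2

# Full matrix from the N x N (Kronecker-block) generators of the question
M = expm(sum((1j * t - a) * np.kron(g, I2) + (1j * t + a) * np.kron(I2, g)
             for t, a, g in zip(theta, alpha, S)))
assert MI.shape == (2, 2) and MD.shape == (2, 2) and M.shape == (4, 4)
assert np.allclose(M, np.kron(MI, MD))  # M = M^I (x) M^D
```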



References


http://maths.dur.ac.uk/users/philipp.b.lampe/LorentzBadDriburg.pdf

http://www.math.mcgill.ca/walcher/phys580/PoincareFieldsMarcAntoine.pdf

http://www.int.washington.edu/users/dbkaplan/571_12/Lorentz.pdf



Answer



I think your issues would disappear if only you utilized your knowledge of Kronecker multiplication of angular momentum reps, which is what Weinberg assumes in his turgid abstract generalization.


To make things as simple as possible, choose by way of example the doublet representation, $\color{green}{j^{I}}=1/2$ and the triplet representation, $\color{blue}{j^{D}}=1$, so their tensor product acts on a 2×3=6 dimensional vector space, and covers the Rarita-Schwinger rep (1/2,1) you are apparently interested in.


The angular momentum 6×6 square matrices acting on this 6d space are the celebrated coproducts, $$\vec{J}=\vec{J_I}+\vec{J_D} = \color{green}{ \vec{J^{[1/2]}}} \otimes \color{blue}{1\!\! 1} + \color{green}{1\!\! 1} \otimes \color{blue}{ \vec{J^{[1]}}} .$$ The direct product can be visualized, e.g., by sticking right tensor factor matrices into every numerical entry of the left factor matrix, keeping that numerical entry as a coefficient. (Convince yourself you appreciate this with the diagonal $J_3$ matrix, diag(3/2,1/2,-1/2,1/2,-1/2,-3/2), cf. problem 4 in these notes.)
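
For instance, the $J_3$ coproduct can be checked in a few lines of numpy (diagonal entries suffice):

```python
import numpy as np

J3_half = np.diag([0.5, -0.5])       # spin-1/2 J3
J3_one  = np.diag([1.0, 0.0, -1.0])  # spin-1   J3
J3 = np.kron(J3_half, np.eye(3)) + np.kron(np.eye(2), J3_one)
print(np.diag(J3))  # [ 1.5  0.5 -0.5  0.5 -0.5 -1.5]
```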


Consequently, $$M(\Lambda)=\exp[i\vec{\theta}\cdot\vec{J}+i\vec{\alpha}\cdot\vec{K}] \\ =\exp[\vec{J_I}\cdot(i\vec{\theta}-\vec{\alpha})+\vec{J_D}\cdot(i\vec{\theta}+\vec{\alpha})] \\ =\exp[ \color{green}{ \vec{J^{[1/2]}}} \otimes \color{blue}{1\!\! 1}\cdot(i\vec{\theta}-\vec{\alpha})+ \color{green}{1\!\! 1} \otimes \color{blue}{ \vec{J^{[1]}}}\cdot(i\vec{\theta}+\vec{\alpha})]. $$


This trivially amounts to $$ \exp[ \color{green}{ \vec{J^{[1/2]}}} \otimes \color{blue}{1\!\! 1}\cdot(i\vec{\theta}-\vec{\alpha})]~~\exp [ \color{green}{1\!\! 1} \otimes \color{blue}{ \vec{J^{[1]}}}\cdot(i\vec{\theta}+\vec{\alpha})] \\ = \left ( \exp[ \color{green}{ \vec{J^{[1/2]}}} \cdot(i\vec{\theta}-\vec{\alpha})] \otimes \color{blue}{1\!\! 1} \right )\left ( \color{green}{1\!\! 1} \otimes \exp[ \color{blue}{ \vec{J^{[1]}}}\cdot(i\vec{\theta}+\vec{\alpha})] \right ) \\ = \exp[ \color{green}{ \vec{J^{[1/2]}}} \cdot(i\vec{\theta}-\vec{\alpha})]\otimes \exp[ \color{blue}{ \vec{J^{[1]}}}\cdot(i\vec{\theta}+\vec{\alpha})], $$ so, then, a square 6×6 matrix, tensored out of a (green) 2×2 one and a (blue) 3×3 one. (You tried to evoke this by exponentiating matrix elements, which is largely meaningless, however.)
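
Both identities used in this chain, $e^{A\otimes 1}=e^A\otimes 1$ and $(A\otimes 1)(1\otimes B)=A\otimes B$, can be spot-checked with random matrices; a quick numerical sketch:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # "green" 2x2
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # "blue"  3x3
I2, I3 = np.eye(2), np.eye(3)

assert np.allclose(expm(np.kron(A, I3)), np.kron(expm(A), I3))      # exp(A (x) 1) = exp(A) (x) 1
assert np.allclose(np.kron(A, I3) @ np.kron(I2, B), np.kron(A, B))  # mixed-product property
assert np.allclose(expm(np.kron(A, I3) + np.kron(I2, B)),
                   np.kron(expm(A), expm(B)))                       # full factorization
```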


The 6d vectors these matrices act on tensor-resolve to $\color{green}{ v}\otimes \color{blue}{w} $. So, naturally, if you compacted your notation by acting on a 2×3 rectangular matrix $\color{magenta}{\phi}$, the posited left-right (transpose) multiplication obtains, since green matrices act on the 2d columns thereof, and blue ones, transposed to act on their left, on its 3d rows.
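
Concretely, this is the standard vec/Kronecker identity $(M^I\otimes M^D)\,\mathrm{vec}(\phi)=\mathrm{vec}(M^I\phi\,(M^D)^T)$, valid with row-major flattening of $\phi(l_I,l_D)$ into a 6-vector; a minimal numpy sketch with random stand-in matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
MI  = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # acts on the l_I index
MD  = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # acts on the l_D index
phi = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))  # components phi(l_I, l_D)

lhs = (np.kron(MI, MD) @ phi.reshape(-1)).reshape(2, 3)  # act on the 6d vector
rhs = MI @ phi @ MD.T                                    # act on the 2x3 matrix
assert np.allclose(lhs, rhs)
```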

