Thursday, August 27, 2015

vectors - Understanding Tensor-operations, covariance, contravariance, ... in the context of Special Relativity


I'm currently learning special relativity, but I'm having a really hard time grasping the tensor operations.


Let's take the Minkowski scalar product of two four-vectors:


$$\pmb U \cdot \pmb V = U^0V^0-U^1V^1-U^2V^2-U^3V^3$$


If we introduce the Minkowski metric $\eta_{\mu\nu}$, we can rewrite this as $$\pmb U \cdot \pmb V = U^\mu V^\nu \eta_{\mu\nu}$$


The following is also defined: $$V_\mu = \eta_{\mu\nu}V^\nu$$


Now we can apparently find that $\eta_{\mu\nu}\eta^{\nu\lambda}=\delta^\lambda_\mu$


Here's where I'm already a bit lost. I understand that, depending on how an object transforms, it is either covariant or contravariant, and that this determines whether its index is written at the top or the bottom. What I don't understand is how you find $\eta^{\nu\lambda}$ in the first place, and what the real significance of the switched index position is. I also don't see why the $\delta$ then has one index at the top AND one at the bottom.


What is the physical meaning of having the indices at the top or bottom?



Answer




I'll try to give a brief overview. Say you have a vector space $E$. Given a basis $\{\vec{e}_\mu\}$, a vector $\vec{V}$ can be decomposed as $\vec{V} = V^\mu \vec{e}_\mu$. The position of the indices is determined by the transformation law: if we switch to another basis $\{\vec{e}_{\mu'}\}$ given by $\vec{e}_\mu = \vec{e}_{\mu'} \Lambda^{\mu'}_{\;\mu}$, the components of $\vec{V}$ transform as $V^\mu = (\Lambda^{-1})^\mu_{\;\mu'} V^{\mu'}$. Because $V^\mu$ and $\vec{e}_\mu$ transform oppositely, the combination $\vec{V} = V^\mu\vec{e}_\mu$ is invariant.
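This opposite transformation of components and basis can be checked numerically. Here is a hypothetical sketch in NumPy (the boost speed and the vector components are made-up values, not from the question): the basis vectors pick up a factor of $\Lambda$, the components pick up $\Lambda^{-1}$ (equivalently, new components are $\Lambda$ times old ones), and the reconstructed vector $\vec{V} = V^\mu \vec{e}_\mu$ comes out the same either way.

```python
import numpy as np

# A Lorentz boost along x with beta = 0.6, so gamma = 1/sqrt(1 - 0.36) = 1.25
beta, gamma = 0.6, 1.25
L = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
              [-gamma*beta,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])  # Lambda^{mu'}_mu

E_old = np.eye(4)                       # columns are the old basis vectors e_mu
v_old = np.array([2.0, 1.0, 0.0, 0.0])  # components V^mu in the old basis

E_new = E_old @ np.linalg.inv(L)  # e_mu = e_{mu'} Lambda^{mu'}_mu  =>  E_new = E_old Lambda^{-1}
v_new = L @ v_old                 # components transform oppositely to the basis

# The vector itself, V^mu e_mu, is the same object in both bases:
assert np.allclose(E_new @ v_new, E_old @ v_old)
```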


Now consider the dual space $E^*$. This is defined as the space of linear functionals on $E$; that is, linear functions from $E$ to $\mathbb{R}$. This is a finite dimensional vector space of the same dimension as $E$. Given a basis $\{\vec{e}_\mu\}$ of $E$ there is a unique dual basis $\{\sigma^\mu\}$ of $E^*$ such that $\sigma^\mu (\vec{e}_\nu) = \delta^\mu_{\;\nu}$. Any functional (also called 1-form or covector or covariant vector) $\omega \in E^*$ can be written as $\omega = \omega_\mu \sigma^\mu$. Using this we get that $\omega(\vec{V}) = \omega_\mu V^\mu$. Again, the position of the indices is determined by the transformation; upper indices transform with one matrix, lower indices transform with the inverse. The notation is designed so that an upper index contracted with a lower index is invariant.
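The dual basis and the contraction $\omega(\vec{V}) = \omega_\mu V^\mu$ can also be made concrete. In the sketch below (a hypothetical two-dimensional example with a made-up, non-orthonormal basis), basis vectors are the columns of a matrix and the dual basis functionals are the rows of its inverse, which is exactly the statement $\sigma^\mu(\vec{e}_\nu) = \delta^\mu_{\;\nu}$:

```python
import numpy as np

# Hypothetical non-orthonormal basis of R^2; columns are the basis vectors e_mu
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])
Sigma = np.linalg.inv(E)                  # rows are the dual basis functionals sigma^mu
assert np.allclose(Sigma @ E, np.eye(2))  # sigma^mu(e_nu) = delta^mu_nu

v = np.array([3.0, 4.0])                  # components V^mu
vec = E @ v                               # the vector V = V^mu e_mu itself

omega_comp = np.array([2.0, -1.0])        # components omega_mu
omega = omega_comp @ Sigma                # the functional omega = omega_mu sigma^mu, as a row

# Evaluating the functional on the vector equals the index contraction:
assert np.isclose(omega @ vec, omega_comp @ v)  # omega(V) = omega_mu V^mu
```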


So far, "vectors" with indices up and down are completely different mathematical objects. But now say there is an inner product defined on $E$. Given a basis we can define the metric $\eta_{\mu\nu} = \vec{e}_\mu \cdot \vec{e}_\nu$, and you can show that this implies $\vec{U}\cdot\vec{V} = \eta_{\mu\nu}U^\mu V^\nu$. Its transformation law, which you can work out from the definition, is consistent with the placement of the indices. Now given a vector $\vec{W}$ we can define a functional $\omega$ by $\omega(\vec{V}) = \vec{W}\cdot \vec{V}$, and this determines a one-to-one correspondence (more precisely, an isomorphism) between $E$ and $E^*$. The components of $\omega$ are $\omega_\mu = \eta_{\mu\nu} W^\nu$.
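In the Minkowski case the metric is just $\mathrm{diag}(1,-1,-1,-1)$, and $\eta_{\mu\nu}U^\mu V^\nu$ reproduces the explicit sum from the question. A minimal numerical check (the four-vector components are made up for illustration):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)
U = np.array([1.0, 2.0, 0.0, 0.0])
V = np.array([3.0, 1.0, 0.0, 0.0])

# U . V = eta_{mu nu} U^mu V^nu, written as a matrix sandwich
dot = U @ eta @ V

# Agrees with the explicit sum U^0 V^0 - U^1 V^1 - U^2 V^2 - U^3 V^3
assert dot == U[0]*V[0] - U[1]*V[1] - U[2]*V[2] - U[3]*V[3]  # here 3 - 2 = 1
```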


Because of this correspondence, we usually call functionals "vectors with lower indices" and write $W_\mu = \eta_{\mu\nu} W^\nu$. But remember, $W_\mu$ are the components of a linear functional while $W^\mu$ are the components of a vector. It is only through the metric that one can be converted to the other. As I said, the position of the indices is determined by (or, if you prefer, consistent with) the transformation law. These transformation laws are a natural consequence of the vector space structure, not postulates.


Lastly, given a 1-form we can go back to a vector. Since we get $W_\mu$ by multiplying the n-tuple $W^\mu$ by the matrix $\eta_{\mu\nu}$, to go the other way we need the inverse of the metric. We denote this matrix by $\eta^{\mu\nu}$; $\eta_{\mu\nu}\eta^{\nu\lambda} = \delta^\lambda_{\;\mu}$ is just the statement that the matrices are inverses of each other. It can also be shown that the placement of the indices in $\eta^{\mu\nu}$ is consistent with its transformation law. Now we can write $W^\mu = \eta^{\mu\nu}W_\nu$. This is particularly easy in Minkowski space with inertial coordinates, because the metric is its own inverse.
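All three statements in this paragraph, that $\eta^{\mu\nu}$ is the matrix inverse of $\eta_{\mu\nu}$, that the Minkowski metric in inertial coordinates is its own inverse, and that raising undoes lowering, can be verified directly. A short sketch (the components of $W$ are arbitrary made-up numbers):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # eta_{mu nu}
eta_inv = np.linalg.inv(eta)            # eta^{mu nu}

# In inertial coordinates the Minkowski metric is its own inverse:
assert np.allclose(eta_inv, eta)

# eta_{mu nu} eta^{nu lambda} = delta^lambda_mu, i.e. the matrices are inverses:
assert np.allclose(eta @ eta_inv, np.eye(4))

W_up = np.array([5.0, 3.0, 2.0, 1.0])
W_down = eta @ W_up        # lower the index: W_mu = eta_{mu nu} W^nu
W_back = eta_inv @ W_down  # raise it again: W^mu = eta^{mu nu} W_nu
assert np.allclose(W_back, W_up)
```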


This is just the tip of the iceberg, but it should be enough to get you started. I haven't even mentioned tensors or given the transformation laws in full; I recommend you consult a book for that. I learned all this from Frankel's The Geometry of Physics, but it might be a little too general for your purposes. Once you more or less understand tensors, you can answer your question about $\delta^\mu_{\;\nu}$ yourself. $\delta^\mu_{\;\nu}$ is supposed to be the identity tensor, equal in all coordinate systems. Try to work out what would happen if you had $\delta_{\mu\nu}$. How does this transform? How does $\delta^\mu_{\;\nu}$?

