
symmetry - Why is the Fourier transform more useful than the Hartley transform in physics?


The Hartley transform is defined as $$ H(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(t) \, \mbox{cas}(\omega t) \mathrm{d}t, $$ with $\mbox{cas}(\omega t) = \cos(\omega t) + \sin(\omega t)$.


The Fourier transform, on the other hand, is defined very similarly: $$ F(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(t) \, \mbox{exp}(i \omega t) \mathrm{d}t, $$ with $\mbox{exp}(i \omega t) = \cos(\omega t) + i \sin(\omega t)$.


But although the Fourier transform requires complex numbers, it is used much more in physics than the Hartley transform. Why is that? Are there any properties that make the Fourier transform more "physical"? Or what is the advantage of the Fourier transform over the Hartley transform?




Answer




The Fourier transform is special because the complex exponential functions are eigenfunctions of the translation operator. In any linear problem with translation invariance, the Fourier transform turns a differential equation into an algebraic one. A simple example is a driven damped harmonic oscillator. The equation of motion is $$\ddot{\phi}(t) + 2\beta\dot{\phi}(t) + \omega_0^2 \phi(t) = J(t)$$ where $\beta$ is the damping parameter, $\omega_0$ is the natural resonance frequency, and $J(t)$ is the drive. This problem is invariant under translation in time.


This is a differential equation, which means it's pretty complicated; it involves limits and all kinds of complex calculus stuff. If we use $$\phi(t) = \int \frac{d\omega}{2\pi} \tilde{\phi}(\omega) e^{i \omega t}$$ we get a new version of the equation of motion in the frequency basis: \begin{align} (-\omega^2 + i 2 \beta \omega + \omega_0^2)\tilde{\phi}(\omega) =& \tilde{J}(\omega) \\ \tilde{\phi}(\omega) =& \frac{-\tilde{J}(\omega)}{\omega^2 - i 2 \beta \omega - \omega_0^2} \, . \end{align} This is way simpler: we now have an algebraic equation instead of a differential equation. In particular, when $J(t)$ is sinusoidal then $\tilde{J}(\omega)$ is a pair of delta functions and we can actually just solve for $\phi(t)$. For example, if $J(t) = A \cos(\Omega t)$, then $\tilde{J}(\omega) = \pi A \left( \delta(\omega - \Omega) + \delta(\omega + \Omega) \right)$; the two delta terms give complex-conjugate contributions to the $\omega$ integral, so we find $$\phi(t) = \text{Re} \left[ \frac{-A e^{i \Omega t}}{\Omega^2 - i 2 \beta \Omega - \omega_0^2} \right] \, .$$
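To see this agreement numerically, here is a minimal sketch (not part of the original answer; it assumes numpy and scipy are available, and the parameter values are arbitrary illustrations) that integrates the equation of motion directly and compares the late-time behavior to the steady-state formula above:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Arbitrary illustrative parameters: damping, natural frequency, drive.
beta, w0, A, Om = 0.3, 2.0, 1.0, 1.5

# Integrate phi'' + 2*beta*phi' + w0^2*phi = A*cos(Om*t), starting from rest.
rhs = lambda t, y: [y[1], A*np.cos(Om*t) - 2*beta*y[1] - w0**2*y[0]]
t = np.linspace(0, 60, 6000)
sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-10, atol=1e-12)

# Steady-state prediction from the frequency-domain solution.
phi_ss = np.real(-A * np.exp(1j*Om*t) / (Om**2 - 2j*beta*Om - w0**2))

# Once the transient (which decays like exp(-beta*t)) has died out,
# the direct integration should match the Fourier-domain answer.
late = t > 40
print(np.max(np.abs(sol.y[0][late] - phi_ss[late])))  # small, ~1e-5 or less
```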


The mathematical reason this was all so easy is that the exponential functions $\exp(i \omega t)$ are eigenvectors of the derivative: $(d/dt)\exp(i \omega t) = i \omega \exp(i \omega t)$. We could have written all of this stuff in vector notation and it would make a few conceptual issues super-duper clear, but that's a topic for another post (on Math.SE, probably).
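As a quick numerical illustration of that eigenvalue property (a sketch assuming a smooth periodic signal and numpy's FFT conventions): differentiation in the Fourier basis is just multiplication by $i\omega$.

```python
import numpy as np

# Smooth periodic test signal on [0, 2*pi).
N = 256
t = np.linspace(0, 2*np.pi, N, endpoint=False)
f = np.exp(np.sin(t))

# d/dt in the Fourier basis: multiply each mode by i*omega, transform back.
omega = 2*np.pi * np.fft.fftfreq(N, d=t[1] - t[0])
df = np.fft.ifft(1j * omega * np.fft.fft(f)).real

# Compare with the exact derivative cos(t)*exp(sin(t)).
print(np.max(np.abs(df - np.cos(t)*f)))  # ~1e-12 (spectral accuracy)
```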



The Fourier transform actually contains redundant information if the original function is purely real valued. If $\phi(t) \in \mathbb{R}$, then it turns out that the negative frequency components in the Fourier transform are just the complex conjugates of the positive parts, i.e. $\tilde{\phi}(-\omega) = \tilde{\phi}(\omega)^*$. Because of this, we can rewrite a Fourier representation of a real signal in terms of only positive frequency parts: \begin{align} \phi(t) =& \int_{-\infty}^\infty \frac{d\omega}{2\pi} \tilde{\phi}(\omega)e^{i \omega t} \\ =& \int_0^\infty \frac{d\omega}{2\pi} \tilde{\phi}(\omega)e^{i \omega t} + \int_{-\infty}^0 \frac{d\omega}{2\pi} \tilde{\phi}(\omega)e^{i \omega t} \\ =& \int_0^\infty \frac{d\omega}{2\pi} \tilde{\phi}(\omega)e^{i \omega t} + \int_0^\infty \frac{d\omega}{2\pi} \tilde{\phi}(-\omega)e^{-i \omega t} \\ =& 2 \text{Re} \left[ \int_0^\infty \frac{d\omega}{2\pi} \tilde{\phi}(\omega) e^{i \omega t}\right] \, . \end{align}
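Here is a short sketch of this redundancy with numpy's FFT (the test signal is arbitrary); `rfft`/`irfft` are numpy's built-in transforms that keep only the non-negative frequencies of a real signal:

```python
import numpy as np

x = np.random.default_rng(0).standard_normal(128)  # a real-valued signal
X = np.fft.fft(x)

# Negative-frequency bins are complex conjugates of the positive ones.
print(np.allclose(X[1:], np.conj(X[-1:0:-1])))  # True

# So half the spectrum is enough to reconstruct the signal exactly.
print(np.allclose(x, np.fft.irfft(np.fft.rfft(x), n=len(x))))  # True
```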



An arbitrary signal has an amplitude and phase at each frequency. The Fourier transform encapsulates this in the amplitude and phase of $\tilde{\phi}(\omega)$. To get that in a cosine transform, we have to add a parameter $\theta(\omega)$ to handle the phase, i.e.$^{[a]}$ $$\phi(t) = \int_0^\infty \frac{d\omega}{2\pi} M(\omega) \cos(\omega t + \theta(\omega)) \, .$$ The point is that at each frequency you need two numbers. Hartley deals with this by putting some of the information in the negative frequencies. You can see this in the fact that the Fourier ($F$) and Hartley ($H$) transforms are related by $$F(\omega) = \frac{H(\omega) + H(-\omega)}{2} - i \frac{H(\omega) - H(-\omega)}{2} \, ,$$ where the sign of the imaginary part follows the $e^{-i \omega t}$ convention for the forward Fourier transform (the convention used below).
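Here is a sketch checking this relation numerically. The `dht` helper is a hypothetical brute-force $O(N^2)$ discrete Hartley transform (illustration only), and numpy's FFT uses the $e^{-i\omega t}$ forward convention, matching the sign in the formula above:

```python
import numpy as np

def dht(x):
    """Brute-force discrete Hartley transform: H[k] = sum_n x[n] cas(2*pi*k*n/N)."""
    N = len(x)
    theta = 2*np.pi * np.outer(np.arange(N), np.arange(N)) / N
    return (np.cos(theta) + np.sin(theta)) @ x

x = np.random.default_rng(1).standard_normal(64)
H = dht(x)
Hm = np.roll(H[::-1], 1)  # H[-k], i.e. H[(N-k) mod N]

# F(w) = (H(w) + H(-w))/2 - i*(H(w) - H(-w))/2
F = (H + Hm)/2 - 1j*(H - Hm)/2
print(np.allclose(F, np.fft.fft(x)))  # True
```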




Could we have used the Hartley transform to do the same thing? Sure, it's just a bit more of a pain in the butt. Fourier turns derivatives into factors of $i \omega$ because $$(d/dt)\exp(i\omega t) = i \omega \exp(i \omega t) \, .$$ The derivative of the Hartley kernel is \begin{align} (d/dt) \text{cas}(\omega t) =& (d/dt) (\cos(\omega t) + \sin(\omega t)) \\ =& \omega (-\sin(\omega t) + \cos(\omega t)) \\ =& \omega \, \text{cas}(-\omega t) \, . \end{align} As you can see, the Hartley transform turns derivatives into a factor of $\omega$ combined with a reversal of the frequency: integrating by parts shows that the Hartley transform of $\dot{f}$ is $-\omega H(-\omega)$. We could definitely still use this; it's just a little less familiar, because it loses the resemblance to the eigenvalue problem you get with Fourier.
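Here is a numerical check of that derivative rule, reusing the hypothetical brute-force `dht` helper from the previous sketch (the DHT is its own inverse up to a factor of $N$, and the signal must be smooth and periodic for spectral differentiation to apply):

```python
import numpy as np

def dht(x):
    """Brute-force discrete Hartley transform (same hypothetical helper as above)."""
    N = len(x)
    theta = 2*np.pi * np.outer(np.arange(N), np.arange(N)) / N
    return (np.cos(theta) + np.sin(theta)) @ x

N = 128
t = np.linspace(0, 2*np.pi, N, endpoint=False)
f = np.exp(np.sin(t))

H = dht(f)
k = np.fft.fftfreq(N, d=1/N)       # integer angular frequencies on [0, 2*pi)
Hd = -k * np.roll(H[::-1], 1)      # Hartley transform of f': -omega * H(-omega)
df = dht(Hd) / N                   # the DHT inverts itself up to 1/N

print(np.max(np.abs(df - np.cos(t)*f)))  # ~1e-12
```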



Is there something that makes the Fourier transform more "physical"? No and yes. The Fourier and Hartley transforms are both examples of basis transformations. A function $f(t)$ can be viewed as the components of a vector $\left \lvert f \right \rangle$ expressed in a particular basis. The Fourier or Hartley transform $\tilde{f}(\omega)$ provides the components of the same vector in a different basis. Expressing a single physical object in different bases (or coordinates, or languages) doesn't change the nature of the object, so in a sense neither Fourier nor Hartley is more physical than the other. Fourier is just almost always more convenient mathematically.


However, the reason Fourier is more convenient is that the Fourier kernel is an eigenfunction of translation in time, so a translation acts on the Fourier transform as multiplication by a phase. Define $$V'(t) = V(t + t') \, .$$ Then $$\tilde{V'}(\omega) = \int dt \, V(t+t') \exp[-i \omega t] = e^{i \omega t'} \tilde{V}(\omega) \,. $$ This is intimately related to the fact that the kernel of Fourier is an eigenfunction of the derivative. You see, the derivative generates translations. The operator $\exp(t' (d/dt))$ shifts a function by $t'$. One way to see this is via Taylor expansion: $$\left(e^{t' (d/dt)}V \right)(t) \equiv \underbrace{\left( \sum_{n=0}^\infty \frac{t'^n}{n!} \left(\frac{d}{dt}\right)^n V\right)(t)}_\text{Taylor series about $t$} = V(t + t') \, .$$
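Here is a discrete sketch of this translation property (assuming a periodic signal, so that `np.roll` implements the shift exactly): translating in time is the same as multiplying the FFT by the phase $e^{i \omega t'}$.

```python
import numpy as np

N = 256
t = np.linspace(0, 2*np.pi, N, endpoint=False)
V = np.exp(np.sin(t))              # a smooth periodic signal

shift = 25                         # shift by t' = 25 grid steps
tp = shift * (t[1] - t[0])

# Shift in the time domain: V'(t) = V(t + t').
V_shifted = np.roll(V, -shift)

# Same shift applied as a phase in the frequency domain.
omega = 2*np.pi * np.fft.fftfreq(N, d=t[1] - t[0])
V_phased = np.fft.ifft(np.exp(1j*omega*tp) * np.fft.fft(V)).real

print(np.allclose(V_shifted, V_phased))  # True
```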


Putting it all together: \begin{align} (e^{t' (d/dt)}V)(t) =& e^{t' (d/dt)} \int\frac{d\omega}{2\pi} \tilde{V}(\omega) e^{i \omega t} \\ =& \int\frac{d\omega}{2\pi} \tilde{V}(\omega) \underbrace{\sum_{n=0}^\infty \frac{t'^n}{n!} \left(\frac{d}{dt}\right)^n e^{i \omega t}}_\text{translation op. act on Fourier kernel} \\ =& \int\frac{d\omega}{2\pi} \tilde{V}(\omega) e^{i \omega t'} e^{i \omega t} \\ =& \int\frac{d\omega}{2\pi} \tilde{V}(\omega) e^{i \omega (t+t')} \\ =& V(t+t') \, . \end{align}


So yeah, the Fourier transform is nice because the kernel has a special property with respect to translations. Problems with translational invariance are enormously simplified if you express them in the Fourier basis. The example at the top of this post is such a problem: the homogeneous part of the differential equation is time translation invariant.


$[a]$: You can also write this as a superposition of a sine and cosine transform: $$\phi(t) = \int_0^\infty \frac{d\omega}{2\pi} \left( A(\omega) \cos(\omega t) + B(\omega) \sin(\omega t) \right) \, .$$

