Thursday, March 23, 2017

electromagnetism - Why is bandwidth, range of frequencies, important when sending wave signals, such as in radio?


So in wired/wireless networking and radio, signals are sent in the form of waves. Then the concept of bandwidth comes in, which is the difference between the highest and lowest frequencies in a signal. But I do not get why bandwidth determines the maximum information per second that can be sent. If we are able to send signals of any frequency within the bandwidth, then as the number of distinct frequencies combined into an aggregated signal grows, shouldn't the amount of information that can be sent grow without bound?


Is this impossible because adding a new frequency component necessarily corrupts the information carried at some other frequency? And why does the maximum information per second that can be sent depend only on the bandwidth, not on the highest frequency in the aggregated signal?



Answer



From a physics perspective, the fundamental reason for this is something called the bandwidth theorem (also known as the Fourier limit, the bandwidth limit, or even the Heisenberg uncertainty principle). In essence, it says that the bandwidth $\Delta\omega$ of a signal pulse and its duration $\Delta t$ are related: $$ \Delta\omega\,\Delta t\gtrsim 2\pi. $$ A signal with a limited time duration needs more than one frequency component to be realizable. (Conversely, you need an infinite time to confirm that a signal really is monochromatic.) The bandwidth theorem, which can be proved rigorously for reasonable definitions of the bandwidth and the duration, means that the shorter the time duration, the larger the bandwidth required. It is a direct consequence of a basic fact about Fourier transforms: shorter pulses have broader support in frequency space.
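
To make this concrete, here is a small numerical check, just a sketch and not part of the original argument: it assumes Gaussian pulses and measures RMS widths, for which the minimum time-bandwidth product works out to $1/2$ rather than $2\pi$. The exact constant depends on how the widths are defined; the point is that the product stays fixed as the pulse duration changes.

```python
import numpy as np

def rms_width(x, amplitude):
    """RMS width of the distribution |amplitude|^2 over the axis x."""
    w = np.abs(amplitude) ** 2
    w /= w.sum()
    mean = (x * w).sum()
    return np.sqrt(((x - mean) ** 2 * w).sum())

t = np.linspace(-50, 50, 4096)                # time axis, arbitrary units
step = t[1] - t[0]

for sigma in (0.5, 1.0, 2.0):                 # pulses of different duration
    pulse = np.exp(-t**2 / (2 * sigma**2))    # Gaussian pulse of width sigma
    spectrum = np.fft.fftshift(np.fft.fft(pulse))
    omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, d=step))
    # The product of the two widths stays at ~0.5 no matter what sigma is:
    print(sigma, rms_width(t, pulse) * rms_width(omega, spectrum))
```

Making the pulse longer shrinks its spectral width by exactly the factor needed to keep the product constant, which is the content of the theorem.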


(This last statement is easy to see. If you have a signal $f(t)$ and you make it longer by a factor $a>1$, so your new signal is $g(t)=f(t/a)$, the new signal's transform is $$ \tilde g(\omega) =\int g(t)e^{i\omega t}\text dt =\int f(t/a)e^{i\omega t}\text dt =a\int f(\tau)e^{ia\omega \tau}\text d\tau =a\tilde f(a\omega), $$ which scales the other way: the stretched signal's transform is narrower in frequency space by the same factor $a$.)
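
For the skeptical reader, the scaling identity above is easy to verify numerically. The sketch below approximates the continuous transform by a Riemann sum; the test pulse and the grid are arbitrary choices made for illustration.

```python
import numpy as np

a = 2.0
t = np.linspace(-40, 40, 8001)
step = t[1] - t[0]
f = np.exp(-t**2)            # a test pulse f(t)
g = np.exp(-(t / a)**2)      # g(t) = f(t/a): the same pulse stretched by a

def ft(signal, omegas):
    """Riemann-sum approximation of the integral of signal(t)*exp(i*w*t) dt."""
    return np.array([np.sum(signal * np.exp(1j * w * t)) * step
                     for w in omegas])

omega = np.linspace(-3.0, 3.0, 7)
# Check g~(omega) == a * f~(a*omega) on a few sample frequencies:
print(np.allclose(ft(g, omega), a * ft(f, a * omega)))   # True
```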


More intuitively, the theorem says that it's impossible to have a very short note with a clearly defined pitch. If you try to play the A above middle C, at 440 Hz, for less than, say, 10 milliseconds, then you won't have enough periods to really lock in the frequency, and what you hear is a broader range of notes.
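
You can put numbers on this with a quick experiment (again a sketch; the half-maximum width used here is just an ad hoc way to quantify the spread): compare a 440 Hz tone lasting 10 ms with one lasting a full second.

```python
import numpy as np

fs = 44100                                     # sample rate in Hz

for duration in (0.010, 1.0):                  # burst length in seconds
    npts = int(fs * duration)
    t = np.arange(npts) / fs
    tone = np.sin(2 * np.pi * 440 * t)
    # Zero-pad the FFT so the shape of the spectral peak is finely sampled:
    spectrum = np.abs(np.fft.rfft(tone, n=8 * npts))
    freqs = np.fft.rfftfreq(8 * npts, d=1 / fs)
    lobe = freqs[spectrum > spectrum.max() / 2]     # half-maximum extent
    print(f"{duration*1e3:6.1f} ms burst: peak spans "
          f"~{lobe.min():.1f}-{lobe.max():.1f} Hz")
```

The 10 ms burst smears over tens of hertz, while the 1 s tone pins the pitch down to within a fraction of a hertz.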


Suppose your communications protocol consists of sending pulses of light down a fibre, with a fixed time interval $T$ between them, in such a way that sending a pulse means '1' and not sending one means '0'. The rate at which you can transmit information is essentially set by the pulse separation $T$, which you want to be as short as possible. However, you don't want $T$ to be shorter than the duration $\Delta t$ of each pulse, or else each pulse will spill into neighbouring time slots and trigger false detections there. Thus, to increase the capacity of the fibre, you need shorter pulses, and shorter pulses require a larger bandwidth.
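
Here is a toy version of that scheme, with all parameters hypothetical and no pretence of modelling a real fibre: Gaussian pulses of fixed duration sit in slots of length $T$, and a detector simply thresholds the signal at each slot centre. Once $T$ shrinks to around the pulse duration, energy leaking from neighbouring pulses starts flipping zeros into ones.

```python
import numpy as np

def transmit_and_detect(bits, T, pulse_width, samples_per_slot=100):
    """Send `bits` as on-off pulses spaced T apart; threshold at slot centres."""
    n = len(bits) * samples_per_slot
    t = np.arange(n) * (T / samples_per_slot)
    signal = np.zeros(n)
    for i, b in enumerate(bits):
        if b:  # a '1' is a Gaussian pulse centred in its slot
            signal += np.exp(-((t - (i + 0.5) * T) / pulse_width) ** 2)
    centres = (np.arange(len(bits)) + 0.5) * T
    # Read each slot at its centre; 0.5 is half the peak of a lone pulse:
    return (np.interp(centres, t, signal) > 0.5).astype(int).tolist()

bits = [1, 0, 1, 1, 0, 0, 1, 0]
for T in (4.0, 1.0, 0.5):          # slot length, versus pulse width 1.0
    print(f"T={T}: sent {bits}, read {transmit_and_detect(bits, T, 1.0)}")
```

With $T$ well above the pulse width the bits come through cleanly; at $T$ comparable to the pulse width a zero squeezed between two ones is already misread, and below it the message is garbled entirely.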


Now, this is admittedly very much a physics perspective, and the communications protocols used by real-world fibres and radio links are far more sophisticated. Nevertheless, this limitation will always be there, because there will always be an inverse relation between the width of a signal in the time domain and its width in the frequency domain.



