I ran into this while looking at units for humongous amounts of data. How does the information (data) collected in an inverse femtobarn of exposure compare to a gigabyte of data?
Answer
I think what you're getting at is not some kind of mathematically rigorous equivalence, but rather what it means for a particle physics experiment like ATLAS to collect 1 inverse femtobarn of data. And this is actually quite easy to compute.
The design collision frequency of the LHC is 40 MHz (corresponding to 25 ns bunch spacing, though it currently runs at 50 ns). But since most events are uninteresting background, all modern experiments have a system called a "trigger", which records only those events that pass some rough requirements marking them as potentially interesting (say, a high-momentum electron or jet).
ATLAS routinely records at 300 Hz (roughly a $10^5$ reduction from the initial collision rate), i.e. 300 events per second. The size of an event in terms of storage space varies from experiment to experiment and depends on the software used, but for ATLAS it is on the order of 1.5 MB per event.
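As a quick sanity check on that reduction factor, here is a one-liner using only the numbers just quoted:

```python
# Trigger rate reduction: 40 MHz bunch-crossing rate down to 300 Hz recorded.
print(f"reduction factor ~ {40e6 / 300:.1e}")  # ~1.3e5, i.e. of order 10^5
```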
Currently the LHC runs at peak luminosities of $12600\ \mu\text{b}^{-1}/\text{s}$ (inverse microbarns per second). This decreases over the course of a fill as the beam intensities drop, so let's just run with $1000\ \mu\text{b}^{-1}/\text{s}$ as an average. An inverse femtobarn is $10^9\ \mu\text{b}^{-1}$,
so we have:
$$\frac{300 \text{ Events}}{s}\frac{1.5 \text{ MB}}{\text{ Event}}\frac{s}{1000\mu\text{b}^{-1}} \approx 0.5 \frac{\text{MB}}{\mu\text{b}^{-1}}$$
so for $10^9\ \mu\text{b}^{-1}$ we have
$$0.5\cdot10^9\ \text{MB},$$
i.e. roughly 500 TB of data.
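For the record, here is the same arithmetic as a short Python snippet (a minimal sketch; all numbers are the ones quoted above, and the text rounds the 0.45 MB per inverse microbarn up to 0.5):

```python
# Back-of-the-envelope data volume for 1 fb^-1, using the numbers above.
recording_rate_hz = 300          # ATLAS recording rate, events/s
event_size_mb = 1.5              # storage per event, MB
luminosity_inv_ub_per_s = 1000   # assumed average luminosity, ub^-1/s
inv_fb_in_inv_ub = 1e9           # 1 fb^-1 = 10^9 ub^-1

mb_per_inv_ub = recording_rate_hz * event_size_mb / luminosity_inv_ub_per_s
total_tb = mb_per_inv_ub * inv_fb_in_inv_ub / 1e6   # MB -> TB
print(f"{mb_per_inv_ub:.2f} MB per ub^-1")          # 0.45 MB per ub^-1
print(f"{total_tb:.0f} TB per fb^-1")               # ~450 TB, i.e. ~0.5 PB
```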
PS: This is just a back-of-the-envelope calculation, of course; the rates and the luminosities are constantly changing. Note also that collecting 1 fb$^{-1}$ at low luminosity produces much more recorded data than at high luminosity: in both cases the 300 Hz recording bandwidth is maxed out, but at low luminosity it stays saturated for longer per unit of integrated luminosity, while at high luminosity the trigger has to make a tighter selection to stay within the 300 Hz limit.
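To make that concrete, the same formula evaluated at a few hypothetical luminosities (illustrative values only, not real run conditions) shows the $1/L$ scaling of the recorded volume per inverse femtobarn:

```python
# With the 300 Hz recording rate saturated, data volume per fb^-1 scales as 1/L.
for lumi_inv_ub_per_s in (500, 1000, 2000):   # hypothetical values for illustration
    tb_per_inv_fb = 300 * 1.5 / lumi_inv_ub_per_s * 1e9 / 1e6
    print(f"L = {lumi_inv_ub_per_s:4d} ub^-1/s  ->  {tb_per_inv_fb:5.0f} TB per fb^-1")
```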