Wednesday, December 31, 2014

general relativity - Why does no physical energy-momentum tensor exist for the gravitational field?


Starting with the Einstein-Hilbert Lagrangian


$$ L_{EH} = -\frac{1}{2}(R + 2\Lambda)$$



one can formally calculate a gravitational energy-momentum tensor


$$ T_{EH}^{\mu\nu} = -2 \frac{\delta L_{EH}}{\delta g_{\mu\nu}}$$


leading to


$$ T_{EH}^{\mu\nu} = -G^{\mu\nu} + \Lambda g^{\mu\nu} = -\left(R^{\mu\nu} - \frac{1}{2}g^{\mu\nu}R\right) + \Lambda g^{\mu\nu}. $$


But then, in the paragraph below Eq. (228) on page 62 of this paper, it is said that this quantity is not a physical quantity and that it is well known that for the gravitational field no (physical) energy-momentum tensor exists.


To me personally, this fact is more surprising than well known. So can somebody explain to me (mathematically and/or "intuitively") why there is no energy-momentum tensor for the gravitational field?



Answer



The canonical energy-momentum tensor is exactly zero, due to the Einstein equation. The same holds for any diffeomorphism invariant theory.


By saying "it doesn't exist" one just means that it doesn't contain any useful information.


quantum field theory - Triviality of Yang Mills in $d>4$?


It has been proved that the $\phi^4$ theory is trivial in spacetime dimensions $d>4$. By trivial I mean that the field $\phi$ is a generalized free field; in other words, its only nonzero connected correlator is the two-point correlator. This is a nonperturbative result, which manages to get around the fact that $\phi^4$ is nonrenormalizable in dimensions $d>4$.


Here is the paper which proves this result: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.47.1


Have there been any similar results for Yang-Mills theory? Yang-Mills is nonrenormalizable in dimensions $d>4$ as well, so I imagine that if there were a similar result, $d=4$ would also be the critical dimension.



Answer



Good question. I am not aware of similar results for YM. The $\phi^4$ case uses correlation inequalities for ferromagnetic spin systems. Unfortunately, not many of those are known for gauge theories. YM is an example of a model with non-Abelian group symmetry like $SU(N)$. Even for much simpler models with $O(N)$ symmetry, like $N$-component $\phi^4$ or spherical spins, not much is known about correlation inequalities when $N\ge 3$.


condensed matter - Dispersion relation in tight binding model with even indices only


Given a tight binding model with Hamiltonian


$$H = t\sum_{i\ \mathrm{even}} \left[c_{i+1}^\dagger c_i + \mathrm{h.c.}\right]$$


containing even indices only, how can we find out the dispersion relation?


Attempt:


My guess is that the dispersion relation is just $-2t\cos(ka)/2 = -t\cos(ka)$, i.e. half the regular tight-binding result. I could not find a source for the dispersion relation with two different hopping amplitudes. If I had, I would just set one of them to zero.
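A quick numerical check (my own, not part of the original post) is to diagonalize the single-particle hopping matrix for a finite chain. Since hopping acts only on the even bonds (0,1), (2,3), ..., the chain splits into decoupled dimers:

```python
import numpy as np

# Single-particle hopping matrix for H = t * sum_{i even} (c_{i+1}^† c_i + h.c.)
# on an open chain of N sites: hopping lives only on the bonds (0,1), (2,3), ...
N, t = 8, 1.0
H = np.zeros((N, N))
for i in range(0, N - 1, 2):        # even bonds only
    H[i, i + 1] = H[i + 1, i] = t

evals = np.linalg.eigvalsh(H)
print(np.round(evals, 6))
# The chain decouples into N/2 independent dimers, each with levels -t and +t,
# so the two "bands" are completely flat (k-independent) rather than -t*cos(ka).
```

So the spectrum consists of two dispersionless levels at $\pm t$, which is what one also finds by taking the dimerized (SSH-type) two-band dispersion and setting one of the two hopping amplitudes to zero.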




quantum mechanics - Can I replace eigenvalue of $p$ operator with position space representation of $p$ operator?


\begin{equation} \begin{aligned} \langle x|\hat{p}|\psi\rangle &= \int dp\ \langle x|\hat{p}|p\rangle \langle p|\psi\rangle\\ &=\int dp\ p\langle x|p\rangle \langle p|\psi\rangle \\ &=\int dp \ \left(-i\hbar \frac{\partial}{\partial x}\right) \langle x|p\rangle \langle p|\psi\rangle \end{aligned} \end{equation}


Please explain how we can go from the second line to the third. In the second line, $p$ is an eigenvalue, which has been replaced in the third line by the position-space representation of the momentum operator. How can we replace a number by an operator?
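The step relies only on the explicit form of the overlap $\langle x|p\rangle$. Assuming the standard plane-wave normalization (conventions may differ by constants), the identity is:

```latex
\langle x|p\rangle = \frac{1}{\sqrt{2\pi\hbar}}\, e^{ipx/\hbar}
\quad\Longrightarrow\quad
-i\hbar\,\frac{\partial}{\partial x}\,\langle x|p\rangle
= -i\hbar \cdot \frac{ip}{\hbar}\,\langle x|p\rangle
= p\,\langle x|p\rangle .
```

So the number $p$ multiplying $\langle x|p\rangle$ can be traded for the derivative operator acting on the $x$-dependence of $\langle x|p\rangle$; the operator never acts on $\langle p|\psi\rangle$, which carries no $x$-dependence.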




atomic physics - Why is the nucleus so small and why is the atom 99.999% empty space?


A nucleus consists of protons and neutrons. Both are extremely heavy compared to electrons. Then how come they are contained within an extremely tiny space? And why does the atom consist of 99.999% empty space?


I do not understand the mathematics regarding this one bit. Please give me an explanation of this phenomenon in words as far as possible :)




Tuesday, December 30, 2014

nuclear physics - Adding many more neutrons to a nucleus decreases stability?



If you take any large nucleus and add protons to it, the electrostatic repulsion between them will make the nucleus more unstable, because at greater distances the electrostatic repulsion between protons outweighs the attraction of the strong force.


So how come if you add more neutrons, which don't have a charge and so there is no electrostatic force, the nucleus still becomes more unstable?


Also, why aren't there groups of neutrons bound together, i.e. purely neutron nuclei? (Because they are neutral, I'm guessing they couldn't form atoms, since an atom needs electrons.)




special relativity - How does wind speed affect the velocity of light?


As you know, there is a maximum speed things can go called $c$, the "speed of light." Light in a vacuum goes $c$. Light in the atmosphere, however, goes a little less than $c$.


My question is: what effect does wind have on light's velocity? Simply adding the wind's velocity to light's would not even be remotely close, since a 10 mph tail wind would probably push it over $c$.



Answer




If the air is not moving, we know the light moves at a speed $v=c/n$, where $n$ is the index of refraction of the air. Now if the air is moving at a speed $u$ relative to you, and the light is propagating in the same direction, then you can find the apparent speed of light from the relativistic velocity-addition formula. In this case, you will find the apparent speed of light to be $$\frac{v+u}{1+\dfrac{uv}{c^2}}. $$
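A minimal numerical sketch (the numbers are illustrative, not from the answer): for air, $n \approx 1.0003$, so even a 10 mph tail wind shifts the light speed by only a few millimeters per second, and the result always stays below $c$:

```python
c = 299_792_458.0          # speed of light in vacuum, m/s
n = 1.0003                 # approximate refractive index of air (illustrative)
v = c / n                  # light speed in still air

def add_velocities(v, u, c=c):
    """Relativistic velocity addition: apparent light speed in air moving at u."""
    return (v + u) / (1 + u * v / c**2)

u = 4.5                    # ~10 mph tail wind, in m/s
v_wind = add_velocities(v, u)
print(v_wind - v)          # tiny increase, roughly 2.7 mm/s
print(v_wind < c)          # True: the result never exceeds c
```

The formula guarantees the result is below $c$ whenever both inputs are; naive addition $v+u$ would indeed overshoot.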


energy conservation - LED conversion efficiency exceed 100%


I have read this article, which says that the conversion efficiency of an LED has exceeded 100%. The results are published in Physical Review Letters.



In their experiments, the researchers reduced the LED’s input power to just 30 picowatts and measured an output of 69 picowatts of light - an efficiency of 230%.




How is this possible? Is it not a violation of conservation of energy, or am I missing something?


Edit: I missed that the LED works as a thermoelectric cooler and absorbs heat from its surroundings to convert it to light. But it is still a surprising observation for me.


Is it really possible for a device to use thermal energy to produce light?



Answer



The device is apparently working as a heat pump, for which I give a brief theoretical analysis here.


In the example given, the $P_h=69{\rm pW}$ light output comprises the $W=30{\rm pW}$ input by the researchers together with $P_c=39{\rm pW}$ of heat that was formerly in the chip.


We can model the process by ideal heat pumping as follows. Heat drawn from the chip leads to a drop in the chip's entropy of $\Delta S_c = -\frac{P_c}{T_c}$, and the light output to the ambient world increases the entropy of the latter by $\Delta S_w = \frac{W + P_c}{T_h}$, where $T_h$ is the effective temperature of the light (measuring the latter's degree of thermalization together with its optical grasp). Since the light ends up in the environment, its effective temperature is ambient or greater.


The total entropy change of the World is then


$$\frac{W}{T_h} + P_c\,\left(\frac{1}{T_h} - \frac{1}{T_c}\right)$$



We know that $T_h>T_c$ because the effective "exhaust" temperature is at least ambient and $T_c$ must wind up less than this, because heat is being pumped out. So the second term with the brackets is negative: this means we must supply enough work $W$ to at least make the quantity positive. So the device can very plausibly (and probably does) work as claimed and comply with both the first and second laws of thermodynamics.


Monday, December 29, 2014

Are there any other pairs similar to virtual and normal photons?



Are there virtual particles for every kind of particle there is?



Answer



Don't take "virtual particles" too seriously.



Quantum field theory for a single (non-interacting) field is quite easy to solve. One of the things that can be calculated is the Feynman propagator $G_F(x,y)$, which describes the probability amplitude for the propagation of a particle from spacetime point $x$ to spacetime point $y$. Obviously, the Feynman propagator will depend on the mass, four-momentum and quantum numbers of the particle.


But single, non-interacting fields are boring. We are more interested in interacting fields. This is also the case that can be tested experimentally: measuring cross sections, decay widths and all that. But here we run into trouble: QFT for interacting fields is extremely hard!


So, we have to use an approximation: perturbation theory, valid at weak coupling [even if the energies in particle colliders are really high by our standards, the couplings are small enough to apply perturbation theory]. In the Lagrangian you will find terms such as $$\mathcal{L}_i \sim g \psi^\alpha \phi^\beta$$ where $\psi$ and $\phi$ are two fields, $\alpha$ and $\beta$ integer exponents and $g$ a coupling constant, which quantifies the strength of the interaction. The trick for calculating any observable is to expand the time-evolution operator in powers of the coupling $g$. The math is pretty tedious, but in the end you find that, by a miraculous coincidence (in fact, Wick's theorem), all the observables depend on the Feynman propagators of the fields involved in the interaction Lagrangian (with one caveat: in the propagator, the mass and four-momentum satisfy the "wrong" relationship $p^\mu p_\mu \neq m^2$).


And here is where the genius of Feynman comes into play: each term (a horrendous integral) in the perturbative expansion can be represented as [a sum of] pretty diagrams. In a Feynman diagram, each external line is one of the incoming/outgoing particles, each internal line is one of the propagators arising from Wick's theorem, and the vertices between lines are determined by the interaction Lagrangian (in the example above, at each vertex there are $\alpha$ lines of type $\psi$ and $\beta$ lines of type $\phi$). Feynman diagrams are a very useful tool for summarizing and writing down the terms in the perturbative expansion. They are NOT a depiction of particles moving around, colliding, merging and branching. In this sense, the internal lines in these diagrams aren't meant to represent particles, even though they are usually called 'virtual particles'. Maybe 'internal propagator' would be less misleading.


So, tl;dr: "Virtual particles" are not a special type of particle, and there are Feynman diagrams with internal propagators corresponding to all the fields in the Lagrangian.


Sunday, December 28, 2014

optics - holographic projection in thin air?




Where I come from, they taught us in high school that it is possible to project holograms in thin air simply by illuminating the hologram with the "correct light", and having a semi-transparent medium in the path of the reflected light (e.g. water vapor or smoke), resulting in a free-floating image.


Is this true? And if not, is there any technology that can project such 3D images in thin air?




Is the gravity of light equal to the gravity of mass under $E=mc^2$?



Under $E=mc^2$, 1 kg of matter has $9\times 10^{16}$ joules of energy. So, if I had just the light emitted in one second by $9\times 10^{14}$ 100-watt light bulbs (the same total energy), would that light have the same amount of gravity as the 1 kg of matter?



Answer



No, a photon with energy $E$ behaves differently with respect to gravity than a slow-moving object of mass $m=E/c^2$ does.


In fact, that difference was behind one of the first tests of general relativity. General relativity is very nearly consistent with Newtonian gravity when it comes to slow moving objects in our solar system. However, if you use Newtonian gravity, and treat a photon passing by the sun as being an object of mass $E/c^2$, the amount by which the photon should be deflected is off by a factor of $2$ from what general relativity predicts. During the solar eclipse of 1919, Arthur Eddington made measurements of the deflection of starlight which passed close to the sun. His results were consistent with general relativity, and inconsistent with what was predicted with the combination of Newtonian gravity and $E=mc^2$.
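The factor-of-2 discrepancy is easy to reproduce numerically (the constants below are standard values; this arithmetic is my illustration, not part of the original answer):

```python
import math

GM_sun = 1.327e20      # standard gravitational parameter of the Sun, m^3/s^2
c = 2.998e8            # speed of light, m/s
b = 6.96e8             # impact parameter ~ solar radius, m

newton = 2 * GM_sun / (c**2 * b)   # Newtonian deflection of a mass E/c^2
gr     = 4 * GM_sun / (c**2 * b)   # general-relativistic prediction

arcsec = 180 / math.pi * 3600      # radians -> arcseconds
print(newton * arcsec)  # ~0.88 arcsec
print(gr * arcsec)      # ~1.75 arcsec, the value Eddington's data favored
```

The GR value of about 1.75 arcseconds for grazing starlight is exactly twice the naive Newtonian result, which is the discrepancy the 1919 eclipse expedition tested.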


Light also behaves differently from a slow-moving mass as a source of gravity. General relativity describes gravity in terms of the curvature of spacetime. The source of that curvature is the stress-energy tensor, which has 16 components (10 of them independent), instead of the single number (mass) that sources gravity in Newtonian theory. A photon's energy and momentum contribute differently to the stress-energy tensor than do those of an object with non-zero rest mass.


However, if the light in question is bouncing around inside of a perfect mirrored box, and the lengths and time scales of interest are large compared to the size of the box and how long it takes light to bounce from one side of the box to the other, then yes, it would work to treat the light more simply as a stationary gravitational source with a mass of $E/c^2$.


special relativity - Difference between timelike and spacelike vectors


Other than one having a positive invariant scalar product and the other a negative one, what are the actual physical differences between these vectors?



Answer



A timelike vector connects two events that are causally connected, that is the second event is in the light cone of the first event. A spacelike vector connects two events that are causally disconnected, that is the second event is outside the light cone of the first event.


In that sense, the timelike vector can be considered to define a four-velocity direction of an observer and thus the time axis of that observer (to be a four-velocity it should be normalised). On the other hand a spacelike vector can be considered as defining a space axis (a spatial direction) of an observer. Seen like that the two vectors define a time interval or a length on the appropriate inertial frame, as Vladimir said.


thermodynamics - Why does pressure increase with water depth?


Let's say I took a tiny metal sphere whose surface area allows it, at any point in time, to be surrounded by up to 1000 water molecules. Now let's say we put this sphere first in shallow water and then in the Mariana Trench. Obviously, the sphere would feel much more pressure in the deep water! But why is that? Let's look at the formulas:



For the pressure from the 1000 water molecules: $$P=1000\times\frac{F_{molecule(H_2O)}}{A_{sphere}}$$


Assuming the collisions of the water molecules with the sphere are perfectly elastic, always in the same direction, and occurring over the same period of time: $$P=1000\times\frac{2m_{molecule(H_2O)}\,v_{molecule(H_2O)}}{A_{sphere}\,\Delta t}$$


So the only variable here is the velocity of the water molecules. But we know that deep water is colder than shallow water, so the kinetic energy, and thus velocity, of the water molecules in the Mariana Trench is lower, and so it doesn't make much sense that the pressure would be higher.


P.S. To be explicit. My logic here is that the only things capable of DOING the force (to cause pressure) are the water molecules directly in contact with the sphere. That is where an exchange of energy would be happening.



Answer



The problem is that you're modeling the liquid like an ideal gas, whose molecules independently bounce off the ball, but liquids are characterized by strong interactions at short distances.


A better (but still inaccurate) model would be to treat the liquid like a solid locally, i.e. imagine each of the liquid molecules connected in a chain by springs. An increase in pressure means that the springs are compressed more and more, so they push outward onto your object more and more.


In terms of your variables, we should have $F \sim k \Delta x$, not $F \sim 2mv/\Delta t$. In this model, pressure can be transmitted from molecules far away, just like tension is transmitted through a rope.
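The macroscopic consequence of those compressed "springs" is just hydrostatic pressure, $P = P_0 + \rho g h$. A quick estimate (my own numbers, not from the answer above):

```python
rho = 1025.0      # seawater density, kg/m^3 (roughly; it varies slightly with depth)
g = 9.81          # gravitational acceleration, m/s^2
P0 = 101_325.0    # surface (atmospheric) pressure, Pa

def pressure(depth_m):
    """Hydrostatic pressure at a given depth, treating rho as constant."""
    return P0 + rho * g * depth_m

print(pressure(10.0) / 101_325.0)      # ~2 atm in shallow water
print(pressure(10_900.0) / 101_325.0)  # ~1080 atm at Mariana Trench depth
```

The pressure grows with the full weight of the water column overhead, transmitted through the liquid, regardless of the fact that the deep water is colder.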


quantum mechanics - Is there a deterministic observable that has only single eigenvalue?


Is there an observable in quantum mechanics which has only one eigenvalue, with an eigenspace associated with that single eigenvalue? This observable is deterministic in the sense that it gives the same measurement value every time. But the final state could be any of the wave functions living in the eigenspace corresponding to that single eigenvalue, with different probabilities.


What would that mean practically, to quantum mechanics?



Answer



If I understand the question and the comments correctly, what is needed is an everywhere defined operator that preserves norms and has only a single point in the spectrum. The first condition forces the operator to be a partial isometry, while the second forces it to be a multiple of the identity. The intersection is then any operator $zI$, where $z$ is a complex number of norm one and $I$ is the identity operator.



quantum field theory - Can we test QFT on a curved spacetime?


It is possible to extend a quantum field theory to a curved spacetime. But does this lead to predictions that can be tested and measured? Has it been confirmed?


The underlying reason I am asking this is: curved spacetime means emergence of gravity and therefore General Relativity regime. And we know that GR and QFT are incompatible. I realise that in order to include gravity, one should put its Lagrangian in from the very beginning and this, I guess, does not work. But does the current mathematical framework for extending the known field theories to a curved spacetime work?



Answer



The biggest prediction of QFT on curved (not dynamical!) spacetime is Hawking radiation. This radiation can in principle be measured experimentally, even though the effect is so small that with current technology there is probably no hope of a measurement. It's still possible that some clever way of maximizing the experimental signal could achieve this (compare the measurement of the proton lifetime, where there is no hope of following a single proton for $10^{33}\ \mathrm{yr}$, but it's "easy" to watch $10^{33}$ protons).



Moreover, in the solar system gravity is weak and space is only slightly curved. Therefore, while QFT on flat spacetime can be routinely tested in a laboratory on Earth, for a significant curved-spacetime effect you usually have to look at astrophysical or cosmological observations, with all the related uncertainties.


The math: it is known that a theory with interacting gravitons (spin-2 massless particles) is not renormalizable. QFT with gravitons as an effective field theory can still work, in the sense that you can make predictions as in Fermi's theory of weak interactions. For instance, see the beautiful treatment of Schwartz (Quantum Field Theory and the Standard Model, p. 404), in which he finds the quantum-gravity corrections to Mercury's perihelion shift.


Simplest way to analytically determine whether a claimed heat transfer process obeys the second law of thermodynamics?


I want to find the simplest method to determine whether a proposed heat transfer process violates the second law of thermodynamics. Specifically I am looking for a method that meets the following needs:



  • A general method that can be used to analyse any heat transfer process.

  • I want to do this without resorting to word definitions such as the Clausius statement of the second law.


To explain my progress so far: a few days ago the "obvious answer" that I would have given was that $\Delta S_{Universe}$ for the process must be $\geq 0$.



However, on closer inspection $\Delta S_{Universe}\geq 0$ does not always mean a process is possible. An example of a process that satisfies this criterion, but is clearly impossible, is using a thermal reservoir at temperature $T_R$ to heat a body from $T_R-20\,\mathrm{K}$ to $T_R+10\,\mathrm{K}$.


The reason this process is impossible is that the stage of heating from $T_R$ to $T_R+10\,\mathrm{K}$ involves heat transfer from a cooler body to a hotter one, which decreases $S_{Universe}$.


Based on this insight I came up with the following rule: Every stage of the process must result in a positive $\Delta S_{Universe}$.


The way we can test for this condition is as follows:



  1. Formulate an equation for $\Delta S_{Universe}$

  2. Differentiate the above equation with respect to the state variable of interest (in this case the temperature of the body)

  3. Set $ \displaystyle\frac{d\Delta S_{Universe}}{dT_{Body}} = 0$ in the above equation

  4. Solve for $T_{Body}$, which will henceforth be referred to as $T_{MaxS}$

  5. Check whether either of the following conditions is true: $T_{BodyFinal} < T_{MaxS} < T_{BodyInitial}$ or $T_{BodyInitial} < T_{MaxS} < T_{BodyFinal}$


  6. If either condition is true, then by the mean value theorem, some part of the process must have resulted in a negative $\Delta S_{Universe}$. If not, then the process obeys the second law of thermodynamics.
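The steps above can be sketched for the reservoir-plus-body example from the question (the heat capacity and temperatures below are illustrative assumptions of mine; the derivative in steps 2-4 is taken analytically):

```python
import math

# Body of heat capacity C heated by a reservoir at T_R (steps 1-6 above).
# Step 1: Delta S_universe(T) = C*ln(T/T_i) - C*(T - T_i)/T_R  (body + reservoir)
C = 1.0                              # heat capacity, arbitrary units (illustrative)
T_R = 300.0                          # reservoir temperature, K (illustrative)
T_i, T_f = T_R - 20.0, T_R + 10.0    # the impossible process from the text

def dS_universe(T):
    return C * math.log(T / T_i) - C * (T - T_i) / T_R

# Steps 2-4: d(Delta S)/dT = C/T - C/T_R = 0  =>  T_MaxS = T_R
T_maxS = T_R

# Steps 5-6: if T_MaxS lies strictly between T_i and T_f, some stage of the
# process decreased S_universe, so the process violates the second law.
violates = min(T_i, T_f) < T_maxS < max(T_i, T_f)
print(violates)                # True: heating past T_R is impossible
print(dS_universe(T_f) > 0)    # True: yet the overall Delta S is still positive
```

This reproduces the point made above: the overall $\Delta S_{Universe}$ of the process is positive, but the entropy-maximum test at $T_{MaxS}=T_R$ still flags it as forbidden.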


The above method appears to be applicable to all processes, however the issue is that even for a simple two body system this method is difficult and time consuming to carry out. For a worked example of this method see the previous thread that I started:


Can calculations find positive entropy change for heat transfer from cold reservoir to hot body?


Therefore my question is: Does anyone know of a simpler method to analytically determine whether a proposed process obeys the second law of thermodynamics? This must also meet the criteria mentioned above.


I appreciate anyone's time and thank you in advance.




Saturday, December 27, 2014

particle physics - Can quarks of different colors annihilate?


Wikipedia: "Antiparticles have exactly opposite additive quantum numbers from particles, so the sums of all quantum numbers of the original pair are zero."


Is it possible to annihilate a quark-antiquark pair if the antiquark has a color different from the quark?


E.g.: will a red up quark and an anti-green anti-up quark annihilate? And if so, what is the product?



Answer




Well, you can just take the ordinary quark/anti-quark/gluon vertex for that and get a red/anti-green gluon (which will then decay/hadronize further, since it is color-charged and thus confined).


cosmology - How necessary is inflation given the laws of physics?


How deeply is cosmic inflation embedded in the laws of physics? Is inflation required for a universe remotely like ours, or is it simply contingent on the starting conditions of the universe?


For example, is time as we experience it a byproduct of inflation, something that could not exist without it?


The (limited) research I've done seems to suggest that inflation is deeply embedded in the laws of physics, but I don't really understand how. If that is so, how can our part of the universe (the interior of a galaxy) not expand, yet the laws of physics still apply here?



Answer




Time has nothing to do with inflation. Clocks would tick even if the universe had not inflated.


Inflation is not “deeply embedded” in the laws of physics. You can have the Standard Model and a Big Bang based on General Relativity without having inflation.


Inflation is an ad hoc add-on to cosmological models to explain certain features of our universe — such as its homogeneity, isotropy, flatness, lack of magnetic monopoles, etc. — that would be hard to understand without it. Most inflationary models use a so-far-unobserved scalar "inflaton" field to cause a brief period of inflationary expansion. We know that scalar fields exist (the Higgs field is scalar) and they can have the negative pressure that is required for inflation. Thus many physicists see inflation as fitting comfortably and plausibly into existing ideas about particle physics and cosmology, but inflation is not required by them.


Addendum for @safesphere: A scalar Higgs field is a critical part of the Standard Model of particle physics. With this field, the model is in impressive agreement with all observations of electromagnetic, weak, and strong interactions between various particles at, say, the Large Hadron Collider. Without it, the model utterly fails. For mainstream physicists, this evidence more than suffices to consider the Higgs field to “exist”. Whenever a physicist says “X exists”, she means “A model with X in it works really well to explain what we observe”.


Friday, December 26, 2014

electromagnetism - Is the EmDrive, or "Relativity Drive" possible?


In 2006, New Scientist magazine published an article titled Relativity drive: The end of wings and wheels [1] about the EmDrive [Wikipedia], which stirred up a fair degree of controversy and some claims that New Scientist was engaging in pseudo-science.



Since the original article, the inventor claims that a "Technology Transfer contract with a major US aerospace company was successfully completed", and that papers have been published by Professor Yang Juan of The North Western Polytechnical University, Xi'an, China. [2]


Furthermore, it was reported in Wired magazine that the Chinese were going to attempt to build the device.


Assuming that the inventor is operating in good faith and that the device actually works, is there another explanation of the claimed resulting propulsion?


Notes:
1. Direct links to the article may not work as it seems to have been archived.
2. The abstracts provided on the EmDrive website claim that they are Chinese language journals which makes them very difficult to chase down and verify.



Answer



It is impossible to generate momentum in a closed object without emitting something, so the drive is either not generating thrust, or throwing something backwards. There is no doubt about this.


Assuming that the thrust measurement is accurate, that something could be radiation. This explanation is exceedingly unlikely: to get a millinewton of radiation pressure you need an enormous amount of power, since in 1 s a millinewton delivers $10^{-3}\ \mathrm{kg\,m\,s^{-1}}$ of momentum, which in radiation is carried by $3 \times 10^5$ J (multiply by $c$), so you need 300,000 watts of power to push with a millinewton of force, and tens of millions of watts for 80 mN. So, it's not radiation.


But a leaky microwave cavity can heat the water vapor in the air around the object, and the heat can drive a current of air away from the object. With an air current, you can produce millinewton thrusts from a relatively small amount of energy, and with a barely noticeable breeze. To get 1 mN of force, you need to accelerate about 1 g of air (roughly $800\ \mathrm{cm^3}$) to 1 m/s every second; to get 80 mN, accelerate about $0.3\ \mathrm{m^3}$ of air (400 g) to 0.2 m/s (barely perceptible) each second, and this can be done with a hot-cold thermal gradient behind the device which is hard to notice. If the thrust measurements are not in error, this is the likely cause.
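The two estimates can be checked directly (my arithmetic, using the standard sea-level air density; the figures are order-of-magnitude):

```python
c = 3.0e8            # speed of light, m/s
rho_air = 1.2        # air density at sea level, kg/m^3

# Radiation: to push with force F using light alone, the beam power is P = F*c.
F = 1e-3                         # 1 mN of thrust
P_radiation = F * c
print(P_radiation)               # 3e5 W: hundreds of kilowatts per millinewton

# Air current: F = (mass flow rate) * (exhaust speed), a far cheaper route.
v = 1.0                          # air exhaust speed, m/s
mass_flow = F / v                # kg/s of air needed
volume_flow = mass_flow / rho_air
print(mass_flow, volume_flow)    # ~1 g/s, i.e. under a liter of air per second
```

The contrast is the whole argument: a millinewton of photon thrust costs $3\times 10^5$ W, while the same thrust from a gentle air current costs only the kinetic energy of about a gram of air per second.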



So at best, Shawyer has invented a very inefficient and expensive fan.




EDIT: The initial tests were at atmospheric pressure. To test the fan hypothesis, an easy way is to vary the pressure; another easy way is to put dust in the air to see the air currents. The experimenters didn't do any of this (or at least didn't publish it if they did); instead, they ran the device inside a vacuum chamber but at ambient pressure, after putting it through a vacuum cycle to simulate space. This is not a vacuum test, but it can mislead one on a first read.


In response to criticism of this faux-vacuum test, they did a second test in a real vacuum. This time, they used a torsion pendulum to find a teeny-tiny thrust bearing no relation to the first purported thrust. The second run in vacuum shows completely different effects, possibly due to interactions between charge building up on the device and metallic components of the torsion pendulum, possibly due to deliberate misreporting by these folks, who never explained what was going on in the first experiments they hyped up. Since they didn't do any systematic analysis of the effect in the first run (varying the air pressure, looking at air flows with dust, and so on), or at least didn't admit their initial error if they did, this is not particularly honest experimental work, and there's not much point in talking about it any more. These folks are simply wasting people's time.


condensed matter - Symmetry breaking in Bose-Hubbard model


According to Landau's symmetry breaking theory, there is a symmetry breaking when phase transition occurs.




  1. What is the symmetry breaking of superfluid-Mott insulator transition in Bose-Hubbard model?





  2. Why metallic state to Mott insulator state transition in Fermi-Hubbard model is not a phase transition, but a crossover.





Answer



The Mott transition in the Bose-Hubbard model is a quantum phase transition. From the point of view of field theory, that does not change much compared to standard (finite-temperature) phase transitions. The main difference is that you now have to take into account the quantum fluctuations, which correspond to the "imaginary time" direction in addition to the $d$ dimensions of space. It also means that there are at least two control parameters (i.e. parameters that have to be fine-tuned to reach the transition): a non-thermal control parameter (such as the hopping amplitude or the density) and the temperature (which must be zero by definition).


Other than that, you can use Landau theory to understand the transition (which is second-order) at zero temperature. The disordered phase is the Mott insulator, and the ordered one is the superfluid, where the non-zero order parameter is the condensate density (I will only talk about the 3D case, which is the simplest, as I won't have to deal with BKT physics). The broken symmetry is the usual one for a Bose-Einstein condensate: the U(1) symmetry. One can then show that there are two universality classes, depending on the way the transition is crossed (at constant density, or with a change of density at the transition).


Now, at finite temperature, things are different. First, the Mott insulator does not exist anymore, as a finite temperature can excite particles and one gets a finite compressibility (or conductivity). That might correspond to the cross-over you're talking about in the fermionic case. On the other hand, the superfluid exists at least up to a critical temperature.


thermodynamics - In heat conduction, what does it actually mean to be in the steady state?


I have read about the method of heat conduction and I have some questions related to this topic:


If I consider a metal bar and supply heat at one end, the heat will flow through the bar; and if I consider the bar to consist of many layers, then each layer absorbs some amount of heat and passes the rest on to the next layer. So will all the layers attain the steady state at the same time?


And I have read that at steady state the layers no longer absorb any heat energy. Why can't the layers absorb any heat once they have reached the steady state?
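A small finite-difference sketch (the bar length, temperatures and step counts are purely illustrative choices of mine) shows both points numerically: the layers approach the steady state only asymptotically, and at steady state each layer's net heat intake is zero even though heat still flows through it:

```python
import numpy as np

# 1D bar of N interior layers, with the two ends held at T_hot and T_cold.
# Explicit diffusion update: each step, a layer's net absorbed heat is
# proportional to (left + right - 2*self); at steady state that is zero.
N, T_hot, T_cold = 20, 100.0, 0.0
T = np.full(N, T_cold)           # bar starts cold everywhere
alpha = 0.4                      # explicit-scheme stability needs alpha <= 0.5

for _ in range(20_000):
    Tp = np.concatenate(([T_hot], T, [T_cold]))
    net = Tp[:-2] + Tp[2:] - 2 * T       # net heat absorbed per layer this step
    T = T + alpha * net

print(np.max(np.abs(net)))   # ~0: no layer absorbs any net heat anymore
# The steady profile is linear: heat flows through, but none accumulates.
linear = np.linspace(T_hot, T_cold, N + 2)[1:-1]
print(np.max(np.abs(T - linear)))   # ~0
```

Each layer stops absorbing heat at steady state because the heat flowing in from its hotter neighbor exactly equals the heat flowing out to its colder neighbor, which is what the linear temperature profile encodes.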





particle physics - Is there any theory for origination of charge?


We have a theory of a Higgs field that describes how a particle gets mass. Since mass and charge both are intrinsic properties of a particle, is there any similar theory for how particles get electric charge?




special relativity - Why does a Lorentz scalar field transform as $U^{-1}(\Lambda)\phi(x)U(\Lambda) = \phi(\Lambda^{-1}x)$?


This problem is from Srednicki, page 19. Why does $U^{-1}(\Lambda)\phi(x)U(\Lambda) = \phi(\Lambda^{-1}x)$ hold?


Can anyone derive this?


$\phi$ is a scalar field and $\Lambda$ a Lorentz transformation.



Answer



That equation is, in fact, the definition of a Lorentz-scalar, but perhaps a few words will convince you that it is a well-motivated definition.


A helpful starting analogy.


Forget about relativistic field theory for a moment. Let's consider, instead, someone who wants to measure the temperature everywhere in a room. The temperature can be represented by a scalar field, namely a function $T:\mathrm{room}\to \mathbb R$ where $\mathrm{room}$ is some subset of three-dimensional Euclidean space $\mathbb R^3$. Now, suppose that someone takes the temperature distribution and rotates it by a rotation $R\in\mathrm{SO}(3)$. By drawing a picture, you should be able to convince yourself that the new temperature distribution $T_R$ he would measure is related to the old one as follows: \begin{align} T_R(R\mathbf x) = T(\mathbf x), \end{align} In other words, the value of the transformed (rotated) temperature distribution at the transformed point is the same as the value of the un-transformed temperature distribution at the un-transformed point.
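The defining property $T_R(R\mathbf x)=T(\mathbf x)$ can be illustrated numerically with a toy temperature field of my own choosing:

```python
import numpy as np

def T(x):
    """Toy temperature field: a Gaussian hot spot centered at (1, 0, 0)."""
    return np.exp(-np.sum((x - np.array([1.0, 0.0, 0.0])) ** 2))

theta = 0.7                                   # rotation angle about the z axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

def T_R(x):
    """Rotated distribution, defined by T_R(x) = T(R^{-1} x)."""
    return T(R.T @ x)                         # R^{-1} = R^T for rotations

x0 = np.array([0.3, -1.2, 0.5])
print(np.isclose(T_R(R @ x0), T(x0)))         # True: T_R(R x) = T(x)
```

Defining the rotated field by pulling back through $R^{-1}$ is exactly what makes the value at the rotated point equal the original value at the original point, which is the content of the equation above.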


Classical field theory.



Now, let's go to classical relativistic field theory. Consider some scalar field on four-dimensional Minkowski space $\phi:\mathbb R^{1,3}\to \mathbb R$. By analogy with the temperature distribution, we define a Lorentz-transformed field $\phi_\Lambda$ (often denoted $\phi'$ in physics) by \begin{align} \phi_\Lambda(\Lambda x) = \phi(x) \end{align} for all $x\in \mathbb R^{1,3}$ and for all $\Lambda\in\mathrm{SO}(1,3)^+$. Notice that this can be re-written as follows: \begin{align} \phi_\Lambda(x) = \phi(\Lambda^{-1} x). \tag{$\star$} \end{align} The transformed field evaluated at a spacetime point $x$ agrees with the un-transformed field at the spacetime point $\Lambda^{-1} x$.


QFT.


But now, let's consider QFT. In this case, $\phi$ assigns an operator (well, really an operator distribution) to each spacetime point. Now in relativistic QFT, there exists a unitary representation $U:\mathrm{SO}(1,3)^+\to U(\mathcal H)$ of the Lorentz group acting on the Hilbert space $\mathcal H$ of the theory which transforms states $|\psi\rangle\in \mathcal H$ as follows: \begin{align} |\psi\rangle \to U(\Lambda)|\psi\rangle \end{align} Now suppose that $A:\mathcal H\to\mathcal H$ is a linear operator; is there some natural way that such an operator transforms under $U(\Lambda)$? Yes there is: recall that when we make a change of basis in a vector space, this induces a change in the matrix representations of operators by similarity transformation. If we think of the Lorentz transformation as a change of basis, then it is natural to define a transformed operator by \begin{align} A_\Lambda = U(\Lambda)^{-1} A U(\Lambda). \end{align} If we apply this to the operator $\phi(x)$ at a given spacetime point $x$, then we have \begin{align} \phi(x)_\Lambda = U(\Lambda)^{-1} \phi(x) U(\Lambda) \tag{$\star\star$} \end{align} The transformation law used in QFT then follows by demanding that $(\star\star)$, which derives from the notion of transforming a linear operator on $\mathcal H$, agrees with the notion $(\star)$ of transforming a field in classical field theory. Explicitly, in this notation, if we demand that \begin{align} \phi(x)_\Lambda = \phi_\Lambda(x), \end{align} then we obtain the desired definition of a Lorentz scalar field: \begin{align} U(\Lambda)^{-1} \phi(x) U(\Lambda) = \phi(\Lambda^{-1}x). \end{align}


Note.


The notion of scalar, vector, and tensor fields used in QFT might remind you of the notions of scalar, vector, and tensor operators used in the non-relativistic quantum mechanics of, for example, particles with angular momentum. This is not an accident; they are closely related concepts.


The additional complication we get in QFT is that fields are operator-valued functions of spacetime, not just operators, so we have to decide what to do with the spacetime argument of the field when we transform. We dealt with this complication above by essentially combining the notion of tensor operator in quantum mechanics, with the notion of field transformation in classical field theory.


For more mathematical remarks on tensor operators on Hilbert spaces, see


Tensor Operators


electricity - Why do electric sparks appear blue/purple?



Electric sparks tend to appear blue or purple or white in color. Why?



Answer



Air is normally a bad conductor of electricity, but with enough voltage it can be converted to plasma, which is a good conductor. In a plasma, electrons constantly bind to and leave atoms. Each time an electron binds to an atom, it emits the excess energy as light, so the plasma glows the color of a photon with that energy. There are a few different energy levels involved, so the spectrum has a few different peaks. The final color depends on the gas: neon, for example, looks red or red-orange. Air ends up looking blue, so electricity passing through air makes it glow blue.


special relativity - What's the purpose of the arbitrary additive constants in Einstein's Inertia of Energy Paper?


In Einstein's paper: Does the Inertia of a Body Depend upon its Energy content? he introduces arbitrary additive constants whose purpose I'm confused about.



The paper has a frame $(x,y,z)$ where a body at rest emits plane waves of light in opposite directions, each with an energy $\frac 1 2 L$, at angles $+\theta$, $-\theta$ to the $x$ axis, where the energy of the body before and after transmission is $E_0$ and $E_1$. This process is measured in a frame $(\xi,\eta,\zeta)$ moving along $x$ with velocity $v$, where the energies of the body before and after are $H_0$ and $H_1$. Subtracting the total energy in frame $(x,y,z)$ from that in frame $(\xi,\eta,\zeta)$ gives


$$H_0 − E_0 − (H_1 − E_1) = L\left(\frac {1} {\sqrt{1 - \frac {v^2}{c^2}}} -1\right)$$


He writes with my emphasis



The two differences of the form H − E occurring in this expression have simple physical significations. H and E are energy values of the same body referred to two systems of co-ordinates which are in motion relatively to each other, the body being at rest in one of the two systems (system (x, y, z)). Thus it is clear that the difference H − E can differ from the kinetic energy K of the body, with respect to the other system $(\xi,\eta,\zeta)$, only by an additive constant C, which depends on the choice of the arbitrary additive constants of the energies H and E



Obviously the same energy scale is used in both frames when measuring $E$ and $H$, so what's the purpose of these arbitrary additive constants?



Answer



Suppose I have a body that (on some measurement scale) has an energy of $E$ in the rest frame. It will have energy $H=\gamma E$ in the moving frame, so the difference is $H-E = E(\gamma - 1)$. Now suppose I want to change to a different energy scale with the same units but a different additive constant, so that I now consider the rest energy to be $E'=E+E_0$, with $E_0$ being an arbitrary constant. Now the energy in the moving frame will be $H'= \gamma E' = \gamma(E+E_0) = H + \gamma E_0$, and the difference will be $H'-E' = (E+E_0)(\gamma - 1) = (H-E) + E_0(\gamma - 1)$. I think that Einstein's $C$ is the $E_0(\gamma-1)$ term here.
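
The bookkeeping in this answer is easy to check symbolically. A small sketch with sympy (the symbol names are mine):

```python
import sympy as sp

# Sketch: verify that shifting the rest-frame energy zero by an arbitrary
# constant E0 shifts the difference H - E by C = E0*(gamma - 1), as argued.
E, E0, gamma = sp.symbols('E E0 gamma', positive=True)

H = gamma * E                      # moving-frame energy, original scale
H_new = gamma * (E + E0)           # same, after shifting the zero by E0
C = (H_new - (E + E0)) - (H - E)   # change in the difference H - E

print(sp.simplify(C - E0 * (gamma - 1)))   # -> 0
```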


Thursday, December 25, 2014

quantum mechanics - Books for linear operator and spectral theory



I need some books to learn the basis of linear operator theory and the spectral theory with, if it's possible, physics application to quantum mechanics. Can somebody help me?



Answer



I think a good, and classic, reference for your case is the following:


  • Kreyszig, Introductory Functional Analysis with Applications

The very last chapter of Kreyszig deals with Quantum Mechanics.


And, once you've learned how to "translate" the language of Functional Analysis into that of Quantum Mechanics, you can go to more advanced texts in specific topics.


general relativity - Is there a fundamental reason why gravitational mass is the same as inertial mass?


The principle of equivalence - that, locally, you can't distinguish between a uniform gravitational field and a non-inertial frame accelerating in the sense opposite to the gravitational field - is dependent on the equality of gravitational and inertial mass. Is there any deeper reason for why this equality of "charge corresponding to gravitation" (that is, the gravitational mass) and the inertial mass (that, in Newtonian mechanics, enters the equation $F=ma$) should hold? While it has been observed to be true to a very high precision, is there any theoretical backing or justification for this? You could, for example (I wonder what physics would look like then, though), have the "charge corresponding to electromagnetic theory" equal to the inertial mass, but that isn't seen to be the case.




quantum mechanics - Weyl Ordering Rule


While studying Path Integrals in Quantum Mechanics I have found that [Srednicki: Eqn. no. 6.6] the quantum Hamiltonian $\hat{H}(\hat{P},\hat{Q})$ can be given in terms of the classical Hamiltonian $H(p,q)$ by


$$\hat{H}(\hat{P},\hat{Q}) \equiv \int {dx\over2\pi}\,{dk\over2\pi}\, e^{ix\hat{P} + ik\hat{Q}} \int dp\,dq\,e^{-ixp-ikq}\,H(p,q)\; \tag{6.6}$$


if we adopt the Weyl ordering.


How can I derive this equation?



Answer



Let the position and momentum operators in $n$ phase-space dimensions be collectively denoted $\hat{Z}^I$, and let the corresponding symbols be denoted $z^{I}$, where $I\in\{1,\ldots,n\}$. The operator $\hat{f}(\hat{Z})$ corresponding to the Weyl-symbol $f(z)$ is


$$ \hat{f}(\hat{Z})~\stackrel{\begin{matrix}\text{symmetri-}\\ \text{zation}\end{matrix}}{=}~ \left.\sum_{m=0}^{\infty}\frac{1}{m!}\left[\hat{Z}^1\frac{\partial}{\partial z^1}+\ldots +\hat{Z}^n\frac{\partial}{\partial z^n} \right]^m f(z)\right|_{z=0} \qquad $$ $$~\stackrel{\begin{matrix}\text{Taylor}\\ \text{expan.}\end{matrix}}{=}~ \left.\exp\left[\sum_{I=1}^n\hat{Z}^I\frac{\partial}{\partial z^I}\right] f(z)\right|_{z=0} \qquad $$ $$~=~\int_{\mathbb{R}^{n}} \! d^{n}z~\delta^{n}(z)~ \exp\left[\sum_{I=1}^n\hat{Z}^I\frac{\partial}{\partial z^I}\right] f(z) $$ $$ ~\stackrel{\delta\text{-fct}}{=}~\int_{\mathbb{R}^{2n}} \! \frac{d^{n}z~d^{n}k}{(2\pi)^{n}} \exp\left[-i\sum_{J=1}^n k_Jz^J\right] \exp\left[\sum_{I=1}^n \hat{Z}^I\frac{\partial}{\partial z^I}\right] f(z)$$ $$~\stackrel{\text{int. by parts}}{=}~\int_{\mathbb{R}^{2n}} \! \frac{d^{n}z~d^{n}k}{(2\pi)^{n}} f(z)~ \exp\left[-\sum_{I=1}^n\hat{Z}^I\frac{\partial}{\partial z^I}\right] \exp\left[-i\sum_{J=1}^n k_Jz^J\right] $$ $$~=~\int_{\mathbb{R}^{2n}} \! \frac{d^{n}z~d^{n}k}{(2\pi)^{n}} f(z)~ \exp\left[i\sum_{I=1}^n k_I\hat{Z}^I\right] \exp\left[-i\sum_{J=1}^n k_Jz^J\right] $$ $$~\stackrel{\text{BCH}}{=}~\int_{\mathbb{R}^{2n}} \! \frac{d^{n}z~d^{n}k}{(2\pi)^{n}} f(z)~ \exp\left[i\sum_{I=1}^n k_I(\hat{Z}^I-z^I)\right].$$



The above manipulations make sense for a sufficiently well-behaved function $z\mapsto f(z)$.


Example: If the Weyl-symbol is of the form $f(z)=g\left(\sum_{I=1}^n k_I z^I\right)$ for some analytic function $g:\mathbb{C}\to \mathbb{C}$, then the corresponding operator is $\hat{f}(\hat{Z})=g\left(\sum_{I=1}^n k_I\hat{Z}^I\right)$.
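
As a sanity check on the symmetrization in the first line of the derivation (my own illustration, using sympy's noncommutative symbols): expanding $(x\hat P + k\hat Q)^2$ produces the symmetric combination $\hat P\hat Q + \hat Q\hat P$, so the Weyl-ordered operator for the classical monomial $pq$ is $(\hat P\hat Q + \hat Q\hat P)/2$.

```python
import sympy as sp

# Sketch: the m = 2 term of the symmetrization for the simplest mixed case.
x, k = sp.symbols('x k')                     # commuting coefficients
P, Q = sp.symbols('P Q', commutative=False)  # operators

lhs = sp.expand((x * P + k * Q)**2)
rhs = sp.expand(x**2 * P**2 + x * k * (P * Q + Q * P) + k**2 * Q**2)
print(sp.expand(lhs - rhs))   # -> 0, i.e. the cross term is symmetric
```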


Wednesday, December 24, 2014

optics - How does one calculate the polarization state of random light after total internal reflection


How does one calculate the polarization state of random light after having been totally reflected by a single dielectric interface? Please consider pure specular reflexions from a plane interface between two dielectric mediums of indexes $n_1,\,n_2$ when the angle of incidence $\theta_1$ is greater than the critical angle $\arcsin(n_2/n_1)$.



Answer



I'll stick to pure total internal, specular reflexion in this answer.


When TIR happens, both linear polarisation components are fully reflected, but the phase change (the Goos-Hänchen shift) is different for the two states. In scalar theory the Goos-Hänchen shift (see my answer here) is the same for the two states, but the full vector theory shows a subtle difference. What this means practically is that the two polarisation states seem to reflect from ever so slightly different depths into the denser medium beyond the totally internally reflecting interface.


The Fresnel equations still apply in this situation. Now, of course, we get $\sin\theta_t>1$ so that $\cos\theta_t = \sqrt{1-\sin\theta_t^2}$ is imaginary. We interpret the trigonometric functions exactly as they are interpreted in the derivation of the Fresnel equations (e.g. in Reference [1]), to wit, the sine and cosine are the ratios $k_x/k$ and $k_z/k$ of the wavevector components tangential and normal to the interface, respectively. The cosine is imaginary beyond the interface, so the wave is evanescent and exponentially decaying with depth as in my answer cited above. So, from the Fresnel equations:


$$r_s = \frac{n_1 \cos \theta_i - n_2 \cos \theta_t}{n_1 \cos \theta_i + n_2 \cos \theta_t} = \frac{n_1 c_1 - i\, n_2 c_2}{n_1 c_1 + i\,n_2 c_2}$$


$$t_s = \frac{2 n_1 \cos \theta_i}{n_1 \cos \theta_i + n_2 \cos \theta_t}= \frac{2 n_1 c_1}{n_1 c_1 + i\,n_2 c_2}$$



$$r_p = \frac{n_2 \cos \theta_i - n_1 \cos \theta_\text{t}}{n_1 \cos \theta_t + n_2 \cos \theta_i} = \frac{n_2 c_1 -i\, n_1 c_2}{n_1 c_2 + i\, n_2 c_1} = -i$$


$$t_p = \frac{2 n_1\cos \theta_i}{n_1 \cos \theta_t + n_2 \cos \theta_i}= \frac{2 n_1 c_1}{n_1 c_2 + i\,n_2 c_1}\tag{1}$$


where $c_1 = \cos\theta_1$ and $c_2 = \text{Im}(\cos\theta_2)$ are both real numbers. Take heed that the reflexion co-efficients are either complex numbers of the form $r_s=z^*/z$ or the constant $r_p = -i$, and thus have unity magnitude, so all the power is reflected. The Goos-Hänchen shifts for the two orthogonal linear polarisations are:


$$\phi_s = -2\,\arg(n_1 c_1 + i\,n_2 c_2) = -2\,\arctan\left(\frac{n_2 c_2}{n_1 c_1}\right)$$


$$\phi_p = -\frac{\pi}{2}\tag{2}$$


which are almost equal in most cases: they deviate more significantly from one another for highly glancing angles $\theta_1\approx\pi/2$ (which condition invalidates the scalar theory of my other answer).


In particular, as the incidence angle approaches $\pi/2$ and the reflexion becomes highly grazing, $\phi_s\to-\pi$ and $\phi_p\to-\pi/2$. That is, the TIR mechanism mimics a quarter wave plate.


So, for your random polarisation, you are going to either represent it as a known, pure polarisation with a $2\times1$ Jones vector $X = \left(\begin{array}{c}x_p\\x_s\end{array}\right)$ and transform it by:


$$X\mapsto U\,X;\;\text{where}\;U = \left(\begin{array}{cc}e^{i\,\phi_p}&0\\0&e^{i\,\phi_s}\end{array}\right)$$
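
A numerical sketch of the Jones-calculus step, applying the phase formulas quoted above (the glass-to-air interface and the 60-degree incidence angle are illustrative choices of mine, not from the answer):

```python
import numpy as np

# Sketch: TIR phase shifts and their action on 45-degree linear light.
n1, n2 = 1.5, 1.0
theta1 = np.radians(60.0)            # > critical angle arcsin(n2/n1) ~ 41.8 deg

c1 = np.cos(theta1)
sin_t = n1 * np.sin(theta1) / n2     # Snell's law; > 1 under TIR
c2 = np.sqrt(sin_t**2 - 1.0)         # Im(cos(theta_t))

phi_s = -2.0 * np.arctan(n2 * c2 / (n1 * c1))
phi_p = -np.pi / 2.0                 # as quoted above

U = np.diag([np.exp(1j * phi_p), np.exp(1j * phi_s)])   # Jones matrix
X = np.array([1.0, 1.0]) / np.sqrt(2.0)                 # (p, s) components
Xr = U @ X

print(np.degrees(phi_s))             # ~ -96 degrees for these numbers
print(np.abs(Xr))                    # magnitudes unchanged: TIR is lossless
```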


or if the light is depolarised (a mixed state) you will use the Mueller calculus / density matrix formalism:



$$\rho\mapsto U\,\rho\,U^\dagger;\;\text{where}\;\rho = \sum\limits_{j=0}^3 s_j \sigma_j$$


$\sigma_0 = {\rm id}$ is the $2\times 2$ identity matrix, the $\sigma_j$ are the Pauli spin matrices, and the co-efficients $s_j$ are the four Stokes parameters as described in my answer on dealing with calculations with depolarised light.


Reference:


[1] §1.5 "Reflexion and Refraction of a Plane Wave" in the seventh edition of Born and Wolf, "Principles of Optics".


quantum field theory - Understanding the energy density of the the false vacuum


This note by Alan Guth says that



The false vacuum, however, cannot rapidly lower its energy density, so the energy density remains constant and the total energy increases. Since energy is conserved, the extra energy must be supplied by the agent that pulled on the piston.






  1. Why is it that the false vacuum cannot lower its energy rapidly?




  2. Do we know of a substance which behaves in this unusual fashion? In short, why doesn't the energy density dilute with expansion?





Answer



From continuity equation ($\varepsilon$ - energy density, $p$ - pressure, $H$ - Hubble parametre) \begin{equation} \dot{\varepsilon}=-3H(p+\varepsilon) \end{equation} you can see that for a substance with negative pressure $p=-\varepsilon$, above equation gives $\varepsilon=const$, with or without expansion. Such equation of state is exactly what we need from the inflaton field during the inflation. Since pressure and energy density are defined as $$ \varepsilon=\frac{1}{2}\dot{\phi}^2+V(\phi)~,~~p=\frac{1}{2}\dot{\phi}^2-V(\phi)~, $$ when potential energy dominates, $V(\phi)\gg\frac{1}{2}\dot{\phi}^2$, we have $p\approx-\varepsilon$.
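
This can also be seen numerically. Here is a small sketch (my own illustration) that Euler-steps the continuity equation in $\ln a$ for a few equations of state $p = w\varepsilon$; only $w=-1$, the false-vacuum case $p=-\varepsilon$, leaves the energy density constant.

```python
import numpy as np

# Sketch: evolve d(eps)/d(ln a) = -3*(1 + w)*eps under a tenfold expansion.
def eps_after_expansion(w, a_final, eps0=1.0, n=100_000):
    h = np.log(a_final) / n
    eps = eps0
    for _ in range(n):
        eps += h * (-3.0 * (1.0 + w) * eps)   # simple Euler step in ln a
    return eps

for w, name in [(0.0, "dust"), (1.0 / 3.0, "radiation"), (-1.0, "vacuum")]:
    # compare with the closed form eps ~ a**(-3*(1+w))
    print(name, eps_after_expansion(w, 10.0), 10.0 ** (-3.0 * (1.0 + w)))
```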


For further details see "Physical foundations of Cosmology" (inspire-hep link) by Mukhanov, or Baumann's lecture notes (pdf link, on pages 19-20 derivation of the continuity equation is given).


Tuesday, December 23, 2014

mathematical physics - Causality and natural modeling of physical systems using integral forms



I posed a closely related question here but it received a tumbleweeds award. So I thought I would post it from a different angle to see if I can elicit at least some thoughtful comments if not answers.


The modeling of many physical systems utilize the mathematical tools of calculus, by writing the relationship of physical quantities in the form of differential equations.



Considering time dependent operations of integration and differentiation, the dynamics of a physical system may be expressed in terms of one form or the other. A good example are the Maxwell Equations which are often written in both differential and integral forms.


Integral forms tend to express where the system has been up to where it is at present, while differential forms tend to express where a system is now and where it will be in the near future. So the two forms tend to imply a sense of causality.


So this brings me to my question. Since we tend to observe a causal universe (at least at a macroscopic level) are integral forms a more natural approach to modeling systems?


I'm using the word 'natural' in the sense that the nature of the universe tends to work one way vs another. In this case I'm saying nature tends to integrate rather than differentiate to propagate change. We can write our equations in differential form, solve them and predict, and they are useful tools. But isn't mother nature's path one of integration?


I tend to believe this is so by my experience in simulating systems. Simulating systems in an integral form rather than differential form always seems to lead to better results.
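
One numerical face of this intuition (my own illustration, not a claim about what nature does): differentiating a noisy signal amplifies the noise, while integrating it suppresses the noise, which is one reason integral formulations tend to behave better in simulation.

```python
import numpy as np

# Sketch: differentiate and integrate the same slightly noisy sine wave.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 2001)
dt = t[1] - t[0]
noisy = np.sin(t) + 1e-3 * rng.standard_normal(t.size)

deriv = np.gradient(noisy, dt)     # should be cos(t)
integ = np.cumsum(noisy) * dt      # should be 1 - cos(t)

print(np.max(np.abs(deriv - np.cos(t))))          # noise amplified by ~1/dt
print(np.max(np.abs(integ - (1.0 - np.cos(t)))))  # stays small
```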




diffraction - What happens to waves when they hit smaller apertures than their wavelengths?


I was wondering this for quite a long time now. Let's say you have a water wave (like ripples, not the ones you see during tsunamis) with wavelength 10 m. Imagine you put a boundary with an opening of 1 m. Will diffraction be observed? According to my research, no. But, then, what would be seen?





Monday, December 22, 2014

quantum mechanics - Are there exact analytical solutions to the electronic states of the hydrogen molecular ion $mathrm H_2^+$?


The hydrogen molecular ion (a.k.a. dihydrogen cation) $\mathrm H_2^+$ is the simplest possible molecular system, and as such you'd hope to be able to make some leeway in solving it, but it turns out that it's much harder than you'd hope. As it turns out, if you phrase it in spheroidal coordinates then the stationary Schrödinger equation for the electron (with stationary nuclei), $$ \left[-\frac12\nabla^2-\frac{1}{\|\mathbf r-\mathbf R_1\|}-\frac{1}{\|\mathbf r-\mathbf R_2\|}\right]\psi(\mathbf r)=E\psi(\mathbf r) \tag 1 $$ becomes separable, but - last I heard - the resulting equations do not admit exact analytical solutions in anything you'd call either closed form or special-function-like.


(More specifically, the separation is not as clean as in the hydrogen atom, where you get an angular and then a radial eigenvalue problem, but instead you get a coupled 'bi-eigenvalue problem' that's harder to solve.)


On the other hand, Wikipedia lists the system in its List of quantum-mechanical systems with analytical solutions with a note that there are "Solutions in terms of generalized Lambert W function", so maybe I'm missing something.


Tracing the Wikipedia references leads to arXiv:physics/0607081, which seems to me to (i) only work for the eigenvalue, not the eigenfunctions, (ii) work with generalizations of the Lambert $W$ function, and (iii) not be particularly closed-form either. However, I may be missing the end of some reference trail here.


So: are there known eigenfunctions of $(1)$ in exact analytical form, or even in terms of special functions (whose definition goes beyond "the solution of this given equation")?


If the answer to this is negative, then that's probably a very tall order to prove, since statements of the kind "there is no result of that type in the literature" are inherently hard to tie down. In that case, though, I will settle for a thorough exploration of the literature pointed at by the Wikipedia claim, and an explanation of what it does and does not provide.




Edit, given the large number (currently 8) of non-answers that this thread has received. Apparently some clarifications are in order.





  • The question of whether a given solution does or does not qualify for the description of 'analytic', 'closed-form' or 'exact' is obviously a subjective call to a nontrivial extent. However, there are a lot of interesting shades of gray between 'the solution is an elementary function' and 'if you define the special function $f$ as the solution of the equation, then the equation is solvable in terms of special functions', and I want to know where this problem sits between those two extremes.


    As such, I would like to set the bar at functions that include at least one nontrivial connection. Thus, I would argue that a direct Frobenius-method series solution is not really sufficient if it has no further analysis and no additional connections to other properties of the resulting functions. (In particular, if one wants to allow series solutions with no further connections, then it is worth considering carefully what other systems then become 'solvable' to the same degree.)




  • It is well known that there are perfectly good approximate and numerical solutions to this problem, including several that are systematically convergent; moreover, even if an analytical solution exists, those numerical and approximate solutions are probably more useful and quite possibly more accurate than the 'exact' solution. That is irrelevant to the question at hand, which is simply about how far (or lack thereof) one can take 'exact' analytical methods in quantum mechanics.



  • Obviously the Schrödinger equation at stake here is an approximation (as it ignores e.g. nuclear motion and relativistic effects such as spin-orbit coupling and other fine-structure effects), but that is irrelevant to the question of whether this specific problem has exact solutions or not.




mathematical physics - Introduction to string theory




I am in the last year of my MSc and would like to read string theory. I have the Zwiebach book, but along with it what other advanced book can be followed, which can be complementary to Zwiebach? I would like a more mathematically rigorous book or lecture notes along with Zwiebach.


Specifically, mention whether the book discusses string theory




  • Rigorously?




  • Intuitively?





What's the scope of the book? Does it cover the advanced material, e.g. matrix string theory, F-theory, string field theory, etc.? Maybe even string phenomenology?



Answer



The canonical textbook is the two-volume set by Polchinski. David Tong has very nice notes up following this text.


You should be able to find various review articles on the arXiv as well, for instance:


http://arxiv.org/abs/hep-th/0207249


http://arxiv.org/abs/hep-th/0207142


Hope that helps...


Sunday, December 21, 2014

newtonian mechanics - Pendulum in Accelerating Elevator



I have been looking for this for quite some time now. A simple pendulum undergoes SHM. Let's put that pendulum in an upward-accelerating elevator. The component of the force that drives the SHM $(mg\sin\theta)$ still stays the same in my head.


However, websites and books tell me to use $m(g+a)\sin\theta$ where $a$ is the acceleration of the elevator.


I tried to look up Free Body Diagrams, but I can't find any for the case of accelerating frames. Can someone explicitly prove this without using the flimsy argument of "Think it's a noninertial frame with a new effective g"?



Answer



Well it depends on the context of your question. If you're being introduced to General Relativity, then you're just going to assume, in the spirit of the equivalence principle, that gravity and the acceleration cannot be told apart from the pendulum's standpoint, so the acceleration is obviously $a+g$.


If you need to do it from first principles in a Newtonian setting, draw a free body diagram of the bob. First, let's do the unaccelerated pendulum. On the FBD, if you resolve the tension in the thread holding up the bob $(-T\,\sin\theta,\,T\,\cos\theta)$ together with the weight $(0,\,-m\,g)$ into horizontal and vertical components, you get:


$$-T\,\sin\theta = m\,\ddot{x}$$ $$T\,\cos\theta - m\,g = m\,\ddot{y}$$


but now, if you do it again with the bob and thread system accelerating upwards with constant acceleration $a$, then the $y$-component of the acceleration measured relative to the "inertial" (in Newtonian gravity) frame stationary wrt the ground is $\ddot{y}+a$ whilst $\ddot{x}$ is unaffected. So now, put these back into the equations above, and you find you get the same as the first set but with $g$ replaced by $g+a$.
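
The effective-$g$ conclusion is also easy to verify numerically: a minimal sketch (illustrative numbers of my own) that integrates the bob's equation of motion $\ddot\theta = -((g+a)/L)\sin\theta$ in the elevator and checks the small-angle period against $2\pi\sqrt{L/(g+a)}$.

```python
import numpy as np

# Sketch: velocity-Verlet integration of a pendulum with effective gravity g + a.
g, a, L = 9.81, 5.0, 1.0
omega2 = (g + a) / L

def measured_period(omega2, theta0=0.05, dt=1e-4, t_max=5.0):
    theta, w, t = theta0, 0.0, 0.0
    crossings = []                      # downward zero crossings: one per period
    while t < t_max and len(crossings) < 2:
        acc = -omega2 * np.sin(theta)
        w_half = w + 0.5 * dt * acc
        theta_new = theta + dt * w_half
        w = w_half + 0.5 * dt * (-omega2 * np.sin(theta_new))
        if theta > 0.0 >= theta_new:
            crossings.append(t + dt)
        theta = theta_new
        t += dt
    return crossings[1] - crossings[0]

print(measured_period(omega2), 2.0 * np.pi / np.sqrt(omega2))  # agree closely
```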


energy - Difference between expected $p_0$-value and observed $p_0$-value?


In the ATLAS paper for the Higgs discovery, they used two different kinds of $p_0$-values. One is the expected $p_0$-value, the other is the observed $p_0$-value.


What's the difference between the expected $p_0$-value and observed $p_0$-value?


How can one expect a $p_0$-value without a real experiment? It seems like the expected $p_0$-value is from simulation, assuming a Higgs of a given mass. If so, why does the observed $p_0$-value have a large discrepancy with the expected one at 125 GeV?


[figure: observed and expected local $p_0$-values as a function of the Higgs mass, from the ATLAS paper]



Answer




If there is no resonance there, how will the data for the invariant mass look?


[figure: ATLAS diphoton invariant-mass distribution; the lower curve shows the data minus the fitted background]


The lower curve is an early type of the plot you are showing, just the difference with the background so that the resonance can be seen. The solid line shows the background, i.e. Monte Carlo events generated with no Higgs in the model, which is the basic use of Monte Carlo when checking against data. One can extract the statistical probability of the curve being above the background, i.e. how many standard deviations away from it the data are, and this is connected mathematically with the probability of having this resonance.


In the image you show, it is the probabilities themselves that are extracted for each year of data: the dashed lines are for Monte Carlo events generated without a resonance, and the solid lines are clearly declared as data, summed over consecutive years of data taking.


On the right axis, $\sigma$ is the standard deviation from the null result for each set of data displayed. The little circles are the published $\sigma$. One gives the fitted mass and the standard deviations from the background in the publication for the PDG tables.
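
For reference, the mapping between the $p_0$ axis and the $\sigma$ axis in such plots is just the one-sided Gaussian tail probability. A minimal sketch using only the Python standard library:

```python
from statistics import NormalDist

# Sketch: convert between a one-sided local p0 and a significance in sigma,
# z = Phi^{-1}(1 - p0), where Phi is the standard normal CDF.
def p0_to_sigma(p0):
    return NormalDist().inv_cdf(1.0 - p0)

def sigma_to_p0(z):
    return 1.0 - NormalDist().cdf(z)

print(sigma_to_p0(5.0))                 # ~2.9e-7: the "5 sigma" discovery p0
print(p0_to_sigma(sigma_to_p0(5.0)))    # round trip back to ~5
```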


I guess that the Monte Carlo background has a gradual enhancement at the resonance after the first year, because of the "look elsewhere" effect, to eliminate the statistical bias of checking at exactly this invariant mass for a resonance.


homework and exercises - Does a rotating disk develop a potential difference between the centre and rim?


This stems from thinking about the question If a perfect conductor were to move, what happens to the electrons?.


Suppose we have a rotating disk with no external magnetic field, so this is not a homopolar generator/Faraday disk experiment. The rotation creates a difference in potential energy between the centre and the rim, so does this mean that if we connect a wire (with suitable brushes) between the centre and the rim electrons will flow from the centre through the disk to the rim then back through the wire? That is, does the rotation create an electrical potential difference between the centre and the rim?


It seems obvious to me that the answer is yes, however I have never seen the calculation done. Attempts to Google it fail because the results are swamped by articles on Faraday disks and/or homopolar generators.



Answer



Electrons in a conducting disk, in order to maintain equilibrium, must experience a centripetal force equal to the local change in potential energy with respect to a change in radius; that is,


$$ m_e\omega^2 r = -e{d\phi\over dr} $$


After integrating, we get a potential difference between the center and a point R out


$$ \Delta\phi = -{m_e\omega^2 R^2\over 2e} $$



A conducting disk spinning at a rate of six million radians per second should generate about one volt of potential ten centimeters out from the center. I hope this was helpful. ;)
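
A quick numerical check of that closing figure, plugging into the formula just derived (constants in SI units):

```python
import math

# Sketch: Delta_phi = m_e * omega^2 * R^2 / (2 e) for the quoted numbers.
m_e = 9.109e-31          # electron mass, kg
e = 1.602e-19            # elementary charge, C
omega = 6.0e6            # rad/s
R = 0.10                 # 10 cm

delta_phi = m_e * omega**2 * R**2 / (2.0 * e)
print(delta_phi)         # ~1.02 V, as claimed
```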


general relativity - Are black holes very dense matter or empty?


The popular description of black holes, especially outside the academia, is that they are highly dense objects; so dense that even light (as particle or as waves) cannot escape it once it falls inside the event horizon.


But then we hear things like: black holes are really empty, as the matter is no longer there. It was formed due to highly compact matter, but now the energy of that matter that formed it, and of whatever fell into it thereafter, is converted into the energy of warped space-time. Hence, we cannot speak of extreme matter density but only of extreme energy density. Black holes are then empty, given that emptiness is the absence of matter. Aren't these two descriptions, highly dense matter and emptiness, contradictory?


Also, if this explanation is true, it implies that if enough matter is gathered, matter ceases to exist.


(Sorry! Scientifically and Mathematically immature but curious amateur here)



Answer



The phrase black hole tends to be used without specifying exactly what it means, and defining exactly what you mean is important to answer your question.


The archetypal black hole is a mathematical object discovered by Karl Schwarzschild in 1915 - the Schwarzschild metric. The curious thing about this object is that it contains no matter. Technically it is a vacuum solution to Einstein's equations. There is a parameter in the Schwarzschild metric that looks like a mass, but this is actually the ADM mass i.e. it is a mass associated with the overall geometry. I suspect this is what you are referring to in your second paragraph.


The other important fact you need to know about the Schwarzschild metric is that it is time independent i.e. it describes an object that doesn't change with time and therefore must have existed for an infinite time in the past and continue to exist for an infinite time into the future. Given all this you would be forgiven for wondering why we bother with such an obviously unrealistic object. The answer is that we expect the Schwarzschild metric to be a good approximation to a real black hole, that is a collapsing star will rapidly form something that is in practice indistinguishable from a Schwarzschild black hole - actually it would form a Kerr black hole since all stars (probably) rotate.


To describe a real star collapsing you need a different metric. This turns out to be fiendishly complicated, though there is a simplified model called the Oppenheimer-Snyder metric. Although the OS metric is unrealistically simplified we expect that it describes the main features of black hole formation, and for our purposes the two key points are:





  1. the singularity takes an infinite coordinate time to form




  2. the OS metric can't describe what happens at the singularity




Regarding point (1): time is a complicated thing in relativity. Someone watching the collapse from a safe distance experiences a different time from someone on the surface of the collapsing star and falling with it. For the outside observer the collapse slows as it approaches the formation of a black hole and the black hole never forms. That is, it takes an infinite time to form the black hole.


This isn't the case for an observer falling in with the star. They see the singularity form in a finite (short!) time, but ... the Oppenheimer-Snyder metric becomes singular at the singularity, and that means it cannot describe what happens there. So we cannot tell what happens to the matter at the centre of the black hole. This isn't just because the OS metric is a simplified model, we expect that even the most sophisticated description of a collapse will have the same problem. The whole point of a singularity is that our equations become singular there and cannot describe what happens.



All this means that there is no answer to your question, but hopefully I've given you a better idea of the physics involved. In particular matter doesn't mysteriously cease to exist in some magical way as a black hole forms.


gravity - Wouldn't dark matter throw off the calculation of Earth's 'light' mass and estimates of its composition?


The Cavendish experiment first determined the mass of the Earth and (arguably) the gravitational constant. However, given the ubiquitous nature of dark matter, it seems reasonable that at least some of Earth's total mass comes from dark matter accumulated at the center of the Earth.


If this is the case, Earth's calculated mass would include the masses of both 'light' matter and dark matter. The Cavendish experiment would be offset less by this, because most of the dark matter in the Earth would be near its gravitational center (which is far from the site of the experiment). So $G$ would be better estimated than the light mass of the Earth.



This would all be academic, if it weren't for the fact that models of the interior of the Earth assume that all matter is light matter (I presume).


We know the masses of individual electrons and protons to high precision from particle physics, as well as the larger masses of atoms bound by the strong force; this knowledge can be used to estimate what materials Earth contains given its density (from its mass and size). But the true light-matter density of the Earth could be significantly less than Earth's combined mass would suggest. It seems to me that we could be greatly overestimating the amount of light matter inside the Earth, leading to an overly dense geologic model of the Earth's core.


Is this possible? Is it likely? If not, why not?



Answer



Dark matter does not readily "accumulate". If(?) it exists then it interacts very weakly with normal matter and is primarily influenced by gravity. The Earth's gravity is far too small to make a local concentration of dark matter. The local dark matter would be moving in the Galactic potential at speeds similar to that of the Sun around the Galaxy ($\sim 250$ km/s); this is too fast for it to be gravitationally captured by the Earth, and the cross-section for inelastic interaction by any other means is thought to be too small (this is why it is called dark matter - there are no electromagnetic interactions) to capture the dark matter. The same may not be true for the Sun, which offers a deeper gravitational potential and a "thicker" target (e.g. Vincent et al. 2015).


In fact there is little dark matter in the local disk plane of our Galaxy at all. It is estimated that the local dark matter density is around $\sim 0.01$ $M_{\odot}$/pc$^3$ (Garbari et al. 2012, Bovy & Tremaine 2012), corresponding to only a few $10^{-22}$ kg/m$^3$. For comparison, the density of the interplanetary medium is about 100 times greater.
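A quick back-of-envelope calculation makes both points concrete (the 250 km/s speed and the $\sim 3\times10^{-22}$ kg/m$^3$ density are the rough figures quoted above, not precise values):

```python
import math

# Rough inputs (assumed for this estimate):
G = 6.674e-11          # m^3 kg^-1 s^-2
M_earth = 5.97e24      # kg
R_earth = 6.371e6      # m
rho_dm = 3e-22         # kg/m^3, "a few 1e-22 kg/m^3" local dark-matter density

# Escape velocity at Earth's surface: v_esc = sqrt(2 G M / R)
v_esc = math.sqrt(2 * G * M_earth / R_earth)
print(f"Earth escape velocity: {v_esc / 1e3:.1f} km/s")  # ~11.2 km/s, far below 250 km/s

# Dark matter contained in Earth's entire volume at the local density
V_earth = 4 / 3 * math.pi * R_earth**3
m_dm = rho_dm * V_earth
print(f"Dark matter inside Earth's volume: {m_dm:.2f} kg")
```

Even if every bit of that dark matter were somehow trapped, it amounts to well under a kilogram, utterly negligible next to Earth's $6\times10^{24}$ kg.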


Saturday, December 20, 2014

thermodynamics - What is the differentiating factor between work and heat?


Although work and heat do the same thing (increase or decrease the internal energy of the system), there is still a fundamental difference between them. For example, the way in which entropy is defined is a very good way to differentiate between work and heat. But why is there such a distinction between the two things? Is it a limitation of Newtonian mechanics that it never accounted for something like heat, which could also change the energy of the system? Is the term "thermodynamical work" or "hidden work" suitable for heat?




visible light - Is it (theoretically) physically possible to project an image into thin air?


Is there some law of physics that strictly prohibits the projection of 2D or 3D images into thin air (such as holograms in movies) or is a solution to achieve this still up for grabs by an eventual discoverer?




Is there a field equation which can reduce into all three flavors of spin (zero, one, one half)?


Is there a known particle field equation of a similar form $$ \begin{equation} (\Gamma^n \pi_n)^2 \Psi = (mc)^2 \Psi \tag{1} \end{equation} $$ such that by reducing the number of degrees of freedom for the spinor $\Psi$ into a spinor of lesser degrees of freedom, such as a scalar $\psi_0$, two three-vectors $\boldsymbol{\psi}_\pm$ or two two-vectors $\boldsymbol{\phi}_\pm$, it reduces Eq. 1 into either ...




  • a spin zero field equation $$ \begin{equation} \pi^n \pi_n \psi_0 = (mc)^2 \psi_0, \tag{2} \end{equation} $$

  • a spin one field equation $$ \begin{equation} (I\pi_0\pm i \boldsymbol{\pi} \times) (I\pi_0\mp i \boldsymbol{\pi} \times) \boldsymbol{\psi}_ \pm = (mc)^2 \boldsymbol{\psi}_ \pm \tag{3} \end{equation} $$

  • or a spin 1/2 field equation $$ \begin{equation} (I\pi_0\pm\boldsymbol{\sigma}\cdot\boldsymbol{\pi}) (I\pi_0\mp\boldsymbol{\sigma}\cdot\boldsymbol{\pi}) \boldsymbol{\phi}_\pm = (mc)^2 \boldsymbol{\phi}_\pm? \tag{4} \end{equation} $$


In these expressions $\pi_n$ is the four-component momentum operator which includes the electromagnetic four-potential interaction $A_n$ with the particle's charge $q$ written as $$ \begin{equation} \pi_n = i\hbar \partial_n - q A_n , \tag{5} \end{equation} $$ and $$ \begin{equation} \boldsymbol{\pi} = -i\hbar \boldsymbol{\nabla} - q \boldsymbol{A} \tag{6} \end{equation} $$ uses bold to indicate a Euclidean vector, specific to 3-components. The three two-by-two matrices $\boldsymbol{\sigma}$ in Eq. 4 are the Pauli spin matrices.
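For what it's worth, the algebraic identity that makes the spin-1/2 reduction plausible is $(\boldsymbol{\sigma}\cdot\mathbf a)(\boldsymbol{\sigma}\cdot\mathbf b) = (\mathbf a\cdot\mathbf b)I + i\,\boldsymbol{\sigma}\cdot(\mathbf a\times\mathbf b)$, which can be checked symbolically for commuting components (the physical $\boldsymbol{\pi}$ has non-commuting components, which is where the magnetic moment term enters):

```python
import sympy as sp

# Pauli matrices
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])
sigma = [s1, s2, s3]

# Commuting vector components (an idealization of pi's components)
a = sp.symbols('a1 a2 a3')
b = sp.symbols('b1 b2 b3')

sdota = sum((ai * si for ai, si in zip(a, sigma)), sp.zeros(2, 2))
sdotb = sum((bi * si for bi, si in zip(b, sigma)), sp.zeros(2, 2))

dot = sum(ai * bi for ai, bi in zip(a, b))
cross = [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

lhs = sp.expand(sdota * sdotb)
rhs = sp.expand(dot * sp.eye(2)
                + sp.I * sum((ci * si for ci, si in zip(cross, sigma)), sp.zeros(2, 2)))
print(sp.expand(lhs - rhs))  # Matrix([[0, 0], [0, 0]])
```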




Friday, December 19, 2014

Calculating curvature of spacetime when energy is present



I am only about halfway through studying SR and GR, and I am not yet familiar with a formula to calculate the curvature of spacetime when energy is present. To be more specific, I want to calculate the curvature when electrical energy is present.


Also, I would like to know how energy influences curvature of an object of certain mass charged with electrical energy (e.g. an iron object).



Answer



As said in the comments, you need to use Einstein's equations (no cosmological constant for simplicity):


$$R_{\mu\nu} - \frac12 R g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}$$


Your energy goes into the energy-momentum tensor $T_{\mu\nu}$; in particular, there is a formula which you can use to find the energy-momentum tensor of an electromagnetic field. The left hand side contains $R_{\mu\nu}$ and $R$, which are very complicated functions of the metric tensor $g_{\mu\nu}$ and its derivatives. Since all the tensors here are symmetric, this is a system of 10 coupled nonlinear partial differential equations which you can solve in principle to find the metric tensor.


In practice, almost no one does that. If you think that the curvature will be small then you can get an approximate version of the equation that is linear and relatively straightforward to solve. If you can't do that then you will either need some symmetry to simplify the metric tensor (such as spherical symmetry for the Schwarzschild solution), or solve the equations numerically, which isn't easy either. Wikipedia has some information on the subject.
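As an illustration of what "solve in principle" involves, here is a sketch (conventions and symbols are my own, not from the answer) that uses sympy for the simplest special case: in vacuum, $T_{\mu\nu}=0$ and the equations reduce to $R_{\mu\nu}=0$, which the Schwarzschild metric satisfies.

```python
import sympy as sp

# Coordinates and Schwarzschild radius
t, r, th, ph = sp.symbols('t r theta phi')
rs = sp.symbols('r_s', positive=True)
x = [t, r, th, ph]

f = 1 - rs / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)  # Schwarzschild metric
ginv = g.inv()

# Christoffel symbols: Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sp.simplify(sp.Rational(1, 2) * sum(
    ginv[a, d] * (sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c]) - sp.diff(g[b, c], x[d]))
    for d in range(4)))
    for c in range(4)] for b in range(4)] for a in range(4)]

# Ricci tensor: R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                        + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    return sp.simplify(sum(
        sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
        + sum(Gamma[a][a][d] * Gamma[d][b][c] - Gamma[a][c][d] * Gamma[d][b][a]
              for d in range(4))
        for a in range(4)))

R = sp.Matrix(4, 4, ricci)
print(R)  # the zero matrix: the vacuum equations R_{mu nu} = 0 hold
```

Repeating this with a generic metric ansatz, rather than checking a known solution, is exactly the "10 coupled nonlinear partial differential equations" problem described above.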


mathematical physics - Extension to continuous in proofs of rigid body mechanics


I'm studying rigid body mechanics and I've seen several proofs of properties related to total angular momentum, kinetic energy, etc. that all regard discrete set of points. For example, to show that in an inertial frame $\frac{\text d \mathbf {L}}{\text d t}=\mathbf{\Gamma}^{\text {ext}}$, one writes down the sum $$\mathbf{L}=\sum \mathbf{r} \times \mathbf{p},$$ and does the required derivatives.


How can such arguments be extended with rigour to continuous rigid bodies? To give an example, in "Rudin, Principles of Mathematical Analysis", in the chapter on the Riemann-Stieltjes integral, there's an example regarding the moment of inertia of a "straight-line" body, which can be defined uniquely with the Riemann-Stieltjes integral $$\int _0 ^{\ell} x^2\, \text dm(x). $$ Although it's restricted to a "straight-line" body, the result is very elegant.
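The Rudin example can be checked numerically: for a uniform rod, approximating $\int_0^\ell x^2\,\text dm(x)$ by $N$ point masses converges to the continuum answer $M\ell^2/3$ (a sketch with illustrative numbers):

```python
# Discrete-to-continuum check for a uniform rod of mass M and length L:
# m(x) = (M/L) x, so the Riemann-Stieltjes integral of x^2 dm is M L^2 / 3.
M, L = 2.0, 3.0
exact = M * L**2 / 3

def discrete_inertia(n):
    """Moment of inertia of n point masses placed at segment midpoints."""
    dm = M / n
    return sum(((i + 0.5) * L / n)**2 * dm for i in range(n))

for n in (10, 100, 1000):
    print(n, discrete_inertia(n))
print("exact:", exact)
```

The same limiting procedure is what justifies replacing sums like $\sum \mathbf r \times \mathbf p$ by integrals over $\text dm$.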


Back to the example of angular momentum, I think that one could define the total angular momentum as $$\mathbf{L} =\int _{V} \mathbf {r} \times \mathbf {v(\mathbf{r})} \,\text d m(\mathbf{r}),$$where the integral is extended over the volume of the body (actually I've never seen a definition of total angular momentum for continuous rigid bodies, since it always appears as $$\mathbf{L} = \mathbf{I}\mathbf{\omega},$$ and, of course, this is the useful identity).


So, how can the discrete arguments be extended to continuous arguments?




What would happen to matter if it was squeezed indefinitely?


I hope that this is a fun question for you physicists to answer.


Say you had a perfect piston - it's infinitely strong, infinitely dense, has infinite compression ... you get the idea. Then you fill it with some type of matter, like water or dirt or something. What would happen to the matter as you compressed it indefinitely?


Edit: I'm getting some responses that it would form a black hole. For this question I was looking for something a little deeper, if you don't mind. Like if water kept getting compressed, would it eventually turn into a solid, then some sort of energy fireball cloud? I'm not as concerned about the end result, the black hole, as I am with the sequence.



Answer



You asked for process. I'm assuming infinite material strength here, as in the piston cannot be stopped (infinite force on an infinite strength material that can resist infinite temperature).



  • Solids will be compressed, resulting in lots of heat as this happens (with infinite pressure, and infinitely strong materials and thus force, the matter will give), until they reach a liquid state, gaseous state, or start losing electrons and ionizing, or just stays solid all the way up to Electron Degeneracy - it depends greatly on the substance what happens here. With current realistic materials, the piston would break. Since it doesn't break, and there's infinite force behind it, the substance gets compressed and heated anyway.


  • Liquids will be compressed, resulting in lots of heat as this happens (with infinite pressure, and infinitely strong materials and force, the matter will give) into a gas, plasma, or Electron Degeneracy (depends on substance). With current realistic materials, the piston would break. Since it doesn't break, and there's infinite force behind it, the substance gets compressed and heated anyway.

  • Gaseous substances will then easily compress, resulting in lots of heating as they do, until they heat up enough that the electrons freely float among the nuclei, and you have just made a Plasma.

  • Now at a plasma, the matter is slightly ionized (+1, +2), as the outermost electrons will have escaped, leaving net positive charges. The matter will continue to compress and heat.

  • More compression, resulting in more heat. More electrons are too energetic to orbit the nuclei, resulting in higher positive charges (+3,+4 as allowable...).

  • More compression, resulting in more heat. More electrons are too energetic to orbit the nuclei, resulting in higher positive charges (+5,+6 as allowable...).

  • More compression, resulting in more heat. More electrons are too energetic to orbit the nuclei, resulting in higher positive charges (+7,+8 as allowable... until they're all gone). At some point you will surpass electron degeneracy pressure and form:

  • Electron Degenerate matter where no electron can orbit the nuclei, but now freely traverse the highly positively charged nuclei 'soup'. Keep adding pressure, and you'll form:

  • Proton Degenerate matter where only the repulsion of the protons is holding the nuclei apart. Keep adding pressure, and you'll form:

  • Neutron Degenerate matter where the electrons and protons join and cancel, leaving you with basically a huge neutral atom full of mostly neutrons, being held apart by the quarks. Keep adding pressure, and you'll (in theory) form:

  • Quark Degenerate matter where the quarks, or at least the standard up/down quarks, can no longer hold the pressure and perhaps combine/change form. Keep adding pressure, and in theory you might form:


  • Preon Degenerate matter which would sort of be like one big subatomic particle (though you might skip this one), and finally:

  • A singularity aka Black Hole


telescopes - How do we stabilise satellites so precisely?


Look at the Hubble Ultra Deep Field photo. The stars in it are on the order of 1 arcsecond across. To an order of magnitude, this is $10^{-6}$ radians in a $10\ \text{m}$ telescope which was held steady for $10^6$ seconds.


In other words, the velocity of the aperture of the telescope around the light sensors had to be on the order of one angstrom per second.
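Spelling out that arithmetic (order-of-magnitude inputs only):

```python
import math

arcsec = math.pi / (180 * 3600)  # one arcsecond in radians, ~4.85e-6
aperture = 10.0                  # m, rough Hubble-scale telescope (assumed)
exposure = 1e6                   # s, total integration time

# To keep the smear below ~1 arcsec over the whole exposure, the allowed
# angular drift rate and corresponding linear speed of the aperture are:
omega = arcsec / exposure        # rad/s
v = omega * aperture             # m/s
print(f"angular rate: {omega:.1e} rad/s, aperture speed: {v / 1e-10:.2f} angstrom/s")
```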


Perhaps my maths is wrong, but this seems like an extraordinary feat of control. I can't quite believe it. The computer programmer in me suspects that, since the image was captured across a number of occasions, each occasion would be smeared somewhat less than a single long exposure, and some kind of correspondence-finding algorithm could align the images (and infer the drift of the telescope). In any case, even if that is what they did, the satellite is held amazingly steady.


How do we achieve this?



Answer




Actually, reaction wheels or control moment gyros are only part of the answer. Maintaining accuracy and precision on the order of what Hubble demands requires a fully integrated feedback control system of actuators and sensors. For microradian pointing, reaction wheels provide only the first stage of isolating disturbances in a multi-stage pointing control system.


Disturbances that can interfere with attitude stabilization include those from outside the spacecraft, such as magnetic anomalies and atmospheric drag in planetary orbits, or the solar wind for spacecraft further away from a planet - as examples. Or disturbances can come from the spacecraft itself, such as vibrational modes excited by solar array stepping.


Reaction wheels or CMGs can be used to change the attitude of the spacecraft, and together with feedback from gyros or inertial measurement units (IMUs), closed-loop control systems maintain the attitude to perhaps tens of microradians in the face of the disturbances.


But to get down to microradian or submicroradian stability usually requires optical components in the line of sight that compensate for the residual higher frequency jitter that the reaction wheel control system is unable to remove. A fast steering mirror for example can be tipped or tilted to re-align the optical path according to what the imaging sensor reads from the target star or galaxy.
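A minimal sketch of the closed-loop idea (all gains and numbers here are made up for illustration; real attitude control systems are far more elaborate): a proportional-derivative law commanding reaction-wheel torque holds attitude against a constant disturbance torque.

```python
# Hypothetical rigid spacecraft: I * theta'' = u + d, with PD control u.
I = 100.0             # moment of inertia, kg m^2 (assumed)
kp, kd = 50.0, 200.0  # PD gains (assumed)
d = 0.01              # constant disturbance torque, N m (assumed)

theta, omega = 1e-3, 0.0  # initial pointing error (rad) and rate (rad/s)
dt = 0.01
for _ in range(200000):   # 2000 s of simulated time, forward Euler
    u = -kp * theta - kd * omega  # control torque from the wheels
    omega += (u + d) / I * dt
    theta += omega * dt
print(f"residual pointing error: {theta:.2e} rad")
```

Note that the pure PD loop settles to a bias of $d/k_p$; real systems add integral action (or the optical fine-steering stage described above) to remove this residual.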


thermodynamics - Fermi level in a solid


I am confused as to how the Fermi level is defined within a crystal/solid. It is normally situated midway between the valence band and the conduction band; is this how it is defined, or are there other reasons this is so? In different fields, the Fermi level has different definitions (e.g. the energy needed to add one electron to a solid, the highest occupied energy level at 0 K). Do they correspond to the same thing or do they actually mean different things? Is the Fermi level different from ionization energy? Also, can the Fermi level be within a conduction band?



Answer




It is normally situated midway between the valence band and the conduction band; is this how it is defined, or are there other reasons this is so?



Not necessarily. You're right - if the definition of $\mu$ was simply that all states with energy $E<\mu$ are occupied at $T=0$, then $\mu$ could be anywhere within the band gap. To understand exactly where it should be placed, you need to consider small but nonzero $T$.



If $T$ is tiny, then


$$f(E) = \frac{1}{\exp[(E-\mu)/T]+1} \approx \begin{cases}1 - \exp[(E-\mu)/T], & E <\mu \\ \exp[-(E-\mu)/T], & E>\mu\end{cases}$$


Now consider a filled valence band with maximum energy $\epsilon$ and an empty conduction band with minimum energy $\epsilon + \Delta$, where $\Delta$ is the band gap. We need to make sure that the number of electrons in the conduction band at finite $T$ is equal to the number of electrons which have been "promoted" from the valence band.


Recall that the number of occupied states in the energy interval $(E,E+dE)$ is $g(E)\, f(E)\, dE$, where $g(E)$ is the density of states at energy $E$. The number of particles in the conduction band is approximately $$N_C = \int_{\epsilon+\Delta}^\infty g(E) \exp[-(E-\mu)/T] dE \approx g(\epsilon+\Delta) \exp[\mu/T] \int_{\epsilon+\Delta}^\infty \exp[-E/T] dE$$ $$ \approx T\cdot g(\epsilon+\Delta)\cdot \exp[-(\epsilon+\Delta-\mu)/T] $$


Whereas the number of vacancies in the valence band is approximately


$$N-N_V = \int_0^{\epsilon} g(E) \exp[-(\mu-E)/T] dE \approx g(\epsilon)\exp[-\mu/T]\int_0^\epsilon \exp[E/T]dE$$ $$= T\cdot g(\epsilon) \cdot \exp[-\mu/T] (\exp[\epsilon/T]-1) \approx T\cdot g(\epsilon) \cdot \exp[(\epsilon-\mu)/T]$$ where we've used the small-$T$ limit.


Defining the absolute activity $z\equiv \exp[\mu/T]$, equating these two expressions gives $$g(\epsilon+\Delta)\exp[-(\epsilon+\Delta)/T] \cdot z = g(\epsilon) \exp[\epsilon/T] \cdot z^{-1}$$ and so $$z^2 =\frac{g(\epsilon)}{g(\epsilon+\Delta)} \cdot \exp[(2\epsilon+\Delta)/T]$$


If $g(\epsilon)=g(\epsilon+\Delta)$ (i.e. the density of states is "symmetric" across the band gap), then this means that


$$z^2 = \exp[2\mu/T] = \exp[(2\epsilon+\Delta)/T]$$ $$\implies 2\mu = 2\epsilon+\Delta$$ $$ \implies \mu = \epsilon+\frac{\Delta}{2}$$


so $\mu$ is directly in the center of the band gap. If $g(\epsilon)\neq g(\epsilon+\Delta)$ (and why should it be?), then this is no longer true, and $\mu$ will be off-center. In general,



$$\mu = \epsilon + \frac{\Delta}{2} + \frac{T}{2}\log\left(\frac{g(\epsilon)}{g(\epsilon+\Delta)}\right)$$


At $T=0$, the Fermi level is always dead-center in the middle of the band gap; at small but nonzero $T$ (small compared to the Fermi temperature, which can still be very large in terms of our day-to-day experience), the Fermi level is slightly shifted if the density of states differs across the band gap. In particular, this happens with doped semiconductors.
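A quick numerical sanity check of the closed form (arbitrary illustrative numbers, with $k_B=1$ as in the derivation): plugging $\mu$ back into the two approximate carrier counts gives equal numbers of conduction electrons and valence holes.

```python
import math

# Illustrative numbers (energies in units where k_B = 1):
eps, Delta, T = 1.0, 0.4, 0.01
g_v, g_c = 2.0, 5.0  # densities of states g(eps) and g(eps + Delta) (assumed)

# Closed form derived above
mu = eps + Delta / 2 + (T / 2) * math.log(g_v / g_c)

# Electrons promoted to the conduction band vs. holes left behind,
# using the same low-T approximations as the derivation:
N_c = T * g_c * math.exp(-(eps + Delta - mu) / T)
N_h = T * g_v * math.exp((eps - mu) / T)
print(N_c, N_h)  # equal; mu sits below mid-gap here since g_c > g_v
```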





In different fields, the Fermi level has different definitions (e.g. the energy needed to add one electron to a solid, the highest occupied energy level at 0 K). Do they correspond to the same thing or do they actually mean different things?



First, you should be careful to distinguish the Fermi level from the Fermi energy. The former is the $\mu$ we have been discussing, and does not need to be an actually occupied energy level (after all, it can be in the middle of the band gap). The latter is well-defined only for systems of non-interacting fermions, and refers to the highest occupied energy level at $T=0$.


The Fermi level $\mu$ can be thought of as being defined by the Fermi-Dirac distribution function, which is the way I tend to think about it.



Is the Fermi level different from ionization energy?




Yes, though the closely related quantity for a solid is the work function: the energy required to move an electron from the surface of a metal to the surrounding vacuum. This is the difference between the Fermi level (which is occupied in metals) and the electric potential energy that the electron would have in vacuum. Remember that due to attractive interactions with the atomic lattice, the electron is effectively bound within the solid, which means that its total energy (kinetic + potential) is negative with respect to the vacuum.



Also, can the Fermi level be within a conduction band?



Yes - this is the case in metals. However, in metals the distinction between the valence band and the conduction band is essentially meaningless, as the partially filled band has properties of both.


quantum mechanics - Coherent states of harmonic oscillator


Why are coherent states of the harmonic oscillator called coherent? Coherent in what sense? Why are these states so special/useful?


From Wikipedia:



In physics, two wave sources are perfectly coherent if they have a constant phase difference and the same frequency.




Answer




Coherent states are eigenvectors for the (bosonic) annihilator,$$\hat a ~|\alpha\rangle = \alpha~|\alpha\rangle,$$and if we define the position and momentum quadratures as $\hat x = \hat a^\dagger + \hat a,$ $\hat p = i \hat a^\dagger - i \hat a,$ we have $[\hat x, \hat p] = 2i$ and the dimensionless Hamiltonian $\hbar\omega ~ \hat a^\dagger \hat a = \frac12 \hbar\omega~x^2 + \frac12 \hbar\omega~p^2 + \text{const.}$ to guide us. We can immediately see that in the coherent state we have $\langle x \rangle = \alpha^* + \alpha = 2 ~\Re~{\alpha}$ whereas $\langle p \rangle = i~\alpha^* - i ~ \alpha = 2 ~\Im~\alpha, $ so the position and momentum plane is basically just the complex plane $\mathbb C$ that $\alpha$ lives on.


Now this Hamiltonian of course has an eigenbasis $\hat a^\dagger \hat a ~ |n\rangle = n ~|n\rangle$ and in terms of that basis we see a recurrence: if $|\alpha\rangle = \sum_n c_n |n\rangle$ then $\alpha |\alpha\rangle = \hat a |\alpha\rangle$ implies $$c_n \sqrt{n} = \alpha c_{n-1},\text { so } c_n = c_0 \frac{\alpha^n}{\sqrt{n!}}.$$Then working out $1 = \langle \alpha|\alpha\rangle = |c_0|^2\sum_n \big(|\alpha|^2\big)^n/n! = |c_0|^2 \exp\big(|\alpha|^2\big)$ gives the normalization constant $c_0 = \exp\big(-|\alpha|^2/2\big)$.


However note that under this Hamiltonian, $|n\rangle \mapsto e^{-in\omega t} |n\rangle$ and therefore, $$|\alpha\rangle \mapsto \exp\left(-|\alpha|^2/2\right) \sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}} ~ e^{-i~\omega t~n} |n\rangle,$$which we see on the right hand side combines by normal exponent rules to form $(\alpha e^{-i\omega t})^n.$ In other words the time evolution is that $|\alpha(t)\rangle = |\alpha_0 e^{-i\omega t}\rangle,$ and our coherent state simply makes a circle on the complex plane as it evolves.


It is in this precise sense that I understand the word "coherent," it is the meaning "it stays perfectly together as it goes along its journey." It's the same way I would say "lasers are a coherent phenomenon; light by its nature wants to spread out in a $1/r^2$ law but in a laser, the different wave packets are all arranged with just the right phase differences so that they destructively interfere for the waves that are trying to get out of the beam proper, and constructively interfere in the next position of the beam."


classical mechanics - Moment of a force about a given axis (Torque) - Scalar or vectorial?

I am studying Statics and saw that: The moment of a force about a given axis (or Torque) is defined by the equation: $M_X = (\vec r \times \...