Saturday, February 28, 2015

electromagnetic radiation - Can a wave be two dimensional?


I am having a hard time picturing waves. The image that comes to mind is a bobbing device submerged in still water, which generates pulses in all directions (similarly in air). Then how can a wave be two dimensional? The classic image of an electromagnetic wave is a 3D wave function: the x-y plane represents the electric component, the z-x plane the magnetic component. But this is a wave function, so it's just a plot, and that is not how I should picture it (or is it?). So in real life, if I have a source of EMR in space, I can imagine concentric spheres of electric fields, but what about the magnetic fields? How will they be perpendicular to a sphere of electric fields? Can someone help me picture EMR? Thanks! (Apologies in advance for the noobish language; I am an absolute beginner to physics.)



Answer



The picture below shows a 3-D wave with wave fronts.
[image]
Imagine grabbing the wave fronts and bending them into half circles. When you now look at the same 3-D wave from above, you will just see half circles. This is your 2-D wave, the same one you see in books, with half circles (wave fronts) emanating from a point.


[image]


In the following picture, remove from your mind the components of the electric and magnetic field below the line of propagation. Look at the upper half: if you increase the length of the wave fronts, you will get the first picture. The same applies to the other component. I hope this helped. [image]


cosmology - Is space unending?


Is space unending, i.e. does it have no boundaries? If yes, how can a thing exist which is non-ending? It's impossible for me to imagine something like that. Secondly, if it's not, and has boundaries, then it must be inside some other thing, which is either non-ending or itself contained in another thing. Yet again, the same problem. If it is also inside some other thing, and that other thing is inside yet another thing, then this series will never end. Thinking about this scenario drives me insane.




quantum mechanics - Interpreting Argyres' spectrum of spontaneously broken SUSY QM


I can't understand the spectrum in the figure on page 19 from Argyres' lecture notes on supersymmetry: http://www.physics.uc.edu/~argyres/661/susy1996.pdf



[Argyres' figure]


Argyres is considering a supersymmetric quantum mechanical system of an anharmonic oscillator, with a superpotential $W\sim x^3$. The plots of $W$ and $V$ make perfect sense. What doesn't make sense is the spectrum on the right.


Why are there both x's and o's over each Hamiltonian $H_1$ and $H_2$? I thought $H_1$ is exclusively the spin-up Hamiltonian and $H_2$ is exclusively the spin-down Hamiltonian, so the spectrum should consist of a column of just x's over $H_1$ and a column of just o's over $H_2$.


Additional request: Would someone please write down the form of $H_1$ and $H_2$ in their answer, to make sure we're on the same page? A graph of the respective potentials $V_1$ and $V_2$ would be even better.


Take a look at the much more sensible figure on page 7. This is something that I can comprehend.


[image]




Is the photoelectric effect a type of nuclear decay?


If light of frequency $f$ with $f \ge f_t$, where $f_t$ is the threshold frequency, is shone on a metal surface, electrons are emitted. By my understanding, the light comes in and is absorbed by the atom. The atom then has too much energy and emits an electron. I don't remember much about nuclear decay, but when an atom has too much energy it can emit an alpha particle or a gamma ray or whatever else (something about decaying down to a stable line).


Anyways, so if an atom has too much energy and ejects an electron, is that considered a nuclear decay? Also, does the photoelectric effect then ionize materials? Could I take a metal sheet, shine light on it, and get a metal sheet that is now positive? Could I do this at home?



Answer



The photoelectric effect involves the shell (electrons) of the atom. Photon energies are typically in the range of electron volts.


To excite the nucleus, you'll need higher energies (MeV or hundreds of keV).
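To make the scale gap concrete, here is a quick back-of-the-envelope check in Python (the 500 nm wavelength is just an illustrative choice, not from the question):

```python
# Energy of a visible-light photon via E = h*c/lambda, expressed in eV.
# Constants are CODATA values; 500 nm (green light) is an arbitrary example.
h = 6.62607015e-34       # Planck constant, J*s
c = 299792458.0          # speed of light, m/s
q = 1.602176634e-19      # J per eV

E_eV = h * c / 500e-9 / q
print(f"{E_eV:.2f} eV")  # roughly 2.5 eV: shell physics, far below the MeV scale
```

A nuclear transition at, say, 1 MeV carries several hundred thousand times more energy per photon than this.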


What you are probably looking for is called photonuclear reaction.




Anyways, so if an atom has too much energy and ejects an electron, is that considered a nuclear decay?



If this electron is coming from the shell, then it's NOT a decay of the nucleus (aka nuclear decay). If however, this is coming from a $\beta^-$ decay (bound neutron decays to a bound proton plus an electron and an anti-electron neutrino) of the nucleus, then yes, this is a nuclear decay.


One has to be careful when talking about 'the atom has too much energy': one has to clarify whether it's an electron in the shell which is not in the energetically lowest possible state or whether it's the nucleus which is not in its energetically lowest state.



Could I take a metal sheet, shine light on it, and get a metal sheet that is now positive?



One would have to apply some force, e.g. by means of an electric field, to get the electrons to move away from the sheet of metal. Also, one would have to do this in vacuum in order to avoid the electrons being slowed down by the gas molecules. A phototube for example is a device which works in this way.


Thursday, February 26, 2015

general relativity - Do particles produce gravitational waves?


We have obviously detected gravitational waves at very large scales, but what about small scales? I accept that they would be undetectable, but I would still expect them to be produced, considering the incredible speed of a photon, for example.




Why do photons add mass to a black hole?




When photons are taken irreversibly into a black hole does the mass of the BH increase?



Answer



This is really just an expansion of Graham's answer.


It's a commonly made mistake that gravity, and therefore a black hole, is caused by matter. In fact the spacetime curvature is related to a quantity called the stress-energy tensor. This is usually represented by a matrix with ten independent values in it (it's a 4x4 matrix, but it's symmetric, so six of the elements in it are duplicated).



Only one of the elements in the matrix, $T_{00}$, depends directly on the mass, and actually that element gives the energy density, where mass is counted as energy using Einstein's equation $E = mc^2$.


So photons affect spacetime curvature because they contribute to the energy density even though they have no mass. Actually photons contribute to other elements of the matrix as well because they have a non-zero momentum and this too affects the spacetime curvature.


Re your question:



When photons are taken irreversibly into a black hole does the mass of the BH increase?



Yes, the mass of the black hole will increase by the photon energy divided by $c^2$.
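As a sketch of the arithmetic (the photon numbers and energies below are illustrative, not from the question):

```python
# Mass gained by a black hole that absorbs photons: dm = E_total / c^2.
# The photons' rest mass is zero; only their energy matters.
c = 299792458.0               # speed of light, m/s
eV = 1.602176634e-19          # J per eV

def mass_gain(n_photons, photon_energy_eV):
    """Mass added (kg) by n photons of the given energy."""
    return n_photons * photon_energy_eV * eV / c**2

# One ~2 eV optical photon adds a few 1e-36 kg to the hole.
print(mass_gain(1, 2.0))
```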


Re your comment on Graham's answer: yes, provided you add more energy than the black hole is radiating, you will maintain or grow the black hole. You could add the energy using lots of low-energy photons or a few high-energy photons. It's the total energy added that matters.


quantum electrodynamics - Is the fine structure constant actually a constant or does its value depend on the energy scale?


The value of the fine structure constant is given as $$ \alpha = \frac{e^2}{4\pi\varepsilon_0\hbar c} = \frac{1}{137.035\,999..} $$ Its value depends only on physical constants (the elementary charge $e$, the speed of light $c$, Planck's constant $\hbar$, and the vacuum permittivity $\varepsilon_0$) and the mathematical constant $\pi$, which are considered to be constant under all circumstances.
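As a sanity check, the quoted low-energy value does follow from plugging CODATA values of the constants into the formula:

```python
import math

# alpha = e^2 / (4*pi*eps0*hbar*c), using CODATA values of the constants.
e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c    = 299792458.0        # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)          # ~137.036
```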


However the Wikipedia article Coupling constant states



In particular, at low energies, α ≈ 1/137, whereas at the scale of the Z boson, about 90 GeV, one measures α ≈ 1/127.




I don't understand how this can be possible, unless one of the physical constants above, or even $\pi$, is actually not constant but dependent on the energy scale. But that seems like nonsense.


So what do physicists mean when they say that the fine structure constant $\alpha$ increases with energy? Can you perhaps reformulate the quoted sentence above so that it makes more sense?



Answer



Expanding on what Vladimir said: the thing that is changing with energy is $e$ (the others are not constants so much as conversion factors between length and time, time and energy, etc.). The reason the charge can vary is that the vacuum is not entirely empty. Sloppily speaking, near a charge the electric field interacts with virtual electron/positron pairs, and the effect is that the virtual pairs screen the "raw" electric field. Thus, if you're far away you see one value, but as you get closer the electric field rises faster than $1/r^2$. In scattering experiments, how close you get to a charge is directly related to the in-going energy of the particles. Now, in modern physics we account for this by saying that the charge $e$ changes with energy scale. This sounds bizarre in the form I just explained (since you might expect that we would just declare the force to be not $1/r^2$), but it turns out to be the neatest way to understand it intellectually, due to a convergence of issues to do with wanting to preserve observed symmetries in the theory at all scales.
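To see the size of the effect, here is a deliberately crude sketch of the one-loop running, keeping only the electron loop (the full Standard Model sum over all charged fermions is what brings $1/\alpha$ down to about 127 at the Z scale; the electron alone only gets partway there):

```python
import math

# Toy one-loop QED running with only the electron in the vacuum-polarization
# loop: 1/alpha(Q) = 1/alpha(0) - (2/(3*pi)) * ln(Q/m_e).
inv_alpha_0 = 137.036
m_e = 0.511e6             # electron mass, eV
Q   = 90e9                # probe energy, eV (roughly the Z mass)

inv_alpha_Q = inv_alpha_0 - (2 / (3 * math.pi)) * math.log(Q / m_e)
print(inv_alpha_Q)        # ~134.5: the screening is partially penetrated
```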


Incidentally, for things like colour charges in QCD, the vacuum anti-screens, which is to say that the observed field strength increases as you get further away. Heuristically, this is what leads to confinement of quarks in the "normal" phase.


Wednesday, February 25, 2015

condensed matter - Why does Density Functional Theory (DFT) underestimate bandgaps?


Density Functional Theory (DFT) is formulated to obtain ground state properties of atoms, molecules and condensed matter. However, why is DFT not able to predict the exact band gaps of semiconductors and insulators?


Does it mean that the band gaps of semiconductors and insulators are not the ground states?




Loss in energy by light after reflections


The energy of light is proportional to its frequency. So if light loses energy on reflecting off a surface, why doesn't it change its colour?



Answer





Energy of light is proportional to its frequency.



The energy of a photon is proportional to its frequency.


The power of a light beam is determined by the photon energy and the number of photons it delivers per unit time.



So if light loses energy on reflecting off a surface then why doesn't it change its colour?



An imperfect reflector causes the light beam to lose energy by absorbing some of the photons, not by changing the energy of each reflecting photon.


general relativity - What are the local covariant tensors one can form from the metric?


Normally in differential geometry, we assume that the only way to produce a tensorial quantity by differentiation is to (1) start with a tensor, and then (2) apply a covariant derivative (not a plain old partial derivative). Applying this to GR, I think one way of stating the equivalence principle is that the only tensorial object that we expect to be "built in" to the vacuum is the metric. Since the covariant derivative is basically defined as a derivative that produces zero when you apply it to the metric, this means that you can't get anything of interest (i.e., local and tensorial) by applying the process described by #1 and #2 to the vacuum. This can be used as a fancy way of arguing that the Newtonian gravitational field $\mathbf{g}$ isn't a tensor, since in the Newtonian limit, it's essentially the gradient of the metric.


However, the process described by #1 and #2 is sufficient but not necessary. In fact, one way of defining curvature is by taking non-covariant derivatives on the metric to form the Christoffel symbols, and then doing further operations involving non-covariant derivatives to get the Riemann curvature tensor -- which surprisingly ends up being a valid tensor.


It seems, then, that the Riemann tensor is a special case. I originally thought that there might be a uniqueness theorem that proves that if we want to produce a local, tensorial quantity from the metric, the only possibilities are the Riemann tensor or curvature polynomials formed from the Riemann tensor and its covariant derivatives.



[EDITS] A comment by joshphysics and the answer by BebopButUnsteady helped me to refine this conjecture as follows.


Joshphysics pointed out that things like $g_{ab}g_{cd}$ might be considered trivial counterexamples. I can think of two possible ways of dealing with this:


(1) BebopButUnsteady's answer shows that this is in some sense not a counterexample at all, since the metric itself can be expressed as a Taylor series in terms of the Riemann tensor and its derivatives. If the metric is analytic, and if we're willing to accept infinite series, then this means that there is no information in the metric that isn't recoverable from the Riemann tensor.


(2) What doesn't seem to exist, apart from curvature polynomials formed from the Riemann tensor and its covariant derivatives, is (a) any varying scalar field, or (b) any vector field. (Part b is basically the equivalence principle.)



Answer



The answer to your question is affirmative in the following sense:


In the Riemann normal coordinates at $p$ the coefficients of the Taylor expansion of the metric $g_{ij}(x)$ are polynomials in the Riemann tensor at $p$ and its covariant derivatives at $p$. [Assuming the proof in this random thing I googled[a] is correct, starting at (5.1)].


I think this is the correct formalization of your conjecture, in the sense that if we are making a tensor out of $g$, the only things we can use are $g$ and its expansion in normal coordinates. I'll maybe try to write out why I think this is the case.


BTW, the local condition is very necessary, since otherwise we could define things like the length of the shortest loop containing $p$ that is in a certain homotopy class, which clearly "depends only on the metric" but is not made out of polynomials in the curvature.


Added after this answer was accepted



For those interested, I asked a question on Math SE that contains what I believe to be the correct formalization of the question "What tensors can I produce from the metric tensor?": "Natural" constructions of tensor fields from tensor fields on a manifold.


[a]: Guarrera, D.T., Johnson, N.G., Wolfe, H.F. (2002). The Taylor Expansion of a Riemannian Metric. http://www.rose-hulman.edu/mathjournal/archives/2002/vol3-n2/Wolfe/Rmn_Metric.pdf


thermodynamics - Temperature and kinetic energy of molecules


I was wondering: if temperature is related to the average translational kinetic energy of the molecules, then why does the average kinetic energy of, say, a moving object not affect the temperature? What is the mechanism regarding these vibrational motions that imparts temperature? Also, do a gas and a liquid at the same temperature possess the same kinetic energy? And will a heavier molecule traveling at the same speed as a lighter molecule have a greater temperature?



Answer





I was wondering if temperature is related to the average translational kinetic energy of the molecules



Yes temperature is related to the average translational kinetic energy of the molecules (technically called its kinetic temperature), or its internal translational kinetic energy.



then why does the average kinetic energy of say a moving object not affect the temperature?



Because the average translational kinetic energy (measured by temperature) of the contents of the system is the internal kinetic energy of the system, whereas the translational kinetic energy of the center of mass of the system as a whole is the external kinetic energy of the system, i.e., its kinetic energy with respect to an external (to the system) frame of reference, such as the frame of the room where the object is located. The motion of the center of mass of the object does not affect its temperature, because temperature is caused by the random motion of the atoms and molecules of the object. That random motion is not altered by the collective motion of the center of mass of the object.



What is the mechanism regarding these vibrational motions that imparts temperature?




It is not "vibrational motions" that are associated with temperature. It is, as you have already noted, translational motions that are associated with temperature. Taking a gas as an example, the greater the average translational kinetic energy of the gas molecules, the greater the number of collisions per unit time between the molecules themselves and any surfaces they contact. If, for example, a glass bulb thermometer is placed in the container, collisions of the gas molecules with the glass transfer kinetic energy from the gas into the fluid in the thermometer. That causes the thermometer fluid to expand and rise in the thermometer tube, providing a temperature reading.



Also do a gas and liquid at the same temperature possess the same kinetic energy?



They possess the same average translational kinetic energy, since that is what temperature measures. But there are also vibrational and rotational kinetic energies of molecules that may be different for the gas and liquid and that are not associated with the temperature of the gas and liquid.



and a heavier molecule traveling at the same speed as a lighter molecule will have a greater temperature?



First of all, temperature is generally considered a macroscopic property of the system, i.e. individual molecules are generally not considered to have a temperature. That said, a heavier molecule (molecule of greater mass) traveling at the same speed as a lighter molecule (molecule of lesser mass) will have a greater kinetic energy since kinetic energy is $\frac{mv^2}{2}$.
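Both points can be illustrated numerically (the masses, speed, and temperature below are approximate values chosen for illustration):

```python
import math

# (1) At equal speed, the heavier molecule carries more kinetic energy.
# (2) At equal temperature, both species share the same average translational
#     KE, (3/2) k_B T, so the heavier molecule moves more slowly.
k_B = 1.380649e-23                  # Boltzmann constant, J/K
m_He, m_Xe = 6.6e-27, 2.18e-25      # approx. masses of He and Xe atoms, kg

v = 500.0                           # same speed for both, m/s
ke_He = 0.5 * m_He * v**2
ke_Xe = 0.5 * m_Xe * v**2

T = 300.0                           # same temperature for both, K
v_rms_He = math.sqrt(3 * k_B * T / m_He)
v_rms_Xe = math.sqrt(3 * k_B * T / m_Xe)

print(ke_Xe > ke_He)                # heavier molecule, same speed -> more KE
print(v_rms_He > v_rms_Xe)          # same T -> lighter molecule moves faster
```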


Hope this helps.



What is the influence of QED vacuum in electron-double-slit experiments?


In a recent question on superpositions of different quarks it was explained that superpositions of particles with different electric charges cannot exist, in contrast to the strangeness quantum number. The reasoning was that this is due to decoherence caused by the QED vacuum: photons interact with ("measure") charged particles -> thus their charge is known -> coherence is destroyed.


But of course, double-slit experiments with single electrons were already performed in the 1970s.


My questions:



Why do virtual photons not cause decoherence of the spatial coherence of single electrons?




If they did cause decoherence, it would result in a reduction of the contrast in single-electron double-slit experiments.




electrostatics - Minimizing potential energy of a dipole in an electric field


My test paper asked which way a dipole should be oriented in an electric field to minimize its potential energy. My answer was that the dipole should lie parallel to the electric field with the positive end of the dipole pointing in the direction of the electric field, because that way the positive end would be closer to the negative, "attracting" side of the field, and similarly the negative end of the dipole would be closer to the positive side of the field (i.e. the side from which the electric field is coming). Is this the right way of thinking about it?



Answer



That's a pretty good way to look at it. To be more mathematically explicit, notice that the energy of an electric dipole (see here) with dipole moment $\mathbf p$ in an electric field $\mathbf E$ is $$ U = -\mathbf p\cdot\mathbf E = -|\mathbf p||\mathbf E|\cos\theta $$ This expression is minimized when $\cos\theta = 1$, which is when the angle between the dipole moment vector (which for a physical dipole points from the negative to the positive charge) and the field is $0$. This is precisely the condition for the dipole to be aligned with the electric field as you described.
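A quick numerical scan confirms this (the unit magnitudes for $p$ and $E$ are arbitrary):

```python
import math

# U(theta) = -p E cos(theta); scan theta from 0 to 180 degrees, find the minimum.
p, E = 1.0, 1.0                       # arbitrary dipole and field magnitudes

thetas = [math.radians(d) for d in range(181)]
U = [-p * E * math.cos(t) for t in thetas]

theta_min = thetas[U.index(min(U))]
print(math.degrees(theta_min))        # 0.0: energy is lowest when aligned
```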


If the dipole is not aligned with the field, then it will experience a torque that tends to align it with the field. You can see why this happens in the physical dipole: the positive charge feels a force in the direction of the field, while the negative charge feels a force in the direction opposite the field, and these both tend to rotate the dipole to align it with the field.


Tuesday, February 24, 2015

quantum mechanics - Is Angular Momentum truly fundamental?


This may seem like a slightly trite question, but it is one that has long intrigued me.


Since I formally learned classical (Newtonian) mechanics, it has often struck me that angular momentum (and generally rotational dynamics) can be fully derived from normal (linear) momentum and dynamics. Simply by considering circular motion of a point mass and introducing new quantities, it seems one can describe and explain angular momentum fully without any new postulates. In this sense, I am lead to believe only ordinary momentum and dynamics are fundamental to mechanics, with rotational stuff effectively being a corollary.


Then at a later point I learned quantum mechanics. Alright, so orbital angular momentum does not really disturb my picture of the origin/fundamentality, but when we consider the concept of spin, this introduces a problem for this proposed (philosophical) understanding. Spin is apparently intrinsic angular momentum; that is, it applies to a point particle. Something that is not actually moving/rotating can possess angular momentum: a concept that does not exist in classical mechanics! Does this imply that angular momentum is in fact a fundamental quantity, intrinsic to the universe in some sense?



It somewhat bothers me that fundamental particles such as electrons and quarks can possess their own angular momentum (spin), when otherwise angular momentum/rotational dynamics would fall out quite naturally from ordinary (linear) mechanics. There are of course some fringe theories that propose that even these so-called fundamental particles are composite, but at the moment physicists widely accept the concept of intrinsic angular momentum. In any case, can this dilemma be resolved, or do we simply have to extend our framework of fundamental quantities?




classical mechanics - Do unstable equilibria lead to a violation of Liouville's theorem?


Liouville's theorem says that the flow in phase space is like an incompressible fluid. One implication of this is that if two systems start at different points in phase space their phase-space trajectories cannot merge. But for a potential with an unstable equilibrium, I think I've found a counterexample.


Consider the potential below (excuse my bad graphic design skills). [image: potential with an unstable equilibrium]


A particle starting at rest at point A, $(q,p) = (x_A,0)$ at $t = 0$, would accelerate down the potential towards the left. Because it has the amount of energy indicated by the purple line, it would come to rest at the local maximum B at $t = T$, an unstable equilibrium $(q,p) = (x_B,0)$. However, any particle starting at rest at the top of the local maximum B at $t = 0$ would also stay that way forever, including up to $t = T$. Thus there appear to be two trajectories that merge together, in violation of Liouville's theorem.




Monday, February 23, 2015

general relativity - How long does it take for a black hole to form?


The well-known fable of an astronaut sending signals out to an external observer while falling toward an event horizon states that the time lapse between such signals becomes greater even if the astronaut sends them out periodically (as judged in his own frame). When viewed from Earth, taking into account the time dilation due to the gravitational field of the collapsing star, how is it possible that a black hole can form in finite time (for any external observer) if it takes an infinite amount of time to "see" events occurring at the event horizon?




quantum mechanics - Single photon and double-slit experiment




A laser fires single particles of light, called photons, through the slits. Even though only single photons of light are being fired through the slits, they create the interference pattern again. How can single particles of light create this wave pattern?




astrophysics - Help figure out the planet M




Let's imagine there is a planet named M. It has 2 or 3 moons, and one of them is as big as a moon can be. The big moon sits at the L1 Lagrange point and blocks the light of the star for at least half of the year, making it night time on the whole planet. The planet itself takes 1000 days to orbit its star, and days and nights are very long.


these are the concerns:


1: How big and how close/far should this big moon be for the situation above to work?


2: How big can the planet be in order for humans to live normal lives?


3: Will vibrant life be possible on such a planet?


4: If the light is blocked by the moon, won't it freeze the planet?


5: How can I have very, very long nights but still have good temperatures?


6: If the concern in question 4 is correct, can I use the moons to replicate the tidal-heating effect on Io and produce volcanoes to warm things up?


7: If the planet is tidally locked with its biggest moon, will there ALWAYS be darkness, or will there also be day time?



The main question of this article is: how can I make such a planet function and have life forms and plants like Earth's?


P.S.: I could even change the star; this whole system is imaginary.


And I CAN'T separate these, because then I would have to copy and paste ALL the details of the imaginary planet, its orbit, and its moons over and over again. Wouldn't that be worse?



Answer



1: The mass of the planet and the mass of the star determine the distance, and then it's just a geometry problem: draw a triangle between the star, the moon, and a point on the planet, and think about eclipses.


2: I think this one is about gravity: too big and they won't be able to move around.


3: Not probable. A 1000-day orbit means it's roughly twice as far from its star as we are from ours (by Kepler's third law, for a Sun-like star), so depending on the temperature of the star it's likely outside the habitable zone.


4: Shadows for half the year will definitely cool things down.


5: Possibly a massive greenhouse effect?


6: You have multiple moons, so you can probably squeeze things tidally, but wouldn't it squeeze the small moons harder? At this point I'd be hoping for an Endor moon.



7: I don't think the rotation of the planet is relevant; if the moon is stuck at L1 and big enough to eclipse the planet, there will be darkness year round no matter how it spins.


homework and exercises - Formula for molar specific heat capacity in polytropic process


I found this formula for a polytropic process, defined by $PV^n = {\rm constant}$, in a book:


$$C = \frac R{\gamma-1} + \frac R{1-n} $$ where $C$ is the molar specific heat and $\gamma$ is the adiabatic exponent. I do not know how it was derived; can someone guide me?



Answer



Here $C$ is the specific heat for the given process, i.e. $$dQ=nCdT$$ This is for $n$ moles of gas (not the $n$ in your question; to avoid the clash I will write the polytropic exponent as $z$).


I will assume $$PV^z=\text{constant}$$


$$nCdT=dU+PdV$$ $$\int nCdT=\int nC_vdT+\int PdV$$


$$nC\Delta T=nC_v \Delta T+\int \frac{PV^z}{V^z}dV$$


As the numerator is a constant, take it out!



Also note that $$P_iV_i^z=P_fV_f^z$$


$i = \text{initial}$


$f=\text{final}$


Focusing on integral only,


$$PV^z\int V^{-z}dV$$


$$PV^z\left[\frac{V^{-z+1}}{-z+1}\right]^{V_f}_{V_i}$$


Note that $PV^z$ is the same for the initial and final states. So we multiply it inside and do this ingenious work:


$$-\frac{P_iV_i^zV_i^{-z+1}}{-z+1}+\frac{P_fV_f^zV_f^{-z+1}}{-z+1}$$


$$-\frac{P_iV_i}{-z+1}+\frac{P_fV_f}{-z+1}$$


Note that $PV=nRT$



$$\frac{nR\Delta T}{-z+1}$$


where $\Delta T=T_f-T_i$


Final equation :


$$nC\Delta T=nC_v \Delta T+\frac{nR\Delta T}{-z+1}$$


$$C=C_v+\frac{R}{1-z}$$


This brings you to the original equation; you can find $C_v$ from


$$C_p/C_v=\gamma$$


$$C_p-C_v=R$$


Using $C_p=\gamma C_v$,


$$C_v\left(\gamma-1\right)=R$$



$$C_v=\frac{R}{\gamma-1}$$


Substituting in original equation,


$$C=\frac{R}{\gamma-1}+\frac{R}{1-z}$$
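The result can be sanity-checked against the familiar special cases, using a monatomic ideal gas ($\gamma = 5/3$) as an example:

```python
# C(z) = R/(gamma - 1) + R/(1 - z), checked at z = 0 (isobaric) and
# z = gamma (adiabatic). z -> 1 (isothermal) makes C diverge, as it should:
# heat flows while the temperature stays constant.
R = 8.314            # gas constant, J/(mol K)
gamma = 5.0 / 3.0    # monatomic ideal gas
Cv = R / (gamma - 1)
Cp = Cv + R

def C(z):
    return R / (gamma - 1) + R / (1 - z)

print(abs(C(0.0) - Cp) < 1e-9)   # isobaric process: C = Cp
print(abs(C(gamma)) < 1e-9)      # adiabatic process: no heat exchanged, C = 0
```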


Sunday, February 22, 2015

photons - How electromagnetic waves are created?


Macroscopically, electromagnetic waves are produced by a changing dipole or an oscillating charged particle as shown below:


[animation]


In this case, the frequency of the electromagnetic radiation is equal to the frequency of oscillation.


However, we also know that electromagnetic radiation is produced by simply accelerating a charged particle like below:


[animation]



I have a few questions:



  1. What frequency will the radiation of the second animation be? For simplicity, let's say it undergoes the exact same acceleration that the charge in the first animation does from t=0 to t=T/4, but then abruptly stops accelerating (so essentially it's just the first quarter of a sinusoidal acceleration).

  2. If a charged particle is then hit by the radiation emitted by the second animation, would it just feel a force in a single direction, rather than an oscillatory force?

  3. In terms of the actual photons, it would seem the first animation emits only photons with the same frequency as the particle's oscillation. What about the second animation? As far as the particle knows, from t=0 to t=T/4 it is under the exact same acceleration as in the first animation, so would it emit the exact same photons for that period?




quantum field theory - The relation between critical surface and the (renormalization) fixed point


In the book, I read some remarks about the criticality:



Iterations of the renormalization (group) map generate a sequence of points in the space of couplings, which we call a renormalization group trajectory. Since the correlation length is reduced by a factor $b<1$ at each step, a typical renormalization-group trajectory tends to take the system away from criticality. Because the correlation length is infinite at the critical point, it takes an infinite number of iterations to leave that point. In general, a system is critical not only at a given point in the coupling space but on a whole "hypersurface", which we call the critical surface, or sometimes the critical line.



I think these remarks are fine. But then the authors give a statement:




The critical surface is the set of points in coupling space whose renormalization-group trajectories end up at the fixed point: $$\lim_{n\rightarrow\infty} T^n(J)=J_c,$$ where $T$ is the renormalization-group map and $J$ represents the couplings in general.



The question is: how should one properly understand this statement? I particularly have two small questions. Why does the critical line need to hit the fixed point? And why does the critical line end up at the fixed point, rather than leaving the fixed point, under the renormalization-group transformation?



Answer



I'll try to explain why there could be a critical line and not just a critical point, and hopefully that will answer your question. If you think about the Ising model, we have the standard Hamiltonian:


\begin{equation} -\beta H = J_1\sum_{\langle i,j\rangle}s_i s_j + h\sum_{i}s_i \end{equation} where $\sum_{\langle i,j\rangle}$ is a sum over nearest neighbors. This model has a fixed point at a critical value of $J_1$ and $h$. Simple generalizations of the Ising model might include interactions between next nearest neighbors as well as nearest neighbors


\begin{equation} -\beta H = J_1\sum_{\langle i,j\rangle}s_i s_j + J_2\sum_{(i,j)}s_i s_j + h\sum_{i}s_i \end{equation}


where $(i,j)$ indicates next nearest neighbor pairs. This model can also be critical for certain values of $J_1$ and $J_2$ and these points lie on a critical line. This makes intuitive sense because $J_1$ and $J_2$ are clearly capturing similar physical effects (they both bias spins towards alignment), and therefore they are somewhat redundant.


It's important to recognize that the critical point actually exists in a higher dimensional space $\{J\}$. Points on the critical line (or surface) all end up at the critical point under iterated RG transformations. The reason this is important is because points which are arbitrarily close to the critical surface end up arbitrarily close to the critical point under RG transformation, and they are then all driven away from the critical point in the same way. This is the origin of universality.
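This flow can be made concrete with a toy model. The sketch below is not the actual Ising RG map; it is a hypothetical linear map with one relevant (expanding) and one irrelevant (contracting) direction, where the critical surface is the line $u=0$. Points on that line flow into the fixed point, while points slightly off it first approach the fixed point and are then driven away along the relevant direction, which is exactly the universality mechanism described above.

```python
# Toy renormalization-group map with a fixed point at the origin.
# u is a "relevant" coupling (eigenvalue > 1, expanded at each step),
# v is an "irrelevant" coupling (eigenvalue < 1, contracted at each step).
# This is a hypothetical illustration, not the actual Ising RG map.

def rg_step(u, v, lam_u=2.0, lam_v=0.5):
    return lam_u * u, lam_v * v

def trajectory(u, v, n):
    """Iterate the RG map n times, recording the visited couplings."""
    points = [(u, v)]
    for _ in range(n):
        u, v = rg_step(u, v)
        points.append((u, v))
    return points

# A point exactly on the critical surface (u = 0) flows to the fixed point:
on_surface = trajectory(0.0, 1.0, 20)
print(on_surface[-1])   # u stays 0, v shrinks toward 0

# A point slightly off the critical surface first approaches the fixed
# point, then is driven away along the relevant direction:
off_surface = trajectory(1e-6, 1.0, 20)
print(off_surface[-1])  # u has grown to order 1, v has shrunk
```

Two nearby starting points on either side of the surface follow nearly the same escape route, which is why they share the same critical exponents.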


quantum mechanics - Does natural unit of information and entropy, nat, play special role in the freebit picture?


Please refer this question to understand why I consider the freebit picture important.


In short, it is conjectured that for certain real systems the most complete physical description possible involves Knightian uncertainty, so that some qubits in nature are actually "freebits".


So I decided to calculate entropy of a system that involves Knightian uncertainty.


Let's define a c-freebit (classical freebit) as a two-level system whose Knightian uncertainty is maximal.


We know that the entropy of a two-level system depends on the probabilities of the respective levels: if the probability of the state 0 is $p_0= x$, then the entropy (in natural units) is:


$$S= -\sum_{i=0}^1 p_i \ln p_i = -x \ln x - (1 - x) \ln (1 - x)$$


So if $x=1/2$ then $S=\ln 2$ nat, equal to 1 bit.



But when we introduce the Knightian uncertainty $x\in[a,b]$, the total entropy becomes $$S=\int_a^b \frac{-(1-x) \ln (1-x)-x \ln (x)}{b-a} \, dx$$


For the maximum case, $a=0$, $b=1$ we get $S=\frac12$ nat.
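This value can be checked numerically; here is a quick sketch. (The closed form follows from $\int_0^1 -x\ln x\,dx = 1/4$, and the integrand is the sum of two such symmetric terms.)

```python
import math

def binary_entropy(x):
    """Entropy in nats of a two-level system with p0 = x."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

# Average the entropy over the Knightian interval [a, b] = [0, 1]
# with a simple midpoint rule.
n = 100_000
s = sum(binary_entropy((k + 0.5) / n) for k in range(n)) / n
print(s)  # close to 0.5 nat
```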


This is remarkable, because we got a rational fraction of the natural unit of information! The nat, which plays no role in the probabilistic description of anything, suddenly appears in the evidential description of the Dempster–Shafer formalism!


We obtained that a c-freebit contains 1/2 nat of information!


Given this result I wonder whether something similar happens in quantum world. Can anybody please provide the calculation for entropy of a quantum freebit?




Conversion of mass and energy



First of all I am not a scientist and all these doubts are coming from my curiosity.


While Googling about Einstein's $E = mc^2$, I learned that mass and energy are convertible.


What does that exactly mean?


Also, can we get a large amount of energy from a small-mass object?


And if this conversion is possible, then why can't we convert 1 kg of iron into a large amount of energy?


(I know that the question is poorly phrased. But I am a little bit weak in English and almost all Google searches disappoint me.)


Any simple explanation?




Heat equation on ball - one-dimensional description


I want to solve the transient heat equation on a ball. The boundary condition is the same over the whole outer surface, so this should reduce to a one-dimensional problem in the radial direction. However, I cannot simply use the one-dimensional heat equation, since the surface through which the heat flows grows quadratically with the radius.


Is the following approach correct: take the heat equation, transform it into spherical coordinates, and eliminate the derivatives in the angular directions. I think the last part is justified because of the rotational symmetry of the boundary condition. So I end up with this equation:


$$ \frac{\partial T}{\partial t} = \alpha \frac{1}{r^2}\frac{\partial}{\partial r} \left( r^2\frac{\partial T}{\partial r}\right) $$ with thermal diffusivity $\alpha$. Is this the correct reformulation of the heat equation on a ball to a one-dimensional problem?


EDIT: When I solve this with the finite element method, will there be a problem in the centre of the ball? The $1/r^2$ factor might cause a problem.



Answer



That sounds about right to me. Although I've never done this for the heat equation specifically, that's much the same approach we use to solve the Schrödinger equation in spherical coordinates, e.g. for the hydrogen atom. Your full solution will be products of the radial functions $T(r,t)$ with appropriate spherical harmonics.


When determining the radial part of the solution, assuming this goes like the Schrödinger equation, one of the boundary conditions you'll need to use is consistency at the origin: basically you need your solution for $T(r)$ to be finite and differentiable at $r=0$. I can come back and expand on this later, or perhaps someone else will give you a more detailed explanation before I get to it.
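On the edit about the $1/r^2$ factor at the centre: a common workaround (an assumption of this sketch, not something from the original question) is the substitution $u(r,t) = r\,T(r,t)$, which turns the radial equation into the plain one-dimensional heat equation $\partial_t u = \alpha\,\partial_r^2 u$ with $u(0,t)=0$, so nothing is singular at the origin. A minimal explicit finite-difference sketch with made-up parameter values:

```python
import numpy as np

# Explicit FTCS solver for the radial heat equation on a ball via u = r*T,
# which satisfies the plain 1-D heat equation u_t = alpha * u_rr with
# the regularity condition u(0, t) = 0.  Parameter values are hypothetical.
alpha, R = 1e-4, 1.0           # diffusivity [m^2/s], ball radius [m]
N = 100                        # number of radial intervals
r = np.linspace(0.0, R, N + 1)
dr = r[1] - r[0]
dt = 0.4 * dr**2 / alpha       # within the stability limit dt <= dr^2 / (2*alpha)

T0, Ts = 0.0, 1.0              # initial temperature, fixed surface temperature
u = r * T0
u[-1] = R * Ts                 # Dirichlet condition at the surface

for _ in range(2000):
    u[1:-1] += alpha * dt / dr**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0] = 0.0                 # u = r*T must vanish at the centre

T = np.empty_like(u)
T[1:] = u[1:] / r[1:]
T[0] = T[1]                    # T is finite at r = 0 (limit of u/r)
# T now interpolates smoothly between the cool centre and the surface value Ts
```

The same substitution also makes the consistency-at-the-origin condition mentioned above explicit: finiteness of $T$ at $r=0$ is exactly $u(0,t)=0$.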


quantum field theory - Why do many people say that virtual particles do not conserve energy?


I've seen this claim made all over the Internet. It's on Wikipedia. It's in John Baez's FAQ on virtual particles, it's in many popular books. I've even seen it mentioned offhand in academic papers. So I assume there must be some truth to it. And yet, whenever I have looked at textbooks describing how the math of Feynman diagrams works, I just can't see how this could be true. Am I missing something? I've spent years searching the internet for an explanation of in what sense energy conservation is violated and yet, I've never seen anything other than vague statements like "because they exist for a short period of time, they can borrow energy from the vacuum".



The rules of Feynman diagrams, as I am familiar with them, guarantee that energy and momentum are conserved at every vertex in the diagram. As I understand it, this is not just true for external vertices, but for all internal vertices as well, no matter how many loops deep inside you are. It's true, you integrate the loops over all possible energies and momenta independently, but there is always a delta function in momentum space that forces the sum of the energies of the virtual particles in the loops to add up to exactly the total energy of the incoming or outgoing particles. So for example, in a photon propagator at 1-loop, you have an electron and a positron in the loop, and both can have any energy, but the sum of their energies must add up to the energy of the photon. Right?? Or am I missing something?


I have a couple guesses as to what people might mean when they say they think energy is not conserved by virtual particles...


My first guess is that they are ignoring the actual energy of the particle and instead calculating what effective energy it would have if you took the mass and momentum of the particle and imposed the classical equations of motion on it. But this is not the energy it has, right? Because the particle is off-shell! Its mass is irrelevant, because there is no mass-conservation rule, only an energy-conservation rule.


My second guess is that perhaps they are talking only about vacuum energy diagrams, where you add together loops of virtual particles which have no incoming or outgoing particles at all. Here there is no delta function that makes the total energy of the virtual particles match the total energy of any incoming or outgoing particles. But then what do they mean by energy conservation if not "total energy in intermediate states matches total energy in incoming and outgoing states"?


My third guess is that maybe they're talking about configuration-space Feynman diagrams instead of momentum-space diagrams. Because now the states we're talking about are not energy eigenstates, you are effectively adding together a sum of diagrams, each with a different total energy. But first, the expected value of energy is conserved at all times, as is guaranteed by quantum mechanics. It's only because you're asking about the energy of part of the superposition instead of the whole thing that you get a partial answer (one that's not summed up yet). And second... isn't the whole idea of a particle (whether real or virtual) a plane wave (or wave packet) that's an energy and momentum eigenstate? So in what sense is this a sensible way to think about the question at all?


Because I've seen this claim repeated so many times, I am very curious if there is something real behind it, and I'm sure there must be. But somehow, I have never seen an explanation of where this idea comes from.




homework and exercises - Why is $\vec E_{\text{Vacuum}}=\epsilon_{r}\cdot\vec E_{\text{Dielectric}}$?


Basically, I'm asking why the electric field in a vacuum (or the applied electric field) is related to the electric field in a dielectric by the relative permittivity $\epsilon_{r}$.


For context I'll provide the following question with a known solution:



To estimate the effective separation, $\vec d$, in an induced atomic dipole we assume that only electrons in the outer shell of the atom are displaced. Sulphur atoms have $4$ electrons in their outer shells. Sulphur has $3.8 \times 10^{28}$ atoms per meter cubed and a relative permittivity $\epsilon_{r} = 4.0$.


Estimate the value of $d$ when an external field of $1\mathrm{kV}\mathrm{m}^{−1}$ is applied to a block of sulphur.





The solution to this is (with more details added):


$$\vec P=\chi_e\epsilon_0\vec E_{\text{Dielectric}}=4N\cdot q \vec d$$ Now since $$\fbox{$\color{red}{\vec E_{\text{Dielectric}}=\frac{\vec E_{\text{Applied}}}{\epsilon_{r}}}$}$$ Therefore $$4N\cdot q \vec d=\frac{(\epsilon_r -1)}{\epsilon_r}\epsilon_0\,\vec E_{\text{Applied}}\tag{1}$$ Since $\epsilon_r =1+\chi_e$



Solving equation $(1)$ for $\vec d$ and substituting $$\epsilon_{r} = 4.0$$ $$\vec E_{\text{Applied}}= 10^3 \mathrm{V}\mathrm{m}^{-1}$$ $$q=\text{elementary charge}=e^{-}=1.6\times 10^{-19}\mathrm{C}$$ $$\mathrm{N}=\text{Number density of sulphur}=3.8\times 10^{28}\,\mathrm{m}^{-3}$$


gives


$$\vec d =\frac{4-1}{4}\epsilon_0\times 10^3\times\frac{1}{4\times 3.8\times 10^{28}\times 1.6\times 10^{-19}}\approx 2.7\times 10^{-19}\mathrm{m}$$
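The arithmetic of this last step is easy to check directly; a quick sketch (using $N = 3.8\times10^{28}\,\mathrm{m}^{-3}$ from the problem statement):

```python
# Numerical check of the estimate for the dipole separation d.
eps0 = 8.854e-12      # vacuum permittivity, F/m
eps_r = 4.0           # relative permittivity of sulphur
E_applied = 1e3       # applied field, V/m
q = 1.6e-19           # elementary charge, C
N = 3.8e28            # number density of sulphur atoms, m^-3

# From 4*N*q*d = (eps_r - 1)/eps_r * eps0 * E_applied, solve for d:
d = (eps_r - 1) / eps_r * eps0 * E_applied / (4 * N * q)
print(d)  # ~2.7e-19 m, matching the quoted result
```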




I understand everything about this solution apart from the fact why the formula boxed in red holds.


The author simply stated this fact without justification. I would like to know why the electric field in the dielectric is the applied electric field reduced by a factor of $\epsilon_r$.


Why not $$\vec E_{\text{Dielectric}}=\frac{\vec E_{\text{Applied}}}{3\epsilon_{r}}$$ or why is it even divided by $\epsilon_r$ in the first place?


This may seem a futile and obvious question to some of you, but I have just started learning about electromagnetic fields in matter (instead of just in vacuum), so this is far from trivial to me.


Could anyone please provide me some insight/intuition as to why $$\vec E_{\text{Dielectric}}=\frac{\vec E_{\text{Applied}}}{\epsilon_{r}}?$$




quantum mechanics - Momentum operator expression


In the course of calculating $$\langle x|\hat{p}|\psi\rangle$$ I have a step which is: $$ \langle x|\frac{\hbar}{i}\frac{d}{dx}|\psi\rangle=\frac{\hbar}{i}\frac{d}{dx}\langle x|\psi\rangle.$$ What is the mathematical justification for that step? I am "intuitively" thinking that $-i\hbar d/dx$ and $ \langle x|$ "commute", but on second thought, that doesn't seem reasonable, since a bra and an operator are not even in the same vector space.



Answer



That is not the point really. The momentum operator is defined to be the operator which acts on the position basis as


$$\langle x |P |\psi\rangle = -i\hbar \dfrac{d}{dx}\langle x|\psi\rangle.$$


This is a definition. Recall that given one state $|\phi\rangle$ on the Hilbert space of the system, it is completely defined if you give its projections onto a complete set (be it discrete or continuous). To see this notice that if you know that



$$\langle x|\phi\rangle=\phi(x)$$


for a given $\phi(x)$, then using the completeness relation


$$\mathbf{1}=\int_{\mathbb{R}}|x\rangle \langle x| dx$$


you obtain that


$$|\phi\rangle = \mathbf{1}|\phi\rangle=\int_{\mathbb{R}}|x\rangle \langle x|\phi\rangle dx = \int_{\mathbb{R}} \phi(x)|x\rangle dx.$$


Now let $|\psi\rangle$ be one state, you want to define what is $P|\psi\rangle$. Whatever that is, it is another state. So it suffices to specify its projection onto the position basis, and that is exactly the expression above.


On the other hand: why is it reasonable that


$$\langle x|P|\psi\rangle = -i\hbar \dfrac{d}{dx}\langle x|\psi\rangle$$


is another, and considerably more interesting, question.


This is essentially the idea of quantization. Basically, when you have a classical system you have one Hamiltonian description, which includes one Poisson bracket $\{,\}$.



If $q,p$ is one pair of canonically conjugate variables (position and momentum), then the relation $$\{q,p\}=1$$ holds, and it tells us that $p$ is the generator of translations in $q$.


In order to preserve this in the quantum theory, and still have the quantized $P$ be the generator of translations in the quantized $Q$, we need that


$$\{,\}\mapsto -\dfrac{i}{\hbar}[,].$$


Notice that this means $[Q,P]=i\hbar$, which is the way, in the language of quantum mechanics, of saying that $P$ generates translations in $Q$. This is roughly speaking one possible way to motivate the quantization prescription which should in some sense "deform" the Poisson bracket to the commutator. Another way to motivate this is to notice that $[Q,P]=i\hbar$ is equivalent to Heisenberg's uncertainty principle.


Notice that $[Q,P]=i\hbar$ is one algebraic relation between the operators. This is interesting because you can shift the point of view: instead of first thinking about states (in QM the Hilbert space) and then of the observables (the operators), we can talk about the observables of the system in one abstract way.


This means you need to talk about the so-called $\ast$-algebras. This is the abstract version of the algebra of operators on a Hilbert space. On the other hand, if you have one $\ast$-algebra, you can talk about representations of this algebra, which are the ways that such a set, with its algebraic operations, can be realized as actual operators on some Hilbert space.


Thus if you have one classical algebra of observables with some Poisson bracket, you can talk about quantization as finding one quantum algebra of observables, the $\ast$-algebra, where we have the commutation relations holding as the quantized Poisson bracket.


The relation $[Q,P]=i\hbar$ is called the canonical commutation relation and, roughly speaking, if one builds the (abstract) algebra on which $[Q,P]=i\hbar$ holds, then by the Stone–von Neumann theorem there is one and only one representation of this algebra up to unitary equivalence. This representation is the one in which the Hilbert space is $L^2(\mathbb{R})$, the space of square-integrable complex functions, $Q$ acts on $L^2(\mathbb{R})$ as $Q\psi(q)=q\psi(q)$, and $P$ acts on $L^2(\mathbb{R})$ as


$$P\psi(q)=-i\hbar \dfrac{\partial \psi}{\partial q}.$$


This is the so-called Schrödinger representation. This justifies the usual approach, in which one assumes an abstract Hilbert space $\mathscr{H}$ together with operators $Q,P$ satisfying $[Q,P]=i\hbar$ and a basis $|q\rangle$ such that $Q|q\rangle = q|q\rangle$ and $P$ acts as we have discussed. Notice that the mapping $|\psi\rangle \mapsto \psi$ where $\psi(q)=\langle q|\psi\rangle$ identifies $\mathscr{H}$ with the Schrödinger representation.



In summary: $[Q,P]=i\hbar$ is equivalent to the existence of a representation in which $Q$ acts as multiplication and $P$ as the derivative, and in turn requiring $[Q,P]=i\hbar$ is what we mean by quantizing a system with classical $\{q,p\}=1$.


Saturday, February 21, 2015

geometric optics - How does the eye perceive a real image?


Okay, so I'm trying to grasp how the human eye will perceive the real image created for example by a convex lens. Take the upper image in this picture for example.


enter image description here


If you were to place a screen at the focal plane, you'd see the projected image, and the size relative to the original object would be as the arrows suggest. What would I see if I was standing in the focal plane? Would I see the image with the same size change? Would I see anything?


If I stood much farther away, would I see a "virtual image" (I know it wouldn't actually be virtual), that is would it look to me as if the object actually was in the focal plane, and was of the size of the arrow in it?




optics - Mechanism for visible light frequency mixing in storm clouds


So I know that when red and blue light (or the frequencies/wavelengths we perceive as such) hit our eyes in the correct proportions, our eyes and brains interpret that as the color purple.


In contrast, I have just read that the bright emerald-green color that severe thunderstorms can have is caused by tall thunderheads that create a lot of blue light through internal scattering, which is then lit by red light from a late-afternoon sun, and the combination of those two colors makes green.


Clearly, the red and blue wavelengths are not simply scattering separately in the cloud and then hitting our eyes, because then we would see the thunderstorm as purple.


So what is happening? How are the two colors being "mixed" or something in the cloud to create the wavelength(s) that we see as green?



Regarding the green clouds and whether the wavelengths are actually green or if it's an illusion, see: http://www.scientificamerican.com/article/fact-or-fiction-if-sky-is-green-run-for-cover-tornado-is-coming/


Related: Why does adding red light with blue light give purple light?


Frequency mixing seems to happen during scattering, so that is a clue to what's happening, but it's not clear to me if only some types of scattering cause frequency mixing or if all types do. If only some types cause mixing, then is one or more of those types caused by storm clouds? Assuming frequency mixing caused by scattering is the mechanism for producing green wavelengths, how are the other frequencies produced by mixing (e.g. overtones) not visible enough to affect the color perception (are they absorbed or not detected by human eyes or merely of too low intensity to matter)?




general relativity - Is the stress energy tensor continuous at the interface between an object and vacuum?


The stress-energy tensor has (as I understand it) zero value in regions where there is no matter and non-zero value where there is matter. Suppose there is one material object in space; how does the value of the stress-energy tensor change as we go from the inside of the object to the outside? Is it a continuous change, or does it jump to zero?




soft question - Is time a Scalar or a Vector?


Wikipedia says that time is a scalar quantity, but it's hard to understand how. If we consider only the magnitude of time, then it's a scalar. But on the basis of time we define yesterday, today, and tomorrow; then what will it be?



Answer



To pick up on twistor59's point, time is not a vector but a time interval is.


The confusion arises because you have to define carefully what you mean by the word time. In special relativity we label spacetime points by their co-ordinates $(t, x, y, z)$, where $t$ is the time co-ordinate. The numbers $t$, $x$, etc are not themselves vectors because they just label positions in spacetime. So in this sense the time co-ordinate, $t$, is not a vector any more than the spatial co-ordinates are.


But we often use the word time to mean a time interval, and in this sense the time is the vector joining the spacetime points $(t, x, y, z)$ and $(t + t', x, y, z)$, where $t'$ is the time interval you measure with your stopwatch between the two points. The interval between the two points is $(t', 0, 0, 0)$ and this is a vector.



cosmology - Are many-worlds and the multiverse really the same thing?





Not too long ago, Susskind and Bousso uploaded the article "The Multiverse Interpretation of Quantum Mechanics" with the thesis that the many-worlds interpretation and the multiverse of eternal inflation are one and the same thing. The parallel worlds of one are exactly the same thing as the parallel worlds of the other.


First, they claim decoherence can't happen over a complete description of the future light-cone of the measurement. Then, they apply that principle to eternal inflation. Without decoherence, superpositions of nucleating bubbles and metastable vacua can't decohere. According to the anthropic principle, most bubbles have no conscious observers, but an exponentially small minority do. Apply black hole complementarity to causal horizons.


Then, somehow, in a way I can't follow, they combine causal diamond worlds into a global multiverse. Then they claim decoherence is reversible.


My head is spinning. What are your opinions on this paper?




Friday, February 20, 2015

spacetime - Simultaneity in General Relativity


Take the following situation:


An astronomer is on the surface of the Sun (assume he's not rotating around the Sun). He observes two stars, at two different locations in the universe, exploding. Both stars exploded 3000 years ago. Now the astronomer goes somewhere else, but he remains stationary relative to the Sun (perhaps outside the event horizon of a Schwarzschild black hole that is neither moving towards nor away from the Sun). Will the astronomer still measure the stars exploding at the same time?


I read that the concept of relativity of simultaneity in general relativity is kind of meaningless, but isn't my question in the above situation valid? Does the concept of relativity of simultaneity hold in General Relativity?


There seems to be a bit of confusion about my description (as can be seen in the comment section). Essentially I am asking: if we take into account the light travel time (the time the astronomer "saw" it minus the time it took the light to reach him), will the explosions still be simultaneous?



Answer



I'll assume that you do a good job of using various clues (the time you see the light, your location when you see it, the direction of the light, and some estimate of the distance to the star) and correctly work out more or less where each explosion took place in spacetime.


In this case, no matter where you are, no matter your speed, and no matter anything else about you, you will derive the same spacetime locations for the explosions, because the locations are an objective property of the external world and we're assuming that you measured them correctly.



There are a lot of different ways you could write down the locations. Those ways are called coordinate systems. Some coordinate systems have a coordinate called "t" in them, and depending on the coordinate system, the t coordinates of the two explosions might be the same or might be different. This isn't a property of the explosions, but of the arbitrary choice of coordinates.


The choice of coordinates is really up to you. In introductions to special relativity it's common to assume that everybody picks an "egocentric" coordinate system (a term I just made up for coordinates in which they're at rest at the origin). If everyone does that, then people moving at different speeds are likely to disagree about the equality of the t coordinates of various things. But (if they're good scientists) they won't disagree about the objective locations of those things, because they'll understand that their choice of coordinates doesn't influence the actual locations. And also (if they're good scientists) they'll understand that they don't need to choose egocentric coordinates, and especially if they're collaborating with someone else they'd be better off agreeing on a common coordinate system to communicate their results.


As I said in my other recent answer, in general relativity the choice of useful coordinate systems tends to be more restricted because most spacetimes have fewer symmetries than the Minkowski spacetime of special relativity. You can still use any coordinates you want, but most will be inconvenient because the metric will have an unnecessarily complicated form. In particular, it tends to be inconvenient to use egocentric coordinates.


When you're talking about the universe on a large scale, only one time/"t" coordinate, called cosmological time, is convenient, because it's the only one that respects the large-scale spacetime symmetries of the universe we live in. When you see a "time since the big bang" in articles about cosmology or astronomy, it's cosmological time.


When you work out the coordinates of the two stars, you'll probably end up with the same t coordinates as someone else working independently on the same problem, because you both will pick the most convenient t coordinate, and that's cosmological time. It doesn't matter where you are or how fast you're moving, since it's dictated by the objective "shape of the universe" which everyone agrees on in principle, if they have accurate enough equipment to measure it well. You could pick a different coordinate system and disagree with the other experimenter, but that doesn't say anything profound about the nature of objective reality, it just says that you picked a different coordinate system.


homework and exercises - The use of the commutators in quantum mechanics: explanations



Considering that I've never studied quantum mechanics before, I need to understand the operator commutator. My starting point is: $[A,B]=AB-BA \tag{a}$


Now, why must it be that


$$\left[\frac{\partial }{\partial x},x\right]\stackrel{?}{=}1 \tag{1}$$ I have thought about this starting from rule (a).


This identity $$\left[x,\frac{\partial }{\partial x}\right]=-1 \tag{2}$$ is easy, because $[A,B]=-[B,A]$. I have also not understood (3) and (4): $$\left[i\hslash\frac {\partial}{\partial x},x\right]=i\hslash \tag{3}$$


$$[p_x,x]=i\hslash \tag{4}$$ where $p_x$ is the momentum along the $x$-axis.



Answer




Equations (a), (1), (2), (3), and (4) are all operator equations. Therefore you need to understand what an operator equation actually is.



Now, why must be $$ \left[\frac{\partial }{\partial x},x\right]\stackrel{?}{=}1 \tag{1}$$



That means the operators on the left-hand side and on the right-hand side always yield the same result when applied to arbitrary functions.


Hence, here you must prove that $$ \left[\frac{\partial}{\partial x},x\right] \psi(x) = 1\cdot \psi(x) $$ for every function $\psi(x)$.


The proof is a long sequence of very elementary steps: $$\begin{align} &\left[\frac{\partial }{\partial x},x\right] \psi(x) \\ = &\left(\frac{\partial}{\partial x} x - x \frac{\partial }{\partial x}\right) \psi(x) \\ = &\frac{\partial}{\partial x} x \psi(x) - x \frac{\partial }{\partial x} \psi(x) \\ = &\frac{\partial x}{\partial x}\psi(x) + x \frac{\partial \psi(x)}{\partial x} - x \frac{\partial \psi(x)}{\partial x} \\ = &\frac{\partial x}{\partial x}\psi(x) \\ = &1\cdot \psi(x) \\ \end{align}$$
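The same sequence of elementary steps can be verified symbolically, for example with sympy, by applying the commutator to an arbitrary function:

```python
import sympy as sp

x = sp.symbols('x')
psi = sp.Function('psi')(x)

# [d/dx, x] psi = (d/dx)(x*psi) - x*(d/dx)psi, for an arbitrary psi(x)
lhs = sp.diff(x * psi, x) - x * sp.diff(psi, x)

# The difference from 1*psi vanishes identically, so [d/dx, x] = 1
print(sp.simplify(lhs - psi))  # prints 0
```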


quantum mechanics - What is the connection between Poisson brackets and commutators?


The Poisson bracket is defined as:


$$\{f,g\} ~:=~ \sum_{i=1}^{N} \left[ \frac{\partial f}{\partial q_{i}} \frac{\partial g}{\partial p_{i}} - \frac{\partial f}{\partial p_{i}} \frac{\partial g}{\partial q_{i}} \right]. $$


The anticommutator is defined as:



$$ \{a,b\} ~:=~ ab + ba. $$


The commutator is defined as:


$$ [a,b] ~:=~ ab - ba. $$


What are the connections between all of them?


Edit: Does the Poisson bracket define some uncertainty principle as well?



Answer



Poisson brackets play more or less the same role in classical mechanics that commutators do in quantum mechanics. For example, Hamilton's equation in classical mechanics is analogous to the Heisenberg equation in quantum mechanics:


$$\begin{align}\frac{\mathrm{d}f}{\mathrm{d}t} &= \{f,H\} + \frac{\partial f}{\partial t} & \frac{\mathrm{d}\hat f}{\mathrm{d}t} &= -\frac{i}{\hbar}[\hat f,\hat H] + \frac{\partial \hat f}{\partial t}\end{align}$$


where $H$ is the Hamiltonian and $f$ is either a function of the state variables $q$ and $p$ (in the classical equation), or an operator acting on the quantum state $|\psi\rangle$ (in the quantum equation). The hat indicates that it's an operator.


Also, when you're converting a classical theory to its quantum version, the way to do it is to reinterpret all the variables as operators, and then impose a commutation relation on the fundamental operators: $[\hat q,\hat p] = C$ where $C$ is some constant. To determine the value of that constant, you can use the Poisson bracket of the corresponding quantities in the classical theory as motivation, according to the formula $[\hat q,\hat p] = i\hbar \{q,p\}$. For example, in basic quantum mechanics, the commutator of position and momentum is $[\hat x,\hat p] = i\hbar$, because in classical mechanics, $\{x,p\} = 1$.
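The classical side of this correspondence is easy to play with symbolically. Here is a minimal sketch for a single degree of freedom (the helper name `poisson` is just for illustration):

```python
import sympy as sp

q, p = sp.symbols('q p')

def poisson(f, g):
    """Poisson bracket {f, g} for a single degree of freedom (q, p)."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

# The fundamental bracket, whose quantum counterpart is [x, p] = i*hbar:
print(poisson(q, p))     # prints 1

# The bracket respects the same algebra as the commutator, e.g.
# {q^2, p} = 2q, matching [x^2, p] = 2*i*hbar*x:
print(poisson(q**2, p))  # prints 2*q
```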



Anticommutators are not directly related to Poisson brackets, but they are a logical extension of commutators. After all, if you can fix the value of $\hat{A}\hat{B} - \hat{B}\hat{A}$ and get a sensible theory out of that, it's natural to wonder what sort of theory you'd get if you fixed the value of $\hat{A}\hat{B} + \hat{B}\hat{A}$ instead. This plays a major role in quantum field theory, where fixing the commutator gives you a theory of bosons and fixing the anticommutator gives you a theory of fermions.


electromagnetism - Galilean Relativity and Electrodynamics


Consider the following:


On the one hand, the principle of relativity, by Galileo, (totally applied to the Newtonian mechanics) says:



There is no mechanical experiment that you could perform to measure the difference between inertial frames of reference.



On the other hand, Maxwell's equations (or the laws of electrodynamics; laws of motion) under the principle of relativity, by Galileo, yield a non-equivalence of inertial frames of reference.



My question is: from Maxwell's equations we get an electromagnetic wave. By asking in which frame of reference the wave has the speed $c$, we are led to the conclusion that the aether is this frame. We all know today that this is wrong, but from the perspective of a physicist of the XIX century, when we try to measure the speed of light in a frame moving with respect to the aether frame using Galileo's relativity principle, we find that the speed is different, say: $\displaystyle c' = v_{s} + c$ (*)


Is this formula $\displaystyle c' = v_{s} + c\,$ another way to verify that Maxwell's equations are different in different frames? (**)


(*) where $c$ is the speed of light with respect to the rest frame of reference with respect to the aether, $v_{s}$ is a velocity of a moving frame $S$ with respect to the aether and $c'$ is the speed of light with respect to the S reference frame.


(**) and then there is no equivalence of inertial frames for electromagnetism; and then the physics is different in one reference at rest with respect to the aether compared to a moving one, also with respect to the ether; and then there is one "absolute" frame where Maxwell's equations hold: the aether frame; and then there exists a type of an experiment that could mesure the absolute motion; and then this contradicts the principle of relativity for electromagnetism.



Answer



First of all, the Galilean and Newtonian spacetimes are not completely the same. The Galilean spacetime is a tuple $(\mathbb{R}^4,t_{ab},h^{ab},\nabla)$ (see Galilean spacetime interval?) while the Newtonian spacetime is a tuple $(\mathbb{R}^4,t_{ab},h^{ab},\nabla,\lambda^a)$ where $\lambda^a$ is a field that adds the preferred frame of rest:


$$\lambda^a=\left(\dfrac{\partial}{\partial t}\right)^a$$


In other words, the Galilean approach is relativity, which (with the Galilean transformation) is incompatible with the Maxwell's equations, while Newton adds the rest frame, which can be viewed as the frame of the aether. This approach allows adding electromagnetism to mechanics by breaking the relativity principle for electromagnetism.


Your conclusion that the Maxwell equations are not invariant under the Galilean transformation is correct and well known. The idea of the preferred rest frame of the aether was dismissed by the Michelson–Morley experiment, which established the independence of the speed of light from the frame of reference. Lorentz showed that changing the coordinate transformation from the Galilean one to the one which Poincaré later named after Lorentz would introduce time dilation and length contraction to the aether and make the speed of light independent of the frame. In other words, the aether became undetectable. In turn, Einstein pointed out that the Lorentz transformation can be viewed with respect to spacetime itself, so the existence of an undetectable aether is no longer required. At last, mechanics and electromagnetism were united under the same relativity principle.


Direct exposure to the vacuum of space


I was watching a few sci-fi movies and was wondering about the real science behind what would happen if you were subject to the conditions of outer space.



I read the Wikipedia article on space exposure, but was still confused. If a person were about the same distance from the Sun as the Earth is, would they still freeze to death (as shown in the movie Sunshine)?


I'm reading from all sorts of sites with conflicting information about what would actually happen when a person is exposed to the vacuum of space...



Answer



You'd freeze to death faster in the Atlantic ocean.


Space has essentially no thermal conductivity. All the heat you lose will be radiated away. According to the Stefan-Boltzmann law, $W = \sigma T^4$, you would lose at most about 500 watts per square meter of body surface area. By contrast, the convective heat transfer coefficient in water is about 12,500 watts per square meter per kelvin of temperature difference. So, I think freezing would be the least of your concerns.
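To make the comparison concrete, one can put rough numbers on both loss rates. The sketch below is only an estimate, and the skin temperature, water temperature, and body surface area are illustrative assumptions not taken from the answer.

```python
# Rough comparison of heat loss in vacuum (radiation only) versus
# cold water (convection).  Temperatures and area are assumptions.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
H_WATER = 12500.0  # convective coefficient quoted in the answer, W / (m^2 K)

T_skin = 305.0     # K, assumed skin temperature
T_water = 277.0    # K, assumed cold Atlantic water (~4 C)
area = 1.8         # m^2, assumed body surface area

# Radiative loss into empty space (no incoming radiation assumed):
radiative = SIGMA * T_skin**4 * area

# Convective loss into water, proportional to the temperature difference:
convective = H_WATER * (T_skin - T_water) * area

print(f"radiative loss in vacuum: {radiative:.0f} W")
print(f"convective loss in water: {convective:.0f} W")
print(f"ratio: {convective / radiative:.0f}x")
```

With these assumed numbers the radiative loss is under a kilowatt, while the convective loss in cold water is hundreds of kilowatts, which is the point of the answer.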


cosmology - Why cosmic background radiation is not ether?


Why is the cosmic background radiation not the ether? I mean, it's everywhere, and it's radiation, so we can measure a Doppler effect by moving with some velocity.



Answer



The luminiferous aether was, by definition, a hypothesized medium that was needed for electromagnetic waves to propagate through space. The cosmic microwave background isn't needed for photons to move; indeed, they move through space even if one removes (shields) the cosmic microwave radiation. So that's why the CMB isn't the luminiferous aether.


On the other hand, the aether also made a particular prediction, the existence of a preferred reference frame. In this sense, the CMB plays the same role as the aether. Cosmologists use the reference frame associated with the CMB as the preferred coordinate system. However, this ability to pick a "preferred" coordinate system depends on the environment – the CMB is just some property of the environment that could possibly be different as well (e.g. if you shield it). In this respect, it still differs from the luminiferous aether that couldn't have been shielded and that guaranteed the existence of a preferred coordinate system in any situation, regardless of details of the environment.



Thursday, February 19, 2015

condensed matter - How to visualize multi-dimensions in topological periodic table?


This is a question for those who are familiar with the topological periodic table. The first row and the right-side columns represent the dimensions of topological materials in the periodic table. I know that for $d=1$ we are talking about a nanowire, for $d=2$ about a film or surface coating, and for $d=3$ about a bulk material. What do $d=4, d=5, \ldots, d=8$ represent? How can one visualize them?




optics - Unpolarized light - Why don't light sources then look invisible?




Pardon me if this has been asked before. I was just thinking about how light from a source like the sun is unpolarized. I have two questions:




  1. Is unpolarized light from the sun 100% random? Meaning all wavelengths in the visible spectrum, polarized in all directions? (please correct me if "polarized" is not the right term here)




  2. If that's the case, how do the random waves not cancel each other out, rendering the sun invisible to us when we look at it?





Of course my thinking is flawed, so I'm just wondering what I'm missing. The analogy I could come up with is that waves in the ocean are random (I assume?), yet they still occur. Thanks in advance to anyone who can fix my brain.




Locality in Quantum Mechanics


We speak of locality or non-locality of an equation in QM depending on whether or not it contains differential operators of order higher than two.


My question is, how could one tell from looking at the concrete solutions of the equation whether the equation was local or not... or, to put it another way, what would it mean to say that a concrete solution was non-local?


edit: let me emphasise this grew from a problem in one-particle quantum mechanics. (Klein-Gordon eq.) Let me clarify that I am asking what is the physical meaning of saying a solution, or space of solutions, is non-local. Answers that hinge on the form of the equation are...better than nothing, but I am hoping for a more physical answer, since it is the solutions which are physical, not the way they are written down ....


This question, which I had already read, is related but the relation is unclear. Why are higher order Lagrangians called 'non-local'?




Answer



Presuming that there aren't nonlocal constraints, a differential operator that is polynomial in differential operators is local, it doesn't have to be quadratic. My understanding is that irrational or transcendental functions of differential operators are generally nonlocal (though that's perhaps a Question for math.SE).


A given space of solutions implies a particular nonlocal choice of boundary conditions, unless the equations are on a compact manifold (which, however, is itself a nonlocal structure). There is always an element of nonlocality when we discuss solutions in contrast to equations.


[For the anti-locality of the operator $(-\nabla^2+m^2)^\lambda$ for odd dimension and non-integer $\lambda$, one can see I.E. Segal, R.W. Goodman, J. Math. Mech. 14 (1965) 629 (for a review of this paper, see here).]


EDIT: Sorry, I should have gone straight to Hegerfeldt's theorem. Schrödinger's equation is enough like the heat equation to be nonlocal in Hegerfeldt's sense. There are two theorems, from 1974 in PRD and from 1994 in PRL, but in arXiv:quant-ph/9809030 we have, of course with references to the originals,



Theorem 1. Consider a free relativistic particle of positive or zero mass and arbitrary spin. Assume that at time $t=0$ the particle is localized with probability 1 in a bounded region V . Then there is a nonzero probability of finding the particle arbitrarily far away at any later time.


Theorem 2. Let the operator $H$ be self-adjoint and bounded from below. Let $\mathcal{O}$ be any operator satisfying $$0\le \mathcal{O} \le \mathrm{const.}$$ Let $\psi_0$ be any vector and define $$\psi_t \equiv \mathrm{e}^{-\mathrm{i}Ht}\psi_0.$$ Then one of the following two alternatives holds. (i) $\left<\psi_t,\mathcal{O}\psi_t\right>\not=0$ for almost all $t$ (and the set of such t's is dense and open) (ii) $\left<\psi_t,\mathcal{O}\psi_t\right>\equiv 0$ for all $t$.



Exactly how to understand Hegerfeldt's theorem is another question. It seems almost as if it isn't mentioned because it's so inconvenient (the second theorem, in particular, has a rather simple statement with rather general conditions), but a lot depends on how we define local and nonlocal.



I usually take Hegerfeldt's theorem to be a non-relativistic cognate of the Reeh-Schlieder theorem in axiomatic QFT, although that's perhaps heterodox, where microcausality is close to the only definition of local. Microcausality is one of the axioms that leads to the Reeh-Schlieder theorem, so, no nonlocality.


Where is the potential energy saved?



If you increase the height $h$, the potential energy increases according to $U=mgh$.


Where does the energy go, into atoms?




mathematical physics - Introductory texts for functionals and calculus of variation



I am going to learn some math about functionals (like the functional derivative, functional integration, and the functional Fourier transform) and the calculus of variations. I am just looking for any good introductory text on this topic. Any idea will be appreciated.




Wednesday, February 18, 2015

energy - Have the Rowan University "hydrino" findings been replicated elsewhere?


In 2009, Rowan University released a paper claiming to replicate Blacklight Power's results on energy generation using hydrino states of the hydrogen atom. The paper (link now dead) appears to describe the procedure in every detail as far as my untrained eye can tell.


The press release 11/29/10 states:



Cranbury, NJ (November 29, 2010)—BlackLight Power, Inc. (BLP) today announced that CIHT (Catalyst-Induced-Hydrino-Transition) technology has been independently confirmed by Dr. K.V. Ramanujachary, Rowan University Meritorious Professor of Chemistry and Biochemistry.[...]




Answer




I am highly skeptical of this result, primarily because the theories promoted by Black Light Power are improbable to the point of being gibberish. The energy states of hydrogen can be calculated exactly, and have been both calculated and measured spectroscopically to extremely high precision, and experiment and theory are in perfect agreement. If the modern understanding of quantum physics (including QED) were incomplete enough to leave room for mysterious lower-energy states in hydrogen, there would've been some indication of this in one of the countless experiments that have been done on hydrogen.


Another good reason to be skeptical of this result is that the report in question seems to have been "released" only via Black Light Power's web site. The only mention of the authors of this report in conjunction with "hydrinos" that Google can find comes from Black Light Power. This result has not appeared in any scientific journal known to Google, or even on the Rowan University web site. This is not what I would call a ringing endorsement of the work.


As for the report itself, it is entirely concerned with chemical NMR spectra, and I don't have any first-hand experience with those. I know just enough about the field to know that there can be subtle issues involved with the recording and interpretation of these. I'm more inclined to believe that the mysterious peaks seen in their samples are some NMR artifact than that they are the signature of radically new physics.


It's conceivable, barely, that this really does represent some dramatic new discovery, and has not yet appeared in print because it's working its way through the peer review process, taking a long time because extraordinary claims require extraordinary scrutiny. The principal person behind Black Light Power has been making claims like this since I was in grad school in the 1990's, though, and has yet to produce anything solid. I wouldn't hold my breath waiting for this to appear in a reputable peer-reviewed journal, if I were you.


quantum field theory - Different representations of the Lorentz algebra



I've found many definitions of Lorentz generators that satisfy the Lorentz algebra: $$[L_{\mu\nu},L_{\rho\sigma}]=i(\eta_{\mu\sigma}L_{\nu\rho}-\eta_{\mu\rho}L_{\nu\sigma}-\eta_{\nu\sigma}L_{\mu\rho}+\eta_{\nu\rho}L_{\mu\sigma}),$$ but I don't know the difference between them.


Firstly, there is the straightforward deduction: evaluating the derivative of the Lorentz transformation at zero and multiplying it by $-i$. It's a very physical approach.


Another possibility is to define:


$$\left(J_{\mu\nu}\right)_{ab}=-i(\eta_{\mu a}\eta_{\nu b}-\eta_{\nu a}\eta_{\mu b})$$


This will hold for any dimension. I find it a bit confusing because we mix matrix indices with component indices.


We could also define:


$$M_{\mu\nu}=i(x_\mu\partial_\nu-x_\nu\partial_\mu)+S_{\mu\nu}$$


Where $S_{\mu\nu}$ is Hermitian, commutes with $M_{\mu\nu}$ and satisfies the Lorentz algebra. I think this way is more geometrical because we can see a Lorentz transformation as a rotation mixing space and time.


The two last options look quite similar to me.


Lastly, we could start with the gamma matrices $\gamma^\mu$, that obey the Clifford algebra: $$\{\gamma^{\mu},\gamma^{\nu}\}=2\eta^{\mu\nu}\mathbb{I}$$ (this is easy to prove in QFT using Dirac's and KG's equations). And define: $$S^{\mu\nu}=\frac{i}{4}[\gamma^{\mu},\gamma^{\nu}]$$
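The Clifford algebra relation can be verified numerically. The sketch below uses the Dirac representation of the gamma matrices, which is an assumption of this example; the anticommutation relation itself is representation-independent.

```python
import numpy as np

# Numerical check of {gamma^mu, gamma^nu} = 2 eta^{mu nu} I, using the
# Dirac representation (an assumed choice; any representation works).

Z = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_x
         np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_z

gammas = [np.block([[I2, Z], [Z, -I2]])]                 # gamma^0
gammas += [np.block([[Z, s], [-s, Z]]) for s in sigma]   # gamma^1..gamma^3

eta = np.diag([1.0, -1.0, -1.0, -1.0])                   # metric (+,-,-,-)

for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Clifford algebra relations verified")
```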



It seems that this is the most abstract definition. By the way, how are Clifford algebras used in QFT, besides gamma matrices (I know they are related to quaternions and octonions, but I never saw these applied to Physics)?


Are there any more possible definitions?


What are the advantages and disadvantages of each?


Are some of them more fundamental and general than the others?



Answer



UPDATE - Answer edited to be consistent with the latest version of the question.


The different definitions you mentioned are NOT definitions. In fact, what you are describing are different representations of the Lorentz Algebra. Representation theory plays a very important role in physics.


As far as the Lie algebra are concerned, the generators $L_{\mu\nu}$ are simply some operators with some defined commutation properties.


The choices $L_{\mu\nu} = J_{\mu\nu}, S_{\mu\nu}$ and $M_{\mu\nu}$ are different realizations or representations of the same algebra. Here, I am defining \begin{align} \left( J_{\mu\nu} \right)_{ab} &= - i \left( \eta_{\mu a} \eta_{\nu b} - \eta_{\mu b} \eta_{\nu a} \right) \\ \left( S_{\mu\nu}\right)_{ab} &= \frac{i}{4} [ \gamma_\mu , \gamma_\nu ]_{ab} \\ M_{\mu\nu} &= i \left( x_\mu \partial_\nu - x_\nu \partial_\mu \right) \end{align} Another possible representation is the trivial one, where $L_{\mu\nu}=0$.


Why is it important to have these different representations?



In physics, one has several different fields (denoting particles). We know that these fields must transform in some way under the Lorentz group (among other things). The question then is, How do fields transform under the Lorentz group? The answer is simple. We pick different representations of the Lorentz algebra, and then define the fields to transform under that representation! For example



  1. Objects transforming under the trivial representation are called scalars.

  2. Objects transforming under $S_{\mu\nu}$ are called spinors.

  3. Objects transforming under $J_{\mu\nu}$ are called vectors.


One can come up with other representations as well, but these ones are the most common.


What about $M_{\mu\nu}$, you ask? The objects I described above are actually how NON-fields transform (for lack of a better term; I am simply referring to objects with no spacetime dependence). On the other hand, in physics we care about FIELDS. In order to describe these, one needs to define not only the transformation of their components but also their spacetime dependence. This is done by adding the $M_{\mu\nu}$ representation to all the definitions described above. We then have



  1. Fields transforming under the trivial representation $L_{\mu\nu}= 0 + M_{\mu\nu}$ are called scalar fields.


  2. Fields transforming under $S_{\mu\nu} + M_{\mu\nu} $ are called spinor fields.

  3. Fields transforming under $J_{\mu\nu} + M_{\mu\nu}$ are called vector fields.


Mathematically, nothing makes these representations any more fundamental than the others. However, most of the particles in nature can be grouped into scalars (Higgs, pion), spinors (quarks, leptons) and vectors (photon, W-boson, Z-boson). Thus, the above representations are often all that one talks about.


As far as I know, Clifford algebras are used only in constructing spinor representations of the Lorentz algebra. There may be some obscure context in some other part of physics where this pops up, but I haven't seen it. Of course, I am no expert in all of physics, so don't take my word for it. Others might have a different perspective on this.




Finally, just to be explicit about how fields transform (as requested) I mention it here. A general field $\Phi_a(x)$ transforms under a Lorentz transformation as $$ \Phi_a(x) \to \sum_b \left[ \exp \left( \frac{i}{2} \omega^{\mu\nu} L_{\mu\nu} \right) \right]_{ab} \Phi_b(x) $$ where $L_{\mu\nu}$ is the representation corresponding to the type of field $\Phi_a(x)$ and $\omega^{\mu\nu}$ is the parameter of the Lorentz transformation. For example, if $\Phi_a(x)$ is a spinor, then $$ \Phi_a(x) \to \sum_b \left[ \exp \left( \frac{i}{2} \omega^{\mu\nu} \left( S_{\mu\nu} + M_{\mu\nu} \right) \right) \right]_{ab} \Phi_b(x) $$


quantum field theory - Electron Positron annihilation Feynman Diagram



I am having some trouble understanding this Feynman diagram: it seems to indicate that the electron produces the positron, as the arrow of the positron is pointing away from the electron.


Additionally, the arrow is directed downwards, implying that the particle is going backwards in time? Is this diagram wrong, or does the arrow mean something else, or does the positron go back in time?!?





electromagnetism - Can the path of a charged particle under the influence of a magnetic field be considered piecewise linear?


Ordinarily we consider the path of a charged particle under the influence of a magnetic field to be curved. However, in order for the trajectory of the particle to change, it must emit a photon. Therefore, in principle, if we were able to view the path of the particle at sufficiently high resolution, would its path actually be polygonal, with a photon emitted at each vertex?



Answer



You're asking what would happen if we could view things with unlimited resolution. You view the emissions of the synchrotron photons as discrete events, and you ask whether the path is linear between these emissions. The problem is that quantum particles do not have trajectories, so it's not meaningful to ask about the actual path followed by the particle. All you can do is make measurements and ask about the sequence of measurement results. There is then a limit on the resolution of what you will find, such as the ~micron limit in the pictures in Anna's answer, where the measurements are the ionization events.


Tuesday, February 17, 2015

quantum electrodynamics - Is there a way to calculate the photoelectric effect in QED via a Feynman diagram?


The photoelectric effect is the historic origin of the quantum particle description of light. From it we learn that when light is shone onto a metal, single photons interact with single electrons in the metal, which are ejected if the absorbed energy is larger than the binding energy of the metal.


The (free) process is:


$$e^-+\gamma\rightarrow e^-.$$


However, this process violates conservation of energy and momentum if all states are real (on-shell) particles. Of course, the reason the process occurs is that the electron is initially bound (not free), and some energy goes into releasing the electron from the metal potential.



The question is, is there anyway to do this calculation in QED, somehow incorporating the binding energy in the calculation?




special relativity - What is $c + (-c)$?


If object A is moving at velocity $v$ (normalized so that $c=1$) relative to a ground observer emits object B at velocity $w$ relative to A, the velocity of B relative to the ground observer is $$ v \oplus w = \frac{v+w}{1+vw} $$


As expected, $v \oplus 1 = 1$, as "nothing can go faster than light".
Similarly, $v \oplus -1 = -1$. (same thing in the other direction)


But what if object A is moving at the speed of light and emits object B at the speed of light in the exact opposite direction? In other words, what is the value of $$1 \oplus -1?$$ Putting the values in the formula yields the indeterminate form $\frac{0}{0}$. This invites making sense of things by taking a limit, but $$ \lim_{(v,w)\to (1,-1)} \frac{v+w}{1+vw}$$ is not well-defined, because the limit depends on the path taken.
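The path dependence of the limit is easy to exhibit numerically. In the sketch below, three different approaches to $(v,w)=(1,-1)$ give three different values, confirming that the limit is ill-defined.

```python
# The indeterminate limit of the velocity-addition formula at (v, w) = (1, -1):
# approaching along different paths gives different answers.

def add_velocities(v, w):
    """Relativistic velocity addition in units where c = 1."""
    return (v + w) / (1 + v * w)

eps = 1e-6

# Path 1: w = -v, approaching along the diagonal -> identically 0.
print(add_velocities(1 - eps, -(1 - eps)))   # 0.0

# Path 2: v = 1 exactly (a photon), w -> -1 -> identically 1.
print(add_velocities(1.0, -1 + eps))         # 1.0

# Path 3: w = -1 exactly, v -> 1 -> identically -1.
print(add_velocities(1 - eps, -1.0))         # -1.0
```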


So what would the ground observer see? Is this even a meaningful question?


Edit: I understand $1 \oplus -1$ doesn't make sense mathematically (though I made it clear above!), I'm asking what would happen physically. I'm getting the sense that my fears were correct: it's physically a nonsensical situation.





quantum mechanics - Nonexistence of a Probability for the Klein-Gordon Equation


David Bohm in his wonderful monograph Quantum Theory, in Section 4.6 discusses the difficulties one encounters in trying to develop a relativistic quantum mechanics. He starts from the relation \begin{equation} \hbar^2 \omega^2 = m^2 c^4 + \hbar^2 k^2 c^2 \end{equation} (which is equivalent to the classical relation $E^2=m^2 c^4 + p^2 c^2$), from which one derives (by proceeding as in Section 3.19) the second-order equation (Klein-Gordon equation): \begin{equation} \frac{\partial^2 \psi}{\partial t^2} = c^2 \Delta \psi - \frac{m^2 c^4}{\hbar^2} \psi. \end{equation} Then he tries to define a probability function $P$ involving $\psi$ and its partial derivatives $\frac{\partial \psi}{ \partial t}$, $\frac{\partial \psi}{\partial x_i}$: \begin{equation} P(x,t)= \hbar^2 \left| \frac{\partial \psi}{ \partial t} \right|^2 + \hbar^2 c^2 \lvert \nabla \psi \rvert^2 + m^2 c^4 \lvert \psi \rvert^2, \end{equation} whose integral $\int P(\mathbf{x},t) d\mathbf{x}$ can be seen to be conserved over time. Anyway, Bohm says that this function does not give rise to a physically acceptable probability, since if we choose e.g. $\psi= \exp i \left( \frac{Et-\mathbf{p} \cdot \mathbf{x} } {\hbar} \right)$, we get \begin{equation} P(x,t)=E^2+p^2c^2+m^2c^4=2E^2, \end{equation} so that $P$ behaves like the (4,4)-component of a rank-2 tensor. From this he concludes that under a Lorentz transformation the integral $\int P(\mathbf{x},t) d\mathbf{x}$ transforms like an energy, that is, like the fourth component of a four-vector, so it is not invariant (for a proof of the last statement see my post Tensors and the Klein-Gordon Equation).
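The plane-wave evaluation quoted from Bohm can be reproduced symbolically. The sketch below uses SymPy, restricted to one spatial dimension for brevity; the symbol names are illustrative.

```python
import sympy as sp

t, x, E, p = sp.symbols('t x E p', real=True)
hbar, c, m = sp.symbols('hbar c m', positive=True)

# Plane wave with Bohm's sign convention: psi = exp(i (E t - p x) / hbar)
psi = sp.exp(sp.I * (E * t - p * x) / hbar)

def abs2(expr):
    # |expr|^2 written as expr * conjugate(expr)
    return sp.expand(expr * sp.conjugate(expr))

# P = hbar^2 |dpsi/dt|^2 + hbar^2 c^2 |dpsi/dx|^2 + m^2 c^4 |psi|^2
P = (hbar**2 * abs2(sp.diff(psi, t))
     + hbar**2 * c**2 * abs2(sp.diff(psi, x))
     + m**2 * c**4 * abs2(psi))

P = sp.simplify(P)  # equals E^2 + p^2 c^2 + m^2 c^4, i.e. 2 E^2 on shell
print(P)
```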


Bohm then states without proof that it is not possible to define any (reasonable) probability density function, by using the solution $\psi$ of the wave equation above and its partial derivatives, which is invariant under Lorentz transformation.


Does someone know some compelling reason why this is true?



Answer



I have discovered that Kazemi, Hashamipour and Barati, in their work Probability density of relativistic spinless particles, succeeded in finding, in the one-dimensional case, a physically acceptable probability function for the Klein-Gordon equation. This probability function satisfies all the properties of a meaningful probability function, and in particular its integral is Lorentz invariant.


Anyhow, this probability function does not disprove Bohm's statement, since $P(x,t)$ does not depend only on $\psi(x,t)$ and the values of the partial derivatives of $\psi$ computed at $(x,t)$; rather, it is a functional of $\psi$, that is, it depends on the whole function $\psi$. So this probability function is not a counterexample to Bohm's statement.



Finally, I have found a very interesting work Uniqueness of conserved currents in quantum mechanics by Peter Holland, who shows that an essentially unique conserved four-vector current $\mathbf{J}$ exists for the Klein-Gordon equation, which has covariant components


\begin{equation} J_{\mu} = \frac{i \hbar}{2m} \left( \psi^{*} \partial_{\mu} \psi - \psi \partial_{\mu} \psi^{*} \right). \end{equation}


This current corresponds to the one usually defined for the Klein-Gordon equation. Its density component is $P=\frac{i \hbar}{2m} \left(\psi^{*} \frac{\partial \psi}{\partial t} - \psi \frac{\partial \psi^{*}}{\partial t} \right)$; we see that $P$ does not satisfy the property $P \geq 0$, so that Bohm's statement follows.
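The lack of positivity is easy to exhibit symbolically. The sketch below works in one spatial dimension and uses the sign convention $\psi = e^{-i(Et - px)/\hbar}$, which is an assumption of this example; the density then reduces to $E/m$, negative for negative-energy solutions.

```python
import sympy as sp

t, x, E, p = sp.symbols('t x E p', real=True)
hbar, m = sp.symbols('hbar m', positive=True)

# Plane wave e^{-i(Et - px)/hbar}; E < 0 labels negative-energy solutions.
psi = sp.exp(-sp.I * (E * t - p * x) / hbar)

# Density component of the conserved Klein-Gordon current:
# P = (i hbar / 2m) (psi* dpsi/dt - psi dpsi*/dt)
P = sp.simplify(sp.I * hbar / (2 * m)
                * (sp.conjugate(psi) * sp.diff(psi, t)
                   - psi * sp.diff(sp.conjugate(psi), t)))
print(P)  # E/m, which is negative whenever E < 0
```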


We must anyhow remark that Holland assumes in his proof that $\mathbf{J}$ depends only on $\psi$ and its first derivatives, an assumption not explicitly made by Bohm, though a perfectly plausible one (see the argument given by Holland in his work to justify the requirement that conserved currents should depend solely on the 'state variables').


classical mechanics - Moment of a force about a given axis (Torque) - Scalar or vectorial?

I am studying Statics and saw that: The moment of a force about a given axis (or Torque) is defined by the equation: $M_X = (\vec r \times \...