Monday, August 31, 2015

special relativity - Continuous Lorentz Transformation for time-like and space-like points



On page 28 of the book "An Introduction to Quantum Field Theory" by Michael E. Peskin and Daniel V. Schroeder, the last paragraph says: "When $(x−y)^2<0$ we can perform a Lorentz transformation taking $(x−y)→−(x−y)$. Note that if $(x−y)^2>0$ there is no continuous Lorentz transformation that takes $(x−y)→−(x−y)$."


Is the fact that there is no continuous transformation due to the necessity of crossing the null surface of the light cone (see fig. 2.4 in the book) to get from, say, $t$ to $-t$, so that the transformation cannot be continuous? If so, I would appreciate an explanation.



Answer



I know what figure you mean :-).


Yes, a Lorentz transformation cannot cross the null surface. Otherwise one could convert time-like into space-like distances. And since $\mathrm ds^2$ is Lorentz invariant, it cannot flip its sign either.
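Here is a quick numerical illustration (a sketch, not a proof): a boost never changes the sign of the time component of a time-like vector, while an ordinary continuous rotation by $\pi$ already maps a space-like separation to its negative.

    import numpy as np

    def boost_x(eta):
        # Lorentz boost along x with rapidity eta, signature (+,-,-,-)
        return np.array([[np.cosh(eta), np.sinh(eta), 0, 0],
                         [np.sinh(eta), np.cosh(eta), 0, 0],
                         [0, 0, 1, 0],
                         [0, 0, 0, 1]])

    timelike  = np.array([1.0, 0.0, 0.0, 0.0])   # (x-y)^2 > 0
    spacelike = np.array([0.0, 1.0, 0.0, 0.0])   # (x-y)^2 < 0

    for eta in [-5.0, -1.0, 0.0, 1.0, 5.0]:
        print((boost_x(eta) @ timelike)[0])      # always >= 1: the sign never flips

    rot_pi = np.diag([1.0, -1.0, -1.0, 1.0])     # proper rotation by pi about z
    print(rot_pi @ spacelike)                    # -> [0, -1, 0, 0] = -(x - y)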



electricity - How much of current flows through a bird sitting on a power line?


I've been googling for hours and have gone through over a hundred answers. Some say the bird doesn't form a closed loop; some say the current is so small that it doesn't kill the bird. From what I understand of electricity, I'm sure the bird does make a parallel connection to the wire, so there must be some electrons moving through its body.


Could someone please explain which is the right answer and why? And if a parallel circuit is indeed formed, how can I calculate the current going through the bird's body?




Answer



The copper cable inside the power line has some finite resistance per unit length, call it $\alpha$. If current $I_0$ is flowing through the wire, then the voltage drop across a length $L$ of cable is $V=I_0\alpha L$. Here $L$ is the distance between the bird's feet. The bird forms a parallel branch whose resistance $R$ is probably almost entirely due to the resistance of the insulation on the wire under the bird's feet, not the bird's body itself. The current through the bird is $I_B=V/R=I_0\alpha L/R$. Because $\alpha$ is small and $R$ is large, this current is extremely small compared to $I_0$.
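To put rough numbers on this, here is a sketch in which every value ($\alpha$, $L$, $I_0$, $R$) is an illustrative assumption rather than data for any real line:

    # Order-of-magnitude sketch; alpha, L, I0 and R are assumed values.
    alpha = 3e-4    # wire resistance per metre, in ohm/m
    L     = 0.05    # distance between the bird's feet, in m
    I0    = 100.0   # current carried by the line, in A
    R     = 1e6     # resistance of the bird branch, in ohm

    V   = I0 * alpha * L    # voltage drop between the bird's feet
    I_B = V / R             # current through the bird
    print(V, I_B)           # ~1.5e-3 V and ~1.5e-9 A: negligible next to I0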


The people who tell you $I_B$ is zero are basically right. They're applying approximations that are excellent in most situations, including this one. Specifically, we normally take $\alpha\approx0$ for a wire, so there is no voltage drop between any two points on the same wire.


quantum mechanics - How does the Many Worlds Interpretation define probability?



I'm not asking for any derivation. What is probability related to in the MWI? Is it related to the fraction of observers that see various outcomes? Or to something more objective?




Sunday, August 30, 2015

quantum mechanics - Difference between $|0\rangle$ and $0$ in the context of isospin


I know this has been asked before, but I am confused having read it in the context of isospin, where the creation operators act on the "vacuum" state (representing no particles) $$a^\dagger_m|0\rangle =|m\rangle$$ to create a nucleon labelled by $m$ ($m$ can be proton or neutron). And the action of the annihilation operator is $$a|0\rangle =0$$ As $|0\rangle$ was already the vacuum state with no particles, what exactly did the annihilator annihilate?



In the linked question, I could understand David's and Ted's answers in the context of the harmonic oscillator, angular momentum and such. But in this case, it is hard to imagine $|0\rangle$ as a normalized non-zero vector which does not contain anything, yet whose energy is not zero.



Answer



The state $|0\rangle$ is not the zero vector in Hilbert space; it is a state containing no physical excitations. Since the harmonic oscillator was not enough, perhaps you can understand this as follows.


Let me make a box with mirrors, and then prepare the following state


$$ |\psi\rangle = {1\over \sqrt{2}} |0\rangle + {1\over\sqrt{2}} |k\rangle $$


This state is a superposition of the state with no photons in the box and the state with one photon in the box with wavenumber $k$. Now take the expectation value of the number-of-photons operator in this state, using the number operator $N$, given by:


$$ N = \int_k a^\dagger_k a_k $$


$$ \langle \psi | N | \psi\rangle $$


The vacuum part gives 1/2 of zero, while the $k$-part gives 1/2 of 1. So the expected number of particles is 1/2, and this requires that the state $|0\rangle$ be annihilated by the annihilation operator: it must have eigenvalue 0, it must go to the zero vector. The interpretation is that there is a probability 1/2 of zero particles and a probability 1/2 of 1 particle, so the expected average number of particles is 1/2.
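Here is a tiny numerical sketch of the same computation, truncating the mode to the $\{|0\rangle, |k\rangle\}$ subspace:

    import numpy as np

    ket0 = np.array([1.0, 0.0])        # the vacuum |0>: a unit-norm vector
    ket1 = np.array([0.0, 1.0])        # the one-photon state |k>
    N    = np.diag([0.0, 1.0])         # number operator on this subspace

    psi = (ket0 + ket1) / np.sqrt(2)
    print(psi @ N @ psi)               # 0.5, the expected particle number
    print(N @ ket0)                    # the zero vector: N|0> = 0, yet |0> != 0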


This example is to show why the state $|0\rangle$, the state where there are zero particles, is an ordinary unit-length state. It can exist in superpositions with other states.



nuclear physics - What happens if we put together a proton and an antineutron?


A hydrogen nucleus consists of a single proton. A hydrogen-2 (deuterium) nucleus consists of a proton and a neutron. A tritium nucleus consists of a proton and two neutrons.


This makes me wonder what an atomic nucleus made of a proton and a "minus one neutron" would look like, and the closest thing to a "minus one neutron" I can imagine is an antineutron.



What happens if we combine a proton and an antineutron? Are things like this even possible?


If such a thing is an atomic nucleus, can we add an electron and get an atom?



Edit: in the comments below, I also asked this more specific question (I suppose it's useful to also mention it here in order not to create complete chaos):



If the proton and antineutron annihilate, is it still possible that the thing they annihilate to remains somehow stable enough to behave like an atomic nucleus?





Answer



Given that the valence quark content of a proton is $(uud)$ and that of an anti-neutron is $(\overline{udd})$, the answer is that sooner or later some of the constituent quarks will annihilate and you get a spray of assorted particles.


The lifetime of such a nucleus will depend on its orbital angular momentum, with s-states being very short-lived and high angular momentum states lasting a little longer.


antimatter - Anti-Particle of Neutron


The anti-particle corresponding to a proton or an electron is a particle with an equal mass, but an opposite charge. So what is the anti-particle corresponding to a neutron (which does not possess a charge)? And if it is just another neutron, will its collision with the original neutron be as destructive as the collision of a proton with an anti-proton or an electron with an anti-electron?



Answer



The anti-particle corresponding to a neutron is an anti-neutron!


The neutron is made up of one up quark and two down quarks. The anti-neutron is made up of an anti-up quark and two anti-down quarks. Both have zero charge because the charges of the quarks within them balance out.


You are correct that elementary particles with no charge are often their own anti-particles. These tend to be vector bosons; for example the photon and the Z boson are their own anti-particles. The W$^-$ and W$^+$ are each other's anti-particles. It's a bit more complicated with the gluons because they carry a colour charge.



Amongst the fermions there are no particles known that are their own anti-particles. If such particles exist they would obey the Majorana equation and these theoretical particles are known as Majorana fermions.


newtonian mechanics - Why are work and energy considered different in physics when the units are the same?


There is a question that explains work and energy on stack exchange but I did not see this aspect of my problem. Please just point me to my error and to the correct answer that I missed.


What I am asking is this: why, in physics, does having the same units not necessarily mean you have the same thing? Let me explain. Please let me use m for meter, sec for second, and kg for kilogram as the units, for brevity's sake.


The units for work are kg * m/sec^2 * m. The units for kinetic energy are kg * (m/sec)^2. They look the same to me. I need them to be the same so I can figure out the principle of least action. Comments are welcome.



Answer



One definition of work is "a change in energy." Any change in a physical quantity must have the same units as that quantity.


Different kinds of work are associated with different kinds of energy: conservative work is associated with potential energy, non-conservative work with mechanical energy, and total work with kinetic energy. In fact, that's one way to see the oft-quoted Law of Conservation of Energy:


$$ W_{total}=W_{non-conservative}+W_{conservative}\\ \Delta KE=\Delta E - \Delta PE \\ \therefore \Delta E=\Delta KE + \Delta PE $$


So just as impulse (which is a change in momentum) has the same units as momentum, work has the same units as energy, and a change in velocity has units of velocity, etc.


A more difficult question might be why torque has the same units as energy. This is more subtle, but the key concept is this: units are not the only thing that determines a quantity's interpretation. Context matters too. Energy and torque may have the same units, but they are very different things and would never be confused for one another because they appear in very different contexts.
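To make this point concrete, here is a minimal sketch that tracks units as exponents of (mass, length, time); it shows work, kinetic energy and torque all landing on the same dimensions even though they are different quantities:

    # Represent a unit as exponents of (kg, m, s).
    KG, M, S = (1, 0, 0), (0, 1, 0), (0, 0, 1)

    def mul(*dims):
        return tuple(sum(d[i] for d in dims) for i in range(3))

    def inv(d):
        return tuple(-x for x in d)

    force  = mul(KG, M, inv(S), inv(S))     # kg m / s^2
    work   = mul(force, M)                  # force times displacement
    ke     = mul(KG, M, M, inv(S), inv(S))  # kg (m/s)^2
    torque = mul(force, M)                  # force times lever arm

    print(work == ke == torque)             # True: same units, different physics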



One cannot blindly look at the units of a quantity and know what is being discussed. A dimensionful quantity might be meaningless or meaningful depending on the context, and its meaning can change with that context. Action times speed divided by length has the same units as energy but without any meaningful interpretation (as far as I'm aware).


Saturday, August 29, 2015

optics - With a box that has perfect mirrors on the inside would it be possible to trap light?



With a box that has many perfectly reflective mirrors, would it be possible to trap a beam of light in the box indefinitely?




gravity - What allows newborn suns to travel away from each other?



What allows massive newborn suns to move away from each other, as has been observed? I would think that their massive gravity would prevent this and cause them to slam into each other.




statistical mechanics - Pressure inside a box with only a single molecule


Suppose that we have a cube with dimensions 25x25x25 centimeters, and we put a single hydrogen molecule inside. How can we calculate the pressure?




lie algebra - Why complexify in order to construct Dirac representation?



Suppose we have a theory that is covariant under the Spin group $\mathrm{Spin}(2n-1,1)$. We consider the real vector space $V = \mathbb{R}^{2n-1,1}$, which naturally comes with a Lorentzian inner product. On this vector space we introduce an orthonormal basis $e_0, e_1, \ldots, e_{2n-1}$, where $e_0$ denotes the time direction.


To construct the Dirac representation of $\mathrm{Spin}(2n-1,1)$ we take the complexified space $T = \mathbb{C} \otimes V$. My question is: why is it that in order to construct the Dirac representation we complexify the space?


NOTE: The theory is even dimensional and of Lorentzian signature.




homework and exercises - The actual period of a pendulum at 90°. Looking for the correct formula


Do you have access to any scientific experiment which gives the period of a pendulum when the angle is $90^\circ$:


[figure: a pendulum released from a $90^\circ$ angle]


This article says $T$ increases by about $18\%$ as the amplitude goes up to $90^\circ$, so for a seconds pendulum the complete period would be roughly $2.36$ s.
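For reference, the exact point-pendulum result (assuming no friction and a rigid, massless rod) is $T/T_0 = \frac{2}{\pi}K(\sin(\theta_0/2))$, with $K$ the complete elliptic integral of the first kind, and it reproduces the quoted $18\%$; a quick check:

    import numpy as np
    from scipy.special import ellipk   # note: scipy's ellipk takes m = k^2

    theta0 = np.radians(90.0)
    m = np.sin(theta0 / 2) ** 2
    ratio = (2 / np.pi) * ellipk(m)
    print(ratio)        # ~1.1803, i.e. about 18% longer than the small-angle T0
    print(2 * ratio)    # ~2.36 s for a seconds pendulum (T0 = 2 s)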





  1. I'd like to know the ratio at $90^\circ$ to a good approximation, and the span over which it can vary in relation to air friction, if the bob is not massive.




  2. Also, I'd like to know the maximum speed of the bob at $P_0$ and how it compares with the speed of the same ball sliding without friction on a circular surface; would it be the same?




[figure: a bob sliding down a block that rests on a frictionless surface]


What I am trying to understand here is rather complex, I'll try to explain:


If the block is placed on an air-track or other frictionless surface, the block would acquire a certain speed, momentum and KE in the opposite direction, right? The same amount of momentum is lost by the bob, and we can find the speed loss.





  1. Isn't the same amount of speed lost by the bob when it is attached to a pendulum?




  2. Is this an important factor, or maybe the most important factor (more relevant than air friction or friction at the pivot), for the increase of the period when the angle is greater than $30^\circ$?




  3. I figured out that this factor can account for about one third of the $18\%$ increase; can you give a more accurate value?





What I am suggesting is that this loss of horizontal momentum (which is made clear when the bob is not attached to a rod but slides on a block) is the most important factor that makes the actual period increase when the starting point deviates in the vertical direction.




homework and exercises - Capacitance of spherical capacitor when earthed



The capacitance of a spherical capacitor is given by $$C=\frac{4\pi \varepsilon_{0}}{\left ( \frac{1}{a}-\frac{1}{b} \right )}$$


[figure: spherical capacitor]


How can I find out the capacitance of the capacitor when



  1. The inner sphere is earthed.


  2. The outer hollow sphere is earthed.




Friday, August 28, 2015

Why can't electrostatic field lines form closed loops?



My physics textbook says "Electrostatic field lines do not form closed loops. This is a consequence of the conservative nature of electric field." But I can't quite understand this. Can anyone elaborate?



Answer



A force is said to be conservative if its work along a trajectory from a point $A$ to a point $B$ is equal to the difference $U(A)-U(B)$, where $U$ is a function called the potential energy. This implies that if $A=B$ then there is no change in potential energy. This holds regardless of whether the kinetic energy changes.


If a conservative force were to form loops, it could provide a non-zero net work (because the direction of the force could always be the same as that of the looping trajectory) in going from $A$ and then back to $A$, while at the same time its conservative character would ensure that this work is zero, which is a contradiction.


Hence, "conservative force" and "forming loops" are two incompatible properties that cannot be satisfied at the same time.


optics - What happens to light in a perfect reflective sphere?


Let's say you have the ability to shine some light into a perfectly round sphere and the sphere's interior surface was perfectly smooth and reflective and there was no way for the light to escape.



If you could observe the inside of the sphere, what would you observe? A glow? And would temperature affect the outcome?


Seems silly, it's just something I've always thought about but never spent enough time (until now) to actually find an answer.



Answer



OK, the inside of the sphere is perfectly-reflecting, and there's an ideal optical diode to let light in but keep it inside. As you keep the light turned on, the photon density in the sphere goes up and up, of course. It "looks" brighter and brighter, but you don't see that because the light can't escape. After turning the light off, it stays bright, the photons just keep bouncing around. If you "stick your head in" to look, you see a bright uniform glow that quickly dies away because your head and eyes are absorbing all the photons.


But do the photons bounce around forever? No!! Even a perfectly-reflective sphere will still interact with the light, because of radiation pressure. Each time a photon bounces off a wall, the wall gets kicked backwards, gaining energy at the expense of the photon (on average). Light can't produce a smooth force, only a series of kicks with shot noise statistics, because one photon hits the wall at a time. These kicks eventually heat up the walls, and cool down the photons. (From the photon's point of view, the photon frequency is going down because of Doppler-shifts during reflection off the moving walls.) Eventually everything equilibrates to a uniform temperature, hotter than the sphere started out. I don't know how long that would take. [In any realistic circumstance this radiation pressure effect can be ignored, because it is much less important than the "reflection is not 100% perfect" effect.]


quantum mechanics - Is Angular Momentum truly fundamental?


This may seem like a slightly trite question, but it is one that has long intrigued me.



Since I formally learned classical (Newtonian) mechanics, it has often struck me that angular momentum (and generally rotational dynamics) can be fully derived from normal (linear) momentum and dynamics. Simply by considering circular motion of a point mass and introducing new quantities, it seems one can describe and explain angular momentum fully without any new postulates. In this sense, I am lead to believe only ordinary momentum and dynamics are fundamental to mechanics, with rotational stuff effectively being a corollary.


Then at a later point I learned quantum mechanics. Alright, so orbital angular momentum does not really disturb my picture of the origin/fundamentality, but when we consider the concept of spin, this introduces a problem for this proposed (philosophical) understanding. Spin is apparently intrinsic angular momentum; that is, it applies to a point particle. Something that is not actually moving/rotating can possess angular momentum - a concept that does not exist in classical mechanics! Does this imply that angular momentum is in fact a fundamental quantity, intrinsic to the universe in some sense?


It somewhat bothers me that fundamental particles such as electrons and quarks can possess their own angular momentum (spin), when otherwise angular momentum/rotational dynamics would fall out quite naturally from normal (linear) mechanics. There are of course some fringe theories that propose that even these so-called fundamental particles are composite, but at the moment physicists widely accept the concept of intrinsic angular momentum. In any case, can this dilemma be resolved, or do we simply have to extend our framework of fundamental quantities?




Thursday, August 27, 2015

friction - Grip of the train wheels


How do the wheels of a train get sufficient grip on a metal track? Both surfaces are smooth (and not flexible), and that is fine when there is no inclination, but what about on an inclined track?




homework and exercises - Yang-Mills constraints and Poisson brackets


Let's have the constraints for Yang-Mills theory: $$ \varphi_{a} = \partial_{i}\pi^{i}_{a} - f_{abc}\pi^{b}_{i}A^{c}_{i}. $$ I have read the statement that $$ \tag 1 [\varphi_{a}(\mathbf x), \varphi_{b}(\mathbf y )] = f_{abc}\varphi^{c}(\mathbf x) \delta (\mathbf x - \mathbf y). $$ $(1)$ can be computed by using the canonical Poisson brackets $$ [A_{a}^{i}(\mathbf x ), \pi_{b}^{j}(\mathbf y )] = \delta_{ab}\delta^{ij}\delta (\mathbf x - \mathbf y ), \quad [\pi_{a}^{i}(\mathbf x ), \pi_{b}^{j}(\mathbf y )] = [A_{a}^{i}(\mathbf x ), A_{b}^{j}(\mathbf y )] = 0. $$ But I can't get $(1)$. The problem is in two summands, which have the form $$ \tag 2 -\varepsilon_{klm}[\partial_{i}\pi^{i}_{a}(\mathbf x ), A_{m}^{j}(\mathbf y )]\pi_{l}^{j}(\mathbf y) = -\varepsilon_{kla}\pi_{l}^{i}(\mathbf y)\partial_{i}^{\mathbf x}\delta (\mathbf x - \mathbf y) = -\varepsilon_{kla}\pi_{l}^{i}(\mathbf y)\partial_{i}^{\mathbf y}\delta(\mathbf x - \mathbf y) $$ To get $(1)$ I need to move the derivative away from the delta function, but (in my opinion) this can only be done if $(2)$ is also integrated over $\mathbf x, \mathbf y$. Where did I make the mistake? How do I get $(1)$?



Answer



It seems OP's question (v4) is related to the proper handling of derivatives of Dirac delta distributions. Reductions are performed with the help of (the appropriate 3D generalizations of) the following formulas:


$$\tag{A} \{\partial_x+\partial_y\}\delta (x-y)~=~ 0,$$


$$\tag{B} \{f(x)-f(y)\}~\delta (x-y)~=~ 0,$$


$$\tag{C} \{f(y)-f(x)\}~\partial_x \delta (x-y)~=~ \delta (x-y) ~f^{\prime}(x),$$


which may be derived e.g. using test functions. Pay attention to the non-zero right-hand side of eq. (C), which we suspect is the culprit of OP's question.
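For instance, eq. (C) follows by pairing both sides with a test function $g$ and integrating by parts:

$$ \int \mathrm{d}x\, \{f(y)-f(x)\}\,\partial_x\delta(x-y)\, g(x) ~=~ -\int \mathrm{d}x\, \delta(x-y)\,\partial_x\!\left[\{f(y)-f(x)\}\,g(x)\right] ~=~ f^{\prime}(y)\,g(y), $$

which is exactly what the right-hand side $\int \mathrm{d}x\,\delta(x-y)\,f^{\prime}(x)\,g(x)$ gives as well.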



vectors - Understanding Tensor-operations, covariance, contravariance, ... in the context of Special Relativity


I'm currently learning about special relativity, but I'm having a really hard time grasping the tensor operations.


Let's take the Minkowski scalar product of 2 four-vectors:


$$\pmb U . \pmb V = U^0V^0-U^1V^1-U^2V^2-U^3V^3$$


If we introduce the Minkowski metric $\eta_{\mu\nu}$, we can rewrite this as $$\pmb U . \pmb V = U^\mu V^\nu \eta_{\mu\nu}$$


The following is also defined: $$V_\mu = \eta_{\mu\nu}V^\nu$$


Now we can apparently find that $\eta_{\mu\nu}\eta^{\nu\lambda}=\delta^\lambda_\mu$


Here's where I'm already a bit lost. I understand that, depending on how something transforms, it's either covariant or contravariant and that that determines whether the index is written at the top or bottom. What I don't understand is how you find the $\eta^{\nu\lambda}$ in the first place and what the real significance is of the switched position of the indices. I also don't see why then for the $\delta$ we have an index at the top AND bottom.


What is the physical meaning of having the indices at the top or bottom?



Answer




I'll try to give a brief overview. Say you have a vector space $E$. Given a basis $\{\vec{e}_\mu\}$, a vector $\vec{V}$ can be decomposed as $\vec{V} = V^\mu \vec{e}_\mu$. The position of the indices is determined by the transformation law: if we switch to another basis $\{\vec{e}_{\mu'}\}$ given by $\vec{e}_\mu = \vec{e}_{\mu'} \Lambda^{\mu'}_{\;\mu}$, the components of $\vec{V}$ transform as $V^\mu = (\Lambda^{-1})^\mu_{\;\mu'} V^{\mu'}$. Because $V^\mu$ and $\vec{e}_\mu$ transform oppositely, the combination $\vec{V} = V^\mu\vec{e}_\mu$ is invariant.


Now consider the dual space $E^*$. This is defined as the space of linear functionals on $E$; that is, linear functions from $E$ to $\mathbb{R}$. This is a finite dimensional vector space of the same dimension as $E$. Given a basis $\{\vec{e}_\mu\}$ of $E$ there is a unique dual basis $\{\sigma^\mu\}$ of $E^*$ such that $\sigma^\mu (\vec{e}_\nu) = \delta^\mu_{\;\nu}$. Any functional (also called 1-form or covector or covariant vector) $\omega \in E^*$ can be written as $\omega = \omega_\mu \sigma^\mu$. Using this we get that $\omega(\vec{V}) = \omega_\mu V^\mu$. Again, the position of the indices is determined by the transformation; upper indices transform with one matrix, lower indices transform with the inverse. The notation is designed so that an upper index contracted with a lower index is invariant.


So far, "vectors" with indices up and down are completely different mathematical objects. But now say there is an inner product defined on $E$. Given a basis we can define the metric $\eta_{\mu\nu} = \vec{e}_\mu \cdot \vec{e}_\nu$, and you can show that this implies $\vec{U}\cdot\vec{V} = \eta_{\mu\nu}U^\mu V^\nu$. Its transformation law (which you can work out) is consistent with the placement of the indices, and you can work it out from the definition. Now given a vector $\vec{W}$ we can define a functional $\omega$ defined by $\omega(\vec{V}) = \vec{W}\cdot \vec{V}$, and this determines a one-to-one correspondence (more precisely, an isomorphism) between $E$ and $E^*$. The components of $\omega$ are $\omega_\mu = \eta_{\mu\nu} W^\nu$.


Because of this correspondence, we usually call functionals "vectors with lower indices" and write $W_\mu = \eta_{\mu\nu} W^\nu$. But remember, $W_\mu$ are the components of a linear functional while $W^\mu$ are the components of a vector. It is only through the metric that one can be converted to the other. As I said, the position of the indices is determined (or, if you prefer, consistent with) the transformation law. These transformation laws are a natural consequence of the vector space structure, not postulates.


Lastly, given a 1-form we can go back to a vector. Since we get $W_\mu$ by multiplying the n-tuple $W^\mu$ by the matrix $\eta_{\mu\nu}$, to go the other way we need the inverse of the metric. We denote this matrix by $\eta^{\mu\nu}$; $\eta_{\mu\nu}\eta^{\nu\lambda} = \delta^\lambda_{\;\mu}$ is just the statement that the matrices are inverses of each other. It can also be shown that the placement of the indices in $\eta^{\mu\nu}$ is consistent with its transformation law. Now we can write $W^\mu = \eta^{\mu\nu}W_\nu$. This is particularly easy in Minkowski space with inertial coordinates, because the metric is its own inverse.
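Here is a small numerical sketch of this index gymnastics for the Minkowski metric (signature $+---$):

    import numpy as np

    eta     = np.diag([1.0, -1.0, -1.0, -1.0])  # eta_{mu nu}
    eta_inv = np.linalg.inv(eta)                # eta^{mu nu}; here equal to eta

    print(eta_inv @ eta)      # identity: eta^{mu nu} eta_{nu lambda} = delta

    V = np.array([2.0, 1.0, 0.0, 0.0])   # components V^mu
    V_low = eta @ V                      # lowering: V_mu = eta_{mu nu} V^nu
    print(V_low)                         # [ 2, -1,  0,  0]
    print(eta_inv @ V_low)               # raising recovers the original V^mu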


This is just the tip of the iceberg, but it should be enough to get you started. I haven't even mentioned tensors or given the transformation laws in full; I recommend you consult a book for that. I learned all this from Frankel's The Geometry of Physics, but it might be a little too general for your purposes. Once you more or less understand tensors, you can answer your question about $\delta^\mu_{\;\nu}$ yourself. $\delta^\mu_{\;\nu}$ is supposed to be the identity tensor, equal in all coordinate systems. Try to work out what would happen if you had $\delta_{\mu\nu}$. How does this transform? How does $\delta^\mu_{\;\nu}$?


Would someone versed in relative motion consider it more accurate to say that a car slammed into, or collided with, a wall?


I've read that it's just as correct to think of A moving toward B (i.e. B doesn't move, but A does) as it is to think of B moving toward A (i.e. A doesn't move, but B does), or of the two moving toward each other.


If I imagine a universe in which only two objects exist, then that seems like it would be the case. However, in that universe there isn't really a point of reference for saying which object is moving.


Is there a useful and accurate way to think about these kinds of relations?


Thank you -Hal.



Answer



Any point may be chosen as a reference point, as speed is always relative to something else. Let's imagine I'm driving on the highway, and I'm going 10 mph relative to the car I'm overtaking. Unfortunately for me, the other car was moving at the speed limit (60 mph) and there was a speed camera, which gave me a ticket for going 70 mph relative to the camera.


If there were only 2 objects in a universe, the only things one could establish for sure are the speed of one object relative to the other (and vice versa) and the direction of movement. One can't establish that both (or neither) of the objects are moving, because one can't define relative to what they are moving. And for describing speed, one needs a reference.


What would a Helmholtz coil-like mass configuration look like? (produces a locally uniform gravity field)


A Helmholtz coil is an arrangement of two circular coils that produces a magnetic field in the center which is locally uniform in direction and magnitude, or at least nearly so. The configuration is optimal when the radius of each coil is equal to the separation between coils.


If the coils were replaced with massive rings, would this also produce a locally uniform gravitational field in both direction and magnitude? Or would a different diameter to separation be better?


Is there a well-recognized name for this configuration of masses?



Answer



The charge moving in a circle produces a magnetic dipole, and it is the proximity of the two magnetic dipoles that produces an approximately constant magnetic field in between the two coils.


However a ring of matter does not produce a gravitational dipole, unsurprisingly since there is no such thing as a negative mass so the analogous gravitational dipole doesn't exist. Indeed at the point exactly between the two rings of matter the gravitational field would be zero since the gravitational attractions to the two rings would be equal and opposite.
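A quick numerical sketch of the on-axis field of two equal rings (radius $a$, centres at $z=\pm d/2$, with $GM$ set to 1) makes the contrast with a Helmholtz pair explicit: the field is odd in $z$ and vanishes midway, rather than being uniform.

    import numpy as np

    a, d = 1.0, 1.0   # ring radius and separation, in Helmholtz-like proportions

    def g_ring(z, z0):
        # axial gravitational field at z from a ring centred at z0 (G*M = 1)
        return (z0 - z) / ((z0 - z) ** 2 + a ** 2) ** 1.5

    for z in [-0.2, -0.1, 0.0, 0.1, 0.2]:
        print(z, g_ring(z, d / 2) + g_ring(z, -d / 2))   # zero only at z = 0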



The electromagnetic analogy would be to consider the electric field created by two charged rings. The geometry of this field would be the same as the geometry of the gravitational field created by two massive rings.


conservation laws - Decay of elementary particle?


Many elementary particles decay; for instance a charm quark (according to Wikipedia) will decay into a strange quark (and, I assume, some other elementary particles, but I don't know what they are). Does this imply that on some fundamental level both charm quarks and strange quarks share composition?


My (probably naive) reasoning is that in chemistry, if I have substance A isolated in a vacuum and it decays into B and C, then A is almost 100% composed of some ratio of B and C. As an example: $$\rm CaCO_3\rightarrow CaO + CO_2$$ which can be used to demonstrate that calcium carbonate consists of the same substances that compose calcium oxide and carbon dioxide. Does this logic not apply on a quantum level?



Answer



In particle physics we have a list of "quantum numbers" that describe a particle. Different types of interactions may conserve, or may not conserve, different quantum numbers.


You give the example of decays which change quark flavor. Baryons and mesons (which we model as being made of quarks, even though individual quarks are confined) are assigned "flavor quantum numbers": the $D$ mesons have charm quantum number $C=\pm1$, the $K$ mesons have strangeness $S=\pm1$ but charm $C=0$, and so on. The strong and electromagnetic interactions do not change the flavor quantum numbers in a system, but the weak interaction does. So the stoichiometry skills that you learned in chemistry work for strong-interaction scattering, such as the strangeness-conserving production of hypernuclei, but not for weak decays which change those quantum numbers.



(In fact, you might say that it only makes sense for us to talk about flavor quantum numbers because the interaction that changes them is weak.)


A few of these quantum numbers are conserved by all known interactions. Those include



  • electric charge: the number of positive charges minus the number of negative charges

  • baryon number: the number of protons, neutrons, and hyperons, minus the number of antiprotons, antineutrons, and antihyperons. In the quark model, each quark has baryon number 1/3 (and antiquark $-$1/3), so you can use baryon stoichiometry to analyze reactions where mesons are produced.

  • lepton number: the number of electrons, muons, taus, and neutrinos, minus the number of their antiparticles.


When you do chemical stoichiometry, like in your calcium carbonate decomposition reaction, you're conserving electric charge, the number of electrons, and the number of protons and neutrons. Your baryon number conservation is constrained because there's no interaction at the energies chemists care about which allows protons to change into neutrons or vice-versa, so you have to conserve proton and neutron numbers separately. Furthermore there's no interaction, at the energies that chemists care about, which allows a nucleon to hop from one nucleus to another, so you have to separately conserve the number of calciums, the number of carbons, etc.


It's tempting and useful to take these conservation laws and use them to conclude that a calcium nucleus is "made of" twenty protons and twenty-ish neutrons. But that approach breaks down when you start to consider the flavor-changing weak interactions. The muon decays by the weak interaction into a neutrino, an antineutrino, and an electron; but there's evidence against any model where the muon "contains" those decay products in the way that we can say a nucleus "contains" nucleons.


Wednesday, August 26, 2015

general relativity - If the universe is 3D, how is space-time like a "fabric"?



I have been taught that space-time should be viewed as a fabric and that objects with a large gravitational influence indent that fabric. My question is: if the singularity of a black hole punctures space-time, how is this accomplished if the universe is 3D? Can an object move completely around the black hole in all directions? Would you be able to travel "below" a black hole?




soft question - Online physics collaboration tools


I.e. online discussion with your friends. A forum is probably overkill in this case.


Yet so far nothing beats direct communication. An important feature: the ability to archive discussions. We don't want to retell our story to people who just missed the "conference" (let the newcomers dig through the archives by themselves).



Answer



I should mention the wonderful userscripts written by Valery Alexeev which utilize MathJax and allow you to render LaTeX on any webpage, even if it does not support such rendering internally. Currently supported webpages include the arXiv and Gmail, though it should be easy to add your own to the list.


Used in combination with mathim.com and TiddlyWiki, we have a powerful platform for online TeX-enabled collaborations.


collision - Is momentum conserved in this situation?


Suppose we have object A, which has momentum $mv$. Then it elastically collides with a wall, so its momentum is now $-mv$. So how is momentum conserved here?




Answer



Object A has its momentum changed by $\Delta p_{A} = -2m_A v_A$. This means that the wall must gain momentum $\Delta p_{wall} = +2m_A v_A$ to conserve momentum. We can now calculate the change in velocity of the wall: $\Delta v_{wall} = \Delta p_{wall}/m_{wall} = +2 \frac{m_A}{m_{wall}}v_A$. The point here is that the mass of the wall is much greater than the mass of the object, $m_{wall} \gg m_A$, so we basically treat $\Delta v_{wall}$ as being zero. This is why there is an apparent breaking of conservation of momentum; but if you take into account the momentum of the wall, you'll see momentum is conserved.


electricity - How can there be a current and an electric field in an idealized wire with no voltage drop?


[circuit diagram: a battery connected to a resistor, with points a and b on the same connecting wire]


In an ideal circuit, how can there be a current between points a & b when there is no potential difference, and thus no electric field, between a & b? And if there is no current there, where does the current across the resistor come from? That would mean no charge is coming from the battery (?).



Answer



An electric field isn't necessarily required to sustain a current. Remember electric charge is accelerated by an electric field.


In the case of an ideal conductor, which is assumed to connect the source to the resistor, the current can be any value and the voltage across the conductor is identically zero.



This isn't a contradiction. Consider the motion of an object in the absence of friction. No force is required to sustain that motion (only to change it).


Analogously, in the absence of resistance in the ideal conductor, no electric field is required to sustain a current through.


If it helps, consider a non-ideal conductor with some total resistance R. The voltage across to sustain a current $I$ through is:


$$V = I\cdot R$$


Now, let $R$ go to zero and see that, for any value of $I$, the voltage across is zero.




I reluctantly add this because, after some discussion in the comments, I think there is some confusion over the meaning and purpose of ideal circuit theory.


When the OP opens the question with "In an ideal circuit", he sets the context as ideal circuit theory which is a well known, well understood, widely used branch of electrical engineering. Perhaps the OP isn't aware of this context. Perhaps some of those that answered and/or commented aren't aware. Thus, this addendum.


What needs to be made clear is that ideal circuits and circuit elements are used to model physical circuits and physical circuit elements. The ideal circuit elements are meant to correspond to mathematical terms in the equations for the solution of the circuit. They do not represent physically realizable electric circuit components.


Thus, any answer along the lines of "there are no ideal circuits" entirely misses the point.



And, any complaint along the lines of "there must be a voltage across because of Ohm's Law" entirely misses the point.


The confusion lies, I think, with the distinction between a physical schematic or, if you will, a "wiring diagram", and an ideal circuit schematic.


What's the difference?


The first represents the physical components and their connections. Useful for technicians, test engineers, etc. etc. but not for calculations and/or simulations.


For that, an ideal circuit schematic is used either explicitly or implicitly to translate the physical circuit into a mathematical model that can be used to calculate and simulate.


For example, here's the schematic symbol for an ideal transformer with the secondary connected to a load:


[figure: schematic symbol for an ideal transformer with a loaded secondary]


Unlike a real, i.e., physical transformer, the ideal transformer is lossless and has infinite bandwidth. How would one calculate or simulate a real transformer? By augmenting the ideal circuit schematic with additional ideal circuit elements that model the non-ideal characteristics.


For example, an ideal circuit model of a real transformer looks like this:


[figure: ideal circuit model of a real transformer]



Note that every circuit element in that diagram is ideal and thus, isn't physically realizable but the entire ideal circuit corresponds to a good mathematical model of a real transformer that can used for calculations and simulations.


To further drive this point home, let's consider the OP's schematic as a "wiring" diagram for a physical battery connected with wires to a physical resistor.


Since this is a DC circuit, a simple model of a battery is an ideal voltage source in series with a small-value ideal resistor. A simple model of a physical wire is a small-value ideal resistor. Thus:


[figure: the battery and connecting wires modeled with ideal circuit elements]


But, and again, each circuit element above is ideal including the wires that connect the ideal circuit elements.


And, again, for the ideal wire, there is no voltage across for any value of current through. This defines the ideal wire and that's really all that needs to be said about this.


Tuesday, August 25, 2015

quantum chromodynamics - Is the QCD potential really monotonic? How does it prevent the two quarks in a meson from annihilating?


The QCD potential is made of two terms: $-\frac{4}{3}\frac{\alpha_{s}}{r}$, which describes the short-distance behaviour, and $+kr$, which describes the long-distance behaviour.


Of course, $\alpha_{s}$ is a function of energy, so it is a function of radius.


But in measurements of the QCD potential, I always see that the plotted potential $V_{QCD}=f(r)$ is monotonic, so I would think that it only attracts the two quarks.
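For illustration, with typical (assumed) values $\alpha_s \approx 0.3$ and $k \approx 0.9~\mathrm{GeV/fm}$, the potential above is indeed monotonically increasing, since its derivative $\frac{4}{3}\alpha_s \hbar c/r^2 + k$ is positive everywhere:

    import numpy as np

    alpha_s, k, hbarc = 0.3, 0.9, 0.197    # assumed values; V in GeV, r in fm

    r = np.linspace(0.05, 1.5, 30)
    V = -(4 / 3) * alpha_s * hbarc / r + k * r
    print(np.all(np.diff(V) > 0))          # True: V rises monotonically with r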


So what in the QCD potential prevents the two quarks of the meson from annihilating? Remark: in the name "QCD potential" the Q stands for quantum, so this potential should normally take quantum effects into account. Does it?



Does the potential stop decreasing at some low radius value? At which value? Why have measurements not been made down to this low $r$ value?


Thank you for your help.




velocity - Functional derivative in Lagrangian field theory


The following functional derivatives hold: \begin{align} \frac{\delta q(t)}{\delta q(t')} ~=~ \delta(t-t') \end{align} and \begin{align} \frac{\delta \dot{q}(t)}{\delta q(t')} ~=~ \delta'(t-t') \end{align} where $'$ is $d/dt$.


Question: What is \begin{align} \frac{\delta q(t)}{\delta \dot{q}(t')}? \end{align} I'm asking this because in QFT, the lecturer defined the canonical momentum field conjugate to $\phi$ by \begin{align} \pi(x,t) ~:=~ \frac{\delta L(t)}{\delta \dot{\phi}(x,t)}, \end{align} where $L$ is the Lagrangian, a functional of the field: $L[\phi,\partial_\mu \phi] = \int d^d x\, \mathcal{L}(\phi,\partial_\mu \phi)$.


I know I should get \begin{align} \pi(x,t) ~=~ \frac{\partial \mathcal{L}(x,t)}{\partial \dot{\phi}(x,t)}. \end{align} (Note it's now a partial derivative of the Lagrangian density.) But doing it I get: \begin{align} \delta L = \int d^dx\, \frac{\partial \mathcal{L}}{\partial \phi} \delta \phi + \frac{\partial \mathcal{L}}{\partial \partial_\mu \phi} \delta\partial_\mu \phi. \end{align} So somehow we ignore the first term $\int d^dx\, \frac{\partial \mathcal{L}}{\partial \phi} \delta \phi$! Why is that?



It can't be that we are treating $\delta \phi$ and $\delta \dot{\phi}$ as independent, because if I were to take the functional derivative w.r.t. $\phi(x')$, I would have to move the dot from $\delta \dot{\phi}$ over to $\frac{\partial \mathcal{L}}{\partial \dot{\phi}}$, which would give me


$$\int d^dx (\frac{\partial \mathcal{L}}{\partial \phi} - \partial_\mu\frac{\partial \mathcal{L}}{\partial \partial_\mu \phi}) \delta \phi$$


i.e. the functional derivative gives the Euler-Lagrange equations.


So how do I take the functional derivative of a functional with respect to the derivative of a function?




astronomy - Would it matter if the Earth rotated clockwise?


In the Futurama episode "That Darn Katz!" they save the world by rotating the Earth backwards, saying it shouldn't matter which direction the Earth rotates. If the Earth rotated clockwise and remained in its current counter-clockwise orbit around the Sun, would it affect anything beyond the direction the constellations move across the sky?



Answer




Barring whatever fantastic energies would be required to stop the mass of the Earth from rotating and then changing the direction of the rotation, one of the major things I can see changing would be the expectations of weather patterns.


Part of what affects our weather is known as the Coriolis Effect.


[figure: illustration of the Coriolis effect]


While the reversed Coriolis effect would certainly affect the weather, how it would change the weather could only be guessed at. It is certain to be transformative, and if it were an abrupt change, possibly catastrophic, as deserts might move, food crops might be affected, and expected storm patterns would likely change.


Edit:


I neglected to think about the effect this would have on our structures, which would collapse due to shearing action unless the stopping action included a planetary stasis field: objects in motion tend to stay in motion, rotation speeds exceed 1000 mph, and there is the inconvenient law of conservation of momentum. I forgot to think about our oceans as well, since the Coriolis effect would change them, too.


I also neglected to consider the molten core of the Earth (pesky spinning, geo-magnetic iron). Stopping the rotation of the molten core might cause the magnetic field of the Earth to collapse, allowing the world to be bathed in cosmic radiation (killing every living thing).


My assumption was that when they stopped the rotation of the Earth and reset it, they would consider having a magnetic field a good thing and would be sure to restart the whole thing. Granted, the Earth of the year 3000 may have other advantages which might offset any changes from the altered rotation of the Earth. As a funny idea, it has potential, but the serious ramifications of such a feat boggle the imagination.


Monday, August 24, 2015

If hydrogen and helium are lighter than air, why won't liquid hydrogen and liquid helium defy gravity?


Title says it all. If hydrogen and helium are lighter than air, why won't liquid hydrogen and liquid helium defy gravity?



Answer



Gaseous hydrogen and helium are lighter than air. Hydrogen, helium and air are close approximations to ideal gases, and for an ideal gas the volume of one mole of gas at standard temperature and pressure is about 22.4 litres. That means the density of an ideal gas is proportional to its molecular weight, so hydrogen ($M_w = 2$) and helium ($M_w = 4$) are lighter than air (average $M_w = 28.8$).


However you're asking about liquid hydrogen and helium, and liquids are much denser than gases because the molecules are much more tightly packed. For example the density of liquid hydrogen is around $68 \,\mathrm{kg/m}^3$ compared to air at about $1.3 \,\mathrm{kg/m^3}$. That's why liquid hydrogen doesn't float in air.
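Putting numbers to this with the ideal gas law, $\rho = P M_w/(RT)$, taking $0\,^\circ$C and 1 atm:

    P, T, R = 101325.0, 273.15, 8.314      # Pa, K, J/(mol K)

    for name, M in [("H2", 2.0e-3), ("He", 4.0e-3), ("air", 28.8e-3)]:
        print(name, P * M / (R * T))       # ~0.089, ~0.178, ~1.29 kg/m^3

    print("liquid H2:", 68.0)              # kg/m^3: ~50x denser than air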



Incidentally, the density of liquid nitrogen (a close approximation to liquid air) is about $800 \,\mathrm{kg/m^3}$ so liquid hydrogen would float on liquid air.


electromagnetism - Magnetic Potential Energy


Does a charged rod or a shell have a magnetic potential energy when it is kept in a uniform magnetic field?


Also, can anyone quantitatively explain why a wire loop carrying current would tend to be circular?




gravity - Gravitational force between two masses



I get that there will be a gravitational force between objects attracted by gravity, but can there be a gravitational force between two objects resting on a horizontal plane? In other words, does an object experience gravitational force in all directions?



Answer



The question is confusing, but I think you might possibly mean the following: Are two objects resting on a horizontal table gravitationally attracted to each other? If that's your question, then the answer is yes. The gravitational attraction is very weak for "normal-sized" objects, though. You can use the rule $$ F={Gm_1m_2\over r^2} $$ to work it out. In this formula, $G=6.67\times 10^{-11}\,{\rm N\,m^2/kg^2}$, $m_1,m_2$ are the masses, and $r$ is the separation between their centers. (Strictly speaking this is only correct if the objects are spheres.)
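Plugging numbers into the rule above shows just how weak this attraction is; a quick sketch for two 1 kg spheres with centres 1 m apart:

    G = 6.67e-11                 # N m^2 / kg^2

    m1 = m2 = 1.0                # kg
    r = 1.0                      # m
    print(G * m1 * m2 / r**2)    # ~6.7e-11 N: far too small to notice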


Cavendish managed to measure this attraction back in the 1700s, in a truly amazing experiment.


terminology - Is Pauli-repulsion a "force" that is completely separate from the 4 fundamental forces?



You can have two electrons that experience each other's force by the exchange of photons (i.e. the electromagnetic force). Yet if you compress them really strongly, the electromagnetic interaction will no longer be the main force pushing them apart to balance the force that pushes them together. Instead, you get a repulsive force as a consequence of the Pauli exclusion principle. From what I have read so far, this seems like a "force" that is completely separate from the other well-known forces like the strong, electroweak and gravitational interactions (even though the graviton hasn't been observed so far).


So my question is: is Pauli-repulsion a phenomenon that has also not yet been explained in terms of any of the three other forces that we know of?


Note: does this apply to degeneracy pressure too? (That was explained to me as $\Delta p$ increasing because $\Delta x$ becomes smaller when the particles are confined to a smaller space (the Heisenberg uncertainty principle), as happens when stars collapse.)



Answer



The Pauli exclusion principle is not a repulsive force. It applies to fermions. It says that two electrons cannot occupy an energy state in a potential well with exactly the same quantum numbers; they have to differ by at least one quantum number. It is the Pauli exclusion principle that organizes the electron shells, filling them sequentially from low to higher energy levels in atoms; otherwise the electrons would all pile up at the lowest energy level. In the same way, the baryons fill the energy levels of the strong (nuclear) potential well, analogously to the periodic table of elements. It makes matter as we know it.



Yet if you compress them really strongly, the electromagnetic interaction will no longer be the main force pushing them apart to balance the force that pushes them towards each other. Instead, you get a a repulsive force as a consequence of the Pauli exclusion principle.



The above is a misunderstanding.


It is not a force, since at the particle level forces have carriers that are exchanged between particles so that momentum and energy change.



In your "compression" description there is a continuum and not a quantized state so the PEP does not apply. When one scatters an electron on an electron one can get very close until the exchange particle ( the photon in this case)


enter image description here transfers enough energy in the center of mass system to start creating other elementary particles. The process is accurately described by quantum electrodynamics.


supersymmetry - Do the Grassmann coordinates in the superfield formalism have any physical meaning?


In the superfield formalism we consider fields on a space which has four so-called bosonic coordinates $x^{\nu}$ and four so-called fermionic coordinates $\theta_1$, $\theta_2$, $\bar{\theta_1}$, $\bar{\theta_2}$.


$x^{\mu}$ are of course the physical space-time coordinates, but do the Grassmannian coordinates have an analogous interpretation, like some kind of extra dimension, or should I view them as a mere formal artifact?




Sunday, August 23, 2015

astronomy - When is the right ascension of the mean sun 0?


I understand that the right ascension of the mean sun changes (at least over a specified period) at a constant rate, but where is it zero? I had naively assumed that it would be zero at the most recent vernal equinox, but when I try to calculate the equation of time using this assumption and true sun positions, all my values are about 7.5 minutes larger than they should be.


When (at what date and UT time) is the right ascension of the mean sun 0? And why?



Answer



No, the right ascension of the mean Sun is NOT zero at the vernal equinox. It is in fact nearly identical to the ecliptic longitude of the mean Sun (the difference is due to UT vs ephemeris time), and this is defined such that it coincides with the ecliptic longitude of the apparent Sun when the Earth is at perihelion. So that should be the starting time to calculate the equation of (ephemeris) time.


Using an ephemerides site like this one, one can check that the Earth was at perihelion on Jan 2 2013, at 4:37 UT, and will return to perihelion on Jan 4 2014, at 11:58 UT. The difference between these is called an anomalistic year. The same site shows that on Jan 2 2013, at 4:37 UT, the Sun had an apparent right ascension


$$ \alpha_p = 18^\text{h}51^\text{m}56^\text{s}, $$


and the corresponding ecliptic longitude is


$$ \lambda_p = \tan^{-1}(\tan\alpha_p/\cos\varepsilon) = 18^\text{h}47^\text{m}46^\text{s}, $$ where $\varepsilon=23^\circ 26' 21.4''$ is the obliquity of the ecliptic. The equation of ephemeris time is then $$ \Delta t = M + \lambda_p - \alpha, $$ where $M = 2\pi(t-t_p)/t_Y$ is the mean anomaly, $t_p$ is the moment of perihelion and $t_Y$ is the length of the anomalistic year. The quantity $M + \lambda_p$ is the ecliptic longitude of the mean Sun. The apparent right ascension $\alpha$ can be calculated from $$ \begin{align} M &= E - e\sin E,\\ v &= 2\tan^{-1}\left[\sqrt{\frac{1+e}{1-e}}\tan\frac{E}{2}\right],\\ \lambda &= v + \lambda_p,\\ \alpha &= \tan^{-1}(\tan\lambda\cos\varepsilon), \end{align} $$ with $E,e,v,\lambda$ the eccentric anomaly, orbital eccentricity, true anomaly and apparent ecliptic longitude (see also wiki). This should give you an approximation of the equation of time, accurate to a few seconds. If you want a really detailed calculation, have a look at this paper.
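Here is a minimal numerical sketch of the recipe above (the values of $e$, $\varepsilon$, $\lambda_p$ and the year length below are illustrative; a serious computation should take them from an ephemeris):

    import numpy as np

    e     = 0.0167                 # orbital eccentricity (assumed)
    eps   = np.radians(23.4393)    # obliquity of the ecliptic (assumed)
    lam_p = np.radians(282.0)      # ecliptic longitude of perihelion (assumed)
    t_Y   = 365.2596               # anomalistic year, in days

    def equation_of_time(days_since_perihelion):
        M = 2 * np.pi * days_since_perihelion / t_Y          # mean anomaly
        E = M
        for _ in range(10):                                  # solve M = E - e sin E
            E = M + e * np.sin(E)
        v = 2 * np.arctan(np.sqrt((1 + e) / (1 - e)) * np.tan(E / 2))  # true anomaly
        lam = v + lam_p                                      # apparent ecliptic longitude
        alpha = np.arctan2(np.sin(lam) * np.cos(eps), np.cos(lam))     # apparent RA
        dt = (M + lam_p - alpha + np.pi) % (2 * np.pi) - np.pi         # wrap to [-pi, pi)
        return dt * 1440 / (2 * np.pi)                       # radians -> minutes

    print(equation_of_time(30))    # ~ -13.5 min one month after perihelion (early Feb)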


classical mechanics - When is the principle of virtual work valid?


The principle of virtual work says that forces of constraint don't do net work under virtual displacements that are consistent with constraints.


Goldstein says something I don't understand. He says that if sliding friction forces are present then the principle of virtual work fails. But then he proceeds to say that this doesn't really matter, because friction is a macroscopic phenomenon.


The only way I can interpret this is for the friction force to be a constraint force. But I thought constraint forces were pretty much always forces whose net effect is known but whose exact exerted force is difficult to know. For friction we know the exerted force, so why would you treat it as a constraint force?


I also don't understand why friction being a macroscopic phenomenon means it doesn't matter here. Is it because we are considering a system of particles?




What is the origin of the orbital angular momentum of electrons in atoms?


Consider the hydrogen 1s electron. We know that, in the quantum picture, the electron isn't orbiting or rotating at all; rather, we simply state that the electron is spread over all space, with the probability of finding it being maximum at a radial distance $R$ (the Bohr radius) away from the nucleus. This helps explain why the electron does not radiate EM radiation while in the atom.


But, with this understanding, I do not understand the source of the orbital angular momentum. Is it intrinsic, like the spin?



Answer



In an atom, the electron is not just spread out evenly and motionless around the nucleus. The electron is still moving, however, it is moving in a very special way such that the wave that it forms around the nucleus keeps the shape of the orbital. In some sense, the orbital is constantly rotating.


To understand precisely what is happening, let's calculate some observables. Consider the hydrogen $1s$ state, which is described by



\begin{equation} \psi _{ 1,0,0} = R _1 (r) Y _0 ^0 = R _{1,0} (r) \frac{1}{ \sqrt{ 4\pi } } \end{equation} where $ R _{1,0} \equiv 2 a _0 ^{ - 3/2} e ^{ - r / a _0 } $ is some function of only the distance from the origin and is irrelevant for this discussion, and the wavefunction is denoted by the quantum numbers $n$, $ \ell $, and $ m $: $ \psi _{ n , \ell , m } $. The expectation values of momentum in the angular directions are both zero, \begin{equation} \int \,d^3r \psi _{ 1,0,0 } ^\ast p _\phi \psi _{ 1,0,0 } = \int \,d^3r \psi _{ 1,0,0 } ^\ast p _\theta \psi _{ 1,0,0 } = 0 \end{equation} where $ p _\phi \equiv - i \frac{1}{ r } \frac{ \partial }{ \partial \phi } $ and $ p _\theta \equiv - i \frac{1}{ r \sin \theta } \frac{ \partial }{ \partial \theta } $.


However this is not the case for the $2P$ state with $\ell = 1, m = 1$, for example. Here we have, \begin{align} \left\langle p _\phi \right\rangle & = - i \int \,d^3r\, \frac{1}{ r}\,\psi _{ 2,1,1} ^\ast \frac{ \partial }{ \partial \phi }\psi _{ 2,1,1} \\ & = \left( - i \right) \left( i \right) \int d r\, r\, R _{2,1} (r) ^\ast R _{ 2,1} (r) \int d \phi\, \frac{ 3 }{ 8\pi } \int d \theta \sin ^3 \theta \\ & = \left( \int d r\, r\, R _{2,1} (r) ^\ast R _{2,1} (r) \right) \frac{ 3 }{ 8\pi }\, 2\pi\, \frac{ 4 }{ 3} \\ & \neq 0 \end{align} where the factor of $i$ comes from $\partial_\phi e^{i\phi} = i e^{i\phi}$, $|Y_1^1|^2 = \frac{3}{8\pi}\sin^2\theta$, and $ R _{2,1} (r) \equiv \frac{1}{ \sqrt{3} } ( 2 a _0 ) ^{ - 3/2} \frac{ r }{ a _0 } e ^{ - r / 2 a _0 } $ (again the particular form is irrelevant for our discussion, the important point being that the integral is not zero). Thus there is momentum moving in the $ \hat{\phi} $ direction. The electron is certainly spread out around the nucleus, but the distribution isn't staying still. It's constantly rotating around in space instead.
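One can verify the angular part of this computation symbolically; a minimal sketch (using sympy) checking that the $Y_1^1$ factor carries one unit of angular momentum about $z$:

    import sympy as sp

    theta, phi = sp.symbols('theta phi', real=True)
    # spherical harmonic Y_1^1: the angular part of the n=2, l=1, m=1 state
    Y11 = -sp.sqrt(sp.Rational(3, 8) / sp.pi) * sp.sin(theta) * sp.exp(sp.I * phi)

    # expectation of L_z = -i d/dphi over the sphere (area element sin(theta))
    integrand = sp.conjugate(Y11) * (-sp.I) * sp.diff(Y11, phi) * sp.sin(theta)
    Lz = sp.integrate(integrand, (theta, 0, sp.pi), (phi, 0, 2 * sp.pi))
    print(sp.simplify(Lz))   # -> 1, i.e. one unit of hbar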


Note that this is distinct from the spin of an electron which does not involve any movement in real space, but is instead an intrinsic property of a particle.


homework and exercises - Finding a 'vector potential' such that $\mathbf E = \nabla\times \mathbf C$ for a point charge


Supposedly, "Any divergence-free vector field can be expressed as the curl of some other divergence-free vector field" over a simply-connected domain.



So, what is one such vector potential which works for half of the Coulomb field?



To be clear, I want a vector potential whose curl equals the vector field $\mathbf R/|\mathbf R|^3$ for $z>0$ (for any $x$ and any $y$). $\mathbf R$ is the position vector $(x,y,z)$.



I know the scalar potential method is usually used instead of this, but am curious about how ugly a vector potential would look. If this gets answered, it should then be easy to answer this.




Saturday, August 22, 2015

fourier transform - What is the advantage of using exponential function over trigonometric function in analyzing waves?



A. P. French in his book Vibrations and Waves writes:



. . . Why should the exponential function be such an important contribution to the analysis of vibrations? The prime reason is the special property of the exponential function. . .its reappearance after every operation of differentiation or integration.



Now, what is the advantage of using the exponential function over trigonometric functions? They are directly linked by de Moivre's theorem.



Answer



I'm taking a bit of a gamble here from your age on your user page: we do have a 15 year old string theorist on this site (who is also from your homeland), so at the risk of seeming belittling, here is something I found really satisfying in relation to your question when I was about your age.


Differentiation of trigonometric functions is fiddly. When you differentiate a $\sin$ it becomes a $\cos$; when you differentiate a $\cos$ it becomes a $-\sin$. So you have to differentiate twice to get a trigonometric function back to its original form. The second derivative $\mathrm{d}_t^2 f(\omega\,t)$ is equivalent to multiplying by $-\omega^2$ (where $f$ is any linear combination of $\sin\,\omega\,t$ and $\cos\,\omega\,t$), but the first derivative in general changes the function to something linearly independent of it.


In contrast, differentiation of $\exp(i\,\omega t)$ is equivalent to a simple scaling, which means it can greatly simplify operations containing derivatives of all orders, not just even ones, as is the case with $\sin$ or $\cos$.
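For instance, a three-line sympy check (my sketch, not from the original answer) makes the contrast explicit:

```python
# Differentiating sin/cos changes the function; differentiating exp only rescales it.
import sympy as sp

t, w = sp.symbols('t omega', real=True)
print(sp.diff(sp.sin(w * t), t))         # omega*cos(omega*t): a different function
print(sp.diff(sp.cos(w * t), t))         # -omega*sin(omega*t): different again
print(sp.diff(sp.exp(sp.I * w * t), t))  # I*omega*exp(I*omega*t): same function, scaled
```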


This is all equivalent to saying that $\sin$ and $\cos$ fulfill second but not first order differential equations, whereas $e^{\pm i\,t}$, which are special linear combinations of $\sin$ and $\cos$, fulfill first order DEs. Here is a wonderful way of thinking about it that, for me, unified $e^{i\,t}$, $\sin t$ and $\cos t$ and justified the complex number field when I was 17. We begin by thinking of the kinematics of something moving on a path around the unit circle. A position vector on this circle is $\vec{r} = \left(\begin{array}{c}x\\y\end{array}\right)$ such that $\left<\vec{r},\,\vec{r}\right>=x^2+y^2=1$. We now find that the equation of motion of a point $\vec{r}(t) = \left(\begin{array}{c}x(t)\\y(t)\end{array}\right)$ constrained to the circle is defined by ${\rm d}_t \left<\vec{r},\,\vec{r}\right> = 0$, whence $\left<{\rm d}_t \vec{r}(t),\,\vec{r}(t)\right> = 0$, whence (with a little fiddling):



$${\rm d}_t \left(\begin{array}{c}x(t)\\y(t)\end{array}\right) = v(t) \left(\begin{array}{cc}0&-1\\1&0\end{array}\right) \left(\begin{array}{c}x(t)\\y(t)\end{array}\right)$$


where $v(t) = \sqrt{\dot{x}^2+\dot{y}^2}$. We take, for simplicity, $v(t) = 1$ so that we immediately have, by the universal convergence of the matrix exponential, when supposing that the path begins at the point $x=1,y=0$ at time $t=0$:


$$\vec{r}(t) = \exp\left[\left(\begin{array}{cc}0&-1\\1&0\end{array}\right)\,t\right]\left(\begin{array}{c}1\\0\end{array}\right)=\left(\mathrm{id} + i\,t + \frac{i^2\,t^2}{2!} + \frac{i^3\,t^3}{3!} + \cdots\right)\left(\begin{array}{c}1\\0\end{array}\right)$$


where I have defined


$$i= \left(\begin{array}{cc}0&-1\\1&0\end{array}\right)$$


You can play around with this one and check that this $i$ has all the properties that the "everyday" $i$ has; in particular $i^2=-1$. Indeed, you can go a little further and prove that "numbers" of the form:


$$a \,\mathrm{id} + b\,i = \left(\begin{array}{cc}a&-b\\b&a\end{array}\right)$$


add, subtract, multiply and divide exactly like the everyday complex numbers. The field of matrices of the form above is isomorphic to the complex number field. Mathematically it is therefore indistinguishable from the complex number field. Then we separate real parts (multipliers of the identity matrix) and imaginary parts (multipliers of the $i$ matrix) to define, in this field:


$$e^{i\,t} = \cos(t) + \sin{t}\,i$$


and this entity has the useful property that its derivative is a simple scale factor times the original function and we get:



$$\cos(t) = \mathrm{id} - \frac{t^2}{2!}\mathrm{id}+ \frac{t^4}{4!}\mathrm{id} + \cdots$$ $$\sin(t) = \mathrm{id}\, t - \frac{t^3}{3!}\mathrm{id}+ \frac{t^5}{5!}\mathrm{id} + \cdots$$


How do these match up with everyday $\sin$ and $\cos$? Well, work out the co-ordinates of the point beginning at position $(1,0)$ and you will see that they are indeed given by the same $\sin$ and $\cos$ as defined above.
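If you'd rather let a computer do the checking, here is a small numerical sketch of mine (not part of the original answer) using numpy and scipy:

```python
# Check numerically that J = [[0,-1],[1,0]] behaves like i, and that
# expm(J*t) acting on (1, 0) traces out (cos t, sin t).
import numpy as np
from scipy.linalg import expm

J = np.array([[0.0, -1.0], [1.0, 0.0]])
print(np.allclose(J @ J, -np.eye(2)))   # True: J^2 = -id

t = 0.7                                  # an arbitrary test angle
R = expm(J * t)                          # the matrix exponential of J*t
rotation = np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])
print(np.allclose(R, rotation))          # True: expm(J*t) is a rotation by t
print(R @ np.array([1.0, 0.0]))          # [cos(0.7), sin(0.7)]
```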


You may also enjoy the lecture by Richard Feynman:


"Algebra"; Lecture 22, Volume 1, The Feynman Lectures on Physics


electricity - Which dissipates more power, a small or big resistor?



I was talking to someone about trying to dissipate the most heat from a metal crucible (essentially just a resistor $R$). He argued that you wanted the resistor to have a high resistance because $P=I^2R$, so $P\propto R$.


But I thought back to messing around with resistors and batteries, where I remember that if I attached a much lower resistance to a battery, it would begin to heat up much faster than a bigger resistor. This also makes sense in the context of the power law, because $P=IV$ and $I=V/R$, so $P=V^2/R$. So $P\propto 1/R$ and a smaller resistor dissipates more heat.


The conclusion I came to is that, in the first scenario, the machine in question supplies current and that current is driven across the resistor, so it's appropriate to use $I^2 R$. However, in the second scenario, $V$ is set and $I$ is determined by $R$ from there.


Is that correct?



Answer



It depends on the internal resistance of the source.


First consider a "voltage supply". What does "voltage supply" even mean? A voltage supply is supposed to output a fixed voltage no matter what we connect it to. Is this even possible? Suppose we connect the two terminals of the voltage supply together through a piece of wire, i.e. a really low resistance load resistor $R_L$. The current output should be $V/R_L$. If this is a 9 Volt battery and my resistor is 0.1 $\Omega$ then I'd have a current of 90 Amps. A 9 Volt battery most certainly cannot output 90 Amps.


The way to model the limitation in the battery's maximum output current is to imagine that it is an ideal voltage supply in series with an internal resistance $R_i$. Now when we connect it to a load resistor $R_L$, the total current is $I=V/(R_i + R_L)$. If $R_L\rightarrow 0$ then the current goes to $I_{\text{max}}=V/R_i$, a finite value. In other words, the internal resistance sets a maximum output current.


Since the total current is $I=V/(R_i + R_L)$, the total power dissipated in the load resistor is


$$ P_L = I^2 R_L = V^2 \frac{R_L}{\left(R_i + R_L\right)^2}. $$



In order to find the value $R_L^*$ for which the power is maximized, differentiate with respect to $R_L$ and set that derivative equal to 0:


$$ \begin{align} \frac{dP_L}{dR_L} &= V^2 \frac{(R_i + R_L)^2 - 2 R_L(R_i + R_L)}{(R_i + R_L)^4}\\ 0 &= V^2 \frac{(R_i + R_L^*)^2 - 2 R_L^*(R_i + R_L^*)}{(R_i + R_L^*)^4} \\ 2 R_L^* &= R_i + R_L^* \\ R_L^* &= R_i \, . \end{align} $$


This is the result to remember: the power dissipated in the load is maximized when the load resistance is matched to the source's own internal resistance.
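Here is a quick numerical illustration of that result (my sketch; the 9 V source and 2 Ω internal resistance are made-up example values):

```python
# Sweep the load resistance and confirm the dissipated power peaks at R_L = R_i.
import numpy as np

V, R_i = 9.0, 2.0                       # example values: 9 V source, 2 ohm internal
R_L = np.linspace(0.01, 20.0, 2000)     # candidate load resistances, in ohms
P_L = V**2 * R_L / (R_i + R_L)**2       # power dissipated in the load

print(R_L[np.argmax(P_L)])              # ~2.0: maximum power where R_L = R_i
print(P_L.max(), V**2 / (4 * R_i))      # both ~10.125 W: the matched-load power
```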


Now, any circuit you would reasonably call a "voltage source" must have a low internal resistance compared to typical load resistances. If it didn't then the voltage across the load would depend on the load resistance, which would mean your source isn't doing a good job of being a fixed voltage source. So, because "voltage sources" have low output resistance, and because we showed that the power is maximized when the load resistance matches the source resistance, you will observe that the load gets hotter if it's low resistance. This is why you found that with batteries, which are designed to be voltage sources, the lower resistance loads got hotter.


Current sources are the other way around. They're designed for high internal resistance so you get a hotter load for a higher load resistance.


But in general, you get more power in the load if it's matched to the source.


soft question - Books that every physicist should read



Inspired by How should a physics student study mathematics? and in the same vein as Best books for mathematical background?, although in a more general fashion, I'd like to know if anyone is interested in doing a list of the books 'par excellence' for a physicist.


In spite of the frivolous nature of this post, I think it can be a valuable resource.


For example:




Course of Theoretical Physics - L.D. Landau, E.M. Lifshitz.


Mathematical Methods of Physics - Mathews, Walker. Very nice chapter on complex variables and evaluation of integrals, presenting must-know tricks to solve non-trivial problems. Also contains an introduction to groups and group representations with physical applications.



Mathematics of Classical and Quantum Physics - Byron and Fuller.


Topics in Algebra - I. N. Herstein. Extremely well written; introduces basic concepts in groups, rings, vector spaces, fields and linear transformations. Concepts are motivated and a nice set of problems accompanies each chapter (some of them quite challenging).


Partial Differential Equations in Physics - Arnold Sommerfeld. Although a bit dated, very clear explanations. First chapter on Fourier Series is enlightening. The ratio interesting information/page is extremely large. Contains discussions on types of differential equations, integral equations, boundary value problems, special functions and eigenfunctions.





Thursday, August 20, 2015

locality - Uncertainty principle - momentum so precise that uncertainty of position is outside light-cone?


Thought experiment: what happens if we measure momentum of a particle so precisely, that the uncertainty of its position becomes absurd?


For example, what if the uncertainty of the position exceeds 1 light year? We know for a fact that the particle wasn't a light year away from the measuring device, or else how could the momentum have been measured?


What if the uncertainty extended beyond the bounds of the universe?


Isn't there some point at which we know for certain the particle was closer than what the uncertainty allows for?



Answer



You assume that you can instantly measure the momentum to arbitrary precision, and this isn't the case.


Let's consider a plane light wave to keep things simple, and suppose you want to measure the momentum so precisely that the position uncertainty becomes exceedingly large. How precisely do we have to measure the momentum? Well the uncertainty principle tells us (discarding numerical factors since this is all very approximate):


$$ \Delta p \approx \frac{h}{\Delta x} $$


For a photon the momentum is $p = hf/c$, so this means we have to measure the frequency to a precision of:



$$ \frac{h}{c}\Delta f \approx \frac{h}{\Delta x} $$


or:


$$ \Delta f \approx \frac{c}{\Delta x} $$


Suppose we want our $\Delta x$ to be one light year; then our expression becomes:


$$ \Delta f \approx \frac{1}{1 \space \text{year}} $$


But to measure the frequency of a wave accurate to some precision $\Delta f$ takes a time of around $1/\Delta f$. This is because the frequency you measure is the wave frequency convolved with the Fourier transform of an envelope function, and in this case the width of the envelope function is the time you take to do the measurement.


So the time $T$ we take to measure our momentum to the required accuracy is:


$$ T \approx \frac{1}{\Delta f} \approx 1 \space \text{year} $$


The conclusion is that to measure the momentum precisely enough to make the position uncertainty 1 light year will take ... 1 year!
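The numbers are easy to verify (a sketch of mine, not from the original answer):

```python
# Back-of-the-envelope check: Delta x = 1 light year forces a ~1 year measurement.
import scipy.constants as const

c = const.c                     # speed of light, m/s
year = 365.25 * 24 * 3600       # seconds in a year
dx = c * year                   # one light year, in metres

df = c / dx                     # required frequency resolution, Hz
T = 1 / df                      # time needed to resolve that frequency
print(df)                       # ~3.2e-8 Hz
print(T / year)                 # ~1.0 year
```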


general relativity - Metric diameter of a ring singularity


In the Kerr metric the ring singularity is located at the coordinate radius $r=0$, which corresponds to a ring with the cartesian radius $R=a$.


So the center of the ring singularity in cartesian coordinates is at $r=-a, \ \theta=\pi/2$.


But the center in cartesian coordinates is also at $r=0, \ \theta=0$ (at $r=0$ all $\theta$ are in the equatorial plane, at least in Boyer Lindquist and also in Kerr Schild coordinates).


To calculate the physical diameter to see how much fits through the ring (in one example it is a tiger, in another one Alice & Bob), would I integrate


$$ (1) \ \ \ \ \theta=\pi/2 , \ \ d =2 \int_{-a}^0 \sqrt{|g_{rr}|} \ \ {\rm d}r = 2 \sqrt{(2-a) a}+4 \arcsin \left(\sqrt{\frac{a}{2}}\right)$$


in the equatorial plane, or is it rather


$$ (2) \ \ \ \ r=0 , \ \ d =\int_{-\pi/2}^{\pi/2} \sqrt{|g_{\theta \theta}|} \ \ {\rm d}\theta = 2a$$


since that should also cover the distance from one side of the ring to the opposite.


Approach $(2)$ gives exactly the diameter in cartesian coordinates, but I don't know if that's supposed to be so, or only a coincidence, since otherwise the metric distance is not necessarily the same as the coordinate or cartesian distance.



So which one is it, $(1)$ or $(2)$? Or is it done in a completely different way?


The coordinates I used are Kerr Schild coordinates, which should cover the inside with the relevant components


$g_{r r}=-\frac{2 r}{a^2 \cos ^2 \theta +r^2}-1 \ , \ \ g_{\theta \theta }= -r^2 - a^2 \cos^2 \theta$


I guess it is approach $(2)$ since no one can enter the ring singularity from the equatorial plane, but I'd like to hear a 2nd opinion on that



Answer



The reasoning can be conducted in Boyer-Lindquist coordinates.


The ring singularity has coordinates $r = 0,\ \theta = \pi/2$. The radius of the ring is described by $r = 0,\ \theta \in [0, \pi/2]$. That means we can integrate along a path defined by those coordinates with $dt = dr = d\phi = 0$:

$$ds^2 = g_{\theta \theta}\, d\theta^2$$

where

$$g_{\theta \theta} = r^2 + a^2 \cos^2 \theta = a^2 \cos^2 \theta \qquad (r = 0)$$

so that

$$R_{\rm ring} = a \int^{\pi/2}_0 \cos \theta \,d\theta = a$$

Note that $g_{\theta \theta}$ is positive, so the path is spacelike. Approach $(2)$ is correct.
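The integral is simple enough to check by hand, but for completeness here is a sympy verification (my sketch, not part of the original answer); since $\cos\theta \ge 0$ on $[0, \pi/2]$, $\sqrt{g_{\theta\theta}} = a\cos\theta$ at $r=0$:

```python
# Verify the proper ring radius along r = 0 in Boyer-Lindquist coordinates.
import sympy as sp

theta, a = sp.symbols('theta a', positive=True)
R_ring = sp.integrate(a * sp.cos(theta), (theta, 0, sp.pi / 2))
print(R_ring)   # a -> the proper diameter of the ring is 2a, matching approach (2)
```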


Approach $(1)$, on the other hand — integration over the radial coordinate $r$ — is not viable, since $r$ is identically zero, i.e. constant along the path, meaning $dr = 0$.


visible light - What determines the sharpness of a shadow?


What are the factors that affect the sharpness of a shadow?



I would think that the distance between the light source and the object, the distance between the object and the shadow, and the size of the light source would all have an effect.


How do they affect the shadow exactly? What is the full explanation?




Wednesday, August 19, 2015

quantum mechanics - Hamiltionian spectrum in unstable systems


I have heard that the eigenvalues of the Hamiltonian in an unstable system can contain an imaginary part corresponding to tunneling. Is that true? If so, I am very confused about it.


Let us consider the quantum mechanical case. Suppose a particle moves in a potential like the one shown in the figure (replace the field variable with $x$). [figure: a double-well potential with two minima of unequal depth]


Then the Hamiltonian is $$H=\frac{\hat{p}^2}{2m}+U(x).$$



We can simply obtain the eigenvalue of Hamiltonian by solving the eigen equation $$\left(\frac{\hat{p}^2}{2m}+U(x)-E\right)\psi(x)=0.$$ Now since every term inside the bracket is hermitian, how can the Hamiltonian get a complex eigenvalue?



Answer



The hamiltonian you're considering is not unstable, it is metastable.


If you place a particle at the bottom of the right-hand well at $+v$ it will stay there, even if it is not at the global minimum. Quantum mechanically, you get two closely spaced ground states, separated by ($h$ times) the frequency for the particule to tunnel back and forth.


An unstable system looks more like this:


[figure: a metastable potential — a well behind a finite barrier, beyond which the potential drops away, so a particle that tunnels out escapes for good]


Classically, this system is metastable, too, but if it makes it past the barrier it is completely gone. Quantum mechanically, the particle can 'sit' in the well, in the sense that the wavefunction can reflect back and forth and form a resonance. However, the resonance isn't perfect, because the right-hand-side wall is not perfectly reflective - since the particle can tunnel out - so eventually you will lose all your population, and you cannot have a stationary state.


In terms of how you phrased the Schrödinger equation, it is true that the hamiltonian's eigenvalue equation looks hermitian, $$\left(\frac{\hat{p}^2}{2m}+U(x)-E\right)\psi(x)=0,$$ and indeed it is; you will normally have a full orthogonal set of continuum eigenfunctions with real eigenvalues. In addition to them, you will also have a resonance, which is exactly the wave I talked about above: it is a state with the property that $$ |\psi(t)⟩=e^{-iE_0t/\hbar}e^{-\Gamma t/2}|\psi(0)⟩, $$ so it continuously loses population (but it is otherwise stationary), and it may also have uglier decay properties at infinity than the regular eigenstates. This is often thought of as a state with a complex eigenvalue, but you need some caution before you interpret it as an eigenvalue equation since the state might not fall in your normal Hilbert spaces.


The best place I know about how to formalize all of this is at




Decay theory of unstable quantum systems. L Fonda, G C Ghirardi and A Rimini. Rep. Prog. Phys. 41, 587 (1978).



which is well worth a read.


quantum mechanics - How does wave function collapse when I measure position?


Text books say that when you measure a particle's position, its wave function collapses to one eigenstate, which is a delta function at that location. I'm confused here.




  1. A measurement always has limited accuracy. Does the wave function collapse to exactly an eigenstate no matter what accuracy I have?




  2. When a particle is in an eigenstate of position, I can represent the state in the momentum basis and calculate its expectation value (average) of kinetic energy. This gives me infinity. Can a particle ever be in such a state?






Answer





  1. No, it doesn't collapse to an eigenstate. Collapse to an eigenstate is a picture of an ideal measurement. In general the final state will not be describable by a wave function, because it's not a pure state, it is instead a mixed state. See this question, which is about inexact measurements.




  2. A position eigenstate in the position representation is $\langle x_{}|x_0\rangle=\delta(x-x_0)$. This gives the following in the momentum representation: $\langle p_{}|x_0\rangle=\frac{1}{\sqrt{2\pi\hbar}}e^{-\frac{i}{\hbar}px_0}$. For this function the probability density is constant, thus the expectation value of momentum is undefined (one can't find the center of an infinite line). Similarly, for a free particle the expectation value of energy will also be undefined. This is because such a state is an abstraction, a useful mathematical tool. Of course, such states can't be prepared in a real experiment, but one can come very close to it, e.g. shoot an electron at a tiny slit and observe the state of the electron at the very exit of that slit.




As to finding the expectation value of energy in a position eigenstate: the first mistake you make in using the formula $\overline E=\langle x|\hat H|x\rangle$ is forgetting to normalize the eigenvector. But the position operator has a continuous spectrum, which makes all its eigenvectors unnormalizable (i.e. if you try to normalize them, you'll get the null vector, which is meaningless as a state). Thus you can't directly find the expectation value of energy in a position eigenstate.
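To see point 2 quantitatively, consider a normalized Gaussian wavepacket of width $\sigma$ as a stand-in for an "almost" position eigenstate; the following sympy sketch (my example, not from the original answer) shows the kinetic-energy expectation blowing up as $\sigma \to 0$:

```python
# <T> for a Gaussian wavepacket of width sigma: diverges in the delta-function limit.
import sympy as sp

x, sigma, hbar, m = sp.symbols('x sigma hbar m', positive=True)

psi = (2 * sp.pi * sigma**2) ** sp.Rational(-1, 4) * sp.exp(-x**2 / (4 * sigma**2))
T = sp.integrate(psi * (-hbar**2 / (2 * m)) * sp.diff(psi, x, 2), (x, -sp.oo, sp.oo))
print(sp.simplify(T))   # hbar**2/(8*m*sigma**2) -> infinity as sigma -> 0
```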



kinematics - Why does a ball bounce lower?


If a ball hits the floor after accelerating, why does it bounce lower? I mean, the energy is passed to the floor, so why does the floor give back less energy?




quantum gravity - What are the theoretical explanations behind the "no-boundary condition" in cosmology?


In his book "The Grand Design" on page 167-173 Stephen Hawking explains how one can get rid of the so called "time zero issue", meaning by this the very special initial state needed for inflation to work (?), by applying the so called "no-boundery condition" which says that time behaves as a closed space dimension for times small enough such that the universe is still governed by quantum gravity.


To see better what Hawking is talking or writing about, I'd like to understand this a bit more "technically" and in detail, so my questions are:


1) How can situations where QG kicks in lead to a transition of the time dimension to a space dimension? I mean, what is the physical reason for this "ambiguity" of the signature of the corresponding spacetime metric?


2) What are the triggering causes of the "symmetry breaking" between the 4 space dimensions when starting from the no-boundary condition such that one of the space dimensions becomes an open time dimension at the beginning of time?


3) Are there (quantum gravity) theories or models that can explain 1) and 2) and if so, how?



Since I've just read about these issues recently in the "Grand Design" (and nothing else) I'm probably confused and hope that somebody can enlighten me by explaining some details ...



Answer



I don't have "The Grand Design", so I hope my answer is relevant:


The question I assume he's addressing is "what is the amplitude for arriving at a universe with 3 metric $h_{ij}$ on a spatial 3 slice $\Sigma$?" This amplitude, which will be a complex wavefunction on the space of 3 metrics, is given by the path integral over all the past 4 geometries which induce $h_{ij}$ on the slice. Unfortunately, classical Lorentzian 4 geometries have a past cosmological singularity, which renders things a bit problematic.


This path integral can be computed in the Euclidean domain, i.e. over Euclidean metrics. Using the saddle point approximation, the path integral is dominated by Euclidean metrics which are classical solutions of the field equations. The Hartle-Hawking guess/hypothesis/ansatz was to take the path integral over non singular compact Euclidean metrics - ones which don't have a cosmological singularity in the past.


Hawking describes an example: $\Sigma$ is a 3 sphere of radius a. This can be taken as the boundary of one of the pieces of an $S^4$ which has been sliced into two. This has the metric of a 4-sphere of radius $\sqrt{3\over\Lambda}$ where $\Lambda$ is the cosmological constant. Computing the action, he notes that for $a<\sqrt{3\over\Lambda}$ the wavefunction is exponential, but for $a>=\sqrt{3\over\Lambda}$, it's oscillatory.


Now Lorentzian signature De-Sitter space, thought of as a Lorentzian FRW universe, can be Wick rotated to a Euclidean solution and, having done this, we arrive at an $S^4$ metric. So, if we wish, we can think of our $S^4$ slice as a Euclidean de Sitter space with boundary an $S^3$ of radius $\sqrt{3\over\Lambda}$. This motivates Hawking to talk of $\sqrt{3\over\Lambda}$ as the radius beyond which we can think of a Lorentz signature solution - we can think of smoothly joining an expanding Lorentzian de Sitter spacetime to this boundary. It happens at the same radius at which the wavefunction switches from exponential to oscillatory behaviour (as a function of the radius).


I guess your question is really how to interpret this situation. I've always thought of the Euclidean signature part as just a component in a mathematical model of a tunneling phenomenon-similar in some ways to the Yang Mills Euclidean instanton, which can cause fermion/antifermion transitions. In this case, the "tunneling" is from nothing to a De-Sitter universe. All we're talking about is a path integral contribution - a Euclidean 4 metric which happens to have an analytic continuation to a Lorentzian De-Sitter metric, so I don't see how there is a need for a mechanism to "cause" a signature change at a given point in time, because it's just a calculation, but I'm open to being educated by someone more knowledgeable.


Hawking describes this in a readable way in some more detail in chapter 5 of this book, which contains this De Sitter example.


quantum mechanics - Are Born-Oppenheimer energies analytic functions of nuclear positions?


I am looking for references to bibliography that explores the smoothness and analyticity of eigenvalues and eigenfunctions (and matrix elements in general) of a hamiltonian that depends on some parameter.


Consider, for example, the original setting of the Born-Oppenheimer approximation, to molecular dynamics, where the nuclear wavefunction is momentarily ignored and the hamiltonian becomes parametrized by the positions $\mathbf{R}_m$ of the nuclei, $$ \hat{H}(\mathbf{R}_m)=-\sum_{i=1}^N \frac{\hbar^2}{2m}\nabla^2_i+\sum_{i>j}\frac{e^2}{|\mathbf{r}_i-\mathbf{r}_j|}-\sum_{i,m}\frac{Z_m e^2}{|\mathbf{r}_i-\mathbf{R}_m|}. $$ The energies $E_n(\mathbf{R}_m)$ then become functions of all the nuclear coordinates and therefore make up the energy landscape which governs the nuclear wavefunctions' evolution. Since the original appearance of the $\mathbf{R}_m$ is in the analytic (well, meromorphic) functions $\frac{1}{|\mathbf{r}_i-\mathbf{R}_m|}$, I would expect further dependence on the $\mathbf{R}_m$ to be meromorphic (and would definitely expect physical meaning from poles and branch cuts).


What I am looking for is references to bibliography that will establish or disprove results of this type in as general a setting as possible. In particular, given a hamiltonian that depends on a set of parameters $z_1,\ldots,z_m$ in a suitably defined analytic way, I would like to see results establishing the analyticity of matrix elements (and thus, for example, of eigenvalues) involving the eigenvectors of the hamiltonian. I would also be interested in knowing what quantities can be extended analytically to the complex plane.


Any and all pointers will be deeply appreciated.



Answer



Suppose that for all $z$ in some open set $Z$ of complex numbers containing $z_0$, the Hamiltonian $H(z)$ is a compact perturbation of the self-adjoint $H(z_0)$ depending analytically on $z$. Then, for every simple eigenvalue $E_0$ of $H(z_0)$ and associated normalized eigenstate $\psi_0$, there exist a complex neighborhood $N$ of $z_0$ and unique functions $E(z)$ and $\psi(z)$, defined and analytic on $N$, such that $E(z_0)=E_0$, $\psi(z_0)=\psi_0$, and $H(z)\psi(z)=E(z)\psi(z)$ and $\psi_0^*\psi(z)=1$ for all $z\in N$.


The proof is essentially the inverse function theorem in a Banach space for the resulting nonlinear system, combined with the spectral theorem applied to $H(z_0)$. I guess you can find the relevant background results (if not a perturbation statement similar to the above) in the old book by Kato.



No assumption is needed that $H(z)$ is self-adjoint (indeed, self-adjointness cannot hold for all $z\in Z$ unless $H(z)$ is constant). Of course the eigenvalues will generally move into the complex domain if $z_0$ was real but $z$ is complex.


Weakening the assumptions will require stronger (so-called ''hard'') forms of the inverse function theorem, which generally take a lot of technicality to state and verify.
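A minimal concrete illustration (my example, not part of the answer above) is the classic $2\times 2$ family $H(z) = \begin{pmatrix}1 & z\\ z & -1\end{pmatrix}$: its eigenvalues $\pm\sqrt{1+z^2}$ are analytic near any real $z_0$, with branch points only at $z = \pm i$ — exactly where the two eigenvalues collide and the simple-eigenvalue hypothesis fails:

```python
# Eigenvalues of H(z) = [[1, z], [z, -1]]: analytic except at the crossings z = +-i.
import sympy as sp

z = sp.symbols('z')
H = sp.Matrix([[1, z], [z, -1]])
print(H.eigenvals())   # {sqrt(z**2 + 1): 1, -sqrt(z**2 + 1): 1}
```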


general relativity - How much does electromagnetic radiation contribute to dark matter?



EM radiation has a relativistic mass (see for instance, Does a photon exert a gravitational pull?), and therefore exerts a gravitational pull.


Intuitively it makes sense to include EM radiation itself in the galactic mass used to calculate rotation curves, but I've never actually seen that done before...


So: if we were to sum up all the electromagnetic radiation present in a galaxy, what fraction of the dark matter would it account for?



Answer



I found it surprisingly hard to find an authoritative statement of the density of the CMB. According to this article it's about $5 \times 10^{-34}\mathrm{g\ cm}^{-3}$, and since the critical density is somewhere in the range $10^{-30}$ to $10^{-29}\mathrm{g\ cm}^{-3}$ photons don't make a significant contribution.


Photons wouldn't be dark of course. If there were enough photons to make a significant contribution to the mass/energy of the universe we'd see them, just as we can see the CMB.


Response to comment: oops yes, I didn't read your question properly - sorry!


Anyhow, my comment that photons aren't dark matter still applies, but it's easy to make an estimate of the gravitational contribution of the EM radiation in e.g. the Solar System. The Sun converts about $4 \times 10^9$ kg of matter to energy every second. Since it weighs about $2 \times 10^{30}$ kg, it loses about $2 \times 10^{-19}$% of its mass every second.


If you're prepared to assume the photon density in the Solar System is dominated by the Sun's output (which seems plausible) and take the size of the Solar System to be Neptune's orbit, i.e. $1.5 \times 10^4$ light seconds then the mass/energy of photons in the Solar System is $3 \times 10^{-15}$% of the Sun's mass. So it's utterly insignificant.
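Reproducing that estimate numerically (a sketch; the inputs are the round figures used above, not precise data):

```python
# Photons "in flight" in the Solar System as a fraction of the Sun's mass.
mass_rate = 4e9         # kg of solar mass converted to radiation per second
M_sun = 2e30            # kg
crossing_time = 1.5e4   # light-travel time to Neptune's orbit, in seconds

photon_mass = mass_rate * crossing_time   # mass-equivalent of photons present at once
print(photon_mass)                        # ~6e13 kg
print(100 * photon_mass / M_sun)          # ~3e-15 percent of the Sun's mass
```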


The reason photons make a much lower contribution to the Solar System than to the universe as a whole is because mass is much more concentrated in the Solar System than in the universe as a whole.



The relationship between the energy and amplitude of a wave? Derivation?


From multiple online sources I read that $$E \propto A^2$$ but when I mentioned this in class, my teacher told me I was wrong and that it was directly proportional to amplitude instead.


As far as I know, every website I stumbled upon concerning this said that this is the case. My teacher has a Ph.D. and seems pretty experienced, so I don't see why he would make a mistake. Are there cases where $E \propto A$?


I also saw this derivation:


$$\int_0^A {F(x)dx} = \int_0^A {kx dx} = \frac{1}{2} kA^2$$



located here, does anyone mind explaining it in a bit more detail? I have a basic understanding of what an integral is but I'm not sure what the poster in the link was saying. I know there is a pretty good explanation here, but it seems way too advanced for me (gave up once I saw partial derivatives, but I see that they're basically the same later on). The first one I linked seems like something I could understand.



Answer



The poster from that link is saying that the work done by the spring (that's Hooke's law there: $F=-kx$) is equal to the potential energy (PE) at maximum displacement, $A$; this PE comes from the kinetic energy (KE) and is equal to the integral of Hooke's law over the range 0 (minimum displacement) to $A$ (maximum displacement).
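That integral takes one line to verify with sympy (my sketch, not part of the original answer):

```python
# Work done against the spring from 0 to A equals the PE at maximum displacement.
import sympy as sp

x, k, A = sp.symbols('x k A', positive=True)
print(sp.integrate(k * x, (x, 0, A)))   # A**2*k/2, i.e. (1/2) k A^2
```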




Anyway, your professor is wrong. The total energy in a wave comes from the sum of the the changes in potential energy, $$\Delta U=\frac12\left(\Delta m\right)\omega^2y^2,\tag{PE}$$ and in kinetic energy, $$\Delta K=\frac12\left(\Delta m\right)v^2\tag{KE}$$ where $\Delta m$ is the mass of a small element of the medium. If we assume that the density of the medium is uniform, then $\Delta m=\mu\Delta x$ where $\mu$ is the linear density. Thus the total energy is $$E=\Delta U+\Delta K=\frac12\omega^2y^2\,\mu\Delta x+\frac12v^2\,\mu \Delta x$$ As $y=A\sin\left(kx-\omega t\right)$ and $v=\partial y/\partial t=-A\omega\cos(kx-\omega t)$, the energy is proportional to the square of the amplitude: $$E\propto\omega^2 A^2$$
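You can confirm the last step symbolically; here is a sympy sketch of mine (not part of the original answer) showing that the energy density is independent of $(kx-\omega t)$ and scales as $A^2$:

```python
# PE + KE density (per unit mass) for y = A sin(kx - wt): proportional to A^2.
import sympy as sp

A, w, k, x, t = sp.symbols('A omega k x t', positive=True)

y = A * sp.sin(k * x - w * t)
v = sp.diff(y, t)                                   # -A*omega*cos(k*x - omega*t)
energy_density = sp.Rational(1, 2) * (w**2 * y**2 + v**2)
print(sp.simplify(energy_density))                  # A**2*omega**2/2
```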


Tuesday, August 18, 2015

thermodynamics - Book recommendation for nonequilibrium thermo/stat mech




I'm doing an undergrad research project that lies at the intersection of biology and nonequilibrium thermodynamics, but I'm starting to realize almost none of my equilibrium thermo/stat mech knowledge carries over.


What's a good book on this subject that covers both near-equilibrium (e.g. linear response) results, as well as more recent far-from-equilibrium (e.g. Jarzynski and Crooks equalities) results? Coverage of nonequilibrium steady states and simulation methods is a plus.


I'm going for a physical understanding, not complete mathematical rigor; I know real/complex analysis but not, say, probability theory or functional analysis.



Answer



You may want to check 'Elements of Non-Equilibrium Statistical Mechanics' by V. Balakrishnan. The book does not cover a great deal of ground but focuses on the basic probabilistic tools of the subject. It has plenty of appendices to help the reader not get distracted by technical details. Its most appealing attribute is that it makes the reader feel that the subject follows logically from known basic physics instead of making a leap into the subject by starting from Onsager relations and the likes.



classical mechanics - Moment of a force about a given axis (Torque) - Scalar or vectorial?

I am studying Statics and saw that: The moment of a force about a given axis (or Torque) is defined by the equation: $M_X = (\vec r \times \...