Friday, September 30, 2016

friction - Why does water take a "wet" path?


Allow me to explain what I mean... ((I am asking this question in the physics section in hopes of finding a "physics" argument))


I shall make it clear what I mean by demonstrating an experiment...


Hold your hand upright with your index finger pointing up towards the sky. Now drop about 2-3 drops of water right on top of your finger. The water drop should (theoretically) stay there. Now tilt your hand ever so slightly towards one side. The water drop rolls down your finger and along your wrist, leaving a trail towards the tilted side, right?


Now here comes my question. If you bring your finger and hand back to the initial upright state and place another water drop on the same finger, we see that the new drop takes the same trail! Why does this happen?


My argument: the water trail which was previously left behind reduced the friction along that path by filling the gaps between the molecules. Hence the new drop tends to slide more easily on that path and ends up travelling along it.


((This "effect" is also observed when tear drops keep rolling down in the same path. But that could just be because that path is the steepest.))




Answer



If you look closer at the edge of the water you'll see that it makes an angle with the surface. In the diagram below I've marked the angle $\theta$:


Contact angle


I've shown two cases, one where the angle is about 90° and one where it is much less than 90°. This angle is called the contact angle and it shows how well the water wets the surface. On a hydrophobic surface like plastic the contact angle will be high, like the upper diagram. On a hydrophilic surface the contact angle will be low, like the lower diagram.


If the contact angle is low, as in the lower diagram, the surface tension tends to pull the water outwards so it will spread more easily. So the water spreads more easily on a hydrophilic surface than it does with a hydrophobic surface.


The point of all this is that many organic substrates like skin interact with water. Dry skin is pretty hydrophobic so water has a high contact angle on it and a water drop won't spread out. But when you leave skin in contact with water it absorbs the water, swells and becomes more hydrophilic. This is why your skin wrinkles in the bath.


And this is why once water has formed a track on your skin it will tend to follow that same track. Where the water has flowed your skin has become more hydrophilic so water spreads more easily on it. Where your skin has remained dry it is still very hydrophobic so water won't spread on it.


If you tried your experiment but after the first drop you got a friend to dry your skin with a hair dryer you'd find successive drops of water would be less prone to follow the same track.


optics - How does a diverging lens in a Galilean telescope form an image at infinity when its object is at its focal plane?


This is a follow up question to Farcher's answer for the question - How does a Galilean telescope form an enlarged image even though it has a diverging lens?.


Let us consider the following ray diagram which shows a simple model of a Galilean telescope:


enter image description here


Image Source : Concepts of Physics by Dr. H.C.Verma, chapter "Optical Instruments", page 424, topic "Telescopes", sub topic "Galilean Telescope"


The following statement is from the book mentioned above:



If the telescope is set for normal adjustment, the final image $P''Q''$ is formed at infinity. Then $P'E=-f_e$ [where $f_e$ is the focal length of the eye piece] […]



$P'Q'$ is the image formed by the converging lens $L$. $P'Q'$ acts as an object for the diverging lens (eye piece). And it's said that for normal adjustment $P'Q'$ is at the focus of the bi-concave lens and the image $P''Q''$ forms at infinity.



In other words, the diverging lens forms an image at infinity for an object placed at its focal point. Isn't this a behaviour of a converging (convex) lens? This fact troubled me a lot, and I constructed the following ray diagram:


enter image description here


I've neglected the convex lens for the sake of simplicity.


It can be seen that the image $A'B'$ is formed at the midpoint of the focal length, on the same side as the object $AB$ (the image formed by the convex lens). I also verified this using the thin lens formula $\frac 1 v -\frac 1 u=\frac 1 f$. So for an object at the focal point of a diverging lens, the image forms midway between the object and the lens. But this contradicts what is explained in my textbook, and in the answer linked above regarding Galilean telescopes.


In short, my question is - How does a diverging lens in a Galilean telescope form an image at infinity when its object is at its focal plane?



Answer



The first (convex) lens produces an image that is to the right of the diverging lens i.e. this acts as a virtual object for the diverging lens. So the rays look like the diagram below. I've drawn a point object to keep the diagram simple. This could for example be an image of a distant star.


Virtual object


When we say there is a virtual object we mean that to the left of the lens the light rays are converging as if they were coming to a focus at the point where the virtual object is. I've drawn those converging rays as solid blue lines to the left of the lens and as dashed lines to the right of the lens to show how they would come to a focus at the object if the diverging lens were not there.


Now the diverging lens makes the rays diverge, which in this case means it reduces their convergence. With the diverging lens in place the light rays look like this:



Diverging lens


The diverging lens refracts the converging rays to be parallel i.e. as if they were coming from an object at infinity. This is how the diverging lens takes a virtual object at the focal point and produces a virtual image at infinity. The lens in your eye then brings the parallel rays to a focus on your retina so you can see the image.


Your diagram is actually perfectly correct, but it doesn't show what is happening in the telescope. Your diagram shows a virtual object at $u = f/2$ forming a real image at $v = f$, or by reversing the rays a real object at $u = f$ forming a virtual image at $v = f/2$.


We'll use the Cartesian convention, and to avoid possible sign confusions I'll write the focal length of the lens as $f = -F$, where $F$ is a positive constant. Then if we consider a virtual object a distance $F/2$ to the right of the lens that is at $u = +F/2$. Feeding this into the lens equation:


$$ \frac1u + \frac 1f = \frac1v $$


We get:


$$ \frac2F + \frac{-1}{F} = \frac1v $$


So $v = +F$ i.e. a real image at a distance $F$ to the right of the lens. If we reverse the rays we get a real object at a distance $F$ to the left of the lens, i.e. $u = -F$, so:


$$ \frac{-1}{F} + \frac{-1}{F} = \frac1v $$


Giving $v = -F/2$ i.e. a virtual image at a distance $F/2$ to the left of the lens. Neither of these match the situation in the telescope where we start with a virtual object a distance $F$ to the right of the lens i.e. $u = +F$. Putting this into our equation we get:



$$ \frac{+1}{F} + \frac{-1}{F} = \frac1v $$


so $1/v = 0$ i.e. the image is at infinity.
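For anyone who wants to check these three cases quickly, here is a minimal numerical sketch (my own, using the same Cartesian sign convention and writing the diverging lens's focal length as $f = -F$, with $F = 1$ for illustration):

```python
# Check of the three cases worked above: 1/u + 1/f = 1/v with f = -F.
F = 1.0
f = -F

def image_distance(u, f):
    """Return v from 1/u + 1/f = 1/v; float('inf') means the image is at infinity."""
    inv_v = 1.0 / u + 1.0 / f
    return float('inf') if inv_v == 0 else 1.0 / inv_v

print(image_distance(+F / 2, f))  # virtual object at +F/2 -> +F   (real image)
print(image_distance(-F, f))      # real object at -F      -> -F/2 (virtual image)
print(image_distance(+F, f))      # virtual object at +F   -> inf  (image at infinity)
```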


The reason your diagram gives the wrong result is that the direction of the light rays defines the positive direction. In your first diagram the light rays travel left to right, which is the usual convention, so positive is to the right. In your second diagram the (virtual) object is to the right of the diverging lens, so the (virtual) light rays have to be travelling towards the object i.e. left to right. You have drawn the rays travelling right to left, and that makes your object a real object not a virtual one.


Drawing the diagram for the (virtual) object at $u = +F$ and the (virtual) image at $v = -\infty$ is a bit hard, so to illustrate what the diagram looks like I've put the (virtual) object at $u = +\tfrac32 F$. This creates a (virtual) image at $v = -3F$:


Virtual object


Note that all light rays, real and virtual, travel left to right. If you move the (virtual) object leftwards towards $F$ the (virtual) image moves leftwards towards negative infinity.


Thursday, September 29, 2016

visual - Hidden Message: Of Shapes and Numbers


I created a puzzle quite a while back; unfortunately not many people wanted to play, so I'm posting it here instead.



The idea is that the image can be deciphered to uncover a hidden message. The hint is given:


enter image description here


Hints: “Like time, their true forms are revealed once they’re divided by their seats”



The first step is to figure out how to read the numbers; Joe has already figured out one part of the first step. The answer to the puzzle will be in text form.


Hope you guys enjoy my game. Any comments on the puzzle are greatly welcome as well.



Answer




As Joe commented, the numbers seem to get larger as you move around the circle's edge like a clock face. Then the "hint" says "divided by their seat". So if the numbers are getting divided by something, and what they're divided by gets bigger as you move around the circle, the seat must be something that gets bigger as you move around the circle.


Next, starting with the 18 and moving clockwise we can see that each number is divisible by its position in that circle. So that's probably what seat means: position.


Trying the same thing for the outermost rectangle, we start with the 6 and move clockwise and get the same divisibility property. Same for the triangle, starting with 20, and again for the inner rectangle, starting with 13. I'm going with points on the corners of a shape being members of that shape and points on the sides being ignored, because otherwise we don't get this property of being divisible by incrementing numbers.


So going through and dividing, the numbers become:



- Hexagon - 18, 1, 2, 2, 9, 20
- Outer rectangle - 6, 18, 15, 13
- Triangle - 20, 8, 5
- Inner rectangle - 13, 15, 15, 14




One of those comments mentioned converting the numbers by "a common method" and... well, they're all less than 26 now! Let's try turning them into letters alphabetically:



- Hexagon - RABBIT
- Outer rectangle - FROM
- Triangle - THE
- Inner - MOON



Solution:



Rabbit from the moon
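As a quick sanity check of the decoding, here is a short Python sketch (my own; the divided numbers are copied from the lists above):

```python
# Turn the divided numbers into letters alphabetically (1 -> A, 2 -> B, ...).
groups = {
    "hexagon":         [18, 1, 2, 2, 9, 20],
    "outer rectangle": [6, 18, 15, 13],
    "triangle":        [20, 8, 5],
    "inner rectangle": [13, 15, 15, 14],
}
for name, nums in groups.items():
    print(name, "".join(chr(ord("A") + n - 1) for n in nums))
# -> RABBIT, FROM, THE, MOON
```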




quantum field theory - What do we mean when we say 't Hooft proved that Standard Model is renormalizable?


This question is inspired by Why should the Standard Model be renormalizable? Ron Maimon says that the standard model is renormalizable, though there seem to be conflicting (?) answers. Is this because each answer differs in what "renormalizability" means? Furthermore, which paper is Ron Maimon referring to when he says "'t Hooft proved it"?


Also, in Renormalizability of standard model it says the standard model is renormalizable. So is the standard model renormalizable?



Answer



As far as I know, there is no rigorous proof that the standard model with spontaneous symmetry breaking is perturbatively renormalizable to all orders. I think the best available result in this direction is "Renormalization of Spontaneously Broken SU(2) Yang-Mills Theory with Flow Equations" by Christoph Kopper and Volkhard F. Müller. There is also the older article "Renormalization of the electroweak standard model to all orders" by Elisabeth Kraus.


special relativity - Is spacetime absolute?


As I understand it, Newton's laws imply that space is relative: the laws of physics are the same in all inertial frames, so there is no way, even in principle, to distinguish a frame that is truly at rest (absolute space). Hence the concept is physically meaningless, and the positions of (and distances between) physical objects, events, etc. are relative to the frame from which one observes them. Advancing to special relativity, by postulating that the speed of light (in vacuum) is constant in all frames of reference, we are forced to conclude that time is relative as well (if it weren't, then different observers would measure a different speed of light in vacuum). This leads to the result that space and time are no longer completely separate and independent of one another but are instead intertwined (an event that occurs at rest over some time period in one frame will occur over some spatial interval and a different time interval in another frame). Hence they should be considered as a single entity, called spacetime.


Sorry for the waffling so far, just want to check that my understanding is correct up to this point?!


My main question is, given that space and time are (individually) relative quantities, is spacetime itself relative, or can it be considered absolute (as after all, it is the mathematical space of all possible events and exists independently of the physical events that occur within it)?




quantum mechanics - What does QFT "get right" that QM "gets wrong"?


Or, why is QFT "better" than QM?


There may be many answers.


For one example of an answer to a parallel question, GR is better than Newtonian gravity (NG) because it gets the perihelion advance of Mercury right.


You could also say that GR predicts a better black hole than NG, but that's a harder sell.


For QFT versus QM, I've heard of the Lamb shift, but what else makes QFT superior?



Answer



The most important thing is that nonrelativistic QM (as it is formulated traditionally) cannot deal with changing particle number, because the position basis Hilbert space changes dimension as you increase the particle number. Quantum field theory allows particle number to change, and this is the main difference.


This is also important in cases where the number of particles is indefinite, like the fixed phase description of a BEC or a superfluid. In this case, the state of fixed phase macroscopic matter wave is a superposition of different numbers of particles. For this reason, nonrelativistic Schrodinger fields are useful in condensed matter physics, even in cases where the particle number is technically conserved, because the states one is interested in are in a classical limit where it is better to assume that the particle number is indefinite. This is exactly analogous to introducing a chemical potential and pretending that the particle number can fluctuate in statistical mechanics, in those cases where the particle number is exactly fixed. Or introducing a temperature in those cases where the energy is exactly fixed. It is still mathematically convenient to do so, and it does no harm in the thermodynamic limit. So it is convenient to use quantum fields to describe nonrelativistic situations where the particle number is fixed, but the behavior is best described by a classical collective wave motion.


homework and exercises - Solve the inverse function of the solution to a varying acceleration problem ODE


Suppose there are two positive charges $A$ and $B$, both with equal mass $m$ and the same charge quantity $q$. The initial distance between $A$ and $B$ is $R$, and the initial velocity of $B$ relative to $A$ is $0$.


enter image description here


Suppose the reference coordinate system uses $A$ as the origin and $AB$ as the $x$-axis; $r$ is the distance $B$ moves under the Coulomb force


$$F(t)=\frac{q^2}{4\pi \epsilon_0\left(R+r(t)\right)^2}$$


Then I obtained the 2nd order nonlinear ODE:


$$\frac{dr(t)}{dt}=v(t)$$ and $$\frac{d^2r(t)}{dt^2}=\frac{dv(t)}{dt}=a(t)=2\frac{F(t)}{m}=\frac{q^2}{2\pi m\epsilon_0\left(R+r(t)\right)^2}$$



with initial conditions $$r(0)=0,\quad r'(0)=0$$


Questions are:



  1. How to find the exact solution $r(t)$ of the ODE ?

  2. How to find the exact solution $t(r)$ which is the inverse function of $r(t)$?



Answer



First of all, define the variable $u(t)=R+r(t)$, so your equation can be put as:


\begin{equation} {d^2u\over dt^2}={k\over u^2}, \text{ where $k$ is a constant} \end{equation}


Then, multiply by $du/dt$ both sides of this equation, leaving:



\begin{eqnarray} &&{du\over dt}{d^2u\over dt^2}={k\over u^2}{du\over dt}\\ &\Rightarrow&{1\over 2}{d\over dt}\left({du\over dt}\right)^2=-{d\over dt}\left({k\over u}\right)\\ &\Rightarrow&{d\over dt}\left[\left({du\over dt}\right)^2+{2k\over u}\right]=0\\ \end{eqnarray}


Then you have:


\begin{equation} \left({du\over dt}\right)^2+{2k\over u}=C,\text{ where $C$ is a constant} \end{equation}


Finally:


\begin{equation} {du\over dt}=\pm\sqrt{C-{2k\over u}}, \end{equation} which you can solve easily (fifty points to gryffindor!!!)
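To complement the derivation, here is a minimal numerical sketch (my own; units are arbitrary, with $k = q^2/2\pi m\epsilon_0$ and $R$ both set to $1$). It integrates the equation of motion, checks the conserved quantity derived above, and compares against the closed form $t(u)$ obtained by integrating the $+$ root with $C = 2k/R$:

```python
# Numerical sketch for d^2u/dt^2 = k/u^2 with u(0) = R, u'(0) = 0 (arbitrary units).
import numpy as np
from scipy.integrate import solve_ivp

k, R = 1.0, 1.0

def rhs(t, y):
    u, v = y                         # u = R + r, v = du/dt
    return [v, k / u**2]

sol = solve_ivp(rhs, (0.0, 10.0), [R, 0.0], rtol=1e-10, atol=1e-12)
u, v = sol.y

# conserved quantity from the derivation above: (du/dt)^2 + 2k/u = C = 2k/R
print(np.ptp(v**2 + 2.0 * k / u))    # spread should be ~0

# integrating du/dt = sqrt(2k/R) * sqrt((u - R)/u) gives the inverse function
#   t(u) = sqrt(R/(2k)) * [ sqrt(u(u-R)) + R*ln((sqrt(u) + sqrt(u-R)) / sqrt(R)) ]
t_closed = np.sqrt(R / (2 * k)) * (np.sqrt(u * (u - R))
                                   + R * np.log((np.sqrt(u) + np.sqrt(u - R)) / np.sqrt(R)))
print(np.max(np.abs(t_closed - sol.t)))   # should be tiny
```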


experimental physics - What's jet multiplicity?


That's a basic thing, but, surprisingly, it is very difficult to find a concise explanation of:




  • What is the definition of jet multiplicity?

  • Why is it interesting?




Wednesday, September 28, 2016

angular momentum - Are the orientations of spin-axes and binary/planetary orbits random or is there any relationship with the Galactic plane?


In an answer to another question, a claim has been made that orbit/spin orientations are random (at least within our own Galaxy), except perhaps towards the Galactic centre.


I have dabbled in this area before ( http://adsabs.harvard.edu/abs/2010MNRAS.402.1380J ) and there is a recent paper by Rees & Zijlstra (2013) that suggests planetary nebulae in the Galactic bulge with bipolar morphologies have an alignment with the Galactic plane at 3.7 sigma significance. The hypothesis is that this is due to an alignment of the orbital axes of their parent binary systems, such that the orbital planes of the binary systems are perpendicular to the Galactic plane.


My question: Is there any other evidence for such alignments in other classes of source in the bulge or elsewhere in the Galaxy? Or are there any studies that comprehensively show that angular momentum vectors (spin or orbital) are randomly aligned?




Tuesday, September 27, 2016

electromagnetic radiation - Why does the speed of light $c$ have the value it does?



Why does light have the speed it does? Why is it not considerably faster or slower than it is? I can't imagine science, being what it is, not pursuing a rational scientific explanation for the speed of light. Just saying "it is what it is", or being satisfied with setting it to 1 ($c=1$), does not sound like science.




cosmology - Why is acceleration of expansion of space unexplained?



I didn't know how to better phrase the question so here we go.


It is said that the farther away from our galaxy we look, the faster the objects there seem to be moving away (accelerating).


My common sense tells me to expect this due to following reason.


If space is expanding at every point equally, then farther objects would naturally move away faster, as there are more points of space between you and a far object than between you and a near object.


So as we move farther from an object, the amount of space (say, the number of space intervals of length dx) increases; hence there is more space expanding between us and a far object than between us and a near one, and that is why a faraway object would appear to be accelerating.


But obviously it's not as simple as I said, or I may be completely wrong, as scientists seem to attribute this acceleration to a not-yet-understood phenomenon like dark energy.


So what am I missing here?



Answer



The FLRW energy equation for the motion of test masses in the universe is $$ \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G\rho}{3}. $$ The scale factor for space is $a$ and its time derivative is $\dot a$. I derived this from Newtonian dynamics. The mass density $\rho$ for the case of a quantum vacuum energy level is constant. I now replace the mass density by the energy density via $\rho\rightarrow\rho/c^2$. This leads to the dynamical equation $$ a(t) = a_0e^{t\sqrt{\frac{8\pi G\rho}{3c^2}}} $$ which is an exponential expansion. The cosmological constant is then $\Lambda=\frac{8\pi G\rho}{3c^2}$, which is determined by the quantum vacuum energy.


The question is then what is the density of mass-energy, or more exactly what is the nature of the quantum vacuum. The vacuum is filled with virtual quanta. A pendulum sitting vertically will be a motionless plumb bob in classical mechanics. However, quantum mechanics informs us there is an uncertainty in its position and momentum, $\Delta x\Delta p~\simeq~\hbar$. This means it can fluctuate about its vertical plumb position. The Hamiltonian for the harmonic oscillator transitions from the classical to the quantum form as $$ H = \frac{1}{2m}p^2 + \frac{k}{2}x^2~\rightarrow~\frac{\omega}{2}(a^\dagger a + aa^\dagger) $$ This can be put in the more standard form with the quantum commutator $[a,~a^\dagger]~=~1$, and so $aa^\dagger = a^\dagger a + 1$. The quantum Hamiltonian is then $$ H = \omega a^\dagger a + \frac{1}{2}\omega, $$ where this last term is a zero-point energy for the fluctuation of the harmonic oscillator. For a quantum field there is a big summation over Hamiltonians, one for each frequency: $$ H = \sum_n\omega_n \left(a_n^\dagger a_n + \frac{1}{2}\right) $$ This residual or zero-point energy is summed over modes up to the Planck energy, which leads to a huge vacuum energy. For most quantum work this term is eliminated, often by something called normal ordering. However, this huge energy can't be ignored in cosmology. The cosmological constant is $\Lambda \simeq 10^{-53}cm^{-2}$. The sum over these zero-point energies leads to an expected $\Lambda \simeq 10^{67}cm^{-2}$. The difference is $120$ orders of magnitude.
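To make that last comparison concrete, here is the standard rough estimate (my numbers, with all order-one factors dropped). Summing the zero-point energy $\frac{1}{2}\hbar\omega_k = \frac{1}{2}\hbar c k$ over modes up to the Planck wavenumber $k_P \sim 1/\ell_P \sim 10^{35}\,\mathrm{m}^{-1}$ gives an energy density

$$ \rho_{\text{vac}} \sim \int^{k_P}\frac{d^3k}{(2\pi)^3}\,\frac{\hbar c k}{2} = \frac{\hbar c\, k_P^4}{16\pi^2} \sim 10^{111}\ \mathrm{J/m^3}, $$

whereas the vacuum energy density inferred from the observed cosmological constant is of order $10^{-9}\ \mathrm{J/m^3}$; the ratio is the roughly $120$ orders of magnitude quoted above.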



There is a lot of work on this subject. Much of it focuses on gauge fluxes through wrapped D-branes. Some progress has been made in reducing the expected vacuum energy. However, so far it has not been possible to show the very small vacuum energy we know from cosmology exists. It also can't be zero! It is my thinking that this reflects our lack of understanding in quantum gravity. We know some things about quantum gravity, but we really do not have a complete theory of it. This vacuum energy that propels cosmological expansion, called dark energy, is then not fully understood.


potential - How does a battery work and create a field inside it?




There is an explanation of how a battery works that says that inside the battery (in the positive charge convention) there is a field and the battery does work on the positive charge against the field to move it from the negative terminal to the positive terminal and it becomes full of potential energy, ready to be used in a circuit.


But from what I understand from a battery (an excess of electrons on one side and a lack of electrons on the other side) there isn't a field inside the battery and the battery doesn't take a charge and move it from one side to the other so it gains potential energy.


What I need is a detailed chemical explanation of how a battery works that says more about how the battery's electric field is created.




quantum field theory - Understanding Weinberg's soft-photon theorem



The soft-photon theorem is the following statement due to Weinberg:



Consider an amplitude ${\cal M}$ involving some incoming and some outgoing particles. Now, consider the same amplitude with an additional soft-photon ($\omega_{\text{photon}} \to 0$) coupled to one of the particles. Call this amplitude ${\cal M}'$. The two amplitudes are related by $$ {\cal M}' = {\cal M} \frac{\eta q p \cdot \epsilon}{p \cdot p_\gamma - i \eta \varepsilon} $$ where $p$ is the momentum of the particle that the photon couples to, $\epsilon$ is the polarization of the photon and $p_\gamma$ is the momentum of the soft-photon. $\eta = 1$ for outgoing particles and $\eta = -1$ for incoming ones. Finally, $q$ is the charge of the particle.



The most striking thing about this theorem (to me) is the fact that the proportionality factor relating ${\cal M}$ and ${\cal M}'$ is independent of the type of particle that the photon couples to. It seems quite amazing to me that even though the coupling of photons to scalars, spinors, etc. takes such a different form, you still end up getting the same coupling above.


While I can show that this is indeed true for all the special cases of interest, my question is: Is there a general proof (or understanding) that describes this universal coupling of soft-photons?



Answer



The universality of the coupling of the photon to charged particles exhibited by this formula is only valid in the limit of ultrasoft photons. This is also known as the eikonal approximation, in which the photon couples only to the charge x velocity of the charged particle.
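To sketch where that universal factor comes from (my illustration, not part of the original answer; shown for scalar QED and an outgoing leg, with signs depending on conventions): attaching the soft photon to an external scalar of momentum $p$ inserts one extra vertex and one nearly on-shell propagator,

$$ \Big[-iq\,(2p+p_\gamma)\cdot\epsilon\Big]\,\frac{i}{(p+p_\gamma)^2 - m^2} \;\longrightarrow\; \frac{q\,p\cdot\epsilon}{p\cdot p_\gamma} \quad (p_\gamma \to 0), $$

which depends only on the charge and momentum of the leg. For a spin-$\frac{1}{2}$ leg the extra Dirac structure collapses to the same factor, because $\bar u(p)\not{\epsilon}(\not{p}+m) = 2\,p\cdot\epsilon\,\bar u(p)$ on shell; attachments to internal lines stay finite as $\omega_{\text{photon}}\to 0$ and are subleading relative to this pole.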


quantum mechanics - Amplitude of an electromagnetic wave containing a single photon


Given a light pulse in vacuum containing a single photon with an energy $E=h\nu$, what is the peak value of the electric / magnetic field?



Answer



The electric and magnetic fields of a single photon in a box are in fact very important and interesting. If you fix the size of the box, then yes, you can define the peak magnetic or electric field value. It's a concept that comes up in cavity QED, and was important to Serge Haroche's Nobel Prize this year (along with a number of other researchers). In that experiment, his group measured the electric field of single and a few photons trapped in a cavity. It's a very popular field right now.


However, to have a well defined energy, you need to specify a volume. In a laser, you find an electric field for a flux of photons ($n$ photons per unit time), but if you confine the photon to a box you get an electric field per photon. I'll show you the second calculation because it's more interesting.


Put a single photon in a box of volume $V$. The energy of the photon is $\hbar \omega$ (or $\frac{3}{2} \hbar \omega$, if you count the zero-point energy, but for this rough calculation let's ignore that). Now, equate that to the classical energy of a magnetic and electric field in a box of volume $V$:



$$\hbar \omega = \frac{\epsilon_0}{2} |\vec E|^2 V + \frac{1}{2\mu_0} |\vec B|^2 V = \frac{1}{2} \epsilon_0 E_\textrm{peak}^2 V$$


There is an extra factor of $1/2$ because, typically, we're considering a standing wave. Also, I've set the magnetic and electric contributions to be equal, as should be true for light in vacuum. An interesting and related problem is the effect of a single photon on a single atom contained in the box, where the energy of the atom is $U = -\vec d \cdot \vec E$. If this sounds interesting, look up strong coupling regime, vacuum Rabi splitting, or cavity quantum electrodynamics. Incidentally, the electric field fluctuations of photons (or lack thereof!) in vacuum are responsible for the Lamb shift, a small but measurable shift in energies of the hydrogen atom.
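For a feel of the numbers (my example values, not from the answer above): an optical photon with $\lambda = 600\,\mathrm{nm}$ confined to a $1\,\mathrm{cm^3}$ box gives a peak field of a fraction of a volt per metre:

```python
# Peak electric field of a single photon in a box, from the energy balance above.
import numpy as np

hbar = 1.054571817e-34    # J s
c    = 299792458.0        # m/s
eps0 = 8.8541878128e-12   # F/m

lam = 600e-9              # m, optical photon (assumed)
V   = 1e-6                # m^3, i.e. a 1 cm^3 box (assumed)

omega  = 2 * np.pi * c / lam
E_peak = np.sqrt(2 * hbar * omega / (eps0 * V))
print(f"E_peak ~ {E_peak:.2f} V/m")   # roughly 0.3 V/m for these numbers
```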


gravity - Two orbiting planets in perpendicular planes


Inspired by this question. Can a 3 body problem, starting with two planets orbiting a larger one (so massive it may be taken to stand still) in perpendicular planes, be stable?
Is there a known analytical solution to this 3 body problem?
Or a qualitative description of the evolution?
Will the two bodies approach coplanar orbits?





Bounty for this:


Is classical mechanics (that is, Newton's $1/r^2$ law) sufficient, for some large random collection of point-particles (with nonzero net angular momentum) in orbit around a larger one, to explain that they approach coplanar orbits?
Or is it necessary to add collisions, energy loss, etc.?




newtonian mechanics - What causes a soccer ball to follow a curved path?


Soccer players kick the ball along a straight line, yet you find it turning sideways, and not even in a single direction: in mid-air it changes the direction of its curve, i.e. it swings, as footballers say. Is there any physical explanation?



Maybe this video will better emphasize this phenomenon: http://www.youtube.com/watch?feature=player_detailpage&v=ZEv7QEFNVq0#t=265s



Answer



There's an interesting paper that discusses some of the physics/maths involved in the spiral path of a football. Here's Roberto Carlos' goal against France (discussed in comments to question).


enter image description here



This is the way we interpret a famous goal by the Brazilian player Roberto Carlos against France in 1997. This free kick was shot from a distance of $35~\text{m}$. Roberto Carlos strongly hits the ball
($U_0 = 38~\text{m}/\text{s}$) with an angle of about $12^\circ$ relative to the direction of the goal; due to the rotation
($\omega_0 \approx 88~\text{rad}/\text{s}$, a value difficult to extract from the movies, yet plausible), it sidestepped the wall, bent toward the goal, hit the goal post and entered (Fig. 11). The goalkeeper Fabien Barthez did not move: without rotation, the ball would have left the field $4~\text{m}$ away from the goal! If the trajectory had been simple circle and not a spiral, the ball would have been still $1~\text{m}$ away[.]



From: Football curves - Journal of Fluids and Structures
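For intuition about how the sidespin produces that bend, here is a minimal top-view numerical sketch (entirely my own; the drag and lift coefficients, ball parameters and spin sense are illustrative assumptions, not numbers from the paper quoted above):

```python
# 2-D (top view) sketch of a kicked, spinning ball with quadratic drag and a
# Magnus (lift) force. All coefficients below are assumptions for illustration.
import numpy as np

rho_air = 1.2                       # kg/m^3
m, R    = 0.43, 0.11                # kg, m (typical football, assumed)
A       = np.pi * R**2
Cd, Cl  = 0.25, 0.25                # assumed drag and lift coefficients

v0, angle = 38.0, np.radians(12.0)  # initial speed and angle from the quote above
r = np.array([0.0, 0.0])            # x toward the goal, y sideways
v = v0 * np.array([np.cos(angle), np.sin(angle)])
spin_sign = -1.0                    # sense of the sidespin (assumed)

dt, traj = 1e-3, [r.copy()]
while r[0] < 35.0 and len(traj) < 20000:       # 35 m free kick distance
    speed  = np.linalg.norm(v)
    drag   = -0.5 * rho_air * Cd * A * speed * v
    perp   = np.array([-v[1], v[0]]) / speed   # unit vector perpendicular to v
    magnus = 0.5 * rho_air * Cl * A * speed**2 * spin_sign * perp
    v += (drag + magnus) / m * dt
    r += v * dt
    traj.append(r.copy())

traj = np.array(traj)
print(f"max sideways excursion: {traj[:, 1].max():.1f} m, "
      f"final sideways offset: {traj[-1, 1]:.1f} m")
```

With the sidespin switched on, the path first drifts away from the straight line (sidestepping the wall) and then bends back, which is the qualitative behaviour described above.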





Update:


Another article "Gaining insight about the unpredictable movement of knuckleballs by dropping beads into water" describes the so-called Knuckleball shot popularised by Ronaldo.



"When a sphere is in a flow, there is a critical velocity at which the wake behind the sphere and the drag force acting on the ball sharply decrease," explained hydrodynamics graduate student Caroline Cohen of France's Ecole Polytechnique. The decrease in the size of the wake can lead to a sideways force that increases the ball’s deviation from a straight-line path. Fluid physicists call this the "drag crisis."


Thrown relatively slowly and with minimum spin, compared with that of Major League fastballs, the knuckleball confuses batters by changing direction in an apparently random fashion late in flight.


But knuckleballs aren't restricted to baseball. In cricket, Indian fast bowler Zaheer Khan has been known to use a knuckler for his slower ball. Volleyball players experience knuckling as a spiked ball closes on them. And, most important to the French scientists, top players such as Spanish league Real Madrid star Cristiano Ronaldo can kick a soccer ball in such a way that it zigzags unpredictably en route to an opposing goalkeeper.



If an atom is fully ionized by removing all electrons, is it still an atom?


This is a question about terminology. To me, it's clear that the nucleus of an atom is still an atom. But a comment by Willie Wong at Is nature symmetric between particles and antiparticles? raises this question. Similarly, arguments go either way at the physics forums question: "is it possible for an atom to have no electrons?" http://www.physicsforums.com/showthread.php?t=50333




quantum mechanics - What is the sum over the transition rates?


I was looking at the solution to an exercise, and I came across this expression:


$$P_{i\to f} = \sum \limits_{f} {2 \pi \over \hbar }\; |\langle f |\hat V | i \rangle |^2 \delta(E_{fi}-E),$$


where I have simplified this, but it is in effect the first-order Fermi golden rule expression for the transition rate, summed over the final states. I would like to know why this sum shows up and what it could mean in this context.



Answer



This is the total rate for a transition to occur. For instance, in a scattering experiment, there is a probability that an incoming particle will or will not be scattered by the target. If the quantity in which one is interested is the probability that a particle is scattered, then the quantity you have given above would give you the answer. This is because it gives the probability of scattering out of the initial state into all final states. This is not always the quantity one is interested in in a scattering experiment, however.



lorentz symmetry - Physical/geometrical interpretation of the determinant of a matrix


Consider a matrix transformation $\mathbf{T}$ that acts on a vector $\mathbf{x}$:


$$ \mathbf{x}' = \mathbf{T}\mathbf{x} $$.


Now, I know that one-dimensional linear transformations expand the length by a factor $|det(\mathbf{T})|$, two-dimensional linear transformations expand the area by a factor $|det(\mathbf{T})|$, and three-dimensional linear transformations expand the volume by a factor $|det(\mathbf{T})|$.


How can I see this mathematically though? I was thinking of calculating the norm $||\mathbf{x}'|| = ||\mathbf{T}||\cdot||\mathbf{x}||$ but don't know how to deal with $||\mathbf{T}||$


The physics of this question is: do all Lorentz transformations have determinant equal to 1? Because they preserve the space-time interval?


EDIT:



What I was asking was more about: how can I see that the determinant of a matrix carries information about the area/volume change of the system it is acting on? Mathematically, how can I show that the area spanned by two vectors is unchanged IFF the determinant of the transformation matrix is 1?



Answer




do all Lorentz transformations have determinant equal to 1? Because they preserve the space-time interval?



Yes, they do, but preservation of the interval is not the right way to think about the "why."


We have two facts:




  1. Lorentz transformations have Jacobian determinant 1.





  2. Lorentz transformations preserve the spacetime interval.




There is no close or simple relationship between these facts. In particular, 1 does not imply 2. For example, the Galilean transformations have Jacobian determinant 1, but they do not preserve the spacetime interval. Similarly, we could define a rotation in the $x-t$ plane, and it would have Jacobian determinant 1 but not preserve the spacetime interval.


It is true that 2 implies 1, but this is simply because 2 is sufficient to completely characterize the Lorentz transformations and therefore give all their properties indirectly.


For a general proof of the unit Jacobian that applies to both Galilean transformations and Lorentz transformations, see the answer to this question: Motivation for preservation of spacetime volume by Lorentz transformation?
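Regarding the geometric statement in the question's edit (an addition of mine, not part of the original answer): in two dimensions the area of the parallelogram spanned by $u$ and $v$ is $|\det[u\ v]|$, and since $\det(T[u\ v]) = \det(T)\det([u\ v])$, a linear map scales every such area by $|\det T|$; areas are therefore preserved for all $u, v$ exactly when $|\det T| = 1$. A quick numerical check:

```python
# Numerical check that a linear map scales parallelogram areas by |det T|.
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.normal(size=2), rng.normal(size=2)
T = rng.normal(size=(2, 2))

P = np.column_stack([u, v])                     # columns span the parallelogram
area_before = abs(np.linalg.det(P))
area_after  = abs(np.linalg.det(T @ P))
print(np.isclose(area_after, abs(np.linalg.det(T)) * area_before))   # True
```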


homework and exercises - Using 2D position, velocity, and mass to determine the parametric position equations for an orbiting body



I have a gravity-related question. I am programming an orbit simulator. I have everything up and running, but I would like to render the smaller body's orbital path (the larger body is fixed). To do this, I need the parametric equations for the body's position for any given time (i.e. I need an $x(t)$ and $y(t)$ function). I have access to each body's position, velocity, and mass. I have no other accessible variables.


P.S. Here is a link to what I have so far. The equations currently used to plot the orbit are $x(t)=x+v_xt$ and $y(t)=y+v_yt$.



Answer



It seems you've done the hard part already, which is to evolve the object's position as a function of time. And moreover, the simulation seems stable over a number of orbits. (But eventually things start to go wrong; you may want to look at an answer I wrote to What is the correct way of integrating in astronomy simulations?)


So my understanding is all you really need is the full orbit plotted. In that case, there's no need to make it a function of actual time (that's hard). Instead, we can fall back to Kepler's First Law, which says that the separation $r$ between the bodies obeys $$ r = \frac{r_\mathrm{max}(1-e)}{1+e\cos(\theta)}. $$ $r_\mathrm{max}$ is the maximum separation, which I believe is the initial separation in your particular simulation. The eccentricity $e$ (formula here) is given by $$ e^2 = 1 + \frac{2Er^4\dot{\theta}^2}{G^2M^2m}, $$ where $E$ is the orbital energy, $G$ is the gravitational constant, $M$ is the mass that is so heavy it essentially doesn't move, and $m$ is the lighter mass. This can more conveniently be rewritten $$ e^2 = 1 + 2 \left(\frac{rv}{GM}\right)^2 \left(\frac{v^2}{2} - \frac{GM}{r}\right), $$ where $v$ is the velocity of the moving particle.


Note that the convention is for $\theta$ to measure the angle from closest approach (i.e., $\theta = 0$ is the negative $y$-axis in your simulation). If you want $\theta$ to increase with time, then the (nonstandard) transformation to Cartesian coordinates is \begin{align} x & = -r \sin(\theta) \\ y & = -r \cos(\theta), \end{align} assuming $r = 0$ corresponds to the massive object. In any event, to plot the orbit all you need to do is calculate the constant $e$ at one point in the orbit, sample $\theta$ with as many points as you like, calculate the separations $r(\theta)$, and convert the $(r, \theta)$ pairs to $(x, y)$.
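Putting that recipe into code, here is a minimal sketch (function and variable names are mine, not from your program). It uses the specific angular momentum $h = x v_y - y v_x$, which reduces to the $rv$ combination above when the velocity is perpendicular to the radius (e.g. if you start at apoapsis), and it assumes a bound orbit with $e < 1$:

```python
# Sample the full Keplerian orbit as (x, y) points, heavy body fixed at the origin.
import numpy as np

def orbit_points(GM, x, y, vx, vy, n=720):
    r = np.hypot(x, y)
    h = x * vy - y * vx                      # specific angular momentum (= r^2 theta_dot)
    eps = (vx**2 + vy**2) / 2.0 - GM / r     # specific orbital energy
    e = np.sqrt(1.0 + 2.0 * eps * h**2 / GM**2)
    p = h**2 / GM                            # semi-latus rectum, = r_max * (1 - e)
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    rs = p / (1.0 + e * np.cos(theta))
    # the (nonstandard) conversion used above, with theta = 0 at closest approach
    return -rs * np.sin(theta), -rs * np.cos(theta)

# example: a body currently at apoapsis, in units where GM = 1
xs, ys = orbit_points(GM=1.0, x=0.0, y=1.0, vx=0.6, vy=0.0)
```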


Sunday, September 25, 2016

homework and exercises - Problem regarding Buoyancy forces and Archimedes principle


A thin-walled container of mass $m$ floats vertically at the separation surface of the two liquids of density $ρ_1$ and $ρ_2$ . The whole mass of the container is concentrated in the part of height $h$.


The question is to determine the immersion depth $h'$of the container in the lower liquid if the bottom of the container has a thickness $h$ and an area $S$ and if the container itself is filled with the liquid of density $ρ_1$.


enter image description here


I applied the fundamental principle of the statics on the container


$Balance$ $Sheet$:


The weight of the container ($-mg$)


The buoyancy force applied by the fluid with density $ρ_2$ ($ρ_2gSh$)



The weight of the fluid with density $ρ_1$ (-$ρ_2gSh'$)


I summed up the forces, set them to zero and found my $h'$,


but then realised that the fluid with density $ρ_1$ also applies a buoyancy force on the top of the container, and the fluid with density $ρ_2$ applies a buoyancy force on the other fluid.


Where's the problem?



Answer



You can completely neglect the part of the container that sticks out into the liquid with density $\rho_1$ because its weight and its buoyancy cancel each other out exactly.


The balance for the rest of the container becomes:


$$\text{weight}=\text{buoyancy}$$


Assume the container has constant cross-section $S$, then with $mg$ the weight of the container plus the weight of the material between the bottom and $h$ ($^*$ proof below the fold): $$h'S\rho_1g+mg=(h'+h)S\rho_2g$$
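Solving that balance for $h'$ as a quick symbolic check (the sympy code and symbol names are mine, not part of the original answer):

```python
# Rearranging  h'*S*rho_1*g + m*g = (h' + h)*S*rho_2*g  for h'.
import sympy as sp

h, hp, S, m, g, rho1, rho2 = sp.symbols("h h' S m g rho_1 rho_2")
balance = sp.Eq(hp * S * rho1 * g + m * g, (hp + h) * S * rho2 * g)
print(sp.simplify(sp.solve(balance, hp)[0]))
# equivalent to  h' = (m - h*S*rho_2) / (S*(rho_2 - rho_1))
```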





$^*$

Buoyancy


$$W=mg+(h'+h'')\rho_1Sg$$ $$B=h''\rho_1Sg+(h'+h)\rho_2Sg$$ $$W=B$$ $$mg+(h'+h'')\rho_1Sg=h''\rho_1Sg+(h'+h)\rho_2Sg$$ Now decompose $W$ and $B$ into part above and below the liquid separation line: $$W_1=h'\rho_1Sg+mg\tag{1}$$ $$B_1=(h+h')\rho_2Sg$$ $$W_2=h''\rho_1Sg$$ $$B_2=h''\rho_1Sg$$ $$\implies W_2=B_2$$ With: $$W=W_1+W_2$$ $$B=B_1+B_2$$ $$W=B$$ Or: $$W_1+W_2=B_1+B_2$$ $$\implies W_1=B_1$$ Which with substitution gives us the expression above the fold.


speed of light - Microsecond trading with neutrinos


The Spread Networks corporation recently laid down 825 miles of fiberoptic cable between New York and Chicago, stretching across Pennsylvania, for the sole purpose of reducing the latency of microsecond trades to less than 13.33 milliseconds (http://www.spreadnetworks.com/spread-networks/spread-solutions/dark-fiber-networks/overview). The lesson I would draw from this is that, in the near future, oil and natural gas extraction won't be the only lucrative use of ocean platforms.


So here's my question - since trades are occurring on the scale of tens to hundreds of microseconds, and considering the amount of money involved, can one use neutrino beams to beat the limitation due to having to travel the great-circle/orthodromic distance between two trading hubs? I'm imagining something similar to the MINOS detector (http://en.wikipedia.org/wiki/MINOS), where a neutrino beam was generated at Fermilab in Batavia, Illinois, and detected ~735 km away, ~700 meters under the ground in a Northern Minnesota mine.


Is it possible to beat a signal traveling at the speed of light across the great-circle distance from, say, New York to Tokyo, using a neutrino beam traveling through the earth? Is it realistic to talk about generating these beams on a microsecond time-scale?


Addendum - Over what distances can you reasonably detect a neutrino beam?



Answer



Whether or not neutrinos would be suitable for rapid trading, people have seriously considered their utility for signalling in difficult environments. I read an article a while back about a paper (published in Phys. Lett. B, but I can't access that from here) by Patrick Huber which proposed using neutrinos for through-the-earth communication to submarines as an alternative to ELF, where bandwidths become competitive. The submarine would pick up the modulated Cherenkov radiation produced by the generation of muons in seawater. This certainly allows faster-than-great-circle transmission times, but this is not the reason why the technique is attractive. The preprint indicates that the calculated antipode-to-antipode bandwidth is only 10 b/s, which doesn't seem ready for high-intensity trading.


Addendum:


If we consider a continuous lossless fibre optic link between antipodes around the equator, the transmission time will be about 99 ms, whilst the through-earth travel time (at $\approx{c}$) is 42 ms. Obviously this counts for nothing if you have high-latency equipment at either end.
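A quick back-of-envelope check of those two numbers (my own, assuming a mean Earth radius of 6371 km and a fibre refractive index of about 1.47):

```python
from math import pi

R_earth = 6371e3           # m (assumed mean radius)
c = 299792458.0            # m/s
n_fibre = 1.47             # typical silica fibre (assumption)

t_fibre   = pi * R_earth / (c / n_fibre)   # half the circumference at c/n
t_through = 2 * R_earth / c                # straight through the Earth at ~c
print(f"{t_fibre * 1e3:.1f} ms vs {t_through * 1e3:.1f} ms")   # ~98 ms vs ~42.5 ms
```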



Whilst the improvement in transmission time hardly seems worth it, it occurs to me that this would be a useful technique for communicating between either side of a huge, highly oblate structure such as a wide but thin disk-shaped megastructure, however that's veering in to sci-fi territory.


Saturday, September 24, 2016

scattering - Nature of Cooper pairs


Some people say it is a bound state, some say it is not. Which is more accurate? The problem is that I read in some books, including Ziman, that Cooper pairs are bound states, but my teacher says that this is not true and that Bardeen had to explain it many times, even to his peers. Now, I know that it has something to do with a resonance in the scattering cross section... but oversimplifications with hand-waving about phonons that mediate the interaction, creating an attractive force and a bound state like some kind of electron-electron molecule, just make me angry. I know it is phonon mediated, but it is not that simple, right?




condensed matter - Anyons only in 2+1 spacetime dimensions - better explanation


Regarding why anyons exist only in 2+1 spacetime dimensions (where exchange can produce an arbitrary phase), I read the reason that the paths for exchange in 3D are deformable into each other while in 2D they may not be deformed into each other. So what is next? How does one prove from this that an arbitrary phase can be generated on exchange? I understand that in 2D we have braids, but can I get a proof of the arbitrary exchange phase which specifically takes into account the topological equivalence of paths in 3D but not in 2D?



Answer




Please, let me first refer you to the original paper by Leinaas and Myrheim where the existence of anyonic statistics was first predicted before its actual discovery. All the ingredients of the understanding of the special properties of the two dimensional case already exist in this old paper, however, I'll try to cast it in a more modern terminology:


By passing to a polar center of mass coordinates, the configuration space of two scalar identical particles in $\mathbb{R}^n$ can be represented as:


$$\frac{\mathbb{R}^n \times \mathbb{R}^n}{\sim } = \mathbb{R}^n \times \mathbb{R}^+\times \frac{S^{n-1}}{\mathbb{Z}_2}$$


Where $\sim$ is the identical-particle equivalence relation, the $\mathbb{R}^n$ on the right hand side is the center of mass coordinates, $\mathbb{R}^+$ is the radial coordinate, and $\frac{S^{n-1}}{\mathbb{Z}_2}$ are the angular coordinates. The equivalence relation corresponding to the exchange acts on the angular coordinates as the identification of antipodal points on the sphere's surface, which accounts for the factor $\mathbb{Z}_2$ in the denominator. The center of mass and the radial coordinates are transparent to the exchange, thus we may concentrate on the angular coordinates, which are real projective spaces:


$$ RP^n = \frac{S^n}{\mathbb{Z}_2}$$


These spaces are not simply connected; their fundamental groups can be easily deduced from the contractibility of the closed loops on the sphere with the identification of the antipodal points, as indicated in the question:


$$\pi_1(RP^n ) = \mathbb{Z}_2, n>1$$


$$\pi_1(RP^1 ) = \mathbb{Z}$$


The usual way to quantize on a non-simply connected manifold is to construct wave functions on the universal covering with appropriate transformation properties upon the exchange:


$$ \psi(x^a) = e^{i\phi} \psi(x) $$



Where $x^a$ is the antipodal point to $x$. Of course, the transformation can be only a phase multiplication, because the choice of the point or its antipode should not change the expectation values of the observables since both points correspond to the same physical point on the configuration space.


Moreover, since $S^n$ is simply connected for $n>1$, the map $S^n \rightarrow RP^n$ is a covering map; therefore $RP^n = \frac{S^n}{\pi_1(RP^n)}$. The phase transformation must be a representation of the fundamental group, which for our angular configuration space is $\pi_1(RP^{n-1})$, with $n$ the spatial dimension. In our case, when $n>2$, this group is $\mathbb{Z}_2$, which has only two phase representations: the trivial representation and the alternating representation.


However, in the two dimensional case, we may choose a representation $\gamma$ in which the generator $\Gamma$ of $ \mathbb{Z}$ is represented by an arbitrary constant phase $e^{i\phi}$, then the representation of an arbitrary element in $ \mathbb{Z}$ will be:


$$\gamma(\Gamma^n) = e^{in\phi}$$


Now, please remember that in the geometric picture of the fundamental group, the generators are represented by closed loops, thus the representation $\gamma$ assigns a phase to every closed loop (such that the composition law of the loops is reflected in the multiplication of the phases). This assignment can be described as the existence of inequivalent quantizations corresponding to the set of maps:


$$ \mathrm{Hom}(\pi_1(RP^{n-1}), U(1))$$


When $n>2$, this set contains only two classes, (Bosons and fermions), while when $n=2$, this set is infinite.


In summary, there will be a set of possible quantizations, in each set the angular wave equation should be solved with a different condition upon the exchange of the antipodal points on the sphere. The solution will correspond to a different particle type.


Leinaas and Myrheim, solved the problem of the identical two dimensional isotropic harmonic oscillator with wave functions transforming with an arbitrary phase upon the antipodal identification and found that the spectrum depends on the transformation phase factor.


Now, according to the classification theorem of flat connections, every (projective) representation of the fundamental group can be associated to a flat connection $A$ such that:



$$\gamma(\Gamma) = e^{\int_{\Gamma} A_{\gamma}}$$


This connection is essential for the construction of the quantum operators corresponding to the classical functions on the phase space according to the Koopman-Van Hove representation:


$$ \hat{O}_{\gamma} = O - i \hbar X_{O} -A_{\gamma} (X_{O} )$$


Where $O$ is a function on the phase space, $ \hat{O}_{\gamma}$ is the corresponding quantum operator, and $X_O$ is the corresponding Hamiltonian vector field.


By the way, when the configuration space is $S^1$, this flat connection is the famous Aharonov-Bohm connection.


For further reading on the classification of inequivalent quantizations, please see the following two articles, by N.P. Landsman and by Doebner, Šťovíček, and Tolar. Actually, the first author (Landsman) has reservations (footnote 13 in the article) about the customary explanation using parallel transport and prefers the induced-representation reasoning that I tried to follow.


quantum field theory - Given expectation values for E and B, can you find an associated state?


When we quantize the electromagnetic field, we develop the concept of the field operator $A(\vec{r},t)$ and the simultaneous eigenstates of momentum and the free field Hamiltonian (i.e., each eigenstate is given by specifying the number of photons with momentum $k$ and polarization $\mu$). We can then construct the operators for the electric and magnetic fields, and we can calculate their expectation values for an arbitrary state.


Now, suppose the expectation value of the electric field is $E(\vec{r},t)$ and the magnetic field is $B(\vec{r},t)$. Assuming $E$ and $B$ obey Maxwell's Equations, can we construct a state that has these expectation values? Is it unique, or could there be multiple states with the same expectation value for $E(\vec{r},t)$ and $B(\vec{r},t)$?


What if the expectation values are time independent (i.e., static fields $E(\vec{r},t)=E(\vec{r},0)$ and $B(\vec{r},t)=B(\vec{r},0)$ for all $t$)?



Answer



Of course one can construct states with any desired expectation values. This is no different from constructing a state of a simple harmonic oscillator with the desired expectation value of position and momentum, repeated for each field mode. Just make a wavepacket centered on the desired position and with the right phases. Note however that you cannot make a simultaneous eigenstate of the electric and magnetic fields since they don't commute with each other, but you can fix the expectation values.


I can prove the non-uniqueness of such states just by giving an example: all states with a definite number of photons have zero expectation values $\langle \vec{E} \rangle = \langle \vec{B} \rangle = 0$. This follows because $\vec{E}$ and $\vec{B}$ are operators which change the photon number.


homework and exercises - The equivalent resistance between A and B. I need the answer with proper explanation



enter image description here


How are we supposed to do this question if each resistance is given as 1 ohm?



Answer



You have two basic options:





  1. Realize that this is actually just two resistors in series with three parallel resistors, and analyze it using the equivalent resistances of resistors in parallel and series, or




  2. Use Kirchoff's laws to derive the equivalent resistance.




If you choose option 1, I'll help you out by revealing the parallel resistors in this weirdly drawn network. I've labeled the resistors 1-5 from left to right to make the transformation easier to follow:


Starting arrangement:


Starting arrangement


Step 1 - rotate R2 counterclockwise by 90°:



Step 1 - rotate R2 counterclockwise by 90°


Step 2 - rotate R3 clockwise by 90°:


Step 2 - rotate R3 clockwise by 90°


Step 3 - rotate R4 counterclockwise by 90°:


Step 3 - rotate R4 counterclockwise by 90°


Voila - a familiar-looking network.
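Reading the equivalent resistance off that arrangement (assuming, as the relabelled figures indicate, that $R_1$ and $R_5$ end up in series with the parallel trio $R_2$, $R_3$, $R_4$, each of $1\ \Omega$):

$$ R_{AB} = R_1 + \left(\frac{1}{R_2}+\frac{1}{R_3}+\frac{1}{R_4}\right)^{-1} + R_5 = 1 + \frac{1}{3} + 1 = \frac{7}{3}\ \Omega \approx 2.33\ \Omega $$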


quantum mechanics - Hilbert space of harmonic oscillator: Countable vs uncountable?


Hm, this just occurred to me while answering another question:


If I write the Hamiltonian for a harmonic oscillator as $$H = \frac{p^2}{2m} + \frac{1}{2} m \omega^2 x^2$$ then wouldn't one set of possible basis states be the set of $\delta$-functions $\psi_{x_0}(x) = \delta(x-x_0)$, and that indicates that the size of my Hilbert space is that of $\mathbb{R}$.


On the other hand, we all know that we can diagonalize $H$ by going to the occupation number states, so the Hilbert space would be $|n\rangle, n \in \mathbb{N}_0$, so now the size of my Hilbert space is that of $\mathbb{N}$ instead.


Clearly they can't both be right, so where is the flaw in my logic?



Answer



This question was first posed to me by a friend of mine; for the subtleties involved, I love this question. :-)


The "flaw" is that you're not counting the dimension carefully. As other answers have pointed out, $\delta$-functions are not valid $\mathcal{L}^2(\mathbb{R})$ functions, so we need to define a kosher function which gives the $\delta$-function as a limiting case. This is essentially done by considering a UV regulator for your wavefunctions in space. Let's solve the simpler "particle in a box" problem, on a lattice. The answer for the harmonic oscillator will conceptually be the same. Also note that solving the problem on a lattice of size $a$ is akin to considering rectangular functions of width $a$ and unit area, as regulated versions of $\delta$-functions.


The UV-cutoff (smallest position resolution) becomes the maximum momentum possible for the particle's wavefunction and the IR-cutoff (roughly max width of wavefunction which will correspond to the size of the box) gives the minimum momentum quantum and hence the difference between levels. Now you can see that the number of states (finite) is the same in position basis and momentum basis. The subtlety is when you take the limit of small lattice spacing. Then the max momentum goes to "infinity" while the position resolution goes to zero -- but the position basis states are still countable!



In the harmonic oscillator case, the spread of the ground state (maximum spread) should correspond to the momentum quantum i.e. the lattice size in momentum space.


The physical intuition


When we consider the set of possible wavefunctions, we need them to be reasonably behaved, i.e. have only a countable number of discontinuities. In effect, such functions have only a countable number of degrees of freedom (unlike functions which can be very badly behaved). IIRC, this is one of the necessary conditions for a function to be Fourier transformable.


ADDENDUM: See @tparker's answer for a nice explanation with a slightly more rigorous treatment justifying why wavefunctions have only countable degrees of freedom.


Collapse in Quantum Field Theory?



I do not want answers telling me that wave-function collapse is not real and decoherence is the answer (I know the situation with that). I am asking a question purely on the assumption that wave-function collapse is the correct description. My question is: in normal quantum mechanics, a superposition of the state (position, momentum) exists until the wave function collapses (how or why it collapses is not important in this question). Now, in quantum field theory we can also have superposition, as in the superposition of Fock space states with different particle number. Can the superposition also collapse here under collapse interpretations of quantum mechanics/quantum field theory?


Layman's answers would be most appreciated...




Friday, September 23, 2016

quantum field theory - Is the world $C^\infty$?


While it is quite common to use piecewise constant functions to describe reality, e.g. the optical properties of a layered system, or the Fermi–Dirac statistics at (the impossible to reach exactly) $T=0$, I wonder if in a fundamental theory such as QFT some statement on the analyticity of the fields can be made/assumed/proven/refuted?



Take for example the Klein-Gordon equation. Even if you start with the non-analytical delta distribution, after an infinitesimal time the field will smooth out to an analytical function. (Yeah I know, that is one of the problems of relativistic quantum mechanics and why QFT is "truer", but intuitively I don't expect path integrals to do anything other than smoothing, either.)



Answer



This is a really interesting, but equally beguiling, question. Shock waves are discontinuities that develop in solutions of the wave equation. Phase transitions (of various kinds) are non-continuities in thermodynamics, but as thermodynamics is a study of aggregate quantitites, one might argue that the microscopic system is still continuous. However, the Higgs mechanism is an analogue in quantum field theory, where continuity is a bit harder to see. It is likely that smoothness is simply a convenience of our mathematical models (as was mentioned above). It is also possible that smooth spacetime is some aggregate/thermodynamic approximation of discrete microstates of spacetime -- but our model of that discrete system will probably be described by the mathematics of continuous functions.


(p.s.: Nonanalyticity is somehow akin to free will: our future is not determined by all time-derivatives of our past!)


How is it possible that consciousness-causes-collapse interpretations of QM are not falsified by the Quantum Zeno effect?



As I understand it, consciousness-causes-collapse (CCC) theories, although not very popular among physicists, have not been falsified (e.g. https://arxiv.org/abs/1609.00614).



This confuses me because my understanding of wavefunction collapse is that, at least some of the time, it must happen without a conscious observer present. The quantum Zeno effect, for instance, involves frequently "measuring" a radioactive element and thus preventing it from decaying. Each of the "measurements" in a quantum Zeno experiment is done by the measurement device (pulses of UV light).


While it is the case that no observer will become aware of these measurements until someone is conscious of them, it is still the case that a whole succession of collapses has occurred between conscious observations. This succession of collapses has had a measurable effect on the time evolution of the radioactive element, and the system would look different depending on whether they occurred or not.


My question is then: how do you maintain CCC theories in light of this? Doesn't this mean a single conscious measurement must be able to collapse a whole chain of multiple dependent collapse events far into the past? Or can it still be maintained as a single collapse at the moment of "measurement"? Or am I completely off base?




Thermodynamics: heat transfer


I hope this question isn't too simplistic, but I've been in a discussion with someone who claims that no energy is transferred from a cool object to a warm one (either by radiation or conduction) because the 2nd Law of Thermodynamics states that heat always flows from hot to cold.


My intuitive understanding is that energy actually flows in both directions, but the net flow conforms to the 2nd Law as more goes from hot to cold than vice-versa. In effect, the 2nd Law is an emergent property of many interactions. Is this the case?


EDIT: I should add that I've researched as much as possible, but no sources seem to explicitly state anything but the standard 2nd Law statement. If the answer can be found in a resource I can be pointed towards, I'd be hugely grateful and apologise for wasting anyone's time.



Answer



To see that you are correct, look to radiative heat transfer amongst black bodies. Consider two black bodies, call them bodies A and B, arranged as flat plates facing one another. Suppose body A has a temperature $T_A$ and body B has a temperature $T_B$. Both plates are black bodies, so each radiates energy at a rate given by the Stefan-Boltzmann law: $dE/dt = A\sigma T^4$, where $A$ and $T$ are the surface area and temperature of the body in question.



Since a black body absorbs all incoming radiation, body A transfers energy to body B even if $T_A < T_B$, and body B likewise transfers energy to body A. Both gross flows are nonzero; it is only the net flow that must run from the hotter plate to the colder one.
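To make the two-way flow concrete, here is a minimal numerical sketch (my own illustration, not part of the original answer; the plate area and temperatures are made-up values):

SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(area_m2, temp_K):
    # Stefan-Boltzmann law: total power radiated by a black body
    return area_m2 * SIGMA * temp_K**4

A = 1.0                    # plate area, m^2 (hypothetical)
T_A, T_B = 300.0, 400.0    # plate temperatures, K (hypothetical)

P_A_to_B = radiated_power(A, T_A)    # gross flow: energy leaving A, absorbed by B
P_B_to_A = radiated_power(A, T_B)    # gross flow: energy leaving B, absorbed by A
net_to_A = P_B_to_A - P_A_to_B       # net flow, from the hotter plate to the colder one

print(P_A_to_B, P_B_to_A, net_to_A)

Both gross flows come out nonzero even though $T_A < T_B$; only the net flow is constrained by the 2nd Law to go from hot to cold, which is exactly the point raised in the question.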

event horizon - Can you have a giraffe shaped black hole?



My reasoning being: let's say a rock is approaching a black hole. It would essentially stop in time for an outside observer once past the event horizon, but since it would also bring along some new mass of its own, some of it should stop before reaching the event horizon, becoming the new edge of the event horizon.


Assuming that is true, if I were to feed a black hole from a single direction, it should start growing a spike, right? Going further with the same technique, you should be able to shape the black hole into a giraffe if you wanted to. Are there any flaws in this future business plan of mine?



Answer



You are quite correct that if we drop an object into a black hole and watch it fall then we'll see it freeze at the event horizon. But this freeze occurs very close to the event horizon. In fact so close that it's barely distinguishable from the horizon. So dropping things into the black hole creates only a tiny perturbation and we couldn't use this trick to build any shape significantly different from a sphere.


If we consider the simplest case of a non-rotating black hole and drop an object from a long way away then the velocity of the infalling object is given by:


$$ v = \left(1 - \frac{r_s}{r}\right)\sqrt{\frac{r_s}{r}}c \tag{1} $$


I've discussed this before, in Will an object always fall at an infinite speed in a black hole?, and borrowing the graph from that post the velocity as a function of distance looks like:


Velocity


Note that:





  1. the infall velocity peaks at about three times the event horizon radius




  2. the peak velocity is about $0.385c$ or about $115,000$ km/sec
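As a quick check of both numbers (my own working, not part of the original answer), maximising equation (1) over $r$ gives

$$\frac{d}{dr}\left[\left(1 - \frac{r_s}{r}\right)\sqrt{\frac{r_s}{r}}\right] = 0 \quad\Longrightarrow\quad r = 3r_s, \qquad v_{\text{max}} = \left(1-\tfrac{1}{3}\right)\sqrt{\tfrac{1}{3}}\,c = \frac{2}{3\sqrt{3}}\,c \approx 0.385\,c \approx 115{,}000\ \text{km/s}.$$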




Integrating equation (1) to get the distance as a function of time is rather messy, but we can do a quick back of the envelope calculation. If we take a Solar mass black hole then the event horizon is at about $3$ km so the peak velocity is at $9$ km. That means the infalling object is only $9$ km away and moving inwards at $115,000$ km/sec, so you'll appreciate that it's going to cross most of the $6$ km towards the event horizon pretty quickly. In fact if I do a quick and dirty numerical integration I get the following graph for time taken as a function of distance:


Distance time


The infalling object gets to within 1% of the event horizon radius in less than a millisecond.
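For anyone who wants to reproduce that quick and dirty numerical integration, here is a minimal sketch (my own script, not the author's; it accumulates the coordinate time $dt = dr/v(r)$ from equation (1), marching from $r = 3r_s$ inward for a roughly solar-mass black hole):

import numpy as np

c = 2.998e5        # speed of light, km/s
rs = 3.0           # Schwarzschild radius of a ~1 solar mass black hole, km

def v(r):
    # coordinate velocity seen by a distant observer, equation (1)
    return (1.0 - rs / r) * np.sqrt(rs / r) * c

# march from r = 3 rs (where the velocity peaks) down to r = 1.01 rs
r = np.linspace(3.0 * rs, 1.01 * rs, 200000)
dr = abs(r[1] - r[0])
t_total = np.sum(dr / v(r))     # total coordinate time, seconds

print(t_total * 1e3, "ms")      # a fraction of a millisecond, consistent with the graph above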



This is the problem with your idea. Even though strictly speaking we never see the objects pass through the event horizon, they very quickly get so close to it that to a distant observer they appear to have merged with it. The end result is that the horizon remains effectively spherical and we can't use your idea to build interesting shapes.


This isn't just theoretical, because we have actually observed the merger of two black holes at the LIGO gravitational wave observatory. The black holes were rotating around each other not falling directly towards each other, but even so the merger was effectively complete after about $150$ ms - that is, after $150$ ms the merged object was indistinguishable from a single spherical black hole even though the two black holes technically take an infinite time to fully merge.


Thursday, September 22, 2016

Photon emission and absorption by atomic electrons


Assume a photon is produced by an atomic electron making a transition down from a certain energy level to another.


Can that photon only be absorbed by another atomic electron making exactly the opposite transition?



Is there any chance that the photon could be absorbed by an atomic electron undergoing a transition with a slightly different energy difference?




homework and exercises - Change in entropy of two isolated systems merged into one system


From Statistical Physics, 2nd Edition by F. Mandl:



Two vessels contain the same number $N$ molecules of the same perfect gas. Initially the two vessels are isolated from each other, the gases being at the same temperature $T$ but at different pressures $P_1$ and $P_2$. The partition separating the two gases is removed. Find the change in entropy of the system when equilibrium has been re-established, in terms of the initial pressures $P_1$ and $P_2$. Show that this entropy change is non-negative.



I'm a little confused about a few things.



  1. Is there a temperature change in this process? Intuitively, I would say no because $(T+T)/2=T$. My other guess would be that the temperature must change because we now have a third pressure, $P_3$, that is different from the pressure of the other two and also because we have increased the volume.

  2. I believe this is an irreversible process, correct? Because you can't realistically separate the gases into that which came from vessel A and that which came from vessel B.



  3. Can the change in volume simply be called $V_A+V_B$? I thought it would be that simple but thinking more about it I feel as though the change in pressure and possible change in temperature might change things.




  4. When the partition is removed, is there a heat exchange between the 2 gases? My intuition says no because heat can only flow when there is a temperature difference and in this case both vessels are at temperature $T$.




My attempt:


So all in all I need to solve $\Delta S= \int \frac{dQ}{T}$


$$\Delta S= \int \frac{dQ}{T}$$ $$=\int \frac{dE+dW_{by}}{T}$$ We know that $dE=0$ and that $dW=PdV$



$$=\frac{1}{T}\int PdV$$ $$=\frac{P\Delta V}{T}$$


This is where I'm stuck - I don't think there is a valid thing to put in for $\Delta V$ because there were 2 systems that formed into 1 bigger system. If the final system is $V_1+V_2$, then what was its previous size? $V_1$ or $V_2$? Or can I say that it was $\frac{V_1+V_2}{2}$?



Answer



The total volume of the two rigid containers does not change, so the combined system does no work W on the surroundings. The two containers are presumably insulated, so no heat Q is exchanged with the surroundings. So, from the first law of thermodynamics, the change in internal energy of the combined system is zero. Since, for an ideal gas, internal energy is a function only of temperature, the final temperature of the combined system is equal to the initial temperature of the separate systems.


The process is irreversible, but not for the reason you gave. Since the same gas is present in both containers, the system can be returned to its original state, but not without incurring a change in the surroundings, involving heat transfer.


Quarky Quanta's intuition was correct with regard to the final equilibrium pressure of the combined system, provided n is the total number of moles of gas in the two original containers.


COMPLETION OF PROBLEM SOLUTION: $$V_1=\frac{NkT}{P_1}$$ $$V_2=\frac{NkT}{P_2}$$ $$V_1+V_2=\frac{NkT(P_1+P_2)}{P_1P_2}$$ $$P_F=\frac{2NkT}{(V_1+V_2)}=\frac{2P_1P_2}{(P_1+P_2)}$$ $$\Delta S=Nk\ln{\frac{P_1}{P_F}}+Nk\ln{\frac{P_2}{P_F}}=2Nk\ln{\left[\frac{(P_1+P_2)/2}{\sqrt{P_1P_2}}\right]}$$ So the change in entropy is determined by the ratio of the arithmetic mean of the initial pressures to their geometric mean, a ratio which is always at least 1 (AM–GM inequality), with equality, and hence zero entropy change, only when $P_1=P_2$.
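As a quick numerical sanity check of the final formula (my own sketch; the particle number and pressures are arbitrary illustrative values):

import numpy as np

k = 1.380649e-23     # Boltzmann constant, J/K
N = 6.022e23         # number of molecules in each vessel (illustrative)

def delta_S(P1, P2):
    # Delta S = 2 N k ln[ ((P1 + P2)/2) / sqrt(P1 P2) ], arithmetic over geometric mean
    return 2.0 * N * k * np.log(((P1 + P2) / 2.0) / np.sqrt(P1 * P2))

print(delta_S(1.0e5, 3.0e5))    # positive for unequal pressures
print(delta_S(2.0e5, 2.0e5))    # zero when the pressures are equal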


What's the meaning of the Feynman propagator for the driven quantum harmonic oscillator?


Consider a quantum harmonic oscillator that is driven for a finite time by a force $J(t)$, and work entirely in Heisenberg picture. Then we may define the 'in' and 'out' vacua $$|0_{\text{in}} \rangle, \quad |0_{\text{out}} \rangle$$ to be the ground states of the Hamiltonian at early and late times. In Schrodinger picture, the 'in' vacuum corresponds to a state in the usual QHO ground state before the driving starts, while the 'out' vacuum corresponds to a state that ends up in that state when the driving ends.


In Mukhanov and Winitzki's book the retarded Green's function is defined as a matrix element between 'in' states, $$\langle 0_{\text{in}} | \hat{q}(t) | 0_{\text{in}}\rangle = \int J(t') G_{\text{ret}}(t, t') \, dt', \quad G_{\text{ret}}(t, t') = \frac{\sin \omega(t - t')}{\omega} \theta(t - t').$$ This makes perfect sense thinking semiclassically, as $\langle \hat{q}(t) \rangle$ is just the average position of the particle, given that it was at rest in the far past; that's basically the definition of what a retarded propagator is. Similarly, one can define the advanced propagator using $|0_{\text{out}}\rangle$.
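That semiclassical reading is easy to verify numerically: convolving $J$ with $G_{\text{ret}}$ reproduces the classical trajectory of a driven oscillator that starts at rest. Here is a minimal sketch (my own, with a made-up Gaussian pulse $J(t)$ and $\omega = 1$; an illustration, not anything from Mukhanov and Winitzki):

import numpy as np
from scipy.integrate import solve_ivp

omega = 1.0
J = lambda t: np.exp(-(t - 5.0)**2)               # hypothetical driving force pulse

t = np.linspace(0.0, 20.0, 2001)
dt = t[1] - t[0]

# q(t) = int J(t') G_ret(t, t') dt'  with  G_ret = sin(omega (t - t')) / omega * theta(t - t')
G = np.where(t[:, None] >= t[None, :],
             np.sin(omega * (t[:, None] - t[None, :])) / omega, 0.0)
q_green = G @ J(t) * dt

# direct integration of q'' + omega^2 q = J(t), starting at rest before the pulse
sol = solve_ivp(lambda s, y: [y[1], -omega**2 * y[0] + J(s)],
                (t[0], t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-8, atol=1e-10)

print(np.max(np.abs(q_green - sol.y[0])))         # small: both give the same <q(t)>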


Finally, Mukhanov and Winitzki define the Feynman propagator by $$\langle 0_{\text{out}} | \hat{q}(t) | 0_{\text{in}}\rangle \propto \int J(t') G_{F}(t, t') \, dt'.$$ Now, I've been searching for an intuitive understanding of the Feynman propagator for years. Typical explanations in quantum field theory speak of "negative energy solutions" and antiparticles (e.g. here) which I've always been confused about, since they don't exist in ordinary quantum mechanics (as I asked here). But above we have a Feynman propagator for an exceptionally simple non-QFT system! So if there's an intuitive explanation at all it'll be right here, but I can't quite see what the matrix element means physically.


I have two questions: first, how is this equivalent to the usual definition of the Feynman propagator, involving a particular contour choice? Second, are there intuitive words one can drape around this definition? Does it provide any additional physical insight?




Wednesday, September 21, 2016

astrophysics - Plasma and Stars



I have read that most stars are made mostly of plasma.


My questions in this statement are:




  1. Are there stars not made of plasma?




  2. What percentage of a star is made of plasma?






Answer




Are there stars not made of plasma?



.....



Plasma is an electrically neutral medium of unbound positive and negative particles (i.e. the overall charge of a plasma is roughly zero). It is important to note that although the particles are unbound, they are not ‘free’ in the sense of not experiencing forces. When a charged particle moves, it generates an electric current with magnetic fields; in plasma, the movement of a charged particle affects and is affected by the general field created by the movement of other charges.



For more details see this link too.


A basic effect of the motion of charges is that electromagnetic radiation, i.e. light, is created. Stars certainly contain plasma: they are, after all, the steady sources of light in the night sky (which is how they were historically distinguished from the wandering planets). The Sun, at the center of the solar system, is a star and allows us to study the composition of stars, including the plasma that is evident there.



......



Our Sun, and all of the other stars, are made of plasma, much of interstellar space is filled with a plasma, albeit a very sparse one, and intergalactic space too.



[Note that "all of the other stars" is not really correct in this wiki quote; see below.]


Stars that are not wholly plasma include neutron stars:



A neutron star is the collapsed core of a large (10–29 solar masses) star. Neutron stars are the smallest and densest stars known to exist. With a radius on the order of 10 km, they can, however, have a mass of about twice that of the Sun. They result from the supernova explosion of a massive star, combined with gravitational collapse, that compresses the core past the white dwarf star density to that of atomic nuclei.



....




Neutron stars that can be observed are very hot and typically have a surface temperature around $6\times 10^{5}$ K.



They are complex stars.


The same holds for very massive stars that become supernovae; quite generally, the full spectrum of stellar evolution contains stars that are not wholly plasma.


To be visible as stars they have to emit light, so their outer shell must be plasma.


So plasma in the outer atmosphere is necessary for a star to be visible in the night sky, but there do exist stars that are not wholly plasma. Thanks to DrunkenCodeMonkey for catching it.


you ask:



What percentage of a star is made of plasma?




They are mostly plasma, i.e. overall-neutral ionized matter, even in the core, because of the very large kinetic energies acquired, through gravitational attraction, during formation from the primordial plasma.



The core of the Sun extends from the center to about 20–25% of the solar radius. It has a density of up to 150 g/cm$^3$ (about 150 times the density of water) and a temperature of close to 15.7 million kelvins (K). By contrast, the Sun's surface temperature is approximately 5,800 K.



This very high temperature does not allow nuclei and electrons to combine into stable neutral atoms, so even at that high density the core is a plasma. The temporary formation of neutral atoms and partially ionized species gives spectral lines detectable in the star's spectrum, but the temperatures are so high that no solid core can result. The fraction of neutral atoms in such a plasma is very small and is controlled by the relevant ionization equations, as was pointed out in the comments.


The planetary masses cooled off enough to acquire a solid core.


Tuesday, September 20, 2016

quantum field theory - How does QFT account for localization (into a finite volume of space), in practice?


In Quantum Field Theory there is as yet no agreement (as far as I know) on the issue of localization of particles. When one talks about a 'particle' in QFT, one usually means a single-particle state of definite momentum, or a wavepacket made out of such states. It is not clear, however, what (if any) are the states that correspond to something that is localized in space, or even localized in a finite region of space.


Some textbooks on QFT (e.g. Peskin and Schroeder, page 24) suggest that (at least in the case of the free Klein-Gordon theory) the field operator $\phi(\vec{x})$ creates a particle at position $\vec{x}$, i.e., the state \begin{equation} |\vec{x}\rangle := \phi(\vec{x})|0\rangle \end{equation} would correspond to a particle localized at $\vec{x}$. However, it can be easily shown that such states are not mutually orthogonal, i.e., $\langle \vec{y}|\vec{x}\rangle\neq 0$ if $\vec{y}\neq \vec{x}$. So these states cannot possibly correspond to localized particles.
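For concreteness, the overlap can be written down explicitly in the free Klein-Gordon theory (a standard textbook computation, along the lines of Peskin and Schroeder's discussion of the spacelike propagator, not anything new):

$$\langle \vec{y}|\vec{x}\rangle = \langle 0|\phi(\vec{y})\,\phi(\vec{x})|0\rangle = \int \frac{d^3p}{(2\pi)^3}\,\frac{e^{i\vec{p}\cdot(\vec{x}-\vec{y})}}{2E_{\vec{p}}} \;\sim\; e^{-m|\vec{x}-\vec{y}|} \quad \text{for } |\vec{x}-\vec{y}| \gg \frac{1}{m},$$

so the overlap is exponentially small but nonzero: the states fail to be orthogonal on scales of the order of the Compton wavelength.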


This bothers me and I would gladly hear other people's views on this. Still, I can imagine, for instance, that these states do actually correspond to effectively localized states, by which I mean that in practice it makes sense to regard them as localized states, even if they technically aren't. But this is only a shot in the dark; I have no idea whether that makes any sense. And if this is the case, then what is the justification for this view?


Other references advocate that one should use the eigenstates of the so-called Newton-Wigner position operator, which is explained in detail in this excellent answer. Although these states also have their peculiarities, they seem to be preferable over the states $\phi(\vec{x})|0\rangle$.


So theoretically it is not clear how we should describe localized particles. Nevertheless, in collider experiments, for instance, the particles (or perhaps I should say the quantum fields) clearly are effectively localized into a finite region of space. And there the theory really works! So apparently we are able to describe localized particles. So how does one describe this spatial dependence, in practice? I imagine one uses some kind of wavepackets? And does this give any insight into the theoretical problem?




quantum mechanics - Conceptual difficulty in understanding Continuous Vector Space


I have an extremely ridiculous doubt that has been bothering me, since I started learning quantum mechanics.


If we consider the finite-dimensional vector space for spin-$\frac{1}{2}$ particles, I guess it is nothing but $\mathbb{C}^2$. Each vector has two components (which is why it is two-dimensional, right?), each of which can be any complex number.


Now coming to the case of position space (say one dimension). I was taught this linear vector space is infinite-dimensional (and continuously so, unlike the number-operator basis). I am not able to understand this subtle point that it is infinite-dimensional (is it something like $\mathbb{R}^\infty$?). It is quite confusing every time I encounter this kind of space. Also, in this case can each of the (infinitely many) components take any real value? I learnt that these can be represented in terms of complex-valued functions; I would like to have that elucidated.



Answer



Your doubt is not ridiculous; it is probably simply due to the confused way mathematics is often taught in physics. (I am a physicist too and, during my career, I have had to bear ridiculous misconceptions, wasting a lot of time tackling non-existent pseudo-mathematical problems instead of focusing on genuine physical issues.) There are sensible mathematical definitions, but there is also a practical use of math in physics. Disasters arise, in my view, when the two levels are confused, especially while teaching students.


The Hilbert space of a particle in QM is not continuous: it is a separable Hilbert space, $L^2(\mathbb R)$ which, just in view of being separable, admits discrete countable orthogonal bases.


Moreover, a well-known theorem proves that if a Hilbert space admits a countable orthonormal basis, then every other basis is countable (more generally, all Hilbert bases have the same cardinality).



In $L^2(\mathbb R)$, a countable basis with physical meaning is, for instance, that made of the eigenvectors $\psi_n$ of the harmonic oscillator Hamiltonian operator.
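To make the countable-basis statement tangible, here is a small numerical sketch (my own illustration, in units with $m=\omega=\hbar=1$) checking that the first few harmonic-oscillator eigenfunctions are orthonormal in $L^2(\mathbb R)$:

import numpy as np
from math import factorial, pi, sqrt
from scipy.special import eval_hermite

def psi(n, x):
    # n-th harmonic oscillator eigenfunction: H_n(x) exp(-x^2/2) / sqrt(2^n n! sqrt(pi))
    norm = 1.0 / sqrt(2.0**n * factorial(n) * sqrt(pi))
    return norm * eval_hermite(n, x) * np.exp(-x**2 / 2.0)

x = np.linspace(-15.0, 15.0, 20001)
dx = x[1] - x[0]

# Gram matrix <psi_m | psi_n>: numerically the 6x6 identity matrix
gram = np.array([[np.sum(psi(m, x) * psi(n, x)) * dx for n in range(6)]
                 for m in range(6)])
print(np.round(gram, 6))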


However, for practical computations it is also convenient to speak of formal eigenvectors of, for example, the position operator: $|x\rangle$. In this case, $x \in \mathbb R$, so it could seem that $L^2(\mathbb R)$ also admits uncountable bases. It is false! $\{|x\rangle\}_{x\in \mathbb R}$ is not an orthonormal basis. It is just a formal object, (very) useful in computations.


If you want to make rigorous these objects, you should picture the space of the states as a direct integral over $\mathbb R$ of finite dimensional spaces $\mathbb C$, or as a rigged Hilbert space. In both cases however $\{|x\rangle\}_{x\in \mathbb R}$ is not an orthonormal Hilbertian basis. And $|x\rangle$ does not belong to $L^2(\mathbb R)$.


As a final remark, I would like to stress that the vectors of $L^2(\mathbb R)$ are equivalence classes of functions: $\psi$ is equivalent to $\phi$ iff $\int|\psi(x)-\phi(x)|^2 dx=0$, so if $\psi(x)\neq \phi(x)$ only on a set of zero measure, they nevertheless define the same vector of $L^2$. Consequently, the value an element of the space assumes at a given $x$ does not make any sense, since each set $\{x\}$ has zero measure.


Monday, September 19, 2016

gravity - Why are our planets in the solar system all on the same disc/plane/layer?




I always see pictures of the solar system where our sun is in the middle and the planets surround it. All these planets move on orbits in the same plane. Why?



Answer



We haven't ironed out all the details about how planets form, but they almost certainly form from a disk of material around a young star. Because the disk lies in a single plane, the planets are broadly in that plane too.


But I'm just deferring the question. Why should a disk form around a young star? While the star is forming, there's a lot of gas and dust falling onto it. This material has angular momentum, so it swirls around the central object (i.e. the star) and the flow collides with itself. The collisions cancel out the angular momentum in what becomes the vertical direction and smear the material out in the horizontal direction, leading to a disk. Eventually, this disk fragments and forms planets. Like I said, the details aren't well understood, but we're pretty sure about the disk part, and that's why the planets are co-planar.


newtonian mechanics - What formula do I use to calculate the force of impact of a falling object?


I am trying to calculate the force of impact of a falling object for my egg drop project. I dropped the egg from 10 m with a mass of 126 kg, and it hit with a velocity of 14.1 m/s. Which formula should I use to calculate the force of impact? I found the kinetic energy first and then used the formula $W=Fs$ to find the force of impact. Is that the right formula?
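As a hedged sketch of the standard approach (my own working, not an answer given in the original post): the formula $W=Fs$ with $W$ equal to the kinetic energy at impact gives the average force over a stopping distance $s$, which must be known or assumed and is not stated in the question. Taking a purely hypothetical stopping distance $d = 0.01\ \text{m}$, for example,

$$\bar F = \frac{\tfrac{1}{2}mv^2}{d} = \frac{\tfrac{1}{2}\,(126\ \text{kg})\,(14.1\ \text{m/s})^2}{0.01\ \text{m}} \approx 1.3\times 10^{6}\ \text{N}.$$

This estimates only the average force over the stopping distance; the peak force depends on how the deceleration is actually distributed, which is why "the force of impact" is not determined by the fall alone.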




How is it possible for current to flow so fast when charge flows so slowly?


How is it possible for current to flow so fast when charge flows so slowly?


We know electrons travel very slowly while charge travels at ~the speed of light.




Answer



What is being confused here is not the flow of "current" but rather the transmission of energy.


The individual electrons in a wire move very slowly, as they can be modeled as constantly colliding with atoms (yes, this is a naive classical model, no quantum) and bouncing around randomly in the manner of a gas (the term "electron gas" is real and not inappropriate at all). Electric current is the very slow flow of this electron gas through the wire when an electric field is present. The term "flow of current" actually is misleading - there is no such "substance" called "current", current is a flow. "Flow of charges" or more specifically (in this case - in others, it may differ!) "flow of electrons" makes more sense. (After all, we don't talk about "current" as a substance which is contained within a river and which is what does the "flowing", i.e. "flow of current in the river", rather we talk of flowing "water in" the river, and "the current" means the flow of water.) See:


http://amasci.com/miscon/eleca.html#cflow
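To put a number on "very slowly", here is a back-of-the-envelope sketch (my own, with typical copper-wire values rather than figures from the linked page):

# Drift velocity v = I / (n e A) for a copper wire
I = 1.0           # current, A (hypothetical)
A = 1.0e-6        # cross-sectional area, m^2 (i.e. 1 mm^2, hypothetical)
n = 8.5e28        # free-electron density of copper, m^-3
e = 1.602e-19     # elementary charge, C

v_drift = I / (n * e * A)
print(v_drift * 1e3, "mm/s")    # of order 0.1 mm/s, while the signal propagates near light speed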


Energy, however, is not transmitted by one electron moving all the way around the circuit to the load, but rather through waves in the electrons and, more importantly, the associated electric field. It's the same way that mechanical energy is transmitted in, say, a pole that is pushed from one end. The pole compresses slightly, and a sound wave thus appears, initially containing all the energy of your "push", and then travels down it, progressively distributing that energy amongst all the atoms within the pole until they are all moving in a single direction (here I imagine the pole pushed in a vacuum, as in interstellar space, with no other forces acting).

The same goes for electrons in the circuit - though I should point out that the following model is a bit simplistic and is meant to convey how the energy is transmitted rather than to detail the actual behavior of the electrons, which involves quantum mechanics and is subject to many of the same caveats as one sees within an individual atom or molecule. In this loose sense, when you throw the switch, an electromagnetic wave travels down the wire, setting the electrons ahead in motion and thus distributing its energy throughout the circuit.

Of course, the core atoms of the metal are relatively fixed despite the electron motion, so the electrons tend to lose that energy to collisions with them - unlike the pole, where everything, atoms and electrons together, starts moving in synchrony. You therefore have to keep supplying energy with a power source like a battery or generator, which effectively keeps "pushing the pole" and thus keeps energy going into it; think of a pole that is not in vacuum but in molasses, which you have to keep pushing to keep it moving. This pushing on atoms, of course, is how electrical devices use electrically transmitted energy to do useful tasks.


Electromagnetic waves, and sound waves, thus energy, travel much faster than the electrons and the atoms in both the circuit and pushed pole. Energy is what lights up your light bulbs, and energy is what makes your computer operate. Since energy travels fast, these devices start operating "at the flick of a switch".


Saturday, September 17, 2016

hamiltonian formalism - Decoupled physics of the complex scalar field


The canonical commutation relations for a complex scalar field are of the form


$$[\phi(t,\vec{x}),\pi(t,\vec{y})]=i\delta^{(3)}(\vec{x}-\vec{y})$$ $$[\phi^{*}(t,\vec{x}),\pi^{*}(t,\vec{y})]=i\delta^{(3)}(\vec{x}-\vec{y})$$


How can these commutation relations be obtained from the commutation relations for two free real scalar fields?



Answer



I) The complex scalar field comes from two real/Hermitian scalar fields with equal-time CCRs



$$[\hat{\phi}^j(t,\vec{x}),\hat{\phi}^k(t,\vec{y})]~=~0, $$ $$[\hat{\phi}^j(t,\vec{x}),\hat{\pi}_k(t,\vec{y})]~=~i\hbar{\bf 1}~\delta^j_k~ \delta^{3}(\vec{x}-\vec{y}), $$ $$[\hat{\pi}_j(t,\vec{x}),\hat{\pi}_k(t,\vec{y})]~=~0, \qquad j,k~\in~\{1,2\}, \tag{A}$$


and the definitions


$$ \hat{\phi}~=~\frac{1}{\sqrt{2}}(\hat{\phi}^1+i\hat{\phi}^2),\tag{B}$$ $$ \hat{\pi}~=~\frac{1}{\sqrt{2}}(\hat{\pi}_1\color{red}{-}i\hat{\pi}_2),\tag{C}$$


cf. e.g. this Phys.SE post. This leads to OP's mentioned CCRs.
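For completeness, here is the one-line check (my own, using only (A)-(C)):

$$[\hat{\phi}(t,\vec{x}),\hat{\pi}(t,\vec{y})] = \frac{1}{2}\Big([\hat{\phi}^1,\hat{\pi}_1] - i[\hat{\phi}^1,\hat{\pi}_2] + i[\hat{\phi}^2,\hat{\pi}_1] + [\hat{\phi}^2,\hat{\pi}_2]\Big) = \frac{1}{2}\big(i\hbar\,\delta^3(\vec{x}-\vec{y}) + 0 + 0 + i\hbar\,\delta^3(\vec{x}-\vec{y})\big) = i\hbar\,{\bf 1}\,\delta^{3}(\vec{x}-\vec{y}),$$

while the analogous computation with a plus sign in (C) would give zero instead, which is the quickest way to see why the minus sign is needed; taking Hermitian conjugates then yields the second of OP's relations.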


II) If the $\color{red}{\text{minus sign}}$ in eq. (C) seems strange, consider the following classical argument. The Lagrangian density is $$ {\cal L}~=~|\dot{\phi}|^2 - |\nabla \phi |^2 - {\cal V} ~=~\frac{1}{2}(\dot{\phi}^1)^2+\frac{1}{2}(\dot{\phi}^2)^2-\frac{1}{2}(\nabla \phi^1)^2-\frac{1}{2}(\nabla \phi^2)^2 - {\cal V} .\tag{D} $$ Therefore the momenta read


$$ \pi_j~=~\frac{\partial {\cal L}}{\partial \dot{\phi}^j}~=~\dot{\phi}^j, \qquad j~\in~\{1,2\}, \tag{E}$$


$$ \pi~=~\frac{\partial {\cal L}}{\partial \dot{\phi}} ~=~\frac{1}{\sqrt{2}}\left(\frac{\partial {\cal L}}{\partial \dot{\phi}^1}\color{red}{-}i \frac{\partial {\cal L}}{\partial \dot{\phi}^2} \right) ~=~\dot{\phi}^{\ast} ~=~\frac{1}{\sqrt{2}}(\pi_1\color{red}{-}i\pi_2).\tag{F}$$


III) For reference, let us mention that the Hamiltonian Lagrangian density reads


$$ {\cal L}_H~=~\pi \dot{\phi}+\pi^{\ast} \dot{\phi}^{\ast} -{\cal H} ~=~\pi_1 \dot{\phi}^1+\pi_2 \dot{\phi}^2 -{\cal H}, \tag{G}$$


where the Hamiltonian density is



$$ {\cal H}~=~|\pi|^2 + |\nabla \phi |^2 + {\cal V}~=~\frac{1}{2}(\pi_1)^2+\frac{1}{2}(\pi_2)^2+\frac{1}{2}(\nabla \phi^1)^2+\frac{1}{2}(\nabla \phi^2)^2 + {\cal V} .\tag{H}$$


classical mechanics - Moment of a force about a given axis (Torque) - Scalar or vectorial?

I am studying Statics and saw that: The moment of a force about a given axis (or Torque) is defined by the equation: $M_X = (\vec r \times \...