Monday, June 30, 2014

newtonian mechanics - Why does a conservative force return the work done against it by a body to that body?


Newton's 3rd law of motion:


Newton's third law of motion, or the law of action and reaction, implies that there is no isolated force in nature. Whenever there is any force at all, there must be at least two forces. There can be no force without an opposing force. In fact,




Every force results from the interaction of two bodies. The two bodies involved experience equal and opposite forces.



The "WORDS" to which I refer are as follows:


When two bodies interact, and one body exerts a force on the other and does negative work, the other body simultaneously exerts a reactive force on the former and does positive work on it. (This statement is extracted from an answer written by Floris.)


Non-conservative force follows the "WORDS":


When a body moves on a rough floor, kinetic friction $F_k$ opposes the motion and does negative work on the body, given by $W=-F_{k}\cdot d$, where $d$ is the displacement. Simultaneously, by Newton's third law, the body exerts the same force in the opposite direction on the rough floor and does positive work on it: partly to rotate the earth of which the floor is a part (though by a vanishingly small amount, since the earth is so large), and mostly to increase the kinetic energy of the molecules of the floor (i.e., thermal energy). The positive work done by the body is thus $W=F_{k}\cdot d$. So non-conservative forces like friction follow the "WORDS": the agent of the friction, the rough floor, gains the positive work done on it, most of it going to heat, and the body loses energy by the same amount.


Conservative forces don't follow the "WORDS":


When a body moves against conservative forces (gravitation, electric fields, etc.), the conservative forces do negative work on the body. Here too, the body exerts a force (forces are mutual, per Newton's 3rd law, the law of gravitation, Coulomb's law...) on the agents of those forces (the earth, the charge) and does positive work on them. But instead of the agent gaining that work, it is stored as potential energy. Thus, though the body loses energy, the agent doesn't gain that energy.


My question:



Why, unlike the rough floor, don't the agents of conservative forces gain that work, rather than storing it as potential energy?



Answer



I think there is some confusion about the terms and the relationship between conservative forces and energy-conserving systems. I will make a few general statements:


1) A conservative force can be described by a potential function $\Phi(x)$ with $\vec F=-\nabla\Phi$; a non-conservative force cannot be reduced to such a function.


2) Friction is a non-conservative force that also doesn't conserve energy, but not every non-conservative force is a friction force. There are non-conservative forces that conserve energy and can return all of it, but that can't be described by a scalar potential function of the mechanical coordinates. One such example is a torsion spring, where the total work performed on the spring is not uniquely defined by the coordinates of the particle (rather, it is a potential of the total angle through which the spring has been wound). Time-dependent forces may be energy-conserving, but may not return all of the energy to the mass they act on. Another example is an energy-conserving velocity-dependent force, like a magnetic field acting on a charged particle (see the numerical sketch after this list).


3) Even a conservative force may never return the work that was performed against it. A simple example is a particle that rolls up on an incline and keeps moving at constant velocity on the flat top of the potential.
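
As a quick check of the claim in point 2 that a velocity-dependent magnetic force conserves kinetic energy, here is a minimal numerical sketch (my own illustration, not part of the original answer; the charge, mass, field and time step are arbitrary assumed values). It integrates $m\dot{\vec v} = q\,\vec v\times\vec B$ with a Boris push and prints the speed before and after many steps.

import numpy as np

# Arbitrary (assumed) parameters: charge, mass, uniform B field along z
q, m = 1.0, 1.0
B = np.array([0.0, 0.0, 1.0])

dt = 1e-3
v = np.array([1.0, 0.5, 0.2])   # initial velocity
x = np.zeros(3)

def boris_step(x, v):
    # Boris rotation: exactly preserves |v| for a purely magnetic force
    t = (q * B / m) * (dt / 2.0)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v + np.cross(v, t)          # no electric field assumed
    v_new = v + np.cross(v_prime, s)
    return x + v_new * dt, v_new

speed0 = np.linalg.norm(v)
for _ in range(20000):
    x, v = boris_step(x, v)

print("initial speed:", speed0)
print("final speed:  ", np.linalg.norm(v))   # unchanged -> the force does no work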


The easiest way to avoid confusion is to apply the potential field definition. If a potential can be defined, we are looking at a conservative force. If it cannot be, then we have a more complex physical situation to deal with, and there are multiple different things that can happen, some of which are energy-conserving and some of which are not (at least from the reduced point of view of the mass that the force is acting on).
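
A minimal sketch of that "potential field definition" test (my own illustration, not from the original answer; the two example force fields are assumptions): a force field is conservative exactly when its curl vanishes, in which case a potential $\Phi$ with $\vec F = -\nabla\Phi$ exists.

import sympy as sp

x, y, z = sp.symbols('x y z')

# Example 1 (assumed): an inverse-square central force, expected conservative
r = sp.sqrt(x**2 + y**2 + z**2)
F1 = sp.Matrix([-x, -y, -z]) / r**3

# Example 2 (assumed): a purely rotational force field, expected non-conservative
F2 = sp.Matrix([-y, x, 0])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

print(sp.simplify(curl(F1)))   # zero vector -> conservative; Phi = -1/r works
print(sp.simplify(curl(F2)))   # non-zero -> no scalar potential exists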


electromagnetism - The exchange of photons gives rise to the electromagnetic force



Pardon me for my stubborn classical/semiclassical brain. But I bet I am not the only one who finds such a description confusing.



If the EM force is caused by the exchange of photons, does that mean there is a force only when photons are being exchanged? To my knowledge, once charged particles are placed, the electromagnetic force is there continuously, without interruption. By that logic, there would have to be an infinite stream of photons to build up the EM force, with no interval between one "exchange event" and the next. A free light source from an EM field? The scenario is really hard to imagine.


For nuclei the scenario becomes even odder. The strong interaction between protons is caused by the exchange of massive pions. It sounds as if the protons toss a stream of balls to one another to build an attractive force - and the balls come from nothing.


Please correct me if I am wrong: the excitations of photons and pions all come from nothing. So there should be EM force and strong force everywhere, no matter what type of particles are out there. Say, even electrically neutral, dipole-free particles could build an EM force between them. And I find no reason such exchanges of particles cannot happen in vacuum.


Hope there will be some decent firmware to refresh my classical brain with newer field language codes.



Answer



Update: I went over this answer and clarified some parts. Most importantly I expanded the Forces section to connect better with the question.




I like your reasoning and you actually come to the right conclusions, so congratulations on that! But understanding the relation between forces and particles isn't that simple, and in my opinion the best one can do is provide you with a bottom-up description of how one arrives at the notion of force when one starts with particles. So here comes the firmware you wanted. I hope you won't find it too long-winded.


Particle physics


So let's start with particle physics. The building blocks are particles and interactions between them. That's all there is to it. Imagine you have a bunch of particles of various types (massive, massless, scalar, vector, charged, color-charged and so on), and at first you could suppose that all kinds of processes between these particles are allowed (e.g. three photons meeting at a point and creating a gluon and a quark; or seven electrons meeting at a point and creating four electrons, a photon and three gravitons). Physics could indeed look like this, and it would be an incomprehensible mess if it did.



Fortunately for us, there are a few organizing principles that make particle physics reasonably simple (but not too simple, mind you!). These principles are known as conservation laws. After having done a large number of experiments, we became convinced that electric charge is conserved (the number is the same before and after the experiment). We have also found that momentum is conserved. And lots of other things too. This means that processes such as the ones I mentioned before are already ruled out because they violate some of these laws. Only processes that survive these (very strict) conservation requirements are to be considered possible in a theory that could describe our world.


Another important principle is that we want our interactions simple. This one is not of an experimental nature, but it is appealing, and in any case it's easier to start with simpler interactions and introduce more complex ones only if that doesn't work. Again fortunately for us, it turns out the basic interactions are very simple. At a given interaction point there is always just a small number of particles. Namely:



  • two: particle changing direction

  • three:

    • particle absorbing another particle, e.g. $e^- + \gamma \to e^-$

    • or one particle decaying to two other particles $W^- \to e^- + \bar\nu_e$




  • four: these don't have as nice an interpretation as the ones above; but to give an example, one has e.g. two gluons going in and two gluons going out


So one example of such a simple process is an electron absorbing a photon. This violates no conservation law and actually turns out to be the building block of the theory of electromagnetism. Also, the fact that there is a nice theory for this interaction is connected to the fact that charge is conserved (in general there is a relation between conserved quantities and the way we build our theories), but this connection is better left for another question.


Back to the forces


So, you are asking yourself what all that long and boring talk was about, aren't you? The main point is: our world (as we currently understand it) is indeed described by all those different species of particles that are omnipresent and interact through the bizarre interactions allowed by the conservation laws.


So when one wants to understand the electromagnetic force all the way down, there is no other way (actually, there is one and I will mention it at the end; but I didn't want to over-complicate the picture) than to imagine a huge number of photons flying all around, being absorbed and emitted by charged particles all the time.


So let's illustrate this with your problem of the Coulomb interaction between two electrons. The complete contribution to the force between the two electrons consists of all the possible combinations of elementary processes. E.g. the first electron emits a photon, which then flies to the other electron and gets absorbed; or the first electron emits a photon, which changes into an electron-positron pair that quickly recombines into another photon, which then flies to the second electron and gets absorbed. There is a huge number of these processes to take into account, but the simplest ones actually contribute the most.
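
To give a rough sense of why the simplest processes dominate (my own back-of-the-envelope illustration, not part of the answer; it assumes the usual heuristic that each additional loop in QED costs roughly one factor of the fine-structure constant): more complicated exchange processes are suppressed by powers of α ≈ 1/137.

# Rough illustration (assumption: each extra loop ~ one factor of alpha)
alpha = 1 / 137.036          # fine-structure constant

for extra_loops in range(4):
    # relative size of an exchange process with this many extra loops
    print(extra_loops, "extra loop(s): relative contribution ~", alpha**extra_loops)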


But while we're at the Coulomb force, I'd like to mention a striking difference from the classical case. There, the theory tells you that you have an EM field even when only one electron is present. But in quantum theory this wouldn't make sense. The electron would need to emit photons (because this is what corresponds to the field) but they would have nowhere to fly to. Besides, the electron would be losing energy and so wouldn't be stable. And there are various other reasons why this is not possible.


What I am getting at is that a single electron doesn't produce any EM field until it meets another charged particle! Actually, this should make sense if you think about it for a while. How do you detect that there is an electron if nothing else at all is present? The simple answer is: you're out of luck, you won't detect it. You always need some test particles. So the classical picture of the electrostatic EM field of a point particle only describes what would happen if another particle were inserted into that field.


The above is part of a bigger bundle of issues with the measurement (and indeed even the very definition) of the mass, charge and other properties of a system in quantum field theory. These issues are resolved by means of renormalization, but let's leave that for another day.



Quantum fields


Well, it turns out all of the above talk about particles (although visually appealing and technically very useful) is just an approximation to the more precise picture, in which there exists just one quantum field for every particle type, and the huge number of particles everywhere corresponds to sharp local peaks of that field. These fields then interact by means of quite complex interactions that reduce to the usual particle picture when one looks at what those peaks are doing as they come close together.


This field view can be quite enlightening for certain topics and quite useless for others. One place where it is actually illuminating is when one is trying to understand the spontaneous appearance of so-called virtual particle-antiparticle pairs. As particles, it's not clear where they appear from. But from the point of view of the field, they are just local excitations. One should imagine a quantum field as a sheet that is wiggling around all the time (by means of its inherent quantum wigglage) and from time to time wiggles hugely enough to create a peak that corresponds to the mentioned pair.


energy - oscillating piston shock waves


I am interested to know whether there are analytical solutions for a piston oscillating like a sine wave and the shock wave and rarefaction it generates. How does the energy change during this process, and how can we use mathematical equations to describe it? Something like the N-wave in the book "Supersonic Flow and Shock Waves". Also, is there a numerical approach?




Sunday, June 29, 2014

visible light - A half covered lens


Let's say there's an object in front of a convex lens. I have a piece of paper behind the lens at the right location such that the lens will form an image on the paper. The object is self-luminous.


Now suppose I put another piece of paper right in front of the lens and cover up half of the lens. The image, instead of only half of it being visible, will actually remain the same. The only difference is that it will be dimmer.



Now, the question is what if I remove the paper at the location where the image forms, and put my eye there. What will I see? Will I see the piece of paper which is blocking half of the lens at all? Does that mean I will just "see through" the paper?


I wanted to try this out myself, but unfortunately I don't have a lens on hand.



Answer



Your eye is just a lens and a screen. The lens is actually a compound lens, since both the cornea and the lens play a role in focussing the light, and the screen is your retina. So if your setup is initially:


Object-Lens-Screen


Then if you take out the screen and put your eye there you'll get:


Object-lens-Eye


So you can work out what appears on your retina with exactly the same calculations as any other multi lens setup. But there are some complications because your eye can adjust the focal length of the lens, and this process is instinctive so it's hard to control. In addition the pupil acts as a pinhole camera and will produce a rough image even when objects aren't at the correct position to be in perfect focus.


Exactly what will appear on your retina depends on the geometry you've chosen, but the eye has such a large depth of focus that I suspect you would see the paper cutting off half the field of vision.


Note that the iris in your eye works by constricting to reduce the size of the aperture, and you can't see your iris. This is exactly equivalent to the fact in the first experiment that the paper doesn't appear as an image on the screen.



fluid dynamics - Vasodilation decreases blood pressure


Okay, it should make sense intuitively, but Bernoulli's principle says that a higher cross-sectional flow area means lower velocity, which means higher pressure!


I also read in a related post that Bernoulli's principle is for inviscid flow. Blood has viscosity. But this alone should make the changes in pressure less extreme, not reverse the effects.



But does the fact that blood is a non-newtonian fluid change anything?



Answer



This Wiki article on vasodilation explains it all. Vasodilation decreases the resistance to blood flow, while vasoconstriction increases it. So when the resistance to flow is low (high), the heart needs to pump blood at a lower (higher) pressure to maintain the same average flow rate as in a normal blood vessel.
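
A rough quantitative illustration of that resistance argument (my own sketch; it treats blood as a Newtonian fluid in laminar Poiseuille flow, which is only an approximation, and the numbers below are assumed, not physiological data): for a fixed volumetric flow rate, the pressure drop needed scales as $1/r^4$, so a modest dilation of the vessel radius lowers the required pumping pressure dramatically.

from math import pi

# Assumed illustrative values
mu = 3.5e-3      # viscosity of blood, Pa*s (approximate)
L = 0.1          # vessel length, m
Q = 1e-6         # volumetric flow rate, m^3/s
r0 = 2e-3        # baseline vessel radius, m

def delta_p(r):
    # Hagen-Poiseuille pressure drop for laminar flow of a Newtonian fluid
    return 8 * mu * L * Q / (pi * r**4)

for dilation in (1.0, 1.1, 1.2):      # radius increased by 0%, 10%, 20%
    r = dilation * r0
    print(f"radius x{dilation:.1f}: pressure drop = {delta_p(r):.1f} Pa")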


astronomy - Is there an established standard for naming exoplanets?


I understand that exoplanets are named by adding a lowercase letter to the designation of the planet's parent star or stellar system, beginning with 'b' (the star itself is 'a'), in order of discovery (and in order of orbital distance from the star when discoveries of more than one planet around a star are announced together). My question is whether there is an established standard for using a space between the stellar designation and the planet's letter. Is it Kepler-22 b or Kepler-22b? Journals seem to have different editorial policies that determine how they handle this, but the comprehensive databases available online (e.g., the Extrasolar Planets Encyclopaedia, the Visual Exoplanet Catalogue, and NASA's Planet Quest) all seem to use a space.



Answer




You are correct that the standard for naming exoplanets is normally the lower-case letter after the star name in the order of discovery. So in our system, Earth would be Sol b. If there are multiple stars in the system, like 16 Cyg (which has 16 Cyg A and B), then the planet's lower-case letter would be appended to the star's, such as 16 Cyg Bb.


So the space is normally there, unless it's a binary star system, in which case there is no extra space because there already is one between the star's A/B/C/etc. designation and the name. (Different schemes have been proposed, such as that of Hessman et al. (2010).)


However, as you note, it appears that the Kepler team does not include a space. Their papers, for example, would list a planet as Kepler-22b, or Kepler-16b (which is confusingly short for Kepler-16 (AB)-b). In these cases, newspaper/internet editors will often follow an established style guide or their own publication's rules instead of following what the team does. Just looking at the External Links on the Wikipedia page for Kepler-22b shows the BBC using "Kepler 22b," NASA properly using "Kepler-22b," and the Planetary Habitability Lab using "Kepler-22 b."


Saturday, June 28, 2014

electromagnetism - Is it true that any system of accelerating charges will radiate?


I was recently told by a physics teacher that "any system of charges in which at least some of the charges are executing some sort of accelerated motion, will radiate and lose energy". This refers to classical electrodynamics (I'm not entering into quantum mechanics), that is, this statement should either be provable or disprovable from Maxwell's equations. I tried to think of a simple system of charges that would disproof this statement, since I intuitively think that it isn't true, but I haven't come up with any. The one example I could think of was a positive charge sitting on top of a negative charge, so that the net charge is zero. No matter how they move it is obvious that there will be no radiation. But my teachers did not accept this trivial example.


So my question is whether the statement is true or not, plus a proof or a counterexample of some charges executing accelerated motion without radiating.



Answer



The claim that accelerated charges must radiate is simply false. There are very many simple situations in which they do, but in general things should be examined on a case-by-case basis; there is no simple rule of thumb like "acceleration yields radiation."


The simplest way to see this is to consider a wire carrying a constant current. This situation is magnetostatic, with a well-understood electromagnetic field given by the Biot-Savart law... which trivially involves no radiation, as the field is constant in time. This is independent of the shape of the wire, and yet, if the wire is not straight, the charges within it must accelerate at some point.
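
As a small sanity check of that magnetostatic claim (my own sketch, with an assumed loop radius and current): numerically integrating the Biot-Savart law for a circular current loop reproduces the standard on-axis field, and nothing in the result depends on time, so there is nothing to radiate.

import numpy as np

mu0 = 4e-7 * np.pi
I, R = 1.0, 0.05           # assumed current (A) and loop radius (m)
z = 0.03                   # field point on the loop axis (m)

# Discretize the loop and sum dB = mu0 I dl x r_vec / (4 pi |r|^3)
phi = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
dphi = phi[1] - phi[0]
pts = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros_like(phi)], axis=1)
dl = np.stack([-R * np.sin(phi), R * np.cos(phi), np.zeros_like(phi)], axis=1) * dphi

r_vec = np.array([0.0, 0.0, z]) - pts
r_mag = np.linalg.norm(r_vec, axis=1, keepdims=True)
dB = mu0 * I / (4 * np.pi) * np.cross(dl, r_vec) / r_mag**3
B_numeric = dB.sum(axis=0)[2]

B_exact = mu0 * I * R**2 / (2 * (R**2 + z**2) ** 1.5)
print(B_numeric, B_exact)   # the two agree; the field is static, hence no radiation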


A more exotic example would be a uniformly rotating ring of charge. Both the charge density and the current density will be constant in time, and will produce constant electric and magnetic fields. This can be experimentally realized with toroidal superconductors.


cosmology - Do the laws of physics evolve?


The Hubble parameter $H(t)$ appears to be changing over time. The fine structure constant $\alpha$, like many other constants in QFT, is a running constant that varies with the energy scale at which it is measured. Therefore, it could be argued that all running constants have 'evolved' over time as the Universe has expanded and cooled. Both the local and global curvature of the Universe change over time, implying that so too does the numerical value of $\pi$. All these things are nevertheless called constants (well, let's say parameters, since they are not really 'constant').



In a discussion with astronomer Sir Fred Hoyle, Feynman said "what today, do we not consider to be part of physics, that may ultimately become part of physics?" He then goes on to say "..it's interesting that in many other sciences there's a historical question, like in geology - the question how did the Earth evolve into the present condition? In biology - how did the various species evolve to get to be the way they are? But the one field that hasn't admitted any evolutionary question - is physics."


So have the laws of physics remained form-invariant over the lifetime of the Universe? Does the recent understanding of the aforementioned not-so-constant constants somehow filter into the actual form of the equations being used? Have advances in astronomical observations, enabling us to peer back in time as far as the CMB, given us any evidence to suggest that the laws of nature have evolved? If Feynman thinks that "It might turn out that they're not the same all the time and that there is a historical, evolutionary question," then this is surely a question worth asking.


NB: To be clear, this is a question purely about physics - whether the equations therein change as the Universe ages, and whether there is any observational evidence for this. It is not intended as an opportunity for a philosophical discussion.



Answer



For many (most? all?) physicists, it's something like an axiom (or an article of faith, if you prefer) that the true laws don't change over time. If we find out that one of our laws does change, we start looking for a deeper law that subsumes the original and that can be taken to be universal in time and space.


A good example is Coulomb's Law, or more generally the laws of electromagnetism. In a sense, you could say that Coulomb's Law changed form over time: in the early Universe, when the energy density was high enough that electroweak symmetry was unbroken, Coulomb's Law wasn't true in any meaningful or measurable sense. If you thought that Coulomb's Law today was a fundamental law of nature, then you'd say that that law changed form over time: it didn't use to be true, but now it is. But of course that's not the way we usually think of it. Instead, we say that Coulomb's Law was never a truly correct fundamental law of nature; it was always just a special case of a more general law, valid in certain circumstances.


A more interesting example, along the same lines: Lots of theories of the early Universe involve the idea that the Universe in the past was in a "false vacuum" state, but then our patch of the Universe decayed to the "true vacuum" (or maybe just another false vacuum!). If you were around then, you'd definitely perceive that as a complete change in the laws of physics: the particles that existed, and the ways those particles interacted, were completely different before and after the decay. But we tend not to think of that as a change in the laws of physics, just as a change in the circumstances within which we apply the laws.


The point is just that when you try to ask a question about whether the fundamental laws change over time, you have to be careful to distinguish between actual physics questions and merely semantic questions. Whether the Universe went through one of these false vacuum decays is (to me, anyway) a very interesting physics question. I care much less whether we describe such a decay as a change in the laws of physics.


optics - Does light lose its energy when it passes through a denser medium?


I know it does not, because it emerges from the denser medium at 300,000 km per second; but according to $E=mc^2$, and given that the speed of light decreases inside a denser medium with refractive index greater than 1, doesn't that suggest that the energy of light inside the denser medium is less?



Answer



$E=mc^2$ is not really applicable to light. It is applicable to something that has mass.


The energy of light is given by $E=h\nu$, where $\nu$ is the frequency of the light and $h$ is Planck's constant, which has a value of $\approx 6.626 \times 10^{-34}\ \mathrm{J\,s}$.



When light enters a different medium, its frequency remains the same, and of course, so does Planck's constant. Hence, obviously, its energy remains the same throughout the entire exercise.
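
A small numerical illustration of this point (my own sketch, with assumed numbers for the wavelength and refractive index): the frequency, and hence $E = h\nu$, is unchanged when light enters glass; what changes is the wavelength, $\lambda_{\text{medium}} = \lambda_{\text{vacuum}}/n$, because the phase velocity drops.

h = 6.626e-34       # Planck's constant, J*s
c = 3.0e8           # speed of light in vacuum, m/s

lam_vac = 500e-9    # assumed vacuum wavelength (green light), m
n = 1.5             # assumed refractive index of glass

nu = c / lam_vac            # frequency is set in vacuum...
E = h * nu                  # ...and stays the same inside the medium

lam_medium = lam_vac / n    # the wavelength shortens instead
v_medium = c / n            # because the phase velocity drops

print(f"photon energy: {E:.3e} J (same in vacuum and in glass)")
print(f"wavelength: {lam_vac*1e9:.0f} nm in vacuum, {lam_medium*1e9:.0f} nm in glass")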


electrostatics - Heat produced when dielectric inserted in a capacitor


When a capacitor is connected to a battery, it stores energy $\frac{CV^2}{2}$, while the battery supplies energy $CV^2$. Therefore, $\frac{CV^2}{2}$ of energy is lost as heat. When a dielectric is inserted into a capacitor that is already charged (and still connected to the battery), will any heat be produced?



Answer



The calculation by Ben clearly shows that the total energy stored in the battery and the capacitor is lower for the final situation than for the initial situation. Some energy has thus gone somewhere else.



The question is: where did this energy go? And are we allowed to calculate the force on the dielectric by taking the derivative of the energy with respect to the position of the dielectric?


The answer is that it depends on the way you let the dielectric slide into the capacitor. (I consider a solid dielectric here.)


If the dielectric is slowly inserted into the capacitor, there will be no energy converted into heat at all. A force is needed to prevent the dielectric from sliding in. The dielectric is thus performing work on the object that is holding it back. All the missing energy will be transferred to the object holding back the dielectric.


In this situation calculating the force from the change in energy is justified.


Note that in this situation, the voltage over the capacitor will remain constant during the insertion of the dielectric and the current that is required to charge the capacitor can be made arbitrarily low by choosing a low enough insertion velocity.
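
A small bookkeeping sketch of the slow-insertion case (my own numbers, assuming the capacitance rises from $C$ to $\kappa C$ at constant voltage $V$): the battery supplies $(\kappa-1)CV^2$, the capacitor stores an extra $(\kappa-1)CV^2/2$, and the remaining $(\kappa-1)CV^2/2$ is the mechanical work done by the dielectric on whatever is holding it back, so nothing has to be dissipated as heat.

# Assumed illustrative values
C = 1e-6        # initial capacitance, F
V = 10.0        # battery voltage, V
kappa = 3.0     # dielectric constant

dC = (kappa - 1) * C

dQ = dC * V                     # extra charge pulled from the battery
W_battery = dQ * V              # energy supplied by the battery
dU_capacitor = 0.5 * dC * V**2  # increase in stored field energy

W_mechanical = W_battery - dU_capacitor   # work delivered to the restraining hand
print(W_battery, dU_capacitor, W_mechanical)
# Half the battery energy goes to the field, half is delivered as mechanical work;
# none needs to be lost as heat if the insertion is slow.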


The situation changes when instead of slowly inserting the dielectric, you let go of the dielectric and it is just left to move freely into the capacitor. In that case a large current is needed to increase the charge on the capacitor. The electrical resistance in the circuit will dissipate some energy into heat. The rest of the energy is converted into kinetic energy of the dielectric.


If there is little mechanical resistance, the dielectric will shoot out the other side of the capacitor and will be pulled back again. It will oscillate in this way, until the oscillations are damped by the electrical and mechanical resistance.


In this case it is not justified to calculate the force by just considering the energy in the capacitor and battery. One should also consider the energy dissipated in the resistance.


Note that in this case the voltage over the capacitor is no longer constant. The voltage drop over the electrical resistance will cause a voltage difference between the battery and the capacitor.


lagrangian formalism - The derivation of fractional equations



Recently I saw some physical problems that can be modeled by equations with fractional derivatives, and I had some doubts: is it possible to write an action that results in an equation with fractional derivatives? For example, consider a hypothetical physical system described by a principle of least action. Is there a "wave equation" with a time derivative of order $3/2$? Does such a question make sense?



Answer



When I've seen fractional derivatives, I've assumed that one place where they would naturally arise is in physical situations where there's a fractional dependence on time.


For example, random walks typically result in movement proportional to $\sqrt{t}$. Googling for "fractional+derivative+random+walk" on arxiv.org finds some papers that explore this:


http://www.google.com/search?q=fractional+derivative+random+walk+site%3Aarxiv.org


So I'm wondering if there's a way of relating some of the diffusion versions of QM with fractional derivatives.
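
As a quick numerical illustration of the $\sqrt{t}$ behaviour mentioned above (my own sketch, not part of the answer): the root-mean-square displacement of an unbiased random walk grows like the square root of the number of steps.

import numpy as np

rng = np.random.default_rng(0)
n_walkers, n_steps = 5000, 1000

# +1/-1 steps; the cumulative sum gives each walker's position after every step
steps = rng.choice([-1, 1], size=(n_walkers, n_steps))
positions = np.cumsum(steps, axis=1)

for t in (10, 100, 1000):
    rms = np.sqrt(np.mean(positions[:, t - 1] ** 2))
    print(f"t = {t:4d}: rms displacement = {rms:6.2f}, sqrt(t) = {np.sqrt(t):6.2f}")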


Friday, June 27, 2014

string theory - Confusion on Polchinski P.67


I am reading Polchinski's String Theory, Volume I. While learning the basics of string scattering in Ch. 3, I went back to chapter 2 to review the part on vertex operators. However, I found that I didn't really understand eq. 2.8.18 and the paragraph below it when I tried to derive what he said.


In the book,



The state–operator isomorphism is an important but unfamiliar idea and so it is useful to give also a more explicit path integral derivation. Consider a unit disk in the $z$-plane, with local operator $A$ at the origin and with the path integral fields $φ$ fixed to some specific boundary values $φ_b$ on the unit circle. The path integral over the fields $φ_i$ on the interior of the disk with $φ_b$ held fixed produces some functional $Ψ_A[φ_b]$, $$Ψ_A[φ_b] = \int [dφ_i]_{φ_b} \exp(−S[φ_i])A(0) . $$ A functional of the fields is the Schrodinger representation of a state, so this is a mapping from operators to states. The Schrodinger representation, which assigns a complex amplitude to each field configuration, has many uses. It is often omitted from field theory texts, the Fock space representation being emphasized.


To go the other way, start with some state $Ψ[φ_b]$. Consider a path integral over the annular region $1 \geq |z| \geq r$, with the fields $φ_b$ on the outer circle fixed and the fields $φ_b$ integrated over the inner circle as follows:


$$\int[dφ'_b][dφi]_{φ_b,φ'_b} \exp(−S[φ_i])r^{−L_0−\tilde{L}_0}Ψ[φ'_b]. \quad(2.8.18)$$ That is, the integral is weighted by the state (functional) $r^{−L_0−\tilde{L}_0}Ψ[φ_b]$. Now, the path integral over the annulus just corresponds to propagating from $|z| = r$ to $|z| = 1$, which is equivalent to acting with the operator $r^{L_0+\tilde{L}_0}$. This undoes the operator acting on $Ψ$, so the path integral (2.8.18) is again $Ψ[φ_b]$. Now take the limit as $r → 0$. The annulus becomes a disk, and the limit of the path integral over the inner circle can be thought of as defining some local operator at the origin. By construction the path integral on the disk with this operator reproduces $Ψ[φ_b]$ on the boundary.




$$ \Psi[\phi_b] = \int d[\phi'_b]d[\phi_i]_{\phi_b,\phi'_b}\exp(-S[\phi_i])r^{-L_0-\tilde{L}_0}\Psi[\phi'_b] $$




  1. If I follow (A.1.15), I cannot see how the factor $r^{-L_0-\tilde{L}_0}$ arises.




  2. As he mentioned, the integral means propagation from $|z|=r$ to $|z|=1$. My previous understanding is that propagation in the $r$-direction is time evolution, so we act with $\exp(-iHt)$. The time direction is $\mbox{Im}(w)$ in the cylinder coordinate $(z=\exp(-iw))$, so we would have $\exp(-iH\, \mbox{Im}(w)) = r^{-i(L_0+\tilde{L}_0)}$?





My sincere thanks for your time and effort in explaining it. :)


Thanks @Prahar for the comment. I guessed he was referring to the Euclidean world-sheet; thanks for confirming this.




experimental physics - Why is the vibration in my wire acting so oddly?


I was soldering a very thin wire today, and when I had one end firmly soldered, I accidentally bumped the wire diagonally with my tweezers. What I'd expect to happen is that the wire oscillates for a little while in one axis, then stops. However, what actually occurred is quite different and much more interesting! I recorded it in real-time; https://youtu.be/O5nFNly7L7s (sorry for the poor macro focus), and recorded it again at 480FPS and imported it into Tracker video analysis; https://youtu.be/9jhDsypkqKQ.


As you can see, the rotational motion fully reverses!


Here are some still frames from Tracker:


The wire begins to rotate clockwise after being excited:


[image]


The wire begins to oscillate in one axis:


[image]


And, mindbogglingly, begins to rotate counterclockwise!


[image]



(clearer views in the videos above)


The X and Y axis motion plotted by tracker raises even more questions:


[image]


As you can see, the X axis motion simply stops, then restarts!


What's going on?


My first thought was that (because this wire wasn't originally straight) there was some sort of unusual standing wave set up, but this occurs even with a completely straight wire.


I'm absolutely sure there's something about two-axis simple harmonic motion that I'm missing, but I just cannot think of what is causing this. I've seen many other "home-experiment" questions on this site, so I thought this would be an acceptable question; I hope it's not breaking any rules.


EDIT:


Okay, I've got some more data! I've set up a little solenoid plunger system that produces no torque or two-axis motion, and it's very repeatable. Here: https://youtu.be/ZAni6VMOVD8


What I've noticed is that I can get almost any wire (even with a 90-degree bend!) to exhibit single-axis motion with this setup, with no spinning or deviation; and if I try enough, the same thing can happen with the tweezers. It seems like if I slide the tweezers slightly when exciting the wire, I can reliably produce this odd motion. I don't know what that indicates.



EDIT2:


Okay, seems like with the plunger-solenoid I still can get this circular motion even with a straight wire.


EDIT3:


Okay, so I wanted to test @sammy's suggestion once and for all. I assume that changing the ratio of the wire's moment of inertia to its torsional stiffness would affect his theory, so I soldered a small piece of wire perpendicularly to the end of the main wire:


[image]


Then I recorded the motion: [image]


And then I took off the perpendicular wire and re-recorded the data: [image] Then I did it again (the first run gave noisy data): [image]


EDIT N: The final test!


Floris's hypothesis requires that the resonant frequency of the wire be different in each cardinal direction. To measure this, I used my solenoid setup that did not cause rotation, as above. I put a straight piece of wire between a light source and a light-dependent resistor and connected it to an oscilloscope:


[image]



The signal was very faint (42 millivolts), but my scope was able to pull it out of the noise. I have determined this:


In the +x direction, the resonant frequency of a just-straightened sample wire (unknown cycle frequency) is 51.81 Hz ± 1 Hz;


[image]


In the +y direction, the resonant frequency of the sample wire is 60.60 Hz ± 1 Hz;


[image]


So there's definitely a significant difference (~15 percent!) between the cardinal directions. Good enough proof for me.


EDIT N+1:


Actually, since my light detector above produces two pulses per sine wave, the actual vibration frequency is f/2; so the actual frequencies are 25.5 Hz and 30 Hz, which agrees roughly with @Floris's data.



Answer



Your wire is not quite round (almost no wire is), and consequently it has a different vibration frequency along its two principal axes¹.



You are exciting a mixture of the two modes of oscillation by displacing the wire along an axis that is not aligned with either of the principal axes. The subsequent motion, when analyzed along the axis of initial excitation, is exactly what you are showing.


The first signal you show - which seems to "die" and then come back to life - is exactly what you expect to see when you have two oscillations of slightly different frequency superposed; in fact, from the time to the first minimum we can estimate the approximate difference in frequency: it takes 19 oscillations to reach a minimum, and since the two waves started out in phase, they will be in phase again after about 38 oscillations, which corresponds to a 2.5% difference in frequency.
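
For reference, a two-line version of that estimate (my own sketch, using a round number close to the measured oscillation frequency): if the two modes start in phase and first cancel after about 19 cycles, they are back in phase after about 38 cycles, so the fractional frequency difference is roughly 1/38.

f = 27.0                      # approximate oscillation frequency, Hz (assumed)
cycles_to_minimum = 19        # read off the envelope of the decaying signal

delta_f = f / (2 * cycles_to_minimum)   # beat period = twice the time to the first minimum
print(delta_f, delta_f / f)             # ~0.7 Hz, i.e. ~2.6 percent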


Update


Here is the output of my little simulation. It took me a bit of time to tweak things, but with frequencies of 27 Hz and 27.7 Hz respectively, after adjusting the angle of excitation a little, and after adding significant damping, I was able to generate the following plots:


[image]


which looks a lot like the output of your tracker.


Your wire is describing a Lissajous figure. Very cool experiment - well done capturing so much detail! Here is an animation that I made, using a frequency difference of 0.5 Hz and a small amount of damping, which shows how the rotation changes from clockwise to counterclockwise: [image]


For your reference, here is the Python code I used to generate the first pair of curves. Not the prettiest code... I scale things twice. You can probably figure out how to reduce the number of variables needed to generate the same curve - in the end it's a linear superposition of two oscillations, observed at a certain angle to their principal axes.


import numpy as np
import matplotlib.pyplot as plt

from math import pi, sin, cos

f1 = 27.7
f2 = 27
theta = 25*pi/180.

# different amplitudes of excitation
A1 = 2.0
A2 = 1.0


t = np.linspace(0,1,400)

#damping factor
k = 1.6

# raw oscillation along principal axes:
a1 = A1*np.cos(2*pi*f1*t)*np.exp(-k*t)
a2 = A2*np.cos(2*pi*f2*t)*np.exp(-k*t)

# rotate the axes of detection

y1 = cos(theta)*a1 - sin(theta)*a2
y2 = sin(theta)*a1 + cos(theta)*a2

plt.figure()
plt.subplot(2,1,1)
plt.plot(t,-20*y2) # needed additional scale factor
plt.xlabel('t')
plt.ylabel('x')

plt.subplot(2,1,2)

plt.plot(t,-50*y1) # and a second scale factor
plt.xlabel('t')
plt.ylabel('y')
plt.show()



1. The frequency of a rigid beam is proportional to $\sqrt{\frac{EI}{A\rho}}$, where $E$ is Young's modulus, $I$ is the second moment of area, $A$ is the cross-sectional area and $\rho$ is the density (see section 4.2 of "The vibration of continuous structures"). For an elliptical cross section with semi-axes $a$ and $b$, the second moment of area is proportional to $a^3 b$ (for vibration along axis $a$). The ratio of resonant frequencies along the two directions will therefore be $\sqrt{\frac{a^3b}{ab^3}} = \frac{a}{b}$. From this it follows that a 30 gauge wire (0.254 mm) with a 2.5% difference in resonant frequency needs its two perpendicular diameters to differ by just 6 µm to give the effect you observed. Given the cost of a thickness gauge with 1 µm resolution, this is really a very (cost-)effective way to determine whether a wire is truly round.
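
A one-liner reproducing that ~6 µm figure (my own check of the footnote's arithmetic, under the same elliptical-cross-section assumption): with frequency ratio $a/b = 1.025$ and a mean diameter of 0.254 mm, the two diameters differ by roughly 2.5% of the mean.

d_mean = 0.254e-3        # 30 gauge wire diameter, m
freq_ratio = 1.025       # 2.5 percent difference in resonant frequency equals a/b

# diameters 2a and 2b with a/b = freq_ratio and mean diameter d_mean
diff = d_mean * (freq_ratio - 1) / ((freq_ratio + 1) / 2)
print(f"required difference in diameter: {diff*1e6:.1f} micrometres")   # ~6 micrometres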


visible light - What is the physical nature of electromagnetic waves?


I've been trying to work out what the physical nature of electromagnetic waves is. Since electromagnetic waves have wavelengths given in distance units, rather than in units of energy or some other more abstract, non-physical unit, I reasoned that electromagnetic waves must have a physical description.


I queried my roommate (who is studying computer hardware engineering at university), and though he could provide some relevant equations relating properties of electromagnetic waves, it seemed as though their physical description had been left as a black box by his professors.


For elucidation on what I'm asking:



a) What do a peak or a trough represent in physical space?


b) Does an electromagnetic wave traversing physical space over time fill an area, or rather in the case of non-polarized light a volume? If not, why do we use units of distance to measure amplitude and wavelength?


c) If it takes up a volume, does this volume shrink during a trough and expand during a peak?


d) If it takes up a volume, is the energy of the photon(s) becoming maximally diffuse and then minimally diffuse cyclically?


Just trying to fathom it for myself, my end result is a drawing of a single wave with a grid drawn over it, where I assume that each box is a Planck area, and we assign a 1 to boxes which are inside the area of the wave and a 0 to boxes outside. Using this technique, I concluded that for each moment in time I could assign a percentage value for the density (or alternatively the diffusivity; I don't know the word you would use for the degree to which a system is diffuse - on a side note, if you do, please let me know!) of the photon's energy (perhaps in the form of physical oscillation over the distance), based on the ratio of 1s to 0s in that column. In this drawing I'm assuming polarized light and only looking at one of the electric and magnetic fields, though I assume I could simply double my density percentage for each column to include both. This interpretation seems ill-conceived, though; how does the probability wave of the photon(s) look in comparison to the wave I've drawn?


I'm very confused and seeking answers which might help shine some light on the matter. I ultimately fear that the issue with my attempts to discern a physical picture of an electromagnetic wave lies in the fact that the answer is truly unintuitive and unsatisfying. By the way, I'm a layman with an interest in physics, not a student of it, so please try to be as idiot-friendly as possible with your answers.


Thanks!



Answer



Fields


First you need to understand what a field is. There is a very good answer by dmckee on what a field really is, which you can (and should) read, but I'll try my own version. Mathematically, a field is something that has a value at every point of space and time. A typical example is temperature. The air in your room has a different temperature at every point, and this temperature may change with time, so to each point in space and time we associate a number $T$. We might write $T(x, y, z, t)$, indicating that the temperature is a function of $x, y, z$ (space) and $t$ (time).



Temperature is a scalar field because at each point it is a scalar (i.e., a number). But we can have different kinds of fields. For example, the air in your room might be moving around, and so at each point it will have some velocity $\mathbf{v}(x,y,z,t)$. This velocity is a vector field, because at each point it has a magnitude and a direction (if you don't know what a vector is, picture it as a small arrow; the direction tells you which way the air is moving at that particular point, and the length of the arrow tells you how fast it is moving).


Waves


Air can carry waves, which we call sound. Sound is nothing more than a bunch of air molecules oscillating together in such a way that they carry energy from one place to another, in the same way that we see waves in water. With our fancy fields we can describe a wave by saying that at any given point the velocity is oscillating back and forth, and the phase of this oscillation changes as we move from place to place.


Temperature and velocity are fields that, in a sense, don't physically exist by themselves: they describe some property of a fluid, but it is the fluid that has physical reality, not its properties. But there are fields that are not a property of anything else, and the electromagnetic field is the most important among them.


Electromagnetic field


The electromagnetic field is described by two vector fields $\mathbf{E}$ and $\mathbf{B}$, called the electric and magnetic field respectively. For the purposes of light we can forget about $\mathbf{B}$ and just talk about the electric field. Just like the velocity of a fluid, this field can be represented by an arrow at every point in spacetime. Its physical interpretation is that if you place a charge somewhere, the charge feels a force that points in the direction of $\mathbf{E}$ and is proportional to its magnitude. (There are also magnetic effects, but we're ignoring those.) This is simply a more sophisticated view of the idea that like charges repel and opposite charges attract; instead of thinking of a force between the charges, we say that one charge creates an electric field around it, which is in turn felt by the other charge.


An electromagnetic wave is simply an oscillation of the electric and magnetic fields. At each point, the field's magnitude is increasing and decreasing with time. Wikipedia has some nice gifs showing this process in time and space. The wavelength is a physical distance: it's the distance between two maxima or two minima of the field. The amplitude, however, is not a distance: it measures how strong the field is, and so it is measured in units of field strength (newtons per coulomb, or equivalently volts per meter, for the electric field in SI units).
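
As a concrete illustration of that distinction (my own sketch, with an assumed wavelength): sampling a plane wave $E(x, t) = E_0\cos(kx - \omega t)$ along $x$ at a fixed time shows that the distance between successive maxima is the wavelength, while the amplitude $E_0$ is a field strength (volts per meter), not a length.

import numpy as np

E0 = 1.0                 # amplitude in V/m (a field strength, not a length)
lam = 500e-9             # assumed wavelength, m
k = 2 * np.pi / lam
omega = 3.0e8 * k        # omega = c k for light in vacuum

x = np.linspace(0, 5 * lam, 100001)
E = E0 * np.cos(k * x - omega * 0.0)     # snapshot of the field at t = 0

# indices where E is locally maximal
peaks = np.where((E[1:-1] > E[:-2]) & (E[1:-1] > E[2:]))[0] + 1
print(np.diff(x[peaks]))                 # spacing between maxima ~ 500 nm = lam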


You can see in the usual pictures that an EM wave is a transverse wave; that is, the direction of the fields is perpendicular to the direction of propagation of the light. This is in contrast to a sound wave, which is longitudinal: the molecules oscillate back and forth along the same line that the wave travels.


So, let's answer your questions:


a) The peaks and troughs are the points where the magnitude of the field is maximum in one direction or the other. As such, it doesn't make much sense to distinguish between peaks and troughs, because if you look from the other side they switch places.



b,c,d) A wave doesn't really take up space. There might be fields over a region of space, but the arrows you see in the animations don't have a physical length. They represent the magnitude of the fields, but they don't occupy physical space. Remember that there are two arrows (because of $\mathbf{E}$ and $\mathbf{B}$) at every point in space. As I've said before and has been said in the comments, wavelengths are lengths because they are the distance between two maxima, but amplitudes are not lengths.


The mental picture you describe in your question is, if you forgive me, a mess. You're mixing this description of EM waves with the quantum mechanical point of view, which is almost sure to lead to errors. QM usually deals in terms of particles, so the basic idea is that now light is thought of as a bunch of particles (photons), with a certain probability at each point in space to find a photon. The thing with quantum mechanics is that it's extremely weird and even the very best physicists have trouble forming an intuitive mental image of how it works. So please just forget about photons until you really understand the classical waves I've described in this post.


Thursday, June 26, 2014

What are the benefits of Gravitational wave studies?


Gravitational waves were first predicted by Albert Einstein and later indirectly confirmed in a number of experiments. Recently, leading physicists announced that they may have 'solid' proof that these waves exist. I wonder why we need to spend so much money on the study of this phenomenon. I can see at least two reasons: 1. Confirm one of the key predictions of Albert Einstein's general theory of relativity. 2. Detect cataclysmic events in the cosmos (such as the collision of two black holes, etc.)


Anything else?


Are there any benefits of this study in the fields of renewable energy (like newer, more efficient energy sources) or advanced propulsion technologies (building of faster space ships)?



Answer



Scientific studies have no absolute benefit or loss as definable in human reasoning.



I think you want to ask: what is the "application" of the theory of gravitational waves?


It's a very opinion-based question. Scientific research is neutral, and its applications may be classified as a benefit (nuclear energy) or a loss (nuclear armageddon) by human opinion.


That said, the applications may not be visible immediately, but are discovered over time.


In the late 1800s to early 1900s, quantum theory and atomic models seemed quite irrelevant to humanity, but today, 100 years after their formulation, we have nuclear energy and GPS.


statistical mechanics - What is the information geometry of 1D Ising model for a complex magnetic field?


Consider the one-dimensional Ising model with constant magnetic field and node-dependent interaction on a finite lattice, given by


$$H(\sigma) = -\sum_{i = 1}^N J_i\sigma_i\sigma_{i + 1} - h\sum_{i = 1}^N\sigma_i$$


where $\sigma = \{\sigma_i\}_{i = 1,\dots, N}\in\Omega := \{\pm 1\}^N$, $\{J_i\}_{i = 1,\dots, N}$ are nearest neighbor interaction strength couplings, and $h \in \mathbb{R}$ is the magnetic field. Let's consider the ferromagnetic case, that is, $J_i \geq 0$ for $i = 1, \dots, N$, and for the sake of simplicity (though this doesn't matter in the thermodynamic limit), take periodic boundary conditions. Neither in the finite volume, nor in the thermodynamic limit does this model exhibit critical behavior for finite temperatures.


On the other hand, as soon as we allow $h$ to be complex (and fix the temperature), the partition function has zeros as a function of $h$, even in finite volume $N$. In the thermodynamic limit these zeros accumulate on some set on the unit circle in the complex fugacity plane (the Lee-Yang circle theorem).
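
A brute-force check of the circle theorem for a small lattice (my own sketch, specializing to uniform couplings $J_i = J$, which is an assumption and not part of the question): writing the partition function as a polynomial in the fugacity-like variable $u = e^{-2\beta h}$ and finding its roots numerically, all zeros land on the unit circle $|u| = 1$ for ferromagnetic $J$.

import numpy as np
from itertools import product

# Assumed uniform ferromagnetic coupling and inverse temperature
N, J, beta = 10, 1.0, 0.7

# Z = exp(beta*h*N) * P(u), where P(u) = sum_sigma exp(beta*J*sum_i s_i s_{i+1}) * u^(# down spins)
coeffs = np.zeros(N + 1)
for sigma in product([-1, 1], repeat=N):
    s = np.array(sigma)
    bond_energy = np.sum(s * np.roll(s, -1))          # periodic boundary conditions
    n_down = np.sum(s == -1)
    coeffs[n_down] += np.exp(beta * J * bond_energy)

# numpy.roots expects coefficients ordered from the highest power down
zeros = np.roots(coeffs[::-1])
print(np.abs(zeros))       # all moduli are 1 (up to rounding): the Lee-Yang circle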


Now the question: let's consider the information geometry of the Ising model, as described above, when $h$ is real. In this case the induced metric is well defined and the curvature does not develop singularities (obviously, since there are no phase transitions). Now, what about the information geometry of the Ising model when $h$ is complex? This is a bit puzzling to me, since the partition function then has zeros in the complex plane, so that the logarithm of the partition function is not defined everywhere on the complex plane, and the definition of the metric doesn't extend directly to this case (the metric involves the log of the partition function), let alone the curvature.


Is anyone aware of any literature in this direction? I thought it would be a good idea to ask before I try to develop suitable methods from scratch.



Answer




Should have read the cross-list first - you are already aware of the reference below :)




This might be of some use:


B P Dolan, D A Johnston and R Kenna The information geometry of the one-dimensional Potts model J. Phys. A: Math. Gen. 35 (2002) 9025–9035 [arXiv:cond-mat/0207180]


An information geometry metric is calculated for real $h$ for the 1D Potts/Ising models and then naively continued to complex $h$ "to see what happens"; the resulting curvature diverges along the Lee-Yang line.
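
For real $h$, a metric of the kind used in that paper can be written down directly from the 1D transfer-matrix free energy. Below is a minimal sympy sketch (my own, with uniform coupling $J$, i.e. the homogeneous special case of the model in the question, and using one common convention $g_{ij} = \partial_i\partial_j \ln\lambda_+$ with coordinates $(K, H) = (\beta J, \beta h)$): the components stay finite for real $H$, consistent with the absence of a transition there.

import sympy as sp

K, H = sp.symbols('K H', real=True)   # K = beta*J, H = beta*h

# Largest transfer-matrix eigenvalue of the homogeneous 1D Ising chain
lam = sp.exp(K) * sp.cosh(H) + sp.sqrt(sp.exp(2*K) * sp.sinh(H)**2 + sp.exp(-2*K))
f = sp.log(lam)           # log of the per-site partition function

# Information metric: matrix of second derivatives of f with respect to (K, H)
g = sp.Matrix([[sp.diff(f, a, b) for b in (K, H)] for a in (K, H)])

# Evaluate at a sample real point: every component is finite
print(g.subs({K: 0.5, H: 0.3}).evalf())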


energy - What happened to this folded paper in a hydraulic press?


The video below apparently shows a man folding an A3 size piece of paper 7 times, putting it under the hydraulic press each time. After 7 folds, on the last press, the paper sort of explodes. What's happening here?


https://www.youtube.com/watch?v=KuG_CeEZV6w




astrophysics - Causes of hexagonal shape of Saturn's jet stream



NASA has just shown a more detailed picture of the hexagonal vortex/storm on Saturn:



http://www.ibtimes.com/nasa-releases-images-saturns-hexagon-mega-storm-may-have-been-swirling-centuries-1496218
https://en.wikipedia.org/wiki/Saturn_hexagon



Is it theoretically understood what the cause is behind this eye-catching, nontrivial, regular yet non-circular shape? If so, what is the cause? I expect some explanation in terms of "nonlinear equations" of "mathematical physics" and "solitons".


P.S. (added a day after this question and the first answer were posted): On my blog, where I posted the same question, people came up with some articles and phrases like "Rossby waves" and "resonance of the latitude-dependent Coriolis frequency".




Wednesday, June 25, 2014

If there was a black hole on earth, what would it look like?


If an answer does exist, I'd love to hear it. I'm trying to incorporate a doomed earth story in something I'm writing, and the end of the world I'm going for is a black hole.


Let's say the black hole was created in the Philippines (I chose it because it's on the equator). Here's a map for reference.


If I was standing 90 degrees away from that, in Kenya, or 180 degrees away, in Mexico, what would I see in those two separate instances? What if I was standing in Japan above it, or in Australia below it?


Let's also say I was riding a boat towards the source of the black hole. Unlike most imagined scenarios where I am orbiting the black hole and slowly falling into it, this time I'm no longer orbiting it - I'm getting sucked into it instead. I'm imagining that the black hole will "half" be inside Earth and "half" will be in the atmosphere. But as it pulls me in, what will it look like to me (if I was massless, so no tidal forces)? It's probably not going to be a black hemisphere sticking out of the ground. Light will be lensed around it in some way so as to appear like a hole in the air.


Also, if the black hole were very small, it probably wouldn't have a strong gravitational pull, and assuming it doesn't disappear, you'd be able to look at it (or at least, at its effect on the light in its surroundings). Of course, as you approach it, it would still have an event horizon, so spacetime would still be warped if you're near enough, and the surroundings would still experience tidal forces.


As the black hole gains mass, it will get larger, and so its event horizon will increase. If I was standing in Kenya and the event horizon "washes over me," (as in a water wave) what will I see?



I've so far imagined that, at least when seen from outer space, there will be a hole on the ground and light will be lensed around it. When you are on the ground, though, it becomes a bit harder to imagine.


I do hope someone can lend me a hand!


Edit: Someone had previously answered this question, but I'm not sure why it's been deleted. It was pointed out to me, though, that a black hole cannot stay in place. So an additional question for me is, does that mean it is impossible to be standing still, and the event horizon "washes over you"?


Edit2: Here's a guiding thought experiment that could answer the question. Imagine that you are in outer space directly above the black hole, looking at the black hole that is on the earth's surface. Can you imagine what it would look like? It would simply be a hole with the surrounding light gravitationally lensed around it (possibly lensing the entire earth if it's big enough - but let's say it's not).


Now imagine that you are on the direct opposite of where you were above - you're right behind the black hole this time, looking directly at the black hole but unable to see it because the earth is blocking your view.


Now try to imagine, from the second image, rotate the earth into the first image. I find it a bit hard to do. Just as the black hole is appearing as you rotate the earth, how would the light be lensed?



Answer



An interesting and horrifying possibility for your book could be that of a tiny black hole with negligible mass relative to the mass of the Earth. It would silently sink into the ground, completely unnoticed. It would undergo damped oscillations around the center of the Earth until eventually settling in the core. There it would stay unnoticed, slowly growing like a parasite. The Earth's radius would start to diminish, and the crust would have to adapt by means of earthquakes - very weak at first, but of increasing frequency and strength. Eventually, chains of volcanoes would appear along giant fault lines, heating and poisoning the atmosphere, bringing total obliteration. After that, the Earth would continue shrinking until all that was left would be a tiny black hole with little more than one Earth mass in the place where our planet used to be. And the Moon would remain as a horrified witness of the catastrophe.


Nothing on Earth would be able to stop the process and save us. Nothing except Chuck Norris.





Note: Classical GR black holes are fully determined by just three values: mass, electric charge and angular momentum (the no-hair theorem). If the charge of the black hole in your novel is large enough, then the hero might be able to confine it using strong magnetic fields.


quasiparticles - Phonon carries zero spin. Why?


Photons and phonons both have polarization, yet we attribute spin 1 to photons but spin 0 to phonons. Why?




statistical mechanics - CFT and temperature


I have tried to think about this for some time but could not really get anywhere. Sorry for the sloppy question, and thanks for any pointers.


My question is about CFTs at finite temperature and non-conformal theories. I have sometimes read that CFTs are not conformal at finite temperature, but I could not make clear to myself what happens precisely. What theory does a CFT at finite temperature correspond to? To begin with, the "temperature state" is not unique but is rather an equilibrium/Gibbs distribution over several quantum states - a canonical ensemble. So correlation functions are integrals over two measures (vs. vacuum expectation values in the CFT): an average over the correlation functions between each pair of states in the ensemble. Now, many theories flow to a given CFT, so which one should we pick at finite temperature? For instance, what is the finite-temperature version of a Gaussian/free massless field, say in 2D, i.e. the basic free boson c=1 CFT? I could imagine this is a massive boson with $m=1/T$, but am not sure. In general one would consider the reduced temperature, centered at the critical temperature. So is the finite-temperature Ising model in the "thermodynamic limit" a sort of massive Wilson-Fisher fixed point?


There are often mentions of finite-temperature CFT in the AdS/CFT correspondence. The dual of a Schwarzschild black hole in AdS should be a finite-temperature CFT. Is this correspondence conjectured to be exact at all couplings? QCD at high temperature is free; what does that mean for the dual strongly coupled string theory on AdS with no black hole? (I would guess this means the closed strings propagate freely in some appropriate


Well sorry for the fuzzy question. But I'll be very grateful if you try to empathize.




Tuesday, June 24, 2014

optics - what is the purpose of condenser lens in a slide projector?


What if we don't use any condenser lens? Can't we use a single convex lens as a condenser? What difference would it make? Kindly explain in detail.




Acceleration of electrons from Cooper pairs and EM radiation


Under the question What makes Cooper pairs of electrons so fit for an unhindered current through a superconducting wire? there was a nice comment from CountTo10:



Assume any photons are effectively massive inside the superconductor (through symmetry breaking); also assume that as one electron gets held back in the metal lattice, the other one of the pair gets pulled forward. They do not possess enough energy to produce retarding photons, so they meet no resistance.



Not getting a response to my question Why don't superconducting coils radiate?, I'm asking it in another way:


It is common knowledge that in a superconducting coil an electric current is flowing. Since the electrons involved are moving in circles, they are subject to an acceleration. Do these electrons emit EM radiation?





newtonian mechanics - Physical reason of difference between normal force in a banked curve and an inclined plane.




When determining the centripetal force on an object on a banked curve, it is stated that the banking angle for a given speed and radius is given by:


$$\tan\theta = \frac{v^2}{rg}$$


It is found as follows: The normal force on the object is resolved into components. The x-component (the one providing the centripetal force) is:


$$N\sin\theta = \frac{mv^2}{r}$$


Then the y-component is set equal to $mg$:


$$N\cos\theta = mg \quad\Rightarrow\quad N = \frac{mg}{\cos\theta}$$


Dividing $N\sin\theta$ by $N\cos\theta$, we get:


$$\tan\theta = \frac{v^2}{rg}$$


I'm fine with that.


Here is where I'm confused. When resolving the forces on an object resting on an inclined plane, the component of gravity down and parallel to the plane is:



$$mg\sin\theta.$$


The component of gravity into (perpendicular to) the plane is:


$$mg\cos\theta.$$


The normal force is equal to this component into the plane by Newton's 3rd Law, so,


$$N = mg\cos\theta.$$


Why in the first scenario (banked curve) is


$$N = \frac{mg}{\cos\theta},$$


HOWEVER, in the second (inclined plane)


$$N = mg\cos\theta\ ?$$


How can this be? Why are there two different values for $N$?



Is the normal force in the first scenario (banked curve) greater than $mg\cos\theta$ because:


a) the normal force also is responsible for the centripetal acceleration, so it needs to be greater?


Or,


b) the car is not sliding down the curve, so the normal force is greater because of the translational equilibrium requirement? The textbook that I have (Resnick, Fundamentals of Physics) seems to suggest (b), since they use the equation for equilibrium in the Y direction. But if that is the case, then what is PHYSICALLY CREATING this "extra" normal force as compared to the second scenario (inclined plane)?
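Just to put numbers on the two formulas above (an illustrative sketch only; the mass and angle are made-up values):

```python
import math

m = 1000.0                # mass of the car in kg (made-up value)
g = 9.81                  # gravitational acceleration in m/s^2
theta = math.radians(20)  # bank / incline angle

# Banked curve (no friction): vertical equilibrium N*cos(theta) = m*g gives
N_banked = m * g / math.cos(theta)

# Object resting on an inclined plane: N balances only the perpendicular
# component of gravity, N = m*g*cos(theta)
N_incline = m * g * math.cos(theta)

print(f"N on the banked curve  : {N_banked:.0f} N")   # larger than m*g
print(f"N on the inclined plane: {N_incline:.0f} N")  # smaller than m*g
```

For any nonzero angle the banked-curve value exceeds $mg$, while the incline value is below $mg$; the question above is asking what physically accounts for that difference.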




Monday, June 23, 2014

special relativity - Is the entropy a Lorentz invariant?


So this is the pure question that came into my mind right now.


Is the entropy a Lorentz invariant?


How does the entropy of a gas behave when, for example, it is accelerated to $v = \frac{c}{2}$ or more?




quantum field theory - Could this model have soliton solutions?



We consider a theory described by the Lagrangian,


$$\mathcal{L}=i\bar{\Psi}\gamma^\mu\partial_\mu\Psi-m\bar{\Psi}\Psi+\frac{1}{2}g(\bar{\Psi}\Psi)^2$$


The corresponding field equations are, $$(i\gamma^\mu\partial_\mu-m+g\bar{\Psi}\Psi)\Psi=0$$


Could this model have soliton solutions? Without the last term (i.e. if $g=0$), it is just a free Dirac field, but the last term has to be included. This is similar to the Thirring model. I have looked for this model in books and papers but haven't found it. If you know about it, could you give me a reference?




electromagnetism - Dispersion of light in metals and the plasma frequency


I've been reading about the dielectric function and plasma oscillations recently and I encountered the following dispersion relation for EM waves in metals or in plasmas (is it correct to treat those the same?) $$ \omega^2 = c^2k^2 + \omega_p^2 $$ where $\omega_p^2 = 4\pi n_e e^2/m$ is the squared plasma frequency with $n_e$ the free electron density. Now I am pretty certain that both books in which I found a derivation of this result (An Introduction to Solid State Physics - Charles Kittel; Advanced Solid State Physics - Philip Phillips) make a mistake. Is there a way to justify the errors I think they make? The equation they obtain seems to be stated regularly, but I find something different.


The two derivations are different so I will treat them separately. I added my doubts in bold font. I will work in CGS units.


Phillips (p122)


We start from the Maxwell equation $$ \nabla \times \vec{B} = \frac{4\pi \vec{j}}{c} +\frac{1}{c}\frac{\partial \vec{E}}{\partial t} . $$ We take the time derivative of this expression: $$ \frac{\partial}{c\partial t} \nabla \times \vec{B} = \frac{\partial}{\partial t}\frac{4\pi \vec{j}}{c^2} +\frac{1}{c^2}\frac{\partial^2 \vec{E}}{\partial t^2}. $$ Using the Maxwell–Faraday equation $\frac{\partial\vec{B}}{c\partial t} = -\nabla \times \vec{E}$ and the curl of a curl expression, the left hand side of this equation can be rewritten as $$ -\nabla\times (\nabla\times\vec{E} ) = -\nabla(\nabla\cdot\vec{E}) + \nabla^2 \vec{E}. $$ Using $\nabla \cdot \vec{E} =0$ (Is this assumption correct? Since he appears to be using the free electron model I doubt this can be used.) and $\frac{\partial \vec{j}}{\partial t} = e^2 n_e \vec{E}/m$ (This relation holds only for a free electron gas (via the Lorentz force). What makes it more legit to use this model rather than the Drude model (where the electrons experience collisions which gives rise to Ohm's law)? Using Ohm's law instead does modify the obtained result quite severely.), we find $$ (\frac{\partial^2}{\partial t^2} + c^2\nabla^2 - \omega_p^2)\vec{E} = 0. $$ From this we get $$ \omega^2 = c^2k^2 + \omega_p^2. $$


With my correction the wave equation becomes $(\frac{\partial^2}{\partial t^2} + c^2\nabla^2 - \omega_p^2\frac{\partial}{\partial t})\vec{E} = 0$, and hence $\omega^2 = c^2k^2 - i\omega \omega_p^2$. This result coincides with the dispersion relation for EM waves in a conductor as found in Griffiths' book on electrodynamics, section 9.4.


Kittel (p397)



Here the starting point is the wave equation in a nonmagnetic isotropic medium $$ \frac{\partial^2 \vec{D}}{\partial t^2} = c^2\nabla^2{\vec{E}} $$ If I am not mistaken, this is only correct for a perfect insulator where there can be no free currents, which I think is not a correct assumption for a plasma, a metal or an electron gas. This is then used for a linear material, so that $\vec{D} = \epsilon\vec{E}$, and then $$ \epsilon(\omega) = 1-\frac{\omega_p^2}{\omega^2} $$ is used. How could one justify the steps which I think are wrong?
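As an aside (not part of either book's derivation), here is a minimal numerical sketch of the collisionless relation $\omega^2 = c^2k^2 + \omega_p^2$ quoted above; the electron density used is just an assumed, roughly copper-like value for illustration:

```python
import numpy as np

# CGS units, matching the convention omega_p^2 = 4*pi*n_e*e^2/m used above
c = 2.998e10     # speed of light in cm/s
e = 4.803e-10    # electron charge in esu
m = 9.109e-28    # electron mass in g
n_e = 8.5e22     # assumed free-electron density in cm^-3 (roughly copper-like)

omega_p = np.sqrt(4 * np.pi * n_e * e**2 / m)
print(f"plasma frequency omega_p ~ {omega_p:.2e} rad/s")

# Collisionless dispersion relation: omega^2 = c^2 k^2 + omega_p^2
k = np.linspace(0.0, 3 * omega_p / c, 5)
omega = np.sqrt(c**2 * k**2 + omega_p**2)
for ki, wi in zip(k, omega):
    print(f"k = {ki:.3e} cm^-1  ->  omega = {wi:.3e} rad/s")
# Below omega_p there is no real k: the wave is evanescent, which is the usual
# statement that a metal reflects light below its plasma frequency.
```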




quantum field theory - Apparent failure of SUSY nonrenormalization theorem


I am having trouble reconciling two pieces of information.



Consider supersymmetric QED, i.e. a supersymmetric U(1) gauge theory with two chiral superfields of opposite charges, $h$ and $\hat{h}$. The Kähler potential $K$ is canonical, $ K = h^\dagger e^{2\,g\,q\,V} h + {\hat{h}}^\dagger e^{-2\,g\,q\,V} \hat{h} , $ while the superpotential $W$ is the simplest possible: $$ W = m\, h \, \hat{h}. $$



Renormalized mass and fields are related to bare/original ones by: $$m_0 = Z_m m_r, \qquad h_0 = {Z_h}^{1/2}\, h_r,\qquad \hat{h}_0 = {Z_h}^{1/2}\, \hat{h}_r.$$


The SUSY non-renormalization theorems say that $W$ is not perturbatively renormalized, implying that $$ Z_m Z_h = 1 \quad \Rightarrow \quad \delta_m = - \delta_h, $$ at the one-loop level (as usual $Z_m = 1 + \delta_m$, $Z_h = 1 + \delta_h$). The counterterm Feynman rule for the scalar propagator of $h$ (shown in the counterterm diagram) is then proportional to $(p^2+m^2)$.




If one explicitly computes the divergent part of the $h$ self energy at one loop in dimensional regularization, one finds* that: $$ i \Sigma_h (p^2) \bigg|_\textrm{div} = i \frac{g^2 q^2 }{(4\pi)^2} \frac{2}{\epsilon} \big(-4 m^2\big). $$ *In the literature one can find this result e.g. in arXiv:hep-ph/9907393, section 4.3, equation (150), by playing with the integrals. This result is obtained in the Feynman gauge ($\xi = 1$).


i.e. the divergent part of the self-energy at one loop is proportional to just $m^2$.



QUESTION: I was expecting a divergent part proportional to $(p^2 + m^2)$, which is what can be cancelled by the aforementioned counterterm. Is this reasoning correct? What might have gone wrong?





Diagrams


To make this question more accessible, here are the diagrams (usual Feynman diagrams, not supergraphs) which sum to give $i \Sigma_h (p^2) \big|_\textrm{div}$:


Diagram divergent parts


I am assuming there is something naïve in my approach. Perhaps some subtlety with the gauge diagrams, or Wess-Zumino gauge (I don't know any more at this point).


Dimensional regularization


At the outset there doesn't seem to be a problem with using dimensional regularization. The SUSY violation introduced by it should be proportional to $\epsilon$ (Martin's SUSY Primer, p. 61), thus only affecting finite terms.


Symmetries and missing terms


An R-symmetry under which both $h$ and $\hat h$ have charge $+1$ forbids adding gauge-invariant terms like $h \hat h$ to the Kähler potential.



The discrete symmetry under which $h \leftrightarrow \hat h$ and $V \rightarrow - V$ forbids adding a Fayet-Iliopoulos term.




electromagnetism - Is it possible to detect fake Tungsten aka Wolfram gold bars with a strong magnet?


Tungsten (aka wolfram) is paramagnetic, so it is weakly attracted to magnets.


A guy devised the following to test for Tungsten in gold bars:


http://www.youtube.com/watch?v=foELQ7T8_90


But he is using a paperclip and not real Tungsten.


Question: How strong a magnet do I need in order to achieve a significant attraction to tungsten? By significant I mean that it should exert a force equivalent to a few hundredths of a gram, so that it can be measured with a scale as in the YouTube video. Will a neodymium magnet do? If yes, what should the strength of the magnet be, in kg of pull force?





Saturday, June 21, 2014

spectroscopy - What is decay associated spectra?


What are decay associated spectra?


Suppose we measure the fluorescence intensity at different wavelengths and over time; we get:


$$I(\lambda,t) = \sum_i^n \alpha_i(\lambda) \exp\left(-\frac{t}{\tau_i}\right).$$



The assumption is that there are $n$ components (species) contributing to $I(\lambda,t)$. If we fit the right-hand side to the experimentally obtained $I(\lambda,t)$ and extract $\alpha_i$ and $\tau_i$, then people call the $\alpha_i$ the decay associated spectra.


Now, if we integrate over time we get the steady-state emission spectrum. The thing that I cannot understand is the decay associated spectrum. What does it mean? If it is the steady-state spectrum of species $i$, then why does it sometimes become negative? People say that when it becomes negative, it indicates energy transfer between species. Could someone please elaborate on this concept?



Answer



A bit of clarification: when a component of a decay associated spectrum (DAS) takes on negative values for certain values of $\lambda$, that is not meant to say that the component actually has a negative spectrum in physical reality. Rather, it is a sign that the modeling technique used, $$I(\lambda,t) = \sum_i^n \alpha_i(\lambda) \exp\left(-\frac{t}{\tau_i}\right),$$ is failing to correctly fit the decay process. In other words, negative values are a sign that the model is inadequate and is producing unphysical results.


A decay associated spectrum is defined as the spectrum of the components calculated when fitting the experimental decay spectrum $I_e(\lambda,t)$ to a model in which there are $N$ chemical components whose decay transfer matrix $\mathbf{K}$ is diagonal, i.e. $$\mathbf{K}=\text{diag}(k_1,k_2,...,k_N).$$ In this situation, you obviously have individual exponential decays for each of the $N$ components, with time constant $\tau_i=k_i^{-1}$.


However, very often real photophysical systems are more intertwined, and their decay transfer matrix $\mathbf{K}$ is more generally a real symmetric matrix. By the spectral theorem, this can be diagonalized and solved for the decays; when fitting experimental data to this model, the spectra of the components calculated are referred to as species associated spectra (SAS).


In short, if you fit to a DAS model and you get negative DAS values, you're doing it wrong, and the system is more complicated than multiexponential decays, so you need to do an SAS fit.
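To make the fitting step concrete, here is a minimal sketch with synthetic data (two components, made-up lifetimes and amplitudes) showing how the amplitudes $\alpha_i$ are extracted at a single wavelength; repeating it across wavelengths is what builds up the DAS:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic decay trace at one wavelength: two components with assumed
# lifetimes of 1 ns and 5 ns and amplitudes 0.7 and 0.3 (all values made up).
t = np.linspace(0.0, 20.0, 400)                    # time in ns
clean = 0.7 * np.exp(-t / 1.0) + 0.3 * np.exp(-t / 5.0)
rng = np.random.default_rng(0)
data = clean + 0.01 * rng.normal(size=t.size)      # add a little noise

def model(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

popt, _ = curve_fit(model, t, data, p0=[0.5, 0.5, 0.5, 3.0])
print("fitted (a1, tau1, a2, tau2):", np.round(popt, 3))
# Repeating the fit at every wavelength (usually with the tau_i shared
# globally) yields alpha_i(lambda), i.e. the decay associated spectra.
```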


Here is a decent article on the subject, which explains things in more detail.


Apologies in advance if anything that I wrote above is wrong, I just learned it all 10 minutes ago.


resource recommendations - Source for Interferometer experiments with single photons



The Phys.SE question Single Photon Single Slit Interferometry is about the results of such experiments. What I'm interested in is reading about interferometer experiments with photons fired one at a time. Who has carried out such experiments?



Answer



While it is not very comprehensive, John Townsend gives some basic references in the first chapter of his spiffy book Quantum Physics: A Fundamental Approach to Modern Physics. (The first chapter examines a simplified quantum model of light reminiscent of Feynman's pop-sci book on QED.)





  • Demonstrating that reliable single-photon counting is possible: P. Grangier, G. Roger, and A. Aspect, Europhys. Lett. 1, 173 (1986)




  • Basic Mach-Zehnder interferometry with single photons: A. Aspect, P. Grangier, and G. Roger, J. Optics (Paris) 20, 119 (1989)




  • Delayed choice in a Mach-Zehnder interferometer: V. Jacques, E. Wu, et al., Science 315, 988 (2007)





(The bold type is my interpretation of the reason the papers are referenced rather than titles, BTW.)


That is pretty sparse but it represents a starting point for a more comprehensive literature search.


planets - What would be the rate of acceleration from gravity in a hollow sphere?


Let's say the Earth is hollow and you are at the center of it (same mass, except all of it is on the outside, like a beach ball). If you move slightly to one side, you are now closer to that side and therefore feel a stronger gravitational pull from it; however, at the same time there is now more mass on the other side. At what rate would you fall, and in which direction?


Also, is there a scenario where, depending on the radius of the sphere, you would fall in the other direction or towards the empty center?



Answer



If the mass/charge is symmetrically distributed on your sphere, there is no force acting on you, anywhere within the sphere. This is because every force originating from some part of the sphere will be canceled by another part.


Like you said, if you move towards one side, the gravitational pull of that side will become stronger, but then there will also be "more" mass pulling you in the other direction. These two effects cancel each other exactly.
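Here is a small numerical check of that cancellation (a sketch with made-up numbers): sample many point masses uniformly on a spherical shell and sum their pulls on a test point well inside it; the net field comes out essentially zero compared with the field just outside the shell.

```python
import numpy as np

G = 6.674e-11                  # gravitational constant, SI units
R = 6.4e6                      # shell radius in m (roughly Earth-sized)
M = 6.0e24                     # total shell mass in kg
N = 200_000                    # number of point masses approximating the shell
rng = np.random.default_rng(1)

# Sample points uniformly on the sphere of radius R
v = rng.normal(size=(N, 3))
pts = R * v / np.linalg.norm(v, axis=1, keepdims=True)

test = np.array([0.5 * R, 0.0, 0.0])            # test point halfway out from the centre
d = pts - test
r = np.linalg.norm(d, axis=1, keepdims=True)
g_net = G * (M / N) * np.sum(d / r**3, axis=0)  # gravitational field at the test point

print("net field inside the shell:", g_net, "m/s^2")        # ~0 up to sampling noise
print("field just outside, for scale:", G * M / R**2, "m/s^2")
```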


thermodynamics - heat as a low quality energy



In a cyclic process there is no change in internal energy, so the work done by the system must equal the heat supplied to it. If all the heat is converted to work, how can heat be low-quality energy?




Topological insulators - Surface states have a phase?


When I look at the circle of the Dirac cone around the Dirac point of, let's say, $Bi_2Se_3$, then the electron winds around and it is true that it goes from momentum $-k$ and spin-up to $+k$ and spin-down. Now how can I use this fact to show that the Berry phase of $\pi$ arises?


When the electron completes one circle, it has gone from $k$ back to $k$ and from spin-up back to spin-up. So basically nothing changed...?


Picture that shows the Dirac cone in the Brillouin zone and the spin on the left hand side:



enter image description here




condensed matter - Spontaneous Symmetry Breaking Field - Physical Significance.


One way to define spontaneous symmetry breaking (SSB) is as follows (Morandi, Ercolessi and Napoli, 2001; my wording):



We can define SSB as occurring when: $$\lim_{h\rightarrow 0} \lim_{N\rightarrow \infty} \langle\mathcal{O}\rangle_{N,h}\ne 0\tag{1}$$ where $\mathcal{O}$ is some quantity which is not invariant under the symmetry.



I understand mathematically why this order of limits must be taken (rather than taking the $h$ limit first). But why is this the case physically, and why does condition (1) correspond to SSB?
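For what it is worth, a toy illustration of why the order of limits matters (a sketch only, not the model of the cited paper): take the crudest possible "magnet", a block of $N$ perfectly aligned spins in a field $h$, whose magnetisation at inverse temperature $\beta$ is $\langle m\rangle_{N,h}=\tanh(\beta N h)$. Then $\lim_{h\to 0^+}\lim_{N\to\infty}\langle m\rangle = 1$, while the reversed order gives $0$:

```python
import numpy as np

beta = 1.0

def m_toy(N, h):
    # toy magnetisation of a block of N fully-aligned spins in a field h
    return np.tanh(beta * N * h)

# N -> infinity first, then h -> 0+: the symmetry-broken value survives
for h in [1e-2, 1e-4, 1e-6]:
    print(f"h = {h:.0e}, N = 1e9 :  <m> = {m_toy(1e9, h):.6f}")   # stays at 1

# h = 0 first, then N -> infinity: the symmetric value
for N in [1e2, 1e5, 1e9]:
    print(f"h = 0,     N = {N:.0e}:  <m> = {m_toy(N, 0.0):.6f}")  # stays at 0
```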




Thursday, June 19, 2014

homework and exercises - How fast would the Earth need to spin for us to feel weightless?


It's a classic question with many answers all over the Internet, but none here so I figured I'd ask it:



How fast would the Earth need to spin for a person (or anything for that matter) to feel weightless while on the surface at the equator?


In this situation everything on the Earth's surface would essentially be in orbit around the Earth at the radius of the Earth's surface (let us assume the atmosphere was also spun up to this angular velocity so there would be no air drag slowing things down). Let us also say by "surface of the Earth" we mean mean sea level.


You can decide for yourself if/how to factor in the bulge of the Earth. You can assume that the Earth somehow is able to maintain its present shape while spinning up.


Any comments on whether an Earth spinning slightly faster than this speed will cause it to break apart or not will also be appreciated.



Answer



How fast would a sphere need to rotate for a dust speck at its equator to achieve balance between gravitational attraction and centrifugal force?


If you do the math (equating $G M m / R^2$ to $m \omega^2 R$ and using $M = \frac{4\pi}{3} \rho R^3$ as well as $\omega = 2\pi f$), it follows that the size of the sphere is entirely irrelevant and that only the density $\rho$ of the sphere enters into the equation for $f$, the number of revolutions per unit time: $$f^2 = \frac{1}{3\pi}G\rho$$ For $\rho = 5.5 \times 10^3$ kilogram per cubic meter (the density of planet earth) it follows that $f=0.197 \times 10^{-3}$ revolutions per second, corresponding to a revolution period of $5070$ seconds (1 hour and 24 minutes).
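A quick numerical check of that formula (a sketch; constants rounded to a few significant figures):

```python
import math

G = 6.674e-11      # gravitational constant, SI units
rho = 5.5e3        # mean density of the Earth, kg/m^3

f = math.sqrt(G * rho / (3 * math.pi))   # revolutions per second
period = 1.0 / f

print(f"f      = {f:.3e} rev/s")
print(f"period = {period:.0f} s  (~{period / 60:.0f} minutes)")
# ~1.97e-4 rev/s, i.e. a 'day' of roughly 84 minutes, as quoted above
```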


electromagnetism - What are the differences between the differential and integral forms of (e.g. Maxwell's) equations?


I would like to understand why the same physical law can be written in both a differential and an integral form, for example the famous equations of James Clerk Maxwell:




How do I know when to apply each form? Excuse my ignorance, but this has always confused me.


Edit


For reference, my understanding of the concepts of integration and differentiation is roughly this:




Answer



The equations are entirely equivalent, as can be proven using Gauss' and Stokes' theorems.
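For example, applying the divergence theorem to Gauss's law (written here in SI units) turns the integral statement into the differential one; the other equations work the same way with Stokes' theorem:

$$\oint_{\partial V}\vec E\cdot d\vec A=\frac{Q_{\rm enc}}{\varepsilon_0}
\quad\Longleftrightarrow\quad
\int_V \nabla\cdot\vec E\;dV=\int_V\frac{\rho}{\varepsilon_0}\;dV .$$

Since this holds for an arbitrary volume $V$, the integrands must agree point by point, giving $\nabla\cdot\vec E=\rho/\varepsilon_0$.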


The integral forms are most useful when dealing with macroscopic problems with high degrees of symmetry (e.g. spherical or axial symmetry; or, following on from comments below, line/surface integrals where the field is either parallel or perpendicular to the line/surface element).


The differential forms are strictly local - they deal with charge and current densities and fields at a point in space and time. The differential forms are far easier to manipulate when dealing with electromagnetic waves; they make it far easier to show that Maxwell's equations can be written in a covariant form, compatible with special relativity; and far easier to put into a computer to do numerical electromagnetism calculations.


I would think that these three points generalise to any system of differential vs integral forms in physics.



experimental physics - Is there a way to decrease the rate of nuclear Beta decay?


In that question and its answers it was mentioned that you could trigger radioactive decay by bombarding atoms with gamma rays of the right energy level (there may be other solutions I do not know about, but of course if you bombard with neutrons you can trigger nuclear reactions)


I am mainly interested in beta decay. Is it possible to decrease the probability of the beta decay of some radioactive material by a physical treatment?


Is the rate fairly independent of temperature and external magnetic fields, for instance?



Answer



Yes. Have a look:




“The ‘Reifenschweiler effect’ is the observation that the beta decay of tritium (half-life 12.5 years) is delayed reversibly by about 25-30% when the isotope is absorbed in 15 nm titanium clusters in a temperature window between 160-275 C. Remarkably, at 360 C the original radioactivity reappears. The effect is absent in bulk metal. Discovered around 1960/1962 at Philips Research Eindhoven, The Netherlands. Reifenschweiler extensively discussed his observation with, among others, Casimir (the director of research at the time) and Kistemaker (ultracentrifuge expert), and although no satisfactory explanation was found, R. was allowed to publish it. At the time it was a unique example of how an electronic environment might affect nuclear phenomena.”



Here is a speculation to explain the effect.


If the first link stays dead, here is an archive of it.


As there has been very little on the web since 2011, and the main site is not responding, it was probably speculation and measurement problems; see here.


thermodynamics - In a thermal equilibrium, why is the energy of individual photons proportional to the temperature?


In the book of The First Three Minutes, by Weinberg, he argues that



(p.65-66)when radiation and matter were in thermal equilibrium, the universe must have been filled with black-body radiation with a temperature equal to that of the material contents of the universe.






(p.68)The photon picture allows us easily to understand the chief qualitative properties of black-body radiation. First, the principles of statistical mechanics tell us that the typical photon energy is proportional to the temperature, while Einstein's rule tells us that any photon's wavelength is inversely proportional to the photon energy. Hence, putting these two rules together, the typical wavelength of photons in black-body radiation is inversely proportional to the temperature.




However, I do not understand how the conclusion is arrived at that typical photons (i.e. photons emitted with maximum intensity at a given temperature) have an energy proportional to the temperature. Is there any explanation that shows that $E \propto T$ for a photon in thermal equilibrium?


Note that I have seen this question, but it does not contain an answer to my question.


Please also note that the book makes its arguments following the historical development of the subject and is mostly based on experimental facts, so please do not throw quantum-theoretical formulas at us.



Answer



If the radiation and matter are in complete thermodynamic equilibrium, then the radiation has a blackbody spectrum.


The frequency at which the flux density (in frequency space) is maximised is given by Wien's law: $$ f_{\rm max} = \frac{2.82}{h} k_{B}T$$


Thus a photon at the frequency at which the radiation spectrum peaks has an energy $h f_{\rm max} = 2.82 k_B T$.


You can do some similar analysis to come up with the average energy of a photon, and it comes to $2.70 k_B T$.
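As a numerical sketch of those two numbers (standard constants; the temperature is just an example value, roughly the solar photosphere):

```python
import scipy.constants as const

T = 5800.0                       # example temperature in K (roughly the solar photosphere)

E_peak = 2.82 * const.k * T      # photon energy at the peak of the spectrum (Wien)
E_mean = 2.70 * const.k * T      # mean photon energy of the blackbody spectrum

print(f"peak photon energy ~ {E_peak / const.e:.2f} eV")
print(f"mean photon energy ~ {E_mean / const.e:.2f} eV")
print(f"peak frequency     ~ {E_peak / const.h:.2e} Hz")
# Both energies are simply linear in T, which is the proportionality in question.
```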


If you are asking for an explanation that does not involve Wien's law, then you have to go back a step and find out why the blackbody spectrum has the form it does and why Wien's law results from that. This is well-trodden ground covered in any textbook on the subject.



string theory - Why does the universe exhibit three large-scale spatial dimensions?




Possible Duplicate:
Is 3+1 spacetime as privileged as is claimed?



Regardless of your favorite theory of how many dimensions the universe has in total, the universe seems to have a deep preference for displaying three fully interchangeable large-scale spatial dimensions (plus time) within any given frame of reference.


But why? That is, has anyone ever come up with a persuasive argument for why the number three is or is not the required number of large-scale spatial dimensions in a universe?


This question from way back in November 2010 appears related, but after reading through it I'm pretty sure it is only pointing out that the known laws of electrostatics have three spatial dimensions built into them, which is most certainly true. But universes with fewer or more than three large-scale spatial dimensions would presumably have different force rules reflecting their different geometries, so this kind of analysis only shows self-consistency within a universe.



(I should also warn responders in advance that while I respect the right of some theorists to suggest that it is three "because our universe evolved that way out of a fractal multiverse," I also respect my own position that such statements are equivalent to saying "We haven't the foggiest idea why." An answer along those lines would at the very least need some powerful anthropic principle support, e.g. a proof or near-proof that universes with less or more than three large-scale spatial dimensions would have extreme difficulty supporting life.)



Answer



A few quick references before you close the question:


There's a rather technical discussion of what's special about four dimensions on MathOverflow (way over my head!).


The article I was thinking of is actually on Wikipedia. This picture from the article:


Dimensions


succinctly explains why our space is 3+1 dimensional.


Wednesday, June 18, 2014

special relativity - Axis/vector notation in Minkowski diagrams



Having been learning special relativity and working with Minkowski diagrams for a while now, I am still trying to get my head around an oddity.


From my understanding, time can often be seen as the dependent variable in those diagrams, so why put it on the vertical axis? What confuses me even more is the apparent convention of denoting a point in spacetime as $(t,x)$, even though $x$ is on the horizontal and $t$ is on the vertical axis, so the usual notation would refer to that vector as $(x, t)$.


Are the reasons mostly historical, or why is it beneficial to use this notation and violate the usual conventions?



Answer



In special relativity, one deals instead with 4-vectors rather than the 3-vectors to which you're probably accustomed. In 4-vector notation, you have $x^n = [x^0, x^1, x^2, x^3]$, where the superscripts are not exponents, but rather indices of a rank 1 tensor (if you read about this, note that a rank 1 tensor with a raised index is a vector, whereas a similar tensor with a lowered index is a covector, or dual vector).
Expressed in normal Cartesian coordinates, the 4-vector is $\hat x= [ct, x, y, z]$.


In practice, the axes on a Minkowski diagram may be interchanged without any problems - the important property is the asymptote $ct = x$, which denotes the worldline of massless particles.
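As a small numerical aside (a sketch only; the event coordinates are made up), the invariant built from these components, $(ct)^2-x^2-y^2-z^2$, is unchanged under a boost, regardless of whether one writes the time component first or puts $x$ first on a diagram:

```python
import numpy as np

def boost_x(beta):
    """Lorentz boost along x acting on the 4-vector (ct, x, y, z)."""
    g = 1.0 / np.sqrt(1.0 - beta**2)
    return np.array([[g, -g * beta, 0.0, 0.0],
                     [-g * beta, g, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)
x = np.array([2.0, 1.0, 0.5, 0.0])       # an event (ct, x, y, z); numbers made up

xp = boost_x(0.6) @ x
print("interval before boost:", x @ eta @ x)
print("interval after boost :", xp @ eta @ xp)   # same value
```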


newtonian mechanics - What is the formula for calculating the tension of the rope section?



There is a circle of rope that rotates at a uniform angular velocity $\omega$. What is the formula for the tension in the rope's cross-section? Gravity is neglected; the density of the rope is $\rho$, the radius of the rope circle is $R$, and the cross-sectional radius of the rope is $r$.



enter image description here



Answer



Consider an infinitesimally small section of the rope subtending an angle $d\theta$. The following diagram illustrates this:


enter image description here


The tension has the same magnitude throughout the rope, and it acts perpendicular to the vector from the center of the circle to the point of action.


From this diagram, you can tell that only the x-components (pointing toward the center) matter, since the y-components of the two tensions cancel out. The x-components of the two tension vectors must add up to the force required for the centripetal acceleration.


$$2T\sin(d\theta/2) = T\,d\theta = (dm)\,\omega^2R$$


The small bit of mass can be found as follows:


$$dm = \rho\, dV = \rho A\, dl = \rho \pi r^2 (R\, d\theta)$$

where $dl = R\,d\theta$ is the arc length of the element along the rope circle.


Then, cancelling out the $d\theta$ on both sides of the equation:



$$\boxed{T = \pi \rho r^2 \omega^2 R^2}$$


Note: This solution assumes $r \ll R$.
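Plugging illustrative numbers into the boxed formula (a quick numerical sketch; all values are made up):

```python
import math

rho = 1.2e3     # rope material density in kg/m^3 (illustrative)
r = 5.0e-3      # cross-sectional radius of the rope in m
R = 0.5         # radius of the rope circle in m
omega = 20.0    # angular velocity in rad/s

T = math.pi * rho * r**2 * omega**2 * R**2
print(f"tension T ~ {T:.2f} N")
```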


particle physics - Have all three flavors of solar neutrino been measured?


As far as I know, the Sun produces exclusively electron neutrinos ($\nu_e$). When the flux of solar neutrinos ($\nu_e$) is measured on the Earth, a depletion is observed in the $\nu_e$ flux, i.e., some $\nu_e$'s have "disappeared" on their way from the Sun to the Earth. As far as I know, this conclusion is drawn by measuring only$^1$ the $\nu_e$ flux in the detectors. The explanation is that some of the neutrinos get morphed into $\nu_\mu$ and $\nu_\tau$.


But to really test this hypothesis, a deficiency in the $\nu_e$ flux is not enough. There must be an experiment in which the detector also measures the $\nu_\mu$ and $\nu_\tau$ fluxes. Only if the sum of the fluxes over all three flavors turns out to equal the expected flux can we be sure that solar neutrinos have undergone oscillation. Has that been achieved in experiments?




$^1$The experiment carried out by Davis et al at the Homestake mines detected $\nu_e$ through the inverse beta decay $\nu_e+^{37}{\rm Cl}\to e^-+^{37}{\rm Ar},$ and found that they were getting about one-third of the number of $\nu_e$ that were predicted from the solar models.




Answer



The SNO experiment was sensitive to all three flavors of neutrinos, and hence provided definitive evidence for solar neutrino oscillations. That's why half of the 2015 Nobel prize for neutrino oscillations went to the director of this experiment, rather than the many other previous experiments.


As usual, there's a bit of confusion depending on whether you talk primarily to theorists or experimentalists. Judging from the model building occurring in the 90's, by 2001 most theorists considered it a given that neutrinos had masses and hence oscillated. But the SNO experiment was the first to really check it for sure.


classical mechanics - Moment of a force about a given axis (Torque) - Scalar or vectorial?

I am studying Statics and saw that: The moment of a force about a given axis (or Torque) is defined by the equation: $M_X = (\vec r \times \...