Sunday, May 31, 2015

symmetry - Relativistic Hamiltonian Formulations




Possible Duplicate:
Hamiltonian mechanics and special relativity?



The Hamiltonian formulation is beautifully symmetric. It's a shame that the explicit time derivatives in Hamilton's equations mean that the Hamiltonian formulation is not manifestly Lorentz-covariant. Is there any variant of the Hamiltonian formulation that is manifestly relativistic?



Answer



The covariant Hamiltonian version of relativistic classical or quantum mechanics of a single particle is just like the nonrelativistic one, with time replaced by proper time (eigentime); see, e.g., Thirring's mathematical physics course.


A covariant Hamiltonian version of relativistic classical field theory is the multisymplectic formalism; see, e.g.,

http://arxiv.org/pdf/math/9807080
http://lanl.arxiv.org/abs/1010.0337


A covariant Hamiltonian version of relativistic quantum field theory is the Tomonaga-Schwinger formalism; see, e.g.,
http://arxiv.org/pdf/gr-qc/0405006
http://arxiv.org/pdf/0912.0556
http://sargyrop.web.cern.ch/sargyrop/SDEsummary.pdf


visible light - Relation between wavelength and system size


We always say that when a given light wave interacts with atoms bound in a molecule, only waves with wavelength close to the inter-atomic spacing are able to probe the system. In other contexts (macroscopic oscillations in a system), one also talks about the wavelength of some fluctuation in a system being larger than the system size, in which case such fluctuations are omitted/ignored.


Questions:




  • What is it that links the wavelength of a wave to its interaction with a system? Be it acoustic waves or EM. Physical intuition would be greatly appreciated, but please don't hesitate to show the math behind it as well, if you see fit!





  • How does one go about quantifying such problems? I.e. if I have $\lambda_1$ slightly larger than the system size $d$, or only a little larger, how do I decide whether to consider such oscillations in the system or not?





Answer



In principle, a wave of any size will interact with a system of any size. The question should therefore be posed differently: how is the interaction of the two affected by their relative size?


Let's take the simple example of scatter. You are familiar (whether you know it or not) with Rayleigh scatter - it's an elastic light scattering phenomenon that makes the sky blue. The Rayleigh scatter cross section (effective probability of interaction) is given by


$$\sigma = \frac{2\pi^5}{3}\frac{d^6}{\lambda^4}\left(\frac{n^2-1}{n^2+2}\right)^2$$


In this equation, $d$ is the diameter of the particle, $n$ is its refractive index, and $\lambda$ is the wavelength of the scattered light. This expression applies when $d\ll\lambda$ - typically 1/10th or smaller. So right there we have an interaction that occurs with a wave that's much bigger than the "system" (the particle, in this case).
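To get a feel for the numbers, here is a minimal Python sketch of this cross section. The 50 nm droplet diameter and the blue/red wavelengths are illustrative values chosen to satisfy $d\ll\lambda$, not figures from the answer:

```python
import math

def rayleigh_cross_section(d, wavelength, n):
    """Rayleigh scattering cross section for a sphere of diameter d << wavelength.

    Uses the standard 2*pi^5/3 prefactor; all lengths in metres.
    """
    lorentz = (n**2 - 1) / (n**2 + 2)
    return (2 * math.pi**5 / 3) * d**6 / wavelength**4 * lorentz**2

d = 50e-9   # 50 nm water droplet (illustrative), well below optical wavelengths
n = 1.33    # refractive index of water
sigma_blue = rayleigh_cross_section(d, 450e-9, n)
sigma_red = rayleigh_cross_section(d, 650e-9, n)

# The 1/lambda^4 dependence: blue scatters (650/450)^4, roughly 4.35x more
# than red, which is the usual explanation for the blue sky.
print(sigma_blue / sigma_red)
```

The ratio depends only on the wavelengths, so the particle size and prefactor cancel out of the blue-vs-red comparison.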


As wavelengths become shorter, the scattering mechanism is better described as Mie scatter (wavelength on the same order as the particle) - this regime is characterized by resonances, meaning that some sizes of particles will scatter better than others; the relationship is not monotonic (unlike Rayleigh scatter).


At even shorter wavelengths, light (or other waves, e.g. acoustic waves) starts to behave more "normally" - that is the regime you usually think about when you are talking about direct visualization, optical microscopy, ultrasound imaging, etc.



But just because the interactions of long waves with small objects don't easily make pretty images doesn't mean they are not happening - the physics may be a bit harder, and the interaction more statistical and less deterministic, but nonetheless - they do interact.


quantum field theory - What's the big difference between high energy physics and condensed matter physics?


Can anyone tell me the difference between high energy physics and condensed matter physics?


I am confused about how these concepts, such as second quantisation, Green's function and many others, in high energy physics are introduced into condensed matter physics?



Answer



A possible BIG difference is that condensed matter is non-relativistic (or Galilean relativistic) whereas high-energy physics is relativistic (or Lorentz-Poincaré relativistic). The tools are then the same, but the writing conventions are sometimes confusing for a few people. Some condensed matter physicists -- especially the young ones trained in condensed matter only -- are not very familiar with covariant notations (with Greek indices everywhere!), and high-energy physicists are not necessarily familiar with the observables of condensed matter experiments. For instance, as a condensed matter physicist, I have never calculated a cross section. I guess it is not natural for a high-energy physicist to calculate a magnetisation or a band structure... (please comment about it :-)


Also, for a few years (I would say in the 70's-80's, but I could be saying something stupid), condensed matter physicists were interested in many-body problems at a given temperature, so they were using the real partition function (i.e. the one defined by statistical physics) and the Matsubara frequency technique (in terms of Green's functions, a simple way to implement temperature in time-independent many-body problems), whereas high-energy physicists were mainly concerned with single-particle processes, so their $Z$ function was mainly a tool to simplify calculations and was only marginally connected to a partition function.


At the present time, there is an emergent Lorentz relativity in modern condensed matter systems, such as topological systems and graphene, to cite a few of them, and high-energy physics is more and more concerned with many-body problems, for instance in order to elucidate the early times of the universe, when there was a plasma of many high-energy particles. So in short there is a tendency for these two fields to overlap. Of course the energy ranges with which high-energy and condensed matter physicists play are not at all the same. At the high-energy level, there is no such simple thing as a solid that you can hold in your hand! In condensed matter, the elementary particles are (dressed) electrons, not quarks, for instance... Funnily enough, quantum field theory applies really well to condensed matter systems at low temperatures (quantum Hall, superconductivity, mesoscopic physics, to cite a few nice applications), whereas high-energy physics is the realm of high temperatures (as in particle collisions, or the nuclear physics of stars, for instance).



Historically, there has been a rich and interesting back-and-forth movement of ideas between the two fields: the Green's function approach and diagrammatics (first applied to high-energy problems and then adapted to many-body problems), phase transitions (first studied in condensed matter and then adapted to high-energy physics), topological ideas (first considered in high-energy physics and recently applied to condensed matter problems), ... Each time it is not a simple copy of the ideas and methods from one sector to the other. Rather, the transition from one sector to the other requires adaptations which enrich both parts of quantum field theory. That's mainly why it has been working so well and so nicely since the mid-twentieth century!


quantum mechanics - What is the physical difference between states and unital completely positive maps?


Mathematically, completely positive maps on C*-algebras generalize positive linear functionals in that every positive linear functional on a C*-algebra $A$ is a completely positive map of $A$ into $\mathbb{C}$. Furthermore, we have the Stinespring construction as a powerful generalization of the GNS construction.


Certainly, the relationship between completely positive maps and positive linear functionals can only go so far. I am curious about what physics has to say about this analogy/generalization. It seems that completely positive maps should serve as generalized states of a quantum system, but I've mostly seen cp maps arise in the discussion of quantum channels and quantum operations. I'd like to know precisely in what sense a completely positive map can be viewed as a generalized physical state.



Question: What is a completely positive map, physically? Particularly, in what precise sense can a completely positive map be regarded as a generalized (physical) state?



If there are nice survey papers discussing the above relationship, such a reference may serve as an answer to my question.




special relativity - What would happen to electronic circuits when traveling near the speed of light?


Imagine a space ship, loaded with all sorts of computer systems, traveling near the speed of light.


Electricity itself is very fast, and can reach speeds close to the speed of light (up to 99%, according to Wikipedia). So, what would happen to the electronic circuits in this spaceship?



Will the computers shut down, because electricity can't reach the components? Or are they just not related to each other and will the computers keep working perfectly?



Answer



You're dealing with an incomplete form of relativity.


In the frame of the spaceship, nobody will notice anything different, since all inertial frames are equivalent.


In the "ground" frame, electricity would be moving at a different speed, by the relation $$\rm v_{e,ground}=\frac{v_{ship}+v_{electricity}}{1+\frac{v_{ship}v_{electricity}}{c^2}}$$


We cannot simply add relative velocities; we need to use the above equation. If you compare this with the time dilation of the system, the computers will all appear to be working the same, albeit slower, from your point of view.
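A quick numerical check of the addition law makes the point; the 0.9c ship speed and 0.99c signal speed are illustrative values:

```python
def add_velocities(u, v, c=1.0):
    """Relativistic velocity addition; velocities in units of c by default."""
    return (u + v) / (1 + u * v / c**2)

# Ship at 0.9c relative to the ground, signals at 0.99c in the ship frame:
v_ground = add_velocities(0.9, 0.99)
print(v_ground)   # ≈ 0.99947 — still below c, as it must be
```

However fast the two inputs are, the result never exceeds $c$.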


Saturday, May 30, 2015

general relativity - What is the motivation from Physics for the Levi-Civita connection on GR?


In General Relativity the Levi-Civita connection is quite important. Indeed, General Relativity is all about connecting the curvature of spacetime with the distribution of matter and energy, at least that is the intuition I've always read about.


Now, given a smooth manifold $M$ which is supposed to be spacetime, there is not a direct way to talk about "curvature" of $M$. The meaningful thing is to talk about the curvature of a connection defined on some bundle over $M$.


In General Relativity, the curvature appearing in Einstein's equations is the curvature of a connection on the bundle $TM$, introduced by means of a covariant derivative operation.


More than that, one picks one specific connection: the Levi Civita connection, which is the unique torsion free connection for which the covariant derivative of the metric tensor vanishes.



So in summary: the curvature of spacetime dealt with in General Relativity comes from a connection, the connection is introduced via a covariant derivative, and finally the covariant derivative chosen (hence the connection chosen) is the Levi-Civita connection.


Why is that? I mean, this is not the only existing connection. Why, in General Relativity, is the relevant connection from the physics point of view the Levi-Civita connection?


What is the physics motivation for the Levi-Civita connection? Reasoning physically, and remembering that what we want to achieve is a description of spacetime and gravity in which matter influences the curvature of spacetime, what would be the physics motivation for the Levi-Civita connection?




general relativity - The Pioneer anomaly finally explained?


Pioneer 10 & 11 are robotic space probes launched by NASA in the early 1970's. After leaving our solar system, an unusual deceleration of both spacecraft was measured to be approximately $$\ddot{r}_p = -(8.74\pm1.33)\times10^{-10} \ \frac{\mathrm{m}}{\mathrm{s}^2}$$ with respect to our solar system.


Several attempts were made to explain this tiny effect, called the Pioneer anomaly, but none was fully accepted in the scientific community so far.


Two months ago, Frederico Francisco et al. proposed another solution to the problem. They assume, roughly speaking, that the thermal radiation of the spacecraft caused by the plutonium on board, together with the actual structure of the probes, is responsible for this mystery of modern physics.


Here is an image of Pioneer 10 taken from Wikipedia, along with a sketch of the radiation model employed in the paper: Pioneer 10 and heat model


Hence my question:



Is the Pioneer anomaly finally explained?



Sincerely




Answer



I'll stick my neck out and say that the answer to your question is simply "yes."


First off, these detailed thermal models are complex and hard to do, so we want confirmation from independent groups. We have that: Rievers and Lämmerzahl, "High precision thermal modeling of complex systems with application to the flyby and Pioneer anomaly," gr-qc/1104.3985


Second, we could ask whether these results contradict previous work. The answer is basically no. Previous work was simply sloppy. There is a nice talk on this topic here by Toth: http://streamer.perimeterinstitute.ca/Flash/a2cc528b-1d36-4a2e-af73-5f81b8b17477/viewer.html There is a long history where people did back-of-the-envelope estimates of the thermal effects and said, "Look, the order of magnitude is too small to matter!" It just turns out that the back-of-the-envelope estimates were wrong.


Finally, all of this stuff is very tough to be sure of, because there are so many uncertainties about things like the degradation of the white paint on the RTGs. Therefore it would be good to have independent ways of testing the hypothesis of a gravitational anomaly, without having to use the Pioneer data at all. We do have these independent tests. If the effect obeyed the equivalence principle, it would have had effects on the outer solar system that are not in fact observed: Iorio, "Does the Neptunian system of satellites challenge a gravitational origin for the Pioneer anomaly?," gr-qc/0912.2947


It's dead, Jim.


general relativity - Quotient space in the book The Large scale structure of space-time


On page 79, the author states



One is thus concerned only with $\mathbf{Z}$ modulo a component parallel to $\mathbf{V}$, i.e. only with the projection of $\mathbf{Z}$ at each point $q$ into the space $Q_q$ consisting of equivalence classes of vectors which differ only by addition of a multiple of $\mathbf{V}$. This space can be represented as the subspace $H_q$ of $T_q$ consisting of vectors orthogonal to $\mathbf{V}$.



where $\mathbf{Z}$ is the tangent vector $(\partial/\partial t)_\lambda$ on a family $\lambda (t,s)$ of curves, and $\mathbf{V}$ the timelike tangent vector parameterized by $s$.



How do we formally define this quotient space? I'm not very familiar with the concepts of equivalence classes, quotient spaces, isomorphisms, modulo and cosets; I only remember from a linear algebra course that the professor defined: given two vector spaces $W \le V$, the quotient space $V/W$ of $V$ by $W$ is the set $V/W=\{ x+W\ |\ x\in V \}$. And something about the square bracket $[x]$ denoting the coset $x+W$ with representative $x$.


On page 86, where the author is talking about the null curves



The second difference is that $Q_q$, the quotient of $T_q$ by $\mathbf{K}$, is not now isomorphic to $H_q$, the subspace of $T_q$ orthogonal to $\mathbf{K}$, since $H_q$ includes the vector $\mathbf{K}$ itself as $g(\mathbf{K},\mathbf{K})=0$. In fact as will be shown below, one is not really interested in the whole of $Q_q$ but only in the subspace $S_q$ consisting of equivalence classes of vectors in $H_q$ which differ only by multiple of $\mathbf{K}$.



where $\mathbf{K}$ is the tangent vector on the null geodesics.


I know that it's not clear how to define a projection of $T_q$ onto the subspace $H_q$ orthogonal to $\mathbf{K}$ since $g(\mathbf{K},\mathbf{K})=0$, but I want to understand the concept of quotient space here, how formally $Q_q$ is not isomorphic to $H_q$, and from these notions construct a projection operator ${h^a}_b$. What is the subspace $S_q$? These kinds of discussions are all over chapter 4.


Heuristically (I don't know why) the author used the pseudo-orthonormal bases defined on page 87: take $\mathbf{E}_4$ equal to $\mathbf{K}$ and $\mathbf{E}_3$ some other null vector satisfying $g(\mathbf{E}_3,\mathbf{E}_4)=-1$, and $\mathbf{E}_1$ and $\mathbf{E}_2$ unit spacelike vectors, orthogonal to each other and to $\mathbf{E}_3$ and $\mathbf{E}_4$. The following statements are out of my reach



It can be seen that $\mathbf{E}_1$, $\mathbf{E}_2$ and $\mathbf{E}_4$ constitute a basis for $H_q$ while the projections into $Q_q$ of $\mathbf{E}_1$, $\mathbf{E}_2$ and $\mathbf{E}_3$ form a basis of $Q_q$, and the projections of $\mathbf{E}_1$ and $\mathbf{E}_2$ form a basis of $S_q$. We shall normally not distinguish between a vector $\mathbf{Z}$ and its projection into $Q_q$ or $S_q$. We shall call a basis having the properties of $\mathbf{E}_1$, $\mathbf{E}_2$, $\mathbf{E}_3$, $\mathbf{E}_4$, above, pseudo-orthonormal.




The author further concluded that the dual basis $\mathbf{E}^3$ is equal to $-K^ag_{ab}$ and $\mathbf{E}^4$ is $-L^ag_{ab}$. Well, I think then $L_b = -K^a g_{ab} = -K_b$, and $g(\mathbf{E}_3,\mathbf{E}_4)=0$?


I'd really like to get to know these geometrical objects, as they are so powerful and abstract, which is good. My current reference book is Schutz, Geometrical methods of mathematical physics. He touches on these ideas in a section about cohomology theory (this section is categorized as supplementary topics, and in fact I did not understand a word). Is there any suggestion for a lightweight reference book for a physicist?


Thanks!



Answer



$H_q$ is indeed a quotient space constructed from $T_q$. We establish the equivalence relation $\mathbf{X}\sim\mathbf{W}$ for $\mathbf{X},\mathbf{W}\in T_q$ if $\mathbf{X}-\mathbf{W}=k\mathbf{V}$ with $k\in\mathbb{R}$. Then $H_q\cong T_q/\sim$. (In "linear algebra notation," we have $T_q/\sim=T_q/\operatorname{span}\mathbf{V}$.$^1$) To see this more clearly, let $[\mathbf{X}]$ be the equivalence class of $\mathbf{X}$ under $\sim$. Expand $\mathbf{X}$ in the basis $\{\mathbf{E}_1,\mathbf{E}_2,\mathbf{E}_3,\mathbf{V}\}$ as $X^a \mathbf{E}_a$. (We set $\mathbf{E}_4=\mathbf V$ for convenience.) Let $\mathbf W\in [\mathbf X]$, then $\mathbf W=W^a\mathbf E_a$ and $$\mathbf X-\mathbf W=(X^4-W^4)\mathbf V+(X^\alpha-W^\alpha)\mathbf E_\alpha=k\mathbf V$$ where $\alpha=1,2,3$. Thus $X^\alpha=W^\alpha$ and $[\mathbf X]$ uniquely determines ${}_\bot \mathbf X:=X^\alpha\mathbf E_\alpha$. As explained here, ${}_\bot \mathbf X\in H_q.$ Likewise, one can show that $\mathbf X\in H_q$ gives rise to an equivalence class $[\mathbf X]$. Thus the quotient space and the orthogonal subspace are isomorphic.


We will show that $Q_q:=T_q/\operatorname{span}\mathbf K\not\cong H_q$. Note that $H_q:=\{\mathbf{X}\in T_q\mid g(\mathbf{X},\mathbf{K})=0\}$. From the above discussion, it should be clear that the class of $\mathbf K$ in $Q_q$ is zero. But since $\mathbf K$ is null, $\mathbf K$ is a nonzero element of $H_q$. This cannot happen if the construction of the first paragraph identifies $H_q$ with $Q_q$.
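The null case can be illustrated numerically. This is a small sketch assuming a flat metric with signature $(+,+,+,-)$; the specific components of $\mathbf K$ and $\mathbf V$ are illustrative choices, not from the book:

```python
import numpy as np

# Flat metric with signature (+,+,+,-), chosen purely for illustration
g = np.diag([1.0, 1.0, 1.0, -1.0])

def inner(x, y):
    """Metric inner product g(x, y)."""
    return x @ g @ y

K = np.array([0.0, 0.0, 1.0, 1.0])   # an illustrative null vector
V = np.array([0.0, 0.0, 0.0, 1.0])   # a timelike vector, for contrast

print(inner(K, K))   # 0.0: K is null, hence orthogonal to itself
print(inner(V, V))   # -1.0: V is timelike, not orthogonal to itself

# So K lies inside its own orthogonal complement H_q, while a timelike V
# never does — which is why the null case spoils the identification
# of H_q with the quotient T_q / span(K).
```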


The space $S_q$ is the set of vectors orthogonal to $\mathbf K$ modulo $\mathbf K$ itself, that is, $S_q=H_q/\operatorname{span}\mathbf K$.


To see that $\mathbf E_1,\mathbf E_2$ and $\mathbf K$ span $H_q$, simply note that $H_q$ is $3$-dimensional and $\operatorname{span}\{\mathbf E_1,\mathbf E_2,\mathbf K\}$ contains only vectors orthogonal to $\mathbf K$ and is $3$-dimensional.


The proof that $S_q=\operatorname{span}\{\mathbf E_1,\mathbf E_2\}$ is identical to the proof in the first paragraph and is left to the reader.



The concept of a dual basis is explained on page 17. We explain why $E^3_a=-K^bg_{ba}$ and leave the reader to verify the rest: $\mathbf{E}^3(\mathbf{E}_3)=E^3_aE^a_3=-K^bg_{ba}L^a=-\langle\mathbf K,\mathbf L\rangle=1=\delta^3{}_3$.




$^1$ The span of a single vector $\mathbf X$ is $\{a\mathbf X\mid a\in\mathbb{R}\}$.


cosmology - Why can we see the cosmic microwave background (CMB)?


I understand that we can never see much farther than the farthest galaxies we have observed. This is because, before the first galaxies formed, the universe was opaque--it was a soup of subatomic particles that scattered all light. But before the universe was opaque, the Big Bang happened, which is where the cosmic microwave background (CMB) comes from.


If the opaque early universe scattered all light, and the first few galaxies are as far back as we can see, why is the CMB observable? Where is it coming from?



Answer



The cosmic microwave background does not originate with the big bang itself. It originates roughly 380,000 years after the big bang, when the temperature dropped far enough to allow electrons and protons to form atoms. When it was released, the cosmic microwave background wasn't microwave at all- the photons had higher energies. Since that time, they have been redshifted due to the expansion of the universe, and are presently in the microwave band.
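The redshift can be sketched with Wien's displacement law. The recombination temperature of ~3000 K is a standard approximate value assumed here, not a figure from the answer:

```python
WIEN_B = 2.898e-3    # Wien displacement constant, m*K
T_RECOMB = 3000.0    # approximate temperature at recombination, K (assumed)
T_TODAY = 2.725      # measured CMB temperature today, K

def peak_wavelength(T):
    """Peak wavelength of a black-body spectrum via Wien's displacement law."""
    return WIEN_B / T

print(peak_wavelength(T_RECOMB))  # ~9.7e-7 m: near-infrared when released
print(peak_wavelength(T_TODAY))   # ~1.1e-3 m: microwave today
print(T_RECOMB / T_TODAY)         # 1 + z ≈ 1100, the stretch factor since then
```

The factor of ~1100 in temperature is the same factor by which the photon wavelengths have been stretched by the expansion.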


The universe is opaque from 380,000 years and earlier. The galaxies that we can see only formed after that time. Before that, all that is observable is the CMB.


electricity - Heating of an non-ohmic conductor


So I know that if you increase the voltage across a wire, the current will increase. But an increase in current leads to an increase in heat production through $P=I^2R$. As the temperature increases, the vibrations of the metal ions increase, so the current is more restricted, implying the resistance increases. But as the resistance has increased, the heating increases again, and this would go on in a cycle leading to infinite resistance and heat production. So where am I going wrong?




Friday, May 29, 2015

quantum field theory - Is anti-matter matter going backwards in time?


Some sources describe antimatter as just like normal matter, but "going backwards in time". What does that really mean? Is that a good analogy in general, and can it be made mathematically precise? Physically, how could something move backwards in time?



Answer



To the best of my knowledge, most physicists don't believe that antimatter is actually matter moving backwards in time. It's not even entirely clear what it would really mean to move backwards in time, from the popular viewpoint.


If I'm remembering correctly, this idea all comes from a story that probably originated with Richard Feynman. At the time, one of the big puzzles of physics was why all instances of a particular elementary particle (all electrons, for example) are apparently identical. Feynman had a very hand-wavy idea that all electrons could in fact be the same electron, just bouncing back and forth between the beginning of time and the end. As far as I know, that idea never developed into anything mathematically grounded, but it did inspire Feynman and others to calculate what the properties of an electron moving backwards in time would be, in a certain precise sense that emerges from quantum field theory. What they came up with was a particle that matched the known properties of the positron.



Just to give you a rough idea of what it means for a particle to "move backwards in time" in the technical sense: in quantum field theory, particles carry with them amounts of various conserved quantities as they move. These quantities may include energy, momentum, electric charge, "flavor," and others. As the particles move, these conserved quantities produce "currents," which have a direction based on the motion and sign of the conserved quantity. If you apply the time reversal operator (which is a purely mathematical concept, not something that actually reverses time), you reverse the direction of the current flow, which is equivalent to reversing the sign of the conserved quantity, thus (roughly speaking) turning the particle into its antiparticle.


For example, consider electric current: it arises from the movement of electric charge, and the direction of the current is a product of the direction of motion of the charge and the sign of the charge.


$$\vec{I} = q\vec{v}$$


Positive charge moving left ($+q\times -v$) is equivalent to negative charge moving right ($-q\times +v$). If you have a current of electrons moving to the right, and you apply the time reversal operator, it converts the rightward velocity to leftward velocity ($-q\times -v$). But you would get the exact same result by instead converting the electrons into positrons and letting them continue to move to the right ($+q\times +v$); either way, you wind up with the net positive charge flow moving to the right.
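The sign bookkeeping in this example is simple enough to check mechanically; the unit charge and velocity values are just illustrative:

```python
def current(q, v):
    """One-dimensional current contribution of a charge q moving with velocity v."""
    return q * v

electrons_right = current(-1, +1)     # electrons (-q) moving right
time_reversed = current(-1, -1)       # time reversal flips the velocity
charge_conjugated = current(+1, +1)   # charge conjugation swaps in positrons

# Time reversal and charge conjugation yield the same current flow:
print(time_reversed == charge_conjugated)   # True
```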


By the way, optional reading if you're interested: there is a very basic (though hard to prove) theorem in quantum field theory, the TCP theorem, that says that if you apply the three operations of time reversal, charge conjugation (switch particles and antiparticles), and parity inversion (mirroring space), the result should be exactly equivalent to what you started with. We know from experimental data that, under certain exotic circumstances, the combination of charge conjugation and parity inversion does not leave all physical processes unchanged, which means that the same must be true of time reversal: physics is not time-reversal invariant. Of course, since we can't actually reverse time, we can't test in exactly what manner this is true.


group theory - How is it that angular velocities are vectors, while rotations aren't?


Does anyone have an intuitive explanation of why this is the case?



Answer



This is a note on why angular velocities are vectors, to complement Matt and David's excellent explanations of why rotations are not.


When we say something has a certain angular velocity $\vec{\omega_1}$, we mean that each part of the thing has a position-dependent velocity


$\vec{v_1}(\vec{r}) = \vec{\omega_1} \times \vec{r}$.


We might consider another one of these motions


$\vec{v_2}(\vec{r}) = \vec{\omega_2} \times \vec{r}$


and wonder what happens when we add them. We get



$\vec{v_1}(\vec{r}) + \vec{v_2}(\vec{r}) = \vec{\omega_1} \times \vec{r} + \vec{\omega_2} \times \vec{r}$.


The cross product is linear, so this is equivalent to


$(\vec{v_1} + \vec{v_2})(\vec{r}) = (\vec{\omega_1} + \vec{\omega_2}) \times \vec{r}$,


so it makes fine sense to add angular velocities by vector addition.
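This linearity is easy to verify numerically; the random vectors below are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
omega1 = rng.standard_normal(3)   # two arbitrary angular velocities
omega2 = rng.standard_normal(3)
r = rng.standard_normal(3)        # an arbitrary position in the body

v1 = np.cross(omega1, r)          # velocity field of each rotation
v2 = np.cross(omega2, r)
v_sum = np.cross(omega1 + omega2, r)   # field of the vector-summed omegas

# Linearity of the cross product: the velocity fields simply add.
print(np.allclose(v1 + v2, v_sum))   # True
```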


classical mechanics - Non-conservative system and velocity dependent potentials


I'm studying Lagrangian mechanics, but I'm a little bit upset because when dealing with Lagrange's equations, we mostly consider conservative systems. If the system is non-conservative, the treatment is very brief: 'sometimes' there exists a velocity-dependent potential $U(q,\dot{q},t)$ such that the generalized force $Q_j$ of the standard system can be written in terms of this potential. $$Q_j =\frac{d}{dt}\left(\frac{\partial U}{\partial \dot{q}_j}\right)-\frac{\partial U}{\partial q_j}$$ They give charged particles in a static EM field as an example.




  1. But my question is: can we find such a velocity-dependent potential for any generalized force?





  2. If not, can we not use Lagrangian mechanics?





Answer





  1. No, (generalized) velocity dependent potentials $U(q,\dot{q},t)$ do not exist for all (generalized) forces $Q_j$. See e.g. this Phys.SE post.




  2. Even if no variational formulation exists, one may still consider Lagrange equations, cf. e.g. this Phys.SE post.





nuclear physics - binding energy of a nucleus is positive?


I have found from this link http://hyperphysics.phy-astr.gsu.edu/hbase/nucene/nucbin.html that: Nuclei are made up of protons and neutrons, but the mass of a nucleus is always less than the sum of the individual masses of the protons and neutrons which constitute it. The difference is a measure of the nuclear binding energy which holds the nucleus together. This binding energy can be calculated from the Einstein relationship: nuclear binding energy $= \Delta m c^2$.


Now from the above it seems that the nuclear binding energy should be positive to compensate the energy difference between the mass of the protons and neutrons and the mass of the nucleus. But my question is: how can the binding energy be positive if it binds the nucleons by an attractive force?


I have also gone through this The "binding energy" of bonded particles adds mass? , but didn't get my question resolved.




Answer



Let's take the simple system of a deuterium nucleus, that is the bound state of a proton and neutron.


It's tempting to think of the binding energy as something that has to be added to a proton and neutron to glue them together into a deuteron, and therefore that the deuteron must weigh more than the proton and neutron because it has had something extra added to it. However this is the exact opposite of what actually happens.


Suppose we start with a proton and a neutron at a large separation, and we let them go and allow them to attract each other. As they move towards each other their potential energy decreases, so because of energy conservation their kinetic energy increases. They accelerate towards each other in just the same way you accelerate towards the Earth if you jump out of a window. This means that when the proton and neutron meet they are moving very rapidly towards each other, and they just flash past each other and coast back out to a large separation. This is not a bound state. To form a deuteron you have to take energy out of the system so that when the proton and neutron meet they are stationary with respect to each other. Then they can bond to each other to form a deuteron.


And this is the key point. To form a bound state you need to take an amount of energy out that is equal to the kinetic energy gained as the two particles collide. This energy is the binding energy, which is 2.2 MeV for a deuteron. If you take the energy 2.2 MeV and convert it to a mass using Einstein's equation $E = mc^2$ you get a mass of $3.97 \times 10^{-30}$ kg, and if you compare the mass of a deuteron with the mass of a proton + neutron you find the deuteron is indeed $3.97 \times 10^{-30}$ kg lighter.


Just for completeness let's look at this the other way round. Suppose we start with a deuteron and we want to separate it into a proton and neutron. Because the two particles in a deuteron attract each other we need to do work to pull them apart, which means we need to add energy to the system. To pull them completely apart we need to add an amount of energy equal to the binding energy of 2.2 MeV. That means our separated proton and neutron have 2.2 MeV more energy than the deuteron, so the total mass of the separated particles has increased by 2.2 MeV and they are $3.97 \times 10^{-30}$ kg heavier than the deuteron.


electromagnetism - What would give us more heat ? infrared or microwaves?


As we know, our body is made up mostly of water, and the frequency of vibration of water molecules matches that of microwaves, which is the working principle of microwave ovens.


When we come in contact with sunlight and feel its warmth, we say that it's due to the heat waves arriving as infrared. At least to explain the heat we feel, shouldn't we reason in terms of microwaves rather than infrared?



Answer



It's a myth that microwave absorption by water is a resonant process. See Does a domestic microwave work by emitting an electromagnetic wave at the same frequency as a OH bond in water? for a discussion of this.


Light generally interacts with matter by interacting with the electrons in matter. Light has an associated oscillating electric field and this excites oscillations in the electrons in the matter. The electrons may in turn transfer their kinetic energy to lattice vibrations, with the end result of heating the object. Incidentally, black body emission is the reverse of this. Lattice oscillations scatter electrons causing transient dipoles, and these dipole oscillations generate the black body radiation.


How much heating occurs depends simply on how fast the radiation is absorbed, and this is highly dependent on the frequency of the light and the electronic properties of the material. For example silica glass absorbs very little in the visible region while graphite absorbs very strongly. You get similar variations in the microwave region. Water isn't actually a very strong absorber of the 2.45GHz microwaves used in domestic ovens, which is just as well otherwise the centre of the food wouldn't be heated.


Thursday, May 28, 2015

What is the difference between diffraction and interference of light?


I know these two phenomena, but I want a somewhat deeper explanation. What type of fringes are obtained in these phenomena?



Answer




  1. Two separate wave fronts originating from two coherent sources produce interference. Secondary wavelets originating from different parts of the same wave front constitute diffraction. Thus the two are entirely different in nature.

  2. The regions of minimum intensity are perfectly dark in interference. In diffraction they are not perfectly dark.

  3. The fringes are all of equal width in interference. In diffraction they are never of equal width.

  4. All the maxima have the same intensity in interference. In diffraction the intensity varies from one maximum to the next.


Diffraction pattern





Interference pattern




quantum mechanics - What would be the associated wavelength of the particle if its velocity is zero?


What will be the wavelength of a particle whose velocity is zero?


According to de Broglie's hypothesis, the wavelength would become infinite as the momentum goes to zero. But I think that for a stationary particle, its particle nature should be more dominant, as at that moment it is highly localized.



Answer



To the contrary: the slower the particle moves, the more its wavelike properties show up. Compare, e.g., an electron in an atom, where its energy is at its lowest, with an electron flying out of a CRT. In the former case we need quantum mechanics to describe its motion (it's where QM originated), while in the latter case classical mechanics is sufficient.


So the wavelength becoming infinite for a resting electron is a completely consistent result. And it's also consistent with Heisenberg's uncertainty principle: momentum is exactly defined while position is completely undefined.
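To make this concrete, here is a quick numerical sketch (my own illustration, not part of the original answer; the constants are standard values and the speeds are arbitrary choices) showing how the de Broglie wavelength grows as the electron slows down:

```python
# de Broglie wavelength lambda = h/(m v) for an electron at several speeds,
# illustrating the divergence as v -> 0.
H = 6.626e-34    # Planck constant, J s
M_E = 9.109e-31  # electron rest mass, kg

def de_broglie_wavelength(v):
    """Wavelength in metres of an electron moving at speed v (m/s)."""
    return H / (M_E * v)

for v in (1e7, 1e3, 1e-3):
    print(f"v = {v:.0e} m/s  ->  lambda = {de_broglie_wavelength(v):.2e} m")
```

At 10^7 m/s (a CRT-like speed) the wavelength is of order 10^-10 m, far below anything resolvable classically; at millimetre-per-second speeds it is already of order a metre.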


quantization - Equivalence of classical and quantized equation of motion for a free field


Suppose a classical free field $\phi$ has dynamics given in Poisson bracket form by $\partial_o\phi=\{H, \phi\}$. If we promote this field to an operator field, the dynamics after canonical quantization are given by $\partial_o\phi=i[H, \phi]$.


How do we prove the equivalence of these two equations of motion? Does the procedure break down for interacting fields?


Edit: The following is my understanding of how the question is answered. I consider a scalar field for simplicity.



1/ Because the field is free we can write the operator $\phi$ as a superposition of plane-wave operators: $$ \phi(x)=K\int d^3p \left( a_p e^{-ipx}+a_p^{\dagger}e^{ipx} \right)$$


where $K$ is a normalisation constant. The problem is now to find what equation governs the time evolution of $a_p$.


2/ As an operator, $a_p$ evolves according to $\frac{d}{dt}a_p=i[H, a_p]$.


3/ Because the field is free we can write $H$ in the form: $$ H=\int(d^3p)\,\omega\, a_p^{\dagger}a_p$$

4/ Inserting into the commutator and using the canonical commutation relations, this gives $$\frac{d}{dt}a_p=-i\omega a_p$$ $$\frac{d^2}{dt^2}a_p=-\omega^2 a_p$$

5/ These equations for operators are the same as the classical equations, which justifies the use of the classical Euler-Lagrange equations for the operators.


Is this correct?



Answer



Let's assume that there are no obstructions to quantization, ordering issues, etc. This is perfectly fine in most physical cases and I think this makes the answer more understandable.


The answer has two parts:



  1. Given that the quantum Hamiltonian is nothing more than the classical Hamiltonian with hats on the fields and momenta, $$\hat H=H_{cl}\, (\hat \Pi , \hat \Phi)$$ and that the Dirac prescription holds, $$[\cdot, \square]=i\hbar \{\cdot , \square\}$$ with the dot and the square any field or momentum, it is clear that the classical and the quantum equations of motion in the Heisenberg picture are formally the same.


  2. If the equations of motion are linear in the fields, then the previous formal equivalence is additionally "real", namely: the expectation values of the fields evolve like the classical fields. This is Ehrenfest's theorem.


Example: For simplicity, consider the following quantum mechanical problem (the generalization to QFT is immediate): $$H_{cl} (P,Q)= {P^2\over 2}+{Q^2\over 2}+g{Q^3\over 3}$$ with the standard Poisson brackets. Note that this is the harmonic oscillator (in some convenient units where the mass and frequency are set to 1) plus an interaction term. The classical equation of motion is obtained as you very well know (by taking Poisson brackets with the Hamiltonian), and in its second-order form reads: $$\ddot Q + Q + gQ^2=0$$ So far everything is classical. Now, the quantum Hamiltonian is simply: $$\hat H= H_{cl}\, (\hat P , \hat Q)= {\hat P^2\over 2}+{\hat Q^2\over 2}+g{\hat Q^3\over 3}$$ with the canonical commutation relations obtained from the Dirac prescription. We are in the Heisenberg picture. As you should verify (by taking commutators with the Hamiltonian), the quantum equation of motion is: $$\ddot {\hat Q} +\hat Q + g\hat Q^2=0$$
This illustrates the first part above; as you see, both equations are formally the same.


However, physically, rather than formally, we are interested in the evolution of the expectation values of observables rather than in the evolution of the operators $\hat Q$ themselves in the Heisenberg picture (these operators do not depend on time in the Schrödinger picture, and physics cannot depend on the picture humans decide to use). So, we can take the expectation value of the previous equation in a generic state $|\Psi \rangle$: $$\frac{d^2}{dt^2} {\langle \Psi |\hat Q|\Psi\rangle }+\langle \Psi |\hat Q|\Psi\rangle + g\langle \Psi |\hat Q^2|\Psi\rangle =0$$ Note that since we are in the Heisenberg picture, physical states do not evolve in time and we are allowed to move the time derivatives out of the expectation value. The big question is: does this equation imply that the expectation values of quantum observables (the physical things) evolve classically? The answer is emphatically negative, because: $$\langle \Psi |\hat Q^2|\Psi\rangle \neq\langle \Psi |\hat Q|\Psi\rangle ^2$$ that is: in general, the expectation value of the square is not the square of the expectation value (the difference is the square of the standard deviation, or uncertainty).


In the $g=0$ case the last term is absent, the equation is linear and the evolution of the quantum expectation value is classical. That is, calling $q\equiv \langle \Psi |\hat Q|\Psi\rangle$: $$\ddot q + q=0$$ which is the classical equation (with $g=0$).
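The linear ($g=0$) case can be checked numerically. Below is a small sketch (my own illustration, not part of the original answer; natural units $\hbar=m=\omega=1$, a truncated Fock space of dimension 30, and the coherent state $|\alpha=1\rangle$ are arbitrary choices): the expectation value $\langle \hat Q\rangle(t)$ tracks the classical solution $q(t)=\sqrt{2}\cos t$ to numerical precision.

```python
import math
import numpy as np
from scipy.linalg import expm

N = 30                                       # Fock-space truncation (ample for alpha = 1)
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)   # annihilation operator
Q = (a + a.T) / np.sqrt(2)                   # position operator, hbar = m = omega = 1
H = a.T @ a + 0.5 * np.eye(N)                # g = 0 Hamiltonian  P^2/2 + Q^2/2

alpha = 1.0                                  # coherent state |alpha>: <Q>(0) = sqrt(2)*alpha
psi0 = np.array([alpha**k / math.sqrt(math.factorial(k)) for k in range(N)])
psi0 = psi0 / np.linalg.norm(psi0)           # renormalize after truncation

for t in (0.0, 1.0, 2.5):
    psi_t = expm(-1j * H * t) @ psi0         # time evolution
    q_quantum = float(np.real(psi_t.conj() @ Q @ psi_t))
    q_classical = math.sqrt(2) * math.cos(t) # solves q'' + q = 0, q(0)=sqrt(2), q'(0)=0
    print(f"t = {t:.1f}   <Q> = {q_quantum:+.6f}   classical q = {q_classical:+.6f}")
```

With the cubic term switched on ($g \neq 0$) the same comparison fails, exactly because $\langle \hat Q^2\rangle \neq \langle \hat Q\rangle^2$.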


general relativity - Horizon problem: Angular size of the causality patches on the CMB surface


I'm having difficulties in calculating the angular size of the causally connected regions on the cosmic microwave background (CMB), as seen from Earth today. I read in several documents that this angle is of about $1^{\circ}$, but most authors are giving only crude hand waving arguments about that number. See for example that page (look at the last two paragraphs):


https://ned.ipac.caltech.edu/level5/Sept02/Kinney/Kinney4_2.html


I'm trying to reproduce that value by explicit calculations from the standard FLRW metric, in the case of a spatially flat geometry ($k = 0$): \begin{equation}\tag{1} ds^2 =dt^2 - a^2(t) (dx^2 + dy^2 + dz^2). \end{equation} At observation time $t_{obs}$ (today: $t_{obs} \approx 13.8~\mathrm{Gyears}$), the proper distance from a given source (emitting light at time $t_{em} \approx 300~000~\mathrm{years}$) is given by $ds^2 = 0$ (light-like spacetime interval): \begin{equation}\tag{2} \mathcal{D}(t_{obs}, t_{em}) = a(t_{obs}) \int_{t_{em}}^{t_{obs}} \frac{1}{a(t)} \, dt. \end{equation} For example, this distance to the CMB surface, in the case of a dust universe, is found with the scale factor $a(t) \propto t^{2/3}$: \begin{equation}\tag{3} \mathcal{D} = 3 \, (\, t_{obs} - t_{obs}^{2/3} \, t_{em}^{1/3}). \end{equation} This gives $\mathcal{D} \approx 40.2~\mathrm{Gly}$ (more accurate models including radiation give about $42$ or $45~\mathrm{Gly}$).


Now, the causally correlated regions on the CMB sphere should have a proper radius of (considering the dust only universe): \begin{equation}\tag{4} R_{causal} = a(t_{em}) \int_0^{t_{em}} \frac{1}{a(t)} \, dt = 3 \, t_{em}, \end{equation} i.e. $R_{causal} \approx 9 \times 10^5 ~ \mathrm{ly}$. As seen from Earth, the angular size of a causal patch should have an angular size $\alpha_{causal}$ of: \begin{equation}\tag{5} \alpha_{causal} = 2 \arctan{\Big( \frac{R_{causal}}{\mathcal{D}} \Big)} \approx 0.003^{\circ}. \end{equation} Of course, this is much too short, and I'm probably doing a naive calculation. I don't know where I'm making a mistake.


How should I fix the angular size (5)?




EDIT: Apparently, the right formula fixing (5) is the following (the factor 2 is there to get the full angular diameter, and not just the angular radius of the causal patch): \begin{equation}\tag{6} \alpha_{causal} = 2 \arctan{\Big( \frac{\displaystyle{\int_{0}^{t_{em}} \frac{1}{a(t)} \, dt}}{\displaystyle{\int_{t_{em}}^{t_{obs}} \frac{1}{a(t)} \, dt}} \Big)}, \end{equation} but I don't understand why the angle is found from the ratio of the comoving lengths instead of the proper lengths.
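Equation (6) is easy to evaluate for the dust-only model used above (a quick sketch of my own; both integrals are done in closed form since $a(t)\propto t^{2/3}$, and the common proportionality constant cancels in the ratio):

```python
import math

t_em, t_obs = 3.0e5, 1.38e10   # years, as quoted in the question

# For a(t) ∝ t^(2/3):  ∫ dt / a(t) ∝ 3 t^(1/3)
chi_causal = 3 * t_em ** (1 / 3)                     # comoving horizon at emission
chi_cmb = 3 * (t_obs ** (1 / 3) - t_em ** (1 / 3))   # comoving distance to the CMB

alpha_causal = 2 * math.atan(chi_causal / chi_cmb)   # equation (6)
print(f"alpha_causal = {math.degrees(alpha_causal):.2f} degrees")
```

This gives a few degrees, the right order of magnitude for the commonly quoted $\sim 1^{\circ}$; including the radiation era shrinks the horizon at emission and brings the angle down further.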





electromagnetism - Where is the deflection of compass needle when placed inside a current carrying solenoid?



Suppose I have a current-carrying solenoid with a strong magnetic field inside and outside it. Now I bring a good compass inside that solenoid. In which direction will the north pole of the compass be deflected: towards the south pole or the north pole of the solenoid?


Now , please see these two cases:


Case 1: Suppose the north pole of the compass is attracted towards the south pole of the solenoid, because like poles repel and unlike poles attract. This contradicts the statement that magnetic field lines run from the south pole to the north pole inside a magnet, because the compass analogy is what we used to define the direction of field lines in the first place: since the needle pointed from the north pole to the south pole outside the magnet, we said that field lines run from north to south outside a magnet. Now, as the needle points towards the south pole inside the solenoid, why didn't we likewise say that the field lines run from north to south inside a magnet too?


Case 2: Now suppose the north pole of the compass points towards the north pole of the solenoid. This agrees with the field-line picture, in which the magnetic field inside a solenoid runs from the south pole to the north pole, but it seems to violate the rule that like poles repel and unlike poles attract. So what really happens?




quantum mechanics - A series of bound states covering an interval


Generally, the bound states (normalizable eigenvectors) of a Hamiltonian have discrete eigenvalues.



Is it possible for the eigenvalues to cover an interval? Say, $(a,b)$?


That is, for each $E \in (a,b)$, there is a corresponding bound state?



Answer



A self-adjoint operator $T$ on $L^2(\mathbb{R})$ has in its spectrum three different kinds of subspectra: A discrete point spectrum, a continuous spectrum, and a singular spectrum. The latter is physically discarded.


The point spectrum consists of the eigenvalues of $T$, that is, the spectral values for which true eigenvectors in $L^2(\mathbb{R})$, and hence normalizable eigenstates, exist.


The continuous spectrum may intersect with the point spectrum, but except for these discrete intersections, the continuous spectral values do not have eigenvectors in $L^2(\mathbb{R})$, but only in a larger rigged Hilbert space, which are consequently non-normalizable (for example because the Hermitian product of $L^2(\mathbb{R})$ is not a proper inner product for them).


Thus, continuous bound states do not exist in the usual quantum mechanical setting.


A way to see that the true eigenvalues cannot form a continuum is to notice that they would have to be uncountably many; but eigenvectors belonging to distinct eigenvalues are mutually orthogonal, and the separable Hilbert spaces of quantum mechanics contain only countably many mutually orthogonal vectors, hence only countably many independent eigenvectors.


Wednesday, May 27, 2015

quantum mechanics - In the Principle of Least Action, how does a particle know where it will be in the future?


In his book on Classical Mechanics, Prof. Feynman asserts that it just does. But if this is really what happens (and if the Principle of Least Action is more fundamental than Newton's laws), then don't we run into some severe problems regarding causality? In Newtonian mechanics, a particle's position right now is a result of all the forces that acted on it in the past. It's entirely deterministic in the sense that given position and velocity right now, I can predict the future using Newton's laws. But the principle of least action seems to reframe the question by saying that if the particle ends up in some arbitrary position, then it takes a certain path (namely the one that minimises the action). But that means that the particle already knows where it'll end up and "naturally" takes the path that minimises the action.


Is there any deeper reason for why this is true? In fact principle of least action seems so arbitrary that it's hard to see why it manages to replicate Newton's Equations? If any of you have any insight into this, please share because I just cannot get my head around it.


Note - Please keep in mind, my question is regarding the principle itself, not the equations that result from that principle.





newtonian mechanics - Why is the Torque divided by the radius but other rotational analogs multiplied?



I'm having trouble building intuition for circular motion. I understand that torque is the rotational analog of force. Why do we multiply the tangential force by the radius to get torque, while we divide the tangential distance, velocity, and acceleration by the radius to get their rotational analogs?


I understand that the moment of inertia is responsible for the $r$ term for a point mass, but is there any intuitive way of relating it?



Answer



Torque is not the equivalent of force for rotation; rather, torque measures the distance of a force's line of action. The definition of torque is exactly the moment of force


$$\mathbf{T} = \mathbf{r} \times \mathbf{F}$$


Conversely, velocity isn't the equivalent of rotational velocity; rather, velocity measures the distance from the rotation axis. The definition of velocity is exactly the moment of rotation



$$ \mathbf{v} = \mathbf{r} \times \boldsymbol{\omega}$$


Additionally, angular momentum is the moment of momentum


$$\mathbf{L} = \mathbf{r} \times \mathbf{p}$$


Above $\times$ is the vector cross product


The fundamental relationships in mechanics are Newton's second law and Euler's law of rotation, as commonly expressed at the center of mass C:


$$\begin{aligned} \mathbf{F} & = \frac{{\rm d}}{{\rm d}t}( m \mathbf{v}_C) \\ \mathbf{T}_C & = \frac{{\rm d}}{{\rm d}t}( \mathrm{I}_C \boldsymbol{\omega}) \\ \end{aligned} $$


But to understand them fundamentally consider the following tweak:


$$\begin{aligned} \mathbf{F} & = m \frac{{\rm d}}{{\rm d}t}( \mathbf{r}_{\rm rot} \times \boldsymbol{\omega}) \\ \mathbf{r}_{\rm force} \times \mathbf{F} & = \frac{{\rm d}}{{\rm d}t}( \mathrm{I}_C \boldsymbol{\omega}) \\ \end{aligned} $$


You can now see how the geometry (the relative location vectors $\mathbf{r}_{\rm rot}$ and $\mathbf{r}_{\rm force}$) enters at different parts of the equations, and that is why you end up multiplying by $r$ for torque and dividing by $r$ for velocity.


Tuesday, May 26, 2015

classical mechanics - Force as change in momentum vs. change in velocity


Is there ever a situation where the distinction between $F = m \frac{dv}{dt}$ and $F = \frac{dp}{dt}$ is important? I can't think of a situation where one is true and not the other (assuming only conservation of momentum).




Edit: Obviously it is important to take a changing mass into account (e.g. for a rocket) when you're considering a full time evolution, i.e. $F(t) = m(t) \frac{dv}{dt}$ (or in the relativistic case, perhaps something like $F(t) = m(t) \frac{d}{dt} \left( \frac{p}{m} \right)$ with $m$ the rest-mass). And perhaps there is a nontrivial relationship between the rate of change of mass, and the forces being exerted (again, e.g. with a rocket --- where the mass loss is tied to the propulsion). What is not clear is that there should ever be a $F = v\frac{dm}{dt}$ term.




Edit 2: My understanding of the solution:
There should not be a $dm/dt$ term, as pointed out by @garyp. The change in momentum expression is, however, more accurate because $p \neq mv$ in general (e.g. in relativistic cases, or when considering massless systems). It would seem that either one must take the caveat that $dp/dt$ cannot be used for mass-varying systems, or take the much less conceptual or aesthetically pleasing expression that $F = m \frac{d (\gamma v)}{dt}$ (which still only applies to classical systems).



Answer




In classical mechanics Newton's second law applies only to constant mass systems. In those cases there's no difference between $F=ma$ and $F=\mathrm{d}p/\mathrm{d}t$. However, in special relativity the latter is valid, but the former is not. A relativistic definition of momentum is required: $p=\gamma m v$.


Some details


[rework of my original answer] Some of the responses given so far answer the OP, but have weaknesses that might lead to misconceptions about Newton 2. I'll try to address the issue.


Some of the answers so far are not entirely correct if by $p$ is meant $mv$ of the rocket. Newton's second law is valid only for constant-mass systems. $F=\mathrm{d}p/\mathrm{d}t$ leads to the rocket equation by accident if the propellant is exhausted in a direction opposite to the direction of motion. If you are very careful about what you mean by $p$, a correct analysis can be made, but $p=m_\mathrm{rocket}v_\mathrm{rocket}$ does not work.


This Wikipedia entry. is the clearest statement of that fact that I've found.


To see why, consider a system comprising the rocket plus its remaining fuel and remaining propellant, which is what I think the other responders intend. (I may be wrong, but if so, they should clarify what exactly is their system.)


Imagine the system moving at constant velocity, and having two thrusters pointing perpendicular to the direction of motion, and directly opposite each other, and producing identical constant thrust by expelling exhaust gas. The thrust force of the two are equal and opposite. So the net force on the rocket is zero. Then $F_{net}=0$ and $\mathrm{d}v/\mathrm{d}t = 0$, and $v \neq 0$ and $\mathrm{d}m/\mathrm{d}t \neq 0$. Blindly applying $F=\mathrm{d}p/\mathrm{d}t$ leads to $0=v\,\mathrm{d}m/\mathrm{d}t$, a contradiction which can only be resolved if the mass is unchanging.


Comments on some comments


The system is losing momentum by losing mass, but its velocity is not changing: there is no net force on the system. Momentum is conserved in the closed system consisting of the rocket and the exhausted propellant. The system consisting of the rocket plus the fuel and propellant not yet exhausted (fuel remaining in the tank) is an open system. Conservation of momentum does not apply to open systems.


Careful analysis of a variable mass system lead to $$ \vec{F}_\mathrm{ext}=\vec{u}\frac{\mathrm{d}m}{\mathrm{d}t} + m\frac{\mathrm{d}\vec{v}}{\mathrm{d}t}$$ where $\vec{u}$ is the velocity of the mass leaving the system relative to the velocity of the system, and $\vec{F}_\mathrm{ext}$ is the external force on the system. For a rocket $\vec{F}_\mathrm{ext}=0$, and $$ 0=\vec{u}\frac{\mathrm{d}m}{\mathrm{d}t} + m\frac{\mathrm{d}\vec{v}}{\mathrm{d}t}$$ This is not the same as $$ 0=\vec{v}\frac{\mathrm{d}m}{\mathrm{d}t} + m\frac{\mathrm{d}\vec{v}}{\mathrm{d}t}$$ where $\vec{v}$ is the velocity of the rocket.
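As a numerical sanity check of the first equation above (a sketch of my own with illustrative numbers, not any real rocket), integrating $0=\vec{u}\,\mathrm{d}m/\mathrm{d}t + m\,\mathrm{d}\vec{v}/\mathrm{d}t$ for a one-dimensional constant-rate burn reproduces the Tsiolkovsky result $\Delta v = v_e \ln(m_0/m_f)$:

```python
import math

# Integrate 0 = u dm/dt + m dv/dt for a constant-rate burn and compare with
# the closed-form Tsiolkovsky result dv = v_e ln(m0/mf).
v_e = 3000.0            # exhaust speed relative to the rocket (m/s); u = -v_e
m0, mf = 1000.0, 400.0  # initial and final mass (kg)
burn_rate = 5.0         # kg/s
dt = 1e-3               # time step (s)

m, v = m0, 0.0
while m > mf:
    dm = -burn_rate * dt
    v += v_e * (-dm) / m    # m dv = -u dm
    m += dm

dv_exact = v_e * math.log(m0 / mf)
print(f"numerical dv = {v:.1f} m/s, Tsiolkovsky dv = {dv_exact:.1f} m/s")
```

Note that the integrated term is $u\,\mathrm{d}m$, the relative exhaust velocity times the mass flow, not $v\,\mathrm{d}m$ with $v$ the rocket's own velocity.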



Trying to write $F = m\,\mathrm{d}v/\mathrm{d}t + v\,\mathrm{d}m/\mathrm{d}t$ for a variable mass system is not correct. Here's a nice discussion whose first sentences are "In mechanics, a variable-mass system is a collection of matter whose mass varies with time. Newton's second law of motion cannot directly be applied to such a system because it is valid for constant mass systems only."


Aeronautical engineers know all this. It's only physicists who are confused. I checked some books. Symon's and John R. Taylor's classical mechanics texts get it right, as does Halliday, Resnick, and Walker.


quantum mechanics - What is the difference between maximally entangled and maximally mixed states?


To my understanding, a mixed state is composed of various states with their corresponding probabilities, but what is the actual difference between maximally mixed states and maximally entangled states?



Answer



Suppose we have two Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$. A quantum state on $\mathcal{H}_A$ is a normalized, positive trace-class operator $\rho\in\mathcal{S}_1(\mathcal{H}_A)$. If $\mathcal{H}_A$ is finite dimensional (i.e. $\mathbb{C}^n$), then a quantum state is just a positive semi-definite matrix with unit trace on this Hilbert space. Let's stick to finite dimensions for simplicity.


Let's now consider the idea of a pure state: A pure state is a rank-one state, i.e. a rank-one projection, or a matrix that can be written as $|\psi\rangle\langle \psi|\equiv\psi\psi^{\dagger}$ for some $\psi\in\mathcal{H}_A$ (the first being the Dirac notation, the second the usual mathematical matrix notation; since I don't know which of the two you are more familiar with, let me use both). A mixed state is now a convex combination of pure states and, by virtue of the spectral theorem, any state is a convex combination of pure states. Hence, a mixed state can be written as



$$ \rho=\sum_i \lambda_i |\psi_i\rangle \langle \psi_i|$$ for some $\lambda_i\geq 0$, $\sum_i \lambda_i=1$. In a sense, the $\lambda_i$ form a probability distribution and the state $\rho$ is a "mixture" of the $|\psi_i\rangle\langle\psi_i|$ with weights $\lambda_i$. If we assume that the $\psi_i$ form an orthonormal basis, then a maximally mixed state is a state where the $\lambda_i$ are the uniform probability distribution, i.e. $\lambda_i=\frac{1}{n}$ if $n$ is the dimension of the space. In this sense, the state is maximally mixed, because it is a mixture where all states occur with the same probability. In our finite dimensional example, this is the same as saying that $\rho$ is proportional to the identity matrix.


Note that a maximally mixed state is defined on any Hilbert space! In order to consider maximally entangled states, we need a bipartition of the Hilbert space, i.e. we now consider states $\rho\in\mathcal{S}_1(\mathcal{H}_A\otimes \mathcal{H}_B)$. Let's suppose $\mathcal{H}_A=\mathcal{H}_B$ and finite dimensional. In this case, we can consider entangled states. A state is called separable if it can be written as a mixture


$$ \rho =\sum_i \lambda_i \rho^{(1)}_i\otimes \rho^{(2)}_i $$ i.e. it is a mixture of product states $\rho^{(1)}_i$ in the space $\mathcal{H}_A$ and $\rho^{(2)}_i$ in the space $\mathcal{H}_B$. All states that are not separable are called entangled. If we consider $\mathcal{H}_A=\mathcal{H}_B=\mathbb{C}^2$ and denote the standard basis by $|0\rangle,|1\rangle$, an entangled state is given by


$$ \rho= \frac{1}{2}(|01\rangle+|10\rangle)(\langle 01|+\langle 10|)$$ You can try writing it as a separable state and you will see that it's not possible. Note that this state is pure, but entangled states do not need to be pure!


It turns out that for bipartite systems (if you consider three or more systems, this is no longer true), you can define an order on pure entangled states: There are states that are more entangled than others and then there are states that have the maximum amount of possible entanglement (like the example I wrote down above). I won't describe how this is done (it's too much here), but it turns out that there is an easy characterization of a maximally entangled state, which connects maximally entangled and maximally mixed states:


A pure bipartite state is maximally entangled, if the reduced density matrix on either system is maximally mixed.


The reduced density matrix is what is left if you take the partial trace over one of the subsystems. In our example above:


$$ \rho_A = tr_B(\rho)= tr_B(\frac{1}{2}(|01\rangle\langle 01|+|10\rangle\langle 01|+|01\rangle\langle 10|+|10\rangle\langle 10|))=\frac{1}{2}(|0\rangle\langle 0|+|1\rangle\langle 1|) $$


and the last part is exactly proportional to the identity, i.e. the reduced state is maximally mixed. You can do the same with $tr_A$ and see that the state $\rho$ is therefore maximally entangled.
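The partial-trace computation above is easy to verify numerically. Here is a small sketch (my own, using numpy; the reshape/einsum index bookkeeping is an implementation detail) confirming that the reduced state of the maximally entangled two-qubit state is $\frac{1}{2}\mathbb{1}$:

```python
import numpy as np

# Build the maximally entangled state (|01> + |10>)/sqrt(2) on C^2 ⊗ C^2.
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2)
rho = np.outer(psi, psi.conj())            # the pure state |psi><psi|

# Partial trace over subsystem B: reshape to indices (a, b, a', b')
# and contract the two B indices.
rho_A = np.einsum('abcb->ac', rho.reshape(2, 2, 2, 2))
print(rho_A)                               # the maximally mixed state I/2
```

Note that $\rho$ itself is pure ($\mathrm{tr}\,\rho^2 = 1$) while its reduction is maximally mixed, which is precisely the characterization stated above.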


Monday, May 25, 2015

magnetic fields - Fleming's left hand rule


In Fleming's left hand rule, does the direction of the current refer to the flow of electrons or to the flow of positive charge?



Answer



Have a look at the Wikipedia article on the left hand rule. It says:




The direction of the electric current is that of conventional current: from positive to negative.



homework and exercises - Solving the potential function for a Dielectric Sphere in a constant Electric Field


I was observing the solution to the potential function for a dielectric sphere (dielectric constant $\epsilon$) of radius $r=a$ placed in a uniform external field $E^0$. The boundary conditions were as follows:


[WHERE, $\phi_1(r,\theta)$ is the potential inside the sphere and $\phi_2(r,\theta)$ outside the sphere]


1. $\phi_2(r \to \infty)=-E^0r \cos(\theta)$;


2.$\phi_1(r=a)=\phi_2(r=a)$


3. $\phi_1(r=0)$ is finite


4.$\epsilon\frac{\partial \phi_1}{\partial r}=\frac{\partial \phi_2}{\partial r}$ at r=a


The solution is the following potential functions:


$\phi_1(r,\theta)=-\frac{(3E^0r \cos(\theta))}{(\epsilon+2)}$ and $\phi_2(r,\theta)=-E^0r \cos(\theta)+\left(\frac{\epsilon-1}{\epsilon+2}\right)\frac{E^0(a)^3 \cos(\theta)}{r^2}$


My questions are as follows:



a) On what physical grounds is condition 3 imposed?


b) It was written that the condition of equality of the tangential electric field components at the boundary surface is included within condition 2. How does that happen?


c) It was also stated that only at the surface of the dielectric does Laplace's equation not hold. I understand why the equation holds within the sphere and outside it (the free charge density is 0), but then what goes wrong at the surface?



Answer




  1. The potential can only diverge if the charge density diverges, and even then only for line or point charges, not surface charges. Since there is not a point or line charge at the origin, the potential has to be finite there.

  2. The equality of the tangent components follows from the requirement that the potential be continuous and $\mathbf{E} = -\nabla \Phi$. In order to get a non-vanishing curl with that definition you would have to have a discontinuous change in $\Phi$. See, for example, $\Phi = y\Theta(x)$ ($\Theta$ the Heaviside step function).

  3. What's going on at the surface follows from looking at the macroscopic formulation of Maxwell's equations. Notice that in that formulation the dielectric permittivity is inside of the divergence, so if the permittivity changes as a function of position you get a product rule: $$-\nabla \cdot \left( \epsilon \nabla \Phi\right) = - \epsilon \nabla^2 \Phi - (\nabla \Phi) \cdot (\nabla \epsilon)= 0,$$ which is not Laplace's equation. As @CStarAlgebra noted in another answer, this is because of surface charge densities in this case. If $\epsilon$ continuously varies with position, you would get a continuous volume bound charge density.
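As a sanity check on the solution quoted in the question, boundary conditions 2 and 4 can be verified symbolically (a sketch of my own using sympy; the symbol names are mine):

```python
import sympy as sp

r, theta, a, eps, E0 = sp.symbols('r theta a epsilon E_0', positive=True)

# The quoted solution for the potential inside and outside the sphere.
phi1 = -3 * E0 * r * sp.cos(theta) / (eps + 2)
phi2 = -E0 * r * sp.cos(theta) + (eps - 1) / (eps + 2) * E0 * a**3 * sp.cos(theta) / r**2

# Condition 2: continuity of the potential at r = a.
assert sp.simplify(phi1.subs(r, a) - phi2.subs(r, a)) == 0

# Condition 4: epsilon * d(phi1)/dr = d(phi2)/dr at r = a.
jump = (eps * sp.diff(phi1, r) - sp.diff(phi2, r)).subs(r, a)
assert sp.simplify(jump) == 0

print("conditions 2 and 4 hold")
```

The second check is exactly the continuity of the normal component of $\mathbf{D}$, while the first, as discussed in point 2 of the answer, also encodes the continuity of the tangential $\mathbf{E}$.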


mass - Why do we use kilograms instead of newtons to measure weight in everyday life?


What was the reason to use kilograms to measure weight (e.g. body weight, market vegetables etc.) instead of using newtons?



Answer



The problem is that while mass is the same everywhere on earth, weight is not - it can vary as much as 0.7% from the North Pole (heavy) to the mountains of Peru (light). This is in part caused by the rotation of the earth, and in part by the fact that the earth's surface is not (quite) a sphere.


When you are interested in "how much" of something there is - say, a bag of sugar - you really don't care about the local force of gravity on the bag: you want to know how many cups of coffee you can sweeten with it. Enter the kilogram.


If I calibrate scales using a reference weight, they will indicate (at that location) the amount of mass present in a sample relative to the calibration (reference). So if I have a 1 kg calibration weight, it might read 9.81 N in one place, and 9.78 N in another place; but if I put the reference weight on the scales and then say "if you feel this force, call it 1 kg" - that is what I get. You can now express relative weights as a ratio to the reference.
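A tiny numerical illustration of the calibration argument (my own sketch; the $g$ values are rough figures for the two extremes mentioned above):

```python
# A spring scale measures force; a scale calibrated on site divides by the local g.
g_pole, g_peru = 9.83, 9.76   # m/s^2, approximate extremes
m = 1.0                       # kg bag of sugar

w_pole, w_peru = m * g_pole, m * g_peru   # weight in newtons differs by ~0.7%
print(f"weight: {w_pole:.2f} N vs {w_peru:.2f} N "
      f"({100 * (w_pole - w_peru) / w_peru:.2f}% difference)")

# ...but a scale calibrated with a 1 kg reference indicates 1 kg in both places:
print(w_pole / g_pole, w_peru / g_peru)
```

The indicated mass is the same everywhere, which is exactly why it is the useful quantity for buying sugar.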


All I need to do when I move to Jamaica (would that I could…) is recalibrate my scales - and my coffee will taste just as sweet as before. Well - with Blue Mountain I might not need sugar but that's another story.


So there it is. We use the kilogram because it is a more useful metric in "daily life". The only time we care about weight is when we're about to snap the cables in the elevator (too much sweetened coffee?) or have some other engineering task where we care about the actual force of gravity (as opposed to the quantity of material).


So why don't we call it "mass"? Well, according to http://www.etymonline.com/index.php?term=weigh, "weight" is a very old word,




The original sense was of motion, which led to that of lifting, then to that of "measure the weight of." The older sense of "lift, carry" survives in the nautical phrase weigh anchor.



Before Newton, the concept of inertia didn't exist; so the distinction between mass and weight made no sense when the word was first introduced. And we stuck with it...


quantum interpretations - What is the splitting structure of a state in thermal equilibrium in MWI?


What is the splitting structure of a state in thermal equilibrium in the many worlds interpretation? This is a mixed state, but we can perform a purification of it by doubling the system and forming a pure entangled state between both systems. Does every pure state admit a natural splitting structure?




optics - Why does light bend?


I have just read about the dispersion of light by a prism, and the thing I do not understand is: why does the light bend at all, through prisms and slabs?


I came to know that red light has the longest wavelength, and then I read a formula: energy is inversely proportional to wavelength. That means red light carries the least energy, and it bends the least. Why? Why does it not bend as much as violet? (I know violet has more energy, but what makes it bend more?)


(P.S. Try not to explain using Snell's law.)




quantum field theory - CP violation from the Electroweak SU(2)$_{weak,flavor}$ by $int theta F wedge F $


Question: Why there is NO Charge-Parity (CP) violation from a potential Theta term in the electroweak SU(2)$_{weak,flavor}$ sector by $\theta_{electroweak} \int F \wedge F$?



(ps. an explicit calculation is required.)




Background:


We know that for a non-Abelian gauge theory, the $F \wedge F $ term is nontrivial and breaks $CP$ symmetry (and thus breaks $T$ symmetry by the $CPT$ theorem); it is this term: $$ \int F \wedge F $$ with field strength $F=dA+A\wedge A$.


$\bullet$ SU(3)$_{strong,color}$ QCD:


To describe strong interactions of gluons (which couple quarks), we use QCD with gauge fields of non-Abelian SU(3)$_{color}$ symmetry. There is an extra term in the QCD Lagrangian: $$ \theta_{QCD} \int G \wedge G =\theta_{QCD} \int d^4x \, G_{\mu\nu}^a \tilde{G}^{\mu\nu,a} $$ where any nonzero $\theta_{QCD}$ breaks $CP$ symmetry. (p.s. and there we have the strong CP problem).


$\bullet$ Compare the strong interaction $\theta_{QCD}$ to the U(1)$_{em}$ $\theta_{QED}$: For U(1) electromagnetism, even if we have $\theta_{QED} \int F \wedge F$, we can rotate this term away and absorb it into the masses of the fermions that couple to U(1)$_{em}$(?). For SU(3) QCD, unlike U(1) electromagnetism, if the quarks are not massless, this $\theta_{QCD}$ term cannot be rotated away(?) to a trivial $\theta_{QCD}=0$.


$\bullet$ SU(2)$_{weak,flavor}$ electro-weak:


To describe electroweak interactions, we again have gauge fields of the non-Abelian SU(2)$_{weak,flavor}$ symmetry. Potentially this extra term in the electroweak Lagrangian can break $CP$ symmetry (and thus break $T$ symmetry by the $CPT$ theorem): $$ \theta_{electroweak} \int F \wedge F =\theta_{electroweak} \int d^4x\, F_{\mu\nu}^a \tilde{F}^{\mu\nu,a}, $$ where the three component gauge fields $A$ of SU(2) are ($W^{1}$,$W^{2}$,$W^{3}$), or equivalently the ($W^{+}$,$W^{-}$,$Z^{0}$) of the W and Z bosons.


Question [again, as at the beginning]: We have only heard of the CKM matrix in the weak SU(2) sector breaking $CP$ symmetry. Why is there NO CP violation from a potential theta term of the electroweak SU(2)$_{weak,flavor}$ sector, $\theta_{electroweak} \int F \wedge F$? Hint: in other words, how should we rotate $\theta_{electroweak}$ to a trivial $\theta_{electroweak}=0$? ps. I foresee a reason already, but I wish for an explicit calculation to be carried out. Thanks a lot!





radiation thermodynamics paradox



This question is concerned with a thermodynamic paradox for radiating bodies and radiation in a cavity of a specific shape.


Consider two nested shells that are axisymmetric ellipsoids with the same two foci, A and B, as shown in the figure (line AB is the axis of symmetry). Cut the system along the vertical plane of symmetry, remove the right side of the outer shell, and remove the left half of the inner shell. Then connect the two halves with a vertical surface, as shown in the figure, to make it a continuous enclosure. The result is a figure of rotation shown by the thick black line in the figure.


Next, make the inner surface of it a perfect mirror. The property of such a cavity is that each ray emitted from point B comes to point A; but not each ray emitted from point A comes to point B - some rays emitted from A (shown in blue) come back to A.


Now, put two small black bodies (say, two spheres of some small radius) at points A and B. Thermodynamic equilibrium requires that eventually the temperatures of the two spheres equilibrate. However, according to the geometric properties of this cavity, all energy emitted from B comes to A but only a fraction of energy emitted from A comes to B; so the equality of temperatures is not consistent with balance of emitted and absorbed power. How to resolve this paradox?


[figure: the mirror cavity formed from the two confocal ellipsoid halves, with foci A and B]




Sunday, May 24, 2015

spacetime dimensions - Is there any thing composed of elementary particles in this world that is not 3 dimensional?



Is there anything composed of elementary particles in this world that is not 3-dimensional? I know that there is graphene, which is a single atom thick. Is there anything in this world that has no depth? So I guess what I am asking is: is there an object that does not have volume?



Answer



If by object you mean something composed of elementary particles, then there are no two-dimensional objects, due to the uncertainty principle.



If we take the direction normal to the surface to be the $z$ axis then the uncertainty principle tells us that:


$$ \sigma_z \sigma_{p_z} \ge \frac{\hbar}{2} $$


For an object to become two-dimensional would require $\sigma_z \rightarrow 0$, which implies $\sigma_{p_z} \rightarrow \infty$ and therefore requires infinite energy.
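As a rough numerical illustration of this bound (a sketch in SI units; the chosen widths are arbitrary examples, not from the answer):

```python
hbar = 1.054571817e-34  # reduced Planck constant, J*s

def min_sigma_pz(sigma_z):
    """Heisenberg lower bound on the momentum spread for a given spatial spread sigma_z."""
    return hbar / (2.0 * sigma_z)

# Squeezing the object into an ever thinner sheet blows up the momentum spread:
for sigma_z in (1e-10, 1e-13, 1e-16):  # meters
    print(f"sigma_z = {sigma_z:.0e} m  ->  sigma_pz >= {min_sigma_pz(sigma_z):.2e} kg*m/s")
```

Each factor of 1000 reduction in $\sigma_z$ raises the minimum momentum spread by the same factor, so the kinetic energy has no finite limit as $\sigma_z \to 0$.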


However there are many examples of systems that are approximately two dimensional. Graphene would be a good example.


Even if you're willing to relax the requirement that the object be something physical, I'm still not sure anything can be truly two-dimensional. The example that springs to mind is an event horizon, but this is a classical concept, and quantum gravity effects may well blur it into a region of finite volume. I suspect quantum mechanics will forbid anything from being truly two-dimensional.


quantum field theory - Zero modes ~ zero eigenvalue modes ~ zero energy modes?


There have been several Phys.SE questions on the topic of zero modes.



Here I would like to understand further whether "zero modes" may have physically different interpretations and what their consequences are, or how these issues really are the same, related, or different. There are at least 3 relevant issues I can come up with:


(1) Zero eigenvalue modes


By definition, zero modes are zero-eigenvalue modes, i.e. modes $\Psi_j$ with zero eigenvalue for some operator $O$. Say,



$$O \Psi_j = \lambda_j \Psi_j,$$ with some $\lambda_a=0$ for some $a$.


$O$ can be the Dirac operator acting on some fermion fields, such as $$(i\gamma^\mu D_\mu(A,\phi)-m)\Psi_j = \lambda_j \Psi_j,$$ where there may be a nontrivial gauge profile $A$ and soliton profile $\phi$ in spacetime. If a zero mode exists, then $\lambda_a=0$ for some $a$. In this case, however, as far as I understand, the energy of the zero modes may not be zero. Such a zero mode contributes nontrivially to the path integral, as $$\int [D\Psi][D\bar{\Psi}] e^{iS[\Psi]}=\int [D\Psi][D\bar{\Psi}] e^{i\int\bar{\Psi}(i\gamma^\mu D_\mu(A,\phi)-m)\Psi } =\det(i\gamma^\mu D_\mu(A,\phi)-m)=\prod_j \lambda_j.$$ If there exists $\lambda_a=0$, then we need to be very careful about the possible long-range correlation of $\Psi_a$, as seen from the path-integral partition function (any comments at this point?).


(2) Zero energy modes


If the operator $O$ is precisely the Hamiltonian $H$, i.e. the $\lambda_j$ are energy eigenvalues, then the zero modes become zero-energy modes: $$ H \Psi_j= \lambda_j \Psi_j, $$ if there exist some $\lambda_a=0$.


(3) Zero modes $\phi_0$ and conjugate momentum winding modes $P_{\phi}$


In the chiral boson theory or heterotic string theory, the bosonic field $\Phi(x)$ $$ \Phi(x) ={\phi_{0}}+ P_{\phi} \frac{2\pi}{L}x+i \sum_{n\neq 0} \frac{1}{n} \alpha_{n} e^{-in x \frac{2\pi}{L}} $$ contains zero mode $\phi_0$.




Thus: are issues (1), (2) and (3) the same, related, or different physical issues? If they are the same, why are they the same? If they are different, how do they differ?


I would also like to know, when people consider various contexts, which of these issues they are really dealing with: for example, the Jackiw-Rebbi model, the Jackiw-Rossi model, the Goldstone-Wilczek current computing an induced quantum number under a soliton profile, Majorana zero-energy modes such as in the Fu-Kane model (arXiv:0707.1692) and Ivanov's half-quantum vortices in p-wave superconductors (arXiv:cond-mat/0005069), or the issue of fermion zero modes under a QCD instanton as discussed in Sidney Coleman's book ``Aspects of Symmetry''.


ps. since this question may be a bit too broad, it is totally welcomed that anyone attempts to firstly answer the question partly and add more thoughts later.





Quantum uncertainty of particle falling in black hole


A stationary observer at infinity sees a particle of mass m falling in a supermassive Schwarzschild black hole. He observes an increasing redshift and sees the particle ceasing to progress when it approaches the black hole's horizon. What happens to the positional uncertainty of this particle in the reference frame of the distant observer?


A straightforward scaling argument (inserting the Hawking temperature into the equation for the thermal de Broglie wavelength for a particle of mass m) yields a thermal areal uncertainty scaling as the black hole circumference times the particle's Compton wavelength.


Is this the correct limiting behavior?





quantum field theory - Green's function for adjoint Dirac Equation


If $S_F(x-y)$ is the Green's function for the Dirac operator $(i\gamma^\mu\partial_\mu-m)$, that is, I assume the following matrix equation holds: $$ (i\gamma^\mu\partial_\mu-m)S_F(x-y)=i\delta(x-y) $$


The adjoint Dirac equation is: $$ -i\partial_\mu\bar{\psi}\gamma^\mu -m\bar\psi=0 $$


I am looking for the Green's function of the above equation in terms of $S_F(x-y)$. That is, if $F[S]$ is some function of $S$ (which may include complex conjugation, transposition, etc.), I am looking for an $F[S]$ such that this equation holds: $$ -i\partial_\mu F[S]\gamma^\mu -mF[S]=i\delta(x-y) $$


What I have done so far:




  1. Dagger this equation: $(i\gamma^\mu\partial_\mu-m)S_F(x-y)=i\delta(x-y)$ and get $$(-i\partial_\mu S_F(x-y)^\dagger \gamma^{\mu\dagger} - S_F(x-y)^\dagger m)=-i\delta(x-y)$$

  2. Multiply by $\gamma^0$ on the right to get: $$ (-i\partial_\mu [-\bar{S_F}(x-y)] \gamma^{\mu} - [-\bar{S_F}(x-y)] m)=i\delta(x-y)\gamma^0 $$

  3. So $[-\bar{S_F}(x-y)]$ almost solves this equation, except you get a factor of $\gamma^0$ on the right which I am not sure how to handle.



Answer



The solution to the problem can be found in the nature of the propagator of the Dirac equation: it is a matrix with two spinor indices, i.e.


$$S_F(x-y)=S_F(x-y)_{\alpha\beta}.$$


In step 2, you have treated the propagator as if it were a spinor with a single index, which is not correct. Avoiding this, and instead multiplying the equation by $\gamma^0$ from the left, leaves you with the result


$$F(S)=\gamma^0 S^\dagger \gamma^0=\bar{S}.$$


The last equality sign is consistent with the reference given in one of the comments and with the definition of the adjoint of a matrix discussed in this question.
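The key identity behind this answer, $\gamma^0 (\gamma^\mu)^\dagger \gamma^0 = \gamma^\mu$, can be checked numerically (a sketch in the Dirac representation; the identity holds in any representation where $\gamma^0$ is Hermitian and the $\gamma^i$ anti-Hermitian):

```python
import numpy as np

# Dirac (standard) representation of the gamma matrices.
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),   # Pauli x
         np.array([[0, -1j], [1j, 0]]),               # Pauli y
         np.array([[1, 0], [0, -1]], dtype=complex)]  # Pauli z

g0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

# Check gamma^0 (gamma^mu)^dagger gamma^0 = gamma^mu for mu = 0..3.
for mu, g in enumerate(gammas):
    assert np.allclose(g0 @ g.conj().T @ g0, g), f"identity fails for mu={mu}"
print("gamma^0 gamma^mu^dagger gamma^0 = gamma^mu holds for mu = 0..3")
```

This is exactly the step that lets one trade the dagger of the Dirac operator for the barred propagator $\bar S = \gamma^0 S^\dagger \gamma^0$.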



Saturday, May 23, 2015

Streamlines and line of flow of fluid particles


I am very confused with this; can someone clarify the problems given below?

  1. Can the streamlines of a fluid show the position of a particle at a given time?

  2. I know that streamlines cannot intersect, because at a specific instant the particle reaching the intersection would have two different directions of motion (in the steady flow of particles). But can I say the same thing about the LINE OF FLOW of the particles in turbulent flow? Why or why not?

  3. Is it necessary that a streamLINE be a perfectly straight line (to a good approximation)? If not, why is it called a line?



Answer




  • In a steady flow, streamlines correspond to the trajectories of "fluid particles" or parcels. (Not to be confused with those of the real "particles" that are the molecules.)


  • But if the flow is not steady, this is wrong. You might even see apparent sources and sinks in the streamlines that do not exist in the flow.

  • Steady-flow streamlines cannot cross because of basic continuity and conservation laws: otherwise a particle would have to teleport itself to the other side of the obstacle that is the crossing flow ;-)

  • A streamline is a curve, not especially a straight line.

  • More on wikipedia.
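To make the unsteady-flow caveat concrete, here is a minimal numerical sketch (my own toy flow, not from the answer above): for the time-dependent velocity field $\vec u = (1, t)$, the pathline of a particle released at the origin is the parabola $y = x^2/2$, while the instantaneous streamline through its endpoint at $t=2$ is the straight line $y = 2x$; so the two curves differ.

```python
import numpy as np

# Unsteady 2D flow u = (1, t): the velocity direction changes in time.
def velocity(t):
    return np.array([1.0, t])

# Pathline: Euler-integrate a particle released at the origin at t = 0.
dt = 1e-3
pos = np.zeros(2)
path = [pos.copy()]
for t in np.arange(0.0, 2.0, dt):
    pos = pos + velocity(t) * dt
    path.append(pos.copy())
path = np.array(path)

# The pathline satisfies dy/dx = t = x, i.e. y = x**2/2, while the
# streamline frozen at t = 2 through the endpoint is the line y = 2x.
x_end, y_end = path[-1]
print(f"pathline endpoint: x = {x_end:.3f}, y = {y_end:.3f} (compare x^2/2 = {x_end**2/2:.3f})")
```

In a steady flow the two curves would coincide; here they only touch at the particle's current position.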


homework and exercises - Force on Earth due to Sun's radiation pressure


I have been asked by my Classical Electrodynamics professor to calculate the force that the Sun exerts on the Earth's surface due to its radiation pressure, supposing that all radiation is absorbed and a flat Earth, and knowing only that the magnitude of the Poynting vector at the surface is $\left\langle {\bar S} \right\rangle = 13000\ \mathrm{W/m^2}$, using:



  1. Maxwell's stress tensor.

  2. The absorbed momentum.


Using Maxwell's stress tensor I get $35.6 \cdot 10^8\ \mathrm{N}$, which seems plausible since we consider a flat Earth and no radiation reflection. But I'm lost on how to obtain an answer using the variation of electromagnetic momentum.


I think I should start by writing



$$\vec F = \frac{d}{{dt}}{\vec p_{EM}} = \frac{d}{{dt}}\int\limits_V {{\varepsilon _0}{\mu _0}\left( {\vec E \times \vec H} \right)dV}$$


But, how do I take it from here?



Answer



$\vec S$ is the flux, so you need an area integral over the surface of the Earth.


The pressure $P$ you will have is force per area, $F/A$. The pressure is the flux $S$ divided by the speed of light, since a photon of energy $hf$ carries momentum $hf/c$.


Then you should integrate the pressure over the area (i.e. multiply by the cross section of the Earth) to get the force: $$ \vec F = \frac{\pi R^2 \vec S}{c} $$
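Plugging numbers into this formula (a quick sketch; the Earth-radius value is my assumption, as the problem only quotes the flux) gives a force of order $10^9\ \mathrm{N}$, the same order of magnitude as the stress-tensor result quoted in the question:

```python
import math

S = 13000.0   # W/m^2, flux quoted in the problem statement
R = 6.371e6   # m, mean Earth radius (assumed; not given in the problem)
c = 2.998e8   # m/s, speed of light

# Force = (cross-sectional area) * (radiation pressure S/c), full absorption.
F = math.pi * R**2 * S / c
print(f"F = {F:.2e} N")
```

With these inputs $F \approx 5.5 \cdot 10^9\ \mathrm{N}$; the small discrepancy from the stress-tensor figure depends on the constants one plugs in.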


quantum mechanics - Don't understand the integral over the square of the Dirac delta function


In Griffiths' Intro to QM [1] he gives the eigenfunctions of the Hermitian operator $\hat{x}=x$ as being


$$g_{\lambda}\left(x\right)~=~B_{\lambda}\delta\left(x-\lambda\right)$$


(cf. last formula on p. 101). He then says that these eigenfunctions are not square integrable because


$$\int_{-\infty}^{\infty}g_{\lambda}\left(x\right)^{*}g_{\lambda}\left(x\right)dx ~=~\left|B_{\lambda}\right|^{2}\int_{-\infty}^{\infty}\delta\left(x-\lambda\right)\delta\left(x-\lambda\right)dx ~=~\left|B_{\lambda}\right|^{2}\delta\left(\lambda-\lambda\right) ~\rightarrow~\infty$$


(cf. second formula on p. 102). My question is, how does he arrive at the final term, more specifically, where does the $\delta\left(\lambda-\lambda\right)$ bit come from?


My total knowledge of the Dirac delta function was gleaned earlier on in Griffiths and extends to just about understanding


$$\tag{2.95}\int_{-\infty}^{\infty}f\left(x\right)\delta\left(x-a\right)dx~=~f\left(a\right)$$



(cf. second formula on p. 53).


References:



  1. D.J. Griffiths, Introduction to Quantum Mechanics, (1995) p. 101-102.



Answer



You need nothing more than your understanding of $$ \int_{-\infty}^\infty f(x)\delta(x-a)dx=f(a) $$ Just treat one of the delta functions as $f(x)\equiv\delta(x-\lambda)$ in your problem. So it would be something like this: $$ \int\delta(x-\lambda)\delta(x-\lambda)dx=\int f(x)\delta(x-\lambda)dx=f(\lambda)=\delta(\lambda-\lambda) $$ So there you go.
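To see the divergence more concretely, one can regularize the delta function as a narrow normalized Gaussian of width $\epsilon$ (my choice of regularization, not Griffiths'); the "norm" $\int \delta_\epsilon(x-\lambda)^2\,dx = 1/(2\epsilon\sqrt{\pi})$ then blows up as $\epsilon \to 0$:

```python
import numpy as np

def delta_eps(x, lam, eps):
    """Nascent delta function: a normalized Gaussian of width eps centered at lam."""
    return np.exp(-(x - lam)**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-5.0, 5.0, 400001)
dx = x[1] - x[0]
for eps in (0.5, 0.1, 0.02):
    # Numerically approximates 1/(2*eps*sqrt(pi)), which diverges as eps -> 0.
    norm = np.sum(delta_eps(x, 0.0, eps)**2) * dx
    print(f"eps = {eps:5.2f}   integral of delta_eps^2 = {norm:8.3f}")
```

Both this integral and the peak value $\delta_\epsilon(0) = 1/(\epsilon\sqrt{2\pi})$ scale like $1/\epsilon$, which is the precise sense in which $\delta(\lambda-\lambda) \rightarrow \infty$.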


Friday, May 22, 2015

optics - How to optically rotate images in small increments (for eyeglasses)?


As the result of an accident, one or more nerves that control the rotation of my left eye were damaged. The result is that my left eye views the world rotated clockwise several degrees, compared to my normal (and dominant) right eye. I'm told that it is not possible to construct a lens that rotates images, other than a camera lens that inverts an image a fixed 180 degrees. Has anyone created a portable optical system that rotates images in small adjustable increments?


As shown in the attached image, the yellow line represents the plane of vision in my normal right eye, and the red line represents the rotated, or tilted, vision in my left eye.






waves - Power of Shockwaves



My question is about shock waves: how powerful are they when they are created, and how do they lose their power?


Let's say that we have ground zero with 10 grams of TATP on it. The detonation velocity of TATP is 5,300 m/s. How can we calculate the pressure it will produce and the shock wave's power when it is detonated? Also, how can we calculate how fast the shock wave will lose power?




homework and exercises - Half Witt algebra



I have the following Lie algebra, generated by $\{L_n|n\geq 0\}$, satisfying the commutation rule $$ \Big[ L_i ,L_j \Big]=\frac18 \frac{(2i+2j-1)(2j-2i)}{(2j+1)(2i+1)}L_{i+j-1}-\frac12 \frac{(2i+2j+1)(2j-2i)}{(2j+1)(2i+1)}L_{i+j}.$$ My question is whether the algebra defined above is a half Witt algebra.




I thought we could conclude by inspecting the commutator relation of the $L_n$, but it can happen that there exists a change of basis, say $\{V_n| n\geq 0\}$, with $$V_n=\sum_{\text{finite sum over some index}}L_k, $$ such that $$\Big[ V_i ,V_j \Big]=(i-j)V_{i+j}~?$$
How can I conclude whether or not it is a Witt algebra?



Answer



We can start with the obvious rescaling $$ \tilde{L}_i = (2i + 1)L_i\,, $$ which turns the algebra into $$ [\tilde{L}_i,\tilde{L}_j] = (i-j)\left(\tilde{L}_{i+j}-\frac{1}{4}\tilde{L}_{i+j-1}\right)\,. $$ Then one can tentatively postulate a change of basis of the form $$ V_i = \sum_{m=0}^i a_m\,\tilde{L}_m\,. $$ Let's work out the algebra; say without loss of generality that $i>j$. $$ [V_i,V_j] = \sum_{m=0}^i\sum_{n=0}^j a_ma_n\,(m-n)\left(\tilde{L}_{m+n}-\frac{1}{4}\tilde{L}_{m+n-1}\right)\,, $$ and change summation variables to $(m,n) \to (M = m+n, N = m-n)$: $$ \begin{aligned} \,[V_i,V_j] &= \sum_{M=0}^{i+j}\sum_{\substack{N=\max(-M,M-2j)\\N\equiv M \mod 2}}^{\min(M,2i-M)}a_{\frac{M+N}{2}}a_{\frac{M-N}{2}} N\left(\tilde{L}_{M}-\frac{1}{4}\tilde{L}_{M-1}\right)=\\&= \sum_{M=1}^{i+j}\sum_{\substack{N=\max(-M,M-2j)\\N\equiv M \mod 2}}^{\min(M,2i-M)}a_{\frac{M+N}{2}}a_{\frac{M-N}{2}} N\,\tilde{L}_{M}-\frac{1}{4}\sum_{\tilde{M}=0}^{i+j-1}\sum_{\substack{N=\max(-\tilde{M}-1,\tilde{M}+1-2j)\\N\equiv \tilde{M}+1 \mod 2}}^{\min(\tilde{M}+1,2i-\tilde{M}-1)}a_{\frac{\tilde{M}+N+1}{2}}a_{\frac{\tilde{M}-N+1}{2}} N\,\tilde{L}_{\tilde{M}} \end{aligned} $$ where I replaced $\tilde{M} +1 = M$ in the second term. Now we can group everything by simply dropping the $\sim$ and obtain an equation of the form $$ [V_i,V_j] = \sum_{M=0}^{i+j} b_M^{(i,j)}\,\tilde{L}_M\,, $$ for some coefficients $b_M^{(i,j)}$ defined by the whole mess above. Finally, the equation sought is $$ b_M^{(i,j)} = (i-j)\, a_M\,. $$ I have no idea whether this equation has solutions, but it's more concrete, so perhaps one can try and see if there are solutions for the first two or three values of $i,j$. If you find solutions, there might be a way to prove that one always exists. If you find an obstruction, it might be that the change of basis I chose was not general enough, or that there are indeed no solutions. The nonexistence of solutions would imply that the algebras are not isomorphic.
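The rescaling step above can be verified symbolically, e.g. with sympy (a sketch; it checks only the claimed coefficients of $\tilde{L}_{i+j-1}$ and $\tilde{L}_{i+j}$):

```python
import sympy as sp

i, j = sp.symbols('i j')

# Coefficients of L_{i+j-1} and L_{i+j} in the original bracket [L_i, L_j].
c1 = sp.Rational(1, 8) * (2*i + 2*j - 1) * (2*j - 2*i) / ((2*j + 1) * (2*i + 1))
c2 = -sp.Rational(1, 2) * (2*i + 2*j + 1) * (2*j - 2*i) / ((2*j + 1) * (2*i + 1))

# Rescale Lt_i = (2i+1) L_i, so [Lt_i, Lt_j] = (2i+1)(2j+1) [L_i, L_j],
# and rewrite L_M = Lt_M / (2M+1).
coeff_Lt_lower = sp.simplify((2*i + 1) * (2*j + 1) * c1 / (2*(i + j - 1) + 1))
coeff_Lt_upper = sp.simplify((2*i + 1) * (2*j + 1) * c2 / (2*(i + j) + 1))

# Expect [Lt_i, Lt_j] = (i-j) * (Lt_{i+j} - Lt_{i+j-1}/4):
assert sp.simplify(coeff_Lt_lower + (i - j) / sp.Integer(4)) == 0
assert sp.simplify(coeff_Lt_upper - (i - j)) == 0
print("rescaling verified")
```

One could extend the same approach to search for the coefficients $a_m$ of the postulated change of basis for small $i, j$.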


This is not by any means a complete answer, but hopefully it goes in the right direction.


electromagnetism - How do I derive the Lorenz gauge from the continuity equation?


I was reading my old electromagnetics book (Elements of Electromagnetics, by Sadiku, 3rd edition) and after the author explained what the Lorenz gauge is mathematically and why it is useful in decoupling the scalar electrical potential from the vector magnetic potential to produce the wave equations, the author made a side comment below eq. (9.52) that the Lorenz gauge $$\nabla \cdot \vec A = -\mu\varepsilon \dfrac{\partial V}{\partial t}$$ can be derived from the continuity equation.


I don't have a background in relativity. I am an electrical engineering student. So given these limitations, how would I go about deriving the Lorenz gauge from the continuity equation?


Note: I am sticking to the conventions of my book in writing equations.




Wednesday, May 20, 2015

experimental physics - Photomultiplier/ voltage divider troubleshooting


I have a Photonis XP5301 PMT to use for fast neutron detection (the detector is optically glued to the PMT window). I can't quite figure out why it's not producing a spectrum. First of all, I found multiple different values for the operating voltage on the internet, but I think it's either -800 V or -1000 V. I used the same setup with a NaI(Tl) scintillation detector + attached PMT, and it gave proper results, but mine does not. What could I do to locate the issue? The voltage divider I'm using is a Photonis VD202K/40. Also, while the NaI(Tl) detector's PMT has a power cable that connects to the amplifier, my PMT doesn't have a power cable. What could be the issue here? What am I not addressing? Thank you!




Answer



You say you are using the setup for a neutron detector.


What is the material you are using? What is the expected light output (in photons / event) and how fast would they appear? When you use the NaI based detector, what is your source? What is the activity?


Here are the troubleshooting steps I would take:



  1. To make sure the PMT is working, I would hook the output (straight from the PMT anode) up to a scope, into a low-ish impedance (1 k to ground). Turn the gain of the scope to maximum (or 10 mV / cm). Slowly increase the operating voltage (while PMT is in darkness); eventually, you should see "grass" on the baseline of the scope: this is the signal from individual (thermally induced) photoelectrons.

  2. Turn the voltage down about 100 V from the point where noise started appearing, and introduce a very weak flashing LED into the enclosure of the PMT (if you can see the flash with the naked eye it's too bright for the PMT). You should now see a massive spike of signal for every flash of the LED. Turn the HV off and demonstrate to yourself that the signal is gone (so there is no cross talk)

  3. Now connect the PMT output to a shaping amplifier (it's not clear whether you have one - if you don't, then that might be your problem). The shaping amplifier should give a "slow" output at every pulse coming in - this is usually needed as the input to a spectrum analyzer. Check the output of the amplifier on your scope (you will need a much less sensitive setting). Again, using the LED could be very helpful here. If you don't get a signal from the amplifier, check its power etc.

  4. Now that you have a PMT with roughly appropriate voltage and a working amplifier, connect to the spectrum analyzer. Introduce a robust source (so you get good signal) and slowly sweep the HV in the range you found above, until you see triggers on the spectrum



When you have done all the above, and you found a place where things don't work as expected, you've likely narrowed your problem significantly. If you still have trouble, leave a comment.


classical mechanics - Moment of a force about a given axis (Torque) - Scalar or vectorial?

I am studying Statics and saw that: The moment of a force about a given axis (or Torque) is defined by the equation: $M_X = (\vec r \times \...