Uncharted Territory

April 8, 2016

Missing Mass, the Absent Planet X and Yet More Comments on Trujillo & Sheppard and Batygin & Brown

Filed under: Media, Orbital dynamics, Physics, Science and the media — Tim Joslin @ 5:33 pm

Since my two previous posts attempting to debunk the idea of “Planet X” – a supposed large planet in the outer reaches of the Solar System – the occasional media reference has informed me that teams of researchers and various items of multi-million pound equipment are apparently employed chasing the wild Planet X goose.  Indeed, as I go to press, Scientific American has just emailed me a link to an article reporting the latest developments in the search.  Then, yesterday, New Scientist reported on speculation as to where Planet X (or “Planet Nine” as they call it) might have come from.  A paper New Scientist refer to has a bearing on my own conclusions so I’m adding a note about it at the end of this piece.

I had some further thoughts some weeks ago, and it’s time I cleared up a loose end by writing them up.

My Original Proposed Explanation

Let’s recap.  The story so far is that, based on certain characteristics of the orbits of Sedna and a dozen or so other distant minor planets – often referred to as trans-Neptunian objects or TNOs – several groups of researchers have proposed a “Planet X” or sometimes “Planet Nine”, Pluto, the ninth planet for a certain generation, having been relegated to mere “minor planet” status. As I consider the demotion of Pluto to be utterly ridiculous, I’m going to stick to the terminology “Planet X” for the hypothetical distant planet.  You can take “X” to be the Roman numeral if you want.

I was immediately sceptical about the existence of Planet X.  Some other explanation for the TNO orbits seemed more probable to me.  Planet X would be exceptional, compared to the eight (or nine) known planets, not only in its distance from the Sun, but also in the plane of its orbit.  To explain the strange features of the orbits of the minor planets by the known “Kozai mechanism” of gravitational “shepherding” of smaller objects by a large body, Planet X would have to orbit perpendicular to the plane of the Solar System, within a few degrees of which the planes of the orbits of all the other planets lie.

Some weeks ago then, in my first post on the subject, I reviewed what had been written on the subject of Planet X.  I think now that I was perhaps overly influenced by the Scientific American article on the subject and considered much the most important aspect of the minor planets’ orbits to be their near 0˚ arguments of perihelion (AOPs).  That is, they cross the plane of the Solar System roughly when they are nearest the Sun.

On reflection, I was perhaps wrong to be so dismissive of the eccentricity of the minor planets’ orbits.  All orbits are eccentric, I pointed out.  But the minor planets’ orbits are really quite eccentric.  Something may be causing this eccentricity.

I also think it is important that the minor planets’ orbits are highly inclined to the plane of the Solar System compared to those of the inner planets, but they are nevertheless less inclined than random, i.e. in most cases somewhat less than 30˚.

I went on to suggest that perhaps something (other than Planet X) was pulling the minor planets towards the plane of the Solar System.   I suggested it was simply the inner planets, since there would be a component of the gravitational attraction of the minor planets perpendicular to the plane of the Solar System.  I included a diagram which I reproduce once again:

160205 Planet X slash 9

In my second post about Planet X a few days later, I looked more closely at the original scientific papers, in particular those by Trujillo & Sheppard and Batygin & Brown.  I wondered why my suggestion had been rejected, albeit implicitly.  To cut a long story short, the only evidence that the minor planet orbits can’t be explained by the gravity of the inner eight planets (and the Sun) is computer modelling described in the paper by Trujillo & Sheppard.  I wondered if this could have gone wrong somehow.

Problems with Naive Orbital Flattening

Let’s term my original explanation “Naive Orbital Flattening”.  There are some issues with it:

First, if the minor planets are “falling” towards the plane of the Solar System, as in my figure, as well as orbiting its centre of gravity, they would overshoot and “bounce”.  They would have no way of losing the momentum towards the plane of the Solar System, so, after reaching an inclination of 0˚, their inclination would increase again on the opposite side of the plane as it were (I say “as it were” since the minor planets would cross the plane of the Solar System twice on each orbit, of course).

Second, mulling the matter over, there is no reason why orbital flattening wouldn’t have been detected by computer modelling.  Actually, I tell a lie; there is a reason.  The reason is that the process would be too slow.  Far from bouncing, it turns out that the minor planets would not have had time for their orbital inclinations to decline to 0˚ even once.  I did some back-of-the-envelope calculations – several times in fact – and if you imagine the minor planets falling towards the plane of the Solar System under the component of the inner planets’ gravitational pull perpendicular to the plane, and give yourself 4 billion years, the minor planets would only have fallen a small fraction of the necessary distance!

Third, we have this issue of the AOP.  The AOPs of the inner planets precess because of the gravitational effect of the other planets as they orbit the Sun (with some tweaks arising from relativity).  It’s necessary to explain why this precession wouldn’t occur for the minor planets.

Missing Mass

However you look at it, explaining the orbits of the minor planets must involve finding some mass in the Solar System!  One possible explanation is Planet X.  But could there be another source of missing mass?

Well, trying to rescue my theory, I was reading about the history of the Solar System.  As you do.

It turns out that the Kuiper Belt, beyond Neptune, now contains only a fraction of the mass of the Earth.  At one time it must have held at least 30 times the mass of the Earth, in order for the large objects we see today to form at all.  Trouble is, the consensus is that all that material either spiralled into the Sun or was driven into interstellar space, depending on particle size, by the effect of solar radiation and the solar wind.

The science doesn’t seem done and dusted, however.  Perhaps there is more mass in the plane of the Solar System than is currently supposed.  Stop Press: Thanks to New Scientist I’m alerted to a paper that suggests exactly that – see the Addendum at the end of this piece.

It seems to me a likely place for particles to end up is around the heliopause, about 125 AU (i.e. 125 times the radius of the Earth’s orbit) from the Sun, because this is where the solar wind collides with the interstellar medium.  You can imagine that forces pushing particles – of a certain range of sizes – out of the Solar System might at this point balance those pushing them back in.

Sophisticated Orbital Flattening

OK, there’s a big “if”, but if there is somewhat more mass – the remains of the protoplanetary disc – in the plane of the Solar System than is generally assumed, then it might be possible to explain the orbits of Sedna and the other TNOs quite neatly.  All we have to assume is that the mass is concentrated in the inner part of the TNOs’ orbits, let’s say from the Kuiper Belt through the heliopause at ~125 AU.

First, the AOPs of around 0˚ are even easier to explain than by the effects of the inner planets.  As with the inner planets, the mass would have its greatest effect on the TNOs when they are nearest perihelion, so would perturb the orbits most then, as discussed in my previous posts.  The improvement in the explanation is that there is no need to worry about AOP precession.  Because the mass is in a disc, and therefore distributed relatively evenly around the Sun, its rotation has no gravitational effect on the minor planets.  And it is the orbital motion of the other planets that causes each planet’s AOP precession.

Second, we need to observe that there is a trade-off between orbital inclination and eccentricity, as in the Kozai effect, due to conservation of angular momentum in the plane of the Solar System.  Thus, as the inclination of the TNOs’ orbits is eroded, their orbits become more eccentric.  This could have one of three possible consequences:

  • it could be that, as I concluded for the effects of the inner planets alone, there has not been time for the TNOs’ orbits to flatten to 0˚ inclination in the 4 billion or so years since the formation of the Solar System.
  • or, it could be that the TNOs we observe are doomed in the sense that their orbits will be perturbed by interactions with the planets if they stray further into the inner Solar System – assuming they don’t actually collide with one of the inner planets – and we don’t observe TNOs that have already been affected in this way.
  • or, it could be that the TNOs’ orbits eventually reach an inclination of 0˚ and “bounce” back into more inclined orbits.  The point is that the eccentricity of the orbits of such bodies would decline again, so we may not observe them so easily, since the objects are so far away we can only see them when they are closest to the Sun.

Which of these possibilities actually occurs would depend on the amount and distribution of the proposed additional mass I am suggesting may exist in the plane of the Solar System.  My suspicion is that the orbital flattening process would be very slow, but it is possible different objects are affected in different ways, depending on initial conditions, such as their distance from the Sun.
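The inclination–eccentricity trade-off behind these three possibilities is easy to sketch numerically.  In the Kozai mechanism the quantity √(1 − e²)·cos i – the component of the orbit’s specific angular momentum perpendicular to the plane, in suitable units – is approximately conserved, so eroding the inclination forces the eccentricity up.  The starting elements below are purely illustrative values, not data for any particular TNO:

```python
import math

def kozai_eccentricity(e0, i0_deg, i1_deg):
    """Eccentricity after inclination changes from i0 to i1, assuming the
    Kozai invariant sqrt(1 - e^2) * cos(i) is conserved."""
    lz = math.sqrt(1 - e0**2) * math.cos(math.radians(i0_deg))
    s = lz / math.cos(math.radians(i1_deg))  # new value of sqrt(1 - e^2)
    return math.sqrt(max(0.0, 1 - s**2))

# An illustrative TNO: modest eccentricity, 30-degree inclination.
e0, i0 = 0.1, 30.0
for i1 in (20.0, 10.0, 0.0):
    print(f"i = {i1:4.1f} deg  ->  e = {kozai_eccentricity(e0, i0, i1):.3f}")
```

On these made-up numbers, flattening the orbit from 30˚ to 0˚ drives the eccentricity from 0.1 up to about 0.5 – flattening and eccentricity-pumping go together, which is the point of the trade-off.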

Now I really will write to the scientists to ask whether this is plausible.  Adding some mass in the plane of the Solar System to Mercury symplectic integrator modelling would indicate whether or not Sophisticated Orbital Flattening is a viable hypothesis.

Addendum: I mentioned towards the start of this post that the search continues for Planet X.  I can’t help remarking that this doesn’t strike me as good science.  What research should be trying to do is explain the observations, i.e. the characteristics of the minor planets’ orbits, not trying to explain Planet X, which is as yet merely an unproven hypothetical explanation of those observations.  Anyway, this week’s New Scientist notes that:

“…the planet could have formed where we find it now. Although some have speculated that there wouldn’t be enough material in the outer solar system, Kenyon found that there could be enough icy pebbles to form something as small as Planet Nine in a couple of hundred million years (arxiv.org/abs/1603.08008).”

Aha!  Needless to say I followed the link provided by New Scientist and it turns out that the paper by Kenyon & Bromley does indeed suggest a mechanism for a debris disc at the right sort of distance in the Solar System.  They focus, though, on modelling how Planet X might have formed.  They find that it could exist, if the disc had the right characteristics, but it also may not have done.  It all depends on the “oligarchs” (seed planets) and the tendency of the debris to break up in collisions.  This is from their Summary (my explanatory comment in square brackets):

We use a suite of coagulation calculations to isolate paths for in situ production of super-Earth mass planets at 250–750 AU around solar-type stars. These paths begin with a massive ring, M0 >~ 15 M⊕ [i.e. more than 15 times the mass of the Earth], composed of strong pebbles, r0 ≈ 1 cm, and a few large oligarchs, r ≈ 100 km. When these systems contain 1–10 oligarchs, two phases of runaway growth yield super-Earth mass planets in 100–200 Myr at 250 AU and 1–2 Gyr at 750 AU. Large numbers of oligarchs stir up the pebbles and initiate a collisional cascade which prevents the growth of super-Earths. For any number of oligarchs, systems of weak pebbles are also incapable of producing a super-Earth mass planet in 10 Gyr.

They don’t consider the possibility that the disc itself could explain the orbits of the minor planets – and may indeed be where they originated in the first place.  In fact, the very existence of the minor planets could suggest there were too many “oligarchs” for a “super-Earth” to form.  Hmm!

 

February 13, 2016

Is Planet X Needed? – Further Comments on Trujillo & Sheppard and Batygin & Brown

Filed under: Media, Orbital dynamics, Physics, Science and the media — Tim Joslin @ 7:42 pm

In my last post, Does (Brown and Batygin’s) Planet 9 (or Planet X) Exist?, I ignored the media squall which accompanied the publication on 20th January 2016 of a paper in The Astronomical Journal, Evidence for a Distant Giant Planet in the Solar System, by Konstantin Batygin and Michael E Brown, and discussed the coverage of the issue in New Scientist (here [paywall] and here) and in Scientific American (here [paywall]).

The idea that there may be a Planet X is not original to the Batygin and Brown paper.  It was also proposed in particular by Chadwick A. Trujillo and Scott S. Sheppard in a Nature paper A Sedna-like body with a perihelion of 80 astronomical units dated 27th March 2014.  The New Scientist and Scientific American feature articles were not informed by Batygin and Brown.  Scientific American explicitly referenced Trujillo and Sheppard (as well as a paper by C and R de la Fuente Marcos).

A key part of the evidence for a “Planet X” is that for the orbits of a number of trans-Neptunian objects (TNOs) – objects outside the orbit of Neptune – including the minor planet Sedna, the argument of perihelion is near 0˚.  That is, they cross the plane of the planets near when they are closest to the Sun. The suggestion is that this is not coincidental and can only be explained by the action of an undiscovered planet, perhaps 10 times the mass of the Earth, lurking out there way beyond Neptune. An old idea, the “Kozai mechanism”, is invoked to explain how Planet X could be controlling the TNOs, as noted, for example, by C and R de la Fuente Marcos in their paper Extreme trans-Neptunian objects and the Kozai mechanism: signalling the presence of trans-Plutonian planets.

I proposed a simpler explanation for the key finding.  My argument is based on the fact that the mass of the inner Solar System is dispersed from its centre of gravity, in particular because of the existence of the planets. Consequently, the gravitational force acting on the distant minor planets can be resolved into a component towards the centre of gravity of the Solar System, which keeps them in orbit, and, when averaged over time and because their orbits are inclined to the plane of the Solar System, another component at 90˚ to the first, towards the plane of the orbits of the eight major planets:

160205 Planet X slash 9

My suggestion is that this second component will tend gradually to reduce the inclination of the minor planets’ orbits. Furthermore, the force towards the plane of the Solar System will be strongest when the minor planets are at perihelion on their eccentric orbits, not just in absolute terms, but also when averaged over time, taking into account varying orbital velocity as described by Kepler. This should eventually create orbits with an argument of perihelion near 0˚, as observed.
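The decomposition in the schematic can be checked numerically: treat the planets’ mass as a ring of point masses in the plane, compute the net pull on a minor planet sitting above that plane, and split it into a Sun-ward part and a residual.  Everything here is schematic – a single Jupiter-mass ring at 5 AU standing in for all the planets, units with G·M(Sun) = 1 and distances in AU – but the sign of the residual is the point:

```python
import math

# Schematic set-up, not real ephemerides: a ring of total mass 1e-3
# (solar masses) at 5 AU stands in for the planets; G*M_sun = 1.
RING_RADIUS, RING_MASS, N = 5.0, 1.0e-3, 1000

def ring_accel(pos):
    """Gravitational acceleration at pos due to the planetary ring alone."""
    ax = ay = az = 0.0
    m = RING_MASS / N
    for k in range(N):
        th = 2 * math.pi * k / N
        dx = RING_RADIUS * math.cos(th) - pos[0]
        dy = RING_RADIUS * math.sin(th) - pos[1]
        dz = -pos[2]
        d3 = (dx * dx + dy * dy + dz * dz) ** 1.5
        ax += m * dx / d3; ay += m * dy / d3; az += m * dz / d3
    return ax, ay, az

# A minor planet 80 AU from the Sun, 20 degrees above the plane.
inc = math.radians(20)
pos = (80 * math.cos(inc), 0.0, 80 * math.sin(inc))
a = ring_accel(pos)

# Split the ring's pull into a component towards the Sun (the
# "orbit-keeping" part) and a residual; the residual's z-component
# is negative, i.e. it points towards the plane.
r = math.sqrt(sum(c * c for c in pos))
radial = sum(ai * (-pi) / r for ai, pi in zip(a, pos))
residual_z = a[2] - radial * (-pos[2] / r)
print(f"residual z-component: {residual_z:.3e}")
```

The residual comes out negative (towards the plane) whenever the body is above the plane, which is exactly the second component drawn in the figure.  Its magnitude is tiny compared with the Sun-ward pull, which is why the timescale question matters.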

Has such an effect been taken into account by those proposing a Planet X?  The purpose of this second post on the topic is to look a little more closely at how the two main papers, Batygin & Brown and Trujillo & Sheppard tested for this possibility.

Batygin & Brown

The paper by Batygin and Brown does not document any original research that would have shown AOPs tending towards 0˚ without a Planet X by the mechanism I suggest.  Here’s what they say:

“To motivate the plausibility of an unseen body as a means of explaining the data, consider the following analytic calculation. In accord with the selection procedure outlined in the preceding section, envisage a test particle that resides on an orbit whose perihelion lies well outside Neptune’s orbit, such that close encounters between the bodies do not occur. Additionally, assume that the test particle’s orbital period is not commensurate (in any meaningful low-order sense—e.g., Nesvorný & Roig 2001) with the Keplerian motion of the giant planets.

The long-term dynamical behavior of such an object can be described within the framework of secular perturbation theory (Kaula 1964). Employing Gauss’s averaging method (see Ch. 7 of Murray & Dermott 1999; Touma et al. 2009), we can replace the orbits of the giant planets with massive wires and consider long-term evolution of the test particle under the associated torques. To quadrupole order in planet–particle semimajor axis ratio, the Hamiltonian that governs the planar dynamics of the test particle is [as close as I can get the symbols to the original]:

H = −¼ (GM/a) (1 − e²)^(−3/2) Σᵢ₌₁⁴ mᵢaᵢ² / (Ma²)

In the above expression, G is the gravitational constant, M is the mass of the Sun, mi and ai are the masses and semimajor axes of the giant planets, while a and e are the test particle’s semimajor axis and eccentricity, respectively.

Equation (1) is independent of the orbital angles, and thus implies (by application of Hamilton’s equations) apsidal precession at constant eccentricity… in absence of additional effects, the observed alignment of the perihelia could not persist indefinitely, owing to differential apsidal precession.” [my stress].

After staring at this for a bit I noticed that the equation does not include the inclination of the test particle’s orbit, just its semimajor axis (i.e. mean distance from the Sun) and eccentricity.  Then I saw that the text also only refers to the “planar dynamics of the test particle”, i.e. its behaviour in two dimensions, not three.
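As a sanity check, applying Hamilton’s equations to the quoted Hamiltonian gives the standard secular result that the perihelion precesses at a rate dϖ/dt = (3/4) n (1 − e²)⁻² Σᵢ (mᵢ/M)(aᵢ/a)², where n is the test particle’s mean motion.  Here’s a quick evaluation for Sedna-ish elements – my own rough numbers (a ≈ 506 AU, perihelion ≈ 76 AU), not figures from either paper:

```python
import math

# Giant-planet masses (solar masses) and semimajor axes (AU), approximate.
giants = [(9.54e-4, 5.20),   # Jupiter
          (2.86e-4, 9.58),   # Saturn
          (4.37e-5, 19.2),   # Uranus
          (5.15e-5, 30.1)]   # Neptune

def precession_period_yr(a_au, e):
    """Apsidal circulation period implied by the quadrupole Hamiltonian:
    d(pomega)/dt = (3/4) n (1 - e^2)^-2 * sum_i (m_i/M) (a_i/a)^2."""
    period_yr = a_au ** 1.5                 # Kepler's third law, M = M_sun
    n = 2 * math.pi / period_yr             # mean motion, rad/yr
    rate = 0.75 * n * (1 - e * e) ** -2 * sum(
        m * (ai / a_au) ** 2 for m, ai in giants)
    return 2 * math.pi / rate               # years per full circulation

# Sedna-ish orbit: a ~ 506 AU, perihelion ~ 76 AU => e ~ 0.85.
print(f"{precession_period_yr(506, 1 - 76 / 506):.2e} years")
```

This comes out at roughly 2.6 billion years – the same order of magnitude as the circulation timescale Trujillo & Sheppard’s Mercury simulation found for Sedna, which is reassuring given the approximations.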

Later in the paper Batygin and Brown note (in relation to their modelling in general, not just what I shall call the “null case” of no Planet X) that:

“…an adequate account for the data requires the reproduction of grouping in not only the degree of freedom related to the eccentricity and the longitude of perihelion, but also that related to the inclination and the longitude of ascending node. Ultimately, in order to determine if such a confinement is achievable within the framework of the proposed perturbation model, numerical simulations akin to those reported above must be carried out, abandoning the assumption of coplanarity.”

I can’t say I found Batygin & Brown very easy to follow, but it’s fairly clear that they haven’t modelled the Solar System in a fully three-dimensional manner.

Trujillo & Sheppard

If we have to discount Batygin & Brown, then the only true test of the null case is that in Trujillo & Sheppard.  Last time I quoted the relevant sentence:

“By numerically simulating the known mass in the solar system on the inner Oort cloud objects, we confirmed that [they] should have random ω [i.e. AOP]… This suggests that a massive outer Solar System perturber may exist and [sic, meaning “which”, perhaps] restricts ω for the inner Oort cloud objects.”

I didn’t mention that they then referred to the Methods section at the end of their paper.  Here’s what they say there (and I’m having to type this in because I only have a paper copy! – so much for scientific and technological progress!):

“Dynamical simulation. We used the Mercury integrator to simulate the long-term behaviour of ω for the Inner Oort cloud objects and objects with semi-major axes greater than 150 AU and perihelia greater than Neptune.  The goal of this simulation was to attempt to explain the ω clustering.  The simulation shows that for the currently known mass in the Solar System, ω for all objects circulates on short and differing timescales dependent on the semi-major axis and perihelion (for example, 1,300 Myr, 500 Myr, 100 Myr and 650 Myr for Sedna, 2012 VP113, 2000 CR105 and 2010 GB17, respectively).”

In other words their model reproduced the “apsidal precession” proposed in Batygin & Brown, but since Trujillo & Sheppard refer to ω, the implication is that their simulation was in 3 dimensions and not “planar”.

However, could the model used by Trujillo and Sheppard have somehow not correctly captured the interaction between the TNOs and the inner planets?  The possibilities range from apsidal precession being programmed in to the Mercury package (stranger things have happened!) to something more subtle, resulting from the simplifications necessary for Mercury to model Solar System dynamics.

Maybe I’d better pluck up courage and ask Trujillo and Sheppard my stupid question!  Of course, the effect I propose would have to dominate apsidal precession, but that’s definitely possible when apsidal precession is on a timescale of 100s of millions of years, as found by Trujillo and Sheppard.

February 5, 2016

Does (Brown and Batygin’s) Planet 9 (or Planet X) Exist?

Filed under: Media, Orbital dynamics, Physics, Science and the media — Tim Joslin @ 7:23 pm

What exactly is the evidence that there may be a “Super-Earth” lurking in the outer reaches of the Solar System?  Accounts differ, so I’ll review what I’ve read (ignoring the mainstream media storm around 20th January!), to try to minimise confusion.

New Scientist

If you read your New Scientist a couple of weeks ago, you’ll probably have seen the cover-story feature article Last Great Mysteries of the Solar System, one of which was Is There a Planet X? [paywall for full article – if, that is, unlike me, you can even get your subscription number to give you access].  The article discussed the dwarf planets Sedna and 2012VP113.  The orbits of these planetoids – and another 10 or so not quite so distant bodies – according to New Scientist and the leaders of the teams that discovered Sedna and 2012VP113, Mike Brown and Scott Sheppard, respectively, could indicate “there is something else out there”.

Apparently, says NS:

“[the orbits of Sedna and 2012VP113] can’t be explained by our current understanding of the solar system…  Elliptical orbits happen when one celestial object is pushed around by the gravity of another.  But both Sedna and 2012VP113 are too far away from the solar system’s giants – Jupiter, Saturn, Uranus and Neptune – to be influenced.  Something else must be stirring the pot.”

“Elliptical orbits happen when one celestial object is pushed around by the gravity of another.”  This is nonsense.  Elliptical orbits are quite usual beyond the 8 planets (i.e. for “trans-Neptunian objects”), which is the region we’re talking about.  The fact that the orbits of Sedna and 2012VP113 are elliptical is not why there may be another decent-sized planet way out beyond Neptune (and little Pluto).

I see that the online version of New Scientist’s article Is There a Planet X? has a strap-line:

“Wobbles in the orbit of two distant dwarf planets are reviving the idea of a planet hidden in our outer solar system.”

Guess what?  The supposed evidence for Planet X is nothing to do with “wobbles” either.

The New Scientist article was one of several near-simultaneous publications and in fact the online version was updated, the same day, 20th January, with a note:

Update, 20 January: Mike Brown and Konstantin Batygin say that they have found evidence of “Planet Nine” from its effect on other bodies orbiting far from the sun.

Exciting.  Or it would have been, had I not been reading the print version.  The link is to another New Scientist article: Hints that ‘Planet Nine’ may exist on edge of our solar system [no paywall]. “Planet Nine”?  It was “Planet X” a minute ago.

Referencing the latest paper on the subject, by Brown and Batygin, this new online NS article notes that:

“Brown and others have continued to explore the Kuiper belt and have discovered many small bodies. One called 2012 VP113, which was discovered in 2014, raised the possibility of a large, distant planet, after astronomers realised its orbit was strangely aligned with a group of other objects. Now Brown and Batygin have studied these orbits in detail and found that six follow elliptical orbits that point in the same direction and are similarly tilted away from the plane of the solar system.

‘It’s almost like having six hands on a clock all moving at different rates, and when you happen to look up, they’re all in exactly the same place,’ said Brown in a press release announcing the discovery. The odds of it happening randomly are just 0.007 per cent. ‘So we thought something else must be shaping these orbits.’

According to the pair’s simulations, that something is a planet that orbits on the opposite side of the sun to the six smaller bodies. Gravitational resonance between this Planet Nine and the rest keep everything in order. The planet’s high, elongated orbit keeps it at least 200 times further away from the sun than Earth, and it would take between 10,000 and 20,000 Earth years just to complete a single orbit.”

Brown and Batygin claim various similarities in the orbits of the trans-Neptunian objects.  But they don’t stress what initially sparked the idea that “Planet Nine” might be influencing them.

Scientific American and The Argument of Perihelion

Luckily, by the time I saw the 23rd January New Scientist, I’d already read The Search for Planet X [paywall again, sorry] cover story in the February 2016 (who says time travel is impossible?) issue of Scientific American, so I knew that – at least prior to the Brown and Batygin paper – what was considered most significant about the trans-Neptunian objects was that they all had similar arguments of perihelion (AOPs), specifically around 0˚.  That is, they cross the plane of the planets roughly at the same time as they are closest to the Sun (perihelion).  The 8 (sorry, Pluto) planets orbit roughly in a similar plane; these more distant objects are somewhat more inclined to that plane.

Scientific American reports the findings by two groups of researchers, citing a paper by each.  One is a letter to Nature, titled A Sedna-like body with a perihelion of 80 astronomical units, by Chadwick Trujillo and Scott Sheppard [serious paywall, sorry], which announced the discovery of 2012 VP113 and arguably started the whole Planet X/9/Nine furore.  They quote Sheppard: “Normally, you would expect the arguments of perihelion to have been randomized over the life of the solar system.”

To cut to the chase, I think that is a suspect assumption.  I think there may be reasons for AOPs of bodies in inclined orbits to tend towards 0˚, exactly as observed.

The Scientific Papers

The fact that the argument of perihelion is key to the “evidence” for Planet X is clear from the three peer-reviewed papers mentioned so far.

Trujillo and Sheppard [paywall, still] say that:

“By numerically simulating the known mass in the solar system on the inner Oort cloud objects, we confirmed that [they] should have random ω [i.e. AOP]… This suggests that a massive outer Solar System perturber may exist and [sic, meaning “which”, perhaps] restricts ω for the inner Oort cloud objects.”

The Abstract of the other paper referenced by Scientific American, Extreme trans-Neptunian objects and the Kozai mechanism: signalling the presence of trans-Plutonian planets, by C and R de la Fuente Marcos, begins:

“The existence of an outer planet beyond Pluto has been a matter of debate for decades and the recent discovery of 2012 VP113 has just revived the interest for this controversial topic. This Sedna-like object has the most distant perihelion of any known minor planet and the value of its argument of perihelion is close to 0 degrees. This property appears to be shared by almost all known asteroids with semimajor axis greater than 150 au and perihelion greater than 30 au (the extreme trans-Neptunian objects or ETNOs), and this fact has been interpreted as evidence for the existence of a super-Earth at 250 au.”

And the recent paper by Konstantin Batygin and Michael E Brown, Evidence for a Distant Giant Planet in the Solar System, starts:

Recent analyses have shown that distant orbits within the scattered disk population of the Kuiper Belt exhibit an unexpected clustering in their respective arguments of perihelion. While several hypotheses have been put forward to explain this alignment, to date, a theoretical model that can successfully account for the observations remains elusive.

So, whilst Batygin and Brown claim other similarities in the orbits of the trans-Neptunian objects, the key peculiarity is the alignment of AOPs around 0˚.

Is There a Simpler Explanation for ~0˚ AOPs?

Let’s consider first why the planets orbit in approximately the same plane, and why the Galaxy is also fairly flat.  The key is the conservation of angular momentum.  The overall rotation within a system about its centre of gravity must be conserved.  Furthermore, this rotation must be in a single plane.  Any orbits above and below that plane will eventually cancel each other out, through collisions (as in Saturn’s rings) and/or gravitational interactions (as when an elliptical galaxy gradually becomes a spiral galaxy).  Here’s an entertaining explanation of what happens.

This process is still in progress for the trans-Neptunian objects, I suggest, since they are inclined by up to around 30˚ – Sedna’s inclination is 11.9˚ for example – which is much more than the planets, which are all inclined within a few degrees of the plane of the Solar System.  What’s happening is that the TNOs are all being pulled constantly towards the plane of the Solar System, as I’ve tried to show in this schematic:

160205 Planet X slash 9

Now, here comes the key point: because the mass of the Solar System is spread out, albeit only by a small amount, because there are planets and not just a Sun, the gravitational pull on each TNO is greater when it is nearer the Sun (closer to perihelion) than when it is further away. There’s more of a tendency for the TNO (or any eccentrically orbiting body) to gravitate towards the plane of the system when it’s nearer perihelion.

This is true, I believe, even after allowing for Kepler’s 2nd Law, i.e. that the TNO spends less time closer to the Sun.  Kepler’s 2nd Law suggests the time an orbiting body spends at a certain distance from the centre of gravity of the system is proportional to the square of that distance, which you’d think might cancel out the inverse square law of gravity.  But the mass of the Solar System is not all at the centre of gravity.  The nearest approach of Neptune to Sedna, for example, when the latter is at perihelion is around 46AU (astronomical units, the radius of Earth’s orbit) but about 476AU when Sedna is at aphelion.
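That claim can be checked with the numbers just quoted: weight Neptune’s inverse-square pull on Sedna by the time Sedna spends at each point (proportional to the square of its distance from the Sun, per the equal-areas law) and compare perihelion with aphelion.  This is a crude two-point comparison, using ~30 AU for Neptune’s orbit and ~76 AU / ~506 AU for Sedna’s perihelion and aphelion:

```python
# Crude two-point comparison of Neptune's time-weighted pull on Sedna.
R_NEPTUNE = 30.0                      # AU, Neptune's orbital radius (approx.)
SEDNA_PERI, SEDNA_APH = 76.0, 506.0   # AU, approximate

def weighted_pull(r_sedna):
    """Inverse-square pull at closest approach to Neptune, weighted by
    dwell time, taken as proportional to r^2 (Kepler's second law)."""
    closest = r_sedna - R_NEPTUNE     # nearest Sedna-Neptune distance, AU
    return r_sedna ** 2 / closest ** 2

ratio = weighted_pull(SEDNA_PERI) / weighted_pull(SEDNA_APH)
print(f"perihelion / aphelion weighted pull: {ratio:.1f}")
```

On these very rough numbers the perihelion end of the orbit wins by a factor of about 2.4: the inverse-square law does more than cancel the Kepler time-weighting, which is the point being argued.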

The most stable orbit for a TNO is therefore when it crosses the plane of the Solar System at perihelion, that is, when its argument of perihelion (AOP) is 0˚.  Over many millions of years the AOPs of the orbits of Sedna and co. have therefore tended to approach 0˚.

I suggest it is not necessary to invoke a “Super-Earth” to explain the peculiarly aligned arguments of perihelion of the trans-Neptunian objects.

November 10, 2011

Still Debunking Fast Neutrinos

Filed under: Neutrinos, Physics, Relativity — Tim Joslin @ 1:59 pm

I mentioned in my previous post that “I submitted a paper with a more thorough explanation [of the apparent super-luminal neutrinos detected in the CERN-OPERA experiment] to arXiv a week ago”. As I write, arXiv are refusing to publish the paper, without giving reasons in sufficient detail for me to take any corrective action. With hindsight I should perhaps have made the title even less provocative, shortened the Abstract, put the Acknowledgements at the end, rather than on the front page and so on – in each case, “it seemed like a good idea at the time”, and I’m wondering now whether I’ve done something that screams “amateur”. As if that should disqualify me from publishing. But all these things are cosmetic and I was concentrating on the science itself. At the time I didn’t realise there was a significant moderation hurdle on arXiv – I thought the key was to get an endorser. I was also keen to get my idea “out there”. I figured that if something needed changing they’d just ask me to do it. Anyway, more about my travails in trying to catch the attention of the “physics community” another time.

Here, I want to provide an up-to-date version of my "Explanation of Apparent Superluminal Velocity in the CERN-OPERA Experiment" (pdf) together with a few words of explanation. This version is significantly different from the one I included in a previous Uncharted Territory post. Postscript: This version has now been submitted to viXra.org, an alternative to arXiv that I came across yesterday.

To recap, my argument is that the neutrinos are travelling at the speed of light, but the speed of light varies slightly depending on direction, because of the motion of the Earth, which is travelling at an estimated 300km/s as measured against the cosmic microwave background (CMB) radiation, which, being the same in all directions, can be taken as providing a stationary reference. It is very difficult to measure the one-way speed of light directly, and the experiments that have been carried out have determined instead c, the “round-trip” light speed. The CERN-OPERA neutrino velocity measurement experiment has unintentionally measured the one-way light speed by comparing neutrino flight times with the expected flight time at c, determined by measuring the distance between CERN and OPERA and successfully transmitting the time at CERN to the OPERA neutrino detector.
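
To illustrate the distinction, here's a little Python sketch of why round-trip measurements are blind, at first order, to a difference in the one-way speeds (the 300km/s figure is the estimate quoted above; the path length is arbitrary):

```python
# One-way anisotropy vs the round-trip average: suppose light travels at
# c - v one way and c + v the other, with v ~ the Earth-motion estimate.
c = 299_792_458.0   # m/s
v = 300_000.0       # m/s, illustrative
L = 1_000.0         # metres; the path length is arbitrary

t_out = L / (c - v)            # "slow" direction
t_back = L / (c + v)           # "fast" direction
t_naive = 2 * L / c            # what an isotropic model predicts

one_way_asymmetry = abs(t_out - t_back) / t_naive        # ~ v/c ~ 1e-3
round_trip_shift = (t_out + t_back - t_naive) / t_naive  # ~ (v/c)^2 ~ 1e-6

print(f"one-way asymmetry:    {one_way_asymmetry:.2e}")
print(f"round-trip deviation: {round_trip_shift:.2e}")
```

The asymmetry between the two legs is of order v/c, but the round-trip total differs from 2L/c only at order (v/c)², a thousand times smaller here.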

The reason for the new version is that I was a bit slow on the uptake. It was only on 27th October, just as I was about to submit to ArXiv, that I belatedly realised that the GPS doesn’t somehow use the “one-way” light speed, but provides a timing based on the “round-trip” light speed. At first I’d thought the whole problem was caused by the procedure for calculating the delay in the optical fibre used to transmit the timing signal over the last 10km into the Gran Sasso mountain to the OPERA neutrino detector. Then, around 25th October, I’d realised that sending a timing signal down an optical fibre runs into the same problems as moving a clock to measure a signal transmission time. On 27th I realised that the GPS also does something similar. A timing signal is sent from the GPS satellite to CERN and to Gran Sasso.

Schematic of CERN-OPERA neutrino velocity measurement experiment

In each case (using the labelling A – CERN, B – Gran Sasso and C – the OPERA neutrino detector) – (i) transmitting a timing signal via GPS from A to B, (ii) moving a clock from B to C to measure a delay in signal transmission, and (iii) transmitting a timing signal via optical fibre cable from B to C – you simply can't use the sending time recorded in the transmission itself to measure the speed of light. It's slippery, but if you really concentrate on the problem, you'll realise, as did Henri Poincaré, the hero of my previous post, that in each case you have to assume the light speed transmission time.

The best analogy I can come up with is if I received a letter from my nephew – undated, sent 2nd class, with an illegible postmark, as is usual these days – saying I’d forgotten his birthday. I’d be unable to pin down exactly when his birthday was. I’d have to guess how long the letter had taken to reach me.

Similarly, in case (i) in the CERN-OPERA experiment, a "common-view" timing signal is transmitted from a GPS satellite to clocks at CERN (point A) and Gran Sasso (point B), for the express purpose of synchronising those clocks. This signal simply includes the message "the time here is t_SAT", where t_SAT is whatever the time is on the satellite's clock. Now, the crucial point (which I only appreciated on 27th October) is that if the message takes n seconds to reach one of the clocks then it's also n seconds out of date. Unless you know n you can't tell what the time actually is at the satellite when you receive the message. This is established in the experiment by taking the distance from the satellite to the clock, dividing it by c, the "round-trip" speed of light, and adding the result to the time at which the message was sent. In fact, our whole system of Coordinated Universal Time (UTC) depends on subtracting the assumed transmission time of signals, determined by dividing distance by c, the round-trip light speed. Procedure (i) thus synchronises time between A and B, assuming light travels at c in both directions.
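
In code, the correction amounts to something like this (the numbers are illustrative only – this is a sketch of the procedure described above, not the experiment's actual geometry):

```python
# A received message reading "the time here is t_sat" is n = distance/c
# seconds out of date, so the receiver adds n back -- assuming the signal
# travelled at the round-trip speed c.
c = 299_792_458.0   # m/s

def satellite_time_now(t_sat, distance_m):
    """Satellite's current time as reconstructed by a receiving station."""
    n = distance_m / c   # assumed one-way transmission time
    return t_sat + n

t_sat = 100.0           # satellite clock reading when the signal left
d = 20_200_000.0        # metres, roughly GPS orbital altitude
print(satellite_time_now(t_sat, d))   # 100 + ~0.0674 s
```

The circularity is the point: the reconstruction is only as good as the assumed one-way speed.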

In case (ii) a clock is transported from a master clock at Gran Sasso (point B in the paper, synchronised with CERN by GPS) into the mountain to the OPERA neutrino detector (at point C in the paper) in order to establish the transmission delays in an 8.3km optical fibre (actually this is only the longest fibre in the experiment, but the argument is the same for the others). The transmission time, t_tr in the paper, is the time light would have taken to travel the distance of the cable into the mountain, plus the delays caused by the cable and associated equipment, which I’ve called t_sig. It’s t_sig we want to find out. But again, the signal includes no information about the duration of transmission at light speed. Again, if it takes n seconds to arrive, it’s n seconds out of date when it reaches the clock at the neutrino detector.

Case (ii), though, is slightly different from (i) and highlights the subtleties inherent in the problem. Here, we have a clock at C which we believe shows the time at B, assuming events at B and C are simultaneous. We can therefore establish the delay, t_sig, in transmitting the signal from B to C. But, if events at B and C are not simultaneous, as Einstein suggested, because of the Earth's motion, then the late (or early) arrival of the signal at C compared to light-speed transmission from B is exactly matched by the lag (or lead) of the clock at C compared to the one at B. Once the clock is moved from B to C it is no longer synchronous with the clock left at B. This is analogous to the Sagnac Effect, whereby clocks have to be adjusted to allow for the rotation of the Earth, and clocks that are moved lose synchronisation. It is in itself an important result, and I intend to devote a post solely to this point (though I don't always keep my promises). Returning to the CERN-OPERA neutrino velocity measurement experiment, the outcome is that we're no better off physically moving a clock from B to C than we are transmitting the time from B to C. We always obtain the same delay, t_sig. It might be worth noting that, in the experiment, the same t_sig was obtained by a different procedure (the two-way fibre delay calibration procedure) that doesn't depend on physically moving clocks.

In case (iii) we do actually transmit a timing signal from B to C along the fibre-optic cables, using the calculation of the delay obtained in procedure (ii). We don’t know how long the timing signal takes, because, again, if it takes n seconds to transmit, it is n seconds out of date, but we assume the delay compared to light speed transmission is t_sig. Thus, the expected time of flight over the entire neutrino flight path has been calibrated based on c, the round-trip light speed – the time of arrival of the timing signal at C against which neutrino flight times are compared is t_A + x_3/c + t_sig (where t_A is the time at A, CERN, when the neutrinos were created and x_3 is, as in the paper, the distance from A to C).
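
The bookkeeping can be sketched as follows (x_3 is roughly the real CERN–Gran Sasso baseline; t_A and t_sig are made-up placeholders):

```python
# The timing bookkeeping in miniature, using the formula above.
c = 299_792_458.0   # m/s

t_A = 0.0           # time at A (CERN) when the neutrinos are created
x_3 = 730_000.0     # metres, A-to-C distance (roughly the real baseline)
t_sig = 40e-9       # seconds, calibrated fibre/equipment delay (illustrative)

expected_flight = x_3 / c                     # flight time at light speed
t_reference = t_A + expected_flight + t_sig   # timing signal's arrival at C

print(f"expected flight time at c: {expected_flight * 1e3:.4f} ms")
```

With flight times of a couple of milliseconds, a few tens of nanoseconds of calibration error is a part-in-100,000 effect, which is why the details of t_sig matter so much.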

It has to be said that the CERN-OPERA team have done a good job. I’m convinced they are excellent experimenters. The problem is theoretical. It appears they succeeded in establishing a timing signal at point C, the OPERA neutrino detector, that very accurately represented the arrival time of neutrinos emitted from point A, CERN, based on a round-trip light-speed, c, neutrino velocity. The trouble is the neutrinos were only travelling one-way.

The method I’ve adopted in the paper to demonstrate the point is to show that the measurements to determine the expected neutrino flight time at light speed, that is those used in the calculation of the timing signal delay at C relative to A, would all be the same for an observer moving relative to the experiment, but the actual neutrino flight time would depend on the motion of that observer. This is the point of the Lorentz transformations in the paper.

The experimenters have assumed they are stationary relative to the neutrinos, but since the neutrinos arrived earlier than expected, this is clearly not the case. The paper therefore goes on to calculate the velocity of the experimenters relative to the neutrinos, or to be more precise the "frame of reference" of the neutrinos. Because of the details of the experiment this can only be done in the average case. We can only determine one component of our motion relative to the neutrinos, namely the component along the Earth's axis. The rest of our motion varies with the Earth's rotation and orbit and I assume these motions average out to zero.

This is where it gets really interesting. I can’t get the result to tally exactly with the Earth’s motion against the CMB. That leaves me wondering whether the problem is due to experimental error or real – in which case satellites such as WMAP have not measured our motion correctly against the CMB. If the problem is real, and we have measured something other than our motion against the CMB, then things could get very interesting indeed.

November 4, 2011

Einstein Causes Confusion Shock!

Filed under: Neutrinos, Physics, Relativity — Tim Joslin @ 2:52 pm

Regular readers will be aware that I’ve offered an explanation of the apparent superluminal neutrinos detected in the CERN-OPERA experiment. Things have moved on since my last post on the subject, and I submitted a paper with a more thorough explanation to ArXiv a week ago – more about that another time.

My argument boils down essentially to the point that I believe that light doesn’t travel at the same speed in every direction (relative to an observer on our moving planet, and according to our reckoning of time and distance) and the physics establishment does, for reasons that defy simple logic.

A couple of days ago I thought I’d see if I could find some books that shed some light (groan!) on the matter. What I was interested in was whether physicists are confused. I think it’s fair to say that they are.

I ended up tracking all the way back to Einstein’s seminal paper, On the Electrodynamics of Moving Bodies (1905). I found myself standing in Waterstones reading a translation in Hawking’s On the Shoulders of Giants, which basically consists of some reprints and a few pages of comment by the current Lucasian Professor. £22 (OK, £15 on Amazon) for a paperback. Nice work.

Anyway, it’s possible to find On the Electrodynamics kicking around on the internet (pdf), though not easily on the first page of Google’s results, which I guess tells you something straight away about the readership of the “most important scientific paper of the 20th century”.

Any reader of On the Electrodynamics can’t help being struck by the paper’s obvious shortcomings. Yes, shortcomings. Just because a paper includes brilliant, revolutionary ideas does not mean it is perfect in all respects. And On the Electrodynamics has two serious flaws which have perhaps contributed to today’s confusion:

1. References or, rather, the lack of them – Einstein’s assumption of isotropy
Einstein did not include any references in his paper. As a result, we simply do not know how carefully he’d studied certain works of the era. In particular, it makes it difficult to evaluate his opening arguments. My impression is that Einstein just wanted to focus on the crux of his argument.

Einstein first sets out his postulates, or assumptions:

“…the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest. They suggest rather that, as has already been shown to the first order of small quantities, the same laws of electrodynamics and optics will be valid for all frames of reference for which the equations of mechanics hold good. We will raise this conjecture (the purport of which will hereafter be called the ‘Principle of Relativity’) to the status of a postulate, and also introduce another postulate, which is only apparently irreconcilable with the former, namely, that light is always propagated in empty space with a definite velocity c which is independent of the state of motion of the emitting body. [my stress]”

Maybe these postulates are only nearly true. The crux of my argument is that if the speed of light is independent of its source, then we always (except in one very special case) need to apply the same equations Einstein used (Lorentz transformations) to determine the speed of light relative to ourselves, the observer. We can’t just assume we’re the stationary observer! It’s a simple point.

Let’s read on a little more. Einstein is very particular about the need to reckon time in terms of synchronous clocks:

“If at the point A of space there is a clock, an observer at A can determine the time values of events in the immediate proximity of A by finding the positions of the hands which are simultaneous with these events. If there is at the point B of space another clock in all respects resembling the one at A, it is possible for an observer at B to determine the time values of events in the immediate neighbourhood of B. But it is not possible without further assumption to compare, in respect of time, an event at A with an event at B. We have so far defined only an ‘A time’ and a ‘B time’. We have not defined a common ‘time’ for A and B, for the latter cannot be defined at all unless we establish by definition that the ‘time’ required by light to travel from A to B equals the ‘time’ it requires to travel from B to A. Let a ray of light start at the ‘A time’ tA from A towards B, let it at the ‘B time’ tB be reflected at B in the direction of A, and arrive again at A at the ‘A time’ t′A. In accordance with definition the two clocks synchronize if:

tB − tA = t′A − tB

We assume that this definition of synchronism is free from contradictions, and possible for any number of points; and that the following relations are universally valid:—
1. If the clock at B synchronizes with the clock at A, the clock at A synchronizes with the clock at B.
2. If the clock at A synchronizes with the clock at B and also with the clock at C, the clocks at B and C also synchronize with each other.
Thus with the help of certain imaginary physical experiments we have settled what is to be understood by synchronous stationary clocks located at different places, and have evidently obtained a definition of ‘simultaneous’, or ‘synchronous’, and of ‘time’. The ‘time’ of an event is that which is given simultaneously with the event by a stationary clock located at the place of the event, this clock being synchronous, and indeed synchronous for all time determinations, with a specified stationary clock.
In agreement with experience we further assume the quantity

2AB/(t′A − tA) = c

to be a universal constant—the velocity of light in empty space.”

He’s going to go on, of course, to assume that the stationary clocks are in the “stationary” [sic] system, and show that the clocks will not appear synchronous to a moving observer.

But how do we know that light would travel at the same speed from A to B as from B to A? This need not affect the round-trip time.
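
To see the point, here's a quick Python check: give light unequal one-way speeds c – v and c + v and Einstein's synchronisation condition fails at first order in v/c, while the round-trip speed 2AB/(t′A − tA) barely changes (v here is an illustrative Earth-motion figure):

```python
# Einstein's synchronisation condition tB - tA = t'A - tB, tested with
# unequal one-way light speeds c - v and c + v.
c = 299_792_458.0   # m/s
v = 300_000.0       # m/s, illustrative
AB = 1_000.0        # metres from A to B

tA = 0.0
tB = tA + AB / (c - v)         # outward leg, "slow" direction
tA_prime = tB + AB / (c + v)   # return leg, "fast" direction

out_leg = tB - tA
back_leg = tA_prime - tB
round_trip_speed = 2 * AB / (tA_prime - tA)

print(f"out leg:  {out_leg * 1e9:.4f} ns")
print(f"back leg: {back_leg * 1e9:.4f} ns")
print(f"2AB/(t'A - tA): {round_trip_speed:,.0f} m/s")   # still ~c
```

The two legs differ, so tB − tA ≠ t′A − tB, yet the measured round-trip speed departs from c by only about one part in a million.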

One book I found in Ealing Central Library is devoted specifically to this issue of synchronising clocks. In Einstein’s Clocks, Poincaré’s Maps by Peter Galison, we find (p.217) that the French mathematician, well, polymath, really, Henri Poincaré, understood the problem of “true” and “local” time in 1904:

“‘[Lorentz’s] most ingenious idea was that of local time. Let us imagine two observers who want to set their watches by optical signals; they exchange their signals, but as they know that the transmission is not instantaneous, they take care to cross them. When the station B receives the signal of station A, its clock must not mark the same time as station A at the moment of the emission of the signal, but rather that time augmented by a constant representing the duration of the signal.’ [wrote Poincaré – so far so good]

At first, Poincaré considered the two clock-minders at A and B to be at rest – their observing stations were fixed with respect to the ether. But then, as he had since 1900, Poincaré proceeded to ask what happened when the observers are in a frame of reference moving through the ether. In that case ‘the duration of the transmission will not be the same in the two directions, because station A, for example, moves towards any optical perturbation sent by B, while the station B retreats from a perturbation by A. Their watches set in this manner will not mark therefore true time, they will mark what one can call local time, in such a way that one of them will be offset with respect to the other. This is of little importance, because we don’t have any way to perceive it.’ [my stress] True and local time differ. But nothing, Poincaré insisted, would allow A to realize that his clock will be set back relative to B’s, because B’s will be set back by precisely the same amount. ‘All the phenomena that will be produced at A for example, will be set back in time, but they will all be set back by the same amount, and the observer will not be able to perceive it because his watch will be set back’ [my stress]; thus, as the principle of relativity would have it, there is no means of knowing if he is at rest or in absolute movement. [my stress]”

Galison goes on (p.257ff) to speculate as to how familiar Einstein was in 1905 with Poincaré’s discussion of the idea of obtaining local time by the synchronisation of clocks. Regardless, when Einstein swept away the ideas of “local time” and “true time”, he took no account of the little difficulty highlighted by Poincaré. We can only speculate as to what Einstein thought and what he didn’t, but it would clearly have been more difficult to move away from the idea of the ether to that of relativity had it also been necessary to assume a “preferred” or isotropic reference frame relative to which the Earth is moving. That doesn’t mean it doesn’t exist, though. And, for my money, relativity includes a logical inconsistency. You simply can’t assume that light moves independently of its source and then use the Lorentz transformations to show that a moving observer perceives the light in the emitting frame to travel at unequal velocities in different directions relative to objects in that frame, whilst observers in the emitting frame magically do see equal velocities.

2. Unclear use of symbols
There’s one way to “rescue” Special Relativity. Let’s imagine you’re a confused young physics student. You might imagine that the Lorentz transformations retain equal light speed in all directions despite diagrams to the contrary, that is, you might pay particular attention to the qualification “length contraction not depicted” in representations of Einstein’s train thought-experiment.

You might define a thought-experiment (flashes when light from the centre reaches the ends of a moving train) and write something like:

“In fact [from the point of view of an observer on the platform] it [the moving train] is shorter in the same proportion as the second flash is later than the rear one.”

as Adam Hart-Davis does on p.231 of The Book of Time.

Hart-Davis appears to assume time-dilation and the relativity of simultaneity are the same thing. His calculations are then totally confusing (even if we ignore the fact that he presents calculations based on the length of the train before he’s told us how long it actually is and later says km/h when he means km/s!). The formula for calculating the time delay between flashes at the rear and front of the train as perceived by the observer on the platform is:

t = (1/√(1 – v^2/c^2))(τ – vx/c^2)

where:

  • τ is the time difference as seen by the observer on the train
  • t is the time difference as seen by the observer on the platform
  • √ is supposed to be a square root sign – I’ve used ^2 for squared (can’t see how to get a superscript on here)
  • v is the velocity of the train (according to the observer on the train)
  • c is the speed of light
  • x is the length of the train (according to the observer on the train)

The term 1/√(1 – v^2/c^2) is known as γ (gamma) and can be ignored when v is small compared to c, as Hart-Davis does when calculating for a train moving at 22m/s.  The curious thing is, he also ignores γ when calculating the delay when the train is running at 200,000,000m/s and gets a 44ns delay.  He doesn’t explain this.

Hart-Davis then uses γ correctly to show the train looks only ~15m long (he implies this is exact – it isn’t) rather than 20m.
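
These figures are easy to verify – a few lines of Python, taking the 20m train and 200,000,000m/s from Hart-Davis's example and rounding c to 3×10^8 m/s, which is what reproduces his numbers:

```python
# Checking the Hart-Davis figures quoted above.
from math import sqrt

c = 3e8     # m/s, rounded
v = 2e8     # m/s, train's velocity
x = 20.0    # metres, train length in the on-train frame

gamma = 1 / sqrt(1 - v**2 / c**2)         # ~1.342

delay_no_gamma = v * x / c**2             # tau = 0, gamma ignored: ~44.4 ns
delay_with_gamma = gamma * v * x / c**2   # with gamma included
contracted_length = x / gamma             # the "~15m" train

print(f"v*x/c^2:           {delay_no_gamma * 1e9:.1f} ns")    # 44.4 ns
print(f"gamma * v*x/c^2:   {delay_with_gamma * 1e9:.1f} ns")
print(f"contracted length: {contracted_length:.2f} m")
```

So the 44ns figure only drops out if γ is ignored; include γ and the delay is nearer 60ns, while the contracted length is ~14.91m rather than exactly 15m.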

Presumably what Hart-Davis has done is assume that the length contraction of the train (a factor of ~0.75) cancels out with the time dilation factor (also ~0.75), the proportion by which time on the train runs slower than on the platform. Is this correct? I guess so, though I’m not claiming to be 100% sure!

Nevertheless, the length contraction doesn’t cancel out the relativity of simultaneity – the signal still appears to take longer to travel to the front of the train from the perspective of the observer on the platform than from that of the observer on the train. The light simply has further to travel from the p.o.v. of the observer on the platform, as the front of the train is receding relative to the observer on the platform, but stationary relative to the observer on the train. The statement: “In fact [from the point of view of an observer on the platform] it [the moving train] is shorter in the same proportion as the second flash is later than the rear one”, is either totally confusing or simply incorrect.

You’d think they’d make a lot of effort to ensure accuracy in a book intended to inform, so maybe they’re confused.

And maybe it’s Albert’s fault.  If you take a look at On the Electrodynamics you might notice that Einstein uses “t” and “τ” (Greek letter small tau) to derive the difference (the formula above) between times observed by stationary and moving observers.  He then, breathlessly, one might imagine, rushes on to derive the time dilation factor (γ) in the rates of the clocks of the moving and stationary observers, using the same “t” and “τ”.  What he really meant to relate in the second case were, of course, “δt” and “δτ”.

Naughty Albert! It’s like Peter Crouch pulling the hair of the Trinidad defender to score in 2006.  He scored a goal, so we’ll ignore that little detail.

Or maybe Einstein was being deliberately obscure just to see if people really understood!

October 18, 2011

Not so fast neutrinos – how CERN have got it wrong

Filed under: Neutrinos, Physics, Relativity — Tim Joslin @ 2:07 pm

Uncharted Territory: “E pur si muove…” [And yet it (the Earth) moves]
CERN: “Mea culpa!”

From Galileo’s Dialogues, 2020 edition, recently received by NeutrinoMail(TM).

This post supersedes my previous effort, since it includes an updated version of my paper on the CERN-OPERA neutrino velocity measurement experiment.  It complements my two earlier posts on the fast neutrino problem, the first being a general discussion of my explanation for the detection of apparent superluminal neutrinos and the second a detailed look at Einstein’s train thought-experiment.

They say a picture’s worth a thousand words.  I can tell you that’s a massive underestimate.  I could have bashed out 5,000 words in the time it took me to conceive of the following diagram, construct it in Powerpoint (is it really totally pants, or do I need a training course?) and most of all, tinker with it. And I’m still not sure it’s clear enough (comments welcome):

In Fig 1, I’ve shown a schematic of the neutrino velocity measurement experiment. It’s very simplified: the neutrinos were not sent exactly from the GPS receiver at CERN, for example, and errors similar to those I am about to describe at the OPERA neutrino detector end of the experiment may have occurred there too.

The aim of Fig 1 is to show that, over most of the neutrino flight path, all timing measurements were “one-way”:

  • The neutrinos were transmitted from CERN to OPERA.
  • GPS timing signals, used to synchronise the two GPS points shown, travel at light speed, which we perceive to be the same in all directions.
  • Distances were measured either with reference to the GPS signals, or physically the old-fashioned way, in both cases within the Earth system of reference (ES) where we perceive distance to be equal in all directions.

But, crucially over part of the neutrino flight path, the experimenters have used the two-way light speed to determine the expected neutrino flight time. They had to do this because the OPERA neutrino detector is inside the Gran Sasso mountain to protect it from cosmic rays.

The question is whether the light “thinks” it moves at the same speed in all directions with respect to us here on Earth.

The answer is it doesn’t. Why would it? The Earth isn’t even a speck of dust over the distance light and neutrinos can travel. Light speed is nearly 30,000 times the Earth’s escape velocity. Neutrinos go through planets like bullets through tissue-paper. I could go on.

Fig. 2 attempts to show why light doesn’t “think” it moves at the same speed in all directions with respect to our experimental equipment.

First, Fig. 2a shows part of our planet as we perceive it. We perceive distances and therefore time the same in all directions. We set our clocks the same everywhere and assume that light simply takes d/c (the distance divided by a constant speed of light) to get from A to B, or CERN to OPERA in this case.

But, in fact, the Earth is moving, as shown by the fact that distant objects, such as what I recently saw a cosmologist term the rest frame represented by the cosmic microwave background (CMB) (this link has some good CMB schematics, too, btw), appear blue-shifted in the direction we’re moving in – the relative motion causes the photon waves to appear closer together – and red-shifted in the other direction. The CMB appears blue-shifted, not due south, but in a southerly direction (this is important).

Now, the fact that the planet is moving matters. Its speed relative to the CMB is about 500km/s, or roughly 1/600th of light speed, which is significant when you’re trying to measure neutrino flight times to the nearest nanosecond (ns) or so.

I’m going to digress and explain why movement matters. Here’s another diagram (you really are being spoilt today!):

All we have to do is imagine we’re on a mission to Mars. Mission control are really sweating us – sending people to Mars is not cheap – but have said that we can have precisely one hour off each day to watch the 60 minute episodes of the second series of Firefly we’ve been waiting for for decades. After all, improved morale will increase efficiency.

Unfortunately, the PVR broke on landing so we can’t record the show or fast-forward through the ads. We have to be at the TV to catch the actual broadcast.

Now, if the show is broadcast at 8pm on Earth the time it will reach Mars depends on how far away we are at the time. Light takes 8 minutes to reach Earth from the sun, so I’m guessing that, during our mission, Mars could vary from being, say, 3 light minutes away from Earth to, say, 15, as illustrated in Fig 3.

Hopefully it is apparent that if we have a clock on Mars set to Earth time, we will have to adjust it continually as Mars moves further away from and closer to Earth as the two planets follow their different orbits around the Sun. The rate at which clocks appear to move on Earth as seen from Mars varies continually.

There is no absolute time; the idea that there is one is a common misconception.

Rather, relative time depends on distance and the relative rates at which time seems to pass depend on relative motion.

Returning to a consideration of Fig 2a, note that I have used no colour within the Earth part of the diagram. OPERA does not appear red-shifted to CERN even though OPERA is moving at 500km/s to the right in the diagram, because CERN is also moving at 500km/s to the right.

Now let’s consider how all this appears from the point of view of a neutrino (or light). Now, light can’t move at greater than the speed of other light. Such faster than light travel would effectively be the same as going back in time.

So light (and neutrinos) always travel at the speed of light, c, in empty space. If other objects are moving then they will perceive light to be red-shifted in the direction they’re moving from and blue-shifted in the direction they’re moving. The light waves are stretched (red-shifted) or squashed up (blue-shifted). As will be seen, it’s only by convention that we don’t instead consider the Earth as squashed up or stretched, which in fact might be a better fit to how the Universe actually is – as usual (sigh!) our perception is Earth-centric.

In Fig 2b, then, I’ve put the neutrino in the middle of its journey from CERN to OPERA (it might have been less confusing to have put it at the end, but the view forward is also interesting). Looking back, the neutrino would see CERN coming towards it at ~500km/s, due to the motion of the planet – the distance it has had to travel to OPERA is shortened (blue-shifted) by the Earth’s motion. This is true when the neutrino arrives at OPERA, so the neutrino has travelled less distance because of the motion of the Earth. It is not moving relative to the planet, but relative to all the other light in the Universe. It has to, otherwise some light would travel faster than other light, and that’s not what we observe.

Now to confuse the issue (probably a good job neutrinos can’t really see!). Looking forward, the neutrino sees OPERA moving away (red-shifted), but after it covers that distance, where it currently is (and CERN) will look blue-shifted. Got it? If it’s any help, this is exactly analogous to the discussion of Einstein’s train thought-experiment where, you will recollect, Bob was able to warn Alice, even though it seemed to Xavier that Agent Zero, in his near-light-speed ship inside the train moving towards her, could reach her in less time than the d/c Xavier knew it would take Bob to warn her along the platform.

If the Earth is really moving, then, why don’t we reckon time to take account of that fact? Well, consider how awkward it would be (Fig 2b). Light would still travel at the same speed in all directions, but we’d dispense with the idea of time being the same in different places. Instead of having time constant everywhere and assuming that distances are the same in all directions, as we perceive, we’d have to assume time is different everywhere, and that it varies with distance depending on the motion of the Earth system. Or alternatively, we could keep an idea of time as the same over the whole planet and have distances vary depending on which direction we’re going in (as shown by the red-shift and blue-shift)!

Because we’ve forgotten these points that were long ago demonstrated in special relativity, and followed the convention of assuming that our time can be used to measure the speed of light and neutrinos travelling in all different directions – which it can’t – we’ve measured the neutrinos as travelling faster than they actually are.

The expected arrival time of the CERN neutrinos at OPERA should have been calculated taking into account the motion of the Earth, vE, as:

t = d/c – dvE/c^2

(or, to be precise, t = d/c – (dvE/c^2)/√(1 – vE^2/c^2), but the √ factor differs from 1 only negligibly at the Earth’s velocity).

This can simply be understood (ignoring the complex version of the formula) as the distance the Earth would have moved in the time light travelled a given distance, divided by the speed of light.  It works out to about 5.6ns/km.  This is a maximum value, if the experiment were oriented directly in line with the Earth’s motion.
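
A quick check of that arithmetic in Python (vE = 500km/s, as used in this post):

```python
# The 5.6 ns/km figure: extra time per kilometre of flight implied by the
# Earth system moving at vE, per the formula above.
c = 299_792_458.0   # m/s
vE = 500_000.0      # m/s

ns_per_km = (1_000.0 * vE / c**2) * 1e9
print(f"{ns_per_km:.1f} ns/km")      # 5.6 ns/km

# Distance over which a 60 ns discrepancy would accumulate at that rate:
print(f"{60 / ns_per_km:.1f} km")    # 10.8 km
```

So at the maximum orientation, a path of around 10–11km would be enough to account for a 60ns discrepancy.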

The precise distance that was measured without taking special relativity into account is uncertain, but the paper describing the experiment notes that there is an 8.3km optical fibre cable at Gran Sasso, though it’s the distance as the neutrino flies that is important.

The neutrinos appeared to arrive 60ns early.  Do the math.
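Here’s the arithmetic as a minimal sketch. Two assumptions, flagged in the comments: an Earth velocity of roughly 500km/s relative to the CMB (the value the 5.6ns/km figure corresponds to – elsewhere this piece quotes ~600kps) and a path aligned exactly with that motion:

```python
# Sketch of the proposed correction dt = d * vE / c^2.
# Assumptions: Earth velocity of ~500 km/s relative to the CMB (the
# value consistent with the quoted 5.6 ns/km; ~600 kps is quoted
# elsewhere in the text) and a path aligned exactly with that motion.
c = 299_792_458.0        # speed of light, m/s
v_E = 5.0e5              # assumed Earth velocity relative to the CMB, m/s

def correction_ns(d_km, v=v_E):
    """Timing correction d*v/c^2, in nanoseconds, for a path of d_km km."""
    return d_km * 1_000.0 * v / c**2 * 1e9

per_km = correction_ns(1.0)    # ~5.6 ns/km maximum
fibre = correction_ns(8.3)     # ~46 ns over the 8.3 km fibre distance
```

At roughly 46ns for 8.3km, the figure is the same order as the observed 60ns early arrival, which is the point of the comparison.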

——————
I’ve discussed this issue in much more detail in a paper Analysis of Flawed Fibre Time-Delay Calibration Methodology in CERN-OPERA Experiment v0.3 (pdf). I’d really like to post this on the open physics site, ArXiv, but I need to be endorsed by someone with the right credentials to do so. If anyone reading this can help, please contact me via this blog.

Here’s the Abstract for v0.3 of my paper (pdf):

The OPERA neutrino experiment at the Gran Sasso Laboratory, [1], obtained a measurement, v, of the muon neutrino velocity with respect to the speed of light, c, of (v-c)/c = (2.48 ± 0.28 (stat.) ± 0.30 (sys.)) ×10^-5, that is, in excess of c by about 1 part in 40,000. Over most of the neutrino flight path from CERN to OPERA, distances and timings were established by GPS signals to within 2cm and (2.3 ± 0.9)ns respectively. These measurements are not problematic. The Gran Sasso Laboratory is underground, however, and a significant part of the measurement of the expected flight time at c was established using two separate fibre delay calibration procedures, both of which are based on the two-way speed of light rather than the reference-frame dependent one-way speed implicit in the GPS and in the velocity of neutrinos over the (one-way) flight path from CERN to OPERA. The two-way light speed measurement ignores the moving frame of reference (the Earth system (ES)) in which the experiment was conducted and introduces errors of the same order as the early arrival time of the neutrinos (60.7 ± 6.9 (stat.) ± 7.4 (sys.))ns. Similar problems affect all attempts to measure the one-way speed of light [2]. This paper explains these problems, with reference to Einstein’s train thought-experiment, and suggests how the expected one-way speed of light in a moving frame of reference such as the ES could be derived using the cosmic microwave background (CMB) as a stationary frame of reference. This problem is additional to the clock synchronisation issues described in [3], and can be traced to a flaw in a procedure, [5], which may have been used at CERN on other occasions, so may affect other experiments involving the accurate measurement of particle velocities approaching or at c.
The experiment may yet prove to be important as it would, in fact, have measured a one-way neutrino (and hence one-way light) velocity and, by identifying a stationary frame of reference (possibly that defined by the CMB) with respect to light (and neutrino) transmission, perhaps shed light on cosmological questions.

October 14, 2011

Analysis of Fibre Delay Calibration Error in CERN-OPERA Neutrino Speed Experiment

Filed under: Neutrinos, Physics, Relativity — Tim Joslin @ 12:57 am

Following on from my previous two informal posts discussing the apparent measurement of superluminal neutrino velocities in transit between CERN and OPERA, I have drafted a paper on the topic (pdf), which I hope to submit to ArXiv, the open physics online journal.

The paper is a little more precise than the previous blog posts, but makes the same general point. I’d appreciate any comments. Here is the Abstract:

“The OPERA neutrino experiment at the Gran Sasso Laboratory obtained a measurement, v, of the muon neutrino velocity with respect to the speed of light, c, of (v-c)/c = (2.48 ± 0.28 (stat.) ± 0.30 (sys.)) ×10^-5, that is, in excess of c by about 1 part in 40,000. Over most of the neutrino flight path from CERN to OPERA, distances and timings were established by GPS signals to within 2cm and (2.3 ± 0.9)ns respectively. These measurements are not problematic. The Gran Sasso Laboratory is underground, however, and a significant part of the measurement of the expected flight time at c was established using two separate fibre delay calibration procedures, both of which are based on the two-way speed of light rather than the reference-frame dependent one-way speed implicit in the GPS and in the velocity of neutrinos over the (one-way) flight path from CERN to OPERA. This ignores the moving frame of reference (the Earth system (ES)) in which the experiment was conducted and introduces errors of the same order as the early arrival time of the neutrinos (60.7 ± 6.9 (stat.) ± 7.4 (sys.))ns. Similar problems affect all attempts to measure the one-way speed of light. This paper explains these problems, with reference to Einstein’s train thought-experiment, and suggests how the expected one-way speed of light in a moving frame of reference such as the ES could be derived using the cosmic microwave background (CMB) as a stationary frame of reference. This problem is additional to the clock synchronisation issues described in Carlo Contaldi’s paper, and can be traced to a flaw in a procedure which may have been used at CERN on other occasions, so may affect other experiments involving the accurate measurement of particle velocities approaching or at c.”

October 10, 2011

Einstein’s Train Thought-Experiment, Fast Neutrinos (and One-Way Measurements of the Speed of Light)

Filed under: Neutrinos, Physics, Relativity — Tim Joslin @ 12:58 am

My previous, somewhat rambling, post attempted to explain the recent experimental observation of neutrinos apparently travelling faster than light. I concluded that an adjustment needs to be made to the clocks used to measure the neutrino time of flight to compensate for the absolute motion of the experiment, i.e. in relation to the cosmic microwave background radiation (CMB). Observations suggest the Earth is moving at ~600kps (~1/500th light speed) relative to the CMB.

This post attempts to prove that conclusion by a thought-experiment.

Let’s remind ourselves of Einstein’s train thought-experiment (figures from Wikipedia, copied here purely for convenience):

What Einstein demonstrates here is the relativity of simultaneity. Events (say light emitted at the centre of the carriage reaching the two ends) which are simultaneous in one frame of reference (top, from within the train) are not simultaneous in another frame of reference (bottom, from the platform).

I’d like to extend this slightly.

There’s even a little story. Xavier, in his hyperspeed train, has lured Bob to the train platform under the pretext of performing a relativistic experiment. He hopes to distract Bob for long enough to send Agent Zero to shoot Alice, who is also standing on the platform:

Consider a scenario where Xavier is at the centre of the train, and Bob is standing on the platform, each with their atomic clocks. Bob’s colleagues are also standing on the platform with their atomic clocks at the precise points where there are clocks on the train. Clearly, if set correctly, each pair of clocks would, for an instant, show the time in the other frame of reference – T1(train) would be the same as T1(platform) and so on.

Xavier doesn’t just want a pretext for his evil scheme. He also wants to perform an experiment. He wants to impress Bob by displaying Bob’s correct time on clocks at the ends of his train when he passes Bob on the platform. Bob’s clocks are synchronised by signals from T1(platform) so reflect light speed transmission delays in Bob’s frame of reference. Unfortunately for Xavier, he doesn’t know how fast his train will be moving relative to the platform at the moment he passes Bob.

How can Xavier get his clocks to show the same as Bob’s? There are two possibilities. He could either send a timing signal to each of the clocks at the ends of his train or he could walk the clocks from his position in the centre to each of the two ends. In the first case, the clocks would show the same as T1(train), adjusted for the transmission delays at the speed of light in the train. But we know from Einstein’s thought-experiment that these clocks will not show the same times as those on the platform. In the second case, the clocks would initially show the same as T1(train), but to show the same time as the corresponding clocks on the platform they would have to be adjusted to reflect light transmission delays in Bob’s frame of reference. The adjustments would be different for the two clocks (the clock at the front of the train would have to be adjusted by more than the transmission time for light in the train and that at the rear by less).

It is evident from Einstein’s thought-experiment that neither method will yield the times on Bob’s clocks on the platform (Xavier would need to know the exact speed of his train relative to the platform and apply some equations from special relativity).

Signals sent at light speed to the clocks at the front and rear of the train will set them too early and late, respectively, since Bob sees events at the front of the train later than those at the rear. Similarly, walking the clocks from the centre of the train would ensure T1(train)=T2(train)=T3(train), but if adjustments are made to set T2(train) and T3(train) to be in synchrony like T2(platform) and T3(platform), T3(train) at the front would be behind T3(platform) and T2(train) at the rear would be ahead of T2(platform).

Xavier would either have to assume the speed of light was different in each direction or that time at the front of the train was delayed relative to time at the rear (and middle) of the train.

Let’s consider further what would happen if, on seeing Bob on the platform, Xavier told Agent Zero to set off in his near-light speed ship within the train (did I mention it was a big train?) in the direction of travel of the train, towards Alice, who Xavier knows is standing further along the platform in order to help Bob with the experiment.

Now, Bob is nobody’s fool, and sees Zero setting off (did I mention that the train is transparent?) and transmits a message to Alice: “Duck!”. She does so just before Zero arrives at T3(platform) (before he reaches the end of the train at T3(train)) and he misses. Obviously if Zero had reached Alice before the message from Bob, Zero would have travelled faster than light in Bob’s frame of reference. This is not possible, says Einstein.

Similarly, Bob might retaliate by sending Charlie in a similar near-light speed ship to shoot Yolanda at the rear of Xavier’s train (which, again, he would reach before he came to the end of the platform). In this case, Xavier would be able to warn Yolanda.

The critical point is that both Bob and Xavier see light travelling at the same speed in both frames of reference. Apologies if you knew this already, but I enjoyed telling the story.

Not shown on my diagram is that Bob would have the same problem with clocks if he was also moving (did I mention the platform is a movable construct in space? – thought-experiment technology really has moved on since Einstein’s day!). If he passed Harry’s Bar on the Intergalactic Highway, he would have the same issues as Xavier (and if Harry also tried to shoot Alice in the same way – after all, he is simmering with jealousy after she ran off with Bob – Bob would also be able to warn Alice). Bob would have to assume either that clocks on his ship were delayed or that light travelled at a different speed along and against his direction of travel.

It follows that it is impossible to measure the one-way speed of light in any frame of reference since you either have to assume there is an error between all clocks in line of motion (proportional to distance) or that light travels at different speeds in different directions.

Unless, of course, you know that your frame of reference is stationary or exactly how you are moving in relation to a stationary frame of reference.

Now, Big Bang theory* tells us that the cosmic microwave background radiation (CMB) was all generated simultaneously. The CMB was generated throughout the Universe and what we see now is what’s reaching us from those regions in all directions that happened to be 13.7 billion light-years away at the time of the Big Bang. Clearly, therefore, the anisotropy (or redshift and blueshift) of the CMB is due to the motion of the Earth rather than that of the CMB itself.

The CMB therefore represents a stationary frame of reference.

As an aside, note that if neutrinos did travel faster than light, they’d eventually (in around 500 trillion years or so, based on the magnitude of the superluminal neutrino velocity reported for the CERN-OPERA experiment) reach a region of space where the Big Bang was still in progress, potentially interfering with their own creation. I don’t think the Universe would allow that, somehow. Alice would be miffed.
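The order of magnitude of that aside can be checked directly; here’s a sketch assuming, as the text does, a CMB horizon of 13.7 billion light-years and the reported excess of about 2.48 parts in 10^5:

```python
# Order-of-magnitude check on the "~500 trillion years" aside.
# Assumptions: CMB horizon taken as 13.7 billion light-years (as in
# the text) and the reported fractional speed excess of ~2.48e-5.
eps = 2.48e-5            # reported (v - c)/c for the OPERA neutrinos
horizon_ly = 13.7e9      # light-years to the CMB-emitting regions

# The neutrino gains eps light-years on co-emitted light per year of
# flight, so it would overtake the horizon after horizon_ly / eps years.
years = horizon_ly / eps  # ~5.5e14 years, i.e. a few hundred trillion
```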

I therefore propose that if we adjust the fast neutrino timings to allow for the motion of the experiment, relative to the CMB, during the time the atomic clock was** moved between the GPS at Gran Sasso and the point where the neutrinos were detected, we will determine the true speed of neutrinos (which I predict will be that of light, c, within the limits of experimental error).

Note that the GPS timings and distances in the fast neutrino experiment do not contribute to the error, since all these are distorted in the same way (i.e. they are consistent within our frame of reference, or, to put it another way, systematic errors cancel out) due to the motion of the Earth (I may put up another post on just this point).

——-
* Just in case anyone assumes I’m an unreconstructed Big Bangian, I want to point out that I’m not entirely comfortable with Big Bang theory, but we can explain the difficulties of one-way lightspeed measurement and, by extension, fast neutrinos, within the paradigm. Obviously any replacement theory would have to explain all the Big Bang predictions (and more – but “more” doesn’t include fast neutrinos, since, just to be crystal clear, that phenomenon can already be explained) just as Einstein explained Newton’s.

** Postscript (15/10/11): It is incorrect to suggest that the time when the clock was moved (or clocks were synchronised) is important. My brain must have shorted at around the point I came up with that idea last weekend. In fact, the fibre time-delay calibration procedure (which uses two techniques based on the same misconception, one of which involves a transportable caesium clock) always gives the same result for the time-delay (excepting small measurement errors) based on an implicit two-way lightspeed measurement. The (one-way) neutrino flight time is on average greater than that expected based on the fibre time-delay calibration procedure (in this particular experiment, probably due to the N-S component of the CERN-OPERA neutrino flight-path), but dependent on the experiment’s orientation with the CMB when they are sent and detected (the change in orientation, as opposed to the motion, of the planet during their flight time is seriously insignificant). More detail on this point is included in the paper I have subsequently drafted on the erroneous superluminal neutrino result.  (Also made one unrelated wording change in the text, to clarify that the 500 trillion year estimate is based on the reported CERN-OPERA superluminal neutrino flight time).

Frames of Reference and Fast Neutrinos: Misinterpreted Findings?

Filed under: Neutrinos, Physics, Relativity — Tim Joslin @ 12:32 am

There are much more pressing problems I should be looking into, but I find it more than a little distracting when my understanding of how the universe operates is publicly and loudly challenged. I refer of course to the recent experiment which appeared to determine that the velocity of neutrinos is faster than light (by about one part in 40,000) (hereafter “the fast neutrino experiment”). So I’ve had a look at the paper describing the fast neutrino experiment (Autiero et al), sampled the discussion as to what might really be happening, and even listened to the seminar of 23rd September announcing the findings from CERN.

Here’s my take on the whole business.

Note: After drafting this post I’ve come up with a systematic demonstration, by thought-experiment, of my assertion here that a correction needs to be made for the motion of the experiment relative to the stationary frame of reference represented by the cosmic microwave background (CMB) signal. When it’s complete I’ll jump in my neutrino-drive time-machine and add a link to it here.

Abstract

I’m convinced the neutrinos are not travelling faster than light, though the findings may tell us something useful. I believe the results have been misinterpreted or the experiment misconceived. It is not possible to validate the speed of neutrinos in the way attempted. I don’t believe the results are due to an experimental error as such. The problem is with the concept of synchronised time, necessary to establish a frame of reference for an experiment. The experimenters have synchronised time measurements over most of the distance over which they measured the speed of neutrinos by reference to the speed of light (via the GPS system). But over a significant distance they used a method involving the transport of timing devices. Such a method does not compensate for the motion of the local frame of reference and has introduced measurement errors leading to the apparent superluminal speed of neutrinos. Although we perceive the speed of light (and neutrinos) to be constant, time is distorted by the motion of the local frame of reference in relation to the rest of the universe (hence we see distant objects red-shifted or blue-shifted depending on their motion relative to us). The neutrinos have not agreed to be unaffected by the rest of the universe. The results could, however, be used to partially test the hypothesis that the frame of reference of the neutrinos (and, at least, light) is the whole universe, i.e. that they only appear to be travelling at equal velocity in all directions if this is measured against the cosmic microwave background (CMB) or average motion of the universe as a whole.

A Doubly Ambitious Experiment

The fast neutrino experiment measured the one-way speed of light (implicitly, by trying to establish the simultaneity of events) and then (explicitly) that of neutrinos, determining the latter to be in excess of the former. Now, the one-way speed of light has not previously been successfully measured. There are fundamental reasons for this which I will try to explain.

The fact that the existing literature on the difficulty of one-way light-speed measurement is not discussed in Autiero et al suggests that this is the area in which the problem lies.

First, a little background.

What are Neutrinos Anyway?

I think it’s fair to say neutrinos are fundamental particles comparable in some ways to photons. If the fundamental property of photons is that they carry energy, maybe we can consider neutrinos to carry another form of energy (which we understand less and are less able to manipulate). Under certain circumstances the neutrino form of energy can be exchanged for the more familiar form of energy (so we can measure neutrino energy). I make this comparison and simplification to stress that our current level of knowledge suggests neutrinos should behave in some ways like photons. At least that should be the null hypothesis we should be aiming to falsify.

Why the Fast Neutrino Result must be false: (1) SN1987a

We already know neutrinos travel at the speed of light, to a very close approximation. Neutrinos from a distant supernova observed in 1987 (SN1987a) arrived 3 hours before light believed to have been emitted simultaneously, and there are reasons why the light might have been delayed. In any case, if the neutrinos had been travelling as fast as those in the fast neutrino experiment they would have arrived several years earlier. It’s therefore necessary to suppose there is something different about the fast neutrinos, such as their energy, even though there is no theoretical reason to suppose that this would be the case – the speed of light is not affected by its energy (which affects instead its wavelength). In fact the energy of the neutrinos in the fast neutrino experiment was several orders of magnitude greater than those from SN1987a. The obvious thing to do is to repeat the fast neutrino experiment with different energies. The critical question was asked about 1 hour 49 minutes into the CERN seminar (slide 83), by which time everyone was a bit tired. The answer was that no significant effect of energy on neutrino velocity had been determined. Referring to the fast neutrino paper (p.20), it’s clear that limited analysis, with significant uncertainty, has been carried out on only a small range of energies (varying by less than an order of magnitude and divided into just those of more than 20GeV and those of less than 20GeV).

Why the Fast Neutrino Result must be false: (2) Relativity

Relativity is not generally well-understood, but even Dara O’Briain on last week’s Mock the Week (a light-hearted UK current affairs TV show) noted that light doesn’t experience time. As far as light is concerned it travels instantaneously from point A to point B, assuming Einstein is right. Time passes only for stationary or at least slow-moving observers. In other words, time is associated with space, not with light itself. This is fundamental.

Simultaneity, Time and Frames of Reference

There’s some astonishingly good stuff on Wikipedia. I recommend the entries on Relativity of Simultaneity and Inertial frame of Reference. The point is we can only discuss events happening simultaneously within a closed system within which the relative velocities of objects are known. A frame of reference is not a property of the universe. Rather it’s something we need to define in order to apply physical laws. If we can assume we are taking measurements within a frame of reference, that is, that the phenomena (time, position, velocity, acceleration and so on) that we wish to relate are not independent of one another, then we can apply physical laws. For example, the fact that the solar system is hurtling round the Milky Way which is itself hurtling through space does not need to be taken account of in calculating a trajectory to land on the Moon.

And it’s only within a frame of reference that we can invoke a concept of simultaneity. Again, this is not a property of the universe, but something we need for our convenience. A concept of simultaneity allows us to relate time in different places.

Establishing a Frame of Reference: Conventions

Time always involves conventions, even when relativity doesn’t come into it. But when we want to make very accurate measurements we need further conventions. The main issue is how to deal with delays caused by the apparent speed of light. Consider a mission to the Moon. The communication delay is only a couple of seconds, but how does the astronaut know precisely when to expect a communication from Earth? There are a couple of sensible possibilities. The clock on the Moon could be arranged to show the time a message would arrive from Earth. The astronaut would not miss even a second of his favourite TV programme. A broadcast sent at 12 midday from Earth would arrive at 12 midday on the Moon. But when the reverse communication was attempted there would be a delay. Messages sent at 12:00:00 Moon time would reach Earth a little over 2 seconds (the round-trip delay) past 12, Earth time.

Less confusing is to pretend the time is the same on the Moon as on Earth. Messages sent at 12:00:00 from the Moon to the Earth will arrive at 12:00 and a bit over a second Earth time, and messages sent at precisely midday from the Earth will arrive at a bit over a second after midday Moon time.
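For concreteness, the two conventions differ only by the one-way light delay; a sketch, assuming the mean Earth-Moon distance of roughly 384,400km:

```python
# The delays behind the two Moon-time conventions.
# Assumption: mean Earth-Moon distance of ~384,400 km.
c = 299_792_458.0
d_moon = 3.844e8             # Earth-Moon distance, m

one_way = d_moon / c         # ~1.28 s: the offset between the two conventions
round_trip = 2 * one_way     # ~2.56 s: the delay the first convention shows
                             # on messages coming back from the Moon
```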

Once relativity comes into it conventions have to be used to manage time even within a frame of reference. Or rather, a better way of thinking about it may be that conventions to manage time are required to establish a frame of reference. Remember, the universe doesn’t know anything about our frames of reference. They are just a means for us to apply physical laws somewhat more easily. We should always ask whether all the phenomena we are measuring are in fact entirely relative to our frame of reference. Otherwise we are not measuring what we think we’re measuring. It’s a failure to question the validity of the frame of reference that lies behind the apparent anomaly of “fast neutrinos”.

The fast neutrino experiment relied on a convention whereby all clocks indicate the same time as on a master clock (i.e. they simply tried to determine Coordinated Universal Time (UTC) with great accuracy at both the sending (CERN) and receiving points (the OPERA detector at Gran Sasso Laboratory in a mountain 730km away)). So the convention for communication is that all transmissions received appear delayed by a time dependent on the distance from sender to receiver (i.e. distance divided by the speed of light, c). At least, that is the aim, but as I hope to explain, this was not achieved. It is the limitations of this approach (rather than any errors in execution) that I believe causes the fast neutrino anomaly.

Establishing a Frame of Reference: Issues with Synchronising Clocks

There are a number of aspects of relativity that need to be considered when synchronising atomic clocks. These are well known from the experience of the GPS system (PDF), which relies on all satellites being synchronised to UTC. The fast neutrino experiment could not just rely on GPS, though, because the OPERA facility is underground. They also used “a two-way fibre procedure” (I don’t know what this is and neither does the internet) and, most importantly, “a transportable Cs [caesium] clock… yielding the same result” for a substantial distance (“an 8.3km long optical fibre” along which a timing pulse is transferred every millisecond).

There’s plenty to go wrong and some of the issues are discussed in a short paper by Carlo Contaldi (although written when it was not yet clear that GPS time synchronisation was used for the bulk of the transmission path, the points Contaldi makes are applicable to the residual time synchronisation). I agree with Contaldi that the fast neutrino experiment paper should have discussed this issue in much more depth, but am willing to assume that relativistic effects have been correctly accounted for, since two Metrology Institutes (Swiss and German) were involved. Whilst the level of accuracy may have been unusual, no new principles were involved.

One particular relativistic effect may be surprising. It turns out that simply moving atomic clocks, however slowly, introduces an error as a result of the Sagnac Effect. If you synchronise two clocks next to each other and move one from west to east (e.g. from CERN to OPERA) it gains time, i.e. a signal from CERN would take a little longer to reach OPERA. This is adjusted for in the fast neutrino experiment (and in GPS) by adding a correction to signal arrival times (you can’t just adjust the clocks because if you kept doing that right round the planet you’d end up with a clock around 200ns ahead or behind the one you started with). At the fast neutrino experiment seminar one person asked if they had “corrected for the Earth’s rotation”. One way of looking at the problem is that the light (or neutrino) is slowed down or speeded up by the Earth’s rotation against or in the direction of travel. But the light always appears to arrive at the same speed. So, even though the problem is resolved by adding a correction factor to each signal received, we have to think of the Sagnac Effect as a time distortion arising from trying to create a rotating frame of reference.
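The “around 200ns” figure is just the standard Sagnac offset for a closed loop round the equator; a rough sketch, assuming the equatorial radius and the sidereal rotation rate:

```python
import math

# Rough check on the ~200 ns Sagnac discrepancy for a clock carried
# once around the planet. Assumptions: equatorial radius, sidereal day.
c = 299_792_458.0
omega = 2 * math.pi / 86164.1   # Earth's sidereal angular velocity, rad/s
R = 6.378e6                     # equatorial radius, m

# Standard Sagnac time offset for a closed equatorial loop: 2*omega*A/c^2
A = math.pi * R**2              # area enclosed by the loop
dt_ns = 2 * omega * A / c**2 * 1e9   # ~207 ns
```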

Extending Sagnac: What’s Been Overlooked

The Sagnac Effect is clearly a special case of a general effect. It’s not the result specifically of the Earth rotating, but of the Earth moving and taking with it the frame of reference we are trying to construct. If you were trying to measure the velocity of neutrinos instead in a space laboratory (with synchronised atomic clocks at each end, say), then you’d have to make a similar adjustment for the velocity of the laboratory.

Hang about! The velocity of the laboratory in relation to what? The experiment is being conducted in a closed space.

Well, to cut to the chase, my money is on the frame of reference for light and neutrinos being the whole universe. It’s unlikely to be something as small as the Earth or the Solar System, since most of the billions (trillions?) of neutrinos created at CERN will travel for light-years – only a handful were detected at OPERA (I heard that if you could shine a beam of neutrinos at a light-year’s thickness of lead, most of them would pass through).

Now, although the fast neutrino experimenters may have corrected for the Sagnac Effect (which GPS also corrects for) when moving their Cs clock, it may be the case that they didn’t correct for the equivalent effect of the motion of the Earth around the Sun, the Sun around the Milky Way or the Milky Way about the local supercluster. These are not corrected for in the GPS system, because these motions affect all the clocks (and light transmission between them) simultaneously (though I can’t help pointing out that small errors would arise if they failed to correct for the rotation of the Earth-Moon system about its centre of gravity over a lunar month, but maybe they do!).

Most importantly, the red-shift/blue-shift dipole in the cosmic microwave background radiation (CMB) suggests the Earth is moving at about 600kps (about 1/500th or 0.2% lightspeed).

Let’s come back to that.

Sources of Confusion 1: How could the motion of the Earth affect the neutrino speed measurement?

I’m confused so I expect you are too!

As with the Sagnac Effect, the motion of the Earth does not affect the speed of light or the passage of time as we perceive them. It does, though, affect time on Earth as perceived by an external observer (say one at rest in relation to the CMB). They would observe time running faster on Earth if it were coming towards them and slower if it were receding. The time between pulses of light from Earth would vary and the pulses would be blueshifted or redshifted.

Most importantly (as in Einstein’s train analogy), events that appear simultaneous on Earth do not appear simultaneous to the external observer.

The neutrinos (and light) are like the external observer. We might think our clocks are synchronised, but to the neutrinos they are not.

Sources of Confusion 2: Why don’t we notice this effect, say in the GPS system?

As in the case of the Sagnac Effect, the problem only occurs when you try to synchronise clocks. Once clocks are synchronised, they all run slower or faster – as does every other physical process – as perceived by an external observer, whenever they change velocity relative to that observer, e.g. as the Earth rotates and orbits.

Individual clocks may appear (to the observer) to speed up or slow down as they change orientation, as in the Fizeau experiment. Fizeau measured the difference in speed between light beams travelling through water flowing in opposite directions. If you were in one of Fizeau’s flows (but couldn’t detect the water flow – this is a thought experiment, OK?), it would be impossible to set two clocks to measure the speed of light without knowing the speed of the water flow.

Similarly, the errors in an experiment to measure the speed of light or neutrinos on Earth will depend on the orientation of the experiment with respect to the planet’s motion (such as its ~600kps speed relative to the CMB) at the time the clocks are synchronised.* (This has got a little chicken-and-egg, as we want to measure the speed of light and neutrinos to determine whether their frame of reference is the CMB. Try not to forget that the speed of light is not some external quantity, but actually determines time within a frame of reference, so appears constant, i.e. using time differences to measure the speed of light is circular. What the fast neutrino experiment was trying to do, of course, was measure the speed of neutrinos, by trying to calculate what the speed of light would have been had it been able to travel through the Earth with the neutrinos.)

The GPS system synchronises time by the exchange of signals. These travel at the speed of light, which is common for the whole frame of reference. GPS doesn’t need to know about the motion of the planet as a whole (except for its rotation), since all positions are calculated relative to each other within the Earth system.
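The signal-exchange idea can be sketched as code. The helper below is a hypothetical illustration of two-way time transfer (the names and numbers are mine, not from any GPS specification): it estimates a remote clock’s offset assuming the one-way delays are symmetric, which is exactly the assumption a motion of the whole frame would silently violate.

```python
# Two-way time transfer: the scheme signal-exchange synchronisation relies on.
# It assumes the signal delay is the same in both directions; an unnoticed
# motion of the whole frame would appear as a one-way asymmetry it cannot see.

def clock_offset(t1: float, t2: float, t3: float) -> float:
    """Estimate clock B's offset from clock A's.
    t1: send time at A (A's clock); t2: receive/reply time at B (B's clock);
    t3: reply arrival back at A (A's clock). Assumes symmetric one-way delays."""
    return t2 - (t1 + t3) / 2.0

# Example: B's clock runs 50 units ahead; the true one-way delay is 10 units.
t1 = 100.0
t2 = (100.0 + 10.0) + 50.0   # arrival at B, read on B's offset clock
t3 = 120.0                   # round trip of 20 units on A's clock
print(clock_offset(t1, t2, t3))  # 50.0
```

Note that the round-trip time (t3 - t1) pins down only the *two-way* delay; any asymmetry between the outbound and return legs is absorbed, invisibly, into the offset estimate.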

In the Fizeau thought-experiment, if instead you used the speed of light to synchronise clocks within the water flow, you could then use them to determine positions just as in the GPS.

Sources of Confusion 3: the Aether versus Frames of Reference, Michelson-Morley versus Sagnac and Fizeau

I’ve noticed that a number of people have pointed out on blogs that errors might have arisen in the fast neutrino experiment because of the motion of the Earth. They’ve been slapped down by others saying there’s no “aether”, as supposedly shown by Michelson and Morley (M&M). The idea of the aether was that there was a substance through which light (and all objects) move, so that the Earth’s motion through it should make the speed of light depend on direction. Michelson and Morley showed this was not the case by demonstrating that light travelled at the same (two-way) speed in perpendicular directions.

This helped Einstein develop a theory without the aether concept. But he nevertheless needed to explain findings like those of Sagnac and Fizeau which appeared to show the existence of an aether.

The Real Meaning of the Fast Neutrino Experiment

When I thought the fast neutrino experiment had involved transporting clocks over 730km, I calculated the maximum error as far too large: up to 730km/500 (assuming the Earth is travelling at roughly 600kps, or 1/500th of light speed, relative to the CMB, as discussed earlier), i.e. 1.46km of distance or around 4900ns early, as against the actual 60ns. To have had an error of only 60ns (of course other timing errors may come to light) would have required the OPERA clock to have been synchronised when the experiment was somehow moving only slowly in relation to the CMB (perhaps perpendicular to its motion).*

But with only 8.3km to worry about (actually there are other segments where the same error might be made), the maximum possible error is around 8.3km/500, i.e. 16.6m or around 55ns – close to what was observed. The trouble is, this assumes the experiment was oriented directly in line with the motion of the Earth in relation to the CMB. If records of exactly how and when clocks were moved* still exist, and/or it is possible to repeat the experiment, it should be possible to determine orientations with respect to the CMB, and hence the expected timing errors, assuming there’s anything in all this.
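The arithmetic above is easy to check. A minimal sketch in Python, using the post’s own figures (730km, 8.3km, v/c = 1/500):

```python
# Back-of-envelope estimate: maximum synchronisation error if clocks are set
# while the experiment moves at v/c = 1/500 directly along the baseline.
C = 299_792_458.0  # speed of light in vacuum, m/s

def max_timing_error_ns(baseline_m: float, v_over_c: float = 1 / 500) -> float:
    """Worst-case clock-synchronisation error over a baseline, in nanoseconds."""
    return baseline_m * v_over_c / C * 1e9

print(max_timing_error_ns(730_000))  # ≈ 4870 ns ("around 4900ns")
print(max_timing_error_ns(8_300))    # ≈ 55 ns, close to the observed 60 ns
```

Any orientation away from the direction of motion only shrinks these numbers, which is why they are upper bounds.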

Maybe the fast neutrino experiment has not proved that neutrinos travel faster than light, but rather suggests ways in which we might measure the direction and velocity of motion of the Earth with respect to the frame of reference of neutrinos (meaning the frame of reference in which they appear to travel at the same speed in all directions). Maybe this will turn out to be the whole universe (and therefore in agreement with CMB redshift/blueshift measurements), maybe not.

* Postscript (15/10/11): It is incorrect to suggest that the time when the clock was moved (or clocks were synchronised) is important. My brain must have shorted at around the point I came up with that idea last weekend. In fact, the fibre time-delay calibration procedure (which uses two techniques based on the same misconception, one of which involves a transportable caesium clock) always gives the same result for the time-delay (excepting small measurement errors), based on an implicit two-way lightspeed measurement. The (one-way) neutrino flight time is on average greater than that expected from the fibre time-delay calibration procedure (in this particular experiment, probably due to the N-S component of the CERN-OPERA neutrino flight-path), but dependent on the experiment’s orientation with respect to the CMB when the neutrinos are sent and detected (the change in orientation, as opposed to the motion, of the planet during their flight time is entirely insignificant). More detail on this point is included in the paper I have subsequently drafted on the erroneous superluminal neutrino result.  (Also made one unrelated wording change in the text, to clarify that the Sagnac Effect is just one example of a general phenomenon.)
