Uncharted Territory

April 8, 2016

Missing Mass, the Absent Planet X and Yet More Comments on Trujillo & Sheppard and Batygin & Brown

Filed under: Media, Orbital dynamics, Physics, Science and the media — Tim Joslin @ 5:33 pm

Since my two previous posts attempting to debunk the idea of “Planet X” – a supposed large planet in the outer reaches of the Solar System – the occasional media reference has informed me that teams of researchers and various items of multi-million pound equipment are apparently employed chasing the wild Planet X goose.  Indeed, as I go to press, Scientific American has just emailed me a link to an article reporting the latest developments in the search.  Then, yesterday, New Scientist reported on speculation as to where Planet X (or “Planet Nine” as they call it) might have come from.  A paper New Scientist refer to has a bearing on my own conclusions so I’m adding a note about it at the end of this piece.

I had some further thoughts some weeks ago, and it's time I cleared away a loose end by writing them up.

My Original Proposed Explanation

Let’s recap.  The story so far is that, based on certain characteristics of the orbits of Sedna and a dozen or so other distant minor planets – often referred to as trans-Neptunian objects or TNOs – several groups of researchers have proposed a “Planet X” or sometimes “Planet Nine”, Pluto, the ninth planet for a certain generation, having been relegated to mere “minor planet” status. As I consider the demotion of Pluto to be utterly ridiculous, I’m going to stick to the terminology “Planet X” for the hypothetical distant planet.  You can take “X” to be the Roman numeral if you want.

I was immediately sceptical about the existence of Planet X.  Some other explanation for the TNO orbits seemed more probable to me.  Planet X would be exceptional, compared to the eight (or nine) known planets, not only in its distance from the Sun, but also in the plane of its orbit.  To explain the strange features of the orbits of the minor planets by the known “Kozai mechanism” of gravitational “shepherding” of smaller objects by a large body, Planet X would have to orbit perpendicular to the plane of the Solar System, within a few degrees of which the planes of the orbits of all the other planets lie.

Some weeks ago then, in my first post on the subject, I reviewed what had been written on the subject of Planet X.  I think now that I was perhaps overly influenced by the Scientific American article on the subject and considered much the most important aspect of the minor planets’ orbits to be their near 0˚ arguments of perihelion (AOPs).  That is, they cross the plane of the Solar System roughly when they are nearest the Sun.

On reflection, I was perhaps wrong to be so dismissive of the eccentricity of the minor planets' orbits.  All orbits are eccentric, I pointed out.  But the minor planets' orbits are really quite eccentric.  There may be a cause of this eccentricity.

I also think it is important that the minor planets' orbits are highly inclined to the plane of the Solar System compared to those of the inner planets, but they are nevertheless less inclined than would be expected if their orientations were random, i.e. in most cases somewhat less than 30˚.

I went on to suggest that perhaps something (other than Planet X) was pulling the minor planets towards the plane of the Solar System.  I suggested it was simply the inner planets, since there would be a component of their gravitational attraction on the minor planets perpendicular to the plane of the Solar System.  I included a diagram which I reproduce once again:

160205 Planet X slash 9

In my second post about Planet X a few days later, I looked more closely at the original scientific papers, in particular those by Trujillo & Sheppard and Batygin & Brown.  I wondered why my suggestion had been rejected, albeit implicitly.  To cut a long story short, the only evidence that the minor planet orbits can't be explained by the gravity of the inner eight planets (and the Sun) is computer modelling described in the paper by Trujillo & Sheppard.  I wondered if this could have gone wrong somehow.

Problems with Naive Orbital Flattening

Let’s term my original explanation “Naive Orbital Flattening”.  There are some issues with it:

First, if the minor planets are “falling” towards the plane of the Solar System, as in my figure, as well as orbiting its centre of gravity, they would overshoot and “bounce”.  They would have no way of losing the momentum towards the plane of the Solar System, so, after reaching an inclination of 0˚, their inclination would increase again on the opposite side of the plane as it were (I say “as it were” since the minor planets would cross the plane of the Solar System twice on each orbit, of course).

Second, mulling the matter over, there is no reason why orbital flattening wouldn’t have been detected by computer modelling.  Actually, I tell a lie; there is a reason.  The reason is that the process would be too slow.  Far from bouncing, it turns out that the minor planets would not have had time for their orbital inclinations to decline to 0˚ even once.  I did some back of the envelope calculations – several times in fact – and if you imagine the minor planets falling towards the plane of the Solar System under the gravity of the component of the inner planets’ orbits perpendicular to the plane and give yourself 4 billion years, the minor planets would only have fallen a small fraction of the necessary distance!

Third, we have this issue of the AOP.  The AOPs of the inner planets precess because of the gravitational effect of the other planets as they orbit the Sun (with some tweaks arising from relativity).  It’s necessary to explain why this precession wouldn’t occur for the minor planets.

Missing Mass

However you look at it, explaining the orbits of the minor planets must involve finding some mass in the Solar System!  One possible explanation is Planet X.  But could there be another source of missing mass?

Well, trying to rescue my theory, I was reading about the history of the Solar System.  As you do.

It turns out that the Kuiper Belt, beyond Neptune, now contains only a fraction of the mass of the Earth.  At one time it must have had at least 30 times the mass of the Earth, in order for the large objects we see today to form at all.  Trouble is, the consensus is that all that stuff either spiralled into the Sun, or was driven into interstellar space, depending on particle size, by the effect of solar radiation and the solar wind.

The science doesn’t seem done and dusted, however.  Perhaps there is more mass in the plane of the Solar System than is currently supposed.  Stop Press: Thanks to New Scientist I’m alerted to a paper that suggests exactly that – see the Addendum at the end of this piece.

It seems to me a likely place for particles to end up is around the heliopause, about 125 AU (i.e. 125 times the radius of the Earth's orbit) from the Sun, because this is where the solar wind collides with the interstellar medium.  You can imagine that forces pushing particles – of a certain range of sizes – out of the Solar System might at this point balance those pushing them back in.

Sophisticated Orbital Flattening

OK, there’s a big “if”, but if there is somewhat more mass – the remains of the protoplanetary disc – in the plane of the Solar System than is generally assumed, then it might be possible to explain the orbits of Sedna and the other TNOs quite neatly.  All we have to assume is that the mass is concentrated in the inner part of the TNOs’ orbits, let’s say from the Kuiper Belt through the heliopause at ~125 AU.

First, the AOPs of around 0˚ are even easier to explain than by the effects of the inner planets.  As with the inner planets, the mass would have its greatest effect on the TNOs when they are near perihelion, so would perturb the orbits most then, as discussed in my previous posts.  The improvement in the explanation is that there is no need to worry about AOP precession.  Because the mass is in a disc, and therefore distributed relatively evenly around the Sun, its rotation has no gravitational effect on the minor planets.  And it is the orbital motion of the other, discrete planets that causes each planet’s AOP precession.
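As a quick sanity check on that last claim, here is a minimal numerical sketch (Python, with made-up distances and a unit-mass ring): it approximates a uniform ring by many point masses and confirms that rotating the ring makes no detectable difference to the potential it produces at a fixed point above the plane, whereas moving a single discrete planet around its orbit obviously would.

```python
import numpy as np

# Approximate a uniform ring of total mass 1 (arbitrary units) at 80 AU by N
# point masses, and evaluate the potential it produces at a fixed point
# representing a TNO above the plane, for several rotations of the ring.
N = 10_000
ring_angles = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
tno = np.array([150.0, 0.0, 30.0])       # 150 AU out, 30 AU above the plane (assumed)

for rotation in (0.0, 0.4, 1.1):         # rotate the whole ring by these angles (radians)
    ring = 80.0 * np.column_stack([np.cos(ring_angles + rotation),
                                   np.sin(ring_angles + rotation),
                                   np.zeros(N)])
    potential = -np.mean(1.0 / np.linalg.norm(ring - tno, axis=1))
    print(f"ring rotated by {rotation:.1f} rad: potential = {potential:.15f}")
# The three numbers agree essentially to machine precision: an axisymmetric disc
# exerts the same pull on the TNO however it is "rotated", unlike discrete planets.
```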

Second, we need to observe that there is a trade-off between orbital inclination and eccentricity as in the Kozai effect, due to conservation of angular momentum in the plane of the Solar System.  Thus, as the inclination of the TNOs’ orbits is eroded, so their orbits become more eccentric.  This could have one of three possible consequences (see the sketch of the trade-off after this list):

  • it could be that, as I concluded for the effects of the inner planets alone, there has not been time for the TNOs’ orbits to flatten to 0˚ inclination in the 4 billion or so years since the formation of the Solar System.
  • or, it could be that the TNOs we observe are doomed in the sense that their orbits will be perturbed by interactions with the planets if they stray further into the inner Solar System – assuming they don’t actually collide with one of the inner planets – and we don’t observe TNOs that have already been affected in this way.
  • or, it could be that the TNOs’ orbits eventually reach an inclination of 0˚ and “bounce” back into more inclined orbits.  The point is that the eccentricity of the orbits of such bodies would decline again, so we may not observe them so easily, since the objects are so far away we can only see them when they are closest to the Sun.
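Here is the minimal sketch of the inclination–eccentricity trade-off promised above, assuming – as in the Kozai mechanism – that the component of orbital angular momentum perpendicular to the plane, proportional to √(1−e²)·cos i at fixed semi-major axis, is conserved.  The starting inclination and eccentricity are illustrative assumptions, not observed values:

```python
import numpy as np

def eccentricity_after_flattening(e0, i0_deg, i1_deg):
    """Eccentricity implied by conserving the z-component of orbital angular
    momentum, L_z ~ sqrt(1 - e^2) * cos(i), at fixed semi-major axis."""
    lz = np.sqrt(1.0 - e0**2) * np.cos(np.radians(i0_deg))
    root = lz / np.cos(np.radians(i1_deg))
    if root > 1.0:
        raise ValueError("no real eccentricity: the orbit cannot be this inclined at this L_z")
    return np.sqrt(1.0 - root**2)

# Illustrative (assumed) starting values: a primordial orbit inclined at 30 deg
# with modest eccentricity 0.3, eroded to Sedna-like 12 deg and then to the plane.
for i1 in (20.0, 12.0, 0.0):
    e1 = eccentricity_after_flattening(e0=0.3, i0_deg=30.0, i1_deg=i1)
    print(f"inclination {i1:4.0f} deg -> eccentricity {e1:.2f}")
# Inclination falls, eccentricity rises - and the reverse on any "bounce".
```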

Which of these possibilities actually occurs would depend on the amount and distribution of the additional mass I am suggesting may exist in the plane of the Solar System.  My suspicion is that the orbital flattening process would be very slow, but it is possible different objects are affected in different ways, depending on initial conditions, such as their distance from the Sun.

Now I really will write to the scientists to ask whether this is plausible.  Adding some mass in the plane of the Solar System to Mercury symplectic integrator modelling would indicate whether or not Sophisticated Orbital Flattening is a viable hypothesis.
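For what it’s worth, here is the sort of test I have in mind, sketched with the open-source REBOUND N-body package rather than Mercury, and relying on my possibly imperfect recollection of its Python interface.  The disc mass, its extent, the number of ring particles, the time step and the run length are all assumptions chosen for illustration, not a serious simulation:

```python
import numpy as np
import rebound

sim = rebound.Simulation()
sim.units = ('yr', 'au', 'msun')           # sets G appropriately for these units
sim.integrator = "whfast"                  # symplectic, in the spirit of Mercury
sim.dt = 0.5                               # years; far too coarse for a real run

sim.add(m=1.0)                             # the Sun
# The four giant planets: approximate masses (solar masses) on circular,
# coplanar orbits for simplicity; a real test would use proper elements.
for m, a in [(9.5e-4, 5.2), (2.9e-4, 9.6), (4.4e-5, 19.2), (5.1e-5, 30.1)]:
    sim.add(m=m, a=a)

# Hypothetical residual disc: 5 Earth masses (assumed) spread over a ring of
# massive particles between the Kuiper Belt and the heliopause (~50-125 AU).
n_ring, m_disc = 40, 5.0 * 3.0e-6
for a in np.linspace(50.0, 125.0, n_ring):
    sim.add(m=m_disc / n_ring, a=a, f=np.random.uniform(0.0, 2.0 * np.pi))

# A Sedna-like massless test particle: a~500 AU, e~0.85, i~12 deg, omega~0.
sim.add(m=0.0, a=500.0, e=0.85, inc=np.radians(12.0), omega=0.0)
sim.move_to_com()

tno = sim.particles[sim.N - 1]
for t in np.linspace(0.0, 1.0e7, 21):      # 10 Myr here; a real run needs ~4 Gyr
    sim.integrate(t)
    print(f"t={t:9.2e} yr  i={np.degrees(tno.inc):5.2f} deg  e={tno.e:.3f}  "
          f"omega={np.degrees(tno.omega):8.2f} deg")
# The question is whether omega circulates, as Trujillo & Sheppard found with
# the known planets alone, or stays near 0 once the ring mass is included.
```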

Addendum: I mentioned towards the start of this post that the search continues for Planet X.  I can’t help remarking that this doesn’t strike me as good science.  What research should be trying to do is explain the observations, i.e. the characteristics of the minor planets’ orbits, not trying to explain Planet X, which is as yet merely an unproven hypothetical explanation of those observations.  Anyway, this week’s New Scientist notes that:

“…the planet could have formed where we find it now. Although some have speculated that there wouldn’t be enough material in the outer solar system, Kenyon found that there could be enough icy pebbles to form something as small as Planet Nine in a couple of hundred million years (arxiv.org/abs/1603.08008).”

Aha!  Needless to say I followed the link provided by New Scientist and it turns out that the paper by Kenyon & Bromley does indeed suggest a mechanism for a debris disc at the right sort of distance in the Solar System.  They focus, though, on modelling how Planet X might have formed.  They find that it could exist, if the disc had the right characteristics, but it also may not have done.  It all depends on the “oligarchs” (seed planets) and the tendency of the debris to break up in collisions.  This is from their Summary (my explanatory comment in square brackets):

We use a suite of coagulation calculations to isolate paths for in situ production of super-Earth mass planets at 250–750 AU around solar-type stars. These paths begin with a massive ring, M0 >~ 15 M⊕ [i.e. more than 15 times the mass of the Earth], composed of strong pebbles, r0 ≈ 1 cm, and a few large oligarchs, r ≈ 100 km. When these systems contain 1–10 oligarchs, two phases of runaway growth yield super-Earth mass planets in 100–200 Myr at 250 AU and 1–2 Gyr at 750 AU. Large numbers of oligarchs stir up the pebbles and initiate a collisional cascade which prevents the growth of super-Earths. For any number of oligarchs, systems of weak pebbles are also incapable of producing a super-Earth mass planet in 10 Gyr.

They don’t consider the possibility that the disc itself could explain the orbits of the minor planets.  And may indeed be where they originated in the first place.  In fact, the very existence of the minor planets could suggest there were too many “oligarchs” for a “super-Earth” to form.  Hmm!

 

February 13, 2016

Is Planet X Needed? – Further Comments on Trujillo & Sheppard and Batygin & Brown

Filed under: Media, Orbital dynamics, Physics, Science and the media — Tim Joslin @ 7:42 pm

In my last post, Does (Brown and Batygin’s) Planet 9 (or Planet X) Exist?, I ignored the media squall which accompanied the publication on 20th January 2016 of a paper in The Astronomical Journal, Evidence for a Distant Giant Planet in the Solar System, by Konstantin Batygin and Michael E Brown, and discussed the coverage of the issue in New Scientist (here [paywall] and here) and in Scientific American (here [paywall]).

The idea that there may be a Planet X is not original to the Batygin and Brown paper.  It was also proposed in particular by Chadwick A. Trujillo and Scott S. Sheppard in a Nature paper A Sedna-like body with a perihelion of 80 astronomical units dated 27th March 2014.  The New Scientist and Scientific American feature articles were not informed by Batygin and Brown.  Scientific American explicitly referenced Trujillo and Sheppard (as well as a paper by C and R de la Fuente Marcos).

A key part of the evidence for a “Planet X” is that for the orbits of a number of trans-Neptunian objects (TNOs) – objects outside the orbit of Neptune – including the minor planet Sedna, the argument of perihelion is near 0˚.  That is, they cross the plane of the planets near when they are closest to the Sun. The suggestion is that this is not coincidental and can only be explained by the action of an undiscovered planet, perhaps 10 times the mass of the Earth, lurking out there way beyond Neptune. An old idea, the “Kozai mechanism”, is invoked to explain how Planet X could be controlling the TNOs, as noted, for example, by C and R de la Fuente Marcos in their paper Extreme trans-Neptunian objects and the Kozai mechanism: signalling the presence of trans-Plutonian planets.

I proposed a simpler explanation for the key finding.  My argument is based on the fact that the mass of the inner Solar System is dispersed from its centre of gravity, in particular because of the existence of the planets. Consequently, the gravitational force acting on the distant minor planets can be resolved into a component towards the centre of gravity of the Solar System, which keeps them in orbit, and, when averaged over time and because their orbits are inclined to the plane of the Solar System, another component at 90˚ to the first, towards the plane of the orbits of the eight major planets:

160205 Planet X slash 9

My suggestion is that this second component will tend gradually to reduce the inclination of the minor planets’ orbits. Furthermore, the force towards the plane of the Solar System will be strongest when the minor planets are at perihelion on their eccentric orbits, not just in absolute terms, but also when averaged over time, taking into account varying orbital velocity as described by Kepler. This should eventually create orbits with an argument of perihelion near 0˚, as observed.

Has such an effect been taken into account by those proposing a Planet X?  The purpose of this second post on the topic is to look a little more closely at how the two main papers, Batygin & Brown and Trujillo & Sheppard tested for this possibility.

Batygin & Brown

The paper by Batygin and Brown does not document any original research that would have shown AOPs tending towards 0˚ without a Planet X by the mechanism I suggest.  Here’s what they say:

“To motivate the plausibility of an unseen body as a means of explaining the data, consider the following analytic calculation. In accord with the selection procedure outlined in the preceding section, envisage a test particle that resides on an orbit whose perihelion lies well outside Neptune’s orbit, such that close encounters between the bodies do not occur. Additionally, assume that the test particle’s orbital period is not commensurate (in any meaningful low-order sense—e.g., Nesvorný & Roig 2001) with the Keplerian motion of the giant planets.

The long-term dynamical behavior of such an object can be described within the framework of secular perturbation theory (Kaula 1964). Employing Gauss’s averaging method (see Ch. 7 of Murray & Dermott 1999; Touma et al. 2009), we can replace the orbits of the giant planets with massive wires and consider long-term evolution of the test particle under the associated torques. To quadrupole order in planet–particle semimajor axis ratio, the Hamiltonian that governs the planar dynamics of the test particle is [as close as I can get the symbols to the original]:

H = -\frac{1}{4}\,\frac{GM}{a}\left(1-e^{2}\right)^{-3/2}\sum_{i=1}^{4}\frac{m_{i}a_{i}^{2}}{Ma^{2}}

In the above expression, G is the gravitational constant, M is the mass of the Sun, mi and ai are the masses and semimajor axes of the giant planets, while a and e are the test particle’s semimajor axis and eccentricity, respectively.

Equation (1) is independent of the orbital angles, and thus implies (by application of Hamilton’s equations) apsidal precession at constant eccentricity… in absence of additional effects, the observed alignment of the perihelia could not persist indefinitely, owing to differential apsidal precession.” [my stress].

After staring at this for a bit I noticed that the equation does not include the inclination of the orbit of the test particle, just its semimajor axis (i.e. mean distance from the Sun) and eccentricity.  Then I saw that the text also only refers to the “planar dynamics of the test particle”, i.e. its behaviour in two, not three, dimensions.

Later in the paper Batygin and Brown note (in relation to their modelling in general, not just what I shall call the “null case” of no Planet X) that:

“…an adequate account for the data requires the reproduction of grouping in not only the degree of freedom related to the eccentricity and the longitude of perihelion, but also that related to the inclination and the longitude of ascending node. Ultimately, in order to determine if such a confinement is achievable within the framework of the proposed perturbation model, numerical simulations akin to those reported above must be carried out, abandoning the assumption of coplanarity.”

I can’t say I found Batygin & Brown very easy to follow, but it’s fairly clear that they haven’t modelled the Solar System in a fully 3-dimensional manner.

Trujillo & Sheppard

If we have to discount Batygin & Brown, then the only true test of the null case is that in Trujillo & Sheppard.  Last time I quoted the relevant sentence:

“By numerically simulating the known mass in the solar system on the inner Oort cloud objects, we confirmed that [they] should have random ω [i.e. AOP]… This suggests that a massive outer Solar System perturber may exist and [sic, meaning “which”, perhaps] restricts ω for the inner Oort cloud objects.”

I didn’t mention that they then referred to the Methods section at the end of their paper.  Here’s what they say there (and I’m having to type this in because I only have a paper copy! – so much for scientific and technological progress!):

“Dynamical simulation. We used the Mercury integrator to simulate the long-term behaviour of ω for the Inner Oort cloud objects and objects with semi-major axes greater than 150AU and perihelia greater than Neptune.  The goal of this simulation was to attempt to explain the ω clustering.  The simulation shows that for the currently known mass in the Solar System, ω for all objects circulates on short and differing timescales dependent on the semi-major axis and perihelion (for example, 1,300 Myr, 500 Myr, 100 Myr and 650 Myr for Sedna, 2012 VP113, 2000 CR105 and 2010 GB17, respectively).”

In other words their model reproduced the “apsidal precession” proposed in Batygin & Brown, but since Trujillo & Sheppard refer to ω, the implication is that their simulation was in 3 dimensions and not “planar”.

However, could the model used by Trujillo and Sheppard have somehow not correctly captured the interaction between the TNOs and the inner planets?  The possibilities range from apsidal precession being programmed into the Mercury package (stranger things have happened!) to something more subtle, resulting from the simplifications necessary for Mercury to model Solar System dynamics.

Maybe I’d better pluck up courage and ask Trujillo and Sheppard my stupid question!  Of course, the effect I propose would have to dominate apsidal precession, but that’s definitely possible when apsidal precession is on a timescale of 100s of millions of years, as found by Trujillo and Sheppard.

February 5, 2016

Does (Brown and Batygin’s) Planet 9 (or Planet X) Exist?

Filed under: Media, Orbital dynamics, Physics, Science and the media — Tim Joslin @ 7:23 pm

What exactly is the evidence that there may be a “Super-Earth” lurking in the outer reaches of the Solar System?  Accounts differ, so I’ll review what I’ve read (ignoring the mainstream media storm around 20th January!), to try to minimise confusion.

New Scientist

If you read your New Scientist a couple of weeks ago, you’ll probably have seen the cover-story feature article Last Great Mysteries of the Solar System, one of which was Is There a Planet X? [paywall for full article – if, that is, unlike me, you can even get your subscription number to give you access].  The article discussed the dwarf planets Sedna and 2012VP113.  The orbits of these planetoids – and another 10 or so not quite so distant bodies – according to New Scientist and the leaders of the teams that discovered Sedna and 2012VP113, Mike Brown and Scott Sheppard, respectively, could indicate “there is something else out there”.

Apparently, says NS:

“[the orbits of Sedna and 2012VP113] can’t be explained by our current understanding of the solar system…  Elliptical orbits happen when one celestial object is pushed around by the gravity of another.  But both Sedna and 2012VP113 are too far away from the solar system’s giants – Jupiter, Saturn, Uranus and Neptune – to be influenced.  Something else must be stirring the pot.”

“Elliptical orbits happen when one celestial object is pushed around by the gravity of another.”  This is nonsense.  Elliptical orbits are quite usual beyond the 8 planets (i.e. for “trans-Neptunian objects”), which is the region we’re talking about.  The fact that the orbits of Sedna and 2012VP113 are elliptical is not why there may be another decent-sized planet way out beyond Uranus (and little Pluto).

I see that the online version of New Scientist’s article Is There a Planet X? has a strap-line:

“Wobbles in the orbit of two distant dwarf planets are reviving the idea of a planet hidden in our outer solar system.”

Guess what?  The supposed evidence for Planet X is nothing to do with “wobbles” either.

The New Scientist article was one of several near-simultaneous publications and in fact the online version was updated, the same day, 20th January, with a note:

Update, 20 January: Mike Brown and Konstantin Batygin say that they have found evidence of “Planet Nine” from its effect on other bodies orbiting far from the sun.

Exciting.  Or it would have been, had I not been reading the print version.  The link is to another New Scientist article: Hints that ‘Planet Nine’ may exist on edge of our solar system [no paywall]. “Planet Nine”?  It was “Planet X” a minute ago.

Referencing the latest paper on the subject, by Brown and Batygin, this new online NS article notes that:

“Brown and others have continued to explore the Kuiper belt and have discovered many small bodies. One called 2012 VP113, which was discovered in 2014, raised the possibility of a large, distant planet, after astronomers realised its orbit was strangely aligned with a group of other objects. Now Brown and Batygin have studied these orbits in detail and found that six follow elliptical orbits that point in the same direction and are similarly tilted away from the plane of the solar system.

‘It’s almost like having six hands on a clock all moving at different rates, and when you happen to look up, they’re all in exactly the same place,’ said Brown in a press release announcing the discovery. The odds of it happening randomly are just 0.007 per cent. ‘So we thought something else must be shaping these orbits.’

According to the pair’s simulations, that something is a planet that orbits on the opposite side of the sun to the six smaller bodies. Gravitational resonance between this Planet Nine and the rest keep everything in order. The planet’s high, elongated orbit keeps it at least 200 times further away from the sun than Earth, and it would take between 10,000 and 20,000 Earth years just to complete a single orbit.”

Brown and Batygin claim various similarities in the orbits of the trans-Neptunian objects.  But they don’t stress what initially sparked the idea that “Planet Nine” might be influencing them.

Scientific American and The Argument of Perihelion

Luckily, by the time I saw the 23rd January New Scientist, I’d already read The Search for Planet X [paywall again, sorry] cover story in the February 2016 (who says time travel is impossible?) issue of Scientific American, so I knew that – at least prior to the Brown and Batygin paper – what was considered most significant about the trans-Neptunian objects was that they all had similar arguments of perihelion (AOPs), specifically around 0˚.  That is, they cross the plane of the planets roughly at the same time as they are closest to the Sun (perihelion).  The 8 (sorry, Pluto) planets orbit roughly in a similar plane; these more distant objects are somewhat more inclined to that plane.

Scientific American reports the findings by two groups of researchers, citing a paper by each.  One is a letter to Nature, titled A Sedna-like body with a perihelion of 80 astronomical units, by Chadwick Trujillo and Scott Sheppard [serious paywall, sorry], which announced the discovery of 2012 VP113 and arguably started the whole Planet X/9/Nine furore.  They quote Sheppard: “Normally, you would expect the arguments of perihelion to have been randomized over the life of the solar system.”

To cut to the chase, I think that is a suspect assumption.  I think there may be reasons for AOPs of bodies in inclined orbits to tend towards 0˚, exactly as observed.

The Scientific Papers

The fact that the argument of perihelion is key to the “evidence” for Planet X is clear from the three peer-reviewed papers mentioned so far.

Trujillo and Sheppard [paywall, still] say that:

“By numerically simulating the known mass in the solar system on the inner Oort cloud objects, we confirmed that [they] should have random ω [i.e. AOP]… This suggests that a massive outer Solar System perturber may exist and [sic, meaning “which”, perhaps] restricts ω for the inner Oort cloud objects.”

The Abstract of the other paper referenced by Scientific American, Extreme trans-Neptunian objects and the Kozai mechanism: signalling the presence of the trans-Plutonian planets, by C and R de la Fuente Marcos, begins:

“The existence of an outer planet beyond Pluto has been a matter of debate for decades and the recent discovery of 2012 VP113 has just revived the interest for this controversial topic. This Sedna-like object has the most distant perihelion of any known minor planet and the value of its argument of perihelion is close to 0 degrees. This property appears to be shared by almost all known asteroids with semimajor axis greater than 150 au and perihelion greater than 30 au (the extreme trans-Neptunian objects or ETNOs), and this fact has been interpreted as evidence for the existence of a super-Earth at 250 au.”

And the recent paper by Konstantin Batygin and Michael E Brown, Evidence for a Distant Giant Planet in the Solar System, starts:

Recent analyses have shown that distant orbits within the scattered disk population of the Kuiper Belt exhibit an unexpected clustering in their respective arguments of perihelion. While several hypotheses have been put forward to explain this alignment, to date, a theoretical model that can successfully account for the observations remains elusive.

So, whilst Batygin and Brown claim other similarities in the orbits of the trans-Neptunian objects, the key peculiarity is the alignment of AOPs around 0˚.

Is There a Simpler Explanation for ~0˚ AOPs?

Let’s consider first why the planets orbit in approximately the same plane, and why the Galaxy is also fairly flat.  The key is the conservation of angular momentum.  The overall rotation within a system about its centre of gravity must be conserved.  Furthermore, this rotation must be in a single plane.  Any orbits above and below that plane will eventually cancel each other out, through collisions (as in Saturn’s rings) and/or gravitational interactions (as when an elliptical galaxy gradually becomes a spiral galaxy).  Here’s an entertaining explanation of what happens.

This process is still in progress for the trans-Neptunian objects, I suggest, since they are inclined by up to around 30˚ – Sedna’s inclination is 11.9˚ for example – which is much more than the planets, which are all inclined within a few degrees of the plane of the Solar System.  What’s happening is that the TNOs are all being pulled constantly towards the plane of the Solar System, as I’ve tried to show in this schematic:

160205 Planet X slash 9

Now, here comes the key point: because the mass of the Solar System is spread out, albeit only by a small amount, because there are planets and not just a Sun, the gravitational pull on each TNO is greater when it is nearer the Sun (closer to perihelion) than when it is further away. There’s more of a tendency for the TNO (or any eccentrically orbiting body) to gravitate towards the plane of the system when it’s nearer perihelion.

This is true, I believe, even after allowing for Kepler’s 2nd Law, i.e. that the TNO spends less time closer to the Sun.  Kepler’s 2nd Law implies the time an orbiting body spends in a given part of its orbit (per unit angle swept) is proportional to the square of its distance from the centre of gravity of the system, which you’d think might cancel out the inverse square law of gravity.  But the mass of the Solar System is not all at the centre of gravity.  The nearest approach of Neptune to Sedna, for example, when the latter is at perihelion is around 46AU (astronomical units, the radius of Earth’s orbit) but over 900AU when Sedna is at aphelion.
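To put rough numbers on that, here is a back-of-the-envelope sketch in Python.  It uses approximate figures for Sedna (perihelion ~76AU, aphelion ~936AU), treats Neptune’s orbit as a 30AU circle and simply weights an inverse-square pull from the nearest point of that circle by the r² dwell time from Kepler’s 2nd Law, ignoring the detailed geometry of the component towards the plane:

```python
import numpy as np

# Assumed round figures for Sedna: perihelion ~76 AU, aphelion ~936 AU,
# with Neptune's orbit taken as a circle of radius 30 AU.
a_neptune = 30.0
for label, r in [("perihelion", 76.0), ("aphelion", 936.0)]:
    d_nearest = r - a_neptune          # closest approach to Neptune's orbit
    pull = 1.0 / d_nearest**2          # inverse-square attraction (arbitrary units)
    dwell = r**2                       # Kepler's 2nd law: time per unit angle swept ~ r^2
    print(f"{label:10s}: distance {d_nearest:6.0f} AU, pull x dwell time = {pull * dwell:.2f}")
# The perihelion figure comes out a few times larger, i.e. the time-weighted
# tug from the planetary region does not cancel out over the orbit.
```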

The most stable orbit for a TNO is therefore when it crosses the plane of the Solar System at perihelion, that is, when its argument of perihelion (AOP) is 0˚.  Over many millions of years the AOPs of the orbits of Sedna and co. have therefore tended to approach 0˚.

I suggest it is not necessary to invoke a “Super-Earth” to explain the peculiarly aligned arguments of perihelion of the trans-Neptunian objects.

January 23, 2016

Greater Interannual Seasonal Temperature Variability in a Warming World?

Filed under: Agriculture, Global climate trends, Global warming, Science, UK climate trends — Tim Joslin @ 5:42 pm

You attempt to use the correct scientific jargon and then realise that sometimes the English language is insufficiently precise.  What I mean by the title is to ask the important question as to whether, as global warming proceeds, we will see a greater variation between summers, winters, springs and autumns from year to year.  Or not.

My previous two posts used Central England Temperature (CET) record data to show how exceptional the temperatures were in December in 2010 (cold) and 2015 (warm) and highlighted two other recent exceptional months: March 2013 (cold) and April 2011 (warm).  I speculated that perhaps, relative to current mean temperatures for a given period – a calendar month in these examples – both hot and cold extreme weather conditions are becoming more extreme.

What prompted today’s follow-up post was an update from the venerable James Hansen, Global Temperature in 2015, to which a link appeared in my Inbox a few days ago.  This short paper documents how 2015 was by a fair margin globally the warmest year on record.  But it also includes a very interesting figure which seems to show increasingly greater variability in Northern Hemisphere summers and winters:

160120 More variable summer and winter temps

I’ve added a bit of annotation to emphasise that the bell curves for both summer and winter have widened and flattened. That is, not only have the mean summer and winter temperatures increased, so has the variance or standard deviation, to use the technical terms.

If true, this would be very concerning. If you’re trying to grow food and stuff, for example, it means you have to worry about a greater range of possible conditions from year to year than before, not just that it’s getting warmer.
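To see how much difference a wider distribution makes, over and above a simple shift in the mean, here is a toy calculation with illustrative numbers only: seasonal temperatures assumed normally distributed, the mean warmed by 0.5 of the old standard deviation and, in one case, the standard deviation widened by 30%:

```python
from scipy.stats import norm

# Baseline climate: temperatures ~ N(0, 1) in units of the old standard deviation.
# Define an "extreme" season as one beyond the old 3-sigma point.
threshold = 3.0
baseline    = norm(loc=0.0, scale=1.0)
shift_only  = norm(loc=0.5, scale=1.0)    # warming shifts the mean by 0.5 sigma
shift_widen = norm(loc=0.5, scale=1.3)    # ...and also widens the distribution by 30%

for name, dist in [("baseline", baseline),
                   ("mean shift only", shift_only),
                   ("mean shift + wider spread", shift_widen)]:
    print(f"{name:28s} P(T > 3 old sigma) = {dist.sf(threshold):.5f}")
# The widened distribution produces several times more beyond-3-sigma seasons
# than the pure mean shift does.
```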

I was about to suggest it might be time to panic. But then it occurred to me that there must surely have been some debate about this issue. And sure enough Google reveals that Hansen has written about variability before, and more explicitly, such as in a paper in 2012, titled Perception of climate change, which is free to download.  Hansen et al note “greater temperature variability in 1981-2010” compared to 1951-80.

Trouble is, Hansen et al, 2012 was vigorously rebutted by a couple of Harvard boffs.  Andrew Rhines and Peter Huybers wrote to the Proceedings of the National Academy of Sciences, where Hansen et al had published their paper, claiming that Frequent summer temperature extremes reflect changes in the mean, not the variance [my stress].  They attributed Hansen’s flattening bell curves to various statistical effects and asserted that mean summer and winter temperatures had increased, but not the standard deviation, and therefore not the probability of extremes relative to the mean.

That left me pretty flummoxed, especially when I found that, in Nature, another bunch of eminent climate scientists also claimed, No increase in global temperature variability despite changing regional patterns (Huntingford et al, Nature 500, p327–330, 15 August 2013).

Just so we’re clear, what the guys are saying is that as global warming proceeds – not even when we reach some kind of steady state – temperatures will just on average be shifted up by a certain amount.

I have to say I find this very difficult to believe, and indeed incompatible with the fact that some parts of the world (continental interiors, say) warm faster than others (deep oceans) and the observation that the wind blows in different directions at different times!

Furthermore we’ve just seen, between Decembers 2010 and 2015 in the  CET record, a much greater spread of temperatures than in any comparable period (actually in any period, period, but we’re concerned here with variability over a few years – less than a decade or two, say – when the climate has had little time to change) in the previous 350 years.  I take the liberty of reproducing the graph from my previous post:

160114 Dec 2015 related CET analysis slide 2a

December 2015 was 10C warmer than December 2010, 2C more than the range between December temperatures in any other era.

And I also recollect figures like this one, showing the freakishness of summer 2003 in Switzerland, where, like the UK, there is a long history of weather records:

160120 More variable summer and winter temps slide 2

This appears on the Climate Communication site, which shies away from any mention of increased variability.  But the original Nature paper in which it appeared, Schär et al, 2004 is very clear, and is even titled The role of increasing temperature variability in European summer heatwaves. The synopsis (which is all I can access – pay-wall) notes that:

Instrumental observations and reconstructions of global and hemispheric temperature evolution reveal a pronounced warming during the past approx 150 years. One expression of this warming is the observed increase in the occurrence of heatwaves. Conceptually this increase is understood as a shift of the statistical distribution towards warmer temperatures, while changes in the width of the distribution are often considered small. Here we show that this framework fails to explain the record-breaking central European summer temperatures in 2003, although it is consistent with observations from previous years. We find that an event like that of summer 2003 is statistically extremely unlikely, even when the observed warming is taken into account. We propose that a regime with an increased variability of temperatures (in addition to increases in mean temperature) may be able to account for summer 2003. To test this proposal, we simulate possible future European climate with a regional climate model in a scenario with increased atmospheric greenhouse-gas concentrations, and find that temperature variability increases by up to 100%, with maximum changes in central and eastern Europe. [My stress].

Hmm. Contradictory findings, scientific debate.

My money’s on an increase in variability. I’ll keep an eye on that CET data.

January 19, 2016

Two More Extreme UK Months: March 2013 and April 2011

Filed under: Effects, Global warming, Science, Sea ice, Snow cover, UK climate trends — Tim Joslin @ 7:17 pm

My previous post showed how December 2015 was not only the mildest on record in the Central England Temperature (CET) record, but also the mildest compared to recent and succeeding years, that is, compared to the 21 year running mean December temperature (though I had to extrapolate the 21-year running mean forward).

December 2010, though not quite the coldest UK December in the CET data, was the coldest compared to the running 21 year mean.

I speculated that global warming might lead to a greater range of temperatures, at least until the planet reaches thermal equilibrium, which could be some time – thousands of years, maybe.  The atmosphere over land responds rapidly to greenhouse gases. But there is a lag before the oceans warm because of the thermal inertia of all that water. One might even speculate that the seas will never warm as much as the land, but we’ll discuss that another time. So in UK summers we might expect the hottest months – when a continental influence dominates – to be much hotter than before, whereas the more usual changeable months – when maritime influences come into play – to be not much hotter than before.

The story in winter is somewhat different.  Even in a warmer world, frozen water (and land) will radiate away heat in winter until it reaches nearly as cold a temperature as before, because what eventually stops it radiating heat away is the insulation provided by ice, not the atmosphere.  So the coldest winter months – when UK weather is influenced by the Arctic and the Continent – will be nearly as cold as before global warming.   This will also slow the increase in monthly mean temperatures.  Months dominated by tropical influences on the UK will therefore be warmer, compared to the mean, than before global warming.

If this hypothesis is correct, then it would obviously affect other months as well as December.  So I looked for other recent extreme months in the CET record.  It turns out that the other recent extreme months have been in late winter or early spring.

Regular readers will recall that I wrote about March 2013, the coldest in more than a century, at the time, and noted that the month was colder than any previous March compared to the running mean.  I don’t know why I didn’t produce a graph back then, but here it is:

160118 Extreme months in CET slide 1b

Just as December 2010 was not quite the coldest December on record, March 2013 was not the coldest March, just the coldest since 1892, as I reported at the time.  It was, though, the coldest in the CET record compared to the 21-year running mean, 3.89C below, compared to 3.85C in 1785.  And because I’ve had to extrapolate, the difference will increase if the average for Marches 2016-2023 (the ones I’ve had to assume) is greater than the current 21-year mean (for 1995-2015), which is more likely than not, since the planet is warming, on average.

We’re talking about freak years, so it’s surprising to find yet another one in the 2010s.  April 2011 was, by some margin, the warmest April on record, and the warmest compared to the 21-year running mean:

160119 Extreme months in CET slide 2

The mean temperature in April 2011 was 11.8C.  The next highest was only 4 years earlier, 11.2C in 2007.  The record for the previous 348 years of CET data was 142 years earlier, in 1865, at 10.6C.

On our measure of freakishness – deviation from the 21-year running mean – April 2011, at 2.82C, was comfortably more freakish than 1893 (2.58C), which was in a period of cooler Aprils than the warmest April before the global warming era, 1865.  The difference between 2.82C and 2.58C is unlikely to be eroded entirely when the data for 2016-2021 is included in place of my extrapolation.  It’s possible, but for that to happen April temperatures for the next 6 years would need to average around 10C to sufficiently affect the running mean – the warmth in the Aprils in the period including 2007 and 2011 would need to be repeated.

So, of the 12 months of the year, the most freakishly cold for two of them, December and March, have occurred in the last 6 years, and so have the most freakishly warm for two of them, December and April. The CET record is over 350 years long, so we’d expect a most freakishly warm or cold month to have occurred approximately once every 15 years (360 divided by 24 records).  In 6 years we’d have expected a less than 50% chance of a single freakishly extreme monthly temperature.

According to the CET record, we’ve had more than 8 times the number of freakishly extreme cold or warm months in the last 6 years than would have been expected had they occurred randomly since 1659.
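For anyone who wants to check the arithmetic, here is a rough sketch.  It treats the 24 record-holders as falling independently and at random through the 357-year record, which ignores the extrapolated running means, but gives much the same answer:

```python
from math import exp, factorial

record_types = 24        # warmest and coldest record-holder for each of 12 months
years_of_cet = 357       # roughly 1659-2015
window = 6               # the recent period considered

expected = record_types * window / years_of_cet   # if records fell at random
observed = 4                                      # Dec 2010, Apr 2011, Mar 2013, Dec 2015
print(f"expected in {window} years: {expected:.2f}, observed: {observed}")
print(f"ratio observed/expected: {observed / expected:.1f}")   # consistent with "more than 8 times"

# Chance of seeing 4 or more under a Poisson model with that expectation:
p_ge_4 = 1 - sum(exp(-expected) * expected**k / factorial(k) for k in range(4))
print(f"P(>= 4 records in the window) ~ {p_ge_4:.5f}")
```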

And I bet we get more freakishly extreme cold or warm months over the next 6 years, too.

 

January 14, 2016

Just How Exceptionally Mild Was December 2015 in the UK?

Filed under: Global warming, Science, Sea ice, UK climate trends — Tim Joslin @ 5:24 pm

“Very” is the answer, based on the 350+ year long Central England Temperature (CET) record.  Here’s a graph of all the CET December temperatures since 1659:

160114 Dec 2015 related CET analysis slide 1
As is readily apparent from the graph, the mean temperature of 9.7C in December 2015 was much higher than in any previous year.  In fact, only twice before had the average exceeded 8C.  Decembers 1934 and 1974 were previously tied as the mildest on 8.1C.

But how much was the recent mild weather due to global warming and how much to normal variability? Apart from anything else the mild spell has to coincide with a calendar month to show up in this particular dataset.  And it so happened that the weather turned cooler even as the champagne corks were in the air to celebrate the end of 2015.

To help untangle trends from freak events, I’ve included some running means on the graph above.  The green line shows the mean December temperature over 5 year periods.  For example, thanks in large part to December 2015, the 5 Decembers from 2011 to 2015 are the mildest run of 5 successive Decembers, though other periods have come close.

The red and black lines show 11 and 21 year running means, respectively.  The black line therefore represents the long-term trend of December temperatures.  These are close to the highest they’ve ever been, though in some periods, such as around the start of the 19th century, the average December has been as much as 2C colder than it is now.  Perhaps some exceptionally mild Decembers back then – such as 1806 – were as unusual for the period as December 2015 was compared to today’s Decembers.

I therefore had the idea to plot the deviation of each December from the 21 year mean centred on that year, represented by the black line on the graph above.  If you like, I’ve simply subtracted the black line from the blue line.

A health warning is necessary.  I’ve had to extrapolate the 21 year mean, since we don’t yet know what weather the next 10 Decembers (2016 to 2025) will bring.  We’ll have to wait until 2025 to see precisely how unusual December 2015 will prove to have been.  In the meantime, I’ve set the mean temperature for 2016 through 2025 to the last 21 year mean (i.e. the one for the years 1995 through 2015).
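For anyone who wants to reproduce the exercise, here is a sketch of the calculation in Python, using a made-up stand-in series rather than the real December CET data, which you would load from the Met Office file instead:

```python
import numpy as np
import pandas as pd

# Stand-in for the December CET series (degrees C), one value per year 1659-2015.
rng = np.random.default_rng(0)
years = np.arange(1659, 2016)
dec = pd.Series(4.5 + 0.002 * (years - 1659) + rng.normal(0.0, 1.5, years.size),
                index=years)

# Extrapolate: assume the next 10 Decembers equal the last available 21-year mean,
# so that a centred 21-year window exists for every year up to 2015.
last_mean = dec.iloc[-21:].mean()
extended = pd.concat([dec, pd.Series(last_mean, index=np.arange(2016, 2026))])

running21 = extended.rolling(window=21, center=True).mean()
anomaly = (dec - running21.loc[dec.index]).dropna()

print(anomaly.sort_values(ascending=False).head())   # most freakishly mild Decembers
print(anomaly.sort_values().head())                  # most freakishly cold ones
```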

With that proviso, here’s what we get:

160114 Dec 2015 related CET analysis slide 2a
The green line now shows the difference between the mean December temperature for a given year and the mean December temperature for the 21 years including the 10 before and the 10 after the given year.

We can see that December 2015 was, at 4.91C above the 21-year mean, milder relative to contemporary Decembers than any other December in the record, with the proviso that I’ve not been able to take Decembers after 2015 into account.

The next most freakish December was the aforementioned 1806 which was 3.86C warmer than the mean of Decembers 1796 through 1816.

What’s going on? Is it just weather – something to do with the ongoing El Nino, perhaps – or is something else afoot?

One hypothesis might be that, with the climate out of equilibrium due to global warming, greater variability is possible than before. Our weather in 2015 may have been driven by a heat buildup somewhere (presumably in the ocean) due to global warming. On average this perhaps doesn’t happen – we may suppose our weather to be often determined by regions of the planet where the temperature hasn’t changed much, at least at the relevant time of year. Specifically, the Greenland ice-sheet hasn’t had time to melt yet.

It won’t have escaped the notice of my eagle-eyed readers that the graph above also shows 2010 to be the most freakishly cold December in the entire CET record.

Perhaps, until the ice-sheets melt, the deep oceans warm and the planet reaches thermal equilibrium, we’ll find that when it’s cold it’s just as cold as it used to be, but when it’s warm it’s a lot warmer than it used to be.   Just a thought.

It might be worth mentioning a couple of other, not necessarily exclusive, possibilities:

  • Maybe the situation will continue even when the planet is in thermal equilibrium.  Maybe, for example, assuming there is some limit to global warming and the Arctic seas still freeze in winter, we’ll still get cold weather in winter just or nearly as cold as it ever was, but we’ll get much warmer weather when there’s a tropical influence.
  • It could be that weather patterns are affected by global warming, especially through the later freezing of Arctic ice.

Or December 2015 could just have been a freak weather event.

September 16, 2015

Will Osborne’s UK National Living Wage Really Cost 60,000 Jobs?

Filed under: Economics, Inequality, Minimum wage, Unemployment — Tim Joslin @ 7:25 pm

It’s pretty dismal if you’re left-leaning in the UK right now.  Not only did Labour lose the election catastrophically and – adding to the shock – much more badly than implied by the polls, they’ve now gone nuts and elected a leader who can’t win and who, even if he could, advocates policies that belong in the 1970s.  Meanwhile Osborne is implementing a policy Labour should have been pushing during the election campaign, namely what is in effect a higher minimum wage, his so-called National Living Wage (NLW) for over 25s.  Of course, Osborne’s overall package is disastrous for many of the poorest households who will be worse off even with the NLW because of simultaneous cuts to tax credits.

If you’re following the debate around the NLW – for example as expertly hosted by the Resolution Foundation – it’s clear that the Big Question is how much effect the NLW (and increased minimum wages in general) is likely to have on (un)employment.  Now, based on logical argument (that being my favoured modus operandi), and, of course, because my philosophy is to question everything, I am highly sceptical of the mainstream line of reasoning that labour behaves like paper-clips.  Put up the price of paper-clips and you’ll sell fewer; put up the price of labour and unemployment will rise is the gist of it.  But this ignores the fact that increasing wages itself creates demand.  More on this later.

Much as I believe in the power of reasoned argument, I nevertheless recognise that it’s a good idea to first look at the strengths and weaknesses of the opposing position.  In this post I therefore want to focus on the meme that Osborne’s NLW will cost 60,000 jobs.  How well-founded is this estimate?  You’ll see it quoted frequently, for example, by the Resolution Foundation and on the Institute for Fiscal Studies’ (IFS) website and no doubt in mainstream media reports.  The original source is the Office for Budget Responsibility.  As far as I can tell the 60,000 figure first appeared in a report, Summer budget 2015 policy measures (pdf) which was issued around the time of Osborne’s “emergency” budget in July (the “emergency” being that the Tories had won a majority), when he bombshelled the NLW announcement.

So, I asked myself, being keen to get right to the bottom of things, where did the OBR boffs get their 60,000 estimate from?  Well, what they did was make a couple of assumptions (Annex B, para 17 on p.204), the key one being:

“…an elasticity of demand for labour of -0.4… This means total hours worked fall by 0.4 per cent for every 1.0 per cent increase in wages;”

They stuck this into their computer, together with the assumption that “half the effect on total hours will come through employment and half through average hours” and out popped the 60,000 figure.

But where does this figure of -0.4 come from?  They explain in Annex B.20:

“The elasticity of demand we have assumed lies within a relatively wide spectrum of empirical estimates, including the low-to-high range of -0.15 to -0.75 in Hamermesh (1991). This is a key assumption, with the overall effects moving linearly with it.”

The Hamermesh reference is given in footnote 3 on p.205, together with another paper:

“Hamermesh (1991), “Labour demand: What do we know? What don’t we know?”. Loeffler, Peichl, Siegloch (2014), “The own-wage elasticity of labor demand: A meta-regression analysis””, present a median estimate of -0.39, within a range of -0.072 to -0.446.” (my emphasis)

Evidently Hamermesh is the go to guy for the elasticity of demand for “labor”.  So I thought I’d have a look at how Hamermesh’s figure was arrived at.

I hope you’ve read this far, because this is where matters start to become a little curious.

Both papers referred to in footnote 3 are available online.  Here’s what Hamermesh actually wrote (it’s a screen print since the document was evidently scanned in to produce the pdf Google found for me):

150916 National Living Wage

So what our guru is actually saying is that although the demand elasticity figure is between -0.15 and -0.75, as assumed by the OBR, his best guess – underlined, when that was not a trivial matter, necessitating sophisticated typewriter operation – was actually -0.3.

So why didn’t the OBR use the figure of -0.3?

Perhaps the answer is to do with the -0.39 they quote from the Loeffler, Peichl and Siegloch paper (pdf).  But this is what those guys actually wrote:

“Overall, our results suggest that there is not one unique value for the own-wage elasticity of labor demand; rather, heterogeneity matters with respect to several dimensions. Our preferred estimate in terms of specification – the long-run, constant-output elasticity obtained from a structural-form model using administrative panel data at the firm level for the latest mean year of observation, with mean characteristics on all other variables and corrected for publication selection bias – is -0.246, bracketed by the interval [-0.072;-0.446]. Compared to this interval, we note that (i) many estimates of the own-wage elasticity of labor demand given in the literature are upwardly inflated (with a mean value larger than -0.5 in absolute terms) and (ii) our preferred estimate is close to the best guess provided by Hamermesh (1993), albeit with our confidence interval for values of the elasticity being smaller.” (my emphasis)

Yep, the Germanically named guys from Germany came up with a figure of -0.246, not the -0.39 in the OBR’s footnote 3.  The OBR’s -0.39 is a rogue figure.  It must be some kind of typographical error, since they correctly quote the possible range (-0.072 to -0.446) for the demand elasticity.  Bizarre, frankly.

It’s even more mysterious when you consider that the OBR would surely have used the elasticity of demand for labour previously.

Based on the sources they refer to, it seems the OBR should have plugged -0.3 at most into their model, not -0.4.  This would have given a significantly lower estimate of the increase in unemployment attributable to the introduction of the NLW, that is, roughly 45,000 rather than 60,000.
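Since the OBR themselves note that the overall effect moves linearly with the assumed elasticity, the rescaling is a one-liner.  A sketch, taking the 60,000 figure and the quoted elasticities at face value:

```python
# The OBR note that the overall effect moves linearly with the assumed elasticity,
# so the headline jobs figure simply rescales:
obr_jobs, obr_elasticity = 60_000, -0.4

for label, elasticity in [("Hamermesh best guess", -0.3),
                          ("Loeffler et al. preferred", -0.246)]:
    jobs = obr_jobs * elasticity / obr_elasticity
    print(f"{label:26s} (elasticity {elasticity:+.3f}): ~{jobs:,.0f} jobs")
```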

Why does this matter?  It matters because the idea that a higher minimum wage will increase unemployment is one of the main arguments against it, frequently cited by those opposed to fair wages and giving pause to those in favour.  Here, for example, is what Allister Heath wrote recently in a piece entitled How the new national living wage will kill jobs in the Telegraph:

“…it is clear that George Osborne’s massive hike in the minimum wage will exclude a significant number of people from the world of work. There is a view that this might be a worthwhile trade-off: if millions are paid more, does it really matter if a few can’t find gainful employment any longer? Again, I disagree: there is nothing more cruel than freezing out the young, the unskilled, the inexperienced or the aspiring migrant from the chance of employment.

Being permanently jobless is a terrible, heart-wrenching state; the Government should never do anything that encourages such a catastrophe.”

Clearly, Heath’s argument (which I don’t in any case agree with) carries more weight the greater the effect of a higher minimum wage on unemployment.  But getting the numbers wrong isn’t the only problem with the OBR’s use of the demand elasticity of labour, as I’ll try to explain in my next post.

July 17, 2013

Staggering Piccadilly Line Trains

Filed under: TfL, Transport, Tube — Tim Joslin @ 11:03 am

Does London Underground appreciate that the vast majority of tube passengers (or “customers”, “clients”, “johns” or whatever might be the latest business-speak term for the long-suffering punters) simply want to be delivered to their destination as quickly as possible (and in reasonable comfort, but let’s put that issue to one side for the moment)?

Why, then, would the operators deliberately slow tube trains down?

I’ve been pondering this question on many an occasion over the last year when I’ve had the dubious pleasure of making use of the Piccadilly Line to travel into central London from Acton Town (previously I took the Central Line from Ealing Broadway, which has a different set of problems).

Take last Saturday. I was concerned how long it would take me to get to Golders Green due to the closure of the Northern Line, which I understand was to test new signalling. As it turned out I needn’t have worried, since a frequent replacement bus service whisked me there from Finchley Road (though anyone who didn’t twig they were best off catching the Jubilee Line there might well have arrived late at their destination). Still, credit where credit’s due.

Anyway, when I left home last Saturday morning I naturally had no foreknowledge of the impending outbreak of efficiency at Finchley Road, so I was keen to get there as soon as possible. When I saw a Piccadilly Line train on platform 4 at Acton Town, I jumped on immediately, even though there were no free seats and I had to stand. When traveling into London, Acton Town is the first station on the Piccadilly Line after two branches come together. Well, they partially come together at Acton Town, since there are two platforms serving Piccadilly Line trains in each direction and two sets of track each way, it seems for much of the distance towards Hammersmith. Sometimes there are trains at both platforms and occasionally there’s even an announcement telling you the other train will leave first.

But on Saturday there was no Piccadilly Line train waiting at platform 3.

Nevertheless, to my considerable annoyance, the full train I was on was OVERTAKEN by TWO other Piccadilly Line trains as it trundled towards Hammersmith. Since the Piccadilly Line operators like to maintain the gap between trains this happenstance added maybe as much as 10 minutes onto my 1/2 hour journey to Green Park to pick up the Jubilee Line.

And the trains that overtook us were fairly empty. That’s right: a full train with hundreds of passengers was held up to allow relatively empty trains, perhaps carrying only scores of passengers, to pass.

What’s going on? The only conceivable explanation is that the Piccadilly Line operators are not concerned directly with the passengers at all. No, they’re simply focused on their timetable. Operational processes, I suspect, take priority over customer service. Now, a glance at the Piccadilly Line reveals there is no branch in the east, only in the west:

[Image: Piccadilly Line map (130717)]

It’s not outwith the bounds of possibility that the Piccadilly Line timetable ensures trains join the merged section of track based on where they’ve come from. That is, it could be that trains from Uxbridge are held up to allow Heathrow trains to overtake. I know: this is stupid. The trains appear to be identical. It’s not as if there’ll be a knock-on effect if trains arrive at the other end of the line in the wrong order. In fact, the trains being identical is another problem with the Piccadilly Line: the Heathrow trains need more luggage storage space. More about that issue another time perhaps.

Another possibility is that the trains are being ordered according to their destination. Not all trains go to Cockfosters. Some terminate before then. But holding up trains to ensure that (say) every other train goes to Cockfosters is almost as stupid as ordering the trains based on where they’ve come from. The reason not every train goes to Cockfosters is presumably that they are fairly empty by the time they get there. So delaying trains full of hundreds of people at Acton for the convenience of a few passengers at the other end of the line would make no sense. Which isn’t to say that’s not why it’s being done.

As I mentioned earlier, Piccadilly Line eastbound train ordering is usually achieved by holding trains at Acton Town station. Infuriatingly, you often can’t tell whether the train at platform 3 will leave first or the one at platform 4. Simply providing this information would save hundreds of person-hours of tube travel time every day. But maybe (understandably) the operators don’t want hundreds of people rushing from one train to the other – though not everyone would necessarily move, since there’s the issue of access to those precious seats to consider.

But why hold up any trains at all? Holding up trains ALWAYS delays many passengers whereas managing train ordering – or, as we’ll see, the intervals between them – generally speeds up very few journeys.

Here’s my advice to Transport for London: stop trying to be clever. Start trains according to a timetable, but after that just run them as fast as possible. Hold them at stations only long enough for passengers to get on and off.

The calculation the operators should be carrying out, but I very much doubt they are, is whether the amount of time they are costing the passengers on held trains is less than the time saved by passengers who would otherwise miss the held train.
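To make the comparison concrete, here is a minimal sketch of the sort of sum I have in mind. All the numbers – passengers on board, the length of the hold, the rate at which people arrive at the platform and the interval to the next train – are made-up illustrations, not measurements of the Piccadilly Line.

```python
# Back-of-the-envelope comparison: passenger-time lost by holding a train
# versus passenger-time saved by those who catch it only because it was held.
# All figures are illustrative assumptions, not TfL data.

def holding_cost_minutes(passengers_on_board: int, hold_minutes: float) -> float:
    """Total passenger-minutes lost by everyone already on the held train."""
    return passengers_on_board * hold_minutes

def holding_benefit_minutes(arrival_rate_per_min: float,
                            hold_minutes: float,
                            interval_to_next_train_min: float) -> float:
    """Passenger-minutes saved by people who reach the platform during the hold
    and would otherwise have had to wait for the next train."""
    extra_boarders = arrival_rate_per_min * hold_minutes
    return extra_boarders * interval_to_next_train_min

# A busy morning train held for 3 minutes (assumed numbers):
cost = holding_cost_minutes(passengers_on_board=600, hold_minutes=3)
benefit = holding_benefit_minutes(arrival_rate_per_min=5,
                                  hold_minutes=3,
                                  interval_to_next_train_min=4)

print(f"Cost to held passengers:  {cost:.0f} passenger-minutes")    # 1800
print(f"Benefit to late arrivals: {benefit:.0f} passenger-minutes")  # 60
print("Holding pays off" if benefit > cost else "Holding does not pay off")
```

On these (admittedly loaded) assumptions the hold costs thirty times more passenger time than it saves; the train would have to be nearly empty, or the arrival rate at later platforms very high, before the sum went the other way.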

Perhaps this becomes clearer if we consider what happens when trains are held to even the intervals between Piccadilly Line trains in Central London. There’s only one tunnel and one set of platforms there, so there’s no issue of trains overtaking each other. Yet frequently – maybe on as many as half of all journeys – you sit in a station and hear that “we’re being held to regulate the service”. But consider the effect of this procedure. ALL the passengers on a train are being held up so that a few passengers further down the line find the train hasn’t already left when they reach the platform.

Such train staggering only makes sense when trains are fairly empty and many passengers are arriving at the platform. But the reverse is the case for morning journeys into central London and evening journeys out. When I’m on staggered Piccadilly Line trains it’s almost always the case that many more passengers are being delayed than are being convenienced.

If TfL is not swayed by the simple numeric argument, perhaps they should consider the business argument that the more often people have a rapid journey, the more they will be inclined to use the Underground rather than alternatives, such as taxis.

My advice to TfL’s London Underground operations team is to stop dicking around trying to timetable Piccadilly Line trains along the whole line. Release them at regular intervals and then get them to their destination as quickly as possible. Simples.

April 9, 2013

Could 2013 Still be the Warmest on Record in the CET?

Filed under: Global warming, Science, UK climate trends — Tim Joslin @ 5:05 pm

Blossom is appearing and buds are opening. The front garden magnolias of West London are coming into flower. The weather is turning milder in the UK. Spring is here at last.

So perhaps I’ll be coming to the end of posts on the subject of unusual weather for a while. Until there’s some more!

We’ve seen that March 2013 was, with 1892, the equal coldest in the CET since 1883, which is particularly significant given the generally warmer Marches in recent decades.

The first quarter of 2013 was the coldest since 1987, and the cold has now continued into April. This is where we now are, according to the Met Office:

[Image: Met Office weather summary screen-grab (130409, slide 1)]

So far this year it’s been 1.44C colder than the average over 1961-90, which is the basis for CET “anomalies” here.

The rest of the year would have to be 2.37C warmer than usual, on average, for 2013 to be the warmest in the record.

Is it possible for 2013 to still be the warmest year in the CET? I’m saying no – or, to be more measured, it’s extremely unlikely.

Last year, it was July 13th before I felt able to make a similar statement.

But now I’ve realised that I can simply plot a graph of the later parts of previous years and compare them to the required mean temperature in 2013.
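A sketch of how such a series might be computed, assuming the Met Office’s monthly CET data has been exported to a CSV file with a year column followed by twelve monthly columns – the filename and layout here are my assumptions for illustration:

```python
# Sketch: mean CET of the last 9 months of each year, from a CSV of monthly
# values. Filename and column layout are assumptions; complete years only.
import csv

MONTHS = ["jan", "feb", "mar", "apr", "may", "jun",
          "jul", "aug", "sep", "oct", "nov", "dec"]

def last_n_months_means(path, n=9):
    """Return {year: mean of the last n monthly values} for each year."""
    means = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # expected columns: year, jan, ..., dec
            values = [float(row[m]) for m in MONTHS[-n:]]
            means[int(row["year"])] = sum(values) / n
    return means

means = last_n_months_means("cet_monthly.csv", n=9)
year, value = max(means.items(), key=lambda kv: kv[1])
print(f"Warmest last 9 months so far: {year} at {value:.2f}C")
```

Each year’s value can then be plotted alongside the mean the last 9 months of 2013 would need to reach.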

Here’s the graph of mean CET for the last 9 months of the year:

[Image: Graph of mean CET for the last 9 months of each year (130409, slide 2)]

Perhaps the most notable feature is that the last 9 months of 2006, at 13C, were a whole 0.5C warmer than the last 9 months of the next warmest year, 1959, at 12.5C!

It’s easy enough to calculate that for 2013 to be the warmest year in the CET, the mean temperature for the last 9 months of the year would have to be 13.38C.
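For what it’s worth, that figure can be reproduced with a couple of lines of arithmetic. The two inputs – the record annual mean (about 10.8C, for 2006) and the mean CET for January to March 2013 (about 3.1C) – aren’t quoted above, so treat them as my assumptions; plugging them in gives a number consistent with the 13.38C just mentioned.

```python
# Mean the remaining 9 months must average for 2013 to match the CET record.
# Both inputs are my assumptions (approximate values, not quoted in the post).
RECORD_ANNUAL_MEAN_C = 10.82    # assumed: warmest year in the CET (2006)
JAN_TO_MAR_2013_MEAN_C = 3.13   # assumed: mean CET for Jan-Mar 2013

months_done = 3
months_left = 12 - months_done

required_remaining_mean = (12 * RECORD_ANNUAL_MEAN_C
                           - months_done * JAN_TO_MAR_2013_MEAN_C) / months_left
print(f"Last {months_left} months must average {required_remaining_mean:.2f}C")
# -> roughly 13.38C
```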

To be warmer than the warmest year in the CET, also 2006, the last 9 months of 2013 would need to be 0.38C warmer than the last 9 months of 2006. That’s a big ask.

But let’s look a little more carefully at 1959. The last 9 months of 1959 were about 1.4C warmer than the prevailing mean temperatures at the time, given by the 11 year (red line) and 21 year (black line) running means. The last 9 months of 2006 were “only” about 1.1 or 1.2C warmer than an average year at that time.

If the last 9 months of 2013 were similarly 1.4C warmer than the prevailing running means (obviously we can only determine the running means centred on 2013 with hindsight) then 2013 would not be far off being the warmest year in the CET.

No other year in the entire CET spikes above the average as much as 1959, so we have to suppose the last 9 months of that year were a “freak” – say a once-in-400-year event – and extremely unlikely to be repeated.

So on this basis it seems 2013 is extremely unlikely to be the warmest in the CET.

Now that we have a bit of data for April, we can also carry out a similar exercise for the last 8 months of the year.

The Met Office notes (see the screen-grab, above) that the first 8 days of April 2013 were on average 3C cooler than normal in the CET (“normal” with respect to the CET is always the 1961-90 average). If we call those 8 days a quarter of the month, the rest of the month needs to be 1C warmer than usual for April as a whole to be average. Let’s be conservative, though, and assume that happens.

It’s easy enough now to calculate that for 2013 to be the warmest year in the CET, the mean temperature for the last 8 months of the year would have to be 14.07C, assuming the April temperature ends up as the 1961-90 average.
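The same arithmetic extends to the 8-month case, folding an average April in with the first quarter. (The quarter-month approximation above also checks out: 0.25 × (−3C) + 0.75 × (+1C) = 0, i.e. an average April overall.) The April normal of roughly 7.9C, like the other inputs, is my assumption rather than a figure from the post.

```python
# Required mean for the last 8 months of 2013, assuming April comes in at the
# 1961-90 normal. All inputs are my assumptions (approximate values).
RECORD_ANNUAL_MEAN_C = 10.82    # assumed: warmest year in the CET (2006)
JAN_TO_MAR_2013_MEAN_C = 3.13   # assumed
APRIL_NORMAL_C = 7.9            # assumed 1961-90 April average

months_done_sum = 3 * JAN_TO_MAR_2013_MEAN_C + APRIL_NORMAL_C
required_last_8_months = (12 * RECORD_ANNUAL_MEAN_C - months_done_sum) / 8
print(f"Last 8 months must average {required_last_8_months:.2f}C")
# -> roughly 14.07C, consistent with the figure above
```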

On this basis, we can then compare the last 8 months of previous years in the CET with what’s required for this year to be the warmest on record:

[Image: Graph of mean CET for the last 8 months of each year (130409, slide 3)]

Here 2006 seems more exceptional, and 1959 not quite such an outlier. (April is no longer included: in 1959 the month was warm at 9.4C, whereas in 2006, at 8.6C, it was warmer than average but not unusually so.)

Clearly, the spike above the running means would have to be a lot higher than ever before for 2013 to be the warmest year in the CET. Those 8 cold days seem to have made all the difference to the likelihood of 2013 breaking the record.

That’s it for now – though if April is particularly cold this year, a comparison of March and April with those months in previous years will be in order. The plot-spoiler is that 1917 was the standout year in the 20th century for the two months combined.

April 8, 2013

CET End of Month Adjustments

Filed under: Global warming, Media, Science, Science and the media, UK climate trends — Tim Joslin @ 5:51 pm

When we have some exceptional weather I like to check out the Central England Temperature (CET) record for the month (or year) in question and make comparisons with historic data which I have imported into spreadsheets from the Met Office’s CET pages.

The CET record goes back to 1659, so the historical significance of an exceptionally cold or warm spell – a month, season, year or longer – can be judged over a reasonably long period. Long-term trends, such as the gradual, irregular warming that has occurred since the late 17th century, are, of course, also readily apparent. The Met Office bases press releases and suchlike on records for the UK as a whole which go back only to 1910.

The Met Office update the CET for the current month on a daily basis, which is very handy for seeing how things are going.

After watching the CET carefully during a few extreme months – December 2010 and March 2013 come to mind – I noticed that there seems to be a downwards adjustment at the end of the month. I speculated about the reasons for the apparent correction to the figures a week or so ago:

“…I’ve noticed the CET is sometimes adjusted downwards before the final figure for the month is published, a few days into the next month. I don’t know why this is. Maybe the data for more remote (and colder) weather-stations is slow to come in. Or maybe it’s to counter for the urban heat island effect, to ensure figures are calibrated over the entire duration of the CET.”

and, as I mentioned earlier, I emailed the Met Office today to ask.

I received a very prompt reply, and the first of the possible explanations I came up with is in fact correct. My phrase “more remote” makes it sound like the data is still being collected by 18th century vicars and landed gentry, but in fact there is a bias in the daily CET for the month to date due to the timing of availability of data:

“Not all weather stations send us their reports in real time, i.e. every day, and so for some stations we have to wait until after the end of the month before [complete] data are available.”

It must simply be that the stations that send in the data later tend to be in colder areas (at least in winter when I’ve noticed the end of month adjustment) – perhaps they really are “more remote”!
