Uncharted Territory

April 8, 2016

Missing Mass, the Absent Planet X and Yet More Comments on Trujillo & Sheppard and Batygin & Brown

Filed under: Media, Orbital dynamics, Physics, Science and the media — Tim Joslin @ 5:33 pm

Since my two previous posts attempting to debunk the idea of “Planet X” – a supposed large planet in the outer reaches of the Solar System – the occasional media reference has informed me that teams of researchers and various items of multi-million pound equipment are apparently employed chasing the wild Planet X goose.  Indeed, as I go to press, Scientific American has just emailed me a link to an article reporting the latest developments in the search.  Then, yesterday, New Scientist reported on speculation as to where Planet X (or “Planet Nine” as they call it) might have come from.  A paper New Scientist refer to has a bearing on my own conclusions so I’m adding a note about it at the end of this piece.

I had some further thoughts some weeks ago, and it's time I cleared away a loose end by writing them up.

My Original Proposed Explanation

Let’s recap.  The story so far is that, based on certain characteristics of the orbits of Sedna and a dozen or so other distant minor planets – often referred to as trans-Neptunian objects or TNOs – several groups of researchers have proposed a “Planet X” or sometimes “Planet Nine”, Pluto, the ninth planet for a certain generation, having been relegated to mere “minor planet” status. As I consider the demotion of Pluto to be utterly ridiculous, I’m going to stick to the terminology “Planet X” for the hypothetical distant planet.  You can take “X” to be the Roman numeral if you want.

I was immediately sceptical about the existence of Planet X.  Some other explanation for the TNO orbits seemed more probable to me.  Planet X would be exceptional, compared to the eight (or nine) known planets, not only in its distance from the Sun, but also in the plane of its orbit.  To explain the strange features of the orbits of the minor planets by the known “Kozai mechanism” of gravitational “shepherding” of smaller objects by a large body, Planet X would have to orbit perpendicular to the plane of the Solar System, within a few degrees of which the planes of the orbits of all the other planets lie.

Some weeks ago, then, in my first post on the subject, I reviewed what had been written about Planet X.  I think now that I was perhaps overly influenced by the Scientific American article and considered much the most important aspect of the minor planets’ orbits to be their near 0˚ arguments of perihelion (AOPs).  That is, they cross the plane of the Solar System roughly when they are nearest the Sun.

On reflection, I was perhaps wrong to be so dismissive of the eccentricity of the minor planets’ orbits.  All orbits are eccentric, I pointed out.  But the minor planets’ orbits are really quite eccentric.  There may be a cause of this eccentricity.

I also think it is important that the minor planets’ orbits are highly inclined to the plane of the Solar System compared to those of the inner planets, but they are nevertheless less inclined than random, i.e. in most cases somewhat less than 30˚.

I went on to suggest that perhaps something (other than Planet X) was pulling the minor planets towards the plane of the Solar System.   I suggested it was simply the inner planets, since there would be a component of the gravitational attraction of the minor planets perpendicular to the plane of the Solar System.  I included a diagram which I reproduce once again:

160205 Planet X slash 9

In my second post about Planet X a few days later, I looked more closely at the original scientific papers, in particular those by Trujillo & Sheppard and Batygin & Brown.  I wondered why my suggestion had been rejected, albeit implicitly.  To cut a long story short, the only evidence that the minor planet orbits can’t be explained by the gravity of the inner eight planets (and the Sun) is computer modelling described in the paper by Trujillo & Sheppard.  I wondered if this could have gone wrong somehow.

Problems with Naive Orbital Flattening

Let’s term my original explanation “Naive Orbital Flattening”.  There are some issues with it:

First, if the minor planets are “falling” towards the plane of the Solar System, as in my figure, as well as orbiting its centre of gravity, they would overshoot and “bounce”.  They would have no way of losing the momentum towards the plane of the Solar System, so, after reaching an inclination of 0˚, their inclination would increase again on the opposite side of the plane as it were (I say “as it were” since the minor planets would cross the plane of the Solar System twice on each orbit, of course).

Second, mulling the matter over, there is no reason why orbital flattening wouldn’t have been detected by computer modelling.  Actually, I tell a lie; there is a reason.  The reason is that the process would be too slow.  Far from bouncing, it turns out that the minor planets would not have had time for their orbital inclinations to decline to 0˚ even once.  I did some back of the envelope calculations – several times in fact – and if you imagine the minor planets falling towards the plane of the Solar System under the gravity of the component of the inner planets’ orbits perpendicular to the plane and give yourself 4 billion years, the minor planets would only have fallen a small fraction of the necessary distance!
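For what it’s worth, here is one way of putting rough numbers on that sort of estimate.  It is a minimal sketch rather than a reconstruction of the envelope calculation itself: it treats each giant planet’s orbit as a ring and uses the standard lowest-order secular rate, roughly (3/4) n (m_p/M☉)(a_p/a)² per planet, at which such a ring reorients the orbit of a distant test particle.  The Sedna-like semi-major axis of 500 AU is an assumption chosen for illustration.

```python
# A rough order-of-magnitude sketch (assumptions flagged in comments), not the
# original back-of-the-envelope calculation: how quickly can the giant planets'
# out-of-plane pull reorient a Sedna-like orbit?
import math

M_SUN = 1.0  # solar masses
# Giant planets as (mass in solar masses, semi-major axis in AU)
planets = [(9.5e-4, 5.2),    # Jupiter
           (2.9e-4, 9.6),    # Saturn
           (4.4e-5, 19.2),   # Uranus
           (5.2e-5, 30.1)]   # Neptune

a_tno = 500.0                    # AU; assumed Sedna-like semi-major axis
n = 2 * math.pi / a_tno ** 1.5   # mean motion in rad/yr (units with G*M_sun = 4*pi^2)

# Standard lowest-order (quadrupole) secular rate at which an interior ring
# turns the orbit plane of an exterior test particle: ~ (3/4) n (m_p/M) (a_p/a)^2
rate = sum(0.75 * n * (m_p / M_SUN) * (a_p / a_tno) ** 2 for m_p, a_p in planets)
timescale_gyr = 2 * math.pi / rate / 1e9

print(f"characteristic reorientation timescale ~ {timescale_gyr:.0f} Gyr")
# Comes out at a few tens of Gyr - much longer than the ~4 Gyr age of the
# Solar System, consistent with the flattening being far too slow.
```

The precise prefactor doesn’t matter much; the point is that the answer comes out an order of magnitude longer than the age of the Solar System.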

Third, we have this issue of the AOP.  The AOPs of the inner planets precess because of the gravitational effect of the other planets as they orbit the Sun (with some tweaks arising from relativity).  It’s necessary to explain why this precession wouldn’t occur for the minor planets.

Missing Mass

However you look at it, explaining the orbits of the minor planets must involve finding some mass in the Solar System!  One possible explanation is Planet X.  But could there be another source of missing mass?

Well, trying to rescue my theory, I was reading about the history of the Solar System.  As you do.

It turns out that the Kuiper Belt, beyond Neptune, now contains only a small fraction of the Earth’s mass.  At one time it must have had at least 30 times the mass of the Earth, in order for the large objects we see today to form at all.  Trouble is, the consensus is that all that material either spiralled into the Sun or was driven into interstellar space, depending on particle size, by the effect of solar radiation and the solar wind.

The science doesn’t seem done and dusted, however.  Perhaps there is more mass in the plane of the Solar System than is currently supposed.  Stop Press: Thanks to New Scientist I’m alerted to a paper that suggests exactly that – see the Addendum at the end of this piece.

It seems to me a likely place for particles to end up is around the heliopause, about 125 AU (i.e. 125 times the radius of the Earth’s orbit) from the Sun, because this is where the solar wind collides with the interstellar medium.  You can imagine that forces pushing particles – of a certain range of sizes – out of the Solar System might at this point balance those pushing them back in.

Sophisticated Orbital Flattening

OK, there’s a big “if”, but if there is somewhat more mass – the remains of the protoplanetary disc – in the plane of the Solar System than is generally assumed, then it might be possible to explain the orbits of Sedna and the other TNOs quite neatly.  All we have to assume is that the mass is concentrated in the inner part of the TNOs’ orbits, let’s say from the Kuiper Belt through the heliopause at ~125 AU.

First, the AOPs of around 0˚ are even easier to explain than by the effects of the inner planets.  As with the inner planets, the mass would have its greatest effect on the TNOs when they are nearest perihelion, so would perturb the orbits most then, as discussed in my previous posts.  The improvement in the explanation is that there is no need to worry about AOP precession.  Because the mass is in a disc, and therefore distributed relatively evenly around the Sun, its rotation has no gravitational effect on the minor planets.  And it is the orbital motion of the other planets that causes each planet’s AOP precession.

Second, we need to observe that there is a trade-off between orbital inclination and eccentricity, as in the Kozai effect, due to conservation of the component of angular momentum perpendicular to the plane of the Solar System.  Thus, as the inclination of the TNOs’ orbits is eroded, so their orbits become more eccentric.  This could have one of three possible consequences:

  • it could be that, as I concluded for the effects of the inner planets alone, there has not been time for the TNOs’ orbits to flatten to 0˚ inclination in the 4 billion or so years since the formation of the Solar System.
  • or, it could be that the TNOs we observe are doomed in the sense that their orbits will be perturbed by interactions with the planets if they stray further into the inner Solar System – assuming they don’t actually collide with one of the inner planets – and we don’t observe TNOs that have already been affected in this way.
  • or, it could be that the TNOs’ orbits eventually reach an inclination of 0˚ and “bounce” back into more inclined orbits.  The point is that the eccentricity of the orbits of such bodies would decline again, so we may not observe them so easily, since the objects are so far away we can only see them when they are closest to the Sun.

Which of these possibilities actually occurs would depend on the amount and distribution of the additional mass I am suggesting may exist in the plane of the Solar System.  My suspicion is that the orbital flattening process would be very slow, but it is possible different objects are affected in different ways, depending on initial conditions, such as their distance from the Sun.
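For reference, the inclination–eccentricity trade-off invoked above can be stated precisely.  For an axisymmetric perturbation such as a disc, the component of the TNO’s orbital angular momentum perpendicular to the plane is conserved, which at fixed semi-major axis is the familiar Kozai integral:

\sqrt{1-e^{2}}\,\cos i \;\approx\; \text{const}

So as the inclination i is eroded towards 0˚, cos i rises and the eccentricity e must rise to compensate; if the inclination later grows again (the “bounce” case), the eccentricity falls back, which is why such objects would become harder to spot.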

Now I really will write to the scientists to ask whether this is plausible.  Adding some mass in the plane of the Solar System to a Mercury symplectic-integrator model would indicate whether or not Sophisticated Orbital Flattening is a viable hypothesis.
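By way of illustration, here is the shape of that experiment as a toy calculation.  It is emphatically a sketch, not the Mercury package: it integrates a single test particle around the Sun plus a crudely discretised ring of extra in-plane mass, and prints the particle’s argument of perihelion as it goes.  The ring mass (~30 Earth masses), ring radius (100 AU), Sedna-like orbital elements, step size and run length are all assumptions chosen for illustration; a serious test would need a proper symplectic package and integrations hundreds of times longer.

```python
# Toy sketch only (see caveats above): Sun + discretised ring of in-plane mass,
# test particle on a Sedna-like orbit, crude leapfrog integration.
# Units: AU, years, solar masses, so G*M_sun = 4*pi^2.
import numpy as np

G = 4 * np.pi ** 2
M_SUN = 1.0
M_RING = 1e-4          # ~30 Earth masses of extra in-plane mass (assumed)
RING_RADIUS = 100.0    # AU (assumed)
N_RING = 180           # ring approximated by this many point masses

angles = np.linspace(0, 2 * np.pi, N_RING, endpoint=False)
ring_pos = RING_RADIUS * np.column_stack([np.cos(angles), np.sin(angles), np.zeros(N_RING)])
ring_m = M_RING / N_RING

def acceleration(r):
    """Acceleration (AU/yr^2) at position r due to the Sun and the ring masses."""
    a = -G * M_SUN * r / np.linalg.norm(r) ** 3
    d = r - ring_pos
    # Note: a crude discretisation like this can give spurious kicks if the
    # particle passes very close to one of the ring points.
    a += -G * ring_m * (d / np.linalg.norm(d, axis=1)[:, None] ** 3).sum(axis=0)
    return a

def argument_of_perihelion(r, v):
    """Osculating argument of perihelion (radians) relative to the x-y plane."""
    mu = G * M_SUN
    h = np.cross(r, v)
    node = np.cross([0.0, 0.0, 1.0], h)
    e_vec = np.cross(v, h) / mu - r / np.linalg.norm(r)
    cosw = np.dot(node, e_vec) / (np.linalg.norm(node) * np.linalg.norm(e_vec))
    w = np.arccos(np.clip(cosw, -1.0, 1.0))
    return w if e_vec[2] >= 0 else 2 * np.pi - w

# Sedna-ish test particle, started at perihelion with omega = 0 (assumed elements)
a0, e0, inc = 500.0, 0.85, np.radians(12.0)
r_peri = a0 * (1 - e0)
v_peri = np.sqrt(G * M_SUN * (2 / r_peri - 1 / a0))   # vis-viva speed at perihelion
r = np.array([r_peri, 0.0, 0.0])
v = np.array([0.0, v_peri * np.cos(inc), v_peri * np.sin(inc)])

dt = 1.0                                # yr; crude, for illustration only
for step in range(1_000_000):           # ~1 Myr, far too short to settle the question
    v += 0.5 * dt * acceleration(r)     # leapfrog: kick
    r += dt * v                         #           drift
    v += 0.5 * dt * acceleration(r)     #           kick
    if step % 100_000 == 0:
        w_deg = np.degrees(argument_of_perihelion(r, v))
        print(f"t = {step * dt / 1e6:.1f} Myr   omega = {w_deg:6.1f} deg")
```

Whether ω circulates, librates about 0˚ or barely moves over the timescales Trujillo & Sheppard quote is exactly the question such a run, done properly over hundreds of millions of years, would answer.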

Addendum: I mentioned towards the start of this post that the search continues for Planet X.  I can’t help remarking that this doesn’t strike me as good science.  What research should be trying to do is explain the observations, i.e. the characteristics of the minor planets’ orbits, not trying to explain Planet X, which is as yet merely an unproven hypothetical explanation of those observations.  Anyway, this week’s New Scientist notes that:

“…the planet could have formed where we find it now. Although some have speculated that there wouldn’t be enough material in the outer solar system, Kenyon found that there could be enough icy pebbles to form something as small as Planet Nine in a couple of hundred million years (arxiv.org/abs/1603.08008).”

Aha!  Needless to say I followed the link provided by New Scientist and it turns out that the paper by Kenyon & Bromley does indeed suggest a mechanism for a debris disc at the right sort of distance in the Solar System.  They focus, though, on modelling how Planet X might have formed.  They find that it could exist, if the disc had the right characteristics, but equally it may not have done.  It all depends on the “oligarchs” (seed planets) and the tendency of the debris to break up in collisions.  This is from their Summary (my explanatory comment in square brackets):

We use a suite of coagulation calculations to isolate paths for in situ production of super-Earth mass planets at 250–750 AU around solar-type stars. These paths begin with a massive ring, M0 >~ 15 M⊕ [i.e. more than 15 times the mass of the Earth], composed of strong pebbles, r0 ≈ 1 cm, and a few large oligarchs, r ≈ 100 km. When these systems contain 1–10 oligarchs, two phases of runaway growth yield super-Earth mass planets in 100–200 Myr at 250 AU and 1–2 Gyr at 750 AU. Large numbers of oligarchs stir up the pebbles and initiate a collisional cascade which prevents the growth of super-Earths. For any number of oligarchs, systems of weak pebbles are also incapable of producing a super-Earth mass planet in 10 Gyr.

They don’t consider the possibility that the disc itself could explain the orbits of the minor planets – and may indeed be where they originated in the first place.  In fact, the very existence of the minor planets could suggest there were too many “oligarchs” for a “super-Earth” to form.  Hmm!

 

February 13, 2016

Is Planet X Needed? – Further Comments on Trujillo & Sheppard and Batygin & Brown

Filed under: Media, Orbital dynamics, Physics, Science and the media — Tim Joslin @ 7:42 pm

In my last post, Does (Brown and Batygin’s) Planet 9 (or Planet X) Exist?, I ignored the media squall which accompanied the publication on 20th January 2016 of a paper in The Astronomical Journal, Evidence for a Distant Giant Planet in the Solar System, by Konstantin Batygin and Michael E Brown, and discussed the coverage of the issue in New Scientist (here [paywall] and here) and in Scientific American (here [paywall]).

The idea that there may be a Planet X is not original to the Batygin and Brown paper.  It was also proposed in particular by Chadwick A. Trujillo and Scott S. Sheppard in a Nature paper, A Sedna-like body with a perihelion of 80 astronomical units, dated 27th March 2014.  The New Scientist and Scientific American feature articles were not based on Batygin and Brown’s paper: Scientific American explicitly referenced Trujillo and Sheppard (as well as a paper by C and R de la Fuente Marcos).

A key part of the evidence for a “Planet X” is that for the orbits of a number of trans-Neptunian objects (TNOs) – objects outside the orbit of Neptune – including the minor planet Sedna, the argument of perihelion is near 0˚.  That is, they cross the plane of the planets near when they are closest to the Sun. The suggestion is that this is not coincidental and can only be explained by the action of an undiscovered planet, perhaps 10 times the mass of the Earth, lurking out there way beyond Neptune. An old idea, the “Kozai mechanism”, is invoked to explain how Planet X could be controlling the TNOs, as noted, for example, by C and R de la Fuente Marcos in their paper Extreme trans-Neptunian objects and the Kozai mechanism: signalling the presence of trans-Plutonian planets.

I proposed a simpler explanation for the key finding.  My argument is based on the fact that the mass of the inner Solar System is dispersed from its centre of gravity, in particular because of the existence of the planets. Consequently, the gravitational force acting on the distant minor planets can be resolved into a component towards the centre of gravity of the Solar System, which keeps them in orbit, and, when averaged over time and because their orbits are inclined to the plane of the Solar System, another component at 90˚ to the first, towards the plane of the orbits of the eight major planets:

160205 Planet X slash 9

My suggestion is that this second component will tend gradually to reduce the inclination of the minor planets’ orbits. Furthermore, the force towards the plane of the Solar System will be strongest when the minor planets are at perihelion on their eccentric orbits, not just in absolute terms, but also when averaged over time, taking into account varying orbital velocity as described by Kepler. This should eventually create orbits with an argument of perihelion near 0˚, as observed.
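To make the decomposition explicit (this is just standard Newtonian gravity, not something taken from the papers): a planet of mass m_p sitting in the plane at position \mathbf{r}_p pulls a TNO at position \mathbf{r}, height z above the plane, back towards the plane with a vertical acceleration

g_z = -\,\frac{G\,m_p\,z}{\left|\mathbf{r}-\mathbf{r}_p\right|^{3}}

so, for a given height z, the pull back towards the plane is strongest when the separation |\mathbf{r}-\mathbf{r}_p| is smallest, i.e. around perihelion.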

Has such an effect been taken into account by those proposing a Planet X?  The purpose of this second post on the topic is to look a little more closely at how the two main papers, Batygin & Brown and Trujillo & Sheppard tested for this possibility.

Batygin & Brown

The paper by Batygin and Brown does not document any original research that would have shown AOPs tending towards 0˚ without a Planet X by the mechanism I suggest.  Here’s what they say:

“To motivate the plausibility of an unseen body as a means of explaining the data, consider the following analytic calculation. In accord with the selection procedure outlined in the preceding section, envisage a test particle that resides on an orbit whose perihelion lies well outside Neptune’s orbit, such that close encounters between the bodies do not occur. Additionally, assume that the test particle’s orbital period is not commensurate (in any meaningful low-order sense—e.g., Nesvorný & Roig 2001) with the Keplerian motion of the giant planets.

The long-term dynamical behavior of such an object can be described within the framework of secular perturbation theory (Kaula 1964). Employing Gauss’s averaging method (see Ch. 7 of Murray & Dermott 1999; Touma et al. 2009), we can replace the orbits of the giant planets with massive wires and consider long-term evolution of the test particle under the associated torques. To quadrupole order in planet–particle semimajor axis ratio, the Hamiltonian that governs the planar dynamics of the test particle is [as close as I can get the symbols to the original]:

\mathcal{H} = -\frac{1}{4}\,\frac{GM}{a}\left(1-e^{2}\right)^{-3/2}\sum_{i=1}^{4}\frac{m_{i}a_{i}^{2}}{Ma^{2}}

In the above expression, G is the gravitational constant, M is the mass of the Sun, mi and ai are the masses and semimajor axes of the giant planets, while a and e are the test particle’s semimajor axis and eccentricity, respectively.

Equation (1) is independent of the orbital angles, and thus implies (by application of Hamilton’s equations) apsidal precession at constant eccentricity… in absence of additional effects, the observed alignment of the perihelia could not persist indefinitely, owing to differential apsidal precession.” [my stress].

After staring at this for a bit I noticed that the equation does not include the inclination of the orbit of the test particle, just its semimajor axis (i.e. mean distance from the Sun) and eccentricity.  Then I saw that the text also only refers to the “planar dynamics of the test particle”, i.e. its behaviour in two, not three, dimensions.
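For anyone wanting the intermediate step, the jump from “independent of the orbital angles” to “apsidal precession at constant eccentricity” is a textbook consequence of Hamilton’s equations in Delaunay-type variables (sketched here schematically, modulo sign conventions).  Writing \Gamma = \sqrt{GMa(1-e^{2})} for the momentum conjugate to the perihelion angle \varpi, and noting that the quoted Hamiltonian depends only on a and e (hence on \Gamma):

\dot{\Gamma} = -\frac{\partial \mathcal{H}}{\partial \varpi} = 0 \;\Rightarrow\; e = \text{const},
\qquad
\dot{\varpi} = \frac{\partial \mathcal{H}}{\partial \Gamma} \neq 0 \;\Rightarrow\; \text{the perihelion precesses.}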

Later in the paper Batygin and Brown note (in relation to their modelling in general, not just what I shall call the “null case” of no Planet X) that:

“…an adequate account for the data requires the reproduction of grouping in not only the degree of freedom related to the eccentricity and the longitude of perihelion, but also that related to the inclination and the longitude of ascending node. Ultimately, in order to determine if such a confinement is achievable within the framework of the proposed perturbation model, numerical simulations akin to those reported above must be carried out, abandoning the assumption of coplanarity.”

I can’t say I found Batygin & Brown very easy to follow, but it’s fairly clear that they haven’t modelled the Solar System in a fully 3-dimensional manner.

Trujillo & Sheppard

If we have to discount Batygin & Brown, then the only true test of the null case is that in Trujillo & Sheppard.  Last time I quoted the relevant sentence:

“By numerically simulating the known mass in the solar system on the inner Oort cloud objects, we confirmed that [they] should have random ω [i.e. AOP]… This suggests that a massive outer Solar System perturber may exist and [sic, meaning “which”, perhaps] restricts ω for the inner Oort cloud objects.”

I didn’t mention that they then referred to the Methods section at the end of their paper.  Here’s what they say there (and I’m having to type this in because I only have a paper copy! – so much for scientific and technological progress!):

“Dynamical simulation. We used the Mercury integrator to simulate the long-term behaviour of ω for the Inner Oort cloud objects and objects with semi-major axes greater than 150 AU and perihelia greater than Neptune.  The goal of this simulation was to attempt to explain the ω clustering.  The simulation shows that for the currently known mass in the Solar System, ω for all objects circulates on short and differing timescales dependent on the semi-major axis and perihelion (for example, 1,300 Myr, 500 Myr, 100 Myr and 650 Myr for Sedna, 2012 VP113, 2000 CR105 and 2010 GB17, respectively).”

In other words their model reproduced the “apsidal precession” proposed in Batygin & Brown, but since Trujillo & Sheppard refer to ω, the implication is that their simulation was in 3 dimensions and not “planar”.

However, could the model used by Trujillo and Sheppard have somehow not correctly captured the interaction between the TNOs and the inner planets?  The possibilities range from apsidal precession being programmed in to the Mercury package (stranger things have happened!) to something more subtle, resulting from the simplifications necessary for Mercury to model Solar System dynamics.

Maybe I’d better pluck up courage and ask Trujillo and Sheppard my stupid question!  Of course, the effect I propose would have to dominate apsidal precession, but that’s definitely possible when apsidal precession is on a timescale of 100s of millions of years, as found by Trujillo and Sheppard.

February 5, 2016

Does (Brown and Batygin’s) Planet 9 (or Planet X) Exist?

Filed under: Media, Orbital dynamics, Physics, Science and the media — Tim Joslin @ 7:23 pm

What exactly is the evidence that there may be a “Super-Earth” lurking in the outer reaches of the Solar System?  Accounts differ, so I’ll review what I’ve read (ignoring the mainstream media storm around 20th January!), to try to minimise confusion.

New Scientist

If you read your New Scientist a couple of weeks ago, you’ll probably have seen the cover-story feature article Last Great Mysteries of the Solar System, one of which was Is There a Planet X? [paywall for full article – if, that is, unlike me, you can even get your subscription number to give you access].  The article discussed the dwarf planets Sedna and 2012VP113.  The orbits of these planetoids – and another 10 or so not quite so distant bodies – according to New Scientist and the leaders of the teams that discovered Sedna and 2012VP113, Mike Brown and Scott Sheppard, respectively, could indicate “there is something else out there”.

Apparently, says NS:

“[the orbits of Sedna and 2012VP113] can’t be explained by our current understanding of the solar system…  Elliptical orbits happen when one celestial object is pushed around by the gravity of another.  But both Sedna and 2012VP113 are too far away from the solar system’s giants – Jupiter, Saturn, Uranus and Neptune – to be influenced.  Something else must be stirring the pot.”

“Elliptical orbits happen when one celestial object is pushed around by the gravity of another.”  This is nonsense.  Elliptical orbits are quite usual beyond the 8 planets (i.e. for “trans-Neptunian objects”), which is the region we’re talking about.  The fact that the orbits of Sedna and 2012VP113 are elliptical is not the reason for thinking there may be another decent-sized planet way out beyond Uranus (and little Pluto).

I see that the online version of New Scientist’s article Is There a Planet X? has a strap-line:

“Wobbles in the orbit of two distant dwarf planets are reviving the idea of a planet hidden in our outer solar system.”

Guess what?  The supposed evidence for Planet X is nothing to do with “wobbles” either.

The New Scientist article was one of several near-simultaneous publications and in fact the online version was updated, the same day, 20th January, with a note:

Update, 20 January: Mike Brown and Konstantin Batygin say that they have found evidence of “Planet Nine” from its effect on other bodies orbiting far from the sun.

Exciting.  Or it would have been, had I not been reading the print version.  The link is to another New Scientist article: Hints that ‘Planet Nine’ may exist on edge of our solar system [no paywall]. “Planet Nine”?  It was “Planet X” a minute ago.

Referencing the latest paper on the subject, by Brown and Batygin, this new online NS article notes that:

“Brown and others have continued to explore the Kuiper belt and have discovered many small bodies. One called 2012 VP113, which was discovered in 2014, raised the possibility of a large, distant planet, after astronomers realised its orbit was strangely aligned with a group of other objects. Now Brown and Batygin have studied these orbits in detail and found that six follow elliptical orbits that point in the same direction and are similarly tilted away from the plane of the solar system.

‘It’s almost like having six hands on a clock all moving at different rates, and when you happen to look up, they’re all in exactly the same place,’ said Brown in a press release announcing the discovery. The odds of it happening randomly are just 0.007 per cent. ‘So we thought something else must be shaping these orbits.’

According to the pair’s simulations, that something is a planet that orbits on the opposite side of the sun to the six smaller bodies. Gravitational resonance between this Planet Nine and the rest keep everything in order. The planet’s high, elongated orbit keeps it at least 200 times further away from the sun than Earth, and it would take between 10,000 and 20,000 Earth years just to complete a single orbit.”

Brown and Batygin claim various similarities in the orbits of the trans-Neptunian objects.  But they don’t stress what initially sparked the idea that “Planet Nine” might be influencing them.

Scientific American and The Argument of Perihelion

Luckily, by the time I saw the 23rd January New Scientist, I’d already read The Search for Planet X [paywall again, sorry] cover story in the February 2016 (who says time travel is impossible?) issue of Scientific American, so I knew that – at least prior to the Brown and Batygin paper – what was considered most significant about the trans-Neptunian objects was that they all had similar arguments of perihelion (AOPs), specifically around 0˚.  That is, they cross the plane of the planets roughly at the same time as they are closest to the Sun (perihelion).  The 8 (sorry, Pluto) planets orbit roughly in a similar plane; these more distant objects are somewhat more inclined to that plane.

Scientific American reports the findings by two groups of researchers, citing a paper by each.  One is a letter to Nature, titled A Sedna-like body with a perihelion of 80 astronomical units, by Chadwick Trujillo and Scott Sheppard [serious paywall, sorry], which announced the discovery of 2012 VP113 and arguably started the whole Planet X/9/Nine furore.  They quote Sheppard: “Normally, you would expect the arguments of perihelion to have been randomized over the life of the solar system.”

To cut to the chase, I think that is a suspect assumption.  I think there may be reasons for AOPs of bodies in inclined orbits to tend towards 0˚, exactly as observed.

The Scientific Papers

The fact that the argument of perihelion is key to the “evidence” for Planet X is clear from the three peer-reviewed papers mentioned so far.

Trujillo and Sheppard [paywall, still] say that:

“By numerically simulating the known mass in the solar system on the inner Oort cloud objects, we confirmed that [they] should have random ω [i.e. AOP]… This suggests that a massive outer Solar System perturber may exist and [sic, meaning “which”, perhaps] restricts ω for the inner Oort cloud objects.”

The Abstract of the other paper referenced by Scientific American, Extreme trans-Neptunian objects and the Kozai mechanism: signalling the presence of trans-Plutonian planets, by C and R de la Fuente Marcos, begins:

“The existence of an outer planet beyond Pluto has been a matter of debate for decades and the recent discovery of 2012 VP113 has just revived the interest for this controversial topic. This Sedna-like object has the most distant perihelion of any known minor planet and the value of its argument of perihelion is close to 0 degrees. This property appears to be shared by almost all known asteroids with semimajor axis greater than 150 au and perihelion greater than 30 au (the extreme trans-Neptunian objects or ETNOs), and this fact has been interpreted as evidence for the existence of a super-Earth at 250 au.”

And the recent paper by Konstantin Batygin and Michael E Brown, Evidence for a Distant Giant Planet in the Solar System, starts:

“Recent analyses have shown that distant orbits within the scattered disk population of the Kuiper Belt exhibit an unexpected clustering in their respective arguments of perihelion. While several hypotheses have been put forward to explain this alignment, to date, a theoretical model that can successfully account for the observations remains elusive.”

So, whilst Batygin and Brown claim other similarities in the orbits of the trans-Neptunian objects, the key peculiarity is the alignment of AOPs around 0˚.

Is There a Simpler Explanation for ~0˚ AOPs?

Let’s consider first why the planets orbit in approximately the same plane, and why the Galaxy is also fairly flat.  The key is the conservation of angular momentum.  The overall rotation within a system about its centre of gravity must be conserved.  Furthermore, this rotation must be in a single plane.  Any orbits above and below that plane will eventually cancel each other out, through collisions (as in Saturn’s rings) and/or gravitational interactions (as when a rotating cloud of gas and stars gradually settles into a disc galaxy).  Here’s an entertaining explanation of what happens.

This process is still in progress for the trans-Neptunian objects, I suggest, since they are inclined by up to around 30˚ – Sedna’s inclination is 11.9˚ for example – which is much more than the planets, which are all inclined within a few degrees of the plane of the Solar System.  What’s happening is that the TNOs are all being pulled constantly towards the plane of the Solar System, as I’ve tried to show in this schematic:

160205 Planet X slash 9

Now, here comes the key point: because the mass of the Solar System is spread out, albeit only by a small amount, because there are planets and not just a Sun, the gravitational pull on each TNO is greater when it is nearer the Sun (closer to perihelion) than when it is further away. There’s more of a tendency for the TNO (or any eccentrically orbiting body) to gravitate towards the plane of the system when it’s nearer perihelion.

This is true, I believe, even after allowing for Kepler’s 2nd Law, i.e. that the TNO spends less time closer to the Sun.  Kepler’s 2nd Law implies that the time an orbiting body spends in a given angular segment of its orbit is proportional to the square of its distance from the centre of gravity of the system, which you’d think might cancel out the inverse square law of gravity.  But the mass of the Solar System is not all at the centre of gravity.  The nearest approach of Neptune to Sedna, for example, is around 46 AU (astronomical units, the radius of Earth’s orbit) when the latter is at perihelion, but about 476 AU when Sedna is at its mean distance from the Sun, and further still at aphelion.
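Put slightly more formally (standard orbital mechanics, not taken from any of the papers): for a point-mass Sun the two effects cancel exactly, which is why the offset of the planets’ mass from the focus is doing all the work in this argument.  With h the particle’s constant specific angular momentum,

\frac{dt}{d\theta} = \frac{r^{2}}{h}
\qquad\text{and}\qquad
g_{\odot} = \frac{GM_{\odot}}{r^{2}}
\qquad\Longrightarrow\qquad
g_{\odot}\,\frac{dt}{d\theta} = \frac{GM_{\odot}}{h} = \text{const}

so the Sun’s pull, weighted by the time spent per unit angle of the orbit, is the same near perihelion as near aphelion; any asymmetry between the two has to come from mass that is not at the focus.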

The most stable orbit for a TNO is therefore when it crosses the plane of the Solar System at perihelion, that is, when its argument of perihelion (AOP) is 0˚.  Over many millions of years the AOPs of the orbits of Sedna and co. have therefore tended to approach 0˚.

I suggest it is not necessary to invoke a “Super-Earth” to explain the peculiarly aligned arguments of perihelion of the trans-Neptunian objects.

April 8, 2013

CET End of Month Adjustments

Filed under: Global warming, Media, Science, Science and the media, UK climate trends — Tim Joslin @ 5:51 pm

When we have some exceptional weather I like to check out the Central England Temperature (CET) record for the month (or year) in question and make comparisons with historic data which I have imported into spreadsheets from the Met Office’s CET pages.

The CET record goes back to 1659, so the historical significance of an exceptionally cold or warm spell – a month, season, year or longer – can be judged over a reasonably long period. Long-term trends, such as the gradual, irregular warming that has occurred since the late 17th century, are, of course, also readily apparent. The Met Office bases press releases and suchlike on records for the UK as a whole which go back only to 1910.

The Met Office update the CET for the current month on a daily basis, which is very handy for seeing how things are going.

After watching the CET carefully during a few extreme months – December 2010 and March 2013 come to mind – I noticed that there seems to be a downwards adjustment at the end of the month. I speculated about the reasons for the apparent correction to the figures a week or so ago:

“…I’ve noticed the CET is sometimes adjusted downwards before the final figure for the month is published, a few days into the next month. I don’t know why this is. Maybe the data for more remote (and colder) weather-stations is slow to come in. Or maybe it’s to counter for the urban heat island effect, to ensure figures are calibrated over the entire duration of the CET.”

and, as I mentioned earlier, today emailed the Met Office to ask.

I received a very prompt reply, and the first of the possible explanations I came up with is in fact correct. My phrase “more remote” makes it sound like the data is still being collected by 18th century vicars and landed gentry, but in fact there is a bias in the daily CET for the month to date due to the timing of availability of data:

“Not all weather stations send us their reports in real time, i.e. every day, and so for some stations we have to wait until after the end of the month before [complete] data are available.”

It must simply be that the stations that send in the data later tend to be in colder areas (at least in winter when I’ve noticed the end of month adjustment) – perhaps they really are “more remote”!

March 2013 WAS equal with 1892 as the coldest in the CET record since 1883!

10 days or so ago I discussed the possibility that March 2013 would turn out to be the coldest in the Central England Temperature (CET) record since the 19th century.

Well, it did it!

Here’s a list of the coldest Marches since 1800 in the CET:

1.   1883  1.9C
2.   1845  2.0C
3.   1837  2.3C
4= 1892  2.7C
4= 2013  2.7C
5.   1962  2.8C

A few questions and not quite so many answers occur to me:

1. Why hasn’t the Met Office trumpeted March 2013 as the coldest since the 19th century?
What I’m alluding to here is, first, that the Met Office records for the UK and England only go back to 1910, but also that, as detailed on the Met Office’s blog, it turns out that March 2013 was only the joint 2nd coldest for the UK as a whole:

“March – top five coldest in the UK
1 1962 1.9 °C
2 2013 2.2 °C
2 1947 2.2 °C
4 1937 2.4 °C
5 1916 2.5 °C”

and second coldest for England as a whole:

“Looking at individual countries, the mean temperature for England for March was 2.6 °C – making it the second coldest on record, with only 1962 being colder (2.3 °C). In Wales, the mean temperature was 2.4 °C which also ranks it as the second coldest recorded – with only 1962 registering a lower temperature (2.1 °C). Scotland saw a mean temperature of 1.3 °C, which is joint fifth alongside 1916 and 1958. The coldest March on record for Scotland was set in 1947 (0.2 °C). For Northern Ireland, this March saw a mean temperature of 2.8 °C, which is joint second alongside 1919, 1937, and 1962. The record was set in 1947 (2.5 °C).”

The figures all tally, suggesting that the parts of England not included in the CET were less exceptionally cold than those included, as I suggested before.

2. Why hasn’t the Met Office trumpeted March 2013 as the second coldest on record?
What I’m alluding to here is that the Met Office only made their “second coldest” announcement on their blog, not with a press release. The press release they did issue on 26th March was merely for “the coldest March since 1962”, and included somewhat different data to that (above) which appeared on their blog for the whole month:

“This March is set to be the coldest since 1962 in the UK in the national record dating back to 1910, according to provisional Met Office statistic [sic].

From 1 to 26 March the UK mean temperature was 2.5 °C, which is three degrees below the long term average. This also makes it joint 4th coldest on record in the UK.

Looking at individual countries, March 2013 is likely to be the 4th coldest on record for England, joint third coldest for Wales, joint 8th coldest for Scotland and 6th coldest for Northern Ireland.” (my stress)

and a “top 5” ranking that doesn’t even include March 2013, which eventually leapt into 2nd place:

“March – Top five coldest in the UK
1 1962 1.9 °C
2 1947 2.2 °C
3 1937 2.4 °C
4 1916 2.5 °C
5 1917 2.5 °C.”

As I’ve also mentioned before, it’s odd to say the least that the Met Office have formally released provisional data (and not the actual data!) to the media.

So I’ve asked them why they do this, by way of a comment on their blog:

“The Met Office’s [sic – oops] announced a few days ago that March 2013 was only the ‘joint 4th coldest on record’ (i.e. since 1910) rather than the joint 2nd coldest. This was based on a comparison of data to 26th in 2013 with the whole month in earlier years, which seems to me a tad unscientific.

Maybe it’s just me, but it seems that there was more media coverage of the earlier, misleading, announcement.

Why did the Met Office make its early announcement and not wait until complete data became available at the end of the month?”

I’ll let you know when I receive a response – my comment has been awaiting moderation for 4 days now.

3. Why was it not clearer from the daily CET updates that March 2013 would be as cold as 2.7C?
And what I’m alluding to here is the end of month adjustment that seems to occur in the daily updated monthly mean CET data. I’ve noticed this and so has the commenter on my blog, “John Smith”.

I didn’t make a record of the daily mean CET for March to date, unfortunately, but having made predictions of the final mean temperature for March 2013 on this blog, I checked progress. From memory the mean ticked down to 2.9C up to and including the 30th, but was 2.7C for the whole month, i.e. after one more day. At that stage in the month, it didn’t seem to me possible for the mean CET for the month as a whole to drop more than 0.1C in a day (and it had been falling more slowly than that, i.e. dropping by 0.1C less often than every day). Anyway, I’ve emailed the Met Office CET guy to ask about the adjustment. Watch this space.

4. Does all this matter?
Yes, I think it does.

Here’s the graph for March mean CET I produced for the previous post, updated with 2.7C for March 2013:

130408 Latest weather slide 1 CET graph

A curiosity is that never before has a March been so much colder – more than 5C – than the one the previous year. But the main point to note is the one I pointed out last time, that March 2013 has been colder than recent Marches – as indicated by the 3 running means I’ve provided – by more than has occurred before (except after the Laki eruption in 1783).

I stress the difference with recent Marches rather than just March 2012, because what matters most in many areas is what we’re used to. For example, farmers will gradually adjust the varieties of crops and breeds of livestock to the prevailing conditions. A March equaling the severity of the worst in earlier periods, when the average was lower, will then be more exceptional and destructive in its effects.

The same applies to the natural world and to other aspects of the human world. For example, species that have spread north over the period of warmer springs will not be adapted to this year’s conditions.  And we gradually adjust energy provision – such as gas storage – on the basis of what we expect based on recent experience, not possible theoretical extremes.

OK, this has just been a cold March, but it seems to me we’re ill-prepared for an exceptional entire winter, like 1962-3 or 1740. And it seems such events have more to do with weather-patterns than with the global mean temperature, so are not ruled out by global warming.

March 29, 2013

How Significant is the Cold UK March of 2013 in the CET?

Filed under: Global warming, Media, Science, Science and the media, UK climate trends — Tim Joslin @ 12:51 pm

Well, few UK citizens can still be unaware that March 2013 has been the coldest since 1962, though I’m still baffled why the Met Office jumps the gun on reporting data. There were 4 days to go when their announcement arrived in my Inbox – and clearly that of every newspaper reporter in the land.

Overall, though, the Met Office analysis – which, remember, is based on a series going back only to 1910 – suggests 2013 has been less of an outlier than it is in the Central England Temperature (CET) record.

This is what they say:

“This March is set to be the coldest since 1962 in the UK in the national record dating back to 1910, according to provisional Met Office statistic.

From 1 to 26 March the UK mean temperature was 2.5 °C, which is three degrees below the long term average. This also makes it joint 4th coldest on record in the UK.”

They provide a list:

“March – Top five coldest in the UK

1 1962 1.9 °C
2 1947 2.2 °C
3 1937 2.4 °C
4 1916 2.5 °C
5 1917 2.5 °C”

The discrepancy with the CET is presumably partly because Scotland, although colder than England, has not been as extreme compared to the cold Marches of the 20th century. The Met Office note:

“Looking at individual countries, March 2013 is likely to be the 4th coldest on record for England, joint third coldest for Wales, joint 8th coldest for Scotland and 6th coldest for Northern Ireland.”

Still, I’m rather puzzled why this March is reported as only the 4th coldest in England, particularly when I read in a post on the Met Office’s blog that in most English counties it’s been the 2nd coldest after 1962.

It may be that the overall ranking for England will change over the next few days, which would add to my bafflement as to why the Met office makes early announcements. I’d have thought such behaviour was fine for mere bloggers like me, but not what is expected from an authoritative source. Isn’t the difference the same as that between City company analysts and the companies themselves? The former speculate; the latter announce definitive results.

Anyway, it’s also possible that the CET region has been colder than England as a whole relative to the previous cold Marches. I notice on the Met Office blog that this March has not been the second coldest for Yorkshire, Northumberland and Durham. If these are outside the CET area, the significant area they cover would explain the difference in the Met Office rankings for England as a whole.

Focusing just on the CET, it’s still possible that March 2013 could be as cold or colder than 1962, and therefore the equal coldest since 1892 or 1883 (or even the coldest since 1883, though that seems unlikely now).

Although daily maximum temperatures have increased slightly to 6C or so, we’re also expecting some serious frosts (in fact some daily minimum records look vulnerable on 30th and 31st), and the CET website implies it is a (min+max)/2 statistic (as included in the screen-grab below).

Here’s the latest CET information for March:

130328 Latest actual weather v2 slide 2

It’s now very easy to work out what the mean temperature will be at the end of March, due to the happy coincidence of the mean being 3.1C so far and there being 31 days in the month (regular readers will have noticed that I much prefer ready-reckoning methods to those involving calculators or spreadsheets). Obviously, spread over the whole month the 3.1C so far would be 2.7C. That is, if the mean temperature for the remaining 4 days were 0C, that for the month would be 2.7C, the same as 1892 (and lower than 1962’s 2.8C). Every 3.1 degree-days above 0 (that is, a mean of ~0.75C over the 4 days) adds 0.1C (over 2.7C) for the month as a whole. So if you think it’ll average 1.5C for the rest of the month in the CET region, the mean for the month as a whole will be 2.9C.
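For anyone who prefers the calculator to the ready reckoner, here is the same projection in a few lines (the 3.1C month-to-date figure and the 4 remaining days are as quoted above):

```python
# Project the March 2013 monthly mean CET from the month-to-date figure.
# Figures as quoted in the post: mean of 3.1C over the first 27 days, 4 days left.
DAYS_SO_FAR, DAYS_IN_MONTH = 27, 31
MEAN_SO_FAR = 3.1  # degrees C

def projected_month_mean(mean_of_remaining_days):
    """Whole-month mean if the remaining days average `mean_of_remaining_days` C."""
    days_left = DAYS_IN_MONTH - DAYS_SO_FAR
    total_degree_days = MEAN_SO_FAR * DAYS_SO_FAR + mean_of_remaining_days * days_left
    return total_degree_days / DAYS_IN_MONTH

for m in (0.0, 0.75, 1.5):
    print(f"last {DAYS_IN_MONTH - DAYS_SO_FAR} days at {m:.2f}C "
          f"-> month mean ~ {projected_month_mean(m):.2f}C")
# 0.0C -> 2.70C; 0.75C -> 2.80C; 1.5C -> 2.89C (i.e. ~2.9C), matching the reckoning above.
```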

Obviously rounding could come into it, so it might be worth noting that the mean to 26th was also 3.1C. If you think (or find out – due to time constraints, I haven’t drilled down on the CET site) that 27th was colder than 3.1C (which seems likely) then just a bit less than 1.5C for the rest of the month – say 1.4C – would likely leave the overall mean at 2.8C.

Here’s the latest ensemble chart for London temperatures from the Weathercast site to help you make your mind up:

130328 Latest actual weather v2 slide 5

My guesstimate is 2.8C, so on that basis I move on to the main point of this post. Just for a bit of fun I put together a chart of the entire CET record for March, with running means:

130328 Latest actual weather v2 slide 6 v2

The picture is not dissimilar to that for the unusually cool summer of 2011. Although this March has been the coldest for “only” 50 years – one might argue that a coldest for 50 years month will occur on average every 50 months, i.e. every 4 and a bit years – global and general UK (including CET) temperatures have increased significantly over the last few decades.

As can be seen from the chart above, this March has been around 3.5 degrees colder than the running mean (depending which you take).

I say this with the health warning (as I gave for summer 2011) that the running means may be dragged down if March is also cold over the next few years – the significance of extreme events can only be fully appreciated in hindsight, and it may be that the warm Marches of the two decades or so before this year will look more exceptional and this year’s less exceptional when we have the complete picture.

Health-warning aside, there aren’t really any other Marches as much as 3.5C colder than the prevailing March temperature. The period 1783-6 stands out on the graph, but isn’t really comparable, because the eruption of the Icelandic volcano Laki gave the country a sulphurous shade, significantly reducing the Sun’s warmth. 1674 looks notable, too, but the monthly means back then seem to be rounded to the nearest degree centigrade, so we can’t be sure it actually was as cold as 1C (at least without considerable further research).

It’s all very curious. After December 2010 (for which I should prepare a similar chart some time) and now March 2013, one wonders whether, when we do get cold snaps, it’s going to be even more of a shock than in the past. Does global warming have much less effect on cold UK winter temperatures than on the long-term average? Or would this March have been even colder had the same weather conditions occurred before the global warming era? March 1883 averaged 1.9C, but was only about 3C colder than prevailing Marches. Perhaps this year’s weather conditions would have resulted in a monthly mean of 1.4C back then! The trouble is we now have no idea whether this March has been a once in 50 years, once a century or once a millennium event.

And has melting the Arctic ice made cold snaps more likely?

Confusion and unpredictability abound when it comes to extreme weather events. Preparing for the worst – the precautionary principle – is called for.

March 26, 2013

March 2013 in UK: Coldest in CET since 1892 or 1883?

I see the Daily Mail is now suggesting that March 2013 “could be Britain’s coldest March since 1892.”

The nation-wide statistics published by the Met Office only go back to 1910, so the Central England Temperature (CET) record is needed to put current weather in a long-term context.

1892 is an odd year for the Daily Mail to choose, since the CET for March that year was 2.7C, whereas 1962, the year we have to “beat” for it to be the coldest since 1892, saw a mean March CET of 2.8C. We’re unlikely to say this March is “the coldest since 1883”, since if it comes in at 2.8C we’d probably say it’s the “equal coldest since 1892” and if it comes in at 2.7C we might say it’s the “equal coldest since 1883”.

Furthermore, given the possibility of rounding, the difference between 1892 and 1962 could be much less than 0.1C, for all I know.

In addition, the difficulties of calibrating temperature readings between 1892 and 1962 make a difference of 0.1C in a monthly mean fairly insignificant (and probably statistically insignificant). To put it another way, the error bars on the temperatures are probably greater than 0.1C. Perhaps we shouldn’t really be quibbling over the difference between monthly means of 2.7C and 2.8C. But then again, we do like our weather records!

If this March is colder even than that in 1892, the next mark is 1883, when March saw a CET of 1.9C. It’s no longer on the cards for it to be as cold as 1.9C this March.

But what are the chances of the CET this March being colder than 2.8C or even 2.7C?

Here’s the state of play at the moment:

130326 Latest ensemble forecasts slide 1

This is more or less in line with my projection of a few days ago. But that was based on ensemble forecasts on 22nd March, and, as I noted yesterday, the forecast for the rest of March has just kept getting colder since the 22nd:

130326 Latest ensemble forecasts slide 3

Based on the forecast for 22nd March I wrote:

“Ignoring today (22nd) as transitional, it now looks likely that the 5 days 23rd through 27th March will be seriously cold, so let’s knock 0.1C off the monthly average for each of them. That gets us down to 3.1C.

The 28th will most likely be around the new average (3.1C), so it all depends on when the mild air comes in from the Atlantic. The computer model runs (grey lines) differ, and the average (yellow line) for 30th and 31st are for it to be relatively mild. If that’s the case, then we’d need to add on 0.1C for each day, so would roughly equal 1969.”

It’s certainly now not the case that 30th and 31st will be “relatively mild”, so we won’t have to add on 0.1C for each of those days. This March is therefore very likely to be colder than 1969 (3.3C) and therefore the coldest since at least 1962.

But could it be even colder than the 2.8C in 1962?

Here’s a larger image of the current ensemble forecast from the Weathercast site:

130326 Latest ensemble forecasts slide 2

The CET mean for March so far is 3.2C. To depress this average the mean for the rest of March would have to be lower than 3.2C, obviously. And since there are 31 days in the month, each degree it is lower than 3.2C over one day (what we might call a degree-day) will depress the monthly mean by 1/31st of a degree.

The ensemble chart suggests the mean temperature for London for the rest of March will be about 1.5C – your judgement is as good as mine – over 6 days, so that’s very roughly 12 degree days lower than 3.2C (about 2C each day), so dividing by 30 (rounding 31), we might expect the mean for the month to come out about 0.4C lower than it is now, at 2.8C.
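In symbols, that reckoning (using the same rough figures) is simply:

\Delta \bar{T}_{\mathrm{March}} \;\approx\; \frac{1}{31}\sum_{\text{remaining days}}\left(T_{d}-3.2\,^{\circ}\mathrm{C}\right) \;\approx\; \frac{-12\ \text{degree-days}}{31} \;\approx\; -0.4\,^{\circ}\mathrm{C}

giving roughly 3.2 − 0.4 ≈ 2.8C for the month as a whole.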

This estimate is very rough and ready since I’ve assumed in particular that London is representative of the CET region. It’s quite possible the region as a whole will be colder than London. Not only might this be the case generally, but there’s a lot of lying snow in more northerly areas, which tends to depress temperature readings (because it resists warming by reflecting sunlight and because its latent heat buffers warming of the ground surface at about 0C, both preventing warming of the air above it, and it is the near-ground air temperature that’s being measured).

Additionally, I’ve noticed the CET is sometimes adjusted downwards before the final figure for the month is published, a few days into the next month. I don’t know why this is. Maybe the data for more remote (and colder) weather-stations is slow to come in. Or maybe it’s to counter for the urban heat island effect, to ensure figures are calibrated over the entire duration of the CET.

By way of a sanity-check, here’s another view of much the same ensemble data as in the previous image, from the Wetterzentrale site:

130326 Latest actual weather

Note that to depress the average for the month so far the temperature would need to be around 4C less than usual, since the mean CET mean (!) for the whole of March is about 5.7C and it’s near the end of the month, when mean daily temperatures around 7C would be typical. On that basis the Wetterzentrale maps suggest that 12 degree-days lower than the mean for the month so far is a reasonable estimate for the outlook over the next 6 days.

If a best guess is that the mean CET for March 2013 is 2.8C (“equal coldest since 1892”), with some uncertainty, it certainly seems possible that it could instead come in at 2.7C (“equal coldest since 1883”). In either case, though, it might be more accurate to simply say it has been one of the coldest 3 Marches since March 1883. I like to be fairly conservative, but I suppose there’s just an outside chance the mean CET this month could be even lower, at 2.6C, say, in which case we’d probably claim it has been the coldest since 1883.

Of course, this is all just estimation: the mean CET for March 2013 might end up “only” as cold as say 2.9C, the coldest for 51 years!

Forecasting and Philosophy

I noted yesterday that:

“Weather forecasting (and climate prediction) is not just about computer power. Deep philosophical ideas also come into play.”

I fear I may have under-delivered on the philosophy.

I intended to suggest that all forecasts, such as of weather, are necessarily and systematically inaccurate.

To recap, my main point yesterday was that running an inaccurate forecasting model numerous times doesn’t solve all the inherent problems:

“All ensemble forecasters know is that a certain proportion of computer model runs produce a given outcome. This might help identify possible weather events, but doesn’t tell you real-world probabilities. If there is some factor that the computer model doesn’t take account of then running the computer model 50 times is in effect to make the same mistake 50 times.”

Let me elaborate.

We can dismiss the normal explanation for forecasting difficulties.  Forecasters normally plead “chaos”.  Perfect forecasts are impossible, they say, because the flap of a butterfly’s wings can cause a hurricane.  Small changes in initial conditions can have dramatic consequences.

I don’t accept this excuse for a minute.

It may well be the case that computer models suffer badly from the chaos problem.  In fact, the ensemble modelling approach relies on it.  I suspect the real world is much less susceptible.  Besides, given enough computer power I could model the butterfly down to the last molecule and predict its wing-flapping in precise detail.

No, the real world is determined. That is, there is only one possible outcome.  Given enough information and processing power you could, in principle, predict the future with complete accuracy.

Of course, there are insurmountable practical problems that prevent perfect forecasting:

  • The most fundamental difficulty is that no computer can exceed the computing capabilities of the universe itself.  Although the future is written, it is in principle impossible to read it.
  • You might try to get round the computing capacity problem by taking part of the universe as a closed system and building a huge computer to model what is going on in that relatively small part.  The difficulty then is that the entire universe is interconnected.  Every part of it is open, not closed.  If the small part you were modelling were the Earth, say, then you’d have to also model all celestial events, not just those which might have a physical effect, but all those which might be detectable by humans and therefore able to affect thought-processes and decision-making.  And, since our telescopes can see galaxies billions of light-years away, there’s a lot to include in your model.  That’s not all, though.  You’d also need to model every cosmic ray that might disrupt a molecule, most dramatically of germ-line DNA – though a change to any molecule is of consequence – and even every photon that might warm a specific electron, contribute to photosynthesis or allow a creature to see just that bit better when hunting…
  • Then there are problems of what George Soros terms reflexivity.  That is, people’s behaviour is modified by knowing the predicted future.  They might act to deliberately avoid the modelled outcome, for example by deflecting an asteroid away from its path towards the Earth, which we might term strong reflexivity.  Or they might change their behaviour in a way that unintentionally affects the future, for example by cancelling travel plans in light of a weather forecast – weak reflexivity.  With enough computer power, some such problems could conceivably be overcome.  One might predict the response to an inbound asteroid, for example.  But it’s not immediately apparent how a model would handle the infinitely recursive nature of the general problem.

In practice, of course, these would be nice problems to have, because computer simulations of the weather system are grossly simplified.   They must therefore be systematically biased in their forecasting of any phenomena that rely on the complexity absent from the models.  As I noted yesterday, all runs in an ensemble forecast will suffer from any underlying bias in the model.

Two categories of simplification are problematic:

  • Models divide the real world into chunks, for example layers of the atmosphere (or of the ocean).
  • And models necessarily represent closed systems – since the only truly closed system is the universe as a whole.  Anything not included in the model can affect the forecast.  For example, volcanic eruptions will invalidate any sufficiently long-term (and on occasion short-term) weather forecast.  Worse, weather models may be atmosphere only or include only a crude simplification of the oceans.  That is, they may represent the oceans in insufficient detail, and furthermore fail to include the effect of the forecast on the oceans, which in turn affects the forecast later on.

The good news, of course, is that it is possible to improve our weather forecasting almost indefinitely.

Perhaps those presenting weather forecasts should reflect on the fact that, as computer models improve, ensemble forecast ranges will narrow. The 5-day forecast today is as good as the 3-day forecast was a decade or two ago. Yet the probability of specific real-world conditions will not have changed. It has always been, and always will be, precisely one: certainty.

It makes no sense to say “the probability of snow over Easter is x%”, when x depends merely on how big a computer you are using.

No, forecasters need to say instead that “x% of our forecasts predict snow over Easter”, which is not the same thing at all.

March 25, 2013

The UK’s Cold March 2013 and the Perils of Ensemble Forecasting

Weather forecasting (and climate prediction) is not just about computer power. Deep philosophical ideas also come into play. In particular, problems emanate from the use and communication of the concepts of probability and uncertainty. Often, the probability of a specific outcome is quoted, when what is meant is the level of its certainty in the opinion of the forecaster. Or, more to the point, in the opinion of the forecaster’s computer.

I’ll discuss communication problems more generally in a later post, but here I want to suggest the possibility that the outputs of forecasts – specifically ensemble forecasts – are being misinterpreted, and not just poorly communicated.

Anyone wanting an accessible introduction to the issues of forecasting, communicating forecasts and ensemble forecasting in particular, could do a lot worse than view the recent Royal Society debate, Storms, floods and droughts: predicting and reporting adverse weather. It’s entertaining too – there’s a great rant (with which I can’t help agreeing) from an audience member on the way the BBC reports London’s weather.

Ensemble forecasting is when a computer weather (or climate) model is run repeatedly – say 50 times – for the same forecast, but with very slightly different initial conditions (e.g. atmospheric pressure and temperature at particular locations). The idea is to produce a range of forecasts, representing the likelihood of possible outcomes. Tim Palmer suggests during the Royal Society debate that the most famous UK forecast ever – the dismissal in 1987 by Michael Fish of the possibility of a hurricane in Southern England the evening before one occurred – would instead have been presented probabilistically. Had he had an ensemble of forecasts, Fish might have said that there was a 30% probability of a hurricane.
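By way of illustration only, and using the Lorenz-63 toy system rather than a real weather model (I obviously don’t have the ECMWF code to hand), the procedure amounts to something like this:

```python
import random

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """One Euler step of the Lorenz-63 system, a standard chaotic toy model."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def run_member(x, y, z, steps=2000):
    """Integrate one ensemble member forward and return its final x value."""
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x

random.seed(0)
N_RUNS = 50
base = (1.0, 1.0, 1.0)

# Perturb the initial conditions very slightly, once per member.
finals = [run_member(base[0] + random.gauss(0, 1e-3),
                     base[1] + random.gauss(0, 1e-3),
                     base[2] + random.gauss(0, 1e-3))
          for _ in range(N_RUNS)]

# The "forecast probability" is simply the fraction of members with x > 0.
print(f"{sum(x > 0 for x in finals) / N_RUNS:.0%} of runs end with x > 0")
```

It is the last step, reading 30% of members as a 30% real-world probability, that Tim Palmer’s suggestion relies on.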

I don’t agree with this.

If Fish had said there was a 30% probability of a hurricane he would have been guilty of confusing his computer model with reality.

All ensemble forecasters know is that a certain proportion of computer model runs produce a given outcome. This might help identify possible weather events, but doesn’t tell you real-world probabilities. If there is some factor that the computer model doesn’t take account of then running the computer model 50 times is in effect to make the same mistake 50 times.

March 2013 in the UK is the cold snap that just keeps on giving. The weather has defied forecasts. Specifically, it seemed just a few days ago that westerly air was going to break through before the end of March. Of course, this has a bearing on where March 2013 will rank among the all-time coldest, which I discussed in my previous post, but I’ll have to find time to revisit that subject in the next day or two.

The Weathercast site has made ensemble forecasts from the European Centre for Medium-range Weather Forecasts (ECMWF) available to the public. Here are some from over the last few days:

[Image: 130323 Heathrow actual weather slide 4]

I’ve lined them up so that the day each forecast is for appears in the same column across the forecasts as of 00:00 hours on 22nd, 24th and 25th March.

An ensemble that’s behaving itself should provide less of a spread of forecasts as we get closer to the forecast date. For example, the spread of the maximum temperature on Easter Day, 31st March narrows in the forecast from 25th March compared to that on 24th.

But now look at the coldest possible temperatures on 31st March. On 22nd hardly any runs predicted temperatures below 0C, and none below about -2C. By 25th most of the forecasts were for a frost on the morning of 31st, and many for a severe frost (-3C or so). This shouldn’t happen.

It seems that on 22nd nearly all the model runs predicted Atlantic air to break through by 31st; by 25th virtually none of them did.

Instead of simply fanning out more the further into the future the forecast looks, the ensemble output seems to shift systematically. Perhaps ensemble forecasts don’t solve all our problems. I suspect there are aspects of the climate system our computer models do not yet capture. There are things we do not yet know.

As we saw for the unexpected rainfall in 2012, ensemble forecasts can assign zero probability to extreme events that nevertheless occur, in this case the (most likely) second coldest March since the 19th century. And the whole point of ensemble forecasts is to predict extremes.

The forecast for 31st March is of more than passing interest, of course. It is no doubt of great importance to those who may be planning to take school kids on Duke of Edinburgh expeditions on Dartmoor or (since we’re talking about a London forecast) preparing for a traditional Boat Race on the Thames!

March 22, 2013

2013 UK Weather: Coldest March in 51, 44 or just 43 years?

Filed under: BBC, Global warming, Media, Science, Science and the media, UK climate trends — Tim Joslin @ 3:10 pm

I read in this morning’s Metro that “it looks certain to be the coldest March since 1962”. The Mail chips in with: “The appalling weather over the past few weeks is set to make this month the coldest March in 50 years.”

This puzzled me a little since I noted only on Wednesday (and published only yesterday) that:

“…we have to go back to 1970 for the most recent March with an average CET of less than 4C, when 3.7C was recorded. It’s possible this March could even beat that mark.

But March 1969 was even colder at 3.3C. I doubt the figure for this year will come out below that. Most likely the headlines will be ‘coldest March for 44 years’.”

This passage is a little garbled because I only said it was “possible” this March could be colder than the 3.7C in March 1970 in the Central England Temperature (CET) record, yet implied that it would when I said it was “[m]ost likely” to be the “coldest March for 44 years”. I think I probably meant to write “for 43 years”, and could have added “with a possibility of it being the coldest March for 44 years”.

Over the last few days the forecast for the rest of the month has certainly turned decidedly wintry, as I discussed in my previous post, and even more so since I wrote, but is it now “certain” this March will be colder than in 1969?

CET vs UK Average Temperature: Comments on a BBC Assessment

A possible reason for my less bold temperature prediction is that the CET will turn out differently from the UK as a whole or for different regions. I’ve been assuming the CET is representative of the UK as a whole, but that might not always be the case.

In the absence of a contribution on the topic on the Met Office’s official blog, perhaps the BBC is the most authoritative source on weather statistics, being less inclined to hyperbole than most of the print media.

A post by John Hammond of the BBC suggests some slight differences between the UK figure and the CET. I assume Hammond is using the UK temperature because his figures are lower than the CET equivalents (and his figures tally with graphs available on the Met Office site – see below). He writes that:

“So far, March 2013 has been colder than both this winter’s December and January. The average temperature (day and night combined) for the UK this March to date is currently around 3C. It should normally be nearer 6C.”

The claim that March 2013 has “[s]o far” been colder than January is not true for the CET:

[Image: 130322 March 2013 in the CET An Unusually Long UK Winter slide 3]

since January averaged 3.5C and so far March is 3.6C in the CET (rather than “around 3C”). The 3.6C figure for March was published in the last hour (as I type, rather than as I publish) – we’ll come back to the fact that the March figure has actually come down from 3.7C when Hammond was writing on Wednesday or Thursday.

Hammond goes on to say:

“…the coldest March on record was in 1962, when the mean temperature staggered to just 1.9C. That record will not be broken this year, but the more recent cold March of 1987 looks under threat – its mean temperature was 3.3C.”

not mentioning 1969 and 1970, for which the CET March temperatures fall between those in 1962 and 1987. This is a little odd since the same is true for the UK as a whole, which is what Hammond’s data seems to relate to. To admit my ignorance, I don’t know how to access the actual UK figures (maybe I should ask the Met Office), but I do know how to plot them graphically from a handy Met Office page:

[Image: 130322 March 2013 in the CET An Unusually Long UK Winter slide 4]

Note the cold Marches in 1969 and 1970, which Hammond doesn’t remark on. Maybe that’s an oversight.

So, will it be colder this March than in 1969?

Since the media are so sure March 2013 will be the coldest “for 50 years” (meaning 51), let’s have another look.

The first point to note is that the average so far this month, up to and including the 21st, is 3.6C. As I mentioned, it was 3.7C up to both the 19th and the 20th. In terms of record-breaking, this could be enough to make a difference.

The second point is arithmetical. If one of the remaining 10 days this month (22nd-31st) is 2-3C below the current average (3.6C), the mean for the month will decrease by about 0.1C; if a day is 2-3C above the average so far, the mean will increase by about 0.1C. This is admittedly a crude reckoning system, but simple and effective.
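To make that reckoning concrete, here’s a quick check (my own back-of-the-envelope sketch, using the figures just quoted):

```python
# A quick check of the crude reckoning, using the figures above:
# 21 days so far averaging 3.6C, 10 days of March remaining.
days_so_far, mean_so_far, days_in_month = 21, 3.6, 31

def monthly_mean(remaining_temps):
    """Final monthly mean given the temperatures of the remaining days."""
    total = days_so_far * mean_so_far + sum(remaining_temps)
    return total / (days_so_far + len(remaining_temps))

# Each remaining day that comes in d degrees below the running average
# pulls the final 31-day mean down by roughly d/31:
for d in (2.0, 2.5, 3.0):
    print(f"A day {d}C below the average moves the monthly mean by about "
          f"{d / days_in_month:.2f}C")

# Example: five days roughly 3C below the running average, five days at it.
cold_spell = [mean_so_far - 3.0] * 5 + [mean_so_far] * 5
print(f"Monthly mean after such a cold spell: {monthly_mean(cold_spell):.1f}C")
```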

Third, taking the Heathrow temperature as typical of the Central England region as a whole, the medium term forecast has been deteriorating. This was the main theme of my previous post – perhaps I didn’t take enough account of it when discussing the monthly record temperatures.

This is what the forecasts looked like a couple of days ago (thanks Weathercast):

[Image: 130322 March 2013 in the CET An Unusually Long UK Winter slide 1 v2]

and this is the latest graph:

[Image: 130322 March 2013 in the CET An Unusually Long UK Winter slide 2]

The forecasts do seem to have deteriorated even further than I discussed in my previous post.

Ignoring today (22nd) as transitional, it now looks likely that the 5 days 23rd through 27th March will be seriously cold, so let’s knock 0.1C off the monthly average for each of them. That gets us down to 3.1C.

The 28th will most likely be around the new average (3.1C), so it all depends on when the mild air comes in from the Atlantic. The computer model runs (grey lines) differ, but the average (yellow line) for the 30th and 31st is for it to be relatively mild. If that’s the case, then we’d need to add 0.1C back on for each of those days, taking us to about 3.3C, roughly equal to 1969.

The grey lines represent an ensemble of forecasts, each, I assume, less precise than those generating the published maps. If we go on the basis of the main model runs (the red and blue lines), of which the ECMWF (red line) seems to me to have been best at predicting the “battle” between cold easterly and mild westerly air this winter (as in fact exemplified by the pair of Weathercast graphs above), then it looks fairly cold through almost to the end of the month.

The balance of probabilities does seem now to suggest that March 2013 will be the coldest in the CET since 1962. Most likely, the Metro, the Mail and John Hammond of the BBC will be proved right.

It might be worth noting that if the CET this month is lower than not just 3.3C, but the 3.2C recorded in both 1917 and 1955, it will not only be the coldest March in the series since 1962, but the second coldest since the 19th century. This is where the CET is useful – it gives a longer historical perspective than the UK figures, which only go back to 1910.

I suppose it’s not outside the bounds of possibility that we’ll beat that 1962 figure of a March CET of 2.8C, but that remains very unlikely.

All this may seem nit-picking, but if we’re going to make claims about increased frequency of weather extremes – and policy based on those claims – it’s essential to be clear what the data is telling us.
