Uncharted Territory

February 5, 2016

Does (Brown and Batygin’s) Planet 9 (or Planet X) Exist?

Filed under: Media, Orbital dynamics, Physics, Science and the media — Tim Joslin @ 7:23 pm

What exactly is the evidence that there may be a “Super-Earth” lurking in the outer reaches of the Solar System?  Accounts differ, so I’ll review what I’ve read (ignoring the mainstream media storm around 20th January!), to try to minimise confusion.

New Scientist

If you read your New Scientist a couple of weeks ago, you’ll probably have seen the cover-story feature article Last Great Mysteries of the Solar System, one of which was Is There a Planet X? [paywall for full article – if, that is, unlike me, you can even get your subscription number to give you access].  The article discussed the dwarf planets Sedna and 2012VP113.  According to New Scientist and the leaders of the teams that discovered Sedna and 2012VP113 – Mike Brown and Scott Sheppard, respectively – the orbits of these planetoids, and of another 10 or so not quite so distant bodies, could indicate “there is something else out there”.

Apparently, says NS:

“[the orbits of Sedna and 2012VP113] can’t be explained by our current understanding of the solar system…  Elliptical orbits happen when one celestial object is pushed around by the gravity of another.  But both Sedna and 2012VP113 are too far away from the solar system’s giants – Jupiter, Saturn, Uranus and Neptune – to be influenced.  Something else must be stirring the pot.”

“Elliptical orbits happen when one celestial object is pushed around by the gravity of another.”  This is nonsense.  Elliptical orbits are quite usual beyond the 8 planets (i.e. for “trans-Neptunian objects”), which is the region we’re talking about.  The fact that the orbits of Sedna and 2012VP113 are elliptical is not why there may be another decent-sized planet way out beyond Neptune (and little Pluto).

I see that the online version of New Scientist’s article Is There a Planet X? has a strap-line:

“Wobbles in the orbit of two distant dwarf planets are reviving the idea of a planet hidden in our outer solar system.”

Guess what?  The supposed evidence for Planet X is nothing to do with “wobbles” either.

The New Scientist article was one of several near-simultaneous publications and in fact the online version was updated, the same day, 20th January, with a note:

Update, 20 January: Mike Brown and Konstantin Batygin say that they have found evidence of “Planet Nine” from its effect on other bodies orbiting far from the sun.

Exciting.  Or it would have been, had I not been reading the print version.  The link is to another New Scientist article: Hints that ‘Planet Nine’ may exist on edge of our solar system [no paywall]. “Planet Nine”?  It was “Planet X” a minute ago.

Referencing the latest paper on the subject, by Brown and Batygin, this new online NS article notes that:

“Brown and others have continued to explore the Kuiper belt and have discovered many small bodies. One called 2012 VP113, which was discovered in 2014, raised the possibility of a large, distant planet, after astronomers realised its orbit was strangely aligned with a group of other objects. Now Brown and Batygin have studied these orbits in detail and found that six follow elliptical orbits that point in the same direction and are similarly tilted away from the plane of the solar system.

‘It’s almost like having six hands on a clock all moving at different rates, and when you happen to look up, they’re all in exactly the same place,’ said Brown in a press release announcing the discovery. The odds of it happening randomly are just 0.007 per cent. ‘So we thought something else must be shaping these orbits.’

According to the pair’s simulations, that something is a planet that orbits on the opposite side of the sun to the six smaller bodies. Gravitational resonance between this Planet Nine and the rest keep everything in order. The planet’s high, elongated orbit keeps it at least 200 times further away from the sun than Earth, and it would take between 10,000 and 20,000 Earth years just to complete a single orbit.”

Brown and Batygin claim various similarities in the orbits of the trans-Neptunian objects.  But they don’t stress what initially sparked the idea that “Planet Nine” might be influencing them.
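As an aside, it’s worth seeing how small the quoted 0.007 per cent really is.  Here’s a rough Monte Carlo sketch of the flavour of such an estimate – my own assumptions (six uniformly random “clock hands”, all required to fall within a single 94˚ arc), not a reconstruction of Brown and Batygin’s actual calculation:

```python
import numpy as np

rng = np.random.default_rng(42)

def all_within_arc(angles_deg, arc=94.0):
    """True if all the angles fit inside some arc of the given width (degrees)."""
    a = np.sort(angles_deg)
    # Consecutive gaps, including the wrap-around gap from last back to first.
    gaps = np.diff(np.concatenate([a, [a[0] + 360.0]]))
    # The points all fit in an arc of width 360 minus the largest gap.
    return 360.0 - gaps.max() <= arc

trials = 200_000
hits = sum(all_within_arc(rng.uniform(0.0, 360.0, 6)) for _ in range(trials))
print(f"fraction of random draws as clustered: {hits / trials:.4%}")
# Comes out near 0.7% with these assumptions; the quoted 0.007 per cent is
# two orders of magnitude more stringent, presumably because more than one
# property of the orbits has to line up at once.
```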

Scientific American and The Argument of Perihelion

Luckily, by the time I saw the 23rd January New Scientist, I’d already read The Search for Planet X [paywall again, sorry] cover story in the February 2016 (who says time travel is impossible?) issue of Scientific American, so I knew that – at least prior to the Brown and Batygin paper – what was considered most significant about the trans-Neptunian objects was that they all had similar arguments of perihelion (AOPs), specifically around 0˚.  That is, they cross the plane of the planets at roughly the same time as they are closest to the Sun (perihelion).  The 8 planets (sorry, Pluto) orbit roughly in a similar plane; these more distant objects are somewhat more inclined to that plane.

Scientific American reports the findings by two groups of researchers, citing a paper by each.  One is a letter to Nature, titled A Sedna-like body with a perihelion of 80 astronomical units, by Chadwick Trujillo and Scott Sheppard [serious paywall, sorry], which announced the discovery of 2012 VP113 and arguably started the whole Planet X/9/Nine furore.  They quote Sheppard: “Normally, you would expect the arguments of perihelion to have been randomized over the life of the solar system.”

To cut to the chase, I think that is a suspect assumption.  I think there may be reasons for AOPs of bodies in inclined orbits to tend towards 0˚, exactly as observed.

The Scientific Papers

The fact that the argument of perihelion is key to the “evidence” for Planet X is clear from the three peer-reviewed papers mentioned so far.

Trujillo and Sheppard [paywall, still] say that:

“By numerically simulating the known mass in the solar system on the inner Oort cloud objects, we confirmed that [they] should have random ω [i.e. AOP]… This suggests that a massive outer Solar System perturber may exist and [sic, meaning “which”, perhaps] restricts ω for the inner Oort cloud objects.”

The Abstract of the other paper referenced by Scientific American, Extreme trans-Neptunian objects and the Kozai mechanism: signalling the presence of the trans-Plutonian planets, by C and R de la Fuente Marcos, begins:

“The existence of an outer planet beyond Pluto has been a matter of debate for decades and the recent discovery of 2012 VP113 has just revived the interest for this controversial topic. This Sedna-like object has the most distant perihelion of any known minor planet and the value of its argument of perihelion is close to 0 degrees. This property appears to be shared by almost all known asteroids with semimajor axis greater than 150 au and perihelion greater than 30 au (the extreme trans-Neptunian objects or ETNOs), and this fact has been interpreted as evidence for the existence of a super-Earth at 250 au.”

And the recent paper by Konstantin Batygin and Michael E Brown, Evidence for a Distant Giant Planet in the Solar System, starts:

“Recent analyses have shown that distant orbits within the scattered disk population of the Kuiper Belt exhibit an unexpected clustering in their respective arguments of perihelion. While several hypotheses have been put forward to explain this alignment, to date, a theoretical model that can successfully account for the observations remains elusive.”

So, whilst Batygin and Brown claim other similarities in the orbits of the trans-Neptunian objects, the key peculiarity is the alignment of AOPs around 0˚.

Is There a Simpler Explanation for ~0˚ AOPs?

Let’s consider first why the planets orbit in approximately the same plane, and why the Galaxy is also fairly flat.  The key is the conservation of angular momentum.  The overall rotation within a system about its centre of gravity must be conserved.  Furthermore, this rotation must be in a single plane.  Any orbits above and below that plane will eventually cancel each other out, through collisions (as in Saturn’s rings) and/or gravitational interactions (as when an elliptical galaxy gradually becomes a spiral galaxy).  Here’s an entertaining explanation of what happens.

This process is still in progress for the trans-Neptunian objects, I suggest, since they are inclined by up to around 30˚ – Sedna’s inclination is 11.9˚ for example – which is much more than the planets, which are all inclined within a few degrees of the plane of the Solar System.  What’s happening is that the TNOs are all being pulled constantly towards the plane of the Solar System, as I’ve tried to show in this schematic:

[Figure: schematic of trans-Neptunian objects being pulled towards the plane of the Solar System]

Now, here comes the key point: because the mass of the Solar System is spread out, albeit only by a small amount – there are planets, not just a Sun – the gravitational pull on each TNO is greater when it is nearer the Sun (closer to perihelion) than when it is further away. There’s more of a tendency for the TNO (or any eccentrically orbiting body) to gravitate towards the plane of the system when it’s nearer perihelion.

This is true, I believe, even after allowing for Kepler’s 2nd Law, i.e. that the TNO spends less time closer to the Sun.  Kepler’s 2nd Law suggests the time an orbiting body spends at a certain distance from the centre of gravity of the system is proportional to the square of that distance, which you’d think might cancel out the inverse square law of gravity.  But the mass of the Solar System is not all at the centre of gravity.  The nearest approach of Neptune to Sedna, for example, is around 46AU (astronomical units; 1AU is the radius of Earth’s orbit) when Sedna is at perihelion, but about 476AU when Sedna is at aphelion.
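Here’s the back-of-envelope version of that argument as a sketch, using the distances above (I’ve assumed Sedna at roughly 76AU from the Sun at perihelion and 506AU at aphelion, consistent with the quoted 46AU and 476AU nearest approaches of Neptune, itself at about 30AU):

```python
# Weight Neptune's inverse-square pull by Kepler's r^2 time factor at the
# two extremes of Sedna's orbit.  If the two effects cancelled, the
# products would be equal; they aren't.
r_peri, d_peri = 76.0, 46.0     # Sedna-Sun and Sedna-Neptune distances (AU)
r_aph,  d_aph  = 506.0, 476.0   # the same at aphelion, per the text

tug_peri = r_peri**2 / d_peri**2   # time spent ~ r^2, pull ~ 1/d^2
tug_aph  = r_aph**2  / d_aph**2
print(f"perihelion: {tug_peri:.2f}, aphelion: {tug_aph:.2f}")
# ~2.73 vs ~1.13: on this crude measure the time-weighted tug towards the
# plane is more than twice as strong near perihelion.
```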

The most stable orbit for a TNO is therefore when it crosses the plane of the Solar System at perihelion, that is, when its argument of perihelion (AOP) is 0˚.  Over many millions of years the AOPs of the orbits of Sedna and co. have therefore tended to approach 0˚.

I suggest it is not necessary to invoke a “Super-Earth” to explain the peculiarly aligned arguments of perihelion of the trans-Neptunian objects.

January 23, 2016

Greater Interannual Seasonal Temperature Variability in a Warming World?

Filed under: Agriculture, Global climate trends, Global warming, Science, UK climate trends — Tim Joslin @ 5:42 pm

You attempt to use the correct scientific jargon and then realise that sometimes the English language is insufficiently precise.  What I mean by the title is the important question of whether, as global warming proceeds, we will see a greater variation between summers, winters, springs and autumns from year to year.  Or not.

My previous two posts used Central England Temperature (CET) record data to show how exceptional the temperatures were in December in 2010 (cold) and 2015 (warm) and highlighted two other recent exceptional months: March 2013 (cold) and April 2011 (warm).  I speculated that perhaps, relative to current mean temperatures for a given period, in these examples a calendar month, both hot and cold extreme weather conditions are becoming more extreme.

What prompted today’s follow-up post was an update from the venerable James Hansen, Global Temperature in 2015, to which a link appeared in my Inbox a few days ago.  This short paper documents how 2015 was by a fair margin globally the warmest year on record.  But it also includes a very interesting figure which seems to show increasingly greater variability in Northern Hemisphere summers and winters:

[Figure: from Hansen’s Global Temperature in 2015, distributions of Northern Hemisphere summer and winter temperature anomalies, the bell curves widening over successive decades]

I’ve added a bit of annotation to emphasise that the bell curves for both summer and winter have widened and flattened. That is, not only have the mean summer and winter temperatures increased, so has the variance or standard deviation, to use the technical terms.

If true, this would be very concerning. If you’re trying to grow food and stuff, for example, it means you have to worry about a greater range of possible conditions from year to year than before, not just that it’s getting warmer.
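To illustrate why the width of the bell curve matters, and not just its position, here’s a sketch with made-up numbers (anomalies in units of the baseline standard deviation; the 0.6 shift and 30% widening are purely illustrative):

```python
from statistics import NormalDist

base    = NormalDist(mu=0.0, sigma=1.0)  # baseline climate
shifted = NormalDist(mu=0.6, sigma=1.0)  # warmed mean, same variability
widened = NormalDist(mu=0.6, sigma=1.3)  # warmed mean AND wider bell

threshold = 3.0  # an old-fashioned "3-sigma" hot summer
for name, d in [("baseline", base), ("mean shift only", shifted),
                ("shift + wider bell", widened)]:
    print(f"{name:>20}: P(anomaly > 3) = {1 - d.cdf(threshold):.3%}")

# The wider bell also keeps the cold tail fatter than a pure mean shift:
print(f"P(anomaly < -2): shift only {shifted.cdf(-2):.3%}, "
      f"wider {widened.cdf(-2):.3%}")
```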

I was about to suggest it might be time to panic. But then it occurred to me that there must surely have been some debate about this issue. And sure enough Google reveals that Hansen has written about variability before, and more explicitly, such as in a paper in 2012, titled Perception of climate change, which is free to download.  Hansen et al note “greater temperature variability in 1981-2010” compared to 1951-80.

Trouble is, Hansen et al (2012) was vigorously rebutted by a couple of Harvard boffs.  Andrew Rhines and Peter Huybers wrote to the Proceedings of the National Academy of Sciences, where Hansen et al had published their paper, claiming that Frequent summer temperature extremes reflect changes in the mean, not the variance [my stress].  They attributed Hansen’s flattening bell curves to various statistical effects and asserted that mean summer and winter temperatures had increased, but not the standard deviation, and therefore not the probability of relative extremes.

That left me pretty flummoxed, especially when I found that, in Nature, another bunch of eminent climate scientists also claimed, No increase in global temperature variability despite changing regional patterns (Huntingford et al, Nature 500, p327–330, 15 August 2013).

Just so we’re clear, what the guys are saying is that as global warming proceeds – not even when we reach some kind of steady state – temperatures will just on average be shifted up by a certain amount.

I have to say I find this very difficult to believe, and indeed incompatible with the fact that some parts of the world (continental interiors, say) warm faster than others (deep oceans) and the observation that the wind blows in different directions at different times!

Furthermore we’ve just seen, between Decembers 2010 and 2015 in the CET record, a much greater spread of temperatures than in any comparable period (actually in any period, period, but we’re concerned here with variability over a few years – less than a decade or two, say – when the climate has had little time to change) in the previous 350 years.  I take the liberty of reproducing the graph from my previous post:

[Figure: deviation of each December’s CET mean from the centred 21-year running mean]

December 2015 was 10C warmer than December 2010, a spread 2C greater than the range between December temperatures in any other era.

And I also recollect figures like this one, showing the freakishness of summer 2003 in Switzerland, where, like the UK, there is a long history of weather records:

[Figure: distribution of Swiss summer temperatures, showing 2003 as an extreme outlier, from Schär et al, 2004]

This appears on the Climate Communication site, which shies away from any mention of increased variability.  But the original Nature paper in which it appeared, Schär et al, 2004 is very clear, and is even titled The role of increasing temperature variability in European summer heatwaves. The synopsis (which is all I can access – pay-wall) notes that:

Instrumental observations and reconstructions of global and hemispheric temperature evolution reveal a pronounced warming during the past approx 150 years. One expression of this warming is the observed increase in the occurrence of heatwaves. Conceptually this increase is understood as a shift of the statistical distribution towards warmer temperatures, while changes in the width of the distribution are often considered small. Here we show that this framework fails to explain the record-breaking central European summer temperatures in 2003, although it is consistent with observations from previous years. We find that an event like that of summer 2003 is statistically extremely unlikely, even when the observed warming is taken into account. We propose that a regime with an increased variability of temperatures (in addition to increases in mean temperature) may be able to account for summer 2003. To test this proposal, we simulate possible future European climate with a regional climate model in a scenario with increased atmospheric greenhouse-gas concentrations, and find that temperature variability increases by up to 100%, with maximum changes in central and eastern Europe. [My stress].

Hmm. Contradictory findings, scientific debate.

My money’s on an increase in variability. I’ll keep an eye on that CET data.

January 19, 2016

Two More Extreme UK Months: March 2013 and April 2011

Filed under: Effects, Global warming, Science, Sea ice, Snow cover, UK climate trends — Tim Joslin @ 7:17 pm

My previous post showed how December 2015 was not only the mildest on record in the Central England Temperature (CET) record, but also the mildest compared to recent and succeeding years, that is, compared to the 21 year running mean December temperature (though I had to extrapolate the 21-year running mean forward).

December 2010, though not quite the coldest UK December in the CET data, was the coldest compared to the running 21 year mean.

I speculated that global warming might lead to a greater range of temperatures, at least until the planet reaches thermal equilibrium, which could be some time – thousands of years, maybe.  The atmosphere over land responds rapidly to greenhouse gases. But there is a lag before the oceans warm because of the thermal inertia of all that water. One might even speculate that the seas will never warm as much as the land, but we’ll discuss that another time. So in UK summers we might expect the hottest months – when a continental influence dominates – to be much hotter than before, whereas the more usual changeable months – when maritime influences come into play – to be not much hotter than before.

The story in winter is somewhat different.  Even in a warmer world, frozen water (and land) will radiate away heat in winter until it reaches nearly as cold a temperature as before, because what eventually stops it radiating heat away is the insulation provided by ice, not the atmosphere.  So the coldest winter months – when UK weather is influenced by the Arctic and the Continent – will be nearly as cold as before global warming.   This will also slow the increase in monthly mean temperatures.  Months dominated by tropical influences on the UK will therefore be warmer, compared to the mean, than before global warming.

If this hypothesis is correct, then it would obviously affect other months as well as December.  So I looked for other recent extreme months in the CET record.  It turns out that the other recent extreme months have been in late winter or early spring.

Regular readers will recall that I wrote about March 2013, the coldest in more than a century, at the time, and noted that the month was colder than any previous March compared to the running mean.  I don’t know why I didn’t produce a graph back then, but here it is:

[Figure: deviation of each March’s CET mean from the centred 21-year running mean]

Just as December 2010 was not quite the coldest December on record, March 2013 was not the coldest March, just the coldest since 1892, as I reported at the time.  It was, though, the coldest in the CET record compared to the 21-year running mean, 3.89C below, compared to 3.85C in 1785.  And because I’ve had to extrapolate, the difference will increase if the average for Marches 2016-2023 (the ones I’ve had to assume) is greater than the current 21-year mean (for 1995-2015), which is more than half likely, since the planet is warming, on average.

We’re talking about freak years, so it’s surprising to find yet another one in the 2010s.  April 2011 was, by some margin, the warmest April on record, and the warmest compared to the 21-year running mean:

[Figure: deviation of each April’s CET mean from the centred 21-year running mean]

The mean temperature in April 2011 was 11.8C.  The next highest was only 4 years earlier, 11.2C in 2007.  The record for the previous 348 years of CET data was 142 years earlier, in 1865, at 10.6C.

On our measure of freakishness – deviation from the 21-year running mean – April 2011, at 2.82C, was comfortably more freakish than 1893 (2.58C), which fell in a period of cooler Aprils (1865, recall, was the warmest April before the global warming era).  The difference between 2.82C and 2.58C is unlikely to be eroded entirely when the data for 2016-2021 is included in place of my extrapolation.  It’s possible, but for that to happen April temperatures for the next 6 years would need to average around 10C to affect the running mean sufficiently – the warmth of the Aprils in the period including 2007 and 2011 would need to be repeated.

So, of the 12 months of the year, the most freakishly cold for two of them, December and March, have occurred in the last 6 years, and so have the most freakishly warm for two of them, December and April. The CET record is over 350 years long, so we’d expect a most freakishly warm or cold month to have occurred approximately once every 15 years (360 years divided by 24 records).  In any given 6 years, then, we’d have expected less than a 50% chance of even a single freakishly extreme monthly temperature.

According to the CET record, then, we’ve had more than 8 times as many freakishly extreme cold or warm months in the last 6 years as would have been expected had they occurred randomly since 1659.
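For what it’s worth, here’s that arithmetic as a sketch, treating freakish months as randomly scattered through the record (a Poisson assumption of mine, and one that ignores the fact that I picked this 6-year window precisely because it looked extreme):

```python
from math import exp, factorial

records = 24     # a most-freakishly-warm and -cold record for each of 12 months
years = 360      # CET record length, as rounded in the text
window = 6       # the recent window
observed = 4     # Dec 2010, Apr 2011, Mar 2013, Dec 2015

expected = records * window / years  # 0.4 expected in any 6-year window
print(f"expected: {expected:.1f}; observed/expected: {observed / expected:.0f}x")

# Chance of 4 or more falling in a given 6-year window, if truly random:
p = 1 - sum(exp(-expected) * expected**k / factorial(k) for k in range(4))
print(f"P(4 or more) = {p:.5f}")  # about 0.0008
```

On these numbers the ratio is actually 10 times, so “more than 8 times” is, if anything, conservative.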

And I bet we get more freakishly extreme cold or warm months over the next 6 years, too.


January 14, 2016

Just How Exceptionally Mild Was December 2015 in the UK?

Filed under: Global warming, Science, Sea ice, UK climate trends — Tim Joslin @ 5:24 pm

“Very” is the answer, based on the 350+ year long Central England Temperature (CET) record.  Here’s a graph of all the CET December temperatures since 1659:

[Figure: CET December mean temperatures since 1659, with 5, 11 and 21-year running means]

As is readily apparent from the graph, the mean temperature of 9.7C in December 2015 was much higher than in any previous year.  In fact, only twice before had the average exceeded 8C.  Decembers 1934 and 1974 were previously tied as the mildest, at 8.1C.

But how much was the recent mild weather due to global warming and how much to normal variability? Apart from anything else the mild spell has to coincide with a calendar month to show up in this particular dataset.  And it so happened that the weather turned cooler even as the champagne corks were in the air to celebrate the end of 2015.

To help untangle trends from freak events, I’ve included some running means on the graph above.  The green line shows the mean December temperature over 5 year periods.  For example, thanks in large part to December 2015, the 5 Decembers from 2011 to 2015 are the mildest run of 5 in the record, though other periods have come close.

The red and black lines show 11 and 21 year running means, respectively.  The black line therefore represents the long-term trend of December temperatures.  These are close to the highest they’ve ever been, though in some periods, such as around the start of the 19th century, the average December has been as much as 2C colder than it is now.  Perhaps some exceptionally mild Decembers back then – such as 1806 – were as unusual for the period as December 2015 was compared to today’s Decembers.

I therefore had the idea to plot the deviation of each December from the 21 year mean centred on that year, represented by the black line on the graph above.  If you like, I’ve simply subtracted the black line from the blue line.

A health warning is necessary.  I’ve had to extrapolate the 21 year mean, since we don’t yet know what weather the next 10 Decembers (2016 to 2025) will bring.  We’ll have to wait until 2025 to see precisely how unusual December 2015 will prove to have been.  In the meantime, I’ve set the mean temperature for 2016 through 2025 to the last 21 year mean (i.e. the one for the years 1995 through 2015).
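For anyone who wants to reproduce this, here’s a minimal numpy sketch of the procedure (my own code, not the spreadsheet the analysis was actually done in; dec_temps is assumed to be the December CET series in year order):

```python
import numpy as np

def anomalies_vs_running_mean(temps, window=21):
    """Deviation of each value from the centred running mean, extrapolating
    forward by holding the last available mean, as described above."""
    temps = np.asarray(temps, dtype=float)
    half = window // 2
    # Stand-in for the unknown future years (2016-2025): the mean of the
    # last `window` years of actual data.
    pad = np.full(half, temps[-window:].mean())
    padded = np.concatenate([temps, pad])
    running = np.convolve(padded, np.ones(window) / window, mode="valid")
    # running[i] is centred on temps[i + half]; the first `half` years have
    # no centred mean and are dropped.
    return temps[half:] - running

# e.g. dev = anomalies_vs_running_mean(dec_temps); dev[-1] is December 2015
```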

With that proviso, here’s what we get:

[Figure: deviation of each December’s CET mean from the centred 21-year running mean]

The green line now shows the difference between the mean December temperature for a given year and the mean December temperature for the 21 years including the 10 before and the 10 after the given year.

We can see that December 2015 was, at 4.91C above the 21-year mean, milder relative to contemporary Decembers than any other December, with the proviso that I’ve not been able to take Decembers after 2015 into account.

The next most freakish December was the aforementioned 1806 which was 3.86C warmer than the mean of Decembers 1796 through 1816.

What’s going on? Is it just weather – something to do with the ongoing El Nino, perhaps – or is something else afoot?

One hypothesis might be that, with the climate out of equilibrium due to global warming, greater variability is possible than before. Our weather in 2015 may have been driven by a heat buildup somewhere (presumably in the ocean) due to global warming. On average this perhaps doesn’t happen – we may suppose our weather to be often determined by regions of the planet where the temperature hasn’t changed much, at least at the relevant time of year. Specifically, the Greenland ice-sheet hasn’t had time to melt yet.

It won’t have escaped the notice of my eagle-eyed readers that the graph above also shows 2010 to be the most freakishly cold December in the entire CET record.

Perhaps, until the ice-sheets melt, the deep oceans warm and the planet reaches thermal equilibrium, we’ll find that when it’s cold it’s just as cold as it used to be, but when it’s warm it’s a lot warmer than it used to be.   Just a thought.

It might be worth mentioning a couple of other, not necessarily exclusive, possibilities:

  • Maybe the situation will continue even when the planet is in thermal equilibrium.  Maybe, for example, assuming there is some limit to global warming and the Arctic seas still freeze in winter, we’ll still get cold weather in winter just or nearly as cold as it ever was, but we’ll get much warmer weather when there’s a tropical influence.
  • It could be that weather patterns are affected by global warming, especially through the later freezing of Arctic ice.

Or December 2015 could just have been a freak weather event.

September 16, 2015

Will Osborne’s UK National Living Wage Really Cost 60,000 Jobs?

Filed under: Economics, Inequality, Minimum wage, Unemployment — Tim Joslin @ 7:25 pm

It’s pretty dismal if you’re left-leaning in the UK right now.  Not only did Labour lose the election catastrophically and – adding to the shock – much more badly than implied by the polls, they’ve now gone nuts and elected a leader who can’t win, and who, even if he could, advocates policies that belong in the 1970s.  Meanwhile Osborne is implementing a policy Labour should have been pushing during the election campaign, namely what is in effect a higher minimum wage, his so-called National Living Wage (NLW) for over 25s.  Of course, Osborne’s overall package is disastrous for many of the poorest households, who will be worse off even with the NLW because of simultaneous cuts to tax credits.

If you’re following the debate around the NLW – for example as expertly hosted by the Resolution Foundation – it’s clear that the Big Question is how much effect the NLW (and increased minimum wages in general) is likely to have on (un)employment.  Now, based on logical argument (that being my favoured modus operandi), and, of course, because my philosophy is to question everything, I am highly sceptical of the mainstream line of reasoning that labour behaves like paper-clips: put up the price of paper-clips and you’ll sell fewer; put up the price of labour and unemployment will rise, is the gist of it.  But this ignores the fact that increasing wages itself creates demand.  More on this later.

Much as I believe in the power of reasoned argument, I nevertheless recognise that it’s a good idea to first look at the strengths and weaknesses of the opposing position.  In this post I therefore want to focus on the meme that Osborne’s NLW will cost 60,000 jobs.  How well-founded is this estimate?  You’ll see it quoted frequently, for example, by the Resolution Foundation and on the Institute for Fiscal Studies’ (IFS) website and no doubt in mainstream media reports.  The original source is the Office for Budget Responsibility.  As far as I can tell the 60,000 figure first appeared in a report, Summer budget 2015 policy measures (pdf), which was issued around the time of Osborne’s “emergency” budget in July (the “emergency” being that the Tories had won a majority), when he bombshelled the NLW announcement.

So, I asked myself, being keen to get right to the bottom of things, where did the OBR boffs get their 60,000 estimate from?  Well, what they did was make a couple of assumptions (Annex B, para 17 on p.204), the key one being:

“…an elasticity of demand for labour of -0.4… This means total hours worked fall by 0.4 per cent for every 1.0 per cent increase in wages;”

They stuck this into their computer, together with the assumption that “half the effect on total hours will come through employment and half through average hours” and out popped the 60,000 figure.

But where does this figure of -0.4 come from?  They explain in Annex B.20:

“The elasticity of demand we have assumed lies within a relatively wide spectrum of empirical estimates, including the low-to-high range of -0.15 to -0.75 in Hamermesh (1991). This is a key assumption, with the overall effects moving linearly with it.”

The Hamermesh reference is given in footnote 3 on p.205, together with another paper:

“Hamermesh (1991), “Labour demand: What do we know? What don’t we know?”; Loeffler, Peichl, Siegloch (2014), “The own-wage elasticity of labor demand: A meta-regression analysis”, present a median estimate of -0.39, within a range of -0.072 to -0.446.” (my emphasis)

Evidently Hamermesh is the go-to guy for the elasticity of demand for “labor”.  So I thought I’d have a look at how Hamermesh’s figure was arrived at.

I hope you’ve read this far, because this is where matters start to become a little curious.

Both papers referred to in footnote 3 are available online.  Here’s what Hamermesh actually wrote (it’s a screen print since the document was evidently scanned in to produce the pdf Google found for me):

[Image: excerpt from Hamermesh (1991), best-guess elasticity underlined]

So what our guru is actually saying is that although the demand elasticity figure is between -0.15 and -0.75, as assumed by the OBR, his best guess – underlined, when that was not a trivial matter, necessitating sophisticated typewriter operation – was actually -0.3.

So why didn’t the OBR use the figure of -0.3?

Perhaps the answer is to do with the -0.39 they quote from the Loeffler, Peichl and Siegloch paper (pdf).  But this is what those guys actually wrote:

“Overall, our results suggest that there is not one unique value for the own-wage elasticity of labor demand; rather, heterogeneity matters with respect to several dimensions. Our preferred estimate in terms of specification – the long-run, constant-output elasticity obtained from a structural-form model using administrative panel data at the firm level for the latest mean year of observation, with mean characteristics on all other variables and corrected for publication selection bias – is -0.246, bracketed by the interval [-0.072;-0.446]. Compared to this interval, we note that (i) many estimates of the own-wage elasticity of labor demand given in the literature are upwardly inflated (with a mean value larger than -0.5 in absolute terms) and (ii) our preferred estimate is close to the best guess provided by Hamermesh (1993), albeit with our confidence interval for values of the elasticity being smaller.” (my emphasis)

Yep, the Germanically named guys from Germany came up with a figure of -0.246, not the -0.39 in the OBR’s footnote 3.  The OBR’s -0.39 is a rogue figure.  It must be some kind of typographical error, since they correctly quote the possible range (-0.072 to -0.446) for the demand elasticity.  Bizarre, frankly.

It’s even more mysterious when you consider that the OBR would surely have used the elasticity of demand for labour previously.

Based on the sources they refer to it seems the OBR should have plugged -0.3 at most into their model, not -0.4.  This would have given a significantly lower estimate of the increase in unemployment attributable to the introduction of the NLW, that is, roughly 45,000 rather than 60,000.
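Since the OBR themselves say the overall effect moves linearly with the elasticity, the rescaling is trivial.  A sketch, using the alternative elasticities discussed above:

```python
obr_jobs = 60_000      # OBR headline estimate
obr_elasticity = -0.4  # their assumed own-wage elasticity of labour demand

for label, e in [("Hamermesh best guess", -0.3),
                 ("Loeffler et al preferred", -0.246)]:
    rescaled = obr_jobs * e / obr_elasticity
    print(f"{label} ({e}): ~{rescaled:,.0f} jobs")
# ~45,000 and ~36,900 respectively - both well below the much-quoted 60,000.
```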

Why does this matter?  It matters because the idea that a higher minimum wage will increase unemployment is one of the main arguments against it, frequently cited by those opposed to fair wages and giving pause to those in favour.  Here, for example, is what Allister Heath wrote recently in a piece entitled How the new national living wage will kill jobs in the Telegraph:

“…it is clear that George Osborne’s massive hike in the minimum wage will exclude a significant number of people from the world of work. There is a view that this might be a worthwhile trade-off: if millions are paid more, does it really matter if a few can’t find gainful employment any longer? Again, I disagree: there is nothing more cruel than freezing out the young, the unskilled, the inexperienced or the aspiring migrant from the chance of employment.

Being permanently jobless is a terrible, heart-wrenching state; the Government should never do anything that encourages such a catastrophe.”

Clearly, Heath’s argument (which I don’t in any case agree with) carries more weight the greater the effect of a higher minimum wage on unemployment.  But getting the numbers wrong isn’t the only problem with the OBR’s use of the demand elasticity of labour, as I’ll try to explain in my next post.

July 17, 2013

Staggering Piccadilly Line Trains

Filed under: TfL, Transport, Tube — Tim Joslin @ 11:03 am

Does London Underground appreciate that the vast majority of tube passengers (or “customers”, “clients”, “johns” or whatever might be the latest business-speak term for the long-suffering punters) simply want to be delivered to their destination as quickly as possible (and in reasonable comfort, but let’s put that issue to one side for the moment)?

Why, then, would the operators deliberately slow tube trains down?

I’ve been pondering this question on many an occasion over the last year when I’ve had the dubious pleasure of making use of the Piccadilly Line to travel into central London from Acton Town (previously I took the Central Line from Ealing Broadway, which has a different set of problems).

Take last Saturday. I was concerned how long it would take me to get to Golders Green due to the closure of the Northern Line (to test new signalling, I understand). As it turned out I needn’t have worried, since a frequent replacement bus service whisked me there from Finchley Road (though anyone who didn’t twig they were best off catching the Jubilee Line there might well have arrived late at their destination). Still, credit where credit’s due.

Anyway, when I left home last Saturday morning I naturally had no foreknowledge of the impending outbreak of efficiency at Finchley Road, so I was keen to get there as soon as possible. When I saw a Piccadilly Line train on platform 4 at Acton Town, I jumped on immediately, even though there were no free seats and I had to stand. When travelling into London, Acton Town is the first station on the Piccadilly Line after two branches come together. Well, they partially come together at Acton Town, since there are two platforms serving Piccadilly Line trains in each direction and two sets of track each way for, it seems, much of the distance towards Hammersmith. Sometimes there are trains at both platforms and occasionally there’s even an announcement telling you the other train will leave first.

But on Saturday there was no Piccadilly Line train waiting at platform 3.

Nevertheless, to my considerable annoyance, the full train I was on was OVERTAKEN by TWO other Piccadilly Line trains as it trundled towards Hammersmith. Since the Piccadilly Line operators like to maintain the gap between trains, this happenstance added maybe as much as 10 minutes onto my 1/2 hour journey to Green Park to pick up the Jubilee Line.

And the trains that overtook us were fairly empty. That’s right: a full train with hundreds of passengers was held up to allow relatively empty trains, perhaps with only scores of passengers, to pass.

What’s going on? The only conceivable explanation is that the Piccadilly Line operators are not concerned directly with the passengers at all. No, they’re simply focused on their timetable. Operational processes, I suspect, take priority over customer service. Now, a glance at the Piccadilly Line map reveals there is no branch in the east, only in the west:

[Figure: Piccadilly Line route map]

It’s not outwith the bounds of possibility that the Piccadilly Line timetable ensures trains join the merged section of track based on where they’ve come from. That is, it could be that trains from Uxbridge are held up to allow Heathrow trains to overtake. I know: this is stupid. The trains appear to be identical. It’s not as if there’ll be a knock-on effect if trains arrive at the other end of the line in the wrong order. In fact, the trains being identical is another problem with the Piccadilly Line: the Heathrow trains need more luggage storage space. More about that issue another time perhaps.

Another possibility is that the trains are being ordered according to their destination. Not all trains go to Cockfosters. Some terminate before then. But holding up trains to ensure that (say) every other train goes to Cockfosters is almost as stupid as ordering the trains based on where they’ve come from. The reason not every train goes to Cockfosters is presumably because they are fairly empty when they get there. So delaying trains full of hundreds of people at Acton for the convenience of a few passengers at the other end of the line would make no sense. Which isn’t to say that’s not why it’s being done.

As I mentioned earlier, Piccadilly line eastbound train ordering is usually achieved by holding trains at Acton Town station. Infuriatingly, you often can’t tell whether the train at platform 3 will leave first or the one at platform 4. Simply providing this information would save hundreds of person-hours of tube travel time every day. But maybe (understandably) the operators don’t want hundreds of people rushing from one train to the other – though not everyone would necessarily move, since there’s the issue of access to those precious seats to consider.

But why hold up any trains at all? Holding up trains ALWAYS delays many passengers whereas managing train ordering – or, as we’ll see, the intervals between them – generally speeds up very few journeys.

Here’s my advice to Transport for London: stop trying to be clever. Start trains according to a timetable, but after that just run them as fast as possible. Hold them at stations only long enough for passengers to get on and off.

The calculation the operators should be carrying out, but I very much doubt they are, is whether the amount of time they are costing the passengers on held trains is less than that saved by passengers who would otherwise miss the held train.
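In pseudo-numbers, the comparison I have in mind looks something like this (all figures invented for illustration):

```python
# Passenger-minutes lost by holding a train versus passenger-minutes saved
# by those down the line who would otherwise have just missed it.
def net_cost_of_holding(on_board, hold_minutes, late_arrivals, headway_minutes):
    lost = on_board * hold_minutes           # everyone on board waits
    saved = late_arrivals * headway_minutes  # they skip the wait for the next train
    return lost - saved

# A full morning train held 3 minutes so a handful of passengers catch it
# rather than the train 4 minutes behind:
print(net_cost_of_holding(on_board=800, hold_minutes=3,
                          late_arrivals=20, headway_minutes=4))
# 2320 passenger-minutes lost, net - holding only pays when trains are
# fairly empty and platforms are busy.
```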

Perhaps this becomes clearer if we consider what happens when trains are held to even the intervals between Piccadilly Line trains in Central London. There’s only one tunnel and one set of platforms there, so there’s no issue of trains overtaking each other. Yet frequently – maybe on as many as half of all journeys – you sit in a station and hear that “we’re being held to regulate the service”. But consider the effect of this procedure. ALL the passengers on a train are being held up so that a few passengers further down the line find the train hasn’t already left when they reach the platform.

Such train staggering only makes sense when trains are fairly empty and many passengers are arriving at the platform. But the reverse is the case for morning journeys into central London and evening journeys out. When I’m on staggered Piccadilly Line trains it’s almost always the case that many more passengers are being delayed than are being convenienced.

If TfL is not swayed by the simple numeric argument, perhaps they should consider the business argument that the more often people have a rapid journey, the more they will be inclined to use the Underground rather than alternatives, such as taxis.

My advice to TfL’s London Underground operations team is to stop dicking around trying to timetable Piccadilly Line trains along the whole line. Release them at regular intervals and then get them to their destination as quickly as possible. Simples.

April 9, 2013

Could 2013 Still be the Warmest on Record in the CET?

Filed under: Global warming, Science, UK climate trends — Tim Joslin @ 5:05 pm

Blossom is appearing and buds are opening. The front garden magnolias of West London are coming into flower. The weather is turning milder in the UK. Spring is here at last.

So perhaps I’ll be coming to the end of posts on the subject of unusual weather for a while. Until there’s some more!

We’ve seen that March 2013 was, with 1892, the equal coldest in the CET since 1883, which is particularly significant given the generally warmer Marches in recent decades.

The first quarter of 2013 was the coldest since 1987, and the cold has now continued into April. This is where we now are, according to the Met Office:

[Figure: Met Office screen-grab of CET data and anomalies for 2013 to date]

So far this year it’s been 1.44C colder than the average over 1961-90, which is the basis for CET “anomalies” here.

The rest of the year would have to be 2.37C warmer than usual, on average, for 2013 to be the warmest in the record.

Is it possible for 2013 to still be the warmest year in the CET? I’m saying no – or, to be more measured, it’s extremely unlikely.

Last year, it was July 13th before I felt able to make a similar statement.

But now I’ve realised that I can simply plot a graph of the later parts of previous years and compare them to the required mean temperature in 2013.

Here’s the graph of mean CET for the last 9 months of the year:

[Figure: mean CET for the last 9 months of each year, with running means]

Perhaps the most notable feature is that the last 9 months of 2006, at 13C, were a whole 0.5C warmer than the last 9 months of the next warmest year, 1959, at 12.5C!

It’s easy enough to calculate that for 2013 to be the warmest year in the CET, the mean temperature for the last 9 months of the year would have to be 13.38C.
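Here’s the ready reckoning behind that 13.38C as a sketch (my assumed inputs: the record annual CET, set in 2006, taken as roughly 10.8C, and the first quarter of 2013 as roughly 3.2C; the small difference from the figure above is down to rounding those inputs):

```python
record_annual = 10.8  # assumed: warmest calendar year in the CET (2006)
q1_2013 = 3.2         # assumed: mean CET for Jan-Mar 2013

# An annual mean is the month-weighted average of Q1 and the remaining 9 months:
required = (12 * record_annual - 3 * q1_2013) / 9
print(f"required Apr-Dec mean: {required:.2f}C")  # ~13.3C, close to the
# 13.38C quoted in the text
```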

To be warmer than the warmest year in the CET, also 2006, the last 9 months of 2013 would need to be 0.38C warmer than the last 9 months of 2006. That’s a big ask.

But let’s look a little more carefully at 1959. The last 9 months of 1959 were about 1.4C warmer than the prevailing mean temperatures at the time, given by the 11 year (red line) and 21 year (black line) running means. The last 9 months of 2006 were “only” about 1.1 or 1.2C warmer than an average year at that time.

If 2013 were 1.4C warmer than the running means in previous years (obviously we can only determine the running means centred on 2013 with hindsight) then it would not be far off the warmest year in the CET.

No other year in the entire CET spikes above the average as much as 1959, so we have to suppose the last 9 months of that year were “freak” – say a once in 400 year event – and extremely unlikely to be repeated.

So on this basis it seems 2013 is extremely unlikely to be the warmest in the CET.

Now we have a bit of data for April we can also carry out a similar exercise for the last 8 months of the year.

The Met Office notes (see the screen-grab, above) that the first 8 days of April 2013 were on average 3C cooler than normal in the CET (“normal” with respect to the CET is always the 1961-90 average). If we call those 8 days a quarter of the month, the rest of the month needs to be 1C warmer than usual for April as a whole to be average. Let’s be conservative, though, and assume that happens.

It’s easy enough now to calculate that for 2013 to be the warmest year in the CET, the mean temperature for the last 8 months of the year would have to be 14.07C, assuming the April temperature ends up as the 1961-90 average.

On this basis, we can then compare the last 8 months of previous years in the CET with what’s required for this year to be the warmest on record:

[Figure: mean CET for the last 8 months of each year, with running means]

Here 2006 seems more exceptional, and 1959 not quite such an outlier. (April is not now included: in 1959 the month was warm at 9.4C whereas in 2006 it was warmer than average at 8.6C, but not unusual).

Clearly, the spike above the running means would have to be a lot higher than ever before for 2013 to be the warmest year in the CET. Those 8 cold days seem to have made all the difference to the likelihood of 2013 breaking the record.

That’s it for now – though if April is particularly cold this year, a comparison of March and April with those months in previous years will be in order. The plot-spoiler is that 1917 was the standout year in the 20th century for the two months combined.

April 8, 2013

CET End of Month Adjustments

Filed under: Global warming, Media, Science, Science and the media, UK climate trends — Tim Joslin @ 5:51 pm

When we have some exceptional weather I like to check out the Central England Temperature (CET) record for the month (or year) in question and make comparisons with historic data which I have imported into spreadsheets from the Met Office’s CET pages.

The CET record goes back to 1659, so the historical significance of an exceptionally cold or warm spell – a month, season, year or longer – can be judged over a reasonably long period. Long-term trends, such as the gradual, irregular warming that has occurred since the late 17th century, are, of course, also readily apparent. The Met Office bases press releases and suchlike on records for the UK as a whole which go back only to 1910.

The Met Office update the CET for the current month on a daily basis, which is very handy for seeing how things are going.

After watching the CET carefully during a few extreme months – December 2010 and March 2013 come to mind – I noticed that there seems to be a downwards adjustment at the end of the month. I speculated about the reasons for the apparent correction to the figures a week or so ago:

“…I’ve noticed the CET is sometimes adjusted downwards before the final figure for the month is published, a few days into the next month. I don’t know why this is. Maybe the data for more remote (and colder) weather-stations is slow to come in. Or maybe it’s to counter for the urban heat island effect, to ensure figures are calibrated over the entire duration of the CET.”

and, as I mentioned earlier, today emailed the Met Office to ask.

I received a very prompt reply, and the first of the possible explanations I came up with is in fact correct. My phrase “more remote” makes it sound like the data is still being collected by 18th century vicars and landed gentry, but in fact there is a bias in the daily CET for the month to date due to the timing of availability of data:

“Not all weather stations send us their reports in real time, i.e. every day, and so for some stations we have to wait until after the end of the month before [complete] data are available.”

It must simply be that the stations that send in the data later tend to be in colder areas (at least in winter when I’ve noticed the end of month adjustment) – perhaps they really are “more remote”!

March 2013 WAS equal with 1892 as coldest in the CET record since 1883!

10 days or so ago I discussed the possibility that March 2013 would turn out to be the coldest in the Central England Temperature (CET) record since the 19th century.

Well, it did it!

Here’s a list of the coldest Marches since 1800 in the CET:

1.  1883  1.9C
2.  1845  2.0C
3.  1837  2.3C
4=  1892  2.7C
4=  2013  2.7C
5.  1962  2.8C

A few questions and not quite so many answers occur to me:

1. Why hasn’t the Met Office trumpeted March 2013 as the coldest since the 19th century?
What I’m alluding to here is, first, that the Met Office records for the UK and England only go back to 1910, but also that, as detailed on the Met Office’s blog, it turns out that March 2013 was only the joint 2nd coldest for the UK as a whole:

“March – top five coldest in the UK
1 1962 1.9 °C
2 2013 2.2 °C
2 1947 2.2 °C
4 1937 2.4 °C
5 1916 2.5 °C”

and second coldest for England as a whole:

“Looking at individual countries, the mean temperature for England for March was 2.6 °C – making it the second coldest on record, with only 1962 being colder (2.3 °C). In Wales, the mean temperature was 2.4 °C which also ranks it as the second coldest recorded – with only 1962 registering a lower temperature (2.1 °C). Scotland saw a mean temperature of 1.3 °C, which is joint fifth alongside 1916 and 1958. The coldest March on record for Scotland was set in 1947 (0.2 °C). For Northern Ireland, this March saw a mean temperature of 2.8 °C, which is joint second alongside 1919, 1937, and 1962. The record was set in 1947 (2.5 °C).”

The figures all tally, suggesting that the parts of England not included in the CET were less exceptionally cold than those included, as I suggested before.

2. Why hasn’t the Met Office trumpeted March 2013 as the second coldest on record?
What I’m alluding to here is that the Met Office only made their “second coldest” announcement on their blog, not with a press release. The press release they did issue on 26th March was merely for “the coldest March since 1962”, and included somewhat different data to that (above) which appeared on their blog for the whole month:

“This March is set to be the coldest since 1962 in the UK in the national record dating back to 1910, according to provisional Met Office statistic [sic].

From 1 to 26 March the UK mean temperature was 2.5 °C, which is three degrees below the long term average. This also makes it joint 4th coldest on record in the UK.

Looking at individual countries, March 2013 is likely to be the 4th coldest on record for England, joint third coldest for Wales, joint 8th coldest for Scotland and 6th coldest for Northern Ireland.” (my stress)

and a “top 5” ranking that doesn’t even include March 2013, which eventually leapt into 2nd place!:

“March – Top five coldest in the UK
1 1962 1.9 °C
2 1947 2.2 °C
3 1937 2.4 °C
4 1916 2.5 °C
5 1917 2.5 °C.”

As I’ve also mentioned before, it’s odd to say the least that the Met Office have formally released provisional data (and not the actual data!) to the media.

So I’ve asked them why they do this, by way of a comment on their blog:

“The Met Office’s [sic – oops] announced a few days ago that March 2013 was only the ‘joint 4th coldest on record’ (i.e. since 1910) rather than the joint 2nd coldest. This was based on a comparison of data to 26th in 2013 with the whole month in earlier years, which seems to me a tad unscientific.

Maybe it’s just me, but it seems that there was more media coverage of the earlier, misleading, announcement.

Why did the Met Office make its early announcement and not wait until complete data became available at the end of the month?”

I’ll let you know when I receive a response – my comment has been awaiting moderation for 4 days now.

3. Why was it not clearer from the daily CET updates that March 2013 would be as cold as 2.7C?
And what I’m alluding to here is the end of month adjustment that seems to occur in the daily updated monthly mean CET data. I’ve noticed this and so has the commenter on my blog, “John Smith”.

I didn’t make a record of the daily mean CET for March to date, unfortunately, but having made predictions of the final mean temperature for March 2013 on this blog, I checked progress. From memory, the mean ticked down to 2.9C up to and including the 30th, but was 2.7C for the whole month, i.e. after one more day. At that stage in the month, it didn’t seem to me possible for the mean CET for the month as a whole to drop more than 0.1C in a day (and it had been falling more slowly than that, i.e. by 0.1C less often than every day). Anyway, I’ve emailed the Met Office CET guy to ask about the adjustment. Watch this space.

4. Does all this matter?
Yes, I think it does.

Here’s the graph for March mean CET I produced for the previous post, updated with 2.7C for March 2013:

[Figure: March mean CET with running means, updated with 2.7C for March 2013]

A curiosity is that never before has a March been so much colder – more than 5C – than the one the previous year. But the main point to note is the one I pointed out last time, that March 2013 has been colder than recent Marches – as indicated by the 3 running means I’ve provided – by more than has occurred before (except after the Laki eruption in 1783).

I stress the difference with recent Marches rather than just March 2012, because what matters most in many areas is what we’re used to. For example, farmers will gradually adjust the varieties of crops and breeds of livestock to the prevailing conditions. A March equaling the severity of the worst in earlier periods, when the average was lower, will then be more exceptional and destructive in its effects.

The same applies to the natural world and to other aspects of the human world. For example, species that have spread north over the period of warmer springs will not be adapted to this year’s conditions.  And we gradually adjust energy provision – such as gas storage – on the basis of what we expect based on recent experience, not possible theoretical extremes.

OK, this has just been a cold March, but it seems to me we’re ill-prepared for an exceptional entire winter, like 1962-3 or 1740. And it seems such events have more to do with weather-patterns than with the global mean temperature, so are not ruled out by global warming.

March 29, 2013

How Significant is the Cold UK March of 2013 in the CET?

Filed under: Global warming, Media, Science, Science and the media, UK climate trends — Tim Joslin @ 12:51 pm

Well, few UK citizens can still be unaware that March 2013 has been the coldest since 1962, though I’m still baffled why the Met Office jumps the gun on reporting data. There were 4 days to go when their announcement arrived in my Inbox – and clearly that of every newspaper reporter in the land.

Overall, though, the Met Office analysis – which, remember, is based on a series going back only to 1910 – suggests 2013 has been less of an outlier than it is in the Central England Temperature (CET) record.

This is what they say:

“This March is set to be the coldest since 1962 in the UK in the national record dating back to 1910, according to provisional Met Office statistic [sic].

From 1 to 26 March the UK mean temperature was 2.5 °C, which is three degrees below the long term average. This also makes it joint 4th coldest on record in the UK.”

They provide a list:

“March – Top five coldest in the UK

1 1962 1.9 °C
2 1947 2.2 °C
3 1937 2.4 °C
4 1916 2.5 °C
5 1917 2.5 °C”

The discrepancy with the CET is presumably partly because Scotland, although colder than England, has not been as extreme compared to the cold Marches of the 20th century. The Met Office note:

“Looking at individual countries, March 2013 is likely to be the 4th coldest on record for England, joint third coldest for Wales, joint 8th coldest for Scotland and 6th coldest for Northern Ireland.”

Still, I’m rather puzzled why this March is reported as only the 4th coldest in England, particularly when I read in a post on the Met Office’s blog that in most English counties it’s been the 2nd coldest after 1962.

It may be that the overall ranking for England will change over the next few days, which would add to my bafflement as to why the Met office makes early announcements. I’d have thought such behaviour was fine for mere bloggers like me, but not what is expected from an authoritative source. Isn’t the difference the same as that between City company analysts and the companies themselves? The former speculate; the latter announce definitive results.

Anyway, it’s also possible that the CET region has been colder than England as a whole relative to the previous cold Marches. I notice on the Met Office blog that this March has not been the second coldest for Yorkshire, Northumberland and Durham. If these counties are outside the CET area, the significant area they cover would explain the difference in the Met Office rankings for England as a whole.

Focusing just on the CET, it’s still possible that March 2013 could be as cold or colder than 1962, and therefore the equal coldest since 1892 or 1883 (or even the coldest since 1883, though that seems unlikely now).

Although daily maximum temperatures have increased slightly to 6C or so, we’re also expecting some serious frosts (in fact some daily minimum records look vulnerable on 30th and 31st), and the CET website implies it is a (min+max)/2 statistic (as included in the screen-grab below).

Here’s the latest CET information for March:

[Figure: Met Office screen-grab of the latest CET data for March 2013]

It’s now very easy to work out what the mean temperature will be at the end of March, due to the happy coincidence of the mean being 3.1C so far and there being 31 days in the month (regular readers will have noticed that I much prefer ready reckoning methods to those involving calculators or spreadsheets). Obviously, spread over the whole month the 3.1C so far would be 2.7C. That is, if the mean temperature for the remaining 4 days were 0C, that for the month would be 2.7C, the same as 1892 (and lower than 1962’s 2.8C). Every 3.1 degree days above 0 (that is, a ~0.75C mean for the 4 days) adds 0.1C (over 2.7C) for the month as a whole. So if you think it’ll average 1.5C for the rest of the month in the CET region, the mean for the month as a whole will be 2.9C.

Obviously rounding could come into it, so it might be worth noting that the mean to 26th was also 3.1C. If you think (or find out – due to time constraints, I haven’t drilled down on the CET site) that 27th was colder than 3.1C (which seems likely) then just a bit less than 1.5C for the rest of the month – say 1.4C – would likely leave the overall mean at 2.8C.
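The same ready reckoning spelt out as code, for anyone who prefers it (a sketch of the arithmetic above, nothing more):

```python
def month_mean(mean_so_far, days_so_far, assumed_rest_mean, days_in_month=31):
    """Day-weighted average of the month so far and the remaining days."""
    rest = days_in_month - days_so_far
    return (mean_so_far * days_so_far + assumed_rest_mean * rest) / days_in_month

print(round(month_mean(3.1, 27, 0.0), 2))  # 2.70C if the last 4 days average 0C
print(round(month_mean(3.1, 27, 1.5), 2))  # 2.89C if they average 1.5C, i.e. ~2.9C
```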

Here’s the latest ensemble chart for London temperatures from the Weathercast site to help you make your mind up:

[Figure: ensemble chart of London temperatures, from Weathercast]

My guesstimate is 2.8C, so on that basis I move on to the main point of this post. Just for a bit of fun I put together a chart of the entire CET record for March, with running means:

[Figure: the entire CET record for March, with running means]

The picture is not dissimilar to that for the unusually cool summer of 2011. Although this March has been the coldest for “only” 50 years – one might argue that a coldest for 50 years month will occur on average every 50 months, i.e. every 4 and a bit years – global and general UK (including CET) temperatures have increased significantly over the last few decades.

As can be seen from the chart above, this March has been around 3.5 degrees colder than the running mean (depending which you take).

I say this with the health warning (as I gave for summer 2011) that the running means may be dragged down if March is also cold over the next few years – the significance of extreme events can only be fully appreciated in hindsight, and it may be that the warm Marches of the two decades or so before this year will look more exceptional and this year’s less exceptional when we have the complete picture.

Health-warning aside, there aren’t really any other Marches as much as 3.5C colder than the prevailing March temperature. The period 1783-6 stands out on the graph, but isn’t really comparable, because the eruption of the Icelandic volcano Laki gave the country a sulphurous shade, significantly reducing the Sun’s warmth. 1674 looks notable, too, but the monthly means back then seem to be rounded to the nearest degree centigrade, so we can’t be sure it actually was as cold as 1C (at least without considerable further research).

It’s all very curious. After December 2010 (for which I should prepare a similar chart some time) and now March 2013, one wonders whether, when we do get cold snaps, it’s going to be even more of a shock than in the past. Does global warming have much less effect on cold UK winter temperatures than on the long-term average? Or would this March have been even colder had the same weather conditions occurred before the global warming era? March 1883 averaged 1.9C, but was only about 3C colder than prevailing Marches. Perhaps this year’s weather conditions would have resulted in a monthly mean of 1.4C back then! The trouble is we now have no idea whether this March has been a once in 50 years, once a century or once a millennium event.

And has melting the Arctic ice made cold snaps more likely?

Confusion and unpredictability abound when it comes to extreme weather events. Preparing for the worst – the precautionary principle – is called for.
