Uncharted Territory

August 23, 2016

After the Brexit Referendum (2) – Free Movement vs Work Permit Schemes

Filed under: Uncategorized — Tim Joslin @ 3:22 pm

In my previous post, I argued that free movement is the best way to organise migration.  During the referendum campaign we heard Boris Johnson parrot the phrase “Australian-style points system” with nauseating regularity.  Putting to one side the inconvenient fact that even Australia doesn’t have an Australian-style points system, since a large majority of migrants to Australia are brought in through company sponsorship schemes, I nevertheless assumed that the UK would, after Brexit, attempt to implement some kind of points-based system.

I argued that a points-based system was misguided, in part because it’s bound to reduce social mobility within the UK.  Nevertheless, as the Guardian reports, a survey by ICM on behalf of a think-tank, albeit one I’d never previously heard of, called British Future, found that “[o]nly 12% [of the sample] want to cut the number of highly skilled workers migrating to Britain; nearly half (46%) would like to see an increase, with 42% saying that it should stay the same.”  Baffling.  Why exactly are we leaving the EU?

But, part way through my previous post, it became clear that a pure points-based system might not be what all the Brexiteers have in mind.  I quoted David Goodhart writing in Prospect magazine in favour of “guest citizenship”.  According to Goodhart, free movement has led to many EU citizens coming to the UK who “do not want or need to become British”, causing an “integration problem”.  He claims that “unnecessary resentment” has been created by “the lack of a distinction between full and guest citizenship”.  Utter poppycock.  The problem is the reverse.  Voters are afraid, so they tell us, of their communities being changed by immigration.  If they thought migrant workers were here only temporarily, one might reasonably suppose they’d be less, not more, concerned.  In an Alice-in-Wonderland leap of logic, Goodhart somehow argues that because many migrants don’t stay forever they should be prevented from doing so, ignoring the common-sense argument that people don’t usually make a decision to stay forever in advance.  Life is what happens when you’re busy making other plans.  Roots are put down over a long period of time.  Moss gathers only slowly on stones.  And so on.

To put my cards on the table, I find Goodhart’s views fairly, well, abhorrent is the word that comes to mind.  He notes in passing, for example, that “the right of people to bring in dependents should be reviewed.”  It seems to me that if you’re working somewhere, you should be able to make your life there.  Not every migrant worker will choose to do so, of course, and some jobs necessarily involve spending time away from one’s family, but settling where you work is the norm, and I don’t see what right the UK has to prevent it.  Doing so is exploitation, pure and simple, taking advantage of the weaker economic circumstances in some other parts of the world.

So I was a bit disappointed to read Alastair Campbell’s musings in The New European (“My memo to Mrs May…”, issue 2, July 15-21 2016) drifting towards Goodhart’s position:

“…in addition to discussing terms of exit, you would like [sic] to explore the possible terms on which we might stay, including another look at immigration… Might freedom of movement become freedom of labour, for example?”

No, Alastair, we should simply be asking for what the EU failed to accept first time round when Cameron asked, which is renewed transitional controls with those countries from which there is a large net flow to the UK.  Clearly, 7 years has not proved to be anything like enough for the economies of Eastern Europe to converge with those in the West.  This would save the principle of free movement by amending the rules, rather than sacrificing the principle to rigid, ill-thought-out rules that were drafted on the basis of no experience whatsoever.

The bizarre situation we find ourselves in is that we’ve voted to leave the EU in part because of the number of migrants into rural areas – Boston, Lincolnshire, had the highest Brexit vote – but, judging by the frequent dire warnings from food producers, supposedly we are going to have to create (presumably time-limited) work-permit schemes to maintain the migrant work-force in those very same areas!  Yep, we need temporary migrants to replace people who, according to David Goodhart, were treating “our national home… as a transit camp and a temporary inconvenience.”

We’ve got a big problem here.  On many levels, not just that of how society values different jobs, an aspect Peter Fleming emphasises.

According to the food producers, we have to produce as much food in the UK as possible – even though farming less intensively and leaving more land fallow would surely reduce soil depletion and enhance our ability to feed ourselves in the long term.  Do we really think our national security is at risk if we have to buy cucumbers from Poland or Romania, rather than employ Poles and Romanians to pick cucumbers grown in East Anglia?  Of course it isn’t.

And apparently migrants on low wages are essential to our food production.  Yet those communities ultimately sustained by farming – Boston, Lincolnshire and its ilk – don’t want East European shops and voices on their high street.  I guess Goodhart envisages migrant permits forcing workers to stay on the farm 24/7 – how else to prevent them shopping or speaking in Boston High Street? – and, I presume, travelling in blacked-out vehicles to and from Stansted for their Wizzair flights.

But what bothers me most is the general attitude that it is acceptable for non-UK citizens to live in conditions that the locals aren’t expected to put up with.  The fact that only migrant workers will do certain jobs should not be a reason for ensuring a continual flow of migrant workers under schemes denying them rights to make a life in the UK.  Rather, it should be a warning that working conditions in those jobs are exploitative.  Pay – that is, the minimum wage – needs to be increased.  Only when British workers apply for such jobs should we employ migrant workers with a clear conscience.

And I seem to recollect that seasonal fruit-picking jobs were advertised in local newspapers back in the day (I’m talking ’70s and ’80s).  I read such ads as a kid and wondered if I could get some pocket-money that way.  Students, I recall, habitually supplemented their grants by helping bring in the harvest – grape-picking in France being the coolest gig.

The government should face down the farming lobby.  Tell them they’ll simply have to pay more after Brexit.  Put the minimum wage up faster than currently planned to give them a clue as to what they should be paying.  Don’t give them an exploitative migrant-worker scheme.  And don’t give one either to any of the many other industries that are also no doubt lobbying ferociously behind the scenes.  If some jobs move overseas and we have to import cucumbers, so be it.  It makes no economic sense for the UK to do everything – the theory of comparative advantage and all that.

The tragic thing is that if we hadn’t accepted over the last decade that it was OK to employ migrants on lower pay than Brits would accept for the same work and conditions, we might not be Brexiting in the first place.

 

August 9, 2016

After the Brexit Referendum (1) – Free Movement vs Immigration

Filed under: Brexit, Economics, Migration, Politics — Tim Joslin @ 5:34 pm

In the days before the Brexit referendum I found myself unable to focus on anything other than the last frantic round of debates, speeches and pleas.  It was clear to me even before the vote that there are several huge interconnected problems with our political culture which could lead to a major political accident.  So I began drafting a letter/paper to send, initially to my MP. Of course, the exercise grew like Topsy and, whilst I may still produce a single document, I’m breaking it up in the first instance and posting it on my rather appropriately named blog.

My original idea was to be clever and couch my thoughts as “regardless of the result of the referendum”, so please don’t think my views are just a snap reaction to the setback.

My overall view has consistently been that the referendum should never have been called and that, even if we Brexit, we must rebuild and strengthen our trading, political and cultural relationships with Europe.  Isolation is not the answer.  Instead we must address the causes of so much dissatisfaction and fix our democracy.

We mustn’t just roll over.  Rather, we need to be tough not only on Brexit, but also on the causes of Brexit!

The most significant issue for Remain was the utter, abject failure – not just during the referendum campaign, but over many years – to build a case for free movement within the EU, or, strictly speaking, the single market of the European Economic Area (EEA), which includes a few countries in addition to the EU.  The desirability or otherwise of free movement remains a live issue, since the UK may wish to stay in the single market, members of which are supposed to permit free movement.  Since UK membership of the single market would be highly desirable, it’s definitely worthwhile to start making a coherent argument in favour of free movement.  The horse may have bolted, but it’s still in sight.

First, let me define my terms.

“Immigrants” vs “EU migrant workers”

The core issue in the referendum campaign was “immigration”, though, whatever Theresa “Maggie” May and many other politicians and commentators are now saying to justify their stance on immigration controls, the question on the ballot paper was Leave or Remain, so the vote gives no clear indication of the level of opposition to free movement.

Furthermore the scapegoats for all our problems are not actually “immigrants”.  Immigrants arrive on visas and are generally on a path to citizenship.  At some point, very soon in many cases, they get to vote.

The term “immigration” suggests an intention of permanency from the outset, whereas “migration” is less committal.  It may or may not lead to long-term residence.  It’s unlikely to involve an immediate change of citizenship.

I’ll therefore use the term “EU migrant workers” to refer to those who are in the UK under the free movement provisions of EU treaties.  I should say that, whatever the context, I don’t like the negative connotations of the word “immigrant” and I’d prefer a more distinct term with a different root rather than “EU migrant workers”.  But those are the words we have and it’s kind of important to actually be understood.

Of course, some EU citizens come to the UK for reasons other than to work or to seek work.  Such “EU migrants” may be economically self-sufficient – retired or the wealthy enjoying the London lifestyle, perhaps – and are unlikely to be able to claim benefits or subsidised housing.  The issues cited in the referendum campaign relate, though, mostly to “EU migrant workers”, not “EU migrants”.

The Rationale for Free Movement of Labour

Why does the EU insist on freedom of movement within the single market?  It seems not to have occurred to the leaders of the Remain campaign to try to answer this simple question.

When I started drafting this post I assumed that the argument for free movement would have been clearly stated by the founding fathers (sorry, they seem to have all been male) of the EU (or, strictly, of the organisation’s predecessor, the EEC).   If there is such a statement – and I may research further – it’s not likely to rank highly in any citation index.  We’re not talking about the Rights of Man, here.

No, all accounts I have seen suggest that the freedom of movement we see now evolved from an initial freedom of movement specifically to work, that is from the free movement of labour.

I’ll come onto why free movement purely to work is unworkable (intended, of course – I can’t resist a play on words) in a fair society, but, first, why is the free movement of labour so important?

The argument is not often stated clearly, but there are several threads of thought:

First, the observation was made in the mid 20th century – predating the EEC, I understand (sorry, more research needed) – that one reason the US economy is more dynamic than Europe’s is because of the higher rates of migration between states in the US than between countries in Europe.  This allows new industries – Motown, Hollywood, Silicon Valley – to develop rapidly and regions to regenerate through “creative destruction” rather than stagnate when the local economy declines – Detroit, for example.

Second, it’s often said that free movement of labour is necessary for free movement of capital.  I take this to mean that if companies or an entire industry moves, or an industrial cluster exhausts the local labour supply, trained workers can move too.  The alternative would be skilled workers in one country having to retrain or be unemployed, whereas workers in another country have to acquire the relevant skills.  Those with a vocation may be frustrated in their ambitions.  This aspect of European free movement is presumably most beneficial in very highly-skilled occupations, such as research and financial services.

Third, free movement benefits the European economy as a whole when one or more countries face an economic downturn.  As we’re seeing now, young people from some of the southern European countries which suffered most in the euro crisis, who would otherwise be unemployed, are able to find work in the UK and other economies where demand is presently creating more jobs.  Or, conversely, one economy may boom and draw in labour from its neighbours.  Germany’s post-WWII economic miracle led to “Gastarbeiter” (literally “guest-worker”) deals with its neighbours (and, famously, Turkey) which clearly foreshadowed more general free movement in Europe (and complemented free movement between the Treaty of Rome signatories).

Why Free Movement of Labour is Not Enough

Having established free movement of labour – relatively uncontroversial for some decades, certainly in the UK – the EU in 1992, through the Maastricht treaty, and by various directives and court rulings, granted additional rights to EU nationals resident in other member countries, in effect a form of EU citizenship.

There’s little disagreement about the basic narrative of how freedom of movement of labour became EU citizenship, though if you listen to Farage or Johnson you’d assume it was mission-creep, perhaps a plot by European superstate zealots.

But if you reflect for a moment on how people live their lives it’s obvious that freedom of movement purely to work is not enough.  People put down roots where they work.  They may want to retire there.  They start families, or have children already.  Crucially, because people don’t necessarily make a conscious decision that they’re going to remain forever in their adopted EU country, they don’t tend to apply for citizenship.  So the rights of EU nationals to benefits, pensions, housing, healthcare, education of their children and so on have to be protected, and on the same basis as the locals.  This is simply a logical consequence, which should have been instituted from the outset.

There are, however, those who deny this logical consequence.  For example, David Goodhart argues in Prospect magazine (August 2016) that:

“A guest citizen is not a full member, does not have full access to social and political rights and leaves after a few years.  Formalising guest citizenship would mean that we could concentrate rights, benefits and integration efforts on those who are making a commitment to this country. … If we don’t want to continue with relatively high inflows we have to guard full citizenship more jealously.”

In other words, he wants us to become more like Qatar.

Why Free Movement is Preferable to Other Forms of Migration

During the entire Brexit referendum campaign I only heard one voice defending free movement.  Mine.  I piped up, somewhat uncharacteristically, in a meeting organised by UCL, where the aforementioned David Goodhart was one of the panelists, to point out that, from the point of view of the home country of migrant workers, free movement is preferable to a points-based system.  It’s less of a brain-drain.  So, I tried to explain, EU countries aren’t going to agree to anything less than free movement as part of any Brexit negotiations.

Goodhart seized on what I said to emphasise that migration in itself is a brain-drain, period, twisting the point I’d made.  So, having put my head over the parapet I had to reiterate my point that free movement is less problematic than a points-based system, since not only doctors are being tempted abroad; their patients are as well.  Wealth-creators may leave for sunnier climes; but so do the unemployed.

The problem with Goodhart’s suggestion that free movement has been bad for migrants’ home countries is that their governments – most vocally Poland in the UK context – simply don’t agree with it.  And he doesn’t repeat his claim in his Prospect article, acknowledging that migration to the UK has been an “unemployment safety valve for struggling southern or eastern European economies”.

But free movement is not only preferable to a points-based system from the point of view of the originating country.  It’s also better for the UK.

First, free movement is simple.  A points-based system not only requires a huge bureaucracy just to keep track of who should be working and who shouldn’t – a dead-weight cost on economic activity – it also implies some bod in Westminster making decisions on how many pheasant-pluckers and widget-testers the UK “needs”.  And all the lobbying that’s bound to accompany the process.  No wonder that in the example of the Australian system that is always cited, the vast majority of immigrants come in with company sponsorship – recruited abroad, something the Brexit brigade are always railing against.

Second, free movement is flexible.  Because it doesn’t involve granting citizenship, migrant workers remain mobile.  Should they fail to retain work in the UK they can return to their home country or go to any other EU country.  In particular, they lubricate the free movement safety-valve (if that’s not taking the metaphor too far) – in the event of a downturn in the UK (as we will no doubt see during the post-Brexit recession) those who have already migrated to the UK for work are likely better equipped than UK nationals to find work in their home country or elsewhere rather than swell the numbers of job-seekers in this country.  Perhaps flexibility is why David Goodhart champions a work permit scheme.  But such schemes are flexible only for the host country, not the migrant workers.  If the UK proposes to the EU a system of sector-specific time-limited work permits – as Goodhart seems to be advocating – in return for access to markets they’ll no doubt be told where to go.

Third, if we did institute a points-based scheme to address skills-shortages, won’t that reduce even further the incentive for UK employers to train British workers?  Or to promote them.  At present, migration from outside the EU is in part capped by salary requirements.  So your employer can recruit senior staff, but not junior ones.  Is that really what you want more of?

And, fourth, free movement also confers rights.  What could possibly be achieved by restricting migration to and from countries from which the net population flow is low?  Restrictions on movement are almost bound to be reciprocated, so, if Brexit leads to the end of free movement, the opportunities for UK citizens will be reduced and British businesses hamstrung because of restrictions on the ability of their staff to work in France and Germany.  As ever, it’s easy to try to solve problems by taking away other people’s rights.

Finally, free movement is a mechanism for economies to converge.  Migrant workers relieve unemployment in their home countries and send money back home – the so-called remittances, helping those countries’ economies.  And economic convergence may take years, even a decade or two, but is a finite process.  Net bilateral migration flows are likely to reduce eventually to zero as the source country develops.  If we keep free movement, then eventually Poles, Bulgarians and Romanians will stop coming to the UK to work.

It seems to me that, if we abandon free movement after Brexit on the dubious assumption it was “the” rather than a reason for the vote – of course there’s no denying it was a factor – we’ll be making a huge mistake.  The current migration flows from Eastern European countries are a temporary phenomenon, and would reduce as their economies transition to be more like those in the West, and anyone who thinks the UK itself won’t someday need an “unemployment safety valve” is living in cloud cuckoo land.  Indeed, net flows of EU migrants may well reverse as the UK economy goes down the pan ahead of Brexit.

The tragedy is that arguments in support of free movement as opposed to other forms of migration were so rarely heard during the referendum campaign.

May 25, 2016

Marketing Live Chess Broadcast Rights

Background: Agog at Agon’s Candidates Broadcast Monopoly

The moves of chess games have always been in the public domain.  Anyone can quote them in any medium including whilst games are in progress.  Indeed, in recent years live internet broadcasts of commentary on games at chess tournaments and in matches have become very popular with the chess community.

It was in this context that several chess websites geared up to broadcast commentaries on the most significant chess tournament of 2016, the Candidates tournament in Moscow which opened on March 10th.  The winner of the Candidates gets to play the current World Champion in a match for the title of World Chess Champion, so we’re all looking forward to watching Sergey Karjakin challenge Magnus Carlsen in November. But two days before the Candidates started, the organisers, a company called Agon, working with the worldwide chess federation, FIDE, forbade anyone else from broadcasting the moves until 2 hours after each game.  They did this by requiring anyone accessing the official website to sign a “Clickwrap Agreement” agreeing not to retransmit the moves.  Presumably onsite spectators and journalists were subject to similar restrictions.

Malcolm Pein, the editor of Chess magazine, noted in the May 2016 issue (p.4-5) that Agon’s attempt “to impose a new order on the chess world” was “cack-handed”, and it is indeed very unfortunate that FIDE has been involved in preventing a number of websites from supporting what I would have thought is its core objective, promoting the game.  The sites are likely to have incurred costs as a result, perhaps having committed to pay commentary teams.

Furthermore, as Pein notes in his Chess editorial, aspects of the Agon Candidates commentary left something to be desired.  He highlights an unfortunate incident when the moves were inadvertently shown swapped between games at the start of the last round.  I noticed that too, and was also confused for a moment by the disconnect between the commentary and what actually appeared to be happening, but much more serious was the quality of the commentary itself.  I felt it was interrupted much too often for breaks, usually to show the same couple of ads, plus what I’ll describe as “pen-portraits” of the players (cartoon-style drawing accompanied by commentary).  These were quite entertaining the first time you saw them.  Not quite so much the tenth time.  And, although the commentary team obviously worked hard to help the audience understand what was going on, I’ve enjoyed other commentators somewhat more.  The commentary is much more important in chess than say football, since (as non-players will appreciate!) there are periods of a game when there’s nothing much to see happening on the board.  I would have liked the choice to watch another broadcast.

Steve Giddings writes, also in the April edition of Chess (p.8-9), that preventing unauthorised broadcast of chess games is in “the commercial interests of the game”.  That may be so, but it seems to me that monopoly broadcasting is not the best way forward.

Given the goals of promoting chess by maximising the number of viewers of chess matches and tournaments and maximising revenue from the broadcast of elite events, simply in order to pay for them, a better option would be to license multiple broadcasters, if they’ll pay collectively more than would a single exclusive media outlet.  I outline in this article how the revenue-maximising number of broadcasters could be established by a simple process of bidding for a share of the rights.

First, though, let’s consider how other sports rights are sold and then whether times have changed – perhaps other sports might want to reconsider granting exclusivity – and how chess is different. I focus particularly on the case of domestic rights to broadcast the English Premier League.

The Football Precedent: English Premier League Live Broadcast Rights

When I was a kid, the FA Cup Final was always shown live simultaneously on both BBC1 and ITV.  So much for consumer choice – at the time there were only 3 channels (the other one being BBC2).  Nevertheless, there was competition, of a sort.  Sometimes we’d switch over to see what they were saying on the other side, though when the ads came on we’d switch back.

One might wonder why ITV would bother broadcasting the Cup Final when it was also on BBC (without ad breaks) and, indeed, why the FA would sell it to two broadcasters rather than just one.  I can think of two considerations:

  1. There was some product differentiation between the broadcasts on BBC1 and ITV.  The channels employed different commentators and pundits.  This produces what I would argue is healthy competition for viewers between broadcasters.
  2. Strange though it may seem to many younger readers, back in the day many – perhaps most – households watched either ITV or BBC almost exclusively, even though they both were (and still are) free-to-air.  It could be argued that the choice between watching BBC and ITV used to be very much driven by social class or at least the social class households identified themselves as belonging to, but that is actually irrelevant to the argument.  The point is that broadcasting the FA Cup Final on both ITV and BBC ensured that the product reached more people – ITV viewers and BBC viewers – than it would have done had it gone out only on BBC or ITV.

Presumably ITV could attract enough viewers and sell enough advertising to make it worthwhile to broadcast the FA Cup Final even though BBC1 was showing it too.

Sports Broadcast Monopolies

Why, then, you might ask, is almost all football in the UK now, in 2016 – indeed, almost all sport (and much other content besides) – broadcast on just one channel?  That is:

  • why have sports broadcast monopolies developed?
  • why do sports administrators tolerate and even encourage broadcast monopolies?
  • and whose interests do sports broadcast monopolies actually serve?

Some years ago I had the dubious pleasure of a job interview with BT; actually they wasted an entire day of my time at their recruitment centre (and even more with some further interviews later on).  The question arose in discussion – I guess after we’d noted the ongoing convergence of internet and broadcast media – as to how BT could best grow their broadband market.  I suggested offering some exclusive movies.  Perhaps my interlocutor was playing Devil’s advocate, but I don’t think so; regardless, he seemed to be arguing that they should market on the basis of their whizzy new network.  No, no, no!  The vast majority of consumers care only about what appears on their TV; they don’t care at all about the underlying technology.  And if there is some exclusive content – I mentioned movies at my BT interview because Sky had already “done” sport – that is likely to be decisive in winning customers.

It seems clear from their enthusiasm to enter into them that exclusive deals for live sport transmission rights are in the interests of subscription broadcasters, particularly when trying to build a customer base.  We have the example of the English Premier League (EPL) and much other sport (as well as films and other content) on Sky, now being contested by BT.  Netflix and Amazon are exclusively hosting supposedly must-see drama series.

As a consumer, I’m always wary when I’m told something is “exclusive”.  The very word suggests to me that someone is being ripped off.  Probably muggins.

But let’s not jump to conclusions.  Besides, what we’re really interested in is the health of the sport – that is, chess, when I get to the end of this preamble.

So, could exclusive sports rights sales be in the interests of the sports themselves?

Well, when broadcasters are trying to grow their business – think of Sky and the EPL – they may be prepared to pay what appears to be a substantial premium for a monopoly.  I say “appears to be a substantial premium” because at some point the broadcaster has to demonstrate income (advertising and/or subscriptions) commensurate with the expense.  Otherwise they go bust.

It’s not immediately apparent, and, indeed, somewhat counter-intuitive, that a single broadcaster of live events or TV series can unlock more advertising and/or subscription income than can multiple broadcasters of the same material.  Nevertheless, many sports administrators appear to believe monopoly broadcast deals are in the interest of their sport.   At least in the short term.

An example of what can happen in the longer term is provided by EPL broadcast rights in the UK.  Sky held the exclusive rights from the start of the Premier League in 1992 until 2007.  After the European Commission ruled that Sky should not have exclusive rights to all matches, they had competition, first from Setanta, who ran into financial difficulties, then from ESPN, who took over Setanta’s rights, and most recently from BT, who came into the market in 2012 prepared to bid aggressively against Sky for a whole range of football and other sports rights, apparently with equally deep pockets.  Guess what happened once there was competition?  The total paid for EPL live transmission rights went up.  Considerably.

Note, though, that what Sky and BT bid for is how much of the monopoly each enjoys.  They are not in direct competition, in the sense of broadcasting the same matches, as BBC1 and ITV used to be in the case of the FA Cup Final.

The only logical conclusion is that – given that live broadcast rights to the EPL have a definite value represented by the income they can generate – they were previously being sold too cheaply!  Who’d have thunk it?

Players on £50K a week 5 years ago should be a bit miffed.  They could have been on £60K!

Why are BT and Sky paying more than Sky alone did?

Is it a conspiracy against the consumer, as I once read a commentator claiming?  Apparently, he wrote (I think it was a “he”) EPL fans would now have to buy two subscriptions.  As someone who only buys one, I’d point out that, unless it’s your team playing, or a key fixture (in which case there’s always the option of going to the pub – a form of pay-per-view), it doesn’t make that much difference which match you watch.  You don’t know in advance whether a particular match is going to be exciting.  In other words, if you’re only going to watch 20 matches a season, there’s not much point paying for 200.

Are BT and Sky trying to buy or defend market share and therefore overpaying?  Well, there may be an element of this, but, first, from the point of view of the sport this is a good thing.  Second, companies can’t do this for ever.  BT is now established in the market.  I doubt they’d be paying so much for 3 years of broadcast rights if they didn’t think they’d make money on the deal.

Has BT unlocked market segments Sky wasn’t reaching?  Yes, I believe so.  I pay a small add-on to my broadband internet deal to receive BT’s sports channels, which I watch online, on a PC.  For the number of matches I actually manage to watch I can justify this cost, but not a Sky subscription (plus charges for set-top boxes and so on).

But it may also have been that Sky was paying less than the EPL transmission rights were worth and making excess profits as a result.  These have not necessarily all appeared as profits in its accounts, but may have also been reinvested, for example, in establishing a dominant position in the UK in the broadcast markets for other sports, such as cricket.

A market needs to be competitive to establish the real value of a product.  It’s in the long-term interests of sports themselves, I suggest, to maintain a competitive market for broadcast rights and not allow monopolies to develop.  Such monopolies might end up underpaying until a competitor eventually challenges them, as, I argue, appears to have happened for EPL live broadcast rights in the UK.

In addition, it’s in the long-term interests of sports for as many people as possible to be able to watch them.  This is best achieved by a number of broadcasters with different business models reaching different segments of the market.  It’s worth pointing out that sports administrators sometimes ensure that at least some events are “free-to-air” in order to showcase their product, for example the World Cup and, this summer, the UEFA Euro 2016 tournament (at least in most European countries).  This month’s FA Cup Final was broadcast on the BBC as well as BT Sport.

Differences Between Chess and Football

After that somewhat longer discussion of football than I had intended, let’s get back to chess.  As I’ve argued, even football could consider selling live transmission rights to multiple broadcasters, but are there differences between chess and football that make monopoly broadcasting a less attractive option in the case of chess?

I believe there are several relevant (though interrelated) differences:

First, live chess is typically broadcast globally, over the internet.  This means that the peculiarities of local markets are much less relevant.  For example, in the UK the playing field for broadcasting football was uneven when Sky entered the market.  Sky had to have content that was not available to the free-to-air channels ITV and the BBC or no-one would have subscribed; and it needed subscriptions to fund the cost of its satellites.  OK, there is at least one place where chess appears live on TV: Norway, home of the World Champion, Magnus Carlsen.  But given the general reliance on the internet for broadcasting chess, it makes sense to simply leave distribution to the broadcaster and not sell rights separately for different platforms (TV, internet, mobile devices etc).

Second, and related to the first point, the world has moved on in the quarter-century since the EPL broadcast model was established.  To some extent sports channel subscription revenues funded a dramatic increase in the number of channels available by enabling satellite and cable TV.  But with the growth of TV over the internet, the potential number of channels is vast, and the entry-cost considerably lower than in the past.  Broadcasters don’t need huge guaranteed revenues to justify their business models.  Furthermore, given the flexibility of advertising charging that is possible on the internet – essentially payment depends on the actual number of viewers – advertisers do not need historic broadcast data.  They’ll just pay for what they actually get.

Third, the commentary and presentation are a more significant part of the overall package in the case of chess than they are for football.  Personally, for normal tournament commentaries I’m as much interested in who’s commentating as in who’s playing.  I’d be much more likely to tune in if Maurice Ashley, Danny King or Peter Svidler are explaining a game.

Fourth, chess is still at the experimental stage, still trying to explore what works best in live transmission.  It doesn’t make a lot of sense to stifle this process by restricting the number of broadcasters to one.

Fifth, interest in chess is global.  Viewers might appreciate broadcasts in their own language as well as English.

Sixth, there are a limited number of marketable chess events.  To promote the game, as well as maximise revenues, it makes sense for these to be available to as many viewers as possible.

Seventh, I don’t believe there is a pot of gold waiting for someone able to sell advertising round chess events.  Compared to football, it’s always going to be a niche market.  Indeed, for many of the chess sites – Chess.com, Chess24.com, the Internet Chess Club (ICC), Playchess.com and so on – that broadcast (or might broadcast) elite chess events, covering live events is, unlike in the case of football, only part of their offering to visitors to their site (who may pay a subscription).  These sites also allow you to play online, host articles and instructional videos and so on.  Unlike the sports channels of Sky or BT, these sites do not face an existential threat if they lose live transmission rights.  They are therefore unlikely to pay huge sums for monopoly rights.  Collectively, though, they may pay a decent amount for something that is “nice to have”.  The resulting choice for viewers would also be beneficial to the game and raise broadcast standards.

For all these reasons it seems to me that it makes sense for chess events to be hosted by multiple broadcasters.

Price Discovery for Chess Broadcast Rights

Before considering the mechanics of an auction for chess broadcast rights, let’s first establish a principle: all broadcasters will pay the same price.

Live sports transmission rights are generally sold territorially.  That’s messy already – people cheat by importing satellite dishes from neighbouring countries and so on – but in the age of internet broadcasting it’s unworkable.

One might also consider language restrictions.  Why should a broadcaster be able to reach the whole Chinese population or the English, Russian, French or Spanish-speaking world for the same price as an Estonian native-language broadcaster?  Well, don’t worry about it.  The market will take care of things.  Broadcast auctions will be a repeat exercise and, if the price is low compared to the size of the market in a specific language, that will simply encourage more broadcasters.

What if some broadcasters are mainstream TV channels, in Norway, for example?  Again, don’t worry about it.  Just leave distribution up to the broadcasters.  TV channels are competing with internet broadcasters.  The only restriction should be a one-licence, one-broadcast rule.  If a broadcaster wants to transmit to multiple audiences, in different languages, say, or by producing different versions tailored to experts and the general public, then they have to buy two or more licences.

What would the broadcasters buy?  An automatic feed of the moves (top events nowadays use boards that automatically transmit the moves electronically) is obviously essential.  Since you don’t want numerous video cameras in the playing hall, the organisers (or a host broadcaster) would also provide video feeds of the players, often used as background to the commentary (generally in a separate window).  Post-game interviews or a press conference are also usual and these could be part of the package, as could clips from the recent innovation of a “confession-box”, where players can comment during their game.  Broadcasters would edit these video feeds together with their own commentary to produce their final product.

Let’s make one other thing clear about the objective of the auction process.  The goal is to maximise revenue.  This is not in conflict with the goal of maximising the online audience and thereby promoting the game.

So, how would the auction work?  How can we maximise revenue from an unknown number of broadcasters all paying the same price per transmission stream?

Here’s my suggestion.  The broadcasters would be required to submit a number of bids, each dependent on the total number of broadcasters.  That is, they would bid a certain amount to be the monopoly broadcaster, another amount (lower, assuming they act rationally!) to be one of two broadcasters, another amount to be one of three, and so on, up to some arbitrary number, for example to be one of more than ten broadcasters.

The chess rights holder – FIDE, for example – would simply select the option that generates the most revenue.  All successful bidders would pay what the lowest successful bidder offered to be one of the chosen number of broadcasters.  E.g., if 2 bidders are successful, one having bid $70,000 to be one of two broadcasters and the other $60,000, both would pay $60,000, for total revenue of $120,000.  In this case, no broadcaster would have bid more than $120,000 for exclusive rights, no three would each have bid more than $40,000 to be one of 3 broadcasters, no four more than $30,000 to be one of 4, and so on.

For example, it may be the case that the highest bid for a monopoly exceeds the revenue obtainable from any two bidders to be the only two broadcasters, from any three to be the only three, and so on.  In that case, one broadcaster would secure a monopoly.  Or, at the other extreme, revenue might be maximised by selling to 12 broadcasters: twelve times the 12th-highest bid to be one of “more than ten” broadcasters might exceed the best monopoly bid and the revenue available from any smaller number of broadcasters, and also exceed thirteen times what the unlucky 13th-highest bidder offered to be one of “more than ten” broadcasters, fourteen times what the 14th-highest bidder offered, twenty times what the 20th offered, and so on up to the total number of bidders.
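To make the mechanics concrete, here is a minimal sketch in Python of the selection rule described above.  It is my own illustration, not anything proposed by FIDE or Agon, and the broadcaster names and figures are made up: each broadcaster submits one bid per possible number of broadcasters, the rights holder picks the number that maximises revenue, and every successful bidder pays the lowest successful bid for that number.

```python
# Sketch of the proposed broadcast-rights auction (illustrative only).
# bids[name][k] is what broadcaster `name` offers to pay to be one of
# exactly k broadcasters (k = 1, 2, ... up to some agreed cap).

def best_allocation(bids, max_broadcasters):
    """Return (k, winners, price, revenue) maximising k * (k-th highest bid for tier k)."""
    best = None
    for k in range(1, max_broadcasters + 1):
        # Bidders for the k-broadcaster tier, ranked by how much they offered.
        tier = sorted(
            ((b_bids[k], name) for name, b_bids in bids.items() if k in b_bids),
            reverse=True,
        )
        if len(tier) < k:
            continue                    # not enough bidders for this tier
        price = tier[k - 1][0]          # everyone pays the lowest successful bid
        revenue = k * price
        winners = [name for _, name in tier[:k]]
        if best is None or revenue > best[3]:
            best = (k, winners, price, revenue)
    return best

# Toy example echoing the figures in the text (hypothetical sites and bids).
bids = {
    "SiteA": {1: 100_000, 2: 70_000, 3: 30_000},
    "SiteB": {1: 90_000, 2: 60_000, 3: 25_000},
    "SiteC": {2: 40_000, 3: 20_000},
}
print(best_allocation(bids, max_broadcasters=3))
# -> (2, ['SiteA', 'SiteB'], 60000, 120000)
```

Run on these toy bids, the rule picks two broadcasters, both paying $60,000, for total revenue of $120,000 – more than the best monopoly bid of $100,000, and more than the $60,000 obtainable from three broadcasters.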

It’s my guess that revenue will be maximised for a World Championship match by a relatively large number of bidders.  And the crucial point is that the more broadcasters, the larger the audience and the greater the choice for viewers.

 

April 8, 2016

Missing Mass, the Absent Planet X and Yet More Comments on Trujillo & Sheppard and Batygin & Brown

Filed under: Media, Orbital dynamics, Physics, Science and the media — Tim Joslin @ 5:33 pm

Since my two previous posts attempting to debunk the idea of “Planet X” – a supposed large planet in the outer reaches of the Solar System – the occasional media reference has informed me that teams of researchers and various items of multi-million pound equipment are apparently employed chasing the wild Planet X goose.  Indeed, as I go to press, Scientific American has just emailed me a link to an article reporting the latest developments in the search.  Then, yesterday, New Scientist reported on speculation as to where Planet X (or “Planet Nine” as they call it) might have come from.  A paper New Scientist refer to has a bearing on my own conclusions so I’m adding a note about it at the end of this piece.

I had some further thoughts some weeks ago, and it’s time I cleared away a loose end by writing them up.

My Original Proposed Explanation

Let’s recap.  The story so far is that, based on certain characteristics of the orbits of Sedna and a dozen or so other distant minor planets – often referred to as trans-Neptunian objects or TNOs – several groups of researchers have proposed a “Planet X” or sometimes “Planet Nine”, Pluto, the ninth planet for a certain generation, having been relegated to mere “minor planet” status. As I consider the demotion of Pluto to be utterly ridiculous, I’m going to stick to the terminology “Planet X” for the hypothetical distant planet.  You can take “X” to be the Roman numeral if you want.

I was immediately sceptical about the existence of Planet X.  Some other explanation for the TNO orbits seemed more probable to me.  Planet X would be exceptional, compared to the eight (or nine) known planets, not only in its distance from the Sun, but also in the plane of its orbit.  To explain the strange features of the orbits of the minor planets by the known “Kozai mechanism” of gravitational “shepherding” of smaller objects by a large body, Planet X would have to orbit perpendicular to the plane of the Solar System, within a few degrees of which the planes of the orbits of all the other planets lie.

Some weeks ago then, in my first post on the subject, I reviewed what had been written on the subject of Planet X.  I think now that I was perhaps overly influenced by the Scientific American article on the subject and considered much the most important aspect of the minor planets’ orbits to be their near 0˚ arguments of perihelion (AOPs).  That is, they cross the plane of the Solar System roughly when they are nearest the Sun.

On reflection, I was perhaps wrong to be so dismissive of the eccentricity of the minor planets’ orbits.  All orbits are eccentric, I pointed out.  But the minor planets’ orbits are really quite eccentric.  There may be a cause of this eccentricity.

I also think it is important that the minor planets’ orbits are highly inclined to the plane of the Solar System compared to those of the inner planets, but they are nevertheless less inclined than random, i.e. in most cases somewhat less than 30˚.

I went on to suggest that perhaps something (other than Planet X) was pulling the minor planets towards the plane of the Solar System.  I suggested it was simply the inner planets, since there would be a component of their gravitational attraction on the minor planets perpendicular to the plane of the Solar System.  I included a diagram, which I reproduce once again:

[Figure: 160205 Planet X slash 9 – the gravitational pull on a distant minor planet resolved into a component towards the Solar System’s centre of gravity and a component towards its plane]

In my second post about Planet X a few days later, I looked more closely at the original scientific papers, in particular those by Trujillo & Sheppard and Batygin & Brown.  I wondered why my suggestion had been rejected, albeit implicitly.  To cut a long story short, the only evidence that the minor planet orbits can’t be explained by the gravity of the inner eight planets (and the Sun) is the computer modelling described in the paper by Trujillo & Sheppard.  I wondered if this could have gone wrong somehow.

Problems with Naive Orbital Flattening

Let’s term my original explanation “Naive Orbital Flattening”.  There are some issues with it:

First, if the minor planets are “falling” towards the plane of the Solar System, as in my figure, as well as orbiting its centre of gravity, they would overshoot and “bounce”.  They would have no way of losing the momentum towards the plane of the Solar System, so, after reaching an inclination of 0˚, their inclination would increase again on the opposite side of the plane as it were (I say “as it were” since the minor planets would cross the plane of the Solar System twice on each orbit, of course).

Second, mulling the matter over, there is no reason why orbital flattening wouldn’t have been detected by computer modelling.  Actually, I tell a lie; there is a reason.  The reason is that the process would be too slow.  Far from bouncing, it turns out that the minor planets would not have had time for their orbital inclinations to decline to 0˚ even once.  I did some back-of-the-envelope calculations – several times in fact – and if you imagine the minor planets falling towards the plane of the Solar System under the component of the inner planets’ gravitational pull perpendicular to the plane and give yourself 4 billion years, the minor planets would only have fallen a small fraction of the necessary distance!

Third, we have this issue of the AOP.  The AOPs of the inner planets precess because of the gravitational effect of the other planets as they orbit the Sun (with some tweaks arising from relativity).  It’s necessary to explain why this precession wouldn’t occur for the minor planets.

Missing Mass

However you look at it, explaining the orbits of the minor planets must involve finding some mass in the Solar System!  One possible explanation is Planet X.  But could there be another source of missing mass?

Well, trying to rescue my theory, I was reading about the history of the Solar System.  As you do.

It turns out that the Kuiper Belt, beyond Neptune, now has only a fraction of the mass of the Earth.  At one time it must have had at least 30 times the mass of the Earth, in order for the large objects we see today to form at all.  Trouble is, the consensus is that all that stuff either spiralled into the Sun, or was driven into interstellar space, depending on particle size, by the effect of solar radiation and the solar wind.

The science doesn’t seem done and dusted, however.  Perhaps there is more mass in the plane of the Solar System than is currently supposed.  Stop Press: Thanks to New Scientist I’m alerted to a paper that suggests exactly that – see the Addendum at the end of this piece.

It seems to me a likely place for particles to end up is around the heliopause, about 125 AU (i.e. 125 times the radius of the Earth’s orbit) from the Sun, because this is where the solar wind collides with the interstellar medium.  You can imagine that forces pushing particles – of a certain range of sizes – out of the Solar System might at this point balance those pushing them back in.

Sophisticated Orbital Flattening

OK, there’s a big “if”, but if there is somewhat more mass – the remains of the protoplanetary disc – in the plane of the Solar System than is generally assumed, then it might be possible to explain the orbits of Sedna and the other TNOs quite neatly.  All we have to assume is that the mass is concentrated in the inner part of the TNOs’ orbits, let’s say from the Kuiper Belt through the heliopause at ~125 AU.

First, the AOPs of around 0˚ are even easier to explain than by the effects of the inner planets.  As for the inner planets, the mass would have greatest effect on the TNOs when they are nearest perihelion, so would perturb the orbits most then, as discussed in my previous posts.  The improvement in the explanation is that there is no need to worry about AOP precession.  Because the mass is in a disc, and therefore distributed relatively evenly around the Sun, its rotation has no gravitational effect on the minor planets.  And it is the rotation of the other planets that causes each planet’s AOP precession.

Second, we need to observe that there is a trade-off between orbital inclination and eccentricity as in the Kozai effect, due to conservation of the component of angular momentum perpendicular to the plane of the Solar System.  Thus, as the inclination of the TNOs’ orbits is eroded, so their orbits become more eccentric.  This could have one of three possible consequences:

  • it could be that, as I concluded for the effects of the inner planets alone, there has not been time for the TNOs’ orbits to flatten to 0˚ inclination in the 4 billion or so years since the formation of the Solar System.
  • or, it could be that the TNOs we observe are doomed in the sense that their orbits will be perturbed by interactions with the planets if they stray further into the inner Solar System – assuming they don’t actually collide with one of the inner planets – and we don’t observe TNOs that have already been affected in this way.
  • or, it could be that the TNOs’ orbits eventually reach an inclination of 0˚ and “bounce” back into more inclined orbits.  The point is that the eccentricity of the orbits of such bodies would decline again, so we may not observe them so easily, since the objects are so far away we can only see them when they are closest to the Sun.

Which of these possibilities actually occurs would depend on the amount and distribution of the proposed additional mass I am suggesting may exist in the plane of the Solar System.  My suspicion is that the orbital flattening process would be very slow, but it is possible different objects are affected in different ways, depending on initial conditions, such as their distance from the Sun.
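As a rough illustration of the inclination–eccentricity trade-off invoked above, here is a short sketch (mine, not from any of the papers) using the standard Kozai-type conserved quantity – the component of orbital angular momentum perpendicular to the plane, proportional at fixed semimajor axis to √(1−e²)·cos i – with made-up, Sedna-like starting values.

```python
from math import cos, radians, sqrt

# Kozai-type secular evolution conserves the component of orbital angular
# momentum perpendicular to the plane, L_z ∝ sqrt(1 - e^2) * cos(i), while
# the semimajor axis stays fixed.  So if the inclination i is eroded, the
# eccentricity e has to grow to compensate (and vice versa).

def eccentricity_after_flattening(e0, i0_deg, i1_deg):
    """Eccentricity once inclination has changed from i0 to i1, at fixed a."""
    lz = sqrt(1 - e0**2) * cos(radians(i0_deg))
    return sqrt(1 - (lz / cos(radians(i1_deg)))**2)

# Illustrative (not measured) starting values: e = 0.85, i = 20 degrees.
for i1 in (20, 15, 10, 5, 0):
    print(i1, round(eccentricity_after_flattening(0.85, 20, i1), 3))
```

The numbers are only illustrative, but they show the direction of the effect: as the inclination is eroded, the eccentricity is pumped up, which at least points the same way as the very eccentric TNO orbits we actually observe.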

Now I really will write to the scientists to ask whether this is plausible.  Adding some mass in the plane of the Solar System to Mercury symplectic integrator modelling would indicate whether or not Sophisticated Orbital Flattening is a viable hypothesis.

Addendum: I mentioned towards the start of this post that the search continues for Planet X.  I can’t help remarking that this doesn’t strike me as good science.  What research should be trying to do is explain the observations, i.e. the characteristics of the minor planets’ orbits, not trying to explain Planet X, which is as yet merely an unproven hypothetical explanation of those observations.  Anyway, this week’s New Scientist notes that:

“…the planet could have formed where we find it now. Although some have speculated that there wouldn’t be enough material in the outer solar system, Kenyon found that there could be enough icy pebbles to form something as small as Planet Nine in a couple of hundred million years (arxiv.org/abs/1603.08008).”

Aha!  Needless to say I followed the link provided by New Scientist and it turns out that the paper by Kenyon & Bromley does indeed suggest a mechanism for a debris disc at the right sort of distance in the Solar System.  They focus, though, on modelling how Planet X might have formed.  They find that it could exist, if the disc had the right characteristics, but it also may not have done.  It all depends on the “oligarchs” (seed planets) and the tendency of the debris to break up in collisions.  This is from their Summary (my explanatory comment in square brackets):

We use a suite of coagulation calculations to isolate paths for in situ production of super-Earth mass planets at 250–750 AU around solar-type stars. These paths begin with a massive ring, M0 >~ 15 M⊕ [i.e. more than 15 times the mass of the Earth], composed of strong pebbles, r0 ≈ 1 cm, and a few large oligarchs, r ≈ 100 km. When these systems contain 1–10 oligarchs, two phases of runaway growth yield super-Earth mass planets in 100–200 Myr at 250 AU and 1–2 Gyr at 750 AU. Large numbers of oligarchs stir up the pebbles and initiate a collisional cascade which prevents the growth of super-Earths. For any number of oligarchs, systems of weak pebbles are also incapable of producing a super-Earth mass planet in 10 Gyr.

They don’t consider the possibility that the disc itself could explain the orbits of the minor planets.  And may indeed be where they originated in the first place.  In fact, the very existence of the minor planets could suggest there were too many “oligarchs” for a “super-Earth” to form.  Hmm!

 

February 13, 2016

Is Planet X Needed? – Further Comments on Trujillo & Sheppard and Batygin & Brown

Filed under: Media, Orbital dynamics, Physics, Science and the media — Tim Joslin @ 7:42 pm

In my last post, Does (Brown and Batygin’s) Planet 9 (or Planet X) Exist?, I ignored the media squall which accompanied the publication on 20th January 2016 of a paper in The Astronomical Journal, Evidence for a Distant Giant Planet in the Solar System, by Konstantin Batygin and Michael E Brown, and discussed the coverage of the issue in New Scientist (here [paywall] and here) and in Scientific American (here [paywall]).

The idea that there may be a Planet X is not original to the Batygin and Brown paper.  It was also proposed in particular by Chadwick A. Trujillo and Scott S. Sheppard in a Nature paper A Sedna-like body with a perihelion of 80 astronomical units dated 27th March 2014.  The New Scientist and Scientific American feature articles were not informed by Batygin and Brown.  Scientific American explicitly referenced Trujillo and Sheppard (as well as a paper by C and R de la Fuente Marcos).

A key part of the evidence for a “Planet X” is that for the orbits of a number of trans-Neptunian objects (TNOs) – objects outside the orbit of Neptune – including the minor planet Sedna, the argument of perihelion is near 0˚.  That is, they cross the plane of the planets roughly when they are closest to the Sun.  The suggestion is that this is not coincidental and can only be explained by the action of an undiscovered planet, perhaps 10 times the mass of the Earth, lurking out there way beyond Neptune.  An old idea, the “Kozai mechanism”, is invoked to explain how Planet X could be controlling the TNOs, as noted, for example, by C and R de la Fuente Marcos in their paper Extreme trans-Neptunian objects and the Kozai mechanism: signalling the presence of trans-Plutonian planets.

I proposed a simpler explanation for the key finding.  My argument is based on the fact that the mass of the inner Solar System is dispersed from its centre of gravity, in particular because of the existence of the planets. Consequently, the gravitational force acting on the distant minor planets can be resolved into a component towards the centre of gravity of the Solar System, which keeps them in orbit, and, when averaged over time and because their orbits are inclined to the plane of the Solar System, another component at 90˚ to the first, towards the plane of the orbits of the eight major planets:

[Figure: 160205 Planet X slash 9 – schematic of the two force components acting on an inclined minor planet]

My suggestion is that this second component will tend gradually to reduce the inclination of the minor planets’ orbits. Furthermore, the force towards the plane of the Solar System will be strongest when the minor planets are at perihelion on their eccentric orbits, not just in absolute terms, but also when averaged over time, taking into account varying orbital velocity as described by Kepler. This should eventually create orbits with an argument of perihelion near 0˚, as observed.
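
For what it’s worth, here’s a minimal numerical sketch of that decomposition (my own illustration, not taken from any of the papers).  It smears each giant planet into a ring of point masses – the usual secular “massive wire” approximation – puts a Sedna-like test point 12˚ above the plane at roughly perihelion distance, and splits the total pull into a component towards the Sun and whatever is left over.  The planetary masses and distances are standard values; the test-point geometry is just an assumption for illustration.

```python
# A rough sketch, not from the post or the papers: resolve the pull of the
# giant planets on a distant, inclined body into a component towards the Sun
# and a left-over component, and check which way the latter points.
import numpy as np

G = 4 * np.pi**2                      # AU^3 / (solar mass * yr^2)
planets = [                           # (mass in solar masses, semi-major axis in AU)
    (9.54e-4, 5.20),                  # Jupiter
    (2.86e-4, 9.58),                  # Saturn
    (4.37e-5, 19.2),                  # Uranus
    (5.15e-5, 30.1),                  # Neptune
]

def total_acceleration(pos, n_segments=3600):
    """Acceleration at `pos` from the Sun (mass 1, at the origin) plus each
    planet's mass spread evenly around a circular ring in the z = 0 plane."""
    acc = -G * pos / np.linalg.norm(pos)**3
    phi = np.linspace(0.0, 2*np.pi, n_segments, endpoint=False)
    for m, a in planets:
        ring = np.stack([a*np.cos(phi), a*np.sin(phi), np.zeros_like(phi)], axis=1)
        d = ring - pos                # vectors from the test point to each ring segment
        acc += (G*m/n_segments) * (d / np.linalg.norm(d, axis=1, keepdims=True)**3).sum(axis=0)
    return acc

# Sedna-like test point: ~76 AU from the Sun, 12 degrees above the plane.
r, inc = 76.0, np.radians(12.0)
pos = np.array([r*np.cos(inc), 0.0, r*np.sin(inc)])

acc = total_acceleration(pos)
towards_centre = -pos / np.linalg.norm(pos)       # unit vector, roughly towards the barycentre
radial_part = np.dot(acc, towards_centre) * towards_centre
leftover = acc - radial_part

print("component towards the centre:", np.dot(acc, towards_centre))
print("left-over component:", leftover)           # its z part is negative, i.e. towards the plane
```

The left-over component comes out pointing towards the plane of the planets, which is all the schematic is meant to convey; it says nothing, of course, about what survives once you average properly around whole orbits, which is the real question.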

Has such an effect been taken into account by those proposing a Planet X?  The purpose of this second post on the topic is to look a little more closely at how the two main papers, Batygin & Brown and Trujillo & Sheppard tested for this possibility.

Batygin & Brown

The paper by Batygin and Brown does not document any original research that would have shown AOPs tending towards 0˚ without a Planet X by the mechanism I suggest.  Here’s what they say:

“To motivate the plausibility of an unseen body as a means of explaining the data, consider the following analytic calculation. In accord with the selection procedure outlined in the preceding section, envisage a test particle that resides on an orbit whose perihelion lies well outside Neptune’s orbit, such that close encounters between the bodies do not occur. Additionally, assume that the test particle’s orbital period is not commensurate (in any meaningful low-order sense—e.g., Nesvorný & Roig 2001) with the Keplerian motion of the giant planets.

The long-term dynamical behavior of such an object can be described within the framework of secular perturbation theory (Kaula 1964). Employing Gauss’s averaging method (see Ch. 7 of Murray & Dermott 1999; Touma et al. 2009), we can replace the orbits of the giant planets with massive wires and consider long-term evolution of the test particle under the associated torques. To quadrupole order in planet–particle semimajor axis ratio, the Hamiltonian that governs the planar dynamics of the test particle is [as close as I can get the symbols to the original]:

H = -(1/4) (GM/a) (1 - e²)^(-3/2) Σ_{i=1}^{4} (m_i a_i²) / (M a²)

In the above expression, G is the gravitational constant, M is the mass of the Sun, mi and ai are the masses and semimajor axes of the giant planets, while a and e are the test particle’s semimajor axis and eccentricity, respectively.

Equation (1) is independent of the orbital angles, and thus implies (by application of Hamilton’s equations) apsidal precession at constant eccentricity… in absence of additional effects, the observed alignment of the perihelia could not persist indefinitely, owing to differential apsidal precession.” [my stress].

After staring at this for a bit I noticed that the equation does not include the inclination of the orbit of the test particle, just its semimajor axis (i.e. mean distance from the Sun) and eccentricity.  Then I saw that the text also only refers to the “planar dynamics of the test particle”, i.e. its behaviour in two, not three dimensions.

Later in the paper Batygin and Brown note (in relation to their modelling in general, not just what I shall call the “null case” of no Planet X) that:

“…an adequate account for the data requires the reproduction of grouping in not only the degree of freedom related to the eccentricity and the longitude of perihelion, but also that related to the inclination and the longitude of ascending node. Ultimately, in order to determine if such a confinement is achievable within the framework of the proposed perturbation model, numerical simulations akin to those reported above must be carried out, abandoning the assumption of coplanarity.”

I can’t say I found Batygin & Brown very easy to follow, but it’s fairly clear that they haven’t modelled the Solar System in a fully 3-dimensional manner.

Trujillo & Sheppard

If we have to discount Batygin & Brown, then the only true test of the null case is that in Trujillo & Sheppard.  Last time I quoted the relevant sentence:

“By numerically simulating the known mass in the solar system on the inner Oort cloud objects, we confirmed that [they] should have random ω [i.e. AOP]… This suggests that a massive outer Solar System perturber may exist and [sic, meaning “which”, perhaps] restricts ω for the inner Oort cloud objects.”

I didn’t mention that they then referred to the Methods section at the end of their paper.  Here’s what they say there (and I’m having to type this in because I only have a paper copy! – so much for scientific and technological progress!):

“Dynamical simulation. We used the Mercury integrator to simulate the long-term behaviour of ω for the Inner Oort cloud objects and objects with semi-major axes greater than 150AU and perihelia greater than Neptune.  The goal of this simulation was to attempt to explain the ω clustering.  The simulation shows that for the currently known mass in the Solar System, ω for all objects circulates on short and differing timescales dependent on the semi-major axis and perihelion (for example, 1,300 Myr, 500 Myr, 100 Myr and 650 Myr for Sedna, 2012 VP113, 2000 CR105 and 2010 GB174, respectively).”

In other words their model reproduced the “apsidal precession” proposed in Batygin & Brown, but since Trujillo & Sheppard refer to ω, the implication is that their simulation was in 3 dimensions and not “planar”.

However, could the model used by Trujillo and Sheppard have somehow not correctly captured the interaction between the TNOs and the inner planets?  The possibilities range from apsidal precession being programmed in to the Mercury package (stranger things have happened!) to something more subtle, resulting from the simplifications necessary for Mercury to model Solar System dynamics.

Maybe I’d better pluck up courage and ask Trujillo and Sheppard my stupid question!  Of course, the effect I propose would have to dominate apsidal precession, but that’s definitely possible when apsidal precession is on a timescale of 100s of millions of years, as found by Trujillo and Sheppard.
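
In the meantime, here’s a back-of-the-envelope check (mine, not theirs) that quadrupole-order secular theory does give circulation timescales of that order.  Applying Hamilton’s equations to a Hamiltonian of the form Batygin & Brown quote gives a precession rate dϖ/dt ≈ (3/4) n Σ(m_i a_i²)/(M a²) (1 − e²)^(-2), where n is the test particle’s mean motion.  The Sedna elements below are approximate published values.

```python
# A rough order-of-magnitude estimate (my own) of the apsidal precession
# timescale implied by quadrupole-order secular theory, for a Sedna-like orbit.
import math

planets = [(9.54e-4, 5.20), (2.86e-4, 9.58), (4.37e-5, 19.2), (5.15e-5, 30.1)]
sum_ma2 = sum(m * ai**2 for m, ai in planets)   # Σ m_i a_i², in solar masses x AU²

a, e = 506.0, 0.85                              # Sedna, roughly
n = 2 * math.pi / a**1.5                        # mean motion in rad/yr (P = a^1.5 yr for a in AU)

# (3/4) n Σ(m_i a_i²)/(M a²) (1 - e²)^-2, with the Sun's mass M = 1 in these units
precession_rate = 0.75 * n * sum_ma2 / a**2 / (1 - e**2)**2     # rad/yr
period_Myr = 2 * math.pi / precession_rate / 1e6

print(f"apsidal precession period ~ {period_Myr:.0f} Myr")      # a couple of thousand Myr
```

That comes out at a couple of thousand Myr – the same ballpark (within a factor of a few, which is all a quadrupole-only estimate of ϖ rather than ω can be expected to manage) as the ~1,300 Myr Trujillo & Sheppard report for Sedna.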

February 5, 2016

Does (Brown and Batygin’s) Planet 9 (or Planet X) Exist?

Filed under: Media, Orbital dynamics, Physics, Science and the media — Tim Joslin @ 7:23 pm

What exactly is the evidence that there may be a “Super-Earth” lurking in the outer reaches of the Solar System?  Accounts differ, so I’ll review what I’ve read (ignoring the mainstream media storm around 20th January!), to try to minimise confusion.

New Scientist

If you read your New Scientist a couple of weeks ago, you’ll probably have seen the cover-story feature article Last Great Mysteries of the Solar System, one of which was Is There a Planet X? [paywall for full article – if, that is, unlike me, you can even get your subscription number to give you access].  The article discussed the dwarf planets Sedna and 2012VP113.  The orbits of these planetoids – and another 10 or so not quite so distant bodies – according to New Scientist and the leaders of the teams that discovered Sedna and 2012VP113, Mike Brown and Scott Sheppard, respectively, could indicate “there is something else out there”.

Apparently, says NS:

“[the orbits of Sedna and 2012VP113] can’t be explained by our current understanding of the solar system…  Elliptical orbits happen when one celestial object is pushed around by the gravity of another.  But both Sedna and 2012VP113 are too far away from the solar system’s giants – Jupiter, Saturn, Uranus and Neptune – to be influenced.  Something else must be stirring the pot.”

“Elliptical orbits happen when one celestial object is pushed around by the gravity of another.”  This is nonsense.  Elliptical orbits are quite usual beyond the 8 planets (i.e. for “trans-Neptunian objects”), which is the region we’re talking about.  The fact that the orbits of Sedna and 2012VP113 are elliptical is not the reason to suspect there may be another decent-sized planet way out beyond Neptune (and little Pluto).

I see that the online version of New Scientist’s article Is There a Planet X? has a strap-line:

“Wobbles in the orbit of two distant dwarf planets are reviving the idea of a planet hidden in our outer solar system.”

Guess what?  The supposed evidence for Planet X is nothing to do with “wobbles” either.

The New Scientist article was one of several near-simultaneous publications and in fact the online version was updated, the same day, 20th January, with a note:

Update, 20 January: Mike Brown and Konstantin Batygin say that they have found evidence of “Planet Nine” from its effect on other bodies orbiting far from the sun.

Exciting.  Or it would have been, had I not been reading the print version.  The link is to another New Scientist article: Hints that ‘Planet Nine’ may exist on edge of our solar system [no paywall]. “Planet Nine”?  It was “Planet X” a minute ago.

Referencing the latest paper on the subject, by Brown and Batygin, this new online NS article notes that:

“Brown and others have continued to explore the Kuiper belt and have discovered many small bodies. One called 2012 VP113, which was discovered in 2014, raised the possibility of a large, distant planet, after astronomers realised its orbit was strangely aligned with a group of other objects. Now Brown and Batygin have studied these orbits in detail and found that six follow elliptical orbits that point in the same direction and are similarly tilted away from the plane of the solar system.

‘It’s almost like having six hands on a clock all moving at different rates, and when you happen to look up, they’re all in exactly the same place,’ said Brown in a press release announcing the discovery. The odds of it happening randomly are just 0.007 per cent. ‘So we thought something else must be shaping these orbits.’

According to the pair’s simulations, that something is a planet that orbits on the opposite side of the sun to the six smaller bodies. Gravitational resonance between this Planet Nine and the rest keep everything in order. The planet’s high, elongated orbit keeps it at least 200 times further away from the sun than Earth, and it would take between 10,000 and 20,000 Earth years just to complete a single orbit.”

Brown and Batygin claim various similarities in the orbits of the trans-Neptunian objects.  But they don’t stress what initially sparked the idea that “Planet Nine” might be influencing them.

Scientific American and The Argument of Perihelion

Luckily, by the time I saw the 23rd January New Scientist, I’d already read The Search for Planet X [paywall again, sorry] cover story in the February 2016 (who says time travel is impossible?) issue of Scientific American, so I knew that – at least prior to the Brown and Batygin paper – what was considered most significant about the trans-Neptunian objects was that they all had similar arguments of perihelion (AOPs), specifically around 0˚.  That is, they cross the plane of the planets roughly at the same time as they are closest to the Sun (perihelion).  The 8 (sorry, Pluto) planets orbit roughly in a similar plane; these more distant objects are somewhat more inclined to that plane.

Scientific American reports the findings by two groups of researchers, citing a paper by each.  One is a letter to Nature, titled A Sedna-like body with a perihelion of 80 astronomical units, by Chadwick Trujillo and Scott Sheppard [serious paywall, sorry], which announced the discovery of 2012 VP113 and arguably started the whole Planet X/9/Nine furore.  They quote Sheppard: “Normally, you would expect the arguments of perihelion to have been randomized over the life of the solar system.”

To cut to the chase, I think that is a suspect assumption.  I think there may be reasons for AOPs of bodies in inclined orbits to tend towards 0˚, exactly as observed.

The Scientific Papers

The fact that the argument of perihelion is key to the “evidence” for Planet X is clear from the three peer-reviewed papers mentioned so far.

Trujillo and Sheppard [paywall, still] say that:

“By numerically simulating the known mass in the solar system on the inner Oort cloud objects, we confirmed that [they] should have random ω [i.e. AOP]… This suggests that a massive outer Solar System perturber may exist and [sic, meaning “which”, perhaps] restricts ω for the inner Oort cloud objects.”

The Abstract of the other paper referenced by Scientific American, Extreme trans-Neptunian objects and the Kozai mechanism: signalling the presence of trans-Plutonian planets, by C and R de la Fuente Marcos, begins:

“The existence of an outer planet beyond Pluto has been a matter of debate for decades and the recent discovery of 2012 VP113 has just revived the interest for this controversial topic. This Sedna-like object has the most distant perihelion of any known minor planet and the value of its argument of perihelion is close to 0 degrees. This property appears to be shared by almost all known asteroids with semimajor axis greater than 150 au and perihelion greater than 30 au (the extreme trans-Neptunian objects or ETNOs), and this fact has been interpreted as evidence for the existence of a super-Earth at 250 au.”

And the recent paper by Konstantin Batygin and Michael E Brown, Evidence for a Distant Giant Planet in the Solar System, starts:

“Recent analyses have shown that distant orbits within the scattered disk population of the Kuiper Belt exhibit an unexpected clustering in their respective arguments of perihelion. While several hypotheses have been put forward to explain this alignment, to date, a theoretical model that can successfully account for the observations remains elusive.”

So, whilst Batygin and Brown claim other similarities in the orbits of the trans-Neptunian objects, the key peculiarity is the alignment of AOPs around 0˚.

Is There a Simpler Explanation for ~0˚ AOPs?

Let’s consider first why the planets orbit in approximately the same plane, and why the Galaxy is also fairly flat.  The key is the conservation of angular momentum.  The overall rotation within a system about its centre of gravity must be conserved, and the total angular momentum defines a single preferred plane.  Any orbits above and below that plane will eventually cancel each other out, through collisions (as in Saturn’s rings) and/or gravitational interactions (as when an elliptical galaxy gradually becomes a spiral galaxy).  Here’s an entertaining explanation of what happens.

This process is still in progress for the trans-Neptunian objects, I suggest, since they are inclined by up to around 30˚ – Sedna’s inclination is 11.9˚ for example – which is much more than the planets, all of which are inclined within a few degrees of the plane of the Solar System.  What’s happening is that the TNOs are all being pulled constantly towards the plane of the Solar System, as I’ve tried to show in this schematic:

[Figure: 160205 Planet X slash 9 – schematic of the two force components acting on an inclined minor planet]

Now, here comes the key point: because the mass of the Solar System is spread out, albeit only by a small amount, given that there are planets and not just a Sun, the gravitational pull on each TNO is disproportionately greater when it is nearer the Sun (closer to perihelion) than when it is further away. There’s more of a tendency for the TNO (or any eccentrically orbiting body) to gravitate towards the plane of the system when it’s nearer perihelion.

This is true, I believe, even after allowing for Kepler’s 2nd Law, i.e. that the TNO spends less time closer to the Sun.  Kepler’s 2nd Law implies that the time an orbiting body spends sweeping out a given angle of its orbit is proportional to the square of its distance from the centre of gravity of the system, which you’d think might cancel out the inverse square law of gravity.  But the mass of the Solar System is not all at the centre of gravity.  The nearest approach of Neptune to Sedna, for example, when the latter is at perihelion is around 46AU (astronomical units, the radius of Earth’s orbit) but around 900AU when Sedna is at aphelion.
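
To put rough numbers on that (my arithmetic, using published values of roughly 76AU and 936AU for Sedna’s perihelion and aphelion and 30AU for Neptune’s orbit):

```python
# A crude end-point comparison, not a proper orbital average: does Kepler's
# r-squared time weighting cancel the inverse-square pull from Neptune?
q, Q = 76.0, 936.0            # Sedna's perihelion and aphelion distances, AU (approximate)
a_nep = 30.1                  # Neptune's semi-major axis, AU

d_peri = q - a_nep            # closest Neptune can get when Sedna is at perihelion (~46 AU)
d_aph = Q - a_nep             # closest Neptune can get when Sedna is at aphelion (~906 AU)

f_peri, f_aph = 1 / d_peri**2, 1 / d_aph**2    # Neptune's pull, relative units
w_peri, w_aph = q**2, Q**2                     # time spent per unit swept angle (Kepler's 2nd Law)

print((f_peri * w_peri) / (f_aph * w_aph))     # ~2.6, i.e. the perihelion end still wins
```

Because the relevant distance is to Neptune rather than to the centre of gravity, the r² time weighting does not cancel the inverse square law, and the perihelion end of the orbit still dominates.  It’s only an end-point comparison, of course, not a secular average.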

The most stable orbit for a TNO is therefore when it crosses the plane of the Solar System at perihelion, that is, when its argument of perihelion (AOP) is 0˚.  Over many millions of years the AOPs of the orbits of Sedna and co. have therefore tended to approach 0˚.

I suggest it is not necessary to invoke a “Super-Earth” to explain the peculiarly aligned arguments of perihelion of the trans-Neptunian objects.

January 23, 2016

Greater Interannual Seasonal Temperature Variability in a Warming World?

Filed under: Agriculture, Global climate trends, Global warming, Science, UK climate trends — Tim Joslin @ 5:42 pm

You attempt to use the correct scientific jargon and then realise that sometimes the English language is insufficiently precise.  What I mean by the title is to ask the important question as to whether, as global warming proceeds, we will see a greater variation between summers, winters, springs and autumns from year to year.  Or not.

My previous two posts used Central England Temperature (CET) record data to show how exceptional the temperatures were in December in 2010 (cold) and 2015 (warm) and highlighted two other recent exceptional months: March 2013 (cold) and April 2011 (warm).  I speculated that perhaps, relative to current mean temperatures for a given period, in these examples a calendar month, both hot and cold extreme weather conditions are becoming more extreme.

What prompted today’s follow-up post was an update from the venerable James Hansen, Global Temperature in 2015, to which a link appeared in my Inbox a few days ago.  This short paper documents how 2015 was by a fair margin globally the warmest year on record.  But it also includes a very interesting figure which seems to show increasingly greater variability in Northern Hemisphere summers and winters:

[Figure: 160120 More variable summer and winter temps – Hansen’s Northern Hemisphere summer and winter temperature distributions, with my annotation]

I’ve added a bit of annotation to emphasise that the bell curves for both summer and winter have widened and flattened. That is, not only have the mean summer and winter temperatures increased, so has the variance or standard deviation, to use the technical terms.

If true, this would be very concerning. If you’re trying to grow food and stuff, for example, it means you have to worry about a greater range of possible conditions from year to year than before, not just that it’s getting warmer.

I was about to suggest it might be time to panic. But then it occurred to me that there must surely have been some debate about this issue. And sure enough Google reveals that Hansen has written about variability before, and more explicitly, such as in a paper in 2012, titled Perception of climate change, which is free to download.  Hansen et al note “greater temperature variability in 1981-2010” compared to 1951-80.

Trouble is Hansen et al, 2012 was vigorously rebutted by a couple of Harvard boffins.  Andrew Rhines and Peter Huybers wrote to the Proceedings of the National Academy of Sciences, where Hansen et al had published their paper, claiming that Frequent summer temperature extremes reflect changes in the mean, not the variance [my stress].  They attributed Hansen’s flattening bell curves to various statistical effects and asserted that mean summer and winter temperatures had increased, but not the standard deviation, nor therefore the probability of relative extremes.
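
The calculation at issue is at least easy to state (a sketch of the comparison, not of either side’s actual methodology; the two baseline periods are the ones Hansen et al use, and I’m assuming the input is a series of seasonal-mean temperature anomalies indexed by year):

```python
# A minimal sketch: has the spread of seasonal temperatures widened as well as
# the mean shifting?  `anomalies` is assumed to be a pandas Series of, say,
# June-July-August mean temperature anomalies indexed by year.
import pandas as pd

def compare_periods(anomalies: pd.Series) -> pd.DataFrame:
    base = anomalies.loc[1951:1980]       # Hansen et al's reference period
    recent = anomalies.loc[1981:2010]
    return pd.DataFrame({"mean": [base.mean(), recent.mean()],
                         "std":  [base.std(),  recent.std()]},
                        index=["1951-1980", "1981-2010"])

# Hansen et al find a larger standard deviation in the later period; Rhines &
# Huybers argue the apparent widening is largely a statistical artefact of how
# the anomalies are normalised, hence the dispute.
```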

That left me pretty flummoxed, especially when I found that, in Nature, another bunch of eminent climate scientists also claimed, No increase in global temperature variability despite changing regional patterns (Huntingford et al, Nature 500, p327–330, 15 August 2013).

Just so we’re clear, what the guys are saying is that as global warming proceeds – not even when we reach some kind of steady state – temperatures will just on average be shifted up by a certain amount.

I have to say I find this very difficult to believe, and indeed incompatible with the fact that some parts of the world (continental interiors, say) warm faster than others (deep oceans) and the observation that the wind blows in different directions at different times!

Furthermore we’ve just seen, between Decembers 2010 and 2015 in the CET record, a much greater spread of temperatures than in any comparable period (actually in any period, period, but we’re concerned here with variability over a few years – less than a decade or two, say – when the climate has had little time to change) in the previous 350 years.  I take the liberty of reproducing the graph from my previous post:

[Figure: 160114 Dec 2015 related CET analysis slide 2a – December CET deviations from the 21-year running mean]

December 2015 was 10C warmer than December 2010, 2C more than the range between December temperatures in any other era.

And I also recollect figures like this one, showing the freakishness of summer 2003 in Switzerland, where, like the UK, there is a long history of weather records:

[Figure: 160120 More variable summer and winter temps slide 2 – Swiss summer temperature distribution highlighting the freakishness of 2003]

This appears on the Climate Communication site, which shies away from any mention of increased variability.  But the original Nature paper in which it appeared, Schär et al, 2004 is very clear, and is even titled The role of increasing temperature variability in European summer heatwaves. The synopsis (which is all I can access – pay-wall) notes that:

Instrumental observations and reconstructions of global and hemispheric temperature evolution reveal a pronounced warming during the past approx 150 years. One expression of this warming is the observed increase in the occurrence of heatwaves. Conceptually this increase is understood as a shift of the statistical distribution towards warmer temperatures, while changes in the width of the distribution are often considered small. Here we show that this framework fails to explain the record-breaking central European summer temperatures in 2003, although it is consistent with observations from previous years. We find that an event like that of summer 2003 is statistically extremely unlikely, even when the observed warming is taken into account. We propose that a regime with an increased variability of temperatures (in addition to increases in mean temperature) may be able to account for summer 2003. To test this proposal, we simulate possible future European climate with a regional climate model in a scenario with increased atmospheric greenhouse-gas concentrations, and find that temperature variability increases by up to 100%, with maximum changes in central and eastern Europe. [My stress].

Hmm. Contradictory findings, scientific debate.

My money’s on an increase in variability. I’ll keep an eye on that CET data.

January 19, 2016

Two More Extreme UK Months: March 2013 and April 2011

Filed under: Effects, Global warming, Science, Sea ice, Snow cover, UK climate trends — Tim Joslin @ 7:17 pm

My previous post showed how December 2015 was not only the mildest on record in the Central England Temperature (CET) record, but also the mildest compared to recent and succeeding years, that is, compared to the 21 year running mean December temperature (though I had to extrapolate the 21-year running mean forward).

December 2010, though not quite the coldest UK December in the CET data, was the coldest compared to the running 21 year mean.

I speculated that global warming might lead to a greater range of temperatures, at least until the planet reaches thermal equilibrium, which could be some time – thousands of years, maybe.  The atmosphere over land responds rapidly to greenhouse gases. But there is a lag before the oceans warm because of the thermal inertia of all that water. One might even speculate that the seas will never warm as much as the land, but we’ll discuss that another time. So in UK summers we might expect the hottest months – when a continental influence dominates – to be much hotter than before, whereas the more usual changeable months – when maritime influences come into play – to be not much hotter than before.

The story in winter is somewhat different.  Even in a warmer world, frozen water (and land) will radiate away heat in winter until it reaches nearly as cold a temperature as before, because what eventually stops it radiating heat away is the insulation provided by ice, not the atmosphere.  So the coldest winter months – when UK weather is influenced by the Arctic and the Continent – will be nearly as cold as before global warming.   This will also slow the increase in monthly mean temperatures.  Months dominated by tropical influences on the UK will therefore be warmer, compared to the mean, than before global warming.

If this hypothesis is correct, then it would obviously affect other months as well as December.  So I looked for other recent extreme months in the CET record.  It turns out that the other recent extreme months have been in late winter or early spring.

Regular readers will recall that I wrote about March 2013, the coldest in more than a century, at the time, and noted that the month was colder than any previous March compared to the running mean.  I don’t know why I didn’t produce a graph back then, but here it is:

[Figure: 160118 Extreme months in CET slide 1b – March CET temperatures and deviations from the 21-year running mean]

Just as December 2010 was not quite the coldest December on record, March 2013 was not the coldest March, just the coldest since 1892, as I reported at the time.  It was, though, the coldest in the CET record compared to the 21-year running mean, 3.89C below, compared to 3.85C in 1785.  And because I’ve had to extrapolate, the difference will increase if the average for Marches 2016-2023 (the ones I’ve had to assume) is greater than the current 21-year mean (for 1995-2015), which is more likely than not, since the planet is warming, on average.

We’re talking about freak years, so it’s surprising to find yet another one in the 2010s.  April 2011 was, by some margin, the warmest April on record, and the warmest compared to the 21-year running mean:

[Figure: 160119 Extreme months in CET slide 2 – April CET temperatures and deviations from the 21-year running mean]

The mean temperature in April 2011 was 11.8C.  The next highest was only 4 years earlier, 11.2C in 2007.  The record for the previous 348 years of CET data was 142 years earlier, in 1865, at 10.6C.

On our measure of freakishness – deviation from the 21-year running mean – April 2011, at 2.82C, was comfortably more freakish than 1893 (2.58C), which was in a period of cooler Aprils than the warmest April before the global warming era, 1865.  The difference between 2.82C and 2.58C is unlikely to be eroded entirely when the data for 2016-2021 is included in place of my extrapolation.  It’s possible, but for that to happen April temperatures for the next 6 years would need to average around 10C to sufficiently affect the running mean – the warmth in the Aprils in the period including 2007 and 2011 would need to be repeated.

So, of the 12 months of the year, the most freakishly cold for two of them, December and March, have occurred in the last 6 years, and so have the most freakishly warm for two of them, December and April. The CET record is over 350 years long, so we’d expect a most freakishly warm or cold month to have occurred approximately once every 15 years (360 divided by 24 records).  In 6 years we’d have expected a less than 50% chance of a single freakishly extreme monthly temperature.

According to the CET record, we’ve had more than 8 times as many freakishly extreme cold or warm months in the last 6 years as would have been expected had they occurred randomly since 1659.
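
Here’s the arithmetic spelled out (my sketch; it treats the 24 record-holders as equally likely to fall in any year of the record, which glosses over autocorrelation and the extrapolated running means):

```python
# Expected number of "most freakish relative to the 21-year mean" months
# falling in the last 6 years, if such records were spread randomly over
# the whole CET period.
record_types = 24        # 12 months x {most freakishly warm, most freakishly cold}
record_length = 357      # years of CET data, 1659-2015
window = 6               # the last 6 years

expected = record_types * window / record_length
observed = 4             # Dec 2010, Apr 2011, Mar 2013, Dec 2015

print(f"expected in {window} years: {expected:.2f}")                 # ~0.4
print(f"observed: {observed}, about {observed/expected:.0f}x that")  # ~10x
```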

And I bet we get more freakishly extreme cold or warm months over the next 6 years, too.

 

January 14, 2016

Just How Exceptionally Mild Was December 2015 in the UK?

Filed under: Global warming, Science, Sea ice, UK climate trends — Tim Joslin @ 5:24 pm

“Very” is the answer, based on the 350+ year long Central England Temperature (CET) record.  Here’s a graph of all the CET December temperatures since 1659:

[Figure: 160114 Dec 2015 related CET analysis slide 1 – December CET temperatures since 1659, with 5, 11 and 21-year running means]
As is readily apparent from the graph, the mean temperature of 9.7C in December 2015 was much higher than in any previous year.  In fact, only twice before had the average exceeded 8C.  Decembers 1934 and 1974 were previously tied as the mildest, at 8.1C.

But how much was the recent mild weather due to global warming and how much to normal variability? Apart from anything else the mild spell has to coincide with a calendar month to show up in this particular dataset.  And it so happened that the weather turned cooler even as the champagne corks were in the air to celebrate the end of 2015.

To help untangle trends from freak events, I’ve included some running means on the graph above.  The green line shows the mean December temperature over 5 year periods.  For example, thanks in large part to December 2015, the 5 Decembers from 2011 to 2015 are the mildest 5 in succession, though other periods have come close.

The red and black lines show 11 and 21 year running means, respectively.  The black line therefore represents the long-term trend of December temperatures.  These are close to the highest they’ve ever been, though in some periods, such as around the start of the 19th century, the average December has been as much as 2C colder than it is now.  Perhaps some exceptionally mild Decembers back then – such as 1806 – were as unusual for the period as December 2015 was compared to today’s Decembers.

I therefore had the idea to plot the deviation of each December from the 21 year mean centred on that year, represented by the black line on the graph above.  If you like, I’ve simply subtracted the black line from the blue line.

A health warning is necessary.  I’ve had to extrapolate the 21 year mean, since we don’t yet know what weather the next 10 Decembers (2016 to 2025) will bring.  We’ll have to wait until 2025 to see precisely how unusual December 2015 will prove to have been.  In the meantime, I’ve set the mean temperature for 2016 through 2025 to the last 21 year mean (i.e. the one for the years 1995 through 2015).
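
For anyone who wants to reproduce the calculation, here’s a minimal sketch (my own code; it assumes you’ve already wrangled the monthly CET file into a DataFrame with ‘year’ and ‘dec’ columns, which is an assumption about your preprocessing rather than a description of the Met Office format):

```python
# A sketch of the method described above: centred 21-year running mean of
# December CET temperatures, padded forward 10 years with the latest 21-year
# mean, then the deviation of each December from that running mean.
import pandas as pd

def december_anomalies(df: pd.DataFrame) -> pd.DataFrame:
    df = df.sort_values("year").copy()

    # Extrapolate: assume the next 10 Decembers equal the mean of the last 21.
    last21 = df["dec"].tail(21).mean()
    pad = pd.DataFrame({"year": range(df["year"].max() + 1, df["year"].max() + 11),
                        "dec": last21})
    padded = pd.concat([df, pad], ignore_index=True)

    # Centred 21-year running mean: 10 Decembers either side of each one.
    padded["run21"] = padded["dec"].rolling(window=21, center=True).mean()
    padded["deviation"] = padded["dec"] - padded["run21"]

    return padded[padded["year"] <= df["year"].max()]

# december_anomalies(cet).nlargest(5, "deviation") should put 2015 and 1806
# at the top; .nsmallest(5, "deviation") should put 2010 first.
```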

With that proviso, here’s what we get:

[Figure: 160114 Dec 2015 related CET analysis slide 2a – December CET deviations from the 21-year running mean]
The green line now shows the difference between the mean December temperature for a given year and the mean December temperature for the 21 years including the 10 before and the 10 after the given year.

We can see that December 2015 was, at 4.91C above its contemporary 21-year mean, more freakishly mild than any other December, with the proviso that I’ve not been able to take Decembers after 2015 into account.

The next most freakish December was the aforementioned 1806, which was 3.86C warmer than the mean of Decembers 1796 through 1816.

What’s going on? Is it just weather – something to do with the ongoing El Nino, perhaps – or is something else afoot?

One hypothesis might be that, with the climate out of equilibrium due to global warming, greater variability is possible than before. Our weather in 2015 may have been driven by a heat buildup somewhere (presumably in the ocean) due to global warming. On average this perhaps doesn’t happen – we may suppose our weather to be often determined by regions of the planet where the temperature hasn’t changed much, at least at the relevant time of year. Specifically, the Greenland ice-sheet hasn’t had time to melt yet.

It won’t have escaped the notice of my eagle-eyed readers that the graph above also shows 2010 to be the most freakishly cold December in the entire CET record.

Perhaps, until the ice-sheets melt, the deep oceans warm and the planet reaches thermal equilibrium, we’ll find that when it’s cold it’s just as cold as it used to be, but when it’s warm it’s a lot warmer than it used to be.   Just a thought.

It might be worth mentioning a couple of other, not necessarily exclusive, possibilities:

  • Maybe the situation will continue even when the planet is in thermal equilibrium.  Maybe, for example, assuming there is some limit to global warming and the Arctic seas still freeze in winter, we’ll still get cold weather in winter just or nearly as cold as it ever was, but we’ll get much warmer weather when there’s a tropical influence.
  • It could be that weather patterns are affected by global warming, especially through the later freezing of Arctic ice.

Or December 2015 could just have been a freak weather event.

September 16, 2015

Will Osborne’s UK National Living Wage Really Cost 60,000 Jobs?

Filed under: Economics, Inequality, Minimum wage, Unemployment — Tim Joslin @ 7:25 pm

It’s pretty dismal if you’re left-leaning in the UK right now.  Not only did Labour lose the election catastrophically and – adding to the shock – much more badly than implied by the polls, they’ve now gone nuts and elected a leader who can’t win, and who, even if he could, advocates policies that belong in the 1970s.  Meanwhile Osborne is implementing a policy Labour should have been pushing during the election campaign, namely what is in effect a higher minimum wage, his so-called National Living Wage (NLW) for over 25s.  Of course, Osborne’s overall package is disastrous for many of the poorest households who will be worse off even with the NLW because of simultaneous cuts to tax credits.

If you’re following the debate around the NLW – for example as expertly hosted by the Resolution Foundation – it’s clear that the Big Question is how much effect the NLW (and increased minimum wages in general) is likely to have on (un)employment.  Now, based on logical argument (that being my favoured modus operandi), and, of course, because my philosophy is to question everything, I am highly sceptical of the mainstream line of reasoning that labour behaves like paper-clips.  Put up the price of paper-clips and you’ll sell fewer; put up the price of labour and unemployment will rise is the gist of it.  But this ignores the fact that increasing wages itself creates demand.  More on this later.

Much as I believe in the power of reasoned argument, I nevertheless recognise that it’s a good idea to first look at the strengths and weaknesses of the opposing position.  In this post I therefore want to focus on the meme that Osborne’s NLW will cost 60,000 jobs.  How well-founded is this estimate?  You’ll see it quoted frequently, for example, by the Resolution Foundation and on the Institute for Fiscal Studies’ (IFS) website and no doubt in mainstream media reports.  The original source is the Office for Budget Responsibility.  As far as I can tell the 60,000 figure first appeared in a report, Summer budget 2015 policy measures (pdf) which was issued around the time of Osborne’s “emergency” budget in July (the “emergency” being that the Tories had won a majority), when he bombshelled the NLW announcement.

So, I asked myself, being keen to get right to the bottom of things, where did the OBR boffs get their 60,000 estimate from?  Well, what they did was make a couple of assumptions (Annex B, para 17 on p.204), the key one being:

“…an elasticity of demand for labour of -0.4… This means total hours worked fall by 0.4 per cent for every 1.0 per cent increase in wages;”

They stuck this into their computer, together with the assumption that “half the effect on total hours will come through employment and half through average hours” and out popped the 60,000 figure.
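
The shape of that calculation is simple enough to sketch (my reconstruction of the stated assumptions only; the wage-rise and coverage figures in the example call are placeholders, not the OBR’s actual inputs, so don’t expect the output to match 60,000):

```python
# A schematic of the OBR's stated assumptions: total hours fall by 0.4% for
# every 1% rise in wages, and half of the hours effect shows up as lower
# employment (the numbers in the example call are illustrative placeholders).
def employment_effect(avg_wage_rise_pct: float,
                      affected_workers: int,
                      elasticity: float = -0.4,
                      share_via_employment: float = 0.5) -> float:
    hours_change_pct = elasticity * avg_wage_rise_pct          # % change in total hours
    employment_change_pct = hours_change_pct * share_via_employment
    return affected_workers * employment_change_pct / 100      # change in number of jobs

# Purely illustrative inputs, not the OBR's:
print(employment_effect(avg_wage_rise_pct=3.0, affected_workers=5_000_000))   # -30,000.0
```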

But where does this figure of -0.4 come from?  They explain in Annex B.20:

“The elasticity of demand we have assumed lies within a relatively wide spectrum of empirical estimates, including the low-to-high range of -0.15 to -0.75 in Hamermesh (1991). This is a key assumption, with the overall effects moving linearly with it.”

The Hamermesh reference is given in footnote 3 on p.205, together with another paper:

“Hamermesh (1991), “Labour demand: What do we know? What don’t we know?”. Loeffler, Peichl, Siegloch (2014), “The own-wage elasticity of labor demand: A meta-regression analysis”, present a median estimate of -0.39, within a range of -0.072 to -0.446.” (my emphasis)

Evidently Hamermesh is the go-to guy for the elasticity of demand for “labor”.  So I thought I’d have a look at how Hamermesh’s figure was arrived at.

I hope you’ve read this far, because this is where matters start to become a little curious.

Both papers referred to in footnote 3 are available online.  Here’s what Hamermesh actually wrote (it’s a screen print since the document was evidently scanned in to produce the pdf Google found for me):

[Image: 150916 National Living Wage – screen print of the relevant passage from Hamermesh (1991)]

So what our guru is actually saying is that although the demand elasticity figure is between -0.15 and -0.75, as assumed by the OBR, his best guess – underlined, when that was not a trivial matter, necessitating sophisticated typewriter operation – was actually -0.3.

So why didn’t the OBR use the figure of -0.3?

Perhaps the answer is to do with the -0.39 they quote from the Loeffler, Peichl and Siegloch paper (pdf).  But this is what those guys actually wrote:

“Overall, our results suggest that there is not one unique value for the own-wage elasticity of labor demand; rather, heterogeneity matters with respect to several dimensions. Our preferred estimate in terms of specification – the long-run, constant-output elasticity obtained from a structural-form model using administrative panel data at the firm level for the latest mean year of observation, with mean characteristics on all other variables and corrected for publication selection bias – is -0.246, bracketed by the interval [-0.072;-0.446]. Compared to this interval, we note that (i) many estimates of the own-wage elasticity of labor demand given in the literature are upwardly inflated (with a mean value larger than -0.5 in absolute terms) and (ii) our preferred estimate is close to the best guess provided by Hamermesh (1993), albeit with our confidence interval for values of the elasticity being smaller.” (my emphasis)

Yep, the Germanically named guys from Germany came up with a figure of -0.246, not the -0.39 in the OBR’s footnote 3.  The OBR’s -0.39 is a rogue figure.  It must be some kind of typographical error, since they correctly quote the possible range ( -0.072 to -0.446) for the demand elasticity.  Bizarre, frankly.

It’s even more mysterious when you consider that the OBR would surely have used the elasticity of demand for labour previously.

Based on the sources they refer to it seems the OBR should have plugged -0.3 at most into their model, not -0.4.  This would have given a significantly lower estimate of the increase in unemployment attributable to the introduction of the NLW, that is, roughly 45,000 rather than 60,000.
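
And since, as the OBR themselves say, the overall effect moves linearly with the assumed elasticity, rescaling their headline number is a one-liner (my arithmetic, not theirs):

```python
# Rescale the OBR's 60,000 headline figure to alternative elasticity estimates.
def rescaled_jobs(elasticity: float,
                  obr_jobs: int = 60_000,
                  obr_elasticity: float = -0.4) -> float:
    return obr_jobs * (elasticity / obr_elasticity)

for eps in (-0.4, -0.3, -0.246, -0.15):
    print(f"elasticity {eps}: ~{rescaled_jobs(eps):,.0f} jobs")
# -0.4   (OBR assumption)              -> 60,000
# -0.3   (Hamermesh's best guess)      -> 45,000
# -0.246 (Loeffler et al's preferred)  -> ~37,000
# -0.15  (bottom of Hamermesh's range) -> 22,500
```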

Why does this matter?  It matters because the idea that a higher minimum wage will increase unemployment is one of the main arguments against it, frequently cited by those opposed to fair wages and giving pause to those in favour.  Here, for example, is what Allister Heath wrote recently in a piece entitled How the new national living wage will kill jobs in the Telegraph:

“…it is clear that George Osborne’s massive hike in the minimum wage will exclude a significant number of people from the world of work. There is a view that this might be a worthwhile trade-off: if millions are paid more, does it really matter if a few can’t find gainful employment any longer? Again, I disagree: there is nothing more cruel than freezing out the young, the unskilled, the inexperienced or the aspiring migrant from the chance of employment.

Being permanently jobless is a terrible, heart-wrenching state; the Government should never do anything that encourages such a catastrophe.”

Clearly, Heath’s argument (which I don’t in any case agree with) carries more weight the greater the effect of a higher minimum wage on unemployment.  But getting the numbers wrong isn’t the only problem with the OBR’s use of the demand elasticity of labour, as I’ll try to explain in my next post.
