Uncharted Territory

April 26, 2010

On Climate, and Causes in Complex Systems

Filed under: Complex decisions, Credit crisis, Economics, Global warming, Reflections, Science — Tim Joslin @ 4:06 pm

Why do so many travellers, such as those marooned by the Eyjafjallajökull ash cloud, succumb to a panic response on finding they can’t leave a foreign land? I remember that when I was on a memorable trip to Albania in, if I recollect correctly, 1996, our group was playing leapfrog, as it were, with another minibus full of tourists, along a road to the coast. When we stopped – often – for a passenger to relieve the symptoms of one or other of the local stomach-bugs, the other bus passed us, only for us to see them stopped by the roadside a few minutes later. Eventually we pulled up beside them to chat. It turned out that Albania’s borders were shut, in the hope of trapping whoever had blown up the Tirana police-chief. The other group were cutting their trip short. So illogical. We simply carried on with our holiday. [I drafted this a few days ago; since then – right now, in fact, as I post – Radio 4’s Today programme has been discussing the psychology, and even genetics, of the have-to-get-home phenomenon!]

It beats me why so many people spent thousands of euros hiring cars to drive across Europe. Surely staying until the ash alert blew over would have been both cheaper and less stressful.

Whatever they did, though, even those who spent the last week hitch-hiking from Athens to Calais cannot fail to have heard that, apparently, global warming will lead to more volcanic eruptions.

Is this something we should worry about?

In short, no.

Volcanoes and ice ages

The scare seems to be based on a study of the end of the last ice age:

“Huybers and Langmuir spliced two databases of volcanic eruptions worldwide over the last 40,000 years.

Eruption levels stayed low until around 12,000 years ago, then suddenly they suddenly shot up. The melting ice released so much pressure that the newly liberated volcanoes erupted at up to six times their normal rate, the researchers estimated.

The inferno lasted for 5,000 years and could have pumped enough CO2 into the atmosphere to raise concentrations between 40 and 50 parts per million, the researchers estimate. Changes in ocean chemistry probably released the rest.”

I love the eruption levels “suddenly” shooting up “suddenly”!

Now, a couple of kilometres of ice over a volcano is one thing. It’s reasonable to suppose that the weight of all that ice would stop the pressure in a magma chamber from causing the eruption it otherwise would have.

But melting a kilometre or so of ice takes quite some time. And besides, since the last ice age, there aren’t so many ice-sheets left. Worst case, in a few centuries, perhaps, we could feel the effects of some pent-up volcanic activity.

In the meantime, the worst that could happen is that some eruptions are brought forward by a few years.

The hype around the volcano scare exploits our innate difficulty in conceiving of long periods of time. It also resonates with research a while back which noted more eruptions at certain times of year. The suggestion was that particular weather conditions – changes in pressure – around volcanoes could set them off.

This triggering is an entirely different kettle of fish.

Volcanoes and the weather or short-term climate change

Consider a simple model of volcanic eruptions as the sudden release of something we might call “pressure” that builds up over time. Let’s suppose that the main cause of the build-up of “pressure” is geological. Let’s also assume that the weather can cause seasonal variations in pressure. In this model, eruptions will occur when the total pressure crosses some threshold, as in the following diagram:

Diagram: geological plus seasonal “pressure” crossing an eruption threshold

Because I’m lazy, and Powerpoint is a step back from pen and paper (and reverting to that and scanning is a hassle right now), I’ve shown the total pressure (dotted line) as the sum of the geological pressure and seasonal variations for the first eruption only, but hopefully you get the idea.

Volcanoes don’t usually erupt every 3-5 years, of course – every 50 years or so might be more typical – and in real life every eruption is different.

Hopefully, though, it’s fairly easy to see that eruptions are much more likely in this system during the period when the seasonal effect tends to increase pressure. Over this period the total pressure (dotted line) increases much faster than when the seasonal effect is to decrease pressure.

In fact – and hold this thought – if the rate of increase in pressure due to short-term variability is faster than the long slow build-up of pressure, then eruptions, according to this simple model, will always occur during the short-term upswing in pressure.
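To make that thought concrete, here’s a minimal sketch of the toy model in Python. All the numbers are arbitrary illustrative choices (the threshold, rates and amplitude are not meant to be geology):

```python
import math

# Arbitrary illustrative numbers - not geology.
GEOLOGICAL_RATE = 1.0      # slow build-up: "pressure" units per year
SEASONAL_AMPLITUDE = 2.0   # amplitude of the seasonal pressure cycle
THRESHOLD = 10.0           # an eruption occurs when total pressure hits this
DT = 0.001                 # time step, years

pressure, t, eruption_phases = 0.0, 0.0, []
while t < 100:
    # d(pressure)/dt = geological trend + derivative of A*sin(2*pi*t)
    seasonal_rate = SEASONAL_AMPLITUDE * 2 * math.pi * math.cos(2 * math.pi * t)
    pressure += (GEOLOGICAL_RATE + seasonal_rate) * DT
    if pressure >= THRESHOLD:
        eruption_phases.append(round(t % 1.0, 2))  # time of year of the eruption
        pressure = 0.0                             # sudden release
    t += DT

print("eruption phases (fraction of year):", eruption_phases)
# With these numbers the seasonal swing in the rate (up to ~12.6/yr) dwarfs
# the geological rate (1/yr), so every eruption lands on the seasonal upswing:
# the trigger sets the timing, but geology sets the frequency.
```

Run it and the eruptions come roughly a decade apart, as the geological rate and threshold dictate, yet always at much the same point in the seasonal cycle.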

My proposition is that it is very easy to exaggerate the effect of the seasonal cycle as a “cause” of eruptions. It is merely a trigger.

You can also see that, if, say, the seasonal pressure pattern changes – a gradual trend on top of the annual fluctuations, perhaps, or an increase in the amplitude of the cycle – it will not have a large effect on the frequency of eruptions over a long period. The periodicity of the system will still be driven by geological processes. The weather is a secondary driver in this system.

Now, if you didn’t know that volcanic eruptions are caused by a build-up of “pressure” underground, you might hypothesise that they’re caused by weather conditions. You might collect a lot of data and calculate correlation coefficients to “prove” your theory. You might even argue convincingly that, because we know what causes the weather, the weather must cause volcanic eruptions rather than vice versa, and that, furthermore, the weather and the eruptions cannot both be caused by some third factor.

But you’d be wrong.

Could this mistake happen in other circumstances, though?

Solar cycles and the AMO

That old chestnut, solar cycles, surfaced yet again in New Scientist a week or two ago. The claim is that there’s a “compelling link between solar activity and winter temperatures in northern Europe.”

Well, maybe there is.

But anyone who’s stayed awake this far will realise that it’s not enough to determine a correlation between solar cycles and weather patterns. Maybe the solar cycle does trigger a change from one state of the AMO (Atlantic Multidecadal Oscillation) to another. But that doesn’t make it the sole or even the main cause of the variability.

To recap, I first explored the idea of the AMO when I became concerned that the emphasis being put on shrinking Arctic ice as an indicator of global warming (GW) could backfire if the shrinkage reverses. My first post on the topic was therefore titled: Spin Snow, Not Sea Ice, the AMO Is Real!. Back then, I noted that the AMO cycle – likely to be variable in length, especially now we have the extra GW complication – tends to be of the order of 60 years or so, with the previous cooling phase lasting from the 1940s to the 1970s. Maybe we’re entering another one.

That first post suggested a mechanism for the AMO, which I discussed a little more in my second post on the topic, Why the AMO Overshoots. So I won’t repeat myself today.

Later, in 1740 And All That I looked at a historical example of a sudden switch from mild to cold winters in NW Europe. The weather pattern that leads to cold winters might be termed an “anti-monsoon”, as I first discussed back in January in Snow Madness and the North-West European Anti-Monsoon.

Two other posts Ice Pie and Ice Sickle explore aspects of the AMO.

The basic argument in all these posts is that the natural cycle – the AMO – is characterised by a set of feedbacks. Positive feedbacks – perhaps including the effect of lying snow, as considered in That Snow Calculation – produce distinct warming (Arctic ice melt) and cooling (Arctic ice recovery) phases. Negative feedbacks cause one phase to flip to the other. But the exact timing of the tipping-point may be caused by external triggers.

My proposition is that by the end of the warming phase of the AMO, the seas (especially the Arctic and the North Atlantic) are relatively warm compared to the land. Any sudden cooling event could then trigger a flip to the cooling phase, because the land cools quicker than the ocean, and so would become relatively even colder.

Possible sudden cooling events are volcanic eruptions or the change to a cooling phase of the solar cycle, as discussed previously for the case of 1740.

A critical point is that the sunspot cycle is much shorter than the AMO (see AMO discussion and graphs in my first post on the subject):

NASA graph of yearly sunspot numbers

The sunspot cycle tracks the total irradiance from the sun, and its rate of variation is comparable to that of other causes of temperature change, such as GW:

IPCC Fig 2.16 recent changes in solar irradiance

Further Implications

Not too many scientists claim volcanic eruptions are “caused”, as opposed to triggered, by variation in the weather or by climate change. Most understand that only over the sort of long timescale that is needed to melt an ice-sheet would the frequency of eruptions change.

But far more common is the explanation of apparent climate cycles – such as the AMO – by variations in solar output. In cases such as this, it is necessary to do more than just prove a correlation. The causal mechanism needs to be clear and must be shown to be quantitatively sufficient to explain the observed phenomena.

Considerable care is required whenever attempting to explain the “causes” of complex system behaviour.

The need to distinguish between triggers and underlying causes of cyclic behaviour also applies elsewhere in the climate system. Furthermore, the distinction between triggers and underlying causes may become blurred – both may be of similar magnitude, creating a resonant system. In particular, over longer timescales than so far discussed, the Milankovitch cycles are not enough alone to explain the ice age cycle. Perhaps they resonate with another cycle internal to the climate system.

In other domains too, it is not possible to assume that the “cause” in a complex system is just that which is evident on the surface. The lax lending practices and cheap money that are held to have caused the credit crisis may just be one part of a deeper, more complex cycle of optimism, deregulation, increased trade and globalisation on one hand and retrenchment and nationalism on the other.

April 13, 2010

The Earth is a Thermometer

Filed under: Global warming, Science — Tim Joslin @ 9:00 pm

I allude, of course, to one of my earliest posts on the subject of the short-term (years to decades) variability of the climate, particularly in the northern mid-latitudes, i.e. where I live – wherein it was revealed that the Earth is, in fact, a giant fridge!

I’ve recently been pondering on the question of the extent to which short-term changes in the Earth’s climate would show up in sea-level measurements. Serendipitously, my Inbox beeped a few days ago (OK, it didn’t make a noise, I’m just thinking of the movie treatment!) to tell me that a piece, Science Story: the Making of a Sea Level Study, by Martin Vermeer, had just been posted to Realclimate.

Vermeer’s essay is fascinating on at least two levels. It gives an insight into the sociology of science, and it describes an interesting finding. I’d urge everyone to read the piece carefully. But just in case some don’t want to, I’ll summarise what it says.

First, though, let’s just show what we’re trying to explain.

Observed sea-level rise

The IPCC has (of course!) put together a graph of how sea-level has risen since 1880:

Annual average global mean sea-level (IPCC Fig.5.13)

What bothers me about this particular graph is that the rate of sea-level rise was apparently only marginally affected by the slight cooling of global temperatures over the period from the early 1940s to the late 1970s. This cooling is usually explained as resulting from aerosols from industrial pollution, so it might be suggested that only land areas were affected. But the data do not bear this out. See the IPCC graphs in my previous post discussing the Atlantic Multidecadal Oscillation (AMO) natural cycle.

Causes of sea-level rise

The IPCC has summarised research on ice-melt and the heat in (and hence expansion of) the oceans to put together a breakdown of the different causes of sea-level rise (all figures in mm/yr):

Contributions to sea-level rise (IPCC AR4 Table 5.3)

This data reflects a lot of uncertainty, particularly as to whether Antarctica is gaining or losing ice! What surprised me, though, was that thermal expansion only accounts for just over half the sea-level rise over the rapid warming period 1993-2003 and somewhat less before that. It’s therefore a little odd that so many people are going round saying that most of the sea-level rise is due to thermal expansion.

So how is the data explained?

Vermeer describes how he was able to refine the mathematical description of sea-level rise:

Rahmstorf’s first approximation

The story starts with a paper by Stefan Rahmstorf in 2007, referred to as R07, which suggested that the rate of sea-level rise is proportional to the difference between the actual (average surface global) temperature and an equilibrium temperature.

Rahmstorf’s approximation makes a lot of sense for sea-level rise caused by melting ice. You can imagine that the amount of ice melting each year is proportional to the difference between the temperature that year and an equilibrium temperature when no ice would melt (or the same amount would be formed as melts).

But ice melt is not the only cause of sea-level rise. Imagine that the temperature at the surface of the sea is above equilibrium. The sea will continue to take up heat for many years, because it takes so long for the depths to warm up. Rahmstorf’s approximation also includes this gradual take-up of heat (though it’s all lumped together with ice melt in one term). There’s a slight problem, though: Rahmstorf doesn’t account for the heat taken up immediately by the surface ocean as temperature rises. Nor does his model allow for the gradual slow-down in heat uptake as the depths warm up.

Vermeer’s (and Rahmstorf’s) second approximation

Vermeer came along and, working together with Rahmstorf, as described at Realclimate, took account of the uptake of heat by the surface ocean as temperature rises. The rate of sea-level rise therefore becomes proportional to the difference from an equilibrium temperature plus a quantity determined by the rate of temperature rise, i.e., for the arithmetically inclined, dH/dt = a(T − Te) + b(dT/dt), where T and Te are the average surface temperatures at a given time and at equilibrium, and a and b are constants to be determined.
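To make the behaviour of the equation concrete, here’s a minimal numerical sketch in Python. The parameter values and the warming scenario are illustrative assumptions (magnitudes roughly as quoted in this post), not the fitted values from the paper:

```python
# Euler integration of dH/dt = a*(T - Te) + b*(dT/dt).
# a, b, Te and the warming scenario are illustrative assumptions only.
a = 3.4     # mm/yr of rise per degree C above equilibrium
b = 25.0    # mm per degree C: immediate surface-ocean expansion (~2.5cm/C)
Te = -0.4   # equilibrium temperature anomaly, degrees C

def temperature(year):
    """Toy scenario: flat to 1950, then 0.02 C/yr of warming."""
    return 0.02 * max(0, year - 1950)

H = 0.0  # cumulative sea-level rise, mm
for year in range(1880, 2010):
    T = temperature(year)
    dTdt = temperature(year + 1) - T   # temperature change over the year
    H += a * (T - Te) + b * dTdt       # one year's rise, in mm
print(f"modelled rise, 1880-2010: {H:.0f} mm")
```

The first term does almost all the work over a century; the b term only matters while the temperature is actually changing – which is exactly why its sign causes the trouble described below.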

A few comments:
1. Note how the surface ocean acts as an extension of the atmosphere for heat, similar to the way it does for CO2.
2. Vermeer still does not model any slowing of heat uptake (and hence sea-level rise) with time. For CO2, the Bern carbon cycle model focuses solely on this aspect (i.e. the Bern model is of a pulse upwards in atmospheric CO2 asymptotically returning to equilibrium). Curious how the work on heat uptake has gone in an entirely different direction to that on CO2 uptake.
3. Note that you shouldn’t think of the ocean as merely responding to a temperature rise imposed from outside. Rather, the overall average global surface temperature is largely determined by that of the ocean surface. One way to think about it is that excess heat absorbed by the ocean causes both the sea-level rise and the surface temperature rise.

Vermeer’s negative b problem

Vermeer’s model works well, he says, when tested against climate model data, and historical data for the last millennium. Unfortunately, though, with observed data from 1880 onwards, he can only make a fit with negative b. That is, it seems that the top 100m of water loses ~4.9cm/C (or K, i.e. degree of temperature) rather than (as determined from the computer models) ~2.5cm/C.

It simply cannot be the case that b is negative. Clearly, what’s needed is a model that’s even more complex, and adds up all the terms – what happens to the ocean surface waters immediately the temperature rises, the ice melt, soil moisture and so on.

Vermeer suggests, implausibly, either that a temperature rise has an initial negative effect on sea-level – which makes no sense (note that the increase in atmospheric water vapour is an order of magnitude smaller than the surface water thermal expansion of ~2mm/C) – or that there is “a positive, but time-lagged sea-level response”. This simply can’t be what’s really happening. I prefer logic to screeds of math, but, as pointed out here more rigorously, a negative b would imply that a halt in rising temperatures causes a sudden increase in the rate of sea-level rise! And you can see clearly from the IPCC’s sea-level graph, shown above, that the volcano-induced cooling caused by Agung in 1963 and El Chichon in 1982 led to actual falls in sea-level. (Pinatubo in 1991 is not quite so clear, and other short-term variability in the data is likely due to El Nino events – or, more precisely, the El Nino Southern Oscillation, ENSO.) Presumably the cooling in those episodes was more than ~0.1C, which, on Vermeer’s estimate of the true value of b, would cause a 2.5mm sea-level drop, exceeding the average rate of sea-level rise of 2.46mm/yr according to Chao et al (see below). b can’t be negative. Period.
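The absurdity is easy to see in numbers – a sketch using the two values of b quoted above and an assumed warming trend of ~0.02C/yr:

```python
a, Te = 3.4, -0.4      # mm/yr per C, and equilibrium anomaly (illustrative)
T, trend = 0.6, 0.02   # current anomaly (C) and warming rate (C/yr), assumed

for label, b in [("model-derived", 25.0), ("fitted", -49.0)]:  # mm per C
    while_warming = a * (T - Te) + b * trend
    after_halt = a * (T - Te)              # dT/dt falls to zero
    print(f"{label} b={b:+.0f}mm/C: {while_warming:.2f} -> {after_halt:.2f} mm/yr")
# With the fitted negative b, halting the warming makes sea-level rise
# *faster* (2.42 -> 3.40 mm/yr here) - precisely the implication rejected above.
```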

I think the real reasons for a poor fit of Vermeer’s equation to recent data are some or all of:
1. The equation is too simple (as already mentioned).
2. The data is poor – note how much of the sea-level rise is unexplained in the bottom-up analysis in Table 5.3 from the 4AR, quoted above.
3. The data is accurate, but there are causes of sea-level change, not directly attributable to global warming, which are not properly accounted for (which at least partly explain the error bars in Table 5.3).

Let’s explore other causes of sea-level change:

The dam data

Vermeer describes how he improved the statistical fit of his model to the data by including an analysis of water stored in reservoirs since the start of the era of serious dam-building post WWII, as described in a paper by Chao et al. Now, I’m quite prepared to accept that dam-building accounts for short-term variability and that adjusting for reservoir water allows Vermeer to achieve a better fit.

But the improved fit doesn’t necessarily mean that the parameters a and b are any more correct.

My problem with the Chao paper is that it downplays the reverse effect – water released rather than water stored. There are several sources of this.

First, though, let’s get our units sorted. Chao suggests that reservoirs have stored around 11,000 km3 of water, equivalent to a sea-level drop of around 30mm. Let’s call it 3mm per 1000 km3. What else could cause the release of 1000 km3 of water or more?
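A quick sanity check on that conversion, taking the ocean surface area to be ~3.6×10^8 km2:

```python
OCEAN_AREA_KM2 = 3.61e8     # the ocean covers ~71% of the Earth's surface
KM_TO_MM = 1e6              # 1 km = 1,000,000 mm

stored_km3 = 11_000         # Chao et al's reservoir storage estimate
drop_mm = stored_km3 / OCEAN_AREA_KM2 * KM_TO_MM
print(f"{drop_mm:.1f} mm")  # ~30 mm, i.e. ~3 mm per 1000 km3, as stated
```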

Silt and seepage

Chao et al add about 3000 km3 of water as seepage from dams, raising water tables near reservoirs. Fine, but surely this water might have “seeped” from rivers if it hadn’t been trapped behind dams?

Chao et al also ignore the silt behind dams, arguing (contrary to other sources they reference) that the silt would have flowed to the sea with the water. Some might, but much might have been deposited on river flood-plains – that of the Nile, for example.

Use of fossil water

I really don’t see how reservoir water can be deducted from the sea-level rise when the use of water from aquifers is not added to it. For example, Fred Pearce writes in “When the rivers run dry” (p.79) that:

“Overall total pumping in India, China and Pakistan probably exceeds discharge by 150-200 cubic kilometres a year. The boom has so far lasted twenty years…”

That’s 3-4,000 km3 just there.

Pearce also describes (p.81ff) how at least 1000 km3 has been taken from the High Plains aquifer in the US.

And fossil water is being used all over the world.

The Aral Sea

The 4AR IPCC report (section 5.5.6, p.419) covers most of the sources of sea-level rise I’m covering here, but for some reason doesn’t mention the Aral Sea.

There’s another 1000 km3.

And there are other, less dramatic examples: the Dead Sea, Lake Chad…

The Long Tail

Then there are wetlands and other ecosystem changes that have the effect of removing water, mainly from soils. Let’s do a tiny bit of math:
– humans have affected roughly 100m km2 of land (2/3 of the land area – I’m just doing order-of-magnitude stuff here). 1cm of water lost over that area is 1000 km3.
– or take the ~10m km2 we might estimate has been deforested or degraded over the 20th century. Say 30cm of water lost over that area: another 3000 km3.
– similarly, maybe 1m km2 has been turned to desert, with perhaps 1m of water lost from the soils, depending what we started with. That’s another 1000 km3.
– or drained wetlands, probably a few 100,000 km2, but with several metres of water lost – maybe another 1000 km3 in total. (Plus sandstorms could end up delivering mass to the sea, raising sea-level further.)

Then we have fossil-fuel burning, which releases water as well as CO2 (some of which increases sea-level when it is taken up by the oceans). Maybe 500 km3 in total: we emit ~25Gt of CO2/yr, of which only ~1/4 is taken up by the oceans, but also ~8Gt of H2O/yr, depending on the fuel, all of which is ultimately taken up.
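Tallying up all these back-of-envelope terms – every figure below is just the order-of-magnitude guess from the preceding sections – at ~3mm of sea-level per 1000 km3:

```python
# Order-of-magnitude guesses from the text above, in km3 of water released.
# (The 10m km2 deforestation estimate is used rather than the broader,
#  overlapping 100m km2 land-use one.)
sources_km3 = {
    "aquifer pumping, India/China/Pakistan": 3500,
    "High Plains aquifer, US": 1000,
    "Aral Sea": 1000,
    "deforested/degraded soils": 3000,
    "desertification": 1000,
    "drained wetlands": 1000,
    "fossil-fuel combustion water": 500,
}
total_km3 = sum(sources_km3.values())
print(f"~{total_km3} km3, i.e. ~{total_km3 * 3 / 1000:.0f} mm of sea-level rise")
```

If these guesses are anywhere near right, that’s ~11,000 km3, or ~33mm – comparable to the ~30mm that Chao et al deduct for reservoir storage.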

Conclusion

Vermeer’s model of the effect of temperature on sea-level may well be a step forward. However, it needs to be fitted against properly adjusted data. Chao et al find a virtually straight-line relationship, i.e. a roughly constant 2.46mm/yr sea-level rise (difficult to reconcile, at least by eye, with the slower rate of sea-level increase from the 1940s to ~1980 shown, after adjustment, in Vermeer’s paper), which is implausible. Maybe with proper adjustment, the parameter for Vermeer’s short-term warming term will return to the more plausible positive value of ~2.5cm/C.

The whole story shows some of the problems with the scientific process:
– it seems the Chao et al reservoir adjustment has been included simply because it is clearly quantified. The aquifer and ecosystem water storage changes need to be quantified and added to Vermeer’s and other models. Collectively, it’s likely they are more significant than reservoir water storage. The IPCC just assesses research that’s been done. It would be helpful if a list of topics that need to be urgently researched were included in each report.
– it is essential that mathematical modelling of data is not permitted to gain a life of its own. The true functions are much more complicated than Vermeer’s equation, which should really be divided into at least 3 functions: ice melt due to difference from an equilibrium temperature (itself increasing as ice melts); surface sea water expansion; the effect of slower deep water warming (perhaps similar in form to the ice melt). But, quite clearly, the surface waters expand as their temperature rises. If other effects of increased temperature act to reduce sea-level, then these must be modelled as separate terms.
– Vermeer (and Rahmstorf, Chao et al, various reviewers and so on) have put in a huge amount of effort producing papers for publication, but I just wonder if it might not have been a more effective use of time to have had a more open dialogue about the general approach to be followed. In other words, is the peer-review filter letting through enough of the right sort of thinking at the right time? Do we need better fora – somewhere between Realclimate and journals – for scientific debate at a higher level?

April 9, 2010

Job Sums

I’ve been trying to avoid commenting on the General Election campaign, since it would be a huge distraction from far more important issues, but I can no longer ignore the absurd reasoning that’s making its way into the media.

Yesterday, the Guardian, bless their little cotton socks, tried, under the banner “Reality check”, to answer the question “Do national insurance rises cost jobs?” (if you follow the link, then don’t be puzzled – as usual, the online title is different to that in the print version of the paper). The Guardian’s answer is slightly to the “solid” side on a cute little dial that goes from “shaky” to “solid” – let’s call it “mushy”. They seem to think NI rises might cost jobs.

The article included some strange logic, most notably from Richard Dodd of the British Retail Consortium who apparently argued that “…in a competitive market, retailers will struggle to pass the tax on in the price of goods…”. The “competitive market” has nothing to do with it, since the tax will affect all employers. No-one has a new competitive advantage as a result of the tax.

The Guardian also failed to question why business leaders might be against an NI rise. The point is that increased taxes (like other increased costs) temporarily reduce profitability, because in general it takes time to raise prices and recover margins following an increase in costs – as Richard Dodd’s concerns about how “retailers will struggle to pass on the tax” clearly testify.

But the Guardian’s piece made a bigger mistake – in fact they managed to completely miss the point. You can only answer a question like whether an NI increase will “cost jobs” by considering also what happens to the money raised by the tax. Taxes rob Peter to pay Paul, so you can only evaluate the effect on any measure – in this case jobs – by looking at the issue in the round.

Since, as argued by the Guardian, the effect on (private sector) jobs of the NI increase is marginal, and the money will be spent on retaining jobs in the public sector, it follows that, if it’s the overall number of jobs in the economy you care about, you should be in favour of the NI proposal. The arguments put forward by the Tories and their business friends are misleading.

[I should say I don’t actually believe the prime goal of an economy should be to create jobs and I don’t believe the Tories or business leaders do either. The goal should be to produce as much as possible with as few resources – including people – as possible. Then we’ll all be rich and jobs will then take care of themselves. What I object to is all the dissembling. Having said that, unemployment is high and rising, so it’s not the best time to be bearing down on jobs. In other words, the trajectory Labour wants to put the economy on makes more sense to me than that which the Tories propose. We may as well, for instance, maintain staffing levels in the NHS – thereby saving and improving lives – and, in particular, continue to invest in the IT necessary for future efficiency savings, rather than have people sitting around on the dole].

Today’s FT gives us some clues on how many jobs would be lost by reducing public expenditure by an amount equivalent to that which would be raised by the NI increase. The FT appears to consider a slightly different question, i.e. the effect on jobs of additional public spending cuts in 2010-11 (i.e. this financial year), as proposed by the Tories. The point, which several BBC news bulletins missed this morning, is that the NI rise only comes in in 2011-12. The usual disclaimer applies: unless I’ve completely misunderstood something, in which case perhaps someone will be good enough to put me right…

And it is, surprisingly, in the FT, where a “Cameron adviser discloses cuts detail”, that the serious dissembling starts.

First, there’s an enormous howler. The article describes a proposal for £1-2bn in job savings by natural wastage this financial year, 2010-11. That is, during the year that’s already started. But the article appears to reckon on a saving of the full annual cost of the jobs – estimated to be £50,000 each – this financial year. Wrong. You can only reckon on that saving if the jobs disappear at the start of the financial year. On average they will disappear halfway through the year (actually later than that, because the Tories wouldn’t even be able to start until May 7th). So on average only £25,000 will be saved this financial year per job shed. Therefore, to save £1-2bn this financial year would require the wastage of £1-2bn/£25,000 = 40,000 – 80,000 jobs, not the 20,000 to 40,000 stated.
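For the record, here’s that sum as a sketch, using the article’s own £50,000 annual figure:

```python
annual_cost_per_job = 50_000  # pounds per public-sector job, per the FT article
avg_saving_this_year = annual_cost_per_job * 0.5   # jobs go mid-year on average

for target_bn in (1, 2):      # the proposed 1-2bn pounds of in-year savings
    jobs = target_bn * 1e9 / avg_saving_this_year
    print(f"saving {target_bn}bn this year requires {jobs:,.0f} jobs shed")
# 40,000 and 80,000 - double the 20,000 to 40,000 the article reckons on.
```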

Note that if the jobs are lost other than by natural wastage there will be redundancy costs and less, or more likely negative, cashflow savings this financial year. Basically the Tories need to find 40-80,000 retirees or leavers this year who have not yet been accounted for. And whose jobs are so inessential that they don’t need to be replaced. Tough call, I’d have thought, when there aren’t so many other jobs out there to move to.

Furthermore, some of the cost savings are in things like office space, not salary. There’s always going to be a delay in realising such savings, because you can’t move to a smaller office every time someone retires and is not replaced.

Even furthermore, the cost in benefits of 40-80,000 people who would otherwise have had a public sector job to go to needs to be subtracted from the fiscal saving. Let’s be generous and assume that this has been taken account of in the £50,000pa annual cost of a public sector job quoted in the article. You can do your own sums if you want to assume the actual saving is less than £50,000pa (or less than £25,000 saving on average in the current FY, 2010-11).

Second, we’re discussing jobs in the overall economy. The FT article considers how the Tories propose to save an extra £12bn this financial year:

“Other cuts set out by Sir Peter include reductions in IT spending, yielding ‘potentially at least’ £2bn to £4bn. Renegotiation of contracts with suppliers of goods and services – which Sir Peter described as ‘not rocket science … it’s not about beating them up on price’ – would save about £3bn.

Cuts to ‘discretionary’ spending, such as consultants and staff expenses, should yield a further £2.5bn for 2010-11, he said. He declined to be drawn on a figure for property costs.”

Let’s see. Reductions in IT spending will cost jobs at IT suppliers, not all of them overseas. “Consultants” last time I looked were living, breathing working people as well. Reducing staff expenses would cost jobs indirectly as would renegotiation of contracts. The trouble is the lead time on renegotiation of contracts as well as “property costs” – realised presumably by selling offices – is months to years, so achieving the promised cashflow savings this financial year is implausible, to say the least.

I simply don’t find the Tory plans credible. They’d have more chance of getting my vote if they were actually honest about what they believed in. I remember Labour came to power in 1997 with a promise to stick to the Tory spending plans for the next two years. Cameron thinks he knows better. His position is contradictory – he said on the radio this morning that it was difficult for an Opposition to make spending plans, yet he’s confident he can make huge additional cuts this year. Cameron was once thought of as the new Blair. He now seems to have morphed into the new Thatcher. It seems to me that he’d give the economy the sort of shock treatment it received in the early 1980s. Steeply rising unemployment, an assault on the public sector and so on. Maybe it needed it then. I don’t know. But if it needs it now, perhaps Cameron should be making that case, not promising to save jobs when, at least in the short term, his policies are more likely to produce higher unemployment than would otherwise be the case.

Cameron is giving the impression that he can reduce public sector borrowing and unemployment this year and next compared to Labour’s plans. If he really believes this then he’s seriously wrong and not ready for the job of PM. If he doesn’t believe he can square the circle, then perhaps he should clear up the misunderstanding (or is he already planning to make his old chum George Osborne the fall guy when the Government can’t deliver?). The only other possibility is that he’s deliberately misleading the electorate.

April 8, 2010

Cold FT

Filed under: FT, Global warming, Media, Science, Science and the media — Tim Joslin @ 5:54 pm

I wrote earlier, in relation to a story in today’s Guardian, that: “Solving the GW problem is difficult enough without the constant drip-feed of confusing reporting of the issue.” Even worse, though, is when influential media editors themselves appear to be confused by sceptics. A colleague has drawn my attention to a recent FT editorial and a subsequent letter by David Henderson, who, it turns out, is a campaigning sceptic.

On close inspection, the FT editorial is troubling. It appears to support sceptic attempts to undermine climate science.

The FT’s first point is that scientists “must be open about sharing the data that underlie their findings”. Fine, we’ve all long since been agreed on that. But data has not been systematically kept as secret as some would have you believe.

The FT goes on to say, though, that “scientists should devote more effort to observation”. Worryingly, the FT seems to believe there is some doubt about the veracity of the recent temperature record. This is simply not the case. There is some debate about whether it was as warm – globally, or, more likely, just regionally – several centuries ago, during the so-called Medieval Warming Period, as it has been over the last couple of decades. This question will not be resolved by gathering more data now, and in any case will become increasingly academic as the world warms over the coming decades.

The FT concludes by suggesting that “scientists should give weight to all the evidence, not just the consensus”. This is confused on two levels. Debates about “the evidence” – data – are matters of detail, and the IPCC already reports differing findings.

What the sceptics really want is for the IPCC to “give weight to” different interpretations of the data. But many possible causes of warming, for example variations in solar output, are already taken account of. They are incorporated into the energy balance model that informs mainstream climate science. As Lionel Messi reminds us mere mortals, there’s always scope for improvement, of course. The next IPCC report is likely to reflect, for example, the improved understanding gained over the past few years of how natural climate cycles affect the way the planet is warming.

What’s left are alternative paradigms such as the idea that variations in the solar wind could cause fluctuations in the flux of cosmic rays entering the Earth’s atmosphere which in turn could affect cloud cover and hence climate. At present this explanation seems a little contrived and there are serious gaps in understanding. Research may eventually determine an effect that should be included in climate models. To ask the IPCC to “give weight to” the cosmic ray theory as an alternative explanation, though, simply makes no sense. It would be like asking someone doing a jigsaw to make use of pieces that belong to a different puzzle. The only way the cosmic ray theory – or any other explanation of the data – would make sense is if it is coupled with proof that greenhouse gases will not have the warming effect predicted by the vast majority of climate scientists.

Most students of the history of science would not recognise modern climate science as in crisis. The theory remains entirely coherent, without having to invoke ad hoc means to “save the appearances”, unlike for example cosmology, which over the last few decades has had to invent dark matter, dark energy and the rapid inflation of the early universe.

The FT appears to share the general confusion following not just “climategate” but years of sceptic sniping and of reporting on the complex global warming issue that misleads both deliberately and unintentionally.

Sceptics such as David Henderson are now taking advantage by dramatically exaggerating every potential flaw in the scientific process, like players on a losing football team feigning serious injury at the slightest provocation, in the hope that the referee will red card the opposition.

In fact, the way Henderson goes on in his letter you’d imagine all collective human endeavour is doomed to failure. How did we ever manage to organise ourselves to bring down a single woolly mammoth, let alone put a man on the Moon?

Ice Sickle

I continue to fret about the emphasis on the Arctic sea-ice extent as an indicator of global warming (GW).

I have to chop down (got to justify my blog entry title somehow!) a Guardian story, “Arctic sea ice still low despite winter recovery” (p.20 in today’s print edition), the online version titled incoherently “Arctic winter ice recovers slightly despite record year low, scientists say” and cryptically subtitled “Figures from the National Snow and Ice Data Centre [the NSIDC] indicate six or seven-year low over past three decades”. (They mean 2010 has had the 6th or 7th lowest maximum ice extent – which occurs in March – on record, i.e. of the last 32 years).

The story itself is garbled as well:

“Last night [NSIDC] released the data for the winter of 2009-10 showing the maximum extent reached on 31 March was 5.89m square miles (15.25m sq km). This was 250,000 square miles (650,000 sq km) below the 1979 to 2000 average for March…”

What the NSIDC actually said was that the average for March (15.10m km2 or 5.83m square miles – btw, wouldn’t it be simpler if we all standardised on km2?) was 250,000 square miles below the 1979-2000 March average. In fact, NSIDC’s news posting was titled “Cold snap causes late-season growth spurt” and noted that the maximum sea-ice extent occurred later than usual at the end of March, when the ice extent was only marginally below the 1979-2000 average for that date, as can be seen in the graph illustrating this BBC story about the launch of a satellite to monitor the situation.

I would have thought the real story was the recovery in the maximum Arctic sea ice extent compared to the last few years. “Arctic sea ice still low” is arguably a little misleading.

It is really not helpful to keep spinning Arctic sea ice shrinkage as an indicator of GW. There will be a vicious backlash should nature conspire to undermine the Arctic ice melt narrative. It will then become even more difficult to muster the political will to deal with GW.

The Guardian story goes on to note that:

“Last month, Japanese scientists reported in the journal Geophysical Research Letters that winds rather than climate change had been responsible for around one-third of the steep downward trend in sea ice extent in the region since 1979. The study did not question global warming is also melting ice in the Arctic, but it could raise doubts about high-profile claims that the region has passed a climate “tipping point” that could see ice loss sharply accelerate in coming years.”

Maybe this is what the researchers did actually say – I may have to go to the library to check – but, as I pointed out before, it makes no sense to try to distinguish “winds” from “climate change”. Winds are not caused by some arbitrary external force; they are determined by differences in temperature, albedo (reflectivity), moisture content and so on between different areas of the planet. Winds are part of the climate system that is changing, so it is simply meaningless to separate the cause of ice melt into “winds” and “climate change”.

Solving the GW problem is difficult enough without the constant drip-feed of confusing reporting of the issue.

April 6, 2010

Scraping Greece off the Floor

Filed under: Business practices, Concepts, Economics, Risk — Tim Joslin @ 5:33 pm

Wolfgang Munchau writes very pessimistically today at the FT that “Greece will default, but not this year”.

The core of the problem is a self-fulfilling prediction. Because of the risk of Greece defaulting, the yield on its bonds, and consequently the cost of new borrowing, is 3% over Germany’s. As Munchau points out, the market implies that there is a 17% chance of losing 17% of the value of Greek bonds (1.7 being approximately the square root of 3, 17% of 17% is approx. 3% – you could also say a 30% chance of losing 10% of value, or vice versa, etc. – just thought I’d point out the basis of Wolfie’s calculation). The Greek national debt would obviously become even more unmanageable after a few years of borrowing at such a premium, with debt repayments becoming an ever-increasing proportion of government expenditure. The cost of borrowing would rise even higher… Hence Munchau’s gloom.
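To spell out the basis of that calculation (a sketch: the 3% spread is from the column, and any probability/loss pair that multiplies to it is equally consistent):

```python
spread = 0.03   # Greek yield premium over German bunds: ~3%/yr of expected loss

# The premium compensates for expected loss: spread ~= P(default) * loss size.
for p_default in (0.17, 0.30, 0.10):
    print(f"P(default) {p_default:.0%} -> implied loss {spread / p_default:.0%}")
# 17%/18%, 30%/10% and 10%/30% are all consistent with the same ~3% spread.
```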

A sacrilegious thought has occurred to me. To avoid the interest-rate death-spiral self-fulfilling prediction, why doesn’t Greece simply say that existing bonds will bear the first loss? They could then issue “New Bonds” (TM) at something close to the rate for German bunds (as they call them in the trade). Hopefully, the Greek public finances would be in better shape by the time the stock of New Bonds is large compared to the Old Bonds. Maybe it would be best practice for countries to issue long-dated junior debt when times are good, to prepare for the next financial crisis…

My cunning plan might even reduce the yield on existing bonds, free as they would be of interest-rate death-spiral risk. Everybody would be happy. Except Wolfgang Munchau, of course – he’s never happy.

Come on Greece, you’re a sovereign state. Almost. You can do what you like! Why borrow at +3% (that was yesterday, it’s +4% today, apparently) when you don’t have to?

And did I say the idea is sacrilegious? For starters, corporates can and do reorganise their capital by buying back their own debt below its nominal value and issuing more under different terms. And in fact many governments have reduced the cost of paying down the national debt by increasing the risk of existing borrowing. They do it simply by selling assets, up to and including the right to raise taxes. Buyers purchasing an income stream – for example, in the UK, the right to collect tolls on the Dartford Crossing is apparently for sale – are logically the same as bond investors. The difference is that asset purchasers don’t have to worry about the rest of the national debt – which, of course, becomes more difficult to fund without the income stream. Essentially, asset sales or privatisations are a conspiracy between governments and the asset purchasers against existing bond holders. In stark contrast to asset purchasers, new bond purchasers only rank pari passu with existing lenders. At least, until Papandreou reads this…

If the interest-rate death-spiral trap can be avoided by selling off income streams anyway, why bother with the pretence? Simply issue “New Bonds”.

April 4, 2010

BBC Muppets in Malaysia

Filed under: BBC, F1, Media, Sky, Sport — Tim Joslin @ 7:03 pm

I wonder how many other F1 fans missed the last 19 laps of this morning’s Malaysian GP? Yep, the BBC switched coverage from BBC 1 to BBC 2 part way through the race – on lap 37 of 56, to be exact. Totally unnecessary. Apparently they needed to make way for some religious event. But they could have shown the Easter Service on BBC 2 instead of BBC 1, or, if that wasn’t acceptable, the entire race on BBC 2. In fact, I’ve just bashed off a complaint.

I’ll let you know how much of my licence fee is refunded for cutting short a race I was thoroughly enjoying. (I was watching maybe an hour behind real-time on a PVR, having paused at various stages of breakfast, so, by the time I found out the end of the race was on the other side, it was way too late.)

I couldn’t help reflecting on how poor the BBC’s sports coverage can be. And how little competition there is in general in the sports broadcasting market.

A while back I had an idea as to how to sort the mess out. Maybe if I keep saying it people will take some notice.

To recap, my suggestion was to sell rights to sporting events to multiple bidders. If the highest bid is, say, £100m (for exclusivity), then the rules would allow one of the other bidders to obtain the rights for £55m, the first bidder also paying £55m (joint exclusivity). If another bidder comes in then all three would pay, say, £40m. This prevents one organisation achieving a monopoly, but, aside from that, leaves everyone better off – fans get a choice of channel (or delivery mechanism if one is an internet broadcast); the winning bidder saves some money (so can afford to buy some more content); the second and subsequent bidders get access to content they wouldn’t have obtained otherwise; the sport gets more money (£110m with two bidders, £120m with three, rather than £100m with a single bidder, in this example) and more coverage.
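To make the revenue arithmetic concrete, here’s a sketch using the illustrative discount schedule above:

```python
# Share of the winning exclusive bid that each rights-holder pays, keyed by
# the number of broadcasters sharing the rights (the illustrative schedule).
PRICE_SHARE = {1: 1.00, 2: 0.55, 3: 0.40}

def sport_revenue(exclusive_bid_m, n_broadcasters):
    """Total paid to the sport when n_broadcasters share the rights."""
    return n_broadcasters * exclusive_bid_m * PRICE_SHARE[n_broadcasters]

for n in (1, 2, 3):
    print(f"{n} broadcaster(s): {sport_revenue(100, n):.0f}m to the sport")
# 100, 110, 120: every extra broadcaster means more money and more coverage.
```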

One problem with Ofcom’s current plan to regulate the sports broadcasting market is that it only generates competition between platforms: Sky, Virgin, BT and so on will all effectively pay the same price for Sky Sports channels.

It does nothing to encourage new, perhaps cheaper sports channels, and in my judgement makes channel competition even less likely.

Platform providers should never have been allowed to run channels in the first place. This should have been addressed when Sky first came along. Maybe Murdoch’s outrageous interference with our democratic processes has muddied the waters. A little clarity of thought would surely have led regulators to nip such a dysfunctional monopoly in the bud.

The current measure – to regulate the wholesale price for Sky Sports bundles – is not as sensible as some commentators seem to think. Immediate suspicion is always justified whenever very specific regulatory measures are proposed. If anything, this one will cement the position of Sky Sports. There are “two sides” in this, but they’re not Sky and the competition. They’re the customer and the sports themselves.

And the customer is getting screwed. Sky Sports will cost at least £20 a month (since the wholesale price is £17.14). That’s £240 a year. The BBC’s entire licence-fee is about £140. So they can’t compete, unless, as I also suggest, they charge a separate sports licence-fee (since many people watch no sport at all), once the analogue switch-off ensures that all viewers can have access to dozens of channels (which could include BBC Sports 1, 2…).

Many people will not take Sky Sports, but make do, as now, with the residual “free to air” coverage on BBC and ITV, some of it controversial because it arguably reduces the income to the sports.

If you have a monopoly, in this case Sky Sports, profit is maximised at a price at which some people can’t afford the product. That’s just how it is – 10 million paying £240 a year (£2.4 bn) brings in more money than 15 million paying £120 (only £1.8 bn). If putting up the price increases your margin by 10% and loses you less than 10% of your customers, you may as well do it.
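The sums, with the illustrative subscriber numbers above:

```python
# price per year (pounds) -> subscribers who would pay it (illustrative)
scenarios = {240: 10_000_000, 120: 15_000_000}

for price, subs in scenarios.items():
    revenue_bn = price * subs / 1e9
    print(f"{price}/yr x {subs/1e6:.0f}m subscribers = {revenue_bn:.1f}bn")
# 2.4bn beats 1.8bn: the profit-maximising price excludes some fans entirely.
```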

But more people could afford to watch at least some sport if there were offerings at, say, £240 (Sky?), £120 (ESPN, ITV?) and £60 (BBC?) a year. In this scenario Sky would show (as now) a lot of sport and the BBC just the choice events. Because multiple broadcasters would show at least some events, Sky would actually be able to show even more sport for £240. And sports wouldn’t be disadvantaged if mandated to allow “free-to-air” coverage. (Sorry, the use of the term “free-to-air” makes me laugh – ITV is free to the viewer, since ads pay the bills, but BBC coverage is not “free”, it’s just that you’re not allowed not to buy it! A situation that’s becoming increasingly bizarre.)

And you’d even be able to choose which channel to watch those choice events on!

All this while the sports themselves earn even more from the rights!
