Uncharted Territory

January 18, 2011

On Hulme on Science

This post is an addendum to my previous musings on Mike Hulme’s Why We Disagree About Climate Change. In particular I want to respond to Paul Hayne’s comment that:

“Mike Hulme’s argument is not relativist. He is arguing that there really is no argument that can leverage action, which seems pretty true.”

OK, I suppose – after re-reading Chapter 3 of Why We Disagree – my claim did go a bit far. However, I’m not sure I want to concede the point fully.

First, I’m not the only one who’s confused. What was uppermost in my mind, I think, were the comments about Why We Disagree made by Philip Kitcher in Science (pdf), to which Hulme refers on his website, as detailed in my original post.

I concur with Kitcher’s view that “Hulme’s book invites misreading” and share his disquiet over Hulme’s infamous passage (p.80-2) discussing how science “must concede some ground to other ways of knowing.” There is, though, a way in which this makes sense, which Hulme doesn’t identify and which doesn’t in any way undermine science.

Second, any critique of science must always address the fundamental precept that science is about testing theories against reality. It either describes the world or it doesn’t. There’s no room for compromise with “other ways of knowing”.

There’s one little fly in the ointment, though, which is very apparent in the social sciences. Concepts are not always easy to define. What is “poverty”, for example? Before you can study “poverty” you have to get out there and translate what people mean by “poverty” into something or things that you can actually measure.

Hulme refers to “local tacit knowledge”, which he patronisingly suggests is “not conventionally classified as scientific knowledge”. He muddles strategies for coping with climate conditions with describing “environmental change” and weather-forecasting, but certainly some of what he’s driving at very much is scientific knowledge – climate science relies on interpretations of subjective historic anecdotal evidence in diaries, ships’ logs and so on.

The issue is merely about communication between scientists and those affected. In the case of climate change, science may need to translate its scientific predictions – expressed in terms of directly measurable parameters – into language that relates to people’s day to day experiences. But those experiences are not “other ways of knowing”.

Let’s take the example of “severe winter weather” in the UK, since “here’s one I prepared earlier”! As I explored recently – there is no direct correlation between measurable parameters and the common perception of, in this case, what constitutes a “cold winter”. No-one writes books about, say, February 1986, which was exceptionally cold, whereas (slightly) milder conditions with more snow, such as the Winter of Discontent (1978-9) and perhaps December 2010, linger much longer in the collective memory.

Science could, in principle, develop a “severe winter” index which included temperature extremes, averages, snowfall, lying snow days and so on. Trouble is, different people would want to constitute the index differently. Hence we all have to refer to the same variables if we want to make comparisons. This is what science is. It doesn’t stop us all making our subjective judgements, though.
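Just to make the point concrete, here is a minimal sketch (in Python) of what such an index might look like. Every variable, threshold and weight in it is an assumption of mine, which is precisely the problem: choose different weights and the cold-but-dry winter and the milder-but-snowy one swap places.

```python
# A minimal sketch of a hypothetical "severe winter" index.
# The variables and weights are illustrative assumptions, not an agreed standard:
# the whole point is that different people would weight them differently.

def severe_winter_index(mean_temp_c, min_temp_c, snowfall_cm, lying_snow_days,
                        weights=(0.3, 0.2, 0.3, 0.2)):
    """Combine a few measurable winter variables into one severity score.

    Higher scores mean a 'more severe' winter: each input is turned into a
    rough severity component (colder / snowier -> larger), then the weighted
    components are summed.
    """
    w_mean, w_min, w_snow, w_lying = weights
    components = (
        w_mean * max(0.0, 5.0 - mean_temp_c),    # how far the mean falls below a mild 5C
        w_min * max(0.0, -min_temp_c),           # depth of the coldest temperature
        w_snow * snowfall_cm / 10.0,             # total snowfall, scaled
        w_lying * lying_snow_days,               # days with snow lying
    )
    return sum(components)

# Two invented winters: one very cold but dry, one milder but snowy.
print(severe_winter_index(mean_temp_c=-1.0, min_temp_c=-15.0, snowfall_cm=5, lying_snow_days=3))
print(severe_winter_index(mean_temp_c=2.0, min_temp_c=-8.0, snowfall_cm=40, lying_snow_days=20))
```

With these particular weights the snowy winter comes out as the more “severe” one; tweak them and the ranking flips, which is exactly why we end up having to agree on the underlying variables rather than the index.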

So, there’s an inescapable conclusion: we have to agree on a framework, on what we can measure in order to make objective comparisons.

And this is the real weakness of Hulme’s work. In terms of both the science and making decisions on emission trajectories, we need a quantitative framework, or we simply can’t reach any sort of agreement. It’s all very well to note that people have different values, but we can’t conceivably ever agree on what is an acceptable level of climate change on the basis of religious and political views. It is irrelevant, on one level, that the media distort the debate, as Hulme goes on to discuss in Chapter 7, The Communication of Risk. This doesn’t alter the consequences of different courses of action, and therefore the optimum path, by one iota.

It might also be worth noting, en passant, that it is in fact historically somewhat unusual for public opinion to matter greatly in decision-making. The media has influence only because of our current political system. At most other times in history a ruler or elite would simply have made the decision. The long-term interest of society as a whole was the responsibility of a small group, not something actively contested between different interests. Maybe, as a civilisation, we need ways of making a clearer distinction between the general interest and the individual and sectional interests that drive our political processes. Tricky stuff!

Nevertheless, just as we can only meaningfully discuss and quantify the physical phenomena of climate change within the agreed framework that we call science, so we can only decide on a course of action in response to global warming by agreeing a framework that permits quantification.

And that framework is called economics.

January 6, 2011

Musing on “Why We Disagree About Climate Change”

I give in. Over the holiday season I’ve been improving my education by reading Mike Hulme’s thought-provoking book Why We Disagree About Climate Change: Understanding Controversy, Inaction and Opportunity. Maybe I’m committing a cardinal sin, but I’m going to try to capture my thoughts on Why We Disagree without a clear idea where they’re leading. Perhaps because I haven’t finished reading the book yet!

Whilst I would recommend Why We Disagree as an introduction to a set of issues that are too rarely discussed, and then usually only superficially, I find myself alarmed at the way the debate is going. The movement that has grown up in response to the global warming threat is more deeply confused than ever. There is a lack of clarity of thought around far too many aspects of the problem of global warming and how to organise a coherent response.

But what prompted me to jump the gun was committing another cardinal sin. I idly followed a backlink to this blog. It was automatically generated, but turned out to be very relevant. It led to a post at Haunting the Library, a new GW sceptic blog, which laid into a truly cringeworthy Guardian CiF piece by one Polly Higgins.

Polly Higgins argues for a “law on ecocide“. She particularly wants to prosecute corporations. Presumably she wants to create a world government or at least rewrite the Treaty of Westphalia.

Preamble: Power and the Law

Putting to one side Polly Higgins’ unwise use of comparisons between ecocide and Nazi genocide – I kid you not, read her piece – she misunderstands the role of the law.  The law expresses power relations between society and the individual.  Once the law proscribes (or prescribes) behaviour of some kind, the norm – for example, “don’t sell drugs” – has been agreed.  OK, the legal system is a contested space, but probably best to fight only battles you have a chance of winning.

Enacting a law does more than express a societal norm.  It also criminalises the behaviour it proscribes (or in the case of civil law recognises plaintiffs’ rights to redress).  There are other options than criminalising undesirable behaviour.  Once you have the power to enforce an agreed norm, e.g. if you’re an elected government, you could tinker with incentives.  In the case of drink, for example, rather than try to enforce a law limiting consumption you can raise taxes on alcohol.  When it comes to drinking and driving, though, a strict legally enforceable limit is considered appropriate.

So, what Polly Higgins fails to appreciate – besides the social norm of avoiding casual comparisons with Nazi atrocities – is the need for a coherent two-step plan:

1. Establish the power to prohibit or limit specific behaviour.

2. Identify the appropriate levers to control said behaviour.

A problem with global warming is that it is global in its diverse causes and global in its diverse impacts.  (1) is therefore difficult to achieve.  This might not matter if we either had a world government with the power to deal with the issue or the states that are prepared to project their power, principally the US, were prepared to do so in support of the cause.

But even if (1) were achievable, (2) also needs to be addressed.  Criminalisation might not be the most effective strategy.  The absurd, prohibitionist “war on drugs”, for example, arguably creates more problems than it solves.   In general, I suggest, it’s not a good idea to try to criminalise behaviour in which many people are engaged.

Where the issue gets interesting of course is in the interaction between (1) and (2).

The more power you have the easier it is to criminalise behaviour.  If you happen to be in control of a totalitarian government you can outlaw whatever you like.

On the other hand you need very little power to change incentives.  Simply by buying something you change behaviour throughout the supply chain.  To make a real difference you need to do a bit more than that.  But people are prepared to accept taxes that increase the price of things they may enjoy, but know aren’t good for them, at least in excess.  Hence tobacco and alcohol duty are accepted and are effective to some extent, whereas prohibition would fail, even if the laws required could be passed in the first place.

In the case of carbon emissions, where on the power-spectrum is Polly Higgins, do you think?

Which brings me on to Mike Hulme’s book, Why We Disagree About Climate Change.

Hulme’s project is an important one, and well executed, but, at least as far as Chapter 6, I fear a critical theme has been omitted and that this undermines the whole discussion.

Society and the Individual

Society is not the sum of individual interests.  Let me repeat that in case anyone missed the point.  Society is not the sum of individual interests.

Consider how behaviour is regulated.

Society as a whole attempts to regulate the behaviour of individuals by legal or other means as seen in relation to Polly’s counterproductive proposal to criminalise “ecocide”.

Individuals’ behaviour takes into account sanctions and incentives society as a whole may put in place.

Society has an interest in maintaining itself indefinitely; individuals may or may not be concerned about the future.  Society and individuals are qualitatively different.  Much of the time we pursue our individual interests.  But we expect certain agents to act in the interests of society.

We have collective values.  We expect society to maintain itself even if it doesn’t directly affect us.

Let’s cut to the chase.

Comparing Markets and Values – a Category Error

Chapter 5 of Why We Disagree, The Things We Believe, is primarily about religion but suggests three categories of solutions: correcting markets, establishing justice and transforming society (section 5.4, p.161-9).

But we’re comparing apples and lemons.  Market interventions are a way of achieving agreed objectives within the whole of society, in this case globally.  Establishing justice and transforming society are motivations of individuals or groups for influencing the objectives.

It is alarming not just that Hulme writes:

“These sweeping ideas for commodifying carbon and globalising its market through either free or regulated trade sit uneasily with many of the beliefs expressed in religious and other spiritual traditions”.

but also that he notes WWF UK have:

“…suggested that there are inherent contradictions in ‘attempting to market less consumptive lifestyles using techniques developed for selling products and services.’ “

This is simply incoherent.  Especially to a WWF “Supporter” who yesterday participated in an online WWF “Membership” survey.  If market research isn’t a “technique developed for selling products and services” I don’t know what is.

If we actually want results, I suggest we have to leave it to the technocrats to determine how best to reduce carbon emissions.  All that pressure groups can and should do is build support for objectives, such as reducing carbon emissions or, perhaps more broadly, avoiding dangerous climate change.

The problem is all pressure groups have their own over-riding objective – to boost their own support.  They need to have a coherent ideology.  They need to sell the illusion that it is their brand that is effective.

Of course, WWF is easy to pick on, in part because they don’t in fact have Members, only Supporters.  They’re not alone in spinning in this particular way, but to my mind, strictly speaking members have – or at least can have – some kind of influence.  This is not the case in WWF.  Policy is not determined by votes of the members, for example.  WWF is selling you something.  And the ratio of brand to product is very high!

If we really want to stop global warming we need to recognise the constraints on coordinated global action.  Market-based solutions of some kind are pretty much the only game in town.

Society, Individuals and Value

Hulme reports in Chapter 4, The Endowment of Value, on the discounting of the costs of climate change as used, for example, in the Stern Report.  Every time I read about this I find myself entirely bemused, and Hulme, I think, has unintentionally helped me put my finger on the problem.  What am I saying?  Why give Hulme the credit? – I knew this already, I was just hoping Hulme did too.

Stern’s economic analysis – or at least his use of discounting of future costs – looks at the problem from the perspective of individual actors (people or commercial enterprises, say).  We need to consider society as a whole.

The issue is around the discount rate – how much future costs of damage caused by climate change should be reduced by each year when compared with costs in the present.

As Hulme reports (p.126), Stern used two discount rates, a “time discount rate” of 0.1% and a “per capita growth rate” of 1.3% since “we” will be wealthier in the future.

The time discount rate always gets me.  It’s to allow – and this is serious – for the possibility of the human race going extinct before our global warming problems come home to roost.  This is ludicrous.  The logic implies that if we don’t “go extinct” we won’t have invested enough in preventing global warming.  Presumably this is why people don’t save enough for their pensions.  Obviously there may only be a 90% chance of needing the money – you could die or win the lottery.

Imagine if the scenario were just a little different.  Imagine extinction of the human race is certain if CO2 reaches say 500ppm.  We have to spend £10bn now to stop that happening in 100 years.  Ah, but we may be extinct anyway!  So let’s spend just £9bn (that’s £10bn discounted by 0.1% for each of the next 100 years – I know this should be compounded, so it’s actually a shade over £9bn, but that’s not the point.  The point is…).  Damn, we haven’t spent quite enough – CO2 will hit 500ppm and wipe us out.  As I say, this is ludicrous.
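For the record, here’s that arithmetic as a couple of lines of Python (the £10bn and the 100 years are, of course, just my made-up example):

```python
# Quick check of the 0.1% "time discount rate" arithmetic in the example above.
rate = 0.001          # 0.1% per year
years = 100
future_cost = 10e9    # the £10bn needed to avert disaster in 100 years' time

simple = future_cost * (1 - rate * years)       # knock off 0.1% a year: £9.0bn
compound = future_cost / (1 + rate) ** years    # compounded properly: ~£9.05bn

print(f"simple:     £{simple / 1e9:.2f}bn")
print(f"compounded: £{compound / 1e9:.2f}bn")
```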

OK, so that’s the overall discount rate down from 1.4% to 1.3%.

What about this “per capita growth rate”?  The argument is that it’s not worth spending £1 today to save £1.013 next year (compounded over time, so it adds up, don’t it), because we’ll be able to create £1.013 worth of goods and services (or capital) next year for the same effort it takes to create £1 this year.

Very persuasive.

Not.

The problem I have with this is that I can’t think of any effect of global warming that won’t rise in proportion with the per capita growth rate.  In fact, it’s very easy to make a case for many costs to rise faster than the per capita growth rate!

Consider the loss of capital assets caused by global warming.  For example, a city may be destroyed due to rising sea-levels or increased storminess or both. The value of that city in the future will be greater than it is today.  Its productive capacity will have increased, as a best estimate, by – you’ve guessed it – the per capita growth rate.

Or people may be killed due, say, to the effects of a heatwave.  The direct economic loss will be their own individual productive capacity, which will on average have increased from today’s by – you’ve guessed it – the per capita growth rate.

How do we put a value on quality of life?  Well, biodiversity, the preservation of cultural artefacts and ways of living (referred to by Hulme as “natural capital and the aesthetics of living”, p.115) will, at a best guess, be worth as large a proportion of everything we consume in the future as they are today.  That is, the costs of damage to the environment will have increased by – you’ve guessed it – the per capita growth rate.
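Put another way: if the costs of climate damage grow at the very per capita growth rate we’re discounting by, the discount cancels out exactly and buys you nothing. A one-line sanity check, using Stern’s 1.3% figure:

```python
# If climate damage costs grow at the per capita growth rate g, then discounting
# them at g leaves their present value exactly where it started.
g = 0.013            # Stern's per capita growth rate, 1.3%
years = 100
damage_today = 1.0   # £1 of damage, valued at today's prices

damage_in_future = damage_today * (1 + g) ** years    # damage grows with the economy...
present_value = damage_in_future / (1 + g) ** years   # ...then gets discounted straight back

print(present_value)   # 1.0 - the growth-rate discount has cancelled out entirely
```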

Of course, it’s easy to argue that in the future we will value our own lives, the lives of others and the environment relatively more highly than we do now.  And Hulme even cites Ronald Inglehart, who has made the persuasive argument that concern for the environment is a luxury good, valued more highly the wealthier we become.  (Damn, something else I can’t claim to have thought of first!).

I might even add that the only way per capita productivity growth can occur is if we become more interconnected and specialised.  The relative cost of damage to capital and loss of life will therefore be higher in the future.  The more we rely on each other the more costly disruption becomes.

I can see no justification whatsoever for using a discount rate of greater than zero.  Arguably it should be negative.

Stern correctly argued that we should have no pure time preference.  That is, we as individuals may prefer jam today over jam tomorrow (so don’t put enough in our pension funds), but society as a whole transcends time.  We’re all goners if we do anything other than weigh benefits to me now as at most equal to costs to someone else at some indeterminate time in the future.

Science and Society

Hulme has me really alarmed in his discussion of science (Chapter 3).  We disagree about climate change because we disagree about the science, apparently.  Yes, of course, but society as a whole needs some way of evaluating the risks.

Hulme seems to take a relativist position akin to that of Steve Fuller.  Take this summary from his website:

“In Philip Kitcher’s wide-ranging essay in Science on ‘The Climate Change Debates’ [pdf] I am struck by two things – which are not very new, but which are very important. First, is how the framing and public discourse around climate change differs between countries: as Kitcher puts it, where ‘societies … are inclined to see matters differently’. This is brute fact sociological reality, just as non-negotiable as the radiation physics of a CO2 molecule. Recognising this means that as soon as scientific knowledge enters public discourse – whether this knowledge is robust, imprecise or tentative – different things will happen to it and different social realities will be constructed around it. For me, this is the essence of the climate change phenomenon.

The second, related, thing to emphasise is how predictive claims about the climate future – and its impacts – are inextricably bound up with imaginations (e.g. scenarios) and value judgements (e.g. discount rates) about the future. One could argue that such considerations fall within the legitimate reach of ‘climate science’ and the elite scientific expertise Kitcher claims any genuine democracy needs. But for me it is these extra-scientific dimensions of climate change ‘knowledge’ which motivated me in my book ‘Why We Disagree About Climate Change’ to challenge a narrow appeal to science for engaging our publics around the idea of climate change. It really is not about ‘getting the science right’. It is just as much about engaging our imaginations, about facing up to the ways different peoples and cultures construct meaning for themselves, about the very different values we attach to the future. And because of this I don’t believe Cassandras such as Jim Hansen and Steve Schneider should have the last word.”

I’m baffled how he can refer to James Hansen as a “Cassandra” – Kitcher’s essay suggests only that Hansen and Schneider “play the role of Cassandra”, standing outside the debate rather than within it. I can imagine accusing the scientists of hubris, perhaps, thinking they know more than they do, but crying wolf [which is what I assume Hulme means – otherwise his comment would make no sense at all – since the point about Cassandra is she was right and nobody listened!]?  No.  Hansen certainly calls it as he sees it – everything I’ve ever seen of his has been rooted in what I would describe as fairly mainstream science.  (Though I’ve ordered “Storms” from Amazon anyway – it’s just come out in paperback).

Hulme urges us to face up to “brute-fact sociological reality”, as if this represents some new way of looking at the world, something that – perhaps with one exception – we’re all missing.  Global warming is a physical phenomenon – and by the way, sociological realities are nothing if not negotiable, unlike the radiation properties of a CO2 molecule – but of course it’s a sociological problem.  The same way as anything from dying to the financial crisis is only a problem if we want it to be.  All “problems” are socially constructed, by definition.  CO2 molecules don’t have problems, only sentient beings do.

Society defines problems and has to construct ways of dealing with them.

The nature of the global warming problem depends on the physical phenomenon.  Sure, we can choose whether to say “oh, that’s interesting” or to define it as a problem.  But we can’t do that unless we understand the physical phenomenon.

So, in agreeing the extent of the problem, we have to use the best socially-constructed mechanisms we have.  All science is, is a method of determining the most effective ways of predicting (and therefore controlling) physical phenomena, essentially by testing interventions against outcomes.  There’s some logic in there and quite a bit of institutional gubbins, but unless we collectively – and we’re talking about global society here – come up with a better “way of knowing”, it really is a matter of “getting the science right”.

Once we’ve done that we can argue about what to do about it. Or rather, we have to argue in parallel, since science – like society – never ends.

But we mustn’t muddle up things we can measure – temperature, say, or precipitation, or the ability of particular species to survive – with how we feel about those things or our desires as to the kind of society we’d like to see in the future.

And when we come to try to put into effect what we’ve decided to do, maybe it would be a good idea to use methods that are to some extent predictable, and perhaps those that past experience would suggest are most likely to be successful – which probably won’t include implicating much of the global population in a new heinous crime of “ecocide”.  As I said, we’ll have to give it to the technocrats.  Who will almost certainly use markets to do a lot of the heavy-lifting.

We can afford to let individuals be irrational, but society itself has to be rational and objective.

I suspect Hulme is going to go on to tell me in subsequent chapters that society is not rational and objective.  Quelle surprise!

The whole point is that when it comes to existential threats we can’t afford not to be rational and objective.

Maybe global warming isn’t in fact an existential threat.  Maybe we can use it as terrain we can contest with our various views of how we’d like the world to be.

But before we conclude that we can afford to disagree about climate change, it would be a good idea, perhaps, for us to remain rational and objective long enough to determine the nature of the physical phenomenon within somewhat tighter constraints than at present.

At the moment we disagree about climate change because, like naughty children, we think we can get away with doing so.

(to be continued…)

July 13, 2010

Managing Climate Expectations

It’s rather worrying, when you think about it, that the Himalayan glaciers may be completely gone by 2350, according to the IPCC. The only trouble is, nobody cares, because they erroneously claimed they’d be gone by 2035.

I’m becoming more and more concerned by the tendency of climate change cheerleaders to try to find worse and worse evidence of climate change. Consider a couple of recent posts at Realclimate.

First we have Recent trends in CO2 emissions. The lead author is Corinne le Quéré, who is an oceanographer, not an economist.

The argument is over the extent to which actual carbon emissions have exceeded the IPCC emission scenarios – the authors of the post seem to be keen to emphasise the overshoot. But these scenarios are purely designed to illustrate how atmospheric CO2 and other GHG levels might increase over the rest of the century and therefore how much warming might occur, if corrective action isn’t taken. They are guesstimates, with no basis in science, social or otherwise.

My personal opinion is that the scenarios have outlived their useful life, since the key determinant of future emissions will be the effectiveness of corrective action, not whether future economic growth is “fossil fuel intensive” or whatever.

In 2008, as discussed in the post, the actual emissions were higher than nearly all the IPCC examples. But towards the end of 2008, remember, the global economy fell off a cliff. The economic growth and hence emission levels in 2008 were clearly unsustainable. The Realclimate post also notes – rather than emphasises – that projected emissions in 2009 exceed the bulk of the scenario projections by less than was the case for 2008. No further projections are given.

Clearly, we can only determine how close the scenarios are to reality over a long period, and especially by taking account of the business cycle.

Why we’re even discussing the fit between the IPCC scenarios and actual emissions in a given year is beyond me.

It seems to me there is a dangerous tendency on the part of advocates for action to mitigate climate change to promote data showing the situation is worse than expected. This is unwise. It polarises the debate even more. Scrupulous objectivity is essential.

The worst example of “worse than expected” syndrome is the reporting of Arctic sea-ice, as I highlighted on here some time ago (subsequent posts are linked via the comments).

A number of commentators, such as Joe Romm [see Note], report the state of the Arctic sea ice on an almost daily basis (the NSIDC provides daily data, which I see show the ice extent is now greater than in 2007 – perhaps we should revise our whole opinion on global warming!).

When I first started investigating the possible natural cycle in the Arctic sea-ice back in February, I noted:

“If I were a climate specialist about to make a song and dance over a particular piece of evidence for GW, I think I’d make pretty sure the phenomenon in question hadn’t happened before.”

I’m currently trying to collate my thoughts on the AMO – blogging has its plus side, but it’s pants for organising information – and I’ve come across a few tidbits in the IPCC’s latest report (AR4).  Here’s what the Technical Summary has to say (section TS.3.1.2, p.37):

“The warming in the last 30 years is widespread over the globe, and is greatest at higher northern latitudes. The greatest warming has occurred in the NH winter (DJF) and spring (MAM).  Average arctic temperatures have been increasing at twice the rate of the rest of the world in the past 100 years. However, arctic temperatures are highly variable. A slightly longer arctic warming period, almost as warm as the present, was observed from 1925 to 1945, but its geographical distribution appears to have been different from the recent warming since its extent was not global.” [my italics]

As I said, notwithstanding the last desperate clause, which I don’t even recognise as a scientific statement (in my opinion the 1925-1945 warming was every bit as global), it’s happened before.

I know that even the Technical Summary of AR4 is not something you actually read, but you might expect commentators like Joe Romm to have browsed the thing.  Failing that, you’d have thought they’d at least look at the IPCC’s pictures.  Here’s one I haven’t posted on here before – check out the top panel in particular:

I say it again: claiming short-term changes in Arctic sea-ice extent “prove” GW is exceptionally foolish.  There may be a cycle – on top of the GW trend – which has overshot, which could mean a decade or more of sceptics saying the ice-melt – and hence GW – has “reversed”.  Hysterical climate change popularisers such as Joe Romm are becoming less part of the solution than part of the problem  (which reminds me that maybe I should try to get hold of Mike Hulme’s book, which I gather makes a similar point).

It should be a priority to understand (or debunk) the AMO cycle.  Which makes my attempts to raise awareness on the other Realclimate post I mentioned at the outset all the more frustrating.

————-

Note: I originally started writing this post weeks ago – I just discovered it amongst my drafts.  It was at the exact point where I’ve written “see Note” that I was distracted by the question of where the precise dividing line lies between nationalism and racism, as illustrated by the case of Joe Romm.  Since then I’ve been much amused that the search parameters used to find my blog have included “Joseph Romm asshole”!  I feel myself under no pressure to pull my punches in the rest of this post.

June 24, 2010

Joe Romm’s New Scapegoat

This is unseemly, I know.  But it’s very disappointing when someone you thought was on your side reveals their true colours.  Ask Obama how he feels about General McChrystal.

BP’s CEO Tony Hayward has wisely decided to go into hiding.  So a new scapegoat has been found.  A Senator Joe Barton has dared to describe (Youtube clip) the $20bn the President has demanded as a “shakedown”.  He makes some perfectly reasonable comments.

Clearly the word “shakedown” must be incredibly offensive to Americans.  Much more offensive, I presume, than using the term “veddy veddy” to (presumably) mock the English accent.

I don’t know how many readers Joe Romm has, but I would imagine a good proportion of them are in the UK.  I would have thought alienating them was straight out of the Tony Hayward school of poor PR.   Once again, the irony of it!

Oh, and maybe there are only 60 million Brits.  But there are over a billion Indians.  And they’re noticing the double standards being applied.

Shockingly, my comment in support of Joe Barton on Joe Romm’s silly post has been “awaiting moderation” since this morning.  I can only conclude I’ve been ostracised.  Lucky I have a thick Brit skin.

Anyway, here’s my shocking point of view:

“Well, I think Joe Barton has a valid point. There’s no legal basis for expropriating the $20bn from BP. It’s for consequential losses (e.g. the effects on tourism, fishers etc) for which the legal framework is compensation by fines per barrel of spill. So maybe BP is going to have to pay twice. Who knows?

Btw, to declare an interest, or rather not, I invest my spare pocket money in alternative energy not oil companies (though some Big Oil shares might be part of my managed pension fund). Anyway, I’m underweight BP, so I shouldn’t be so bothered. But this is a point of principle.

Joe Barton is just being shouted down by public opinion.

Since when are opinion polls a reliable guide to what’s right or wrong, true or false?

We have, on the one hand, public opinion, rather than the pre-existing legal framework, determining how much compensation BP should pay.

But, on the other hand, we have global warming, where, many would argue, we hope our leaders will listen to the science and not fickle public opinion.

Smart move, guys. Smart move.”

December 8, 2009

Playing the Energy Game

Filed under: Books/resources, Climate change, Energy, Energy policy, Global warming — Tim Joslin @ 7:13 pm

I mentioned last week that I was planning to play the Energy Game based on David MacKay’s book Sustainable Energy Without the Hot Air (SEWTHA) at the Science Museum’s Dana Centre. Well, I’ve been true to my word, although I arrived there slightly breathless after taking the Victorian tunnels from South Kensington tube station. It turns out the Dana Centre is at the other side of the museum, nearer Gloucester Road station.

Anyway, the “game” was worthwhile. It involved adjusting the UK’s energy supply and demand by using two columns of magnetic blocks to represent (decarbonised) energy supply (different flavours of wind and solar power and so on) and demand alleviation measures. We were formed into (moderated) groups to carry out this exercise and then presented our solutions in a final plenary session. All very MBA.

The attendees were all sensible and well-informed. I was therefore quite surprised by some of the outcomes. I also felt the game constrained thinking a little too much. To improve it significantly would require a software implementation and I wonder if the organisers will consider creating one.

The attendees as a whole seemed very accepting of biofuels (not our group, though, but it took quite some discussion), even though the small contribution to our energy supply suggested by the game took up 20% of the UK’s land! There was also a general distrust of carbon capture and sequestration (CCS) and of nuclear power. Partly this was because the constraints of the game allowed only a very small proportion of the UK’s energy to be obtained from these sources, which are not entirely renewable. I felt the idea that we could only use a small amount of nuclear power because existing known reserves had to be divided equally around the world was particularly suspect.

Tony Robinson suggested last night that Neanderthal man died out during a previous episode of climate change (a Heinrich cooling event during the last ice age) because he failed to trade, unlike our own ancestors. If we are to solve the energy problem, then it seems to me trade must be at the heart of the solution. For me, the Energy Game as it is now builds in too much UK self-sufficiency (though it is inconsistent in addressing the issue, since it does allow desert-based concentrated solar power (CSP) to be a large part of the solution). The UK is an arbitrary market in the modern world, for starters: why not English or European self-sufficiency?

Incidentally, if we are not prepared to build the Supergrid, then those countries poorly endowed with renewable energy relative to their consumption will be obliged to bid up the cost of the world’s uranium supplies and go nuclear. They will end up with more than their “fair share” because other countries will be better off using solar, wind etc. Whether the UK is one of these countries should be for the game to discover.

The game did allow energy price to be taken into consideration. This didn’t stop most groups spending a fortune on rooftop (or other) PV in the UK.

There are several aspects of the Energy Game that could be better captured in a computerised version:

1. An easier and more accurate cost analysis. Costs of demand management actions (e.g. improved insulation of buildings) also need to be included (they weren’t on Thursday).

2. Warnings, consequences and implicit assumptions. For example, above a certain proportion of wind energy it’s necessary to either store the energy (with some losses) or trade it (also involving costs, e.g. for transmission lines). The game could produce a report detailing technical and political assumptions. In particular, it might highlight the importance of political action to create as wide and as depoliticised an energy market as possible (Europe, North Africa and the Middle East, perhaps). The game should also highlight problems such as how flying is to be powered (our team had no liquid fuel in the mix!), and note the food-supply implications of the use of biofuels.

3. A better appreciation of the time element, which was totally absent on Thursday. Instead of simply adding energy blocks, the team could specify ramp-up and ramp-down rates (and curve shapes – straight line or S shaped) for energy technologies and demand management measures. Costs could also be allowed to evolve over time, e.g. PV in particular might gradually become cheaper, in the same way as other technologies have done in the past. The game could then show you the energy mix at certain dates (e.g. 2020, 2050 – it might have built-in retirement dates for existing power-generation facilities) and give a traffic-light report on whether specific targets have been met (e.g. 20% renewables by 2020, 20% emission reduction by 2020, 80/95% emission reductions by 2050 etc.).
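To give a flavour of what point 3 might look like, here’s a rough Python sketch. The technologies, ramp rates and targets below are all invented purely for illustration; a real version would obviously need proper data and something better than linear ramps:

```python
# Rough sketch of the "time element": each technology ramps linearly from its
# output today to a target output, and the game reports the mix at chosen dates.
# All names and numbers below are invented for illustration only.

TECHNOLOGIES = {
    # name: (output today in GW, target output in GW, ramp start year, ramp end year)
    "offshore wind":        (1, 30, 2010, 2030),
    "nuclear":              (10, 15, 2015, 2035),
    "solar PV":             (0, 10, 2012, 2040),
    "gas with CCS":         (0, 20, 2020, 2040),
    "existing fossil fuel": (60, 0, 2010, 2050),   # retired gradually
}

def output_in_year(tech, year):
    """Linear ramp between the start and end years; flat before and after."""
    start_gw, target_gw, ramp_start, ramp_end = TECHNOLOGIES[tech]
    if year <= ramp_start:
        return start_gw
    if year >= ramp_end:
        return target_gw
    fraction = (year - ramp_start) / (ramp_end - ramp_start)
    return start_gw + fraction * (target_gw - start_gw)

def report(year, renewables=("offshore wind", "solar PV"), target_share=0.2):
    """Traffic-light style report on a (made-up) renewables target."""
    total = sum(output_in_year(t, year) for t in TECHNOLOGIES)
    renewable = sum(output_in_year(t, year) for t in renewables)
    share = renewable / total
    status = "MET" if share >= target_share else "MISSED"
    print(f"{year}: total {total:.0f}GW, renewables {share:.0%}, 20% renewables target {status}")

for y in (2020, 2030, 2050):
    report(y)
```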

In short, I think you could go to town on this game in a computer programme based on the magnetic version. Your moderators would need a kit consisting of laptops and projectors (and venues would need screens or white walls!), but these are readily available these days.

Nevertheless, the Energy Game is already a worthwhile exercise.

November 11, 2009

Pissing in the Wind, Part 1

Filed under: Books/resources, Climate change, Energy, Energy policy, Global warming, Wind — Tim Joslin @ 11:51 am

When I worked as part of a team made up of nationals of several different European countries, we’d be fond of swapping phrases from different languages (all translated into English). Most would make Hank Paulson blush, and this is a family blog. But one I liked was the equivalent of the English phrase “to make a mountain out of a molehill”. In Holland (or was it Greece?), you’d say instead “to make an elephant out of a mouse”. So, of course, we combined the two and made elephants out of molehills and mountains out of mice. My most notable contribution was the phrase “pissing in the wind“.

What’s bugging me is the question of the potential for generating energy from wind-power. In what’s fast becoming the Bible for such matters, Sustainable Energy Without the Hot Air (SEWTHA), David MacKay asserts that you can only practically generate around 2W of wind power per m2 on or around the UK.

David therefore concludes (page 216) that the UK could feasibly build 35GW of onshore capacity and 29GW of offshore, total capacity 64GW, producing on average 4.2kWh/day/person and 3.5kWh/d/p, 7.7kWh/d/p in total. (Other energy plans for the UK including more or less wind energy are discussed elsewhere in SEWTHA).

Sorting out the units

One man’s sensible units are another man’s bizarre eccentricity. I want to convert David’s units for comparison with other, even more eccentric, sources. Personally I’d like to divide by 24 to get rid of both the hours and the day – David’s wind totals 7.7kWh/day per person, that is 7700/24W per person – call it 300W. Now we’ve got to something I can relate to! And I don’t know, but 300W seems not a lot more than the lights and the TV to me! Maybe we’re going to discover the wind won’t save us…

Anyway, figures are often given in TWh/year for the UK. Strange but true.

I assume MacKay bases his estimates on 60m people. So 7.7kWh/d/p is 7.7*60m*365kWh/yr for the UK or 7.7*60*365GWh/yr = ~170TWh/yr.
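Here are the conversions spelled out, using Python as a calculator; the 60 million population is my assumption about the basis of MacKay’s figures:

```python
# MacKay's 7.7 kWh/day/person of wind, converted, assuming a population of 60 million.
kwh_per_day_per_person = 7.7
population = 60e6

watts_per_person = kwh_per_day_per_person * 1000 / 24            # kWh/day -> W
twh_per_year = kwh_per_day_per_person * population * 365 / 1e9   # kWh/yr -> TWh/yr

print(f"{watts_per_person:.0f} W per person")     # ~320 W - "call it 300W"
print(f"{twh_per_year:.0f} TWh/yr for the UK")    # ~169 TWh/yr - the ~170TWh/yr above
```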

How much wind do we need for 1 million jobs?

David MacKay is now an energy advisor to the UK Government, so his view counts. But I keep reading higher figures for the potential for the UK to generate wind energy than 170TWh/yr.

For example, on Saturday I picked up a booklet One million climate jobs NOW! which notes on p45:

“In 2008 the total UK supply of electricity was 401TWh. 7TWh of that came from wind. In 2008 the UK had 3.4GW of installed wind power. So approximately 2TWh of electricity were produced that year for each [G]W of installed capacity. [So far so OK: cf David’s 170/64 or a bit over 2.5TWh/yr/GW installed capacity]. 150GW of installed capacity should produce 300TWh, three quarters of current electricity production.”

Obviously, if there is not enough wind for 150GW of capacity and/or for 300TWh/yr, the whole 1 million jobs plan starts to unravel.

Sorting out the units again

One man’s sensible units are another man’s bizarre eccentricity… What does “150GW capacity” mean? Let’s work instead in terms of average output, because we’re going to be considering average wind-speeds (really we should be considering average power in the wind, which is different, but, hey, the modern Principia will have to wait!). Let’s go back to the energy needed of 300TWh/yr. What average power output do we need to achieve this?

What a pretty pass we’ve come to when we’re calculating in Watt-hours per year!! We want Watt-years per year, in other words, simple Watts!! There are roughly 24*365 = 8760 hours in a year, so 300TWh/yr = 300,000/8760GWyears/year = 35GW, rounded up a tad.

To create 1 million jobs we need to build enough wind-turbines to give an average power output of 35GW.
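The same sum as a quick Python sanity check, for anyone else allergic to Watt-hours per year:

```python
# The "1 million jobs" plan's 300 TWh/yr expressed as an average power output.
twh_per_year = 300
hours_per_year = 24 * 365                            # 8760

average_gw = twh_per_year * 1000 / hours_per_year    # TWh -> GWh, then divide by hours

print(f"{average_gw:.1f} GW average output")         # ~34.2 GW, call it 35 GW
```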

Is there enough wind?

Now we can finally start to make comparisons. How much wind is really out there? And how much of that do we need?

What’s been bothering me for some time now is that MacKay bases his figures (all derived from the 2W/m2 power density) on wind-turbines having to be spaced in a grid 5 times their diameter (5d) apart, as described in his Technical Chapter B, p.265.

This argument seems to apply to current technology only, but is also somewhat counter-intuitive, as you would have thought you could simply put taller wind-turbines in between the ones you’ve already got and they wouldn’t interfere. If you only used 2 heights you’d double up to 4W/m2 and we could create our 1 million jobs, more or less.

In fact, the idea that you can only extract the same amount of energy per unit land area whatever the diameter of the wind turbines is somewhat paradoxical. Surely 1cm turbines spaced 5cm apart is not going to be as good a solution as 100m turbines spaced 500m apart! All very odd: MacKay’s Paradox, perhaps!
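For what it’s worth, here’s where I believe the 2W/m2 comes from, and why the turbine diameter drops out; a sketch assuming MacKay’s 50% extraction efficiency, a 6m/s wind and the 5d spacing rule:

```python
# Why ~2 W/m^2 is independent of turbine diameter ("MacKay's Paradox"):
# with 5d spacing the diameter cancels out of the power-per-land-area sum.
import math

rho = 1.3          # kg/m^3, density of air near the ground
v = 6.0            # m/s, wind speed
efficiency = 0.5   # fraction of the wind's power a turbine captures (MacKay's assumption)
spacing = 5        # turbines spaced 5 diameters apart

def power_per_land_area(diameter_m):
    swept_area = math.pi * diameter_m**2 / 4             # m^2 swept by the blades
    land_area = (spacing * diameter_m)**2                # m^2 of land per turbine
    power = efficiency * 0.5 * rho * v**3 * swept_area   # W extracted per turbine
    return power / land_area                             # W per m^2 of land

for d in (10, 50, 100):
    print(f"d = {d:3d} m: {power_per_land_area(d):.1f} W/m^2")   # ~2.2 W/m^2 every time
```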

Furthermore, it would seem the proximity of other wind turbines is only a problem downwind. Perpendicular to the direction of the wind it might even be better for the turbines to be next to each other as, like New York skyscrapers, the resistance of one would force air towards its neighbour. In many locations useful wind will normally come from one direction (the west near the UK). If only the downwind turbines have to be 5d apart, then you should be able to generate 5 times as much energy, 10W/m2. Now we’re talking!

But I don’t want to stop here. With different designs, e.g. turbines at different heights or funnelling air towards turbines, you might be able to do even better than that. In principle you should be able to capture a proportion of all the energy in the wind up to whatever height you could engineer. How much energy is this?

Problem

MacKay (Chapter B, p.263 ff) only considers the kinetic energy of the wind passing through a single turbine.

But we know that the wind turbines interfere with each other, otherwise we could put them right next to each other and there’d be no 5d rule of thumb. What I’d like to answer are questions such as:
– what proportion of the energy in the air does a large field of wind-turbines extract?
– can we do better than extract 2W/m2 with better technology?
– are we likely to hit any limits, i.e. can we extend a field of wind-turbines indefinitely without weakening the wind?

Obviously this is just a blog (but, hey, what might it lead to?), not a scientific treatise on the subject. Nevertheless, we can take a stab at answering these questions.

Thought experiment

Let’s work out the kinetic energy of the entire mass of air up to the top of the atmosphere passing between two imaginary poles a metre apart across the 6m/s wind direction. A quick calculation shows that this column of air – 15psi in old units (sorry: pounds, 2.2 to the kg, per square inch, an inch being ~2.5cm – you can do the calculation yourself, but the conversion is 15psi = ~15/2.2*40*40kg/m2) – weighs ~10000kg above each square metre. Wow!

If the wind speed all the way to the top of the atmosphere is an even 6m/s (a conservative assumption, as it moves faster higher up – we’ll try to come back to this), then the kinetic energy of the air passing between the poles every second is given by the formula 1/2 m v^2. With 6 metres’ worth of the column passing every second, that’s 1/2 * (6 * 10000kg) * (6m/s)^2 = ~1 million Joules; and since 1 Joule per second is 1 Watt, we have ~1MW of power for every metre across that there gentle breeze. Wow, again!!

This is rather different to the figure of 140W/m2 (note the different units) David MacKay calculates because he only considers the energy in a cross-section of the air, the 1.3kg/m3 that actually passes through a 1 m2 cross-section of wind-turbine. The wind goes up a long way and (by these back of an envelope calculations) only 140/1 million = 0.00014 or ~1/7000th of it passes through a 1 m2 cross-section near the ground! (The calculation by mass of air considered, i.e. 1.3/10000, gives roughly the same answer).
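Here’s that back-of-envelope calculation in a few lines of Python, making the same crude assumptions (a uniform 6m/s wind and a ~10,000kg column of air above each square metre):

```python
# Back-of-envelope kinetic energy flux through a strip 1 m wide extending to the
# top of the atmosphere, assuming (crudely) a uniform 6 m/s wind all the way up.
wind_speed = 6.0         # m/s
column_mass = 10000.0    # kg of air above each square metre of ground (roughly)

mass_per_second = column_mass * wind_speed * 1.0          # kg/s crossing the 1 m wide strip
power_per_metre = 0.5 * mass_per_second * wind_speed**2   # 1/2 m v^2 each second = Watts

print(f"{power_per_metre / 1e6:.2f} MW per metre of width")     # ~1.08 MW, call it 1 MW

# Compare MacKay's 140 W per m^2 of turbine cross-section at 6 m/s:
print(f"fraction near the ground: {140 / power_per_metre:.5f}")  # ~0.00013, i.e. the ~1/7000th above
```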

But the wind comes from somewhere. If you had many rows of wind turbines, part of the energy will be extracted by each row. The wind for the later rows will have to come from somewhere or we’d be becalmed. The answer is it comes from the other 9998.7kg of air above the wind turbines!

This rather explains MacKay’s Paradox, since we have to suppose air can only fill the lee (downwind) side of the wind turbine from above or below or even from the sides (so perhaps we can’t put our turbines right next to each other after all) at a limited rate (mostly from above). When a wind turbine creates a partial vacuum, the engineers’ rule of thumb used by MacKay is that a “hole” 1m in diameter is filled in 5m, 100m in 500m and so on.

OK, not all the air will necessarily be moving in the same direction (otherwise the weather system we know & love wouldn’t operate as observed), but if even half the mass (remember the air is less dense the higher you go) is, we have 5000kg of air and 500kW per metre to play with.

Even if we can only extract 1% of this energy, that works out at 5kW per metre.

We can’t keep extracting 1% of the energy, though, from row after row of wind turbines, so maybe we should consider the air-mass to be a wall of wind, from which we could extract, say, 10% of the energy in total, that is, 50kW per metre length of the wall. This is equivalent to funnelling all the air through 100% efficient wind-turbines, that is, extracting all the energy in the wind, up to a height of 50,000/140 = ~350m (the 140W/m2 is David MacKay’s power per unit area of wind-turbine at a wind speed of 6m/s).

Or, perhaps more practically, we could extract around 1/3 of the energy (MacKay suggests 50%, but I’m going to be a bit less optimistic) in the air up to 1000m, one kilometre. (Note that this doesn’t allow for air density decreasing with height, but then again I’m not yet making any allowance for the fact that the wind-speed increases with height).

Obtaining the 35GW average power output we need for our 1 million jobs would therefore require a wall of such wind-turbines 35GW/50kW = 700,000m or 700km long. Ouch!
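The chain of figures, for anyone who wants to check my envelope:

```python
# The "wall of wind" arithmetic.
target_output = 35e9               # W: the 35 GW average output we need
power_per_metre_of_wall = 50e3     # W/m: ~10% of the ~500 kW/m assumed available

wall_length_km = target_output / power_per_metre_of_wall / 1000
print(f"{wall_length_km:.0f} km of wall")                             # 700 km

# Height of an equivalent 100% efficient wall at MacKay's 140 W/m^2 (6 m/s):
print(f"equivalent height: {power_per_metre_of_wall / 140:.0f} m")    # ~357 m, the ~350m above
```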

Or perhaps, since we’re talking about the UK, we could have a 1400km wall of wind turbines 500m high, which sounds a bit more practical.

Implications

My 1400km wall of wind-turbines 500m high is very roughly equivalent to (say) a field of large wind-turbines (100m+ diameter) 1400km, that is, 14,000 wind-turbines long (i.e. around the whole length of the UK), right next to each other, but only 5 wind-turbines, that is, with 5d spacing, 2 km across.

The “wall of wind” is therefore equivalent to ~14,000*5 = ~70,000 wind-turbines in total, implying an average output of 35GW/70,000 or 0.5MW at a wind-speed of 6m/s.  Wind turbines are normally quoted in terms of capacity.  The 35GW average output was based on a capacity of 150GW and empirical rather than theoretical figures relating average output to capacity.  Anyway, my calculations suggest the wind-turbines each have a capacity of 150GW/70,000 = ~2MW, which is a little bit low for such large devices, but in the right ballpark.  In particular, I’ve estimated cautiously for the efficiency of the turbines and have made no allowance for a higher wind speed at a higher altitude.

This higher wind-speed is absolutely crucial, because what I hope I’ve demonstrated is that a field of wind-turbines actually extracts energy from higher up in the atmosphere. A field deep enough would actually slow the entire air flow. What happens is that the first row of wind-turbines slows the air, creating a partial vacuum downwind. This is filled mostly from above, slowing the air higher up.

Consider the graphs of wind-speed against height and power density of wind against height David gives here. They’re astonishing. The wind power at a 10m height is around 100W/m2 for a 6m/s wind at that level, but at 100m where the air flows faster it’s nearly 250W/m2 and at 200m where it flows faster still we’ve got over 300W/m2 to play with!

What’s actually happening, of course, is that all the other things on the ground – water, trees and so on – are already capturing the energy in the lowest part of the atmosphere, which fills from above.

Or, to look at it another way, the wind is created by a high pressure mass of air essentially collapsing into a low pressure area, which literally fills, as the weather-men say.

Bearing all this in mind, it seems to me that we’re pissing in the wind in the first place building wind turbines near ground level. We should start 100m or 200m or even 300m up.

Conclusion

There is (significantly) more than 500kW/m or 500MW/km of kinetic energy in a flow of air – 100s of kms across – moving towards the British Isles at an average speed of 6m/s, creating what we call a (west) wind. If we could extract all this energy we’d “only” need a 70km wall of wind turbines for an average output of 35GW.

The limit of 2W/m2 only applies to the technology we are using just now to extract energy from the wind. At this stage in the development of the industry, there are plenty of sites and it’s the technology that’s expensive. This will change over time, and there will be an incentive to design machines to extract more of the energy from the wind, particularly higher in the atmosphere.

It may be possible to extract significantly more than 2W/m2 by building turbines closer together across the wind direction and (as, to be fair, David MacKay points out), much taller.

However, maybe we have to bear in mind that we might not be able to build row after row of giant wind-turbines indefinitely. From a British Isles (UK and Republic of Ireland) point of view this might not be too much of a problem, since we are on the western seaboard of Europe. But eventually if we build turbines along the west coast, perhaps along the spine of the country and in the North Sea, we could just conceivably start to affect the very wind itself – the Danes and Germans might not be so pleased!

To determine whether this hypothesis is true, we have to look at other aspects of the energy in the wind. The kinetic energy arises from the potential energy of different pressures of different air-masses. And we need to look at how that potential energy itself is generated.

In other words: how renewable is the wind?

Another time, maybe.

October 31, 2009

“Carbonomics” Critique, Part 1

I began reading “Carbonomics” by Steven Stoft late yesterday. I’m only just starting Chapter 3 (of 31) but I can already reach a conclusion.

My very first impression was that “Carbonomics” brings some logical thinking to the debate. I see no reason to change my view: there is no doubt a lot of good material in the book.

But within minutes I could see that Stoft’s overall prescription, sadly, is in dreamland.

I’m posting my initial thoughts immediately whilst I am still in a state of shock.

The history of thought is littered with discarded, but complex and sophisticated, bodies of knowledge, from scientific theories (the Ptolemaic universe, perhaps) to political programmes (communism, for example); indeed more than bodies of knowledge – entire institutions, even civilisations – all built on foundations that later proved to be constructed of no more than intellectual straw.

Some of the foundations of “Carbonomics” consist of no more than straw.

I am indeed stunned. I started reading and first came across some encouraging comments in the Preface (a chapter which should never be skipped). The author notes the inefficiency of current policies to improve energy security and global warming and promises to “fix energy policy”. He will be guided by the story of physics, and produce Mr Tompkins in Wonderland for economics. “The hardest part of learning new ideas is giving up misconceptions”, he writes.

I must admit that by this point I was already starting to feel a little uneasy. I don’t, for example, believe that “physicists have a tradition of explaining advanced ideas to the public just because they find the concepts fascinating.” No, they do it to try to prove how clever they are (except for a small number who simply have Asperger’s syndrome). And, given that their belief system doesn’t hang together (relativity and quantum physics are as yet unreconciled) they hope that the more positive feedback – or pats on the back – they can extract from their audience, the truer what they have told them will become. Stoft notes that Einstein “found the uncertainty of quantum mechanics… so disconcerting that he never accepted it”. Quite right. Einstein was a holistic thinker. That was his genius. All the facts had to be taken into account, however alien a theory eventually resulted. He understood that all may not be as it seems, but he could not accept contradictions into his world view, even if others could live with them. So in asserting that “God does not play dice”, Einstein was not being a stick in the mud, but demonstrating he was on the side of the good guys. Even if he didn’t have the whole answer, at least he knew there was a question.

I labour the point because it soon became apparent that Stoft’s thinking is not sufficiently rigorous. He is not prepared to accept inconvenient truths.

It’s a shame, because Stoft starts so well with an excellent account of the effects of the 1970s oil price spike. When the Great Depression is so often mentioned as the worst of economic times, I often feel that the discourse is US-centric – cultural domination perhaps. For the 1970s was as decisive for modern Britain as the 1930s was across the Pond. Inflation and unemployment, a pervasive sense of decline tinged with incipient anarchy. The Punk Era, swept away by the Thatcher Revolution.

Never mind, my point is that Stoft’s prescription will fail. Reading his first chapter I assumed Stoft would urge measures to keep the oil price high. But it suddenly dawned on me that his prescription is the precise opposite!

There’s a why Stoft is wrong, which owes something, I feel, to a US-centric world view.

And then there’s the how Stoft is wrong. I’m afraid to say he has not followed his own prescription in the last line of his Preface, to “pay close attention to the way governments and markets really work”.

Stoft, it seems, still bears grudges against OPEC. On page 4 he explains how he wants to avoid “paying OPEC another trillion dollars in tribute”. He writes of how, by 1986 “OPEC had been crippled”. On p.5 he notes how he will explain “how to crush OPEC again”. On page 6 he reminds us that “conservation… crushed OPEC in the early 1980s”. There’s a bit of a lull while he advocates a “consumers’ cartel” to counter OPEC and worries about how to deal with “free rides”…

Powerful stuff. Where have I heard this sort of thing before? Oh, yes, I remember now – it’s eerily reminiscent of Russia railing against NATO. Yes, that’s right, Russia’s demon is a mutual-defence pact. To many in Russia (unfortunately many of those in positions of power), the idea of Ukraine or Georgia joining NATO – to ensure, as sovereign nations, their own defence – is little short of an invasion of the Motherland itself. I wonder, I just wonder, if OPEC members feel the same way. Let’s just step into Wonderland for a moment. Maybe they feel they have a right to the riches under the desert (or wherever). I know, I know, I’m of the view that oil wealth is a fortunate (or often not so fortunate) windfall. But the actual state of affairs is what we have to deal with – and de facto those countries endowed with generous fossil-fuel reserves are determined to maximise the value of those reserves.

In solving the problem of global warming (and energy security) we have to deal with the world as it is, not how we would like it to be.

Maybe I can lay down something of a more specific principle here. Short of war, there will only be progress in international negotiations if win-win situations are created. Sorry about the cliche. Maybe I can get rid of it. Because, actually, we’re in a multilateral situation and we need win-win-win… in fact win^n, win raised to the power of the number of interest groups.

Stoft is writing from the US. Let’s put to one side that he hasn’t even convinced his own country’s body politic to take the problem seriously yet, let alone of his particular approach. Let’s pretend he manages to do that. Even if that were to happen, I’ve got news for him. The world out there is not full of buddies who will be happy to participate in a “consumers’ cartel”. In fact, it may be unfair only to Canada & Australia to say that the US has only one reliable sidekick with any clout at all on the world stage. Yeap. Be nice to the UK. OK, I’m being facetious – there is some alignment of national interests, at least with the EU and Japan. But the problem is that several populous developing countries show no clear sign of wanting to play ball.

I feel I’ve written more or less enough for a first reaction, so it’s fortunate that how Stoft is wrong has already been touched on in previous episodes of Uncharted Territory.

The general problem is the Displacement Fallacy, though I appreciate that Stoft intends to avoid this through international agreements, starting with China. Good luck, mate, but I don’t think you’ll manage it.

Reflections on Oil, supplemented by Reflections on Reflections on Oil, considers how the oil market will react to attempts to choke off demand. The important point is that the oil producers themselves will act as buyers of last resort.

Before I sign off I should mention that Stoft’s discussion of a tax on fossil fuels and an “untax” (general distribution of the tax revenues) will not work as he seems to expect for imported products. Stoft is clearly unaware of the Man in the Wardrobe fallacy. Oil at $80 + $20 tax (Stoft’s example in ch.2, on p.21) will not have the same outcome as oil at $100. In the first case, the importing country still has $20 to spend, perhaps on more oil imports or perhaps on other goods, the sellers of which can themselves then afford to import more oil.
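
To make the point concrete, here’s a minimal sketch of the arithmetic as I see it – the barrel count is purely illustrative, and the $80/$20/$100 figures are just Stoft’s example:

```python
# Illustrative only: where the money ends up in Stoft's $80 + $20 tax example
# versus oil simply costing $100, for a hypothetical importing country.

BARRELS = 1_000_000               # hypothetical annual oil imports

# Case A: world price $100/barrel -- the whole $100 leaves the country.
outflow_a = BARRELS * 100

# Case B: world price $80/barrel plus a $20/barrel domestic tax ("untax"-ed
# back to consumers) -- only $80 leaves the country; $20 stays at home.
outflow_b = BARRELS * 80
retained_b = BARRELS * 20

print(f"Case A: ${outflow_a:,} flows abroad")
print(f"Case B: ${outflow_b:,} flows abroad; ${retained_b:,} retained at home")
# The consumer-facing price is $100 in both cases, but in case B the retained
# $20/barrel can be spent again -- possibly on more oil imports, which is
# precisely the Man in the Wardrobe point.
```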

I haven’t read enough yet to determine whether Stoft is aware of the rebound effect or Jevons’ Paradox, whereby using a resource more efficiently can actually increase consumption in the long term. The signs aren’t good, though.

Although I’m disappointed with Stoft’s overall vision, I will read on, because large parts of Stoft’s analysis are sound. The first part of chapter 2 shows, for example, how cheap it would be to move away from reliance on fossil fuels.

Watch this space.

—–
As an unreferenced endnote, I admit hypersensitivity to inaccuracies or ambiguities and two have been particularly irritating:
– in Ch.2, footnote 1, p.19 Stoft writes of a policy “that would ‘cap the long-term concentration of greenhouse gases [GHGs]… at 450 [ppm]’. We are now just over 380 ppm.” CO2 alone is at “just over 380 ppm”. I can only guess whether the policy proposal referred to is to keep all GHGs at a CO2 equivalent level of 450ppm or to keep CO2 below 450ppm – which, it’s now becoming clear, would be too high.
– at the start of Ch.3, on p.21 Stoft remarks that: “Back in the 1800s… Jevons predicted peak coal in England”. Maybe it’s a cultural thing, but to me “the 1800s” refers to the decade 1800-9, inclusive. Stoft means “the 19th century”, here. Jevons in fact wrote “The Coal Question” in 1865 (Wikipedia). And, btw, he was probably talking about Britain, not “England” (Wikipedia thinks so). No offence taken.

October 27, 2009

Scientific American’s Sustainable Future

Scientific American’s customer management is appalling. When I first subscribed to the print edition, the magazine’s online presence was trumpeted as one of the benefits. I therefore understood I would also obtain access to the Scientific American Digital (how quaint!). Nope. I got no more online access than I had previously and ended up paying a Scientific American Digital subscription on top of the print subscription. Someone should call the Advertising Standards Authority! (Annoyingly my online subscription has now expired, and, I see from the correspondence page – which publishes letters on topics in the edition, I kid you not, 4 months earlier, like we’re still in the 1950s – that I appear not to have received the July issue at all).

Just lately – in the midst of a UK postal strike – I can find no way to notify my address change or even log on at http://www.scientificamerican.com. The site recognises none of the several numbers on the address labels of the magazines I’m sent. The contact email address intl@scientificamerican.com simply doesn’t work. Mind-blowing. Scientists, eh? Hardly surprising there were dodgy solder-joints at the LHC, was it?

Nevertheless, I persist with Scientific American. It’s worth it for the quality of the articles. And, I have to say, its old-fashioned feel.

The lead article in the November issue is titled: “A Plan for a Sustainable Future”, by Mark Z Jacobson and Mark A Delucchi. It discusses how the entire world could be powered by wind, water and solar power by 2030. And it’s well worth a read.

The authors note that building “millions of wind turbines, water machines and solar installations” is not without precedent. For example, “during WWII the US retooled automobile factories to produce 300,000 aircraft”. For clean energy the numbers are feasible: the list includes 490,000 tidal turbines, 3,800,000 5MW wind turbines, 49,000 concentrated solar power (CSP) plants and 40,000 solar PV plants.

I’m afraid I have some quibbles:

  • The authors quote a US Energy Information Administration projection of 16.9TW of global energy demand in 2030, compared to 12.5TW now.  I suspect 16.9TW will prove to be a massive underestimate.  As well as a greater population and higher living standards, there’ll be new sources of demand in 20 years, for example, for large numbers of desalination plants to produce fresh water.  I’d be amazed if we aren’t using twice as much energy by 2030 as we are now.
  • On the other hand, ruling out wind and solar power production “in the open seas” is suspect: I would have thought there was a lot of scope to generate power there, e.g. on floating islands, which I’ve seen proposed, probably in Scientific American itself.
  • Nuclear power is dismissed because of the “carbon emissions” caused by “reactor construction and uranium mining and transport”, but no explanation is given as to why these activities couldn’t be powered by clean energy.
  • Interestingly, the authors are concerned about all forms of pollution, so rule out carbon capture and sequestration (CCS) and biofuels on the grounds of air pollution other than CO2. I’d have liked to see at least a nod to the other problems with these primitive technologies: principally the difficulty of capturing all the CO2 in a coal-fired plant, the cost of burying the carbon and the risks; and, for biofuels, the land use problems – not just food vs fuel, but that the land would store carbon quicker if left alone!
  • I doubt that geothermal energy is “renewable”.  There may be a lot of it, but the rocks will reheat only very slowly.
  • The authors suggest that we deploy 1,700,000,000 – yeap, 1.7 billion – “rooftop photovoltaic systems”. I think this is nuts. First off, I’m really struggling with the numbers – the 0.003MW – or 3kW – size of each system must refer to average (mean) output to be consistent with the rest of the article. But, according to my mate David MacKay (print edition, p.40), 20W/m2 is going some for solar PV in the sort of countries where there are a lot of roofs. So these systems would have to be 150m2 each. They have big roofs in America, I guess. But my more fundamental objection is that the output of 100,000 of these babies only adds up to 1, yes one, of the 40,000 PV power plants (see the quick sketch below). What’s easier, do you think: fitting solar panels on every roof in a medium-sized town, such as Southampton where I come from, or sticking them all in a big field outside of town (perhaps a long way outside, like in North Africa, where funnily enough you need far fewer panels)? I’ll give you a clue: let’s be pessimistic and say it takes 1/2 hour to put a panel in a standardised array in a field and, optimistically, 2 days to put the scaffolding up so you can get on the roof without a health and safety violation – before you start the pretty much bespoke installation process. Barmy idea, isn’t it? I worry that the inclusion of the rooftop PVs owes more to some kind of philosophical belief in the virtues of localism than to sound scientific (or economic) reasoning. And of course the article concludes by advocating the dreaded feed-in tariffs. What better way of transferring money to those with big roofs from those, um, without big roofs?
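
Here’s the back-of-the-envelope arithmetic behind that rooftop objection – the 300MW plant size is my inference from the “100,000 rooftops equals one plant” comparison, not a number taken from the article:

```python
# Rough check of the rooftop-PV figures quoted above (illustrative only).

kw_per_system = 3            # 0.003 MW each, read here as mean output
w_per_m2 = 20                # MacKay's figure for solar PV in cloudy countries

area_per_system_m2 = kw_per_system * 1000 / w_per_m2
print(f"Panel area per rooftop system: {area_per_system_m2:.0f} m2")  # ~150 m2

rooftops = 100_000
plant_equivalent_mw = rooftops * kw_per_system / 1000
print(f"{rooftops:,} rooftop systems ~ {plant_equivalent_mw:.0f} MW, "
      "i.e. roughly one of the article's 40,000 dedicated PV plants")
```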

Nevertheless, notwithstanding a few hints that it may be informed by countercultural ideology, I recommend taking a look at “A Path to Sustainability by 2030”.

But the November 2009 Scientific American is worth buying for another article alone. No, not more minute analysis of the “Hobbits of Indonesia” (not read that one yet, but – to go all Iain M Banks for a moment – does the obsessive human interest in the details of our family tree perhaps represent some kind of species-level insecurity?), but “The Rise of Vertical Farms” by Dickson Despommier. The author should perhaps have credited “The World Without Us”, but he makes the point that we should farm indoors and leave nature to absorb the excess carbon we’ve been stuffing into the atmosphere.

The key argument is that you can grow so much – so much less riskily too – in controlled climate conditions indoors: “4 growing seasons, double the plant density, and 2 [or more, surely, of many crops – judging by a photo I once saw in the Guardian of a hydroponic indoor vegetable farm in Tokyo] per floor”, so that, excusing the quaint American units, a “30-story building covering one city block [5 of these ‘acre’ things] could … produce 2,400 acres of food”!
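
For what it’s worth, the quoted figure does seem to follow from the multipliers given – here’s my reconstruction of the arithmetic, taking the conservative reading of two growing layers per floor:

```python
# Reconstructing the "2,400 acres from one city block" claim (my arithmetic,
# using the multipliers quoted from the article).

floors = 30
block_acres = 5          # "one city block [5 of these 'acre' things]"
growing_seasons = 4      # indoor growing seasons per year vs one outdoors
density_factor = 2       # double the plant density
layers_per_floor = 2     # "2 [or more]" growing layers per floor

floor_area_acres = floors * block_acres                 # 150 acres of floor space
outdoor_equivalent = (floor_area_acres * growing_seasons
                      * density_factor * layers_per_floor)
print(outdoor_equivalent)                               # 2400
```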

Despommier worries about how his vision can be made to happen, but in fact it’s simple. As soon as a realistic price is put on ecosystem services, there’ll be a huge economic incentive to invest in “vertical farms”.

October 12, 2009

Superfreakonomics, Oliver Burkeman, Hubris and Bounded Rationality

Oh dear, oh dear, oh dear! Hubris meets rationality…

I very much enjoyed Freakonomics. I see from the position of the bookmark in the copy on my shelf that I’ve read past halfway, so it must have been good. I recollect that I was particularly impressed by the discussion of the absence of ill-effects of a policy of random selection of pupils by over-subscribed schools in Chicago, clearly the fairest solution. In fact, I remembered the discussion of random selection in Freakonomics just last week when I read of a rant by a Mike Best, Headteacher, Beaminster school, Dorset:

“It was George III who said that the pathway to hell was paved with good intentions, and so it is with Labour initiatives. They have ranged from the mad (random allocation of school places)…”

Sir, George III was famously mad, and, if I recollect any history at all, died before the Labour party was even formed…

Unlike George III, the Freakonomics authors, Levitt and Dubner, urge policy to be made on the basis of dispassionate analysis of data. And not, perhaps, on the say so of so-called experts with a vested interest.

Considering myself an arch-rationalist, I eagerly read an article by Oliver Burkeman in today’s Guardian discussing the sequel to Freakonomics, Superfreakonomics. I didn’t know whether to laugh or cry.

The reviewer’s comments make interesting reading too. Burkeman writes, for example, that:

“Those arrested on charges of terrorism, [the authors] explain, are disproportionately likely to rent their home, have no savings account or life insurance, be a student, and have both Muslim first and last names. Superfreakonomics makes no mention of the possibility that the police might simply be targeting Muslims disproportionately, and Levitt seems genuinely baffled that anyone might object, on civil-liberties grounds, to targeting all those who fulfilled the relevant criteria.”

Burkeman seems to be implying that he believes behaviour likely to lead to arrest on charges of terrorism is evenly distributed throughout the population, and that Muslims are therefore being targeted unfairly. Maybe I’m missing something here, and I don’t want to offend anyone, but isn’t the main terrorist threat at present from Muslim extremists? Just as a while back the main threat in the UK was from Irish nationalists? Or are these social phenomena just a figment of my imagination? Maybe in WWII British soldiers took more Germans than Americans prisoner just because they were targeting them disproportionately.

But this is nothing compared to Burkeman’s discussion of Superfreakonomics’ espousal of the geo-engineering plan to block out sunlight by “pumping large quantities of sulphur dioxide into the Earth’s stratosphere through an 18-mile-long hose, held up by helium balloons…”. Apparently, Nathan Myhrvold is promoting the idea. He should know better as well.

Anthropogenic stratospheric SO2 injection is a complete and utter non-starter, for the simple reason that warming isn’t the only problem caused by CO2 emissions. This has been very well known for some time. Conferences have been held to discuss the problem. I’d have expected Burkeman to know this.

5 minutes’ thought might cause one to wonder about the biological effects (the impact on ecosystems, crop yields…) of decreasing the light reaching the Earth’s surface – at the same time as CO2 levels are increasing. And you’d still have time to realise that we’d have to keep squirting SO2 into the stratosphere indefinitely, because it only stays up there for a short while, whereas the warming CO2 will remain in the atmosphere until we stop emitting it and/or do something to get the level in the atmosphere back to pre-industrial levels. Any disruption of the SO2 hosing process for any reason (war, terrorism, economic dislocation, court injunctions…) would lead to rapid temperature increases, because the CO2 would no longer be masked. And before the egg-timer rang you’d realise that any hint of adverse side-effects would make the plan entirely impractical on political grounds.

Myhrvold and the Freakos (sounds like a 60s rock band, don’t it?) have, it seems, walked into the hubristic trap of believing they understand the whole problem. Messing with the biosphere and the climate system requires other forms of analysis than the correlation of data-sets and a good understanding of the importance of the role of incentives in explaining human behaviour. The authors have exceeded their intellectual authority – they are skilled at analysing “closed” economic problems (where the boundary can easily be defined), but don’t seem to appreciate that tackling global warming is an “open” problem. I’m particularly astonished at this given their background as behavioural economists – I can hardly believe they are not aware of the concept of “bounded rationality“.

All Burkeman does is lamely point out that:

“The primary objection to this plan, as with other ‘geoengineering’ schemes, is that there’s no predicting the unknown negative effects of meddling in such a complex natural system. And it’s strange, given how much is made in both Freakonomics books of the law of unintended consequences, that they don’t mention this in the context of Myhrvold’s plan.”

Quite. But Oliver, they can’t even deal with the known knowns, let alone the known unknowns. You don’t need to fret about the unknown unknowns!

The geo-engineering twaddle is all a shame, as Superfreakonomics apparently argues that:

“The problem with trying to reduce carbon emissions … is that the incentives are all wrong. Too many of the benefits are ‘externalities’, from which the people making the sacrifices will never benefit – and the whole history of economics demonstrates that such completely unself-interested behaviour is impossible to implement on a large scale, especially when so many people suspect that their sacrifice would not, in fact, make a significant difference to the outcome.”

I wouldn’t underestimate the potential of peer-pressure – as Burkeman puts it, “our self-interest can include a desire for the warm glow of acting in a moral or charitable way” – but I doubt this will be enough. Surprisingly, Burkeman doesn’t press this argument against the economists – whose profession has been known to not fully understand that there IS such a thing as society – but tails off into incoherence after noting that:

“This, of course, is desperately tricky territory. My immediate personal response is that Levitt’s view is irresponsible defeatism, which I find repugnant.”

“Repugnant”???!!! I’m with Levitt here. We all need to grow up and face facts.

Don’t squirt SO2 into the sky because, if this is the level of intellectual debate on how to deal with global-warming, all I can say is that we need the heavens to help us! (If I may be permitted to pluralise in a cryptic nod to Battlestar Galactica – buy the box-set if you don’t know what the frak I’m on about!).

April 10, 2009

The Sea, The Sea

Filed under: Books/resources, Climate change, Global warming, Science — Tim Joslin @ 5:11 pm

About a week ago I was browsing David MacKay’s excellent resource, “Sustainable Energy – without the hot air“. This, and a brief conversation earlier the same evening, had started me pondering (again) on the thorny topic of CO2 uptake by the oceans. Specifically, I wanted to make some progress towards answering the question:

“If we reduce the level of CO2 in the atmosphere from its present 390ppm or an even higher level in future, will the oceans release CO2 they are currently absorbing (about 2GtC/year)? And, if so, over what timescale?”

Professor MacKay includes a chapter (31, The last thing we should talk about) on geo-engineering. He notes:

“If fossil-fuel burning were reduced to zero in the 2050s, the 2Gt[/yr] flow from atmosphere to ocean would also reduce significantly. (I used to imagine that this flow into the ocean would persist for decades, but that would be true only if the surface waters were out of equilibrium with the atmosphere; but, as I mentioned earlier, the surface waters and the atmosphere reach equilibrium within just a few years.) Much of the 500Gt we put into the atmosphere would only gradually drift into the oceans over the next few thousand years, as the surface waters roll down and are replaced by new water from the deep.”

Now, the model I have in my head of CO2 uptake by the oceans is one of flows of CO2, rather than a chemical equilibrium. David MacKay’s comment caused some self-doubt on my part. The Professor is clearly not what we chess-players might refer to as a “rabbit”. Strong grandmaster would be nearer the mark.

As regular readers will be aware, I’d reached a somewhat different conclusion to that of Professor MacKay. I concluded that the ocean will continue to helpfully take up 2GtC/yr from the atmosphere, on the basis that this may be the capacity of the processes to remove CO2 from the atmosphere.

I specifically doubted, though, that the oceans will continue to absorb a fixed proportion of our emissions, on the grounds that “the ocean ‘knows’ nothing about emissions – all it can possibly be affected by is the level of CO2 in the atmosphere.”

But this idea of “equilibrium” between the surface waters and the atmosphere suggests instead that the ocean can be considered as an extension of the atmosphere, so that if the total increase in CO2 in a year from fossil-fuel burning and terrestrial biosphere changes was (say) 6GtC, 4GtC would stay in the atmosphere and 2GtC would end up in the ocean; if it were 12GtC, 8GtC would stay in the atmosphere and 4GtC would end up in the ocean.
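
In other words, the pure equilibrium picture amounts to a fixed partition of each year’s net addition. A minimal sketch, using the illustrative one-third ocean share from the figures above:

```python
# Pure "equilibrium" (fixed-proportion) partitioning of each year's net CO2
# addition, using the post's illustrative 2-out-of-6 GtC ocean share.

OCEAN_SHARE = 2 / 6

def partition(net_addition_gtc):
    """Split a year's net addition (GtC) between atmosphere and ocean."""
    to_ocean = OCEAN_SHARE * net_addition_gtc
    return net_addition_gtc - to_ocean, to_ocean

print(partition(6))    # (4.0, 2.0)
print(partition(12))   # (8.0, 4.0) -- the ocean's uptake scales with emissions
```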

Now, undoubtedly there is an equilibrium between the waters at the very surface of the ocean and the atmosphere: that’s how these things work. Horrifically, I’m suddenly reminded of questioning on a very similar topic during a mock interview for university conducted by my school headmaster, who had himself written chemistry textbooks…

Anyway, undoubtedly, too, there are flows of carbon in various forms to and from the deep ocean.

The question is how we combine these ideas of equilibrium and flows into a single model that will help us at least put a sign to the flows of CO2 from atmosphere to ocean in various scenarios.

The consequences of a pure equilibrium would be that:

1. The ocean will continue to absorb a fixed proportion of net emissions, i.e. it will proportionally reduce the impact on atmospheric CO2 levels of future increases in atmospheric CO2.

2. As soon as atmospheric CO2 levels peak, the ocean will start to release a fixed proportion of any net reduction, i.e. it will be more difficult to get the atmospheric CO2 level back down, say to 350ppm.

On the other hand, if the true explanation is that in a (hypothetical) steady-state there is a balance between flows of CO2 from the ocean to the atmosphere and vice versa, then we need a different sort of explanation. We would have to conclude that processes that remove CO2 from the atmosphere are sensitive to a higher concentration of CO2 and are therefore proceeding more rapidly because CO2 is at around 390ppm compared to a historic level of 200-280ppm.

It’s likely that the processes controlling the interchange of CO2 between the air and the sea are sensitive to other factors, such as temperature and acidity (affected by the cumulative total of CO2 absorbed). But so far, these parameters have changed relatively little. When they do, all the evidence is that they will slow the rate of CO2 uptake by the oceans.

But the crucial point is that in a flow model, the oceans will continue to remove CO2 from the atmosphere as long as the atmospheric level is above the stable long-term level which prior to industrialisation was 200-280ppm.

To jump ahead a little, the question as to whether an equilibrium is dominant is likely to reduce to what we mean by the “surface waters”, since, at the limit, the surface of the ocean must be in equilibrium with the atmosphere next to it. In other words, how quickly does CO2 disperse away from the surface of the ocean? And how quickly does it travel from power-station chimneys through the atmosphere to the surface of the ocean?

Looking at the rest of Professor MacKay’s chapter on geo-engineering, I couldn’t help reflecting that there is a contradiction. If an equilibrium between the surface waters and the atmosphere is the dominant mechanism, then one would have thought there was little to be achieved by geo-engineering approaches to increase the absorption in a limited area of ocean (sprinkling it with calcium carbonate to absorb CO2 directly or with iron filings to encourage algal growth).

So, for the umpteenth time, I found myself referring to “the doorstop” – the AR4 IPCC Scientific report. And I can report that parts of the relevant sections of this document are virtually content-free. Now, I’ve been in situations when a lack of content has been highly desirable. The objective of some business communications, for example, is to say precisely nothing of any significance. I suggest, though, that the IPCC should not be playing this game.

Let’s turn first to section 6.4.1.4 on p.452. Here we learn that:

“There is evidence that terrestrial carbon storage was reduced during the LGM [last glacial maximum] compared to today. Mass balance calculations based on C13 [isotope] measurements on shells of benthic foraminifera yield a reduction in the terrestrial biosphere carbon inventory (soil and living vegetation) of about 300 to 700GtC…”

This doesn’t really tell us much about the mechanism of CO2 exchange between the oceans and the atmosphere, but is a rather scary fact. Warming leads to carbon leaving the oceans and being taken up by land flora. Ah, I hear you think, the trees take up carbon and the oceans release it to restore equilibrium. Sorry, Grasshopper. The trouble is that as the planet warms the level in the atmosphere goes up as well. This suggests to me that the oceans do indeed release carbon as the planet warms. It’s not pull by the “trees”, but push by the “seas”.

As I said, this is a rather scary fact. Given that the planet is warming rather rapidly. And that the exchange of carbon between atmosphere and oceans takes place at the surface. Where it’s warming. The fact that the deep ocean takes millennia to cool is not really relevant. Hmm, maybe I’ve jumped ahead again.

But back to the story.

Turn now to p.446 of the IPCC report, where we find Box 6.2: What Caused the Low Atmospheric CO2 Concentrations During Glacial Times? (Seems an odd way to phrase it, as glacial times are the norm, but let’s go on!). The answer is no-one really knows. (Actually, the answer to the IPCC’s question is easy: in glacial times the atmospheric CO2 level is so low it limits photosynthesis, so we should really be asking: What causes higher CO2 levels in interglacials?). Still, no-one really knows. Or as the IPCC put it:

“In conclusion, the explanation of glacial-interglacial CO2 variations remains a difficult attribution problem.”

There’s one proviso. There’s a speculative theory (no more than a hypothesis, really) that increased amounts of dust containing iron cause increased phytoplankton growth, which causes the ocean to take up carbon from the atmosphere. I mention this because the complete line of reasoning is that colder conditions cause less plant growth, that is, more deserts from which dust can blow… This would restore the idea of a “push” by the land – more trees, less dust leads to more carbon in the atmosphere. The trouble is that there’s no evidence that this mechanism could explain more than a small proportion (if any) of the observed changes in CO2.

So much for the top-down approach.

Is our understanding of the physical processes any better?

Let’s see how far we can get. The IPCC Science report notes in section 7.3.1.1 (p.514) that there are 2 “pumps”, i.e. processes that remove CO2 from the atmosphere:

1. The solubility pump – dissolving CO2, giving carbonic acid:
CO2 + H2O <—> H2CO3 <—> HCO3- + H+ (1)
buffered by carbonates (e.g. CaCO3, calcium carbonate):
CaCO3 + CO2 + H2O <—> Ca++ + CO3-- + HCO3- + H+ <—> Ca++ + 2HCO3- (2)
(see a previous post for how this might be helped along by dumping some more chalk in the sea).

2. The biological pump whereby phytoplankton (algae) takes up carbon as it grows.

The IPCC note that:

“Together the solubility and biological pumps maintain a vertical gradient in CO2… between the surface ocean (low) and the deeper oceans (high)…”

[my emphasis]

This is where this whole topic starts to do my head in. How can it be that there is less CO2 at the surface, yet the oceans are taking up the CO2 we’re emitting through burning fossil fuels and forests?

Obviously there is a circulation in the oceans. The IPCC note (we’re still on p.512) that:

“In winter, cold waters at high latitudes, heavy and enriched with CO2… because of their high solubility [sic, I don’t know what they’re trying to say either], sink from the surface layer to the depths of the ocean. This localised sinking, associated with the Meridional Overturning Circulation (MOC)… is roughly balanced by a distributed diffuse upward transport of [CO2] primarily into warm surface waters.”

This exchange of dissolved CO2 – lots coming up, rather less going down – constitutes the “solubility pump”, but the biological pump, which, remember, involves organisms taking up CO2 near the ocean surface – effectively from the atmosphere – only operates downwards.

So here’s what I think is happening: there is still a net release of CO2 from the solubility pump, but less CO2 is released now that atmospheric CO2 is around 390ppm compared to when it was lower (280ppm say), because of simple equilibrium chemistry. This assumes there is plenty of carbonate about to stop the oceans, through equilibrium (2), from becoming more acidic, which would reduce CO2 uptake by pushing equilibrium (1) to the left.

So whereas previously, with CO2 at 280ppm, the solubility pump would have released (say – these are hypothetical figures) 4 GtC/yr and the biological pump taken 4GtC/yr back to the ocean depths, now, with CO2 at 390ppm, the solubility pump might be releasing only 2GtC/yr but the biological pump is still taking up 4GtC/yr. Hence the net 2GtC/yr uptake by the oceans, which is in large part saving us from ourselves.
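
Just to lay that hypothetical budget out explicitly (these are my made-up round numbers from the paragraph above, not IPCC figures):

```python
# The post's hypothetical solubility/biological pump budget, in GtC/yr
# (illustrative numbers only).

def net_uptake(solubility_release, biological_uptake):
    """Net flow of carbon from atmosphere to ocean (GtC/yr)."""
    return biological_uptake - solubility_release

# Pre-industrial, ~280 ppm: the two pumps roughly balance.
print(net_uptake(solubility_release=4, biological_uptake=4))   # 0

# Today, ~390 ppm: equilibrium chemistry suppresses outgassing, so a net sink.
print(net_uptake(solubility_release=2, biological_uptake=4))   # 2
```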

Digression: I have to say that I can’t help making the observation that the solubility pump depends on the MOC, and that there are those who think the MOC might eventually fail, driven as it is by the cooling of surface waters flowing from low to high latitudes (the IPCC discusses this in Box 5.1, p.397). This would, according to my reasoning, lead to a decrease in the release of CO2 via the solubility pump, increasing the net uptake of CO2 by the oceans, though this may be offset if the biological pump is also weakened (by a reduction in nutrient upwelling, say). I am therefore hypothesising a mechanism (a negative feedback) helping to cause interglacial warming periods to be self-limiting. I should point out, though, that this is completely the opposite of what the IPCC say (e.g. sections 7.3.4.1 and 3 to 5, p.530 and 532-3 and 7.3.5.4 on p.536). Digression over.

Let’s summarise where we are: I am suggesting that the equilibrium between CO2 in the atmosphere and in the oceans is potentially important. Even though the oceans release CO2 through this mechanism, the equilibrium chemistry means they release less as atmospheric CO2 rises.

But how much less?

I mentioned at the outset that it is not in dispute that CO2 is in equilibrium between the air and the water at the surface of the ocean. But how deep is the surface? What is the gradient in CO2 concentration away from the surface of the ocean? How much extra CO2 can be taken up (or as we have seen how much less released) in a year? Is the mechanism saturated at 2GtC/year as I assumed when I reported on my home-made carbon-cycle model?

It’s when we try to answer these questions that the IPCC Science report becomes – how shall I put it? – a little disappointing.

We turn now to Chapter 7: Couplings Between Changes in the Climate System and Biogeochemistry. In section 7.3.4.1 (p.528) we “learn” that: “Equilibration of surface ocean and atmosphere occurs on a time scale of roughly one year.” My school headmaster would have a fit! This sentence is indeed content-free. There is no definition of what is meant by “surface ocean”. Is it 1mm, 1m or 100m? Until we can answer this question we are unable to quantify the effect of the “solubility pump”.

Back to chapter 5. Section 5.4: Ocean Biogeochemical Changes includes some interesting diagrams (p.405) showing how “anthropogenic carbon” is dispersed in the oceans. These show that carbon levels are most elevated, compared to pre-industrial levels, in the top 200m or so of the oceans – “more than half of the anthropogenic carbon can be found in the upper 400m” (p.404) – and in the North Atlantic.

The trouble is, we’re no nearer answering the question as to how long we can consider it takes to renew the active layer of the oceans that exchanges CO2 with the atmosphere.

Let’s try another tack. Let’s say (generously) that the layer is 100m, on average, based on inspection of CO2 diffusion diagrams in the IPCC report. Let’s say it takes 1000 years for the oceans to completely turn over – a figure noted a few times by the IPCC. If the oceans are 5000m deep (on average) as shown in the IPCC figures, then the 100m “surface layer” is renewed in a 50th (100/5000) of 1000 years, that is, every 20 years.
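
Or, spelling the sum out (all three inputs are the rough assumptions just stated, not measured values):

```python
# Surface-layer renewal time from the rough assumptions above.

surface_layer_m = 100        # assumed depth of the actively exchanging layer
mean_ocean_depth_m = 5000    # average depth, from the IPCC figures
full_turnover_years = 1000   # rough time for the whole ocean to turn over

renewal_years = full_turnover_years * surface_layer_m / mean_ocean_depth_m
print(renewal_years)         # 20.0 -> the surface layer is replaced every ~20 years
```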

Now we can try to answer the question posed at the start:

“If we reduce the level of CO2 in the atmosphere from its present 390ppm or an even higher level in future, will the oceans release CO2 they are currently absorbing (about 2GtC/year)? And, if so, over what timescale?”

The answer depends on the timescale we are looking at:

1. If we reduced the level of CO2 in the atmosphere overnight (more realistically, by say 1ppm from one year to the next), then the surface layers of the ocean would release some carbon as they re-equilibrate with the atmosphere.

2. But if, more realistically, we reduce the level of atmospheric CO2 from one 20 year period to the next, we can consider the outcome as follows:
– in both 20 year periods the ocean will outgas the same amount of CO2 from the deep;
– in the first period the ocean will carry away more carbon (or release a little less) than in the second period.
There is no correlation between what happens in the second period and in the first.

3. After a millennium or so, the ocean might release more carbon because of the extra carbon it is absorbing now. On the other hand, more carbon may simply end up in sediments.

Conclusion: The oceans will not release a significant proportion of the anthropogenic carbon they have absorbed since industrialisation if we reduce the level in the atmosphere back to 280ppm over a century or two.

“Equilibrium” and “flow” models of oceanic carbon uptake are relevant over different timescales. The flow model is applicable to decades and centuries, the equilibrium model to years and (possibly) millennia.

I believe it is inaccurate to say, as David MacKay does, that:

“If fossil-fuel burning were reduced to zero in the 2050s, the 2Gt[/yr] flow from atmosphere to ocean would also reduce significantly.”

The annual oceanic CO2 uptake driven by the difference between CO2 levels in the atmosphere and the ocean has two components: the difference between the CO2 level now and when the current surface waters were last exposed to the atmosphere – that is, between approximately 390ppm and 280ppm – and the difference between the CO2 level this year and last year – about 2ppm. If (as I’ve assumed) 1/20th of the surface waters are renewed each year, we should allow 1/20th of (390-280)ppm, that is 5.5ppm, as the comparable CO2 concentration difference. 5.5 is several times 2, so the dominant cause of net oceanic CO2 uptake at present is the renewal of oceanic surface waters, not annual increases in atmospheric levels of CO2.
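
To put the same comparison in numbers (again, using my rough 1/20th-per-year renewal assumption):

```python
# Comparing the two drivers of net oceanic CO2 uptake (rough figures from above).

current_ppm = 390
preindustrial_ppm = 280
annual_rise_ppm = 2
renewal_fraction = 1 / 20          # share of surface waters replaced each year

renewal_driver_ppm = renewal_fraction * (current_ppm - preindustrial_ppm)
print(renewal_driver_ppm)          # 5.5

share_due_to_renewal = renewal_driver_ppm / (renewal_driver_ppm + annual_rise_ppm)
print(round(share_due_to_renewal, 2))   # ~0.73, i.e. roughly three quarters
```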

In other words, when Professor MacKay goes on to say:

“Much of the 500Gt we put into the atmosphere would only gradually drift into the oceans over the next few thousand years, as the surface waters roll down and are replaced by new water from the deep.”

he is correct – this process is going on. But, I suggest, it accounts for roughly 75% of the 2GtC/yr of our CO2 pollution that the oceans are helpfully soaking up for us.

And if we were to reduce atmospheric CO2 levels by say 1ppm/year (e.g. by ceasing fossil-fuel burning and enacting a programme of worldwide reforestation), oceanic surface re-equilibration would reduce the annual decrease by only about 10%, and with atmospheric CO2 at its current level, and all else being equal (unfortunately it probably won’t be), the solubility pump performance attributable to oceanic surface water turnover will continue to remove around 1.5GtC/year (about another 0.75ppm).

To go on, reducing atmospheric CO2 concentrations at a rate of 1.65ppm/year (based on the above figures), falling to 1ppm/year as we approach the pre-industrial equilibrium, would allow us to return from 450ppm to 280ppm in around 170/1.325 years [1.325 being the mean rate, (1.65+1)/2], that is, around 130 years.
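
The sum behind that “around 130 years”, for what it’s worth (all inputs are the rough figures derived above):

```python
# Rough drawdown timescale from 450 ppm back to 280 ppm (post's own figures).

human_drawdown_ppm = 1.0       # per year, from ending emissions + reforestation
reequilibration_loss = 0.10    # ~10% of each year's decrease given back by the surface
ocean_turnover_ppm = 0.75      # ~1.5 GtC/yr removed via surface-water renewal

initial_rate = human_drawdown_ppm * (1 - reequilibration_loss) + ocean_turnover_ppm
final_rate = 1.0               # tailing off as the pre-industrial level is approached
mean_rate = (initial_rate + final_rate) / 2

years = (450 - 280) / mean_rate
print(initial_rate, mean_rate, round(years))   # 1.65 1.325 128 -> "around 130 years"
```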

[Though, as I said, all else is not equal and positive feedbacks due to warming of the oceans and decreased albedo because of loss of ice-cover, etc. will most likely increase this timescale significantly.  On the other hand, if we do it before the deep ocean has warmed, we might just save the planet!].
