Uncharted Territory

April 20, 2009

Still Baffled by BERN, but a little wiser

Filed under: Global warming, Science — Tim Joslin @ 6:49 pm

About a year ago, I professed myself “baffled” by the BERN carbon cycle model.  Since then, I’ve finally twigged how the oceans will behave over the next century or so.  This post aims to further clarify my current understanding.

In particular, I remain convinced that the concept of an airborne fraction (AF) of carbon emissions is entirely erroneous; it is unsafe to base policy on the idea that chemical processes and ecosystems will take up a fixed proportion of annual carbon emissions.

As well as my last blog entry (and the two previous discussions it references), I have been engaged in an extended email dialogue with the Climate Philosopher, who has added a question to my original post on the BERN model:

“Do you think all of the processes depend on the level in the atmosphere dCO2(Outflow)/dt = a(CO2 – p) where a & p are constants. This would be my simple understanding.
– ie is BERN *completely* wrong?

Or are there some processes that are BERN-like e.g. equilibrium with the upper oceans so that dCO2(Outflow)/dt = a1(CO2 -p) + b(dCO2(Inflow)/dt)

I’d like it that the simple model (that bern is completely wrong) was the valid one.”

The answer to the question posed in the second paragraph is “yes”, although the “or” beginning the sentence is logically incomplete and, in this case, misplaced – we cannot categorise “all” carbon uptake processes in any single way.

Here’s a numerical summary of what I think is happening (based on IPCC AR4 data, all figures very approx.):

  • “Surface water” is (by definition) in eq’m with atmosphere.  According to the IPCC (Fig. 7.3), such water holds 18GtC more than the pre-industrial level. i.e. approx. 0.16GtC per ppm increase in atmospheric CO2 (that is ~18GtC divided by the 110ppm increase – from 280ppm to 390ppm – in atmospheric CO2 levels).
  • This process of “uptake by re-equilibration” (the Climate Philosopher’s b(dCO2(Inflow)/dt) ) is therefore weak – accounting for ~0.3GtC per year (0.16GtC/ppm from above times an annual increase in atmospheric CO2 of a bit under 2ppm) increase in CO2 held in surface waters.
  • But there is a turnover of ~10% of ocean surface water p.a. This accounts for the Climate Philosopher’s other term: a(CO2 -p).
  • In this process the ocean exchanges carbon between the surface and the deep ocean.  Even though this process releases carbon overall (because there is more carbon in the deep ocean than in the surface waters), it releases less now than before industrialisation, because the descending waters hold more carbon than before.
  • By the overturning process, the deep ocean therefore currently takes up 1.8GtC p.a. more than before industrialisation.
  • The total extra carbon uptake of 1.8 (from overturning) + 0.3 (because of 2ppm/yr increase) = 2.1GtC/yr, a good fit with published data based on observations.
  • Sanity check: a letter to Nature by Peter Cox (I can’t access more than the synopsis either) suggests the ocean *could* take up 5GtC/yr under BAU by 2100, implying that by then there will be 50GtC more than the pre-industrial level in surface water.  The CO2 in the atmosphere would therefore be 280ppm + 50/0.16 ≈ 280 + 310 ≈ 590ppm, i.e. roughly 600ppm.  Sounds about right.
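The arithmetic in the list can be checked with a few lines of Python. This is just a sketch of the sums above; the numbers are the same rough IPCC-derived approximations, not fitted values:

```python
# Rough check of the ocean-uptake arithmetic above
# (all figures approximate, from IPCC AR4 Fig. 7.3).
co2_now, co2_pre = 390.0, 280.0   # ppm
extra_surface_c = 18.0            # GtC extra now held in surface waters

# Surface-water sensitivity: extra GtC held per ppm of atmospheric increase
uptake_per_ppm = extra_surface_c / (co2_now - co2_pre)   # ~0.16 GtC/ppm

# Re-equilibration term: atmospheric CO2 rising at ~2 ppm/yr
reequil_uptake = uptake_per_ppm * 2.0                    # ~0.3 GtC/yr

# Overturning term: ~10% of surface water (with its extra carbon) descends each year
overturn_uptake = 0.10 * extra_surface_c                 # ~1.8 GtC/yr

total = reequil_uptake + overturn_uptake
print(round(uptake_per_ppm, 2), round(total, 1))         # ~0.16 GtC/ppm, ~2.1 GtC/yr
```

The total of ~2.1GtC/yr matches the observation-based figure quoted in the list.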

And the consequences are…

1. It’s hands-up time.  The idea in my original post that “it appears that removal by the oceans is indeed saturated (AR4, p.26 & elsewhere)” is wrong (and too pessimistic).  The AR4 reference is to the data on ocean uptake over the last 25 years, which show it increasing from about 1.8 to 2.2GtC/year (although these figures are very rough estimates).  The point is that, whilst annual emissions have increased significantly since 1980, the atmospheric level, which is dominant in determining the ocean uptake rate, has not increased so much.

2. The idea that a fixed proportion of annual anthropogenic carbon emissions remains in the atmosphere (i.e. that the AF remains constant) is also false.  My original post is correct on this point, if a little pessimistic on the ability of the ocean to take up CO2.  Curiously, whilst trying to find a bit more information about the BERN model, I came across this recent paper by Terenzi and Khatiwala (pdf). I have to say I’m rather disappointed there’s no reference to my original post, since I noted from a bit of ad hoc modelling that the AF only remains roughly constant “while CO2 emissions and atmospheric levels are increasing at a fairly steady rate.”  Terenzi and Khatiwala note that:

“Specifically, our results suggest that both the quasi-constancy of AF over the past half-century, and its particular numerical value of ~50%, are essentially a consequence of exponentially growing emissions with a nearly-constant growth rate of 1/40th per year.”

So basically, as T&K point out, policies assuming a constant AF are quite possibly misguided!  Both T&K (with pages of equations) and I (with the back of an envelope) reach the conclusion that the “constant” AF is an artefact: entirely data-dependent, a mere coincidence!

3. I still can’t relate the BERN carbon cycle model to the real world.  It appears to assume atmospheric carbon will return asymptotically to equilibrium following the emission of a pulse of carbon.  Deriving an AF in this way makes little sense for several reasons:

  • Different feedbacks have different effects over different time periods.  For example: after some centuries elevated levels of dissolved CO2 in the oceans will affect the oceanic ability to take up more CO2; warming of the land (fast) and oceans (slow) will at some point affect CO2 uptake; etc.  I haven’t even considered uptake of carbon by the biosphere, but the response will likely not resemble a chemical equilibrium, since secondary ecosystem responses will modulate carbon uptake. The process will also differ considerably between the oceans and land.
  • The natural carbon cycle is not in equilibrium.  Rather, because of the different time-periods of various feedbacks, it oscillates, giving us the ice age cycle (in resonant response to Milankovitch forcings).
  • You simply can’t model individual years’ carbon emissions according to the BERN model, since we’re already out of equilibrium, by more and more each year.  This observation, in itself, casts considerable doubt on the “constant AF” conception.

It does rather seem to me that the idea that, “if industrial emissions ceased tomorrow”, atmospheric carbon would progressively decline towards an equilibrium level is entirely suspect.  Furthermore, when we consider possible scenarios of future annual carbon emissions we have a more complex situation, perhaps more of a bifurcation.  If our emissions continue to increase rapidly, the AF will increase, even without positive carbon cycle feedbacks: only the relatively tiny amount of carbon taken up by re-equilibration of the ocean surface waters is proportionate to emissions; the other carbon uptake processes are proportionate, at best, to the difference between current and pre-industrial CO2 levels.  Whereas if we decrease our annual emissions, natural processes will help us, and the AF will actually decrease – or even go negative.
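The bifurcation can be illustrated with a toy model. To be clear, this is my own sketch, not the BERN model or anything published: ocean uptake is taken to be a(C − 280) from overturning plus b·dC/dt from re-equilibration, with a and b set from the rough figures above (1.8GtC/yr at 110ppm excess; 0.16GtC per ppm of rise):

```python
# Toy model of the bifurcation: ocean uptake depends on the excess of
# atmospheric CO2 over ~280ppm, not on emissions, so the airborne
# fraction (AF) is scenario-dependent.  Coefficients are rough,
# illustrative values, not fitted ones.
A = 1.8 / 110.0           # GtC/yr uptake per ppm of excess (overturning)
B = 18.0 / 110.0          # GtC uptake per ppm of annual rise (re-equilibration)
GTC_PER_PPM = 2.0

def final_af(e0, growth, years=50, c0=390.0):
    """AF in the last year of a scenario with exponentially changing emissions."""
    c, e = c0, e0
    for _ in range(years):
        # Self-consistent annual rise: 2*dc = e - A*(c - 280) - B*dc
        dc = (e - A * (c - 280.0)) / (GTC_PER_PPM + B)
        af = GTC_PER_PPM * dc / e   # fraction of this year's emissions left airborne
        c += dc
        e *= 1.0 + growth
    return af

print(final_af(10.0, +0.03))   # emissions growing 3%/yr: AF stays large and positive
print(final_af(10.0, -0.05))   # emissions shrinking 5%/yr: AF eventually goes negative
```

Under steadily growing emissions the AF settles near a constant, as Terenzi and Khatiwala found; once emissions fall, uptake driven by the accumulated excess overtakes them and the AF turns negative.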


April 12, 2009

Ocean Carbon Uptake: Further Reflections

Filed under: Global warming, Science — Tim Joslin @ 5:05 pm

In my previous 2 posts, The Sea, The Sea and How To Freeze A Mammoth, I have argued – nay, stronger than that, pointed out – that the 2GtC/yr of carbon that the oceans are helpfully taking up from the atmosphere is due largely to a reduction in the amount of carbon released annually as currents exchange deep with surface water.  The deep sea has a higher carbon content than the surface waters because of the “biological pump” whereby organic material (krill poo, dead whales etc) descends through the water column.

Towards the end of my last blog entry, I discovered that the quantification I had previously sought in vain in the 1000-odd pages of the IPCC’s latest Science report does in fact exist in their carbon cycle diagram, Fig. 7.3 on p.515.  These figures broadly support the guesstimates I made towards the end of my initial blog entry of the series.

In case you don’t have a copy of the IPCC report to hand, let me explain what Fig. 7.3 tells us.  It notes that, of total anthropogenic carbon emissions during the industrial era, 18Gt remains in the surface waters and 100Gt is now in the intermediate and deep ocean.  The diagram even includes a flow of 1.6GtC/yr from the surface to the deeper ocean.

The ocean “surface” is that part which, more or less by definition (perhaps the scientists could make this explicit sometime), is in equilibrium with the atmosphere.  Now, CO2 in the atmosphere is at roughly 390ppm, against a preindustrial level of 280ppm, and 1ppm ~= 2GtC (~ means approx.), so industrial-era emissions have added ~220GtC to the atmosphere.  The surface layers of the ocean therefore represent an “extension” of the atmosphere for the purposes of holding carbon dioxide of, according to the IPCC, only 18/220, i.e. very roughly 10%.

That is, as we increase the atmospheric CO2 by 2ppm/year, 4GtC, the part of the ocean in equilibrium with the atmosphere is helping us out by dissolving an additional 0.4GtC.

Add 0.4GtC to the 1.6GtC “removed” annually by turnover of the surface waters (and, I suppose, diffusion) and we get the observed 2GtC/yr total net uptake by the oceans, compared to the rough equilibrium between the atmosphere and the oceans in the few thousand years prior to the industrial era.  (The 1.6GtC/yr is only “removed” in the sense that the turnover of the surface waters results in the emission to the atmosphere of 1.6GtC/yr less carbon than would be the case without the elevated atmospheric level of CO2 caused by industrial carbon emissions.)
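The sums of the last few paragraphs, collected in one place (a sketch; the same rough figures as above):

```python
# The back-of-envelope sums above (figures from IPCC AR4 Fig. 7.3).
GTC_PER_PPM = 2.0
extra_surface_c = 18.0                        # GtC extra in surface waters
extra_atmos_c = (390 - 280) * GTC_PER_PPM     # ~220 GtC added to the atmosphere

# Surface ocean as an "extension" of the atmosphere: very roughly 10%
extension = extra_surface_c / extra_atmos_c   # ~0.08

equil_uptake = 0.10 * 2.0 * GTC_PER_PPM       # 10% of the 4 GtC/yr rise: ~0.4 GtC/yr
turnover_uptake = 1.6                         # GtC/yr "removed" by surface turnover

print(round(extension, 2), equil_uptake + turnover_uptake)   # ~0.08 and ~2.0 GtC/yr
```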

Until changing conditions (e.g. rising temperatures) affect the relevant processes, the consequences are:

1. The oceans will continue to reduce any increase (or increase any reduction) in atmospheric CO2 by about 10% due to the reasonably fast process of chemical equilibration.

2. The oceans will continue to take up around 1.6GtC/yr whilst atmospheric CO2 levels remain at their current elevated level.  This level of uptake will only increase slowly if our annual CO2 emissions continue to increase – i.e. as I discussed some time ago, in this scenario, the oceans will take up a declining proportion of our annual emissions and more will remain in the atmosphere.   In fact, there’s no direct relation between our annual emissions and the airborne fraction (AF).  It is daft to suppose there would be.

2A. On the other hand, if our annual emissions decline, we will still get the benefit of the 1.6GtC net removal from the atmosphere attributable to oceanic circulation.

3. Ocean CO2 uptake is not very sensitive to geo-engineering interventions to increase the amount of CO2 that dissolves in it, e.g. by dumping calcium carbonate in the sea (though this might eventually be worth doing – expensive though it would be because of the mass that would have to be transported – in order to preserve shelled creatures, corals etc).  The problem is that the surface waters only turn over about once every ~10 years on average (18Gt extra carbon held in total divided by 1.6Gt transported to the depths each year – my previous guesstimate was once every 20 years).

4. Ocean CO2 uptake is very sensitive to changes in the circulation of the oceans. Since such circulation is more likely to lessen than to increase, we really are getting ourselves in deep water!
[Note (12/6/09): this is a potentially misleading throwaway comment – as explained previously a reduction in the rate of oceanic circulation would, assuming the biological pump is unaffected and atmospheric CO2 levels remain elevated, lead to a reduction in the rate of release of carbon by the oceans, i.e. overall the oceans would take up even more atmospheric carbon].
[Note (18/11/09): Did I write this? The only problem is that I no longer believe the ocean circulation is likely to lessen over the next century or so. As it is thermally driven it must increase in strength, a positive feedback – see later post – since this will bring more carbon to the surface from the deep ocean].

5. Ocean CO2 uptake is very sensitive to changes in the biological pump, which removes 11GtC (according to Fig.7.3) each year.

I hardly ever keep my promises for future blog entries, but in the unlikely event that I do on this occasion, next time I’ll discuss what factors could affect the biological pump…

How to Freeze a Mammoth, or, Has the IPCC Got it Wrong?

Filed under: Global warming, Science — Tim Joslin @ 12:25 pm

My previous post attempted to answer the question as to whether the oceans would continue to take up CO2 if the level of the gas in the atmosphere started to decrease.  To sum up, I concluded that, in fact, the oceans would continue to help us out.  The reason is that different mechanisms dominate the exchange of CO2 between the sea and the air over different timescales:

  • over short timescales – years – the surface layers of the oceans are in equilibrium with the atmosphere.  The oceans (to a limited degree) buffer changes in atmospheric CO2.
  • over intermediate timescales – decades to centuries – the turnover of the surface waters of the oceans dominates the chemical equilibrium of the surface waters with the atmosphere.  The ocean will continue to remove carbon even while the level in the atmosphere declines over decadal timescales, so long as this level remains greater than the equilibrium with the oceans as a whole, that is (arguably) while it is greater than around 280ppm.  Over time, though, this equilibrium point will shift (upwards) as ocean warming and acidification reduce the capacity of the processes controlling the net annual removal of CO2, notably the “solubility pump”.  [I’ve now noticed that the IPCC implicitly support this conclusion – in their carbon cycle diagram (Fig 7.3, p.515), they show that 18Gt of anthropogenic carbon has ended up in the “surface ocean”, available to bubble back out if the atmospheric level of CO2 decreases suddenly, but 100GtC has ended up in the “intermediate and deep ocean”, from where it can’t easily be re-released.  The proportions are roughly what I assumed in my previous post, too].
  • over very long timescales – millennia – the entire ocean is in equilibrium with the atmosphere.  Even so, the effect on this equilibrium of relatively small changes in the processes driving the exchange of CO2 is very large.  In particular, the “biological pump” removes 10GtC/yr (I explain below where this figure comes from).  A 10% change in efficacy, sustained for a millennium, therefore represents around 1000GtC, somewhat more than is present in the atmosphere.
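The scale claimed in that last bullet is easy to check with the same round numbers:

```python
# Scale of the millennial-timescale sensitivity: a modest, sustained change in
# the biological pump moves more carbon than the whole atmosphere holds.
bio_pump = 10.0                  # GtC/yr removed by the biological pump
efficacy_change = 0.10           # a 10% change in efficacy
years = 1000                     # sustained for a millennium

shifted = bio_pump * efficacy_change * years   # GtC moved over 1000 years
atmosphere = 390 * 2.0                         # ~780 GtC in today's atmosphere
print(shifted, shifted > atmosphere)           # ~1000 GtC, more than the atmosphere
```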

Whilst writing the previous post, I came across what appears at first glance to be a bit of a howler by the IPCC.  In the section on Robust findings (no less), and elsewhere, they claim that:

“A potential slowing down of the ocean circulation and the decrease of seawater buffering with rising CO2 concentration will suppress oceanic uptake of anthropogenic CO2.” (my emphasis).

I beg to differ.  I don’t understand how a “slowing down of the ocean circulation” would have this effect.

Here’s a carbon cycle diagram, quite similar to (though rather simpler than) the one the IPCC include as Fig. 7.3 on p.515 (this one’s thanks to NASA via Wikipedia and has no copyright restrictions, though the IPCC stuff might not have any either).  Blue numbers represent annual carbon flows, black ones carbon stores:


Now, what’s important is that the “solubility pump” returns an annual net 100 – 91.6 = 8.4GtC to the atmosphere from the oceans.  The solubility pump would be directly affected by a slowing of the oceanic circulation.

The 8.4GtC is counterbalanced by 10GtC removed from the atmosphere by “marine biota”.  This “biological pump” would not be directly affected by a slowing of the oceanic circulation (though might be affected indirectly, by a reduction in available nutrients).

The IPCC seems to think that blocking a process – the solubility pump – with the net effect of adding carbon to the atmosphere will “suppress oceanic uptake of anthropogenic CO2″.  This conclusion seems more than a little fishy to me!

At first glance the IPCC seem to be confused by their deltas.  They have analysed the solubility pump in terms of the difference between the pre-industrial state and the present, with lots of nice diagrams showing where the “anthropogenic carbon” has ended up.  The pump is, it seems, putting 2GtC less carbon into the atmosphere than before.  But the solubility pump used to be balanced by the biological pump, which takes carbon out of the atmosphere.  If we stop the solubility pump, we’ll still be left with the biological pump!  If this happened (and it’s quite a big “if”), more carbon would be removed from the atmosphere each year!!
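The delta argument can be put in two lines of arithmetic. The flows are the rough diagram figures quoted above, and the shutdown is, as in the text, entirely hypothetical:

```python
# Net atmospheric effect of the two pumps, before and after a hypothetical
# shutdown of the solubility pump (flows from the carbon-cycle diagram above).
solubility_net_out = 100.0 - 91.6   # ~8.4 GtC/yr net release to the atmosphere
biological_in = 10.0                # GtC/yr removed by marine biota

net_uptake_now = biological_in - solubility_net_out   # ~1.6 GtC/yr into the ocean
net_uptake_stopped = biological_in - 0.0              # pump stopped: uptake *rises*
print(round(net_uptake_now, 1), net_uptake_stopped)
```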

Why is this important?

Well, I think it would be a good idea to really understand the ice age cycle before trying to predict what will happen to the carbon cycle over the next century or two as we warm the planet.  The point is that the carbon cycle plays a large part in reinforcing the Milankovitch cycles which change the pattern of warming of the Earth over thousands of years.

One puzzle that it seems to me should be resolved is why the planet does not just keep on warming as it comes out of an ice age.  It was warmer than it is now during the last interglacial 120,000 years ago (120 kya) and at the end of the last ice age 10 to 5 kya (IPCC p.460 ff) (though unless we act now, in 50 to 100 years it will be significantly warmer than during those periods).  Warming during an interglacial is fast: strong positive feedbacks are in play – warming causes increased CO2 release from the oceans (see previous post) which causes more warming.

We need some negative feedbacks!  The obvious one is that in a wetter and warmer world, land uptake of CO2 starts to exceed oceanic release (which is why it might be a good idea to allow reforestation so that nature can help solve the problem for us).  Another negative feedback, I suggest, may be a slowing of the oceanic circulation – driven as it is by the cooling of poleward currents (IPCC Box 5.1, p.397).  Since the poles warm faster than equatorial regions, this switch-off of ocean circulation is likely to happen as the world warms, as often discussed in the media, for example in the film The Day After Tomorrow.  (Though real life would be nowhere near as dramatic!)

Such a sudden cessation of warming could help explain how mammoths are found so well-preserved in permafrost!  More to the point, it could help explain how CO2 stops rising at the end of interglacials.  The situation is complex, since instability is produced: cooling caused by a slowing of the ocean circulation would tend to cause the circulation to restart.  Not only that, the sudden cooling would reduce CO2 uptake in high northern latitudes in particular (by inhibiting plant growth).  [This points to a problem with a mechanism that relies on slow cooling alone to explain the turning point in the ice age cycle – the land (taking up carbon at this point) cools faster than the oceans (which are releasing carbon).  This would surely cause a net increase in CO2, tending to reverse the cooling.]

I therefore tentatively hypothesise that the peak warming in (natural) interglacials is caused by a reversal of rising CO2, itself caused by a stop-start sequence in the ocean circulation.  This may act together with the Milankovitch cycles to tip the Earth back into a cooling phase leading to the next ice age.  Also, of course, if the cooling freezes the vast northern wetlands (which we’re now melting), e.g. in Siberia, it very quickly removes a large source of methane; and because methane breaks down in the atmosphere relatively quickly to less powerfully warming CO2, this would very quickly produce more cooling.

Has the IPCC got it wrong?  And missed part of the explanation for the ice age cycle?

Afterword: It occurs to me that some people might think that increased CO2 uptake due to a slowing of the ocean circulation might represent something of a get-out-of-jail card.  On the contrary.  It would surely result in even worse climate instability than we’re already heading for.  We need to reduce GHG levels before we get to The Day After Tomorrow point.

April 10, 2009

The Sea, The Sea

Filed under: Books/resources, Climate change, Global warming, Science — Tim Joslin @ 5:11 pm

About a week ago I was browsing David MacKay’s excellent resource, “Sustainable Energy – without the hot air“. This, and a brief conversation earlier the same evening, had started me pondering (again) on the thorny topic of CO2 uptake by the oceans. Specifically, I wanted to make some progress towards answering the question:

“If we reduce the level of CO2 in the atmosphere from its present 390ppm or an even higher level in future, will the oceans release CO2 they are currently absorbing (about 2GtC/year)? And, if so, over what timescale?”

Professor MacKay includes a chapter (31, The last thing we should talk about) on geo-engineering. He notes:

“If fossil-fuel burning were reduced to zero in the 2050s, the 2Gt[/yr] flow from atmosphere to ocean would also reduce significantly. (I used to imagine that this flow into the ocean would persist for decades, but that would be true only if the surface waters were out of equilibrium with the atmosphere; but, as I mentioned earlier, the surface waters and the atmosphere reach equilibrium within just a few years.) Much of the 500Gt we put into the atmosphere would only gradually drift into the oceans over the next few thousand years, as the surface waters roll down and are replaced by new water from the deep.”

Now, the model I have in my head of CO2 uptake by the oceans is one of flows of CO2, rather than a chemical equilibrium. David MacKay’s comment caused some self-doubt on my part. The Professor is clearly not what we chess-players might refer to as a “rabbit”. Strong grandmaster would be nearer the mark.

As regular readers will be aware, I’d reached a somewhat different conclusion to that of Professor MacKay. I concluded that the ocean will continue to helpfully take up 2GtC/yr from the atmosphere, on the basis that this may be the capacity of the processes to remove CO2 from the atmosphere.

I specifically doubted, though, that the oceans will continue to absorb a fixed proportion of our emissions, on the grounds that “the ocean ‘knows’ nothing about emissions – all it can possibly be affected by is the level of CO2 in the atmosphere.”

But this idea of “equilibrium” between the surface waters and the atmosphere suggests instead that the ocean can be considered as an extension of the atmosphere, so that if the total increase in CO2 in a year from fossil-fuel burning and terrestrial biosphere changes was (say) 6GtC, 4GtC would stay in the atmosphere and 2GtC would end up in the ocean; if it were 12GtC, 4GtC would end up in the ocean.

Now, undoubtedly there is an equilibrium between the waters at the very surface of the ocean and the atmosphere: that’s how these things work. Horrifically, I’m suddenly reminded of questioning on a very similar topic during a mock interview for university conducted by my school headmaster, who had himself written chemistry textbooks…

Anyway, undoubtedly, too, there are flows of carbon in various forms to and from the deep ocean.

The question is how we combine these ideas of equilibrium and flows into a single model that will help us at least put a sign to the flow of CO2 from atmosphere to ocean in various scenarios.

The consequences of a pure equilibrium would be that:

1. The ocean will continue to absorb a fixed proportion of net emissions, i.e. it will proportionally reduce the impact on atmospheric CO2 levels of future increases in atmospheric CO2.

2. As soon as atmospheric CO2 levels peak, the ocean will start to release a fixed proportion of any net reduction, i.e. it will be more difficult to get the atmospheric CO2 level back down, say to 350ppm.

On the other hand, if the true explanation is that in a (hypothetical) steady-state there is a balance between flows of CO2 from the ocean to the atmosphere and vice versa, then we need a different sort of explanation. We would have to conclude that processes that remove CO2 from the atmosphere are sensitive to a higher concentration of CO2 and are therefore proceeding more rapidly because CO2 is at around 390ppm compared to a historic level of 200-280ppm.

It’s likely that the processes controlling the interchange of CO2 between the air and the sea are sensitive to other factors, such as temperature and acidity (affected by the cumulative total of CO2 absorbed). But so far, these parameters have changed relatively little. When they do, all the evidence is that they will slow the rate of CO2 uptake by the oceans.

But the crucial point is that in a flow model, the oceans will continue to remove CO2 from the atmosphere as long as the atmospheric level is above the stable long-term level which prior to industrialisation was 200-280ppm.
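The two candidate models can be set side by side as one-liners. The coefficients are illustrative only: a one-third share for the equilibrium model (as in the 6GtC example above), and a flow coefficient chosen so that a 110ppm excess gives the observed ~2GtC/yr:

```python
# Equilibrium model: the ocean takes a fixed share of each year's net emissions.
# Flow model: uptake depends on the excess of atmospheric CO2 over ~280ppm.
def equilibrium_uptake(emissions_gtc, share=1.0 / 3.0):
    return share * emissions_gtc          # sign follows emissions

def flow_uptake(co2_ppm, k=2.0 / 110.0):
    return k * (co2_ppm - 280.0)          # sign follows the excess level

# If emissions ceased tomorrow with CO2 at 390ppm:
print(equilibrium_uptake(0.0))   # equilibrium model: uptake stops at once
print(flow_uptake(390.0))        # flow model: ~2 GtC/yr of uptake continues
```

The crucial difference shows up precisely in the zero-emissions case, which is why putting a sign to the flow matters.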

To jump ahead a little, the question as to whether an equilibrium is dominant is likely to reduce to what we mean by the “surface waters”, since, at the limit, the surface of the ocean must be in equilibrium with the atmosphere next to it. In other words, how quickly does CO2 disperse away from the surface of the ocean, and from power-station chimneys through the atmosphere to the surface of the ocean?

Looking at the rest of Professor MacKay’s chapter on geo-engineering, I couldn’t help reflecting that there is a contradiction. If an equilibrium between the surface waters and the atmosphere is the dominant mechanism, then one would have thought there was little to be achieved by geo-engineering approaches that increase the absorption in a limited area of ocean (sprinkling it with calcium carbonate to absorb CO2 directly, or with iron filings to encourage algal growth).

So, for the umpteenth time, I found myself referring to “the doorstop” – the AR4 IPCC Scientific report. And I can report that parts of the relevant sections of this document are virtually content-free. Now, I’ve been in situations when a lack of content has been highly desirable. The objective of some business communications, for example, is to say precisely nothing of any significance. I suggest, though, that the IPCC should not be playing this game.

Let’s turn first to the section on p.452. Here we learn that:

“There is evidence that terrestrial carbon storage was reduced during the LGM [last glacial maximum] compared to today. Mass balance calculations based on C13 [isotope] measurements on shells of benthic foraminifera yield a reduction in the terrestrial biosphere carbon inventory (soil and living vegetation) of about 300 to 700GtC…”

This doesn’t really tell us much about the mechanism of CO2 exchange between the oceans and the atmosphere, but is a rather scary fact. Warming leads to carbon leaving the oceans and being taken up by land flora. Ah, I hear you think, the trees take up carbon and the oceans release it to restore equilibrium. Sorry, Grasshopper. The trouble is that as the planet warms the level in the atmosphere goes up as well. This suggests to me that the oceans do indeed release carbon as the planet warms. It’s not pull by the “trees”, but push by the “seas”.

As I said, this is a rather scary fact. Given that the planet is warming rather rapidly. And that the exchange of carbon between atmosphere and oceans takes place at the surface. Where it’s warming. The fact that the deep ocean takes millennia to cool is not really relevant. Hmm, maybe I’ve jumped ahead again.

But back to the story.

Turn now to p.446 of the IPCC report, where we find Box 6.2: What Caused the Low Atmospheric CO2 Concentrations During Glacial Times? (Seems an odd way to phrase it, as glacial times are the norm, but let’s go on!). The answer is no-one really knows. (Actually, the answer to the IPCC’s question is easy: in glacial times the atmospheric CO2 level is so low it limits photosynthesis, so we should really be asking: What causes higher CO2 levels in interglacials?). Still, no-one really knows. Or as the IPCC put it:

“In conclusion, the explanation of glacial-interglacial CO2 variations remains a difficult attribution problem.”

There’s one proviso. There’s a speculative theory (no more than a hypothesis, really) that increased amounts of dust containing iron cause increased phytoplankton growth which causes the ocean to take up carbon from the atmosphere. I mention this because the complete line of reasoning is that colder conditions cause less plant growth, that is more deserts from where dust can blow… This would restore the idea of a “push” by the land – more trees, less dust leads to more carbon in the atmosphere. The trouble is that there’s no evidence that this mechanism could explain more than a small proportion (if any) of the observed changes in CO2.

So much for the top-down approach.

Is our understanding of the physical processes any better?

Let’s see how far we can get. The IPCC Science report notes, in the section on p.514, that there are two “pumps”, i.e. processes that remove CO2 from the atmosphere:

1. The solubility pump – dissolving CO2, giving carbonic acid:
CO2 + H2O <—> HCO3– + H+ (1)
buffered by carbonates (e.g. CaCO3, calcium carbonate):
CaCO3 + CO2 + H2O <—> Ca++ + 2HCO3– (2)
(see a previous post for how this might be helped along by dumping some more chalk in the sea).

2. The biological pump whereby phytoplankton (algae) takes up carbon as it grows.

The IPCC note that:

“Together the solubility and biological pumps maintain a vertical gradient in CO2… between the surface ocean (low) and the deeper oceans (high)…”

[my emphasis]

This is where this whole topic starts to do my head in. How can it be that there is less CO2 at the surface, yet the oceans are taking up the CO2 we’re emitting through burning fossil fuels and forests?

Obviously there is a circulation in the oceans. The IPCC note (we’re still on p.512) that:

“In winter, cold waters at high latitudes, heavy and enriched with CO2… because of their high solubility [sic, I don’t know what they’re trying to say either], sink from the surface layer to the depths of the ocean. This localised sinking, associated with the Meridional Overturning Circulation (MOC)… is roughly balanced by a distributed diffuse upward transport of [CO2] primarily into warm surface waters.”

This exchange of dissolved CO2 – lots coming up, rather less going down – constitutes the “solubility pump”, but the biological pump, which, remember, involves organisms taking up CO2 near the ocean surface – effectively from the atmosphere – only operates downwards.

So here’s what I think is happening: there is still a net release of CO2 from the solubility pump, but less CO2 is released now that atmospheric CO2 is around 390ppm compared to when it was lower (280ppm say), because of simple equilibrium chemistry. This assumes there is plenty of carbonate about to stop, through equilibrium (2), the oceans becoming more acidic, reducing CO2 uptake by pushing equilibrium (1) to the left.

So whereas previously with CO2 at 280ppm, the solubility pump would have released (say – these are hypothetical figures) 4 GtC/yr and the biological pump taken 4GtC/yr back to the ocean depths, now, with CO2 at 390ppm, the solubility pump might be releasing only 2GtC/yr but the biological pump is still taking up 4GtC/yr. Hence the net 2GtC/yr uptake by the oceans which is in large part saving us from ourselves.
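Those hypothetical figures make a two-line flow model. This is my own sketch, using the made-up numbers above, with the solubility pump’s release assumed to fall linearly as atmospheric CO2 rises:

```python
# Tiny flow model of the hypothetical figures above: the biological pump is
# fixed at 4 GtC/yr; the solubility pump's release falls as atmospheric CO2
# rises (linearly, for illustration: 4 GtC/yr at 280ppm, 2 GtC/yr at 390ppm).
def net_ocean_uptake(co2_ppm):
    biological = 4.0                                      # GtC/yr down, unchanged
    solubility = 4.0 - (co2_ppm - 280.0) * (2.0 / 110.0)  # GtC/yr released upwards
    return biological - solubility                        # net GtC/yr into the ocean

print(net_ocean_uptake(280.0))   # pre-industrial: pumps in balance
print(net_ocean_uptake(390.0))   # today: ~2 GtC/yr net uptake
```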

Digression: I have to say that I can’t help making the observation that the solubility pump depends on the MOC, and that there are those who think the MOC might eventually fail, driven as it is by the cooling of surface waters flowing from low to high latitudes (the IPCC discusses this in Box 5.1, p.397). This would, according to my reasoning, lead to a decrease in the release of CO2 via the solubility pump, increasing the net uptake of CO2 by the oceans, though this may be offset if the biological pump is also weakened (by a reduction in nutrient upwelling, say). I am therefore hypothesising a mechanism (a negative feedback) helping to cause interglacial warming periods to be self-limiting. I should point out, though, that this is completely the opposite of what the IPCC say (e.g. sections and 3 to 5, p.530 and 532-3 and on p.536). Digression over.

Let’s summarise where we are: I am suggesting that the equilibrium between CO2 in the atmosphere and in the oceans is potentially important. Even though the oceans release CO2 through this mechanism, the equilibrium chemistry means they release less as atmospheric CO2 rises.

But how much less?

I mentioned at the outset that it is not in dispute that CO2 is in equilibrium between the air and the water at the surface of the ocean. But how deep is the surface? What is the gradient in CO2 concentration away from the surface of the ocean? How much extra CO2 can be taken up (or as we have seen how much less released) in a year? Is the mechanism saturated at 2GtC/year as I assumed when I reported on my home-made carbon-cycle model?

It’s when we try to answer these questions that the IPCC Science report becomes – how shall I put it? – a little disappointing.

We turn now to Chapter 7: Couplings Between Changes in the Climate System and Biogeochemistry. In section (p.528) we “learn” that: “Equilibration of surface ocean and atmosphere occurs on a time scale of roughly one year.” My school headmaster would have a fit! This sentence is indeed content-free. There is no definition of what is meant by “surface ocean”. Is it 1mm, 1m or 100m? Until we can answer this question we are unable to quantify the effect of the “solubility pump”.

Back to chapter 5. Section 5.4: Ocean Biogeochemical Changes includes some interesting diagrams (p.405) showing how “anthropogenic carbon” is dispersed in the oceans. These show that carbon levels are most elevated, compared to pre-industrial levels, in the top 200m or so of the oceans – “more than half of the anthropogenic carbon can be found in the upper 400m” (p.404) – and in the North Atlantic.

The trouble is, we’re no nearer answering the question as to how long we can consider it takes to renew the active layer of the oceans that exchanges CO2 with the atmosphere.

Let’s try another tack. Let’s say (generously) that the layer is 100m, on average, based on inspection of CO2 diffusion diagrams in the IPCC report. Let’s say it takes 1000 years for the oceans to completely turn over – a figure noted a few times by the IPCC. If the oceans are 5000m deep (on average), as shown in the IPCC figures, then the 100m “surface layer” is renewed every 1/50th (100/5000) of 1000 years, that is, every 20 years.
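As a sanity check, that back-of-envelope calculation can be written out explicitly (all three inputs are the rough assumptions above, not measured values):

```python
# Surface-layer renewal time, using the rough figures assumed in the text
# (all approximate: layer depth is a guess from the IPCC diffusion diagrams).
surface_layer_m = 100    # assumed depth of the actively exchanging layer
ocean_depth_m = 5000     # average ocean depth per the IPCC figures
overturn_years = 1000    # time for the oceans to turn over completely

renewal_years = overturn_years * surface_layer_m / ocean_depth_m
print(renewal_years)  # 20.0 -> the surface layer is renewed every ~20 years
```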

Now we can try to answer the question posed at the start:

“If we reduce the level of CO2 in the atmosphere from its present 390ppm or an even higher level in future, will the oceans release CO2 they are currently absorbing (about 2GtC/year)? And, if so, over what timescale?”

The answer depends on the timescale we are looking at:

1. If we reduced the level of CO2 in the atmosphere overnight (more realistically, by say 1ppm from one year to the next), then the surface layers of the ocean would release some carbon as they re-equilibrate with the atmosphere.

2. But if, more realistically, we reduce the level of atmospheric CO2 from one 20 year period to the next, we can consider the outcome as follows:
– in both 20 year periods the ocean will outgas the same amount of CO2 from the deep;
– in the first period the ocean will carry away more carbon (or release a little less) than in the second period.
There is no correlation between what happens in the second period and in the first.

3. After a millennium or so, the ocean might release more carbon because of the extra carbon it is absorbing now. On the other hand, more carbon may simply end up in sediments.

Conclusion: The oceans will not release a significant proportion of the anthropogenic carbon they have absorbed since industrialisation if we reduce the level in the atmosphere back to 280ppm over a century or two.

“Equilibrium” and “flow” models of oceanic carbon uptake are relevant over different timescales. The flow model is applicable to decades and centuries, the equilibrium model to years and (possibly) millennia.

I believe it is inaccurate to say, as David MacKay does, that:

“If fossil-fuel burning were reduced to zero in the 2050s, the 2Gt[/yr] flow from atmosphere to ocean would also reduce significantly.”

The increase in annual oceanic CO2 uptake driven by the difference between atmospheric and oceanic CO2 levels has two components: the difference between the CO2 level now and when the current surface waters were last exposed to the atmosphere (roughly 390ppm vs 280ppm), and the difference from the previous year (about 2ppm). If, as I’ve assumed, 1/20th of the surface waters are renewed each year, the comparable concentration difference for the renewal component is 1/20th of (390-280)ppm, that is 5.5ppm. Since 5.5 is several times 2, the dominant cause of net oceanic CO2 uptake at present is the renewal of oceanic surface waters, not annual increases in atmospheric levels of CO2.
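The comparison of the two drivers is simple enough to compute (figures as in the text; the 20-year renewal period is my earlier assumption):

```python
# Comparing the two drivers of net oceanic CO2 uptake.
# All figures are the approximate values used in the text.
co2_now_ppm, co2_preind_ppm = 390, 280
annual_rise_ppm = 2           # recent annual increase in atmospheric CO2
renewal_period_years = 20     # assumed: surface waters renewed every ~20 yrs

# Effective concentration difference seen by newly exposed surface water:
renewal_driver_ppm = (co2_now_ppm - co2_preind_ppm) / renewal_period_years
print(renewal_driver_ppm)                    # 5.5 ppm
print(renewal_driver_ppm / annual_rise_ppm)  # 2.75 -> renewal dominates
```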

In other words, when Professor MacKay goes on to say:

“Much of the 500Gt we put into the atmosphere would only gradually drift into the oceans over the next few thousand years, as the surface waters roll down and are replaced by new water from the deep.”

he is correct – this process is going on. But, I suggest, it accounts for at least 75% of the 2GtC/yr of our CO2 pollution that the oceans are helpfully soaking up for us.

And if we were to reduce atmospheric CO2 levels by, say, 1ppm/year (e.g. by ceasing fossil-fuel burning and enacting a programme of worldwide reforestation), oceanic surface re-equilibration would offset only about 10% of the annual decrease. Meanwhile, with atmospheric CO2 at its current level, and all else being equal (unfortunately it probably won’t be), the solubility-pump uptake attributable to oceanic surface water turnover would continue to remove around 1.5GtC/year (about another 0.75ppm).

To go on: drawing down atmospheric CO2 at an initial rate of 1.65ppm/year (based on the above figures), declining to 1ppm/year as we approach the pre-industrial equilibrium, would allow us to return from 450ppm to 280ppm in around 170/1.325 [(1.65+1)/2] years, that is, roughly 130 years.
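Checking that drawdown arithmetic (a sketch using the text’s figures, deliberately ignoring feedbacks):

```python
# Rough timescale to draw atmospheric CO2 down from 450ppm back to 280ppm,
# using the rates estimated in the text. A sketch only: feedbacks ignored.
start_ppm, end_ppm = 450, 280
initial_rate_ppm_yr = 1.65   # net drawdown rate near 450ppm
final_rate_ppm_yr = 1.0      # rate as we approach pre-industrial equilibrium

avg_rate = (initial_rate_ppm_yr + final_rate_ppm_yr) / 2   # 1.325 ppm/yr
years = (start_ppm - end_ppm) / avg_rate
print(round(years))  # 128, i.e. roughly 130 years
```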

[Though, as I said, all else is not equal and positive feedbacks due to warming of the oceans and decreased albedo because of loss of ice-cover, etc. will most likely increase this timescale significantly.  On the other hand, if we do it before the deep ocean has warmed, we might just save the planet!].

April 3, 2009

Save the forests, save the world, part 2

Filed under: Forests, Global warming, Science — Tim Joslin @ 6:36 pm

There must be something in the air in the spring, because it seems to be the time of year when I gain the energy to review a bit of GW science. It is almost exactly a year ago that I wrote briefly about how difficult it is going to be to prevent dangerous climate change (CO2 > 450ppm) if we don’t increase the amount of carbon stored in the terrestrial biosphere (shorthand: “forests”).

I’m in the process of preparing a presentation provisionally titled “Save the Forests: Fixing Global Warming for Dummies”. So I suppose it is serendipitous that my New Scientist magazine (dated 4th April 2009) fell open a couple of hours ago at a Fred Pearce article titled “Keeping the planet’s heart pumping“. I say I “suppose” it is serendipitous, because the article presages some of the ideas I was going to include in my presentation. I guess the reinforcement of my point by the publication of this article outweighs the reduction in its originality.

I’ve started to get a little ratty when anyone suggests that reforestation may be an ineffective policy. The problem is that many people realise that carbon offsetting is a sham, and tar reforestation with the same brush. But the principle that we should preserve and increase the area of natural forest and preserve its integrity is absolutely correct. Right policy, wrong financial instrument (and, in the case of monoculture plantations, poor execution). I intend to go into this point in more detail, and even have a title for the blog post (I’m telling you now in case I never get round to it): “Don’t throw the forest out with the trees!”. Play on words is for children. Real men play on idioms. And eat quips!

Fred reports on the research of Victor Gorshkov and Anastassia Makarieva of the St Petersburg Nuclear Physics Institute. See here for a precis.

Gorshkov and Makarieva point out that forests generate rising air (low pressure) not just because they are dark (absorbing heat, expanding air, making it less dense and causing it to rise) but also because of what they call the “biotic pump”. That is, the trees pump moisture into the air (cooling themselves) which condenses at higher altitude. When the resulting water drops fall through the air column (my interpretation) – even to the ground as rain – the airmass becomes less dense and rises. Condensation not only reduces volume, but also releases heat, again causing airmasses to expand and rise. Rising air draws up air below it and other air rushes in from the sides and even from above. This process happens on (within reason) all scales of airmass. The effect can be clearly seen in billowing cumulus clouds. The early part of “A Cloudspotter’s Guide” describes the experience of a parachutist in a storm cloud, alternately falling and being carried up in rising pockets of air within the cloud. I don’t see how this could be explained without something like the “biotic pump”.

What really strikes me about the article, though, is that NS reports that Makarieva claims that:

“Nobody has looked at the pressure drop caused by water vapour turning to water.”

And the article – written, remember, by Fred Pearce, who has been reporting environmental issues and GW in particular for decades – goes on to note that:

“…because forest models do not include the biotic pump, it is impossible to say what wiping the Amazon off the map would mean for rainfall worldwide.”

I’ve recently been wondering whether our understanding of the climate is quantitatively strong, but qualitatively weak. Too much reliance on those computer models – remember, it’s garbage in, garbage out.

Now, the climate and weather models should be foolproof because they are held to rely on the laws of physics. But if they fail to capture accurately the process of lowering of air pressure due to the condensation of water vapour they could, I suppose, be systematically in error.

Even if this mechanism is implicit in the models, and it’s just the humans who fail to recognise it (quite feasible if the models correctly implement the laws of physics), they definitely fail (because they don’t implement feedbacks from climate to vegetation) to capture the positive feedback that causes forests to spread across continents. That is:
1. Moist forests create low pressure air masses (the rising air may directly result in rainfall over the forest and surrounding areas, in particular inland);
2. Drawing in moist air from the ocean (hence the importance of coastal forests emphasised by Gorshkov and Makarieva);
3. Creating airflow (at least seasonally) from the coast;
4. Providing rainfall to maintain and increase the area of the forest.

So, once established, a rainforest is self-sustaining, and indeed will tend to grow until it fills the continent at least over a latitudinal band or some other process or natural obstacle (e.g. mountain range) keeps it in check. Deforestation creates the reverse feedback. Once a tipping point is reached, the drying-out of a forest may become unstoppable.

I find it hard to believe that Gorshkov and Makarieva’s idea is new. Indeed, some commenters on the NS article note antecedents, notably something called the Permaculture movement, a 1970s idea of Bill Mollison and David Holmgren, though the “biotic pump” doesn’t seem at first glance to be central to the Permaculture philosophy. But NS also reports “that current theory doesn’t explain clearly how the lowlands in continental interiors maintain wet climates.”

I’m rather puzzled, since I’d always assumed that this mechanism explained cloud formation, storms, hurricanes, monsoons and why there is no forest in North Africa and air pressure there is predominantly high. I thought the problem was communication, or rather the lack of it, by the scientists. If Fred Pearce’s article can be taken at face value, it seems that the problem may instead be one of understanding, or rather the lack of it.

Train Stress and the 20:52 from King’s Cross to Cambridge

Filed under: Rail, Transport — Tim Joslin @ 9:53 am

I’ve mentioned before that it is possible to write an essay about every UK rail journey. I have something of a backlog – I hope soon to find time to explain to the world the horrors of weekend engineering work – but want to give yesterday’s journey a mention.

I went on a day-trip to Birmingham, taking in the National Trust Back to Back houses and the Barber Institute of Fine Arts. Both well worth-while.

But as ever with UK trains, as much emotional energy is expended on the journey as at the destination.

I bought advance tickets for £8 outward (11:03 Euston to 12:27 Birmingham New Street) and £14.50 return (19:10 Brum to 20:34 Euston) weeks ago. There are no reservations (phew!) on Cambridge trains so you can take any you want to London. This in itself is daft, since, if I’d wanted to, I could have added to the crush on the country’s most overcrowded train, the 07:15 from Cambridge – incidentally shortly to be increased from 8 carriages to 12, which will still not be enough for everyone to have a seat, as passengers might expect, given the extortionate fares at commuting times.

I passed on the 07:15 yesterday morning and instead took the 09:15, which actually goes at 09:20 (virtually all the other fast trains are on the quarter hour in both directions), since keeping things simple for the travelling public is not very high up First Capital Connect’s priority list.

The fares were cheap, but this is not the product I want. Nor do the vast majority of the travelling public. What we require are reasonably priced walk-on fares.

The point, of course, is that the penalty for missing the train applicable to your ticket is severe. I read somewhere of someone having to fork out £200 for a new ticket on the Birmingham train. So one reason I took a train (the 09:20) to arrive at King’s Cross (a few minutes walk from Euston where my Birmingham train departed at 11:03) shortly after 10am was to minimise the possibility of missing my connection.

The stress continued through the day, of course, as everything had to be timed to ensure I was at the station in good time for the 19:10. All this, of course, adds considerably to what I term the effective journey time. You end up creating a lot of dead time making sure you don’t miss the sodding trains.

But Virgin managed to increase my train stress levels still further. Get this: when I looked at my train tickets the evening before I saw that the reservations were correct (I’m sure I checked these when the tickets arrived the day after I bought them online). But somehow the actual Cambridge to Birmingham tickets – referred to by number on the reservations – both said “From: Cambridge; To: Birmingham”. How could this happen? It seems that when you book tickets online they’re not, as you might suppose, printed automatically. The operation, it appears, is not entirely controlled by computer. No, room for human error has been allowed. I strongly suspect someone takes your online booking and types it again into the ticketing system!

Reflecting on this, and the melee of ticket inspectors at Euston, a cynic might conclude that the UK railways are in reality a very expensive job creation scheme. I couldn’t possibly comment.

Anyway, more stress, as I had to check at Euston that Virgin Trains weren’t going to get arsey and leave me stuck in Brum without a valid return ticket. Then I had to get a replacement ticket issued at Birmingham New Street, which required supervision by a supervisor apparently, though I was careful to explain the problem carefully and the staff were reasonably reasonable – though an expression indicating he’d scented blood flickered across the face of the ticket inspector on the return journey, before I wheeled out my careful explanation again, in my most polite deferential manner. Advice: keep on the right side of these guys!

Still, the trains ran more or less to time. The 19:10 left Brum a little late, but must have arrived at Euston a little early, as I reached King’s Cross at 20:42, which would have been pushing it if we’d pulled into Euston at the scheduled time of 20:34. Perhaps I should explain how such an early arrival can happen. The point, of course, is that the train timetables are padded. The LSE reported recently (pdf) that “on many routes… it is now no faster to commute into London than in the immediate post-war period, and it is substantially slower than in the 1970s”. I suspect a large part of the reason is an unintended consequence: my guess is that the rail companies have more to gain from ensuring their punctuality targets are achievable than from attempting to speed passengers to their destination as fast as the expensive technology will allow.

Luckily, then, I was at King’s Cross in time to catch the 20:45 fast train to Cambridge. Except there isn’t a 20:45. I took the 20:52 slow train, but this arrives at Cambridge after 10pm, around about the same time as the 21:15. In other words after 20:15 there is effectively only an hourly service to Cambridge. If you can’t control when you arrive at King’s Cross very accurately – assume you arrive there at a random time – then your average effective journey time is 15 minutes longer once the xx:45 fast trains stop running. Explanation: earlier in the evening you have to wait an average 15 minutes for a fast train; after 20:15 you have to wait an average 30 minutes. Catching a slow train at 20:52 or 21:52 or 22:52 gains you virtually nothing (especially as these trains are even slower than the xx:52 services during the day).
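The effective-journey-time arithmetic above can be sketched in a couple of lines (assuming, as in the text, that you arrive at King’s Cross at a uniformly random time):

```python
# Average wait for a regular service, assuming a uniformly random arrival
# time at the station: on average you wait half the headway.
def average_wait(headway_minutes):
    """Mean wait (minutes) for a train running every `headway_minutes`."""
    return headway_minutes / 2

early_evening = average_wait(30)  # fast train every 30 min -> 15 min wait
after_2015 = average_wait(60)     # effectively hourly -> 30 min wait
print(after_2015 - early_evening)  # 15.0 extra minutes on average
```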

Of course, I could hardly argue that a 20:45, 21:45, 22:45, 23:45 and so on should be operated if there were no demand. But there is. Even with the current service, when a lot of people must choose to carry on what they’re doing in London a little longer to catch the fast 21:15 rather than rush for the 20:52 – heck, a lot of people must choose not to take the train to or via London so often in the first place because the evening return service is so poor – the 20:52 is packed when it leaves London and at least half full (that’s a hundred or two passengers, paying probably at least £6.00 on average for the return leg of their journey – do the math) when it reaches Cambridge.

And, to rub salt in the wounds, the 20:52 only has 4 carriages. Last night people were standing when it left London, although I managed to get a seat near the toilet. Luxury. To me this represents a complete breakdown of public control of the train operating companies, because it is completely unnecessary to reduce the train to 4 carriages. The line supports 8. No doubt the train company saves a few pounds, but this must be far exceeded by the cost in passenger inconvenience and discomfort. It seems to me it would be fairly simple to sort this out. Just apply a levy to the ticket revenue for any trains over 70% full. Above this level the passenger experience degrades. You have to sit in seats you don’t want to, couples and groups can’t always sit together and so on.

I simply can’t understand why politicians aren’t falling over each other to propose solutions to the mess that is the UK railways. Don’t they want our votes?

It’s simply a matter of setting the rules to prevent the operating companies short-changing passengers and to give them the right incentives – sticks and carrots – to run the service people want.

April 1, 2009

The Perils of Efficient Diplomacy

Filed under: Complex decisions, Global warming, Politics, Reflections — Tim Joslin @ 9:24 am

Ahead of the G20 summit, I was struck this week by a comment by Angela Merkel, in a very interesting piece in the NYT:

“ ‘International policy is, for all the friendship and commonality, always also about representing the interests of one’s own country,’ Mrs. Merkel said in an interview with The New York Times and The International Herald Tribune.”

Germany seems hell-bent on blocking new stimulus spending to try to lift the world out of recession. Merkel is playing hard-ball, even (allegedly) trying to spike plans for a $2trn stimulus package by leaking it to the press at the weekend. France seems to agree, but Sarko emphasises the regulatory agenda, and is even threatening to take his ball away if he doesn’t get what he wants. Nicolas and Angela, hitherto badmatch.com-incompatible, have even managed to cosy up together.

But what are the French and Germans trying to achieve? There appears to be a G20 majority in favour of a stimulus. The population (and GDP) of the US, UK and Japan, who are all clearly for it, far exceeds that of Old Europe or even the EU as a whole. And it’s hard to believe that the developing countries – including potential Franco-German EU partners in Eastern Europe – wouldn’t be very much in favour.

And surely the spillover effects of economic dislocation and political instability and extremism in Russia and the countries it considers fall within its sphere of influence would not be in the interests of Western Europe.

At the end of the day, much of the G20’s output will be just words. Compliance with a woolly agreement such as a fiscal stimulus is hard to verify and there are no effective sanctions. Countries are ultimately going to make their own spending decisions.

If France and Germany find themselves in a small minority, they will be forced to go along with a stimulus, and their leaders will lose face at home.

But if France and Germany succeed in blocking an agreement to take action, they will most likely also fail to achieve agreement on the regulatory changes that are so important to them. And agreements to establish regulatory organisations and protocols are more concrete, verifiable and permanent than spending increases compared to uncertain baselines.

Old Europe seems to have put itself in a lose-lose position. And perhaps they should bear in mind that this is the G20 and not the G7. Indeed, they may have already been outmanoeuvred (for once) by the Anglo-Saxons into a diplomatic forum where it is more difficult for them to block progress than the G7 or to get their own way as in the EU.

It could be very interesting this week. The cards are up in the air. How will they fall? If the emerging countries use the (I suspect temporary) “multipolar moment” to assert their influence, then France and Germany could be the losers.

Perhaps Old Europe is adopting a negotiating position. But the US will remember that their position on Iraq did not turn out that way. And freeloading in Afghanistan, with German soldiers notoriously not allowed out after dark, is a source of continued irritation. From a UK perspective, the French and Germans have stacked the decks in the EU for a long time. The common understanding here and in much of the world is that the CAP is an outrageous subsidy to the wealthy. The fear must be that France and Germany intend to use all their diplomatic weaponry to try to achieve their own national goals, regardless of collateral damage.

But the worst aspect is that, win or lose, Franco-German obstructionism might change the mood, at a time when the world needs greater international cooperation on a host of issues.

Ango and Sarki really should take a step back and think about what they are doing. They are saying they are worried about agreeing to concerted international action that might damage their economies. In a few months, at the Copenhagen climate negotiations, the boot will be on the other foot. They will be hoping the US, China and India will agree to concerted international action that might damage their economies.

Merkel’s attitude rather reminds me of transport decisions in Cambridgeshire, where, as I reported, South Cambridgeshire District Council opposes plans purely on the grounds of their perception of the narrow interest of their own residents.

Many difficult collective decisions can only be made if the interests of narrow constituencies are put to one side.

Come on Angela, Nicolas, let’s have a bit of give at the G20, as well as take!

PS Nic, Ang, here’s another bit of pre-G20 reading.
