Uncharted Territory

August 5, 2018

UK Heatwaves: What are the Global Warming Risks? (1) A Really Freak Summer

I suggested last time that 2018 could beat 1976 to the title of UK’s hottest summer ever in the whole Central England Temperature (CET) series which goes back to 1659. I now doubt that will happen since a change in the weather is on its way. Here are Weathercast’s projections, albeit a couple of days out of date as their site isn’t updating at the moment (I guess someone will be in the office on Monday morning!):

180805 Change in the weather Weathercast

And here’s what the Met Office have to say:

180805 Change in the weather Met Office

The CET mean for August so far is impressive:
180805 CET to 4th August 2018
but a period of average daily mean temperatures (around 16C) will soon drag the monthly mean down below the 18C necessary for the June through August average to be higher than in 1976. Note that, after a cool start, it is now very possible that 2018 as a whole will be hotter than the previous hottest year in the entire CET record, 2014.

2018 has been exceptional, though. The problem is the arbitrary period (June, July and August) defined as the meteorological summer. But for 2018 to break all-time records it’s not even necessary to split months (e.g. by taking a period ending on 7th August). As I pointed out a fortnight ago, 2018 has been easily the hottest “early summer” – May, June and July – as well as easily the hottest for the period April through July. This seems very significant to me, because it illustrates the effect of global warming so starkly, yet it seems to have been ignored or gone unnoticed by other commentators, so here again is a graph of annual mean May through July CET (as previously published, but with the final July figure of 19.1C incorporated):

180805 May to July CET

The question is, how much worse could UK heatwaves get, with global warming?

Now, I find statements such as the following from a recent Guardian front-page lead to be extremely unsatisfactory:

“Events worse than the current heatwave are likely to strike every other year by the 2040s, scientists predict.”

Curiously, this sentence does not appear in the online version of the same article.

How can it be possible for summers as hot as 2018’s to occur every other year in less than 30 years?

The planet is warming at “only” somewhere around 0.2C per decade, so by “the 2040s” it is unlikely to have warmed by more than 0.6C. And my patent graph above shows that May through July 2018 has been around 1.5C warmer than the same period has been on average in recent years (and more than 2C warmer than it used to be in the average year).

So it would seem that, even by the 2040s, another summer as hot as 2018 would be quite unusual if average summer temperatures are only 0.6C warmer than at present: the graph above shows few summers – fewer than 10% – that were 1C or more warmer than the 21-year running mean (the black line).
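
For anyone who wants to check that figure against the data, here is a minimal sketch in Python. It assumes the monthly CET series has already been saved locally as a simple CSV (a hypothetical layout; the Met Office file needs a little reshaping first) and it shortens the running-mean window at the ends of the series rather than using the hold-the-last-value extrapolation behind my graphs:

```python
# A rough sketch of the check, not the exact calculation behind the graph.
# Assumes the monthly CET series has been saved as "cet_monthly.csv" with
# columns year, jan, feb, ..., dec (a hypothetical layout).
import pandas as pd

df = pd.read_csv("cet_monthly.csv")

# Mean CET for May, June and July of each year.
df["may_jul"] = df[["may", "jun", "jul"]].mean(axis=1)

# Centred 21-year running mean. Note: this simply shortens the window at the
# ends of the series, a cruder choice than the extrapolation used here.
run21 = df["may_jul"].rolling(window=21, center=True, min_periods=11).mean()

deviation = df["may_jul"] - run21
frac = (deviation >= 1.0).mean()
print(f"Years with May-Jul CET at least 1C above the 21-year mean: {frac:.1%}")
```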

It seems the “heatwave every other year” statement refers to temperatures across Europe as a whole. This is problematic for two reasons.

First, as the European Environment Agency reports, land temperatures are rising nearly twice as fast as ocean temperatures:

“According to different observational records of global average annual near-surface (land and ocean) temperature, the last decade (2008–2017) was 0.89 °C to 0.93 °C warmer than the pre-industrial average, which makes it the warmest decade on record.

The average annual temperature for the European land area for the last decade (2008–2017) was between 1.6 °C and 1.7 °C above the pre-industrial level, which makes it the warmest decade on record.”

Second, the “every other year” claim may be based on the average for a large area. As pointed out by King and Karoly in Nature (“Climate extremes in Europe at 1.5 and 2 degrees of warming” (pdf)):

“The highest changes in frequency are projected for the largest regions as the year-to-year variability is lower on these spatial scales…”

The UK is a group of islands off the edge of Europe, sticking out into the Atlantic at a latitude where weather systems usually move from west to east. Its climate is therefore influenced more by the ocean than by the nearby continental landmass. Thus average UK summer temperatures have not risen, and will not by the 2040s rise, as fast as those for the land area of Europe as a whole.

The statement in the Guardian should not be taken as applicable to the UK.

Nevertheless, as summer 2018 shows, there are times when the UK is in step with Europe, climate-wise. And that’s when we get those exceptionally hot days and even entire summers.

The simple way to look at the risk of something worse than 2018 is to consider by how much freak UK summers in the past have been warmer than the average contemporaneous summer.

Let’s consider July and August first, since that’s when the heat is most intense. The following chart shows mean July and August temperatures since 1659 at the top, with 11- (red) and 21-year (black) running means. At the bottom, in green, it shows the deviation of each year from the 21-year running mean centred on that date (extrapolated at the ends of the graph by assuming no change from the first or last available year):

180805 July to August deviation

The most exceptional year in the entire 360-year series is 1911, when, as I mentioned in a previous post, the nobility reportedly played tennis in the altogether at their country estates, while riots broke out in the cities. The mean temperature in the CET series for July and August 1911 was 2.73C above the mean for 1901-1921.

If 2018 were to average 18C, as I previously assumed rather bullishly, it would only be 1.97C warmer than the mean nowadays (and maybe even less, since I’ve used the mean for 1998-2018 and not for 2008-2028, which won’t be available for another decade, but may well be higher). For 2018 to be as freakishly hot as 1911, the mean CET for August would have to be 19.6C, that is, rather hotter than in the hottest August recorded (1995, at 19.2C), but following the third hottest July on record, at 19.1C. When there’s already a media frenzy about the heat in July, we’d have to experience an even hotter month – more or less the current uncomfortable weather continuing for nearly another 4 weeks, rather than breaking up in a few days.  Not a pleasant prospect, but apparently possible, on the evidence of 1911.

And I mentioned earlier that May to July 2018 has been the hottest on record. But was it the hottest it could have been? It seems not. Other years were much more exceptional for their period, the record being 1976, which, relative to summers of the time, was more than 0.6C warmer from May to July than 2018:

180805 May to July deviation

So one real risk for the UK is that we could experience a truly freak summer, that is, a summer much hotter than the warmer summers we are experiencing because of global warming. 2018 has been exceptionally hot on some measures, but much of that has been due to global warming. It really hasn’t been a freak.

But has global warming changed what’s possible? Could there be even more serious heatwave risks for the UK than summer temperatures as much above the current norm as they were in 1911 and 1976? These questions will be addressed in the next exciting instalment!

July 24, 2018

Summer 2018: UK’s Hottest Ever?

Filed under: Effects, Global warming, Media, Science, Science and the media, UK climate trends — Tim Joslin @ 9:52 pm

2018 is already notable (pending final data for July) for the hottest May to July and the hottest April to July in the entire 360 years of the Central England Temperature (CET) record, as the graphs in my previous post show so eloquently. Nevertheless, besides foaming at the mouth that Englishmen are being advised not to go out in the mid-day sun (optional musical interlude), certain sections of the media are speculating rabidly as to whether 2018 could be the hottest summer ever. The Express, for example, announces that “Summer 2018 [is] on track to beat ALL RECORDS as HOTTEST day looms”, though neither the article, nor the accompanying Met Office video, quite say that.

When the media, and especially the Met Office, refer to “summer” they mean June, July and August. So pedantic. It can be nearly as hot in the UK in May and September, so comfortably the hottest May through July recorded in 360 years seems more significant to me than barely the hottest June through August.

Especially as the main feature of this summer is the lack of any significant breaks in the weather as opposed to the sheer heat – unless I’ve not been paying attention we don’t seem to have broken a single daily record for the UK as a whole so far. I went so far as to question in my post reporting on a fairly warm June whether we were experiencing a heatwave or just warm weather. I’ll grant, though, that this week, with peaks consistently over 30C here in London, does feel unpleasantly like a “heatwave”. And at least one daily record high may be anticipated.

So what are the chances of June to August, “summer”, being the hottest ever in the CET? Simple: the CET mean for August needs to be 18C or higher. I’ve put 18C in the data for August and produced this graph:
180724 June to August CET graph to 2018 border
This shows that if the CET mean for August is 18C, summer 2018 will average 17.8C in that series, just pipping 1976’s 17.77C and the 17.6C recorded in 1826.
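
The 17.8C is just the average of the three monthly figures; a quick sketch, in case anyone wants to check the sums (whether or not the official seasonal figure weights months by their length, both versions come out a shade above 1976’s 17.77C):

```python
# Quick check of the 17.8C, taking June 2018 at 16.1C (equal with 2003 after
# the downward revision mentioned below), July provisionally at 19.3C and a
# hypothetical 18.0C for August.
months = {"jun": (16.1, 30), "jul": (19.3, 31), "aug": (18.0, 31)}

simple = sum(t for t, _ in months.values()) / len(months)
weighted = (sum(t * d for t, d in months.values())
            / sum(d for _, d in months.values()))

print(f"Unweighted mean of the three months: {simple:.2f}C")     # ~17.80C
print(f"Weighted by days in each month:      {weighted:.2f}C")   # ~17.82C
```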

There are a couple of assumptions. First, I still have 19.3C as the figure for July. The Met Office page is currently showing 19.4C up to 23rd, so 19.3C should be safe enough, since the next few days are likely to drag up the average for the month, although the figure can change by a few tenths right at the end of the month. I understand this to be because data from remote weather stations comes in late, and I’ve noticed that the monthly figure is usually adjusted downwards, at least in winter.  Second, the figure for June was adjusted down significantly (which caught me out somewhat), though that adjustment has been queried. Obviously if the adjustment was erroneous and the June figure is revised upwards (which I don’t expect to happen), then August doesn’t need to be so hot for 2018 to break the summer record.

But how usual is an 18C mean in the CET for August? That presents the opportunity for another graph!:
180724 June to August CET graph to 2018 border
As can be seen, 18C in August was pretty unusual for two or three centuries and not even achieved in 1976, which only managed 17.6C. But in the global warming era 18C is very possible, provided, of course, that current weather patterns continue.  If they don’t, everyone will be wishing they’d trumpeted the record spring-into-summer heat!

July 22, 2018

July 2018 UK Weather: CET Records Set

Filed under: Effects, Global warming, Science, UK climate trends — Tim Joslin @ 2:51 pm

Last month I jumped the gun to report the hottest UK June since 1976 in the Central England Temperature (CET) record.  I was undone by a slight downward revision, so that in the event June 2018 was only equal with 2003 as the warmest since 1976.  Despite that, the forecast for another week of temperatures reaching the 30Cs and a CET for July to date of significantly over 19C prompt me to call July 2018 even earlier, as one of the three hottest on record in the CET.  Here’s a graph (the first of many, so be prepared!):
180722 July CET graph to 2018
Only 2006 (19.7C), 1983 (19.5C) and now 2018 (the CET so far this month was 19.3C when I prepared this graph) have exceeded 19C in the CET (thanks, as ever, to the Met Office for the data). In fact, since the next hottest July was in 1783 at 18.8C – which should possibly even be discounted on the grounds that the heat was in part the effect of volcanic smog from the Icelandic volcano Laki – some distinctly wintry weather would be necessary for July 2018 not to end up one of the three warmest, justifying my early call (though there’s a huge getting-round-to-it factor in that!).

What is also striking about the July temperature graph is that the three hottest Julys – 2006, 1983 and 2018 – are all in the global warming era. Of course.

I’ve also labelled some notable years in this and subsequent graphs. In particular, I read articles drawing 1955 and 1911 to my attention. Ian Jack wrote nostalgically about 1955, though I do wonder if its impact was magnified by his age at the time. I’d personally rank 1983 – one of the few summers when I played tennis regularly – as up there with 1976. And I’m backed up by the CET data!

A brilliant Weatherwatch column in the Guardian (better even than the one of 2011 on the same topic) reports on the summer of 1911. It’s worth quoting:

“The long hot summer of 1911 is credited with changing fashions, with women shedding whalebone corsets and brassieres becoming the rage. Edwardian [sic, though Edward VII died in 1910] aristocrats are said to have taken up nude tennis at their country estates…

There was record heat in August and the sunshine continued until September, by which time the countryside was also in severe distress and riots had broken out in the cities.”

Time will tell if we’re in for a repeat!

So onto the graph-fest.

I was going to follow up last month’s post with a graph of the April to June CET, having noticed that the hot June had followed a distinctly mild mid to late spring (despite cold snaps continuing). Anyway, here’s that one, a little belatedly:
180722 Apr to Jun CET graph to 2018
Yep, that’s right, April to June this year has been one of the three warmest such periods in the CET record, exceeded only by 1762 and 1798. Crikey!

Then, of course, a hot June followed by an exceptionally hot July must make the early to mid summer graph (June and July) quite interesting:
180722 June to July CET graph to 2018
It is, but 2018 is still only the third hottest year, after 1976 and 2006 this time (though 2018 could still also fall behind 1826, I suppose).

Surely there must be some measure on which 2018 is (provisionally) the warmest ever?

Yes, you’ve guessed it. A mild late spring and hot early to mid summer makes 2018 a record-breaker for May to July mean CET:
180722 May to July CET graph to 2018

And that’s not it. If we add in April as well, sort of mid-spring to mid-summer, it’s not even close:
180722 April to July CET graph to 2018
Bingo!

June 27, 2018

Hottest UK June Since 1976 (and Weather Reporting Hype)

Filed under: Effects, Global warming, Media, Science, Science and the media, UK climate trends — Tim Joslin @ 2:27 pm

It always baffles me that the Met Office reports notable weather months 2 or 3 days before their end – you’d have thought they’d wait to finalise the “official” data – so this time I’m facetiously reporting before they do (assuming I type fast enough)!

I know it’s only the 27th and CET (that’s the Central England Temperature for any newbies) data has been published only up to the 25th (thanks again to the Met Office for this resource):

180626 Heatwave CET data

but it’s already a nailed-on slam-dunk that the CET mean for June 2018 will exceed the 16.1C recorded in the exceptionally hot summer of 2003, making this June the warmest since the legendary summer of 1976 (17C).

I say this simply because the forecast for the rest of the month is for fairly hot conditions to persist (thanks this time to Weathercast):

180626 Heatwave Weathercast

Simple arithmetic suggests that daily mean temperatures of around 20C (London’s are not atypical of England as a whole, slightly cooler if anything) will drag up the average for the month from 15.9C for the first 25 days to over the 16.1C recorded in 2003.  Here’s a graph showing June CET since 1659, assuming (conservatively) a mean of 16.2C this year:

180626 Heatwave June CET
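
That “simple arithmetic” is just a weighted average of the month so far and the assumed remainder; spelt out:

```python
# Combine the mean for the first 25 days with an assumed daily mean of about
# 20C for the last 5 days of June.
mean_so_far, days_so_far = 15.9, 25
assumed_daily, days_left = 20.0, 5          # assumption based on the forecast

june_mean = (mean_so_far * days_so_far + assumed_daily * days_left) / 30
print(f"Projected June CET: {june_mean:.2f}C")    # ~16.6C, above 2003's 16.1C
```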

Having said all that, this June and May (which I’ll come to) have not been notable for exceptional temperatures.  For example, the current “heatwave”, though fairly unpleasant, has come nowhere near breaking daily records for the CET area (though some local records may be broken, in Wales, for example).  Temperatures have so far only edged above 30C in one or two places, with 30.1C at Hampton Water Works on 25th (Monday) not a patch on the 33.5C at East Bergholt on the same date in 1976.  Even the 30.7C at Rostherne No 2 yesterday, 26th, is well below 1976’s 35.4C at North Heath.

I might even go so far as to say it’s a little bit of an exaggeration to call the current conditions a “heatwave” (at least in southern England).  The term is being devalued by tabloid reporting.  It’s an outrage!  (To use another overused word).  It’s just “hot weather”.

Given that we had several days in succession over 35C in June 1976, and we’ve had 42 years of global warming since then, and warming affects extreme events disproportionately, I wonder what temperatures we’d hit if we had similar conditions to 1976?  Presumably then (as in 2003), high pressure didn’t just sit over the UK, but drew in air from the warmest direction in summer, that is from the south-east (or even just from further east).

What has been notable this year has been the persistence of dry, sunny, windless, anticyclonic conditions, with only a small interlude of westerlies in June. That persistent high pressure conditions are fairly unusual in June is presumably the reason why, on average, June CET temperatures have risen less than other months in the global warming era (the black line in the graph above shows that, averaged over 21 years, the recent period has not been exceptional, though global warming will inevitably drag the mean temperature up over the coming decades).  Because the oceans warm only slowly, periods of weather dominated by westerlies are likely to be only a little warmer than before global warming set in.  The 5 year periods in the mid 2000s and most recently (the green line in the graph) show the potential for generally hotter Junes.

And the historical record (check out 1676 and 1846!), suggests that a truly freakish June these days (with global warming) would average well over 18C, possibly even touching 19C.  Much worse than the low 16Cs this year.

At least this June has been reasonably hot.  May was widely reported as the hottest and sunniest on record.  It was exceptionally sunny (as may also be the case for June), but nowhere near the hottest.  Here’s my latest graph of May CET:

180626 Heatwave May CET

In fact, at 13.2C in the CET, May 2018 was only as warm as May 2017 and less warm than 2004 (13.4C), 1992 (13.6C) and quite a few others!

So how can May 2018 be reported as the “hottest on record”?

Well, obviously it might be because statistics are being used for a different region e.g. the UK as a whole, but I don’t think that’s the main reason.  The CET is fairly representative.

No, if you read the small print you’ll find that the “hottest May” claim is based on daily maximum temperatures only.  When you take night-time temperatures into account, as is almost universal practice, May 2018 was not exceptionally warm.  The reason for the difference lies in all that sunny weather, which tends to lead to warm days and cool nights, so that the average of the daily maxima is high relative to the overall (day and night) average.

If that weren’t enough, weather record reporting is also afflicted by “Year Zero Syndrome”.  The CET record back to 1659 is not used, or even referred to.  Instead records are based on the period since 1910, when more comparable records begin.  It’s a bit like the way football records in England are now based on the period since the start of the Premier League in 1992, so that we no longer realise that goal-scoring feats comparable (rather than equal, because there are now 2 fewer top-flight teams) to Dixie Dean‘s 60 goals in 1927-28 are still possible.  Clearly, from the chart above, no recent May mean temperature approaches that of 1833, or even 1848.

That leads me to my usual warning.  May 1833, at 15.1C, was about 3.5C hotter than the mean for the period (given by the black 21-year running mean on the graph).  Because, in line with global warming, an average May (unlike an average June) is now warmer than at any time since 1659 (the black line again), a similarly freakish May would be somewhere in the mid-15Cs.
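
To spell that out (with the present-day mean May taken as roughly 12C purely for illustration – the black line on the graph is the authoritative figure):

```python
# A rough translation of the warning above into numbers.
may_1833, deviation_1833 = 15.1, 3.5
contemporary_mean = may_1833 - deviation_1833        # ~11.6C in the 1830s

present_mean_may = 12.0                              # assumption, not from the post
freakish_may_now = present_mean_may + deviation_1833
print(f"A similarly freakish May today: around {freakish_may_now:.1f}C")   # mid-15Cs
```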

Unless the last few years are exceptional, it’s curious that June shows the global warming signal so weakly.  I’ll have to look more closely at the data to see whether any other months exhibit a similar feature.

June 23, 2017

How Not to Report a Weather Record: 21st June 2017

Filed under: Effects, Global warming, Science, UK climate trends — Tim Joslin @ 5:36 pm

Well, well, well.  Less than a year on from an exceptionally hot mid-September day (at least exceptionally hot for the UK, if not, perhaps, for Kuwait), and it’s only gone and happened again.

Yep, the presumably-less-poisonous-than-mercury red liquid in my re-purposed fridge thermometer has only gone and reached 34.5C this week, on what was widely reported as “the hottest June day for 41 years”, that is, since the summer of 1976.  And curiously I was close to the epicentre of the heatwave back in ’76, in FA Cup-winning Southampton, then the hottest place in the country, just as the area where I am now, a few miles from Heathrow, has been this time.

And once again the record has been somewhat understated.   I explained in my post on the topic last September that the true significance of the 13th September 2016 was that it was the hottest day that had been recorded in the UK so late in the year.

You’ve guessed it.  The 34.5C recorded at Heathrow this summer solstice was the hottest daily maximum so early in the year.  Back in 1976 the temperatures over 35C (peaking at 35.6C in Southampton on 28th) were later in the month.  In other words, 21st June 2017 saw a new “date record”.

Admittedly, it was not a particularly notable date record, since 34.4C was recorded at Waddington as early as 3rd June during the glorious post-war summer of the baby-boom year of 1947.  And 35.4C at North Heath on 26th June 1976 also seems somewhat more significant than nearly a whole degree less on 21st June.  Furthermore, unlike in 1947, 1976, and, for that matter, 1893, only one “daily record” (the hottest maximum for a particular date) was set in the 2017 June heatwave.

Nevertheless, 21st June 2017 set a new date record for 5 days (21st to 25th June, inclusive) and that is of statistical significance.  The point is that without global warming you would expect there to be approximately the same number of date records each year, or, more practically, decade.  The same is true of daily records, of course – providing a recognised statistical demonstration of global warming – but my innovation of date records provides for a more efficient analysis, since it takes account of the significance of daily records compared to those on neighbouring dates.  It makes use of more information in the data.

Supporting the “hypothesis” of warming temperatures, the 5 day date record set on 21st June 2017 exceeds what you would expect in an average year, given that daily temperature records go back over 150 years.  On average you’d expect less than 3 days of date records in any given year.  But we can’t read too much into one weather event, so how does it look for recent decades?
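
As an aside, that expectation of fewer than 3 days a year follows from a simple symmetry argument: in a stationary climate every day of the year belongs to exactly one date record (working outwards from the hottest date in each direction), so the 365 or so record-days are shared equally, on average, between the years for which we have data. As a back-of-envelope sketch:

```python
# Back-of-envelope version of the expectation quoted above.
years_of_daily_records = 150   # "daily temperature records go back over 150 years"
expected_per_year = 365 / years_of_daily_records
print(f"Expected date-record days per year: {expected_per_year:.1f}")        # ~2.4
print(f"The 5-day record of 21st June 2017 is {5 / expected_per_year:.1f}x that")
```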

Last September, I provided a list of UK date records from the hottest day, 10th August, when 38.1C was recorded in Gravesend in 2003 through to October 18th, promising to do some more work next time there was a heatwave.  So, keeping my word, we have the following date records:

34.4C – 3rd June 1947 – 18(!) days

34.5C – 21st June 2017 – 5 days

35.4C – 26th June 1976 – 1 day

35.5C – 27th June 1976 – 1 day

35.6C – 28th June 1976 – 3 days

36.7C – 1st July 2015 – 33(!!) days

37.1C – 3rd August 1990 – 7 days (through 9th August)

Obviously, weighting for how exceptionally hot they were, the 2010s have had way, way over their share of exceptionally hot days for the time of year during the summer months.  I’m timed out for today, but I will definitely have to get round to an analysis of the whole year!  Watch this space.

 

September 20, 2016

How Not to Report a Weather Record: 13th September 2016

Filed under: Effects, Global warming, Science, UK climate trends — Tim Joslin @ 11:21 am

Last Sunday, the Guardian website suggested Tuesday 13th September would be jolly warm:

“If the mercury rises above 31.6C, the temperature was [sic] reached at Gatwick on 2 September 1961, it will be the hottest September day for 55 years.”

“No, no, no!!”, I was obliged to point out, adding, by way of explanation that:

“If the temperature rises above 31.6C it will be the hottest September day for more than 55 years, since 1961 was 55 years ago.

For it to be the hottest September day for 55 years it will only have to be hotter on Tuesday than the hottest September day since 1961.”

Good grief.

After that I was hardly surprised – since your average journo seems not even to be an average Joe, but, to be blunt, an innumerate plagiarist – to read in the Evening Standard on the 13th itself:

“If the heat rises above 31.6C, which was reached at Gatwick on September 2, 1961, then it will be the hottest [September] day for 55 years.”

See what they’ve done there?  With a bit of help from Mr Google, of course.

In the event, it reached 34.4C on 13th, making it the hottest September day for 105 years.

Much was also made of the fact that we had 3 days in a row last week when the temperature broke 30C for the first time in September in 87 years.

But the significance of the 34.4C last Tuesday was understated.

The important record was that the temperature last Tuesday was the highest ever recorded so late in the year, since the only higher temperatures – 34.6C on 8th September 1911 (the year of the “Perfect Summer”, with the word “Perfect” used as in “Perfect Storm”) and 35.0C on 1st rising to 35.6C on 2nd during the Great Heatwave of 1906 – all occurred earlier in the month.  By the way, in 1906 it also reached 34.2C on 3rd September.  That’s 3 days in a row over 34C.  Take that 2016.  They recorded 34.9C on 31st August 1906 to boot, as they might well have put it back then.

No, what’s really significant this year is that we now know it’s possible for the temperature to reach 34.4C as late as 13th September which we didn’t know before.

I’m going to call this a “date record”, for want of a better term.  Any date record suggests either a once in 140 years freak event (since daily temperature records go back that far, according to my trusty copy of The Wrong Kind of Snow) or that it’s getting warmer.

One way to demonstrate global warming statistically is to analyse the distribution of record daily temperatures, i.e. the hottest 1st Jan, 2nd Jan and so on.  Now, if the climate has remained stable, you’d expect these daily records to be evenly distributed over time, a similar number each decade, for example, since 1875 when the records were first properly kept.  But if the climate is warming you’d expect more such records in recent decades.  I haven’t carried out the exercise, but I’d be surprised if we haven’t had more daily records per decade since 1990, say, than in the previous 115 years.

It occurs to me that another, perhaps quicker, way to carry out a similar exercise would be to look at the date records.  You’d score these based on how many days they apply for.  For example, the 34.4C on 13th September 2016 is also higher than the record daily temperatures for 12th, 11th, 10th and 9th September, back to that 34.6C on 8th September 1911.  So 13th September 2016 “scores” 5 days.
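
For what it’s worth, the scoring is easy to automate. A sketch follows; only the 8th, 13th and 17th September records are taken from the figures quoted in these posts, and the days in between are placeholders chosen low enough not to affect the result:

```python
# A sketch of the "date record" scoring. daily_records maps a calendar date
# (month, day) to the highest temperature ever recorded on it. Only the 8th,
# 13th and 17th September values come from the posts; the rest are placeholders.
daily_records = {
    (9, 8):  34.6,   # 8 September 1911
    (9, 9):  30.0,   # placeholder
    (9, 10): 29.5,   # placeholder
    (9, 11): 29.0,   # placeholder
    (9, 12): 30.5,   # placeholder
    (9, 13): 34.4,   # 13 September 2016
    (9, 14): 28.0,   # placeholder
    (9, 15): 27.5,   # placeholder
    (9, 16): 28.5,   # placeholder
    (9, 17): 31.9,   # 17 September 1898
}

def date_record_scores(records):
    """Score each date record (a day whose record beats every later day in
    the sample) by the number of days back to the previous higher record.
    Assumes the records cover consecutive calendar days."""
    dates = sorted(records)                 # chronological order within the year
    scores = {}
    last_record = -1                        # index of the previous date record
    for i, d in enumerate(dates):
        later_max = max((records[x] for x in dates[i + 1:]), default=float("-inf"))
        if records[d] > later_max:          # hottest so late in the year
            scores[d] = i - last_record
            last_record = i
    return scores

print(date_record_scores(daily_records))
# {(9, 8): 1, (9, 13): 5, (9, 17): 4} - the 5 days for 13th September and the
# 4 days for 17th September match the list below; 8th September only scores 1
# here because the sample starts on that day.
```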

Here’s a list of date records starting with the highest temperature ever recorded in the UK:

38.1C – 10th August 2003 – counts for 1 day, since, in the absence of any evidence to the contrary, we have to assume 10th August is the day when it “should” be hottest

36.1C – 19th August 1932 – 9 days

35.6C – 2nd September 1906 – 14 days

34.6C – 8th September 1911 – 6 days

34.4C – 13th September 2016 – 5 days

31.9C – 17th September 1898 – 4 days

31.7C – 19th September 1926 – 2 days

30.6C – 25th September 1895 – 6 days

30.6C – 27th September 1895 – 2 days

29.9C – 1st October 2011 – 4 days

29.3C – 2nd October 2011 – 1 day

28.9C – 5th October 1921 – 3 days

28.9C – 6th October 1921 – 1 day

27.8C – 9th October 1921 – 3 days

25.9C – 18th October 1997 – 9 days

And you could also compile a list of date records going back from 10th August, i.e. the earliest in the year that given temperatures have been reached.

The list above covers a late summer/early autumn sample of just 70 days, but you can see already that the current decade accounts for 10 of those days, that is, around 14%, during 5% of the years.  The 2000s equal and the 1990s exceed expectations in this very unscientific exercise.

Obviously I need to analyse the whole year to draw firmer conclusions.  Maybe I’ll do that and report back, next time a heatwave grabs my attention.

It’s also interesting to note that the “freakiest” day in the series was 2nd September 1906, with a daily record temperature hotter than for any of the previous 13 days.  Second freakiest, at 9 days each, were 19th August 1932 – suggesting (together with 2nd September 1906) that perhaps the real story is an absence of late August heatwaves in the global warming era – and 18th October 1997, a hot day perhaps made more extreme by climate change.

Am I just playing with numbers?  Or is there a serious reason for this exercise?

You bet there is.

I strongly suspect that there’s now the potential for a sustained UK summer heatwave with many days in the high 30Cs.  A “Perfect Summer” turbocharged by global warming could be seriously problematic.  I breathe a sigh of relief every year we dodge the bullet.

 

 

 

January 19, 2016

Two More Extreme UK Months: March 2013 and April 2011

Filed under: Effects, Global warming, Science, Sea ice, Snow cover, UK climate trends — Tim Joslin @ 7:17 pm

My previous post showed how December 2015 was not only the mildest on record in the Central England Temperature (CET) record, but also the mildest compared to recent and succeeding years, that is, compared to the 21 year running mean December temperature (though I had to extrapolate the 21-year running mean forward).

December 2010, though not quite the coldest UK December in the CET data, was the coldest compared to the running 21 year mean.

I speculated that global warming might lead to a greater range of temperatures, at least until the planet reaches thermal equilibrium, which could be some time – thousands of years, maybe.  The atmosphere over land responds rapidly to greenhouse gases. But there is a lag before the oceans warm because of the thermal inertia of all that water. One might even speculate that the seas will never warm as much as the land, but we’ll discuss that another time. So in UK summers we might expect the hottest months – when a continental influence dominates – to be much hotter than before, whereas the more usual changeable months – when maritime influences come into play – to be not much hotter than before.

The story in winter is somewhat different.  Even in a warmer world, frozen water (and land) will radiate away heat in winter until it reaches nearly as cold a temperature as before, because what eventually stops it radiating heat away is the insulation provided by ice, not the atmosphere.  So the coldest winter months – when UK weather is influenced by the Arctic and the Continent – will be nearly as cold as before global warming.   This will also slow the increase in monthly mean temperatures.  Months dominated by tropical influences on the UK will therefore be warmer, compared to the mean, than before global warming.

If this hypothesis is correct, then it would obviously affect other months as well as December.  So I looked for other recent extreme months in the CET record.  It turns out that the other recent extreme months have been in late winter or early spring.

Regular readers will recall that I wrote about March 2013, the coldest in more than a century, at the time, and noted that the month was colder than any previous March compared to the running mean.  I don’t know why I didn’t produce a graph back then, but here it is:

160118 Extreme months in CET slide 1b

Just as December 2010 was not quite the coldest December on record, March 2013 was not the coldest March, just the coldest since 1892, as I reported at the time.  It was, though, the coldest in the CET record compared to the 21-year running mean, 3.89C below, compared to 3.85C in 1785.  And because I’ve had to extrapolate, the difference will increase if the average for Marches 2016-2023 (the ones I’ve had to assume) is greater than the current 21-year mean (for 1995-2015), which is more likely than not, since the planet is warming, on average.

We’re talking about freak years, so it’s surprising to find yet another one in the 2010s.  April 2011 was, by some margin, the warmest April on record, and the warmest compared to the 21-year running mean:

160119 Extreme months in CET slide 2

The mean temperature in April 2011 was 11.8C.  The next highest was only 4 years earlier, 11.2C in 2007.  The record for the previous 348 years of CET data was 142 years earlier, in 1865, at 10.6C.

On our measure of freakishness – deviation from the 21-year running mean – April 2011, at 2.82C, was comfortably more freakish than 1893 (2.58C), which was in a period of cooler Aprils than the warmest April before the global warming era, 1865.  The difference between 2.82C and 2.58C is unlikely to be eroded entirely when the data for 2016-2021 is included in place of my extrapolation.  It’s possible, but for that to happen April temperatures for the next 6 years would need to average around 10C to sufficiently affect the running mean – the warmth in the Aprils in the period including 2007 and 2011 would need to be repeated.
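
The “around 10C” follows from simple running-mean arithmetic; a sketch, assuming the held-constant extrapolated value is close to the running mean itself:

```python
# Six of the 21 Aprils in the running mean centred on 2011 are extrapolated
# (2016-2021), so the question is how much warmer than assumed they would need
# to be to close the 2.82C vs 2.58C gap.
april_2011 = 11.8
current_dev, target_dev = 2.82, 2.58

running_mean_2011 = april_2011 - current_dev           # ~9.0C, as extrapolated
rise_needed = current_dev - target_dev                 # the 21-year mean must rise 0.24C
extra_per_april = rise_needed * 21 / 6                 # spread over the 6 assumed years
print(f"Extrapolated 21-year mean: {running_mean_2011:.2f}C")
print(f"Each of the six Aprils would need to be {extra_per_april:.2f}C above "
      f"the value assumed for it, i.e. roughly {running_mean_2011 + extra_per_april:.1f}C")
# (assuming the held-constant value was close to the running mean)
```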

So, of the 12 months of the year, the most freakishly cold for two of them, December and March, have occurred in the last 6 years, and so have the most freakishly warm for two of them, December and April. The CET record is over 350 years long, so we’d expect a most freakishly warm or cold month to have occurred approximately once every 15 years (360 divided by 24 records).  In any given 6-year period we’d therefore have expected less than a 50% chance of even a single freakishly extreme monthly temperature.

According to the CET record, we’ve had more than 8 times as many freakishly extreme cold or warm months in the last 6 years as would have been expected had they occurred randomly since 1659.
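
Treating the 24 records as independent and equally likely to fall in any of the 360 years (a reasonable approximation for a back-of-envelope check), the arithmetic behind that claim is easy to verify:

```python
# Rough probabilities for the claim above: 24 most-freakish-month records
# (warmest and coldest versions of 12 months) spread over ~360 years.
from math import comb

n_records, n_years, window = 24, 360, 6
p = window / n_years                            # chance a given record lands in the window

print(f"Expected records in {window} years: {n_records * p:.2f}")          # ~0.4

p_at_least_1 = 1 - (1 - p) ** n_records
print(f"P(at least one): {p_at_least_1:.0%}")                              # ~33%, i.e. less than 50%

p_at_least_4 = sum(comb(n_records, k) * p**k * (1 - p)**(n_records - k)
                   for k in range(4, n_records + 1))
print(f"P(at least the four observed): {p_at_least_4:.4f}")                # well under 1%
```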

And I bet we get more freakishly extreme cold or warm months over the next 6 years, too.

 

May 1, 2012

The Wettest Drought in History

One of my responsibilities as a teenager was to keep the lawn under control. Flymos had presumably not yet been invented, and petrol-driven mowers were perhaps too much hassle, so ours was manual. If the grass got too long it was hard work and it could even become necessary to resort to shears, which was back-breaking work. But mowing was also difficult if the grass was damp. There was therefore a trade-off each spring. The first mow had to be done when it was mild enough for the grass to be reasonably dry, but couldn’t be put off until it was too long. And as the grass grew it dried out more slowly each day. So it was essential to make use of any opportunity to mow in case the weather turned wet again. It probably only happened once or twice, but it seems I was always caught out. I’d wait for one more dry day to make the job easier, but the skies would open and a week later the job would be twice as difficult.

Nowadays the internet and improved forecasting allows me to monitor the weather far more effectively. Thus it was I’d already been out with the mower in March, and, seeing the long-range forecast, made sure I got a mow in just before it started raining early in April.

The point is that the 5-10 day forecast is now fairly reliable.

Why, then, was the UK drought – declared in a few regions in March, with hosepipe bans from 5th April – officially extended in mid April?

Yes, that’d be in the middle of the wettest April on record!

We’re now in the farcical situation of the “wettest drought in history”, with a succession of “experts” (and junior ministers) popping up on TV claiming the rain in April somehow doesn’t count. Apparently it’ll run off compacted ground. Yes, maybe for the first day or two, but not after a month. With the wettest April on record followed by significant rain already in May, and more forecast in a day or two, the drought risk is simply receding. We’re in one of those surreal situations where reasons are being invented not to contradict previous claims, in this case that the drought would last into next year.

What baffles me is why the drought was extended when wet weather was forecast. Surely – since most of the time it’s dry – the drought risk is receding as long as there’s significant rain in the forecast. And, as the 5-10 day forecast is fairly reliable and everything after that isn’t, you simply run the risk of looking stupid if you don’t wait until the forecast is for dry weather.

I wonder whether there’s a tendency to believe long-term forecasts more than short-term ones. But long-term forecasts only indicate a small bias one way or another, as Met Office modelling indicates:

“New three-month forecasts by the Met office suggest little respite with April, May and June expected to be drier than average. ‘With this forecast, the water resources situation in southern, eastern and central England is likely to deteriorate further during the period. The probability that UK precipitation for April-May-June will fall into the driest of our five categories is 20-25% while the probability that it will fall into the wettest of our five categories is 10-15%, it says.’ ” [my emphasis]

So 20-25% dry plays 10-15% wet plays (presumably) 60-70% around average. Not sure I’d have put a lot of money on the “expectation” of a dry spring this year (certainly wouldn’t now!). Even less after I’d looked at the Met Office report (scroll down to find PDFs) because the model runs are all over the place.

And are these “probabilities”, anyway? Isn’t the modelling signal swamped by the noise of uncertainty? It seems to me likelihoods based on model-runs are not the same as probabilities in the real world.

I’d say the Met Office and the media (the quote marks indicate the introductory sentence was written by the Guardian’s John Vidal) need to mind their language. How about “slightly more likely than not to be” rather than “expected to be”? And perhaps “indication” rather than “forecast”? And “x% of model runs gave…” rather than “the probability that…”? And definitely “might” rather than “is likely to”!

February 24, 2011

Extreme Madness: A Critique of Pall et al (Part 3: Juicy Bits and Summary)

Filed under: Effects, Global warming, Science, UK climate trends — Tim Joslin @ 6:22 pm

I continue to be bothered by Pall et al, the paper which attempts to determine how much more likely the autumn 2000 floods in England and Wales were because of the anthropogenic global warming (AGW) since 1900.

To recap, Part 1 of this extended critique described the method adopted by Pall et al and made a few criticisms, one of which I’ll elaborate on in the first part of this post. Part 1 ended by asking why Pall et al didn’t eliminate more statistical uncertainty, given the large number of data points they produced (they ran over 10,000 simulations of the climate in 2000 when floods occurred).

Part 2 looked more closely at how Pall et al had defined risk and uncertainty and handled it statistically. Part 3 will further question the approach adopted, in particular by considering the uncertainty introduced by the process of modelling the climate itself.

Oops, it’s a log scale, or “about this 0.41mm threshold” revisited

In Part 1, I noted the arbitrariness of the threshold for severe flooding adopted by Pall et al. They considered their model had predicted flooding when it estimated 0.41mm/day or more of runoff, but their Fig 3 clearly shows that this level actually gives rather more than the 5-7 floods in the ~2000 model runs of each of the 4 A2000N scenarios (those without AGW, the AGW runs being referred to as the A2000 series, of which around 2000 were also run) that would be expected for the once in 3-400 year event the 2000 floods are said to be.

Pall et al includes no evidence as to the skill of their model in predicting flooding or calibration between the models’ estimation of runoff in the 2000 floods and what actually happened in the real world. As I noted in Part 1, they could have run the model for years other than 2000 in order to show what is termed its “skill”, in this case in predicting flooding.

Why, then, did Pall et al not calibrate their model? Because they didn’t think it mattered, that’s why. They write:

“Crucially, however, most runoff occurrence frequency curves in Fig 3 remain approximately linear over a range of extreme values, so our FAR estimate would be consistent over a range of bias-corrected flood values.”

It’s about time we had a picture, and I can now include Pall et al’s Fig 3 itself. Ignore the sub-divisions on the bottom of the 2 scales in each diagram – these are in error as pointed out in Part 1. The question for any youngsters reading is: are the scales on these diagrams linear or logarithmic?:

Answer: logarithmic, of course.

So is it the case that the “FAR estimate would be consistent over a range of bias-corrected flood thresholds”? The FAR, remember, is the ratio of the AGW risk of flooding to the non-AGW risk of flooding. This ratio would indeed not depend on the level chosen in the model set-up to indicate flooding of the extent seen in the real world in 2000 were the runoff occurrence frequency curves linear. But they’re not. They’re logarithmic. The increased risk therefore does depend on the flood level, as was seen simply from reading figures off the diagrams in Part 1. One wonders if we’re all clear exactly what the graphs in Pall et al’s Fig 3 actually represent.

Does Pall et al actually tell us anything useful at all?

The Pall et al study assumes it has some skill in forecasting flooding in England in autumn from the state of the climate system in April. Unfortunately we have no idea what this level of skill actually is. The model has not been calibrated against the real world by running it for years other than 2000 (or if it has, this information is not included in Pall et al). Note that analysing the results of such an exercise would not be trivial, since there are two unknowns: the skill of the model and its bias. As far as we know, 0.41mm runoff in the model could be anything in the real world – 0.35mm or 0.5mm, we have no idea. Similarly we don’t know if the model would forecast floods such as those in 2000 with a probability of 1 in 10, 1 in 100 or whatever.

To be fair, Pall et al do devote one of their 4 pages in Nature to showing their modelling does bear some relation to reality. Their Fig 1 shows similar correlation between Northern Hemisphere (NH) air pressure patterns in the model and rainfall in England and Wales as exists in the real world. And their Fig 2 shows that the rainfall patterns in the model bear some resemblance to those in the real world.

But one (more) big problem nags away at me. The basic premise is that a particular pattern of SSTs and sea ice causes the pressure system patterns that lead to rainfall in the UK. Pall et al therefore used the observed April 2000 pattern as input to the A2000 (AGW) series of model runs. But the patterns used for the non-AGW (A2000N) runs were different. Here’s what they say:

“…four spatial patterns of attributable [i.e. to AGW] warming were obtained from simulations with four coupled atmosphere-ocean climate models (HadCM3, GFDLR30, NCARPCM1 and MIROC3.2)… Hence the full A2000N scenario actually comprises four scenarios with a range of SST patterns and sea ice…” [my stress]

So if the A2000 model runs can predict flooding in a particular year from the SST and sea ice pattern in April, we wouldn’t expect the A2000N runs to do so, not just because everything is warmer, but also because the SST and sea ice patterns are different! So we don’t know whether the increased flood risk in the A2000 series is because of global warming or because the SST patterns are different.

It also seems to me that were it the case that Pall et al’s model could predict autumn flooding in April around 15-20x as often as it actually occurs (around 1 in 20 times for 2000 compared to the actual risk of 1 in 3-400) as is implied by their Fig 3, then we’d be reading about a breakthrough in seasonal forecasting and more money would be being invested to improve the modelling further (and increase the speed of forecasting of course, so that it’s not autumn already by the time we know it’s going to be wet!). This isn’t just the forecast for the next season we’re talking about, which the Met Office has given up on, but the forecast for the season after that.

So I’m not convinced. I’m going to assume that Pall et al’s modelling can’t tell one year from another, and that all they’ve done is model the increased risk of flooding in a warmer world in general. (One way to test this would be to compare the flood risks of the 4 A2000N models against each other for the same extent of AGW – it could be that the models give different results simply because they suggest different amounts of warming, not different patterns).

Under this not very radical assumption, we can actually calibrate Pall et al’s modelling. We know that the floods in 2000 were a once in 3-400 year event. That implies that in each of the diagrams in Fig 3 there should be around 5-7 floods (there are – or should be – approx. 2000 dots representing non AGW model runs on each diagram). We can therefore estimate by inspecting the figures how much flooding in the model represents a 3-400 year flood – it’s the level with only 5-7 dots above. We can then read across to the line of blue dots (the AGW case) and then, by reading up to the return time scale (the one with correct subdivisions), work out how often the modelling suggests the flooding should then occur. Here’s what I get:
– Fig 3a: 3-400 year flood threshold ~0.49mm; risk after AGW once every 40 years.
– Fig 3b: ~0.47mm, and risk now once every 30 years.
– Fig 3c and d: ~0.5mm, and risk now once every 50 years.

So the Pall et al study implies, assuming it’s no better at forecasting flooding when it knows the SST and sea ice conditions in April than it is if it doesn’t, that the risk of a 3-400 year flood in England and Wales, similar to or more severe than that which occurred in 2000, is now, as a result of AGW up to 2000 only, between once in 30 and once in 50 years. That is, under this assumption, the risk of flooding in England and Wales of what was previously once in 3-400 year severity has increased by a factor of between 6 and 13, according to Pall et al’s modelling.
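
For the record, the factor of 6 to 13 is just the ratio of the return periods:

```python
# Ratio of return periods, using the figures read off Fig 3 above.
baseline = (300, 400)        # years: the 2000 floods as a once in 3-400 year event
with_agw = (30, 50)          # years: return periods read from the AGW (A2000) curves

low = baseline[0] / with_agw[1]      # 300 / 50
high = baseline[1] / with_agw[0]     # 400 / 30
print(f"Increase in risk: roughly {low:.0f}x to {high:.0f}x")    # ~6x to ~13x
```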

Trouble is, the Pall et al model may have a bit of skill in forecasting flooding from April SST and sea ice conditions (the A2000 case) and this skill may have been reduced by an unknown factor when processing the data to remove the effects of 20th century warming. If Pall et al’s results are to have any meaning whatsoever they need to do further work to establish the skill of the model and calibrate it to measures of flooding in the real world.

More uncertainty about uncertainty

In Part 2 I discussed how Pall et al’s treatment of uncertainty has resulted in them actually saying very little. Essentially, they’ve estimated that the risk of autumn flooding as great as or exceeding that in 2000 has increased as a result of AGW by between 20% and around 700% – and there’s 20% probability it could be outside that range! I argued that the sources of this uncertainty are:
(i) the 4 different models used to derive conditions as if AGW hadn’t happened – fair enough, we can’t distinguish between these, (though in Part 1 I estimated how certain we’d be of the increased risk of flooding if we did assume they were all equally probable), and
(ii) statistical uncertainty which could have been eliminated.

But these are not the only sources of uncertainty. We are also uncertain of all the parameters used to drive the HadAM3-N144 model which attempts to reproduce the development of the autumn weather from the April conditions that were fed into it; we’re uncertain of the accuracy of the April SST and sea-ice conditions input into the model; we’re uncertain as to whether atmosphere-ocean feedbacks may have affected the autumn 2000 weather (Pall et al are explicit that such feedbacks were insignificant, so used “an atmosphere-only model, with SSTs and sea ice as bottom boundary conditions”); we’re uncertain of the precise magnitude of the forcings in 2000 which affected the development of the autumn weather; we’re uncertain as to whether there are errors in the implementation of the models; and we’re uncertain as to whether there are processes below the resolution of the model which are important in the development of weather patterns. There are probably more.

Consider that the reason we are uncertain as to which of the 4 models used to derive the A2000N initial conditions is most correct (or how correct any of them are) is because we don’t know how well each of them performs on more or less the same criteria as the higher resolution model used to simulate the 2000 weather. If they didn’t have different parameters, all had the same resolution and so on, then – tautologically – they’d all be the same! If we’re uncertain which of those is most accurate then we must also be uncertain about the HadAM3-N144 model. Just because only one model was used for that stage of the exercise doesn’t mean we’re not still uncertain (and for that matter the fact that we’ve used 4 in the first stage doesn’t mean we’re certain of any of them; they could all be wildly wrong, a possibility not apparently taken account of in Pall et al).

It seems to me the real causes of uncertainty in the findings of Pall et al derive from the general characteristics of the models, not (as discussed in Part 2) the statistical uncertainty as to the amplitude of 20th century warming (the 10 sub-scenarios for each of the 4 cases) which has been used.

Judith Curry has recently written at length about uncertainty and her piece is well worth a look (though I disagree about where statistical uncertainty belongs in Rumsfeld’s classification – I think it’s a known unknown, maybe in a “knowable” category, since it can be reduced simply by collecting more of the same type of data as one already has). In particular, though, she provides a link to IPCC guidelines on dealing with uncertainty (pdf). A quick skim of this document suggests to me that Probability Distribution Functions (PDFs) such as Pall et al’s Fig 4 should be accompanied by a discussion of the factors creating uncertainty in the estimate, including some consideration as to how complete the analysis is deemed to be. I say deemed to be, since by its very nature uncertainty is uncertain!

That seems a good note to end the discussion on.

Here’s Pall et al’s Fig 4 (apologies if it looks a bit smudged):

Summary

In Part 1 of this critique I identified the two main problems with Pall et al:
– the model results are not calibrated with real world data. The paper therefore chooses an arbitrary threshold of flooding.
– statistical uncertainty has not been eliminated, rather it seems to have been introduced unnecessarily.

Part 2 drilled down into the issue of statistical uncertainty and suggested how Pall et al could have used the vast computing resources at their disposal to eliminate much of the uncertainty of their headline findings.

Part 3 picks up on some of the issues raised in Parts 1 and 2, in particular noting that the paper seems to include an erroneous assumption which led them to conclude that calibration of their model for skill and bias was not important. If my reasoning is correct, this was a mistake. Part 3 also continues the discussion about uncertainty, suggesting that the real reasons for uncertainty as to the increased risk of flooding have not been included in the analysis (whereas statistical uncertainty should have been eliminated).

There are so many open questions that it is not clear what Pall et al does tell us, if anything. I suspect, though, that the models used have little skill in modelling autumn floods on the basis of April SST and sea ice conditions. If this is correct then the study confirms that extreme flooding in general is likely to become more frequent in a warmer world, with events that have historically been experienced only every few centuries occurring every few decades in the future.

Note: Towards the end of writing Part 3 I came across another critique by Willis Eschenbach.  So there may well be a Part 4 when I’ve digested what Willis has to say!

February 22, 2011

Extreme Madness: A Critique of Pall et al (Part 2: On Risk and Uncertainty)

Filed under: Effects, Global warming, Science, UK climate trends — Tim Joslin @ 2:42 pm

Keeping my promises? Whatever next! I said on Sunday that I had more to say on Pall et al, and, for once, I haven’t lost interest. Good job, really – after all, Pall et al does relate directly to the E3 project on Rapid Decarbonisation.

My difficulties centre around the way Pall et al handle the concepts of risk and uncertainty. I’m going to have to start at the beginning, since I doubt Pall et al is fundamentally different in many respects from other pieces of research. They’re no doubt at least trying to follow standard practice, so I need to start by considering the thinking underlying that. I feel like the Prime alien in Peter Hamilton’s Commonwealth Saga (highly recommended) trying to work out how humans think from snippets of information!

Though I should add that Pall et al does have the added spice of trying to determine the risk of an event that has already occurred. That’s one aspect that really does my head in.

Let’s first recap the purpose of the exercise. The idea is to try to determine the fraction of the risk of the 2000 floods in the UK attributable (the FAR) to anthropogenic global warming (AGW). This is principally of use in court cases and for propaganda purposes, though it may also be useful to policy-makers as it implies the risk of flooding going forward, relative to past experience.

Now, call me naive, but it seems to me that, in order to determine the damages to award against Exxon or the UK, those crazy, hippy judges are going to want a single number:
– What, Mr Pall et al, is your best estimate of the increased risk of the 2000 autumn floods due to this AGW business?
– Um, we’re 90% certain that the risk was at least 20% greater and 66% certain that the risk was 90% greater…
– I’m sorry, Mr Pall et al, may we have a yes or no answer please.
– Um…
– I mean a single number.
– Sorry, your honour, um… {shuffles papers} here it is! Our best estimate is that the 2000 floods were 150% more likely because of global warming, that is, 2 and a half times as likely, that is, the AGW FAR was 60%.
– Thank you.
– OBJECTION!
– Yes?
– How certain is Mr um {consults notes} Pall et al of that estimate.
– Mr Pall et al?
– Let’s see… here it is… yes, we spent £120 million running our climate model more than 10,000 times, so our best estimate is tightly constrained. We have calculated that 95% of such suites of simulations would give the result that the floods were between 2.2 and 2.8 times more likely because of global warming [see previous post for this calculation].

But Pall et al don’t provide this number at all! This is what Nature’s own news report says:

“The [Pall et al] study links climate change to a specific event: damaging floods in 2000 in England and Wales. By running thousands of high-resolution seasonal forecast simulations with or without the effect of greenhouse gases, Myles Allen of the University of Oxford, UK, and his colleagues found that anthropogenic climate change may have almost doubled the risk of the extremely wet weather that caused the floods… The rise in extreme precipitation in some Northern Hemisphere areas has been recognized for more than a decade, but this is the first time that the anthropogenic contribution has been nailed down… The findings mean that Northern Hemisphere countries need to prepare for more of these events in the future. ‘What has been considered a 1-in-100-years event in a stationary climate may actually occur twice as often in the future,’ says Allen.” [my stress]

When Nature writes that “anthropogenic climate change may have almost doubled the risk of the extremely wet weather that caused the floods” [my stress], what they are actually referring to is the “66% certain that the risk was 90% greater” mentioned by Pall et al in court – and expressed as “two out of three cases” in the Abstract of Pall et al, even though the legend of their Fig 4 clearly states that we’re talking about the 66th percentile, i.e. 66, not 66.66666… But I’m beginning to think we’ll be here all day if we play spot the inaccuracy: the legend of their Fig 2 should read mm per day, not mm^2 – that would get you docked a mark in your GCSE exam.

We could have a long discussion now about the semantics and usage in science of the words “may” and “almost” as in the translation of “66% certain that the risk was 90% greater” into “may have almost doubled”, but let’s move on. The point is that, in the best scientific traditions, a monster has been created – in this case a chimera of risk and uncertainty that the rest of the human race is bound to attack impulsively with pitchforks.

So how did we get to this point?

Risk vs uncertainty

It’s critical to understand what is meant by this these two terms in early 21st century scientific literature.

Risk is something quantifiable. For example, the risk that an opponent may have been dealt a pair of aces in a game of poker is perfectly quantifiable.

First, then: why do poker players of equal competence sometimes win and sometimes not? Surely the best players should win all the time, because, after all, all they’re doing is placing bets on the probability of their opponent holding certain cards. One reason is statistical uncertainty. There’s always a chance in a poker session that one player will be dealt better cards than another. Such uncertainty can be quantified statistically.

But there’s more to poker than this. Calculating probabilities is the easy part. The best poker players can all do this. So the second question is why, then, are some strong poker players better than others? And why do the strongest human players still beat the best computer programs – which can calculate the odds perfectly – in multi-player games? The answer is that there’s even more uncertainty, because you don’t know what the opponent is going to do when he has or does not have two aces. Some deduction of the opponent’s actions is possible, but these require understanding the opponent’s reasoning. Sometimes he may simply be bluffing. Either way, to be a really good poker player you have to get inside your opponent’s head. The best poker players are able to assess this kind of uncertainty, the uncertainty as to how much the statistical rules to apply in any particular case, uncertainties as to basic assumptions.

Expressing risk and uncertainty as PDFs

PDFs in this case doesn’t stand for Portable Document Format, but Probability Density (or Distribution) Function.

The PDF represents the probability (y-axis) of the risk (x-axis) of an event; that is, the y-axis is a measure of uncertainty. Pall et al’s Fig 4 is an example of a PDF. It’s where their statement in court – that they were 90% sure the risk of flooding was more than 20% higher because of AGW, and so on – came from.

The immediate issue is that risk is a probability function. Our best estimate of the increase in risk due to AGW is 150% (a FAR of 60%), so we’re already uncertain whether the 2000 floods were caused by global warming (the probability is 60%, or 3/5). So we have a probability function of a probability function. The only difference between these probability functions is that one is deemed to be calculable and the other not – though it has in fact been calculated! Furthermore, as we’ll see, some aspects of the uncertainty in the risk can be reduced and other aspects cannot: the PDF includes both statistical uncertainty and genuine “we just don’t know” uncertainty (and I’m not even discussing “unknown unknowns” here – both types of uncertainty are known unknowns).
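To make the percentile mechanics concrete, here’s a toy sketch of how “we’re N% certain the FAR is at least X” statements are read off such a PDF. The FAR samples below are synthetic stand-ins, not Pall et al’s data:

```python
import numpy as np

# Sketch of reading "we're N% certain the FAR is at least X" off a distribution
# of FAR estimates. The samples are synthetic stand-ins for the aggregate
# histogram of Pall et al's Fig 4, not their actual data.
rng = np.random.default_rng(0)
far_samples = rng.normal(loc=0.6, scale=0.25, size=10_000)  # hypothetical

p10, p34 = np.percentile(far_samples, [10, 34])
print(f"90% of the estimates exceed a FAR of {p10:.2f}")  # "90% certain it's at least ..."
print(f"66% of the estimates exceed a FAR of {p34:.2f}")  # "66% certain it's at least ..."
```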

Risk and uncertainty in Pall et al

What Pall et al have done is assume their model is able to assess risks correctly. Everything else, it seems, is treated as uncertainty.

Their A2000 series is straightforward enough. They set sea surface temperatures (SSTs) and the sea-ice state to those observed in April 2000 and roll the model (with minor perturbations to ensure the runs aren’t all identical).

For the A2000N series they use the same conditions but set GHG concentrations to 1900 levels, subtract observed 20th century warming from SSTs and project sea-ice conditions accordingly. There’s one hint of trouble, though: they note that the SSTs are set “accounting for uncertainty”. I’m not clear what this means, but it doesn’t seem to be separated out in the results in the way that, as we’ll see, is done for other sources of uncertainty.

They then add on the warming over the 20th century that would have occurred without AGW, i.e. with natural forcings only, according to 4 different models, giving 4 different patterns of warming in terms of SSTs etc. As will be seen, for each of these 4 different patterns they used 10 different “equiprobable” temperature increase amplitudes.

First cause of uncertainty: 4 different models of natural 20th century warming

As Pall et al derive the possible 20th century natural warming using 4 different models, giving 4 different patterns of natural warming, there are 4 different sets of results and hence 4 separate PDFs of the AGW FAR of flooding in 2000. Now, listen carefully. They don’t know which of these models gives the correct result, so – quite reasonably – they are uncertain. Their professional judgement is to weight them all equally, which means that, so far, the best they’d be able to say is something like: we’re 25% certain the FAR is only x; 25% certain it’s y; 25% certain it’s z; and, crikey, there’s a 25% possibility it could be as much as w!

Trouble is, they can only run 2,000 or so simulations for each of the 4 non-AGW scenarios. So for each of the 4 there’ll be a sampling error. They treat this statistical uncertainty in exactly the same way as what we might call their professional-judgement uncertainty, which certainly gives me pause for thought. So what happens is that they smear out the 4 estimates x, y, z and w and combine them into one “aggregate histogram” (see their Fig 4). That’s how they’re able to say we’re 90% certain the FAR is >20%, and so on.

Nevertheless, their Fig 4 also includes the 4 separate histograms for our estimates x, y, z and w. It’s therefore possible for another expert to come along and say, “well, x has been discredited so I’m just going to ignore the pink histogram and look at the risk of y, z and w” or “z is far and away the most thorough piece of work, I’ll take my risk assessment from that”, or even to weight them other than evenly.
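That sort of re-weighting is easy to picture. Here’s a toy sketch, with invented sample sets standing in for the x, y, z and w histograms:

```python
import numpy as np

# Toy sketch: pooling the 4 per-model FAR histograms into one "aggregate"
# distribution, then re-reading the 10th percentile after discarding one model.
# The four sample sets are invented stand-ins for x, y, z and w.
rng = np.random.default_rng(1)
x = rng.normal(0.70, 0.10, 2000)   # hypothetical per-model FAR estimates
y = rng.normal(0.60, 0.10, 2000)
z = rng.normal(0.65, 0.10, 2000)
w = rng.normal(0.30, 0.10, 2000)   # playing the part of the outlier model

aggregate = np.concatenate([x, y, z, w])   # equal weights, equal sample sizes
without_w = np.concatenate([x, y, z])      # an expert discards one model

print(np.percentile(aggregate, 10))   # "90% certain the FAR exceeds ..." from all 4
print(np.percentile(without_w, 10))   # the same statement with w ignored
```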

One of the 4 models may be considered an outlier, as in fact the pink (NCARPCM1) one is in this case: it’s the only one whose most likely (and median) FAR falls below the overall median value (and below the overall most likely value, which happens to be higher than the overall median). Further investigation might suggest it should be discarded.

Another critical point: x, y, z and w can be determined as accurately as we want by running more simulations, because the statistical uncertainty falls in proportion to one over the square root of the number of runs (see Part 1).

I’m not going to argue any more as to whether the 4 models introduce uncertainty. Clearly they do. I have no way of determining which of the 4 models most correctly estimate natural warming between 1900 and 2000. It’s a question of professional judgement.

However, I will point out that if uncertainty between the models is not going to be combined statistically (as in the previous post) I am uneasy about combining them at all:

Criticism 6: The headline findings against each of the 4 models of natural warming over the 20th century should have been presented separately in a similar way to the IPCC scenarios (for example as in the figure in my recent post, On Misplaced Certainty and Misunderstood Uncertainty).

Second cause of uncertainty: 10 different amounts of warming from each of the 4 models of natural 20th century warming

But Pall et al didn’t stop at 4 models of natural 20th century warming. They realised that each of the 4 models has statistical uncertainty in its modelling of the amount of natural warming to 2000. The models in particular each noted a risk of greater than the mean warming. This has to be accounted for in the initial data to our flood modelling. Never mind, you’d have thought, let’s see how often floods occur overall, because what we’re interested in is the overall risk of flooding.

But Pall et al didn’t simply initialise their model with a range of initial values for the amplitude of warming for each of their 4 scenarios. They appear to have created 10 different warming amplitudes for each of the 4 scenarios and treated each of these as different cases. This leaves me bemused, as the 4 scenarios must also have had different patterns of warming, so why not create different cases from these? Similarly, they seem to have varied initial SST conditions in their AGW model since they “accounted for uncertainty” in that data. Why, then, were these not different cases?

I must admit that even after spending last Sunday morning slobbing about pondering Pall et al, rather than just slobbing about as usual, I am still uncertain(!) whether Pall et al did treat each of the 10 sub-scenarios as separate cases. If not, they did something else to reduce the effective sample size and therefore increase the statistical uncertainty surrounding their FAR estimates. Their Methods Summary section talks about “Monte Carlo” sampling, which makes no sense to me in this case as we can simply use Statistics 101 methods (as shown in Part 1).

The creation of 10 sub-scenarios of each scenario (or the Monte Carlo sampling) effectively means that, instead of 4 tightly constrained estimates of the risk, we have 4 wide distributions. Remember (see the previous post) that the formula for the statistical uncertainty (standard deviation, SD) in taking a sample proportion as an estimate of the proportion in the overall population is:

SQRT((sample %)*(100-sample%)/sample size) %

so varies inversely with the square root of the sample size. In this case the sample size for each of the 4 scenarios was 2000+, so that of each of the 10 subsets was only around 200. The square root of 10 is 3 and a bit, so a sample of 200 gives an error about 3 times as large as a sample of 2000 would.

For example, one of the yellow runs is an outlier: it predicts floods about 15% of the time. How confident can we be in this figure?

SQRT((15*85)/200) = ~2.5

So it’s likely (within 1 SD either way) that the true risk is between 12.5 and 17.5% and very likely (2 SD either way) only that it is between 10 and 20%.

So if we ran enough simulations we might find that that particular yellow sub-scenario only implied a flood risk of somewhere around 10%. Or maybe it was even more. The trouble is that, in salami-slicing our data into small chunks and saying we’re uncertain which chunk represents the true state of affairs, we’ve introduced statistical uncertainty. That is bound to increase the number of extreme results in our suite of 40 cases, and it disproportionately affects our ability to make statements about what we are certain or very certain of.
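The effect is easy to demonstrate with a toy simulation – the 10% underlying flood risk and the run counts below are invented for the purpose:

```python
import numpy as np

# Toy demonstration of the "salami-slicing" effect: splitting 2000 runs into
# 10 cases of 200 produces more extreme per-case flood frequencies than the
# pooled estimate, purely through sampling noise. The 10% risk is invented.
rng = np.random.default_rng(3)
true_risk, n_total, n_cases = 0.10, 2000, 10

runs = rng.random(n_total) < true_risk              # True where a run floods
pooled = runs.mean()                                # one estimate from 2000 runs
per_case = runs.reshape(n_cases, -1).mean(axis=1)   # 10 estimates from 200 runs each

print(f"pooled estimate: {pooled:.3f}")
print(f"per-case estimates range from {per_case.min():.3f} to {per_case.max():.3f}")
```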

Criticism 7: The design of the Pall et al modelling experiment ensures poor determination of the extremes of likely true values of the FAR – yet it is the extreme value that was presumably required, since that was presented to the world in the form of the statement in the Abstract that AGW has increased the risk of floods “in 9 out of 10 cases” by “more than 20%“. The confidence in the 20% figure is in fact very low!

Note that if the April 2000 temperature change amplitude variability had been treated as a risk, instead of as uncertainty, the risks in each case would have been tightly constrained and the team would have been able to say it was very likely (>90%) that the increased flood risk due to AGW exceeds 60% (since all the 4 scenarios would yield an increased risk of more than that) and likely it is greater than 150% (since 3 of the 4 scenarios suggest more than that).

The problem of risks within risks

Consider how the modelling could have been done differently, at least in principle. Instead of constructing April 2000 temperatures based on previous modelling exercises and running the model from there, they could have modelled the whole thing (or at least the natural forcing representations) from 1900 to autumn 2000 and output rainfall data for England. Without the intermediate step of exporting April 2000 temperatures from one model to another there’d be no need to treat the variable as “uncertainty” rather than “risk”.

Similarly, say we were interested in flooding in one particular location. Say it’s April 2011 and we’re concerned about this autumn since the SSTs look rather like those in 2000. Maybe we’re concerned about waterlogging of Reading FC’s pitch on the day of the unmissable local derby with Southampton in early November. Should we take advantage of a £10 advance offer for train tickets for a weekend away in case the match is postponed or wait until the day and pay £150 then if the match is off?

In this case we’d want to feed the aggregate rainfall data from Pall et al’s model into a local rainfall model. By Pall et al’s logic everything prior to our model would count as “uncertainty”. We’d input a number of rainfall scenarios into our local rainfall model and come up with a wide range of risks of postponement of the match, none of which we had a great deal of confidence in. I might want to be 90% certain there was a 20% chance of the match being postponed before I spent my tenner. I’d have to do a lot more modelling to eliminate statistical uncertainty if I use 10 separate cases than if I treat them all the same.

How Pall et al could focus on improving what we know

If we inspect Pall et al’s Fig 3, it looks, first of all, as though very few – perhaps just 1 yellow and 1 pink – of the 40 non-AGW cases result in floods 10% of the time (this includes the yellow run that predicts 15%). About 12% of the AGW runs result in floods. Yet we’re only able to say we are 90% certain that the flood risk is 20% greater because of AGW. This would imply at most 4 non-AGW cases within 20% of the AGW flood risk (i.e. predicting a greater than 10% flood risk).

If we look at Pall et al’s Fig 4, we see two things:
– the “long tail”, where the risk of floods is supposedly somewhat greater “without AGW” (a FAR below -0.25!), is almost entirely due to the yellow outlier case. If just 10 runs in that case had not predicted flooding instead of predicting it, the long tail of the entire suite of 10,000 runs would have practically vanished;
– the majority of the risk of the FAR being below its 10th percentile (giving rise to the statement of 90% probability of a FAR of greater than (only) 20%) is attributable to pink cases.

It would have been possible to investigate these cases further simply by running more simulations of the critical cases to eliminate the statistical uncertainty. I can hear people screaming “cheat!” – but it simply isn’t cheating. Obviously, if 10x as many runs of the critical cases as of the non-critical ones are done, they have to be scaled down accordingly when the statistical data is combined (but this must have been done anyway, as the sample sizes for the different scenarios were not the same). Far from being cheating, it’s good scientific investigation of the critical cases. If we want to be able to quote the increased risk of flooding because of AGW at the 10th percentile level (i.e. the level we’re 90% sure of) with more certainty, then that’s what our research should be aimed at.
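To spell out the re-scaling point, here’s a minimal sketch (underlying risks and run counts made up): each case’s flood frequency is estimated from its own runs, and the cases are then combined with whatever weights their scenarios merit, regardless of how many runs each received.

```python
import numpy as np

# Minimal sketch of the re-scaling point: "critical" cases can be given 10x as
# many runs without biasing the combined result, because each case's flood
# frequency is estimated from its own runs and the cases are then weighted
# equally. The underlying risks and run counts are made up.
rng = np.random.default_rng(2)
true_risk = {"ordinary case": 0.05, "critical case": 0.10}
n_runs    = {"ordinary case": 2000, "critical case": 20_000}  # 10x extra runs

estimates = {}
for case, p in true_risk.items():
    floods = rng.binomial(n_runs[case], p)   # number of runs producing a flood
    estimates[case] = floods / n_runs[case]  # per-case flood frequency

combined = np.mean(list(estimates.values()))  # equal case weights, not per-run
print(estimates, combined)
```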

Of course, if we find that the yellow sub-scenario really does suggest a risk of flooding of 15% – somewhat more than the roughly 12% in the runs with AGW on top – and we don’t see regression to the mean, that might also tell us something interesting. Maybe the natural variability is greater than we thought, and April 2000 meteorological conditions (principally SSTs) were possible that would have left the UK prone to even more flooding than actually occurred with more warming.

Criticism 8: Having introduced unnecessary uncertainty in the design of their modelling experiment, Pall et al did not take advantage of the opportunities available to eliminate such uncertainty by running a final targeted batch of simulations.

Preliminary conclusion

It looks like there’s going to have to be a Part 3 as I have a couple more points to make about Pall et al and will need a proper summary.

Nevertheless, I understand a lot better than I did at the outset why they are only able to say we’re 90% certain the FAR is at least 20% etc.

But I still don’t agree that’s what they should be doing.

We want to use the outputs of expensive studies like this to make decisions. Part of Pall et al’s job should be to eliminate statistical uncertainty, not introduce it.

They should have provided one headline figure for the increased risk due to global warming – about 2.5 times as much – taking into account all their uncertainties.

And the only real uncertainties in the study should have been between the 4 different patterns of natural warming. These are the only qualitative differences between their modelling runs. Everything else was statistical and should have been minimised by virtue of the large sample sizes.

If we just label everything as uncertainty and not as risk, we’re not really saying anything.

After all, it might be quite useful for policy-makers to know that flood risks are already 2.5 times what they were in 1900. This might allow the derivation of some kind of metric as to how much should be spent on flood defences in the future, or even on relocation of population and/or infrastructure away from vulnerable areas. Knowing that the scientists are 90% certain the increased risk is greater than 20% really isn’t quite as useful.

The aim of much research in many domains, including the study of climate, and in particular that of Pall et al, should be to quantify risks and eliminate uncertainties. It rather seems they’ve done neither satisfactorily.

(to be continued)

———-
23/2/11, 16:22: Fixed typo, clarified remarks about the value of Pall et al’s findings to policy-makers.
