Yesterday’s post suggested, first, that we’re not yet very good, at least in the UK, at forecasting rainfall 1 to 3 months ahead and, second, that reliance on the forecasts we do have when declaring droughts is, to say the least, a tad foolish.
Today, I’d like to make some constructive suggestions. I’m trying to differentiate myself from the knockers, such as Watts Up With That, though I have to say doing so requires some self-control.
As I understand it, the procedure in the UK is that the Environment Agency (EA) declares droughts based in part on advice from the Met Office, and the water supply companies then take action, such as to impose hosepipe bans, depending on their own particular situations. Some companies have more water storage than others, and the types of such resources differ: different considerations apply to aquifers, rivers and reservoirs during droughts. My suggestions are, first, to the Met Office and Environment Agency regarding rainfall modelling and the use of such forecasts, and, second, primarily to the EA, but also the water companies, regarding how droughts are declared and managed.
Pouring Cold Water on Rainfall Modelling
The Met Office (and the EA) need to seriously consider whether their current long-range (1 to 3 months out) computer forecasting models are actually yet fit for purpose. We need these forecasts, so they should persevere. Maybe this year’s extreme rainfall can provide some clues as to what the models are missing.
I thought it would be fun (I said I’d need self-control) to juxtapose the actual UK rainfall over the 3 months April to June 2012 with the forecast. Here’s the exact same Met Office figure I presented yesterday, but with big red arrows added indicating what actually happened:
I feel I should let the right-hand figure sink in, so far is the outcome (the big red arrow) from the model forecasts (the column of blue +’s).
In case you don’t believe me, here are graphs of UK rainfall by calendar month for each year since 1910, from the Met Office:
You can read the rainfall totals yourself: 126.5mm in April, ~67.5mm in May, 145.3mm in June (the April and June figures were in the media – I can’t see where the Met Office publishes these, so had to estimate May’s from the graph).
Hopefully the deviation of the outcome from the forecasts has now sunk in.
Let’s look again at the scatter of the rainfall forecasts for 2012 (the columns of +’s in the top figure). The giveaway is that the scatter of rainfall forecasts for 2012 – when the initial air temperature, humidity and pressure, sea surface temperature (SST), soil moisture conditions and so on were known – is very similar to the scatter of actual rainfall over the 30 years 1971-2000, the left-hand column of x’s in each half of the figure. But the initial conditions in 1971-2000 were all different! If the model had any skill at all, the forecasts for 2012 should be more bunched up than the actual rainfall for a set of other years. They’re not. Assuming the explanation I gave yesterday for the slight bias in the forecasts is correct, these model outputs simply do not tell us anything useful.
But what else might the scatter tell us? Counting the +’s suggests we’re looking at more than 30 model runs (it’s difficult to count them exactly as they may overlie each other). If we assume they’re normally distributed, we’d expect the most extreme point in each tail (in this case, most and least rainfall) to be getting on for 2 standard deviations (SDs, often “sigma”) from the mean: about 1 in 40 draws should fall more than 2 SDs above the mean, and a similar number more than 2 SDs below.
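That “getting on for 2 SDs” expectation is easy to check with a toy simulation – simulated standard-normal draws standing in for the model runs, not the actual forecast data:

```python
import random
import statistics

random.seed(1)

# Simulate many "ensembles" of 30 draws from a standard normal
# distribution and record the largest value in each ensemble.
maxima = [max(random.gauss(0, 1) for _ in range(30)) for _ in range(20000)]

expected_max = statistics.mean(maxima)
print(round(expected_max, 2))  # close to 2 SDs above the mean
```

The average largest-of-30 comes out at roughly 2 SDs above the mean, consistent with the eyeballed reading of the figure.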
Now, in the left-hand plot, of rainfall just for April, we see that what actually occurred was only slightly more rainfall than that implied by the model-run which predicted the wettest weather. One might imagine that if there had been, say, 100 model runs, there’s a good chance that one of them would have predicted as much rain as actually occurred.
But in the right-hand plot, of rainfall for April through June, we see that what actually occurred was getting on for twice as far above the mean as the wettest model run. That is, if the wettest model run was around 2 SDs above the mean, what actually occurred was around 4 SDs above it. Statistical tables suggest that only about 1 in 30,000 model runs would predict as much rainfall as actually occurred!
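The 1 in 30,000 figure follows from the standard normal tail probability at 4 SDs, which can be computed from the complementary error function:

```python
import math

def normal_tail(z):
    """P(X > z) for a standard normal variable, via the
    complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

p = normal_tail(4.0)
print(round(1 / p))  # roughly 1 in 30,000
```

So, on the assumption that the model runs are normally distributed, an outcome 4 SDs above the mean really is a roughly 1-in-30,000 event for any single run.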
It seems a bit unlikely that a 1 in 30,000 event would have occurred in the relatively short time that this forecasting technique has been used. One might therefore suppose that the modelling technique used is not capable of predicting events such as the extreme 2012 UK rainfall.
One way to test this hypothesis would, of course, be to run the model tens of thousands of times with the same initial data as input for the April to June 2012 UK rainfall forecasts (the technique presumably is to make slight changes to the initial conditions for each model run, to ensure they don’t come out the same).
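The perturbed-initial-conditions idea can be illustrated with a deliberately simple toy: the logistic map, a standard example of chaotic behaviour, standing in for a real forecast model (it is emphatically not the Met Office’s system):

```python
import random

def toy_model(x0, steps=50):
    """A chaotic toy 'weather model': iterate the logistic map.
    A stand-in for a real forecast model, chosen only because it
    is sensitive to initial conditions."""
    x = x0
    for _ in range(steps):
        x = 3.9 * x * (1 - x)
    return x

random.seed(0)
x0 = 0.4  # the 'observed' initial condition

# Ensemble forecasting: many runs, each from a very slightly
# perturbed copy of the initial condition.
ensemble = [toy_model(x0 + random.uniform(-1e-6, 1e-6))
            for _ in range(10000)]

# Tiny perturbations produce widely spread outcomes.
print(max(ensemble) - min(ensemble) > 0.5)
```

The point of the sketch is only that perturbations far smaller than any measurement error fan out into a wide spread of outcomes, which is why ensembles are run this way; whether tens of thousands of such runs would capture a 2012-style extreme is exactly the question posed above.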
It might also be worth reflecting on why the models might not work. What might they be missing? There’s only really one possibility: there must be (positive) feedbacks in the real world that are not being captured by the model. Presumably the models adjust soil moisture in response to rainfall – a positive feedback, since increased soil moisture leads to more evaporation, which tends to lower atmospheric pressure (evaporation allows more heat to be absorbed before the temperature rises), which in turn enables more rainfall. But do the models adjust the characteristics of surface vegetation in response to rainfall? Quite possibly not. Yet greater rates of growth (in response to rainfall) will increase the leaf area available to promote evaporation, feeding back to lower pressure and more rainfall, just as for soil moisture. It may be significant that this mis-forecasting occurred in the spring.
It’s also worth noting that extreme real-world events (the 2003 heatwave is another UK example) reveal model shortcomings far more clearly than do average weather conditions. In a normal year rainfall would have been within the modelled range, so no serious discrepancy would have been apparent, even if the models in fact predicted very different weather conditions to those which actually occurred.
So I suggest a bit of pencil-sharpening over at the Met Office!
Why Wasn’t the Drought Rained Off a Bit Sooner?
The other thing that has struck me about this year’s drought is that it’s only just been called off.
This makes no sense. The drought restrictions came in at the end of March. But by the end of April the shortfall in March (shown in the following figure) had been more than made up.
The figure shows there was around 36mm of rainfall in March compared to about 95mm in an average year. The graph for April (above) shows 126.5mm against an average of about 70mm. March + April rainfall averages around 95+70 = 165mm, and this year was 126.5+36 = 162.5mm, near enough normal.
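The arithmetic is simple enough to check (the averages are my eyeballed figures from the graphs, not official Met Office values):

```python
# March + April rainfall, mm: long-term averages vs 2012 outturn
average = 95 + 70      # ~95mm average March + ~70mm average April
actual = 36 + 126.5    # ~36mm in March 2012 + 126.5mm in April 2012

print(average, actual)  # 165 162.5 -> near enough normal
```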
So why wasn’t the drought called off at the end of April? Or, alternatively – if average rainfall was not going to be enough – why weren’t we officially in drought at the start of March rather than at the end?
Part of the answer must be that there’s not a lot of point in calling a drought before April, since the water restrictions triggered by a drought declaration – typically hosepipe and sprinkler bans – are fairly pointless before then because hardly anyone wants to water their garden anyway. But if there’s an implicit annual timetable for drought management, why doesn’t the Environment Agency publish it? Keep the public onside, guys!
It may also simply be that, because we use more water in the summer, the crunch point – when water supplies would become dangerously low without significant rainfall – is more months away in the winter than in the summer. If this is the logic, then maybe the EA should provide numeric data along the lines of “assuming only 50% [or whatever is realistic - note that specific forecasts are ignored in this methodology, since, as we've established, they're just not good enough yet] of average rainfall and average use for the time of year, we have n months of water supplies”. Specific restrictions would be triggered by different values of n: if it were below, say, 9, hosepipe and sprinkler bans would come into effect. Of course, n would depend on the region, and in particular on the water company. This would allow water supplier performance to be assessed and companies rewarded for it, in much the same way as energy supply companies are intended to be incentivised to provide capacity as well as kWh.
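A minimal sketch of how such an n might be computed, with entirely hypothetical figures and function names (nothing here is EA methodology or real water-company data):

```python
def months_of_supply(storage, monthly_use, average_rainfall,
                     rainfall_fraction=0.5):
    """Months until storage runs out, assuming only a fraction of
    average rainfall arrives. All inputs in consistent units (e.g.
    mm-equivalent over the supply area); figures are illustrative."""
    net_drawdown = monthly_use - rainfall_fraction * average_rainfall
    if net_drawdown <= 0:
        return float("inf")  # supplies replenishing: no crunch point
    return storage / net_drawdown

# Hypothetical region: restrictions trigger when n falls below 9 months
n = months_of_supply(storage=360, monthly_use=80, average_rainfall=70)
print(n, n < 9)  # 8.0 True -> hosepipe ban would apply
```

The virtue of publishing a number like n, rather than a bare “drought” declaration, is that the trigger thresholds and the pessimistic rainfall assumption are out in the open for anyone to scrutinise.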
A “drought” is something we define – nature just gives us more or less rainfall (and storage). It seems to me the definition of drought could be somewhat smarter and in particular communicated considerably more effectively.
Weather and climate are related. When the authorities look stupid predicting the weather it undermines their messages about climate change.