Models need (and will get) improvement

Two news stories about forecasting models caught my eye this week. The first dealt with a shortfall in the climate models and the second was good news about continuing improvements in our daily forecasting models.

I’ve written before about potential problems with initial data and assumptions in climate forecasting models – the ones used by climatologists to predict our global conditions decades in the future. Like it or not, they are not perfect.

NASA climate map

Credit: NASA, 2015. “NASA global data set combines historical measurements with data from climate simulations using the best available computer models to provide forecasts of how global temperature (shown here) and precipitation might change up to 2100 under different greenhouse gas emissions scenarios.”

Two researchers from Princeton University drove that point home with a recent paper in the journal Nature Communications. Jun Yin and Amilcare Porporato’s paper, “Diurnal cloud cycle biases in climate models,” details how they carefully analyzed satellite data from 1986 to 2005 and compared what they found to what the models produce. The two determined how the difference between the time of day clouds form in reality and the time of day averaged in the models affects the amount of solar radiation the models predict.

In the climate models, the cloud cover peaks in the morning. In reality, the cloud cover peaks in the afternoon – the same time the radiation coming from the sun peaks. The amount and types of clouds between the Earth’s surface and the sun make a difference in how much of the sun’s energy we receive. The climate models were overestimating that amount and potentially forecasting hotter and drier conditions as a result.
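To get an intuition for why the timing of the cloud peak matters, here is a minimal back-of-the-envelope sketch in Python. Everything in it is made up for illustration – the sine-shaped clear-sky curve, the Gaussian cloud cycle, the 8 AM versus 2 PM peak hours, and the fraction of sunlight the clouds reflect are my assumptions, not numbers from the Yin and Porporato paper:

```python
import numpy as np

# All numbers below are illustrative assumptions, not values from the paper.
hours = np.linspace(0, 24, 24 * 60)  # one day at one-minute resolution

# Idealized clear-sky solar irradiance: zero at night, sine-shaped between
# 6 AM and 6 PM, peaking near 1000 W/m^2 at solar noon.
clear_sky = 1000.0 * np.sin(np.pi * (hours - 6) / 12)
clear_sky[(hours < 6) | (hours > 18)] = 0.0

def cloud_fraction(peak_hour):
    """Toy diurnal cloud cycle: a Gaussian bump of cloudiness around peak_hour."""
    return 0.3 + 0.4 * np.exp(-((hours - peak_hour) ** 2) / (2 * 2.0 ** 2))

def daily_surface_energy(peak_hour, cloud_reflectance=0.5):
    """Solar energy reaching the surface over one day (Wh/m^2), assuming the
    clouds present at any moment reflect a fixed share of the incoming sunlight."""
    surface = clear_sky * (1.0 - cloud_reflectance * cloud_fraction(peak_hour))
    return np.trapz(surface, hours)

morning = daily_surface_energy(peak_hour=8)     # clouds peak mid-morning, as in the models
afternoon = daily_surface_energy(peak_hour=14)  # clouds peak early afternoon, as observed

print(f"Morning cloud peak:   {morning:7.0f} Wh/m^2 reaches the surface")
print(f"Afternoon cloud peak: {afternoon:7.0f} Wh/m^2 reaches the surface")
print(f"Morning-peak excess:  {morning - afternoon:7.0f} Wh/m^2")
```

Because the afternoon clouds overlap the hours when the sun is strongest, the afternoon-peak case lets less energy reach the surface; a model whose clouds peak in the morning lets too much sunshine through, which is the kind of overestimate described above.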

The paper states, “Thus, on the one hand, consistent biases in DCC [diurnal cycle of clouds] between present and future climates give rise to similar TOA [top of the atmosphere] reference irradiance, so that the model tuning made for current climate conditions still remains largely effective for the global mean temperature projections. On the other hand, consistent biases have the potential to increase the uncertainty of climate projections.” In simpler terms, the researchers don’t think the temperature forecasts are completely wrong, but they have shown the margin of error may be much greater than most scientists have acknowledged up to this point.

The hope is that the study’s results will be used to improve the current models.

In another story, the National Oceanic and Atmospheric Administration (NOAA) announced on Tuesday that it is in the third phase of a massive supercomputer system upgrade. This year’s improvements increase processing speed to 8.4 petaflops and add 60 percent more storage capacity. The added speed and storage will allow for more initial-conditions data – extremely important information for forecasting – and higher resolution, which will improve accuracy in both space and time.
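As a rough illustration of why higher resolution demands so much more computing power, here is a small sketch of the scaling argument. The 13 km baseline and the cube-rule scaling are simplifying assumptions for illustration, not NOAA’s actual configuration or benchmark figures:

```python
def relative_cost(grid_spacing_km, baseline_km=13.0):
    """Very rough cost of a global run relative to an assumed 13 km baseline.

    Halving the horizontal grid spacing roughly quadruples the number of grid
    columns, and numerical stability (the CFL condition) forces the time step
    to shrink in proportion, so total work grows roughly with the cube of the
    refinement factor (vertical levels and model physics are ignored here).
    """
    refinement = baseline_km / grid_spacing_km
    return refinement ** 3

for spacing_km in (13.0, 9.0, 6.5, 3.0):
    print(f"{spacing_km:5.1f} km grid -> roughly {relative_cost(spacing_km):6.1f}x "
          "the work of the 13 km baseline")
```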

The goal is to improve our forecasting capability, especially when it comes to warning of dangerous storms. The forecasting model specifically mentioned in the press release is the Global Forecast System (GFS), which has a reputation among many forecasters for being less than accurate beyond two or three days out, even though it produces predictions out to 10 days. Improvements to the GFS are needed and quite welcome!

If you’re not a meteorologist or climatologist, you likely don’t know the frustration of making a forecast based on science and technology – far more than we had fifty years ago – and still knowing there is a chance the models we rely on are missing critical input and getting it wrong. While most people may not consider a few degrees of error in temperature a horrible thing, they’d probably agree that when the temperature happens to be around 32 degrees, a few degrees in either direction can make a big difference in our weather reality.