Thursday, May 9, 2013

202. Accurate forecast? Sometimes

The changing weather can be both an endearing and depressing characteristic of Northeast Ohio. It can be 65 degrees and sunny one day, then snowing the next. Last year, March brought summer temperatures; this year, heading into May, it seems like we are still waiting for spring.

As I have gotten older, the weather seems to matter more to me. As a kid who loved baseball, I remember going to my room and crying when our little league games were rained out. On the flip side, like most kids, I loved snow days: a day home from school to sled or play football. But that was basically the extent to which I cared about the weather.

There was no Internet or smartphones offering 24-hour radar updates and projections, and no cable channel dedicated to weather forecasting; there was a local news segment and the newspaper. I don't recall paying much attention to the weather forecasts. I watched the news for the sports segment, and I treated each day individually.

Today I pay more attention to the weather forecasts, and am surprised how depressed a few gloomy days in a row can make me. I even pay attention to sunrise and sunset times-and how long the days are.

With age and technology, I now regularly check the weather and the local radar, and like many people I speak with, I am amazed at how poorly the weather seems to be forecast. We can locate the Higgs boson, but cannot figure out when it is going to rain? Locally, weather forecasts seem compromised as competing stations try to out-sensationalize one another, with many extreme weather forecasts seeming to fall short.

I wanted to learn more about the weather and recently began watching a Teaching Company course on the subject. Coincidentally, I also began reading Nate Silver's book on predictions, The Signal and the Noise: Why So Many Predictions Fail - but Some Don't. In the book, I was pleasantly surprised to find a chapter on weather prediction.

Silver's book briefly describes the history of weather forecasting - the challenges, the successes, and the differences among the government weather center (the National Weather Service), for-profit forecasters like the Weather Channel, and local television forecasts.

Weather predictions are, of course, based on statistical models, in which very slight fluctuations in the inputs can compound and have a distinct impact on the outcome. Thus, when you see a forecast with a 20 percent chance of rain, what it means is that whenever a similar forecast is made based on the current weather model, it should rain two days out of ten. This is called "calibration," and its accuracy is easily tested.
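That test is simple enough to sketch: group past forecasts by the probability they stated, then compare against how often it actually rained. A minimal illustration in Python, using made-up forecast/outcome pairs (the data here is invented, not from Silver's book):

```python
from collections import defaultdict

def calibration_table(forecasts):
    """Group (stated_probability, it_rained) pairs by stated probability
    and report the observed rain frequency for each group."""
    buckets = defaultdict(list)
    for prob, rained in forecasts:
        buckets[prob].append(rained)
    return {prob: sum(outcomes) / len(outcomes)
            for prob, outcomes in sorted(buckets.items())}

# Hypothetical history: ten days carrying a "20% chance of rain"
# forecast, of which it rained on two -- a well-calibrated forecaster.
history = [(0.2, True)] * 2 + [(0.2, False)] * 8
print(calibration_table(history))  # {0.2: 0.2}
```

A well-calibrated forecaster's observed frequencies match the stated probabilities; a "wet bias" shows up as observed frequencies consistently below the stated chance of rain.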

The National Weather Service forecasts are well calibrated; the Weather Channel, however, admits to "fudging a little under certain conditions." The reason, Silver surmised: "People notice one type of mistake - the failure to predict rain - more than another kind, false alarms. If it rains when it isn't supposed to, they curse the weatherman for ruining their picnic, whereas an unexpectedly sunny day is taken as a serendipitous bonus."

Silver described this as a "wet bias," and it is worse when it comes to local forecasts: "The TV meteorologists weren't placing much emphasis on accuracy." In one study of a Kansas City meteorologist, when he predicted a 100 percent chance of rain, it failed to rain one-third of the time.

"The attitude seems to be that this is all in good fun - who cares if there is a little wet bias, especially if it makes for great television," Silver concluded.

In making temperature forecasts, there are a couple of baselines that any model must beat - that is, simple predictions the forecasts are tested against. There is "persistence," the basic "assumption that the weather will be the same tomorrow (and the next day) as it was today," and there is "climatology," the "long-term historical average of conditions on a particular date in a particular area."
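The two baselines are simple enough to state in a few lines of code. A toy sketch, with invented temperatures purely for illustration:

```python
def persistence_forecast(recent_highs):
    """Persistence: tomorrow's high will equal today's high."""
    return recent_highs[-1]

def climatology_forecast(historical_highs_for_date):
    """Climatology: the long-term average high for this calendar date."""
    return sum(historical_highs_for_date) / len(historical_highs_for_date)

# Invented data: this week's daily highs, and past highs for May 9.
this_week = [58, 61, 55, 60, 63]
may_9_history = [62, 70, 65, 59, 68, 72]

print(persistence_forecast(this_week))      # 63
print(climatology_forecast(may_9_history))  # 66.0
```

A forecasting model that cannot outperform these two trivial rules adds no value, which is exactly the yardstick Silver applies below.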

What Silver discovered was that commercial forecasts beat persistence and climatology up to about nine days out. Beyond day nine, climatology actually made better predictions than the commercial forecasts - so the last couple of days of a ten-day (or longer) forecast ought to be ignored. I've noticed recently that many local forecasts stop at about eight days, apparently aware of the meaninglessness of anything beyond that point.

Silver's book makes a convincing argument that weather forecasting is an overall success (despite the fudging and the wet bias). He believes it has improved significantly over time, particularly in comparison to other types of forecasting, like earthquake and economic prediction.

It's important to realize that forecasts are statistical projections, not certainties. Rain on a day with only a 20 percent chance of rain is quite normal - expected, in fact, two days out of ten. It is easy to dismiss the forecast on those days, while on the other eight days we probably fail to give the forecasters any credit.

Seems I still have something to learn about weather forecasting. Maybe the forecasters are not as bad as I thought. Or maybe Northeast Ohio is really that unpredictable.