Once upon a time, I took a statistics class in college. On the first day, the professor told us a simple truth we needed to understand before we started down the road of probabilities, bell curves, and standard deviations. He told us that numbers can be made to say whatever we want them to say. At that time, I was a mass communication major whose favorite class was Criticism of the Information Media, and the idea hit home quickly. That class taught me to think critically about the statistics we see in the news and to ask how they are calculated. Most people don’t consider it too deeply, but how we count matters.
The past few weeks have brought that point to the forefront with reports of revisions to the death toll from Hurricane Maria in Puerto Rico. This week’s revision put the number at about 3,000, but why did it jump so high from the initial, official count of 65? In Miami, the NBC station’s Chief Meteorologist, John Morales, published a blog post this week that explains well how those numbers were reached. In short, he makes the point that what is being counted, and how it’s being counted, differs from report to report.
The initial, official report counted deaths directly caused by Hurricane Maria’s wind, rain, and flooding. The later reports, with numbers in the thousands, were estimates of “excess deaths” that could be linked to the after-effects of Maria’s damage in the months following her landfall, and each of those follow-up studies counted things differently. Read Morales’ blog for more information.
To quote Peter Griffin on “Family Guy,” “what really grinds my gears” about the revised death toll studies more than anything is how most media outlets are reporting them – with a headline and a sentence or two and little-to-no explanation of what the numbers truly mean. They are being reported as hard fact and not best estimates of deaths indirectly caused by the storm.
There are other weather-related numbers that get tossed about regularly in news reports and taken at face value, and they can often be misleading. Take, for example, local temperature records such as the coldest or hottest high temperature for the date. To put those records into context, we need to ask some questions. How long have records been kept at that specific location? Has the thermometer always been accurate, reliable, and sited in the appropriate spot for the purpose, or has it been moved or replaced? Are there quality control processes in place for the data? If so, what are they? Are there historical gaps in the data?
Granted, when a reporter only has three seconds to mention a factoid in passing as he transitions to a larger story, it’s difficult to include all that information. For that reason, we need to be willing to ask those questions and dig a little deeper if it’s a topic that matters to us and a fact we plan to repeat at the coffee bar.
What about other superlatives like “most destructive hurricane” or “most expensive storm”? Are we accounting for inflation, insurance coverage, building codes, and population density? Is it fair to compare a hurricane that affects the densely populated Mid-Atlantic and New England region to an otherwise identical storm that hits mostly rural Florida and Georgia? More people and more expensive property may be affected in New England, but that doesn’t make the devastation in the Southeast any less real to those who suffered it.
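The inflation point is easy to see with a little arithmetic. Here is a minimal sketch of how a damage figure from decades ago can be put into today’s dollars using a price index; every dollar amount and index value below is hypothetical, chosen only to illustrate the calculation, not drawn from any real storm.

```python
# Sketch: inflation-adjusting a nominal storm-damage figure.
# All dollar amounts and price-index values are hypothetical.

def adjust_for_inflation(nominal_damage, index_then, index_now):
    """Scale a past nominal dollar figure into today's dollars
    using the ratio of a price index (e.g., CPI) now vs. then."""
    return nominal_damage * (index_now / index_then)

# Hypothetical: a "$27 billion" storm from 1992, with illustrative
# price-index values of 140 (then) and 300 (now).
old_storm_today = adjust_for_inflation(27e9, 140, 300)
print(f"${old_storm_today / 1e9:.1f} billion in today's dollars")
# -> $57.9 billion in today's dollars
```

Once adjusted, the older storm’s “$27 billion” headline figure would more than double, which is exactly why a raw ranking of storms by nominal cost can mislead.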
Regular readers of this blog know that my goal in writing it is to challenge you to think about science in the news in an educated way. My personal opinions may differ from yours, and that’s fine with me as long as you’re thinking more critically about what you hear and read as a result. In this new virtual world of flashy headlines and little substance, it’s our personal responsibility to be media savvy.