16 March 2008
When Models Diverge
TO is interested in weather. Like time, another topic already encountered here, weather is pervasive and ever-changing in our environment, and people have long sought to measure and interpret it quantitatively. It differs, though, in the level of interest and effort applied to predicting its future: occasional insertion of leap seconds into the calendar affects far fewer daily lives.

Extensive computer models (as at Model Analyses and Forecasts) have been developed to project atmospheric trends from observed data. At least today, however, they haven't replaced the need for human judgment. The problem arises when the models predict different results. Then, forecasters draw on experience to decide which choice (or hybrid) appears most likely to be accurate in the physical world (or, in the jargon of the field, "to verify"). In the US, these judgments are often visible to interested readers in regional Technical Forecast Discussion pages, such as this example.

Will models continue to improve, to the point where it becomes vanishingly rare for human experts to need to arbitrate among conflicts? Or will different algorithmic processes necessarily continue to yield different results in some cases, to be resolved above the algorithmic level, much as a group of specialists resolves disagreements by discussion? TO (though only an interested layperson in the field) suspects that the divergences will become rarer over time, but won't soon disappear.
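The arbitration step described above can be caricatured in a few lines of code: take a weighted consensus of model outputs, but flag cases where the models spread too far apart for a purely mechanical blend to be trusted. This is only an illustrative sketch, not any operational forecasting system; the model values, weights, and spread threshold are all invented for the example.

```python
def blend_forecasts(forecasts, weights):
    """Weighted consensus of point forecasts (e.g., a high temperature)."""
    total = sum(weights)
    return sum(f * w for f, w in zip(forecasts, weights)) / total

def needs_human_review(forecasts, max_spread):
    """Flag cases where the models diverge by more than max_spread,
    suggesting a forecaster should judge which solution will verify."""
    return max(forecasts) - min(forecasts) > max_spread

# Three hypothetical models' high-temperature forecasts, weighted by
# recent skill (all numbers invented for illustration).
temps = [62.0, 64.0, 71.0]
weights = [0.4, 0.4, 0.2]

consensus = blend_forecasts(temps, weights)        # 64.6
flag = needs_human_review(temps, max_spread=5.0)   # True: 9-degree spread
```

In this toy version, the flagged case is exactly the situation the post describes: the blend gives an answer, but a human still has to decide whether the outlier model or the consensus is more likely to verify.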