The Hardball Times is running a series of articles by their writers about what they learned doing a project. Dan Szymborski wrote about projecting player performance and Mitchel Lichtman wrote about defensive statistics. Very good stuff.
A guy named Matt Hunter writes about building his own baseball simulator and some of the things he learned in the process. Many of these lessons are about growing pains and the trade-offs between a simple system and an increasingly complex one, which are important, but what is really eye-opening is the role the simulation reveals for variance.
In the story he plays many seasons' worth of games between identical New York Yankees and Boston Red Sox teams and ends up with wildly divergent results. The point is that in a game with so many random variables in play, results will swing from one outlier to the other, with many points in between, even though talent is precisely defined. That is, these players are mathematical constructs and therefore constant, unlike real-life players, who are also very human (and thus variable in human ways).
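You can see the same effect with a much cruder toy than Matt's simulator. A minimal sketch, assuming two perfectly identical teams (so every game is a fair coin flip) and a 162-game season, replayed a thousand times:

```python
import random

random.seed(42)

def simulate_season(games=162, p_win=0.5):
    """One season: each game is an independent coin flip at p_win."""
    return sum(random.random() < p_win for _ in range(games))

# Replay 1,000 seasons between two identical teams. Talent never changes,
# yet the win totals spread out over dozens of games.
wins = [simulate_season() for _ in range(1000)]
print(min(wins), max(wins))
```

Even with talent fixed at exactly .500, the best and worst of those replayed seasons typically land 35 to 45 wins apart. Chance alone covers a lot of the standings.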
This flux is really important to understand when it comes to analyzing the games, both baseball and fantasy.
One of the commenters on Matt's piece links to a conversation about variance and season disks in baseball simulation games like Strat-O-Matic, apparently started by Ted Turocy (Dr. Arbiter). In it he shows how turning under-regressed talent evaluations (last year's stats) into the models for our simulation games inflates the variance from year to year in unappealing ways.
This is similar to the method I used to show that 14 years of fantasy league outcomes determined purely by chance would look a lot like what we've actually seen in Tout Wars over its history. Except for Schechter.