
July 08, 2010

Perfect Information vs. Chaos

Buck Woody recently posted an interesting blog item on the perils of expecting "perfect information" for economic modelling. My response follows.


Information can theoretically be perfect, but its existence relies on the assumption of a completely deterministic (rational) universe. I think we'd all agree that human behaviour is often so complex as to be effectively irrational, so all economic models necessarily carry some sort of fuzz factor when you apply them to the real world. Asimov explored these topics at length in the Foundation novels, and I think he hit the nail on the head.


This mania that rich western populations have around "command and control", "risk management", and the commoditization of labour through specialization (cf. W. E. Deming et al.) is all based on the assumption that we can measure and manage every aspect of our physical, social and economic environments. However, when tested, this assumption fails: the respective systems ARE chaotic, DO NOT always behave deterministically, and the energy and resources needed to bring order to the chaos are always orders of magnitude greater than the energy and resources in the system being "managed".


The kinds of modelling under discussion would be much more successful if, instead of trying to get more precise answers to silly questions, we started trying to understand the chaotic order in our systems and began factoring entropic terms into our understanding of our environment. Chaos theory tells us that these systems will usually behave more or less deterministically, but only predictably up to a point. We have the tools to start examining the relationships in our macro-ordered, black-box systems, but they are not the tools of nice, safe but ultimately inadequate tenth-grade algebra. They are the tools of advanced statisticians, and we need to understand that the results are expressed in terms of probabilities, not certainties. We need to build chaos and entropy into our models in order to understand the range of outputs we're likely to see given a fixed set of inputs.




Given the discussion above, it's interesting to note that some of the data mining algorithms in MS SQL Server (Analysis Services) let us start building these improved models. We can look at clusters of results, perform sensitivity analyses on the input variables in our systems, and run time-series analyses to predict future outcomes. We can build decision trees and naive Bayes models to analyze paths of least resistance to the outcomes we want. And all of this can happen without having to build equations in a simple algebraic form to which some kind of mythical certainty can be ascribed. The clever people at MS Research have already done most of the heavy lifting in allowing SQL Server to do this for us.
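
As a rough illustration, here's a sketch in DMX (SQL Server's Data Mining Extensions language) of what I mean. The model name, columns and values are hypothetical; the point is that the answer comes back with a probability attached rather than as a single "certain" number.

    // Define a hypothetical decision-tree mining model. SQL Server also ships
    // Microsoft_Clustering, Microsoft_Naive_Bayes and Microsoft_Time_Series.
    CREATE MINING MODEL [ChurnTree]
    (
        CustomerID    LONG   KEY,
        Tenure        LONG   CONTINUOUS,
        MonthlySpend  DOUBLE CONTINUOUS,
        Churned       TEXT   DISCRETE PREDICT
    )
    USING Microsoft_Decision_Trees

    // After training the model, ask it about a single hypothetical case.
    // PredictProbability returns the model's confidence in the answer;
    // PredictHistogram(Churned) would return the full distribution of outcomes.
    SELECT
        Predict(Churned)            AS PredictedOutcome,
        PredictProbability(Churned) AS OutcomeProbability
    FROM [ChurnTree]
    NATURAL PREDICTION JOIN
    (SELECT 24 AS Tenure, 55.0 AS MonthlySpend) AS t

The clustering and time-series algorithms follow the same pattern: functions like ClusterProbability() and PredictTimeSeries() hand back distributions and ranges rather than single point answers.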

