I predict this swan will never fly

I have noticed recently a tiny debate going on between two blogs concerning whether or not it is sensible to assign the class of people called peasants a different distribution of ability scores from the class of people called lords. The distinction in question – 2d6 for peasants, 3d6 for lords – seems roughly fine to me in the renaissance setting in which it’s proposed, though I prefer 2d6 for peasants with a further roll of 2d4-2 added on if the first roll is a 12, since this gives a small probability of scores above 12 (up to a maximum of 18). I agree with this method because being a peasant is the single biggest determinant of every aspect of your life: malnutrition and lack of even basic education are significant impediments to the development of even normal stature and mental function, let alone decent wisdom or strength scores. My Eternal Antagonist over at Monsters and Manuals disagrees, because (it would appear) he objects to the epistemic arrogance of claiming one can model class effects, and because he thinks it’s an inductive fallacy to propose that just because most peasants have 2d6 stats, the next peasant one meets will have 2d6 stats.
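For anyone curious about what that exploding-on-12 rule actually does, here is a minimal sketch in Python (the function names and the script itself are mine, not from either blog) that enumerates the exact distributions for the competing methods and compares them:

```python
from itertools import product
from collections import Counter
from fractions import Fraction

def dice_distribution(num_dice, sides, modifier=0):
    """Exact distribution of (num_dice)d(sides) + modifier, as {total: probability}."""
    counts = Counter(sum(roll) + modifier
                     for roll in product(range(1, sides + 1), repeat=num_dice))
    total_outcomes = sides ** num_dice
    return {value: Fraction(count, total_outcomes) for value, count in counts.items()}

def peasant_distribution():
    """2d6, but a roll of 12 gets a further 2d4-2 added on top (so scores can reach 18)."""
    base = dice_distribution(2, 6)
    bonus = dice_distribution(2, 4, modifier=-2)  # ranges over 0..6
    result = Counter()
    for value, p in base.items():
        if value == 12:
            for extra, q in bonus.items():
                result[value + extra] += p * q
        else:
            result[value] += p
    return dict(result)

if __name__ == "__main__":
    methods = {
        "lords (3d6)": dice_distribution(3, 6),
        "plain peasants (2d6)": dice_distribution(2, 6),
        "peasants (2d6, +2d4-2 on a 12)": peasant_distribution(),
    }
    for name, dist in methods.items():
        mean = sum(value * p for value, p in dist.items())
        above_12 = sum(p for value, p in dist.items() if value > 12)
        print(f"{name}: mean {float(mean):.2f}, P(score > 12) = {float(above_12):.3f}")
```

The numbers come out roughly as you’d expect: the lords average 10.5 per stat, the exploding peasants a bit over 7, and the 2d4-2 rule gives a peasant slightly under a 3% chance of exceeding 12 on any given stat.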

I’m not going to address either of these arguments directly, because it would be impolite – I’m already arguing with Noisms at his own blog, and I’ve got nothing to say at Alexis’s. What I thought I’d do instead is briefly give my opinion of the Black Swan thesis, which Noisms references in his objection to the model. Taleb, you see, who wrote The Black Swan, is opposed to modelling.

I haven’t read this book, but I’m vaguely interested in the philosophy of science and I had heard that Taleb was not overly respectful of global warming theory, so I picked it up at a friend’s house and read the first chapter, and I was struck by the complete failure of the fundamental analogy, that of the black swan. Taleb argues that the black swan, when it was discovered in Australia in the 18th century, was a freak, unexpected event that the biological theories of the time had not predicted, and which was worked into those theories only in hindsight. In his book the black swan has come to represent the unpredictability of nature, and the inherent dangers of modelling anything.

Except, the problem with this is that in 1790 the biologists were working from the wrong theory. They didn’t have anything like a theory of evolution, which came only later, after Darwin visited Australia. Evolution, I have read, gives biologists the power to predict new animals, and in fact even to predict where they might be found or how they might behave, and had the theory been developed at that time the black swan wouldn’t have constituted much of a surprise at all, let alone a “significant random event.” While it’s trivially true that the black swan might have looked like a significant random event at the time, what is more important is that the scientists of that time were working with an imperfect theory that had no predictive power. Taleb’s whole book about random events screwing predictive models is based on an analogy to a situation in which a (possibly) predictable event was not predicted by a theory that lacked any predictive power. It’s essentially a book whose thesis could be rewritten as “Don’t make predictions from the wrong model.” Also, I would add, it’s disingenuous to claim that the swans were worked into the theory with the benefit of hindsight – Australian flora and fauna were essential data in the construction of a revolutionary new theory, evolution, which had greater predictive power. That is not the same as justifying their existence in hindsight.

There is also something a bit strange in a book which claims that financial models are doomed to fail to predict significant random events (black swans), written by an author who claims to have predicted the Global Financial Crisis (GFC) – which he simultaneously claims is the key black swan of our time. Figure that out. He isn’t the only one to have predicted this black swan, either: I did in 2004, and so did plenty of economists and finance people from around then on. Of course, the claim that modelling can’t handle unpredictable events is prima facie true, but vacuously so. For example, global warming theory can’t predict rapid global cooling if a ginormous meteor hits the earth in 2020, because random events like that can’t be factored into anyone’s theory. But a meteor strike in 2020 doesn’t invalidate global warming models or the theory, and to say so is to deliberately ignore the underlying assumptions of the modelling process.

It’s actually quite hard to find criticism of Taleb’s theories on the internet, though I found one article here, also by someone who has not read The Black Swan, but who is primarily riffing off a very shoddy-sounding Financial Times opinion piece by Taleb. This blog appears to be by a quantitative analyst, so it is undoubtedly biased about Taleb’s criticisms of quantitative analysts, but it makes some interesting points, particularly about the business consequences of Taleb’s theories and the silliness of some of Taleb’s claims about the actual models that are used in finance.

I would also add that the finance world isn’t the best place to look for examples of sound modelling. It isn’t subject to any of the checks and balances of science, doesn’t have the historical lessons of science, and a lot of its methodology and results (beyond “making money”) are not made publicly available for us to check. Also, the “making money” part appears to be driven by human interpretation of the models the analysts provide, and not necessarily by the models directly. But Locklin makes the point here, I think nicely, that while Taleb has made a big deal of the claim that normally distributed data is insufficient for finance modelling, modern finance modelling doesn’t actually rely on the assumption of normality very much. Locklin claims that for this very reason he, like me, had to become a “small-time expert in kernel regression.” Kernel regression modelling has many flaws, but an assumption of normality ain’t one of them. Locklin’s rather malicious claim is that Taleb makes money and fame by telling people who know nothing about finance about something very obvious to the modellers (non-normality), while simultaneously making them think the modellers don’t realise this.
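Since kernel regression gets mentioned here without explanation: it’s a non-parametric smoothing technique that estimates a curve by locally weighted averaging of the data, and it makes no distributional assumption about the outcome at all – which is exactly why it serves as a counterexample to “the modellers all assume normality.” Here is a minimal Nadaraya-Watson sketch in Python, purely my own illustration and nothing to do with Locklin’s or anyone’s actual trading models:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth=1.0):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.

    The Gaussian here is only a weighting function over nearby points;
    nothing is assumed about the distribution of y itself.
    """
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    x_query = np.asarray(x_query, dtype=float)

    # Scaled distances between each query point and every training point
    diffs = (x_query[:, None] - x_train[None, :]) / bandwidth
    weights = np.exp(-0.5 * diffs ** 2)               # kernel weights
    return (weights @ y_train) / weights.sum(axis=1)  # locally weighted average

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 200)
    # Heavy-tailed (decidedly non-normal) noise; the estimator is unbothered
    y = np.sin(x) + 0.2 * rng.standard_t(df=2, size=200)
    grid = np.linspace(0, 10, 5)
    print(nadaraya_watson(x, y, grid, bandwidth=0.5))
```

The Gaussian in the kernel is just a weighting function for nearby points; the noise in the toy data is deliberately fat-tailed and the estimate is produced the same way regardless.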

You see the same tactics in global warming denialism all the time, and hordes of armchair scientists eager to claim that they’ve seen the obvious thing (“climate isn’t weather!”) that a generation of climatologists have missed. It may make for entertaining reading, but it’s neither enlightening nor correct.

Further, Taleb is an inheritor of Popper, although Locklin claims he is an inheritor of Feyerabend and therefore an “intellectual nihilist,” an accusation I think is valid regardless of his intellectual inheritance. It’s very easy to claim that all models fail because of unexpected events; it’s a lot harder to square this “philosophy” with the continuing excellent success of, for example, life tables in the insurance industry, or models of global warming. And a claim that all models will be destroyed by a black swan event is, contra Popper, unfalsifiable. If the event comes and doesn’t destroy the model, you claim it wasn’t really a black swan; if no black swan ever comes in our lifetime, due to its low probability, you never get to test the model against one. I don’t think Popper would like this. Also, Taleb’s explanation for the causes of the GFC – interconnected markets sharing bad models that didn’t expect the housing meltdown – conveniently deflects blame from the agencies and institutions that were actually responsible for the crash[1], while simultaneously failing to explain the fact that the black swan event (the housing meltdown) was being predicted in very many models for years beforehand. Not only is his model built on a false analogy, but its fundamental test doesn’t have all the characteristics of a black swan anyway.

I suppose the consequence of this intellectual nihilism is what bothers me, the idea that people who don’t do science will reject perfectly good models of important stuff on the basis that you can’t ascribe theories to observed facts. It’s for this reason that we have the unedifying spectacle of Sir Noisms, who hails from the most class-stratified society in the developed world, trying to argue that it’s impossible to model differences between peasants and lords because life is just too complex. The sad finding of 100 years of research on poverty in the UK is that no, life really is that simple[2].

fn1: To be fair, Taleb does provide a reasonable set of rules to avoid a subsequent GFC, but they’re so clearly common-sense based that his “theory” is hardly necessary to justify them.

fn2: Yes, I’m aware I’m being facetious here, ecological fallacy etc. etc. blah blah

Note: the picture is from this site about the 303rd bomber group in World War 2, and the fate of the Black Swan. Models of aircrew survival in World War 2 very much allow us to expect the kind of events described on this page…