Hamming vs Taleb

The turkey problem, which more or less sums up the kinds of things Nassim Taleb thinks about, goes like this: a turkey is fed every day for a thousand days, and with each feeding its confidence grows that the farmer loves it. Then Thanksgiving arrives. The past data said nothing about the one event that mattered most.
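Here is a toy sketch of the turkey's statistics (my own illustration, with made-up numbers, not anything from Taleb):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1,000 days of feedings, as the turkey sees them: stable, well-behaved data.
feedings = rng.normal(loc=100.0, scale=5.0, size=1000)  # grams per day

# The turkey's estimate, and its confidence, improve with every observation...
print(f"days observed: {feedings.size}")
print(f"estimated daily feed: {feedings.mean():.1f} +/- {feedings.std():.1f} g")

# ...but day 1,001 is Thanksgiving. The event that matters most is not drawn
# from the process the turkey estimated, so no amount of history warned it.
```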

Taleb is on EconTalk here talking about GMO crops, among other things. A key passage in his exchange with Russ goes as follows:

Russ: The question now I have is that: Where is the evidence that this GMO process is a fat-tailed process rather than a thin-tailed process? Guest: The first thing you’ve got to–when you think of are we in fat-tailed or thin-tailed domains is look the other way and say, ‘What is the evidence that we are in a thin-tailed domain?’

Taleb is interesting because he emphasizes themes, like tail risk, that most others ignore; indeed, most others (including me) can’t even talk about them intelligently. I’m not sure Taleb can either, but he’s heroic for trying.
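For concreteness, the fat-tailed/thin-tailed distinction in that exchange can be seen in a very rough way by asking how much of a sample’s total comes from its single largest observation. A minimal simulation (my illustration, not something from the podcast; the distributions and parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# A thin-tailed sample (exponential) and a fat-tailed one (Pareto, alpha=1.1,
# so the variance is infinite and extremes dominate).
thin = rng.exponential(scale=1.0, size=n)
fat = 1.0 + rng.pareto(a=1.1, size=n)

for name, sample in [("thin (exponential)", thin), ("fat (Pareto)", fat)]:
    share_of_max = sample.max() / sample.sum()  # weight of the single largest draw
    print(f"{name:20s} max/sum = {share_of_max:.5f}")

# For the thin-tailed sample the largest draw is a negligible share of the
# total; for the fat-tailed one it is orders of magnitude larger and shrinks
# only very slowly as n grows -- which is why past averages say so little
# about the next extreme in that domain.
```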

I can see both sides. On the one hand, tail risk is really important; on the other hand, without an intellectual framework for processing it, there isn’t much to say or do, Taleb’s voluminous speaking and writing notwithstanding. His strategy, the precautionary principle, goes something like this: if you think the downside is bad, like really, really bad, don’t do it (no matter how small the probability). Well, don’t do it unless you can then convince yourself that it isn’t as bad as you once thought.

Hamming may agree with Taleb for all I know; indeed, Hamming spends a lot of time writing about the pitfalls of complexity. But he also tells two stories where the downsides are large and the tail (the probability of extreme outcomes) looks like it should be fat, yet turns out to be ambiguous.

First, in calculating the trajectory of a missile, he found that his setting of the initial conditions of the launch didn’t matter. He tells the story as a counterpoint to GIGO, in that he put in garbage and got out a solution that worked. The important insight is that small errors in course could be corrected by the missile’s own guidance system. There was a corrective feedback loop.
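A toy version of that loop (my sketch, not Hamming’s actual calculation): a simple guidance rule that keeps nudging the vehicle back toward its planned path makes the final position nearly independent of the launch conditions.

```python
def fly(x0, v0, target=1000.0, steps=500, dt=0.1, kp=0.5, kd=0.5):
    """Toy 1-D 'missile': each step, guidance compares position and velocity
    against the plan and corrects toward it (error-correcting feedback)."""
    x, v = x0, v0
    v_plan = target / (steps * dt)            # planned cruise velocity
    for step in range(1, steps + 1):
        planned_x = target * step / steps     # where the plan says we should be
        v += kp * (planned_x - x) + kd * (v_plan - v)
        x += v * dt
    return x

# Wildly different "garbage" launch conditions...
for x0, v0 in [(0.0, 0.0), (0.0, 50.0), (-100.0, 20.0), (30.0, -40.0)]:
    print(f"x0={x0:7.1f}, v0={v0:7.1f}  ->  final position {fly(x0, v0):8.1f}")
# ...land in essentially the same place, because every small deviation is
# corrected before it can compound. Drop the feedback terms and the initial
# errors propagate straight into the final position.
```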

The second story involves Los Alamos and the design of the atomic bomb. He found that many of their data were imprecise, but the calculations were accurate:

But further examination showed as the “gadget” goes off, any one shell went up the curve and possibly at least partly down again, so any local error in the equation of state was approximately averaged out over its history. What was important to get from the equation of state was the curvature, and as already noted even it had only to be on the average correct. Hence garbage in, but accurate results out never-the-less!
These examples show what was loosely stated before; if there is feedback in the problem for the numbers used, then they need not necessarily be accurately known.
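The mechanism in miniature (a toy of my own, not the actual Los Alamos calculation): perturb a pressure curve with sizable local errors, and the pointwise values are garbage, but a quantity that sweeps over the whole curve barely moves, because the errors average out over the history.

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy "equation of state": pressure as a function of volume.
v = np.linspace(1.0, 3.0, 2001)
p_true = 1.0 / v**1.4                 # the curve that is correct on average

# A badly measured version: every point is off, but the errors are local
# and roughly zero-mean, like the imprecise data Hamming describes.
p_noisy = p_true * (1.0 + 0.1 * rng.standard_normal(v.size))

worst = np.max(np.abs(p_noisy - p_true) / p_true)
print(f"worst pointwise error: {worst:.1%}")          # pointwise: garbage

# A quantity integrated over the whole history (here, the work integral
# of p dv, done with a plain Riemann sum) averages the local errors out.
dv = v[1] - v[0]
work_true = np.sum(p_true) * dv
work_noisy = np.sum(p_noisy) * dv
print(f"integrated result error: {abs(work_noisy - work_true) / work_true:.3%}")
```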

Taleb understands feedback loops, the law of large numbers, and the fact that systems can have the capacity for self-repair. I would say his precautionary-principle solution (wait until you are sure it isn’t bad) actually requires one of these phenomena to be in place to avert catastrophe. So why not talk about the wonders of how nature builds resilience as well?

Maybe Taleb feels frustrated that mainstream discussion of tail risks does not do justice to the magnitude of the downside in those scenarios. So, this line of thinking goes, he doesn’t emphasize the resilience of systems because he’s trying to motivate people by telling scary stories.

I would hardly agree that’s necessary; watch CNN for a few hours to get a feel for how well covered scary things are. Taleb’s emphasis is somewhat smarter (or at least smarter-sounding) than run-of-the-mill scaremongering. But does he achieve anything more than CNN with his “just don’t do it” advice?
