## Insights from The Black Swan, Part 3 – The Ludic Fallacy

Long ago, while reading *The Black Swan* by Nassim Nicholas Taleb, I began a series of blog posts (here and here) in which I promised to continue “reflecting here as I encounter insights that excite me as a tester.” Did you think that because I’ve published three unrelated posts since then and nearly two years have passed (let’s try not to think about that rate of posting), I was done reflecting on *The Black Swan*? Me too. It turns out that I had started a draft of a third post nearly two years ago, but never returned to it. I’m on a mission to make good on my many accumulated drafts and notes and thoughts and get into a solid writing habit, so here goes with that third post.

In Chapter Nine of *The Black Swan*, Taleb presents two characters to help illustrate what he calls the ludic fallacy: Fat Tony, a street-smart, slick-talking student of human behavior who “has this remarkable habit of trying to make a buck effortlessly”; and Dr. John, an efficient, reasoned, “former engineer currently working as an actuary for an insurance company.”

The full picture that Taleb paints of Dr. John is in many ways a spot-on caricature of me. Dr. John is “thin, wiry, and wears glasses,” all of which fits my bill. Like Dr. John, I’m meticulous and habitual. I too know a bit about computers and statistics, although while Dr. John is an engineer-turned-actuary, I am an actuary-turned-software-tester. It all hit a little too close to home, as you’ll see.

After proper introductions, Taleb proposes a thought exercise in which he poses a question to both Fat Tony and Dr. John:

> Assume that a coin is fair, i.e., has an equal probability of coming up heads or tails when flipped. I flip it ninety-nine times and get heads each time. What are the odds of my getting tails on my next throw?

My eyes light up and my heart races at this. I’m flashing back to grade school: “I know! I know! Let me answer!” My Talebian doppelgänger, Dr. John, expresses my immediate thought:

> One half, of course, since you are assuming 50 percent odds for each and independence between draws.

Of course, of course! I’m excited because this understanding was a bit of a revelation in my college probability class. Yes, 99 straight heads is eye-popping, but we studious mathematicians must look past our emotions and see that the previous flips have no bearing on the next flip. The odds of 100 straight heads are one number (a very low one), but the odds of a head on the 100th flip are one in two, just like the 99th flip and just like the 1st flip.
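For the record, Dr. John’s arithmetic can be sketched in a few lines (exact fractions, so nothing rounds away):

```python
from fractions import Fraction

# Probability of 100 straight heads with a fair coin: one number, and a tiny one.
p_hundred_heads = Fraction(1, 2) ** 100

# Probability of heads on the 100th flip, assuming fairness and independence:
# the same as on the 99th flip, and the same as on the 1st.
p_next_heads = Fraction(1, 2)

print(p_hundred_heads)  # 1/1267650600228229401496703205376
print(p_next_heads)     # 1/2
```

The point of the contrast: the run of 99 heads is astonishing as a *sequence*, but under the stated assumptions it tells you nothing about flip number 100.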

Taleb turns the question to Fat Tony, who also says “of course,” but gives a different answer: 1%. His reasoning (with Taleb’s translation)?

> You are either full of crap or a pure sucker to buy that “50 pehcent” business. The coin gotta be loaded. It can’t be a fair game. (Translation: It is far more likely that your assumptions about the fairness are wrong than the coin delivering ninety-nine heads in ninety-nine throws.)

“Of course,” indeed. Dr. John and I have fallen for — and Fat Tony has seen through — Taleb’s ludic fallacy: “the attributes of the uncertainty we face in real life have little connection to the sterilized ones we encounter in exams and games.”
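Fat Tony’s hunch can even be made concrete with a quick Bayesian sketch. The prior below (a one-in-a-million chance the coin is double-headed) is my own assumed number, not Taleb’s, and it’s deliberately generous to the fairness hypothesis:

```python
from fractions import Fraction

# Assumed prior: grant only a one-in-a-million chance the coin is rigged
# (always lands heads).
p_rigged = Fraction(1, 1_000_000)
p_fair = 1 - p_rigged

# Likelihood of 99 straight heads under each hypothesis.
likelihood_fair = Fraction(1, 2) ** 99
likelihood_rigged = Fraction(1, 1)  # a double-headed coin always shows heads

# Bayes' rule: posterior probability that the coin is actually fair.
posterior_fair = (p_fair * likelihood_fair) / (
    p_fair * likelihood_fair + p_rigged * likelihood_rigged
)

print(float(posterior_fair))  # effectively zero (on the order of 1e-24)
```

Even with near-total initial confidence in fairness, 99 straight heads all but annihilates that belief. The “rigged” hypothesis doesn’t need to be likely up front; it just needs to be *possible*.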

This is an important lesson for anybody, and especially for anybody in several risk-intensive fields, but I take it to heart as a software tester. If I stick to my background in strict mathematical thinking, I put myself in a box, limiting my view of possible reality. Anybody can perform mathematical calculations — more significantly, any *machine* can — but it takes a learned mindset to observe and question assumptions. Assumptions like the fairness of a coin, the stability of a codebase, the behavior of a user class. It also takes a human to assign *meaning* to observed outcomes. Like the meaning of a coin that hasn’t turned up tails in 99 tries (even if based on a probabilistic model of coin flipping it’s just one of many random outcomes), or the meaning of a web form taking three times longer than average to load during the 12 o’clock hour four days in a row (even if based on a probabilistic model of network behavior it’s just one of many random outcomes).

As a tester, I’ve had to learn (and am still learning) to be more like Fat Tony, the human observer, and less like Dr. John, my actuarial spirit father.

Because here’s the thing: humans write the software. We can’t say “these are the odds of a bug happening here, because there are X variables and Y ways to interact and Z paths.” Each programmer brings their own human tendencies toward specific types of bugs and their own human understanding of how the software should work.

And here’s the other thing: humans use the software. If the software were used not by a human but by a machine, one that mechanically executed every single possible combination of variables and buttons and paths and configurations and network connections and on and on, then sure, there’s probably a nice probabilistic model for how common certain types of bugs would be. But humans use the software in specific, meaningful ways. It’s rigged, like the coin. It’s the tester’s job to (1) observe the 99 heads, (2) understand the meaning of the 99 heads, and (3) use that observation and meaning to uproot any unsound assumptions.