Insights from The Black Swan, Part 1

I am reading a book. (I’ll wait for your applause.)

(Thank you.)

I am reading Nassim Nicholas Taleb’s The Black Swan right now. I’m less than a hundred pages in, but I’m already convinced all human beings should read it. I could wait to finish the whole thing and write a tidy little recap here, but I decided it would be more fun to witness how long it actually takes me to read a book by regularly posting “insights” — nuggets that, as I read them, make my tester brain cells wriggle.

So here is the first bit that I found worthy of reflection. In this quote, Taleb describes what he calls the “round-trip fallacy”, referring back to his earlier example of a turkey being fed every day for a thousand days, until one day (the Wednesday before Thanksgiving) he is not.

“Someone who observed the turkey’s first thousand days (but not the shock of the thousand and first) would tell you, and rightly so, that there is no evidence of the possibility of large events, i.e., Black Swans. You are likely to confuse that statement, however, particularly if you do not pay close attention, with the statement that there is evidence of no possible Black Swans. Even though it is in fact vast, the logical distance between the two assertions will seem very narrow in your mind, so that one can be easily substituted for the other.”

Notice the emphasis, which is the author’s own: the difference between no evidence of a Black Swan (an improbable event with extreme consequences — in this case, the turkey’s unexpected demise) and evidence of no possible Black Swan. Is this not one of the critical thinking and communication challenges of a software tester? When the testing of a product reveals no evidence of critical bugs, it is easy — and biologically natural, according to Taleb — to mistake that for evidence that there are no critical bugs present.

The former assertion — no evidence of possible bugs — has meaning and impact that depend mostly on context. The mission of my testing and the particular sampling of tests I’ve chosen and executed, among other factors, will have a lot to say about what “no evidence of possible bugs” actually means, including whether more testing could be valuable, and which tests in particular.

But the latter assertion — evidence that no possible bugs exist — has no meaning. It is only true in the isolated island nation of Simplestan, where there is but one computer and one user, and where the software is so simple that not only are all possible risks known, but a finite number of tests can be developed to cover every possible bug. (You may know Simplestan by one of its other names: Paradise, or Boringville.) In the rest of the world, we have to train ourselves to remember that “evidence that no possible bugs exist” is a falsehood — a seductive one (it feels so similar to the other!), but one that can negatively impact the quality of the product when testers and stakeholders are led to believe in it.
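To make that concrete, here is a small hypothetical Python sketch (my own example, not Taleb’s, and not from any real product): a discount function with a boundary bug that a perfectly green, sampled test suite never touches. The passing run means “these particular tests found no evidence of bugs”, not “there is evidence of no bugs”.

# Hypothetical example: the intended rule is a 10% discount for orders
# of 100 items or more, but the comparison excludes exactly 100.
def bulk_discount(quantity: int) -> float:
    return 0.10 if quantity > 100 else 0.0

# A sampled test suite. Every assertion passes, so the run reports
# "no evidence of bugs" for these particular inputs.
def test_bulk_discount_sampled():
    assert bulk_discount(1) == 0.0
    assert bulk_discount(50) == 0.0
    assert bulk_discount(150) == 0.10
    assert bulk_discount(1000) == 0.10

# The unsampled boundary where the trouble lives. Nobody wrote this
# test, so the bug ships; if it were run, the assertion would fail.
def test_bulk_discount_boundary():
    assert bulk_discount(100) == 0.10

Run under pytest (or by calling the test functions directly), the sampled suite is green every time, yet the product still has a bug at exactly 100 items. Different sampling changes what a green run means; it never turns a green run into proof.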

Now, it feels like I’ve always been very aware of all this. But I think that may just be evidence of how good this book is.

Comments

1. Yes — I believe Michael’s series was my first exposure to the book. I finally picked it up about a year ago after hearing so many testers delight over it.

   But no, I hadn’t read that article until now. Good read! Thanks for sharing.

2. Just re-read this and the quote from Dijkstra from 1970 came to mind:

   “Program testing can be used to show the presence of bugs, but never to show their absence!”
