Testing on the Edge

I am a tester on the edge.

For several years, I’ve noticed tensions within myself while I test software: I tend to teeter on the edge between pairs of things – tactics, concepts, mindsets, emotions. When I’m aware that I’m testing on the edge in some way, I make note of it. It’s happened enough now that I’m convinced it’s a thing, and worth sharing.

During the years when I collected these examples, I struggled with what word or phrase I could use to describe them in a way that would be memorable. I thought for a time that an appropriate image was balance – maybe walking on a balance beam or a tightrope. But balance isn’t quite right; as you’ll see below, the idea isn’t to seek some perfect mix of each side. More recently I thought, hey, maybe it’s yin and yang! Borrow from the ancients, right? But yin and yang represent complementary forces that form a dynamic system – a whole greater than the sum of its parts. Again, not quite there.

Most recently, I heard James Bach use the word tension during the Rapid Software Testing course; he was describing things like diversification in tension with cost vs. value. I immediately saw a connection to my testing on the edge concept. Nice! (There is also a tangential concept covered in the RST Appendices (p. 14) called “Exploratory Testing Polarities.”) But something else I learned during the RST course is that if I name things myself I am more likely to remember them. So, awkward as it may seem, I’m sticking with the refrain that’s been in my head throughout the years: testing on the edge.

On to the examples.


Confidence vs. Self-doubt

As a tester, I find it important to keep on the edge between confidence in my abilities and healthy self-doubt.

I think having self-doubt is the more obviously desirable trait for testers. We are natural-born and well-practiced questioners, and we question not just the product and the project but ourselves as well. Is this a good test? What am I trying to learn by doing this? Am I being efficient? What assumptions am I making? What are my blind spots? My biases? Skilled testing flows from healthy self-doubt.

But too much self-doubt can be crippling. I teeter back to confidence to get things done. I question myself to refine my decisions, but I trust myself to actually make decisions. This is a good enough test. This is an efficient use of time. I’m making this assumption because it is reasonable. I have practiced, I can do this. Skilled testing flows from confidence.

But too much confidence leads to rashness, conceit, blind spots… so I teeter. I stay on the edge.

Clean vs. Dirty Test Environments

When I test on my current project, I use the same databases for a long time, often carrying over from release to release. This has the benefit of allowing the test data to become “dirty” over time, improving the chance of revealing bugs that only occur in complex scenarios that resemble the real world. For the same reason, I usually avoid deleting test data after testing a specific scenario; by letting data from various tests accumulate over time, I serendipitously stumble into interesting bugs later on (more on serendipity in a bit). Some bugs love the dirt and grime.

Then again, it’s difficult to see some other bugs through dirty glass. Maybe if things weren’t so dirty, I could see more. So I also try to keep a clean database, with little data, where I clean up after myself after each test. This helps me when I need to see how things work under very specific conditions; when a bug shows up in the clean environment, it’s much easier to see how it got there and find its critical conditions.
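
To make the contrast concrete, here’s a minimal sketch of the two habits in Python, with sqlite standing in for my real test databases; the table and file names are hypothetical.

    import sqlite3

    def clean_db():
        """A fresh, in-memory database: every session starts from known, minimal state."""
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
        return conn

    def dirty_db(path="dirty_test.db"):
        """A file-backed database that accumulates data across sessions and releases."""
        conn = sqlite3.connect(path)
        conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, status TEXT)")
        conn.execute("INSERT INTO orders (status) VALUES ('left over from an earlier session')")
        conn.commit()  # deliberately never cleaned up; the grime builds over time
        return conn

    # Bugs that love grime get the accumulating database...
    dirty = dirty_db()
    # ...while pinning down a bug's critical conditions gets the pristine one.
    clean = clean_db()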

Of course, in practice, I don’t say, “Now it’s time to test in Dirty Database A. Okay, switching to Clean Database B for this.” I stay on the edge, teetering between dirty- and clean-environment mindsets and habits as I navigate my exploration.

MFAT vs. OFAT

I stay on the edge when it comes to varying factors while testing: I teeter between varying multiple factors at a time (MFAT) in order to shake out bugs as quickly as possible and varying one factor at a time (OFAT) to make it easier to pin down the critical conditions behind a bug once it’s found. This is a common source of tension in my exploratory testing. Varying conditions one factor at a time, noting each condition as I go, makes it much more likely that, when I encounter a bug, I can say “Aha, this, this, and this led to that bug.” But I also know that testing strictly in this manner is time-consuming and can be very boring, even soul-draining. By shaking things up with an MFAT strategy, I increase my chances of brushing against a bug in less time, while keeping my senses alert and interested.
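
As a rough illustration (not how my project actually encodes its factors), here’s a sketch in Python of both strategies over a few made-up factors:

    import random

    # Hypothetical test factors; real projects would have their own.
    FACTORS = {
        "browser": ["chrome", "firefox", "safari"],
        "locale": ["en-US", "de-DE"],
        "role": ["admin", "guest"],
    }

    def ofat_conditions(baseline):
        """One factor at a time: vary a single factor away from a known baseline."""
        for factor, values in FACTORS.items():
            for value in values:
                if value != baseline[factor]:
                    condition = dict(baseline)
                    condition[factor] = value  # exactly one factor differs
                    yield condition

    def mfat_conditions(count):
        """Multiple factors at a time: random combinations to brush against bugs faster."""
        for _ in range(count):
            yield {factor: random.choice(values) for factor, values in FACTORS.items()}

    baseline = {"browser": "chrome", "locale": "en-US", "role": "admin"}
    for condition in ofat_conditions(baseline):
        print("OFAT:", condition)
    for condition in mfat_conditions(3):
        print("MFAT:", condition)

When an MFAT combination exposes a bug, the OFAT generator gives me a disciplined way to walk back toward the one factor that mattered.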

Regression Checking vs. Testing

When it’s time for me to test for regression bugs in a new release of the software I test, I have a couple of objectives, constrained by limited resources (namely, my time, as I am my team’s only tester): (1) to cover as much of the same ground as possible, from release to release, to have some confidence that things that were once working are still working; and (2) to test the once-working things with fresh eyes, looking for new issues by investigating in new ways. This means I end up testing on the edge: teetering between regression checking and regression testing.

Regression checking emerges from the part of me that wants to follow a checklist, to feel like I’m not forgetting anything, to do things in the same way as the past, running checks that I’ve developed through years of testing. Regression testing emerges from the part of me that wants to explore the “same old” software with new eyes, purposely avoiding the temptation to run the same checks. I don’t want to forget anything important, but sometimes it’s worth the risk of forgetting one minor thing if it means getting out of a check-focused rut and letting my mind wander familiar territory in unfamiliar ways. Hence, I keep to the edge.

Meta-thinking vs. Subconscious Thinking

I need to be aware of my own thinking: how I am thinking, what my biases are, my emotions, my thought processes; but I can’t be constantly aware. Too much meta-level thinking can be a hindrance to good testing – I believe I do my best when I also lean on my subconscious, that stuff we usually call instinct. And the more I try to be hyperaware of how that subconscious is working, the more (I fear) it will cease to work at all.

For the most part, I believe that self-awareness of how my thinking works should be relegated to when I am not actually testing: to quiet times of reflection. That way, if I decide something needs correcting (a bias that has become too blinding, maybe), I can let that correction settle into my subconscious mind, rather than trying to stay consciously aware of it the next time I am testing.

There’s no easy answer to this, but there is a lot of literature on the subject. I read up, and I keep on the edge.

Notes vs. Flow

I keep lightweight testing notes that serve a few purposes: keeping track of what I’ve tested; capturing new test ideas (expanding the checklist); flagging possible bugs; and recording troubleshooting notes while following up on a bug. This last purpose can be very potent, helping me keep track of conditions I’ve tried and the results I observed as I uncover a better view of the bug.

But here’s something else that’s potent while testing: flow. When I test uninterrupted for a while, I can get into a flow state, where I keep most new information in my brain’s working memory, interacting with the software, asking and answering questions on the fly. Stopping to take a note as each new piece of information pops up breaks this flow. Taking a note because I think it’ll help me keep track of something can actually disrupt my brain’s natural ability to keep track of things on its own. Have you ever stopped to take a note while testing, returned to the software, and thought, “Now, wait… what was I doing?”

So I teeter. I stay on the edge. My default preference is to stay in flow as much as possible. What pushes me to take notes most often is an abundance of potential bugs that aren’t quite relevant to what I’m trying to learn about at the moment: I can hold things relevant to the current thread of testing in working memory, but I will forget unexpected behavior that bubbles up on the periphery.

Chaos vs. Order

Effective testing is enhanced by the chaos of randomness and chance. I’m just skimming the surface here, but a great deep dive on the concept of serendipity in testing is Rikard Edgren’s webinar, “Testers Are Often Lucky.”

This idea of chaos also ties into what I said about “dirty” test environments above. While I test, I often indulge my brain’s subconscious impulses: What if I click there? What if I fill these fields with values like this? What if I navigate these screens in this order instead of that? When I ride these impulses without concern for what I’m actually doing – when I don’t let chaos be hemmed in by order – I find wholly unexpected bugs in the software.

Yet completely unbounded chaos can be unproductive. Order has its own value in testing. What happens when I find a bug: how do I figure out how to reproduce it after my chaotic flourish? Or what happens after an hour of chaos: how do I know what I’ve accomplished and keep a sense of coverage?
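
One small way to keep a thread of order running through the chaos, sketched here in Python with hypothetical action names standing in for real UI interactions: let a single logged seed drive the randomness, so a chaotic flourish that finds a bug can be replayed exactly.

    import random
    import time

    # Hypothetical stand-ins for real UI interactions.
    ACTIONS = ["click_save", "reorder_rows", "paste_long_text", "toggle_filter"]

    def chaotic_session(steps, seed=None):
        seed = seed if seed is not None else int(time.time())
        rng = random.Random(seed)  # all of the session's chaos flows from this one seed
        log = []
        for _ in range(steps):
            action = rng.choice(ACTIONS)
            log.append(action)     # the sliver of order that makes chaos reproducible
            # perform(action)      # this is where the real UI would be driven
        print(f"seed={seed} log={log}")
        return seed

    # The first run is chaos; rerunning with the printed seed replays it exactly.
    seed = chaotic_session(steps=5)
    chaotic_session(steps=5, seed=seed)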

I like to think of this one as keeping on the edge between Batman and the Joker. I teeter between the order that helps keep track of what’s been tested, including conditions and variables that may help with reproducibility, and the chaos that stirs up productive serendipity.


I’d love to hear from you. Does this concept resonate with you? I’ve been the only tester on my team (and company) for the last three years, so I am particularly curious how much of this has to do with wearing all of the tester hats. Do you teeter too? In what ways are you a tester on the edge?

Comments

  1. Jeremy-

    I dig this kind of thinking and think it absolutely has value in testing.

    Speaking of the ancients, more workable analogues might be Aristotle’s Golden Mean, Confucius’s Doctrine of the Mean, or the Buddha’s Middle Way.

    However, I like (on a personal level) that you mention yin and yang, because philosophical Daoism is near and dear to me. I’ve also tried to adopt some of its ideas into my testing.

    The concepts of the Dao (the Way), the natural, spontaneous order of the universe, and wu wei, living and acting spontaneously in harmony with the Dao, neatly map to purely exploratory testing. “Wandering on the Way,” as Zhuangzi would say.

    Keep on wandering, fellow tester! And, when necessary (leaning to that other side of the edge), plan and script of course. ;D

    1. Thanks, Colby. I don’t think I’ve cracked an ancient philosophy text since college, but your comment has certainly renewed my interest!

      [furiously scribbling down concepts to look up later…]

  2. This post is amazing. The behavior you describe is exactly how I work. I hadn’t yet been able to find the right words for it, but these are them.
    Thanks for this post; I will quote it in the future.

  3. Had this in my to-read list for a long time, and I’m glad I finally picked this up. These descriptions of balance, especially flow vs. note taking, and confidence vs. self-doubt, are ones I identify with strongly, and you’ve written them far more eloquently than I could’ve.

    I think as testers we have different buckets of knowledge and so teeter on multiple axes? We balance things like client knowledge, domain and system knowledge, and context/project-dependent knowledge and have all that running through our heads when we test, which can be both useful when finding bugs, and a hindrance when it comes to the more chaotic nature of testing?

    Thank you again for this post!

    1. Thank you for your comment, Gem! I really appreciate it.

      I love your suggestion of the different knowledge axes we teeter on as testers – I feel that way too. We need to know a lot of different things to test well; I’ve also found I can know juuust enough about something to start seeing bugs where there aren’t any.

      This idea also makes me think of how I sometimes teeter between testing for different interests. For example, between testing on behalf of the designers’ and developers’ specific vision of how something should work, and testing on behalf of my imagined user’s expectations and desires.
