The New York Times had a recent quiz that sparked some thought among the audience. It was a simple task: figure out the mathematical rule behind a sequence of numbers. As a reader, you can enter your own numbers and see if they pass or fail your guess at the rule.

This is very similar to the way we unit test. Provide an input, validate the output. Here, you provide the input (the numbers), and the output is whether the number sequence matches the rule.

You can imagine the unit test for it:

```
function checkRule(numbers) {
  // some magical formula we're supposed to figure out
}

describe('my mathematical equation', function () {
  it('should pass when doubling each number', function () {
    expect(checkRule([1, 2, 4])).to.be.true;
    expect(checkRule([2, 4, 8])).to.be.true;
    expect(checkRule([3, 6, 12])).to.be.true;
    expect(checkRule([5, 10, 20])).to.be.true;
  });
});
```

Looking at this code, it's easy to assume that the rule is "each number should double the previous one." After all, all four assertions pass, so we've got green tests!

The trick with the quiz is that the rule is very simple: each number must be larger than the previous one. This broad rule makes it easy for people to assume their more complex guess is the correct one. Every input they give to validate their rule returns true, so it must be right.
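A minimal JavaScript sketch of the quiz's actual rule, reusing the `checkRule` name from the earlier example, might look like this:

```javascript
// The hidden rule: each number must be strictly larger than the one before it.
function checkRule(numbers) {
  return numbers.every(function (n, i) {
    return i === 0 || n > numbers[i - 1];
  });
}

// All of the doubling sequences pass...
checkRule([1, 2, 4]); // true
checkRule([3, 6, 12]); // true
// ...but so does any other strictly increasing sequence.
checkRule([1, 2, 3]); // true
```

Notice that the doubling hypothesis is a strict subset of this rule, which is exactly why confirming inputs alone can never distinguish the two.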

Yet there’s a flaw to this testing methodology, as the article points out:

> Remarkably, 78 percent of people who have played this game so far have guessed the answer without first hearing a single no. A mere 9 percent heard at least three nos — even though there is no penalty or cost for being told no, save the small disappointment that every human being feels when hearing "no."

The article attributes this to "confirmation bias," which partially applies. But a better description is a lesser-known bias called congruence bias (one I was unaware of before hearing about this article on The Skeptic's Guide to the Universe): "the tendency to test hypotheses exclusively through direct testing, in contrast to tests of possible alternative hypotheses."

In our tests above, we’re only checking for positive results. We never ask “does this fail if I provide data which contradicts the rule?”

Every suite of unit tests should include negation checks. A simple `expect(checkRule([2,4,6])).to.not.be.true;` would have easily shown us that the sequence passes despite 6 not being twice 4.
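To see concretely what a negation check catches, here is a sketch (with illustrative names, not from the article) comparing the doubling hypothesis against the broader hidden rule:

```javascript
// The hypothesis we formed from the green tests.
function doublesEachTime(numbers) {
  return numbers.every(function (n, i) {
    return i === 0 || n === numbers[i - 1] * 2;
  });
}

// The rule the quiz actually uses.
function increasesEachTime(numbers) {
  return numbers.every(function (n, i) {
    return i === 0 || n > numbers[i - 1];
  });
}

// Positive cases cannot tell the two rules apart:
doublesEachTime([3, 6, 12]);   // true
increasesEachTime([3, 6, 12]); // true

// Only a negative case exposes the difference:
doublesEachTime([2, 4, 6]);   // false
increasesEachTime([2, 4, 6]); // true — the broad rule still passes
```

Both rules agree on every "confirming" input, so only inputs chosen to falsify the hypothesis reveal which rule is actually in play.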

Again, from the article:

> When you want to test a theory, don't just look for examples that prove it. When you're considering a plan, think in detail about how it might go wrong.

That second part rings especially true for unit testing. It’s easy to assume that because your tests pass, the code and the tests are working as expected. But we must remember what Edsger Dijkstra said long ago:

> Testing shows the presence, not the absence of bugs.

Think about confirmation and congruence bias next time you're writing your tests. Keep in mind the phrase "fail fast." Prove that your code really does what it claims to do, and always keep a skeptical mind when coding. Don't wait until it's too late to learn the harsh truth.

In the words of Richard Feynman:

> The first principle is that you must not fool yourself—and you are the easiest person to fool.