Unit test pitfalls

I have been reading a number of articles on unit tests and test-driven development (TDD). I even attended a few unit-test-specific talks at the NFJS Java Symposium to make sure that I was up to date on all of the latest and greatest. Unit tests and TDD are all the rage these days and everyone is touting their benefits. But there are also a number of pitfalls that must be understood before entrusting your life to these safety nets.

Pitfall 1: Parallel Concerns

While writing a unit test for class Foo you think of cases A through F. These six cases represent the whole of what you are concerned about for Foo. Feeling confident in the tests, you begin coding. As you write the implementation, you're thinking about cases A through F, since that's the whole of the concerns that you know about.

In the middle of coding you realize that there is a case G. G is not in the tests you already wrote so you add a unit test for it. So now, as far as you know, the functionality is completely covered by cases A through G.

But what if there are cases H and I? Since you didn't think of them initially, they're not in the unit tests. Since they didn't become evident as the code was written, they're likely not accounted for there either. If you're lucky, cases H and I will be implicitly tested and/or coded for. But what if they're not?

The concerns of testing and coding tend to run parallel to each other: if you thought of it, then it will be coded and tested, and the test and the code will typically approach the problem in the same way. What about the orthogonal concerns, the things you didn't think of or the aspects that are only uncovered when the problem is approached from a different angle?
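To make this concrete, here is a contrived sketch (the Discount class and its cases are invented for illustration). The tests and the implementation were written from the same mental model, so both miss the same orthogonal case:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DiscountTest {

        // Cases A and B: the ones thought of up front.
        @Test
        public void fullPriceWhenRateIsZero() {
            assertEquals(100.0, Discount.apply(100.0, 0), 0.001);
        }

        @Test
        public void halfPriceWhenRateIsFifty() {
            assertEquals(50.0, Discount.apply(100.0, 50), 0.001);
        }

        // Case G, noticed mid-implementation and added to the suite.
        @Test
        public void freeWhenRateIsOneHundred() {
            assertEquals(0.0, Discount.apply(100.0, 100), 0.001);
        }

        // Case H never occurred to anyone: Discount.apply(100.0, -50)
        // happily returns 150.0, and no test says otherwise.
    }

    class Discount {
        static double apply(double price, int rate) {
            return price * (100 - rate) / 100.0;
        }
    }

Every test passes because the tests and the code share the same blind spot; the green bar only confirms that the code matches the tests' view of the world.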

With coding experience, one gains a sense of knowing what you don't know. Those aspects can usually be boxed into a corner, documented, and possibly explicitly tested for. But we're all human, and this sense of knowing what you don't know certainly isn't quantifiable, so some orthogonal concerns are going to be missed.

Pitfall 2: Invalid or Not Useful

Some time ago I was working with a third party library. I wanted to provide another implementation for some custom functionality. I was in luck; the code was structured in such a way that there was an interface right where I needed it. And to top things off, there were even unit tests provided with the interface. This was going to be a good day!

I started off by running the unit tests on the existing (functional) implementation. I was rewarded with green bars; my safety net is in place. Sure, I don't know what the unit tests are actually testing for, but there are a number of them and the current developers use them. I will learn what the tests do as the need arises and I will add my own as necessary.

I started to stub out my own implementation. After a few hours, I have all of the basics in place. Now’s a good time to see how far off base I am. I run the tests and all shows up green! How good can this day get?

But wait! I notice an error in my logic, one that should be producing incorrect results. Well, maybe the unit tests don't explicitly test for that case. I'll make a note to write one.

But something just doesn't feel right, so I quickly stub out another implementation that returns nothing but hardcoded default values. I run the unit tests and all is green. What the…?!

After devoting a hefty chunk of time to weeding through the existing unit tests, I discovered the problem: the unit tests were all negative tests. In other words, each test checked to ensure that something wouldn't happen. Since my code never returned null or threw an exception, they all passed. Fifteen tests comprising about one thousand lines of code served to tell me nothing useful in my case.
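Boiled down, the suite looked something like this (the names and the domain are invented, but the shape is faithful to what I found): every assertion guards against something bad happening, so a do-nothing stub sails straight through.

    import org.junit.Test;
    import static org.junit.Assert.assertNotNull;

    public class QuoteServiceTest {

        private final QuoteService service = new StubQuoteService();

        // A "negative" test: it only proves lookup doesn't return null.
        @Test
        public void lookupNeverReturnsNull() {
            assertNotNull(service.lookup("IBM"));
        }

        // Another one: it passes as long as no exception escapes.
        @Test
        public void lookupDoesNotThrowOnUnknownSymbol() {
            service.lookup("NO-SUCH-SYMBOL");
        }
    }

    interface QuoteService {
        Quote lookup(String symbol);
    }

    // A stub that computes nothing, yet turns every test above green.
    class StubQuoteService implements QuoteService {
        public Quote lookup(String symbol) {
            return new Quote(symbol, 42.0); // hardcoded default
        }
    }

    class Quote {
        final String symbol;
        final double price;
        Quote(String symbol, double price) {
            this.symbol = symbol;
            this.price = price;
        }
    }

Not one assertion pins down what a correct quote actually looks like, which is why the hardcoded implementation passed.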

Pitfall 3: Requirements Match

The business logic for computing the results of some portfolio requires the use of a spatial partitioning tree structure. You approach the problem by first writing a suite of tests. These tests are robust and cover all of the corner cases. Due to the complexity of the algorithm, the tests even go so far as to parse the internal tree structure to provide that extra degree of certainty.
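One of those white-box tests might look something like this (the PartitionTree class and its median-split rule are hypothetical stand-ins): the test reaches into the tree's internals, which feels rigorous, but it can only ever verify the developer's own reading of the requirement.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PartitionTreeTest {

        // Asserts on internal structure: the root must split at the median.
        // If the client actually expects splits at the spatial midpoint,
        // this test rigorously enforces the wrong behavior.
        @Test
        public void rootSplitsAtMedianValue() {
            PartitionTree tree = PartitionTree.build(new double[] {1.0, 5.0, 9.0});
            assertEquals(5.0, tree.root().splitValue(), 0.001);
        }
    }

    class PartitionTree {
        private final Node root;
        private PartitionTree(Node root) { this.root = root; }

        static PartitionTree build(double[] sortedValues) {
            // Splits at the median (the developer's interpretation).
            return new PartitionTree(new Node(sortedValues[sortedValues.length / 2]));
        }

        Node root() { return root; }
    }

    class Node {
        private final double splitValue;
        Node(double splitValue) { this.splitValue = splitValue; }
        double splitValue() { return splitValue; }
    }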

After a week of development time, you and the rest of the engineering staff are confident in the logic. The software is pushed off to quality control and the results come back: zero defects. The obligatory pizza and beer party is thrown for this extraordinary feat and the engineering staff goes home for a well-deserved weekend of rest.

On Monday, still riding the high of the previous week, you arrive early only to find a Post-it on your monitor telling you that the client rejected the software. You feel your heart sink. How could this be?

After some investigation, it is determined that your understanding of the tree structure and that of the client differed drastically. The software was certainly free of defects, but the results did not match the requirements.

Conclusion

All of these pitfalls share a common theme: don't let the green light lull you into a false sense of confidence. Unit tests may provide a safety net, but make sure that the net is directly under you and not just lying on the ground. Also make sure that if the tightrope is frayed and does break, the safety net isn't made of the same faulty material.

