Some answers on TDD

published: Wed, 24-Nov-2004   |   updated: Thu, 16-Jun-2005

One of my readers asked me a few questions about Test-Driven Development (TDD) and I thought them interesting enough that I decided to mold them and my answers into a question-and-answer article.

Q: You've talked about TDD quite a bit, so I'm curious: which testing framework do you use?

JMB: I tend to use NUnit for C#, Delphi 8, and Delphi 2005 (i.e., the .NET universe), and DUnit for Delphi 7 and earlier.

Realize, though, that TDD doesn't categorically state that you should use one of the xUnit frameworks. All TDD talks about is the need to write tests, and to write them before the code you're developing, so that you get a failing test that will pass once you write some judicious code.
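
To make that concrete, here's a minimal sketch of what a first failing test might look like in NUnit (the IntStack class is hypothetical and doesn't exist yet, which is exactly the point: the test fails, or won't even compile, until you write the code to satisfy it):

    using NUnit.Framework;

    [TestFixture]
    public class IntStackTests
    {
        [Test]
        public void PushThenPopReturnsSameValue()
        {
            // red: IntStack hasn't been written yet, so this fails
            IntStack stack = new IntStack();
            stack.Push(42);
            Assert.AreEqual(42, stack.Pop(), "Pop failed");
        }
    }

Once you've watched the test fail, you write just enough of IntStack to turn it green.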

I'll admit that there are times when I just write a small test program because the entire xUnit framework would be overkill. For example, elsewhere on my website is an exhaustive test program for the sorts in my book; it doesn't use DUnit.
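
As a sketch of what such a standalone test program might look like (the MySorts.Sort call is a stand-in for whatever routine is under test, not the actual code from the book):

    using System;

    class SortTest
    {
        static void Main()
        {
            int[] data = { 5, 3, 8, 1, 9, 2 };
            MySorts.Sort(data); // stand-in for the routine under test

            // check that the array is now in ascending order
            for (int i = 1; i < data.Length; i++)
            {
                if (data[i - 1] > data[i])
                {
                    Console.WriteLine("FAIL: out of order at index " + i);
                    return;
                }
            }
            Console.WriteLine("PASS");
        }
    }

It's crude, but for a quick exhaustive check of one routine it does the job without dragging in a framework.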

So, in essence, the xUnit frameworks provide a nice environment for writing tests, but that doesn't mean they should be used exclusively. The important thing is to write test code and use it.

Q: When you use TDD how complete a test case do you write before "going for the green", so to speak?

JMB: It depends. With "strict" TDD, you are supposed to write a simple test that tests a single piece of functionality, and then write the simplest code that makes it pass. I tend, however, to streamline things a bit: I'll write a more complex test that relies on several connected pieces of functionality, and then make that pass. This means, in turn, that my red-green-refactor cycles tend to be longer than the usual few minutes. I do try, though, to keep a cycle under 30 minutes; any more than that and I feel as if I'm writing too much untested code.
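
Here's a sketch of the kind of "streamlined" test I mean, exercising several connected members of the hypothetical IntStack in one go rather than one member per test:

    using NUnit.Framework;

    [TestFixture]
    public class IntStackBehaviorTests
    {
        [Test]
        public void PushPopAndCountWorkTogether()
        {
            IntStack stack = new IntStack();
            Assert.AreEqual(0, stack.Count, "Count wrong for empty stack");

            stack.Push(1);
            stack.Push(2);
            Assert.AreEqual(2, stack.Count, "Count wrong after two pushes");

            Assert.AreEqual(2, stack.Pop(), "Pop returned wrong value");
            Assert.AreEqual(1, stack.Count, "Count wrong after pop");
        }
    }

Making this one test pass means implementing Push, Pop, and Count together, which is why the cycle runs longer than a strict single-assertion test would.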

And then again, if I'm writing a class that'll be used a lot, especially if I've written similar classes in the past, I'll write the class as a skeleton and implement each public method (or property getter/setter) to throw an exception (or raise one, in Delphi-speak) as a placeholder. That way I'll know immediately if I use a method that hasn't been properly implemented (I'll get an exception), and I'll know at the end which tests haven't been written (because the corresponding method is still a one-liner throw statement). This trick is especially useful if the class is going to implement an interface (or several interfaces), because you have to implement all of an interface's methods in one fell swoop; otherwise your code won't compile.
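
In C#, such a skeleton might look something like this (a sketch; the IIntStack interface is assumed for illustration):

    using System;

    public class IntStack : IIntStack
    {
        // placeholder bodies: calling any unimplemented member
        // fails immediately and loudly
        public void Push(int value)
        {
            throw new NotImplementedException("Push not written yet");
        }

        public int Pop()
        {
            throw new NotImplementedException("Pop not written yet");
        }

        public int Count
        {
            get { throw new NotImplementedException("Count not written yet"); }
        }
    }

Each red-green cycle then replaces one of these throw statements with a real implementation, driven by a failing test.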

If you do write the skeleton of a class with throws/raises for its public methods and properties, though, don't get so wedded to your initial design that you fail to recognize its deficiencies. Always be prepared to refactor your class. Obviously, the more intricate or detailed you make your initial design, the harder this refactoring becomes, so be aware that you could be relaxing the TDD discipline too much.

Q: How do you decide what to write for the error messages in your checks?

JMB: Error messages? Tough one, that. I tend to be parsimonious with my error message text: "Add failed", "Count is wrong after adding an item", that kind of thing. The reason is that you're cycling fairly rapidly between writing tests and writing code, so it's usually easy to see where the problem lies when a test fails. If it isn't obvious, you can add more text to the message strings to help pinpoint the problem area, and then retest.
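
In NUnit terms, that means short message strings on the asserts, something like this (the IntList class is hypothetical):

    using NUnit.Framework;

    [TestFixture]
    public class IntListTests
    {
        [Test]
        public void AddUpdatesCount()
        {
            IntList list = new IntList();
            list.Add(42);
            // terse messages: just enough to identify the failing check
            Assert.AreEqual(1, list.Count, "Count is wrong after adding an item");
            Assert.AreEqual(42, list[0], "Add failed");
        }
    }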

Nine months down the road, when one of your tests fails during a build, you may rue the day you used terse error messages; but in reality this situation is just the same as the previous one. It should be fairly easy to find the problem, especially if all the developers are running the majority of the tests during their normal development.

Q: Anything else you'd like to say in summary about TDD based on my previous questions?

JMB: In my view, TDD is more of a development discipline than a restrictive set of never-to-be-broken programming rules. If you take the basic premise that you should write a test, then write the code to make that test pass, running all the tests all the time, and refactoring every now and then, you'll do fine. It may not be pristine TDD as described by Beck et al., but it's light years ahead of having nothing.

Having said that, I do subscribe to the viewpoint that the red-green-refactor cycle shouldn't be so long that you haven't run a test for several hours because the code you're writing is so complex and intricate. If I approach that situation (and for me the threshold is about 30 minutes), I tend to feel that I'm out of control: that my test is too all-encompassing and I should break it down, or that my code is too complex and should be refactored.

So, try "strict" TDD for a while. As you do so, you'll find that you'll start taking shortcuts. Some of these shortcuts will be beneficial and will enable you to become more efficient ("hey, let's test and code Insert() and Count() at the same time"), and some of them will be detrimental ("oh, rats, I'm getting some bizarre exception in the middle of this code that's taken me 4 hours to write, I'll need the debugger to sort this lot out"). After some time doing this, you'll find that you have molded TDD to suit your programming style, rather than the other way around.