Thursday, January 15, 2009

Test Driven Blogging

Years ago, when I first started learning and practicing Test Driven Development (TDD), I became extremely overzealous, blindly believing that all code needed to be unit tested. (One of the more extreme cases was my over-the-top use of TSQLUnit, which included testing simple, low-risk DDL changes; in retrospect, that was probably too much effort for too little gain.) This attitude was probably a common rookie mistake: becoming so enamored with a new methodology (or language, or framework, or ...) that you assume this is how all software should be written. It was as if I had discovered the elusive magic bullet that many programmers spend most of their careers searching for. Unit testing, along with its subset TDD, has steadily grown in popularity across the spectrum of programming communities. How could I possibly go wrong in my newfound beliefs?

However, this past year, I have seriously reconsidered my views on unit testing. I have significantly curtailed its use, becoming more selective about when it should be applied. I have realized that while testing is an important tool, it is far from the "be all and end all" that others have proclaimed it to be. Instead, I have readjusted my thinking to focus on what is genuinely the most important goal in programming: actually delivering good working software in an iterative manner.

On Stack Overflow (SO), Kent Beck, an early pioneer of TDD and a creator of JUnit (the precursor of all modern xUnit-style frameworks), provided a very interesting and perhaps unexpected response to the question: How deep are your unit tests?

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence.

The reality of what Beck wrote cannot be ignored. (At least, I think it's him; the tone and content of his other responses on SO seem to indicate that it is.) Principally, working code is more important than the tests themselves. Tests are just one of numerous means toward the goal of delivering good-quality software on a frequent basis, and they must certainly not overshadow that intent.

Along with Beck's answer, it was unquestionably reassuring to read a recent post by Ayende (a notable .NET blogger/developer) that tackles this very topic of software delivery and testing head on:

I want to make it explicit, and understood. What I am riling against isn't testing. I think that they are very valuable, but I think that some people are focusing on that too much. For myself, I have a single metric for creating successful software:

Ship it, often.

Keep in mind that these statements come from the creator of Rhino Mocks, one of the more popular mocking frameworks for .NET. Consequently, they carry a lot of weight, written by someone who truly and thoroughly understands the virtues of unit testing and TDD. Ayende is someone I most certainly admire as being about as close to the ideal of a 10x programmer as it gets, not just in the realm of .NET but in programming in general (some of that admiration stems from being in complete awe of his unearthly prolific blogging). Having him confirm something I had come to realize on my own this past year definitely helps validate my current views. Quite simply, I have substantially toned down my TDD rhetoric and restrained my testing impulses in favor of renewing my true objectives in programming.

My learning of TDD coincided with my learning of .NET and C#. At that point in time, I compulsively consumed the writings of a somewhat noted blogger in the .NET community who relentlessly championed unit testing and TDD. Practically treating this person's words as pure gospel, I considered them quite representative of the ALT.NET community, serving as one of the leading voices for all that it embodies. This is a community deeply immersed in the ways of unit testing and TDD.

However, with my newly reformed outlook on software development, I have become much more wary of that blogger's "best practices" crusades. The blog continues to be obsessively fanatical about the non-negotiable importance of unit testing and TDD, almost to the exclusion of any competing methodologies or tools. Its dogmatic writings assuredly fall into the "focusing on that [testing] too much" camp described in Ayende's post. Originally, this programmer's "test driven blog" held one of the few selective spots on my blog's "Blog List" links. But since it no longer carries the same relevance to me that it used to, I decided to remove it from the list.

Nevertheless, I still read that blog from time to time because it does have great information and observations regarding software development and best practices in the .NET ecosystem. In addition, I still consider myself a TDD practitioner despite reducing the scope and influence of unit testing in relation to my programming style. I just now better grok what my priorities are.


J.P. Hamilton said...

First off, you need to blog more. :) Second, I appreciate everything you have said about TDD. I am in a similar spot myself. I think TDD is great, but after shooting for 100% coverage and mocking the known universe, I know that it can be taken to the extreme. In fact, I think that tons of mocking could be a code smell. At a certain point, it seems to hinder my ability to refactor effectively. I am definitely searching out a more rational approach to the practice.

Ray Vega said...

@J.P. Hamilton-

As I wrote above, when you are first learning something new, such as unit testing and TDD, your initial instinct is to apply it everywhere and anywhere you can. Doing so typically helps you, as the learner, both to understand it and to build mastery of it over time.

Along those same lines, the TDD trend and movement is largely about getting non-testing programmers into the habit of testing as part of their development process. In order to do that, the easier way to sell the concept is simply to say that all code should be tested, rather than only certain code.

It is probably better to strive for quality over quantity when writing tests (quantity being the potential pitfall of targeting 100% test coverage). Otherwise, you might end up spinning wasteful cycles writing test code that provides no true benefit, direct or indirect, to delivering and maintaining your software. Worse, it could have a costly, negative impact, with developers drowning in test code that needs to be written and maintained.

Once you know the ins and outs of how to unit test in general, it takes additional time and practice to know when, and for what, to write tests. This is more difficult to categorize and define because it requires more experience and skill with writing good-quality code and software in general, as you develop an eye for which code screams for testing and which does not. As I mentioned, for most people unable to discern when to write tests, it is easier to just test everything (or not test at all), which is not necessarily a good thing.

As to precisely which situations in code do need tests written for them, I cannot provide many concrete examples, as I am still learning. However, I will offer that any code with complex conditional logic would probably always be a good candidate.
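To make that last point concrete, here is a minimal sketch of the kind of branch-heavy code I have in mind, with one assertion per branch (or branch combination). The function, its rules, and all the names are invented purely for illustration; I'm using Python here for brevity, but the same idea applies just as well to C# with NUnit or any xUnit-style framework.

```python
# A hypothetical discount calculator -- the kind of interacting conditional
# logic where bugs tend to hide, and so where tests pay for themselves.
def discount_rate(is_member, order_total, coupon=None):
    """Return a discount rate (0.0 to 0.25) from several interacting rules."""
    rate = 0.0
    if is_member:
        rate = 0.10                  # members get a base discount
    if order_total >= 100:
        rate = max(rate, 0.15)       # large orders get at least 15%
    if coupon == "SAVE20":
        rate += 0.20                 # coupon stacks on top
    return min(rate, 0.25)           # but discounts are capped at 25%


# Each assert pins down one branch, so a future change to any single
# condition is caught immediately.
assert discount_rate(False, 50) == 0.0            # no conditions met
assert discount_rate(True, 50) == 0.10            # membership only
assert discount_rate(False, 150) == 0.15          # large order only
assert discount_rate(True, 150, "SAVE20") == 0.25 # stacked, then capped
```

By contrast, a trivial property getter or a one-line pass-through has no branches for a test to exercise, which is exactly the sort of low-risk code I no longer feel compelled to cover.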

Oh yes, I'll keep on working on blogging more frequently. :-)