How Much Testing Is Enough?
One of the major decisions a development group must make in all of this is: how much testing is enough? There is no simple answer. The only useful way I know to address the question is in context: is the project a new design, or an enhancement of an existing code base?
When designing a new application, the world is wide open for test-first development and all of the advantages of the test process. It is a relatively simple thing to install a product like Cobertura, for example, and set high test coverage thresholds. I participated in a project where we started the threshold at 90% and worked it up to 94%. That's outstanding, but at the same time probably a little on the overkill side.
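As a sketch of what "setting a high threshold" can look like in practice, here is one common way to wire Cobertura into a Maven build so that the build fails below a chosen line-coverage rate. The 90% figures echo the project described above; treat the exact plugin version and values as illustrative assumptions, not a prescription.

```xml
<!-- Hypothetical pom.xml fragment: fail the build if coverage drops below 90%. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>cobertura-maven-plugin</artifactId>
  <configuration>
    <check>
      <totalLineRate>90</totalLineRate>
      <totalBranchRate>90</totalBranchRate>
      <haltOnFailure>true</haltOnFailure>
    </check>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With a setup along these lines, raising the threshold from 90 toward 94 is a one-line change that the whole team feels immediately.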
But the point is that in such an environment, high thresholds are possible. From that high coverage comes strong confidence that the code base can be deployed to production with little concern. It comes down to a trade-off: how much pre-deployment testing you want to do by hand versus how much time you want to allocate to developing test coverage.
My view is that when the world is open I want the most coverage I can get in the places where most of the problems are going to occur. I'm not going to be too concerned with writing what we kindly refer to as "coverage tests", i.e. tests written against every method in every object to get the coverage level as high as possible. Why write tests against accessor methods in data objects, for example? What's the point? What are you testing? That the setter method won't barf if you send it a null? I mean, come on...
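To make the complaint concrete, here is a hypothetical data object and the kind of "coverage test" being dismissed. The class and names are invented for illustration; the point is that the test cannot fail in any interesting way, yet it counts toward the coverage number.

```java
// Hypothetical data object with nothing but accessors.
class Customer {
    private String name;

    public void setName(String name) { this.name = name; }
    public String getName()          { return name; }

    public static void main(String[] args) {
        // The "coverage test": prove the setter won't barf on null. It can't.
        Customer c = new Customer();
        c.setName(null);
        assert c.getName() == null : "setter should store null as-is";
        c.setName("Alice");
        assert "Alice".equals(c.getName());
    }
}
```

The assertions pass trivially because there is no logic to get wrong; the only thing the test buys is a higher percentage on the coverage report.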
What I do want to test is any method that provides a service or makes a business process decision. This does not include façade or delegation methods, or methods that simply act as pass-throughs (as some persistence layer objects sometimes do). Service methods solve problems in the business domain, and so those are the methods that need to be tested.
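By contrast, here is a minimal sketch (hypothetical names and business rule) of the kind of service method that does earn a test: it makes a decision, it validates its input, and it can genuinely be wrong.

```java
// Hypothetical service method that makes a business decision worth testing.
class ShippingService {

    /** Free shipping for orders of $100 or more; flat $7.50 otherwise. */
    public double shippingCost(double orderTotal) {
        if (orderTotal < 0) {
            throw new IllegalArgumentException("order total cannot be negative");
        }
        return orderTotal >= 100.0 ? 0.0 : 7.50;
    }
}
```

A test for this method checks the boundary at $100.00, the normal case below it, and the rejection of bad input; three assertions that each guard a real business rule.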
And we might be surprised at how few methods of this type there are, as a percentage of the entire code base, when the application is designed properly. What percentage, I couldn't speculate, but far fewer than might at first be guessed. Of course, the individual developer then has to be trusted to make mature, independent decisions as to which tests are written and which are not. But that's an organizational issue that is out of scope for this discussion.
The long and the short of writing tests for new development is that it's a lot easier to do, because the work happens in a test-first process, and so coverage can be consistent, predictable, and dependable.
Existing Code Base
We could spend our whole lives going through an existing application writing tests that mean nothing. Most code bases that have not had a good test policy (or any test policy) applied to them are plagued with tightly coupled objects. Often this is because the design was done not to interfaces but with abstract and concrete classes: too much inheritance where aggregation, strategy design patterns, and full use of polymorphism would have served better.
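The interface-plus-aggregation style just described can be sketched in a few lines. These names are invented for illustration; the point is that the calculator aggregates a strategy behind an interface instead of inheriting from a concrete tax class, so a test can substitute its own policy without touching production code.

```java
// A strategy interface: callers depend on this, not on a concrete class.
interface TaxPolicy {
    double taxOn(double amount);
}

// One concrete strategy among potentially many.
class FlatTaxPolicy implements TaxPolicy {
    public double taxOn(double amount) { return amount * 0.08; }
}

// Aggregation, not inheritance: the policy is plugged in at construction.
class InvoiceCalculator {
    private final TaxPolicy taxPolicy;

    InvoiceCalculator(TaxPolicy taxPolicy) { this.taxPolicy = taxPolicy; }

    double total(double subtotal) {
        return subtotal + taxPolicy.taxOn(subtotal);
    }
}
```

Because `TaxPolicy` is an interface, a test can hand `InvoiceCalculator` a stub policy (even a lambda) and exercise the calculation in isolation; that option simply does not exist when the calculator extends a concrete tax class directly.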
That said, such an application runs; it is not falling apart or bleeding exceptions with every user submit from the GUI. Under the sheets things may be a mess, but it works. So to go in and start creating interfaces and refactoring code just so tests can be written is to beg for trouble. If the application wasn't broken before this kind of silly effort started, it surely will be in very short order under this kind of activity.
The Agile mantra of "anytime you touch code, leave it in a little better state" is a great thought to guide us as we poke around in an existing code base. It is pointless to refactor and test relatively straightforward methods that have little chance of being passed crummy parameters (particularly if the "good neighbor policy" appears to be followed in the code) or that stand little chance of blindly returning a crummy result. Leave them alone!
Rather, refactor the methods that control critical aspects of the business process, where these types of maladies may be lurking. Pick your battles carefully, because such a refactor will often reveal the lack of interfaces on key objects the method uses. And that's where things get sticky in a coupled code base: it is not so easy to just create that interface now and apply it across the breadth of concrete classes that must implement it.
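Here is a hedged sketch of the refactor being described, with hypothetical names throughout. Suppose a business method originally called a concrete persistence class directly; extracting a small interface over just the calls it needs lets a test hand it a stub instead of a live database object, without rippling through the whole code base.

```java
// Extracted interface: only the one call the business method actually needs.
interface AccountStore {
    double balanceFor(String accountId);
}

// The business method now depends on the interface, not the concrete DAO.
class OverdraftChecker {
    private final AccountStore store;

    OverdraftChecker(AccountStore store) { this.store = store; }

    /** Business decision worth testing: may this withdrawal proceed? */
    boolean mayWithdraw(String accountId, double amount) {
        return amount > 0 && store.balanceFor(accountId) >= amount;
    }
}
```

In a test, `AccountStore` can be satisfied by a one-line stub that returns a fixed balance, so the overdraft rule is exercised with no database in sight. The sticky part the text warns about is real, though: every existing concrete store class must now implement the new interface.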
Selecting these complex business-process methods and testing them may only yield 20% coverage of the code base. However, if we accept the familiar 80/20 rule of thumb that 80% of the problems come from 20% of the code, then without much of a leap of faith we can claim an 80% confidence level over the entire code base by tightening up only 20% of it. How's that for logic?
Appendix: Completed Code That Passes All Tests
Here's the production code needed to support all of this: