Why 50% Test Coverage Seems More Painful Than No Test Coverage

31 Aug 2007

Recently I was on a project where a bunch of code had been written before we arrived. It was quite a struggle to get the application under test. After a number of months the team hit 50% coverage, and then we just stayed there. We had a hard time getting buy-in from the client developers on pushing upward from 50%. I didn’t really understand this attitude at first, but after talking with the devs I realized that the tests were mostly a nuisance for them. They saw it like this:

“If I have to gut a few pages, as part of a changing requirement, now I also have to spend a day fixing the stupid tests. And the tests never really catch any bugs, so what was the point? All the tests are doing is slowing me down.”

Since coverage was low and many of the test writers were new to unit testing, we didn’t really have a lot of protection from bugs. But we also had a sizable suite to maintain. They were feeling all the pain of keeping a test suite running while seeing none of the benefits. Now, as a guy who’d been on projects with good test coverage, I could see how the suite was making things better. First, I had a roughly 50% chance of being able to use the tests as documentation for any class I opened up. And second, the tests were catching some problems before they made it into the repository – but not enough for the client devs to notice.

We never did convince them that writing tests would let them go faster in the long run, and it left me wondering: how many teams get to 30-60% coverage and then give up because the benefits are too subtle and the pain is too great?