Sunday, October 17, 2004

The reality of JUnit

On a few recent projects, I've written some JUnit tests. In fact, a few hundred of them. I'm no expert, but I think that JUnit "out of the box" isn't ready for prime time. Real projects need to add a lot more infrastructure to make it work without becoming burdensome.

Here are my observations on scalability:

Once you have a lot of JUnit tests...
1) When tests fail, it takes a long time to look at them all and figure out what is going wrong. A very long time. Especially if the classes under test are still immature. Sure, if it's all green, life is good. But on projects with many modules (and many separate developers), the bug list can get pretty big, and development can't always fix the tests right away.

2) It becomes very hard (a burden, in fact) to make major improvements to your classes (especially changes that affect the class signatures). All those tests have to be rewritten -- or at least reviewed. In some ways, this slows down innovation -- especially on a V1.0 release.

3) The results aren't summarized well. Which tests failed differently from the last run? Which tests already have bugs reported in the tracking system? This can take a long time to review.
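
One way to get that summary is to have the test runner dump the failing test names to a file after each run, then diff the files. Here's a minimal sketch of that idea; the file names (last-run-failures.txt, this-run-failures.txt) and the one-name-per-line format are my assumptions, not anything JUnit provides.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Set;
import java.util.TreeSet;

/** Compares the failure lists from two test runs and reports what changed. */
public class FailureDiff {

    /** Names failing now that weren't failing last run: the new breakage. */
    public static Set<String> newFailures(Set<String> previous, Set<String> current) {
        Set<String> diff = new TreeSet<String>(current);
        diff.removeAll(previous);
        return diff;
    }

    /** Names that failed last run but pass now: tests that got fixed. */
    public static Set<String> fixedTests(Set<String> previous, Set<String> current) {
        Set<String> diff = new TreeSet<String>(previous);
        diff.removeAll(current);
        return diff;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical files: one failing test name per line, written by the runner.
        Path prevFile = Paths.get("last-run-failures.txt");
        Path curFile = Paths.get("this-run-failures.txt");
        if (Files.exists(prevFile) && Files.exists(curFile)) {
            Set<String> previous = new TreeSet<String>(Files.readAllLines(prevFile));
            Set<String> current = new TreeSet<String>(Files.readAllLines(curFile));
            System.out.println("New failures: " + newFailures(previous, current));
            System.out.println("Fixed tests:  " + fixedTests(previous, current));
        }
    }
}
```

Cross-referencing against the bug tracker would take another lookup table keyed by test name, but even this much turns "stare at the whole red list" into "stare at what changed."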

4) For projects that have a test plan, it is painful to correlate the JUnit tests back to the test plan. Which tests aren't implemented yet? Which tests have been code reviewed? Which tests are being deferred because that functionality is no longer expected in the current release?
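
The cheapest correlation I've seen is a naming convention: embed the test-plan ID in the test method name and scrape it back out. A sketch, assuming a made-up convention where names end in an underscore plus the plan ID (e.g. testLogin_TP123):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Maps JUnit test method names back to test-plan IDs via a naming convention. */
public class PlanCoverage {

    // Assumed convention: the method name ends with "_" plus a plan ID like TP123.
    private static final Pattern PLAN_ID = Pattern.compile("_([A-Z]+\\d+)$");

    /** Returns the plan IDs found in the given test method names. */
    public static List<String> coveredPlanIds(List<String> testNames) {
        List<String> ids = new ArrayList<String>();
        for (String name : testNames) {
            Matcher m = PLAN_ID.matcher(name);
            if (m.find()) {
                ids.add(m.group(1));
            }
        }
        return ids;
    }

    /** Plan IDs with no matching test: the "not implemented yet" list. */
    public static List<String> uncoveredPlanIds(List<String> allPlanIds,
                                                List<String> testNames) {
        List<String> missing = new ArrayList<String>(allPlanIds);
        missing.removeAll(coveredPlanIds(testNames));
        return missing;
    }
}
```

Feed it the method names (reflection over your TestCase classes will get them) plus the full ID list from the plan, and the unimplemented and deferred buckets fall out of the difference. Review status would still need a separate record.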

5) What about all the legacy code that doesn't have JUnit tests? This problem comes up in every project.

6) Who reviews the tests written by the quality assurance department? Are the tests really right? Does code pass because a test is improperly written?

It's nice to see a lot of green at the end of a test run. It's just not easy to get there on a big project.

-- jorge.
