I was asked today, ‘Do we need to test this internal page just for developers?’ That got me thinking about why I encourage testing. I’ve been practicing TDD/BDD since 2004, and this article is the response I should have given.
In order, my top reasons for testing:
- Design aid/architecture teacher IF you listen to your tests.
- Tests as documentation that executes. They are much more trustworthy than a comment.
- Regression prevention (keeping bugs out of production).
To clarify my first claim that tests teach design and encourage good architecture:
- Tests promote ‘outside-in’ development: because the test uses the object just as another object would, it nudges you toward simple interfaces.
- Behavior. Of the objects. BDD. Make sure they play nice:
- Lots of stubbing/mocking needed? It had better be a controller/commander object.
- Are the stubs/mocks exceedingly complicated and difficult to construct? There could be some feature envy going on.
- Plain old objects have few dependencies and therefore don’t need much stubbing/mocking. Since these are fun to write and test, your developers will be encouraged to write more of them. Small objects talking to each other. Smalltalk. (There’s a sketch of this contrast right after this list.)
- Mindful, reflective testing leads to ‘Legos, not spaghetti’, increased velocity, and higher team morale.
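To make that contrast concrete, here’s a minimal sketch in RSpec. The class names are invented for illustration: the plain object needs no stubs at all, while the commander object naturally wants its collaborators doubled.

```ruby
require "rspec/autorun"

# A plain old object: no dependencies, so its spec needs no stubs.
class PriceCalculator
  def total(items)
    items.sum { |item| item.fetch(:price) }
  end
end

# A commander object: it coordinates collaborators, so doubles are expected here.
class CheckoutCommand
  def initialize(calculator:, gateway:)
    @calculator = calculator
    @gateway = gateway
  end

  def run(items)
    @gateway.charge(@calculator.total(items))
  end
end

RSpec.describe PriceCalculator do
  it "sums the prices of the items" do
    calc = PriceCalculator.new
    expect(calc.total([{ price: 3 }, { price: 4 }])).to eq(7)
  end
end

RSpec.describe CheckoutCommand do
  it "charges the gateway for the calculated total" do
    calculator = instance_double(PriceCalculator, total: 7)
    gateway    = double("gateway", charge: true)

    CheckoutCommand.new(calculator: calculator, gateway: gateway).run([])

    expect(gateway).to have_received(:charge).with(7)
  end
end
```

If the doubles in the second spec ever get hairy, that’s the test telling you the design is drifting.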
Warning: it took me years to figure out how to balance the value of tests against the total time spent maintaining them. Bad/slow/flaky tests can kill morale.
Some hard-fought lessons follow.
Don’t test the view in server-side ‘unit’ tests!
- DOM resolution is best handled by Big Browser.
- The view is not a good or stable object; it talks to everything:
- You’ll have to stub/mock a billion complicated, ever-changing objects, so the tests will be crystal in their fragility.
- The view itself changes constantly for reasons that have nothing to do with architecture. The churn rate can bury a team.
- Force all logic out of the view: as little behavior as possible.
- Test view helpers instead (see the sketch below).
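For instance, a formatting decision pulled out of the template and into a helper becomes cheap to test. A hypothetical example, assuming rspec-rails:

```ruby
# app/helpers/price_helper.rb (hypothetical helper, for illustration)
module PriceHelper
  def formatted_price(cents)
    format("$%.2f", cents / 100.0)
  end
end

# spec/helpers/price_helper_spec.rb
RSpec.describe PriceHelper, type: :helper do
  it "formats cents as dollars" do
    expect(helper.formatted_price(1999)).to eq("$19.99")
  end
end
```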
Absolutely have integration-level tests, but keep them lean and healthy (there’s a sketch after the Selenium warning below):
- No mocks or stubs
- Hit the DB or other persistence
- Hit test servers of outside systems
- Of course there’s always Selenium…
…Warning: maintenance costs for Selenium tests are extreme unless you have white-hot test automators with years of experience learned in the field (hopefully at some other company that has your exact stack). Even with a hardened team of QA/testers/breakers/SDETs, maintaining Selenium, or any other full-path/no-stubs suite, will be far more expensive than the other levels of testing.
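For contrast, here’s what ‘lean and healthy’ can look like one level below Selenium: a request spec that hits the real database but nothing outside the app. The `Order` model and route are made up for illustration, assuming rspec-rails.

```ruby
# One happy path, no stubs, real database. (Hypothetical Order model and route.)
RSpec.describe "Orders", type: :request do
  it "creates an order and persists it" do
    post "/orders", params: { order: { sku: "ABC-123", quantity: 2 } }

    expect(response).to have_http_status(:created)
    expect(Order.count).to eq(1)
    expect(Order.first.quantity).to eq(2)
  end
end
```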
Back to those maintenance costs: here are some questions to ask when an integration test fails:
- Is the test server up?
- Did it change recently?
- What did that other team (perhaps in another timezone) do?!?
- Has a browser update ruined everything?
- Has an OS level update ruined everything?
- Is the test DB up?
- Was this caused by those intermittent internal connection issues we can’t reproduce and so IT can’t fix?
- I’m not sure if this list has an end.
Also, if your project relies on vendor APIs… Oof. Maybe they have a test server (best of luck), or you stub at the HTTP boundary and hope the vendor announces API changes; a sketch follows.
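If you do go the stubbing route, doing it at the HTTP boundary at least keeps the fakery in one place. A sketch using WebMock, with a made-up vendor endpoint:

```ruby
require "webmock/rspec"
require "net/http"
require "json"

# Stub the vendor at the HTTP boundary. Fast and deterministic, but it only
# stays honest as long as the vendor's real responses still look like this.
RSpec.describe "vendor rate lookup" do
  it "parses the rate from the vendor response" do
    stub_request(:get, "https://api.vendor.example/rates/USD")
      .to_return(status: 200, body: '{"rate": 1.07}',
                 headers: { "Content-Type" => "application/json" })

    body = Net::HTTP.get(URI("https://api.vendor.example/rates/USD"))
    expect(JSON.parse(body)["rate"]).to eq(1.07)
  end
end
```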
Despite all those disadvantages, integration tests really do catch a lot of bugs when done well, or even just decently. Other times they are crazy time sucks with little benefit. It’s shocking how fast one state can transition into the other, but that is a whole other article.
So, given all of the above, should the developers have tested that ‘internal page just for developers’?
Disclaimer: I haven’t done enough React/Angular/new-hotness JavaScript to know whether any of this advice applies there. My experience is mostly with Ruby (sometimes on Rails), Clojure, and Java systems.