Quick Response to “Test-induced Design Damage”


David Heinemeier Hansson wrote an interesting critique of test-driven development two days ago. I understand where he’s coming from with his concern that mindlessly shaping all design around tests can (will?) lead to poor design in some areas. It was interesting, and I like to be challenged and to think about these things. Personally, I think his extrapolation to declaring “TDD is Dead” is hyperbole. Here are my thoughts on the bits of his blog that jumped out at me…

“But the testing pyramid prescribes that the unit level is where the focus should be, so people are sucked into that by default.”

It’s probably true that some people are sucked into always defaulting to the unit level. The problem with that is the “always”, but the unit level is where the focus should be. Why? Because that is where the complexity lives – in each unit. The reason we separate code into units is to separate complexity into simpler, composable parts. Testing exhaustively at non-unit levels (i.e. by integrating components) requires a combinatorial explosion of tests… or not testing exhaustively (which might be a viable option at your work, but not at a payments company). We use integration tests as well, but I don’t typically use them to prove all facets of functionality, only to prove that the components integrate correctly and can deliver the core functions.
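To put rough numbers on that explosion: a unit with three branches composed with a unit with four branches needs 3 + 4 = 7 unit tests to cover every branch, but 3 × 4 = 12 paths through the integrated pair, and it only gets worse as more components join. Here’s a minimal JUnit sketch of what exhaustive unit-level testing looks like (the SurchargeCalculator and its rules are hypothetical, not real Tyro code):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A hypothetical unit whose branching logic is cheap to cover exhaustively here,
// but would multiply test cases if only exercised through integrated web tests.
public class SurchargeCalculatorTest {

    enum CardType { CREDIT, DEBIT }

    static class SurchargeCalculator {
        long surchargeInCents(long amountInCents, CardType cardType) {
            if (amountInCents < 0) {
                throw new IllegalArgumentException("amount must be non-negative");
            }
            // 2% surcharge on credit, none on debit (made-up rules for the sketch)
            return cardType == CardType.CREDIT ? amountInCents * 2 / 100 : 0;
        }
    }

    private final SurchargeCalculator calculator = new SurchargeCalculator();

    @Test
    public void creditCardsAttractTwoPercentSurcharge() {
        assertEquals(20, calculator.surchargeInCents(1000, CardType.CREDIT));
    }

    @Test
    public void debitCardsAttractNoSurcharge() {
        assertEquals(0, calculator.surchargeInCents(1000, CardType.DEBIT));
    }

    @Test(expected = IllegalArgumentException.class)
    public void negativeAmountsAreRejected() {
        calculator.surchargeInCents(-1, CardType.CREDIT);
    }
}
```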

“Controllers are meant to be integration tested, not unit tested.”

I work in the Java world, not the Rails world, and I completely agree with this. When I started at Tyro, we were writing unit tests for controllers and integrated web tests for the presentation (i.e. JSPs) of most features as well. The result? All controller code was tested twice. (It’s pretty hard to test a JSP without hitting its controller.) There’s a dirty word to describe this practice: waste. We ditched controller tests a long time ago and no one has ever missed them. Controllers are still tested, just not by unit tests. From time to time, some logic in a controller might get a bit complex, resulting in more paths than are practical to web test. The solution there is easy: extract that complexity into another class, unit test the complexity, and web test the integration. Controllers should never be complex. As David points out, they are mostly just an adapter layer between models and views. Let any kind of business logic complexity live in there and the controller has two responsibilities. Your goal should be to make controllers so simple that they don’t need unit tests.
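As a sketch of the shape this extraction leaves behind (hypothetical names, in a Spring MVC flavour since that’s the stack being discussed), the controller does nothing but adapt:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;

// Hypothetical collaborator holding the gnarly rules; it gets exhaustive unit tests.
interface SettlementSummariser {
    String summariseFor(String merchantId);
}

// The controller is a pure adapter: translate the request, delegate, pick a view.
// A handful of web tests through the JSP prove this wiring; no unit tests needed.
@Controller
public class SettlementController {

    private final SettlementSummariser summariser;

    @Autowired
    public SettlementController(SettlementSummariser summariser) {
        this.summariser = summariser;
    }

    @RequestMapping("/settlement")
    public String showSettlement(@RequestParam("merchantId") String merchantId, Model model) {
        model.addAttribute("summary", summariser.summariseFor(merchantId));
        return "settlement"; // resolved to the settlement JSP
    }
}
```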

“Finally, the fear of letting model tests talk to the database is outdated”

Yes, and no. Yes, there should not be a fear of testing against the database. In fact, there should be a preference towards it. Not testing against real databases (the same ones you’ll use in Production) leaves a big fat layer of assumptions in your app. However, in large systems with lots of database interaction and great test coverage, you can quickly blow out the build time if you use the database everywhere. My take: have a preference for DB-backed testing, but not at the expense of developer productivity. The team needs to be aware of when the preference is damaging their velocity and find the balance. I’ve written a lot of data-heavy, back-end Java, so mileage with Ruby on Rails may vary (maybe it never becomes a problem).
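For the record, the sort of DB-backed test I mean looks roughly like this (a hedged sketch: the Spring context file and the payment table are hypothetical stand-ins). The transaction rolls back after each test, which keeps tests independent, but every test still pays for real round trips to the database – which is exactly the cost that adds up across a large build:

```java
import static org.junit.Assert.assertEquals;

import javax.sql.DataSource;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

// "test-database-context.xml" and the payment table are hypothetical.
// @Transactional makes Spring roll back after each test method, so tests stay
// independent while still running against the real database schema.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-database-context.xml")
@Transactional
public class PaymentTableDatabaseTest {

    @Autowired
    private DataSource dataSource;

    @Test
    public void insertedPaymentsCanBeQueriedBack() {
        JdbcTemplate jdbc = new JdbcTemplate(dataSource);
        jdbc.update("INSERT INTO payment (merchant_id, amount_in_cents) VALUES (?, ?)",
                "merchant-1", 1000);
        int count = jdbc.queryForObject(
                "SELECT COUNT(*) FROM payment WHERE merchant_id = ?", Integer.class, "merchant-1");
        assertEquals(1, count);
    }
}
```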

“Above all, you do not let your tests drive your design, you let your design drive your tests!”

I don’t think it’s this simple. One great advantage of using tests to drive design is that your desire for simple tests translates into simple classes in the implementation. Yes, you can probably achieve similar outcomes by doing some more thinking or drawing, but test-first is also a useful tool for driving towards this goal. Not the only tool, nor a required tool, but a useful tool. I write lots of simple code at home and rarely write tests for it, but recently I found one part I was writing was a bit gnarly, so I decided to write a unit test for it. When I started writing the test, I found I couldn’t because of the design of the code, and when I thought about the design for a second I realised I was mashing several responsibilities into the one place. Tests can drive designs to bad places, but in my experience they more often drive them to good places.
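The gnarly bit was along these lines (a contrived stand-in, not the actual code): the interesting logic was welded to IO, so there was no seam for a test to drive it, and splitting the responsibilities fixed both the test and the design:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

// Before: summing is welded to file IO and printing, so a unit test would need
// a real file on disk and a way to capture System.out.
class ScoreReporter {
    void report(String path) throws IOException {
        String raw = new String(Files.readAllBytes(Paths.get(path)), StandardCharsets.UTF_8);
        int total = 0;
        for (String line : raw.split("\n")) {
            if (!line.trim().isEmpty()) {
                total += Integer.parseInt(line.trim());
            }
        }
        System.out.println("total: " + total);
    }
}

// After: the interesting rules are their own unit, driven by a plain string in
// a test; reading the file and printing move to the edges of the program.
class ScoreSummer {
    int sum(String raw) {
        int total = 0;
        for (String line : raw.split("\n")) {
            if (!line.trim().isEmpty()) {
                total += Integer.parseInt(line.trim());
            }
        }
        return total;
    }
}
```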

I often reiterate to my team that testing is primarily about building confidence in the code, and secondarily about building a safety net for those who pass this way next. There is no mandate in TDD for writing tests that do not build confidence (do you TDD getter methods?), or for spending hours on tests that increase confidence by small fractions. I think David is right that there are probably zealots out there blindly going down these kinds of paths. Perhaps he’s right that some designs are bastardised by stringent testability requirements, though I don’t recall seeing a lot of this in my travels. So, he’s right that mindlessly shaping all design around tests can have bad effects, but no software developer should be doing anything mindlessly. There is value in letting tests shape your design much of the time, so long as you keep in mind that there will be occasions where the tests have to be shaped by the design instead.

Everything in moderation, right? Including moderation.


6 thoughts on “Quick Response to ‘Test-induced Design Damage’”

  1. One thing we must not forget about unit tests is that they directly promote SOLID principles. Unit tests force a “unit” to have single (or few) responsibilities, they force the unit to be “open-closed”, they force interfaces and force dependency inversion.

    In the end, your unit tests check that you’re using SOLID principles; it isn’t just some crazy dogma. Of course, just using SOLID principles does not magically guarantee good design, but it is a prerequisite. I suspect the “design damage” David Heinemeier Hansson is talking about is actually just bad design, often caused by leaky frameworks and bad abstractions. The fact that he’s talking about databases in the concrete sense, instead of a persistence layer which could be in-memory or streams or whatever, shows that what he’s aiming for is going to be crusty and inflexible code.
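    For instance (a hypothetical sketch), the moment you try to unit test a service, you’re pushed to invert its dependency on persistence, which is exactly what makes an in-memory implementation possible:

    ```java
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;
    import java.util.HashMap;
    import java.util.Map;

    // All names here are made up for illustration. Putting persistence behind an
    // interface (dependency inversion) lets the unit test run entirely in memory.
    public class AccountServiceTest {

        interface AccountStore {
            void save(String accountId, long balanceInCents);
            Long balanceOf(String accountId);
        }

        static class InMemoryAccountStore implements AccountStore {
            private final Map<String, Long> balances = new HashMap<>();
            public void save(String accountId, long balanceInCents) { balances.put(accountId, balanceInCents); }
            public Long balanceOf(String accountId) { return balances.get(accountId); }
        }

        static class AccountService {
            private final AccountStore store;
            AccountService(AccountStore store) { this.store = store; }
            boolean canDebit(String accountId, long amountInCents) {
                Long balance = store.balanceOf(accountId);
                return balance != null && balance >= amountInCents;
            }
        }

        @Test
        public void accountsWithSufficientBalanceCanBeDebited() {
            AccountStore store = new InMemoryAccountStore();
            store.save("acc-1", 5000);
            assertTrue(new AccountService(store).canDebit("acc-1", 1000));
        }
    }
    ```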

    • I have no experience with Rails, but we all know it’s geared towards producing web interfaces from databases with as little code as possible in between, and there’s a whole corner of the internet documenting people’s migration away from Rails once their app got complex. I wonder how much of David’s frustration is from people forcing TDD into this context inappropriately, whereas people writing *real code* (j/k) get a lot more value out of the process.

  2. “Controllers are meant to be integration tested, not unit tested.”

    I have to disagree strongly on this one, although it will depend on where we draw the boundary between unit and integration testing.

    Sure, those old controller unit tests are unhelpful, because as you say, any complexity should be refactored out anyway. The complexity that we’re interested in is really in how the JSP and controller work together.

    But giving up and replacing these with big web-driven integration tests doesn’t work very well. They are slow to write, slow to run, and can lead to high coupling between tests due to shared use of stub services. And when these are required to stand in for web unit tests, then your earlier comment of “requires a combinatorial explosion of tests… or not testing exhaustively” applies.

    But if you think of the controller, JSP and JavaScript together as a unit, it makes far more sense. Why can’t this be unit tested just like any other unit? Or call it a small-scope integration test, if you prefer that term. But there should be no services, no database, and no navigating through other pages just to establish the desired initial state.

    It becomes simple to test these units exhaustively at the unit test level. Then you can do a couple of basic happy and failure cases as full integration tests just to provide confidence that things are correctly wired together.
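    Roughly, with stock Spring test tooling, that looks like this (a sketch only, reusing the hypothetical SettlementController and SettlementSummariser from the post above; note that plain MockMvc stops short of rendering the JSP and executing the JavaScript, which is the gap the framework linked below tries to close):

    ```java
    import org.junit.Before;
    import org.junit.Test;
    import org.springframework.test.web.servlet.MockMvc;

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;
    import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.model;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.view;
    import static org.springframework.test.web.servlet.setup.MockMvcBuilders.standaloneSetup;

    // The controller is driven through the web layer with its service mocked out:
    // no database, no other services, no navigating through other pages.
    public class SettlementPresentationTest {

        private SettlementSummariser summariser;
        private MockMvc mockMvc;

        @Before
        public void setUp() {
            summariser = mock(SettlementSummariser.class);
            mockMvc = standaloneSetup(new SettlementController(summariser)).build();
        }

        @Test
        public void settlementPageShowsTheMerchantSummary() throws Exception {
            when(summariser.summariseFor("merchant-1")).thenReturn("three payments settled");
            mockMvc.perform(get("/settlement").param("merchantId", "merchant-1"))
                   .andExpect(status().isOk())
                   .andExpect(view().name("settlement"))
                   .andExpect(model().attribute("summary", "three payments settled"));
        }
    }
    ```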

    Inspired by the strengths and weaknesses of the various ways we’ve done testing at Tyro, and leveraging the same tools, I recently built a framework to make this approach more convenient: https://github.com/mattprovis/uitest

    • While I see nothing wrong with the approach you’re suggesting, Matt, you should be careful not to make the same mistake David did and state: because of one bad experience, we should always do the alternative. The problem in that sentence is “always”, or in your comment, “there should be no services, no database, and no …”. The logic is “our current dogma has an issue, so let’s create a new dogma”. The problem is the dogma itself, and the solution is to realise there’s no one-size-fits-all method. David makes good arguments for not avoiding databases in tests, but there are also situations where avoiding them is the right thing to do. It may or may not make sense to include services in the testing you’re describing above, depending on how much a service’s results affect the results that appear on screen.

      For what it’s worth, I don’t think “unit” is a good name for what you’re doing. Wikipedia (source of all truth) says “one can view a unit as the smallest testable part of an application” and I think that is a good way of thinking about it. Once you integrate a few things and test them together, that’s an integration test. Nothing wrong with that, and nothing wrong with stubbing out halfway down the stack, but it’s not really a unit. Perhaps “presentation test” would be a good alternative?

        • I was describing my vision of a particular category of test. Under that model, those exclusions (database, services, etc.) must apply, simply because if they were included it would become quite a different kind of test (a more conventional integration test with a deeper stack). Which is fine; there are clearly valid reasons for having tests at various levels. It’s just not the kind I’m talking about.

        I agree that my terminology needs some refinement. Presentation testing sounds good, I’ll give that some further thought. While calling them web unit tests certainly bends the definitions a bit, I find that language is helpful to highlight the parallels with conventional unit testing. The idea is that the granularity at the conceptual level would be very similar to that of a unit test, and the same kinds of calls would be mocked out. It’s just that the nature of Spring web development requires a few different elements to be combined together to achieve this.
        Actually, this has got me thinking further: should terminology like this be tied to the conceptual or logical level? For example, if the app was built with Swing, Vaadin, or some mobile toolkit, the entire presentation for some feature might easily be contained within a single class file. So then unit tests for methods in that class would be presentation tests. Should technical details of a framework affect the general terms we use to talk about these concepts?

  3. Good article. Evaluating and reviewing efficiencies in existing processes is key to success.

    Just because the industry (other devs) sets a pattern does not mean it applies to all implementations.

    Quote “Your goal should be to make controllers so simple that they don’t need unit tests.”

    That statement above, in my view, carries a lot of weight. In my experience, there are times where you end up adding logic in a controller because, to some extent, there seemed to be no better place for it. And if that becomes the case, there should be no harm in adding a few unit tests for it. Also, curious as to why running more tests is really an issue? Time to commit, build and deploy?

    Cheers,
    Nitin
    Your soon-to-be good friend 🙂
