I've had many recurring conversations about whether we should use automatic testing, unit testing, or TDD. Through these conversations I've heard many arguments for and against these techniques, so I decided to summarize them.
To be honest, I am very passionate about TDD. I believe this technique is a must-know for every developer who's serious about his or her craft. Despite my personal preference, I will do my best to present both sides of the argument as clearly, pragmatically, and even-handedly as I can. If you spot any mistakes or errors, please let me know and I will update the article.
I will split the discussion into a few parts, because I feel the arguments can be ordered from general to specific.
>Testing takes time and it doesn’t provide enough value.
(I’ve never actually heard this argument, but I can imagine people using it.)
The baseline value you can get from an application is **people using it**.
*Code doesn’t just work*. No matter how smart your programmers are, they will miss something.
If the application crashes, people won’t be using it.
If the application doesn’t do what it’s supposed to, people won’t be using it.
Unless you want to run some crazy experiments on humans (e.g. seeing how long they can stand using a non-working application), this is not the way to go. You need to verify that the application does what it's supposed to do.
## Manual Testing
>Manual testing is repetitive work and the computer does it much better.
>The amount of manual testing required for an application increases over time. The only way to keep up is to add more testers, which in turn adds management overhead.
>Manual testing is not only repetitive work, but also discovery and critical thinking.
>Our developers write clean code, review it thoroughly and test the application.
The logical conclusion is that manual testing is excellent for discovering hidden issues, and all the repetitive manual testing should be automated. However, test automation doesn't come without overhead, and neither does adding more people to test. Which overhead is larger? The problem is especially difficult when working with legacy code.
Testing at its core involves two things: understanding what it is that you are testing, and actually performing the tests. The repetitive tasks from the second part can be automated; the first part requires smart people and cannot be automated. So, is it better to ask more people to learn the application, or to maintain the automatic tests?
From this analysis, it seems that **learning is the bottleneck**. If you remove all repetitive work, you only have to optimize learning.
In the case of developer testing, the problem is that developers either test the whole application (which is time-consuming) or only the parts that were changed (which is error-prone). However, if you cannot do automatic testing, this is the next best thing, because it creates the conditions for catching and fixing most bugs early in the process. The risk you're facing is that the bugs that escape this process can damage your product. Due to side effects in the code, these bugs could be nasty. Or, if you're lucky, they might be trivial. You don't know which ones you'll get.
Removing repetitive work has beneficial effects on your team. It allows every team member to focus on the core activities of their work. In the case of testers, they learn and do “smart” testing, suddenly freed from the worry of repeating the same scenario over and over again.
In the end, the choice depends on your context, but going towards automatic testing seems to be the best way.
## Automatic Testing Done By Testers
>Automatic testing eliminates repetitive work.
>Developers should test the code they write. Testers should only do smart testing.
Testers usually test applications from the user interface. Automating this part is possible with various tools – some of them incredibly expensive, some of them free, depending on what they’re used for.
The question here is: why should testers discover, through the user interface, issues that could be found more easily in the code? This overhead can be shifted to programmers, leaving testers to worry about things like workflows through the application, user experience, interface design, hidden issues, etc.
I think that most teams come to some sort of equilibrium between programmer automatic testing and tester automatic testing.
Lately I've worked mostly with startup teams that had no testers, so programmer automatic testing was a must. Please share your experiences.
## Automatic Testing Done By Programmers
*See the pros above.*
> Tests add overhead to development. Whenever code changes, tests need to be changed as well.
> Automatic programmer testing requires a testable design. Testable design is sometimes different from OOD.
Writing tests is not enough. Programmers need to write tests that are extremely simple.
For example, some of the guidelines for writing tests are:
* They should be very short
* They should run very quickly
* Each test should assert on one and only one thing
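A minimal sketch of what these guidelines look like in practice (the `apply_discount` function and its tests are hypothetical examples, not taken from any particular project):

```python
# Hypothetical function under test: applies a percentage discount to a price.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

# Each test is short, runs instantly, and asserts exactly one thing.
def test_ten_percent_off():
    assert apply_discount(200, 10) == 180

def test_zero_percent_leaves_price_unchanged():
    assert apply_discount(200, 0) == 200

def test_full_discount_makes_price_zero():
    assert apply_discount(200, 100) == 0

test_ten_percent_off()
test_zero_percent_leaves_price_unchanged()
test_full_discount_makes_price_zero()
```

Because each test checks a single behavior, a failure points directly at what broke, and the whole suite stays fast enough to run after every change.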
It’s fairly difficult to learn good programmer testing, and there are many traps that a team can fall into if tests are not simple enough. Some of the typical issues are:
* Tests take a long time to run, so people don’t run them anymore
* It’s very hard to change the code because you need to change a lot of tests each time
Some of these problems are alleviated by using powerful tools (ReSharper for Visual Studio, or Eclipse, is highly recommended) that make changes much easier. The other part of the problem – writing simple tests – takes practice and is an art in itself. The [software craftsmanship] movement shows how any programmer can learn these techniques.
Regarding testable design, the typical issues that arise come from creating seams. Seams are ways to inject a class's collaborators into that class. The most common one is through constructors and interfaces: all collaborators are passed into the constructor as interfaces (thus creating lots of public interfaces). Others are possible, like having read/write properties for each collaborator, read/write function properties (delegates, lambdas), etc. These kinds of things contradict our intuition about OOD.
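To make the constructor-and-interface seam concrete, here is a small hypothetical sketch (the `OrderProcessor` and `PaymentGateway` names are invented for illustration): the collaborator comes in through the constructor, so a test can inject a fake in place of the real implementation.

```python
# Hypothetical collaborator interface (an abstract base in Python).
class PaymentGateway:
    def charge(self, amount):
        raise NotImplementedError

class OrderProcessor:
    # The seam: the collaborator is passed in through the constructor,
    # so tests can inject a fake instead of the real gateway.
    def __init__(self, gateway):
        self._gateway = gateway

    def checkout(self, amount):
        if amount <= 0:
            return False
        self._gateway.charge(amount)
        return True

# A fake collaborator injected through the seam in a test.
class FakeGateway(PaymentGateway):
    def __init__(self):
        self.charged = []

    def charge(self, amount):
        self.charged.append(amount)

fake = FakeGateway()
processor = OrderProcessor(fake)
assert processor.checkout(50) is True
assert fake.charged == [50]       # the fake recorded the call
assert processor.checkout(0) is False
```

The cost is exactly what the paragraph above describes: an extra public interface and a wider constructor, in exchange for being able to test `OrderProcessor` without a real payment system.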
I agree with Roy Osherove when he says in “The Art of Unit Testing” that **testable design is not OOD, but a different type of modular design**. The main purpose of any design method is to produce code that does what it has to do and that is flexible. Testable design allows this in a slightly different way than OOD, with one advantage: it allows more validation. Pragmatically speaking, this advantage is much more important than violating our aesthetic senses.
Regarding productivity, it's true that writing tests slows down the development process. However, according to practitioners, empirical evidence, and a few studies (including one from Microsoft), **the time spent writing tests is more than made up for in the testing and maintenance phases**, so **over the whole lifetime of the project, productivity is higher when writing tests**. Bear in mind that none of this is definitive evidence that it always works; I believe it is, however, a very good reason to try it.
One thing is clear: doing automatic programmer testing the right way is difficult; it requires lots of practice, continuous learning, and retraining your intuition, but it creates value by validating your code.
## Unit Testing After Writing the Code
>We know exactly what to test because the code is written.
>We design the system as we want before writing the code.
>Your tests will be influenced by the code.
>You never find the time to write all the tests if you write them after the code.
>Upfront design is not needed. Emergent design does the work for you.
The typical things I find when programmers tell me that they write tests are:
* the tests are written after the code
* there’s never enough time to write tests, so they get postponed indefinitely
We also need to take into account the *tendency towards overdesign* and the *tendency towards analysis paralysis*. I've known a lot of teams that started their projects by building a framework that could be used not only for that project but also for other, similar projects. Most such projects failed.
Programmers love to prove they're smart. Many of them do it by discussing ideas rather than producing something. (Remember, I'm a programmer as well, so I can say these things.)
Since we know all this, anything that helps move programmers towards *just enough* design is beneficial.
## Test First Programming
> I get to design before writing the code, and the tests verify that the code matches the design.
> The tests are written before the code.
> Easier to do than TDD
> Potential for overdesign, analysis paralysis.
TFP solves one issue: you will actually write the tests. It doesn't solve the others: overdesign and analysis paralysis. However, it lets you focus on a small part of the code, and therefore on less design, so it's already an improvement.
## Test Driven Development
> You always have tests for all the code.
> Very little upfront design (not 0, but very little).
> Good, simple design emerges from applying TDD.
> It’s unnatural and hard to do
> How do I know emergent design works?
I believe that the unnatural part of TDD is due to our training as programmers. We are trained to give solutions, whereas TDD makes us ask questions and teaches us that the solution will emerge. This is almost blasphemy: it goes against everything we think we know, and it requires a leap of faith.
If that's not enough, we need to suspend our sense of design, because sometimes we pass through a defactoring phase (making the code look worse while doing the same thing). It's as if the whole world turns upside down.
Once you pass these psychological obstacles, you need to go even further by training yourself over and over to think simpler and to ask good questions. I found during my TDD practice that **the bottleneck of TDD is understanding what tests to write**.
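One way to picture the cycle (the `fizzbuzz` example is hypothetical, chosen only because it is a common TDD exercise): the test is written before any production code exists, fails, and then the simplest possible code makes it pass.

```python
# Step 1 (red): write the test first, before any production code exists.
# Running it at this point fails, because fizzbuzz is not defined yet.
def test_one_is_returned_as_string():
    assert fizzbuzz(1) == "1"

# Step 2 (green): write the simplest code that makes the test pass --
# not the full solution we already have in our heads.
def fizzbuzz(n):
    return str(n)

test_one_is_returned_as_string()  # passes now

# Step 3 (refactor): with the test as a safety net, clean up the code.
# Then the hard question returns: which test should come next?
```

The deliberately naive `return str(n)` is the unnatural part: TDD asks us to let the next test, not our upfront solution, force the code to grow.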
And you always wonder: Does emergent design really work? What if it doesn’t work in some case? Does TDD really work? Are there any complex applications built using TDD? Are there studies that support TDD as a best practice?
My take on it all has been pretty simple. I heard about TDD, tried it, and saw its benefits; then I read some more about it, practiced some more, started using it in real applications (which required learning even more), and now I don't want to work without doing TDD. My reasons are:
* TDD makes me advance at a sustainable pace
* TDD helps me avoid analysis paralysis and overdesign
* TDD makes me focus on the thing I need to do
* TDD forces me to work on small things only
* The tests are always there to save me from my mistakes
If we look in the industry, there are companies using TDD extensively. Some of them may just say that they do it but we can’t be sure. However, there are some that we can be fairly certain that apply TDD: Hashrocket, Obtiva, 8th Light. The Ruby on Rails community is especially keen on doing TDD.
Regarding emergent design, we cannot have proof that it always works; from the collective experience, it seems to work in the majority of cases. Algorithms, for example, are an area where emergent design doesn't seem to work very well. However, most applications nowadays use very few algorithms, and in very specific places. The knowledge and experience of the developer is, and will always remain, very important.
## In The End
I don't think this is the end, but I hope I've gathered enough useful information to help you decide how you do testing. Let me know whether this article helped you. If you need help learning unit testing or TDD, [contact me].

[software craftsmanship]: /my-take-on/software-craftsmanship/