11 June 2019
By: Tomas Malmsten

So, What is Developer Testing then?

As hinted at in my last post, I wanted to dig into some of the different developer tests there are: unit tests, integration tests, behaviour tests, specification tests and so on. We will also look at some different disciplines such as TDD, BDD, test first and test later, as well as the concept known as Good Unit Tests (GUTs). I am not an expert in all of the above, but I've worked quite a bit with several of them. So I'll try to shed some further light on the things I know, and hopefully make you more curious about the things I know less about. I'd love to get feedback, either on Twitter or [LinkedIn](linkedin ref), where we can talk about the topics.

Before I go into the above topics, though, I'd like to start with a slightly more in-depth look at what I mean when I talk about developer tests. I find that this is one of the less understood, yet very important, aspects that we as developers need to grasp in order to write good tests.

When I explain testing to people I often draw a distinction between developer tests and tester tests. In my last post I talked about the different drivers testers and developers have when creating tests (it is an important key to what follows, so if you haven't read it, you should). This time I'll dig into what I think developers should focus on when writing tests.

Martin Fowler recently wrote a very good article where he argues that the internal quality of software is not up for debate. The quality he talks about is the quality we as developers aim to uphold with our tests (and our code and everything else, of course). Tests play an integral role in providing this quality in a couple of different ways.

First off, let's state what should be obvious. We cannot refactor code unless we have test coverage that guarantees the refactoring does not change outward behaviour. Refactoring is the act of changing internal structure without changing external behaviour; if external behaviour changes, it is no longer refactoring. So here is one very good reason to have a trustworthy test suite for the code we write. Creating this suite while creating the code helps ensure that the code is designed for testability.
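To make this concrete, here is a minimal sketch of such a safety net, assuming JUnit 5 and a small price calculator I've invented for illustration. The tests pin only the observable behaviour, which is exactly what leaves the internals free to be restructured:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical production class: a 10% discount on orders over 100.
class PriceCalculator {
    double priceFor(double orderTotal) {
        return orderTotal > 100.0 ? orderTotal * 0.9 : orderTotal;
    }
}

class PriceCalculatorTest {

    @Test
    void appliesTenPercentDiscountAboveOneHundred() {
        // Only the observable behaviour is pinned: input in, price out.
        // Whether the discount rule stays inline or gets extracted into
        // strategy objects is an internal detail we are free to refactor.
        assertEquals(180.0, new PriceCalculator().priceFor(200.0), 0.001);
    }

    @Test
    void leavesSmallOrdersUndiscounted() {
        assertEquals(50.0, new PriceCalculator().priceFor(50.0), 0.001);
    }
}
```

As long as these tests stay green, any restructuring of `PriceCalculator` is, by the definition above, a refactoring.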

Second, when we open up a code base and start to read through it to understand where we need to make a change in order to introduce a new function or feature, we need to understand how the different parts interact. This is often hard using a static view of the production code alone. We can lean on ordinary documentation, if there is any, but it is often not helpful enough. So we often end up having to run the application, sometimes in debug mode, to see the actual execution flow. Tests can provide this context. Unit tests demonstrate the use cases for each independent unit and how it interacts with its collaborators. Higher-level module tests document how we integrate the different units into a module, mocking out other modules but using the injection mechanism used in production to wire the current module. This is a kind of documentation that is extremely hard to write, and to understand, in anything but test form. Tests run the code and clearly show the flow of the application. So here is one of the key communication points we need to keep in mind when writing tests: the reader should be able to use them as living documentation.
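As an illustration of what I mean, a unit test can read like a worked example of how a unit talks to its collaborator. A minimal sketch, with names I've made up, assuming JUnit 5 and Mockito:

```java
import org.junit.jupiter.api.Test;
import static org.mockito.Mockito.*;

// Hypothetical collaborators: the test documents how OrderService
// uses its PaymentGateway when an order is placed.
interface PaymentGateway {
    void charge(String customerId, double amount);
}

class OrderService {
    private final PaymentGateway gateway;

    OrderService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    void placeOrder(String customerId, double amount) {
        gateway.charge(customerId, amount);
    }
}

class OrderServiceTest {

    @Test
    void placingAnOrderChargesTheCustomerThroughTheGateway() {
        PaymentGateway gateway = mock(PaymentGateway.class);
        OrderService service = new OrderService(gateway);

        service.placeOrder("customer-42", 99.50);

        // Read as documentation: placing an order results in exactly one
        // charge against the gateway, carrying the order's amount.
        verify(gateway).charge("customer-42", 99.50);
    }
}
```

A new reader who opens this test learns the interaction between the two units without ever starting a debugger.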

Thirdly, detailed specifications for a system are often a pain in the back to maintain and read. Where we have many detailed specifications that are important to understand, we should use specification-like tests to document them. Tools like RSpec, SpecFlow and Cucumber were created for this. They give us a running specification that catches us when we break it. They also give us a granular specification we can trace: from plain spoken and written language, expressed the way non-technical stakeholders explain their problems, down to the code that implements each part of the specification.
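To show the kind of traceability I mean, here is a rough sketch of how a Cucumber scenario maps plain language onto Java step definitions. The feature text and the domain (memberships, shipping fees) are invented for illustration:

```java
// Feature file (plain language, written with the business):
//
//   Scenario: A gold member gets free shipping
//     Given a gold member with an order worth 30 euros
//     When the order is shipped
//     Then no shipping fee is charged

import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class ShippingSteps {
    private Order order;

    @Given("a gold member with an order worth {int} euros")
    public void aGoldMemberWithAnOrderWorth(int amount) {
        order = new Order(Membership.GOLD, amount);
    }

    @When("the order is shipped")
    public void theOrderIsShipped() {
        order.ship();
    }

    @Then("no shipping fee is charged")
    public void noShippingFeeIsCharged() {
        // Each line of the specification traces to exactly one step here.
        assertEquals(0, order.shippingFee());
    }
}

// Minimal hypothetical domain used by the steps above.
enum Membership { GOLD, STANDARD }

class Order {
    private final Membership membership;
    private final int amount;

    Order(Membership membership, int amount) {
        this.membership = membership;
        this.amount = amount;
    }

    void ship() { /* nothing observable to do in this sketch */ }

    int shippingFee() {
        return membership == Membership.GOLD ? 0 : 5;
    }
}
```

The scenario stays readable for the stakeholder, while every phrase in it is executable and traceable to code.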

There are several other things we use developer tests for that could be mentioned. But the above three are points that seem, to me, not to have been clearly communicated to many developers. I've worked with many teams who write tests, both first and after, who haven't understood the above use cases for tests.

Perhaps the most misunderstood distinction above is refactoring. Most people I meet use the term to mean any change other than the change specific to the feature at hand, even changes that alter external behaviour. I think it is dangerous to water down such an important concept. If we need a name for all the changes we make that are not specific to the current feature, we should invent a new one. This is not, for me, about playing language police. It is about the importance of the concept of refactoring. Certain concepts are very important. A list is not a tree, but both are data structures. A change of external behaviour is not refactoring, but both are changes.

The other two points are often completely new to teams I work with. They see developer tests as things that should ensure their changes do not cause regressions. This purpose alone often leads to test code that is a tangled mess, difficult to navigate and change. I've been there and done that, so I know the pain. And even when the test code is well structured, and perhaps even clean and DRY (Don't Repeat Yourself), it's still really hard to navigate. When we switch focus from regression testing to documentation we can also start to appreciate that test code should not strive to be DRY but DAMP (Descriptive And Meaningful Phrases). With this focus our test code becomes the place where we start to explore the production code base, not where we end up when our changes break the tests.
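A small sketch of the difference, with an account domain I've invented and JUnit 5 assumed. The DAMP version repeats a little setup so that each test tells its whole story in one place:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class AccountTest {

    // A DRY version might hide the account setup in a shared helper or a
    // @BeforeEach block, forcing the reader to jump around the file. Here
    // each test is a complete, self-explanatory example on its own.

    @Test
    void withdrawingWithinTheBalanceReducesTheBalance() {
        Account account = new Account(100);
        account.withdraw(40);
        assertEquals(60, account.balance());
    }

    @Test
    void withdrawingMoreThanTheBalanceIsRejected() {
        Account account = new Account(100);
        assertThrows(IllegalArgumentException.class, () -> account.withdraw(150));
    }
}

// Minimal hypothetical class under test.
class Account {
    private int balance;

    Account(int openingBalance) { balance = openingBalance; }

    void withdraw(int amount) {
        if (amount > balance) throw new IllegalArgumentException("insufficient funds");
        balance -= amount;
    }

    int balance() { return balance; }
}
```

The duplication costs a couple of lines but buys a test class a newcomer can read top to bottom as documentation.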

The regression aspect also applies to how many use BDD and specification testing. BDD was created to facilitate communication between the business and the developers: to help define the domain language (as in DDD) and tie it to the production code. It is basically the executable glue between common language and programming language. It was not intended as a regression testing tool, i.e. it was not created as a tool for tester tests. When used as intended it is a marvellous way to facilitate and document this communication. The business stakeholders can read, in plain language, exactly what is going on in the system (if they want to), and the developers can look up and understand the various domain concepts in context. The language used should, as far as possible, be reflected in the code. This is one of the tools we can use to define the ubiquitous language.
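To illustrate what "reflected in the code" can look like, here is a sketch with a domain I've made up. When the stakeholder's sentence and the type names line up, the code itself becomes part of the ubiquitous language:

```java
// Business language: "A policy lapses when a premium is overdue by more
// than 30 days." The code uses the same words: Policy, Premium, lapse.
import java.time.LocalDate;

class Premium {
    private final LocalDate dueDate;

    Premium(LocalDate dueDate) { this.dueDate = dueDate; }

    boolean overdueByMoreThanThirtyDaysOn(LocalDate date) {
        return dueDate.plusDays(30).isBefore(date);
    }
}

class Policy {
    private final Premium premium;
    private boolean lapsed;

    Policy(Premium premium) { this.premium = premium; }

    void reviewOn(LocalDate date) {
        // The domain rule reads almost like the stakeholder's sentence.
        if (premium.overdueByMoreThanThirtyDaysOn(date)) lapsed = true;
    }

    boolean hasLapsed() { return lapsed; }
}
```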

None of the above is in direct conflict with tester testing. The tests created with the above points in mind can be used to provide the proof testers need as well; it is a matter of providing output that serves both purposes. But the output from developer tests should be presented in such a way that it is easy for developers to understand and locate a failure, both in their development environment and on the CI/CD server, and this often does not satisfy testers' need for proof. So some care needs to be taken to provide both. Also, if developer tests are forced to provide all the proof testers require, there is a risk that the developer tests will suffer. In such cases I think it's better to find the places where the conflicts arise and add extra tester tests there. However, I should probably state that this is a strategy I have so far not proven, so if any of you reading this have more experience I would love to hear from you (see the links above).

Tags: Testing Agile Software Craft