Simple Design Passes Its Tests

Read this article in German | Previous Part: Changes and Simplicity | First Part: Overview

Every design must serve a purpose. Design that does not serve a purpose is not simple - It is useless. So we need tests to verify that our design still serves its purpose. And we want to automate those tests so we can run them after every design change.

Those tests verify that the system we build is correct from a technical point of view. They test whether every part of the design plays its role correctly and whether the different parts work together as they should. We also need another type of test: tests that verify that the system has the correct functionality, from a user's point of view. Make sure that you implement both types of tests and that you never mix them.

In this third part of the series about "Simple Design", I will show you how tests will help you to achieve simple design.

Tests to Guide the Implementation

How do you ensure that you have a testable system? You write the tests before the implementation. Then you have no choice but to write the implementation in a testable way.
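Here is a minimal sketch of what "test first" can look like in Java with JUnit; the PriceCalculator class and its discount rule are invented for illustration:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {
    @Test
    public void appliesTenPercentDiscountToLargeOrders() {
        // This test is written before PriceCalculator exists. It forces the
        // class to be easy to create and easy to use in a test.
        PriceCalculator calculator = new PriceCalculator();

        assertEquals(180.0, calculator.priceWithDiscount(200.0), 0.001);
    }
}

// The simplest implementation that makes this test pass (in its own file):
class PriceCalculator {
    double priceWithDiscount(double amount) {
        return amount * 0.9;
    }
}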

But isn't this expensive?

Writing test cases before writing the code doesn't take any more effort than writing test cases after the code.

-- Steve McConnell, "Code Complete 2", page 503

When we assume (for a moment) that we need the tests anyway, it is not more expensive. In this case, writing the tests first is probably even cheaper: When you have a small, executable test, finding and fixing defects is easier. But do you need all the tests? Yes, you do, because you will have to change your software, and you need a quick way to make sure you did not introduce any defects with the change. Therefore you need lots of automated tests with different levels of detail.

Debugging is expensive. Very expensive. Even if you are good at it. Finding a problem in the debugger just takes a lot of time. You have to navigate to the point in the code you want to check, make sure the system is in the right state, and test your hypothesis. And when you do not find the problem, you have to start over, because you cannot step back in the debugger. Again and again. When you write tests first, you will waste less time debugging. You will start the debugger less often. And when you do, you will be faster, because you don't have to debug the whole system, but only a small, focused test.

Defects are expensive. The longer we don't find them, the more expensive they get: More and more code will depend on the defect. When we finally fix the defect, it will be harder to make sure we did not introduce any side effects. When you work on a system that has many long-lived defects, you'll probably notice that every time you fix a defect, you introduce a new one. Defects are also expensive for your users, because they did not get the software they wanted. They may have to find workarounds. They'll get future features later, because you spend time fixing defects. Defects are not only expensive for us in development, but also for our users! You need lots of automated tests with different levels of detail to increase your chances that you'll ship correct software.

Microsoft did a study with several teams that used Test Driven Development. Their finding was that writing tests first was not significantly more expensive and had a big positive effect on quality:

Our experiences and distilled lessons learned, all point to the fact that TDD seems to be applicable in various domains and can significantly reduce the defect density of developed software without significant productivity reduction of the development team.

-- Nachiappan Nagappan, E. Michael Maximilien, Thirumalesh Bhat, Laurie Williams, 2008

Different Types of Tests

You need different types of automated tests to make sure that your system does the right thing and is technically correct. You can classify your tests along different dimensions.

The "layer" of the test:

  • UI: Those tests drive an application through the UI and are truly end to end.
  • Service: Those tests drive an application through a service layer or through an API. They are "almost" end to end. They often run faster and are more stable than "UI" tests.
  • Unit: Those tests only drive a small part of the application through its public interface. They often run even faster and are even more stable than "Service" tests.

Agile Testing Pyramid

The "Agile Testing Pyramid" suggests that you should have lots of "Unit" tests, because they are fast and stable. You should only have a very small number of "UI" tests because they are slow and brittle. If you need tests that exercise the system from end to end, you should write them on the "Service" layer - But keep the number of those tests small too.

What is being tested:

  • Functional Tests: Verify that the system contains the right features and behaves correctly from a business point of view. Those tests and their results make sense to the developers and users of the system. They help the developers to build the right system.
  • Technical Tests: Verify that the parts of the system are technically correct. Those tests and their results make sense to the developers only. They help the developers to build the system right.

Granularity of Tests

The granularity of the part under test:

  • Unit Tests: Test a small, coherent part of the system (often a single class) decoupled from the rest of the system.
  • Integration Tests: Test if our system interacts correctly with the outside world. Ideally they test only one class of our system and how it interacts with an external system or a user.
  • Integrated Tests: Test a larger part of our system, often together with external systems. The "Service" and "UI" tests from the agile testing pyramid are integrated tests, but you could also have integrated tests that test smaller parts of your system. Some people call those tests "Integration Tests" too, but I think this is confusing.

Try to make your "Technical Tests" either "Unit Tests" or "Integration Tests". Every test should have only a single reason why it can fail. There should only be one test for every reason to fail - that means, no overlapping responsibilities in tests. This makes diagnosing test failures much easier. You'll also have fewer "false negative" test failures during refactoring when you follow these rules. Avoid "Integrated Tests" here, because an integrated test has - by definition - many reasons why it can fail: Every unit that plays a role in the integrated test can fail!
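To make the distinction concrete, here is a rough sketch; UserService, UserRepository, JdbcUserRepository and the testDataSource() helper are all invented names. The first test is a "Unit Test" of one small class with the repository replaced by a test double; the second is an "Integration Test" that checks only the repository against a real database.

import org.junit.Test;
import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

// Unit test: one small unit (UserService), decoupled from the rest of the system.
public class UserServiceTest {
    @Test
    public void knowsExistingUsersByName() {
        UserRepository repository = mock(UserRepository.class);
        when(repository.findByName("alice")).thenReturn(new User("alice"));

        UserService service = new UserService(repository);

        assertTrue(service.userExists("alice"));
    }
}

// Integration test: only the repository, but talking to a real database.
public class UserRepositoryIntegrationTest {
    @Test
    public void storesAndLoadsUsers() {
        // testDataSource() is a hypothetical helper that points to a test database.
        UserRepository repository = new JdbcUserRepository(testDataSource());

        repository.save(new User("alice"));

        assertEquals("alice", repository.findByName("alice").getName());
    }
}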

Functional tests should make sense to your users and customers. This means that they often have to be "Integrated Tests". But they still should test the smallest possible unit. And they should still have only one reason to fail, so that a failure produces a meaningful result. Try to automate as many of your functional tests as possible through the "Service" layer. Avoid automating functional tests through the UI. Those tests would almost certainly have more than one reason to fail - for example, whenever your UI changes.

You should test your UI, but avoid integrated UI tests (That is, end-to-end tests through the "UI" layer). Make sure to de-couple the UI logic from the rest of your system. Then test the UI and UI-Logic with "Integration Tests" (i.e. tests that only test a single class of your system together with the UI). Then automate the functional tests through the "Service" layer. You will be more flexible and have fewer false-negative test failures this way.
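A sketch of what this could look like with a Model-View-Presenter style split; UserListPresenter, UserListView and UserService are invented names. The UI logic is tested in isolation, without starting the real UI:

import org.junit.Test;
import static org.mockito.Mockito.*;

public class UserListPresenterTest {
    @Test
    public void showsAnErrorMessageWhenLoadingUsersFails() {
        UserListView view = mock(UserListView.class);
        UserService service = mock(UserService.class);
        when(service.loadUsers()).thenThrow(new RuntimeException("service unavailable"));

        new UserListPresenter(view, service).loadUsers();

        // The presenter (UI logic) is tested against the view interface only.
        verify(view).showError("Could not load users");
    }
}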

Automated Tests to Detect Regressions

You have to change existing software all the time. From minor refactorings to huge behavior changes, changing software is what you do day-to-day. You add new classes, change existing code, delete code and whole files, move stuff around, rename things, and so on.

Automated tests can provide the safety net you need to make sure that you don't introduce errors in this process. But you have to write your automated tests in a certain way to achieve this, otherwise they will do more harm than good.

You want your tests to turn red when you did something wrong, and stay green as long as everything works as it is supposed to. This sounds pretty simple and obvious, but it is very hard to achieve. Have you ever refactored some code, and although everything was still working correctly, 15 tests were red? Right. So those 15 tests have violated this simple rule. What did they do wrong?

Tests depending on a concrete implementation: Some tests depend on subtle details of a concrete implementation. For example, they depend on the fact that you check the validity of function parameters with a certain library. When you switch the library, those tests will fail. Do not "assert" or "verify" implementation-specific details. Test for results!
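A sketch of the difference; RegistrationService and its result type are invented names. The robust version checks the observable result, so it keeps passing when the validation library is swapped out:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class RegistrationServiceTest {
    @Test
    public void rejectsInvalidEmailAddresses() {
        RegistrationService service = new RegistrationService();

        RegistrationResult result = service.register("alice", "not-an-email");

        // Assert the result, not which validation library produced it.
        assertEquals(RegistrationResult.INVALID_EMAIL, result);
        // Brittle alternative (avoid): verify(emailValidatorLibrary).validate("not-an-email");
    }
}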

More than one reason to fail: For example, a test that will fail when the result of a function call is wrong (what the test is supposed to verify), but also when the result is null. There are lots of variations of this problem. Like tests with 10 asserts. Or a single test that tests the same method multiple times with different input data. Only test one specific case - Try to have only one "assert" or "verify" per test!
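For example, instead of one test that checks everything about a newly created user with three asserts, write one focused test per aspect; UserService and User are invented names:

import org.junit.Test;
import static org.junit.Assert.*;

public class CreateUserTest {
    private final UserService service = new UserService();

    @Test
    public void newUsersKeepTheNameTheySignedUpWith() {
        assertEquals("alice", service.createUser("alice", "alice@example.com").getName());
    }

    @Test
    public void newUsersKeepTheEmailAddressTheySignedUpWith() {
        assertEquals("alice@example.com", service.createUser("alice", "alice@example.com").getEmail());
    }

    @Test
    public void newUsersAreActiveImmediately() {
        assertTrue(service.createUser("alice", "alice@example.com").isActive());
    }
}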

Overlapping reasons to fail: I often see test suites where a single problem might cause many different tests to fail. A simple edit of a single line might result in 15 red tests. There should be only one test for every aspect of correctness!

Testing state when you are interested in behavior: A test executes the code under test, and then checks some variable that should have a certain value. But the test is not really interested in the value, only that some behavior has happened that is supposed to set the value. This was the default way to write tests before there were mocking frameworks. By doing this, you reduce your options for refactoring: You cannot move the code that sets the variable, because the test would fail, even if the behavior was still correct. Don't use "assert" when you are interested in interactions - Use "verify"!
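For example (OrderService and Warehouse are invented names), a test that is interested in the interaction with the warehouse verifies exactly that interaction, instead of asserting on some flag the call happens to set:

import org.junit.Test;
import static org.mockito.Mockito.*;

public class OrderServiceTest {
    @Test
    public void reservesTheItemsInTheWarehouseWhenAnOrderIsPlaced() {
        Warehouse warehouse = mock(Warehouse.class);
        OrderService service = new OrderService(warehouse);

        service.placeOrder(new Order("book", 1));

        // We are interested in the behavior (the interaction), so we verify it directly.
        verify(warehouse).reserve("book", 1);
    }
}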

Testing behavior when you are interested in state: A test that is supposed to check if the system is in the correct state, but does so by verifying that some behavior (that is supposed to set the state) has happened. This is often caused by an over-use of mocking frameworks. Again, this reduces your options for refactoring: You cannot move the code that sets the state anymore. Don't use "verify" when you are interested in state - Use "assert"!
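The mirror image (ShoppingCart and Item are invented names): when the state is what matters, assert on the state and leave the internal interactions alone:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ShoppingCartTest {
    @Test
    public void addingAnItemIncreasesTheTotal() {
        ShoppingCart cart = new ShoppingCart();

        cart.add(new Item("book", 25.0));

        // We are interested in the resulting state, not in which internal
        // methods were called to reach it.
        assertEquals(25.0, cart.total(), 0.001);
    }
}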

Testing state and behavior in the same test: Those tests have multiple reasons to fail, are hard to understand and produce lots of false negatives. Always avoid them. Don't use "assert" and "verify" in the same test!

Tests as Documentation

It is not enough to write code and create a simple design. You also have to communicate the design to your team members - And to your future self!

Teams often use code comments, diagrams, wiki pages or documents to communicate and document their software. This is very dangerous: All those kinds of documentation will quickly become outdated. No matter how hard you try to keep it up to date, some parts of your documentation will "degrade". Soon you will have documentation that lies: Documentation that describes the system as it was, not as it is. This documentation is worse than useless: It leads developers down the wrong path. First they waste time finding the correct part of the documentation, then they waste more time figuring out why and how the code is different.

So how can you document your code and your design? You keep your design simple and your code easy to read. And you use tests as "living documentation": Tests as documentation cannot be outdated. If they were outdated, they would fail!

But you have to keep a few things in mind to be able to use your tests as documentation.

Keep your functional tests organized. Your functional tests document what your system can do. Make it easy to get a quick overview and to find out more about each feature.

Keep your functional tests simple and concrete. Your functional tests should be based on concrete, simple examples. They should describe exactly what the system does in a certain situation. They should also hint why the expected response of the system is the correct response.
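For example, a functional test built on one concrete example makes both the behavior and the reason for it visible. CheckoutService, the other class names and the 100 EUR free-shipping rule are invented for illustration:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class FreeShippingTest {
    private final CheckoutService checkoutService = new CheckoutService();

    @Test
    public void ordersOver100EuroShipForFree() {
        Order order = new Order();
        order.add(new Item("keyboard", 120.0));

        Invoice invoice = checkoutService.checkout(order);

        // 120 EUR is above the (hypothetical) 100 EUR free-shipping threshold,
        // so the example also hints at why free shipping is the correct response.
        assertEquals(0.0, invoice.shippingCosts(), 0.001);
    }
}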

Keep all the unit tests of a certain unit in the same file. For example, in Java there should be exactly one test class for an implementation class (or a small group of implementation classes). When the test class becomes too big, take that as a hint to split up the unit under test.

Use meaningful test names. The name of a test should describe the expected behavior of the system, not what the test does. When your test names describe what the tests verify, your tests are not a documentation of your design, but a documentation of themselves. When the test fails, you should know what problem has caused the failure by simply looking at the test name. A good test name describes the context of the test, what action the test executes and what the expected response of the system is.

public class NewUserServiceTest {
	@Test
	public void createsNewUserWhenAllDataIsValid() {
		//...
	}
}

When you name your tests like this, you can use tools like agiledox to extract the documentation from your tests.

When you follow those rules, it will be easier to use your tests as documentation. But there are no guarantees. You should still ask yourself from time to time: "Are my tests a good documentation of my design or code?" And if the answer is not a clear "Yes!", refactor.

Conclusion

Tests are an important part of simple design. You should automate as many tests as possible to get really fast feedback.

They help you verify that your design serves its purpose. They are your safety net when you change or add something. They help you drive your design when you write them first. And they are your living documentation to communicate your design to your team and your future self.

The quality and design of your tests might be even more important than the quality and design of your production code.
