Shutl

Testing and Refactoring Legacy Code

You always use Test-Driven Development when developing a new feature for your favourite application, right?

But today I’d like to talk about a different way of using tests; I’ll walk you through a recent experience that required working with legacy code.

I’ll try to present a hands-on exercise, extracted from a real-life context, where we use tests to understand, refactor and improve the design of a legacy codebase.

In other words, we’ll learn how to regain ownership and control.

What is Legacy Code

When I use the expression “Legacy Code” I simply mean a piece of code for which we don’t have tests.

For an engineer at Shutl, this would be an unfamiliar sight; we fully embrace Test-Driven Development and use it for all our applications as standard.

However, it’s extremely unlikely that you’ll only ever work with code you’ve created yourself; dealing with legacy code is inevitable, for example when working with libraries provided by a third party.

Even when you have some confidence that the code “works”, the absence of a test suite usually represents a big issue.

In fact, it may be hard to understand, just by looking at the code, what it does in all possible scenarios. To illustrate this problem, imagine if you will a very long method with several nested “if” statements and loops. Similarly, the lack of tests is also a problem when you need to modify the behaviour, even slightly. How do you know that the code still works? How will your change affect other components using the same library?

The example I’m about to show is extracted from a real-life project. The classes have been renamed for anonymity, and the code has been condensed and made more concise. I’m using Java here, but most of the principles are valid for other languages. You can find the project on GitHub; you are welcome to import it into your favourite IDE, check out the first commit and use it for your next Kata.

The Price Client

A food market needs to publish the prices of their products in near real time, so they built an online service for their customers. They also offer an API and release a client library, so that their biggest customers can easily integrate their own software with the service, for things like constant price monitoring.

The library is composed of some accessory classes and a main class called Client:
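
The original listing isn’t reproduced here, but from the description the library has roughly this shape. Client, getRates(), RateRequest, RateResponse and ConfigurationService are named in the post; all the bodies below are assumptions for illustration:

```java
// Hypothetical reconstruction of the library's shape. The real Client
// performs an HTTP call and parses JSON; that is elided here.

class RateRequest {
    final String product;
    RateRequest(String product) { this.product = product; }
}

class RateResponse {
    private final boolean successful;
    RateResponse(boolean successful) { this.successful = successful; }
    boolean isSuccessful() { return successful; }
}

class ConfigurationService {
    // In the real library these contact a remote configuration service.
    static String getPricesApiEndpoint() { return "http://localhost:4567/prices"; }
    static String getApiToken() { return "dummy-token"; }
}

class Client {
    // The only public entry point of the library.
    public RateResponse getRates(RateRequest request) {
        String endpoint = ConfigurationService.getPricesApiEndpoint();
        String token = ConfigurationService.getApiToken();
        // ... HTTP call to `endpoint` using `token`, status-code check and
        // JSON parsing elided from this sketch ...
        return new RateResponse(true);
    }
}
```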

As you can see, there is only one public method, getRates(), which takes a RateRequest object as a parameter and returns a RateResponse as a result.

So far so good, but just by skimming through the code it is not immediately obvious what it does. For example, which host does it try to connect to? I might want to know that if I need to open my company firewall to access the API endpoint. What happens in the case of an error? What does the response look like when there is a connectivity issue, or if my access token expires? In order to answer these questions we need to come up with a strategy.

Retrofitting the tests

The plan consists of writing a series of tests to achieve full coverage of the Client class. I use a coverage tool to measure the coverage of the class under test (Cobertura and EclEmma are good choices), and also to check the actual execution path while running a single test.

When writing the tests I set myself a rule: I cannot modify the “code” (from now on I’ll use that term to refer to the implementation), with one notable exception: some simple refactoring steps are allowed, like “Rename Variable” or “Extract Method”, provided that I can safely perform them in an automated manner using the IDE.

The first test

I start with the simplest test I can write, as follows:
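
A sketch of that first test, shown framework-free so it is self-contained (the real project would use JUnit); the RateRequest argument and the assertion are pure guesses at this stage:

```java
// Hypothetical first test. All types are stand-ins for the library's own.

class RateRequest {
    final String product;
    RateRequest(String product) { this.product = product; }
}

class RateResponse {
    private final boolean successful;
    RateResponse(boolean successful) { this.successful = successful; }
    boolean isSuccessful() { return successful; }
}

class Client {
    RateResponse getRates(RateRequest request) {
        // Stand-in body; the real implementation talks to remote services.
        return new RateResponse(true);
    }
}

class TestClient {
    // Deliberately generic name: we don't yet know what the code really does.
    static void testGetRates() {
        Client client = new Client();
        RateResponse response = client.getRates(new RateRequest("apples"));
        if (!response.isSuccessful()) {
            throw new AssertionError("expected a successful response");
        }
    }
}
```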

I create an instance of Client, call the only public method and then check the result.
When I run the test, I get an exception at this point in the code:

ConfigurationService.getPricesApiEndpoint() raises a java.net.ConnectException. A few moments later, while browsing the codebase, one folder catches my attention: dummyservers. Taking a closer look, I figure out that it contains two Sinatra applications: the first implements a dummy version of the configuration service, the other a dummy implementation of the API server. I read the docs and start both servers. I run the test again and this time…green!

I run my code coverage tool and here’s the output:

[Coverage report for the Client class: coverage_01]

I’m happy that I have some evidence that Client can work. I’m not so happy because I realise that my current testing strategy is not very effective. As it currently stands, I’m running an end-to-end test, and in order to do that I need to run all the external services that Client wants to talk to. What happens if I want to test a scenario where the server responds with an error, like “Internal Server Error”? I don’t really want to run a different version of the dummy server for each case; that would be slow, brittle and difficult to maintain.

I really want to test Client in isolation, so I need a different strategy.

Unit testing Client.java

I stop the execution of the dummy servers and re-run the test to quickly check that the network exception is still there. This time I circumvent the exception:

In the Client class I extract a protected method:
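
The extracted method might look like this; the stubbed ConfigurationService stands in for the real one, which hits the network:

```java
// Sketch of the "Extract Method" result inside Client. Only the relevant
// fragment of the class is shown.

class ConfigurationService {
    static String getPricesApiEndpoint() {
        // The real method contacts the remote configuration service and can
        // throw java.net.ConnectException.
        return "http://localhost:4567/prices";
    }
}

class Client {
    // Extracted so that a subclass can override it in the tests.
    protected String getPricesApiEndpoint() {
        return ConfigurationService.getPricesApiEndpoint();
    }
}
```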

Inside the test class, I create a new class called TestableClient, that extends the original client and overrides the newly created method to return a String. I now use the testable version of the class in the test:
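
TestableClient might look like this (the canned endpoint value is an assumption):

```java
// Hypothetical TestableClient: it extends Client and overrides the newly
// extracted method so that the test never touches the network.

class Client {
    protected String getPricesApiEndpoint() {
        // The real version delegates to ConfigurationService over the network.
        throw new IllegalStateException("no network access in tests");
    }
}

class TestableClient extends Client {
    @Override
    protected String getPricesApiEndpoint() {
        return "http://fake-endpoint"; // canned value, no network involved
    }
}
```

In the test, `new Client()` simply becomes `new TestableClient()`.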

The test is still red, but it now fails at a later stage.

This failure is similar to the previous one, so I get around it in the same way: I extract another protected method in Client that wraps ConfigurationService.getApiToken(), and override it in TestableClient. I can now re-run the test and look for the next failure.

Before doing that, let’s pause a moment to consider what is happening.

This approach is powerful because it doesn’t require previous knowledge about the code; I don’t need to understand the code before writing the test. This is possible because writing a test is a pure function of the “structure” of the code, not of “what the code does”. Here, I’m incrementally acquiring knowledge of what the code does by adding constraints, one at a time, to force the execution of the code to a certain path.

In the case of nested if-statements the best strategy is to add test coverage by traversing the shortest branch first, which is easier to achieve because fewer conditions must be satisfied in order to reach the end of that branch. In this example I am lucky: I don’t have nested conditions.

Back to the test: the next failure is at the line where the response status is read.

This is interesting because the “status” variable is used in an if statement immediately after. I decide to force the execution into the “else” branch by adopting the usual trick: TestableClient overrides getStatus() to return 500.

The test is still red, but now the failure happens in the expectation: I expect a successful response but instead get null. I update the expectation and the test turns green.

I go back and give the test an appropriate name now that I have a somewhat deeper understanding of the code:
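
The renamed test might look something like this; the names are assumptions, and the real assertions live in the repository:

```java
// Sketch of the test at this point, with a name that states the scenario.

class Client {
    // In the real class this reads the HTTP status of the response.
    protected int getStatus() {
        return 200;
    }

    Object getRates() {
        if (getStatus() == 200) {
            return new Object(); // stand-in for a parsed RateResponse
        } else {
            return null;         // the branch we have just discovered
        }
    }
}

class TestableClient extends Client {
    @Override
    protected int getStatus() {
        return 500; // force the execution into the "else" branch
    }
}

class TestClient {
    // Given the server replies with an error status,
    // When we ask for the rates,
    // Then the client returns null.
    static void getRatesReturnsNullWhenServerRespondsWithAnError() {
        Client client = new TestableClient();
        if (client.getRates() != null) {
            throw new AssertionError("expected null");
        }
    }
}
```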

This aspect is very important because a test should read like a story: “Given some preconditions are met, When I perform an action, Then something happens”.

More tests

Back to Client.java, in order to achieve full coverage I need to cover two more branches, one for a status code of 200, the other for a status code of 403, plus one test to simulate an exception.

I add the following three tests:
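
A sketch of the three remaining tests; what each status code actually maps to in the real Client is an assumption here, so check the repository history for the real assertions:

```java
// Hypothetical branch behaviour and the three tests covering it.

class Client {
    protected int getStatus() {
        return 200; // the real class reads this from the HTTP response
    }

    String getRates() {
        int status = getStatus();
        if (status == 200) {
            return "{\"rates\":[]}"; // happy path: the parsed rates
        } else if (status == 403) {
            return "FORBIDDEN";      // e.g. an expired access token
        } else {
            return null;             // any other error
        }
    }
}

class TestClient {
    static void returnsTheRatesWhenStatusIs200() {
        Client client = new Client() {
            @Override protected int getStatus() { return 200; }
        };
        if (client.getRates() == null) throw new AssertionError();
    }

    static void reportsForbiddenWhenStatusIs403() {
        Client client = new Client() {
            @Override protected int getStatus() { return 403; }
        };
        if (!"FORBIDDEN".equals(client.getRates())) throw new AssertionError();
    }

    static void propagatesTheExceptionWhenTheCallBlowsUp() {
        Client client = new Client() {
            @Override protected int getStatus() { throw new RuntimeException("boom"); }
        };
        try {
            client.getRates();
            throw new AssertionError("expected an exception");
        } catch (RuntimeException expected) {
            // in this sketch the exception simply propagates
        }
    }
}
```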

I manage to achieve full coverage.

The procedure is always the same, so I will skip the steps. If you want to see them, check the history on the repo. This is how TestClient looks at this stage.

Through the process of writing these tests I manage to identify two hard-coded dependencies: one is ConfigurationService, the other is JsonApiClient (where the jsonResponse object comes from). I’ll shortly show how we can get rid of these dependencies.

Improving the design

I’ve learnt a lot about my legacy class and I have full test coverage, which is nice. So what’s next?

As I re-read my changes I am bugged by the fact that in the test I still have a “testable” version of the original class. This is a symptom of “coupling”.

If you remember the first test we wrote earlier, we had to override getPricesApiEndpoint() in order to circumvent the dependency on ConfigurationService.

If only there were a way of injecting this dependency into the Client class from the outside world; then in the test I could inject a fake version of ConfigurationService and I wouldn’t need TestableClient any more.

Well as it turns out, that’s entirely possible.

Dependency Injection

In Client I introduce a member variable and an alternative constructor:
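
A sketch of the change; the names are assumptions, and the field is left package-private here only so the sketch is easy to inspect:

```java
// Sketch of the member variable and the two constructors.

class ConfigurationService { }

class Client {
    final ConfigurationService configurationService;

    // Original constructor: existing callers keep the current behaviour.
    Client() {
        this(new ConfigurationService());
    }

    // Alternative constructor: lets callers (read: tests) inject their own.
    Client(ConfigurationService configurationService) {
        this.configurationService = configurationService;
    }
}
```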

The second constructor is the one that allows the injection of an alternative ConfigurationService object. Similarly, I create the same constructors in TestableClient:
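
Sketched out, TestableClient simply mirrors the two constructors (names assumed, as before):

```java
// Hypothetical TestableClient delegating to Client's two constructors.

class ConfigurationService { }

class Client {
    final ConfigurationService configurationService;
    Client() { this(new ConfigurationService()); }
    Client(ConfigurationService configurationService) {
        this.configurationService = configurationService;
    }
}

class TestableClient extends Client {
    TestableClient() {
        super();
    }

    TestableClient(ConfigurationService configurationService) {
        super(configurationService);
    }
}
```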

I run all the tests to make sure I didn’t break anything.

I now have an instance of ConfigurationService inside the Client class, but we’re still missing something.

Fighting against Static Methods

If we look at getPricesApiEndpoint() in Client, we see that it calls a static method on ConfigurationService, not an instance method. In terms of testability, static methods are a big problem.

For simplicity I assume I cannot mock static methods (even though there may be a way, I don’t want to go down that path; that way madness lies).

What I need is an instance method, because instance methods can be easily mocked.

In ConfigurationService I can create an instance method that delegates the behaviour to the static one:
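
For example (the instance method name is an assumption, mirroring the apiToken() naming):

```java
// Sketch of the delegating instance method. The static method is left
// untouched because other code may still call it.

class ConfigurationService {
    static String getPricesApiEndpoint() {
        return "http://localhost:4567/prices"; // stubbed; the real one is remote
    }

    // New instance method: same behaviour, but overridable and mockable.
    String pricesApiEndpoint() {
        return getPricesApiEndpoint();
    }
}
```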

As a side note, I don’t want to change the original method by simply removing the “static” keyword, because it may still be used in other parts of the codebase.

I use the same technique for the other static method, getApiToken(), and introduce an instance method, apiToken().

Just a few more steps to go.

In the test, I replace the instantiation of TestableClient by using the alternative constructor and I inject a mocked version of ConfigurationService:
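
The real tests may use a mocking library such as Mockito; a hand-written fake shows the same idea and keeps this sketch self-contained (all names assumed):

```java
// Sketch of the injection step, with a fake standing in for a mock.

class ConfigurationService {
    String pricesApiEndpoint() { return "http://real-endpoint"; }
    String apiToken() { return "real-token"; }
}

class FakeConfigurationService extends ConfigurationService {
    @Override String pricesApiEndpoint() { return "http://fake-endpoint"; }
    @Override String apiToken() { return "fake-token"; }
}

class Client {
    final ConfigurationService configurationService;
    Client(ConfigurationService configurationService) {
        this.configurationService = configurationService;
    }
}

class TestableClient extends Client {
    TestableClient(ConfigurationService configurationService) {
        super(configurationService);
    }
}
```

In the test setup, `new TestableClient()` becomes `new TestableClient(new FakeConfigurationService())`, or `new TestableClient(mock(ConfigurationService.class))` when using Mockito.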

Then I remove the methods getPricesApiEndpoint() and getApiToken() from TestableClient.

The tests are now failing with a “Connection refused” exception. This happens because I’m not using the injected instance of ConfigurationService inside Client.

To do that, in Client I change getPricesApiEndpoint() and getApiToken() to use the instance variable:
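
A sketch of the resulting shape (method bodies and names are assumptions, as before):

```java
// Both methods now go through the injected instance instead of the statics.

class ConfigurationService {
    String pricesApiEndpoint() { return "http://localhost:4567/prices"; }
    String apiToken() { return "dummy-token"; }
}

class Client {
    private final ConfigurationService configurationService;

    Client(ConfigurationService configurationService) {
        this.configurationService = configurationService;
    }

    protected String getPricesApiEndpoint() {
        return configurationService.pricesApiEndpoint(); // was a static call
    }

    protected String getApiToken() {
        return configurationService.apiToken();          // was a static call
    }
}
```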

Once I do that, the tests are back to green.

I can now inline these methods as I don’t need to override them anymore.

With the same steps, I get rid of the remaining overridden methods, removing the need for TestableClient; I can now simply use Client.

The final TestClient now looks much better.

Summary

There is still some work to be done in Client, but we’ve sure come a long way.

We’ve learnt how to cover a piece of legacy code with tests and how this process allows us to incrementally understand the code.

We’ve also explored how to use Dependency Injection to remove coupling and improve the design of code, in a safe manner and by using “baby” steps.

You can take a look through these steps and have a go yourself (you can find the project on GitHub). There isn’t one fixed way of achieving this, but I believe that the overall aim of covering legacy code with tests before attempting to extend it is the way to go.


Discussion

2 responses to ‘Testing and Refactoring Legacy Code’

  1. s/brake/break

    other than that good stuff. I particularly like the TDD-style minimal changes approach. I like the practice of ‘make changes that can be undone in one step’ between test runs.

  2. Thanks Lawrence for your comment, I fixed the typo. It’s true, when you refactor sometimes the direction you are taking is not always clear so it is easier to go for baby steps. If you make a change that brings you nowhere it is very easy to rollback. Also using git to create savepoints is very helpful.
