Colliding Objects

Coding, design, and broken feedback loops

So you just spiked out some new functionality. It took a whole day and you now have a nice pile of code which is doing its job, with absolutely no tests. Obviously you want to add tests, but then you find yourself in one of these two situations.

Situation 1: Untestable Code.

It is often the case that your code is untestable, that is: there are no means for you to exercise the pieces of functionality that you want to test and/or there are no means for you to inspect their outputs or side effects. This happens because in a spiking situation you care mostly (solely?) about getting the code to do whatever it is you want it to do. Hence, you do not write tests, so you never have to deal with the testability obstacles that plague your code (the exact degree of which may vary). If you try to refactor the code into a more testing-friendly shape, you soon realize that your options are limited: as there are no tests, there is no way for you to determine whether a refactoring step was indeed behavior-preserving.

In other words:

To add tests you need to refactor. When there are no tests you can't refactor.
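To make the catch-22 concrete, here is a hypothetical sketch of what "untestable" versus "testable" shapes look like (the names RealDeployer, promoteHardwired, and promote are all invented for illustration; they are not code from this post's example):

```javascript
// Hard to test: the collaborator is constructed internally, so a test
// cannot replace it, and the only observable effect is a side effect.
function promoteHardwired(buildId) {
  var deployer = new RealDeployer(); // hard-coded dependency
  deployer.deploy(buildId);          // nothing returned to inspect
}

// Testable variant: the collaborator is passed in and the outcome is returned.
function promote(buildId, deployer) {
  deployer.init();
  return deployer.deploy(buildId);
}

// A test can now supply a stub and inspect the result directly.
var stub = {
  init: function() {},
  deploy: function(id) { return 'deployed:' + id; }
};
console.log(promote(42, stub)); // logs "deployed:42"
```

The trouble, of course, is that moving from the first shape to the second is itself a refactoring, which you cannot safely do without tests.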

There are some techniques to cope with that, but they deserve their own post. Let us focus on the second case.

Situation 2: Reverse Engineering.

You somehow managed to write testable code but, still, you have not written any tests yet. You now have to cope with this: in order to have meaningful tests you need to reverse engineer meaningful inputs from the code. These "inputs" can be anything from primitive values, all the way to JSON or XML blobs, to some smart combination of domain and library objects. These inputs can be passed from the test to the production code either directly (usually via parameter passing) or indirectly, through the interaction of the production code with stubs/mocks created by your test. Either way, you want these inputs to be meaningful in the sense that they need to make execution pass through the code paths that your test cares about.
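To make "meaningful" concrete, here is a tiny hypothetical sketch (the classify function and its status field are invented for illustration):

```javascript
// The input steers which branch executes.
function classify(response) {
  if (response.status >= 500) return 'server-error';
  if (response.status >= 400) return 'client-error';
  return 'ok';
}

// To test the 'server-error' path we must reverse engineer an input with
// status >= 500; a generic input such as {} would silently take the 'ok'
// branch, exercising a path the test does not care about.
console.log(classify({ status: 503 })); // logs "server-error"
```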

Here is the technique that I usually use for reverse engineering inputs. I call it Game of Stubs.

  1. Start from a simple test that has no assertions.
  2. Stub all collaborators. This will isolate the code and prevent test runs from affecting the real world. The stubs should initially be the most degenerate value that could possibly work: a null or an empty object ({}) is a good starting point.
  3. Run your test. It will usually fail with an exception because your stubs, being degenerate, do not provide the services they are expected to provide.
  4. Examine the failure message. Add the missing bit to the stub.
  5. Repeat until execution passes through the path you are interested in.
  6. Add assertions/expectations as needed.
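The loop above can be sketched in miniature (runJob and WorkerStub are hypothetical stand-ins, not the promoter code from this post's example):

```javascript
// Suppose this is the code under test, with Worker as its collaborator.
function runJob(Worker) {
  var w = new Worker();
  w.init();
  return w.process('payload');
}

// Iteration 1: the degenerate stub {} fails (it is not a constructor).
// Iteration 2: an empty function fails with "w.init is not a function".
// Iteration 3: adding init fails with "w.process is not a function".
// Iteration 4: the minimal stub that lets execution complete the path:
function WorkerStub() {
  this.init = function() {};
  this.process = function(input) { return 'processed ' + input; };
}

console.log(runJob(WorkerStub)); // logs "processed payload"
```

Each iteration is driven purely by the previous failure message; no reading of the production code is required.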

Here's an example.

We start with an assertion-free test and we stub all collaborators. In node.js I use the rewire module, which provides a way for tests to alter the dependencies of production code:

var rewire = require('rewire');
var promoter = rewire('../acceptance/promoter.js');

promoter.__set__('Deployer', {});

describe('promoter', function() {
  it('does something', function(done) {
    promoter('a', 'b');
    done();
  });
});

When I run this test I get an exception-induced failure:

1) promoter does something
   TypeError: object is not a function
  at main (/home/imaman/workspace/green-site/acceptance/promoter.js:77:18)
  at null.<anonymous> (/home/imaman/workspace/green-site/spec/promoter.spec.js:12:5)
  at null.<anonymous> (/home/imaman/workspace/green-site/node_modules/jasmine-node/lib/jasmine-node/async-callback.js:45:37)

Here, we failed with an "object is not a function" error. Thus, we transform the Deployer collaborator into a function:

var rewire = require('rewire');
var promoter = rewire('../acceptance/promoter.js');

function DeployerStub() {}

promoter.__set__('Deployer', DeployerStub);

We now repeat the process. Running the test again, we get: TypeError: Object #<DeployerStub> has no method 'init', so we add an init() function:

function DeployerStub() {
  this.init = function() {};
}

promoter.__set__('Deployer', DeployerStub);

And so on....

This may take a while, but you can do it almost on auto-pilot: instead of digging into the production code you just look at the failure message and add whatever is missing there. Eventually, you will get a nice set of minimal stubs, which is exactly what is needed for a good unit test.


Here's a sequence of commits, depicting a Game of Stubs I recently played: