Old Age Code

Is your code ready for retirement? Is it suffering from the diseases of old age? Do you have code you can’t even imagine retiring? It’s just too critical? Too pervasive? Too legacy?

Jon & The Widgets

Conveyor Belt – thanks to https://www.flickr.com/photos/qchristopher

Jon’s first job out of school was in the local widget factory, WidgetCo. Jon was young and enthusiastic, and quickly took to the job of making widgets. The company was pleased with Jon and he grew in experience, learning more about making widgets and taking on ever more responsibility, until eventually Jon was responsible for all widget production.

After a couple of years one of WidgetCo’s biggest customers started asking about a new type of square widget. WidgetCo had only made circular widgets before, but with Jon’s expertise they thought they could take on this new market. The design team got started, working closely with Jon to design a new type of square widget for him to produce. It was a great success: WidgetCo were first to market with a square widget and sales went through the roof.

Unfortunately, the new, more complex production pipeline was putting a lot of pressure on the packing team. With different shapes of widget and different options all needing to be sorted and packed properly, mistakes were happening and orders were being returned. Management needed a solution and turned to Jon. The team realised that if Jon knew a bit more about how orders were going to be packed, he could organise the production line better, ensuring the right number of each shape of widget, with the right options, was being made at the right time. This made the job much easier for the packers and customer satisfaction jumped.

Sneeze – thanks to https://www.flickr.com/photos/foshydog

A few years down the line Jon was a key part of the company. He was involved in all stages of widget production, from the design of new widgets and the tools to manufacture them through to production and packaging. But one day Jon got sick. He came down with a virus and was out for a couple of days: the company stopped dead. Suddenly management realised how critical Jon was to their operations – without Jon they were literally stuck. Before Jon was even back up to speed, management were already making plans for the future.

Shortly after, the sales team had a lead that needed a new hexagonal widget. Management knew this was a golden opportunity to try to remove some of their reliance on Jon. While Jon was involved in the initial design, the team commissioned a new production line and hired some new, inexperienced staff to run it. Unfortunately, hexagonal widgets were vastly more complex than the square ones and, without Jon’s experience, the new production line struggled. Quality was too variable, mistakes were being made and production was much too slow. The team were confident they could improve over time, but management were unhappy. Meanwhile, Jon was still churning out his regular widgets, same as he always had.

But the packing team were in trouble again: with two uncoordinated production lines, at times they were inundated with widgets and at other times they were idle. Reluctantly, management agreed that the only solution was for Jon to take responsibility for coordinating both production lines. The experiment to remove their reliance on Jon had failed.

The Deckhand – thanks to https://www.flickr.com/photos/neilmoralee

A few years later still, the new production line had settled down; it never quite reached the fluidity of the original but sales were OK. Then a new widget company appeared, offering a new type of octagonal widget. WidgetCo desperately needed to catch up. The design team worked with Jon but, with his workload coordinating two production lines, he was always busy – so the designers were always waiting on him for feedback. The truth was: Jon was getting old. His eyes weren’t what they used to be, and the arthritis in his fingers made working the prototypes for the incredibly complex new widgets difficult. Delays piled up and management got ever more unhappy.

So what should management do?

Human vs Machine

When we cast a human in the central role in this story it sounds ridiculous. We can’t imagine a company being so reliant on one frail human being that it’s brought to its knees by an illness. But read the story as though Jon is a software system and suddenly it seems totally reasonable. Or if not reasonable, totally familiar. Throughout the software world we see vast edifices of legacy software, kept alive way past their best because nobody has the appetite to replace them. They become too ingrained, too critical: too big to fail.

For all its artificial construction, software is not so different from a living organism. A large code base has a level of complexity comparable to an organism’s – too complex for any single human being to comprehend in complete detail. There will be outcomes and responses that can’t be explained completely, that require detailed research to understand the pathway that leads from stimulus to response.

And yet we treat software as though it were immortal, as though once written it will carry on working forever. The reality is that within a few years software becomes less nimble and harder to change. Under the growing weight of changes of direction and focus it becomes slower and more bloated, each generation piling on the pounds. Somehow no development team has mastered turning back time, restoring a creaking, old-age project to the glorious flush of youth where everything is possible and nothing takes any time at all.

It’s time to accept that software needs to be allowed to retire. Look around the code you maintain: what daren’t you retire? That’s where you should start. Plan to retire it soon, because if it’s bad now it is only getting worse. As the adage goes: the best time to start fixing this was five years ago; the second best time is now.

Sad dog – thanks to https://www.flickr.com/photos/ewwhite

After five years all software is a bit creaky, not so quick to change as it once was. After ten years it’s well into legacy: standard approaches have moved on, tools have improved, and decade-old software just feels dated to work with. After twenty years it really should be allowed to retire already.

Software ages badly, so you need to plan for it from the beginning. From day one, start thinking about how you’re going to replace this system. A monolith will always be impossible to replace, so constantly think about breaking out separate components that could be retired independently. As soon as code has got bigger than you can throw away, it’s too late. Like a black hole, a monolith sucks in functionality; once you’re into runaway growth, your monolith will consume everything in its path. So keep things small and separate.

What about the legacy you have today? Just start. Somewhere. Anywhere. The code isn’t getting any younger.

Tooling in the .net World

As a refugee Java developer first washing up on the shores of .net land, I was pretty surprised at the poor state of the tools and libraries that seemed to be in common use. I couldn’t believe this shanty town was the state of the art; but with a spring in my step I marched inland. A few years down the road, I can honestly say it really isn’t that great – there are some high points, but there are a lot of low points. I hadn’t realised it at the time, but as a Java developer I’d been incredibly spoiled by a vibrant open source community producing loads of great software. In the .net world there seem to be two ways of solving any problem:

  1. The Microsoft way
  2. The other way*

*Note: may not work, probably costs money.

IDE

The obvious place to start is the IDE. I’m sorry, but Visual Studio just isn’t good enough. Compared to Eclipse and IntelliJ for day-to-day development, Visual Studio is a poor imitation. Worse, until recently it was expensive for a lone developer. While there were community editions of Visual Studio, these were even more pared down than the unusable professional edition. Frankly, if you can’t run ReSharper, it doesn’t count as an IDE.

I hear there are actually people out there who use Visual Studio without ReSharper. What are these people? Sadists? What do they do? I hope it isn’t writing software. Perhaps this explains the poor state of so much software; with tools this bad – is it any wonder?

But finally Microsoft have seen sense and recently the free version of Visual Studio supports plugins – which means you can use ReSharper. You still have to pay for it, but with their recent licensing changes I don’t feel quite so bad handing over a few quid a month renting ReSharper before I commit to buying an outright license. Obviously the situation for companies is different – it’s much easier for a company to justify spending the money on Visual Studio licenses and ReSharper licenses.

With ReSharper Visual Studio is at least closer to Eclipse or IntelliJ. It still falls short, but there is clearly only so much lipstick that JetBrains can put on that particular pig. I do however have great hope for Project Rider, basically IntelliJ for .net.

Restful Services

A few years back another Java refugee and I started trying to write a RESTful, CQRS-style web service in C#. We’d done the same in a previous company on the Java stack and expected our choices to be similarly varied. But instead of a vast array of approaches – from basic HTTP listeners to servlet containers to full-blown app servers – we narrowed the field down to two choices:

  1. Microsoft’s WCF
  2. ServiceStack

My fellow developer played with WCF and decided he couldn’t make it easily fit the RESTful, CQRS style he had in mind. After playing with ServiceStack we found it could. But then began a long, tortuous process of finding all the things ServiceStack hadn’t quite got right; all the things we didn’t quite agree with. Now, this is not entirely uncommon in any technology selection process. But we’d already quickly narrowed the field to two, and we were now committed to a technology that at every turn was causing us new, unanticipated problems (most annoyingly, problems that other dev teams weren’t having with vanilla WCF services!).

We joked (not really joking) that it would be simpler to write our own service layer on top of the basic HTTP support in .net. In hindsight, it probably would have been. But really, we’d paid the price for having the temerity to step off the One True Microsoft Way.

Worse, what had started as an open source project became a paid product during the development of our service – which meant that our brand new web service was immediately legacy, as the service layer it was built on was no longer officially supported.

Dependency Management

The thing I found most surprising on arrival in .net land was that people were checking all their third party binaries into version control like it was the 1990s! Java developers might complain that Maven is nothing more than a DSL for downloading the internet. But you know what, it’s bloody good at downloading the internet. Instead, I had the internet checked in to version control.

However, salvation was at hand with NuGet. Except, NuGet really is a bit crap. NuGet doesn’t so much manage my dependencies as break them. All the damned time. Should we restrict all versions of a library (say log4net) to one version across the solution? Nah, let’s have a few variations. Oh, but nuget, now I get random runtime errors because of method signature mis-matches. But it doesn’t make the build fail? Brilliant, thank you, nuget.

So at my current place we’ve written a pre-build script to check that nuget hasn’t screwed the dependencies up. This pre-build check fails more often than I would like to believe.
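
For the curious, here is a minimal sketch of the kind of check such a script can do – this is not our actual script; it simply walks every packages.config under the solution and returns a non-zero exit code (failing the build) if any package is referenced at more than one version:

// Sketch only: report packages referenced at more than one version across the solution.
// Assumes classic packages.config files; adapt for other project styles as needed.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Xml.Linq;

class NuGetVersionCheck
{
    static int Main(string[] args)
    {
        var root = args.Length > 0 ? args[0] : ".";
        var versionsByPackage = new Dictionary<string, HashSet<string>>();

        foreach (var file in Directory.EnumerateFiles(root, "packages.config", SearchOption.AllDirectories))
        {
            foreach (var package in XDocument.Load(file).Descendants("package"))
            {
                var id = (string)package.Attribute("id");
                var version = (string)package.Attribute("version");
                if (id == null || version == null)
                    continue;

                HashSet<string> versions;
                if (!versionsByPackage.TryGetValue(id, out versions))
                {
                    versions = new HashSet<string>();
                    versionsByPackage[id] = versions;
                }
                versions.Add(version);
            }
        }

        var conflicts = versionsByPackage.Where(p => p.Value.Count > 1).ToList();
        foreach (var conflict in conflicts)
            Console.Error.WriteLine("{0}: {1}", conflict.Key, string.Join(", ", conflict.Value));

        return conflicts.Any() ? 1 : 0;   // non-zero exit code fails the build
    }
}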

So managing a coherent set of dependencies isn’t the job of the dependency tool – so what does it do? It downloads one file at a time from the internet. Well done. I think wget can do that too, and it’s a damned sight faster than the NuGet PowerShell console. NuGet: it might break your builds, but at least it’s slow.

The Microsoft Way

And then I find things that blow me away. At my current place we’ve written some add-ins for Excel so that people who live and die by Excel can interact with our software. This is pretty cool: adding buttons, menus and ribbons into Excel, all integrated with my back-end services.

In my life as a Java developer I could never even have imagined attempting this. The hoops to jump through would have been far too numerous. But in Visual Studio? Create a new, specific type of solution, hit F5, Excel opens. Set a breakpoint, Excel stops. Oh my God – this is an integrated development environment.

Obviously our data is all stored in Microsoft SQL Server. This also has brilliant integration with Visual Studio. For example, we experimented with creating a .net assembly to read some of the binary data we’re storing in the DB. This way we could run efficient queries on complex data types directly in the DB. The dev process for this? Similarly awesome: deploy to the DB and run directly from within Visual Studio. Holy integrated dev cycle, Batman!
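
To give a flavour, here is a minimal sketch of a SQL CLR scalar function of the sort we experimented with – the class, method and binary format are made up for illustration, not our actual code:

// Sketch: once the assembly is deployed to SQL Server, this method becomes callable from T-SQL.
using System;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public class BinaryWidgetData
{
    [SqlFunction(IsDeterministic = true, DataAccess = DataAccessKind.None)]
    public static SqlInt32 ItemCount(SqlBytes payload)
    {
        if (payload.IsNull)
            return SqlInt32.Null;

        // Hypothetical format: the first four bytes are a little-endian item count.
        return new SqlInt32(BitConverter.ToInt32(payload.Value, 0));
    }
}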

When there is a Microsoft way, this is why it is so compelling. Whatever they do will be brilliantly integrated with everything else they do. It might not have the flexibility you want, it might not have all the features you want. But it will be spectacularly well integrated with everything else you’re already using.

Why?

Why does it have to be this way? C# is a really awesome language: well designed, with a lot of expressive power. But the open source ecosystem around it is barren. If the Microsoft Way doesn’t fit the bill, you are often completely stuck.

I think it comes down to the history of Visual Studio being so expensive. Even as a C# developer by day, I am not spending the thick end of £1,000 to get a matching dev environment at home, so I can play. Even if I had a solid idea of an open source project to start, I’m not going to invest a thousand quid just to see.

But finally Microsoft seem to be catching on to the open source thing. Free versions of Visual Studio and projects like CoreCLR can only help. But the Java ecosystem has a decade head start and this ultimately creates a network effect: it’s hard to write good open source software for .net because there’s so little good open source tooling for .net.

Rich domain objects with DivineInject

DivineInject is a .net dependency injection framework, designed to be simple to use and easy to understand.

You can find the source for DivineInject on github. In this second part of the series we’ll look at creating rich domain objects; the first part covers getting started with DivineInject.

Simple Domain Objects

As an example, imagine I have a simple shopping basket in my application. The shopping basket is encapsulated by a Basket object, for which the interface looks like this:

interface IBasket
{
    IList<IBasketItem> GetBasketContents();
    void AddItemToBasket(IBasketItem item);
}

The contents of the basket are actually backed by a service to provide persistence:

interface IBasketService
{
    IList<IBasketItem> GetBasketContents(Guid basketId);
    void AddItemToBasket(Guid basketId, IBasketItem item);
}

I can create a very simple implementation of my Basket:

class Basket : IBasket
{
    private readonly IBasketService _basketService;
    private readonly Guid _basketId = Guid.NewGuid();

    public Basket(IBasketService basketService)
    {
        _basketService = basketService;
    }

    public IList<IBasketItem> GetBasketContents()
    {
        return _basketService.GetBasketContents(_basketId);
    }

    public void AddItemToBasket(IBasketItem item)
    {
        _basketService.AddItemToBasket(_basketId, item);
    }
}

Since I’m using dependency injection, IBasketService sounds like a dependency – so how do I go about creating an instance of Basket? I don’t want to create it myself; I need the DI framework to create it for me, passing in the dependencies.

I want to do something with how the Basket is created, so let’s start with a simple interface for creating baskets:

interface IBasketFactory
{
    IBasket Create();
}

When I’m creating a basket I don’t care about IBasketService or other dependencies; calling code just wants to be able to create a new, empty basket on demand. How would we implement this interface? Well, I could do the following – although I shouldn’t.

class BadBasketFactory : IBasketFactory
{
    public IBasket Create()
    {
        // DON'T DO THIS - just an example
        return new Basket(
            DivineInjector.Current.Get<IBasketService>());
    }
}

Now I’d never suggest actually doing this, explicitly calling the dependency injector. The last thing I want from my DI framework is to have references to it smeared all over the application. However, what this class does is basically what I want to happen.

DivineInject however can generate a class like this for you; this is configured at the same time you define the rest of your bindings:

DivineInjector.Current
    .Bind<IBasketFactory>().AsGeneratedFactoryFor<Basket>();

This generates an IBasketFactory implementation which can create new IBasket implementations on demand (they will all actually be instances of Basket), all without references to the DI framework smeared across my code. If I want to use the IBasketFactory, for example from my Session class, I declare it as a constructor arg the same as I would any other dependency:

public Session(IAuthenticationService authService, 
               IBasketFactory basketFactory)
{
    _authService = authService;
    _basketFactory = basketFactory;
}

The DI framework takes care of sorting out dependencies and I get a Session class with no references to DivineInject. When I need a new basket I just call _basketFactory.Create(). Since I have nice interfaces everywhere, everything is easy to mock so I can TDD everything.
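
As an illustration, a test for Basket needs nothing more than a fake IBasketService – NUnit and Moq are used here purely as examples of test tooling, not anything DivineInject requires:

// Because every collaborator is behind an interface, faking it is trivial.
using System;
using Moq;
using NUnit.Framework;

[TestFixture]
public class BasketTests
{
    [Test]
    public void AddingAnItemPassesItToTheBasketService()
    {
        var basketService = new Mock<IBasketService>();
        var item = new Mock<IBasketItem>().Object;
        var basket = new Basket(basketService.Object);

        basket.AddItemToBasket(item);

        // The basket should delegate to the service, using its own basket id.
        basketService.Verify(s => s.AddItemToBasket(It.IsAny<Guid>(), item));
    }
}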

Rich Domain Objects

Now what happens as my domain object becomes more complex? Say, for example, I want to be able to pass in some extra arguments to my constructor. Returning to our basket example: as well as starting a new, empty basket, isn’t there a possibility that I want to continue using an existing basket – e.g. in the case of load-balanced servers or fail-over? What do I do then?

I start by changing Basket, to allow me to pass in an existing basket id:

private readonly IBasketService _basketService;
private readonly Guid _basketId;

public Basket(IBasketService basketService)
{
    _basketService = basketService;
    _basketId = Guid.NewGuid();
}

public Basket(IBasketService basketService, Guid basketId)
{
    _basketService = basketService;
    _basketId = basketId;
}

I now have two constructors, one of which accepts a basket id. Since all Basket instances are created by an IBasketFactory, I need to change the factory interface, too:

interface IBasketFactory
{
    IBasket Create();
    IBasket UseExisting(Guid id);
}

I now have a new method on my IBasketFactory; if I were hand-coding the factory class, I’d expect this second method to call the second constructor, passing in the basket id – something like the sketch below.
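
Purely for illustration, a hand-written equivalent might look roughly like this – you never actually write this class, since DivineInject generates the equivalent for you:

class HandWrittenBasketFactory : IBasketFactory
{
    // Illustrative only: roughly what the generated factory does.
    public IBasket Create()
    {
        return new Basket(
            DivineInjector.Current.Get<IBasketService>());
    }

    public IBasket UseExisting(Guid id)
    {
        return new Basket(
            DivineInjector.Current.Get<IBasketService>(), id);
    }
}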

What do we need to tell DivineInject to make it generate this more complex IBasketFactory implementation? Nothing! That’s right, absolutely nothing – DivineInject will already generate a suitable IBasketFactory. Our original declaration above is still sufficient:

DivineInjector.Current
    .Bind<IBasketFactory>().AsGeneratedFactoryFor<Basket>();

This generates an IBasketFactory implementation, returning a Basket instance for each method it finds on the interface. Since one of these methods takes a Guid, it tries to find a matching constructor which also takes a Guid, plus any dependencies it knows about. DivineInject can automatically wire up the right factory method to the right constructor, using the arguments it finds in each. Now, when a session wants to re-use an existing basket it just calls:

_basketFactory.UseExisting(existingBasketId)

This creates a new Basket instance, with dependencies wired up, passing in the basket id. Everything is still using interfaces so all your collaborations can be unit tested. Behind the scenes DivineInject generates the IL code to implement your factory interfaces, leaving you free to worry about your design.

By following this pattern we can create rich domain objects that include both state and dependencies: it becomes possible to create stateful objects that have behaviours (methods) that make sense in the domain. Successfully modelling your domain is critical to creating code that’s easy to understand and easy to change. DivineInject helps you model your domain better.

Cutting Corners

The pressure to deliver yesterday is strong. If it’s not customers nagging you, it’s project managers breathing down your neck or your own self-doubt that this should have been simpler: the desire to get the task done quicker can often be irresistible. How do you strike the right balance between cutting corners and polishing the turd?

While working through a feature I maintain a “navigator pad” of things I want to come back to. These are refactorings I’ve spotted, tests that need cleaning up, design smells to look at or just plain questions I’m curious to know the answer to (can foo ever actually be null? is this method really used?) This list ebbs and flows as I’m working through a feature: some days I seem to do nothing but add new things to it, other days I manage to cross half the list off as some much-needed refactoring becomes critical to complete the next change. But the one constant throughout a feature is the nav pad.

Recently I was nearing the end of a feature and my nav pad didn’t seem to be getting any shorter. I’d spent a good bit of time refactoring things, but new problems kept appearing – it didn’t seem like I’d ever be “Done”. The feature was way behind schedule, my self-doubt was growing: I’m trying to do a good job, I don’t want this to take any longer but I keep spotting things I got wrong before or simply missed. Suddenly one morning, within the space of a couple of hours, I crossed 20 items off the nav pad, sat back and realised: it’s empty! I was Done.

The next thing that struck me was what a strange occurrence this was: I couldn’t remember the last time I’d actually crossed everything off the nav pad. There would always be some last refactorings on the list that on balance could wait until another time; some tidy up that could wait until another day; some question that I no longer cared to know the answer to. But for the first time in a long time, I’d crossed everything off!

Then the doubt sets in: have I over-engineered this? Could I have been done quicker? The pressure to cut corners is really strong: we’re always pushed to be done faster, to do the absolute minimum we can get away with. Yet I know what needs to be done, I know what the problems are with this code: I’ve written them all down in the nav pad. If I don’t fix them now, then when?

A pattern I see all too frequently when I come up against a design smell: I can see the design is wrong, the tests are a mess, the production code is a mess; there’s definitely a better way, I just can’t see it at the minute. I park the refactoring on the nav pad. I come back to it later, after ticking off a few more parts of the feature, but I still can’t see a way to resolve the design smell. I spend a couple of hours refactoring back and forth – in the end I declare bankruptcy and raise an issue in the issue tracker. If I’m lucky I’ll pick up the issue again in a couple of months, have a half-hearted look at it, but realise I can’t remember what I was really thinking at the time and close the issue. More likely, after a few months with nobody picking up the issue I’ll quietly close it. My code guilt has been neatly dealt with. But the crap code still remains.

The pressure to cut corners is incredibly strong, and it is strongest when you’re facing a particularly difficult design change. You’ve identified a problem in the design, probably made obvious by other changes you’ve made. You’re struggling to correct it, which means it isn’t easy to resolve; but it’s obviously a problem, because you’ve already spent time trying to resolve it. That means the next time you come through here you’re going to spot the same problem and hate the you of today for not fixing it. And yet, this moment right now is the clearest you’ve ever understood the problem. If you give up now, you’ll have to reload into memory all the context you’ve got right now – what makes you think you’ll be in less of a rush in six months’ time? That you’ll have time to re-learn this code? Time to do what should have been done today?

The pressure to be done yesterday is strong, but today is the best you’ve ever understood this code: so use that understanding to leave it better than you found it. If you’ve removed all the sharp edges you saw on your way through then at least you’re leaving the code better than you found it. Tomorrow when you pass this way, you’ll pass through a little quicker, with fewer sharp edges to distract you. But today? Today you have code gardening to do.

Getting started with DivineInject

DivineInject is a .net dependency injection framework, designed to be simple to use and easy to understand.

You can find the source for DivineInject on github.

Why another DI framework?

Because dependency injection is important – but done wrong it can do more harm than good. DivineInject is opinionated about the right way to use dependency injection:

  • Constructor injection or death
    Setter injection is bad for your health, so just say no
  • Dependencies are singletons
    Dependencies are external to your application – your DI framework doesn’t need to know about users or sessions or threads.
  • Domain objects can be rich, too
    Your domain model doesn’t have to be anemic

Constructor Injection

Setter and method injection are much harder to get right – so DivineInject simply doesn’t support them. If you can’t implement your dependencies as constructor arguments, then maybe you should refactor the dependency so you can.
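
For example (class names made up for the illustration, with the IOrdersService used elsewhere in this post standing in as the dependency):

// Setter injection: the object can exist half-initialised, and nothing
// forces callers (or tests) to provide the dependency.
class OrderHistoryViewModelWithSetter
{
    public IOrdersService OrdersService { get; set; }
}

// Constructor injection: the dependency is explicit and the object can
// never be constructed without it.
class OrderHistoryViewModel
{
    private readonly IOrdersService _ordersService;

    public OrderHistoryViewModel(IOrdersService ordersService)
    {
        _ordersService = ordersService;
    }
}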

Singleton Dependencies

Dependencies are external to your application – things it depends on, like databases and web services; these are generally used across your application. Hint: if a dependency is only used in one or two places, it isn’t an application-wide dependency.

Things like users and sessions are domain concepts in your domain, not in the domain of dependency injectors. All dependency injection frameworks get wrapped up in different scopes, which makes the frameworks harder to use. DivineInject simply doesn’t support them — if you need something user-scoped or session-scoped, then implement the logic yourself. It isn’t hard, and if you ever want to understand the lifecycle of your objects it’s in your code, not mine — which will make reasoning about your code or debugging it a million times easier.
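
For instance, “session scope” can be nothing more exotic than an ordinary object owning the thing it creates – a minimal sketch reusing the Session and IBasketFactory idea from the rich domain objects example above (the lazy creation is an assumption for the illustration):

// Sketch: the Session owns its basket and creates it on first use via the
// injected factory – no DI-framework scopes involved, just code you can read.
public class Session
{
    private readonly IBasketFactory _basketFactory;
    private IBasket _basket;

    public Session(IBasketFactory basketFactory)
    {
        _basketFactory = basketFactory;
    }

    public IBasket Basket
    {
        get
        {
            if (_basket == null)
                _basket = _basketFactory.Create();
            return _basket;
        }
    }
}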

Rich Domain Objects

DivineInject borrows an idea from Google Guice – with Guice it is called “assisted injection”, in DivineInject we call it generated factory injection. The idea is the same — providing a simple way to create objects with constructors which accept runtime arguments as well as dependencies to inject. This allows you to create rich, stateful domain objects which also have dependencies.

Getting Started

So how do you get started with DivineInject? I’ll assume you’ve not been living under a rock and already know what dependency injection is. In which case basic usage of DivineInject boils down to three steps:

  • Add the dependency
  • Configure bindings
  • Create your root object

Add the Dependency

DivineInject is available as a NuGet package:

Install-Package DivineInject

Configure Bindings

When DivineInject creates a new instance of a class it calls a constructor; each of the constructor arguments is a dependency of the class – something external to the class, for which an implementation must be provided. But how does DivineInject know which value to pass for each dependency? This is controlled by the bindings.

Your bindings must be configured near the start of your application — e.g. in the main method or global.asax.

There are basically two ways to configure bindings with DivineInject.

1) bind an interface to a concrete type. DivineInject will pass the same (singleton) instance of the given concrete type whenever it encounters a constructor argument of the interface type.

DivineInjector.Current
	.Bind<IOrdersService>().To<OrdersService>();

2) bind an interface to a specific instance. DivineInject will pass the given instance whenever it encounters a constructor argument of the interface type.

var myOrdersService = new OrdersService(...);
DivineInjector.Current
	.Bind<IOrdersService>().ToInstance(myOrdersService);

Create Your Root Object

DivineInject allows you to create a tree of objects: each object holds references to its dependencies, which in turn reference their own dependencies. This tree is created starting from the root object – e.g. in a WPF application the root would be the outermost ViewModel; in a WCF application it would be the service class.

The root object is created by calling DivineInject — any arguments the root object constructor requires are taken from the bindings. This should be the only time you explicitly call DivineInject to create objects.

There are two ways of creating the root object:

1) by explicit type:

private MainWindowViewModel CreateViewModel()
{
    return DivineInjector.Current.Get<MainWindowViewModel>();
}

2) by passing the type as an argument:

public object GetInstance(InstanceContext instanceContext, 
                          Message message)
{
    return DivineInjector.Current.Get(_instanceType);
}

At this point you have a very simple set of dependencies configured, which DivineInject can wire up. E.g.

public class MainWindowViewModel
{
    public MainWindowViewModel(IOrdersService ordersService)
    {
        ...
    }
}

I can define classes which accept an instance of an interface as a constructor argument; when DivineInject instantiates such a class it will pass in the right implementation of the interface, the one configured by the bindings.
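
Putting the pieces together, a minimal composition root might look something like this – a console-style Main is used purely for illustration, with the OrdersService and MainWindowViewModel from the examples above:

public static class Program
{
    public static void Main()
    {
        // Configure bindings once, at startup.
        DivineInjector.Current
            .Bind<IOrdersService>().To<OrdersService>();

        // Create the root object – the only explicit call to the injector.
        var viewModel = DivineInjector.Current.Get<MainWindowViewModel>();

        // ... hand the root object over to the UI framework / host from here.
    }
}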

In the second part of this series, we’ll look at how we use DivineInject to create classes that aren’t singletons.