Friction in Software

Friction can be a very powerful force when building software. The things that are made easier or harder can dramatically influence how we work. I’d like to discuss three areas where I’ve seen friction at work: dependency injection, code reviews and technology selection.

DI Frameworks

A few years ago a colleague and I discussed this and came to the conclusion that the reason most DI frameworks suck (I’m looking in particular at you, Spring) is that they make adding new dependencies so damned easy! There’s absolutely no friction. Maybe a little XML (shudder) or just a tiny little attribute. It’s so easy!

So when we started a new, greenfield project, we decided to put our theory to the test and introduced just a little bit of friction to dependency injection. I’ve written before about the basic scheme we adopted and the AOP endpoint it reached. But the end result was, I believe, very successful. After a couple of years of development we still had only on the order of 10-20 dependencies. The friction we’d introduced was light (add a couple of lines to a single class), but it was sufficient to act as a constant reminder not to just add a new dependency because it was easy.
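
To make the scheme concrete, here’s a minimal sketch of the general idea – hypothetical names, not the actual code from that project. All the wiring lives in one hand-written class, so adding a dependency means visibly adding a couple of lines:

    // Hand-rolled wiring: every dependency is registered here, by hand.
    interface OrderRepository { /* ... */ }

    class JdbcOrderRepository implements OrderRepository { }

    class OrderService {
        private final OrderRepository repository;
        OrderService(OrderRepository repository) { this.repository = repository; }
    }

    // The single composition root -- the "couple of lines" of friction.
    class ApplicationWiring {
        private final OrderRepository orderRepository = new JdbcOrderRepository();

        OrderService orderService() {
            // A new collaborator for OrderService forces a visible edit here.
            return new OrderService(orderRepository);
        }
    }

Compare that to Spring, where a lone @Autowired annotation makes the new dependency appear with no ceremony at all.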

Code Reviews

I was reminded of this recently when discussing code reviews. I have mixed feelings about code reviews: I’ve seen them work well, and it is better to have code reviews than not to have them; but it’s better still to pair program. But not all teams, not all developers, like pair programming – so code reviews exist. The trouble with code reviews is they can provide a form of friction.

If you & I are pairing on a piece of work, we will discuss the various trade-offs as we go: do we spend time on this, do we refactor that, etc. The constant judgements about what warrants attention and what can be left for another day are verbalised and agreed. In general I find the code written while pairing is high in quality but also remains tightly focused on the task. The long rambling refactors I’ve been guilty of in the past disappear, and the lazy “quick hacks” we all try to explain away to ourselves aren’t so easy to gloss over when pairing.

But code reviews exist outside of this dynamic. In the cold light of the following day, someone uninvolved reviews your work and passes judgement on whether they think it’s up to scratch. It’s easy to see why this becomes combative: rather than being collaborative, it can be seen as judgement being passed not only on the code but on the author, too.

When reviewing code it is easy to set a very high bar, higher than you might set for yourself and higher than you might have agreed when pairing. Now, does this mean the comments aren’t valid? Absolutely not! You’re right, there is a test case missing here; although my change is unrelated, I should have added it. And you’re right, this code is a mess; it was a mess before I arrived to make my simple edit, but I should have tidied it up. Everyone should practice code gardening.

These are all perfectly valid comments. But they create a form of friction. When I worked on a team that relied on these code reviews, you knew you were going to get comments: so you kept the commit small, to minimize the diff. A small diff minimizes the number of extra tests you could be asked to write. A small diff keeps most of the existing mess out of the review, so you won’t be asked to start refactoring.

Now, this seems dysfunctional: we’re deliberately optimizing for a smooth passage through the review process, instead of optimizing for code quality. Worse than this, though, was what never happened: refactoring commits. Looking back, I realise that the only code reviews I saw (as both reviewer and reviewee) were for feature changes. There were never any code reviews submitted for purely technical debt reduction. Sure, there’d be some individual commits in amongst the feature changes. But never any dedicated, multi-commit sessions whose sole aim was to improve the code base. Which was a shame because, like any legacy code base, there was scope for improvement.

Compare this to teams that don’t do code reviews, where I’ve tended to see more effort put into reducing technical debt. Without fearing an endless cycle of review comments, developers are free to embark on refactoring efforts (that may or may not even work out!) – but at least they can try. Code reviews, by contrast, provide a form of friction that might actually hurt code quality in the long run.

Technology Selection

I was talking to another colleague recently who is convinced that Hibernate is still the best way to get data in and out of a relational database. I can’t really work out how to persuade people they’re wrong – surely using Hibernate is enough to persuade you? Especially in a large, legacy code base – the pain that Hibernate causes is obvious. Yet plenty of people still believe in Hibernate. There are even people that still believe in Spring. Whether or not they still believe in the tooth fairy is unclear.

But I think technology selection is another area where friction is important. When contemplating moving away from something well-known and well-used in industry, like Spring or Hibernate, there is a lot of friction. There are new technologies to learn, new approaches to understand and new risks to manage. This all adds friction, so sometimes it’s easiest just to stick with what we know. Sometimes that really is the right choice – the technology you have expertise in is the one you’ll be most productive in immediately. But there are longer-term questions too, which are much harder to answer: will the team eventually be more productive using technology X than technology Y?

Friction in software is a powerful force: we’re very lazy creatures, constantly trying to optimise. Anything that slows us down or gets in our way quickly gets side-stepped or worked around. We can use this knowledge as a tool to guide developer behaviour; but sometimes we need to be aware of how friction can change behaviours for the worse as well.

Copy & paste driven development

Software development is rife with copy & paste: all of us resort to copy and paste coding sometimes. We know we probably shouldn’t, but we do it anyway. It’s like the industry’s dirty little secret: we mainly just copy and paste code from the internet or from somewhere else in the code base, then bash it till it works.

But maybe, just maybe, the fact that we all rely on this from time to time should be telling us something?

The good

Sometimes copy & paste coding can be a good thing. I love to code golf: in tools like Eclipse, IntelliJ or Resharper there is always an optimal way to make each change, letting the tools do as much of the typing as possible. So it was with fascination that, while pairing a while ago, a colleague showed me an interesting piece of code golf I’d call “search & replace coding”.

The change was around extending an existing class with a load of new fields. We had a basic class, with a couple of sample fields, and a test that verified something simple about the class – say that it could be serialized to JSON successfully. We needed to add a boatload of new fields to the class (don’t ask). This involved five separate tasks which, at a macro level, had a lot in common:

  • Adding each field
  • Initialising each field in the constructor
  • Modifying the test to setup sample values for each field
  • Modifying the test to pass the sample value for each field to the constructor
  • Writing an assert that each field had successfully serialized and deserialized

My colleague demonstrated that we could write the list of fields once. Then, copy & paste, search & replace – we have a list of parameters for the constructor. By carefully crafting the search term and a suitable replacement you can do some limited meta-programming, taking the list of fields and transforming it into the actual lines of code you need in each instance. The list of fields replaced one way gives you the list of parameters to the constructor; with a different replacement you get the field declarations in the test; with another you get assert statements.
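
To make that concrete, suppose the new fields were firstName, lastName and dateOfBirth (invented names – the real change had far more, and assume for simplicity they’re all strings). You type the list exactly once:

    firstName
    lastName
    dateOfBirth

A search for (\w+) with the replacement private final String $1; produces the field declarations:

    private final String firstName;
    private final String lastName;
    private final String dateOfBirth;

Swap the replacement for this.$1 = $1; and you get the constructor assignments; swap it for assertEquals(expected.$1, actual.$1); and you get the asserts.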

I found this a very interesting way to write software. It definitely minimized the number of key presses required to type the code in. The long, boring list of field names only had to be typed once; after that, a carefully crafted search & replace regex did the heavy lifting for us. But it demonstrates an underlying truth: we had five separate changes to make, each parameterised by the same list of fields. If we could have actually meta-programmed this, we would have passed the list of fields to some meta-programming code which would output the desired lines of code.
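
If we could have meta-programmed it for real, the whole exercise might collapse into a throwaway sketch like this (hypothetical field names again), where the list genuinely is a parameter:

    import java.util.List;

    // Throwaway generator: feed the field list in once, print each flavour
    // of boilerplate out, ready to paste into the class and its test.
    public class FieldBoilerplate {
        public static void main(String[] args) {
            List<String> fields = List.of("firstName", "lastName", "dateOfBirth");

            fields.forEach(f -> System.out.printf("private final String %s;%n", f));
            fields.forEach(f -> System.out.printf("this.%s = %s;%n", f, f));
            fields.forEach(f ->
                System.out.printf("assertEquals(expected.%s, actual.%s);%n", f, f));
        }
    }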

While it’s very clever that we could use search & replace meta-programming for this, it feels like the tools are lacking somehow.

The bad

Everyone copies code from stackoverflow from time to time. Hopefully you do it sensibly, using it as a starting point for your own production code. Working through what you’ve just pasted in to understand it, experimenting with it, modifying it, making it fit for your purpose. Rather than just blindly copy & pasting random code from the internet. I mean, who’d just randomly run code from the internet?

Stackoverflow is great. It’s an amazing resource for programmers. While learning WPF I got the sense that as a technology it couldn’t have taken off without something like stackoverflow: the technology is so opaque, so hard to learn. It took me many, many months of copy & pasting XAML from stackoverflow until I started to really understand how it worked. Without being able to just copy & paste code from the internet, the technology would take a lot longer to learn.

But often, we’re lazy, we try it. It works. Woohoo, next problem. I’ve written before about voodoo programming, but the problem is that it’s easy to think you’ve understood what code is doing. If you didn’t actually have to invent those lines, to reason through to them – maybe you don’t understand. Maybe you only think you know what the code’s doing? Maybe there’s some horrific bug you’re not aware of yet?

The ugly

Almost every time I’ve seen TDD done at any scale, when it comes to writing a new test, the first question to ask is: “which existing test is this most like?” Yup, where can I go copy & paste. I’m so lazy, I don’t want to have to invent a whole test setup on my own. I’ll just borrow somebody else’s homework. We’ve all done it. It seems to be an unwritten rule of TDD. By the time you’re on to the third or fourth test in a given file, I guarantee the temptation to just copy & paste to make the fifth test is incredibly strong.

The trouble with this is all sorts of weird test artefacts get copied forwards. You start off with a simple test, with some simple setup. Say an empty bank account. Then to test non-zero balance you need an account with a transaction. Then to test balance summing you need an account with two transactions. Each test so far is building on the last, so you’ve just copy & pasted the previous one as a starting point. The fourth test is inserting a new transaction, so you just copy the third test (with two existing transactions, unnecessary for this test). Test five is that a withdrawal can’t go below zero, so you copy & paste the previous test and set the amount to be a large negative value. Test six is an overdraft test, so you copy & paste the previous test but change the account setup to include an overdraft so the balance can go negative. By the time you copy & paste to create test seven – which is about adding a standing order – your starting point is an account with an overdraft and two transactions, with a third transaction being added. None of this noise is necessary.
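
With an invented Account API (none of this is real code, but it’s representative), test seven might end up as the first version below, when all it ever needed was the second:

    // JUnit 4-style; Account, Overdraft, StandingOrder, Frequency are invented.
    // As copy & pasted forwards: overdraft and transactions inherited from
    // tests five and six, none of it relevant to standing orders.
    @Test
    public void addingStandingOrderSchedulesPayment() {
        Account account = new Account(new Overdraft(500));
        account.add(new Transaction(100));
        account.add(new Transaction(250));

        account.addStandingOrder(new StandingOrder(50, Frequency.MONTHLY));

        assertTrue(account.hasPendingPayments());
    }

    // The same test, started from scratch instead (replacing the version above):
    @Test
    public void addingStandingOrderSchedulesPayment() {
        Account account = new Account();
        account.addStandingOrder(new StandingOrder(50, Frequency.MONTHLY));
        assertTrue(account.hasPendingPayments());
    }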

This might seem a ridiculous example, but I see this time and again in real code. Sometimes it’s not even me that’s done it. This noise accumulates as you read down a test file. The tests at the bottom have all sorts of weird artefacts that were only relevant to one test half way up the file. It means fixing tests and changing behaviours becomes a real problem. If I change, say, how overdrafts are defined in my example above, I might have half a dozen tests to change, only one of which even mentions overdrafts. But they’ve inherited that setup because the tests were just copy & pasted around. Not only does this make tests hard to read and understand, it makes them hard to maintain.

We all do it. It seems a pretty accepted part of how TDD happens in the wild. And yet, it clearly isn’t right. With discipline, we can keep our tests clean. Yes, when we’re all being conscientious developers we start writing our tests from scratch each time. But most of the time we’re busy or lazy or whatever.

Conclusion

What do these three things have in common, besides copy & paste? In each example we’re using copy & paste to save time. To find the most efficient path through the work we’re doing. Nobody is doing it out of malice or stupidity. Laziness? Almost certainly – but the good kind. The kind of laziness that encourages elegant solutions.

But copy & paste isn’t an elegant solution. It’s a crappy solution to a more fundamental problem: our tools are deficient. Really what we’re working around is the fact that our tools don’t let us express what we really want.

What if we could write our tests in a higher-level language? “A test with a bank account with two transactions”. Sure, there are internal & external DSLs you can use to do this. But typically the cost of setting up the DSL isn’t worth the hassle for unit tests; it would completely ruin the flow of TDD. Does that just mean we haven’t found a good way of doing it yet? Is there a way we could more fluidly express the intent of the test, filling in the gaps as we go?
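
For what it’s worth, even a lightweight internal DSL – a plain test-data builder – gets part of the way there. A sketch, reusing the invented Account API from earlier:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical test-data builder: the setup reads almost like the
    // sentence "a bank account with two transactions".
    class AccountBuilder {
        private final List<Integer> transactions = new ArrayList<>();

        static AccountBuilder anAccount() { return new AccountBuilder(); }

        AccountBuilder withTransaction(int amount) {
            transactions.add(amount);
            return this;
        }

        Account build() {
            Account account = new Account();
            transactions.forEach(t -> account.add(new Transaction(t)));
            return account;
        }
    }

    // In a test:
    Account account = AccountBuilder.anAccount()
            .withTransaction(100)
            .withTransaction(250)
            .build();

But the builder is itself more code to write and maintain – exactly the setup cost that rarely feels worth it in the middle of a TDD flow.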

Instead of copy & pasting code from the internet, could our tools get smarter? Could we take some of the amazing machine learning stuff that’s going on and apply it to software development? I tried playing with an IntelliJ plugin recently that promises just that. Unfortunately it’s pretty buggy at the minute and doesn’t really work. But the idea is incredibly attractive. I like the idea of being able to express intent instead of mindlessly typing in the nuts & bolts.

Finally, instead of doing search & replace coding, wouldn’t it be great if we could actually meta-program? If we could actually write code that would write code for us? Not just code generators, but something that can generate small sections of code. I had a very limited go at this some time ago with rescripter – but it turns out it’s very hard to write a decent meta-programming tool that anyone except the author can understand. But I think the idea still has merit: too often I find cases where I can describe the intent of my change very succinctly, but implementing it will involve far more typing than I’d like.

Never trust a passing test

One of the lessons when practising TDD is to never trust a passing test. If you haven’t seen the test fail, are you sure it can fail?

Red Green Refactor

Getting used to the red-green-refactor cycle can be difficult. It’s very natural for a developer new to TDD to immediately jump into writing the production code. Even if you’ve written the test first, the natural instinct is to keep typing until the production code is finished, too. But running the test is vital: if you don’t see the test fail, how do you know the test is valid? If you only see it pass, is it passing because of your changes or for some other reason?

For example, maybe the test itself is not correct. A mistake in the test setup could mean we’re actually exercising a different branch, one that has already been implemented. In this case, the test would already pass without writing new code. Only by running the test and seeing it unexpectedly pass can we know the test itself is wrong.

Or alternatively there could be an error in the assertions. Ever written assertTrue() instead of assertFalse() by mistake? These kinds of logic errors in tests are very easy to make, and the easiest way to defend against them is to ensure the test fails before you try to make it pass.
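
A JUnit-style illustration, with an invented Account class – the slip below produces a test that can go green without verifying anything:

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class AccountTest {
        // Intent: a brand new account should NOT be overdrawn.
        @Test
        public void newAccountIsNotOverdrawn() {
            Account account = new Account();
            assertTrue(account.isOverdrawn());   // oops -- meant assertFalse
        }
    }

If the half-finished isOverdrawn() happens to return true, this passes before you’ve written any production code – an unexpected green that should make you suspicious.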

Failing for the Right Reason

It’s not enough to see a test fail. This is another common beginner mistake with TDD: run the test, see a red bar, jump into writing production code. But is the test failing for the right reason? Or is the test failing because there’s an error in the test setup? For example, a NullReferenceException may not be a valid failure – it may suggest that you need to enhance the test setup, maybe there’s a missing collaborator. However, if you currently have a function returning null and your intention with this increment is to not return null, then maybe a NullReferenceException is a perfectly valid failure.
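
In code, the difference might look like this – a hedged sketch with an invented PaymentService (and NullPointerException standing in as the Java equivalent):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PaymentServiceTest {
        // Failing for the WRONG reason: a collaborator is missing from the
        // setup, so we never reach the behaviour under test. Fix the test.
        @Test
        public void sendsReceiptAfterPayment() {
            PaymentService service = new PaymentService(null); // forgot the sender
            service.pay(100); // blows up in the wiring, not the behaviour
        }

        // Failing for the RIGHT reason: pay() doesn't record a receipt yet,
        // and that's exactly the increment we're about to implement.
        @Test
        public void recordsReceiptAmount() {
            PaymentService service = new PaymentService(new FakeReceiptSender());
            service.pay(100);
            assertEquals(100, service.lastReceipt().amount()); // null here is a valid red
        }
    }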

This is why determining whether a test is failing for the right reason can be hard: it depends on the production code change you’re intending to make. That takes not only knowledge of the code but also enough experience of doing TDD to have an instinct for the type of change you’re intending to make with each cycle.

When Good Tests Go Bad

A tragically common occurrence is that we see the test fail, we write the production code, the test still fails. We’re pretty sure the production code is right. But we were pretty sure the test was right, too. Eventually we realise the test was wrong. What to do now? The obvious thing is to go fix the test. Woohoo! A green bar. Commit. Push.

But wait, did we just trust a passing test? After changing the test, we never actually saw the test fail. At this point, it’s vital to undo your production code changes and re-run the test. Either git stash them or comment them out. Make sure you run the modified test against the unmodified production code: that way you know the test can fail. If the test still passes, your test is still wrong.

TDD done well is a highly disciplined process. This can be hard for developers just learning it to appreciate. You’ll only internalise these steps once you’ve seen why they are vital (and not just read about it on the internets). And only by regularly practising TDD will this discipline become second nature.

Project vs product teams

One of the hardest things for companies trying to be agile is how to structure teams. Back in the bad old days, teams would form around a project. Then six months later everyone would dissipate and go onto new teams. By the time a team has formed and become effective, it is ripped apart again. You get no sense of ownership, no continuity.

Nowadays everyone knows that projects are bad; you need scrum teams instead. So a scrum team is formed, with a product owner to prioritise the work. But what often happens is that what gets prioritised onto the backlog is a project in bite-size pieces. For example, I saw one team that ran out of work to do. The backlog was empty because, except for bugs, none of the outstanding projects had been signed off. There’s that word again. Project.

Behind the scenes, a scrum team often becomes a slightly better way of delivering projects. You get the benefits of team consistency and continuity, with the added benefit that the business can carry on thinking of the work in terms of projects. The downside of this approach is that the scrum team can lack clear focus: there’s no overarching goal for the team. From sprint to sprint the focus might change as the relative importance of different projects changes. This makes it hard for the team to feel committed to a big idea, to some greater purpose. It ends up as an endless procession through the backlog.

Why does this happen? I think it comes down to money. Somebody, somewhere is watching the money. Somebody wants to know “if I spend £x here, how much am I going to make back and by when?” The idea of the project is very easy to fit into this model. The team costs £x per day. The project is estimated to take n days. It’s expected to deliver £y profit. From this we can calculate the expected return on our investment. The trouble is, most of these numbers are entirely made up. If not fundamentally unknowable.

Let’s start with the obvious one: how long is the project going to take? Really, do we still actually ask this question? Have we learnt nothing from agile? It seems not: many, many people still think about the world in terms of delivery dates and certainties. When will we learn that the best way is always to deliver a little, inspect the results, then decide whether to keep on the same path or deliver something different? You can’t have an end date with this approach – it’s not even meaningful. Keep on delivering one thing until there’s something better you could be doing, then go do that. Rinse, repeat.

What about the other question: how much profit will this project make? Well, let’s assume for now that the entire project, as originally conceived, will actually be delivered (as if this ever actually happens in software). Can you tell how much money it’s made you? Really? Independent of every other change that the organisation has made at the same time? From software to operations to marketing?

Now, sometimes you can come up with a good estimate of expected returns, but often it’s just a pipe dream. If you’re vigorously disagreeing with me: I assume you’re religiously tracking actual costs and feeding them back into future project planning? I have seen very, very few companies actually do this. If you’re not actually measuring how much you made from a project, how do you know your original estimates were any good?

So we have two made up numbers, both almost certainly unachievable in practice – but we use this to dictate the team’s priority order. I once saw a project signed off and jump to the top of the priority order because it predicted something like a 10% uplift in revenue for the company. This was a very large number for a single project and clearly ridiculous to everyone involved, but it was signed off and duly implemented. Revenue projections later that year were re-estimated downwards and downwards due to difficult market conditions. And some blatant over-estimation. And yet, this non-science is what passes for return on investment planning in all-too-many organisations.

What’s the alternative? The best teams I’ve seen have been structured around products. Give the team complete ownership of one or more products. Any and all changes to those products go via the product team. A product owner guides product direction. As an area expert, they are entrusted to decide what are the most important things to work on. They can discuss long-term direction with the team and maintain a consistent, coherent vision for where the product will evolve towards. While, inevitably, some changes are large and sufficiently inter-dependent that they become a project (if one part is delivered then it all must be), the team understands the business benefit of the solution and can evolve the implementation to meet the underlying business need, instead of trying to satisfy some arbitrary internal project deadline. This gives teams the complete freedom to inspect and adapt each iteration. With an understanding of the business priorities for their products, they can make sensible trade-offs as each iteration surfaces more information.

What about the money? It’s hard, but let’s be honest about it: return on investment is not clear with the project model of software delivery, so accept that it isn’t clear. The hard thing is working out which products are making you money and which could make more money if more was invested. The trouble is I’ve worked in teams where, honestly, the product was so profitable with so little scope for uplift that the most cost-effective thing to do would have been to fire the dev team and just keep milking the cash cow.

So how can we decide where to spend our money? I think the empirical model of agile could fit here perfectly well. Let’s assume for a minute that the amount of money you have for the delivery team as a whole is fixed – your only choice is where to put it. How much to spend on product A vs how much on product B. Can you estimate how much money each product is making for the business? How is it changing over time?

If one product is making more profit each month – if it’s a growing product – then invest more resources there to accelerate the growth. If a product is slowing down, with smaller increases in profit each month, or even with profit decreasing, then stop spending so much money on it. This naturally means your money goes where it seems to be delivering the biggest return.

The hardest thing with this is that it takes time to get the feedback: changing resource allocation could take months to show up on the bottom line. But at least we’re being honest about the impact our decisions have. Instead of trying to micro-manage delivery via projects, manage where resources are put and let the product owner manage the priority order.

Cross-functional teams

Cross-functional teams aren’t a new idea. And yet, somehow, we still don’t seem to have got the memo.

I was listening to the excellent Scott Hanselman’s podcast “Hanselminutes” last week; he had Angie Jones on to talk about automation. Among all the great advice about ensuring that automation is a first-class citizen in your development process, one thing stood out for me:

you need to get your automation engineers into your scrum team

I don’t know why, but it surprised me. Are there really companies out there up to speed enough to be doing test automation, yet so behind the times with agile that they think it’s a good idea to have a dedicated team of automation engineers, removed from the rest of the dev team?

Cross-functional teams are a pretty central idea to agile – breaking down silos and ensuring that everyone that is required to produce an increment of working software is aligned and working together. It’s certainly not a new idea, but it’s clearly an idea that we’re still struggling to absorb.

But then, looking back, I remember working for a certain large company that decided they needed to “do more test automation”. So they hired a room full of automation engineers, who sat two floors away from the dev team, in a room hidden away (we honestly didn’t know they were even there for weeks, maybe even months). This team were responsible for creating an automated test pack for the application (rather than use the one the test automation engineers in the scrum teams had been working on for the last few years). But… they weren’t even talking to the scrum teams. So they were constantly chasing to keep up. As you can imagine, hilarity ensued. I say hilarity. Arguments, really. Then anger. Eventually laughter as we realised all this effort was wasted because the scrum teams wouldn’t – in fact couldn’t – support this new automation code.

Clearly not getting the idea of cross-functional teams is an age-old problem. Compare this to a more recent client of mine – one that had a genuinely more cross-functional team. Not only did the scrum team have its own automation engineer, but the developers (actual developers, not re-branded testers) were encouraged to work on the test automation tools – to everyone’s benefit. Test tooling written to the same standards as production code, with the insight and experience of the test automation specialist. This is moving beyond cross-functional teams into cross-skilled teams. Not only is every skill set you need within the same scrum team, but each individual can have multiple skills, taking on multiple roles.

Sure, you still need specialists. But with generalizing specialists you get the best of both worlds: the experience of specialists in their area, with the flexibility and breadth of ideas that come from the whole team being able to work on whatever is required. When the entire team can swarm on any area you have a very flexible team: if we need a big push for test automation, the entire team can focus on it. Similarly, with plenty of pairing and rotation everyone on the team will see every area and every role, allowing everyone’s unique perspective to improve the product and the process.

But then, a counter-example, the same client suffered from another age-old silo: operations. I thought devops had killed this silo, but it seems not. If the scrum team can’t release an increment of software to actual users then it isn’t a fully self-contained, self-sufficient, cross-functional team. A scrum team working with a separate test automation team seems like a crazy idea; and yet, somehow, a scrum team working with a separate operations team is much more normal, much more accepted. But it’s the exact same problem: if you don’t have everyone you need in the same scrum team then you’re going to get bottlenecks. You’re going to get communication problems. You’re going to get a “them-vs-us” attitude.

Every time I’ve come up against this, the typical argument against operations staff being embedded within scrum teams is that they’re not working on “your stuff” all the time; the rest of the time they’d be busy doing other stuff, unrelated to “your team”, or they’d be bored. Well, maybe if we freed up that extra capacity we could release more often? Maybe they could be working on making it quicker, easier and safer to release more often? Maybe they could be more deeply involved with development when we’re making decisions which affect what they’re going to release and how it’s going to wake them up in the middle of the night. Maybe they could even help with other, non-production environments? The neglected little siblings of production that every company seems to struggle to pay enough attention to.

Maybe, even, over time the team evolves from having an operations specialist to having team members cross-skilled into operations. Under the watchful eye of the specialist could we, shock horror, let testers touch production? Could the BA manage a release? In some industries this is completely impossible for regulatory reasons, but in all the others it’s “impossible” for merely arbitrary reasons.

Breaking down silos is never easy – but I think it’s an interesting reflection of how far we’ve come that some silos seem frankly ridiculous now, while others just seem old-fashioned. I still hold out for the distant dream of genuinely cross-functional teams. Whenever I’ve seen this actually happen, the lack of bottlenecks and miscommunication makes everything so much faster, so much easier. In the end a cross-functional team is better than silos. But a cross-skilled team is better still, if you can manage it.