Project vs product teams

One of the hardest things for companies trying to be agile is how to structure teams. Back in the bad old days, teams would form around a project; then, six months later, everyone would disperse and move on to new teams. By the time a team had formed and become effective it was ripped apart again. You got no sense of ownership, no continuity.


Nowadays everyone knows that projects are bad; you need scrum teams instead. So a scrum team is formed, with a product owner to prioritise the work. But what often happens is that what gets prioritised onto the backlog is a project in bite-size pieces. For example, I saw one team that ran out of work to do. The backlog was empty because, except for bugs, none of the outstanding projects had been signed off. There’s that word again. Project.

Behind the scenes a scrum team often becomes a slightly better way of delivering projects. You get the benefits of team consistency and continuity, with the added benefit that the business can carry on thinking of the work in terms of projects. The downside of this approach is that the scrum team can lack clear focus: there’s no overarching goal for the team. From sprint to sprint the focus might change as the relative importance of different projects changes. This makes it hard for the team to feel committed to a big idea, to some greater purpose. It ends up as an endless procession through the backlog.

Why does this happen? I think it comes down to money. Somebody, somewhere is watching the money. Somebody wants to know “if I spend £x here, how much am I going to make back, and by when?” The idea of the project fits this model very easily. The team costs £x per day. The project is estimated to take n days. It’s expected to deliver £y profit. From this we can calculate the expected return on our investment. The trouble is, most of these numbers are entirely made up, if not fundamentally unknowable.
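
To spell the sums out with some entirely invented numbers (which is rather the point): if the team costs £2,000 a day and the project is estimated at 60 days, that’s an estimated cost of £120,000; if it’s predicted to deliver £300,000 profit in the first year, that’s a headline return of 150%. Every figure in that calculation is a guess, and the two that matter most – the duration and the profit – are the two we are worst at guessing.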

Let’s start with the obvious one: how long is the project going to take? Really? We still actually ask this question? Have we learnt nothing from agile? It seems not: many, many people still think about the world in terms of delivery dates and certainties. When will we learn that the best way is always to deliver a little, inspect the results, then decide whether to keep on the same path or deliver something different? You can’t have an end date with this approach – it’s not even meaningful. Keep on delivering one thing until there’s something better you could be doing, then go do that. Rinse, repeat.

What about the other question: how much profit will this project make? Well, let’s assume for now that the entire project, as originally conceived, will actually be delivered (as if this ever actually happens in software). Can you tell how much money it’s made you? Really? Independent of every other change that the organisation has made at the same time? From software to operations to marketing?

Now sometimes you can come up with a good estimate of expected returns, but often it’s just a pipe dream. If you’re vigorously disagreeing with me: I assume you’re religiously tracking actual costs and returns and feeding them back into future project planning? I have seen very, very few companies actually do this. If you’re not measuring how much you made from a project, how do you know your original estimates were any good?

So we have two made-up numbers, both almost certainly unachievable in practice – and we use them to dictate the team’s priority order. I once saw a project signed off and jump to the top of the priority order because it predicted something like a 10% uplift in revenue for the company. This was a very large number for a single project and clearly ridiculous to everyone involved, but it was signed off and duly implemented. Revenue projections later that year were revised downwards, and downwards again, due to difficult market conditions. And some blatant over-estimation. And yet, this non-science is what passes for return on investment planning in all too many organisations.

What’s the alternative? The best teams I’ve seen have been structured around products. Give the team complete ownership of one or more products. Any and all changes to those products go via the product team. A product owner guides product direction. As an area expert, they are trusted to decide which are the most important things to work on. They can discuss long-term direction with the team and keep a consistent, coherent vision of where the product is evolving. Inevitably, some changes are large and sufficiently interdependent that they become a project (if one part is delivered then it all must be), but the team understands the business benefit of the solution and can evolve the implementation to meet the underlying business need, instead of trying to satisfy some arbitrary internal project deadline. This gives teams complete freedom to inspect and adapt each iteration. With an understanding of the business priorities for their products they can make sensible trade-offs as each iteration surfaces more information.

What about the money? It’s hard, but let’s be honest about it: return on investment is not clear with the project model of software delivery, so accept that it isn’t clear. The hard thing is working out which products are making you money and which could make more money if more was invested. The trouble is I’ve worked in teams where, honestly, the product was so profitable with so little scope for uplift that the most cost-effective thing to do would have been to fire the dev team and just keep milking the cash cow.

So how can we decide where to spend our money? I think the empirical model of agile could fit here perfectly well. Let’s assume for a minute that the amount of money you have for the delivery team as a whole is fixed – your only choice is where to put it. How much to spend on product A vs how much on product B. Can you estimate how much money each product is making for the business? How is it changing over time?

If one product is making more profit each month – if it’s a growing product – then invest more resources there, to accelerate the growth. If a product is slowing down, with smaller increases in profit each month, or even with profit decreasing, then stop spending so much money on it. This way your money naturally goes where it seems to be delivering the biggest return: put your money where the results are.

The hardest thing with this is that it takes time to get the feedback: changing resource allocation could take months to show up on the bottom line. But at least we’re being honest about the impact our decisions have. Instead of trying to micro-manage delivery via projects, manage where resources are put and let the product owner manage the priority order.

Cross-functional teams

Cross-functional teams aren’t a new idea. And yet, somehow, we still don’t seem to have got the memo.

I was listening to Scott Hanselman’s excellent podcast “Hanselminutes” last week; he had Angie Jones on to talk about automation. Among all the great advice around ensuring that automation is a first-class citizen in your development process, one thing stood out for me:

you need to get your automation engineers into your scrum team

I don’t know why, but it surprised me. Are there really companies out there up to speed enough to be doing test automation, yet so behind the times with agile that they think it’s a good idea to have a dedicated team of automation engineers, removed from the rest of the dev team?

Cross-functional teams are a pretty central idea to agile – breaking down silos and ensuring that everyone that is required to produce an increment of working software is aligned and working together. It’s certainly not a new idea, but it’s clearly an idea that we’re still struggling to absorb.

But then, looking back, I remember working for a certain large company that decided they needed to “do more test automation”. So they hired a room full of automation engineers, who sat two floors away from the dev team, in a room hidden away (we honestly didn’t know they were even there for weeks, maybe even months). This team were responsible for creating an automated test pack for the application (rather than use the one the test automation engineers in the scrum teams had been working on for the last few years). But… they weren’t even talking to the scrum teams. So they were constantly chasing to keep up. As you can imagine, hilarity ensued. I say hilarity. Arguments, really. Then anger. Eventually laughter as we realised all this effort was wasted because the scrum teams wouldn’t – in fact couldn’t – support this new automation code.

Clearly not getting the idea of cross-functional teams is an age old problem. Compare this to a more recent client of mine – one that had a genuinely more cross-functional team. Not only did the scrum team have its own automation engineer, the developers (actual developers, not re-branded testers) were encouraged to work on the test automation tools – to everyone’s benefit. Test tooling written to the same standards as production code, with the insight and experience of the test automation specialist. This is moving beyond cross-functional teams into cross-skilled teams. Not only is every skill set you need within the same scrum team, but each individual can have multiple skills, taking on multiple roles.

Sure, you still need specialists. But with generalizing specialists you get the best of both worlds: the experience of specialists in their area, with the flexibility and breadth of ideas that come from the whole team being able to work on whatever is required. When the entire team can swarm on any area you have a very flexible team: if we need a big push for test automation, the entire team can focus on it. Similarly, with plenty of pairing and rotation, everyone on the team will see every area and every role, allowing everyone’s unique perspective to improve the product and the process.

But then, a counter-example: the same client suffered from another age-old silo – operations. I thought devops had killed this silo, but it seems not. If the scrum team can’t release an increment of software to actual users then it isn’t a fully self-contained, self-sufficient, cross-functional team. A scrum team working with a separate test automation team seems like a crazy idea; and yet, somehow, a scrum team working with a separate operations team is much more normal, much more accepted. But it’s the exact same problem: if you don’t have everyone you need in the same scrum team then you’re going to get bottlenecks. You’re going to get communication problems. You’re going to get a “them-vs-us” attitude.

Every time I’ve come up against this, the typical argument against embedding operations staff within scrum teams is that they wouldn’t be working on “your stuff” all the time, so the rest of the time they’d be busy doing other stuff unrelated to “your team”, or they’d be bored. Well, maybe if we freed up that extra capacity we could release more often? Maybe they could be working on making it quicker, easier and safer to release more often? Maybe they could be more deeply involved with development when we’re making decisions which affect what they’re going to release and how it’s going to wake them up in the middle of the night. Maybe they could even help with other, non-production environments? The neglected little siblings of production that every company seems to struggle to pay enough attention to.

Maybe, even, over time the team evolves from having an operations specialist to having team members cross-skilled into operations. Under the watchful eye of the specialist could we, shock horror, let testers touch production? Could the BA manage a release? In some industries this is completely impossible for regulatory reasons, but in all the others it’s “impossible” for merely arbitrary reasons.

Breaking down silos is never easy – but I think it’s an interesting reflection of how far we’ve come that some silos now seem frankly ridiculous, while others just seem old-fashioned. I still hold out for the distant dream of genuinely cross-functional teams. Whenever I’ve seen this actually happen, the lack of bottlenecks and miscommunication makes everything so much faster, so much easier. In the end a cross-functional team is better than silos. But a cross-skilled team is better still, if you can manage it.

Code awareness levels

Writing code is all about working at multiple levels of abstraction concurrently. But as well as working at multiple levels of abstraction there are also multiple levels of awareness of the code.

The most basic level of code awareness is just making the code work. Does this line of code compile, run and do what I intended it to do? While learning to code, this lowest level of awareness consumes almost all our capacity: it is difficult to focus on anything other than just typing in the correct syntax.

Once we’ve mastered typing in code that works, there is another level of awareness: is the intent of this line of code clear? Will another human being be able to understand this? Hell, will I be able to understand it in a month’s time? This is where code readability begins. While writing code we expend some of our mental capacity on making sure that the code will be readable.

Code legibility, though, is about more than each single line of code in isolation. We also need to consider whether this line makes sense in the context of the rest of the method – maybe we should extract a new method? Does this line of code make sense in the context of the rest of this class? Does the behaviour belong somewhere else? Does the design that I’m discovering actually make sense? Is it consistent? Is the pattern I’m implementing consistent with the rest of the application? Besides making each line of code compile there are numerous higher levels – rising through each of the larger structures in the application: the method, the class, the module, the component, the system.
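
As a small illustration of the extract-a-method case (the types and names here – User, FinesOutstanding, SendReminder – are invented purely for the example):

    // Before: the reader has to decode the condition inline
    if (user.FinesOutstanding > 0 && (DateTime.UtcNow - user.LastPaymentDate).TotalDays > 30)
    {
        SendReminder(user);
    }

    // After: the intent has a name; the detail lives one level down
    if (HasLongOverdueFines(user))
    {
        SendReminder(user);
    }

    private static bool HasLongOverdueFines(User user) =>
        user.FinesOutstanding > 0 &&
        (DateTime.UtcNow - user.LastPaymentDate).TotalDays > 30;

Nothing about the behaviour changes; what changes is how much the reader has to hold in their head at the call site.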

Besides code legibility there are other factors to be aware of. Will failures in this code be easy to understand? Will exceptions help me get to the right part of the code quickly? What happens if things I depend on fail? Will this code perform well – in what scenarios, and with what frequency, will it be called? Is this code secure – are there ways for an attacker to take advantage of it?

Writing good code requires developers to be aware of these levels all the time. However, if we’re tired, hungover or fed up we probably won’t give much time to these concerns. We’ll just make the code in front of us work and ignore the bigger picture. When we’re lacking mental capacity, we tend to stop worrying about the higher levels and focus on the lower-level, easier problems. “I’ll come back later and fix the design”, we tell ourselves.

This is where pairing can help. I don’t think it’s true that the non-driving partner thinks only about the bigger picture; but they certainly have spare capacity to think about these things, and to act as a conscience when the driver misses something or starts to tire.

I like to use hands-on coding exercises in interview situations because it helps highlight how much awareness a developer has. Generally I’m looking for developers that, for a given problem, have enough capacity to solve the problem quickly and well. Less capable developers will generally take longer and/or solve the problem badly. In an interview situation though, there are extra levels of awareness, besides just solving the problem: is what I’m doing creating a good impression? Am I solving the problem in the right way (e.g. using TDD)? Is the way I’m solving the problem showing off the right skills?

For example, a while back I interviewed a candidate who used “yield return”. This is an unusual thing to see, so I asked why they used it. “To start a discussion about whether or not you should ever use it.” This I found quite impressive: a candidate with enough mental capacity not only to solve the problem at hand, and solve it well, but with enough spare capacity left to realise that there was an opportunity to use an unusual construct purely to provoke a discussion of it – effectively driving the interview through code, showing off the depth of their knowledge and experience. This is the kind of person you want on your team: someone with enough spare mental capacity to make sure the code is clean and the design correct.
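
For anyone who hasn’t met it, yield return builds an iterator lazily – here’s a minimal sketch (nothing to do with the candidate’s actual code, which I don’t have to hand):

    using System;
    using System.Collections.Generic;

    class Program
    {
        // Lazily yields each line length; nothing runs until the caller starts iterating
        static IEnumerable<int> LineLengths(IEnumerable<string> lines)
        {
            foreach (var line in lines)
            {
                yield return line.Length;
            }
        }

        static void Main()
        {
            foreach (var length in LineLengths(new[] { "foo", "quux" }))
            {
                Console.WriteLine(length);   // prints 3, then 4
            }
        }
    }

Whether you should ever use it in production code is, of course, exactly the discussion the candidate was angling for.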

I think this lack of capacity is why the code we write when we’re tired, or when we’re learning a new language or framework, is bad: all our capacity is focused on just making the code work; we don’t have spare capacity to also think about whether the code is readable or whether the design makes sense. When we’re struggling just to make the code do what we want, just to keep the compiler happy, is it any wonder the outcome is a mess?


If you want to see how much mental capacity I can bring to your company, why not hire me? I’m looking for new roles right now – so drop me a line at dave@activelylazy.co.uk.

Continuum of Design

How best to organise your code? At one end of the scale there is a god class – a single, massive entity that stores all possible functions; at the other end of the scale are hundreds of static methods, each in their own class. In general, these two extremes are both terrible designs. But there is a continuum of alternatives between these two extremes – choosing where along this scale your code should be is often difficult to judge and often changes over time.

Why structure code at all?

It’s a fair enough question – what is so bad about the two extremes? They are, really, more similar to each other than to any of the points in between. In one there is a single class with every method in it; in the other I have static methods, one per class, or maybe I’m using a language which doesn’t even require me to group methods into classes. In both cases there is no organisation. No structure larger than methods with which to organise or group related code.

Organising code is about making it easier for human beings to understand. The compiler or runtime doesn’t care how your code is organised, it will run it just the same. It will work the same and look the same – from the outside there is no difference. But for a developer making changes, the difference is critical. How do I find what I need to change? How can I be sure what my change will impact? How can I find other similar things that might be affected? These are all questions we have to answer when making changes to code and they require us to be able to reason about the code.

Bounded contexts help contain understanding – by limiting the size of the problem I need to think about at any one time, I can focus on a smaller portion of it; but even within a bounded context, organisation helps. It is much easier for me to understand how 100 methods work when they are grouped into 40 classes, with relationships between the classes that make sense in the domain, than it is for me to understand a flat list of 100 methods.

An Example

Let’s imagine we’re writing software to manage a small library (for you kids too young to remember: a library is a place where you can borrow books and return them when you’re done reading them; a book is like a physically printed blog but without the comments). We can imagine the kinds of things (methods) this system might support:

  • Add new title to the catalog
  • Add new copy of title
  • Remove copy of a title
  • User borrows a copy
  • User returns a copy
  • Register a new user
  • Print barcode for new copy
  • Fine a user for a late return
  • Pay outstanding fines for a user
  • Find title by ISBN
  • Find title by name, author
  • Find copy by scanned id
  • Change user’s address
  • List copies user has already borrowed
  • Print overdue book letter for user

This toy example is small enough that, written as one class, it would probably be manageable; pretty horrible, but manageable. How else could we go about organising this?
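
Sketched as the single-class extreme (the signatures are hypothetical – the point is the shape, not the details), it would look something like this:

    // The god-class extreme: every operation on one type, no structure beyond methods
    public class LibrarySystem
    {
        public void AddTitle(string isbn, string name, string author) { /* ... */ }
        public void AddCopy(string isbn) { /* ... */ }
        public void RemoveCopy(string copyId) { /* ... */ }
        public void Borrow(string userId, string copyId) { /* ... */ }
        public void Return(string copyId) { /* ... */ }
        public void RegisterUser(string name, string address) { /* ... */ }
        public void PrintBarcode(string copyId) { /* ... */ }
        public void FineUser(string userId, decimal amount) { /* ... */ }
        public void PayFines(string userId, decimal amount) { /* ... */ }
        // ...and so on, one method per bullet in the list above
    }

It works, but nothing about the structure tells a reader which methods belong together.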

Horizontal Split

We could split functionality horizontally: by technical concern. For example, by the database that data is stored in; or by the messaging system that’s used. This can work well in some cases, but can often lead to more god classes because your class will be as large as the surface area of the boundary layer. If this is small and likely to remain small it can be a good choice, but all too often the boundary is large or grows over time and you have another god class.

For example, functionality related to payments might be grouped simply into a PaymentsProvider – with only one or two methods we might decide it is unlikely to grow larger. Less obviously, we might group printing-related functionality into a PrinterManager – while it might only have two methods now, if we later start printing marketing material or management reports the methods become less closely related and we have the beginnings of a god class.
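
A rough sketch of that horizontal split (again, hypothetical names and signatures):

    // Horizontal split: grouped by technical concern rather than domain concept
    public class PaymentsProvider
    {
        public void TakePayment(string userId, decimal amount) { /* ... */ }
        public decimal OutstandingFines(string userId) { /* ... */ return 0m; }
    }

    public class PrinterManager
    {
        // Fine with two methods; add marketing letters and management reports
        // and this is the beginnings of the next god class
        public void PrintBarcode(string copyId) { /* ... */ }
        public void PrintOverdueLetter(string userId) { /* ... */ }
    }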

Vertical Split

The other obvious way to organise functionality is vertically – group methods that relate to the same domain concept together. For example, we could group some of our methods into a LendingManager:

  • User borrows a copy
  • User returns a copy
  • Register a new user
  • Fine a user for a late return
  • Find copy by scanned id
  • List copies user has already borrowed

Even in our toy example this class already has six public methods. A coarse-grained grouping like this often ends up being called a SomethingManager or TheOtherService. While this is sometimes a good way to group methods, the lack of a clear boundary means new functionality is easily added over time, and we grow ourselves a new god class.
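
As a sketch (hypothetical signatures again), that coarse-grained grouping looks like this:

    // Coarse-grained vertical split: everything vaguely to do with lending in one place
    public class LendingManager
    {
        public void Borrow(string userId, string copyId) { /* ... */ }
        public void Return(string copyId) { /* ... */ }
        public void RegisterUser(string name, string address) { /* ... */ }
        public void FineForLateReturn(string userId, string copyId) { /* ... */ }
        public string FindCopyByScannedId(string scannedId) { /* ... */ return null; }
        public string[] CopiesBorrowedBy(string userId) { /* ... */ return new string[0]; }
        // six public methods already, and no obvious reason new ones won't keep arriving
    }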

A more fine-grained vertical grouping would organise methods into the domain objects they relate to – the recognisable nouns in the domain, where the methods are operations on those nouns. The nouns in our library example are obvious: Catalog, Title, Copy, User. Each of these has two or three public methods – but to understand the system at the outer level I only need to understand the four main domain objects and how they relate to each other, not all the individual methods they contain.

The fine-grained structure allows us to reason about a larger system than if it were unstructured. The extents of the domain objects should be relatively clear: if we add new methods to Title it is because there is some new behaviour in the domain that relates to Title. Functionality should only grow slowly and we should avoid new god classes; instead, new functionality will tend to add new classes, with new relationships to the existing ones.
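
One possible shape for this (the method names are my guesses, and not every operation from the list is shown – the point is the grouping, not the signatures):

    // Fine-grained vertical split: small domain objects, each with a handful of methods
    public class Catalog
    {
        public Title AddTitle(string isbn, string name, string author) { /* ... */ return null; }
        public Title FindByIsbn(string isbn) { /* ... */ return null; }
        public Title FindByNameAndAuthor(string name, string author) { /* ... */ return null; }
    }

    public class Title
    {
        public Copy AddCopy() { /* ... */ return null; }
        public void RemoveCopy(Copy copy) { /* ... */ }
    }

    public class Copy
    {
        public void PrintBarcode() { /* ... */ }
        public void BorrowBy(User user) { /* ... */ }
        public void Return() { /* ... */ }
    }

    public class User
    {
        public void ChangeAddress(string newAddress) { /* ... */ }
        public void Fine(decimal amount) { /* ... */ }
        public void PayFines(decimal amount) { /* ... */ }
    }

To understand the system at the outer level I only need these four nouns and how they relate; the individual methods can wait until I need them.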

What’s the Right Way?

Obviously there’s no right answer in all situations. Even in our toy example it’s clear to see that different structures make sense for different areas of functionality. There are a range of design choices and no right or wrong answers. Different designs will ultimately all solve the same problem, but humans will find some designs easier to understand and change than others. This is what makes design such a hard problem: the right answer isn’t always obvious and might not be known for some time. Even worse, the right design today can look like the wrong design tomorrow.

How many builds?

I’m always amazed at the seemingly high pain threshold .net developers have when it comes to tooling. I’ve written before about the poor state of tooling in .net, but just recently I hit another example of poor tooling that infuriates me: I have too many builds, and they don’t agree on whether my code compiles.

One of the first things that struck me when starting to develop on .net was that compiling code was still a thing. An actual step that had to be thought about. Incremental compilers in Eclipse and the like have been around for ages – to the point where, generally, Java developers don’t actually have to instruct their IDE to compile code for them. But in Visual Studio? Oh, it’s definitely necessary. And time-consuming. Oh my god is it slow. Ok, maybe not as slow as Scala. But still unbelievably slow when you’re used to working at the speed of thought.

Another consequence of the closed, Microsoft way of doing things is that tools can’t share the compiler’s work. So ReSharper has, effectively, implemented its own compiler. It incrementally parses source code and finds compiler errors. Sometimes it even agrees with the Visual Studio build. But all too often it doesn’t. From the spurious not-actually-an-error that I have to continually instruct ReSharper to ignore; to the warnings-as-errors build failures that ReSharper doesn’t warn me about; to the random why-does-ReSharper-not-know-about-that-NuGet-package errors.

This can be infuriating when refactoring. For example, if an automated refactor leaves a variable unused, I now have a compiler warning; since all my projects run with warnings-as-errors switched on, this fails the build. But ReSharper doesn’t know that. So I apply the refactoring, code & tests are green: commit. Push. Boom! CI is red. But it was an automated refactor for chrissakes, how’ve I broken the build?!
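
To make that concrete (the code below is invented, but TreatWarningsAsErrors and compiler warning CS0219 are real): with warnings promoted to errors in the project file, a leftover local that ReSharper merely greys out is enough to fail the MSBuild compile.

    public static int CountOverdueItems(int[] daysOverdue)
    {
        // Left behind by an automated refactor and never read again
        var unused = 0;        // CS0219: variable is assigned but its value is never used
        var count = 0;
        foreach (var days in daysOverdue)
        {
            if (days > 0) count++;
        }
        return count;          // with <TreatWarningsAsErrors>true</TreatWarningsAsErrors>, CS0219 fails the build
    }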

I also use NCrunch, an automated test runner for Visual Studio (like Infinitest in the Java world). NCrunch is awesome, by the way; better even than the continuous test runner in ReSharper 10. If you’ve never used a continuous test runner and think you’re doing TDD, sort your life out and set up Infinitest or NCrunch. It doesn’t just automate pressing the shortcut key to run all your tests. Well, actually that is exactly what it does – but the impact it has on your workflow is so much more than that. When you can type a few characters, look at the test output and see what happened, you get instant feedback. This difference in degree changes the way you write code and makes it so much easier to do TDD.

Anyway, I digress – NCrunch, because Microsoft, can’t use the result of the compile that Visual Studio does. So it does its own. It launches MSBuild in the background, continually re-compiling your code. This is not exactly kind on your CPU. It also introduces inconsistencies. Because NCrunch runs a slightly different MSBuild on each project from the build Visual Studio does, you sometimes get subtly different results; which are different again from ReSharper, with its own compiler that isn’t even using MSBuild. I now have three builds. Three compilers. It is honestly a miracle when they all agree that my code compiles.

An all-too-typical dev cycle becomes:

  • Write test
  • ReSharper is happy
  • NCrunch build is failing, force reload NCrunch project
  • NCrunch builds, test fails
  • Make test pass
  • Try to run app
  • Visual Studio build fails
  • Fix NuGet problems
  • NCrunch build is now failing
  • Force NCrunch to reload at least one project again
  • Force Visual Studio to rebuild the project
  • Then the solution
  • Run app to sanity check change
  • ReSharper now shows error
  • Re-ignore perennial ReSharper non-error
  • All three compilers agree, quick: commit!

Normally the build then fails in CI anyway, because I’ve still screwed up the NuGet packages.

Then recently, as if this wasn’t already one of the outer circles of hell, the CI build started failing for a bizarre reason. We have a command line script which applies the same build steps that CI runs, so I thought I’d run that to replicate the problem. Unfortunately the command line build was failing on my machine for a spectacularly spurious reason that was different again from the failure in CI. Great. I now have five builds which don’t all agree on whether my code compiles.

Do you really hate computers? Do you wish you had more reasons to defenestrate every last one of them? Have you considered a career in software development?