Ability or methodology?

There’s been a lot of chatter recently on the intertubes about whether some developers are 10x more productive than others (e.g. here, here and here). I’m not going to argue whether this or that study is valid or not; I Am Not A Scientist and I don’t play one on TV, so I’m not going to get into that argument.

However, I do think these kinds of studies are exactly what we need more of. The biggest challenges in software development are people – individual ability and how we work together – not computer science or technology. Software development has more in common with psychology and sociology than with engineering or maths. We should be studying it as a social science.

Recently I got to wondering: where are the studies that prove that, say, TDD works, or that pair programming works? Where are the studies that conclusively prove Scrum increases project success or customer satisfaction? Ok, there are some studies – especially around TDD and some around Scrum (hyper-performing teams, anyone?) – but a lazy google turns up very little. I would assume that if there were credible studies into these things they'd be widely known, because they would provide a great argument for introducing these practices. Of course, it's possible that I'm an ignorant arse and these studies do exist… if so, I'm happy to be educated 🙂

But before I get too distracted, Steve's post got me thinking: if the variation between individuals really can be 10x, no methodology is going to suddenly introduce a 20x difference across the board. This means that individual variation will always dwarf any difference due to methodology.

Perhaps this is why there are so few studies that conclusively show productivity improvements? Controlling for individual variation is hard, and by the time you have, it makes a mockery of any methodological improvement. If "hire better developers" is 5x more effective than your shiny new methodology, why bother developing and proving it? Ok, leaving aside the consultants who have books to sell, conferences to speak at and gullible customers lined up to pay them to explain their methodology – why would the non-crooked ones bother?

Methodologies and practices in software development are like fashion. The cool kid down the hall is doing XP. He gets his friends hooked. Before you know it, all the kids are doing XP. Eventually, everyone is doing XP, even the old fogies who say they were doing XP before you were born. Then the kids are talking about Scrum or Software Craftsmanship. And before you know it, the fashion has changed. But really, nothing fundamentally changed – just window dressing. Bright developers will always figure out the best, fastest way to build software. They’ll use whatever fads make sense and ignore those that don’t (DDD, I’m looking at you).

The real challenge, then, is the people. If simply having the right people on the team is a better predictor of productivity than choice of methodology, then surely recruitment and retention should be our focus. Rather than worrying about Scrum or XP, or trying to enforce code reviews or pair programming, perhaps we should ensure we've got the best people on the team, that we can keep them, and that any new hires are of the same high calibre.

And yet… recruitment is a horrible process. Anyone who's ever been involved in interviewing candidates will have horror stories about the morons they've had to interview or the piles of inappropriate CVs they've had to wade through. Candidates don't get an easier time either: dealing with recruiters who don't understand technology, and trying to decide whether you really want to spend 8 hours a day with a team you know little about. It almost universally becomes a soul-destroying exercise.

But how many companies bring candidates in for half a day’s pairing? How else are candidate and employer supposed to figure out if they want to work together? Once you’ve solved the gnarly problem of getting great developers and great companies together – we’ll probably discover the sad truth of the industry: there aren’t enough great developers to go round.

So rather than worrying about this technology or that; about Scrum or XP. Perhaps we should study why some developers are 10x more productive than others. Are great developers born or made? If they’re made, why aren’t we making more of them? University is obviously poor preparation for commercial software development, so should there be more vocational education – a system of turning enthusiastic hackers into great developers? You could even call it apprenticeship.

That way there’d be enough great developers to go round and maybe we can finally start having a grown up conversation about methodologies instead of slavishly following fashion.

Doing agile the traditional way


Doing agile is easy – if you're working on a greenfield project, with no history and no existing standards and processes to follow. The rest of us get to work on brownfield projects, with company standards that are the antithesis of agile, and people and processes wedded to a past long out of date. Oh, you can do agile in this environment, but it's hard – because you have to overcome the very constraints that stopped you being agile in the first place.

Any organisation hoping to "go agile" needs to overcome its constraints. The scrum team hits issues that stop them being properly agile – normally artefacts of the old way of doing things – and raises them to the scrum master and management, hoping to remove them and reach the agile nirvana on the other side. Management diligently discuss these constraints; the easiest are quickly removed – simple things like better tools and whiteboards everywhere – but some take a bit longer.

Before too long, you run into company culture: these are the constraints that just won't go away. Not all constraints are created equal, though – the hardest to remove are often the most important: the very things that stop the company being properly agile. So let's look at some typical activities and the constraints a team can encounter around them.

User Story Workshops

If you’re doing agile, you’ve gotta have user stories. If you have user stories, you’re bound to have something like a user story workshop – where all the stakeholders (or just the dev team, if you’re kidding yourself) get together to agree the basic details of the work that needs to be done.

The easiest trap in the world to fall into is to try and perfect the user stories. Before you know it you’re stuck in analysis paralysis, reviewing and re-reviewing the user stories until everyone is totally happy with them. Every time you discuss them, someone thinks of a new edge case, a new detail to think about – so you add more acceptance criteria, more user stories, more spikes.

Eventually you'll get moving again, but now you've generated a mountain of collateral with the illusion of accuracy. A user story should be a placeholder for a conversation; if you waste time generating all that detail up front, you might skip the conversation later – thinking you've captured everything already – and miss the really critical details.


The worst part is when it comes to producing estimates. The start of a new project or team is a difficult time: you've got no history to base your estimates on, but management need to know how long you'll take. With the best of intentions, scrum masters and team leads are often given incentives – a bonus, for example – to produce accurate estimates. There are only two ways to play this: pad your estimates mercilessly, knowing you'll fill that "spare" time easily; or spend ever more time analysing the problem, as though exhaustive analysis could predict the future. Neither outcome is what the business really wants, and neither can be described as "agile".


There's a real conflict between the need for overall architecture and design, and the agile mentality of JFDI – of not spending time producing artefacts that don't themselves add value. However, you can't coordinate large development activities without some guiding architecture, some vague notion of where you're heading.

But the logical conclusion of this train of thought is that you need some architectural design – so let's write a document describing our ideas and get together to discuss it. Luckily this provides a great forum for all the various stakeholders across the business who don't grok code to have their say: "we need an EDA"; "I know yours is a small change, but first you must solve some arbitrarily vast problem overnight"; "I'm a potential user of this app and I think it should be green".

This process is great at producing perfect design documents. Unfortunately, in the real world, where requirements change faster than designs can be written, it's a completely wasted effort. But it gives everyone a forum for airing their views, and since they don't trust the development team to meet their requirements any other way, any change to the process is resisted.

The scrum team naturally raise the design review process as a constraint, but once it's clear it can't be removed, they start to adapt to it. Because the design can fundamentally change on the basis of one person's view right up to the last minute (hey, Cassandra sounds cool – let's use that!), the dev team change their process: "sprints can't start until the design is signed off". And because the design can fundamentally change the amount of work to be done, the user stories are never "final" until the design is signed off – and the business can't get an estimate until the design has been agreed.


If you’re lucky, the one part of the project that feels vaguely scrum-y is the development sprint. The team are relatively free of dependencies on others; they can self-organise around getting the work done; things are tested as they go; the Definition of Done is completed before the next story is started. The team are rightly proud of “being agile”.

Manual Testing

And then the constraints emerge again. If you're working on a legacy code base, without a decent suite of automated tests, you probably rely on manual testers clicking through your product against some written test script. A grotesque misuse of human beings if ever there was one, but the case for investing in automation "this time" is always outweighed by how quickly the testers can whip through a script they've run a million times before.

Then, worst of all possible worlds: because regression testing the application isn't a case of clicking a button (but of clicking thousands of buttons, one by one, in a very specific order), you have to have a "final QA run" – a chance, once development has stopped, for QA to assure that the product meets the required quality goals. If your reliance on manual QA is large, this can be a time-consuming exercise. And then, what happens if QA find a bug? It's fixed, and you start the whole process all over again… like some gruesome nightmare you can't wake up from!

The Result

With these constraints at every step of the process, we're not working how we'd like to. We can either view them as hindrances to working the way we want, or as forcing on us a new process that we're not in control of. What we've managed to do is invent a new development methodology:

  • First we carefully gather requirements and discuss them ad nauseam
  • Then we carefully design a solution that meets these (annoyingly changing) requirements
  • Then we write code to meet the design (and ignore that the design changes along the way)
  • Finally we test that the code we wrote actually does what we wanted it to (even though that’s changed since we started)

What have we invented? Waterfall. We’ve rediscovered waterfall. Well done! For all your effort and initiatives, you’ve managed to invent a software development methodology discredited decades ago.

But we've tarted it up with stand-ups, user stories, a burndown chart and some poor schmuck who gets called the scum master – really, all we've done is put some window dressing on our old process and given it a fancy dan name.

If the ultimate goal of introducing agile was to make the business more efficient but you're still drowning in constraints, then the exercise has failed: you've still got the same constraints, and you're still doing waterfall – no matter what you call it; a rose by any other name still smells of shit.

The danger of deadlines


Deadlines are a good thing, right? Everyone needs deadlines. Don’t they?

There are three constants in life: death, taxes and software slipping. I'm doing my best to avoid the first two, but the latter seems here to stay. Software engineering, despite all its advances, has never solved the fundamental question: how long will it take? The advice I got from Nik the development manager on my first day at work after graduation – "think of a number, then double it" – is still a good rule of thumb.

There are lots of reasons that software slips: engineers are optimistic people, who assume this time round will be better, despite years of evidence to the contrary; the requirements are never really clear, no matter what the product manager tells you; there will always be one innocent little question, 90% of the way through, that suddenly balloons into a metric crapload of work; and everyone’s favourite: what idiot did a half-assed job here, making me finish up his mess – why I oughta!

What’s the consequence of software slipping?

Perhaps you work for one of those companies that don’t care, “you’re done when you’re done; we’ll release when you’re ready”. If so, get outta here, we don’t like your kind round here!

For the rest of us, something has to happen when the schedule slips. Somebody, somewhere has been promised the release by a certain date. All your caveats have been ignored – all that person hears is "it'll be live by July". Whether it's that big external client, "the business", or your boss – somebody, somewhere has an expectation mismatch with reality.

What happens?

I don't have to tell you what happens – project managers, development leads, your boss – suddenly everybody cares about scope/time/resources and has designs on your weekend.

One way or another, projects normally get out. Scope is cut, resources are pointlessly thrown at the project, you lose your weekends. Eventually the project gets out and everyone lives to fight another day. But spare a thought for the real victim here: quality.


And you thought it was all about you? Yes – the real victim in all this is the quality of your software.

In a traditional waterfall project, by the time you realise you're late it's normally too damned late to do anything about it, and you're screwed. So you do the only thing you can do – you cut time from the end of the project; i.e. you persuade QA they don't need to do as much testing as they estimated. The result? Bugs are missed, crap code gets released to customers – quality has been in a hit-and-run accident.

In an iterative project you might be a bit luckier: it might just be one iteration that's screwed. So you cut from the end of the iteration (there goes the QA again) and try to cut scope from the next iteration to "pull it back". Unfortunately, all those bugs from the previous iteration are still there, waiting to derail QA and add unexpected development costs later in the project. And before you know it, you're back at the scene of the accident again.

At this point the agilistas are all laughing “but that never happens on an agile project!”.

Doesn’t it?

Agile’s different

It's true: scrum, kanban and the like are much better at forecasting, ahead of time, when you're gonna miss some arbitrary date. You know your velocity; you know how much work is left. You can figure out how many features you can deliver in the time available and cut entire features from the end of the project, without sacrificing QA. Perfect!
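The forecasting arithmetic behind that burndown chart is trivial – here's a minimal sketch, with all the figures (velocity, backlog sizes) invented purely for illustration:

```python
import math

# Sketch of the arithmetic a burndown forecast encodes.
# All numbers here are made up for illustration, not from any real project.

def sprints_remaining(points_left: float, velocity: float) -> int:
    """Whole sprints needed to burn down the remaining backlog."""
    return math.ceil(points_left / velocity)

def features_deliverable(feature_sizes, sprints_available, velocity):
    """Cut whole features from the end once the points budget runs out."""
    budget = sprints_available * velocity
    delivered = []
    for size in feature_sizes:
        if size > budget:
            break  # this feature, and everything after it, gets cut
        budget -= size
        delivered.append(size)
    return delivered

print(sprints_remaining(120, 20))                     # 6
print(features_deliverable([30, 40, 30, 40], 5, 20))  # [30, 40, 30]
```

The point is that the cut happens at the feature boundary – whole features drop off the end of the plan – rather than each feature quietly being squeezed.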

But the really insidious problem on agile projects, is that the project manager, scrum master, development lead, even the smart developers all start burndown engineering. Sam the scrum master says,

We’re a bit behind on the burndown chart, Mike, so if you can just get this story finished up today we’ll score the points and get right back on track!

Sounds harmless enough, doesn’t it?

Burndown engineering

Burndown engineering is a terrible crime. When you continually engineer each story or feature to adhere to some average you've dreamt up, quality dies a thousand small deaths.

I was gonna refactor this code, but I guess I can come back and do it in the next story

Sure, Sam, I was gonna write some more thorough tests; but I can do that in the next sprint  – this story’s done!

The QA guys have been over it already; we really ought to automate some of these tests but we’ve not had time. Shall we add a story to a later sprint to pick this up?

And suddenly, with the best of intentions, Sam has forced technical debt into the project, just in the interests of maintaining his pretty graph. As a one-off this would be fine; the problem is that it happens on every story, for the whole duration of the project. In the end, Sam doesn't have to say a word – the team are self-policing. They pad their estimates to buy time to finish everything they need to, but development takes longer than expected because everyone keeps running into technical debt from previously half-finished stories.

What’s to be done?

Quality should be a parameter controlled by the development team. Some companies need a high quality bar; others can get away with much lower – it depends what kind of industry you're in. Either way, the development team should set a standard for quality and then stick to it.

Velocity is an emergent property of the team. If you hold quality and resources constant, the velocity of the team is something you measure not something you change. Sure, change the scope or change the schedule (or change the number of people if you believe in man months, mythical or otherwise) – but forcing each unit of work to take a certain amount of time forces quality to be the variable that changes.
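In that spirit, velocity is a number you read off after the fact, not a target to engineer towards – a sketch, with invented sprint history:

```python
# Sketch: velocity measured as a trailing average of points actually
# completed, with quality held constant. Sprint figures are invented.

def measured_velocity(completed_points, window=3):
    """Trailing average of points completed per sprint."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

history = [18, 22, 17, 21]             # points finished, sprint by sprint
velocity = measured_velocity(history)  # (22 + 17 + 21) / 3 = 20.0

# Forecasting uses the measured number; scope is the lever, not pace.
remaining_points = 100
print(f"~{remaining_points / velocity:.0f} sprints left")
```

If the number is lower than somebody hoped, the honest responses are to cut scope or move the date – not to lean on each story until the average looks right.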

But you still need deadlines

Sure, somebody, somewhere needs to know when the release will be ready. But the product owner should be the one who owns the release schedule – someone whose only lever is scope. If they want it faster, their only option is to reduce scope.

The scrum master or development lead needs to ensure that development continues apace but that quality is held constant; if the product owner wants it faster, their lever is scope. That leaves developers free to focus on doing what needs to be done, without worrying about whether they're maintaining their velocity this week.

Then we can all stop making crap tradeoffs about whether cutting this story short will make the project deliver faster overall (of course not!); or whether we can justify removing this technical debt to make things quicker (of course we can!).

Ever found yourself on the receiving end of this – being pushed to cut corners to maintain some arbitrary deadline? Ever found yourself trying to justify Doing The Right Thing that you know will take longer now, but make things quicker overall? Let’s hear your war stories!