
A while back I read Making Software – it left me disappointed at the state of academic research into the practice of developing software. I've just read Leprechauns of Software Engineering, which made me downright angry about it.

Laurent provides a great critique of some of the leprechauns of our industry and why we believe in them. But it also highlighted to me how little we really know about what works in software development. Our industry is driven by fashion because nobody has any objective measure of what works and what doesn't. Some people love comparing software to the medical profession; to torture the analogy a bit, I think we're still in the bloodletting-and-leeches phase of medical science. We don't really know what works, but we have all sorts of strange rituals that we believe make us do it better.

Rituals

Rituals? What rituals? We’re professional software developers!

But of course, we all get together every morning for a quick status update. Naturally we answer the three questions strictly; any deviation is off-topic. We stand up to keep the meeting short, obviously. And we all stand round the hallowed scrum kanban board.

But rituals? What rituals?

Do we know if stand ups are effective? Do we know if scrum is effective? Do we even know if TDD works?

Measuring is Hard

Not everything that can be measured matters; not everything that matters can be measured

I think we have a fundamental problem when it comes to analysing what works and what doesn’t. As a developer there are two things I ultimately need to know about any practice/tool/methodology:

  1. Does it get the job done faster?
  2. Does it result in fewer bugs / lower maintenance cost?

This boils down to measuring productivity and defects.

Productivity

Does TDD make developers more productive? Are developers more productive in Ruby or Java? Is pairing productive?

These are fascinating questions; objective, repeatable answers to them would have a massive impact on our industry. Imagine being able to dismiss all the non-TDD-doing, non-pairing Java developers as an unproductive waste of space! Imagine if there was scientific proof! We could finally end the language wars once and for all.

But how can you measure productivity? By lines of code? Don’t make me laugh. By story points? Not likely. Function points? Now I know you’re smoking crack. As Ben argues, there’s no such thing as productivity.
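
To make the lines-of-code point concrete, here's a toy illustration (the function and the scenario are invented for this post, not taken from any study): two functionally identical pieces of code whose line counts differ several-fold, so by a LOC measure the long-winded author looks like the star performer.

```python
# Two functionally identical ways to sum the squares of the even numbers in a
# list. By a lines-of-code measure, the second author looks several times more
# "productive" than the first, despite delivering exactly the same behaviour.

def sum_even_squares_terse(numbers):
    return sum(n * n for n in numbers if n % 2 == 0)


def sum_even_squares_verbose(numbers):
    # Collect the even numbers first.
    evens = []
    for n in numbers:
        if n % 2 == 0:
            evens.append(n)
    # Square each of them.
    squares = []
    for n in evens:
        squares.append(n * n)
    # Add the squares together.
    total = 0
    for square in squares:
        total += square
    return total


assert sum_even_squares_terse([1, 2, 3, 4]) == sum_even_squares_verbose([1, 2, 3, 4]) == 20
```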

The trouble is, if we can't measure productivity, it's impossible to tell whether doing something actually helps you get the job done faster or not. This isn't just an idle problem – I think it fundamentally makes research into software engineering practices impossible.

It makes it impossible to answer these basic questions. It leaves us open to fashion, to whimsy and to consultants.

Quality

Does TDD help increase quality? What about code reviews? Just how much should you invest in quality?

Again, these are fundamental questions that we cannot answer without measuring quality. But how can you measure quality? Is it a bug or a feature? Is it a user error or a requirements error? How many bugs? Is an error in a third-party library that breaks several pages of your website one bug or dozens? If we can't agree on what a defect is, or even how to count them, how can we ever measure quality?

Subjective Measures

Maybe there are some subjective measures we could adopt. For example, perhaps I could monitor the number of emails to support. That's a measure of software quality: it's rather broad, but if software quality increases, the number of emails should decrease. However, social factors could so easily dwarf any actual improvement. For example, if users keep reporting crashes and the developers keep telling them "yeah, we know", do they keep reporting? Or do they just accept it and get on with their lives? The trouble is, the absence of customer complaints doesn't indicate the presence of quality.
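
As a sketch of what tracking that proxy might look like (the CSV file and its columns are hypothetical, not a real export format): tally support emails per week and watch the trend, remembering that a falling line might mean better software – or just resigned users.

```python
import csv
from collections import Counter
from datetime import datetime

# Hypothetical export of the support inbox: one row per email,
# with an ISO-format "received" timestamp column.
weekly_counts = Counter()
with open("support-emails.csv", newline="") as f:
    for row in csv.DictReader(f):
        received = datetime.fromisoformat(row["received"])
        year, week, _ = received.isocalendar()
        weekly_counts[(year, week)] += 1

# Print the number of support emails per ISO week, oldest first.
for (year, week), count in sorted(weekly_counts.items()):
    print(f"{year}-W{week:02d}: {count} support emails")
```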

What To Do?

What do we do? Do we just give up and adopt the latest fashion hoping that this time it will solve all our problems?

I think we need to gather data. We need to gather lots of data. I’d like to see thousands of dev teams across the world gathering statistics on their development process. Maybe out of a mass of data we can start to see some general patterns and begin to have some scientific basis for what we do.

What to measure? Everything! Anything and everything. The only constraint is that we have to agree on how to measure it. Since everything in life is fundamentally a problem of a lack of code, maybe we need a tool to help measure our development process – say, a tool that measures how long I spend in my IDE, how long I spend testing, how many tests I write, how often I run them, how often I commit to version control, and so on. These all provide detailed telemetry on our development process; perhaps out of this mass of data we can find some interesting patterns to help guide us all towards building software better.
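
As a minimal sketch of the sort of thing I mean (the file name, the logged fields and the whole design are placeholders, not a real tool): a tiny wrapper that times whatever command you give it – your test run, say – and appends the result to a local log, along with how many commits you've made in the last day. IDE time would need an editor plugin and is beyond a sketch like this.

```python
#!/usr/bin/env python3
"""A minimal sketch of dev-process telemetry, not a real tool: wrap a command
(e.g. a test run), time it, and append the numbers to a local JSON-lines log
together with a commit count from git."""

import json
import subprocess
import sys
import time
from datetime import datetime, timezone

LOG_FILE = "dev-telemetry.jsonl"  # hypothetical local log file


def commits_last_24h() -> int:
    """Count commits reachable from HEAD in the last 24 hours."""
    out = subprocess.run(
        ["git", "rev-list", "--count", "--since=24 hours ago", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())


def main() -> None:
    command = sys.argv[1:]  # e.g. python telemetry.py pytest -q
    if not command:
        sys.exit("usage: telemetry.py <command> [args...]")
    start = time.monotonic()
    result = subprocess.run(command)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "command": " ".join(command),
        "duration_seconds": round(time.monotonic() - start, 2),
        "exit_code": result.returncode,
        "commits_last_24h": commits_last_24h(),
    }
    with open(LOG_FILE, "a") as log:
        log.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    main()
```

Each run appends one line of data; aggregate enough of those lines across enough teams and we might actually have something to analyse.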


There’s been a lot of chatter recently on the intertubes about whether some developers are 10x more productive than others (e.g. here, here and here). I’m not going to argue whether this or that study is valid or not; I Am Not A Scientist and I don’t play one on TV, so I’m not going to get into that argument.

However, I do think these kinds of studies are exactly what we need more of. The biggest challenges in software development are people problems – individual ability and how we work together – not computer science or technology. Software development has more in common with psychology and sociology than with engineering or maths. We should be studying software development as a social science.

Recently I got to wondering: where are the studies that prove that, say, TDD works, or that pair programming works? Where are the studies that conclusively prove Scrum increases project success or customer satisfaction? Ok, there are some studies – especially around TDD and some around scrum (hyper-performing teams, anyone?) – but a lazy google turns up very little. I would assume that if there were credible studies into these things they'd be widely known, because they'd provide a great argument for introducing these practices. Of course, it's possible that I'm an ignorant arse and these studies do exist… if so, I'm happy to be educated :)

But before I get too distracted, Steve's post got me thinking: if the variation between individuals really can be 10x, no methodology is going to suddenly introduce an across-the-board 20x difference on top of that. Whatever gains a methodology offers will be far smaller than the spread between individuals, which means individual variation will always significantly dwarf the difference due to methodology.

Perhaps this is why there are so few studies that conclusively show productivity improvements: controlling for individual variation is hard, and by the time you have, it makes a mockery of any methodological improvement. If "hire better developers" will be 5x more effective than your shiny new methodology, why bother developing and proving it? Ok, consultants will bother – they have books to sell, conferences to speak at and gullible customers to charge for explaining their methodology – but why would anyone non-crooked?
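
A back-of-envelope simulation makes the point (every number in it is made up purely for illustration): give individual productivity a 10x spread, grant one randomly staffed team a hypothetical 20% methodology boost, and see how often the boosted team actually comes out ahead.

```python
import random

random.seed(1)


def sample_developer() -> float:
    """Individual productivity spread over roughly a 10x range (1 to 10)."""
    return random.uniform(1.0, 10.0)


def team_output(size: int, methodology_boost: float) -> float:
    """Average output of a randomly staffed team using a given methodology."""
    return sum(sample_developer() * methodology_boost for _ in range(size)) / size


# Compare randomly staffed 6-person teams, with and without a hypothetical
# 20% methodology boost, over a handful of trials.
trials = 20
wins = sum(team_output(6, 1.2) > team_output(6, 1.0) for _ in range(trials))
print(f"Boosted team out-produced the control in {wins} of {trials} trials")
# Who happened to be staffed onto each team dominates the 20% effect,
# so the boosted team loses a fair share of the comparisons.
```

With a spread that wide, you need a lot of teams (or very careful controls) before a modest methodology effect shows up reliably.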

Methodologies and practices in software development are like fashion. The cool kid down the hall is doing XP. He gets his friends hooked. Before you know it, all the kids are doing XP. Eventually, everyone is doing XP, even the old fogies who say they were doing XP before you were born. Then the kids are talking about Scrum or Software Craftsmanship. And before you know it, the fashion has changed. But really, nothing fundamentally changed – just window dressing. Bright developers will always figure out the best, fastest way to build software. They’ll use whatever fads make sense and ignore those that don’t (DDD, I’m looking at you).

The real challenge, then, is the people. If simply having the right people on the team is a better predictor of productivity than the choice of methodology, then surely recruitment and retention should be our focus. Rather than worrying about Scrum or XP, or trying to enforce code reviews or pair programming, perhaps we should ensure we've got the best people on the team, that we can keep them, and that any new hires are of the same high calibre.

And yet… recruitment is a horrible process. Anyone who's ever been involved in interviewing candidates will have horror stories about the morons they've had to interview or the piles of inappropriate CVs they've had to wade through. Candidates don't get an easier time either: dealing with recruiters who don't understand technology, and trying to decide whether you really want to spend 8 hours a day with a team you know little about. It almost universally becomes a soul-destroying exercise.

But how many companies bring candidates in for half a day's pairing? How else are candidate and employer supposed to figure out whether they want to work together? Once we've solved the gnarly problem of getting great developers and great companies together, we'll probably discover the sad truth of the industry: there aren't enough great developers to go round.

So rather than worrying about this technology or that, about Scrum or XP, perhaps we should study why some developers are 10x more productive than others. Are great developers born or made? If they're made, why aren't we making more of them? University is obviously poor preparation for commercial software development, so should there be more vocational education – a system for turning enthusiastic hackers into great developers? You could even call it apprenticeship.

That way there'd be enough great developers to go round, and maybe we could finally start having a grown-up conversation about methodologies instead of slavishly following fashion.

