How many builds?

I’m always amazed at the seemingly high pain threshold .net developers have when it comes to tooling. I’ve written before about the poor state of tooling in .net, but just recently I hit another example that infuriates me: I have too many builds, and they don’t agree on whether my code compiles.

One of the first things that struck me when starting to develop on .net was that compiling code was still a thing. An actual step that had to be thought about. Incremental compilers in Eclipse and the like have been around for ages – to the point where, generally, Java developers don’t actually have to instruct their IDE to compile code for them. But in Visual Studio? Oh, it’s definitely necessary. And time-consuming. Oh my god is it slow. Ok, maybe not as slow as Scala. But still unbelievably slow when you’re used to working at the speed of thought.

Another consequence of the closed, Microsoft way of doing things is that tools can’t share the compiler’s work. So ReSharper has, effectively, implemented its own compiler. It incrementally parses source code and finds compiler errors. Sometimes it even agrees with the Visual Studio build. But all too often it doesn’t: from the spurious not-actually-an-error that I have to continually instruct ReSharper to ignore; to the warnings-as-errors build failures that ReSharper doesn’t warn me about; to the random why-does-ReSharper-not-know-about-that-NuGet-package errors.

This can be infuriating when refactoring. E.g. if an automated refactor leaves a variable unused, I will now have a compiler warning; since all my projects run with warnings-as-errors switched on, this will fail the build. But ReSharper doesn’t know that. So I apply the refactoring, code & tests are green: commit. Push. Boom! CI is red. But it was an automated refactor for chrissakes, how’ve I broken the build?!
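
Concretely, the kind of leftover that trips this up looks something like the sketch below – the Order/Line types and the “inline variable” refactor are invented for illustration:

    using System.Collections.Generic;
    using System.Linq;

    public class Line { public decimal Price; public int Quantity; }
    public class Order { public List<Line> Lines = new List<Line>(); }

    public class Pricing
    {
        public decimal TotalPrice(Order order)
        {
            // An automated “inline variable” refactor removed the only use of
            // this local, but left the declaration behind.
            decimal discount = 0m; // CS0219: assigned but its value is never used

            // With <TreatWarningsAsErrors>true</TreatWarningsAsErrors> in the
            // .csproj, MSBuild escalates CS0219 into a build failure; ReSharper
            // merely greys the variable out as dead code, so the editor looks green.
            return order.Lines.Sum(line => line.Price * line.Quantity);
        }
    }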

I also use NCrunch, an automated test runner for Visual Studio (like Infinitest in the Java world). NCrunch is awesome, by the way; better even than the continuous test runner in ReSharper 10. If you’ve never used a continuous test runner and think you’re doing TDD, sort your life out and set up Infinitest or NCrunch. It doesn’t just automate pressing the shortcut key to run all your tests. Well, actually, that is exactly what it does – but the impact it has on your workflow is so much more than that. When you can type a few characters, glance at the test output and see what happened, you get instant feedback. This difference in degree changes the way you write code and makes it so much easier to do TDD.

Anyway, I digress. NCrunch, because Microsoft, can’t use the result of the compile that Visual Studio does. So it does its own: it launches MSBuild in the background, continually re-compiling your code. This is not exactly kind on your CPU. It also introduces inconsistencies. Because NCrunch runs a slightly different MSBuild on each project from the build Visual Studio does, you sometimes get subtly different results; which are different again from ReSharper, with its own compiler that isn’t even using MSBuild. I now have three builds. Three compilers. It is honestly a miracle when they all agree that my code compiles.

An all-too-typical dev cycle becomes:

  • Write test
  • ReSharper is happy
  • NCrunch build is failing, force reload NCrunch project
  • NCrunch builds, test fails
  • Make test pass
  • Try to run app
  • Visual Studio build fails
  • Fix NuGet problems
  • NCrunch build is now failing
  • Force NCrunch to reload at least one project again
  • Force Visual Studio to rebuild the project
  • Then the solution
  • Run app to sanity check change
  • ReSharper now shows error
  • Re-ignore perennial ReSharper non-error
  • All three compilers agree, quick: commit!

Normally then the build fails in CI because I still screwed up the NuGet packages.

Then recently, as if this wasn’t already one of the outer circles of hell, the CI build started failing for a bizarre reason. We have a command line script which applies the same build steps that CI runs, so I thought I’d run that to replicate the problem. Unfortunately the command line build failed on my machine for a spectacularly spurious reason that was different again from the failure in CI. Great. I now have five builds which don’t all agree on whether my code compiles.

Do you really hate computers? Do you wish you had more reasons to defenestrate every last one of them? Have you considered a career in software development?

Git stash driven development

I’ve found myself using a pattern quite often recently, which I’ve been calling “git stash driven development” – that is, relying heavily on the magic of git stash as part of my development workflow.

Normally I follow what I think of as a fairly typical TDD workflow:

  • Write next test, watch it fail
  • Write code to make it pass
  • Commit
  • Refactor
  • Commit
  • Push

This cycle can repeat very frequently – as often as every couple of minutes. Sometimes this cycle gets slowed down when the next test to write isn’t obvious or the refactoring needs more thought. But generally this is the process I try and follow.

Quite often, having written the next test that takes me forwards on my feature, I hit a problem: I can’t actually make the test pass (easily). First I need to refactor to make the problem easy. In that situation I can mark the test as ignored, commit and come back to it later. I refactor as required, commit, push; then finally unignore my test and get back to where I was before. This is a fairly neat process.
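
The parked test itself is nothing special – a sketch, assuming NUnit, with an invented test name and reason:

    using NUnit.Framework;

    [TestFixture]
    public class BasketTests
    {
        // Committed ignored so the suite stays green while the refactoring
        // it depends on lands in the next few commits.
        [Test, Ignore("Needs the fixture refactor – unignore when done")]
        public void Applies_bulk_discount_to_large_orders()
        {
            // test body written, but not yet runnable against the current code
        }
    }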

However, there are a couple of times when this process doesn’t work. What if I’m part way through writing my test and realise I can’t finish without refactoring the test infrastructure? I can’t ignore my test – it probably isn’t even compiling – and I certainly don’t want to commit it in its current state. I could just bin my test and re-write it; if I’m following the 15 minute rule I’m not going to lose much work. But, with the magic of git stash, I can stash my changes and come back once I’ve refactored the test code.

The more annoying case is when I’m part way through a refactor step. This happens more commonly when I’m going through a genuine design change – it isn’t really refactoring, as it often happens outside the normal TDD loop. I’m trying to evolve the design to somewhere different; sometimes this is driven by tests, sometimes it’s a pure refactor that changes no features. But often there are non-trivial changes happening across numerous source files. At this point it is very easy to get part way through a refactor and realise that something else needed to have happened first. I could bin my change – I only stand to lose 15 minutes’ work – but why throw it away when I have git stash?

So I git stash my changes and go and make the change I needed to have happened first. Then, all too commonly, I get part way through this second change and realise something else needs to happen first. Well, git stash again! This stack of git stashes can get quite deep if you’re not careful. But once I’ve bottomed the stack out – once I’ve managed to commit a refactor that frees up the step above – I can git stash pop, complete the next refactor, commit, git stash pop; and so on up the stack until I’m done.
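
Sketched at the command line (the commit messages are invented), a two-deep stack unwinds like this:

    # Part way through refactor A, realise B has to happen first:
    git stash                    # parks A as stash@{0}

    # Part way through B, realise C has to happen first:
    git stash                    # parks B; A shifts to stash@{1}

    # C turns out to be small enough to just do:
    git commit -am "Refactor C: the step everything else depended on"

    # Unwind the stack, committing as each step becomes easy:
    git stash pop                # resumes B
    git commit -am "Refactor B, now straightforward on top of C"
    git stash pop                # resumes A
    git commit -am "Refactor A: the change I originally set out to make"

    git stash list               # sanity check: the stack should now be empty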

Now, arguably, I’m discovering the refactor in reverse order, but this often seems to be how I find it. I could have spent more time analysing the change in detail, of course – spent time planning out my change on paper before embarking on it in the correct order. However, that is always time-consuming and there’s still the risk that I miss something and come at a change “backwards”. I find that using git stash in this way lets me discover the refactor I need to make one step at a time. Each commit is kept small; I try and stick to the 15 minute rule so that no single commit loses more than 15 minutes’ work. Ultimately the design change is completed as a sequence of small commits, each of which builds logically on the one before. They’ve been discovered by exploration; the commits were just discovered in reverse order.

The danger is always that I find a refactor step I can’t complete the way I’d imagined – now I can’t unwind the stack, and potentially none of the previous git stashes are committable. Whenever this happens, I normally find that going one or two levels up the stack presents a different approach, from which I can continue as before.

Choosing a Programming Language: Recruitment

How do you choose the right language to use for your next project? Use the right tool for the job? Sure, but what does that mean? And how do I know what the right tool is? How do I get enough experience in a new language to know whether or not it is the right tool for the job?

Your choice of language will have several impacts on your project:

  • Finding & hiring new developers to join your team
  • Whether there is a thriving community: both for Q&A support and for the maturity of libraries & frameworks that help you get more done
  • Training your existing team; discovering and learning new tools; as well as the possible motivational impact of using new technologies
  • Finally, your choice of “the right tool for the job” will make the development effort easier or harder, both in the short term and for long-term maintenance

This is the first article in a series on choosing the right language – we will start by addressing recruitment.

For many teams, recruiting new staff is the hardest problem they face. While choice of programming language has many obvious technical impacts on the development process, it also has a huge impact on your recruitment efforts. If your choice of language has the ability to make your hardest problem easier then it has to be considered.

The truth is that most developers will naturally filter themselves by language, first and foremost. Most Java developers, when looking for jobs, will probably start by asking their Java developer friends and searching job sites for Java developer jobs. Recruiters naturally reinforce this – they need a simple, keyword-friendly way to match candidates to jobs, and filtering by language is an easy way to start.

Of course, this is completely unnecessary – most developers, if the right opportunity arose, would happily switch language. Most companies, given the right candidate, would be more than happy to hire someone without the requisite language background.

Yet almost every time I’ve been involved in recruitment, we’ve filtered candidates by language. No matter how much we say “we just want the best candidates”, we don’t want to take a risk on someone who needs a couple of months to get up to speed with our language.

My last job switch involved a change of language – the first time in 10 years I’d switched language. Sure, it took me a month to become even competent in C#, and arguably three or four months to become productive. But of course it is possible, and yet it doesn’t seem to be that common.

This is purely anecdotal: I’d love to find a way to try and measure how often developers switch languages – just how common is it for companies to take on developers with zero commercial experience in their primary language? How could we measure this? Could we gather statistics from lots of companies on their recruitment process? Email me or post a comment below with your ideas.

Size of Talent Pool

Ultimately your choice of language has one main impact on recruitment: which language would make it easiest to find good developers? I think we can break this down into three factors:

  1. Overall number of developers using that language
  2. Competition for these developers from other companies
  3. The density of “good” developers within that pool

Total Number of Developers

Some languages have many more developers using them than others. For example, there are many times more Java developers than there are Smalltalk developers. At a simple level, by looking for Java developers I will be choosing from a much larger pool than if I search for Smalltalk developers. But how can we quantify this?

We can estimate the number of developers skilled in or actively using a particular language by approximating the size of the community around that language. Languages with large communities indicate more developers using and discussing them. But how can we measure the size of the community?

GitHub and StackOverflow are two large community sites – representing open source code created by the community and questions asked by the community. Interestingly they seem to have a strong correlation: languages with lots of questions asked on StackOverflow have lots of code published on GitHub. I’m certainly not the first to spot this; I found a good post from Drew & John from a couple of years back looking at the correlation.

GitHub & StackOverflow probably give a good indication of the size of the community around each language. This means that if we look for C# developers, there are many times more developers to choose from than if we choose Ada. However, sheer numbers aren’t the full story…

Competition for Developers

We’re not hiring in a vacuum – there is competition for developers from all the other companies hiring in our area (and those remote companies that don’t believe geography should be a barrier). This competition has two impacts: first, if there is more competition for developers using a certain language, it is harder to attract them to our company. Assuming market forces work, it also suggests that average salaries for developers with experience in an in-demand language will naturally be higher, as companies use salary to differentiate themselves from the competition. So even though there are more C# developers than Ada developers, the ease with which you can hire from either group partly depends on the amount of competition.

I looked at the total number of questions tagged against each language on StackOverflow and the number of UK job ads requiring that language. Interestingly there is a very strong correlation (0.9) – this suggests, perhaps not surprisingly, that most developers primarily use the language they use in their day job: there are large communities around languages that lots of companies hire for, and only small communities around languages that few companies hire for. Of course, it’s not clear which comes first: do companies adopt a language because it has a large community, or does a large community emerge because lots of developers are using it in their jobs?
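
For the curious, the correlation here is just Pearson’s r over the per-language (questions, job ads) pairs. A quick sketch in C# – my own helper, not anyone’s published analysis; on counts spanning several orders of magnitude you’d probably correlate the logs:

    using System;
    using System.Linq;

    static class Correlation
    {
        // Pearson’s r between two equal-length series, e.g. per-language
        // StackOverflow question counts vs. UK job-ad counts.
        public static double Pearson(double[] xs, double[] ys)
        {
            double mx = xs.Average(), my = ys.Average();
            double cov = xs.Zip(ys, (x, y) => (x - mx) * (y - my)).Sum();
            double sx = Math.Sqrt(xs.Sum(x => (x - mx) * (x - mx)));
            double sy = Math.Sqrt(ys.Sum(y => (y - my) * (y - my)));
            return cov / (sx * sy);
        }
    }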

There are some interesting outliers: there seems to be greater job demand than developer supply for some languages, such as Perl and Visual Basic. Does this suggest these are languages that have fallen out of favour, which companies still have an investment in and now find hard to recruit for? Similarly, Scala and Node.js seem to have more demand than developers can supply. On the other side of the line, languages such as PHP and Ruby have much more active communities than there is job demand – suggesting they are more hobbyist languages, ones amateurs and professionals alike use in their spare time. Then there is Haskell, another language with a lot of questions and not many jobs – perhaps its prevalence in academia hasn’t (yet) translated into commercial success?

In general, the more popular languages have more competition for developers. Among the outliers, it will be harder to find developers skilled in Perl & Visual Basic, whereas PHP, Ruby & Haskell developers should be easier to find. Outliers aside, whichever language you hire for, the overall size of the pool of developers (i.e. those within commuting distance or willing to relocate) is the only deciding factor. How many companies continue to use the mainstream languages (Java and C#) because it’s “easier” to hire developers? On the basis of this evidence, I don’t think it’s true. In fact, quite the opposite: to get the least competition for developers (lower salaries and less time spent interviewing) you should hire developers with Ruby or Haskell experience.

Density of Good Developers

One of the more contentious ideas is that filtering developers by language allows you to quickly filter out less capable developers. For example, by only looking at Python developers, maybe on average they are “better” than C# developers. My doubt is that, if this were found to be true, developers would quickly learn that language to increase their chances of getting a job – rapidly eliminating any benefit of using language as a filter. It also starts to sound a bit “my language is better than your language”.

But it would be interesting if it were true – if we could use language as a quick filter to get higher quality candidates – and it would be quite possible to gather data on this. For example, I use an online interview that could easily provide an A/B test of developers self-selecting based on language. Given the same, standardised recruitment test, do developers using language A do better or worse than those using language B? Of course, the correlation between a recruitment test and on-the-job performance is very much debatable – but it could give us a good indication of whether there is a correlation between language and (some measure of) technical ability. Would this work? Is there a better way to measure it? Email me or post a comment below.

Popular languages have more developers to hire from, but more companies trying to hire them. These two factors seem to cancel each other out – the ratio of developers to jobs is pretty much constant. As far as recruitment is concerned, then, maybe your next project should break away from the norm and use a non-mainstream language? Is it time to hire you some Haskell?

The future’s software: the future’s shit

Software is taking over the world. Arguably software has already taken over the world. The trouble is: software is all shit – and it’s all your fault.

I had one of those seconds-from-flinging-something-heavy-through-my-TV days yesterday: I really have had it with crappy software. First, I decided to relax and catch up with last week’s Have I Got News For You on BBC iPlayer. Unfortunately, for some unfathomable reason, the Xbox iPlayer app has become shit in recent months. Last night, it played 0.5 seconds of the show and hung. Network traffic was still flying by, but none of it was appearing on my TV. Fine, I switched to using iPlayer on my laptop – at least that normally works.

Afterwards I decided to watch another episode of the (excellent, by the way) American remake of House of Cards on Netflix. First, my router had got its knickers in a twist and switched the VPN, so Netflix thought I was in the US and all of my recently watched shows had disappeared. Boot up the laptop (again), log in to the admin page on the router, fiddle with settings, reboot, reboot the Xbox, log in again: there we go, UK Netflix is back, I can finally watch House of Cards. Except now the TV has crashed and refuses to decode the HDCP signal from the Xbox. Reboot the TV: no. Reboot the Xbox, again. Yes! We have signal!

I sat back and started watching. I had to get up, three times, because the volume was quieter than a quiet thing on a quiet day in quiet land. Eventually it was so damned quiet I managed to miss some dialogue, so I got to use the, occasionally awesome, Xbox Kinect voice control feature that saves me scrabbling to find the controller: “Xbox, rewind”. Excellent, it started rewinding. “Xbox, stop”. “XBOX, STOP!” “Oh for fuck’s fucking sake, Xbox, fucking stop, you fucking retarded demon, STOP!” Find the controller. Wait for it to boot up. Now the Netflix app had resumed its entirely random bug where I lose all the on-screen navigation controls, so I had to guess which combination of up, down, A, A, B, up, down stops the bloody rewind and not the one that invokes the super Netflix boss level. Eventually the button mashing exited Netflix – not what I intended, but at least it stopped rewinding. I launched Netflix again. This time the on-screen controls worked. Woohoo! Now the volume had switched back to slightly louder than a jet engine piped directly against your eardrum, so I jumped up and turned it down before it woke my little boy.

By now, I’m in no mood for political intrigue – I would quite gladly kill every developer involved in any of the pieces of software that ruined my mood and my evening: pass me the knife, I’m happy to relieve each of them of their useless lives and hopelessly misapplied careers.

The trouble is: it’s not their fault. All software is shit. As software takes over more and more of my life I realise: it’s not getting any better. My phone crashes, my TV crashes, my hi-fi crashes. My sat nav gets lost and my car needs a BIOS upgrade. This is all before I get to work and get to use any actual computers. Actual computers that I actually hate.

Every step of every day, software is pissing me off. And you know what? It’s only going to get worse. As software invades more and more of our lives, our complete inability to create decent, stable software is becoming a plague. The future will be dominated by people swearing at every little computer they come into contact with. In our bright software future, where everything is digital and we’re constantly plugged into the metaverse, I will spend my entire life swearing at it for being so unutterably, unnecessarily shit.

You know what else? As software developers: it’s all our fault.

Ability or methodology?

There’s been a lot of chatter recently on the intertubes about whether some developers are 10x more productive than others (e.g. here, here and here). I’m not going to argue whether this or that study is valid – I Am Not A Scientist and I don’t play one on TV, so I’m not getting into that argument.

However, I do think these kinds of studies are exactly what we need more of. The biggest challenges in software development are people – individual ability and how we work together – not computer science or technology. Software development has more in common with psychology and sociology than with engineering or maths. We should be studying software development as a social science.

Recently I got to wondering: where are the studies that prove that, say, TDD works, or that pair programming works? Where are the studies that conclusively prove Scrum increases project success or customer satisfaction? Ok, there are some studies – especially around TDD, and some around Scrum (hyper-performing teams, anyone?) – but a lazy google turns up very little. I would assume that if there were credible studies into these things they’d be widely known, because they would provide a great argument for introducing these practices. Of course, it’s possible that I’m an ignorant arse and these studies do exist… if so, I’m happy to be educated 🙂

But before I get too distracted, Steve’s post got me thinking: if the variation between individuals really can be 10x, no methodology is suddenly going to introduce an across-the-board 20x difference. This means that individual variation will always significantly dwarf any difference due to methodology.

Perhaps this is why there are so few studies that conclusively show productivity improvements? Controlling for individual variation is hard; by the time you have, it makes a mockery of any methodological improvement. If “hire better developers” will be 5x more effective than your shiny new methodology, why bother developing and proving it? Ok, except for the consultants who have books to sell, conferences to speak at, and gullible customers to pay them to explain their methodology – but I’m interested in the non-crooked ones: why would they bother?

Methodologies and practices in software development are like fashion. The cool kid down the hall is doing XP. He gets his friends hooked. Before you know it, all the kids are doing XP. Eventually, everyone is doing XP, even the old fogies who say they were doing XP before you were born. Then the kids are talking about Scrum or Software Craftsmanship. And before you know it, the fashion has changed. But really, nothing fundamentally changed – just window dressing. Bright developers will always figure out the best, fastest way to build software. They’ll use whatever fads make sense and ignore those that don’t (DDD, I’m looking at you).

The real challenge, then, is the people. If simply having the right people on the team is a better predictor of productivity than choice of methodology, then surely recruitment and retention should be our focus, rather than worrying about Scrum or XP, or trying to enforce code reviews or pair programming. Perhaps instead we should ensure we’ve got the best people on the team, that we can keep them, and that any new hires are of the same high calibre.

And yet… recruitment is a horrible process. Anyone that’s ever been involved in interviewing candidates will have horror stories about the morons they’ve had to interview or piles of inappropriate CVs to wade through. Candidates don’t get an easier time either: dealing with recruiters who don’t understand technology and trying to decide if you really want to spend 8 hours a day in a team you know little about. It almost universally becomes a soul destroying exercise.

But how many companies bring candidates in for half a day’s pairing? How else are candidate and employer supposed to figure out if they want to work together? Once you’ve solved the gnarly problem of getting great developers and great companies together, we’ll probably discover the sad truth of the industry: there aren’t enough great developers to go round.

So rather than worrying about this technology or that, about Scrum or XP, perhaps we should study why some developers are 10x more productive than others. Are great developers born or made? If they’re made, why aren’t we making more of them? University is obviously poor preparation for commercial software development, so should there be more vocational education – a system for turning enthusiastic hackers into great developers? You could even call it an apprenticeship.

That way there’d be enough great developers to go round, and maybe we could finally start having a grown-up conversation about methodologies instead of slavishly following fashion.