
Posts Tagged ‘development’

How do you choose the right language for your next project? Use the right tool for the job? Sure, but what does that mean, and how do you know what the right tool is? How do you get enough experience in a new language to know whether or not it’s the right tool for the job?

Your choice of language will have several impacts on your project:

  • Finding & hiring new developers to join your team
  • Whether there is a thriving community: both for providing Q&A support and the maturity of libraries & frameworks to help you get more done
  • Training your existing team; discovering and learning new tools; as well as the possible motivation impact of using new technologies
  • Finally, your choice of “the right tool for the job” will make the development effort easier or harder, both in the short term and for long-term maintenance

This is the first article in a series on choosing the right language – we will start by addressing recruitment.

Recruitment

For many teams, recruiting new staff is the hardest problem they face. While the choice of programming language has many obvious technical impacts on the development process, it also has a huge impact on your recruitment efforts. If your choice of language can make your hardest problem easier, then it has to be considered.

Filtering

The truth is that most developers will naturally filter themselves by language, first and foremost. Most Java developers, when looking for jobs, will probably start by asking their Java developer friends and searching job sites for Java developer roles. Recruiters naturally further this – they need a simple, keyword-friendly way to match candidates to jobs, and filtering by language is an easy way to start.

Of course, this is completely unnecessary – most developers, if the right opportunity arose, would happily switch language. Most companies, given the right candidate, would be more than happy to hire someone without the requisite language background.

Yet almost every time I’ve been involved in recruitment we’ve filtered candidates by language. No matter how much we say “we just want the best candidates”, the truth is we don’t want to risk hiring someone who needs a couple of months to get up to speed with our language.

My last job switch involved a change of language, the first time in 10 years I’ve changed language. Sure, it took me a month to become even competent in C#; and arguably three or four months to become productive. But of course it is possible, and yet it doesn’t seem to be that common.

This is purely anecdotal: I’d love to find a way to try and measure how often developers switch languages – just how common is it for companies to take on developers with zero commercial experience in their primary language? How could we measure this? Could we gather statistics from lots of companies on their recruitment process? Email me or post a comment below with your ideas.

Size of Talent Pool

Ultimately your choice of language has one main impact on recruitment: which language would make it easiest to find good developers? I think we can break this down into three factors:

  1. Overall number of developers using that language
  2. Competition for these developers from other companies
  3. The density of “good” developers within that pool

Total Number of Developers

Some languages have many more developers using them than others. For example, there are many times more Java developers than there are Smalltalk developers. At a simple level, by looking for Java developers I will be drawing from a much larger pool than if I search for Smalltalk developers. But how can we quantify this?

We can estimate the number of developers skilled in or actively using a particular language by approximating the size of the community around that language: a large, active community suggests many developers are using and discussing the language. But how can we measure the size of the community?

GitHub and StackOverflow are two large community sites – representing open source code created by the community and questions asked by the community. Interestingly, they seem to be strongly correlated: languages with lots of questions asked on StackOverflow have lots of code published on GitHub. I’m certainly not the first to spot this – there’s a good post by Drew & John from a couple of years back looking at the correlation: http://www.dataists.com/2010/12/ranking-the-popularity-of-programming-langauges/
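If you wanted to pull rough numbers yourself, a minimal sketch in Python might look like the one below. The Stack Exchange and GitHub endpoints and response fields are my understanding of their public APIs (no authentication, no rate-limit handling), so treat this as illustrative rather than definitive:

# Rough sketch: estimate community size per language from Stack Overflow
# question counts and GitHub repository counts. Endpoints and response
# fields are my reading of the public APIs; there is no auth or rate-limit
# handling, so this is illustrative only.
from urllib.parse import quote

import requests

LANGUAGES = ["java", "c#", "python", "ruby", "haskell", "ada"]

def stackoverflow_question_count(tag):
    """Total number of Stack Overflow questions carrying this tag."""
    resp = requests.get(
        f"https://api.stackexchange.com/2.3/tags/{quote(tag, safe='')}/info",
        params={"site": "stackoverflow"},
    )
    items = resp.json().get("items", [])
    return items[0]["count"] if items else 0

def github_repo_count(language):
    """Number of public GitHub repositories whose primary language matches."""
    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": f"language:{language}", "per_page": 1},
    )
    return resp.json().get("total_count", 0)

for lang in LANGUAGES:
    print(f"{lang:>8}  SO questions: {stackoverflow_question_count(lang):>9,}"
          f"  GitHub repos: {github_repo_count(lang):>9,}")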

GitHub & StackOverflow probably give a good indication of the size of the community around each language. This means if we look for C# developers, there are many times more developers to choose from than if we choose Ada. However, sheer numbers isn’t the full story…

Competition for Developers

We’re not hiring in a vacuum – there is competition for developers from all the other companies hiring in our area (and those remote companies that don’t believe geography should be a barrier). This competition has two impacts: first, if there is more competition for developers using a certain language, it is harder to attract them to our company. Second, assuming market forces work, average salaries for developers with experience in an in-demand language will naturally be higher, as companies use salary to differentiate themselves from the competition. So although there are more C# developers than Ada developers, the ease with which you can hire from either group partly depends on the amount of competition for them.

I looked at the total number of questions tagged with each language on StackOverflow.com and the number of UK job ads requiring that language on JobServe.com. Interestingly, there is a very strong correlation (0.9) – this suggests, perhaps not surprisingly, that most developers primarily use the language they use in their day job: there are large communities around languages that lots of companies hire for, and only small communities around languages that few companies hire for. Of course, it’s not clear which comes first: do companies adopt a language because it has a large community, or does a large community emerge because lots of developers are using it in their jobs?
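Reproducing that comparison is straightforward once you have the two sets of counts. Here is a minimal sketch; the figures in it are invented placeholders, not the actual StackOverflow or JobServe numbers, so substitute real data before reading anything into the output:

# Compare community size (questions) against job demand (ads) on a log scale.
# All counts here are made-up placeholders standing in for the real data.
import numpy as np

data = {
    # language: (StackOverflow questions, UK job ads)
    "java":    (800_000, 3_000),
    "c#":      (600_000, 2_500),
    "php":     (700_000,   900),
    "ruby":    (150_000,   200),
    "perl":    ( 60_000,   400),
    "haskell": ( 30_000,    30),
}

langs = list(data)
questions = np.log10([data[l][0] for l in langs])
jobs = np.log10([data[l][1] for l in langs])

# How strongly do community size and job demand move together?
r = np.corrcoef(questions, jobs)[0, 1]
print(f"log-log correlation: {r:.2f}")

# Residuals from a fitted line flag the outliers: above the line means more
# job demand than the community suggests (Perl-like), below means a bigger
# community than the job market demands (Ruby- or Haskell-like).
slope, intercept = np.polyfit(questions, jobs, 1)
for lang, q, j in zip(langs, questions, jobs):
    print(f"{lang:>8}  residual: {j - (slope * q + intercept):+.2f}")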

There are some interesting outliers: there seems to be greater job demand than developer supply for some languages such as Perl and VisualBasic. Does this suggest these are languages that have fallen out of favour, that companies still have an investment in and now find hard to recruit for? Similarly, Scala and Node.js seem to have more demand than developers can supply. On the other side of the line, languages such as PHP and Ruby have much more active communities than there is job demand – suggesting they are more hobbyist languages, languages amateurs and professionals alike use in their spare time. Then there is Haskell, another language with a lot of questions and not many jobs – perhaps this is purely its prevalence in academia, which hasn’t (yet) translated into commercial success?

In general the more popular languages have more competition for developers. It will be harder to find developers skilled in Perl & VisualBasic, whereas PHP, Ruby & Haskell developers should be easier to find. Outliers aside, whichever language you hire for, the overall size of the pool of developers (i.e. those within commuting distance or willing to relocate) is the real deciding factor. How many companies stick with the mainstream languages (Java and C#) because it’s “easier” to hire developers? On the basis of this evidence, I don’t think it’s true. In fact, quite the opposite: to get the least competition for developers (lower salaries and less time spent interviewing) you should hire developers with Ruby or Haskell experience.

Density of Good Developers

One of the more contentious ideas is that filtering developers by language allows you to quickly filter out less capable developers. For example, by only looking at Python developers, maybe on average they are “better” than C# developers. My doubt is that, if this were found to be true, developers would quickly learn that language to increase their chances of getting a job – quickly eliminating any benefit of using language as a filter. It also starts to sound a bit “my language is better than your language”.

But it would be interesting if it were true – if we could use language as a quick filter to get higher-quality candidates – and it would be quite possible to gather data on this. For example, I use an online interview that could easily provide an A/B test of developers self-selecting by language. Given the same, standardized recruitment test, do developers using language A do better or worse than those using language B? Of course, the correlation between a recruitment test and on-the-job performance is very much debatable – but it could give us a good indication of whether there is a correlation between language and (some measure of) technical ability. Would this work? Is there a better way to measure it? Email me or post a comment below.
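To make the A/B idea concrete, here is a minimal sketch of the comparison. The scores are invented placeholders, and using a Mann-Whitney test is simply my assumption about a reasonable way to compare two small groups:

# Sketch of the A/B comparison: do candidates who self-select as "language A"
# developers score differently on the same standardized test from those who
# self-select as "language B" developers? The scores below are invented.
from statistics import mean

from scipy.stats import mannwhitneyu

scores_a = [62, 71, 55, 80, 67, 74, 58, 69]   # e.g. candidates applying as Python devs
scores_b = [60, 52, 75, 49, 66, 71, 57, 63]   # e.g. candidates applying as C# devs

stat, p_value = mannwhitneyu(scores_a, scores_b, alternative="two-sided")
print(f"mean score A: {mean(scores_a):.1f}   mean score B: {mean(scores_b):.1f}")
print(f"Mann-Whitney U p-value: {p_value:.3f}")
# A small p-value would hint that the two pools really do differ on the test;
# though, as noted above, a recruitment test is only a rough proxy for
# on-the-job ability.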

Conclusion

Popular languages have more developers to hire from, but more companies trying to hire them. These two factors seem to cancel each other out – the ratio of developers to jobs is pretty much constant. As far as recruitment is concerned then, maybe for your next project you should break away from the norm and use a non-mainstream language? Is it time to hire you some Haskell?

Read Full Post »

Software is taking over the world. Arguably software has already taken over the world. The trouble is: software is all shit – and it’s all your fault.

I had one of those seconds-from-flinging-something-heavy-through-my-tv days yesterday: I really have had it with crappy software. First, I decided to relax and catch up with last week’s Have I Got News For You on BBC iPlayer. Unfortunately, for some unfathomable reason, the XBox iPlayer app has become shit in recent months. Last night, it played 0.5 seconds of the show and hung. Network traffic was still flying by, but none of it was appearing on my TV. Fine, I switched to using iPlayer on my laptop – at least that normally works.

Afterwards I decided to watch another of the (excellent, by the way) American remake of House of Cards on Netflix. First, my router had got its knickers in a twist and switched the VPN so Netflix thought I was in the US, so all of my recently watched had disappeared. Boot up laptop (again), login to admin page on the router, fiddle with settings, reboot, reboot XBox, login again: there we go, UK Netflix is back, I can finally watch House of Cards. Except now the TV has crashed and refuses to decode the HDCP signal from the XBox. Reboot TV: no. Reboot xbox, again. Yes! We have signal!

I sat back and started watching. I have to get up, three times, because the volume is quieter than a quiet thing on a quiet day in quiet land. Eventually it’s so damned quiet I manage to miss some dialog, so I get to use the, occasionally awesome, XBox kinect voice control feature that saves me scrabbling to find the controller: “xbox rewind”. Excellent, it starts rewinding. “xbox stop”. “XBOX, STOP!” “Oh for fuck’s fucking sake xbox, fucking stop you fucking retarded demon, STOP!”. Find the controller. Wait for it to boot up. Now the Netflix app has resumed its entirely random bug where I lose all the on-screen navigation controls, so I have to guess which combination of up, down, A, A, B, up, down stops the bloody rewind and not the one that invokes the super Netflix boss level. Eventually the button mashing exits Netflix – not what I intended, but at least it’s stopped rewinding. I launch Netflix again. This time the on-screen controls work. Woohoo! Now the volume has switched back to slightly louder than a jet engine piped directly against your ear drum, so I jump up and turn the volume down before it wakes my little boy up.

By now, I’m in no mood for political intrigue – I’d quite gladly kill every developer involved in any of the pieces of software that have ruined my mood and my evening: pass me the knife, I’m happy to relieve each of them of their useless lives and hopelessly misapplied careers.

The trouble is: it’s not their fault. All software is shit. As software takes over more and more of my life I realise: it’s not getting any better. My phone crashes, my TV crashes, my hi-fi crashes. My sat nav gets lost and my car needs a BIOS upgrade. This is all before I get to work and get to use any actual computers. Actual computers that I actually hate.

Every step of every day, software is pissing me off. And you know what? It’s only going to get worse. As software invades more and more of our lives our complete inability to create decent, stable software is becoming a plague. The future will be dominated by people swearing at every little computer they come into contact with. In our bright software future where everything is digital and we’re constantly plugged into the metaverse I will spend my entire life swearing at it for being so unutterably, unnecessarily shit.

You know what else? As software developers: it’s all our fault.

Read Full Post »

There’s been a lot of chatter recently on the intertubes about whether some developers are 10x more productive than others (e.g. here, here and here). I’m not going to argue whether this or that study is valid or not; I Am Not A Scientist and I don’t play one on TV, so I’m not going to get into that argument.

However, I do think these kinds of studies are exactly what we need more of. The biggest challenges in software development are people problems – individual ability and how we work together – not computer science or technology. Software development has more in common with psychology and sociology than with engineering or maths. We should be studying software development as a social science.

Recently I got to wondering: where are the studies that prove that, say, TDD works, or that pair programming works? Where are the studies that conclusively prove Scrum increases project success or customer satisfaction? Ok, there are some studies – especially around TDD and some around Scrum (hyper-performing teams, anyone?) – but a lazy google turns up very little. I would assume that if there were credible studies into these things they’d be widely known, because they would provide a great argument for introducing these practices. Of course, it’s possible that I’m an ignorant arse and these studies do exist… if so, I’m happy to be educated :)

But before I get too distracted, Steve’s post got me thinking: if the variation between individuals really can be 10x, no methodology is going to suddenly introduce an across the board 20x difference. This means that individual variation will always significantly dwarf the difference due to methodology.
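A toy simulation makes the point. All the numbers here are invented purely for illustration: individual productivity drawn from a 1x to 10x range, and a methodology assumed to be worth a generous flat 20%:

# Toy model: individual productivity varies between 1x and 10x, while the
# methodology adds a flat 20% to whatever the team produces. Which effect
# dominates? All numbers are invented for illustration.
import random

random.seed(42)

def team_output(size=5, methodology_boost=1.0):
    # each developer's productivity falls somewhere in the 1x-10x range
    return sum(random.uniform(1, 10) for _ in range(size)) * methodology_boost

trials = 10_000
plain = [team_output() for _ in range(trials)]
boosted = [team_output(methodology_boost=1.2) for _ in range(trials)]

print(f"luckiest vs unluckiest team (same process): {max(plain) / min(plain):.1f}x")
print(f"average gain from the methodology: "
      f"{(sum(boosted) / len(boosted)) / (sum(plain) / len(plain)):.2f}x")
# Who happens to be on the team swings output by a factor of two or more;
# the process itself only ever accounts for the 1.2x.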

Perhaps this is why there are so few studies that conclusively show productivity improvements? Controlling for individual variation is hard, and by the time you have, it makes a mockery of any methodological improvement. If “hire better developers” will be 5x more effective than your shiny new methodology, why bother developing and proving it? Ok, leaving aside the consultants who have books to sell, conferences to speak at and gullible customers to find who will pay them to explain their methodology – of the non-crooked ones, why would anyone bother?

Methodologies and practices in software development are like fashion. The cool kid down the hall is doing XP. He gets his friends hooked. Before you know it, all the kids are doing XP. Eventually, everyone is doing XP, even the old fogies who say they were doing XP before you were born. Then the kids are talking about Scrum or Software Craftsmanship. And before you know it, the fashion has changed. But really, nothing fundamentally changed – just window dressing. Bright developers will always figure out the best, fastest way to build software. They’ll use whatever fads make sense and ignore those that don’t (DDD, I’m looking at you).

The real challenge, then, is the people. If simply having the right people on the team is a better predictor of productivity than choice of methodology, then surely recruitment and retention should be our focus. Rather than worrying about Scrum or XP, or trying to enforce code reviews or pair programming, perhaps we should instead ensure we’ve got the best people on the team, that we can keep them, and that any new hires are of the same high calibre.

And yet… recruitment is a horrible process. Anyone that’s ever been involved in interviewing candidates will have horror stories about the morons they’ve had to interview or piles of inappropriate CVs to wade through. Candidates don’t get an easier time either: dealing with recruiters who don’t understand technology and trying to decide if you really want to spend 8 hours a day in a team you know little about. It almost universally becomes a soul destroying exercise.

But how many companies bring candidates in for half a day’s pairing? How else are candidate and employer supposed to figure out if they want to work together? Once you’ve solved the gnarly problem of getting great developers and great companies together – we’ll probably discover the sad truth of the industry: there aren’t enough great developers to go round.

So rather than worrying about this technology or that; about Scrum or XP. Perhaps we should study why some developers are 10x more productive than others. Are great developers born or made? If they’re made, why aren’t we making more of them? University is obviously poor preparation for commercial software development, so should there be more vocational education – a system of turning enthusiastic hackers into great developers? You could even call it apprenticeship.

That way there’d be enough great developers to go round and maybe we can finally start having a grown up conversation about methodologies instead of slavishly following fashion.

Read Full Post »

Whether you like to think of it as technical debt or an unhedged call option, we’re all surrounded by bad code, bad decisions and their lasting impact on our day to day lives. But what is the long-term impact of these decisions? Are we really making prudent choices? Martin Fowler talks about the four classes of technical debt – from reckless and deliberate to inadvertent and prudent.

Deliberate reckless debt

Deliberate, reckless technical debt is just that: developers (or their managers) allowing decisions to be made that offer no upside and only downside – e.g. abandoning TDD or not doing any design. Whichever way you look at it, this is just plain unprofessional. If the developers aren’t capable of making sensible choices, then management should have stepped in to bring in people who could. Michael Norton labels this “cruft not technical debt”. If we’re getting no benefit and simply giving ourselves an excuse to write crappy code then it’s not technical debt, it’s just crap.

Inadvertent debt

Inadvertent technical debt is tricky. If we didn’t know any better, how could we have done any differently? Perhaps industry standards or best practices have moved on. I’m sure once upon a time EJBs were seen as a good idea; now they look like pure technical debt. Today’s best practice so easily becomes tomorrow’s code smell.

Or perhaps we’d never worked in this domain before – if we’d had the domain knowledge when we started, maybe the design would have turned out differently. Sometimes technical debt is inevitable and unavoidable.

Prudent deliberate debt

Then there’s prudent, deliberate debt, where we make a conscious choice to add technical debt. This is a pretty common decision:

  • We need to recognise the revenue this quarter, no matter what
  • We’ve got marketing initiatives lined up so we’ve got to hit that date
  • We’ve committed to a schedule so don’t have time for rework

Sometimes, we have to make compromises: by doing a sub-standard job now, we get the benefit of finishing faster but we pay the price later.

Unlike the other types of technical debt, this is specifically a technical compromise. We’ve made a conscious decision to leave the code in a worse state than we should do. We know this will slow us down later; we know we need to come back and fix it in “phase 2”; but to hit the date, we accept compromise.

Is it always the right decision?

1. Compromise is always faster in the short-term and slower in the long-term

Given a choice between what we need now and some unspecified “debt” to deal with later – the obvious choice is always to accept compromise.

2. Each compromise is minor, but they compound

Unlike how most people experience “debt”, technical debt compounds. Each compromise we accept increases the cost of all the existing debt as well. This means the cost of each individual compromise may be small, but together they can have a massive impact.

First of all I decide not to refactor some code. Next time round, because it’s not well factored, it’s harder to test, so I skip some unit tests. Third time round, my test coverage is poor, so I don’t have the confidence to refactor extensively. Now it’s getting really hard to test and really hard to change. Little by little, but faster and faster, my code is deteriorating. Each compromise piles on the ones that went before and amplifies them. Each decision is minor, but they add up to a big headache.
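To put an illustrative number on it, assume (purely for the sake of argument) that every compromise leaves the codebase 5% slower to change than it already was:

# Illustration of compounding: assume (a made-up figure) every compromise
# leaves the codebase 5% slower to change than it already was.
drag_per_compromise = 0.05

velocity = 1.0
for n in range(1, 21):
    velocity *= 1 - drag_per_compromise   # each compromise amplifies the rest
    if n in (1, 5, 10, 20):
        print(f"after {n:2} compromise(s): {velocity:.0%} of the original pace")

# Roughly: 95% after one, 77% after five, 60% after ten, 36% after twenty.
# Each decision is minor, but together they are close to a 3x slowdown.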

3. It’s hard to quantify the long term cost

When we agree not to rework a section of code, when we agree to leave something half-finished, when we agree to rush something through with minimal tests: it’s very difficult to estimate the long term cost. Sure, we can estimate what it would take to do it right – we know what the principal is. But how can we estimate the interest payments? Especially when they compound.

How can I estimate the time wasted figuring out the complexity next time? The time lost because a bug is harder to track down. The extra time it takes because I can’t refactor the code so easily next time. The extra time it takes because I daren’t refactor the code next time; and the extra debt this forces me to take on. How can I possibly quantify this?

At IMVU they found they:

underestimated the long-term costs [of technical debt] by at least an order of magnitude

Is it always wrong?

There are obviously cases where it makes sense to add technical debt; or at least, to do something half-assed. If you want to get a new feature in front of customers to judge whether it’s valuable, it doesn’t need to be perfect, it just needs to be out there quickly. If the feature isn’t useful for users, you remove it. You’ve saved the cost of building it “perfectly” and quickly gained the knowledge you needed. This is clearly a cost-effective, lean way to manage development.

But what if the feature is successful? Then you need a plan for cleaning it up: refactoring it, making sure it’s well tested and sufficiently documented. This is the debt. As long as you have a plan for removing it, whether the feature turns out to be useful or not, then it’s a perfectly valid approach. What isn’t an option is leaving the half-assed feature in the code base for years to come.

The difference is having a plan to remove the debt. How often do we accept compromise with a vague promise to “come back and fix it later”, or my personal favourite, “we’ll fix that in phase 2”? My epitaph should be “now working on phase 2”.

Leaving debt in the code with no plan to remove it is like the guy who pays the interest on one credit card with another. You’re letting the debt mount up, not dealing with it. Without a plan to repay the debt, you will eventually go bankrupt.

The Risk of Technical Debt

Perhaps the biggest danger of technical debt is the risk that it represents. Technical debt makes our code more brittle, less easy to change. As @bertvanbrakel said when we discussed this recently:

Technical debt is a measure of code inflexibility

The harder it is to change, the more debt-laden our code. With this inflexibility comes the biggest risk of all: that we cannot change the code fast enough. What if the competitive or regulatory environment suddenly changes? If a new competitor launches that completely changes our industry, how long does it take us to catch up? If we have an inflexible code base, what will be left of our business in 2, 3 or 4 years when we finally catch up?

While this is clearly a worst-case scenario, this lack of flexibility – this lack of innovation – hurts companies little by little. Once-revolutionary companies become staid, unable to react, releasing only derivative products. As companies find themselves unable to innovate and keep up with the ever-changing landscape, they risk becoming irrelevant. This is the true cost of technical debt.

Without a plan to repay the technical debt; with no way to reliably estimate the long term cost of not repaying it, are we really making prudent, deliberate choices? Isn’t the decision to add technical debt simply reckless?

Read Full Post »
