Visualizing Code

When writing software we’re working at two levels:

  1. Creating an executable specification of exactly what we want the machine to do
  2. Creating a living document that describes the intent of what we want the machine to do, to be read by humans

The first part is the easy part; the second takes a lifetime to master. I read a really great post today pointing out signs that you’re a bad programmer. Whether you’re a bad programmer or just inexperienced, I think the biggest barrier is being able to quickly and accurately visualize code. You need to visualize what the application actually does at runtime; but all your IDE shows you is the raw, static source code. From this static view of the world you have to infer the runtime behaviour – the actual shape of the application and the patterns of interaction. The two are closely related, but they are separate. Given just source code, you need to be able to visualize what that code does.

What does it mean to visualize code? At the simplest level, it’s understanding what individual statements do.

string a = "Hello";
string b = "world";
a = b; // a and b now both reference "world"

It might sound trivial, but the first necessary step is being able to quickly parse code and mentally step through what will happen. First for basic statements, then for code that iterates:

while (stack.Count() > 1)
{
    // Treat the top of the stack as an operation...
    var operation = stack.Pop() as IOperation;
    // ...apply it to the next two items down...
    var result = operation.Execute(stack.Pop(), stack.Pop());
    // ...and push the result back, until one value remains
    stack.Push(result);
}

Where you need to understand looping mechanics and mentally model what happens overall not just each iteration. Then you need to be able to parse recursive code:

int Depth(ITreeNode node)
{
    if (node == null)
        return 0; // an empty tree has no depth
    // otherwise: one more than the deeper of the two subtrees
    return 1 + Math.Max(Depth(node.Left), Depth(node.Right));
}

Which is generally harder for inexperienced programmers to understand and reason about; even though once you’ve learned the pattern it can be clearer and more concise.
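To make the recursion concrete, here’s a minimal, self-contained sketch – the node type is hypothetical, standing in for the elided ITreeNode:

```csharp
using System;

// Hypothetical node type standing in for ITreeNode
public class TreeNode
{
    public TreeNode Left;
    public TreeNode Right;
}

public static class Tree
{
    // Same shape as Depth above: an empty tree has depth 0;
    // otherwise, one more than the deeper of the two subtrees.
    public static int Depth(TreeNode node)
    {
        if (node == null)
            return 0;
        return 1 + Math.Max(Depth(node.Left), Depth(node.Right));
    }
}
```

For a root whose only descendants are a left child and a left grandchild, the calls unwind as Depth(grandchild) = 1, Depth(child) = 2, Depth(root) = 3 – mentally stepping through that unwinding is exactly the skill in question.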

Once you’ve mastered how to understand what a single method does, you have to understand how methods become composed together. In the OO world, this means understanding classes and interfaces; it means understanding design patterns; you need to understand how code is grouped together into cohesive, loosely coupled units with clear responsibilities.

For example, the visitor pattern has a certain mental structure – it’s about implementing something akin to a virtual method outside of the class hierarchy; in my mind it factors a set of classes into a set of methods.

public interface IAnimal
{
    void Accept(IAnimalVisitor visitor);
}

public class Dog : IAnimal { ... }
public class Cat : IAnimal { ... }
public class Fox : IAnimal { ... }

public interface IAnimalVisitor
{
    void VisitDog(Dog dog);
    void VisitCat(Cat cat);
    void VisitFox(Fox fox);
}
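Filling in the bodies the original elides – a sketch of the double dispatch at the heart of the pattern, with a hypothetical NoiseVisitor as one concrete “virtual method” implemented outside the class hierarchy:

```csharp
public interface IAnimal
{
    void Accept(IAnimalVisitor visitor);
}

public interface IAnimalVisitor
{
    void VisitDog(Dog dog);
    void VisitCat(Cat cat);
    void VisitFox(Fox fox);
}

// Each Accept calls back with its own concrete type --
// this is the double dispatch that makes the pattern work.
public class Dog : IAnimal { public void Accept(IAnimalVisitor v) { v.VisitDog(this); } }
public class Cat : IAnimal { public void Accept(IAnimalVisitor v) { v.VisitCat(this); } }
public class Fox : IAnimal { public void Accept(IAnimalVisitor v) { v.VisitFox(this); } }

// A hypothetical visitor: one piece of per-animal behaviour,
// factored out of the animal classes themselves.
public class NoiseVisitor : IAnimalVisitor
{
    public string Noise; // set by whichever Visit method runs
    public void VisitDog(Dog dog) { Noise = "Woof"; }
    public void VisitCat(Cat cat) { Noise = "Miaow"; }
    public void VisitFox(Fox fox) { Noise = "???"; }
}
```

Calling `pet.Accept(visitor)` on an `IAnimal` reference dispatches first on the animal’s type, then on the visitor’s – which is why it behaves like a virtual method living outside the hierarchy.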

The first step in reading code is being able to read something like a visitor pattern (assuming you’d never heard of it before) and understand what it does and develop a mental model of what that shape looks like. Then, you can use the term “visitor” in your code and in discussions with colleagues. This shared language is critical when talking about code: it’s not practical to design a system by looking at individual lines of code, we need to be able to draw boxes on a whiteboard and discuss shapes and patterns. This shared mental model is a key part of team design work; we need a shared language that maps to a shared mental model, both of the existing system and of changes we’d like to make.

In large systems this is where a common language is important: if the implementation uses the same terms the domain uses, it becomes simpler to understand how the parts of the system interact. By giving things proper names, the interactions between classes become logical analogues of real-world things – we don’t need to use technical words or made up words that subsequent readers will need to work to understand or learn, someone familiar with the domain will know what the expected interactions are. This makes it easier to build a mental model of the system.

For example, in an online book store, I might have concepts (classes) such as book, customer, shopping basket, postal address. These are all logical real world things with obvious interactions. If I instead named them PurchasableItem, RegisteredUser, OrderItemList and DeliveryIndicationStringList you’d probably struggle to understand how one related to the other. Naming things is hard, but incredibly important – poor naming will make it that much harder to develop a good mental model of how your system works.
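As an illustrative sketch (these classes aren’t from any real system), the well-named version almost reads like the domain itself:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Book
{
    public string Title;
    public decimal Price;
}

public class PostalAddress
{
    public string FirstLine;
    public string Postcode;
}

public class Customer
{
    public PostalAddress DeliveryAddress;
}

// A customer's basket holds books -- exactly the interaction
// someone familiar with the domain would expect.
public class ShoppingBasket
{
    private readonly List<Book> books = new List<Book>();

    public void Add(Book book) { books.Add(book); }

    public decimal Total { get { return books.Sum(b => b.Price); } }
}
```

Compare that with trying to guess how an OrderItemList relates to a DeliveryIndicationStringList.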

Reading code, the necessary precursor to writing code, is all about building a mental model. We’re trying to visualize something that doesn’t exist, by reading lines of text.

What programming language to use is probably the single biggest technical decision facing a project. That one decision affects every one that follows – from the frameworks and libraries you can use, to the people you hire. So how do you go about choosing what programming language to use?

The truth is, you probably do what most people do and use the same language you used on your last project. Or, if you’re a hipster, you use the latest super cool language. A couple of years ago all the cool kids were on rails. Now the hipsters are trying to tell me how awesome their nodes are. Or that clojures are where it’s at. Last time I checked, their Turing-complete language had the same problem-solving capacity as my Turing-complete language. Really what they’re arguing is that their language gives them better expressive power: it’s faster to write and/or cheaper to maintain.

Right tool, right job

As the old adage goes: always use the right tool for the job. If you need to automate some command line maintenance task – use a language that’s good at shell scripting: bash, perl, hell even ruby; don’t use Java. If the problem you’re solving needs a desktop client that integrates seamlessly in a Windows environment: use C#, don’t use Java (be quiet Java on the desktop fanboys, just be quiet). If the problem you’re solving involves handling lots of XML and you like stack traces: sure, use Java.

The biggest distinction normally comes down to algorithmic complexity. If what you’re working on has a lot of algorithmic complexity, use something that’s good at expressing it: a functional language, like Haskell or F#. Or if, like 90% of webapps, what you’re doing is data in, data out, you need a language with good OO power to model your domain: Java and C# are both good choices here, along with almost every other modern language out there.

Scala

Or maybe you really hate yourself and you want a compromise: why choose functional or procedural when you can have both? Why miss out on every language feature thought of over the last 50 years when you can have them all, baked into one mess of a language? Yes, if that sounds like you, you probably think you’re a hipster but you actually missed the boat by several years: it’s time to learn you some scalas.

I suspect part of the reason Scala is gaining so much popularity is that it finally gives all the frustrated Java devs the language toys they’ve always wanted. Java really is an unbelievably dated language now; it’s incredibly frustrating to work in. As someone who’s just switched to C#, I’ve relished all the new language gadgets and geegaws I’ve got to play with. Have they made the code any better? With lots of toys comes lots of complexity and variety, making code hard to understand and hard to maintain.

The thing is, Java is a toy language: any idiot can write decent idiomatic Java. The trouble is, Java is a toy language: everyone is forced to write screeds of noddy idiomatic, idiotic Java, no matter how much of a ninja rockstar they are. The best thing with Java, is it stops all the ninja rockstars from showing how ninja they are by writing undecipherable crap. I fear the impact lambdas will have on the maintainability of the average Java codebase as everyone starts finding new and confusing ways to express everything.

Hiring

Another reason to choose the right programming language is that it affects the developers you can hire. But does it really? I work in a C# shop now; do I turn down Java developers? Hell no. A good developer is a good developer, regardless of the language. Dismissing potential recruits because of the language they happen to know is absurd.

And the trouble is, if you think that hiring only python or node developers will get you a better standard of developer: you’re wrong. The pool will simply be much smaller. Maybe the average quality of that pool will be higher, who knows, who cares? I only need one developer, I want her to be the best I can hire – it makes no difference where the average is.

Language has no correlation with ability: I’ve met some very smart Java developers, and some truly terrible hipster developers. I’d rather cast the widest possible net and hire good developers who are happy to use the technologies we use. To do anything else is to limit the pool of talent I can draw from; which, let’s be honest, is pretty limited already.

Another argument I’ve heard is that the technologies you use will limit candidates willing to work for you – some developers only want to work in, say, clojure. Well, more fool them. I’d rather have people who want to work on interesting problems, regardless of the language, than people who would rather solve shit problems in the latest hipster language. Now if you work for a bank, and all you have are shit problems? Sure, go ahead and use a hipster language if it helps you hire your quota of morons. If nothing else it keeps them away from me.

Lingua Franca

Take a room full of hipster language programmers and ask them to form teams. What happens? Magically, you’ve now got a room full of C# and Java developers. Almost every developer will know at least one of these two languages – they are the lingua franca. Everything else is frankly varying degrees of hipster.

The truth is, the most important thing when choosing a language is how many developers on your team, and those you plan on hiring, already know it. If everyone on the team has to retrain into, say, Smalltalk, and everyone you hire needs hand-holding while they learn the new language – that’s a cost you have to factor in. What’s the benefit you’re getting from it?

Secondly, how easy is it to get support when you hit problems? The open source community around Java is awesome – if you’ve got a problem, there will already be 15 different solutions, some of which even work. If you work in C#, your choices are more limited – but there will be choices, some of which aren’t even from Microsoft. If you’re working in the latest hipster language, guess what? You’re on your own. For some people that’s the draw of hipster languages. For those of us wanting to get work done, it’s just a pain.

In the end, the best advice is probably to use the same language you used on your last project: everybody already knows it and your tooling is already set up around it. This is why Java has quickly become the new COBOL.

We’re flying back to the UK for a week later this month, so once again we’re forced to rent a car. Both sets of parents live too far from public transport for it to be a realistic option, even assuming the trains in the UK break the habit of a lifetime and work for a change. (Rail replacement bus service? What sadistic bastard thought of that?)

It got me thinking about the outrageous cost of hiring a car for the week. To get a car large enough for the three of us and all our luggage (who knew we needed to take a kitchen sink for a 1 week trip?), it will cost us over £200 for the week. I could practically buy a car more cost effectively than that… if only there was some way of everyone clubbing together to buy a car that you can drive when you need it. Scaled up, you’d have… a car rental company. Are they just ripping us off? Or does it plausibly cost that much to hire cars out? I made up some numbers to see if it would shed any light on the situation.

New price £24,000.00
Price @ 3 yrs £12,000.00
Annual running costs £2,000.00
Cost over 3 yrs £18,000.00

A fairly low-spec, family sized car would cost about £24,000 to buy brand new. Assuming you kept it for 3 years, and allowing for insurance and up-keep etc, I reckon you’d need to make about £6,000 a year to cover it. Is that a lot?

Annual cost £6,000.00
Average utilisation 50%
Min daily cost £32.97

Assuming the car is being used 50% of the time, you’d need to charge about £30/day to make your £6,000 a year back. Which works out about the same as I’m actually paying – so it’s probably not a million miles wide of the mark. Which is slightly depressing – unless you buy an older car, and risk spending more money on maintaining it and repairing it… it’s difficult to see how you could rent a car for less than they currently charge.

Although I’m hiring a car for the week, I don’t actually need the car for 7 full days of 24 hours. At most, I’ll probably be in the car for 12 hours across that week. If only there was a way I could rent a car for an hour at a time. (Zipcar and the like only serve big cities, which is useless when you’re visiting the sticks.)

Self-Driving Cars

Well, maybe this is where self-driving cars could massively change things. With a self-driving car, it wouldn’t matter that I’m parking up in the middle of nowhere. It can scoot itself off into the nearest city to continue ferrying people about. I’d only need to pay for the time I actually use it.

Annual cost £6,000.00
Average utilisation 25%
Min hourly cost £2.75

Obviously the utilisation would be lower (I can’t imagine many people will need carrying about in the middle of the night). But instead of paying £230 for a week’s car parking, I could pay £33 for 12 hours of driving. In theory, the car would still be paid for over three years – but the cost to individual users would be massively lower.

Not Just for Holidays

At those prices, could it even replace my main car?

Annual mileage 12,000 miles
Average monthly mileage 1,000 miles
Average speed 40 mph
Average monthly time 25 hours
Average monthly cost £68.68

If I was paying for the exact same car, to spend the bulk of its time parked outside my house, it would cost £500 per month. But renting it by the hour is a fraction of that. Of course, if everyone did this, you’d need a massive number of cars to cover the peak demand at rush hour – but the utilisation could drop to 5% and it would still be cheaper than owning the car outright. That’s each car in use for just over an hour each day; given that’s roughly the average UK daily commute, that seems easily achievable.
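For what it’s worth, the rounding in all three tables lines up if the year is taken as 52 weeks (364 days) – a sketch of the arithmetic:

```csharp
using System;

public static class CarCosts
{
    const double AnnualCost = 6000.0;    // £18,000 over 3 years
    const double DaysPerYear = 52 * 7;   // 364 -- reproduces the tables' rounding

    // Minimum daily charge to recover the annual cost at a given utilisation
    public static double MinDailyCost(double utilisation)
    {
        return AnnualCost / (DaysPerYear * utilisation);
    }

    // Minimum hourly charge, likewise
    public static double MinHourlyCost(double utilisation)
    {
        return AnnualCost / (DaysPerYear * 24 * utilisation);
    }
}
```

MinDailyCost(0.5) comes out at £32.97, MinHourlyCost(0.25) at £2.75, and 25 hours a month at that hourly rate at £68.68 – matching the three tables above.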

So, it’s 2013, I only have one question:

Where’s my goddamned self driving car?

 

Short Commit Cycles

How often do you commit? Once a week? Once a day? Once an hour? Every few minutes? The more often you commit, the less likely you are to make a mistake.

I used to work with a guy who reckoned you should commit at least every 15 minutes. I thought he was mad. Maybe he is. But he’s also damned right. I didn’t believe it was possible: what can you actually get done in 15 minutes? At best, maybe you can write a test and make it pass. Well good! That’s enough. If you can write a test and make it pass in 15 minutes, check that sucker in! If you can’t: back it the hell out.

But I’ll never get anything done!

Sure you will, you just need to think more. Writing software isn’t about typing, it’s about thinking. If you can’t see a way to make a step in 15 minutes, think harder.

Chase the Red

One of my oft-used anti-patterns is to “chase the red”. You make one change, delete a method or class you don’t want – and then hack away, error by error, until the compiler is happy. It might take 5 minutes. It might take a week.

It’s very easy to get sucked into this pattern – you think you know where you want to get to: I have a method that takes a string – maybe it’s a product SKU from the warehouse; I want to replace it with a StockUnit which has useful functionality on it. Simples: change the signature of the method to accept StockUnit instead of string and follow the red; when the compiler’s happy, I’m done.

Trouble is – how do I know that every calling method can change? Maybe I think I know the code base and can make a reasonable guess. Maybe I’m even right. But it’s unbelievably easy to be wrong. I might even check all references methodically first, but until the code’s written – it’s just a theory. And you know what they say about the difference between theory and practice.

Is there a different way? What if I add a new method that accepts a StockUnit and delegates to the original method? I can test drive adding the new method, commit! I can go through each reference, one by one: write a failing test, make it pass by calling the new method, commit! Step-by-step, the code compiles and the tests pass every step of the way.
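A sketch of that delegating step – StockUnit, the warehouse class and the stand-in logic are all hypothetical, as in the example above:

```csharp
public class StockUnit
{
    public readonly string Sku;
    public StockUnit(string sku) { Sku = sku; }
}

public class Warehouse
{
    // The original method, untouched -- every existing caller
    // still compiles and every test still passes.
    public bool IsInStock(string sku)
    {
        return !string.IsNullOrEmpty(sku); // stand-in logic
    }

    // The new overload simply delegates to the original. Commit!
    // Callers migrate to it one by one, each behind a green test;
    // when the last string caller is gone, the logic moves across
    // and the old method gets inlined away.
    public bool IsInStock(StockUnit unit)
    {
        return IsInStock(unit.Sku);
    }
}
```

The point of the delegation is that both overloads coexist: at no step is the build red, so every step is a safe place to commit.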

What happens if I hit something insurmountable half way? Where I am is a mess, but it’s a compiling, green mess. If I decide to back out I can revert those commits. Or refactoring tools can inline the method and automatically remove half the mess for me. But the critical thing is, every step of the way I can integrate with the rest of the team, and all the tests pass in case I hit something unexpected.

Merge Hell

The trouble with long running branches (and I include in that your local “branch” that you haven’t committed for two weeks!) is that the longer they run the harder they are to integrate. Now sure, git makes it easier to manage branching & merging – but you still have to do it. You still have to push & pull changes regularly – let others integrate with you and integrate with others as often as you can.

The likelihood of you having merge problems increases rapidly the longer your branch is open. If it’s 15 minutes and one test, you know what? If you have a merge problem, you can throw it away and redo the work – it’s only 15 minutes!

Just keep swimming!

But if you’ve been working for days and you hit merge hell, what do you do? You’re forced to wade through it. What you’re doing is chasing the red in your version control system. You’ve adopted an approach that gives you no choice but to keep on hacking until all the red merge conflict markers are gone from your workspace. If it takes 5 minutes or 5 days, you’ve got to keep going because you can’t afford to lose your work.

The kicker is by the time you’re done merging and everything compiles and works again, what’s the betting some other bugger has jumped in with another epic commit? Off we go again. You’re now in a race to try and merge your steaming pile before anyone else gets in. This is a ridiculous race – and who wins? Nobody, you’re still a loser.

Instead, if you adopt the 15 minute rule, you write a test, make it pass, check it in and push to share with everyone else. Little and often means you lose little time to merge conflicts and instead let your version control system do what it’s good at*: merging changes!

* doesn’t apply to those poor suckers using TFS, sorry

Brain Fade

An interesting side effect of a short cycle time is that you limit the damage periods of brain fade can do. I dunno about you, but there are times when I really shouldn’t be allowed near a keyboard. The hour after lunch can be pretty bad – a full belly is not good for my thinking. The morning after the night before: it’s best if you keep me away from anything mission critical.

The trouble is, if I have a multi-day branch open I’m almost guaranteed to hit a period of brain fade. Because I’m not committing, when my head clears and I realise I’ve screwed up I can’t just revert those commits or throw my branch away entirely – I have to try and unpick the mess I just made from the mess I already had. It’s incredibly hard to be disciplined and tidy when you’re wading in excrement.

We all make mistakes – but version control gives me an overview of what I did, without relying on my flaky memory. Maybe I’ve gone completely the wrong way and can unpick parts of what I’ve done, maybe I got lucky and some commits were half-way decent – I can cherry pick those and start a new branch. Without version control I’d be flapping around in an ungodly mess trying to figure out how to keep the good bits without just binning everything. All the time trying to explain to the project manager how I’m 90% done! No, really! Nearly there now!

Practice!

Version control is an invaluable tool that we must learn to use properly. It takes discipline. It doesn’t come naturally. If you’ve never worked in such short cycles, work through a kata committing regularly. Our natural instinct is to type first and think later. Stop! Think about how you can make an increment in 15 minutes and do that. If you can’t: revert!

Measuring Software

A while back I read Making Software – it left me disappointed at the state of academic research into the practice of developing software. I just read Leprechauns of Software Engineering, which made me angry about the state of academic research.

Laurent provides a great critique of some of the leprechauns of our industry and why we believe in them. But it just highlighted to me how little we really know about what works in software development. Our industry is driven by fashion because nobody has any objective measure of what works and what doesn’t. Some people love comparing software to the medical profession; to torture the analogy a bit – I think today we’re in the bloodletting and leeches phase of medical science. We don’t really know what works, but we have all sorts of strange rituals for doing it better.

Rituals

Rituals? What rituals? We’re professional software developers!

But of course, we all get together every morning for a quick status update. Naturally we answer the three questions strictly, any deviations are off-topic. We stand up to keep the meeting short, obviously. And we all stand round the hallowed scrum kanban board.

But rituals? What rituals?

Do we know if stand ups are effective? Do we know if scrum is effective? Do we even know if TDD works?

Measuring is Hard

Not everything that can be measured matters; not everything that matters can be measured

I think we have a fundamental problem when it comes to analysing what works and what doesn’t. As a developer there are two things I ultimately need to know about any practice/tool/methodology:

  1. Does it get the job done faster?
  2. Does it result in fewer bugs / lower maintenance cost?

This boils down to measuring productivity and defects.

Productivity

Does TDD make developers more productive? Are developers more productive in Ruby or Java? Is pairing productive?

These are some fascinating questions that, if we had objective, repeatable answers to them, would have a massive impact on our industry. Imagine being able to dismiss all the non-TDD-doing, non-pairing Java developers as an unproductive waste of space! Imagine if there was scientific proof! We could finally end the language wars once and for all.

But how can you measure productivity? By lines of code? Don’t make me laugh. By story points? Not likely. Function points? Now I know you’re smoking crack. As Ben argues, there’s no such thing as productivity.

The trouble is, if we can’t measure productivity – it’s impossible to compare whether doing something has an impact on whether you get the job done faster or not. This isn’t just an idle problem – I think it fundamentally makes research into software engineering practices impossible.

It makes it impossible to answer these basic questions. It leaves us open to fashion, to whimsy and to consultants.

Quality

Does TDD help increase quality? What about code reviews? Just how much should you invest in quality?

Again, there are some fundamental questions that we cannot answer without measuring quality. But how can you measure quality? Is it a bug or a feature? Is it a user error or a requirements error? How many bugs? Is an error in a third party library that breaks several pages of your website one bug or dozens? If we can’t agree what a defect is or even how to count them how can we ever measure quality?

Subjective Measures

Maybe there are some subjective measures we could adopt. For example, perhaps I could monitor the number of emails to support. That’s a measure of software quality: it’s rather broad, but if software quality increases, the number of emails should decrease. However, social factors could easily dwarf any actual improvement. For example, if users keep reporting software crashes and are told by the developers “yeah, we know” – do you keep reporting it? Or just accept it and get on with your life? The trouble is, the lack of customer complaints doesn’t indicate the presence of quality.

What To Do?

What do we do? Do we just give up and adopt the latest fashion hoping that this time it will solve all our problems?

I think we need to gather data. We need to gather lots of data. I’d like to see thousands of dev teams across the world gathering statistics on their development process. Maybe out of a mass of data we can start to see some general patterns and begin to have some scientific basis for what we do.

What to measure? Everything! Anything and everything. The only constraint is we have to agree on how to measure it. Since everything in life is fundamentally a problem of lack of code, maybe we need a tool to help measure our development process? E.g. a tool to measure how long I spend in my IDE, how long I spend testing. How many tests I write; how often I run them; how often I commit to version control etc etc. These all provide detailed telemetry on our development process – perhaps out of this mass of data we can find some interesting patterns to help guide us all towards building software better.

How much of what we do is based on sound understanding? And how often do we do things without really understanding why? Everything we do: from walking down the street to writing software – sometimes we understand why we do things, other times we follow superstition.

We’re trained to associate cause with effect, even if we don’t understand the connection:

Press button: get food. Hungry. Press button. Food.

At an animal level, the reason is irrelevant. But as rational human beings, we’re able to understand the relationship between cause and effect; but that doesn’t mean we always behave rationally. Ever walked around a set of ladders rather than under them? My wife makes me laugh by saluting magpies. These superstitions are entirely irrational behaviours, but are mostly just amusing and harmless.

But when it comes to computers, this ability for us to not worry about why things work is pervasive. For example, ever had a conversation like the following:

Wife: I need to email someone this word document I just wrote

Me: ok, where did you save it?

Wife: in word

Me: ok, but is it in your documents folder or on your desktop?

Wife: I clicked File then Save.

Me: ooookaaaay, but what folder did you save it in?

Wife:

Me:

Wife:

Me: *sigh*

Now, this isn’t an uncommon problem – people use computers all the time without really understanding what’s happening underneath. We learn a set of behaviours that do what we want. Why that set of behaviours work is irrelevant. We have a mental model that describes the world as we see it, our model is only called into question when we need to do something new or different.

Mental Models

The trouble is, from the outside, it can be very difficult to tell the difference between superstition and understanding. For example, how many Windows users actually understand shortcuts? Ever tried explaining the idea to your Dad? But yet, I bet he can click links on his desktop or in his start menu without any problem. His mental model of the world is sufficient to let him work, but insufficient to let him understand why it works. As soon as you step outside the normal behaviour – for example, trying to explain the difference between deleting the desktop icon and deleting the application – your mental model is challenged and superstitions exposed.

For users to not question things and adopt computer superstitions is understandable. But for software developers, it’s frightening. If you don’t understand something someone is explaining, you have to challenge it – either their model of the world is wrong or yours is. Either way, you’re not going to understand and agree until you can agree a consistent model.

Voodoo Programmers

But I’ve worked with developers who effectively become superstitious programmers. They don’t really know why something works, they just know that clicking these buttons does the right thing. Who needs to know how Hibernate works, as long as I know the magic incantations (or can copy&paste from somewhere else) – I can get the job done! But as another developer on the team, without looking closely – can I tell whether you’ve setup the Hibernate mappings because you know how they work, or just copy&pasted it from somewhere else without understanding?

The trouble is, almost all programmers resort to copy&paste&change from time to time. There are some incantations that are just too complex and too rarely used to memorise, so we quite reasonably borrow from somewhere else. But the difference between a developer who uses the incantations to aid his memory and the developer who just blindly copies without thinking is incredibly subtle. It’s easy to be one while thinking you’re the other – especially if it’s something you don’t do all the time.

Understanding

How many times have you found a fix for something but not understood why? Do you keep investigating or move on? For example, I was investigating a race condition recently and I’d spotted an incorrect implementation of double-checked locking – but I found that fixing it didn’t actually fix the bug. I wasn’t convinced I’d implemented the double-checked lock correctly either, so I replaced it with a method that always locked. What do you know: problem fixed!

Now, given what I know of the bug – that’s not right. Double-checked locking is a performance tweak, it shouldn’t impact the thread-safety of the solution. The fact that locking in all cases has fixed the bug gives me a hint that there must be something else that isn’t thread safe. By introducing the lock, the code that runs just after ends up effectively locked – it’s unlikely to get interrupted by the scheduler with another thread in the same critical section.
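For reference, the canonical shape of the pattern in C# – a sketch, not the actual code from this bug hunt. Note the volatile field: without it, even the “correct” version can publish a half-constructed object under some memory models:

```csharp
// Hypothetical payload type -- stands in for whatever the singleton holds
public class ExpensiveThing
{
    public readonly int Value = 42;
}

public static class SingletonHolder
{
    private static readonly object Gate = new object();
    private static volatile ExpensiveThing instance; // volatile: safe publication

    public static ExpensiveThing Instance
    {
        get
        {
            // First check skips the lock on the hot path...
            if (instance == null)
            {
                lock (Gate)
                {
                    // ...second check handles the race where two threads
                    // both saw null before either took the lock.
                    if (instance == null)
                        instance = new ExpensiveThing();
                }
            }
            return instance;
        }
    }
}
```

Note that the pattern only guards construction: as the bug above shows, anything the singleton hands out still has to be thread-safe in its own right.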

After another couple of hours investigation, I found the code in question – there was a singleton that was being given a reference to a non-thread safe object. I could have left my lock in place, since it “fixed” the bug – but by following my intuition that I didn’t understand the reason I found a much more damaging bug.

Understanding is key, be wary of superstitions. If you don’t understand why something works, keep digging. If this means you slow down: good! Going fast when you don’t understand what you’re doing is a recipe for disaster.

Fast Feedback

Writing good software is all about getting feedback, quickly. Does it compile? Does it function? Does it build? Does it deploy? Does it do what the customer wanted? Does it actually work? Every step of the way we have feedback loops, to improve the software. The faster these feedback loops are, the faster the software improves.

Builds

Don’t you hate waiting for your code to compile? Or, if you use a language from this century: do you remember back in the olden days when you had to wait for the compiler? Until recently, I’d actually forgotten that incremental compilers are a relatively new invention. I remember when JBuilder (my IDE of choice back in those distant times) first introduced me to incremental compilation – it was something of a revelation! You mean, I don’t have to hit compile? It just does it? In the background? Like magic?!

A few years ago I joined a company that had something of a byzantine build process. As was the fashion at the time, they used ant for their build. Unfortunately, nobody had told them that Eclipse could also compile code. So all code was built with ant. Made a change? Run the ant build to build the relevant library (may require guesswork). Copy it (by hand) to the app server. Quickly restart WebSphere (note: not quick). Test. Lather. Rinse. Repeat. Die of boredom.

Eventually, I replaced this with an Eclipse workspace that could run the application. No more build step. No more copying things by hand. No more mistakes. No more long delays in getting feedback.

Just recently I started working with C++ again after nearly a decade in bytecode and interpreted languages. I’d actually forgotten what it was like to wait for code to compile. I’d got so used to working in Eclipse, where you press Go and Things Happen(tm). Now instead I have to wait for a build before I can do anything. Every little change costs minutes of my life waiting for Visual Studio.

Then, if I’m really lucky – it will even compile! Remember when your IDE didn’t give you little red squiggles or highlight broken code? How fast is that feedback loop now? Before I’ve even finished typing, the IDE is telling me I’m a moron and suggesting how to fix it. Assuming my code compiles, next I run the gauntlet of linking. Normally that means some godawful error message for which googling turns up decade-old answers and Stack Overflow posts that might as well be discussing religion.

TDD

I suspect this is why TDD is less common in the C++ world. Not only does the language feel ill-suited to doing TDD (to be honest, it feels ill-suited to writing software at all), but if you have to wait minutes after each change – TDD just doesn’t work.

  • Write a failing test
  • Wait minutes for the compiler to check your work, maybe go for a cuppa
  • Write code to make the test pass
  • Wait minutes for the compiler to check your work, perhaps it’s lunchtime?
  • Refactor

Actually, scrap the last step – since C++ is basically entirely devoid of automated refactoring tools – just leave whatever mess you’ve created because it’s about as good as it will get.
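When the compiler answers in seconds instead of minutes, the cycle above collapses into a tight loop. A minimal red-green round, sketched in plain Java with a bare assertion standing in for a test framework (the `Calculator` class is a made-up example, not from the post):

```java
// Hypothetical red-green round without a test framework:
// step 1 (red) is a test that fails until add() exists,
// step 2 (green) is the simplest code that makes it pass.
public class CalculatorTest {

    // Step 1 (red): fails with an AssertionError until add() is implemented.
    static void testAddSumsTwoNumbers() {
        if (Calculator.add(2, 3) != 5) {
            throw new AssertionError("expected 2 + 3 = 5");
        }
    }

    // Step 2 (green): just enough implementation to pass the test.
    static class Calculator {
        static int add(int a, int b) {
            return a + b;
        }
    }

    public static void main(String[] args) {
        testAddSumsTwoNumbers();
        System.out.println("green");
    }
}
```

Run it, watch it go green, refactor, run it again – the whole point is that each of those runs costs seconds, not a trip to the kettle.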

But with a red, green, refactor cycle that takes approximately 2.6 hours – it would be impossibly slow. No wonder TDD happens less.

Pairing

I’ve been arguing recently about whether pairing or code review is the best way to improve code quality. I always used to be a huge believer in code review – I’d seen it have a massive impact on code quality and really help the team learn both the code and how to code better.

But now, after spending months pairing day in, day out – I think pairing is better in every conceivable way than code review. Why? Because it’s so much more immediate. How many times have you heard feedback from code review being left because “we don’t have time right now” or “we’ll come back to that just as soon as we’ve got this release out”?

But with pairing, you have your reviewer right there offering feedback, before you’ve even finished writing the line of code. Just like your modern IDE – giving you feedback before you’ve even had a chance to get it wrong. And, just like the IDE, if the smart ass sitting next to you thinks he knows better, pass over the keyboard and let him show you.

This is just impossible with code review. The feedback cycle is necessarily much slower and, worse, it’s too easy to ignore. You can add a story to the backlog to fix code review comments. You can’t so easily add a story to the backlog to make the argumentative bloke next to you shut up! (More’s the pity, sometimes)

But either way – whether you get feedback from code review or pairing, the important thing is to get feedback. If you’re not getting feedback: you’re not learning and neither you nor your code are improving.
