Code coverage with unit & integration tests

On a pet project recently I set out to build automated UI (integration) tests as well as the normal unit tests. I wanted all of this integrated into my maven build, with code coverage reports so I could get an idea of areas with insufficient test coverage. Rather than just publish the source code for the project, I’ve put together a simple example to demonstrate how I got all this set up; so if you’re looking to integrate maven, junit, webdriver (now part of selenium) and emma – read on to find out how I went about it.

First off, all the source code for this is available on github: https://github.com/activelylazy/coverage-example. I’ll show key snippets, but obviously there’s lots of detail omitted that (hopefully) isn’t relevant.

The Example App

Rather than break with tradition, the example application is a simple, if slightly contrived, hello world.

How It Works

The start page is a simple link to the hello world page:

<h1>Example app</h1>
<p>See the <a id="messageLink" href="helloWorld.html">message</a></p>

The hello world page just displays the message:

<h1>Example app</h1>
<p id="message"><c:out value="${message}"/></p>

The hello world controller renders the view, passing in the message:

public class HelloWorldController extends ParameterizableViewController {
    // Our message factory
    private MessageFactory messageFactory;
    @Override
    protected ModelAndView handleRequestInternal(HttpServletRequest request,
        HttpServletResponse response) throws Exception {
        // Get the success view
        ModelAndView mav = super.handleRequestInternal(request, response);
        // Add our message
        mav.addObject("message",messageFactory.createMessage());
        return mav;
    }
    @Autowired
    public void setMessageFactory(MessageFactory messageFactory) {
        this.messageFactory = messageFactory;
    }
}

Finally the MessageFactory simply returns the hard-coded message:

public String createMessage() {
    return "Hello world";
}

The Unit Test

We define a simple unit test to verify that the MessageFactory behaves as expected; it runs with the Spring test runner so that the factory can be autowired:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration // context location omitted here
public class MessageFactoryTest {
    // The message factory, injected by Spring
    private MessageFactory messageFactory;
    @Test
    public void testCreateMessage() {
        assertEquals("Hello world", messageFactory.createMessage());
    }
    @Autowired
    public void setMessageFactory(MessageFactory messageFactory) {
        this.messageFactory = messageFactory;
    }
}

Build

A basic maven pom file is sufficient to build this and run the unit test. At this point we have a working app, with a unit test for the core functionality (such as it is) that we can build and run.

<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>helloworld</artifactId>
    <packaging>war</packaging>
    <version>1.0-SNAPSHOT</version>
    <name>helloworld Maven Webapp</name>
    <build>
        <finalName>helloworld</finalName>
    </build>
    <dependencies>
        ...omitted...
    </dependencies>
</project>

Code Coverage

Now let’s integrate Emma so we can get some code coverage reports. First, we define a new Maven profile; this allows us to control whether or not we use Emma on any given build.

<profile>
    <id>with-emma</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>emma-maven-plugin</artifactId>
                <inherited>true</inherited>
                <executions>
                    <execution>
                        <id>instrument</id>
                        <phase>process-test-classes</phase>
                        <goals>
                            <goal>instrument</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</profile>

This simply invokes the “instrument” goal during the Maven “process-test-classes” phase; i.e. once our classes have been compiled, Emma instruments them. We can run this by invoking maven with the new profile:

mvn clean install -Pwith-emma

Once the build has completed, we can run Emma to generate code coverage reports, merging the instrumentation metadata (target/coverage.em) with the runtime coverage data (coverage.ec) gathered as the tests ran:

On Windows:

java -cp %USERPROFILE%/.m2/repository/emma/emma/2.0.5312/emma-2.0.5312.jar emma report -r xml,html -in coverage.ec -in target/coverage.em

On Linux:

java -cp ~/.m2/repository/emma/emma/2.0.5312/emma-2.0.5312.jar emma report -r xml,html -in coverage.ec -in target/coverage.em

We can now view the HTML coverage report in coverage/index.html. At this point, it shows we have 50% test coverage (by classes). MessageFactory is fully covered, but the HelloWorldController doesn’t have any tests at all.

Integration Test

To test our controller and JSP, we’ll use WebDriver to create a simple integration test; this is a JUnit test that happens to launch a browser.

public class HelloWorldIntegrationTest {
    // The webdriver
    private static WebDriver driver;
    @BeforeClass
    public static void initWebDriver() {
        driver = new FirefoxDriver();
    }
    @AfterClass
    public static void stopSeleniumClient() {
        try {
            driver.close();
            driver.quit();
        } catch( Throwable t ) {
            // Catch error & log, not critical for tests
            System.err.println("Error stopping driver: "+t.getMessage());
            t.printStackTrace(System.err);
        }
    }
    @Test
    public void testHelloWorld() {
        // Start from the homepage
        driver.get("http://localhost:9080/helloworld/");
        HomePage homePage = new HomePage(driver);
        HelloWorldPage helloWorldPage = homePage.clickMessageLink();
        assertEquals("Hello world",helloWorldPage.getMessage());
    }
}

The @BeforeClass and @AfterClass methods simply start WebDriver before the test and shut it down (closing the browser window) once the test is finished.

The test itself starts from the homepage, navigating there with a hard-coded URL.

We then initialise our WebDriver page object for the homepage. This encapsulates all the details of how the page works, allowing the test to interact with the page functionally, without worrying about the mechanics (which elements to use etc).

Next we use the homepage object to click the “message” link; this navigates to the hello world page.

Finally we confirm that the message shown on the hello world page is what we expect.

Note: I’m using page objects to separate test specification (what to do) from test implementation (how to do it). For more on why this is important see keeping tests from being brittle.

Homepage

The homepage object is pretty simple:

public HelloWorldPage clickMessageLink() {
    driver.findElement(By.id("messageLink")).click();
    return new HelloWorldPage(driver);
}

HelloWorldPage

The hello world page is equally simple:

public String getMessage() {
    return driver.findElement(By.id("message")).getText();
}
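
Neither page class is shown in full; as a rough sketch (the constructor and field here are my assumption – see the github repo for the real classes), a complete page object looks something like this:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// A minimal page object sketch: the constructor captures the driver,
// and each navigation method returns a page object for the page you
// land on, so tests can chain interactions fluently.
public class HomePage {
    private final WebDriver driver;

    public HomePage(WebDriver driver) {
        this.driver = driver;
    }

    public HelloWorldPage clickMessageLink() {
        driver.findElement(By.id("messageLink")).click();
        return new HelloWorldPage(driver);
    }
}

HelloWorldPage follows the same shape, exposing getMessage() instead of a navigation method.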

Running the Integration Test

To run the integration test during our Maven build we need to make a few changes. First, we need to exclude integration tests from the unit test phase:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    ...
    <configuration>
        ...
        <excludes>
            <exclude>**/*IntegrationTest.java</exclude>
            <exclude>**/common/*</exclude>
        </excludes>
    </configuration>
</plugin>

Then we define a new profile, so we can optionally run integration tests:

<profile>
    <id>with-integration-tests</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.mortbay.jetty</groupId>
                <artifactId>maven-jetty-plugin</artifactId>
                <version>6.1.22</version>
                <configuration>
                    <scanIntervalSeconds>5</scanIntervalSeconds>
                    <stopPort>9966</stopPort>
                    <stopKey>foo</stopKey>
                    <connectors>
                        <connector implementation="org.mortbay.jetty.nio.SelectChannelConnector">
                            <port>9080</port>
                            <maxIdleTime>60000</maxIdleTime>
                        </connector>
                    </connectors>
                </configuration>
                <executions>
                    <execution>
                        <id>start-jetty</id>
                        <phase>pre-integration-test</phase>
                        <goals>
                            <goal>run</goal>
                        </goals>
                        <configuration>
                            <daemon>true</daemon>
                        </configuration>
                    </execution>
                    <execution>
                        <id>stop-jetty</id>
                        <phase>post-integration-test</phase>
                        <goals>
                            <goal>stop</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.5</version>
                <inherited>true</inherited>
                <executions>
                    <execution>
                        <id>integration-tests</id>
                        <phase>integration-test</phase>
                        <goals>
                            <goal>test</goal>
                        </goals>
                        <configuration>
                            <excludes>
                                <exclude>**/common/*</exclude>
                            </excludes>
                            <includes>
                                <include>**/*IntegrationTest.java</include>
                            </includes>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</profile>

This may look complex, but really we’re just configuring jetty to run while we run our integration tests; then configuring how to run the integration tests themselves.

The jetty plugin’s configuration section sets up the connector – which port to listen on – plus the port and key used to stop the server.

The “start-jetty” execution binds jetty’s “run” goal to the “pre-integration-test” phase of the maven build, running it as a daemon so the build can continue.

The “stop-jetty” execution stops jetty again during the “post-integration-test” phase of the maven build.

Finally we use the maven-surefire-plugin again, this time bound to the “integration-test” phase of the build, running only our integration test classes.

We can run this build with:

mvn clean install -Pwith-emma -Pwith-integration-tests

This will build everything, run the unit tests, build the war, fire up jetty to host the war, run our integration tests (you’ll see a Firefox window pop up while the tests run) then shut down jetty. Because the war is built with instrumented classes, Emma also tracks code coverage while we run our integration tests.

We can now build our application, running unit tests and integration tests, gathering combined code coverage reports. If we re-run the Emma report and check code coverage we now see we have 100% test coverage (by classes) – since the controller is now covered by the integration test.

Issues

What are the outstanding issues with this, and what further extensions can be made?

  • The build produces an instrumented WAR – this means you need to run a second build, without Emma, to get a production-ready build.
  • The integration test hard-codes the port that Jetty is configured to start on, meaning the tests can’t be run directly within Eclipse. It is possible to pass this port in, defaulting to, say, 8080, so that integration tests can be run seamlessly within Eclipse as well as via the maven build – see the sketch after this list.
  • When running on your build server you probably don’t want Firefox popping up at random (if X is even installed), so running xvfb is a good idea. It is possible to set up maven to start & stop xvfb before & after the integration tests.
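
As a sketch of that second point (the property name “test.server.port” and the 8080 default are my choices, not something from the example repo), the test can read the port from a system property; maven can then set the property to match the jetty connector – e.g. via surefire’s systemPropertyVariables – while Eclipse runs fall back to the default:

// Read the port from a system property rather than hard-coding it.
// "test.server.port" and the 8080 default are assumptions for this
// sketch; the maven build would pass -Dtest.server.port=9080 so the
// test matches the jetty connector configured in the profile.
private static final int PORT = Integer.getInteger("test.server.port", 8080);
private static final String BASE_URL = "http://localhost:" + PORT + "/helloworld/";

@Test
public void testHelloWorld() {
    // Start from the homepage - no longer a hard-coded URL
    driver.get(BASE_URL);
    HomePage homePage = new HomePage(driver);
    HelloWorldPage helloWorldPage = homePage.clickMessageLink();
    assertEquals("Hello world", helloWorldPage.getMessage());
}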

First company coding dojo

Last month we ran our first company coding dojo – this was only open to company staff, but attendance was good (around a dozen people).

For those that have never heard of it, a coding dojo – based on the idea of a martial arts dojo – is an opportunity for programmers to improve their skills. This means getting a group of developers together, round a big screen, to work through a problem. Everything is pair programmed, with one “driver” and one “co-pilot”. Every so often the pair is changed: the driver returns to the audience, the co-pilot becomes the driver and a new co-pilot steps up. That way everyone gets a turn writing code, while the rest of the group provide advice (no matter how unwelcome).

For the first dojo we tackled a problem in Scala – this was the first time using Scala for most people, so a lot of time was spent learning the language. But thanks to Daniel Korzekwa, Kingsley Davies & DJ, everyone got to grips with the language and we eventually got a solution! The session was a lot of fun, with a lot of heated discussion – but everyone felt they learned something.


Afterwards, in true agile style, we ran a quick retrospective. The lessons learned showed the dojo had been an interesting microcosm of development – with us making the same mistakes we so often see in the day job! For example, we knew we should start with a design and went as far as getting a whiteboard; but failed to actually do any design. This led to repeated rework as the final design emerged, slowly, from numerous rewrites. One improvement for next time is to do just-in-time design instead.

We also set out to do proper test-first TDD. However, as so often happens, this degenerated into code-first development, with tests run occasionally and passing rarely. It was interesting to see how quickly a group of experienced developers falls out of the habit of TDD. Our retrospective highlighted that next time we should always write tests first, and take “baby steps” – doing the simplest thing that could possibly make the test pass.

Overall it was a great session and very enjoyable – it was fascinating to see the impact of ignoring “best practices” on something small where the results are so much more immediate.

The true cost of technical debt

Whether you like to think of it as technical debt or an unhedged call option, we’re all surrounded by bad code, bad decisions and their lasting impact on our day-to-day lives. But what is the long term impact of these decisions? Are we really making prudent choices? Martin Fowler talks about the four classes of technical debt – from reckless and deliberate to inadvertent and prudent.

Deliberate reckless debt

Deliberate reckless technical debt is just that: developers (or their managers) allowing decisions to be made that offer no upside and only downside – e.g. abandoning TDD or not doing any design. Whichever way you look at it, this is just plain unprofessional. If the developers aren’t capable of making sensible choices, then management should have stepped in to bring in people that could. Michael Norton labels this “cruft not technical debt“. If we’re getting no benefit and simply giving ourselves an excuse to write crappy code then it’s not technical debt, it’s just crap.

Inadvertent debt

Inadvertent technical debt is tricky. If we didn’t know any better, how could we have done any differently? Perhaps industry standards or best practices have moved on. I’m sure once upon a time EJBs were seen as a good idea; now they look like pure technical debt. Today’s best practice so easily becomes tomorrow’s code smell.

Or perhaps we’d never worked in this domain before; if we’d had the domain knowledge when we started, maybe the design would have turned out differently. Sometimes technical debt is inevitable and unavoidable.

Prudent deliberate debt

Then there’s prudent, deliberate debt, where we make a conscious choice to take on technical debt. This is a pretty common decision:

  • We need to recognise the revenue this quarter, no matter what
  • We’ve got marketing initiatives lined up so we’ve got to hit that date
  • We’ve committed to a schedule so don’t have time for rework

Sometimes, we have to make compromises: by doing a sub-standard job now, we get the benefit of finishing faster but we pay the price later.

Unlike the other types of technical debt, this is specifically a technical compromise. We’ve made a conscious decision to leave the code in a worse state than we should. We know this will slow us down later; we know we need to come back and fix it in “phase 2”; but to hit the date, we accept compromise.

Is it always the right decision?

1. Compromise is always faster in the short-term and slower in the long-term

Given a choice between what we need now and some unspecified “debt” to deal with later – the obvious choice is always to accept compromise.

2. Each compromise is minor, but they compound

Unlike how most people experience “debt”, technical debt compounds. Each compromise we accept increases the cost of all the existing debt as well. This means the cost of each individual compromise may be small, but together they can have a massive impact. To put illustrative numbers on it: if each compromise adds just 5% to the cost of every future change, ten compromises don’t cost 50% extra – they compound to 1.05^10, roughly 63% extra.

First of all I decide not to refactor some code. Next time round, because it’s not well factored, it’s harder to test, so I skip some unit tests. Third time round, my test coverage is poor, so I don’t have the confidence to refactor extensively. Now it’s getting really hard to test and really hard to change. Little by little, but faster and faster, my code is deteriorating. Each compromise piles on the ones that went before and amplifies them. Each decision is minor, but they add up to a big headache.

3. It’s hard to quantify the long term cost

When we agree not to rework a section of code, when we agree to leave something half-finished, when we agree to rush something through with minimal tests: it’s very difficult to estimate the long term cost. Sure, we can estimate what it would take to do it right – we know what the principal is. But how can we estimate the interest payments? Especially when they compound.

How can I estimate the time wasted figuring out the complexity next time? The time lost because a bug is harder to track down. The extra time it takes because I can’t refactor the code so easily next time. The extra time it takes because I daren’t refactor the code next time; and the extra debt this forces me to take on. How can I possibly quantify this?

At IMVU they found they:

underestimated the long-term costs [of technical debt] by at least an order of magnitude

Is it always wrong?

There are obviously cases where it makes sense to add technical debt; or at least, to do something half-assed. If you want to get a new feature in front of customers to judge whether it’s valuable, it doesn’t need to be perfect, it just needs to be out there quickly. If the feature isn’t useful for users, you remove it. You’ve saved the cost of building it “perfectly” and quickly gained the knowledge you needed. This is clearly a cost-effective, lean way to manage development.

But what if the feature is successful? Then you need a plan for cleaning it up; for refactoring it; making sure it’s well tested and documented sufficiently. This is the debt. As long as you have a plan for removing it, whether the feature is useful or not – then it’s a perfectly valid approach. What isn’t an option is leaving the half-assed feature in the code base for years to come.

The difference is having a plan to remove the debt. How often do we accept compromise with a vague promise to “come back and fix it later”, or my personal favourite, “we’ll fix that in phase 2”? My epitaph should be “now working on phase 2”.

Leaving debt in the code with no plan to remove it is like the guy who pays the interest on one credit card with another. You’re letting the debt mount up, not dealing with it. Without a plan to repay the debt, you will eventually go bankrupt.

The Risk of Technical Debt

Perhaps the biggest danger of technical debt is the risk that it represents. Technical debt makes our code more brittle, less easy to change. As @bertvanbrakel said when we discussed this recently:

Technical debt is a measure of code inflexibility

The harder it is to change, the more debt-laden our code. With this inflexibility comes the biggest risk of all: that we cannot change the code fast enough. What if the competitive or regulatory environment suddenly changes? If a new competitor launches that completely changes our industry, how long does it take us to catch up? If we have an inflexible code base, what will be left of our business in 2, 3 or 4 years when we finally catch up?

While this is clearly a worst case scenario, this lack of flexibility – this lack of innovation – hurts companies little by little. Once-revolutionary companies become staid, unable to react, and release only derivative products. As companies find themselves unable to innovate and keep up with the ever changing landscape, they risk becoming irrelevant. This is the true cost of technical debt.

Without a plan to repay the technical debt; with no way to reliably estimate the long term cost of not repaying it, are we really making prudent, deliberate choices? Isn’t the decision to add technical debt simply reckless?

What is software craftsmanship?

Learning to Write Software

Many programmers are, basically, self-taught. I dunno about you, but I taught myself to program as a young kid. I loved it. The endless challenge to bend the machine to my will. The elation when after hours of trying you get over some seemingly insurmountable hurdle. That was it: I was hooked.

Eventually, I went off to university – hoping to learn to do it properly. University taught me many things: first order logic, queuing theory, compiler design, distributed systems. All very interesting theoretically. Only the last of those has ever been any use to me in my professional career.

The trouble is, writing commercial software really has very little to do with computer science. Sure, algorithmic complexity is great to know about, but I don’t need to understand the difference between linear time and polynomial time complexity to see my code running dog slow when I give it large inputs.

Calling what we do “computer science” is like calling cooking “knife science”

You need to know how to handle a knife to cook, but it’s so much more than that. Cooking is part science, part art, part judgement. I think programming is the same. Great software is a product of enough computer science, creative design decisions and pragmatic judgement calls. Some of the science is taught in school; but what of the art and judgement? How do new programmers learn their craft?

Craftsmanship

In most trades built on tacit knowledge, learned skills and hard-won rules of thumb, the only way to learn – the proven-over-centuries way to learn – is from those more experienced than you. This is craftsmanship. The young apprentice finds a master craftsman to learn from; after many years the apprentice has learnt these skills himself and is ready to pass them on to the next generation.

Does this happen in software?

Mike Cohn made a comment recently that “nobody wants to be a programmer past 30”. This scares the hell out of me. If programmers are all moving on to become agile coaches, consultants, managers or big-A-architects – who’s educating the next generation? Is all our hard-won knowledge to disappear? And be re-learnt every time?

Maybe I was lucky; I can point to great teachers in my past. If I’m any good at my job today, it’s because I’ve worked with some great people. They helped me improve my knowledge, pushed me to new levels of understanding – to really know what it takes to craft great software. I have a feeling this isn’t common. I’ve been in many other jobs where I’ve not met great teachers. I suspect there are many developers that never encounter a great teacher in their entire career.

So I like the idea of software craftsmanship. I like the idea that developers should always be improving their craft. I like the idea that experienced developers pass on their tricks of the trade, their experience, the reasoning behind their decisions. But how do we encourage this idea? How do we unite programmers behind the idea of continuously improving our craft? What we need is some kind of manifesto.

The Manifesto

The software craftsmanship manifesto is great; but I’m not sure it really inspires me. It smacks too much of motherhood & apple pie. Sure, it encourages me to “raise the bar”. But, besides some notion of continuous improvement, it really doesn’t say anything!

Compare this with the agile manifesto, which does a great job of setting out its vision. It’s clear what agile is, but critically also some of the things agile isn’t. Sure, customer collaboration is great – who wouldn’t want that? But the most common obstacle to it is contract negotiation. So right there it points it out: if I want to be more agile, I should try reducing my company’s dependency on contract negotiation. The manifesto helps me understand how I can become more agile – that’s inspiring.

But what do I have to do to become a better craftsman? Is it really enough to encourage developers to steadily add value, to write well crafted software?

Ok, we’ll just write better software then. Great suggestion! So am I a craftsman now?

Isn’t the real problem looking at why developers don’t do this? What’s stopping us from crafting great software?

What craftsmanship isn’t

Software craftsmanship requires pride in your work. But it’s not enough to just be proud of your work. I could be writing crap code and still be proud of it, if I don’t know any better.

Software craftsmanship isn’t about getting a tick in a box. Craftsmanship is a mindset; it’s the way we should be working. It’s not something you can be – it’s something we should all aspire to.

Craftsmanship definitely isn’t about certification. I’m sick of all the certification. How many Sun Certified Java Programmers are there? I’ve met very few who were any good at their job. Conversely I’ve met many great programmers who didn’t bother with Sun’s certification schemes. Certification is no indication of competence. In general, I find the exact opposite: people who need certification are trying to prove they’re not crap at their job. Fail.

Or there’s the Certified Scrum Master. Everyone and their aunt is a Certified Scrum Master these days. The scrum certification schemes have become ridiculous. Does anyone believe it’s anything other than a bandwagon, a fad, to further the agile brand and make everyone a shit load of money in the process? When everyone is a CSM, how does that help companies hire? How does that help developers demonstrate their experience and knowledge?

Because that’s really the key issue here, isn’t it? How do programmers improve their craft? How do they demonstrate to companies with myopic hiring processes that they’re good at their craft? How do hiring managers wading through endless Sun Certified Java Programmer CVs spot the great craftsmen?

The people

In my mind, software craftsmanship is all about the people, not the software. I think any good programmer can become a great programmer, with the right help. But who’s providing this help? How do good programmers find it? I think software craftsmanship is about identifying master craftsmen; it’s about helping apprentice programmers find their own great teachers.

The only way to get better at programming is to do it. The best way to learn is to have your mistakes pointed out to you. This means you need a mentor; someone that works with you day in, day out. Software craftsmanship is about continuous feedback.

If I had to put my belief about craftsmanship into a pithy statement, I’d say software craftsmanship is about:

  • Competence over certification
  • Pragmatism over specific processes
  • Mentoring over training

What do you think? What does software craftsmanship mean to you?

Post your views in the comments; or if you’re based in London come along to the first London Software Craftsmanship Community meeting, date & venue to be announced real soon now.

Is agile about developers (any more)?

I spent last week at the Agile 2010 Conference. It was my first time at a conference this size; I definitely found it interesting and there were some thought provoking sessions – but there weren’t many deeply technical talks. As others have asked, what happened to the programmers?

Bob Martin wrote that

Programmers started the agile movement to get closer to customers not project managers

He also commented on how few talks were about programming

< 10% of the talks at #agile2010 are about programming. Is programming really < 10% of Agile?

People have already commented on how cost is a factor in attending a conference like this – especially for those of us outside the US who have expensive flights to contend with, too. This is certainly a factor, but I wonder if this is the real problem?

Do developers attend a conference like Agile 2010 to improve their craft? How much can you cover in a 90 minute session? Sure, you can get an introduction to a new topic – but how much detail can you get into? Isn’t learning the craft fundamentally a practical task? You need hands-on experience and feedback to really learn. In a short session with 100+ people are you actually gonna improve your craft?

Take TDD as an arbitrary example. The basic idea can be explained fairly quickly. A 90 minute session can give you a good introduction and some hands-on experience – but to really grok the idea, to really see the benefit, you need to see it applied to the real world and take it back to the day job. I think the same applies to any technical talk – if it’s interesting enough to be challenging, 90 minutes isn’t going to do it justice.

This is exacerbated by agile being such a broad church; there were developers specialising in Java, C#, Ruby and a host of other languages. It’s difficult to pitch a technical talk that’s challenging and interesting without turning off the majority of developers who don’t use your chosen language.

That’s not to say a conference like Agile 2010 isn’t valuable, and I’m intrigued to see where XP Universe 2011 gets to. However, I think the work that Jason Gorman is doing on Software Craftsmanship, for example, is a more successful format for technical learning – but this is focused clearly on the technical, rather than improving our software delivery process.

Isn’t the problem that Agile isn’t about programming? It is – or at least has become – management science. Agile is a way of managing software projects, of structuring organisations, of engaging with customers – aiming to deliver incremental value as quickly as possible. Nothing in this dictates technical practices or technologies. Sure, XP has some things to say about practices; but scrum, lean, kanban et al are much more about the processes and principles than specific technical approaches.

Aren’t the biggest problems with making our workplaces more agile – and in fact the biggest problems in software engineering in general – management ones, not development ones? It’s pretty rare to find a developer that tells you TDD is bad, that refactoring makes code worse, that continuous integration is a waste of time, that OOD leads to worse software. But it’s pretty common to find customers that want the moon on a stick, and want it yesterday; managers that value individual efficiency over team effectiveness, that create distinct functional teams and hinder communication.

There is always more for us to learn; we’re improving our craft all the time. But I don’t believe the biggest problems in software are the developers. It’s more common for a developer to complain about the number of meetings they’re asked to attend than the standard of code written by their peers.

Peers can be educated, crap management abides.