
How good is your code? If you’re like the other 80% of above average developers, then I bet your code is pretty awesome. But are you sure? How can you tell? Or perhaps you’re working on a legacy code base – just how bad is the code? And is it getting better? Code metrics provide a way of measuring your code – some people love ‘em, but some hate ‘em.

The Good

Personally I’ve always found metrics very useful. For example – code coverage tools like Emma can give you a great insight into where you do and don’t have test coverage. Before embarking on an epic refactor of a particular package, just how much coverage is there? Maybe I should increase the test coverage before I start tearing the code apart.

Another interesting metric can be lines of code. While working in a legacy code base (and who isn’t?), if you can keep velocity consistent (so you’re still delivering features) but keep the volume of inventory the same or less, then you’re making the code less crappy while still delivering value. Any idiot can implement a feature by writing bucket loads of new code, but it takes real craftsmanship to deliver new features and reduce the size of the code base.

The Bad

The problem with any metric is who consumes it. The last thing you want is for an over-eager manager to start monitoring it.

You can’t control what you can’t measure
– Tom DeMarco

Before you know it, there’s a bonus attached to the number of defects raised. Or there’s a code coverage target everyone is “encouraged” to meet.

As soon as there’s management pressure on a metric, smart people will game the system. I’ve lost count of the number of times I’ve seen people gaming code coverage metrics. In an effort to please a well-meaning but fundamentally misguided manager, developers end up writing tests with no assertions. Sure, the code ran and didn’t blow up. But did it do the right thing? Who knows! And if you introduce bugs, will your tests catch them? Hell, no! So your coverage is useless.
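To make that concrete, here’s a hypothetical sketch (the class under test is invented purely for illustration) of the kind of test that bumps the coverage number without protecting anything:

import org.junit.Test;

public class PriceCalculatorTest {

    // Hypothetical class under test, inlined to keep the example self-contained
    static class PriceCalculator {
        double calculatePrice(double net, double taxRate) {
            return net * (1 + taxRate);
        }
    }

    // Runs the code - so coverage goes up - but asserts nothing about the result.
    // calculatePrice() could return complete garbage and this test would still pass.
    @Test
    public void testCalculatePrice() {
        new PriceCalculator().calculatePrice(100.0, 0.2);
    }
}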

The target was met but the underlying goal – improving software quality – has not only been missed, it’s now harder to meet in future.

The Ugly

The goal of any metric is to measure something useful about the code base. Take code coverage, for example – really what we’re interested in is defect coverage. That is, out of the universe of all possible defects in our code, how many would cause a failure in at least one test? That’s what we want to know – how protected are we against regressions in the code base.

The trouble is, how can I measure “the universe of all possible defects” in a system? It’s basically unknowable. Instead, we use code coverage as an approximation. Given that tests assert the code did the right thing, the percentage of code that has been executed is a reasonable estimate of the likelihood of bugs being caught by them. If my tests execute 50% of the code, at best I can catch bugs in 50% of the code. If there are bugs in the other 50%, there’s zero chance my tests will find them. Code coverage is an upper bound on test coverage. But, if your tests are shoddy, test coverage can be much lower – to the point where tests with no assertions are basically useless.

And this is the difficulty with metrics: measuring what really matters – the quality of our software – is hard, if not impossible. So instead we have to measure what we can, but it isn’t always clear how that relates to our underlying goal.

But what does it mean?

There are some excellent tools out there like Sonar that give you a great overview of your code using a variety of common metrics. The trouble often is that developers don’t know (or care) what they mean. Is a complexity of 17.0 / class good or bad? I’m 5.6% tangled – but maybe there’s a good reason for that. What’s a reasonable target for this code base? And is LCOM4 a good thing or a bad thing? It sounds like a cancer treatment, to be honest.

Sure, if I’m motivated enough I can dig in and figure out what each metric means and we can try and agree reasonable targets and blah blah blah. C’mon, I’m busy delivering business value. I don’t have time for that crap. It’s all just too subtle so it gets ignored. Except by management.

A Better Way

Surely there’s got to be a better way to measure “code quality”?

1. Agree

Whatever you measure, it’s important the team agree on and understand what it means. If there’s a measure half the team don’t agree with, then it’s unlikely to get better. Some people will work towards improving it, others won’t and will let it get worse. The net effect is likely to be heartache and grief all round.

2. Measure What’s Important

You don’t have to measure the “standard” things – like code coverage or cyclomatic complexity. As long as the team agree it’s a useful thing to measure, everyone agrees it needs improving and can commit to improving it – then it’s a useful measure.

A colleague of mine at youDevise spent his 10% time building a tool to track and graph various measures of our code base. But, rather unusually, these weren’t the usual metrics that the big static analysis tools gather – these were much more tightly focused, much more specific to the issues we face. So what kind of things can you measure easily yourself?

  • If you have a god class, why not count the number of lines in the file? Fewer is better.
  • If you have a 3rd party library you’re trying to get rid of, why not count the number of references to it?
  • If you have a class you’re trying to eliminate, why not count the number of times it’s imported?

These simple measures represent real technical debt we want to remove – by removing technical debt we will be improving the quality of our code base. They can also be incredibly easy to gather: the most naive approach only needs grep & wc.
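If you’d rather stay in Java than shell, a throwaway sketch along these lines does the same job – the package and class names here are invented for illustration:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Counts how many source files still import the class we're trying to eliminate.
// Run it from the project root; print the number so a build script can graph it over time.
public class ImportCounter {

    private static final String UNWANTED_IMPORT = "import com.example.legacy.GodClass";

    public static void main(String[] args) throws IOException {
        try (Stream<Path> sources = Files.walk(Paths.get("src/main/java"))) {
            long references = sources
                    .filter(path -> path.toString().endsWith(".java"))
                    .filter(ImportCounter::containsUnwantedImport)
                    .count();
            System.out.println("Files importing GodClass: " + references);
        }
    }

    private static boolean containsUnwantedImport(Path path) {
        try {
            return Files.readAllLines(path).stream()
                    .anyMatch(line -> line.contains(UNWANTED_IMPORT));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}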

It doesn’t matter what you measure, as long as the team believe that whatever you do measure should be improved; it then gives you an insight into the quality of your code base, using a measure you care about.

3. Make It Visible

Finally, put this on a screen somewhere – next to your build status is good. That way everyone can see how you’re doing and gets a constant reminder that quality is important. This feedback is vital – you can see when things are getting better and, just as importantly, when things start to slip and the graph veers ominously upwards.

Keep It Simple, Stupid

Code quality is such an abstract concept it’s impossible to measure. Instead, focus on specific things you can measure easily. The simpler the metric is to understand, the easier it is to improve. If you have to explain what a metric means, you’re doing it wrong. Try and focus on just a few things at any one time – if you’re tracking 100 different metrics it’s going to be sheer luck that on average they’re all getting better. If I instead focus on half a dozen, I can remember them – the very least I’ll do is not let them get worse; and if I can, they’ll be clear in my mind so I can improve them.

Do you use metrics? If so, what do you measure? If not, do you think there’s something you could measure?



On a pet project recently I set out to build automated UI (integration) tests as well as the normal unit tests. I wanted to get all of this integrated into my maven build, with code coverage reports so I could get an idea of areas with insufficient test coverage. Rather than just publish the source code for the project, I’ve put together a simple example to demonstrate how I got all this set up; so if you’re looking to integrate maven, junit, webdriver (now selenium) and emma – read on to find out how I went about it.

First off, all the source code for this is available on github: https://github.com/activelylazy/coverage-example. I’ll show key snippets, but obviously there’s lots of detail omitted that (hopefully) isn’t relevant.

The Example App

Rather than break with tradition, the example application is a simple, if slightly contrived, hello world.

How It Works

The start page is a simple link to the hello world page:

<h1>Example app</h1>
<p>See the <a id="messageLink" href="helloWorld.html">message</a></p>

The hello world page just displays the message:

<h1>Example app</h1>
<p id="message"><c:out value="${message}"/></p>

The hello world controller renders the view, passing in the message:

public class HelloWorldController extends ParameterizableViewController {
    // Our message factory
    private MessageFactory messageFactory;
    @Override
    protected ModelAndView handleRequestInternal(HttpServletRequest request,
        HttpServletResponse response) throws Exception {
        // Get the success view
        ModelAndView mav = super.handleRequestInternal(request, response);
        // Add our message
        mav.addObject("message",messageFactory.createMessage());
        return mav;
    }
    @Autowired
    public void setMessageFactory(MessageFactory messageFactory) {
        this.messageFactory = messageFactory;
    }
}

Finally the MessageFactory simply returns the hard-coded message:

public String createMessage() {
    return "Hello world";
}

The Unit Test

We define a simple unit test to verify that the MessageFactory behaves as expected:

public class MessageFactoryTest {
    // The message factory
    private MessageFactory messageFactory;
    @Test
    public void testCreateMessage() {
        assertEquals("Hello world",messageFactory.createMessage());
    }
    @Autowired
    public void setMessageFactory(MessageFactory messageFactory) {
        this.messageFactory = messageFactory;
    }
}
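Note that nothing in this snippet actually injects the MessageFactory – it relies on Spring’s test support to wire it in. The full source on github has the real configuration; roughly speaking, the test class is annotated along these lines (the context file name below is an assumption, not necessarily what the repo uses):

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:test-context.xml") // assumed name; see the github repo
public class MessageFactoryTest {
    // ...test body exactly as above...
}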

Build

A basic maven pom file is sufficient to build this and run the unit test. At this point we have a working app, with a unit test for the core functionality (such as it is) that we can build and run.

<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>helloworld</artifactId>
    <packaging>war</packaging>
    <version>1.0-SNAPSHOT</version>
    <name>helloworld Maven Webapp</name>
    <build>
        <finalName>helloworld</finalName>
    </build>
    <dependencies>
        ...omitted...
    </dependencies>
</project>

Code Coverage

Now let’s integrate Emma so we can get some code coverage reports. First, we define a new Maven profile; this allows us to control whether or not we use emma on any given build.

<profile>
    <id>with-emma</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>emma-maven-plugin</artifactId>
                <inherited>true</inherited>
                <executions>
                    <execution>
                        <id>instrument</id>
                        <phase>process-test-classes</phase>
                        <goals>
                            <goal>instrument</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</profile>

This simply invokes the “instrument” goal during the Maven “process-test-classes” phase; i.e. once our class files have been compiled, emma instruments them. We can run this by invoking maven with the new profile:

mvn clean install -Pwith-emma

Once the build has completed, we can run Emma to generate code coverage reports:

On Windows:

java -cp %USERPROFILE%/.m2/repository/emma/emma/2.0.5312/emma-2.0.5312.jar emma report -r xml,html -in coverage.ec -in target/coverage.em

On Linux:

java -cp ~/.m2/repository/emma/emma/2.0.5312/emma-2.0.5312.jar emma report -r xml,html -in coverage.ec -in target/coverage.em

We can now view the HTML coverage report in coverage/index.html. At this point, it shows we have 50% test coverage (by classes). MessageFactory is fully covered, but the HelloWorldController doesn’t have any tests at all.

Integration Test

To test our controller and JSP, we’ll use WebDriver to create a simple integration test; this is a JUnit test that happens to launch a browser.

public class HelloWorldIntegrationTest {
    // The webdriver
    private static WebDriver driver;
    @BeforeClass
    public static void initWebDriver() {
        driver = new FirefoxDriver();
    }
    @AfterClass
    public static void stopSeleniumClient() {
        try {
            driver.close();
            driver.quit();
        } catch( Throwable t ) {
            // Catch error & log, not critical for tests
            System.err.println("Error stopping driver: "+t.getMessage());
            t.printStackTrace(System.err);
        }
    }
    @Test
    public void testHelloWorld() {
        // Start from the homepage
        driver.get("http://localhost:9080/helloworld/");
        HomePage homePage = new HomePage(driver);
        HelloWorldPage helloWorldPage = homePage.clickMessageLink();
        assertEquals("Hello world",helloWorldPage.getMessage());
    }
}

The @BeforeClass and @AfterClass methods simply start Web Driver before the test and shut it down (closing the browser window) once the test is finished.

In the test method itself we first navigate to the homepage with a hard-coded URL.

We then initialise our Web Driver page object for the homepage. This encapsulates all the details of how the page works, allowing the test to interact with the page functionally, without worrying about the mechanics (which elements to use etc).

Next we use the homepage object to click the “message” link; this navigates to the hello world page.

Finally we confirm that the message shown on the hello world page is what we expect.

Note: I’m using page objects to separate test specification (what to do) from test implementation (how to do it). For more on why this is important see keeping tests from being brittle.

Homepage

The homepage object is pretty simple:

public HelloWorldPage clickMessageLink() {
    driver.findElement(By.id("messageLink")).click();
    return new HelloWorldPage(driver);
}
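The only piece not shown is the class skeleton around that method – the page object just holds onto the WebDriver instance it’s given. Roughly like this (the constructor and field are my sketch; the real class is in the github repo):

public class HomePage {

    private final WebDriver driver;

    public HomePage(WebDriver driver) {
        this.driver = driver;
    }

    // Click the "message" link and hand back a page object for the page we land on
    public HelloWorldPage clickMessageLink() {
        driver.findElement(By.id("messageLink")).click();
        return new HelloWorldPage(driver);
    }
}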

HelloWorldPage

The hello world page is equally simple:

public String getMessage() {
    return driver.findElement(By.id("message")).getText();
}

Running the Integration Test

To run the integration test during our Maven build we need to make a few changes. First, we need to exclude integration tests from the unit test phase:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    ...
    <configuration>
        ...
        <excludes>
            <exclude>**/*IntegrationTest.java</exclude>
            <exclude>**/common/*</exclude>
        </excludes>
    </configuration>
</plugin>

Then we define a new profile, so we can optionally run integration tests:

<profile>
    <id>with-integration-tests</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.mortbay.jetty</groupId>
                <artifactId>maven-jetty-plugin</artifactId>
                <version>6.1.22</version>
                <configuration>
                    <scanIntervalSeconds>5</scanIntervalSeconds>
                    <stopPort>9966</stopPort>
                    <stopKey>foo</stopKey>
                    <connectors>
                        <connector implementation="org.mortbay.jetty.nio.SelectChannelConnector">
                            <port>9080</port>
                            <maxIdleTime>60000</maxIdleTime>
                        </connector>
                    </connectors>
                </configuration>
                <executions>
                    <execution>
                        <id>start-jetty</id>
                        <phase>pre-integration-test</phase>
                        <goals>
                            <goal>run</goal>
                        </goals>
                        <configuration>
                            <daemon>true</daemon>
                        </configuration>
                    </execution>
                    <execution>
                        <id>stop-jetty</id>
                        <phase>post-integration-test</phase>
                        <goals>
                            <goal>stop</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.5</version>
                <inherited>true</inherited>
                <executions>
                    <execution>
                        <id>integration-tests</id>
                        <phase>integration-test</phase>
                        <goals>
                            <goal>test</goal>
                        </goals>
                        <configuration>
                            <excludes>
                                <exclude>**/common/*</exclude>
                            </excludes>
                            <includes>
                                <include>**/*IntegrationTest.java</include>
                            </includes>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</profile>

This may look complex, but really we’re just configuring jetty to run while we run our integration tests; then configuring how to run the integration tests themselves.

The jetty plugin’s configuration section sets up jetty – the port to run on and how to stop it.

The start-jetty execution configures jetty to run during the “pre-integration-test” phase of the maven build.

The stop-jetty execution configures jetty to be stopped during the “post-integration-test” phase of the maven build.

Finally we use the maven-surefire-plugin again, this time running during the “integration-test” phase of the build and only running our integration test classes.

We can run this build with:

mvn clean install -Pwith-emma -Pwith-integration-tests

This will build everything, run the unit tests, build the war, fire up jetty to host the war, run our integration tests (you’ll see a firefox window pop up while they run), then shut down jetty. Because the war is built with instrumented classes, Emma also tracks code coverage while we run our integration tests.

We can now build our application, running unit tests and integration tests, gathering combined code coverage reports. If we re-run the emma report and check code coverage, we now see we have 100% test coverage (by classes) – since the controller is now also covered, by the integration test.

Issues

What are the outstanding issues with this, and what further extensions can be made?

  • The build produces an instrumented WAR – this means you need to run a second build, without emma, to get a production-ready build.
  • The integration test hard-codes the port that Jetty is configured to start on, meaning the tests can’t be run directly within Eclipse. It is possible to pass this port in, defaulting to, say, 8080, so that integration tests can be run seamlessly within Eclipse as well as via the maven build – see the sketch after this list.
  • When running on your build server you probably don’t want Firefox popping up at random (if X is even installed), so running xvfb is a good idea. It is possible to set up maven to start & stop xvfb before & after the integration tests.
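On the port issue, a possible sketch (the property name test.server.port is my suggestion, not necessarily what the example project defines): the test reads the port from a system property, falling back to 8080, and the with-integration-tests profile would then set the same property for surefire and use it for the jetty connector port.

// Sketch: read the port from a system property so the maven build can supply 9080,
// while running the test directly in Eclipse falls back to 8080.
private static String baseUrl() {
    String port = System.getProperty("test.server.port", "8080");
    return "http://localhost:" + port + "/helloworld/";
}

@Test
public void testHelloWorld() {
    // Start from the homepage
    driver.get(baseUrl());
    HomePage homePage = new HomePage(driver);
    HelloWorldPage helloWorldPage = homePage.clickMessageLink();
    assertEquals("Hello world", helloWorldPage.getMessage());
}

The jetty connector in the with-integration-tests profile could then use ${test.server.port} in place of the hard-coded 9080, with the property defined in the pom or passed on the command line.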
