
Archive for May, 2010

Doing agile is easy. If you’re working on a greenfield project, with no history and no existing standards and processes to follow. The rest of us get to work on brownfield projects, with company standards that are the antithesis of agile, and people and processes wedded to a past long out of date. Oh you can do agile in this environment, but it’s hard – because you have to overcome the constraints that meant you weren’t agile in the first place.

Any organisation hoping to “go agile” needs to overcome its constraints. The scrum team hits issues that stop it being properly agile – normally artefacts of the old way of doing things – and raises them with the scrum master and management, hoping to remove them and reach the agile nirvana on the other side. Management diligently discuss these constraints and the easiest are quickly removed – simple things like better tools and whiteboards everywhere – but some take a bit longer to remove.

Before too long, you run into company culture: these are the constraints that just won’t go away. Not all constraints are created equal, though – the hardest to remove are often the most vital, the very things that stop the company being properly agile. So let’s look at some of the activities and typical constraints a team can encounter.

User Story Workshops

If you’re doing agile, you’ve gotta have user stories. If you have user stories, you’re bound to have something like a user story workshop – where all the stakeholders (or just the dev team, if you’re kidding yourself) get together to agree the basic details of the work that needs to be done.

The easiest trap in the world to fall into is to try and perfect the user stories. Before you know it you’re stuck in analysis paralysis, reviewing and re-reviewing the user stories until everyone is totally happy with them. Every time you discuss them, someone thinks of a new edge case, a new detail to think about – so you add more acceptance criteria, more user stories, more spikes.

Eventually you’ll get moving again, but now you’ve generated a mountain of collateral with the illusion of accuracy. User stories should be a placeholder for a discussion; if you waste time generating that detail up front, you might skip the conversation later, thinking you’ve captured everything already, and miss the really critical details.

Estimates

The worst constraints appear when it comes to producing estimates. The start of a new project or a new team is a difficult time: you’ve got no history to base your estimates on, but management need to know how long you’ll take. With the best of intentions, scrum masters and team leads are often given incentives – a bonus, for example – to produce accurate estimates. There are only two ways to play this: 1. pad estimates mercilessly and know you’ll fill that “spare” time easily; 2. spend more time analysing the problem, as though exhaustive analysis can predict the future. Neither of these outcomes is what the business really wants, and neither can be described as “agile”.

Design

There’s a great conflict between the need for overall architecture and design, and the agile mentality to JFDI – to not spend time producing artefacts that don’t themselves add value. However, you can’t coordinate large development activities without some guiding architecture, some vague notion of where you’re heading.

But the logical conclusion of this train of thought is that you need some architectural design – so let’s write a document describing our ideas and get together to discuss it. Luckily this provides a great forum for all the various stakeholders across the business that don’t grok code to have their say: “we need an EDA“; “I know yours is a small change, but first you must solve some arbitrarily vast problem overnight”; “I’m a potential user of this app and I think it should be green“.

This process is great at producing perfect design documents. Unfortunately, in the real world, where requirements change faster than designs can be written, it’s a completely wasted effort. But it gives everyone a forum for airing their views, and since they don’t trust the development team to meet their requirements any other way, any change to the process is resisted.

The scrum team naturally raise the design review process as a constraint, but once it’s clear it can’t be removed, they start to adapt to it. Because the design can fundamentally change on the basis of one person’s view right up to the last minute (hey! Cassandra sounds cool, let’s use that!), the dev team change their process: “sprints can’t start until the design is signed off”. And because the design can fundamentally change the amount of work to be done, the user stories are never “final” until the design is signed off, and the business can’t get an estimate until the design has been agreed.

Sprinting

If you’re lucky, the one part of the project that feels vaguely scrum-y is the development sprint. The team are relatively free of dependencies on others; they can self-organise around getting the work done; things are tested as they go; the Definition of Done is completed before the next story is started. The team are rightly proud of “being agile”.

Manual Testing

And then the constraints emerge again. If you’re working on a legacy code base, without a decent suite of automated tests, you probably rely on manual testers clicking through your product following some written test script. A grotesque misuse of human beings if ever there was one, but the cost of investing in automation “this time” is outweighed by how quickly the testers can whip through a script now they’ve done it a million times.

Then comes the worst of all possible worlds: because regression testing the application isn’t a case of clicking a button (but of clicking thousands of buttons, one by one, in a very specific order), you have to have a “final QA run” – a chance, once development has stopped, for QA to assure the product meets the required quality goals. If your reliance on manual QA is large, this can be a time-consuming exercise. And then, what happens if QA find a bug? It’s fixed, and you start the whole process all over again… like some gruesome nightmare you can’t wake up from!

The Result

With these constraints at every step of the process, we’re not working how we’d like to. Now either we can view them as hindrances to working the way we want, or we can view them as forcing on us a new process that we’re not in control of. What we’ve managed to do is invent a new development methodology:

  • First we carefully gather requirements and discuss them ad nauseam
  • Then we carefully design a solution that meets these (annoyingly changing) requirements
  • Then we write code to meet the design (and ignore that the design changes along the way)
  • Finally we test that the code we wrote actually does what we wanted it to (even though that’s changed since we started)

What have we invented? Waterfall. We’ve rediscovered waterfall. Well done! For all your effort and initiatives, you’ve managed to invent a software development methodology discredited decades ago.

But we’ve tarted it up with stand ups, user stories, a burn down chart and some poor schmuck that gets called the scum master – but really, all we’ve done is put some window dressing on our old process and given it a fancy dan name.

If the ultimate goal of introducing agile to your company is to make the business more efficient, but you’re still drowning in constraints, then introducing agile has failed: you’ve still got the same constraints, and you’re still doing waterfall – no matter what you call it. A rose by any other name still smells of shit.



Note: this article is now out of date, please see the more recent version covering the same topic: Testing asynchronous applications with WebDriverWait.

WebDriver is a great framework for automated testing of web applications. It learns the lessons of frameworks like Selenium and provides a clean, clear API to test applications. However, testing ajax applications presents challenges to any test framework. How does WebDriver help us? First, some background…

Drivers

WebDriver comes with a number of drivers. Each of these is tuned to drive a specific browser (IE, Firefox, Chrome) using different technology depending on the browser. This allows the driver to operate in a manner that suits the browser, while keeping a consistent API so that test code doesn’t need to know which type of driver/browser is being used.
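
For example, the same test code can drive different browsers just by swapping the driver. A minimal sketch – assuming the Firefox, IE and Chrome drivers are available on the machine running the tests:

WebDriver driver = new FirefoxDriver();
// WebDriver driver = new InternetExplorerDriver();
// WebDriver driver = new ChromeDriver();

// The calls are identical whichever driver we constructed
driver.get("http://www.google.com/");
System.out.println( driver.getTitle() );
driver.quit();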

Page Pattern

The page pattern provides a great way to separate test implementation (how to drive the page) from test specification (the logical actions we want to complete – e.g. enter data in a form, navigate to another page etc). This makes the intent of tests clear:

homePage.setUsername("mytestuser");
homePage.setPassword("password");
homePage.clickLoginButton();
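
Behind calls like these sits a page class that knows how to drive the page. A minimal sketch of what such a class might look like – HomePage, its URL and its locators are hypothetical here, not taken from a real application:

public class HomePage {
 private WebDriver driver;

 public HomePage( WebDriver driver ) {
   this.driver = driver;

   // Load the page this class drives (hypothetical URL)
   driver.get("http://localhost:8080/test-app/home.htm");
 }

 public void setUsername( String username ) {
   // Hypothetical locator
   driver.findElement(By.name("username")).sendKeys(username);
 }

 public void setPassword( String password ) {
   // Hypothetical locator
   driver.findElement(By.name("password")).sendKeys(password);
 }

 public void clickLoginButton() {
   // Hypothetical locator
   driver.findElement(By.id("login")).click();
 }
}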

Blocking Calls

WebDriver’s calls are also blocking – calls to submit(), for example, wait for the form to be submitted and a response returned to the browser. This means we don’t need to do anything special to wait for the next page to load:

WebDriver driver = new FirefoxDriver();
driver.get("http://www.google.com/");
WebElement e = driver.findElement(By.name("q"));
e.sendKeys("webdriver");
e.submit();
assertEquals("webdriver - Google Search",driver.getTitle());

Asynchronous Calls

The trouble arises if you have an asynchronous application. Because WebDriver doesn’t block when you make asynchronous calls via javascript (why would it?), how do you know when something is “ready” to test? So if you have a link that, when clicked, does some ajax magic in the background – how do you know when the magic has stopped and you can start verifying that the right thing happened?

Naive Solution

The simplest solution is by using Thread.sleep(…). Normally if you sleep for a bit, by the time the thread wakes up, the javascript will have completed. This tends to work, for the most part.
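
In test code the naive approach looks something like this (a sketch, reusing the page class from later in this post):

// Click the link, then just hope one second is enough for the ajax call to finish
page.clickLoadAsyncContent();

try {
  Thread.sleep(1000);
} catch( InterruptedException ex ) {
  // ignore
}

// This assert is a race: it only passes if the ajax call really has completed
assertEquals("This content is loaded asynchronously.",page.getAsyncContent());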

The problem is that once you have hundreds of these tests, you suddenly start to find that they fail, at random, because the delay isn’t quite enough. When the build runs on your build server, sometimes the load is higher, the phase of the moon wrong, whatever – the end result is that your delay isn’t quite enough, you start assert()ing before your ajax call has completed, and your test fails.

So, you start increasing the timeout. From 1 second. To 2 seconds. To 5 seconds. You’re now on a slippery slope of trying to tune how long the tests take to run against how successful they are. This is a crap tradeoff. You want very fast tests that always pass. Not semi fast tests that sometimes pass.

What’s to be done? Here are two techniques that make testing asynchronous applications easier.

RenderedWebElement

All the drivers (except HtmlUnit, which isn’t really driving a browser) actually generate RenderedWebElement instances, not just WebElement instances. RenderedWebElement has a few interesting methods on it that can make testing your application easier. For example, the isDisplayed() method saves you having to query the CSS style to work out whether an element is actually shown.

If you have some ajax magic that, in its final step, makes a <DIV> visible then you can use isDisplayed() to check whether the asynchronous call has completed yet.

Note: my examples here use dojo but the same technique can be used whether you’re using jquery or any other asynchronous toolkit.

First the HTML page – this simply has a link and an (initially hidden) <div>. When the link is clicked, it triggers an asynchronous call to load a new HTML page; the content of that HTML page is inserted into the div and the div is made visible.

<html>
<head>
 <title>test</title>

 <script type="text/javascript" src="js/dojo/dojo.js" djConfig=" isDebug:false, parseOnLoad:true"></script>

 <script src="js/asyncError.js" type="text/javascript"></script>

 <script>
   /*
    * Load a HTML page asynchronously and
    * display contents in hidden div
    */
   function loadAsyncContent() {
     var xhrArgs = {
       url: "asyncContent.htm",
       handleAs: "text",
       load: function(data) {
         // Update DIV with content we loaded
         dojo.byId("asyncContent").innerHTML = data;

         // Make our DIV visible
         dojo.byId("asyncContent").style.display = 'block';
       }
     };

     // Call the asynchronous xhrGet
     var deferred = dojo.xhrGet(xhrArgs);
   }
 </script>
</head>
<body>
 <div id="asyncContent" style="display: none;"></div>
 <a href="#" id="loadAsyncContent" onClick="loadAsyncContent();">Click to load async content</a>
 <br/>
</body>
</html>

Now the integration test. Our test simply loads the page, clicks the link and waits for the asynchronous call to complete. Once the call is complete, we check that the content of the <div> is what we expect.

@RunWith(SpringJUnit4ClassRunner.class)
public class TestIntegrationTest {
 @Test
 public void testLoadAsyncContent() {
   // Create the page
   TestPage page = new TestPage( new FirefoxDriver() );

   // Click the link
   page.clickLoadAsyncContent();
   page.waitForAsyncContent();

   // Confirm content is loaded
   assertEquals("This content is loaded asynchronously.",page.getAsyncContent());
 }
}

Now the page class, used by the integration test. This interacts with the HTML elements exposed by the driver; the waitForAsyncContent method regularly polls the <div> to check whether it’s been made visible yet.

public class TestPage  {

 private WebDriver driver;

 public TestPage( WebDriver driver ) {
   this.driver = driver;

   // Load our page
   driver.get("http://localhost:8080/test-app/test.htm");
 }

 public void clickLoadAsyncContent() {
   driver.findElement(By.id("loadAsyncContent")).click();
 }

 public void waitForAsyncContent() {
   // Get a RenderedWebElement corresponding to our div
   RenderedWebElement e = (RenderedWebElement) driver.findElement(By.id("asyncContent"));

   // Up to 10 times
   for( int i=0; i<10; i++ ) {
     // Check whether our element is visible yet
     if( e.isDisplayed() ) {
       return;
     }

     try {
       Thread.sleep(1000);
     } catch( InterruptedException ex ) {
       // Try again
     }
   }
 }

 public String getAsyncContent() {
   return driver.findElement(By.id("asyncContent")).getText();
 }
}

By doing this, we don’t need to code arbitrary delays into our tests; we can cope with server calls that potentially take a little while to execute; and we can ensure that our tests will always pass (at least, they won’t fail because of timing problems!).
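
As the note at the top of this post mentions, more recent versions of WebDriver wrap this polling pattern up for you as WebDriverWait. A minimal sketch of the equivalent wait, assuming the support classes (org.openqa.selenium.support.ui) are on the classpath:

// Poll until the div is visible, or fail after 10 seconds
WebDriverWait wait = new WebDriverWait( driver, 10 );
wait.until( ExpectedConditions.visibilityOfElementLocated( By.id("asyncContent") ) );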

JavascriptExecutor

Another trick is to realise that the WebDriver instances implement JavascriptExecutor. This interface can be used to execute arbitrary Javascript within the context of the browser (a trick selenium finds much easier). This allows us to use the state of javascript variables to control the test. For example – we can have a variable populated once some asynchronous action has completed; this can be the trigger for the test to continue.

First, we add some Javascript to our example page above. Much as before, it simply loads a new page and updates the <div> with the content. This time, rather than showing the div (which can already be visible), it sets a variable – “working” – that is true while the request is in flight and false once it completes.

 var working = false;

 function updateAsyncContent() {
   working = true;

   var xhrArgs = {
     url: "asyncContentUpdated.htm",
     handleAs: "text",
     load: function(data) {
       // Update our div with our new content
       dojo.byId("asyncContent").innerHTML = data;

       // Set the variable to indicate we're done
       working = false;
     }
   };

   // Call the asynchronous xhrGet
   var deferred = dojo.xhrGet(xhrArgs);
 }

Now the extra test case we add. The key line here is where we wait for the “working” variable to be false:

 @Test
 public void testUpdateAsyncContent() {
   // Create the page
   TestPage page = new TestPage( getDriver() );

   // Click the link
   page.clickLoadAsyncContent();
   page.waitForAsyncContent();

   // Now update the content
   page.clickUpdateAsyncContent();
   page.waitForNotWorking();

   // Confirm content is loaded
   assertEquals("This is the updated asynchronously loaded content.",page.getAsyncContent());
 }

Finally, our updated page class. The key line here is where we execute some Javascript to determine the value of the working variable:

 public void clickUpdateAsyncContent() {
   driver.findElement(By.id("updateAsyncContent")).click();
 }

 public void waitForNotWorking() {
   // Get a JavascriptExecutor
   JavascriptExecutor exec = (JavascriptExecutor) driver;

    // Up to 10 times, or until the working flag is false
   for( int i=0; i<10; i++ ) {

     if( ! (Boolean) exec.executeScript("return working") ) {
       return;
     }

     try {
       Thread.sleep(1000);
     } catch( InterruptedException ex ) {
       // Try again
     }
   }
 }

Note how WebDriver handles type conversions for us here. In this example the Javascript is relatively trivial, but we could execute any arbitrarily complex Javascript here.
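
For instance, executeScript can also take arguments – exposed to the script as arguments[0], arguments[1] and so on – and will convert the return value to a suitable Java type. A small sketch, not part of the original example:

JavascriptExecutor exec = (JavascriptExecutor) driver;

// Pass a WebElement into the script; numeric results come back as Long
WebElement div = driver.findElement(By.id("asyncContent"));
Long childCount = (Long) exec.executeScript("return arguments[0].childNodes.length;", div);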

By doing this, we can now flag in Javascript when some state has been reached that allows the test to progress; we have made our code more testable, eliminated arbitrary delays, and made our tests faster by letting them respond as soon as asynchronous calls complete.

Have you ever had problems testing asynchronous applications? What approaches have you used?
