Acceptance test driven development for web applications

ATDD is a simple process change that can have far-reaching implications for your development projects.

Acceptance test driven development, or ATDD, is a collaborative practice wherein application developers, software users, and business analysts define automated acceptance criteria very early in the application development process. They then use the acceptance criteria to guide subsequent development work. As John Ferguson Smart explains in this JavaWorld feature, ATDD is a simple process change that can have far-reaching implications for your development projects.

From acceptance tests to ATDD

The idea of acceptance tests -- a set of tests that must pass before an application can be considered finished -- is certainly not new. Indeed, the value of testing an application before delivering it is relatively well established.

Traditionally, testers prepare test plans and execute tests manually at the end of the software development phase, so acceptance testing is done relatively independently of development activities. In some organizations, QA departments also use automated testing tools such as HP's Quick Test Pro; but, again, this activity is generally siloed away from the rest of the development effort.

Testing an application only after it has been developed has a number of significant drawbacks. Most importantly, raising problems at this late stage makes bugs of any size difficult and expensive to correct. This results in costly rework, wasted developer time, and delayed deliveries.

ATDD takes a different approach. Essentially, ATDD involves collaboratively defining and automating the acceptance tests for upcoming work before it even begins -- a simple inversion that turns out to be a real game changer. Rather than validating what has been developed at the end of the development process, ATDD actively pilots the project from the start. Rather than being an activity reserved to the QA team, ATDD is a collaborative exercise that involves product owners, business analysts, testers, and developers. And rather than just testing the finished product, ATDD helps to ensure that all project members understand precisely what needs to be done, even before the programming starts.

In addition, acceptance tests are no longer confined to the end of the project and performed as an isolated activity. Instead, ATDD tests are automated and fully integrated throughout the development process. As a result, issues are raised sooner and can be fixed more quickly and less expensively, the workload on QA at the end of the project is greatly reduced, and the team can respond to change faster and more effectively.

ATDD in practice

Let's consider how ATDD typically works in the context of an agile project. As a rule, a software project aims to deliver a number of high-level "features" (sometimes called functionalities or capabilities) to its end users. A feature is a general value proposition relating to something the application can do for the end user, expressed in terms you might put on a product flyer or press release: for example, a feature of an online real-estate lease-management application might be "Manage property repairs."

Features are generally too big to implement all at once, so they are broken into smaller, more manageable chunks. In agile circles, these chunks are often expressed in the form of user stories -- a short sentence capturing what the user wants from a particular piece of functionality. For example, user stories for the "Manage property repairs" feature might include "Issue work order" and "Approve invoice."

A user story cannot stand alone, however; it is merely the promise of a conversation between developers and users about a particular requirement. The details of what needs to be implemented arise from this conversation, and are then formalized as a set of objective, demonstrable acceptance criteria. For example, you would need to specify acceptance criteria for "user can approve an invoice for an amount less than the agreed maximum" and "user cannot approve an invoice if the price exceeds the agreed maximum."

Acceptance criteria determine when a particular user story is ready to be deployed into production. But they do much more than record what should be tested at the end of an iteration. Acceptance criteria are drawn up as a collaborative exercise, at the start of the iteration, with developers, testers, and product owners involved. As a result, they help ensure that everyone on the team has a clear vision of what is required. They also help provide clear guidelines for developers as to what needs to be implemented. (These guidelines are even more effective if the developers doing the programming are practicing Test Driven Development, or TDD.)

ATDD and TDD

TDD, or Test-Driven Development, is a highly effective development strategy that helps developers write code more accurately and precisely. The low-level requirements used to drive TDD are directly derived from the high-level acceptance tests, so the two techniques complement each other: automated acceptance tests describe the high-level business objectives, while TDD helps developers implement the code that satisfies them.
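To make this complementarity concrete, here is a minimal TDD-style sketch of the invoice-approval rule discussed earlier. The Invoice class and its method names are hypothetical, invented purely for illustration; a real project would use a framework such as JUnit or TestNG rather than assertions in a main() method:

```java
// Minimal TDD-style sketch for the invoice-approval rule. The Invoice class
// and its API are hypothetical; in practice the tests below would be written
// first (as JUnit tests) and would drive out this implementation.
class Invoice {
    private final double amount;

    Invoice(double amount) {
        this.amount = amount;
    }

    // The business rule under test: an invoice can only be approved
    // if its amount does not exceed the agreed maximum.
    boolean canBeApprovedAgainst(double agreedMaximum) {
        return amount <= agreedMaximum;
    }
}

class InvoiceApprovalTest {
    public static void main(String[] args) {
        // Low-level tests derived directly from the acceptance criteria.
        if (!new Invoice(500.0).canBeApprovedAgainst(1000.0)) {
            throw new AssertionError("invoice under the maximum should be approvable");
        }
        if (new Invoice(1500.0).canBeApprovedAgainst(1000.0)) {
            throw new AssertionError("invoice over the maximum should be rejected");
        }
        System.out.println("All invoice-approval tests passed");
    }
}
```

Note how each low-level test maps back to one of the acceptance criteria: the first to "user can approve an invoice for an amount less than the agreed maximum," the second to "user cannot approve an invoice if the price exceeds the agreed maximum."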

Note that acceptance criteria are not designed to be exhaustive -- there will be more technical tests for that. Instead, they are used as much for communication as they are for verification. They take the form of working examples, which is why ATDD is sometimes referred to as "specification by example."

Acceptance-test driven development is not just limited to agile projects. Even teams using more formal and detailed use cases, or more traditional approaches such as the Software Requirements Specification (or SRS) documents, can benefit from having verifiable, automated acceptance criteria as early as possible.

Automating your acceptance tests

A key part of acceptance criteria is that they are automated. They are not simply stored in a Word document or Excel spreadsheet, but are living, executable tests. This is important -- for ATDD to be effective, the automated acceptance tests need to be run automatically whenever a change is made to the source code. So it is vitally important to have a tool that will integrate smoothly into your build process, and that can be run on your build server with no human intervention.

Automated acceptance tests not only serve to test the application: they also provide an objective measurement of progress (in agile projects, working software is considered to be the only true measure of progress). The tests can also give an idea of the relative complexity of each feature and story, because a functionality that is long and complicated to test is likely to also be long and complicated to develop. This in turn can give a useful heads-up to product owners needing to set priorities.

Although you certainly can write automated acceptance tests using conventional unit testing tools such as TestNG, there are a number of dedicated ATDD tools. These tools are focused as much on communication and feedback as they are on testing.

ATDD tools

ATDD is more an approach than a toolset, but there are a number of tools that can make things easier.

FitNesse is one of the earliest ATDD tools. Using FitNesse, users enter their requirements in tabular form in a Wiki, and developers write code behind the scenes to run the test data stored in the Wiki against the actual application. When the tests are executed, the table will be colored according to whether the tests succeeded or failed. FitNesse is very useful when your acceptance test criteria can be expressed in terms of tables of data and expected results, although it is also used to express acceptance tests as a series of steps.
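As an illustration of the tabular style, the invoice-approval rule might be captured in a FitNesse decision table along these lines (the fixture and column names here are hypothetical, not taken from a real project); the column ending in "?" holds the expected result, and developers write a fixture class behind the scenes to run each row against the application:

```
|Invoice Approval            |
|invoice amount|agreed maximum|approved?|
|500           |1000          |yes      |
|1500          |1000          |no       |
```

When the tests run, FitNesse colors each result cell green or red depending on whether the actual outcome matches the expected one.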

More recently, other tools have emerged that support Behaviour-Driven Development, or BDD. This technique encourages developers to think in terms of the behaviour of an application, and to express even low-level technical requirements using a narrative approach. Cucumber, a popular tool from the Ruby community, lets you express your acceptance criteria using the "given-when-then" structure commonly used in agile projects, and it is also easy to use Cucumber with Java. JBehave takes a similar approach, with stories expressed in text files and tests written as annotated Java classes. Easyb is a similar tool based on the Groovy language.
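As a sketch of the "given-when-then" structure these tools support, the invoice-approval criterion described earlier might be written as a Cucumber feature file like this (the wording is illustrative, not taken from a real project):

```gherkin
Feature: Invoice approval

  Scenario: User can approve an invoice for an amount less than the agreed maximum
    Given the User has selected an open invoice
    And the User has chosen to approve the invoice
    And the invoice amount is less than the agreed maximum amount
    When the User completes the action
    Then the invoice should be successfully approved
```

Each Given, When, and Then line is then bound to step-definition code that drives the actual application.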

Concordion is another more recent ATDD tool. In Concordion, acceptance tests are expressed in the form of HTML pages containing free-form text and tables. Java classes are then used to analyze special tags placed in these pages, in order to execute and display the results in HTML form.

All of these tools place a high emphasis on readability and communication. Listing 1 illustrates how one of the earlier acceptance criteria might be expressed using Easyb:

Listing 1. A user scenario in Easyb

scenario "User can approve an invoice for an amount less than the agreed maximum", {
    given "the User has selected an open invoice"
    and "the User has chosen to approve the invoice"
    and "the invoice amount is less than the agreed maximum amount"
    when "the User completes the action"
    then "the invoice should be successfully approved"
}

Once the acceptance criteria are defined in this way, the corresponding test code can then be written in more conventional programming languages such as Java, Groovy, and Ruby.

In addition to showcasing Easyb, this code snippet shows the communication focus of ATDD tools. Automated acceptance criteria are expressed in high-level terms that make sense to business managers as much as to software engineers and programmers. Most ATDD tools also generate reports that express the test results in familiar business terms. Tests that have been written in this way, but with no backing test code, are marked as "pending." At the start of an iteration, all of the acceptance criteria will be in this state; as development progresses, each is implemented in turn, which is where the actual code that exercises the application is written. So these reports not only tell you which tests pass and fail, they also track the progress of your project by indicating what work remains to be done.
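The pending-step mechanism can be sketched in plain Java. This is a toy illustration of the idea, not the implementation any of these tools actually uses: steps with registered test code run and pass or fail, while steps with no backing code are reported as pending:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch of how a BDD tool tracks progress: each step phrase either has
// backing test code (a Runnable) or is still "pending". Not real tool code.
class StepRunner {
    private final Map<String, Runnable> implementations = new LinkedHashMap<>();

    // Registers backing test code for a step phrase.
    void implement(String step, Runnable code) {
        implementations.put(step, code);
    }

    // Runs a step if it has backing code; otherwise reports it as pending.
    String run(String step) {
        Runnable code = implementations.get(step);
        if (code == null) {
            return "PENDING: " + step;
        }
        code.run();
        return "PASSED: " + step;
    }
}

class StepRunnerDemo {
    public static void main(String[] args) {
        StepRunner runner = new StepRunner();
        runner.implement("the User has selected an open invoice",
                () -> { /* drive the application here */ });

        // An implemented step runs; an unimplemented one is reported pending.
        System.out.println(runner.run("the User has selected an open invoice"));
        System.out.println(runner.run("the invoice should be successfully approved"));
    }
}
```

At the start of an iteration every step would be in the pending state, and the report shrinks toward zero pending steps as the work is completed.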

Taking a slightly broader perspective, automated acceptance tests are like any other automated tests -- they should be stored in your version control system and executed periodically on your Continuous Integration server (at least on a nightly basis, but preferably whenever a change is made to the application source code). Getting fast feedback when acceptance tests fail is essential. You can also configure your CI server to publish the results of the acceptance tests where they can be easily consulted by non-developers. Fortunately, modern CI tools such as Jenkins integrate well with virtually all of the common BDD tools.

Automating acceptance tests for web applications

When it comes to implementing ATDD for a web application, a wide range of open source and commercial tools are available. Given this wide range, choosing your tool with care is important; it can mean the difference between a set of automated acceptance tests that is easy to maintain in the future, and one that quickly becomes unusable due to prohibitive maintenance costs.

Modern automated web testing tools, both commercial and open source, fall into three categories:

  • Record/Replay
  • Script-based
  • Page Objects

Record/Replay tools, such as Selenium IDE and JAutomate, let a user step through a web application, recording the user's actions as a test script. While tempting in its simplicity, this approach is in fact a poor strategy: the low-level scripts generated by these tools are fragile and hard to maintain. For example, there is no reuse of testing logic between scripts, which makes maintaining them very costly.

Script-based testing is a slightly more flexible strategy. Tools such as Selenium, Watir, Canoo WebTest, and the commercial Quick Test Pro fall into this category. Tests are written in a programming language such as Java, Ruby, or VBScript. However, this strategy is still quite low-level, focusing on the technical details of the web tests rather than the business requirements they verify. It also requires strong discipline and structure to avoid duplication within the scripts. Again, this tends to make tests fragile and hard to maintain.

Good automated acceptance tests should be high-level and expressed in business terms. They need to isolate the "what" from the "how." Doing so ensures that, if the implementation details of a particular screen change, the changes affect only the low-level test code, not the high-level tests. Ideally, you want to maintain a level of abstraction between what a web page does in business terms ("Approve an invoice") and how it does it ("click the invoice in the invoice list, wait for the details to appear, then click the Approve button").

The Page Objects pattern, well supported by Selenium 2/WebDriver in particular, is an excellent choice for ATDD tests. High-level acceptance criteria need to be expressed in high-level business terms (the "what"), and then implemented under the hood using a set of well-structured, maintainable page objects. For example, an automated acceptance test will be expressed in business terms, and implemented as a series of steps. Each step will make use of page objects to interact with the web application. These levels of abstraction make the acceptance tests considerably more stable and maintainable.
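The layering can be sketched as follows. All class and method names here are hypothetical; in a real suite each page object would wrap a Selenium WebDriver instance, but the browser interaction is stubbed out below so that only the structure is visible:

```java
// Sketch of the Page Objects layering: acceptance test -> steps -> page
// objects. Names are hypothetical; in a real suite the commented-out calls
// would be Selenium WebDriver element lookups and clicks.
class InvoiceListPage {
    void openInvoice(String invoiceId) {
        // driver.findElement(By.id(invoiceId)).click();
        // ...then wait for the details panel to appear
    }
}

class InvoiceDetailsPage {
    private boolean approved = false;

    void clickApprove() {
        // driver.findElement(By.id("approve-button")).click();
        approved = true; // stand-in for reading the resulting page state
    }

    boolean showsApprovedStatus() {
        return approved;
    }
}

// Step layer: expresses the "what" in business terms and delegates the "how"
// to the page objects. Only this layer knows which pages are involved.
class InvoiceSteps {
    private final InvoiceListPage list = new InvoiceListPage();
    private final InvoiceDetailsPage details = new InvoiceDetailsPage();

    void approveInvoice(String invoiceId) {
        list.openInvoice(invoiceId);
        details.clickApprove();
    }

    boolean invoiceIsApproved() {
        return details.showsApprovedStatus();
    }
}

class ApproveInvoiceAcceptanceTest {
    public static void main(String[] args) {
        // The acceptance test reads purely in business terms.
        InvoiceSteps steps = new InvoiceSteps();
        steps.approveInvoice("INV-001");
        if (!steps.invoiceIsApproved()) {
            throw new AssertionError("invoice should be approved");
        }
        System.out.println("Invoice approval scenario passed");
    }
}
```

If the screen layout changes, only the page object classes need updating; the step and acceptance-test layers remain untouched, which is exactly what keeps these suites maintainable.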

In conclusion

Defining and automating your acceptance criteria up front makes a lot of sense. Not only does it provide clear goals for developers, it also gives excellent visibility into what features are being implemented, how they will work, and how the project as a whole is progressing. And, as a bonus, ATDD will also provide you with a broad set of regression tests.

Many open source tools exist to help you implement an ATDD strategy in your project, including the ones discussed in this article. While you can use conventional unit testing tools for ATDD, dedicated ATDD tools provide a stronger emphasis on communication and reporting, which are key parts of the ATDD approach. And for web applications, automated testing tools based on the Page Objects pattern are an excellent choice when it comes to implementing the tests themselves.

John Ferguson Smart is an experienced consultant specialising in enterprise Java, web development, and open source technologies, currently based in Wellington, New Zealand. Well known in the Java community for his many published articles, and as author of Java Power Tools, John helps organisations to optimize their Java development processes and infrastructures and provides training and mentoring in open source technologies, SDLC tools, and agile development processes.
