Open source Java projects: Jakarta Cactus

Test-driven development for server-side applications

Unit tests require a granularity that is hard to achieve when testing components inside of a server-side container -- which is exactly why some test-driven developers use Jakarta Cactus. Cactus extends the popular JUnit testing framework with an in-container strategy that enables you to execute test cases for servlets, EJBs, and other server-side code. In this Open source Java projects installment, Steven Haines shows you how to write Cactus test cases for a servlet and run them automatically.

Software developers today have widely embraced test-driven development (TDD) for the simple reason that tested code works better. Furthermore, if you introduce new code into a working application and it inadvertently breaks something, your test cases will show you what specific functionality is broken and where. For example, suppose your "test sorting with wildcards" test case fails, stating that it expected 10 results from the search string "happy*" but only 8 were returned. You know which specific results are missing, so you know where to start your investigations -- all from a test case that you might have written a year ago!
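The wildcard scenario above can be sketched as a plain-Java check. The `search` method here is hypothetical and exists only for illustration; a real project would put the assertion in a JUnit test case instead of `main`:

```java
import java.util.ArrayList;
import java.util.List;

public class WildcardSearchTest {
    // Hypothetical search: return items matching a trailing-* wildcard query.
    static List<String> search(List<String> items, String query) {
        String prefix = query.endsWith("*")
                ? query.substring(0, query.length() - 1)
                : query;
        List<String> results = new ArrayList<>();
        for (String item : items) {
            if (item.startsWith(prefix)) results.add(item);
        }
        return results;
    }

    public static void main(String[] args) {
        List<String> items = List.of("happy1", "happy2", "sad1");
        List<String> results = search(items, "happy*");
        // A size mismatch here tells you exactly which results went
        // missing -- the kind of pinpointing described above.
        if (results.size() != 2) {
            throw new AssertionError("expected 2 results, got " + results.size());
        }
        System.out.println("test passed");
    }
}
```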

TDD for more complex components such as servlets and EJBs (Enterprise JavaBeans), though, can be problematic, because these components expect to run inside a container. Sure, you can build a mock servlet or EJB container and test your code against the specifications. That's a valid test that might be worth doing. But a container's idiosyncratic behaviors might cause your application to behave differently in different deployment environments. This is a problem because TDD dictates that tests be run automatically, whereas deploying an application to a container and testing it with the granularity required of a unit test is a cumbersome manual process.

Even if you use a solution for automatic deployment (such as Cargo), unit-test-level granularity is still an issue. For example, you can deploy an application to a container, execute some piece of functionality, and then validate the results -- but that doesn't test internal servlet methods or examine internal variables such as the session or servlet context to see if variables are set appropriately. It only allows for external "black-box" tests.

Origins of TDD

TDD is widely considered to have been formalized by Kent Beck in his book Test-Driven Development: By Example, which addressed the problems with the state of code testing and the resulting poor quality of applications. In traditional development environments, developers spend considerable time writing code and integrating their components into an application; they then pass that application to quality assurance (QA) for testing. QA tests the application at the business test-case level, which amounts to black-box testing: when I invoke functionality X, I expect to see results Y. If the application returns the correct results, it passes; if not, it fails.

The challenge with this level of testing is that business test cases alone can't effectively exercise the code's underlying complexity. For example, a search function might follow a different path through the code depending on its input; the business test cases might not call out each of those paths explicitly, even though a code optimization might introduce them. Nor will business test cases necessarily exercise functionality that users eventually will. So one of the tenets of TDD is that developers write their own test cases: they understand the code paths that different options can invoke, and so are best suited to write accurate tests.

Jakarta Cactus provides a framework for testing servlets and EJBs running inside different containers to obtain unit-test-level metrics. You can determine not only whether your application is returning the correct results, but also whether the application is behaving correctly internally. This article describes how to use Jakarta Cactus to write JUnit-style test cases to test a Web application inside multiple Web containers. You'll also learn how to automate your tests using Apache Ant, which you can easily extend to be launched by a Continuous Integration server like Hudson.
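To make this concrete, here is a sketch of what a Cactus test case for a servlet looks like. Cactus test classes extend `ServletTestCase`, which supplies the implicit `request`, `response`, `session`, and `config` objects inside the container; the `SearchServlet` class, its parameter, and the session attribute named here are assumptions for illustration, not part of Cactus:

```java
import org.apache.cactus.ServletTestCase;
import org.apache.cactus.WebRequest;
import org.apache.cactus.WebResponse;

public class SearchServletTest extends ServletTestCase {

    // Runs on the client side before the test: set up the HTTP request.
    public void beginSearch(WebRequest webRequest) {
        webRequest.addParameter("query", "happy*");  // hypothetical parameter
    }

    // Runs inside the container, so the implicit request/session objects
    // are the real ones provided by the Web container.
    public void testSearch() throws Exception {
        SearchServlet servlet = new SearchServlet();  // hypothetical servlet
        servlet.init(config);                         // implicit ServletConfig
        servlet.doGet(request, response);             // implicit request/response

        // White-box check: examine internal state, not just the output.
        assertNotNull(session.getAttribute("searchResults"));
    }

    // Runs back on the client side: verify the HTTP response.
    public void endSearch(WebResponse webResponse) {
        assertTrue(webResponse.getText().indexOf("happy") != -1);
    }
}
```

This illustrates the in-container strategy: `testSearch()` executes inside the servlet container, while the `beginXXX()` and `endXXX()` methods run in the client JVM on either side of the HTTP round trip.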

TDD basics

TDD is implemented through the following steps, illustrated in Figure 1:

  1. Add a new test to the test suite
  2. Prove that the test fails
  3. Implement the new functionality
  4. Prove that the test succeeds
  5. Refactor the code
Figure 1. The TDD process

You write a new test before writing the code it exercises so that you can prove the test fails; this validates the test itself. If you write a flawed test case and it succeeds anyway, it might not be detecting the problems it is meant to identify. But if you can prove that the test fails without your code and succeeds with it, then you can have confidence in the test. Finally, once you have a valid test case, you are free to refactor the code into a more elegant solution (because if you inadvertently break the code, your test case will find it!).
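The red-green cycle can be sketched in plain Java without any framework; the `Greeter` class and its method are purely illustrative:

```java
public class TddCycleSketch {

    static class Greeter {
        String greet(String name) {
            // Step 2 (prove the test fails): start with a failing stub, e.g.
            //   throw new UnsupportedOperationException("not implemented");
            // Step 3 (implement the functionality):
            return "Hello, " + name + "!";
        }
    }

    public static void main(String[] args) {
        // Step 1: the test, written before the implementation existed.
        String actual = new Greeter().greet("Cactus");
        if (!"Hello, Cactus!".equals(actual)) {
            throw new AssertionError("expected greeting, got: " + actual);
        }
        // Step 4: with the implementation in place, the test passes; you are
        // now free to refactor greet() knowing this check guards against regressions.
        System.out.println("test passes");
    }
}
```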
