Fit for analysts and developers

Test-first development from the user perspective

In the real world, applications keep growing in size and complexity, and they change frequently; thus, the need for continuous testing keeps increasing. Extreme programming (XP) prescribes automated acceptance testing so that tests can be run often, facilitating regression testing at a low cost. XP also insists that customers specify the acceptance tests, keep them updated as the requirements change, and use these tests for test-driven development (TDD).

Automated unit tests are quite common nowadays; however, most acceptance tests remain manual. Many commercial test automation tools are available, but their cost and the effort they require are so high that most project teams end up doing acceptance testing manually. The major roadblock to automating user acceptance testing has been the lack of easy-to-use tools and frameworks. In this article, I show you how the Framework for Integrated Test (Fit) makes it easy to automate acceptance tests, and how it doubles as an effective tool for communication and collaboration between users and developers.

Roles
A typical development team may include many roles, such as user, customer, domain expert, tester, developer, or architect. For the sake of simplicity, I use just two roles in this article: analyst and developer. You can think of an analyst as a user, customer, domain expert, tester, or anyone who provides and/or clarifies requirements and is involved in the acceptance testing process. A developer could translate to an architect, a programmer, or anyone developing the product itself.

What is Fit?

Framework for Integrated Test (Fit) is an open source framework for user acceptance testing and a tool for enhancing communication and collaboration between analysts and developers. Fit lets analysts write acceptance tests as simple HTML tables. Developers write fixtures to link the test cases to the actual system. Fit compares the expected values in the HTML tables with the actual values returned by the system through the fixtures, and highlights the results with colors and annotations.

Just two steps are required to automate user acceptance tests using Fit:

  1. Express a test case in the form of a Fit table
  2. Write the glue code in Java, called a fixture, that bridges the test case and system under test

That's it! You are all set to execute the tests automatically for the rest of the application's lifetime.

To work with Fit, you must know and understand four basic elements:

  1. Fit table
  2. Fixture
  3. Core fixtures
  4. Test runner

Fit table

A Fit table is a way of expressing business logic using a simple HTML table. The examples it captures help developers better understand the requirements, and they double as acceptance test cases. Analysts create Fit tables using a tool like MS Word, MS Excel, or even a text editor (assuming familiarity with HTML tags). There are different types of Fit tables, which I discuss later in this article.

Fixture

A fixture is an interface between the test instrumentation (in our case, the Fit framework), test cases (Fit tables), and the system under test (SUT). Fixtures are Java classes usually written by developers.

Figure 1. Relationship between Fit table, fixture, and SUT

In general, there is a one-to-one mapping between a Fit table and a fixture. The simplicity of Fit lies in the idea that you can express and test your requirements using one or more of the core fixtures.

Core fixtures

Fit provides three core fixtures:

  1. Column fixture for testing calculations
  2. Action fixture for testing the user interfaces or workflow
  3. Row fixture for validating a collection of domain objects
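
This article demonstrates the first two in detail. To give a flavor of the third, here is a minimal, hypothetical sketch of a row fixture that validates a collection of Team domain objects (the Team class appears later in this article; the TeamSearch helper is assumed purely for illustration):

package sample;

import businessObjects.Team;
import fit.RowFixture;

// Hypothetical sketch: a row fixture checks a collection of objects
// from the system against the rows of a Fit table.
public class VerifyTopTeams extends RowFixture {

    // Each row of the Fit table maps to one object of this class.
    public Class getTargetClass() {
        return Team.class;
    }

    // Return the actual collection from the system under test;
    // TeamSearch is an assumed helper, not part of this article's code.
    public Object[] query() throws Exception {
        return TeamSearch.topTeams(2).toArray();
    }
}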

Test runner

Fit provides a main driver class, fit.FileRunner, that can be used to execute tests. FileRunner takes two parameters: the name of an input HTML file that has one or more test cases expressed as Fit tables and the name of an output file where Fit records test results.
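
A generic invocation looks like this (the jar and file names here are placeholders; note that the classpath separator is ';' on Windows and ':' on Unix-like systems):

 java -cp fit.jar;fixtures.jar fit.FileRunner tests.html results.html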

Fit in action

To illustrate how to use Fit in a real project scenario, I'll walk you through a development cycle highlighting the key activities in the requirements-definition, development, test-case-preparation, and testing phases.

Requirements

A sports magazine decides to add a new feature to its Website that will allow users to view top football teams based on their ratings. An analyst and a developer get together to discuss the requirements. The outcome of the discussion is a user story card that summarizes the requirements, a set of acceptance tests, and an Excel file with sample data, as illustrated in the following three figures.

Figure 2. Front side of the user story card: Requirements
Figure 3. Back side of the user story card: Acceptance tests
Figure 4. Excel file with sample data

Now that we have the first-cut requirements, let's try our hand at test-first development using Fit. We start with the first user test, automate it, develop the code required for the test to pass, and repeat the cycle for all the tests.

Test calculations using column fixture

For a team, given the number of matches played, won, drawn, and lost, we need to verify that the ratings are calculated properly. The first step is to express the logic using a Fit table. The sample table created using MS Excel during the requirements discussion could be easily converted into a Fit table by just adding a fixture name and modifying the labels.

sample.VerifyRating
team name | played | won | drawn | lost | rating ()
Arsenal | 38 | 31 | 2 | 5 | 83
Aston Villa | 38 | 20 | 2 | 16 | 54
Chelsea | 38 | 35 | 1 | 2 | 93
Dummy | 38 | 35 | 1 | 2 | 100
Wigan | 38 | 26 | 7 | 5 | 75
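
Behind the scenes, this is nothing more than a plain HTML table; any tool that produces well-formed markup will do, since Fit ignores styling. The source for the fixture row, the label row, and the first test case might look like this sketch:

<table border="1">
  <tr><td>sample.VerifyRating</td></tr>
  <tr>
    <td>team name</td><td>played</td><td>won</td>
    <td>drawn</td><td>lost</td><td>rating ()</td>
  </tr>
  <tr>
    <td>Arsenal</td><td>38</td><td>31</td>
    <td>2</td><td>5</td><td>83</td>
  </tr>
</table>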

The above Fit table represents the first acceptance test case, which verifies the rating calculation. The table has seven rows. The top cell holds the fully qualified name of the fixture (sample.VerifyRating) used to execute the test cases represented by this Fit table. The second row lists the input attribute names (Columns 1 through 5) and the name of the calculated value (Column 6). The parentheses () after the attribute name rating in the sixth column denote that it's a calculated value. Rows 3 through 7 are the test cases. Figure 5 is a dissection of the above Fit table.

Figure 5. Fit table: Column fixture

The Fit table corresponding to a column fixture follows a certain format as described by the table below.

Row | Purpose | Notes for the analyst | Notes for the developer
1 | Fixture identifier | The first column has the name of the column fixture written by the developer. Fit uses the first column only and ignores the rest. | Fully qualified name of the class that will extend fit.ColumnFixture.
2 | Label | Add a column for each input attribute or expected calculated value. The labels for calculated values include parentheses () so that Fit can recognize them as calculated values. | An input attribute translates to a public field (public <type> <labelName>) in the fixture. A calculated value translates to a public method with the following signature: public <type> <labelName>(). Follow camel notation to translate a label to a field/method name.
3 - n | Test case(s) | Specify the input and expected calculated values. Each calculated value is a test case. | Fit converts the input attributes to the appropriate types and sets them in the corresponding fields in the fixture. For each of the calculated values, Fit calls the appropriate method to get the actual result and matches it against the expected value.

Now that we have created the Fit table, we need to write the glue code that will bridge the test case to the system under test:

 

package sample;

import businessObjects.Team;
import fit.ColumnFixture;

public class VerifyRating extends ColumnFixture {

    public String teamName;
    public int played;
    public int won;
    public int drawn;
    public int lost;

    Team team = null;

    public long rating() {
        team = new Team(teamName, played, won, drawn, lost);
        return team.rating;
    }
}

The domain object representing a football team is shown below:

 

package businessObjects;

public class Team {

    public String name;
    public int played;
    public int won;
    public int drawn;
    public int lost;
    public int rating;

    public Team(String name, int played, int won, int drawn, int lost) {
        this.name = name;
        this.played = played;
        this.won = won;
        this.drawn = drawn;
        this.lost = lost;
        calculateRating();
    }

    private void calculateRating() {
        float value = ((10000f * (won * 3 + drawn)) / (3 * played)) / 100;
        rating = Math.round(value);
    }
}
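
To make the formula concrete: a win is worth three points and a draw one, so the rating is essentially the percentage of available points actually won. For the Arsenal row, that is 31 * 3 + 2 = 95 points out of a possible 3 * 38 = 114; 10000 * 95 / 114 / 100 is roughly 83.3, which Math.round turns into the expected 83.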

The test case we have at hand is related to calculations, so we create a new fixture, sample.VerifyRating, that extends fit.ColumnFixture. For each input attribute represented by Columns 1 through 5 in the second row of the Fit table, there is a public field with the same name as the label, using camel notation. Notice that between the table and the fixture, "team name" translates to teamName. A public method, public long rating(), corresponds to the calculation in the sixth column. The rating() method in VerifyRating creates a Team object using the input data specified by the test case and returns the rating from the Team object; this is where the bridging between the test case and the system under test happens.

Let's execute the test using the FileRunner:

 java -cp fit.jar;ratings.jar fit.FileRunner VerifyRatingTest.html VerifyRatingResults.html
4 right, 1 wrong, 0 ignored, 0 exceptions

Wow, we have automated the first user acceptance test case! The test results generated by Fit are shown below.

sample.VerifyRating
team name | played | won | drawn | lost | rating()
Arsenal | 38 | 31 | 2 | 5 | 83
Aston Villa | 38 | 20 | 2 | 16 | 54
Chelsea | 38 | 35 | 1 | 2 | 93
Dummy | 38 | 35 | 1 | 2 | 100 expected / 93 actual
Wigan | 38 | 26 | 7 | 5 | 75

Here is what happens when you run the test: Fit parses the table and creates an instance of sample.VerifyRating. For each row in Rows 3 through 7, Fit uses reflection to set the values specified in Columns 1 through 5 to the corresponding fields in the fixture. The rating() method is executed to get the actual value to be compared against the expected value specified in the sixth column. If the expected value matches the actual value, then the test passes; otherwise it fails. Fit produces an output table like the one shown above. It's the same table that we used as input, with the cells colored in green to indicate that the test case passed. The failed ones are colored in red with the expected value and the actual value listed in the cell.
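
Conceptually, the binding between the table and the fixture is plain reflection. The following simplified sketch illustrates the mechanism for the Arsenal row (an illustration only, not Fit's actual source; the real framework also handles type parsing, adapters, and result annotation):

import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class ColumnFixtureSketch {
    public static void main(String[] args) throws Exception {
        sample.VerifyRating fixture = new sample.VerifyRating();

        // "Set" phase: input cells are copied into public fields named
        // after the column labels.
        fixture.getClass().getField("teamName").set(fixture, "Arsenal");
        String[] labels = {"played", "won", "drawn", "lost"};
        int[] values = {38, 31, 2, 5};
        for (int i = 0; i < labels.length; i++) {
            Field field = fixture.getClass().getField(labels[i]);
            field.setInt(fixture, values[i]);
        }

        // "Check" phase: the calculated-value method is invoked and the
        // result compared against the expected cell (83 for Arsenal).
        Method rating = fixture.getClass().getMethod("rating");
        long actual = (Long) rating.invoke(fixture);
        System.out.println(actual == 83 ? "pass (green)" : "fail (red)");
    }
}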

Note: By convention, fixtures use green to indicate success, red to indicate failure, yellow for an exception, a gray background to indicate that the cell was ignored, and gray text to indicate a blank cell filled with the actual value from the system.

Refactoring test cases

Let's move on to the next test case: search for the top two teams using the screen and validate the search results. This step involves a screen (a prototype appears in Figure 6) through which the user provides input, clicks a button, and verifies that the results returned match the expected collection of objects.

Figure 6. Screen prototype

As I mentioned earlier, we use an action fixture to test the user interface and a row fixture to examine a collection of objects. The second test case includes both. So how can we express it using a Fit table? It's simple: Split it into two test cases so they can fit into Fit: one test case for the user interface actions and another to validate the results. I call this process of breaking a complex test case into smaller ones that can be expressed in terms of the core fixtures, without losing the intent of the original, refactoring test cases.

Testing a screen using an action fixture

The action fixture supports four commands that you can use to simulate the actions that a user would perform on a screen. The commands are start, enter, press, and check. The following table summarizes the action commands, the parameters they take, and what they do.

Action command | Parameter 1 | Parameter 2 | Notes for the analyst | Notes for the developer
start | Name of the fixture to execute | n/a | Starts the custom fixture developed for this test case. | Starts a new fixture specified by Parameter 1. The remaining commands (enter, press, and check) act on this fixture, a descendant of fit.Fixture.
enter | Name of the input element | Input data | Simulates entering test data through the screen. | The input data is passed to the fixture specified in the start command. The fixture has a method with the following signature: public void <nameOfTheInputElement>(<type> param). This method stores the data passed through the parameter internally for later use.
press | Name of the button | n/a | Simulates a button click. | The fixture has a method with the following signature: public void <nameOfTheButton>(). This method simulates an action such as submitting data and searching for results.
check | Name of the element to check | Value | Specifies the expected value (Parameter 2) of the element identified by Parameter 1. | The fixture has a method with the following signature: public <type> <nameOfTheElement>(). This method returns the actual value to be matched against Parameter 2.

Let's tackle the user interface part first. We are back to creating a Fit table, this time for an action fixture. A typical usage of the screen could be described something like this: the user types 2 in the number of top teams text box, clicks the Search button, and expects to see the top two teams displayed. The following table represents this usage scenario in a way Fit can understand.

fit.ActionFixture
start | sample.VerifyWorkflow
enter | number of top teams | 2
press | search
check | number of results | 2
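
A fixture capable of being driven by this table might look like the following sketch. It is hypothetical: the method names simply follow the camel-notation translation of the labels described in the table above, and TeamSearch is an assumed stand-in for the system under test, not part of this article's code:

package sample;

import java.util.List;

import businessObjects.Team;
import fit.Fixture;

// Hypothetical sketch of the action fixture started by the table above.
public class VerifyWorkflow extends Fixture {

    private int numberOfTopTeams;
    private List<Team> results;

    // enter | number of top teams | 2
    public void numberOfTopTeams(int count) {
        this.numberOfTopTeams = count;
    }

    // press | search
    public void search() {
        // Query the system under test; TeamSearch is assumed for illustration.
        results = TeamSearch.topTeams(numberOfTopTeams);
    }

    // check | number of results | 2
    public int numberOfResults() {
        return results.size();
    }
}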