JMeter tips

Improve the quality of your JMeter scripts


Figures 5 and 6 below show two normal distributions. In our context, the horizontal axis is the sampling mean of response time, shifted so the population mean is at the origin. Figure 5 shows that 90 percent of the time, the sampling means are within the interval ±Zσ, where Z=1.645 and σ is the standard deviation. Figure 6 shows the 99-percent case, where Z=2.576. For a given probability, say 90 percent, we can look up the corresponding Z value from a normal-curve table, and vice versa.

Figure 5. Z value for 90 percent
Figure 6. Z value for 99 percent

A few websites for normal curve calculation are listed in Resources. Note that on those sites, we can calculate the probability of either a symmetric bounded region (e.g., -1.5 < X < 1.5) or a cumulative area (e.g., X < 1.5). You may also look up approximate values from the tables below.

Table 1. Standard deviation range corresponding to a given confidence interval
Confidence Interval    Z
0.800                  ±1.28155
0.900                  ±1.64485
0.950                  ±1.95996
0.990                  ±2.57583
0.995                  ±2.80703
0.999                  ±3.29053

Table 2. Confidence interval corresponding to given standard deviation
Z    Confidence Interval
1    0.6826895
2    0.9544997
3    0.9973002
4    0.9999366
5    0.9999994
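
If you would rather compute these figures than look them up, the conversion between Z and confidence level is simply the error function: P(-Z < X < Z) = erf(Z/√2). Below is a minimal stand-alone Java sketch, not part of JMeter (the class name and the erf() helper are illustrative), that reproduces the values in Tables 1 and 2 using the Abramowitz and Stegun polynomial approximation of erf:

  // ConfidenceLevel.java: a minimal sketch that reproduces Tables 1 and 2.
  // erf() uses Abramowitz and Stegun approximation 7.1.26
  // (absolute error below 1.5e-7).
  public class ConfidenceLevel {

      // Polynomial approximation of the error function.
      static double erf(double x) {
          double sign = x < 0 ? -1.0 : 1.0;
          x = Math.abs(x);
          double t = 1.0 / (1.0 + 0.3275911 * x);
          double y = (((((1.061405429 * t - 1.453152027) * t
                  + 1.421413741) * t - 0.284496736) * t + 0.254829592) * t)
                  * Math.exp(-x * x);
          return sign * (1.0 - y);
      }

      // P(-z < X < z) for a standard normal variable = erf(z / sqrt(2)).
      static double confidence(double z) {
          return erf(z / Math.sqrt(2.0));
      }

      public static void main(String[] args) {
          // Spot-check Table 1: Z values to confidence levels.
          for (double z : new double[] {1.28155, 1.64485, 1.95996, 2.57583}) {
              System.out.printf("Z = %.5f -> %.4f%n", z, confidence(z));
          }
          // Spot-check Table 2: whole standard deviations.
          for (int z = 1; z <= 5; z++) {
              System.out.printf("Z = %d -> %.7f%n", z, confidence(z));
          }
      }
  }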

Confidence interval

The confidence interval is defined as [sampling mean - Z*σ/√n, sampling mean + Z*σ/√n]. For example, for a 90-percent confidence level, we look up the Z value, 1.645, and the confidence interval is [sampling mean - 1.645*σ/√n, sampling mean + 1.645*σ/√n]. This means that 90 percent of the time, the (unknown) population mean falls within this interval; in other words, our measurement is "close." Note that a larger σ produces a wider confidence interval, which makes it more likely that the interval's upper bound will exceed an acceptable value. That is, the larger σ is, the more likely the result is unacceptable.
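
As a quick sanity check of the formula, here is a minimal Java sketch (again, not a JMeter component; the class and method names are illustrative) that computes the interval from a sampling mean, standard deviation, sample size, and a Z value from Table 1:

  // ConfidenceInterval.java: a minimal sketch of the formula above.
  public class ConfidenceInterval {

      // Returns {lower, upper} = sampling mean -/+ z * sd / sqrt(n).
      static double[] interval(double mean, double sd, long n, double z) {
          double halfWidth = z * sd / Math.sqrt(n);
          return new double[] { mean - halfWidth, mean + halfWidth };
      }

      public static void main(String[] args) {
          // Illustrative numbers: mean 4.0 s, sd 2.0 s, 100 samples,
          // Z = 1.64485 for a 90-percent confidence level (Table 1).
          double[] ci = interval(4.0, 2.0, 100, 1.64485);
          System.out.printf("90%% CI: [%.2f, %.2f]%n", ci[0], ci[1]);
      }
  }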

Response-time requirements

Let's translate all this information into response-time requirements. First, you can define the performance requirements like so: The upper bound of the 95-percent confidence interval of the average response time must be less than 5 seconds. Of course, you must add loading requirements and specify a particular scenario as well.

Now, after the performance tests, suppose you analyze the results and discover that the average response time is 4.5 seconds, while the standard deviation is 4.9 seconds. The sample size is 120. You then calculate the 95-percent confidence interval. By looking in Table 1, you find the Z value is 1.95996. Therefore the confidence interval is [4.5 - 1.95996*4.9/√120, 4.5 + 1.95996*4.9/√120], which is [3.62, 5.38]. The result is not acceptable, even though the average response time looks pretty good. In fact, you can verify that the result is not acceptable even for an 80-percent confidence interval. As you can see, applying confidence interval analysis gives you a much more precise method for estimating the quality of your tests.
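
Plugging the numbers above into a few lines of Java confirms the arithmetic; this is simply the same formula with the article's figures:

  // WorkedExample.java: the interval formula with the figures above.
  public class WorkedExample {
      public static void main(String[] args) {
          double mean = 4.5, sd = 4.9;
          long n = 120;
          double h95 = 1.95996 * sd / Math.sqrt(n);  // half-width, ~0.88 s
          double h80 = 1.28155 * sd / Math.sqrt(n);  // half-width, ~0.57 s
          // 95 percent: [3.62, 5.38] -- the upper bound exceeds 5 seconds
          System.out.printf("95%% CI: [%.2f, %.2f]%n", mean - h95, mean + h95);
          // 80 percent: [3.93, 5.07] -- still exceeds 5 seconds
          System.out.printf("80%% CI: [%.2f, %.2f]%n", mean - h80, mean + h80);
      }
  }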

Note that in the context of Web applications, to measure a scenario's response time, we typically need to instruct the load-testing tool to send multiple requests, for example:

  1. Login
  2. Display a form
  3. Submit the form

Assume we are interested in Request 3. To conduct a confidence interval analysis, we need the average response time and the standard deviation of all of Request 3's samples, not the statistics of all samples.

Note that JMeter's Graph Result listener calculates the average response time and standard deviation of all requests. JMeter's Aggregate Report listener calculates the average response time of individual samplers for you, but, unfortunately, does not give the standard deviation.
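
One workaround is to compute the per-sampler statistics yourself from a saved results file. The rough sketch below assumes a CSV-format JTL whose header row contains JMeter's default label and elapsed columns, and sampler labels without embedded commas (use a real CSV parser otherwise); the file name and sampler label are placeholders:

  import java.io.BufferedReader;
  import java.io.FileReader;
  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.Arrays;
  import java.util.List;

  // SamplerStats.java: a rough sketch, not a JMeter feature. Computes the
  // mean and standard deviation of one sampler's response times from a
  // CSV-format JTL results file.
  public class SamplerStats {
      public static void main(String[] args) throws IOException {
          String file = "results.jtl";          // placeholder path
          String target = "Submit the form";    // placeholder sampler label

          List<Double> times = new ArrayList<>();
          try (BufferedReader in = new BufferedReader(new FileReader(file))) {
              List<String> header = Arrays.asList(in.readLine().split(","));
              int labelCol = header.indexOf("label");
              int elapsedCol = header.indexOf("elapsed");
              String line;
              while ((line = in.readLine()) != null) {
                  String[] cols = line.split(",");
                  if (cols[labelCol].equals(target)) {
                      times.add(Double.parseDouble(cols[elapsedCol]));
                  }
              }
          }

          int n = times.size();
          if (n == 0) {
              System.out.println("No samples found for label: " + target);
              return;
          }
          double sum = 0;
          for (double t : times) sum += t;
          double mean = sum / n;
          double ss = 0;
          for (double t : times) ss += (t - mean) * (t - mean);
          double sd = Math.sqrt(ss / n);  // population form, as JMeter reports
          System.out.printf("n = %d, mean = %.1f ms, sd = %.1f ms%n", n, mean, sd);
      }
  }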

In summary, specifying a requirement on average response time alone is dangerous, since it says nothing about data variation. What if the average response time is acceptable, but your confidence interval is only 75 percent? Most likely, you cannot accept the result. Applying confidence interval analysis, however, gives you much more certainty.

Conclusion

In this article, I have discussed:

  • A fine point of specifying loads with the JMeter Thread Group element
  • Guidelines for creating a JMeter test script automatically using the JMeter Proxy Server element, with emphasis on modeling user think time
  • Confidence interval analysis, a statistical method that we can leverage to specify better response-time requirements

You can improve the quality of your JMeter scripts with the techniques described in this article. From a larger viewpoint, what I have discussed is really part of a performance testing workflow, which differs from an ordinary functional testing workflow. A performance testing workflow includes, but is not limited to, the following activities:

  • Developing performance requirements
  • Selecting testing scenarios
  • Preparing the testing environment
  • Developing test scripts
  • Performing tests
  • Reviewing test scripts and test results
  • Identifying bottlenecks
  • Writing test reports

In addition, the performance test results, including the identified bottlenecks, are fed back to the development team or to an architect for further design optimization. During this process, developing quality test scripts and reviewing test scripts and results are probably the trickiest parts and need careful management. Armed with test-script writing guidelines and a good performance testing workflow, you will have a much better chance of optimizing the performance of your software under heavy loads.

Chi-Chang Kung is a Java architect with Sun Microsystems Taiwan. He is a member of IEEE Computer Society and ACM.

