Make performance and scalability testing continuous ... or else

Write and run integration tests continuously. No excuses. It's that simple.

Scalability and performance are not the same thing. Developers used to say, "Hey, it works on my machine." Now they say, "Hey, it was fast on my machine" -- but production, under real load, is a different story. Unbelievably, most organizations still fail to do adequate performance or load testing, nor have they integrated performance and scalability testing into their development process.

This is insane. You don't build a bridge and then try to add load-bearing capability at the end of the project -- but most software projects try to do exactly that. Even projects that claim to be "agile" treat performance and scalability as the one concern onto which all the risk gets pushed until the end. About the only thing worse than this silly behavior is going through the motions of "early optimization" on a routine or two and acting as if that meant something.

It doesn't have to be this way. You can write simple integration tests with basic performance assertions. The simplest would be to compare the epoch time at the beginning and the end of the routine, then check that the difference falls within expectations. Of course this can't be very precise because all but real-time operating systems have nondeterministic performance characteristics, but you usually know the thing should return in less than a few seconds.
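
As a minimal sketch, here's how that epoch-time comparison might look as a JUnit test. The generateReport() routine and the two-second bound are placeholders for your own code and expectations:

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class ResponseTimeTest {

        // Hypothetical stand-in for the routine you care about; replace
        // with a call into your real service or DAO.
        private String generateReport() {
            return "report body";
        }

        @Test
        public void generateReportReturnsWithinTwoSeconds() {
            long start = System.currentTimeMillis();   // epoch millis before the call
            generateReport();
            long elapsed = System.currentTimeMillis() - start;

            // Timing on a general-purpose OS is nondeterministic, so assert
            // a generous upper bound rather than an exact figure.
            assertTrue("generateReport took " + elapsed + " ms", elapsed < 2000);
        }
    }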

This sort of testing can be implemented as a simple xUnit test -- and you can still use mock objects. For more sophisticated tests, or those that involve the GUI, you can use Selenium to record the scripts and run them from your build like xUnit tests. Both xUnit- and Selenium-style tests can be added to your continuous integration environment (such as Jenkins).
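
For instance, a recorded or hand-written Selenium script can live in an ordinary JUnit class that Jenkins runs along with the rest of the build. Here's a sketch using Selenium's WebDriver API; the URL, the form field name, and the five-second bound are assumptions you'd replace with your own:

    import static org.junit.Assert.assertTrue;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class LoginPageTest {

        private WebDriver driver;

        @Before
        public void openBrowser() {
            driver = new FirefoxDriver();
        }

        @Test
        public void loginPageRendersQuickly() {
            long start = System.currentTimeMillis();
            driver.get("http://localhost:8080/myapp/login");  // hypothetical URL
            driver.findElement(By.name("username"));          // fails if the form never renders
            long elapsed = System.currentTimeMillis() - start;

            // GUI round-trips are slower and noisier than unit-level calls,
            // so the bound is looser.
            assertTrue("Login page took " + elapsed + " ms", elapsed < 5000);
        }

        @After
        public void closeBrowser() {
            driver.quit();
        }
    }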

One frequent excuse I hear is that the QA department has Mercury, renamed HP LoadRunner after the HP acquisition. The joke is that Selenium is the cure for Mercury poisoning, but it's no joke. LoadRunner is one of those solutions that would be really good if you could figure out how to use it and could get it to do what it claims to do without buying 10 times more hardware than it says it needs. You'll never get it to do an adequate job of load testing for an amount of money that even a cash-flush organization would be willing to spend.

There are adequate alternatives to LoadRunner. One of my favorites is PushToTest TestMaker. It's open source (GPL), and you can tie the same Selenium tests you use for automated functional testing into load test scripts. It allows you to parameterize the user fields from a CSV file or database so that you can simulate an adequate number of unique users under a comprehensive set of scenarios.
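
TestMaker supplies its own tooling for this, but the idea is easy to sketch in plain JUnit with the Parameterized runner: each row of a CSV file becomes one simulated user running the same script. The users.csv file and its username,password layout are assumptions for illustration:

    import static org.junit.Assert.assertFalse;

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class UniqueUserLoginTest {

        // Each "username,password" row of users.csv becomes one test run.
        @Parameters
        public static Collection<Object[]> users() throws Exception {
            List<Object[]> rows = new ArrayList<Object[]>();
            BufferedReader in = new BufferedReader(new FileReader("users.csv"));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] fields = line.split(",");
                    rows.add(new Object[] { fields[0], fields[1] });
                }
            } finally {
                in.close();
            }
            return rows;
        }

        private final String username;
        private final String password;

        public UniqueUserLoginTest(String username, String password) {
            this.username = username;
            this.password = password;
        }

        @Test
        public void userCanLogIn() {
            // In a real load script you'd replay the recorded Selenium login
            // with this row's credentials; here we just sanity-check the data.
            assertFalse(username.isEmpty());
            assertFalse(password.isEmpty());
        }
    }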

If you start doing this stuff at the beginning of the project and run load/performance tests against subsets of the system as they develop, you stand a very good chance of knowing what the capacity of the system will be when you deploy the software to production. For reference, consider the "Why eXtreme Programming?" diagram, which explains why the traditional waterfall model is bad: the later a problem is discovered, the more it costs to fix.

Everyone knows that integrating at the end of a project pushes all the risk to the end -- when it costs the most to fix. But aren't capacity and performance also critical features of the system? If you don't test for load or performance until the end, what will it cost to fix them? What if those tests reveal that a fundamentally bad architectural decision was made?

You don't build a bridge and say, "I have no idea how many cars will be on it at once, so let's just see if it falls." Why would you do that in software?
