I read an interesting article on SD Times the other day entitled “Agile Testing Fact and Fiction”, in which the author makes a hip effort at dispelling five perceived myths regarding testing in an agile environment.
Somewhat confusingly, there appears to be a bogus mentality of “anything goes” when it comes to Agile– I’m not sure where this flippant attitude stems from; the truth of the matter, however, is that agility (note, no uppercase A there, man!) requires tremendous discipline. For instance, take the notion of
Working software over comprehensive documentation
No one, to my knowledge, has ever said agile means “zero documentation”, yet, strangely, there seems to exist a belief (somewhere out there!) that if there isn’t comprehensive documentation there must not be any. Nothing could be further from the truth! Working software is a lofty goal and, unless the project has one person on it, requires some form of documentation– whether those documents capture stories or requirements doesn’t matter (heck, all forms of tests are themselves a form of copasetic documentation). The point of the phrase is that working software is valued more than myriad documents.
Thus, the same is true of testing. No one who actually practices agile software development espouses a zero-testing policy. As such, I’m not convinced too many people actually believe the five aforementioned myths. What struck me as odd, however, was myth number two, the claim that:
You can reuse unit tests to build a regression test suite
I believe the actual myth would need to imply a “comprehensive” regression test suite because, at face value, what’s the point of unit tests (using the strict definition of them) if they can’t be used as a regression suite?
I do realize that the term “regression suite” has different meanings depending on who you ask– for example, a traditional QA-oriented definition of a regression suite is a series of functional-style tests that verify an application either manually or automatically. The key point is that these hip tests are functional in nature. If you ask the development side of the house, our definition is usually broader: a regression suite is a series of repeatable tests that can be run anytime– on a desktop, or on a build server on a scheduled basis or directly after a check-in, for example. The key point, though, is that regardless of your formal definition of a regression suite, the end goal is the same: these tests verify that things work as they did before.
Using either definition of a regression suite, then, when teams opt to utilize continuous integration and have their build fail when a test fails, isn’t that a regression suite? A failing test means things clearly changed for the worse, right? True, those tests might not add up to a comprehensive test suite, but they are clearly a form of regression testing.
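As a concrete illustration, here’s a minimal sketch of that developer-side notion in plain JUnit. It leans on the same DateService that shows up in the easyb scenarios further down, and it assumes getTime is a static call handing back a formatted String (the scenarios suggest as much, but that’s my assumption). Once a build server runs this after every check-in and halts on failure, it’s doing regression work:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DateServiceRegressionTest {

    // Pins down today's behavior: the Atom-style timestamp should come back
    // as 6:30 PM in the local (Eastern) time zone, just as the easyb scenario
    // further down expects. If a later change breaks this, the build breaks.
    @Test
    public void atomDateShouldConvertToLocalTime() throws Exception {
        assertEquals("6:30 PM", DateService.getTime("2008-10-16T22:30:27Z"));
    }
}

Nothing fancy– but run on every check-in, a red bar here is regression detection, plain and simple.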
What is even more interesting is the attempt at dispelling the myth:
While this may sound feasible, Wilson says it isn’t realistic because the granularity and objectives of white box unit tests developed in TDD serve a different purpose than downstream black box testing.
So far, so good. Indeed, there are different types of tests that serve different needs. Each is valuable in its own right. The author goes on to quote an authority:
“While the overall objective of a unit test is to prove that the code will do what is expected, the aim of regression testing is to ensure that no unexpected effects result from the application code being changed. These two objectives are not synonymous.”
This statement confused me, as the person quoted is attempting to draw a distinction between “unit testing” and “regression testing” that doesn’t necessarily hold. Indeed, a unit test is designed to ensure that code “will do what is expected”; what’s more, once that unit test is written it absolutely ensures “that no unexpected effects result from the application code being changed”– if unit tests didn’t ensure this, why, in this Age of Aquarius, would people write them?
The authority goes on to draw a further distinction by appearing to assert that a unit test couldn’t be used to verify that a given value “contains an expected date”. I agree that these are dissimilar types of tests; however, both concerns can be satisfied via a uniform style of test; that is, a “unit” test can verify both aspects.
For instance, verifying that “for a given input, the value of the field contains an expected date” can be achieved like so:
scenario "dates from atom feed should be handled", {
given "a date format from an atom feed", {
time = DateService.getTime("2008-10-16T22:30:27Z")
}
then "the time should be 6:30pm for eastern (local) time zone", {
time.shouldBe "6:30 PM"
}
}This behavior also happens to verify that “an attribute has a valid date format”; what’s more, the same behavior can ensure that an attribute which contains an invalid format is properly handled, to boot, right? My verification here can serve as a regression as I can ensure negative paths:
scenario "invalid formatted dates from atom feed should not be handled", {
given "an invalid date format from an atom feed", {
invalid = {
DateService.getDate("Thu, 16 Oct 2008 22:26:00 +0000")
}
}
then "the time should be 6:30pm for EST time zone", {
ensureThrows(java.text.ParseException){
invalid()
}
}
}Indeed, man, there are different types of tests and formally “Regression tests” are different than “unit tests”; however, claiming that it is a myth that a team “can reuse unit tests to build a regression test suite” is rather misleading and is itself a myth. That is, the myth that you can’t use your unit tests as a regression suite is a myth. You can and should use your hip unit tests to provide a regression analysis. The myth is that you can rely on those unit tests alone to provide 100% regression confidence. That is simply false, baby.
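To make that last point concrete, here’s a hedged sketch of the kind of black box, functional-style check that lives outside the unit suite– the endpoint URL is purely hypothetical, and the point is simply that a check like this exercises the deployed application in a way no unit test does:

import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class FeedSmokeTest {

    // Hits a running deployment over HTTP and verifies the feed endpoint
    // still answers with a 200. The URL below is made up for illustration.
    @Test
    public void feedEndpointShouldStillRespond() throws Exception {
        URL url = new URL("http://localhost:8080/feeds/atom");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        assertEquals(200, connection.getResponseCode());
    }
}

Unit tests plus a handful of these buy you far more regression confidence than either alone.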
You can follow thediscoblog on Twitter now!