"Hi. My name's Ted, and I write shitty code."
With this opening line, a group of us kicked off a panel earlier this year (back in
March, as I recall) at the No Fluff Just Stuff conference in Minneapolis. Neal Ford
started the idea, whispering it to me as we sat down for the panel, and I immediately
followed his opening statement in the same vein. Poor Charles Nutter, who was new to
the tour, didn't get the whispered-down-the-line instruction, and tried valiantly to
salvage the panel's collectively discarded dignity--"Hi, I'm Charles, and I write Ruby code"--to
no avail. (He's since stopped trying.)
The reason for our declaration of impotent implementation was, of course, as
this post states so well, a Zen-like celebration of our inadequacies: To be
a Great Programmer, you must admit that you are a Terrible Programmer.
To those who count themselves among the uninitiated in our particular brand of philosophy
(or religion, or just plain weirdness), the declaration is a statement of anti-Perfectionism.
"I am human, therefore I make mistakes. If I make mistakes, then I cannot assume that
I will write code that has no mistakes. If I cannot write code that has no mistakes,
then I must assume that mistakes are rampant within the code. If mistakes are rampant
within the code, then I must find them. But because I make mistakes, then I must also
assume that I make mistakes trying to identify the mistakes in the code. Therefore,
I will seek the best support I can find in helping me find the mistakes in my code."
This support can come in a variety of forms. The Terrible Programmer cites several
of his favorites: use of the Statically-Typed Language (in his case, Ada),
Virulent Assertions, Testing Masochism, the Brutally Honest Code Review, and Zeal
for the Visible Crash. Myself, I like to consider other tools as well: the Static
Analysis Tool Suite, a Commitment to Confront the Uncomfortable Truth, and the
Abject Rejection of Best Practices.
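To make a couple of those concrete, here is a minimal sketch (in Scala, with
hypothetical names) of Virulent Assertions and the Zeal for the Visible Crash in
practice: check every assumption aggressively, and fail loudly at the first violation
rather than limping along on corrupt state.

    object Transfers {
      case class Account(var balance: BigDecimal)

      def transfer(from: Account, to: Account, amount: BigDecimal): Unit = {
        // Virulent Assertions: preconditions checked up front, every time
        require(amount > 0, s"transfer amount must be positive: $amount")
        require(from.balance >= amount, s"insufficient funds: ${from.balance}")
        from.balance -= amount
        to.balance += amount
        // the Visible Crash: if the books no longer balance, fail right here
        assert(from.balance >= 0, s"negative balance after transfer: ${from.balance}")
      }
    }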
By this point in time, most developers have at least heard of, if not considered adoption
of, the Masochistic Testing meme. Fellow NFJS'ers Stuart Halloway and Justin Gehtland
have founded a consultancy firm, Relevance, that sets a high bar as a corporate cultural
standard: 100% test coverage of their code. Neal Ford has reported that ThoughtWorks
makes similar statements, though it's my understanding that clients sometimes put
accidental obstacles in the way of achieving that goal. It's ambitious, but as the
ancient American Indian proverb is said to state, "If you aim your arrow at the
sun, it will fly higher and farther than if you aim it at the ground."
In fact, on a panel this weekend in Dallas, fellow NFJS'er David Bock attributed the
rise of interest in dynamic languages to the growth of the unit-testing meme: because
no tool checks the syntactic or semantic correctness of dynamic-language code before
it executes, faith in that code's quality can only follow from rigorous testing.
Among the jet-setting Ruby crowd, this means a near-slavish devotion to unit tests.
I find this attitude curiously incomplete, however: if we assume that
we make mistakes, and that we therefore require unit tests to prove to ourselves (and,
by comfortable side effect, the world around us) that the code is correct, would we
not also benefit from the automated--and in some ways far more comprehensive--checking
that a static-analysis tool can provide? Stu Halloway once stated, "In five years,
we will look back upon the Java compiler as a weak form of unit testing." I have no
doubt that he's right--but I draw an entirely different conclusion from his statement:
the answer is to build better static analysis tools, not to abandon them entirely.
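To see the kind of defect at stake, consider two methods that sail through the
compiler's "weak unit test" yet remain broken. This is a hypothetical Scala sketch;
the bugs mirror patterns FindBugs flags in JVM code (an ignored return value, and
reference comparison where content comparison was intended):

    import java.io.File

    object CompilesButBroken {
      // The Boolean result of delete() is silently discarded, so a failed
      // delete goes unnoticed (the ignored-return-value pattern).
      def cleanupTemp(tmp: File): Unit =
        tmp.delete()

      // Reference equality where content equality was intended: two equal
      // strings can still return false here (the reference-comparison pattern).
      def sameOwner(a: String, b: String): Boolean =
        a eq b
    }

The type checker is perfectly happy with both; only a tool that reasons about what
the code does, rather than what it merely declares, will object.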
Consider, if you will, the static-analysis tool known as FindBugs. Fellow NFJS'er (and author
of the JavaOne bestselling book Java Concurrency in Practice) Brian
Goetz offered a presentation last year on FindBugs in which he cited a run of the
tool over the existing JDK Swing code base. Swing has been in use since
1998, has had close to a decade of debugging driven by actual field use, and (despite
what personal misgivings the average Java programmer may have about building a "rich
client" application instead of a browser-based one) can legitimately call itself a
successful library in widespread use.
If memory serves, Brian's presentation noted that when run over the Swing code base,
FindBugs discovered 70-something concurrency errors that remained in the code base,
some since JDK 1.2. In close to a decade of use, 70-something concurrency bugs
had been neither found nor fixed. Even if I misremember that number, and it is
off by an order of magnitude--7 bugs instead of 70--the implication remains clear: simple
usage cannot reveal all bugs.
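For a flavor of what such a latent defect looks like, here is a hypothetical Scala
sketch in the spirit of FindBugs' lazy-initialization checks--a bug that
single-threaded usage will never expose:

    class ConnectionPool private () {
      // ...expensive setup elided...
    }

    object ConnectionPool {
      private var instance: ConnectionPool = null  // neither synchronized nor volatile

      def getInstance(): ConnectionPool = {
        if (instance == null)               // two threads can both observe null here...
          instance = new ConnectionPool()   // ...and both construct a pool
        instance
      }
    }

Exercised from a single thread, getInstance() behaves perfectly, which is exactly
how a bug like this survives a decade of field use.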
The effect of this on the strength of the unit-testing argument cannot be overstated--if
the quality of dynamic-language-authored code rests on the exercise of that code under
constrained circumstances in order to prove its correctness, then we have a major
problem facing us, because the interleavings of execution paths that define concurrency
bugs remain beyond the ability of most (arguably all) languages and/or platforms to
control. It thus follows that concurrent code cannot be effectively unit tested, and
thus the premise that dynamic-language-authored code can be proven correct by simple
use of unit tests is flawed.
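A small Scala sketch makes the point: the increment below is a three-step
read-modify-write with no lock; a single-threaded test passes every time, and even a
multi-threaded test fails only when the scheduler happens to produce a losing
interleaving.

    object RaceDemo {
      class Counter {
        private var count = 0
        def increment(): Unit = count += 1  // read, add, write: three steps, no lock
        def get: Int = count
      }

      def main(args: Array[String]): Unit = {
        val c = new Counter
        val threads = Seq.fill(2)(new Thread(() => for (_ <- 1 to 100000) c.increment()))
        threads.foreach(_.start())
        threads.foreach(_.join())
        // Frequently prints less than 200000, but any run that happens to print
        // 200000 would lead a naive test suite to declare the code correct.
        println(c.get)
      }
    }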
Some may take this argument to imply a rejection of unit tests. Those who do would
be seeking evidence to support a position that is equally untenable, that unit-testing
is somehow "wrong" or unnecessary. No such statement is implied; quite the opposite,
in fact. We can no more reject the necessity of unit testing than we can
the beneficence of static analysis tools; each is, in its own way, incomplete
in its ability to prove code correct, at least in current incarnations. Although
I will not speak for them, many of the Apostles of UnitTestitarianism in fact indirectly
support this belief, arguing that unit tests do not obviate the need for a quality-analysis
phase after a development iteration, because correct unit tests cannot imply completely
correct code.
Much research is currently taking place in the statically-typed languages to give
them the productivity enhancements seen in the dynamically-typed language world without
sacrificing type safety. Scala, for example, makes heavy use of type-inferencing to reduce
the burden of type declarations by the programmer, requiring such declarations only
when the inferencing yields ambiguity. As a result, Scala's syntax--at first glance--looks
remarkably similar in places to Ruby's, yet the Scala compiler is still fully type-safe,
ensuring that accidental coercion doesn't yield confusion. Microsoft is pursuing much
the same route as part of their LINQ strategy for Visual Studio 2008/.NET 3.5 (the
"Orcas" release), and some additional research is being done around functional languages
in the form of F#.
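As a quick illustration of what that inference buys (ordinary Scala semantics, nothing
hypothetical here): not a single type annotation appears below, yet every binding is
statically typed and checked at compile time.

    object InferenceDemo {
      val greeting = "hello"               // inferred: String
      val nums = List(1, 2, 3)             // inferred: List[Int]
      val doubled = nums.map(n => n * 2)   // inferred: List[Int]

      // The brevity reads like Ruby, yet the compiler still rejects a type error:
      // val wrong: Int = greeting         // error: found String, required Int
    }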
At the end of the day, the fact remains that I write shitty code. That means I need
all the help I can get--human-centric or otherwise--to make sure that my code is correct.
I will take that help in the form of unit tests that I force myself (and others around
me force me) to write. But I also have to accept the possibility that my unit
tests themselves may be flawed, and that therefore other tools--which do not suffer
from my inherent human capacity to forget or brush off--are a powerful and necessary
component of my toolbox.