Making Programming Certifications Relevant in the Real World

Many of us sneer at vendor certifications because they don't seem to reflect the individual's actual skills. Being "a good test-taker" does not mean that you are also a good programmer or that you can intimidate a down server into working correctly. But I'm heartened to hear about a new Java programming certification going into beta that just might go beyond rote memorization.

Here's the basic information: In conjunction with Sun, Topcoder.com is developing a Java certification exam that will require the programmer to write actual code. Like, y'know, what you might do on a job if you're hired. (At least, during the time you're not stuck in yet another meeting.)

If you aren't familiar with TopCoder... well, I've written about its code contest business model before, but essentially the premise is that developers can compete to write the best code, based on well-defined inputs and outputs. The company's clients get well-tested software, generally for less than they'd pay a standard consulting firm for the usual "I hope they're good" crap-shoot results, and the winning developer can get, say, $50,000.

Because most of its competitions are based on writing components (rather than soup-to-nuts applications), TopCoder knows all about defining input and output requirements and testing whether the code does what it's supposed to. Now, they tell me, TopCoder is using that expertise to help Sun overhaul its Java certification program.

These tests are more skill-set-based than the current Java cert exams, explained TopCoder software architect Greg Eldridge. There are no theoretical questions. "They'll implement a class with a specific method signature, with primitive inputs and outputs," said Eldridge. Each test has an objective, such as the programmer's ability to use polymorphism or to declare a class; you might be assigned a task to show you can write the code for exception handling, regular expression processing, or processing generic collections. Once the candidate has written the code, it's run through a test harness over a large set of inputs to see whether it meets the problem statement.
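To make that concrete, here's a sketch of the sort of task Eldridge describes: a fixed method signature with a primitive return type, exercising one objective (here, regular expression processing). The class name, method, and problem are my own invention for illustration, not an actual exam question.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical exam-style task: given a fixed signature, return the sum of
// every integer (possibly negative) that appears in the input string.
public class SumTask {
    private static final Pattern INTEGER = Pattern.compile("-?\\d+");

    public static int sumIntegers(String input) {
        int sum = 0;
        Matcher m = INTEGER.matcher(input);
        while (m.find()) {
            // Each match is a complete integer token, e.g. "12" or "-3".
            sum += Integer.parseInt(m.group());
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumIntegers("order 12 costs -3 dollars")); // prints 9
    }
}
```

Under the scheme described above, a grading harness would presumably run a method like this against a large battery of inputs and check the outputs against the problem statement, rather than asking the candidate to pick a definition from a list.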

This is just going into beta test, but I'm pretty impressed by it because it's an effort to match the job skill (programming) with the tested skill.

Nobody is promising that this will separate the best programmer from the worst one, or judge someone's ability to learn. You're not judged on how well you perform the task, just whether you can do it. That's still admirable, because I've had a few experiences in which someone who could "talk the talk" was hired even though they could not get themselves up from a chair, much less "walk the walk." The certification isn't meant to determine everything about a job candidate, after all; it's just meant to measure whether someone meets a base set of qualifications. Hopefully, a sane job interview with prospective team members will help with the next step.

And we do need that baseline. I remember when technology certifications were all the rage in the late '90s; during the dot-com era, there was no way for a company to tell whether a would-be employee could actually code his way out of a paper bag, so employers (or their HR departments) relied on paper certifications to prove some level of basic knowledge. Most techies understood that many certification exams were based more on your ability to memorize the contents of a book than on any real skill (it was common to refer to an MCSE as a Minesweeper Consultant and Solitaire Engineer, or Must Consult Someone Else), but many sighed and played along. Because during that time (and maybe today; you'll have to tell me), those same HR departments would use the lack of a certification as a reason to eliminate an otherwise perfect programmer from consideration, much like a college degree.

Some vendor certifications are or were worth the effort to acquire, though I'm not sure that any of the most-respected are in software development realms. (Feel free to argue; I'm not passionate on that point.) The key is that we all wanted (and want) some way to dispassionately judge whether an individual can do the job, the way that a high school diploma at least hints that a graduate meets base literacy requirements. And since a multiple-guess test isn't how we produce quality code in the real world, a test based on that model couldn't really demonstrate whether someone could produce something that works.

TopCoder's method might not be perfect... (well, of course it's not; it's just going into beta, and if it were perfect, it'd be released). But I think it's a really good step along that path.