More on Quality

Sun's CEO, Jonathan Schwartz, also has a definition of quality. And as he mentions in that post, he's also a big believer in simplicity and efficiency. So am I.

He provides a dead-simple measure of quality for Sun, which consists of asking a customer one question: "Would you recommend Sun?"

As a programmer, I can apply the same measure to determine the quality of my own program: ask a user whether he would recommend it to others.
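
To make that concrete, here is a minimal sketch of the measure applied to a program rather than a company. The class name and the simulated answers are my own illustration, not anything from Schwartz's post:

```java
import java.util.Arrays;
import java.util.List;

// A minimal sketch of the one-question quality measure, applied to a program.
public class WouldYouRecommend {

    // Fraction of surveyed users who answered "Yes".
    static double score(List<Boolean> answers) {
        if (answers.isEmpty()) return 0.0; // no users yet means no signal, not perfection
        int yes = 0;
        for (boolean answer : answers) {
            if (answer) yes++;
        }
        return (double) yes / answers.size();
    }

    public static void main(String[] args) {
        // Simulated survey: 3 of 5 users would recommend the program.
        List<Boolean> answers = Arrays.asList(true, true, false, true, false);
        System.out.printf("Would recommend: %.0f%%%n", 100 * score(answers));
    }
}
```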

What I tried to describe in my previous post was a dead-simple procedure for raising a program to the level where it will get a "Yes".

I can name many programs (not written by me) for which I would answer "Yes".

Unfortunately, for the vast majority of programs I've tried my answer would probably be "No", or "Not really", or "Only because there's no alternative", or even "Hell no!".

Here are a few examples of both:

  • NetBeans and Eclipse Java plugins - Yes
  • All other NetBeans and Eclipse plugins I've tried - No
  • Microsoft Excel - Yes
  • Microsoft Outlook - No
  • Google Search - Yes
  • Froogle - No

In my experience, in each case where I would answer "Yes", I was able to produce a satisfactory result in a reasonable amount of time, without getting stuck, and without having to alter the result significantly due to the program's limitations or defects.

Maybe I'm dumb, but I honestly can't write software I can't test. So I presume my program is testable. Given that, a starting point is to test the program myself on a (possibly simulated) real-world problem and then at least be able to honestly say I myself would want to use the program.
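
As an illustration, a self-test of that kind might look like the sketch below. The CSV-summing routine is a hypothetical stand-in for whatever program you actually ship; the point is the realistic scale of the input, not the arithmetic:

```java
import java.io.*;
import java.nio.file.*;

// A sketch of "test it yourself on a (possibly simulated) real-world problem":
// generate data at a realistic scale and walk through the whole task the way
// a user would, instead of feeding the program a three-row toy file.
public class SelfTest {
    public static void main(String[] args) throws IOException {
        // Simulate a realistic input: 100,000 rows.
        Path input = Files.createTempFile("orders", ".csv");
        try (BufferedWriter w = Files.newBufferedWriter(input)) {
            for (int i = 0; i < 100_000; i++) {
                w.write("order-" + i + ",19.99\n");
            }
        }

        long start = System.currentTimeMillis();
        double total = sumSecondColumn(input);   // the "program" under test
        long elapsed = System.currentTimeMillis() - start;

        // The honest question at the end: with input like this, would I
        // myself want to use this program?
        System.out.printf("total=%.2f in %d ms%n", total, elapsed);
        Files.delete(input);
    }

    // Hypothetical program under test: sum the second column of a CSV file.
    static double sumSecondColumn(Path file) throws IOException {
        double total = 0;
        try (BufferedReader r = Files.newBufferedReader(file)) {
            String line;
            while ((line = r.readLine()) != null) {
                total += Double.parseDouble(line.split(",")[1]);
            }
        }
        return total;
    }
}
```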

Over the years at various companies I've actually asked my colleagues if they would want to use their own programs in the real world and the answer was often "No".

From my experience, "low-quality" programs tend to manifest their lack of quality in one of four ways (a measurement sketch for the last two follows the list):

  1. Lack of functionality (I didn't implement the full Design Spec)
  2. Outright bugs (crashing, hanging, misbehavior, visual bugs like inconsistent colors or margins, or ugly layouts, etc)
  3. Slow performance
  4. Excessive memory use
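
Since (3) and (4) are the ones that tend to slip through (see below), here is a rough sketch of wrapping a realistic workload in simple time and heap measurements. The workload is hypothetical, and the heap delta is only approximate because of garbage collection, but even crude numbers like these surface the order-of-magnitude problems users actually hit:

```java
// A rough sketch of catching categories (3) and (4) before release.
public class ResourceCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        rt.gc(); // best-effort baseline; the JVM treats this as a hint
        long heapBefore = rt.totalMemory() - rt.freeMemory();
        long start = System.nanoTime();

        String result = runRealisticWorkload(); // stand-in for a real user scenario

        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        long heapAfter = rt.totalMemory() - rt.freeMemory();
        System.out.printf("time: %d ms, approx heap growth: %d KB, output: %d chars%n",
                elapsedMs, (heapAfter - heapBefore) / 1024, result.length());
    }

    // Hypothetical workload: build a large comma-separated string, the way a
    // report generator or exporter might.
    static String runRealisticWorkload() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 200_000; i++) {
            sb.append(i).append(',');
        }
        return sb.toString();
    }
}
```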

In addition, from my experience, management tends to be very ineffective at prioritizing such problems. Generally, those in category (2) are placed at the top of the list, followed by (1). Due to insufficient testing of real use cases, (3) and (4) usually don't even make it onto the list until after a release, when real users try the program. And (3) and (4) typically aren't addressed until several releases later (if ever), because during the next release cycle you get a new pile of (1)'s and (2)'s. Meanwhile, your customers struggle along, dissatisfied, unless they can find an alternative to your program altogether.

The (as I mentioned, dead-simple) procedure described in my previous post crosscuts all of the above manifestations of low quality by literally measuring their importance based on the user's performance. Lack of functionality is only important if the user actually uses those features. The importance of outright bugs is relative to their impact on the user's performance: obviously very important if they are of the "showstopper" variety, but possibly of low importance if there are reasonable workarounds. Likewise, execution performance, memory use, or other resource issues only matter if they affect the user (though in my experience this is extremely common; I'd say it is the main problem with NetBeans, Java EE, AJAX toolkits, and many others, rather than lack of functionality or outright bugs).
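
One way to picture that user-weighted prioritization (my own illustration, not a formal method from the previous post) is to score each known problem by how many users hit it and how badly it hurts their task, and rank by that score rather than by defect category:

```java
import java.util.*;

// A rough illustration of ranking known problems by their impact on the
// user rather than by their category. The issues and weights are hypothetical.
public class IssueRanking {
    static class Issue {
        final String name;
        final double usersAffected; // fraction of users who hit it (0..1)
        final double severity;      // impact on their task (0 harmless .. 1 showstopper)
        Issue(String name, double usersAffected, double severity) {
            this.name = name;
            this.usersAffected = usersAffected;
            this.severity = severity;
        }
        double userImpact() { return usersAffected * severity; }
    }

    public static void main(String[] args) {
        List<Issue> issues = new ArrayList<>(Arrays.asList(
            new Issue("inconsistent margins (category 2)", 1.00, 0.05),
            new Issue("missing export feature (category 1)", 0.10, 0.60),
            new Issue("30-second pause opening large files (category 3)", 0.70, 0.80)));
        // Rank by impact on users, not by defect category.
        issues.sort(Comparator.comparingDouble(Issue::userImpact).reversed());
        for (Issue i : issues) {
            System.out.printf("%.2f  %s%n", i.userImpact(), i.name);
        }
    }
}
```

With hypothetical numbers like these, the margin bug that management would file under (2) sinks to the bottom, while the slowdown that blocks a common task rises to the top.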

I've avoided the issue of "learnability" so far, but that is a prerequisite to all of the above. Obviously, if the barrier to entry is too high, you won't have users and any quality problems are therefore irrelevant.