File t = new File("/dev/tty");
you can't expect your program to work on anything but a Unix machine (or, at least, a POSIX machine). (See Sidebar 1 for the source code for a helper class that makes it easy to use java.io.File in a portable way.)

A more subtle kind of platform variation occurs in thread scheduling. The Java Language Specification provides quite a lot of detail about what may be expected from the thread scheduler of any JRE implementation. Due to the nature of multithreaded programming, however, it's entirely possible to write a program that will run under one scheduling policy but will either hang or produce invalid results under another. (See Sidebar 2 for examples of code that will work under some, but not all, scheduling policies.)
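The sidebar isn't reproduced here, but a minimal sketch of the underlying hazard is easy to construct: a flag-polling loop terminates only if the writing thread gets scheduled and its write becomes visible to the reader. The version below is the portable form; the volatile keyword and the absence of any priority or preemption assumptions are what keep it correct under every conforming scheduler (the class and method names are my own, not the article's):

```java
public class SchedulingDemo {
    // 'volatile' guarantees the reader eventually sees the writer's store;
    // without it, an optimizing JRE may hoist the read out of the loop
    // and spin forever, regardless of the scheduling policy.
    static volatile boolean done = false;

    static String waitForWriter() {
        Thread writer = new Thread(new Runnable() {
            public void run() {
                done = true;
            }
        });
        writer.start();
        while (!done) {
            // Busy-waiting is still scheduler-unfriendly, but with 'volatile'
            // it is at least correct: no priority or preemption assumptions.
            Thread.yield(); // a hint only; correctness must not depend on it
        }
        return "finished";
    }

    public static void main(String[] args) {
        System.out.println(waitForWriter()); // prints "finished"
    }
}
```

Remove the volatile modifier and the program may still pass on a JRE whose scheduler and optimizer happen to be forgiving, which is exactly why this class of bug is a runnability problem rather than a functional one.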
Despite these differences, it is quite possible to write Java programs that operate correctly on all Java platforms. The purpose of runnability testing is to raise a programmer's confidence that deploying the tested programs to a wide spectrum of computers won't result in disaster.
To understand runnability testing, we first have to understand the scope of our ambition. Runnability testing asks whether a program can run, without alteration, on a variety of Java platforms. First we'll discuss what it means for a program to run; then we'll discuss what can affect the runnability of Java programs on various platforms.
The delivery mechanism for a Java program can have several different forms, depending on how the program is invoked and used. It may be in the form of an applet that works within an HTML browser; it may be an application that is invoked from the command line; or it may be a servlet that operates within a Web server. It may use the AWT to communicate with the user, or it may be faceless and do all its work by reading and writing files or network streams. These differences are fundamental to the character of the program. A Java application program will never accidentally become a servlet -- the program is written for a specific set of delivery mechanisms. Therefore, the delivery mechanism is not a portability consideration.
Independent of the delivery mechanism, a Java program may encounter platform variations. These variations may include implementation details of the JRE, security policies set by the user, and optional packages that may or may not be installed.
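One of those variations -- an optional package that may or may not be installed -- can be probed at run time rather than assumed. The sketch below (my illustration; the class and method names are hypothetical) uses Class.forName, the standard way to test for a class's presence without linking against it:

```java
public class OptionalPackageProbe {
    /** Returns true if the named class can be loaded in this JRE. */
    static boolean isPresent(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        } catch (LinkageError e) {
            // A class that is present but unusable counts as absent.
            return false;
        }
    }

    public static void main(String[] args) {
        // Probe for a representative class instead of assuming
        // the optional package is installed.
        System.out.println(isPresent("java.util.Vector"));            // prints "true"
        System.out.println(isPresent("com.example.NoSuchPackage"));   // prints "false"
    }
}
```

A program that degrades gracefully when the probe fails is runnable on more platforms than one that throws NoClassDefFoundError at an unpredictable moment.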
Unfortunately, some JRE implementations are imperfect -- in fact, there are bugs in all JRE implementations. We expect that as Java technology matures, these imperfections will be corrected and disappear; in the meantime, though, programmers must cope with these bugs in order to deliver value to users. This unpleasant fact is a major motivation for runnability testing. If all JRE implementations followed the specification, and if the specification contained no ambiguities or errors, only a minimal amount of runnability testing would be necessary -- just enough to ensure that the rules had been followed. But since our users don't have access to that perfect implementation, we have to take the extra step of ensuring that our software is useful on the JREs they do have.

Platform bugs also pose a challenge for runnability testing itself: bugs aren't well designed, so they can't be predicted from design principles or (generally) researched in documentation. In the face of platform bugs, programmers can't expect testing to provide absolute assurance that they've achieved runnability. At best, testing can increase their confidence level.
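When a known platform bug must be worked around rather than fixed, the usual first step is identifying the platform at run time. The standard system properties below are the documented way to do that; how the information is used to key a workaround is application-specific, and the sketch is mine, not the article's:

```java
public class PlatformReport {
    /** Collects the standard properties that identify a JRE implementation. */
    static String describePlatform() {
        return System.getProperty("java.vendor") + " "
             + System.getProperty("java.version") + " on "
             + System.getProperty("os.name") + " "
             + System.getProperty("os.arch");
    }

    public static void main(String[] args) {
        // A bug workaround would branch on this information -- sparingly,
        // since every branch adds one more configuration to test.
        System.out.println(describePlatform());
    }
}
```

Such branches should be rare and well commented; each one is an admission that runnability testing found a platform the program couldn't otherwise survive.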