Java Tip 95: Load classes behind the scenes

Ease Java performance problems with speculative class loading

Java application and applet users commonly complain about slow launch times. Java detractors wave that fact like a red flag, claiming it's an inherent flaw in the language. There are numerous causes for that slowness, some of which are being addressed by Sun (such as Swing and Java 2D performance), while others are not so easily dismissed. Until broadband network access becomes widespread, applet performance will be hindered by 56 Kbps modems. But even with applications and LAN-based applets, complex data structures still take time to set up, and classes will not load instantaneously. Those one-time costs are readily apparent on slower machines and can dampen enthusiasm for your program.

There are generally two solutions to the performance problem associated with loading your code. The first solution is to load all your code at once. That creates one giant pause, but afterward, if your user doesn't become impatient and wander off, everything works smoothly. For applets, in which I/O is the bottleneck, that means putting everything in a Jar file, which has the added advantage of reducing the total number of bytes downloaded by about 55 percent.

The second approach is to load the code piecemeal. That yields a fast startup and eliminates the cost of any code that never runs. Instead of one long pause that can scare potential users away, a short pause occurs each time a new UI element is first accessed. However, that method is still undesirable because, instead of just one action, every action will appear slow.

Most applications spend a considerable amount of time waiting idly for input from the user or another program. The program cannot transition to its next task until some critical piece of information is provided. As such, the vast majority of your application's features are hidden behind other features. For example, you almost never access the spellchecker until you've loaded the text editor. You never send data to the server until you've connected to it. You don't need to send an email to the mail server until you've written one. And, you don't need screen 2 of a wizard until you're finished using screen 1.

Your program doesn't perform tasks that the user hasn't asked it to do because that would be wasteful or result in undesired actions. So you don't even consider loading large units of code and data until they are asked for, right? Well, that depends. Strategy games and static animation and sound (movie or music playback) are notable exceptions to that rule. Those applications identify units of work they can perform ahead of time with no adverse side effects. They can (and often do) beat a hard deadline by starting the task before it's strictly needed. In the case of a strategy game, your opponent doesn't stop thinking about the board just because it's your turn, especially if there's a time limit, so why should the computer? A computer can use contextual information to speculate on which tasks it will need to perform next. In a game, for example, it would consider what the game's rules permit you to do for your next move.

Applying speculative computation to the problem

Let me return for a moment to the wizard example. If you are on a wizard's screen, generally you can press three buttons: Back, Next, or Cancel. You've probably already visited the previous screen, so the resources (classes, statically determined data) necessary for Back are loaded and waiting for you to return. Cancel, too, is probably already covered. Yet the average application waits for the user to click a button, though it's fairly easy to predict that he or she will press Next. Because the current screen isn't finished executing, you can't be completely sure what the next screen will look like, but you can probably make a good guess. Even if you can't, it's very likely you'll know which classes will be used to build it.

Loading a class is not free. In addition to the time spent resolving the class (hunting down, verifying, and loading the class file into memory), many complex classes use cached lookups to keep decision costs down. Those classes will load conversion tables and lists from a file or server, or on the basis of some calculation. Frequently, they also delegate part of their jobs to other secondary classes. Those secondary classes also need to be resolved before the class is useful, leading to a cascading object dependency tree, all of which will eventually need to be sorted out. A class is generally only resolved on the first attempt to load an instance, and the first instance is generally created on demand, as the result of interaction with a user or another process or task. And because that usually occurs as a result of some action that you need to process, those costs are paid just when you have something useful to do instead of beforehand.
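As a hypothetical illustration of those one-time costs (class and table names invented here), consider a class whose static initializer builds a cached lookup table; whoever triggers the first use of the class pays the whole setup price at once:

```java
import java.util.Hashtable;

// Hypothetical example: the symbol table is built exactly once, when the
// class is first initialized -- a cost paid by whoever triggers that first use.
public class CurrencySymbols {
    static final Hashtable symbols = new Hashtable();
    static {
        // In a real application these entries might come from a file or server
        symbols.put("USD", "$");
        symbols.put("GBP", "\u00a3");
        symbols.put("JPY", "\u00a5");
    }

    public static String symbolFor(String isoCode) {
        String s = (String) symbols.get(isoCode);
        return (s != null) ? s : isoCode;  // fall back to the ISO code itself
    }

    public static void main(String[] args) {
        System.out.println(symbolFor("USD"));  // prints $
    }
}
```

If CurrencySymbols delegated to further helper classes in its initializer, those would be resolved at the same moment, producing exactly the cascading dependency tree described above.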

Background class loading

To address that issue, a simple piece of code can let you queue the names of classes you think you'll need soon, so they can be resolved ahead of time. To do that, you'll need a job queue to schedule the loads. Since binary size and compatibility with various JVM versions are both concerns, I'll just use a java.util.Vector in this article, but any ordered collection will suffice. Next, you'll need a separate thread of execution to load the classes and a bit of code to do the actual class resolution. Finally, you'll need a way to efficiently pause the loader when it's inactive, which means using a monitor object, wait(), and notify(). I'll call the class that performs all of those tasks BackgroundLoader (see Resources for the complete source code).
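The complete listing is in Resources; as a rough sketch of the approach (the load() and pending() method names are my own, and details differ from the original), a BackgroundLoader might look like this:

```java
import java.util.Vector;

// Minimal sketch of a background class loader: class names are queued and
// resolved on a low-priority daemon thread, which waits on a monitor
// whenever the queue is empty.
public class BackgroundLoader implements Runnable {
    private static final Vector queue = new Vector();    // job queue of class names
    private static final Object monitor = new Object();  // pauses/wakes the loader
    private static final Vector loaded = new Vector();   // pins classes against GC (JDK 1.1)

    static {
        Thread t = new Thread(new BackgroundLoader(), "BackgroundLoader");
        t.setPriority(Thread.MIN_PRIORITY);  // always yield to the UI thread
        t.setDaemon(true);
        t.start();
    }

    /** Queue a class for speculative resolution. */
    public static void load(String className) {
        synchronized (monitor) {
            queue.addElement(className);
            monitor.notify();  // wake the loader if it is waiting
        }
    }

    /** Number of class names still awaiting resolution. */
    public static int pending() {
        synchronized (monitor) { return queue.size(); }
    }

    public void run() {
        for (;;) {
            String name;
            synchronized (monitor) {
                while (queue.isEmpty()) {
                    try { monitor.wait(); } catch (InterruptedException e) { return; }
                }
                name = (String) queue.elementAt(0);
                queue.removeElementAt(0);
            }
            try {
                Class c = Class.forName(name);  // resolve and initialize the class
                loaded.addElement(c);           // hold a reference so it isn't unloaded
            } catch (Throwable ignored) {
                // A missing class is harmless here; it will fail again,
                // visibly, when the program really tries to use it
            }
        }
    }
}
```

A call such as BackgroundLoader.load("java.text.NumberFormat") returns immediately; the resolution work happens on the loader thread while the UI stays responsive.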

A few details of the code are worth noting. A Hashtable in the run() method is used to defeat class garbage collection under JDK 1.1. If you are using JDK 1.2 or higher, you can omit that object. Another thing to note is that the getName() call in the last try block may or may not have beneficial side effects, depending on your JVM implementation. Current high-performance virtual machines are working toward better lazy initialization by trying to drive a wedge between class resolution and the static initializer, so simply getting a class reference may not be enough to force the class to be initialized. The Java Language Specification (JLS) requires only that the static initializer run before any work is done with the class. Now, I don't particularly consider getName() to be work, but most VMs do, which is sufficient for our purposes here.
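That wedge between resolution and initialization can be demonstrated directly: the three-argument form of Class.forName() can resolve a class without running its static initializer, while creating the first instance always runs it. A small, hypothetical demonstration (class names invented):

```java
// Shows that class resolution and static initialization are separate steps:
// forName() with initialize=false loads and links HeavyClass without running
// its static initializer; the first instantiation then triggers it.
public class LazyInitDemo {
    static boolean heavyLoaded = false;

    static class HeavyClass {
        static { heavyLoaded = true; }  // stands in for expensive table-building
    }

    /** Resolves, but does not initialize, HeavyClass; returns the flag's value. */
    static boolean resolvedWithoutInit() throws ClassNotFoundException {
        Class.forName("LazyInitDemo$HeavyClass", false,
                      LazyInitDemo.class.getClassLoader());
        return heavyLoaded;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("initializer ran after resolution? "
                           + resolvedWithoutInit());        // prints false
        new HeavyClass();  // first active use triggers the static initializer
        System.out.println("initializer ran after first instance? "
                           + heavyLoaded);                  // prints true
    }
}
```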

Last but not least, for those of you developing under VisualAge for Java, there is a race condition in the built-in VM when two threads try to resolve the DateFormat and NumberFormat classes simultaneously. I avoided that issue by forcing those classes to be resolved before the queue starts. I have not seen the issue pop up in any other VM, so I assume it is a VisualAge for Java bug. However, since the workaround is harmless unless your program manually changes the default locale, it's probably better to leave that code chunk in place.
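The workaround amounts to touching both classes on the startup thread before the loader thread can race for them; a minimal sketch (the class and method names here are mine, not the original listing's):

```java
import java.text.DateFormat;
import java.text.NumberFormat;

// Sketch of the VisualAge workaround described above: resolve both format
// classes (and their locale resources) on the current thread before the
// background loader starts. Harmless on other VMs.
public class FormatWarmup {
    public static void warmUp() {
        DateFormat.getDateInstance();  // forces DateFormat and its resources
        NumberFormat.getInstance();    // forces NumberFormat likewise
    }

    public static void main(String[] args) {
        warmUp();
        System.out.println("format classes resolved");
    }
}
```

Call warmUp() once, before the first class name is queued, and the two threads can no longer collide over those classes.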

Using the code

With that tool in hand, you can now optimize the common pathways through your code. If screen 3 contains two main components, GraphWidget and TableWidget, and screen 4 is normally reached from screen 3, then screen 3 should look something like the following:

public class Screen3 extends SomeOtherClass {
  static {
    // Queue component elements of this screen
    // Do some static configuration work
  }

  public Screen3(...) {
    // Start loading next screen.
    // Instantiate my components here
  }
}

What does that accomplish? First, if some other external object (for example, screen 2) queues screen 3's class to be loaded, then the static initializer for screen 3 will be run as part of the class lookup (because you can't access a class until its static initializer has executed). Since screen 3 depends directly on several other classes, its static initializer queues those classes for loading. Those classes may turn around and queue their own dependencies for loading down the line until simpler objects terminate the dependency chain. All of that happens in the background while the users are doing simple tasks like trying to remember their password, absorbing the information that has been presented to them on screen 2, or waiting for data to arrive from the server. Then when screen 3 is finally loaded, it informs the class loader that screen 4 will be next. Screen 4 and the programmer-selected dependencies are then loaded.

For remote code, you would follow the "pay some now, some later" tactic, by jarring up the BackgroundLoader, the welcome screens of your app, and a bare minimum of supporting classes to get the app started. Everything else would live unjarred on the server (unless you're using JDK 1.3, in which case, you can take advantage of the jar indexing and lazy loading facilities it provides). In that way, the applet starts as soon as possible and presents some information to the user. Immediately thereafter, all of the classes commonly accessed after the initial screen should be queued for loading. In many cases, the user will not notice that the application is still busy loading, and that alone will go a long way toward achieving a responsive UI. If you omit a class from the list, either on purpose or by accident, no real harm occurs because the class will eventually be referenced and loaded on an as-needed basis anyway.

TANSTAAFL (There ain't no such thing as a free lunch)

As stated in the summary, that technique boosts perceived performance. As with many latency optimizations, it essentially boils down to enlightened paper shuffling. The same work (actually slightly more) is being done, but it's being done while the computer is idle, so in some sense it's free. If your application does continuous processing or I/O, that mechanism will be less of a gain. Applications that dynamically generate or acquire code may not benefit at all, as the names or locations of the classes may not be known until the moment the classes are expected to be available. Content handler factories are a prime example of a use case that is a poor fit for speculative class loading.

The main potential for unexpected behavior has to do with the order of static initializers. Many applications have order-of-evaluation issues with class loading. A singleton may ask another class for the information necessary to initialize itself. If the other class isn't ready with the information, that will likely result in an application malfunction. The simple fix for that is to enforce queuing order on those interdependent classes and to delay the first calls to BackgroundLoader until that requisite data is available. Alternately, you can revisit the design and choose a more robust mechanism to propagate the needed information. Those problems tend to be limited to the first few moments of operation and, once bootstrapping is over, no further issues should arise.
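To make the pitfall concrete, here is a hypothetical sketch (all names invented): ReportScreen reads a value that exists only after DefaultSettings has been resolved, yet nothing in ReportScreen's code forces the VM to resolve DefaultSettings first, so queuing ReportScreen too early yields a null title:

```java
import java.util.Hashtable;

// Hypothetical illustration of the ordering pitfall: Settings is populated as
// a side effect of loading DefaultSettings, but ReportScreen only looks the
// value up, so the VM sees no dependency between the two classes.
class Settings {
    static final Hashtable table = new Hashtable();
    static String get(String key) { return (String) table.get(key); }
}

class DefaultSettings {
    static { Settings.table.put("report.title", "Quarterly Summary"); }
}

class ReportScreen {
    // null if DefaultSettings has not been resolved yet -- hence the rule:
    // queue the provider class first, or delay queuing ReportScreen until
    // the data is known to be in place.
    static final String TITLE = Settings.get("report.title");
}

public class OrderingDemo {
    public static void main(String[] args) {
        new DefaultSettings();                   // right order: provider first
        System.out.println(ReportScreen.TITLE);  // prints Quarterly Summary
    }
}
```

Reversing the two steps in main() would print null: same classes, same code, different load order.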


While lazy evaluation is often used to avoid or hide Java VM delays, it only improves the situation when the resulting incremental costs are small enough to escape the user's attention. Since class loading often doesn't fall into that category, lazy evaluation can actually do more harm than good. By applying the simple code illustrated here, you gain access to a more powerful technique for hiding those predictable but difficult-to-hide tasks, ridding yourself of many of the conspicuous startup delays that detract from the user experience.

Jason Marshall is a software engineer, specializing in client/server performance and usability issues. He is currently applying his more than four years of Java and six years of Internet software experience toward research and development at EpicEdge, where he is part of a team working to deploy several J2EE applications. In his not-so-ample free time, he is a student of computer language and virtual machine design theory.

Learn more about this topic

Recent JavaWorld articles on class loading