April 11, 2003
In "My Kingdom for a Good Timer!," what is the purpose of this part of the sample code:
// JIT/hotspot warmup:
for (int i = 0; i < 3000; ++ i)
{
    timer.start ();
    timer.stop ();
    timer.getDuration ();
    timer.reset ();
}
The short answer is that the above code snippet "warms up" the JVM with respect to the ITimer implementation. Let's investigate what I mean by that in more detail.
It's easier to illustrate my point with a specially crafted code snippet:
public class Main
{
    public static void main (String [] args)
    {
        // Create an ITimer using the Factory class:
        final ITimer timer = TimerFactory.newTimer ();

        for (int repeat = 0; repeat < 25; ++ repeat)
        {
            timer.start ();
            sum (100);
            timer.stop ();

            System.out.println (repeat + ": " + timer.getDuration ());
            timer.reset ();
        }
    }

    public static int sum (int n)
    {
        if (n <= 1)
            return 1;
        else
            return n + sum (n - 1);
    }

} // End of class
As you can see, I use the ITimer high-resolution API once again to measure the performance of a test method called sum(), which is a very naïve implementation of the sum of numbers 1+2+...+n (an arithmetic progression). It is implemented recursively by design. You should never use such an implementation inside
a real application. However, it suits my discussion precisely because of its high method call overhead.
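For contrast, here is roughly what a production version would look like. These helpers are my own illustration (the names sumIterative() and sumClosedForm() do not appear in the article's listing); for n >= 1 they compute the same value with little or no method call overhead:

public static int sumIterative (int n)
{
    int total = 0;
    for (int i = 1; i <= n; ++ i)
        total += i;

    return total;
}

public static int sumClosedForm (int n)
{
    // Arithmetic progression: 1 + 2 + ... + n = n * (n + 1) / 2
    return n * (n + 1) / 2;
}

The recursive version, by contrast, racks up 100 method invocations per call to sum(100), which is exactly what makes it useful for the demonstration that follows.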
The Main class invokes sum(100) 25 times and logs the cost of every call. The results from a run on Java 2 Platform, Standard Edition (J2SE) 1.4.1 for Win32 are quite interesting:
0: 0.08888
1: 0.04755
2: 0.03448
3: 0.03462
4: 0.03432
5: 0.03437
6: 0.03440
7: 0.03460
8: 0.03441
9: 0.03445
10: 0.0344
11: 0.0343
12: 0.0343
13: 0.0345
14: 0.8865
15: 0.0444
16: 0.0107
17: 0.0103
18: 0.0102
19: 0.0102
20: 0.0102
21: 0.0103
22: 0.0101
23: 0.0103
24: 0.0103
Notice that something strange happens around the 15th repeat: the method slows down for a while and then speeds up by a factor
of 3.5. "It's just a random garbage collection (GC) delay," I hear some seasoned Java programmers mutter. Except it is not:
a sporadic GC run would not explain why sum() gets faster afterwards and stays that way (you can see that by making the upper limit for the repeat loop variable much larger than 25). Furthermore, you can repeat this experiment numerous times, and the phenomenon will always
happen at about the same point in the program's execution.
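If you want to rule out garbage collection yourself, a simple check (my suggestion rather than something from the original listing) is to rerun the program with the JVM's standard verbose GC switch and confirm that no collection is reported around the 15th repeat:

java -verbose:gc Main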
The riddle has a simple explanation. A call to sum(n) implies n total calls to the method; n-1 of them are recursive. After 15 calls to sum(100), the method body executes a grand total of 1,500 times. As it happens, my code executes in the client HotSpot JVM by default,
and 1,500 is the default method invocation threshold for when a method is considered for compilation into native code. Execution
slows down while HotSpot spends extra CPU cycles to compile sum() into native code and then speeds up because the JVM switches to the method's natively compiled, and thus noticeably faster,
version.
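You can also probe this explanation directly: HotSpot exposes the compilation threshold through its -XX:CompileThreshold option, so lowering it should move the slowdown-then-speedup point correspondingly earlier in the run. The value 100 below is purely illustrative, and non-standard -XX options can vary between JVM versions:

java -XX:CompileThreshold=100 Main

With a threshold of 100, sum() becomes eligible for native compilation after the very first call to sum(100).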
For further proof you can add the -XX:+PrintCompilation HotSpot option (see Resources) to your JVM command line and watch the HotSpot engine tell you when it compiles various methods: