A simple approach is to call System.currentTimeMillis() before and after the code to be measured. This is comparable to using a stopwatch when testing GUI activity, and it works fine if elapsed time is really what you want. The downside is that this approach may include much more than your code's execution time: time used by other processes on the system, or time spent waiting for I/O, can result in inaccurately high timing numbers.
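The stopwatch approach can be sketched as follows; the workload (summing a range of integers) is just a placeholder for the code you actually want to measure:

```java
public class Stopwatch {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();

        // The code to be measured -- an arbitrary placeholder workload.
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            sum += i;
        }

        long elapsed = System.currentTimeMillis() - start;
        System.out.println("Elapsed: " + elapsed + " ms (sum=" + sum + ")");
    }
}
```

Note that the measured interval is wall-clock time: it includes whatever else the machine was doing while the loop ran.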
Programming languages like C and Pascal must use operating system calls to get the CPU time spent by a thread or process. Java applications can do this too, by using native, and therefore unportable, methods. Unfortunately, the results for an OS thread may not be directly related to the executing Java thread. This is because the way Java threads are mapped to OS threads is entirely up to the JVM -- and different JVMs use different strategies.
Some JVMs use green threads, which run all the Java threads in one native OS thread (a so-called n-to-one mapping). The HotSpot JVM uses native threads, which may execute in parallel on a multi-CPU machine. The fact that several threads are executing in parallel means that the sum of the CPU times may exceed the elapsed real time. On Solaris, Java threads are not bound permanently to the same native threads but are remapped by the scheduler (in an n-to-m mapping). So, getting the CPU time for the current native thread does not give you the time you want. Contrast this with the Blackdown port of the JDK 1.2 to Linux, where a thread is akin to a process (one-to-one mapping).
Because the JVM successfully hides the underlying machine and operating system, such native information is often useless to Java programmers. Fortunately, each JVM knows how it maps threads, even if this detail is hidden from application programmers. Java 2 introduced a new API -- the Java Virtual Machine Profiler Interface (JVMPI) -- that gives access to the necessary timing information.
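The JVMPI itself is a C interface, but for comparison, later Java releases (5.0 and up) expose per-thread CPU time through the standard java.lang.management API, with no native code required. A minimal sketch, using a placeholder workload:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuTime {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        if (!bean.isCurrentThreadCpuTimeSupported()) {
            System.out.println("Per-thread CPU time is not supported on this JVM");
            return;
        }
        long cpuStart = bean.getCurrentThreadCpuTime(); // CPU time in nanoseconds

        // Placeholder workload to burn some CPU time.
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) sum += i;

        long cpuNanos = bean.getCurrentThreadCpuTime() - cpuStart;
        System.out.println("CPU time: " + (cpuNanos / 1_000_000) + " ms (sum=" + sum + ")");
    }
}
```

Unlike the stopwatch approach, this counts only the CPU time charged to the current thread, so time spent in other processes or blocked on I/O is excluded.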
The JVMPI is a C interface to the JVM through which profilers, running as in-process native agents, can access the state of the JVM and be notified of interesting events such as object allocations and method invocations. You can use the JVMPI with a frontend that provides a GUI, such as the commercial profilers OptimizeIt or JProbe, or the agent can simply dump the profiling information into a file, as HPROF does (see Resources for more information).
Profilers are very good at identifying hot spots that need to be optimized. They work either by sampling (periodically recording which code is executing) or by instrumentation (tracing method invocations by modifying class code as classes are loaded). Sampling has little overhead but is very coarse-grained, whereas instrumentation has significant overhead and may get in the way of optimization, especially because Java programs typically have many small methods.
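The core loop of a sampling profiler can be sketched in pure Java (real profilers use the native JVMPI agent interface instead, but the idea is the same): periodically snapshot a thread's stack and count which method is on top. The worker's workload here is an arbitrary placeholder:

```java
import java.util.HashMap;
import java.util.Map;

public class Sampler {
    // Counts how often each top-of-stack method is observed while 'worker' runs.
    public static Map<String, Integer> sample(Thread worker, int samples, long intervalMs)
            throws InterruptedException {
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i < samples && worker.isAlive(); i++) {
            StackTraceElement[] stack = worker.getStackTrace();
            if (stack.length > 0) {
                String top = stack[0].getClassName() + "." + stack[0].getMethodName();
                counts.merge(top, 1, Integer::sum);
            }
            Thread.sleep(intervalMs); // the sampling interval: coarse, but cheap
        }
        return counts;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            double x = 0; // placeholder hot loop
            for (long i = 0; i < 200_000_000L; i++) x += Math.sqrt(i);
        });
        worker.start();
        System.out.println(sample(worker, 50, 5));
        worker.join();
    }
}
```

The coarseness is visible in the output: methods that run for less than the sampling interval may never be observed at all, which is exactly the trade-off described above.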
A potential problem is that the profiler itself uses CPU and memory resources. The latter may affect execution of the program with respect to hardware caching and OS swapping of virtual memory, blurring the timing information. This is not a serious problem for profiling, where you are interested in the big picture and in relative figures. But if you want to microbenchmark JVMs, components, code snippets, or algorithms, you must choose another approach with less overhead.
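One low-overhead alternative is to repeat the snippet many times inside a single timed interval and divide, after a warmup pass that lets the JIT compiler do its work. A minimal sketch, with a placeholder workload:

```java
public class MicroBench {
    // Placeholder workload standing in for the snippet under test.
    static long workload() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) {
        // Warmup: give the JIT compiler a chance to compile the workload.
        for (int i = 0; i < 10; i++) workload();

        int reps = 100;
        long sink = 0; // consume results so the JIT cannot eliminate the loop
        long start = System.currentTimeMillis();
        for (int i = 0; i < reps; i++) sink += workload();
        long elapsed = System.currentTimeMillis() - start;

        System.out.println("Average: " + ((double) elapsed / reps)
                + " ms per run (sink=" + sink + ")");
    }
}
```

Repetition amortizes the clock's granularity and call overhead across many runs, at the cost of still measuring elapsed rather than CPU time.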
capjprof.zip contains the source code and compiled versions for Windows NT and Solaris.