ObjectProfiler's approach is not perfect. Besides its ignorance of memory alignment, explained earlier, another obvious problem is that Java object instances can share nonstatic data, such as when instance fields point to global singletons and other shared content.
Consider DecimalFormat.getPercentInstance(). Even though it returns a new NumberFormat instance on every call, all of those instances usually share the Locale.getDefault() singleton. So, even though sizeof(DecimalFormat.getPercentInstance()) reports 1,111 bytes each time, that figure is an overestimate. This is really just another manifestation of the conceptual difficulty of defining a size measure for a Java object. In a situation like this, ObjectProfiler.sizedelta(Object base, Object obj) might be handy: this method traverses the object graph rooted at base and then profiles obj using the visited-object set pre-populated during the first traversal. The result is effectively the total size of data owned by obj that does not appear to be owned by base. In other words, it is the amount of memory needed to instantiate obj given that base already exists (shared data is effectively subtracted out).
sizedelta(DecimalFormat.getPercentInstance(), DecimalFormat.getPercentInstance()) reports that every subsequent format instance requires 741 bytes, only a few bytes off the more precise value of 752 bytes measured by Java Tip 130's Sizeof class, and much better than the original sizeof() estimate.
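The sharing that inflates sizeof()'s answer is easy to observe with plain JDK calls. A minimal sketch (ObjectProfiler itself is omitted, since it is this article's own class rather than part of the JDK):

```java
import java.text.NumberFormat;

public class SharedStateDemo
{
    public static void main (String [] args)
    {
        NumberFormat a = NumberFormat.getPercentInstance ();
        NumberFormat b = NumberFormat.getPercentInstance ();

        // each call returns a distinct instance ...
        System.out.println (a != b);        // true

        // ... yet both were built from the same default-locale
        // configuration, so they are equal and much of their internal
        // data is shared state that a naive graph traversal would
        // count once per instance:
        System.out.println (a.equals (b));  // true
    }
}
```

sizedelta() avoids this double counting by treating everything reachable from the first instance as already paid for when sizing the second.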
Another type of data ObjectProfiler cannot see is native memory allocation. The result of java.nio.ByteBuffer.allocate(1000) is a JVM heap-allocated structure of 1,059 bytes, but the result of ByteBuffer.allocateDirect(1000) appears to be just 140 bytes; that is because the real storage is allocated in native memory. This is where you must give up pure Java and switch to a JVM Profiler Interface (JVMPI)-based profiler.
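The two kinds of buffers advertise the difference themselves, so you can check which case you are in before trusting a reflective size estimate:

```java
import java.nio.ByteBuffer;

public class BufferDemo
{
    public static void main (String [] args)
    {
        ByteBuffer heap = ByteBuffer.allocate (1000);
        ByteBuffer direct = ByteBuffer.allocateDirect (1000);

        // the heap buffer's storage is a byte [] on the JVM heap,
        // fully visible to a reflective traversal:
        System.out.println (heap.isDirect ());   // false
        System.out.println (heap.hasArray ());   // true

        // the direct buffer's 1,000 bytes live in native memory;
        // only a small wrapper object remains on the heap, which is
        // all a pure Java profiler can see:
        System.out.println (direct.isDirect ()); // true
    }
}
```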
A quite obscure example of the same problem is trying to size an instance of Throwable. ObjectProfiler.sizeof(new Throwable()) reports 20 bytes, in stark contrast with 272 bytes reported by Java Tip 130's Sizeof class. The reason is this hidden field in Throwable:
private transient Object backtrace;
The JVM treats this field specially: it does not show up in reflective calls even though its definition is plainly visible in the JDK sources. The JVM evidently uses this field to store some 250 bytes of native data that supports stack tracing.
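You can verify the filtering yourself with standard reflection. In this sketch, detailMessage (an ordinary private field of Throwable) serves as a control to show that private fields are normally reported:

```java
import java.lang.reflect.Field;

public class BacktraceDemo
{
    public static void main (String [] args)
    {
        boolean sawBacktrace = false;
        boolean sawDetailMessage = false;

        for (Field f : Throwable.class.getDeclaredFields ())
        {
            if (f.getName ().equals ("backtrace")) sawBacktrace = true;
            if (f.getName ().equals ("detailMessage")) sawDetailMessage = true;
        }

        // ordinary private fields are reported by getDeclaredFields () ...
        System.out.println (sawDetailMessage); // true
        // ... but the JVM filters backtrace out of reflective results,
        // so a reflection-based profiler never sees it:
        System.out.println (sawBacktrace);     // false
    }
}
```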
Finally, profiling objects that use java.lang.ref.* references can lead to confusing results (e.g., results that fluctuate between repeated sizeof() calls on the same object). This happens because weak references introduce extra concurrency into the application, and the sheer fact of traversing such an object graph can change the reachability status of weak referents. Furthermore, boldly poking inside the innards of a java.lang.ref.Reference the way ObjectProfiler does might not be what pure Java code is supposed to do. It is perhaps best to enhance the traversal code to sidestep all nonstrong reference objects (it is not even clear whether such data contributes to the root object's size in the first place).
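A minimal sketch of why weak referents make sizing unstable, using only standard java.lang.ref calls (garbage collection timing is inherently nondeterministic, so the final print has no guaranteed value):

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo
{
    public static void main (String [] args)
    {
        Object referent = new Object ();
        WeakReference <Object> ref = new WeakReference <> (referent);

        // while a strong reference exists, get () returns the referent,
        // and a traversal would count its size:
        System.out.println (ref.get () == referent); // true

        // once the strong reference is dropped, the referent can be
        // collected at any time; after a GC, get () can return null,
        // so a second traversal may see a different object graph:
        referent = null;
        System.gc ();
        System.out.println (ref.get ()); // typically null, not guaranteed
    }
}
```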
This article probably goes too far in trying to build a pure Java object profiler. Still, my experience has been that a quick look at a large data structure via a simple method like ObjectProfiler.profile() can highlight easy memory savings of tens of percent or more. This approach complements commercial profilers, which tend to present very shallow (not graph-based) views of what happens inside the JVM heap. If nothing else, looking inside an object graph can be quite educational.