Java Tip 130: Do you know your data size?

Don't pay the price for hidden class fields

Wrapper classes like java.lang.Integer seem a poor choice for storing large amounts of data in memory. If you strive to economize on memory, avoid them altogether. Rolling your own vector class for primitive ints isn't difficult, as the sketch below shows. Of course, it would be great if the Java core API already contained such libraries. Perhaps the situation will improve when Java has generic types.
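
As an illustration, here's a minimal sketch of such a class (IntVector and its API are my own names, not part of any core library):

    public final class IntVector
    {
        public void add (int value)
        {
            if (m_size == m_data.length)
            {
                // grow the backing array geometrically when it fills up:
                int [] newData = new int [m_data.length * 2];
                System.arraycopy (m_data, 0, newData, 0, m_size);
                m_data = newData;
            }
            m_data [m_size ++] = value;
        }
        
        public int get (int index)
        {
            if (index < 0 || index >= m_size)
                throw new IndexOutOfBoundsException ("index: " + index);
            return m_data [index];
        }
        
        public int size ()
        {
            return m_size;
        }
        
        // a single int [] carries no per-element object overhead:
        private int [] m_data = new int [16];
        private int m_size;
    }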

Multidimensional arrays

For large data structures built with multidimensional arrays, you can often reduce the extra dimension overhead with an easy indexing change: convert every int[dim1][dim2] instance to an int[dim1*dim2] instance, and change all expressions like a[i][j] to a[i*dim2 + j]. The price you pay is the loss of index-range checking on the individual dimensions (the JVM checks only the combined index against the flat array's bounds), which also boosts performance.
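
For example, a quick sketch (dim1, dim2, and the fill values are placeholders):

    int dim1 = 1000, dim2 = 1000;
    
    // instead of: int [][] a = new int [dim1][dim2];
    int [] a = new int [dim1 * dim2];
    
    for (int i = 0; i < dim1; ++ i)
        for (int j = 0; j < dim2; ++ j)
            a [i * dim2 + j] = i + j; // instead of: a [i][j] = i + j;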

java.lang.String

You can try a few simple tricks to reduce the static memory footprint of your application's Strings.

First, you can try one common technique when an application loads and caches many Strings from a data file or a network connection, and the String value range proves limited. For example, you might parse an XML file in which a certain attribute occurs frequently but takes only two possible values. Your goal: filter all Strings through a hash map and reduce all equal but distinct Strings to identical object references:

    // requires: import java.util.HashMap; import java.util.Map;
    
    // returns the canonical instance of 's', registering it on first sight:
    public String internString (String s)
    {
        if (s == null) return null;
        
        String is = (String) m_strings.get (s);
        if (is != null)
            return is; // an equal String was seen before: reuse it
        else
        {
            m_strings.put (s, s);
            return s;
        }
    }
    
    private Map m_strings = new HashMap ();
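
For instance, a parsing loop would route every raw attribute value through the filter (getAttributeValue() here stands in for whatever your parser provides):

    // hypothetical call site inside an XML parsing loop:
    String value = internString (getAttributeValue (i));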

When applicable, that trick can decrease your static memory requirements severalfold. An experienced reader may observe that the trick duplicates java.lang.String.intern()'s functionality. Numerous reasons exist to avoid the String.intern() method; one is that few modern JVMs can intern large amounts of data.

What if your Strings are all different? For the second trick, recollect that for small Strings, the underlying char array takes about half the memory occupied by the String that wraps it. Thus, when my application caches many distinct String values, I can just keep the arrays in memory and convert them to Strings as needed. That works well if each such String then serves as a transient, quickly discarded object. A simple experiment with caching 90,000 words taken from a sample dictionary file shows that this data takes about 5.6 MB in String form and only 3.4 MB in char[] form: the String form takes about 65 percent more memory.
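
A minimal sketch of the idea (the field and method names are mine):

    // cached dictionary content, one char [] per word:
    private char [][] m_words;
    
    public String getWord (int index)
    {
        // pay for the String wrapper only while a caller needs it;
        // the constructor copies the chars, so the cache stays private:
        return new String (m_words [index]);
    }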

The second trick has one obvious disadvantage: you cannot convert a char[] back to a String through a constructor that would take ownership of the array without cloning it. Why? Because the entire public String API ensures that every String is immutable, so every String constructor defensively clones the input data passed through its parameters.
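
You can verify the defensive copy with a few lines:

    char [] chars = { 's', 'p', 'a', 'm' };
    String s = new String (chars); // the constructor clones 'chars'
    
    chars [0] = 'S';
    System.out.println (s); // still prints "spam"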

Still, you can try a third trick when the cost of converting from char arrays to Strings proves too high. The trick exploits java.lang.String.substring()'s ability to avoid data copying: the method implementation exploits String immutability and creates a shallow String object that shares the char content array with the original String but has its internal start and end indices adjusted accordingly. For example, new String("smiles").substring(1,5) is a String configured to start at index 1 and end at index 4 within a char buffer "smiles" shared by reference with the originally constructed String. You can exploit that fact as follows: given a large String set, you can merge its char content into one large char array, create a String out of it, and recreate the original Strings as substrings of this master String, as the following method illustrates:

    public static String [] compactStrings (String [] strings)
    {
        String [] result = new String [strings.length];
        int offset = 0;
        
        // first pass: compute the combined length of all input Strings:
        for (int i = 0; i < strings.length; ++ i)
            offset += strings [i].length ();
        
        // Can't use StringBuffer due to how it manages capacity
        char [] allchars = new char [offset];
        
        // second pass: merge all char content into a single array:
        offset = 0;
        for (int i = 0; i < strings.length; ++ i)
        {
            strings [i].getChars (0, strings [i].length (), allchars, offset);
            offset += strings [i].length ();
        }
        
        // this constructor clones 'allchars', and the clone becomes the
        // single char buffer shared by every result String:
        String allstrings = new String (allchars);
        
        // third pass: recreate every input String as a shallow substring
        // of the master String:
        offset = 0;
        for (int i = 0; i < strings.length; ++ i)
            result [i] = allstrings.substring (offset,
                                               offset += strings [i].length ());
        
        return result;
    }

The above method returns a new set of Strings equivalent to the input set but more compact in memory. Recollect from earlier measurements that every char[] adds 16 bytes of overhead, which this method effectively removes. The savings can be significant when the cached data comprises mostly short Strings. When you apply this trick to the same 90,000-word dictionary mentioned above, the memory size drops from 5.6 MB to 4.2 MB, a 25 percent reduction. (An astute reader will observe that in this particular example the Strings tend to share many prefixes, and the compactStrings() method could be further optimized to reduce the merged char array's size.)
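
For the curious, here is one possible shape such an optimization could take; this is only a sketch of my own, and for brevity it returns the compacted Strings in sorted rather than original order. After sorting, any String that is a prefix of its lexicographic successor can simply share the successor's characters:

    public static String [] compactSharedPrefixes (String [] strings)
    {
        String [] sorted = (String []) strings.clone ();
        java.util.Arrays.sort (sorted);
        
        int n = sorted.length;
        
        // first pass: size the merged buffer, counting only the last
        // String of each prefix chain:
        int total = 0;
        for (int i = 0; i < n; ++ i)
            if (i + 1 == n || ! sorted [i + 1].startsWith (sorted [i]))
                total += sorted [i].length ();
        
        char [] allchars = new char [total];
        int [] starts = new int [n];
        
        // second pass: copy each chain's longest member and record
        // where every member of that chain begins inside it:
        int offset = 0, chainStart = 0;
        for (int i = 0; i < n; ++ i)
        {
            if (i + 1 == n || ! sorted [i + 1].startsWith (sorted [i]))
            {
                sorted [i].getChars (0, sorted [i].length (), allchars, offset);
                for (int k = chainStart; k <= i; ++ k)
                    starts [k] = offset;
                
                offset += sorted [i].length ();
                chainStart = i + 1;
            }
        }
        
        String allstrings = new String (allchars);
        
        // third pass: recreate every String as a shallow substring:
        String [] result = new String [n];
        for (int i = 0; i < n; ++ i)
            result [i] = allstrings.substring (starts [i],
                                               starts [i] + sorted [i].length ());
        
        return result;
    }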

As a side effect, compactStrings() also removes the StringBuffer-related inefficiencies mentioned earlier.

Is it worth the effort?

To many, the techniques I presented may seem like micro-optimizations not worth the time it takes to implement them. However, remember the applications I had in mind: server-side applications that cache massive amounts of data in memory to achieve performance impossible when data comes from a disk or database. Several hundred megabytes of cached data represents a noticeable fraction of the maximum heap sizes of today's 32-bit JVMs. Shaving 25 percent or more off is nothing to scoff at; it could push an application's scalability limits quite noticeably. Of course, these tricks cannot substitute for beginning with well-designed data structures and profiling your application to determine its actual hot spots. In any case, you're now more aware of how much memory your objects consume.

Vladimir Roubtsov has programmed in a variety of languages for more than 12 years, including Java since 1995. Currently, he develops enterprise software as a senior developer for Trilogy in Austin, Texas. When coding for fun, Vladimir develops software tools based on Java byte code or source code instrumentation.
