Can double-checked locking be fixed?

No matter how you rig it, double-checked locking still fails

As an example, consider this simple class:

public class SomeObject {
  int a;
  public SomeObject() {
    a = 1;
  }
}


Suppose your program instantiates a SomeObject and stores the resulting reference in a field called someField of another object, MyObject. A Java compiler would generate something like the Java byte code in Listing 4, where I've inlined the constructor for SomeObject and, for clarity, eliminated the stack-management instructions (dup and aload):

Listing 4: Simplified Java byte code for creating a new SomeObject

   new <Class SomeObject> ; Allocate memory for a SomeObject
   invokespecial <Method java.lang.Object()>
                          ; Call the constructor for Object()
   iconst_1               ; Load the constant 1
                          ; Call this next operation FirstWrite
   putfield <Field int a> ; Store it in SomeObject.a
                          ; Call this next operation SecondWrite
   putfield <Field MyObject someField>
                          ; Store the reference somewhere
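
Working backward, the Java source that compiles down to Listing 4 would look something like the sketch below; the MyObject class and its initialize() method are names I've chosen for illustration:

public class MyObject {
  SomeObject someField;

  void initialize() {
    // Compiles to Listing 4: allocate, run the inlined constructor
    // (FirstWrite stores 1 into a), then store the new reference
    // into someField (SecondWrite).
    someField = new SomeObject();
  }
}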


Now, suppose that a JIT (just-in-time) compiler translates the byte code from Listing 4 into machine code. The JIT would likely generate a call to the JVM's equivalent of malloc(), a call to the constructor for Object, and two store-to-memory instructions -- one for SomeObject's field a (FirstWrite) and one to store the resulting reference in someField (SecondWrite).

Even if you were assured that the processor would execute these instructions in exactly this order, other threads -- running on other processors -- examining main memory might not see them happen in that order. Even though FirstWrite executes before SecondWrite, the cache on the executing processor could flush the results of SecondWrite to main memory before it flushes the results of FirstWrite. As a result, another thread could see someField referencing a partially constructed SomeObject.
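
To make the race concrete, here is a minimal sketch of the two threads involved; the UnsafePublication class and the writer()/reader() method names are assumptions of mine:

class UnsafePublication {
  static SomeObject someField; // shared with no synchronization

  // Runs on thread 1: performs FirstWrite, then SecondWrite.
  static void writer() {
    someField = new SomeObject();
  }

  // Runs on thread 2: may observe the writes out of order.
  static void reader() {
    SomeObject s = someField;
    if (s != null) {
      // Nothing in the JMM forbids seeing 0 here: SecondWrite (the
      // reference) can reach main memory before FirstWrite (the
      // store to a).
      int observed = s.a;
    }
  }
}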

This example might shed some light on a common misperception about the JMM and memory-access reordering: that reorderings occur at the statement, method, or byte-code level. In reality, you should be more concerned about reorderings at the memory-fetch level. After the Java compiler emits its byte code, and the JIT compiles that to machine code, the machine code executes on a real processor with a real cache. The JMM specifies what sorts of hardware-level reorderings it will tolerate, and it simply does not promise memory coherency across threads.

If you are not familiar with the specifics of what happens in modern processors and caches, you might find this sort of nondeterminism surprising and even disturbing. But the compiler and JVM can hide all this complexity from you -- if you follow the rules embodied in the JLS. And when it comes to sharing memory between threads, the rule is simple: synchronize.
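
For lazy initialization, following that rule means giving up the double-checked idiom in favor of plain synchronization. A minimal sketch, with SafeLazy and getInstance() as names of my own choosing:

class SafeLazy {
  private SomeObject instance; // guarded by this

  // The single write and every read of instance happen while
  // holding the same monitor, so a caller is guaranteed to see
  // a fully constructed SomeObject.
  public synchronized SomeObject getInstance() {
    if (instance == null) {
      instance = new SomeObject();
    }
    return instance;
  }
}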

Why is this sort of nondeterminism in processors and caches tolerated? Because it buys better performance: many of the recent advances in computing performance have come through increased parallelism. So as not to hamstring Java's performance on modern hardware, the JMM doesn't assume that memory operations performed by one thread will be perceived as happening in the same order by another thread. Only when two threads synchronize on the same monitor (or lock) can they rely on the ordering of memory operations.
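
Note that the guarantee runs only from a thread that releases a monitor to a thread that later acquires that same monitor. A sketch, again with names that are assumptions of mine:

class SharedState {
  private final Object lock = new Object();
  private int a;
  private boolean ready;

  void publish() { // thread 1
    synchronized (lock) {
      a = 1;        // FirstWrite
      ready = true; // SecondWrite
    }
  }

  int consume() { // thread 2
    synchronized (lock) {
      // Acquiring the monitor thread 1 released makes both writes
      // visible here, in program order.
      return ready ? a : 0;
    }
  }
}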
