Can double-checked locking be fixed?

No matter how you rig it, double-checked locking still fails


The JMM was designed to support architectures both with and without cache coherency. The JMM requires that a thread perform a read barrier after monitor entry and a write barrier before monitor exit. FullMemoryBarrierSingleton does indeed force the initializing thread to perform two write barriers so that resource and initialized are written to main memory in the proper order. So what could be wrong? The problem is that the other threads don't necessarily perform a read barrier after determining that initialized is set, so they could possibly see stale values of resource or resource's fields.
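For reference, here is a minimal sketch of what FullMemoryBarrierSingleton looks like, reconstructed from the description above (the article's actual listing appears earlier in the article, so the member names here are assumptions):

    class FullMemoryBarrierSingleton {
        static class Resource { }   // placeholder for the real resource

        private static Resource resource = null;
        private static boolean initialized = false;

        public static Resource getResource() {
            if (!initialized) {
                synchronized (FullMemoryBarrierSingleton.class) {
                    if (resource == null)
                        resource = new Resource();
                }   // monitor exit: write barrier flushes 'resource'
                synchronized (FullMemoryBarrierSingleton.class) {
                    initialized = true;
                }   // monitor exit: second write barrier, after 'resource'
            }
            // The flaw: a thread that finds initialized == true returns here
            // without entering a monitor, hence without a read barrier, and
            // may read a stale 'resource' reference or stale fields.
            return resource;
        }
    }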

To see how a thread could see stale values for resource, don't think in terms of objects and fields, but instead in terms of memory locations and their contents. Perhaps the memory location corresponding to the field resource was already in the current processor's cache before another processor initialized resource. Since the current processor has not performed a read barrier, it would see that the address of resource was already in its cache and just use the cached value, which is now stale. The same could happen with any nonvolatile field of resource. So by not synchronizing before acquiring the reference to resource, a thread might see a stale or garbage value for resource (or one of its fields).
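To make the stale-field case concrete, consider a hypothetical reader of the sketch above. The Resource class and its value field are illustrative assumptions, and the anomaly will not reproduce deterministically on strongly ordered hardware:

    class StaleReadExample {
        static class Resource {
            int value;
            Resource() { value = 42; }
        }

        // Runs on a processor whose cache already held the memory locations
        // backing 'resource' and its fields before another processor
        // initialized them.
        static void readerThread(Resource r) {
            if (r != null) {
                // No read barrier was performed, so this load may be
                // satisfied from a stale cache line: the thread can observe
                // 0 (the field's default value) instead of 42.
                System.out.println(r.value);
            }
        }
    }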

Is this just theory, or could it happen to my code?

Most Java applications are hosted on Intel or Sparc systems, which offer stronger memory models than required by the Java Memory Model. (Sparc processors actually offer multiple memory models with varying levels of cache coherency.) And many systems have only a single processor. It might be tempting to dismiss these concerns as being only of theoretical value and think, "That couldn't happen to us because we only use Solaris," or, "All our systems have single processors."

The danger behind dismissing these concerns is that the assumptions get buried in the code, where no one knows about them. Programs have a tendency to live much longer than expected. The Y2K phenomenon was dramatic evidence of that fact -- programmers made memory-saving optimizations, such as two-digit years, 20 or 30 years ago, fully convinced that future programmers would replace the code long before the year 2000. Even if you're sure your program will run only on Linux/Intel for the foreseeable future, how do you know the same program won't be rehosted to another platform 10 years from now? Will anyone remember the platform assumptions you made when you wrote some unsynchronized cache class buried deep inside your application?

Conclusion

DCL, and other techniques for avoiding synchronization, expose many of the complexities of the JMM. The issues surrounding synchronization are subtle and complicated -- so it is no surprise that many intelligent programmers have tried, but failed, to fix DCL.

The original goal of the Java Memory Model was to enable programmers to write concurrent programs in Java that would run efficiently on modern hardware, while still guaranteeing the Write Once, Run Anywhere behavior across a variety of computing architectures. Since there is now a JVM for nearly every conceivable processor, it is not unreasonable to expect that your code will eventually run on a different architecture than the one on which it was developed. So follow the rules now (synchronize!), and you can avoid a concurrency crisis in the future.
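Following that advice, the simple alternative is to synchronize on every access. A minimal sketch of such a singleton (not a listing from the article):

    class SafeSingleton {
        static class Resource { }

        private static Resource resource = null;

        // Synchronizing every access provides both mutual exclusion during
        // initialization and the read barrier that readers need in order to
        // see a fresh 'resource'; the uncontended cost is modest on modern
        // JVMs.
        public static synchronized Resource getResource() {
            if (resource == null)
                resource = new Resource();
            return resource;
        }
    }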

Brian Goetz is a professional software developer with more than 15 years of experience. He is a principal consultant at Quiotix, a software development and consulting firm located in Los Altos, Calif.

Learn more about this topic

  • Double-checked locking idiom
  • The Java Memory Model and multithreaded programming
