Programming Java threads in the real world, Part 3

Roll-your-own mutexes and centralized lock management

In last month's column, I demonstrated a simple deadlock scenario using two nested synchronized blocks that acquired the same two locks, but in a different order. (Please review last month's example if this isn't fresh in your mind.) This month, I'll take a look at a solution to this commonplace deadlock problem, presenting a roll-your-own exclusion semaphore class and a lock manager that supports the safe acquisition of multiple semaphores. Using these objects rather than the built-in synchronized can save you hours of searching for unexpected deadlocks. (They don't solve every possible deadlock problem, of course, but are nonetheless pretty useful.)
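Last month's listing isn't reproduced here, but a minimal sketch of the general lock-ordering problem might look like the following. (The class, method, and lock names are hypothetical, not last month's code verbatim.) Each thread grabs its outer lock, pauses long enough for the other thread to grab its own outer lock, and then blocks forever waiting for the lock the other thread holds:

class DeadlockSketch
{
    private static final Object lock_a = new Object();
    private static final Object lock_b = new Object();

    public static void first()          // acquires lock_a, then lock_b
    {   synchronized( lock_a )
        {   pause();
            synchronized( lock_b ){ /* ... */ }
        }
    }

    public static void second()         // acquires lock_b, then lock_a
    {   synchronized( lock_b )
        {   pause();
            synchronized( lock_a ){ /* ... */ }
        }
    }

    private static void pause()         // widen the deadlock window
    {   try{ Thread.sleep(100); }catch( InterruptedException e ){}
    }

    public static void main( String[] args )
    {   new Thread(){ public void run(){ first();  } }.start();
        new Thread(){ public void run(){ second(); } }.start();
    }
}

Run it and the program almost always hangs: neither thread can proceed, and neither will ever release the lock the other needs.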

When 'synchronized' isn't good enough

The nested-synchronized-statements example from last month was -- admittedly -- contrived, but the multiple-lock situation comes up frequently in the real world. One common problem is the too-coarse, object-level granularity of the synchronized keyword: there's only one monitor per object, and sometimes that's not enough.

Consider the following class, the methods of which can be broken up into three partitions. The methods in the first partition use only a subset of the fields in the class. The methods in the second partition use a non-overlapping subset; they share fields that are not used by methods in the first partition. The methods in the third partition use everything.

class Complicated       // NOT thread safe
{
    private long a, b;
    private long x, y;

    // partition 1, functions use a and/or b

    public void use_a()          { do_something_with(a);   }
    public void use_b()          { do_something_with(b);   }
    public void use_a_and_b()    { do_something_with(a+b); }

    // partition 2, functions use x and/or y

    public void use_x()          { do_something_with(x);   }
    public void use_y()          { do_something_with(y);   }
    public void use_x_and_y()    { do_something_with(x+y); }

    // partition 3, functions use a, b, x, and y

    public void use_everything()     { do_something_with( a + x ); }
    public void use_everything_else(){ do_something_with( b + y ); }
}


As it stands, this code is a multithreading disaster. Nothing is synchronized, and we have guaranteed race conditions. (A race condition occurs when two threads try to access the same object at the same time, and chance determines which one wins the "race." Programs shouldn't work by chance.) Synchronizing all the methods would fix the problem, but then you couldn't call a method in partition 1 simply because some thread was using a method from partition 2. Since these two partitions don't interact with each other, this solution imposes needless access restrictions on the methods of the class. If you're accessing any method in partition 3, though, you do want to lock out everything in the other two partitions. We really need two locks in this situation: one to guard the partition-1 variables and another to guard the partition-2 variables. The methods in partition 3 can then grab both locks, as in the sketch below.
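Before getting to the roll-your-own semaphore class, here's a minimal sketch of the two-lock idea using nothing more than plain Objects as locks. The names (Complicated_with_two_locks, ab_lock, xy_lock) are hypothetical, and only one method per partition is shown:

class Complicated_with_two_locks
{
    private long a, b;
    private long x, y;

    private final Object ab_lock = new Object();    // guards a and b
    private final Object xy_lock = new Object();    // guards x and y

    // partition 1: grab only ab_lock
    public void use_a()
    {   synchronized( ab_lock ){ do_something_with(a); }
    }

    // partition 2: grab only xy_lock
    public void use_x()
    {   synchronized( xy_lock ){ do_something_with(x); }
    }

    // partition 3: grab both locks, always in the same order
    // (ab_lock first, then xy_lock), so that two partition-3
    // calls can't deadlock each other.
    public void use_everything()
    {   synchronized( ab_lock )
        {   synchronized( xy_lock ){ do_something_with( a + x ); }
        }
    }

    private void do_something_with( long value ){ /* ... */ }
}

Note that this sketch works only because every method that needs both locks agrees on the acquisition order. Keeping that promise by hand across a large program is exactly the sort of error-prone bookkeeping that the exclusion-semaphore class and lock manager described in this column are meant to take care of for you.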


Resources
  • All the real code (the stuff in the com.holub.asynch package) is available in the "Goodies" section of my Web site: http://www.holub.com