In the previous instalment of this three-part series, I examined the 'Worker Aggregation' anti-pattern and listed a number of ways to modify it so as to reduce the probability of its implementations leading to deadlock. Code samples demonstrated the pattern in both its raw and modified forms, with the latter serving as a practical demonstration of the deadlock-avoidance modifications discussed.
In this concluding instalment I will examine the simple and well-known incremental locking anti-pattern which, despite being well documented, is unfortunately still a frequent mistake. Chances are that most people who have had the misfortune of debugging deadlock-prone code will have come across it at some point in their careers.
Note that this instalment also involves working through code examples, so please download the series’ companion source code archive and load the files into your preferred IDE.
Formally, incremental locking occurs when a process P acquires a series of resources T1, ..., Tn in such a manner that, at the acquisition of each resource Tx, it already holds resources T1, ..., Tx-1.
Strictly speaking, although incremental locking can lead to circular waits, its potency for havoc stems not from this, but from the fact that it plays directly to two other deadlock conditions: hold and wait, and no pre-emption. By its very definition and nature, incremental locking enforces 'hold and wait'; and, more often than not, it is impossible to force Java processes to give up locked resources, especially after deadlock has set in.
All that said, one actually hesitates to term incremental locking an anti-pattern, because there are requirements which make it a necessity. It is, however, the programming equivalent of making financial predictions based on normally distributed random variables: even if things look OK most of the time, when they go wrong, they generally do so in an unexpected and quite abrupt fashion. As such, it should not be done without some basic safeguards in place, and it most certainly should not be encouraged.
To be clear, when acquiring locks incrementally, it is imperative that a strict acquisition order is followed and maintained; this goes a long way towards reducing the risks involved, and a sketch of such an ordering safeguard is shown below. But what happens when such an order cannot be followed or enforced? That is the scenario we will focus on here. We will examine some example code that demonstrates how this pattern can trigger deadlock, and then examine an established technique for preventing it.
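Before turning to that scenario, here is a minimal sketch of what such an ordering safeguard might look like when a global order can be enforced. It is illustrative only, is not part of the series' companion source code, and the OrderedLocking and RankedLock names are invented for the purpose: each lock is paired with a fixed rank, and every thread acquires locks in ascending rank order, releasing them in reverse.

import java.util.Arrays;
import java.util.Comparator;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLocking
{
    // A lock paired with a fixed, globally agreed rank.
    static final class RankedLock
    {
        final int rank;
        final Lock lock = new ReentrantLock();

        RankedLock(int rank) { this.rank = rank; }
    }

    // Acquire every lock in ascending rank order, run the task,
    // then release the locks in the reverse order.
    static void runWithLocks(Runnable task, RankedLock... locks)
    {
        RankedLock[] ordered = locks.clone();
        Arrays.sort(ordered, Comparator.comparingInt((RankedLock l) -> l.rank));
        for (RankedLock rankedLock : ordered)
        {
            rankedLock.lock.lock();
        }
        try
        {
            task.run();
        }
        finally
        {
            for (int i = ordered.length - 1; i >= 0; i--)
            {
                ordered[i].lock.unlock();
            }
        }
    }
}

Because all threads agree on the same ranking, no thread can ever hold a higher-ranked lock while waiting for a lower-ranked one, so a circular wait cannot form.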
The code referenced in this instalment is contained in the package example.incrementallock, which is available in the source code archive that accompanies this series. The package consists of the following source files:
+example.incrementallock
-ExampleThread.java
-ExampleThread2.java
-SharedResource.java
-ExampleRunner.java
-ExampleRunner2.java
We start by examining the class SharedResource, shown in listing 10, which encapsulates two locks, 'A' and 'B'. We pretend that processing threads, in order to make use of an instance of this class, have to acquire both lock A and lock B, either in the order A->B or B->A.
Listing 10: example.incrementallock.SharedResource
package example.incrementallock;

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Encapsulates the two locks that the example threads must acquire.
public class SharedResource
{
    private final Lock lockA;
    private final Lock lockB;

    public SharedResource()
    {
        lockA = new ReentrantLock();
        lockB = new ReentrantLock();
    }

    public Lock getLockA() { return lockA; }
    public Lock getLockB() { return lockB; }
}
The process/thread behaviour is modelled by the class ExampleThread, which is detailed in listing 11. Pay particular attention to the constructor, where you will note that the lock acquisition order can be varied based on the value of the argument invertLockAcquisitionOrder. When true, lock 'A' is treated as the primary lock and 'B' as the secondary; otherwise, the converse applies. In the run method, the thread tries to acquire the secondary lock after acquiring the primary lock. The aim, of course, is to demonstrate what happens when two threads with different acquisition orders try to share the same set of locks.
Listing 11: example.incrementallock.ExampleThread
package example.incrementallock;
import java.util.concurrent.locks.Lock;
// Models a worker that acquires a primary lock and then, while still
// holding it, waits for a secondary lock (i.e. incremental locking).
public class ExampleThread extends Thread
{
    protected int threadIndex;
    protected Lock primaryLock;
    protected Lock secondaryLock;

    public ExampleThread( SharedResource sharedResource,
                          boolean invertLockAcquisitionOrder,
                          int threadIndex)
    {
        this.threadIndex = threadIndex;
        if (invertLockAcquisitionOrder)
        {
            this.primaryLock = sharedResource.getLockA();
            this.secondaryLock = sharedResource.getLockB();
        }
        else
        {
            this.primaryLock = sharedResource.getLockB();
            this.secondaryLock = sharedResource.getLockA();
        }
    }

    @Override
    public void run()
    {
        boolean secondaryAcquired = false;
        try
        {
            primaryLock.lock();
            System.out.println("Thread '" + getName()
                + "' obtained primary lock");
            // simulate some work while holding the first lock
            Thread.sleep(1000 * threadIndex);
            // hold and wait: block on the second lock while still
            // holding the first
            secondaryLock.lock();
            secondaryAcquired = true;
            System.out.println("Thread '" + getName()
                + "' obtained secondary lock");
        }
        catch (InterruptedException exce)
        {
            return;
        }
        finally
        {
            // only release what was actually acquired
            if (secondaryAcquired) secondaryLock.unlock();
            primaryLock.unlock();
        }
        System.out.println("Thread '" + getName() +
            "' terminating");
    }
}
To run this model, execute the class ExampleRunner, which is shown in listing 12. Observe that this class creates two ExampleThread instances, reversing the lock ordering on the second. When executed, it should produce the following output, and then deadlock:
Thread 'Thread-0' obtained primary lock
Thread 'Thread-1' obtained primary lock
Listing 12: example.incrementallock.ExampleRunner
package example.incrementallock;

public class ExampleRunner
{
    public static void main(String[] args)
        throws InterruptedException
    {
        SharedResource sharedResource = new SharedResource();

        // thread1 acquires lock B first and then asks for lock A ...
        ExampleThread thread1 = new ExampleThread(sharedResource, false, 1);
        thread1.start();

        // ... while thread2 acquires lock A first and then asks for lock B
        ExampleThread thread2 = new ExampleThread(sharedResource, true, 2);
        thread2.start();
    }
}
The cause of the deadlock is that thread1, on acquiring lock 'B', proceeds to ask for lock 'A', which is already held by thread2. The latter, of course, is also trying to obtain lock 'B', and with each thread waiting for a resource held by the other, the application goes into a state of deadlock.
The problem here is not simply the circular wait that forms; the circular wait is itself a symptom of the decision to build hold-and-wait logic into the application. Equally bad is the fact that, even though the code incorporates hold and wait, it does nothing to cater for scenarios in which the resource being waited for cannot be freed; that is, there is no logic for resource release, voluntary or otherwise.
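As an aside, the Lock interface does provide a hook that restores a measure of pre-emption for a thread that is merely waiting: lockInterruptibly() blocks like lock() but remains responsive to interruption, so a supervising thread that suspects deadlock can interrupt the waiter and give it a chance to back out. The fragment below is only a sketch of this idea and is not part of the companion source; the InterruptibleAcquisition class and its waitForBothLocks helper are invented for illustration.

import java.util.concurrent.locks.Lock;

public final class InterruptibleAcquisition
{
    // Hypothetical helper: take 'first' and then 'second', but remain
    // responsive to interruption while waiting for 'second'. Returns
    // true only if both locks were obtained (the caller must then
    // unlock both); on interruption it backs out cleanly.
    public static boolean waitForBothLocks(Lock first, Lock second)
    {
        try
        {
            first.lockInterruptibly();
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
            return false;                  // never obtained the first lock
        }
        try
        {
            second.lockInterruptibly();    // a watchdog may interrupt us here
            return true;
        }
        catch (InterruptedException e)
        {
            first.unlock();                // give up what we already hold
            Thread.currentThread().interrupt();
            return false;
        }
    }
}

Of course, something still has to decide when to interrupt; detecting the stuck thread is a separate concern from releasing what it holds.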
Considering that deadlock can only hold if all four of its preconditions are in place, how would you go about modifying this example so that it is less prone to deadlock? Whilst there are undoubtedly more ways to achieve this than there are to skin the proverbial cat, let us borrow a technique from our friends in electronics to illustrate a very simple solution to the problem.
Examine the class ExampleThread2, shown in listing 13. Note that the call to acquire the secondary lock uses the tryLock() method instead of lock(). This means that if the secondary lock is in use, the call returns false, and the thread consequently releases the primary lock. This simple change means that not only do we prevent a circular wait from forming, we also willingly give up the resources collected so far, allowing the other thread to proceed to completion. The technique can be summarised thus: when a process already holds one or more resources and fails to acquire an additional resource, it should immediately relinquish all the resources it currently holds.
Listing 13: example.incrementallock.ExampleThread2
package example.incrementallock;

// Variant of ExampleThread that backs out, releasing the primary lock,
// whenever the secondary lock cannot be acquired immediately.
public class ExampleThread2 extends ExampleThread
{
    public ExampleThread2( SharedResource sharedResource,
                           boolean invertLockAcquisitionOrder,
                           int threadIndex)
    {
        super( sharedResource,
               invertLockAcquisitionOrder,
               threadIndex);
    }

    @Override
    public void run()
    {
        boolean success = false;
        try
        {
            primaryLock.lock();
            System.out.println("Thread '" + getName() +
                "' obtained primary lock");
            // simulate some work while holding the first lock
            Thread.sleep(1000 * threadIndex);
            // non-blocking attempt: do not wait if the lock is taken
            if (secondaryLock.tryLock())
            {
                System.out.println("Thread '" + getName() +
                    "' obtained secondary lock");
                success = true;
            }
            else
            {
                System.out.println("Thread '" + getName() +
                    "' failed to acquire secondary lock." +
                    " Relinquishing primary lock");
            }
        }
        catch (InterruptedException exce)
        {
            return;
        }
        finally
        {
            primaryLock.unlock();
            if (success) secondaryLock.unlock();
        }
        System.out.println("Thread '" + getName() +
            "' terminating");
    }
}
The disadvantage of the above solution, of course, is that only one of the threads actually completes its work. However, the failed thread can retry later, perhaps when the system is less busy and contention is lower. A more complex and comprehensive implementation of this idea is to have threads publish the list of resources they currently hold whenever they request new ones, so that a controller or arbitrator can decide whether the new request is likely to lead to deadlock. The first instalment of this series has already described the concept of arbitration, and the same idea can be applied here.
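If retrying is the strategy of choice, the 'release everything on failure' rule can also be combined with a bounded wait, since java.util.concurrent locks offer tryLock(long, TimeUnit), which waits only for a limited period before giving up. The sketch below is not part of the companion source, and its RetryingAcquisition class and acquireBothWithRetry helper are invented for illustration; it retries a bounded number of times with a small random back-off so that competing threads do not simply collide again in the same pattern.

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;

public final class RetryingAcquisition
{
    // Try to take both locks, backing off and retrying a bounded number
    // of times. Returns true if both were obtained (the caller must then
    // unlock both); returns false if every attempt failed, in which case
    // the caller may schedule a retry for later.
    public static boolean acquireBothWithRetry(Lock first, Lock second,
                                               int maxAttempts)
        throws InterruptedException
    {
        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            if (first.tryLock(100, TimeUnit.MILLISECONDS))
            {
                boolean gotSecond = false;
                try
                {
                    gotSecond = second.tryLock(100, TimeUnit.MILLISECONDS);
                }
                finally
                {
                    if (!gotSecond)
                    {
                        // Could not get the second lock in time: relinquish
                        // the first so whoever holds the second can finish.
                        first.unlock();
                    }
                }
                if (gotSecond)
                {
                    return true;
                }
            }
            // Random back-off before the next attempt, so that competing
            // threads do not keep failing in lock-step (livelock).
            Thread.sleep(ThreadLocalRandom.current().nextLong(50, 200));
        }
        return false;
    }
}

On total failure the caller is left holding nothing and is free to reschedule the work, which is exactly the behaviour described above.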
The modified model can be invoked via the class ExampleRunner2, shown in listing 14, and its output should be as follows:
Thread 'Thread-0' obtained primary lock
Thread 'Thread-1' obtained primary lock
Thread 'Thread-0' failed to acquire secondary lock. Relinquishing primary lock
Thread 'Thread-0' terminating
Thread 'Thread-1' obtained secondary lock
Thread 'Thread-1' terminating
Listing 14: example.incrementallock.ExampleRunner2
package example.incrementallock;

public class ExampleRunner2
{
    public static void main(String[] args)
        throws InterruptedException
    {
        SharedResource sharedResource = new SharedResource();

        // thread1 takes lock A first and then tries for lock B ...
        ExampleThread2 thread1 = new ExampleThread2(sharedResource, true, 1);
        thread1.start();

        // ... while thread2 takes lock B first and then tries for lock A
        ExampleThread2 thread2 = new ExampleThread2(sharedResource, false, 2);
        thread2.start();
    }
}
From the output listing, we can confirm that Thread-0 is unable to acquire the second resource and, as a consequence, relinquishes control of the lock which it already holds. This allows Thread-1 to proceed to termination.
Conclusion
This series of posts has explored the prerequisites of application deadlock and discussed three concurrency anti-patterns which often lead to those prerequisites being met and, consequently, to application deadlock itself. Code examples were provided to illustrate these anti-patterns and to demonstrate their weaknesses.
Possible means of counteracting the downsides of the presented anti-patterns were also discussed. It was shown, through code examples, that application deadlock can be avoided by modifying the implementation of each anti-pattern so that at least one of the deadlock prerequisites is prevented from holding.