Java Concurrency Series: An Analysis of the ReentrantLock Source Code

Before Java 5.0, the only mechanisms available for coordinating access to shared objects were synchronized and volatile. The synchronized keyword implements built-in locks, while the volatile keyword ensures memory visibility across threads. In most cases these mechanisms work well, but they cannot express some more advanced behavior: they cannot interrupt a thread waiting to acquire a lock, cannot attempt acquisition with a time limit, and cannot acquire locks in a non-block-structured way. More flexible locking mechanisms like these can often provide better liveness or performance. Java 5.0 therefore added a new mechanism: ReentrantLock. The ReentrantLock class implements the Lock interface and provides the same mutual exclusion and memory visibility as synchronized; its underlying synchronization is built on AQS (AbstractQueuedSynchronizer). Compared with the built-in lock, ReentrantLock not only provides a richer locking mechanism but also matches the built-in lock in performance (and in earlier JDK versions it was even faster). Having listed so many advantages of ReentrantLock, let's open up its source code and look at the concrete implementation.

1. Introduction to the synchronized keyword

Java provides built-in locks to support multi-threaded synchronization. The JVM identifies a synchronized code block by the synchronized keyword: when a thread enters the block it automatically acquires the lock, and when it exits it automatically releases the lock; while one thread holds the lock, other threads are blocked. Every Java object can serve as a lock. The synchronized keyword can modify instance methods, static methods, and code blocks. When it modifies an instance method or a static method, the lock is the receiving object or the Class object, respectively; when it modifies a code block, a lock object must be supplied explicitly. The reason every Java object can act as a lock is that a monitor object is associated with the object header. When a thread enters a synchronized block it automatically acquires that monitor, and when it exits it automatically releases it; while the monitor is held, other threads block. These synchronization operations are implemented inside the JVM, but methods and code blocks modified by synchronized still differ in their low-level implementation. A synchronized method is synchronized implicitly, without dedicated bytecode instructions: the JVM distinguishes synchronized methods by the ACC_SYNCHRONIZED access flag in the method table. A synchronized code block is synchronized explicitly: the monitorenter and monitorexit bytecode instructions control acquiring and releasing the monitor. Internally, the monitor object maintains a _count field: _count equal to 0 means the monitor is not held, and _count greater than 0 means it is held.
Each time the holding thread re-enters, _count increases by 1, and each time the holding thread exits, _count decreases by 1; this is how built-in lock reentrancy is implemented. In addition, the monitor object contains two queues, _EntryList and _WaitSet, which correspond to the synchronization queue and condition queue of AQS. A thread that fails to acquire the lock blocks in _EntryList, and a thread that calls the lock object's wait method waits in _WaitSet; this is how thread synchronization and condition waiting are implemented for built-in locks.
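The two bytecode-level forms described above can be seen side by side in a minimal sketch (the class and method names here are illustrative, not from any real codebase):

```java
public class SyncForms {
    private int count = 0;

    // Compiled with the ACC_SYNCHRONIZED flag set in the method table;
    // the JVM acquires the monitor of `this` implicitly, with no
    // monitorenter/monitorexit instructions in the method body.
    public synchronized void incrementMethod() {
        count++;
    }

    // Compiled to explicit monitorenter/monitorexit bytecode instructions
    // around the block; the lock object must be supplied explicitly.
    public void incrementBlock() {
        synchronized (this) {
            count++;
        }
    }

    public int getCount() {
        return count;
    }
}
```

Running `javap -c` on the compiled class shows the difference: the block form contains monitorenter and monitorexit instructions, while the method form does not.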

2. Comparison between ReentrantLock and synchronized

The synchronized keyword is the built-in locking mechanism provided by Java, and its synchronization is implemented by the underlying JVM, whereas ReentrantLock is an explicit lock provided by the java.util.concurrent.locks package whose synchronization is supported by the AQS synchronizer. ReentrantLock provides the same locking and memory semantics as the built-in lock, and in addition offers other features, including timed lock waiting, interruptible lock waiting, fair locking, and locking in non-block-structured ways. In early JDK versions, ReentrantLock also had a certain performance advantage. Since ReentrantLock has so many advantages, why still use the synchronized keyword? Indeed, many people do use ReentrantLock in place of synchronized, but the built-in lock still has its own strengths. It is familiar to many developers and more compact to use, and because an explicit lock must be unlocked manually in a finally block, the built-in lock is comparatively safer to use. It is also more likely that future performance improvements will target synchronized rather than ReentrantLock: because synchronized is a built-in property of the JVM, the JVM can perform optimizations such as lock elision for thread-confined lock objects, or eliminating synchronization by coarsening lock granularity, which a library-based lock is unlikely to achieve. Therefore, ReentrantLock should be used when advanced features are needed, such as timed, polled, or interruptible lock acquisition, fair queuing, or non-block-structured locking; otherwise, synchronized should be preferred.
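The "advanced" acquisition modes mentioned above, which synchronized cannot express, are all part of the standard ReentrantLock API. A minimal sketch (the class and method names are illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class AdvancedLocking {
    private final ReentrantLock lock = new ReentrantLock();

    // Timed acquisition: give up if the lock is not free within the timeout.
    public boolean timedWork() {
        try {
            if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
                try {
                    return true; // got the lock; the real work would go here
                } finally {
                    lock.unlock();
                }
            }
            return false; // timed out without acquiring the lock
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    // Interruptible acquisition: a thread blocked here responds to interrupt().
    public void interruptibleWork() {
        try {
            lock.lockInterruptibly();
            try {
                // critical section (empty in this sketch)
            } finally {
                lock.unlock();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve the interrupt status
        }
    }
}
```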

3. Acquire and release locks

Let's first look at sample code for locking with ReentrantLock.
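A representative sketch of the canonical idiom (the class and field names are illustrative): unlock() must sit in a finally block, otherwise an exception thrown in the critical section would leave the lock held forever.

```java
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private int value = 0;

    public void increment() {
        lock.lock();          // blocks until the lock is acquired
        try {
            value++;          // critical section
        } finally {
            lock.unlock();    // always released, even on exception
        }
    }

    public int get() {
        return value;
    }
}
```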

The following are the APIs for acquiring and releasing locks.
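In the OpenJDK 8 sources both methods are one-line delegations, quoted approximately below in the comments; the effect of the delegation is observable through ReentrantLock's public API (the demo class name is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// Quoted approximately from java.util.concurrent.locks.ReentrantLock (JDK 8):
//
//     public void lock()   { sync.lock(); }
//     public void unlock() { sync.release(1); }
//
public class DelegationDemo {
    public static boolean[] states() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                                        // -> sync.lock()
        boolean heldWhileLocked = lock.isHeldByCurrentThread();
        lock.unlock();                                      // -> sync.release(1)
        boolean heldAfterUnlock = lock.isHeldByCurrentThread();
        return new boolean[] { heldWhileLocked, heldAfterUnlock };
    }
}
```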

You can see that acquiring and releasing the lock are delegated to the lock method and the release method of the sync object, respectively.

Each ReentrantLock object holds a reference of type Sync. Sync is an abstract inner class that inherits from AbstractQueuedSynchronizer, and its lock method is abstract. The member variable sync of ReentrantLock is assigned during construction. Let's see what the two constructors of ReentrantLock do.
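The two constructors, quoted approximately from the OpenJDK 8 sources in the comments below; which variant was chosen can be checked through the public isFair() method (the demo class name is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// Quoted approximately from java.util.concurrent.locks.ReentrantLock (JDK 8):
//
//     public ReentrantLock() {
//         sync = new NonfairSync();
//     }
//
//     public ReentrantLock(boolean fair) {
//         sync = fair ? new FairSync() : new NonfairSync();
//     }
//
public class FairnessDemo {
    public static boolean defaultIsFair() {
        return new ReentrantLock().isFair();     // NonfairSync -> false
    }

    public static boolean explicitIsFair() {
        return new ReentrantLock(true).isFair(); // FairSync -> true
    }
}
```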

Calling the default no-argument constructor assigns a NonfairSync instance to sync, making the lock unfair. The parameterized constructor lets you choose through its argument whether a FairSync or a NonfairSync instance is assigned to sync. Both NonfairSync and FairSync inherit from the Sync class and override the lock() method, so fair and unfair locks differ somewhat in how they acquire the lock, which we will discuss below. Now let's look at releasing the lock. Each call to the unlock() method simply performs sync.release(1), which invokes the release() method of the AbstractQueuedSynchronizer class. Let's review it again.
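The release path can be exercised from the outside: release(1) goes through tryRelease, which (as we will see below) throws IllegalMonitorStateException if the calling thread does not hold the lock. A small runnable check (the demo class name is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReleaseDemo {
    // Unlocking a lock the current thread does not hold must fail:
    // unlock() -> sync.release(1) -> Sync.tryRelease, which checks the owner.
    public static boolean unlockWithoutHoldingThrows() {
        ReentrantLock lock = new ReentrantLock();
        try {
            lock.unlock();
            return false; // should not be reached
        } catch (IllegalMonitorStateException expected) {
            return true;
        }
    }
}
```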

This release method is the lock-release API provided by AQS. It first calls the tryRelease method to attempt to release the lock. The tryRelease method is abstract, and its implementation logic lives in the subclass Sync.

This tryRelease method first reads the current synchronization state and subtracts the passed-in argument from it to obtain the new synchronization state. It then checks whether the new state equals 0. If so, the lock has been fully released: the method marks the release as successful and clears the thread that currently holds the lock. Finally, it calls the setState method to store the new synchronization state and returns whether the lock was released.
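The interplay of release, tryRelease, and the synchronization state can be reproduced in a minimal reentrant mutex built directly on AQS, in the style of the usage example in the AQS documentation. This is a sketch mirroring the logic described above, not JDK code; MiniMutex and all its method names are illustrative:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class MiniMutex extends AbstractQueuedSynchronizer {
    @Override
    protected boolean tryAcquire(int acquires) {
        Thread current = Thread.currentThread();
        int c = getState();
        if (c == 0) {                               // lock is free: CAS it
            if (compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        } else if (current == getExclusiveOwnerThread()) {
            setState(c + acquires);                 // reentrant acquisition
            return true;
        }
        return false;
    }

    @Override
    protected boolean tryRelease(int releases) {
        if (Thread.currentThread() != getExclusiveOwnerThread())
            throw new IllegalMonitorStateException();
        int c = getState() - releases;              // new synchronization state
        boolean free = (c == 0);                    // 0 means fully released
        if (free)
            setExclusiveOwnerThread(null);          // clear the owner thread
        setState(c);                                // publish the new state
        return free;                                // AQS wakes a successor if true
    }

    public void lock()     { acquire(1); }
    public void unlock()   { release(1); }
    public boolean isLocked() { return getState() != 0; }
}
```

When release(1) is called, AQS invokes tryRelease(1); only when it returns true (state dropped to 0) does AQS unpark the next waiter in the synchronization queue.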

4. Fair lock and unfair lock

Whether ReentrantLock is a fair or an unfair lock depends on which concrete instance sync points to. During construction the member variable sync is assigned: a NonfairSync instance means an unfair lock, and a FairSync instance means a fair lock. With a fair lock, threads acquire the lock in the order in which they requested it; with an unfair lock, queue jumping is allowed: when a thread requests an unfair lock, if the lock happens to become available just as the request is issued, the thread skips all waiting threads in the queue and acquires the lock directly. Let's first look at how an unfair lock is acquired.

You can see that in the lock method of the unfair lock, the thread's first step is to try to change the synchronization state from 0 to 1 with a CAS operation; this is effectively an attempt to acquire the lock. If the change succeeds, the thread acquired the lock as soon as it arrived and does not need to wait in the synchronization queue. If it fails, the lock had not yet been released when the thread arrived, so the acquire method is called next. The acquire method is inherited from AbstractQueuedSynchronizer; recall that it first calls the tryAcquire method to attempt acquisition. NonfairSync overrides tryAcquire and, inside it, calls the nonfairTryAcquire method of the parent class Sync. So nonfairTryAcquire is what actually attempts to acquire the lock. Let's see what this method does.
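The method is quoted approximately from the OpenJDK 8 Sync class in the comments below; its reentrant branch is directly visible through getHoldCount() (the demo class name is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// Quoted approximately from ReentrantLock.Sync (JDK 8):
//
//     final boolean nonfairTryAcquire(int acquires) {
//         final Thread current = Thread.currentThread();
//         int c = getState();
//         if (c == 0) {
//             if (compareAndSetState(0, acquires)) {   // second CAS attempt
//                 setExclusiveOwnerThread(current);
//                 return true;
//             }
//         }
//         else if (current == getExclusiveOwnerThread()) {
//             int nextc = c + acquires;                // reentrant: bump state
//             if (nextc < 0)                           // overflow guard
//                 throw new Error("Maximum lock count exceeded");
//             setState(nextc);
//             return true;
//         }
//         return false;
//     }
//
public class NonfairDemo {
    public static int reenterThreeTimes() {
        ReentrantLock lock = new ReentrantLock(); // nonfair by default
        for (int i = 0; i < 3; i++) lock.lock();  // state goes 0 -> 1 -> 2 -> 3
        int holds = lock.getHoldCount();
        for (int i = 0; i < 3; i++) lock.unlock();
        return holds;
    }
}
```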

The nonfairTryAcquire method is defined in Sync. After a thread enters this method, it first reads the synchronization state. If the state is 0, it uses a CAS operation to change it, which is in effect a second attempt to acquire the lock. If the state is not 0, the lock is occupied; the method then checks whether the thread holding the lock is the current thread, and if so increments the synchronization state by 1, a reentrant acquisition. Otherwise the acquisition attempt fails, and back in acquire the addWaiter method is called to add the thread to the synchronization queue. To summarize: in unfair mode a thread tries to acquire the lock twice before entering the synchronization queue; if either attempt succeeds it never queues, otherwise it enters the queue. Next, let's look at how a fair lock is acquired.

When the lock method of a fair lock is called, the acquire method is invoked directly. As before, acquire first calls the tryAcquire method, here overridden by FairSync, to attempt acquisition. This method also reads the synchronization state first. If the state is 0, the lock has just been released, but unlike the unfair lock it first calls the hasQueuedPredecessors method to check whether anyone is already waiting in the synchronization queue, and only modifies the synchronization state if nobody is queued. You can see that the fair lock defers politely here instead of grabbing the lock immediately. Apart from this step, the operations are the same as in the unfair case. To summarize: a fair lock checks the lock state only once before entering the synchronization queue, and even if it finds the lock free it does not take it immediately but lets the threads already in the synchronization queue go first. This guarantees that under a fair lock all threads acquire the lock in first-come, first-served order, which is what makes the acquisition fair.
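The fair acquisition path can be sketched as a minimal fair mutex on AQS. FairMutex and its method names are illustrative, not JDK code; the only difference from the unfair sketch is the hasQueuedPredecessors() check before the CAS:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class FairMutex extends AbstractQueuedSynchronizer {
    @Override
    protected boolean tryAcquire(int acquires) {
        Thread current = Thread.currentThread();
        int c = getState();
        if (c == 0) {
            // Fair step: even though the lock is free, defer to any thread
            // already waiting in the synchronization queue.
            if (!hasQueuedPredecessors() && compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        } else if (current == getExclusiveOwnerThread()) {
            setState(c + acquires);   // reentrancy works the same as before
            return true;
        }
        return false;
    }

    @Override
    protected boolean tryRelease(int releases) {
        if (Thread.currentThread() != getExclusiveOwnerThread())
            throw new IllegalMonitorStateException();
        int c = getState() - releases;
        if (c == 0)
            setExclusiveOwnerThread(null);
        setState(c);
        return c == 0;
    }

    public void lock()        { acquire(1); }
    public void unlock()      { release(1); }
    public boolean isLocked() { return getState() != 0; }
}
```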

So why not make all locks fair? After all, fairness sounds like good behavior and unfairness like bad behavior. The reason is that suspending and waking threads is expensive and hurts system performance. Under heavy contention, a fair lock causes frequent suspend and wake operations, while an unfair lock reduces them, so the unfair lock performs better. Moreover, most threads hold a lock for only a very short time, and waking a thread involves some delay, so it is possible that while thread A is being woken up, thread B acquires the lock, uses it, and releases it. This is a win for both sides: thread A's acquisition is not delayed, thread B uses the lock earlier, and throughput improves.

5. Implementation mechanism of the condition queue

The built-in condition queue has some drawbacks. Each built-in lock can have only one associated condition queue, so multiple threads may wait on the same condition queue for different condition predicates. Then every call to notifyAll wakes all waiting threads; a thread that wakes up and finds that its own condition predicate does not hold suspends itself again. This causes many useless wake and suspend operations, wasting system resources and degrading performance. If you want to write a concurrent object with multiple condition predicates, or want more control over the condition queue than the built-in mechanism offers, you need to use an explicit Lock and Condition instead of the built-in lock and condition queue. A Condition is associated with a Lock, just as a condition queue is associated with a built-in lock; to create a Condition, call lock.newCondition() on the associated Lock. Let's start with an example of using Condition.
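A bounded buffer with two condition queues, in the style of the classic ArrayBlockingQueue example (the class and field names here are illustrative):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<E> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition(); // "space available"
    private final Condition notEmpty = lock.newCondition(); // "item available"
    private final Object[] items;
    private int putIdx, takeIdx, count;

    public BoundedBuffer(int capacity) {
        items = new Object[capacity];
    }

    public void put(E e) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)   // wait until "not full" holds
                notFull.await();
            items[putIdx] = e;
            putIdx = (putIdx + 1) % items.length;
            count++;
            notEmpty.signal();              // wake one waiting taker
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public E take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)              // wait until "not empty" holds
                notEmpty.await();
            E e = (E) items[takeIdx];
            items[takeIdx] = null;
            takeIdx = (takeIdx + 1) % items.length;
            count--;
            notFull.signal();               // wake one waiting putter
            return e;
        } finally {
            lock.unlock();
        }
    }
}
```

Note that each await() sits in a while loop rechecking the condition predicate, which guards against spurious wakeups.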

A Lock object can produce multiple condition queues; here two, notFull and notEmpty, are created. When the container is full, a thread calling the put method must block until the condition predicate "the container is not full" becomes true, at which point it wakes up and continues. When the container is empty, a thread calling the take method must likewise block until the predicate "the container is not empty" becomes true. These two kinds of threads wait on different condition predicates, so they block in two different condition queues and are woken at the right moment by calls to the API on the corresponding Condition object. The following is the implementation of the newCondition method.
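The method is quoted approximately from the OpenJDK 8 sources in the comments below; the fact that every Condition handed out by a ReentrantLock is an AQS ConditionObject can be checked at runtime (the demo class name is illustrative):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Quoted approximately from the JDK 8 sources:
//
//     // in ReentrantLock:
//     public Condition newCondition() {
//         return sync.newCondition();
//     }
//
//     // in ReentrantLock.Sync:
//     final ConditionObject newCondition() {
//         return new ConditionObject();
//     }
//
public class NewConditionDemo {
    public static boolean isAqsConditionObject() {
        Condition c = new ReentrantLock().newCondition();
        return c instanceof AbstractQueuedSynchronizer.ConditionObject;
    }
}
```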

The condition queues of ReentrantLock are implemented on top of AbstractQueuedSynchronizer: the Condition object returned by newCondition is an instance of the AQS inner class ConditionObject, and all operations on a condition queue go through the API provided by ConditionObject. For the concrete implementation of ConditionObject, see my article "Java Concurrency Series [4]: Condition Queues in the AbstractQueuedSynchronizer Source Code", which will not be repeated here. This concludes our analysis of the ReentrantLock source code; I hope reading this article helps readers understand and master ReentrantLock.

The content of this article was collected from the Internet and is provided for learning and reference; the copyright belongs to the original author.