An overview of the locks in Java

Preface

Recently, while reviewing locks, I sorted out the various locks in Java. This article introduces each of them; I hope it helps you.

Java locks

Optimistic lock

An optimistic lock embodies an optimistic assumption: reads are frequent, writes are rare, and concurrent writes are unlikely. Every time a thread reads the data, it assumes no one else will modify it, so it does not lock. Only when updating does it check whether anyone else modified the data in the meantime: it reads the current version number before writing, then performs the update only if the version number is unchanged since the read. If the check fails, it repeats the read-compare-write cycle. Optimistic locks in Java are generally implemented with CAS (compare-and-swap), an atomic update operation that compares the current value with an expected value and updates it only if they match; otherwise the operation fails.
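As a minimal sketch of the read-compare-write loop described above, `AtomicInteger.compareAndSet` (from `java.util.concurrent.atomic`) can implement a lock-free counter; the class name `OptimisticCounter` is just an illustrative choice:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Read the current value, compute the new one, and write it back only
    // if nobody else updated the value in between; otherwise retry.
    public int increment() {
        while (true) {
            int current = value.get();                 // read
            int next = current + 1;                    // compute
            if (value.compareAndSet(current, next)) {  // compare-and-swap
                return next;                           // success: no lock taken
            }
            // CAS failed: another thread won the race, so loop and retry.
        }
    }

    public int get() {
        return value.get();
    }
}
```

Even under concurrent increments, no update is lost, because a failed CAS simply retries with the freshly read value.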

Pessimistic lock

Pessimistic locking embodies the opposite assumption: writes are frequent and concurrent writes are likely. Every time a thread accesses the data, it assumes someone else will modify it, so it locks on every read and write, and other threads block until they can obtain the lock. The classic pessimistic lock in Java is synchronized. Locks built on the AQS framework, such as ReentrantLock, first attempt a CAS-based optimistic acquisition and fall back to pessimistic blocking if that fails.

Spin lock

The principle of a spin lock is simple. If the thread holding the lock can release it within a short time, the threads waiting for the lock need not switch between kernel mode and user mode to enter a blocked state; they simply wait in a loop (spin) and can acquire the lock immediately after the holder releases it, avoiding the cost of switching between user threads and the kernel.

Spinning consumes CPU; put bluntly, it makes the CPU do idle work. A thread cannot be allowed to occupy the CPU spinning indefinitely when it never gets the lock, so a maximum spin wait time must be set.

If the thread holding the lock executes for longer than the maximum spin wait time without releasing it, the contending threads that failed to acquire the lock within that time stop spinning and enter the blocked state.
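The spinning idea above can be sketched with a minimal, non-reentrant spin lock built on `AtomicReference` (this is an illustrative implementation, not the JVM's internal one; `Thread.onSpinWait()` is a JDK 9+ hint):

```java
import java.util.concurrent.atomic.AtomicReference;

// A minimal, non-reentrant spin lock: waiting threads busy-wait instead of
// blocking, which pays off only when the lock is held very briefly.
public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Busy-wait until we CAS ourselves in as the owner.
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait(); // JDK 9+ hint that we are in a spin loop
        }
    }

    public void unlock() {
        // Only the owning thread can release the lock.
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```

Guarding a shared counter with this lock keeps increments from being lost, at the cost of burning CPU while waiting. Note the sketch has no maximum spin time; a production spin lock would eventually fall back to blocking, as the text describes.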

Advantages and disadvantages of spin locks

A spin lock reduces thread blocking as much as possible. For code blocks where lock contention is mild and the lock is held for a very short time, it can greatly improve performance, because the cost of spinning is lower than that of blocking, suspending, and waking a thread, operations that cause two thread context switches.

However, if lock contention is fierce, or the thread holding the lock needs to occupy it for a long time to execute its synchronized block, a spin lock is not suitable: before acquiring the lock, the spinning threads keep the CPU busy with useless work, holding the resource without making progress. With a large number of threads contending for one lock, acquisition takes a long time, the cost of spinning exceeds that of blocking and suspending, and other threads that need CPU time cannot get it, wasting CPU. In such cases the spin lock should be turned off.

Spin lock time threshold

The point of a spin lock is to hold on to CPU resources without releasing them until the lock is acquired. But how long should a thread spin? If the spin time is too long, a large number of threads will sit in the spinning state occupying CPU resources, which hurts overall system performance. Choosing the spin duration is therefore especially important.

On the JVM's choice of spin duration: in JDK 1.5 the limit was fixed to a certain value; adaptive spin locks were introduced in 1.6. With adaptive spinning, the spin time is no longer fixed but is determined by the previous spin times on the same lock and the state of the lock's owner; roughly, the duration of one thread context switch is considered the best spin time. The JVM also makes many optimizations based on current CPU load: if the average load is less than the number of CPUs, keep spinning; if more than (CPUs / 2) threads are already spinning, block directly; if the spinning thread finds that the owner has changed, delay the spin (spin count) or block; if the CPU is in power-saving mode, stop spinning. The worst-case spin time is the CPU's store latency (the time between CPU A storing a value and CPU B learning of it). While spinning, the JVM also yields appropriately according to differences in thread priority.

Synchronized synchronization lock

Synchronized can treat any non-null object as a lock. It is an exclusive pessimistic lock, and it is reentrant.

Synchronized scope

1. Applied to an instance method, it locks the object instance (this);

2. Applied to a static method, it locks the Class instance. Because a class's metadata lives in the permanent generation, PermGen (Metaspace since JDK 1.8), which is globally shared, a static method lock is effectively a global lock for the class and blocks all threads calling that method;

3. Applied to an object instance in a synchronized block, it locks all code blocks that use that object as the lock. The monitor maintains multiple queues: when several threads access an object's monitor together, the monitor stores them in different containers.
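The three scopes above can be shown side by side in one sketch (the class name and the counters are illustrative only):

```java
public class SyncScopes {
    private static int classCalls = 0;
    private int instanceCalls = 0;
    private final Object mutex = new Object();

    // 1. Instance method: the lock is the object instance (this).
    public synchronized void instanceMethod() {
        instanceCalls++;
    }

    // 2. Static method: the lock is the Class object (SyncScopes.class),
    //    shared by all instances -- effectively a class-global lock.
    public static synchronized void staticMethod() {
        classCalls++;
    }

    // 3. Synchronized block: the lock is whatever object is named
    //    in the parentheses, here a private mutex object.
    public void blockScoped() {
        synchronized (mutex) {
            instanceCalls++;
        }
    }

    public int instanceCalls() { return instanceCalls; }
    public static int classCalls() { return classCalls; }
}
```

Note that `instanceMethod()` and `staticMethod()` use different locks (the instance vs. the Class object), so they do not exclude each other.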

Synchronized core components

1) Wait Set: threads blocked by calling wait() are placed here;

2) Contention List: all threads requesting the lock are first placed in this contention queue;

3) Entry List: threads from the Contention List that qualify as candidates are moved to the Entry List;

4) OnDeck: at most one thread is actively contending for the lock at any moment; that thread is called the OnDeck thread;

5) Owner: the thread that currently holds the lock;

6) !Owner: the thread that has just released the lock.

1. The JVM picks one element from the tail of the queue at a time as the lock-contention candidate (OnDeck). Under concurrency, however, the ContentionList is accessed by a large number of threads; to reduce contention on the tail element, the JVM moves some threads into the EntryList as candidate contenders.

2. When the Owner thread releases the lock, it migrates some threads from the ContentionList to the EntryList and designates one EntryList thread (usually the first) as the OnDeck thread.

3. The Owner thread does not hand the lock directly to the OnDeck thread; it only grants OnDeck the right to contend for it, and OnDeck must re-compete for the lock. This sacrifices some fairness but greatly improves system throughput; in the JVM, this choice is called "competitive switching".

4. Once the OnDeck thread acquires the lock, it becomes the Owner, while the threads that failed to get the lock remain in the EntryList. If the Owner is blocked by wait(), it moves to the WaitSet until some later notify() or notifyAll() wakes it, at which point it re-enters the EntryList.

5. Threads in the ContentionList, EntryList, and WaitSet are all in the blocked state; the blocking is implemented by the operating system (on Linux, via the pthread_mutex_lock function).

6. Synchronized is an unfair lock. Before a thread enters the ContentionList, it first tries to spin to acquire the lock; only if that fails does it join the ContentionList. This is clearly unfair to the threads already waiting in the queue. Another unfairness: a thread that spins its way to the lock may directly seize the lock resource from the OnDeck thread.

Reference: https://blog.csdn.net/zqz_zqz/article/details/70233767

7. Every object has a monitor object, and locking means competing for that monitor. Block-level locking is implemented by emitting monitorenter and monitorexit instructions around the block; method-level locking is indicated by a flag bit on the method.

8. Synchronized is (historically) a heavyweight operation that must call into the operating system, so its performance is poor; blocking a thread may consume more time than the useful work itself.

9. In Java 1.6, synchronized was optimized with adaptive spinning, lock elimination, lock coarsening, lightweight locks, and biased locks, substantially improving its efficiency. Java 1.7 and 1.8 further optimized the keyword's implementation: with biased and lightweight locks, mark bits in the object header allow locking without involving the operating system.

ReentrantLock

ReentrantLock implements the Lock interface and the methods it defines. It is a reentrant lock that, besides doing everything synchronized can, also provides tools for avoiding multi-thread deadlock, such as interruptible lock waits, pollable lock requests, and timed lock requests.

Main methods of the Lock interface

1. void lock(): if the lock is free when this method executes, the current thread acquires it; if the lock is held by another thread, the current thread is disabled for scheduling until it acquires the lock.

2. boolean tryLock(): if the lock is available, acquires it and returns true immediately; otherwise returns false. The difference from lock() is that tryLock() only "attempts" to acquire the lock: if it is unavailable, the current thread is not disabled and simply continues executing, whereas lock() waits, and the thread does not proceed until the lock is obtained.

3. void unlock(): releases the lock held by the current thread. Only the holder may release the lock; calling this method from a thread that does not hold it results in an exception.

4. Condition newCondition(): returns a Condition, the wait/notify component bound to this lock. The current thread may call the component's await() method only while holding the lock; after the call, the thread releases the lock.

5. getHoldCount(): the number of times the current thread holds this lock, i.e. the number of lock() calls not yet matched by unlock().

6. getQueueLength(): an estimate of the number of threads waiting to acquire this lock. For example, if 10 threads start and 1 thread holds the lock, this returns 9.

7. getWaitQueueLength(Condition condition): an estimate of the number of threads waiting on the given condition associated with this lock. For example, if 10 threads have all executed await() on the same Condition object, this returns 10.

8. hasWaiters(Condition condition): whether any threads are waiting on the given condition associated with this lock, i.e. whether any thread has executed condition.await().

9. hasQueuedThread(Thread thread): whether the given thread is waiting to acquire this lock.

10. hasQueuedThreads(): whether any threads are waiting to acquire this lock.

11. isFair(): whether this lock is fair.

12. isHeldByCurrentThread(): whether the current thread holds this lock: true between the thread's lock() and unlock() calls, false otherwise.

13. isLocked(): whether this lock is held by any thread.

14. lockInterruptibly(): acquires the lock unless the current thread is interrupted.

15. tryLock(): acquires the lock only if it is not held by another thread at the time of the call.

16. tryLock(long timeout, TimeUnit unit): acquires the lock if it is not held by another thread within the given waiting time.

Unfair lock

An unfair lock is one where the JVM allocates the lock by a random, proximity-based mechanism. ReentrantLock's constructor lets you choose whether to initialize a fair lock; the default is unfair. In practice, unfair locks perform far better than fair locks, so unless the program specifically needs fairness, the unfair allocation mechanism is the common choice.

Fair lock

A fair lock allocates the lock fairly: in general, the thread that requested the lock first is granted it first. ReentrantLock's constructor provides a fair-lock initialization option for defining a fair lock.

Reentrantlock and synchronized

1. ReentrantLock uses the lock() and unlock() methods. Unlike synchronized, which the JVM unlocks automatically, ReentrantLock must be unlocked manually after locking. To avoid a situation where an exception prevents normal unlocking, the unlock call must be placed in a finally block.

2. Compared with synchronized, ReentrantLock offers interruptible waits, fair locking, and multiple condition variables per lock. Use ReentrantLock when you need those features.
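The lock-in-finally pattern from point 1 looks like this in practice (the `Account` class is an illustrative example, not from the original text):

```java
import java.util.concurrent.locks.ReentrantLock;

public class Account {
    private final ReentrantLock lock = new ReentrantLock(); // unfair by default
    private int balance;

    public void deposit(int amount) {
        lock.lock();          // explicit locking, unlike synchronized
        try {
            balance += amount;
        } finally {
            lock.unlock();    // must sit in finally: an exception inside the
        }                     // critical section must not leave the lock held
    }

    public int balance() {
        lock.lock();
        try {
            return balance;
        } finally {
            lock.unlock();
        }
    }
}
```

If `unlock()` were placed after the critical section without `finally`, any exception thrown inside would leave the lock held forever and deadlock every later caller.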

Differences between Condition methods and Object lock methods

1. The await() method of Condition is equivalent to the wait() method of Object.

2. The signal() method of Condition is equivalent to the notify() method of Object.

3. The signalAll() method of Condition is equivalent to the notifyAll() method of Object.

4. ReentrantLock can wake up threads waiting on a specified condition, whereas Object's wake-up is arbitrary. To wake threads by condition, simply create additional Condition objects.
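Point 4 is exactly what a bounded buffer needs: one Condition per kind of waiter. Below is a minimal one-slot buffer sketch (the class name is illustrative); `notFull`/`notEmpty` play the roles that a single wait set cannot separate with Object.notify():

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Each Condition is a separate wait set on the same lock, so we can wake
// exactly the kind of thread we want -- unlike Object.notify(), which
// picks an arbitrary waiter.
public class OneSlotBuffer<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private T item;

    public void put(T t) throws InterruptedException {
        lock.lock();
        try {
            while (item != null) notFull.await();  // plays the role of wait()
            item = t;
            notEmpty.signal();                     // plays the role of notify()
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (item == null) notEmpty.await();
            T t = item;
            item = null;
            notFull.signal();
            return t;
        } finally {
            lock.unlock();
        }
    }
}
```

A producer blocked in `put` only ever waits on `notFull`, so a `take` never wastes a wake-up on another consumer.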

The difference between tryLock, lock, and lockInterruptibly

1. tryLock() returns true if it can acquire the lock, and returns false immediately if it cannot. tryLock(long timeout, TimeUnit unit) adds a time limit: if the lock is not obtained within that period, it returns false.

2. lock() acquires the lock if it can; if it cannot, it waits until the lock is obtained.

3. If two threads execute lock() and lockInterruptibly() respectively and both threads are then interrupted, lock() does not throw an exception, while lockInterruptibly() throws one.
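The behavior in point 1 can be demonstrated by holding the lock in one thread while a second thread tries both forms of tryLock (the class and method names are illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    // Runs both tryLock variants from a second thread while the calling
    // thread holds the lock, and reports what each returned.
    static boolean[] contendedResults() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        boolean[] results = new boolean[2];

        lock.lock(); // this thread holds the lock for the whole experiment
        try {
            Thread t = new Thread(() -> {
                try {
                    // Fails immediately: the lock is busy.
                    results[0] = lock.tryLock();
                    // Waits up to 50 ms, then gives up and returns false.
                    results[1] = lock.tryLock(50, TimeUnit.MILLISECONDS);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            t.start();
            t.join();
        } finally {
            lock.unlock();
        }
        return results; // both false, since the lock was held throughout
    }
}
```

Neither call blocks the second thread indefinitely, which is precisely the difference from lock().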

Benefits of reentrant locks

Suppose a thread already owns the lock and then needs it again, for example when one locked method calls another. With a reentrant lock, the call proceeds directly; the thread does not have to wait to re-acquire a lock it already holds.

Semaphore

Semaphore is a counting semaphore. It sets a threshold: multiple threads compete to acquire permit signals and return them after finishing their own work; once the threshold is exceeded, further acquisition attempts block. A Semaphore can be used to build object pools and resource pools, such as database connection pools.

Implement mutex (counter is 1)

We can also create a Semaphore with a count of 1 as a mechanism similar to a mutual-exclusion lock; this is called a binary semaphore and represents two mutually exclusive states.

Other uses

You can also create a semaphore from which each worker thread takes a permit. Afterwards, check the remaining count: if it equals the initial count, the worker threads have all finished and the main thread can continue.

Semaphore and ReentrantLock

Semaphore can do essentially everything ReentrantLock can, and its usage is similar: it acquires and releases the critical resource through the acquire() and release() methods. Measured in practice, Semaphore.acquire() is interrupt-responsive by default, with the same effect as ReentrantLock.lockInterruptibly(): while waiting for the critical resource, the thread can be interrupted via Thread.interrupt().

In addition, Semaphore implements pollable and timed acquisition: apart from the method name tryAcquire() instead of tryLock(), its usage is almost identical to ReentrantLock. Semaphore likewise provides fair and unfair modes, which can also be set in the constructor.

A Semaphore's release is also manual. Therefore, as with ReentrantLock, the release must be placed in a finally block so that a thrown exception cannot prevent the permit from being returned.
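A sketch of the resource-pool idea mentioned above, with acquire/release in the same try/finally shape as lock/unlock (the `PooledResource` class is an illustrative stand-in for a real connection pool):

```java
import java.util.concurrent.Semaphore;

// At most 'size' threads may be inside use() at once; the rest block
// in acquire() until a permit is returned.
public class PooledResource {
    private final Semaphore permits;

    public PooledResource(int size) {
        this.permits = new Semaphore(size, true); // 'true' selects fair mode
    }

    public void use(Runnable work) throws InterruptedException {
        permits.acquire();       // take a permit, blocking if none are left
        try {
            work.run();          // ... use the pooled resource ...
        } finally {
            permits.release();   // always return the permit, like unlock()
        }
    }

    public int available() {
        return permits.availablePermits();
    }
}
```

After every use() completes, availablePermits() is back at the initial count, which is the "all threads finished" check described in the Other uses section.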

AtomicInteger

First, AtomicInteger is an integer class that provides atomic operations. Common variants include AtomicBoolean, AtomicInteger, AtomicLong, and AtomicReference; their implementation principle is the same, and they differ only in the type of the operated object. Excitingly, AtomicReference&lt;V&gt; can also turn all updates of an object reference into atomic operations.

We know that in multithreaded programs, operations such as ++i or i++ are not atomic and are therefore thread-unsafe. We would normally use synchronized to make such an operation atomic, but the JDK provides dedicated synchronization classes for exactly this kind of operation, which are more convenient to use and make the program run more efficiently. According to some measurements, AtomicInteger's performance is typically several times that of ReentrantLock.
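A small sketch of replacing the unsafe `counter++` with `incrementAndGet()` (the helper method and its parameters are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    // Increments a shared counter from several threads without any lock.
    static int countWithThreads(int threads, int perThread) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    counter.incrementAndGet(); // atomic replacement for counter++
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return counter.get(); // always threads * perThread: no update is lost
    }
}
```

With a plain `int` field and `counter++`, some increments would be lost under contention; the atomic version always yields the exact total.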

Reentrant lock (recursive lock)

This section discusses the reentrant lock in the broad sense, not a specific Java class. A reentrant lock, also called a recursive lock, means that after an outer function of a thread acquires the lock, inner or recursive functions of the same thread can still acquire it without being blocked. In the Java environment, both ReentrantLock and synchronized are reentrant locks.
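Reentrancy is easiest to see with two synchronized methods on the same object (the class name is illustrative):

```java
public class ReentrancyDemo {
    // Both methods lock 'this'. Because synchronized is reentrant, the
    // thread that already holds the lock inside outer() can enter inner()
    // directly instead of deadlocking against itself.
    public synchronized String outer() {
        return "outer->" + inner();
    }

    public synchronized String inner() {
        return "inner";
    }
}
```

If synchronized were not reentrant, the call from `outer()` into `inner()` would block forever waiting for a lock the thread itself holds.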

Fair lock and unfair lock

Fair lock

Before locking, check whether any threads are queued; queued threads are given priority: first come, first served.

Unfair lock (nonfair)

When locking, the waiting queue is ignored and the thread tries to grab the lock directly; if that fails, it automatically joins the tail of the queue.

1. Unfair locks perform 5 to 10 times better than fair locks, because on multicore machines a fair lock must maintain a queue.

2. synchronized in Java is an unfair lock, and ReentrantLock's default lock() method also uses an unfair lock.

ReadWriteLock read-write lock

To improve performance, Java provides read-write locks: use a read lock where data is read and a write lock where it is written, controlled flexibly. As long as no write lock is held, reads do not block, which improves the program's efficiency to some extent. A read-write lock is split into a read lock and a write lock: multiple read locks are not mutually exclusive, but read locks and write locks exclude each other, as do write locks among themselves. The JVM controls this itself; you only need to acquire the appropriate lock.

Read lock

If your code only reads data, and many threads may read it at the same time but none may write concurrently, use the read lock.

Write lock

If your code modifies data, and only one thread may write while no one may read at the same time, use the write lock. In short: take the read lock when reading and the write lock when writing!

Java provides the read-write lock interface java.util.concurrent.locks.ReadWriteLock, along with a concrete implementation, ReentrantReadWriteLock.
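A typical use is a read-mostly cache sketch like the one below (the `SimpleCache` class is illustrative): readers share the read lock, while a writer takes the exclusive write lock.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SimpleCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    public V get(K key) {
        rw.readLock().lock();         // many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    public void put(K key, V value) {
        rw.writeLock().lock();        // exclusive: blocks readers and writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```

Compared with guarding the HashMap with one exclusive lock, concurrent `get` calls here never block each other.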

Shared and exclusive locks

Exclusive lock

In exclusive mode, only one thread can hold the lock at a time. ReentrantLock is a mutual-exclusion lock implemented in exclusive mode.

The exclusive lock is a pessimistic, conservative locking strategy: it also rules out read/read concurrency, so if one reading thread holds the lock, all other readers can only wait. This limits concurrency unnecessarily, because read operations do not affect data consistency.

Shared lock

A shared lock allows multiple threads to acquire the lock and access the shared resource concurrently, as with ReadWriteLock's read lock. The shared lock is an optimistic, relaxed locking strategy that lets multiple reading threads access the shared resource at the same time.

1. The AQS inner class Node defines two constants, SHARED and EXCLUSIVE, which identify the lock-acquisition mode of the waiting threads in the AQS queue.

2. The java.util.concurrent package provides ReadWriteLock, a read-write lock that allows a resource to be accessed by multiple read operations, or by one write operation, but never both at the same time.

Mutex lock

Synchronized is implemented through an internal lock on the object called a monitor. The monitor lock in turn relies on the underlying operating system's mutex. Switching threads requires the operating system to move from user mode to kernel mode; this cost is very high, and the transition between the two states takes a relatively long time, which is why synchronized was inefficient. A lock that depends on the OS mutex in this way is called a "heavyweight lock". The core of the various synchronized optimizations in the JDK is reducing the use of this heavyweight lock.

From JDK 1.6 onward, to reduce the performance cost of acquiring and releasing locks, "lightweight locks" and "biased locks" were introduced.

Lightweight Locking

There are four lock states: unlocked, biased lock, lightweight lock, and heavyweight lock.

Lock upgrade

As lock contention grows, a lock can be upgraded from a biased lock to a lightweight lock, and then to a heavyweight lock. The upgrade is one-way: a lock can only go from low to high, and it is never downgraded.

"Lightweight" is relative to traditional locks implemented using operating system mutexes. However, first of all, it should be emphasized that lightweight locks are not used to replace heavyweight locks. Its original intention is to reduce the performance consumption caused by the use of traditional heavyweight locks without multi-threaded competition. Before explaining the execution process of lightweight locks, let's understand that the scenario for lightweight locks is that threads alternately execute synchronization blocks. If the same lock is accessed at the same time, it will lead to the expansion of lightweight locks into heavyweight locks.

Biased lock

Through earlier research, the HotSpot authors found that in most cases a lock is not only free of multi-thread contention but is also acquired many times by the same thread. Biased locking was introduced so that, after a thread acquires the lock, re-entry by that thread costs nothing (no CAS): the lock becomes "biased" toward it. The goal is to minimize the unnecessary lightweight-lock path when there is no contention at all: acquiring and releasing a lightweight lock requires multiple CAS atomic instructions, while a biased lock needs only one CAS, when the ThreadID is installed in the object header. (Since a biased lock must be revoked once multi-thread contention appears, the cost of revocation must remain below the CAS savings.) In summary: the lightweight lock improves performance when threads alternate in synchronized blocks, while the biased lock improves it further when only a single thread ever executes them.

Sectional lock

A segmented lock is not an actual lock but an idea; ConcurrentHashMap is the best-known practice of the segmented-lock idea.
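A hypothetical striped counter (not ConcurrentHashMap's actual implementation) illustrates the idea: spread contention over several independent slots instead of one hot spot.

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Segmented-lock idea in miniature: 16 independent cells, so concurrent
// threads usually update different cells and rarely contend.
public class StripedCounter {
    private final AtomicLongArray cells = new AtomicLongArray(16);

    public void increment() {
        // Route each thread to a cell by hashing its id (illustrative scheme).
        int i = (int) (Thread.currentThread().getId() & 15);
        cells.incrementAndGet(i);
    }

    public long sum() {
        // A read combines all segments, like sizing a segmented map.
        long s = 0;
        for (int i = 0; i < cells.length(); i++) {
            s += cells.get(i);
        }
        return s;
    }
}
```

This is the same trade ConcurrentHashMap makes: updates touch only one segment, while aggregate reads must visit all of them.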

Lock optimization

Reduce lock holding time

Lock only the code that actually has thread-safety requirements.

Reduce lock granularity

Split one large object (which may be accessed by many threads) into several small ones. This greatly increases parallelism and reduces lock contention; with less contention, biased and lightweight locks succeed more often. The most typical case of reducing lock granularity is ConcurrentHashMap.

Lock separation

The most common lock separation is ReadWriteLock, which splits a lock into a read lock and a write lock by function: reads do not exclude each other, while reads and writes do, which preserves thread safety and improves performance. The read-write separation idea in the JDK concurrent package can be extended: as long as two operations do not affect each other, their locks can be separated. For example, LinkedBlockingQueue takes elements from the head and puts elements at the tail under separate locks.

Lock coarsening

Generally, to keep threads effectively concurrent, each thread should hold a lock for as short a time as possible, releasing it immediately after using the shared resource. But everything has a limit: if the same lock is continuously requested, used for synchronization, and released in a tight sequence, the churn itself consumes valuable system resources and hurts performance, so adjacent lock regions may be merged ("coarsened") into one.

Lock elimination

Lock elimination happens at the compiler level: in the JIT compiler, if an object is found to be impossible to share, the locking operations on that object can be eliminated. Such cases are mostly caused by non-standard coding practices.

The content of this article was collected from the web and is provided for learning and reference; copyright belongs to the original authors.