Analysis of synchronized principle

1. Introduction to synchronized

  in concurrent programs, synchronized may be the most frequently used keyword. It prevents thread-safety problems by synchronizing access to code, and it does so through implicit locking: the locking process is handled by the JVM and is reflected in the generated bytecode. If we decompile the class file of a synchronized code block whose lock cannot be optimized away, we find two special instructions, monitorenter and monitorexit, which mark entry into and exit from the monitor. The reason for saying "cannot be optimized away" is that lock optimization is also performed, at runtime, by the JIT compiler. For example, StringBuffer locks on itself in its synchronized methods, yet when such a buffer never escapes the method that creates it, the JIT's lock elimination (based on escape analysis) removes the locking entirely.
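To make the lock-elimination scenario concrete, here is a minimal sketch (the class and method names are invented for illustration): every append() call is synchronized on the StringBuffer itself, but because the buffer never escapes the method, escape analysis lets the JIT elide that locking.

public class LockElisionExample {
    // The StringBuffer is a purely local object: it never escapes this
    // method, so escape analysis allows the JIT to remove the locking
    // that its synchronized append() calls would otherwise perform.
    public static String concat(String a, String b) {
        StringBuffer sb = new StringBuffer();
        sb.append(a);   // append() is a synchronized method of StringBuffer
        sb.append(b);   // the lock on sb can be eliminated at JIT time
        return sb.toString();
    }
}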

   the general scenarios for using synchronized are: on instance methods, on static (class) methods, and on custom synchronized code blocks. However, synchronized methods and synchronized code blocks are implemented differently. A synchronized method is marked with the ACC_SYNCHRONIZED access flag in the bytecode and synchronizes on that basis, while a synchronized code block uses the lock pointer in the object header, which points to a monitor (lock), to complete the synchronization.

  when a method is called, the invocation instruction checks whether the method's ACC_SYNCHRONIZED access flag is set. If it is, the executing thread first acquires the monitor; only after acquiring it successfully can it execute the method body, and it releases the monitor after the method finishes. While the method is executing, no other thread can acquire the same monitor. In essence there is no difference between the two forms; method synchronization is simply implemented implicitly, without explicit monitorenter/monitorexit instructions in the bytecode.

2. Object header and lock

  an object is laid out in memory in three parts: the object header, instance data and alignment padding.
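This layout can be inspected with OpenJDK's JOL tool; a minimal sketch, assuming the jol-core library (org.openjdk.jol) is on the classpath:

import org.openjdk.jol.info.ClassLayout;

public class LayoutExample {
    public static void main(String[] args) {
        Object o = new Object();
        // Prints the object header (mark word + class pointer), the
        // instance data and any alignment padding of this instance.
        System.out.println(ClassLayout.parseInstance(o).toPrintable());
    }
}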

   here we mainly analyze the heavyweight lock, that is, the synchronized object lock, whose lock flag bits in the mark word are 10; in that state the pointer in the mark word points to the monitor object (the monitor used by the synchronized code block). Every object has a monitor associated with it, and the relationship between an object and its monitor can be implemented in various ways: the monitor may be created and destroyed together with the object, or generated automatically when a thread first attempts to acquire the object's lock. Once a monitor is held by a thread, it is in the locked state. In the HotSpot virtual machine, the monitor is implemented by ObjectMonitor, whose main data structure is as follows.

ObjectMonitor() {
    _count     = 0;     // number of acquisitions recorded by the monitor
    _owner     = NULL;  // the thread currently holding the monitor
    // two queues
    _WaitSet   = NULL;  // threads that called wait() are added to _WaitSet
    _EntryList = NULL;  // threads that lost the lock competition are added to this list
}

  ObjectMonitor has two queues, _WaitSet and _EntryList, which hold lists of ObjectWaiter objects (every thread waiting for the lock is wrapped in an ObjectWaiter), while _owner points to the thread that currently holds the ObjectMonitor. When multiple threads access the same piece of synchronized code at the same time, they first enter the _EntryList collection. When a thread acquires the object's monitor it enters the _owner area: the owner variable in the monitor is set to the current thread and the monitor's counter count is incremented by 1. If the thread calls the wait() method, it releases the monitor it holds, the owner variable is reset to null, count is decremented by 1, and the thread enters the _WaitSet collection to wait to be woken up. When the current thread finishes executing, it likewise releases the monitor (lock) and resets the variables so that other threads can enter and acquire the monitor (lock).
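The transitions described above can be illustrated with a small example (class and variable names are invented for this sketch): a thread that calls wait() inside a synchronized block releases the monitor and moves into the wait set, while a thread blocked on the synchronized keyword sits in the entry list until the monitor becomes free.

public class MonitorQueuesExample {
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {       // acquire the monitor (this thread becomes the owner)
                try {
                    lock.wait();        // release the monitor and join the wait set
                } catch (InterruptedException ignored) { }
            }
        });
        waiter.start();
        Thread.sleep(100);              // give the waiter time to reach wait()

        synchronized (lock) {           // the main thread now owns the monitor
            lock.notify();              // move the waiter from the wait set to the entry list
        }                               // releasing the monitor lets the waiter re-acquire it
        waiter.join();
    }
}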

3. Synchronized code block principle

Compile the following class and decompile it (for example with javap -c) to obtain the bytecode shown after it:

public class SynchronizedTest {
    public static void main(String[] args) {
        synchronized (SynchronizedTest.class) {
            System.out.println("hello");
        }
    }

    public synchronized void test(){

    }
}
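A representative excerpt of the resulting bytecode for main (exact offsets and constant-pool indices depend on the compiler and JDK version used):

public static void main(java.lang.String[]);
  Code:
     0: ldc           #2   // class SynchronizedTest
     2: dup
     3: astore_1
     4: monitorenter        // acquire the monitor of SynchronizedTest.class
     5: getstatic     #3   // Field java/lang/System.out:Ljava/io/PrintStream;
     8: ldc           #4   // String hello
    10: invokevirtual #5   // Method java/io/PrintStream.println:(Ljava/lang/String;)V
    13: aload_1
    14: monitorexit         // release on the normal path
    15: goto          22
    18: astore_2
    19: aload_1
    20: monitorexit         // release inside the compiler-generated exception handler
    21: athrow
    22: return
  Exception table:
     from    to  target type
         5    15      18   any
        18    21      18   any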

   when the monitorenter instruction is executed, the current thread attempts to acquire ownership of the monitor of objectref (i.e. the object lock). If the monitor's entry counter is 0, the thread acquires the monitor and sets the counter to 1; the lock has been acquired successfully. If the current thread already owns the monitor of objectref, it may reenter it, and the counter is incremented by 1 on each reentry. If another thread already owns the monitor of objectref, the current thread blocks until the owning thread finishes, that is, until it executes the monitorexit instruction, releases the monitor (lock) and sets the counter back to 0; other threads then have the opportunity to acquire the monitor. It is worth noting that the compiler guarantees that every monitorenter instruction has a matching monitorexit executed, no matter how the method completes, whether it ends normally or abnormally. To ensure that monitorexit is still executed when the method completes abruptly with an exception, the compiler automatically generates an exception handler that declares it can handle any exception and whose sole purpose is to execute the monitorexit instruction. That is why two monitorexit instructions appear in the bytecode above.

4. Principle of synchronized method

  first look at the result of decompiling a synchronized instance method: it does have one more access flag than an ordinary method. Method-level synchronization is implicit; it is not controlled by bytecode instructions but is carried out as part of method invocation and return. When a method is called, the invocation instruction checks whether the method's ACC_SYNCHRONIZED access flag is set. If it is, the executing thread first acquires the monitor, then executes the method body, and finally releases the monitor when the method completes, whether normally or abnormally. While the method is executing, the executing thread holds the monitor and no other thread can acquire it. If an exception is thrown during the execution of a synchronized method and is not handled inside the method, the monitor held by the method is released automatically when the exception propagates out of the method.
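For the test() method in the example above, javap -v prints something like the following (the exact formatting depends on the JDK version); note the extra ACC_SYNCHRONIZED flag compared with an ordinary method:

public synchronized void test();
  descriptor: ()V
  flags: ACC_PUBLIC, ACC_SYNCHRONIZED
  Code:
    stack=0, locals=1, args_size=1
       0: return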

5. Biased lock

  biased locking is an elegant locking optimization designed in Java to improve program performance. Its core idea is that once a thread acquires a lock, the lock enters biased mode and the mark word takes on the biased-lock layout. When the same thread requests the lock again, it does not need to go through the acquisition process at all; only if another thread competes for the lock does it have to inflate into a lightweight lock. This saves a large number of lock-acquisition operations and thus improves the performance of the program.

   therefore, in scenarios with no lock contention, biased locking is a very effective optimization; after all, it is quite likely that the same thread will acquire the same lock several times in a row. Under heavy lock contention, however, biased locking breaks down, because the thread requesting the lock is likely to be different each time, so biased locking should not be used there or the cost will outweigh the gain. Note that when a biased lock is revoked it does not inflate into a heavyweight lock immediately; it is first upgraded to a lightweight lock.

   the process of acquiring a biased lock is as follows. When the lock object is acquired by a thread for the first time, the virtual machine sets the lock flag bits in the object header to "01" and the biased-lock bit to 1, i.e. biased mode, and at the same time uses a CAS operation to record the ID of the acquiring thread in the biased-thread-ID field of the object's mark word. If the CAS operation succeeds, then every time the thread holding the biased lock enters a synchronized block associated with this lock it only checks whether the thread ID in the mark word matches its own; if it does, the thread is considered to hold the lock and the virtual machine does not need to perform any further synchronization work (such as locking, unlocking or updating the mark word).
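On JDK 8, where biased locking is enabled by default after a startup delay, the bias can be observed with JOL; a minimal sketch, assuming jol-core is on the classpath and the JVM is started with -XX:BiasedLockingStartupDelay=0:

import org.openjdk.jol.info.ClassLayout;

public class BiasedLockExample {
    public static void main(String[] args) {
        Object lock = new Object();
        // Before any locking: the mark word is in the biasable, not-yet-biased state.
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());

        synchronized (lock) {
            // Inside the block the mark word records this thread as the bias
            // owner (biased-lock bit 1 plus lock flag bits 01).
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        }
    }
}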

   in fact, a biased lock is rarely released actively. Revocation only happens when another thread tries to acquire the lock, that is, when the lock is no longer used by a single thread but possibly by two threads alternately. Whether the lock is then reverted to the unlocked state or upgraded to the lightweight-lock state depends on whether the object is locked at the time of revocation.

6. Lightweight lock

   a lightweight lock targets the situation where two threads use the lock alternately: since they never contend for it at the same moment, a lightweight lock suffices, which is a relatively harmonious state. Its basic idea is that when a thread wants to acquire the lock, it copies the mark word of the lock object into a lock record on top of its own stack and then performs a CAS operation that tries to replace the mark word of the lock object with a pointer to that stack copy. If the CAS succeeds, the current thread owns the lock and can execute the synchronized block. If it fails there are two possibilities: either the mark word already points into the current thread's own stack, in which case the thread already holds the lock and simply continues, or the lock object has been taken by another thread, which means two threads are competing for the lock at the same time; the harmonious situation is broken, the lock must inflate into a heavyweight lock, the flag bits of the lock object are changed, and the threads waiting for the lock are blocked.

   When the lightweight lock is released, a CAS operation is used to write the mark word saved on the stack back into the lock object. If that CAS fails, another thread has contended for the lock and already changed the mark-word flag bits of the lock object, so in addition to releasing the lock the waiting threads must be woken up.
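A purely conceptual sketch of the CAS idea behind lightweight locking, written with java.util.concurrent.atomic instead of the JVM's real mark-word machinery (all names here are invented; the actual implementation lives inside HotSpot):

import java.util.concurrent.atomic.AtomicReference;

public class CasLockSketch {
    // Stand-in for the mark word: null means "unlocked", otherwise it
    // holds the owning thread (playing the role of the lock-record pointer).
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public boolean tryLock() {
        Thread me = Thread.currentThread();
        // The CAS succeeds only if the "mark word" is still unlocked, mirroring
        // the attempt to install a pointer to the stack copy; the second check
        // covers the case where it already points at the current thread.
        return owner.compareAndSet(null, me) || owner.get() == me;
    }

    public void unlock() {
        // CAS the "mark word" back to the unlocked state; a failure here would
        // correspond to the lock having been inflated by a competing thread.
        owner.compareAndSet(Thread.currentThread(), null);
    }
}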
