Deep parsing of the Java Memory Model: sequential consistency (repost)

Original address: http://www.codeceo.com/article/java-memory-3.html

Data races and the sequential consistency guarantee

When a program is not correctly synchronized, data races occur. The Java Memory Model specification defines a data race as follows: a write of a variable in one thread, a read of the same variable in another thread, and the write and the read not ordered by synchronization.

When code contains a data race, executing the program often produces counterintuitive results (as with the example in the previous chapter). If a multithreaded program is correctly synchronized, it is a program without data races.
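To make the notion concrete, here is a minimal sketch of a data race, modelled on the example from the previous chapter (the field and method names are assumptions for illustration): one thread calls writer() while another calls reader(), and nothing orders the accesses.

```java
class DataRaceExample {
    int a = 0;
    boolean flag = false;

    void writer() {          // run by thread A
        a = 1;               // write to a
        flag = true;         // write to flag; nothing orders these writes
    }                        // with the reads below -- a data race

    int reader() {           // run by thread B
        if (flag) {          // racy read of flag
            return a;        // racy read of a: may observe 0, not 1
        }
        return -1;           // flag not (yet) observed as true
    }
}
```

Run single-threaded the outcome is deterministic; only when writer() and reader() run in different threads without synchronization do the counterintuitive results appear.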

JMM makes the following guarantee about the memory consistency of correctly synchronized multithreaded programs: if a program is correctly synchronized, its execution will be sequentially consistent; that is, the result of executing the program will be the same as the result of executing it in the sequentially consistent memory model. As we will see shortly, this is a very strong guarantee for programmers.

The sequential consistency memory model

The sequential consistency memory model is an idealized theoretical reference model devised by computer scientists. It provides programmers with extremely strong guarantees of memory visibility. The sequential consistency memory model has two characteristics: (1) all operations in a thread must execute in program order; (2) whether or not the program is synchronized, all threads see a single, total order of operation execution, and in this model every operation must execute atomically and be immediately visible to all threads.

The view that the sequential consistency memory model provides to programmers is as follows:

Conceptually, the sequential consistency model has a single global memory, which can be connected to any one thread through a switch that swings from side to side. At the same time, every thread must perform its memory read/write operations in program order. As the figure above shows, at most one thread can be connected to memory at any point in time. When multiple threads execute concurrently, the switch in the figure serializes all memory read/write operations of all threads.
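The single memory and its switch can be modelled in code. The following is a conceptual sketch only (not how real hardware works, and the class and method names are invented for illustration): one lock plays the role of the switch, so at most one thread touches memory at a time and all accesses form a single serial order.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual model of sequentially consistent memory:
// every read/write must acquire the single "switch" lock,
// so all accesses from all threads are serialized.
class SequentialMemory {
    private final Object switchLock = new Object();        // the "switch"
    private final Map<String, Integer> cells = new HashMap<>();

    int read(String address) {
        synchronized (switchLock) {   // connect this thread to memory
            return cells.getOrDefault(address, 0);
        }
    }

    void write(String address, int value) {
        synchronized (switchLock) {   // at most one thread at a time
            cells.put(address, value);
        }
    }
}
```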

For a better understanding, we further illustrate the characteristics of the sequential consistency model with two schematic diagrams.

Suppose two threads, A and B, execute concurrently. Thread A has three operations whose program order is A1 -> A2 -> A3. Thread B also has three operations, whose program order is B1 -> B2 -> B3.

Suppose the two threads use a monitor to synchronize correctly: thread A releases the monitor after its three operations have executed, and thread B then acquires the same monitor. The execution of the program in the sequential consistency model will then look like the following figure:

Now suppose the two threads are not synchronized. The following figure shows the execution of the unsynchronized program in the sequential consistency model:

In the sequential consistency model, although the overall execution order of an unsynchronized program is disordered, all threads can only see one consistent overall execution order. Taking the figure above as an example, the execution order seen by both thread A and thread B is: B1 -> A1 -> A2 -> B2 -> A3 -> B3. This guarantee holds because every operation in the sequential consistency memory model must be immediately visible to every thread.

JMM, however, makes no such guarantee. In JMM, not only is the overall execution order of an unsynchronized program disordered, but the execution orders seen by different threads may also be inconsistent. For example, while the current thread keeps written data cached in its local memory and has not yet flushed it to main memory, the write is visible only to the current thread; from the perspective of other threads, it appears the write has not been performed at all. Only after the current thread flushes the data from local memory to main memory does the write become visible to other threads. In this situation, the current thread and other threads see inconsistent orders of operation execution.
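One way in Java to force the flush just described is a volatile write: a write to a volatile field is made visible in main memory, and a subsequent volatile read of that field sees it, together with everything written before it. A minimal sketch, with field names assumed for illustration:

```java
class VolatileVisibility {
    int a = 0;
    volatile boolean flag = false;   // volatile: never invisibly cached

    void writer() {
        a = 1;          // ordinary write, published by the volatile write
        flag = true;    // volatile write: flushed to main memory
    }

    int reader() {
        if (flag) {     // volatile read: sees the write of flag...
            return a;   // ...and therefore also sees a == 1
        }
        return -1;      // flag not yet observed as true
    }
}
```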

The sequential consistency effect of synchronized programs

Below, we use a monitor to synchronize the earlier example program ReorderExample and see how a correctly synchronized program obtains sequential consistency.

See the following example code:

class SynchronizedExample {
    int a = 0;
    boolean flag = false;

    public synchronized void writer() {
        a = 1;
        flag = true;
    }

    public synchronized void reader() {
        if (flag) {
            int i = a;
            ……
        }
    }
}

In the example code above, assume that thread B executes the reader() method after thread A executes the writer() method. This is a correctly synchronized multithreaded program. According to the JMM specification, the execution result of this program will be the same as its execution result in the sequential consistency model. The following figure compares the program's execution timing in the two memory models:

In the sequential consistency model, all operations execute serially in program order. In JMM, the code inside the critical section may be reordered (though JMM does not allow code inside the critical section to "escape" outside it, as that would break the semantics of the monitor). JMM does some special processing at the two key points of entering and exiting the monitor, so that at these two points the thread has the same memory view as in the sequential consistency model (the details will be explained later). Although thread A reorders operations inside the critical section, thread B cannot "observe" thread A's reordering there, because the monitor enforces mutually exclusive execution. This reordering improves execution efficiency without changing the program's execution result.

From this we can see JMM's basic policy in its implementation: without changing the execution result of (correctly synchronized) programs, open the door as wide as possible for compiler and processor optimizations.

Execution characteristics of unsynchronized programs

For multithreaded programs that are unsynchronized or incorrectly synchronized, JMM provides only minimum safety: the value a thread reads is either a value previously written by some thread or the default value (0, null, false); JMM guarantees that the value read by a thread will never come out of thin air. To achieve minimum safety, when the JVM allocates an object on the heap, it first zeroes the memory and only then allocates the object on it (these two operations are synchronized within the JVM). Because the memory is pre-zeroed at allocation time, the default initialization of the fields has already been done.
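Minimum safety can be seen directly in Java's field defaults. Because the JVM pre-zeroes memory before allocating an object on it, every field starts at its type's default value, so even an unsynchronized read never yields a value out of thin air (the class below is a sketch written for this illustration):

```java
// Every field of a freshly allocated object holds its default value,
// a direct consequence of pre-zeroed allocation.
class Defaults {
    int count;        // default 0
    long total;       // default 0L
    boolean ready;    // default false
    Object ref;       // default null
}
```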

JMM does not guarantee that the execution result of an unsynchronized program matches its execution result in the sequential consistency model, because when an unsynchronized program executes in the sequential consistency model it is disordered as a whole and its result is unpredictable. Guaranteeing that unsynchronized programs produce consistent results in the two models would be meaningless.

As in the sequential consistency model, the execution of an unsynchronized program in JMM is disordered as a whole and its result is unpredictable. At the same time, the execution characteristics of unsynchronized programs differ between the two models in several ways: (1) the sequential consistency model guarantees that operations within a single thread execute in program order, while JMM does not (the program may be reordered); (2) the sequential consistency model guarantees that all threads see one consistent order of operation execution, while JMM does not; (3) JMM does not guarantee atomicity for reads/writes of 64-bit long and double variables, while the sequential consistency model guarantees that all memory reads/writes are atomic.

The third difference is closely related to the working mechanism of the processor bus. In a computer, data is transferred between the processor and memory over the bus. Each data transfer between processor and memory is completed through a series of steps, and this series of steps is called a bus transaction. Bus transactions include read transactions and write transactions. A read transaction transfers data from memory to the processor; a write transaction transfers data from the processor to memory. Each transaction reads/writes one or more physically contiguous words in memory. The key point here is that the bus synchronizes transactions that attempt to use it concurrently: while one processor is executing a bus transaction, the bus prohibits all other processors and I/O devices from reading or writing memory. Let's illustrate the bus's working mechanism with a schematic diagram:

As shown in the figure above, suppose processors A, B, and C initiate bus transactions at the same time; bus arbitration then rules on the contention. Here we assume the bus decides after arbitration that processor A wins (bus arbitration ensures that all processors have fair access to memory). Processor A then continues its bus transaction, while the other two processors must wait for it to complete before they can begin accessing memory again. Suppose that while processor A's bus transaction is in progress (regardless of whether it is a read transaction or a write transaction), processor D initiates a bus transaction; processor D's request will be denied by the bus.

These working mechanisms of the bus serialize all processors' accesses to memory: at most one processor can access memory at any point in time. This property ensures that the memory read/write operations inside a single bus transaction are atomic.

On some 32-bit processors, requiring reads/writes of 64-bit data to be atomic would carry substantial overhead. To accommodate such processors, the Java language specification encourages, but does not require, the JVM to make reads/writes of 64-bit long and double variables atomic. When the JVM runs on such a processor, it may split the read/write of a 64-bit long/double variable into two 32-bit reads/writes, and those two 32-bit operations may be assigned to different bus transactions; in that case the read/write of the 64-bit variable is not atomic.
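In Java source this distinction is visible in a single keyword. The Java Language Specification (§17.7) permits a plain long or double to be read/written in two 32-bit halves, but requires reads and writes of volatile long and double to always be atomic. A minimal sketch (the class and field names are invented for illustration):

```java
class LongAtomicity {
    long plainValue;            // read/write MAY be split into two 32-bit
                                // operations on some 32-bit JVMs (may tear)
    volatile long atomicValue;  // volatile long: the JLS requires the
                                // read/write to be atomic on every JVM
}
```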

When a single memory operation is not atomic, it can have unexpected consequences. See the following diagram:

As shown in the figure above, suppose processor A writes a long variable and processor B reads that long variable. The 64-bit write in processor A is split into two 32-bit writes, and these two 32-bit writes are assigned to different write transactions. Meanwhile, the 64-bit read in processor B is split into two 32-bit reads, and these two 32-bit reads are assigned to the same read transaction. When processors A and B execute in the timing shown above, processor B sees an invalid value that processor A has only "half written".

The content of this article was collected from the web and is provided for learning and reference; the copyright belongs to the original author.