Java multithreading and concurrency (Fundamentals)

1、 Process and thread

Process: a running instance of a program operating on a data set; it is the basic unit of resource allocation and scheduling in the system.

Thread: a single execution path within a process. A process contains at least one thread, and the threads within a process share the process's resources.

Although the system allocates resources to processes, CPU time is special: it is allocated to threads, so the thread is the basic unit of CPU scheduling.

Relationship between the two:

A process contains multiple threads. Those threads share the process's heap and method area, but each thread has its own program counter and stack.

Program counter: a memory area used to record the address of the instruction currently to be executed by the thread.

Stack: stores the thread's local variables, which are private to the thread, as well as the thread's call stack frames.

Heap: the largest piece of memory in a process. The heap is shared by all threads in the process.

Method area: stores classes loaded by the JVM, constants, static variables, and similar information; it is also shared among threads.

Differences between the two:

Process: has an independent address space. In protected mode, a crashed process does not affect other processes.

Threads: different execution paths within a process. Threads have their own stacks and local variables, but there is no separate address space between threads, so a crash in one thread can bring down the entire process.

1) In short, a program has at least one process, and a process has at least one thread

2) Threads are a finer-grained unit of scheduling than processes, which gives multithreaded programs higher concurrency.

3) In addition, a process has its own memory space during execution, whereas the threads within it share memory, which greatly improves the program's running efficiency.

4) Each thread has its own entry point, sequential execution path, and exit point, but threads cannot run independently: they must exist within an application, and the application provides execution control for its threads.

5) Logically, the significance of multithreading is that an application can have multiple parts executing at the same time. However, the operating system does not treat those threads as multiple independent applications for scheduling, management, and resource allocation. This is the important difference between processes and threads.

2、 Concurrency and parallelism

Concurrency: multiple tasks are in progress during the same time period, with none of them yet finished. Concurrency emphasizes simultaneous progress over a period of time; since a period is made up of many unit intervals, concurrent tasks are not necessarily executing at the same instant.

Parallelism: multiple tasks executing at the same instant, within the same unit of time.

In multithreaded programming practice, the number of threads usually exceeds the number of CPUs, so we generally speak of concurrent rather than parallel multithreaded programming.

Common problems during Concurrency:

1. Thread safety:

When multiple threads operate on a shared variable at the same time, one thread may update its value while other threads still read the value from before the update, which leads to inaccurate data.

2. Shared-memory invisibility

The Java Memory Model (how shared variables are handled)

The Java Memory Model specifies that all variables are stored in main memory. When a thread uses a variable, it copies it from main memory into its own working memory; when the thread then reads and writes the variable, it operates on the copy in its working memory.

(Figure: the Java memory model mapped to a real CPU architecture)

The figure above shows a dual-core CPU architecture. Each core has its own controller and arithmetic unit: the controller contains a set of registers and an operation controller, and the arithmetic unit performs arithmetic and logic operations. Each core has its own L1 cache, and in some architectures there is also an L2 cache shared by all cores. The working memory of the Java memory model corresponds to the L1 or L2 cache, or to the CPU registers, here.

1. Thread A fetches the value of shared variable x. Since both cache levels miss, it loads x from main memory, say x = 0, and caches x = 0 in both cache levels. Thread A then changes x to 1, writes it to both cache levels, and flushes it to main memory. After thread A finishes, x is 1 both in the caches of thread A's CPU and in main memory.

2. Thread B fetches the value of x. Its L1 cache misses, but the shared L2 cache hits and returns x = 1. Everything is fine so far, because main memory also holds x = 1. Thread B then changes x to 2, stores it in its own L1 cache and the shared L2 cache, and finally updates x in main memory to 2. Still no problem.

3. Thread A now wants to modify x again. This time its L1 cache hits with x = 1, and here the problem appears: thread B has already changed x to 2, so why does thread A still see 1? This is the shared-memory invisibility problem: the value written by thread B is not visible to thread A.

Memory semantics of synchronized:

These memory semantics solve the visibility problem for shared variables. Entering a synchronized block clears the variables used inside the block from the thread's working memory, so reads inside the block fetch values directly from main memory rather than from working memory. Exiting a synchronized block flushes any changes made to shared variables inside the block back to main memory. The cost is context-switch overhead and, because synchronized is an exclusive lock, reduced concurrency.
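A minimal sketch of these semantics (class and method names are illustrative, not from the original article): because both threads increment inside synchronized methods on the same monitor, each entry re-reads the count from main memory and each exit flushes it back, so no increment is lost.

```java
public class SyncCounter {
    private int count = 0;

    public synchronized void increment() {
        count++; // the read-modify-write is atomic under this object's monitor lock
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter counter = new SyncCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) counter.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get()); // always 20000 with synchronized
    }
}
```

Without synchronized, the two threads could interleave their read-modify-write cycles and the final count would often be less than 20000.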

Understanding volatile:

This keyword guarantees that an update to a variable is immediately visible to other threads. When a variable is declared volatile, a writing thread does not cache the value in a register or elsewhere but flushes it straight back to main memory; a reading thread then fetches the latest value from main memory instead of using the copy in its own working memory. The memory semantics of volatile resemble those of synchronized: writing a volatile variable is like exiting a synchronized block (the value in working memory is synchronized to main memory), and reading one is like entering a synchronized block (the local copy is discarded and the latest value is fetched from main memory). Note that volatile does not guarantee atomicity.
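A small sketch of the visibility guarantee (names are illustrative): the worker loops on a volatile flag, so the write performed by the stopping thread becomes visible and the loop terminates. Without volatile, the worker could keep reading a stale copy from its working memory and spin forever.

```java
public class VolatileFlag {
    private volatile boolean running = true;

    public void stop() {
        running = false; // volatile write: flushed straight to main memory
    }

    public void work() {
        while (running) {
            // each read of `running` goes to main memory, so the
            // stop() performed by another thread becomes visible here
        }
    }

    public static void main(String[] args) throws InterruptedException {
        VolatileFlag flag = new VolatileFlag();
        Thread worker = new Thread(flag::work);
        worker.start();
        Thread.sleep(100);
        flag.stop();        // visible to the worker because of volatile
        worker.join(1000);
        System.out.println("worker alive: " + worker.isAlive());
    }
}
```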

3、 Create thread

1. Extend the Thread class

Override the run method. The advantage of inheritance is that inside run() you can refer to the current thread with this directly, instead of calling Thread.currentThread(). The drawbacks: Java does not support multiple inheritance, so a class that extends Thread cannot extend anything else; and the task is not separated from the thread code, so when several threads execute the same task you need multiple copies of the task code.
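A minimal sketch of the inheritance approach (class name is illustrative): inside run(), `this` already refers to the running thread, so getName() can be called directly.

```java
public class MyThread extends Thread {
    @Override
    public void run() {
        // no need for Thread.currentThread(): `this` is the running thread here
        System.out.println("running in: " + getName());
    }

    public static void main(String[] args) throws InterruptedException {
        MyThread t = new MyThread();
        t.start(); // start() schedules the thread; calling run() directly would
                   // execute the body in the caller's own thread
        t.join();
    }
}
```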

2. Implement the Runnable interface

Implement the run method. This avoids the single-inheritance limitation of extending Thread, but run still has no return value.
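A sketch of the Runnable approach (names illustrative): the task is decoupled from the Thread object, so a single task instance can be shared by several threads instead of duplicating the task code.

```java
public class RunnableDemo implements Runnable {
    @Override
    public void run() {
        // here the current thread must be obtained explicitly
        System.out.println(Thread.currentThread().getName() + " is running");
    }

    public static void main(String[] args) throws InterruptedException {
        RunnableDemo task = new RunnableDemo();   // one task instance...
        Thread t1 = new Thread(task, "worker-1"); // ...shared by two threads
        Thread t2 = new Thread(task, "worker-2");
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```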

3. Implement the Callable interface

Implement the call method:

To compare the three approaches: the advantage of inheritance is convenient parameter passing, since you can add member variables to the subclass and set them via setters or the constructor; with Runnable you can only use variables that are (effectively) final in the enclosing scope. The drawback remains that Java has no multiple inheritance, so a subclass of Thread cannot extend any other class, while Runnable has no such restriction. Neither of the first two approaches can return the task's result, but Callable can.
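A sketch of the Callable approach (class name illustrative): unlike Runnable, call() returns a value (and may throw a checked exception), and FutureTask bridges a Callable to a Thread so the result can be retrieved with get().

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class CallableDemo implements Callable<Integer> {
    @Override
    public Integer call() {
        int sum = 0;
        for (int i = 1; i <= 100; i++) sum += i;
        return sum; // the result is retrievable via Future.get()
    }

    public static void main(String[] args) throws Exception {
        FutureTask<Integer> future = new FutureTask<>(new CallableDemo());
        new Thread(future).start();
        System.out.println(future.get()); // blocks until call() completes: 5050
    }
}
```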

4、 Thread class details

Thread characteristics:

1. Threads can be marked as daemon threads or user threads

2. Each thread is assigned a name; by default it is "Thread-" followed by an auto-incrementing number

3. Each thread has a priority from 1 to 10 (default 5); higher-priority threads are scheduled in preference to lower-priority ones

4. The main thread belongs to the thread group named main. If no thread group is specified when a thread is constructed, it defaults to the same group as its parent thread

5. When a new thread object is created inside a thread's run() method, the new thread's priority equals the parent thread's priority

6. A newly created thread is a daemon thread if and only if its parent thread is a daemon thread

7. When the JVM starts, there is usually a single non-daemon thread (the one that calls the main() method of the specified class)
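Several of the characteristics above can be observed directly (a sketch; the class name is illustrative, and the default thread-group name is "main" when the thread is created from the main thread):

```java
public class ThreadDefaults {
    public static void main(String[] args) {
        Thread t = new Thread(() -> {});
        System.out.println(t.getName());     // e.g. "Thread-0": default names auto-increment
        System.out.println(t.getPriority()); // 5 (Thread.NORM_PRIORITY)
        System.out.println(t.isDaemon());    // false: inherited from main, a user thread
        t.setDaemon(true);                   // must be called before start()
        System.out.println(t.isDaemon());    // true
    }
}
```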

The JVM keeps running threads until one of the following occurs:

1) The Runtime class's exit() method is called and the security manager permits the exit

2) All non-daemon threads have terminated, either by returning from the run() method or by throwing an exception that propagates beyond run()

Init method:

Constructors: all Thread constructors delegate to a private init() method (in the classic OpenJDK sources)

Thread status:

NEW: the thread has just been created and has not yet been started

RUNNABLE: the thread is running normally; it may nonetheless be in a time-consuming computation, waiting on I/O, or waiting for a CPU time-slice switch. What it waits for in this state is generally a system resource, not a lock or a sleep

BLOCKED: a scenario involving synchronization between threads, such as waiting for another thread to finish and release a synchronized block, or re-acquiring the monitor after wait() was called inside a reentrant synchronized block; in other words, the thread is waiting to enter a critical section

WAITING: the thread, having held some lock, calls its wait method and waits for another thread (the lock's owner) to call notify/notifyAll before it can continue. The difference from BLOCKED: a BLOCKED thread waits outside the critical section to enter it, while a WAITING thread has entered it, released the lock via wait, and is waiting for a notify. A thread that calls another thread's join method also enters WAITING, waiting for the joined thread to finish

TIMED_WAITING: a time-limited wait. It typically occurs on calls such as wait(long) and join(long); calling Thread.sleep(long) also puts the thread into TIMED_WAITING

TERMINATED: the thread's run method has finished executing; the thread is essentially dead (although if the Thread object is still referenced, it may not be garbage-collected for a while)

(Many articles also list a RUNNING state; in fact the source code defines only the six states above. If you write a thread that keeps executing in a while loop and inspect it with the jconsole tool, it shows as RUNNABLE.)
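A few of the six states can be observed directly (a sketch with an illustrative class name; the mid-sleep check assumes the worker has entered sleep by the time it is inspected):

```java
public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(200); // the thread sits in TIMED_WAITING while sleeping
            } catch (InterruptedException ignored) { }
        });
        System.out.println(t.getState()); // NEW: created but not started
        t.start();
        Thread.sleep(50);
        System.out.println(t.getState()); // usually TIMED_WAITING (inside sleep)
        t.join();
        System.out.println(t.getState()); // TERMINATED after run() returns
    }
}
```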

The API documentation says:

In effect, RUNNABLE covers two sub-states: running, meaning the thread is actually executing on a CPU, and ready, meaning it is prepared to run and merely waiting for system resources such as a time slice.

Start method:

Yield method:

yield is a native method that hints to the thread scheduler that the current thread is willing to give up the CPU. If resources are not tight, the scheduler may ignore the hint. Formally, the thread's state remains RUNNABLE throughout, but you can think of it as moving from running back to ready.

Sleep method:

The sleep method has overloads. Sleep releases the thread's CPU time slice but does not release any locks it holds. After calling sleep(), the thread moves from RUNNABLE to TIMED_WAITING.

Join method

If thread B calls join on thread A, thread B waits until thread A terminates or a given timeout elapses. During this period it is thread B, not thread A, that is in the WAITING (or TIMED_WAITING) state.
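A small sketch of join (class name illustrative): main() joins the worker, so "done" is printed only after the worker's run() has finished.

```java
public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                System.out.println("working " + i);
            }
        });
        worker.start();
        worker.join(); // main waits here until the worker terminates
        System.out.println("done, worker state: " + worker.getState());
    }
}
```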

5、 Other methods

Next, let's look at the wait, notify, and notifyAll methods of the Object class.

Wait method

Both wait() and wait(long timeout, int nanos) internally call the wait(long timeout) method, so the following mainly discusses wait(long timeout). The wait method blocks the current thread until another thread calls notify or notifyAll() on the same object, or until the time specified in the parameter elapses. The calling thread must hold the object's monitor lock. The wait method places the current thread T in the object's wait queue, and the thread stops responding to synchronization on this object; the scheduler will not schedule thread T until one of the following four things happens (thread T being the thread that called wait).

1. Another thread calls the notify method on this object, and thread T happens to be the arbitrarily chosen thread woken from this object's wait queue.
2. Another thread calls the notifyAll method on this object.
3. Another thread calls interrupt to interrupt thread T.
4. The waiting time exceeds the timeout specified in wait. A timeout of 0 does not mean waiting zero time; it means the thread waits until it is woken by another thread.

The awakened thread T is removed from the object's wait queue and becomes schedulable again. It then competes with other threads for the object's lock as usual; once thread T acquires the lock, the synchronization state of the object is restored to what it was when wait was called, and thread T returns from the wait call. Thus on return from wait, the state of the object and of thread T are the same as when wait was invoked. A thread can also wake up without having been notified, interrupted, or timed out; this is called a spurious wakeup. Although rare in practice, a program must guard against it by testing the condition that would justify waking, and continuing to wait when the condition does not hold. In other words, a wait call should always appear inside a condition-checking loop.
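The wait-in-a-loop idiom just described can be sketched as follows (class and field names are illustrative): the condition is always re-tested in a loop, so a spurious wakeup simply re-checks and waits again.

```java
public class WaitLoopDemo {
    private final Object lock = new Object();
    private boolean ready = false;

    public void await() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {  // `while`, not `if`: guards against spurious wakeups
                lock.wait();  // releases lock's monitor while waiting
            }
            // `ready` now holds and we own the lock again
        }
    }

    public void signal() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll(); // wake all waiters; each re-checks `ready`
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WaitLoopDemo demo = new WaitLoopDemo();
        Thread waiter = new Thread(() -> {
            try { demo.await(); } catch (InterruptedException ignored) { }
            System.out.println("condition met");
        });
        waiter.start();
        Thread.sleep(100);
        demo.signal();
        waiter.join();
    }
}
```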

If the current thread is interrupted by another thread calling interrupt() before or while it is waiting, an InterruptedException is thrown; this exception is not thrown until the lock state of the object has been restored as described above. Note that wait places the current thread only in this object's wait queue and releases only this object's lock; locks the thread holds on other objects are retained while it waits. This method should only be called by a thread that holds the object's monitor. In the implementation of wait(long timeout, int nanos), whenever nanos is greater than 0 the timeout is rounded up by one millisecond, mainly for more precise timing; otherwise it behaves like wait(long timeout).

Notify method

Notifies a thread that may be waiting for this object's lock. The JVM selects one thread in the wait state arbitrarily, regardless of priority. Before calling notify(), the thread must hold the object's monitor lock, and the lock is not released the moment notify() returns: the current thread releases it only when it exits the synchronized block. notify() wakes only one arbitrarily chosen thread at a time.

The notifyAll() method

It is similar to notify(), except that it moves all threads waiting on the same shared resource out of the wait state to compete for the object's lock; only the thread that acquires the lock proceeds to the ready state. Each lock object can be thought of as having two queues:
- ready queue: holds threads that are about to acquire the lock
- blocking queue: holds blocked threads

6、 Instance

1. Sleep

2. Join and interrupt (Mark interrupt is recommended)

3. Priority and daemon

4. Producers and consumers (I forgot which article I saw this in; sorry)

Define an interface:

Define a class implementing the interface, used to store the produced items:

Producer class:

Consumer class:

Test class:
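The original code listings for this example are missing, so here is a compact, hypothetical reconstruction of the producer/consumer pattern (all names are illustrative): a bounded buffer guarded by the synchronized + wait/notifyAll idiom from the previous section, with the store, producer, consumer, and test collapsed into one class for brevity.

```java
import java.util.LinkedList;
import java.util.Queue;

public class ProducerConsumerDemo {
    private final Queue<Integer> buffer = new LinkedList<>();
    private final int capacity = 5;

    public synchronized void produce(int item) throws InterruptedException {
        while (buffer.size() == capacity) {
            wait();      // buffer full: wait until a consumer removes an item
        }
        buffer.add(item);
        notifyAll();     // wake consumers waiting for items
    }

    public synchronized int consume() throws InterruptedException {
        while (buffer.isEmpty()) {
            wait();      // buffer empty: wait until a producer adds an item
        }
        int item = buffer.remove();
        notifyAll();     // wake producers waiting for free space
        return item;
    }

    public static void main(String[] args) throws InterruptedException {
        ProducerConsumerDemo pc = new ProducerConsumerDemo();
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) pc.produce(i);
            } catch (InterruptedException ignored) { }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) System.out.println("consumed " + pc.consume());
            } catch (InterruptedException ignored) { }
        });
        producer.start(); consumer.start();
        producer.join(); consumer.join();
    }
}
```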

References

Detailed explanation of Java high concurrency

The beauty of Java Concurrent Programming

https://blog.csdn.net/caoxiaohong1005/article/details/80312396

https://blog.csdn.net/benjaminlee1/article/details/72843713

The content of this article was collected from the Internet and is intended as learning reference only. Copyright belongs to the original authors.
THE END