Thoroughly Understanding the Basics of Java Concurrency (I)

Multithreaded programming exists to make programs run faster, but that does not mean that creating more threads is always better: switching between threads incurs context-switch overhead, and hardware and software resources are limited.

Context switching

Even a single-core CPU supports multithreaded programming. The CPU achieves this by allocating time slices to each thread: a time slice is the amount of CPU time the scheduler gives a thread, typically tens of milliseconds. Because each slice is so short, the CPU creates the illusion of parallelism by constantly switching between threads.

When the current task's time slice runs out, the CPU switches to the next task. Before switching, it saves the state of the current task so that, the next time this thread gets a time slice, the task's state can be restored. One such save-and-restore cycle is a context switch.
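The cost of this switching can be felt directly. Below is a minimal sketch (my own illustration, not from the original text) that runs the same additive workload serially and then split across two threads. For small workloads the concurrent version is often no faster, and sometimes slower, because thread creation and context switching cost more than the work saved.

```java
// Compare serial vs. two-thread execution of the same workload.
// The class and method names here are illustrative.
public class ContextSwitchDemo {
    private static final long COUNT = 10_000_000L;

    static long serial() {
        long a = 0, b = 0;
        for (long i = 0; i < COUNT; i++) { a += 5; }
        for (long i = 0; i < COUNT; i++) { b += 5; }
        return a + b;
    }

    static long concurrent() {
        final long[] a = {0};
        Thread worker = new Thread(() -> {
            for (long i = 0; i < COUNT; i++) { a[0] += 5; }
        });
        worker.start();
        long b = 0;
        for (long i = 0; i < COUNT; i++) { b += 5; }
        try {
            worker.join(); // wait for the worker before reading a[0]
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return a[0] + b;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        long s = serial();
        long serialMs = (System.nanoTime() - t0) / 1_000_000;

        t0 = System.nanoTime();
        long c = concurrent();
        long concMs = (System.nanoTime() - t0) / 1_000_000;

        System.out.println("serial     = " + s + " (" + serialMs + " ms)");
        System.out.println("concurrent = " + c + " (" + concMs + " ms)");
    }
}
```

Both versions compute the same result; only the timing differs, and how much depends on COUNT and on the machine.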

Coroutines

In plain terms, a coroutine is a "thread scheduled by threads" rather than by the kernel. The operating system creates a process, and the process creates several threads that run in parallel; switching between those threads is scheduled by the operating system. Threads in languages such as Java actually map 1:1 to operating-system threads, and each thread has its own stack. On a 64-bit operating system, a Java thread's default stack size is 1024KB, which is why a single process cannot start tens of thousands of threads.
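The per-thread stack is the limiting factor: at the default 1MB, ten thousand threads would reserve roughly 10GB of stack address space. As a sketch (class and method names are my own), the four-argument Thread constructor lets the caller suggest a smaller per-thread stack, though the JVM is free to round or ignore the value; the default is tuned with -Xss.

```java
public class StackSizeDemo {
    // Starts a worker on a deliberately small (suggested) stack and waits
    // for it; returns true when the worker finished normally.
    static boolean runOnSmallStack() {
        Thread worker = new Thread(null,
                () -> System.out.println(Thread.currentThread().getName() + " running"),
                "small-stack",
                64 * 1024); // suggested stack size in bytes; the JVM may ignore it
        worker.start();
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        return worker.getState() == Thread.State.TERMINATED;
    }

    public static void main(String[] args) {
        System.out.println("worker finished: " + runOnSmallStack());
    }
}
```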

In a J2EE-style application, each request occupies one thread for the complete business logic (including transactions), so the throughput of the system depends on how long each thread runs. If a thread hits time-consuming I/O, the throughput of the whole system drops immediately. JDBC, for example, blocks synchronously, which is why many people say the database is the bottleneck. The time here is really spent keeping the thread waiting for the I/O to return; in short, the thread performs no computation on the CPU at all and simply sits idle.

The Java JDK ships a well-encapsulated thread pool (ThreadPoolExecutor) that can manage the life cycle of large numbers of threads, but in essence it cannot solve the fundamental problems of the sheer number of threads and of the resources wasted while threads sit idle, blocked on I/O.
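A minimal sketch of that pool in use (names here are illustrative): a fixed pool reuses its threads across many tasks, but any task that blocks still pins one pooled thread for its entire duration.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    // Runs taskCount trivial tasks on a pool of poolSize threads and
    // returns how many completed.
    static int runTasks(int poolSize, int taskCount) {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < taskCount; i++) {
            pool.submit(done::incrementAndGet); // each task reuses a pooled thread
        }
        pool.shutdown(); // stop accepting new work
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println("completed: " + runTasks(4, 100));
    }
}
```

Note that the pool solves thread *reuse*, not thread *blocking*: if all four workers block on JDBC calls, the pool is stalled just like raw threads would be.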

One popular early solution in industry is a single thread plus asynchronous callbacks, represented by Node.js and the Java newcomer Vert.x. Their core idea is the same: when an I/O operation is needed, the code gives up the CPU immediately and registers a callback function, while other logic continues to run. When the I/O completes, its result is inserted into an event queue, and the event scheduler then invokes the callback, passing in the result.
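The callback idea can be sketched in plain JDK code with CompletableFuture (this is my analogy, not how Node.js or Vert.x are implemented): the caller registers a callback against a pretend I/O operation and moves on instead of blocking.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class CallbackDemo {
    // Pretend I/O: runs on a pooled thread and simulates a slow call.
    static CompletableFuture<String> fakeIo() {
        return CompletableFuture.supplyAsync(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "io-result";
        });
    }

    public static void main(String[] args) {
        // Register the callback; the main thread is free to keep working.
        CompletableFuture<String> f = fakeIo()
                .thenApply(result -> "handled " + result); // runs when the "I/O" completes
        System.out.println("main thread not blocked");
        System.out.println(f.join()); // demo only: wait so we can print the result
    }
}
```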

In essence, coroutines work the same way as the approach above, except that the scheduling is done by the coroutine runtime itself. On a blocking operation, the coroutine yields immediately and records the data on its current stack; when the blocking call completes, the runtime finds a thread, restores the stack on it, and delivers the result of the blocking call there. To the programmer this looks no different from writing synchronous code. The whole mechanism is called a coroutine, and a unit of work running under the coroutine scheduler is called a fiber. For example, the go keyword in Golang opens a fiber and runs the func's logic on it. All of this happens in user space, not kernel space, so there is no kernel context-switching overhead.

Java thread scheduling

The JVM maintains a priority-based scheduling model. The priority value matters because the convention between the Java virtual machine and the underlying operating system is that the operating system should select the runnable Java thread with the highest priority. We therefore say that Java implements a preemptive, priority-based scheduler: when a higher-priority thread becomes runnable, it preempts the lower-priority thread regardless of whether that thread is in the middle of running (this is how the JVM behaves). The operating system, however, does not always honor this convention, which means it may sometimes choose to run a lower-priority thread.
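Priorities are set with Thread.setPriority, on a scale from Thread.MIN_PRIORITY (1) through Thread.NORM_PRIORITY (5) to Thread.MAX_PRIORITY (10). A minimal sketch (names are illustrative):

```java
public class PriorityDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread low  = new Thread(() -> System.out.println("low-priority task"));
        Thread high = new Thread(() -> System.out.println("high-priority task"));

        low.setPriority(Thread.MIN_PRIORITY);   // 1
        high.setPriority(Thread.MAX_PRIORITY);  // 10

        low.start();
        high.start();
        low.join();
        high.join();
        // The OS scheduler may still run "low" first:
        // as the text says, the priority convention is not always honored.
    }
}
```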

The yield() method

In plain English, yield means to let go, give up, or surrender. A thread that calls the yield() method tells the virtual machine that it is willing to let other threads take its place, indicating that it is not doing anything urgent. Note that this is only a hint; there is no guarantee that it will have any effect at all.
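A short sketch of the hint in action (class and method names are my own). Calling Thread.yield() inside a loop never changes the result of the computation; at most it gives the scheduler an opportunity to run another thread between iterations, and the scheduler is free to reschedule the same thread immediately.

```java
public class YieldDemo {
    // Counts to n, yielding after every step. The yields affect only
    // scheduling, never the computed result.
    static int countWithYields(int n) {
        int c = 0;
        for (int i = 0; i < n; i++) {
            c++;
            Thread.yield(); // hint only: the scheduler may ignore it
        }
        return c;
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable polite = () ->
                System.out.println(Thread.currentThread().getName()
                        + " counted " + countWithYields(5));
        Thread a = new Thread(polite, "A");
        Thread b = new Thread(polite, "B");
        a.start();
        b.start();
        a.join();
        b.join();
    }
}
```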

The join() method

If a thread A executes thread.join(), the meaning is that the current thread A waits until thread terminates, and only then returns from thread.join().
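A minimal sketch of that waiting behavior (names are illustrative). The calling thread blocks in join() until the worker terminates; after join() returns, the worker's writes are visible to the caller, since join() establishes a happens-before edge.

```java
public class JoinDemo {
    // Starts a worker, waits for it with join(), and returns true if the
    // worker had terminated and its result is visible afterwards.
    static boolean waitForWorker() {
        long[] result = new long[1];
        Thread worker = new Thread(() -> result[0] = 42);
        worker.start();
        try {
            worker.join(); // current thread blocks here until worker terminates
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        return result[0] == 42 && worker.getState() == Thread.State.TERMINATED;
    }

    public static void main(String[] args) {
        System.out.println("worker finished first: " + waitForWorker());
    }
}
```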

How to reduce context switching

How to avoid deadlock
