[Java Concurrent Programming] Sorting out knowledge points related to thread pools

Summary of several common thread pools created by Executors

FixedThreadPool

FixedThreadPool is a thread pool with a fixed number of reusable threads. Its task queue is an unbounded LinkedBlockingQueue.
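As a rough sketch, newFixedThreadPool is essentially a ThreadPoolExecutor whose core and maximum pool sizes are equal and whose work queue is the unbounded LinkedBlockingQueue mentioned above. The class name FixedPoolSketch and the pool size of 4 are just illustrative values, not taken from the article:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class FixedPoolSketch {
        public static void main(String[] args) {
            int nThreads = 4; // example value
            // Roughly what Executors.newFixedThreadPool(nThreads) builds:
            // corePoolSize == maximumPoolSize == nThreads, idle threads never time out,
            // and the work queue is an unbounded LinkedBlockingQueue.
            ExecutorService fixedPool = new ThreadPoolExecutor(
                    nThreads, nThreads,
                    0L, TimeUnit.MILLISECONDS,
                    new LinkedBlockingQueue<Runnable>());
            fixedPool.submit(() -> System.out.println("task on " + Thread.currentThread().getName()));
            fixedPool.shutdown();
        }
    }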

Operation diagram of FixedThreadPool [picture source: The Art of Java Concurrent Programming]

Reasons why FixedThreadPool is not recommended

Because the task queue is an unbounded LinkedBlockingQueue, if tasks arrive faster than they are processed the queue can grow without limit and eventually trigger an OOM.

SingleThreadExecutor

SingleThreadExecutor is a thread pool with only one worker thread.

Operation diagram of SingleThreadExecutor [picture source: The Art of Java Concurrent Programming]

Reasons why SingleThreadExecutor is not recommended

Like FixedThreadPool, it uses an unbounded task queue, so an OOM may be triggered when too many tasks pile up.

CachedThreadPool

CachedThreadPool is a thread pool that creates new threads as needed and reuses previously created threads when they are available. The task queue it uses is a SynchronousQueue.
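For reference, newCachedThreadPool corresponds roughly to a ThreadPoolExecutor with no core threads, a maximum of Integer.MAX_VALUE threads, and a SynchronousQueue that hands each task directly to a free thread. The 60-second keep-alive and the class name CachedPoolSketch are illustrative details of this sketch:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class CachedPoolSketch {
        public static void main(String[] args) {
            // Roughly what Executors.newCachedThreadPool() builds:
            // no core threads, up to Integer.MAX_VALUE threads, 60s keep-alive,
            // and a SynchronousQueue that hands a task to an idle thread
            // or forces a new thread to be created.
            ExecutorService cachedPool = new ThreadPoolExecutor(
                    0, Integer.MAX_VALUE,
                    60L, TimeUnit.SECONDS,
                    new SynchronousQueue<Runnable>());
            cachedPool.submit(() -> System.out.println("task on " + Thread.currentThread().getName()));
            cachedPool.shutdown();
        }
    }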

Reasons why CachedThreadPool is not recommended

CachedThreadPool allows up to Integer.MAX_VALUE threads to be created, so a flood of tasks may create a very large number of threads and cause an OOM.

ScheduledThreadPoolExecutor

ScheduledThreadPoolExecutor uses a DelayedWorkQueue: the tasks to be scheduled (ScheduledFutureTask) are put into a delay queue.

The delay queue encapsulates a PriorityQueue that sorts the ScheduledFutureTask instances it holds. Tasks with a smaller time come first, so the task whose scheduled time is earlier is executed first. If two ScheduledFutureTask instances have the same time, their sequenceNumber is compared and the one with the smaller sequenceNumber comes first; in other words, when two tasks have the same execution time, the task that was submitted first is executed first.
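A small sketch of the ordering described above: two delayed tasks run in time order regardless of submission order. The pool size and the delays of 1 and 2 seconds are arbitrary example values:

    import java.util.concurrent.ScheduledThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class SchedulerSketch {
        public static void main(String[] args) throws InterruptedException {
            ScheduledThreadPoolExecutor scheduler = new ScheduledThreadPoolExecutor(1);
            // The 2-second task is submitted first, but the 1-second task has the
            // earlier scheduled time, so the delay queue orders it to run first.
            scheduler.schedule(() -> System.out.println("runs after 2s"), 2, TimeUnit.SECONDS);
            scheduler.schedule(() -> System.out.println("runs after 1s"), 1, TimeUnit.SECONDS);
            scheduler.shutdown(); // no new tasks accepted; already queued delayed tasks still run
            scheduler.awaitTermination(5, TimeUnit.SECONDS);
        }
    }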

WorkStealingPool

Creates a thread pool that keeps enough threads running to make use of idle CPUs; idle worker threads steal tasks from the queues of busy workers. It is implemented on top of ForkJoinPool and was added in JDK 8.
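A minimal sketch using the JDK 8 factory method Executors.newWorkStealingPool; the parallelism defaults to the number of available processors, and the loop of eight print tasks is only an illustration:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class WorkStealingSketch {
        public static void main(String[] args) throws InterruptedException {
            // Backed by a ForkJoinPool; parallelism defaults to Runtime.availableProcessors().
            ExecutorService pool = Executors.newWorkStealingPool();
            for (int i = 0; i < 8; i++) {
                final int id = i;
                pool.submit(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
            }
            pool.shutdown();
            // ForkJoinPool worker threads are daemon threads, so wait for the tasks to finish.
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
    }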

LinkedBlockingQueue and ArrayBlockingQueue

LinkedBlockingQueue is implemented with a singly linked list. The capacity does not have to be specified when the queue is declared; by default it is Integer.MAX_VALUE. Each element is wrapped in a new Node object with two fields, item and next: item stores the element and next points to the next node in the list. Initially, head and last both point to a node whose item and next are null. New elements are appended at the tail of the list and elements are taken from the head, so taking an element only involves a few pointer changes. LinkedBlockingQueue declares separate locks for put and take, so putting and taking do not block each other and throughput is higher.
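A short producer/consumer sketch with LinkedBlockingQueue; because put and take are guarded by separate locks internally, the two threads below mostly do not block each other. The element count of 5 is just an example:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class LinkedQueueSketch {
        public static void main(String[] args) throws InterruptedException {
            // No capacity given, so the queue is effectively unbounded (Integer.MAX_VALUE).
            BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();

            Thread producer = new Thread(() -> {
                for (int i = 0; i < 5; i++) {
                    try {
                        queue.put(i); // appends at the tail (guarded by the put lock)
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            producer.start();

            for (int i = 0; i < 5; i++) {
                System.out.println("took " + queue.take()); // removes from the head (guarded by the take lock)
            }
            producer.join();
        }
    }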

ArrayBlockingQueue is implemented with an array, and the capacity must be specified when the queue is declared. If the capacity is too large it wastes memory; if it is too small, concurrency suffers. Once the array is full, no more elements can be put in until another thread takes one out. The same lock is used for both putting and taking, so producers and consumers compete for it and throughput is lower than LinkedBlockingQueue's.
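For comparison, an ArrayBlockingQueue must be given a capacity up front, and a non-blocking offer fails once the array is full. The capacity of 2 below is just an example:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ArrayQueueSketch {
        public static void main(String[] args) throws InterruptedException {
            // Capacity is mandatory and fixed; put() and take() share a single lock.
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
            System.out.println(queue.offer("a")); // true
            System.out.println(queue.offer("b")); // true
            System.out.println(queue.offer("c")); // false: the array is full
            System.out.println(queue.take());     // "a" is removed, freeing a slot
        }
    }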

Do you need to close the thread pool when it is not in use?
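A commonly used pattern when a pool is no longer needed is to shut it down gracefully and fall back to a forced stop after a timeout; the sketch below shows that pattern, with the 30-second timeout chosen purely as an example rather than taken from the article:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ShutdownSketch {
        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(2);
            pool.submit(() -> System.out.println("some task"));

            pool.shutdown();                       // stop accepting new tasks
            try {
                // Wait a bounded time for queued tasks, then force-stop if needed.
                if (!pool.awaitTermination(30, TimeUnit.SECONDS)) {
                    pool.shutdownNow();            // interrupt the running tasks
                }
            } catch (InterruptedException e) {
                pool.shutdownNow();
                Thread.currentThread().interrupt();
            }
        }
    }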

How to reasonably configure a Java thread pool

Calculation formula

How to determine whether it is a CPU intensive task or an IO intensive task?

A CPU-intensive task is, simply put, one that relies on CPU computing power, such as sorting a large amount of data in memory. Whenever network reads or file reads are involved, the task is IO-intensive. The characteristic of such tasks is that CPU computation takes very little time compared with waiting for the IO operations to complete; most of the time is spent waiting for IO.
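A sketch of the common sizing heuristics for the two kinds of tasks: roughly Ncpu + 1 threads for CPU-bound work, and more threads for IO-bound work (for example 2 * Ncpu, or Ncpu * (1 + wait time / compute time)). These are general rules of thumb, and the wait/compute ratio of 4.0 below is an arbitrary example value:

    public class PoolSizeSketch {
        public static void main(String[] args) {
            int nCpu = Runtime.getRuntime().availableProcessors();

            // CPU-bound: keep every core busy, plus one spare thread to cover brief stalls.
            int cpuBoundSize = nCpu + 1;

            // IO-bound: threads spend most of their time waiting, so more of them are useful.
            // A simple rule of thumb is 2 * nCpu; a finer one scales by the wait/compute ratio.
            double waitOverCompute = 4.0; // example ratio: 4x more waiting than computing
            int ioBoundSize = (int) (nCpu * (1 + waitOverCompute));

            System.out.println("CPU-bound pool size: " + cpuBoundSize);
            System.out.println("IO-bound pool size:  " + ioBoundSize);
        }
    }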

