Understanding the I/O Model of Java NIO (2)

Preface

The previous article covered some basic concepts of the I/O model: synchronous vs. asynchronous, blocking vs. non-blocking, synchronous IO vs. asynchronous IO, and blocking IO vs. non-blocking IO. This time, let's look at the existing IO models and two high-performance IO design patterns, which are also foundational knowledge for understanding the IO model.

Five I/O models available under UNIX

Following the classification in Unix Network Programming, UNIX provides five IO models, which are introduced below.

Blocking I/O model

This is the most common IO model. As described in the previous article, a read operation has two stages: the first stage waits for the data to become ready, and the second stage copies the data to the thread that issued the IO call. Blocking happens in the first stage: while the data is not ready, the user thread stays blocked; once the data is ready, it is copied to the thread and the result is returned to the user thread.

The general process is shown in the figure below.

In fact, most socket interfaces are blocking by default. A blocking interface is a system call (typically an IO call) that does not return until it obtains a result or a timeout error occurs, keeping the current thread blocked in the meantime.

With blocking IO introduced, its problem is easy to spot: while blocked, the user thread cannot perform any other operation or serve any other request. The usual workaround is multithreading: create a thread per connection, or manage threads with a thread pool. This relieves some of the pressure but does not solve everything. The multithreaded model handles small-scale service requests easily and efficiently, but it hits a bottleneck under large-scale request loads; non-blocking interfaces can be tried to address this problem.
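As a rough illustration (not from the original article), here is a minimal thread-per-connection sketch of the blocking model in Java; the port number and the byte-counting logic are arbitrary placeholders, and accept() and read() are the calls that block:

```java
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal thread-per-connection sketch of the blocking IO model.
// Port 9000 and the handling logic are illustrative placeholders.
public class BlockingServer {
    public static void main(String[] args) throws Exception {
        ServerSocket serverSocket = new ServerSocket(9000);
        while (true) {
            Socket socket = serverSocket.accept();        // blocks until a client connects
            new Thread(() -> {                            // one thread per connection
                try (InputStream in = socket.getInputStream()) {
                    byte[] buf = new byte[1024];
                    int n;
                    while ((n = in.read(buf)) != -1) {    // read() blocks until data is ready
                        System.out.println("read " + n + " bytes");
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}
```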

Non-blocking I/O model

In the non-blocking IO model, when the application initiates a read operation it does not block; it immediately gets back a result. If the application thread sees that the result is an error, it knows the data is not ready yet and can issue the read again. Once the data is ready and the user thread's request arrives again, the data is copied into user memory immediately and the call returns.

This process requires the user thread to keep asking whether the data is ready, which continuously occupies CPU resources. Note also that this model is only available on systems that provide the corresponding non-blocking facilities.
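As a hedged illustration in Java (the original article shows this only as a figure), here is a minimal busy-polling sketch built on a non-blocking SocketChannel; the host and port are placeholders:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

// Minimal sketch of the non-blocking model: the user thread keeps asking
// whether data is ready, burning CPU while it waits.
public class NonBlockingPollingClient {
    public static void main(String[] args) throws Exception {
        SocketChannel channel = SocketChannel.open();
        channel.configureBlocking(false);                  // switch to non-blocking mode
        channel.connect(new InetSocketAddress("localhost", 9000));
        while (!channel.finishConnect()) {
            // connection not established yet; the thread keeps spinning
        }
        ByteBuffer buffer = ByteBuffer.allocate(1024);
        int n;
        while ((n = channel.read(buffer)) == 0) {
            // read() returns 0 immediately when no data is ready, so we poll again
        }
        System.out.println("read " + n + " bytes");
        channel.close();
    }
}
```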

The general process is as follows:

I/O multiplexing model

Before introducing the I/O multiplexing model, let's briefly explain the select and poll functions.

Select function

The select function lets a process instruct the kernel to wait for any one of multiple events to occur, and to wake the process up only after one or more of those events has occurred or after a specified amount of time has elapsed.

That is, calling select tells the kernel which descriptors we are interested in (for reading, writing, or exception conditions) and how long to wait.

Poll function

The poll function originated with SVR3 and was initially limited to STREAMS devices. SVR4 removed this restriction, allowing poll to work with any descriptor. poll provides functionality similar to select, but it has no limit on the maximum number of file descriptors.

After select or poll tells the process which file descriptors are ready, if the process does not perform IO on them, those descriptors are reported again the next time select or poll is called, so the readiness notification is generally not lost. This behavior is called level-triggered.

Having briefly covered select and poll, let's return to the I/O multiplexing model. This model works by calling select or poll: the blocking in this model happens in those two calls rather than in the real IO system call. The advantage of select or poll is that a single thread or process can handle the IO of multiple network connections. The overall flow is that select or poll keeps polling all the sockets it is responsible for, and when data arrives on one of them, it notifies the user thread or process.

The general call flow is as follows:

Java NIO actually uses this IO multiplexing model: Selector.select() queries whether an event has arrived on each channel, and if there is no event it blocks there. So the multiplexing IO model also blocks the user thread, but the thread is blocked by the select call rather than by the socket IO itself.
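The sketch below shows a minimal Java NIO server built on this model, with one selector watching the listening channel and all client channels; the port and the read-only handling are illustrative assumptions, not part of the original article:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Minimal IO-multiplexing sketch: one thread, one selector, many channels.
public class SelectorServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                         // blocks here, not on the sockets themselves
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buffer = ByteBuffer.allocate(1024);
                    int n = client.read(buffer);       // data is already ready, so this does not block
                    if (n == -1) {
                        client.close();
                    }
                }
            }
        }
    }
}
```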

The multiplexing IO model is therefore similar to the non-blocking IO model, but more efficient: in non-blocking IO, the user thread itself repeatedly queries the socket status, whereas in the multiplexing model the kernel polls the status of each socket, which is far more efficient than doing it in the user thread. This also shows that the multiplexing IO model is better suited to scenarios with a large number of connections.

However, this model has problems too. Because the multiplexing IO model detects incoming events by polling and responds to them one by one, once an event handler is heavyweight or the number of events is too large, a lot of time is spent processing events, which hurts the timeliness of the whole loop. To address this, Linux provides the epoll interface, but support varies considerably on operating systems other than Linux; so although epoll solves the timeliness problem of event detection, it is not well supported across platforms.

Signal-driven IO model

In the signal-driven IO model, the kernel sends a SIGIO signal to notify the user thread when the datagram is ready.

The whole process is as follows:

First, enable the socket's signal-driven IO feature and install a signal handler via the sigaction system call. That system call returns immediately and the process continues to run, i.e., it is not blocked. When the datagram is ready to be read, the kernel generates a SIGIO signal for the process. We can then call recvfrom in the signal handler to read the datagram and notify the main loop that the data is ready to be processed.

The advantage of this model is that the process is not blocked while waiting for the datagram to arrive; the user process can keep executing and only has to wait for the notification from the signal handler.

Asynchronous IO model

The asynchronous IO model works as follows: when the user thread initiates a read operation, it tells the kernel to start the read and asks the kernel to notify it after the whole operation, including copying the data from the kernel into our own buffer, has completed. While the kernel is reading the data, the user thread can keep running; when it receives the signal that the kernel has finished the whole operation, it can use the data directly.

The general process is as follows:

In the asynchronous IO model, neither phase of the IO operation blocks the user thread or process. Both phases are completed by the kernel, which then sends a signal to tell the user thread or process that the operation has finished. The difference from the signal-driven IO model is that in signal-driven IO the kernel tells the user thread when it can start an IO operation, whereas in asynchronous IO the kernel tells us when the IO operation has completed. In the asynchronous model, the user thread does not perform the actual read or write; it only receives the completion signal after the kernel finishes and can then use the data directly.

Asynchronous IO requires support from the underlying operating system; the Linux kernel has supported asynchronous IO since version 2.6, and Java has supported asynchronous IO since Java 7.
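As a minimal sketch of Java 7's asynchronous channel API (NIO.2), the Future-style read below illustrates the idea; the host, port, and buffer size are placeholders, not from the original article:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousSocketChannel;
import java.util.concurrent.Future;

// Minimal asynchronous IO sketch: the read is started, the caller stays free,
// and the result is collected once the operation has already finished.
public class AsyncIoFutureDemo {
    public static void main(String[] args) throws Exception {
        AsynchronousSocketChannel channel = AsynchronousSocketChannel.open();
        channel.connect(new InetSocketAddress("localhost", 9000)).get();  // wait for connect
        ByteBuffer buffer = ByteBuffer.allocate(1024);
        Future<Integer> result = channel.read(buffer);   // returns immediately, read runs in the background
        // ... the current thread is free to do other work here ...
        int n = result.get();                            // by now the data has been copied into buffer
        System.out.println("read " + n + " bytes");
        channel.close();
    }
}
```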

Two high-performance IO design patterns: Reactor and Proactor

Reactor pattern

Reactor literally means "reactor": the idea is to react to events as soon as they occur.

How the Reactor pattern works:

(1) The application registers a read-ready event and the associated event handler.

(2) The Reactor blocks, waiting for the kernel's event notification.

(3) The Reactor receives the notification and dispatches the ready (read/write) event to the user's event handler.

(4) The user's event handler reads the data and processes it.

(5) The event handler completes the actual read, processes the data it read, registers a new event if needed, and then returns control.

The overall flow is that each application declares interest in a socket and registers the events it cares about, together with the corresponding handler functions, with the Reactor. When events arrive on a socket, the Reactor processes each event in order (calling its handler); once all events have been handled, it goes back to waiting and the whole cycle repeats, as sketched below.
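The sketch below is a deliberately simplified, assumed implementation of such a Reactor loop on top of a Java NIO Selector; the Handler interface and the use of key attachments for handler registration are illustrative choices, not the only way to structure it:

```java
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

// Simplified Reactor: handlers are stored as key attachments and the
// dispatch loop invokes them when their registered events become ready.
public class Reactor implements Runnable {

    // user-supplied event handler, called when its registered event is ready
    public interface Handler {
        void handle(SelectionKey key) throws Exception;
    }

    private final Selector selector;

    public Reactor(Selector selector) {
        this.selector = selector;
    }

    @Override
    public void run() {
        try {
            while (!Thread.interrupted()) {
                selector.select();                     // (2) block waiting for kernel notification
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    // (3) dispatch the ready event to the handler registered as the attachment
                    Handler handler = (Handler) key.attachment();
                    if (handler != null) {
                        handler.handle(key);           // (4)(5) handler does the actual read and processing
                    }
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

In this sketch a channel would be registered with something like channel.register(selector, SelectionKey.OP_READ, handler), so the dispatch loop can find its handler through the key's attachment.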

The process is shown in the figure below:

From this processing flow we can see that the I/O multiplexing model corresponds to the Reactor pattern, and this design pattern still embodies synchronous IO.

Proactor pattern

Proactor conveys the idea of a proactive actor: it takes the initiative to complete the work itself without involving the main flow.

How the Proactor pattern works:

(1) The application initiates an asynchronous read operation and registers the corresponding event handler. At this point the handler does not care about the read-ready event but about the read-completion event; this is the key difference from Reactor.

(2) The event separator waits for the read operation to complete.

(3) While the event separator is waiting, the operating system uses a kernel thread to perform the actual read and puts the data into the buffer passed in by the application. This is another difference from Reactor: in Proactor, the application must supply the buffer.

(4) When the event separator catches the read-completion event, it activates the event handler registered by the application; the handler reads the data directly from the buffer without performing an actual read operation.

The asynchronous IO model corresponds to the Proactor pattern.
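To make the contrast with Reactor concrete, here is a minimal, assumed Proactor-style sketch using Java NIO.2's CompletionHandler callback; the host, port, and the sleep that keeps the demo process alive are placeholders:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;

// Proactor-style read: the application hands over a buffer and a completion
// handler, and is called back only after the read has already finished.
public class ProactorStyleRead {
    public static void main(String[] args) throws Exception {
        AsynchronousSocketChannel channel = AsynchronousSocketChannel.open();
        channel.connect(new InetSocketAddress("localhost", 9000)).get();

        ByteBuffer buffer = ByteBuffer.allocate(1024);   // the buffer the application passes in
        channel.read(buffer, buffer, new CompletionHandler<Integer, ByteBuffer>() {
            @Override
            public void completed(Integer bytesRead, ByteBuffer buf) {
                // read-completion event: the data is already in buf, no read call is needed here
                System.out.println("read completed, " + bytesRead + " bytes");
            }

            @Override
            public void failed(Throwable exc, ByteBuffer buf) {
                exc.printStackTrace();
            }
        });

        Thread.sleep(5000);   // keep the demo process alive long enough for the callback to fire
    }
}
```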

References:

Unix Network Programming

https://www.cnblogs.com/dolphin0520/p/3916526.html

https://www.cnblogs.com/findumars/p/6361627.html

IO模型解惑 ("Demystifying the IO Model")
