Java: What are the benefits of NIO for web servers?

I know this is a recurring question. I have read articles like this one: http://www.mailinator.com/tymaPaulMultithreaded.pdf which argue that it is not necessarily true that NIO scales better than blocking IO.

Still, I find it hard to see how Java NIO scales better than the traditional acceptor/worker-thread architecture when developing web servers. Let me explain:

Typically, a Java web server processes connections using the following pattern:

>A limited number of acceptor threads (roughly bounded by the number of cores) block in ServerSocket's accept() method:

while (true) {
  Socket socket = serverSocket.accept();
  // handleRequest submits the socket to a queue;
  // the worker that picks it up closes the socket when it is done
  handleRequest(socket);
}

>Once a client socket has been accepted, it is submitted to a (non-blocking) queue and later processed by a worker thread from a worker thread pool; the number of worker threads is sized according to how long the operations being performed take (see the sketch below)
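Concretely, a hypothetical version of that pattern (the pool size of 100 and the handleRequest method are just illustrative assumptions on my part) looks roughly like this:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingAcceptorServer {
  // worker pool sized for blocking request handling (100 is an arbitrary example)
  private static final ExecutorService workers = Executors.newFixedThreadPool(100);

  public static void main(String[] args) throws IOException {
    try (ServerSocket serverSocket = new ServerSocket(8080)) {
      while (true) {
        Socket socket = serverSocket.accept();        // acceptor thread blocks here
        workers.submit(() -> handleRequest(socket));  // hand off to the worker pool
      }
    }
  }

  private static void handleRequest(Socket socket) {
    try (Socket s = socket) {
      // read the request from s, do the blocking work (DB, filesystem, ...),
      // write the response; try-with-resources closes the socket afterwards
    } catch (IOException e) {
      e.printStackTrace();
    }
  }
}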

How does using Java NIO make this architecture more scalable?

I mean, I still need worker threads to handle requests that perform blocking operations (accessing databases or file systems, calling external services). Unless the back end is non-blocking, Node.js-style, I still need worker threads, so overall scalability ends up bound to the size of that pool rather than to the one or two event-dispatcher threads.
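To illustrate what I mean (the class and method names here are purely illustrative, not from any particular framework), the event-loop thread would decode the request and push only the blocking back-end call onto a worker pool:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingBackendDemo {
  // worker pool that absorbs blocking back-end calls so the event loop stays responsive
  private static final ExecutorService workers = Executors.newFixedThreadPool(50);

  // called from the (single) event-loop thread for each decoded request;
  // when the future completes, the event loop writes the response
  // back on the corresponding non-blocking channel
  static CompletableFuture<String> handle(String request) {
    return CompletableFuture.supplyAsync(() -> {
      // blocking call: database, filesystem, external service, ...
      return "response for " + request;
    }, workers);
  }

  public static void main(String[] args) {
    handle("GET /messages").thenAccept(System.out::println).join();
    workers.shutdown();
  }
}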

Solution

I really like Paul Tyma's article on this issue; it is really in-depth. I take away two main points from it:

>Higher throughput can be achieved with traditional blocking IO (he measured it)
>Traditional blocking IO keeps your server logic less complex: the state of the client-server conversation is implicitly encoded in the control flow of the handling thread

The main reason to use non-blocking NIO is when you have many simultaneous, mostly idle connections. The reason: with NIO you can serve many connections from the same thread, which is better.

OK, that's what you can read everywhere. Now... why is it better?

There are two main reasons, related to two different kinds of per-thread overhead:

>When the scheduler changes which thread a processor core is running, a "context switch" occurs. This can be an expensive operation: the thread has state inside the processor (register values, data loaded into the L1/L2/L3 caches, etc.) that must be "saved" somewhere when the thread is paused and "restored" when it resumes; moreover, because the contents of the L1/L2/L3 caches are lost, you may get tons of cache misses afterwards, which may or may not hurt, depending on the workload
>Each thread must allocate its own separate stack (mainly used to store local variables and the return addresses of function calls)

Therefore, each thread carries some "wasted" memory (its stack) and possibly some "wasted" processor cycles (spent on context switching).

Now, suppose you have a chat server. Clients open an HTTP connection to ask for new messages, and your server answers only when a new message is available for that client (so the client receives new messages immediately). Suppose you have 10K clients. In the traditional blocking, thread-per-connection model, that means 10K threads. In Java, a typical value for the thread stack size (-Xss) is 256KB, so with 10K threads you are reserving close to 2.5GB of memory just for stacks! Worse: even if the chat server is completely idle and no messages are being exchanged, those 2.5GB are still wasted. Add a lot of context switches on top of that and you can see the problem.
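The arithmetic is easy to check (256KB is just the example stack size used above, not a universal default):

public class StackMemoryEstimate {
  public static void main(String[] args) {
    int threads = 10_000;
    long stackBytes = 256L * 1024;           // -Xss256k, the example value above
    long totalBytes = threads * stackBytes;  // memory reserved just for thread stacks
    System.out.printf("%,d threads x 256KB = %.2f GB%n",
        threads, totalBytes / (1024.0 * 1024 * 1024));
    // prints: 10,000 threads x 256KB = 2.44 GB
  }
}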

In this case you are much better off using non-blocking NIO: a much smaller number of threads (possibly even a single one! :D) is enough to handle all 10K clients, so you save the context switches (i.e. CPU time) and the thread stacks (i.e. memory), at the price of more complex code, which is the usual downside of non-blocking NIO.
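For reference, here is a minimal single-threaded Selector loop of the kind described above. It is only a sketch (a plain echo loop that ignores partial writes and real protocol parsing), not a production server:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SingleThreadedNioServer {
  public static void main(String[] args) throws IOException {
    Selector selector = Selector.open();
    ServerSocketChannel server = ServerSocketChannel.open();
    server.bind(new InetSocketAddress(8080));
    server.configureBlocking(false);
    server.register(selector, SelectionKey.OP_ACCEPT);

    ByteBuffer buffer = ByteBuffer.allocate(4096);
    while (true) {
      selector.select();                      // one thread waits on ALL connections
      Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
      while (keys.hasNext()) {
        SelectionKey key = keys.next();
        keys.remove();
        if (key.isAcceptable()) {
          SocketChannel client = server.accept();
          client.configureBlocking(false);
          client.register(selector, SelectionKey.OP_READ);
        } else if (key.isReadable()) {
          SocketChannel client = (SocketChannel) key.channel();
          buffer.clear();
          int read = client.read(buffer);     // does not block: data is already there
          if (read == -1) {
            client.close();                   // peer closed the connection
          } else {
            buffer.flip();
            client.write(buffer);             // echo back; a real server would parse/route
          }
        }
      }
    }
  }
}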
