Asynchronous request processing in Java Servlet 3.0

This article explains asynchronous request processing in Servlet 3.0, the problems you may run into, and how to solve them.

Starting with Servlet 3.0, the API provides AsyncContext to support asynchronous request processing. What benefits does asynchronous request processing bring?

Generally speaking, a web container handles requests by assigning one thread to each request. We all know that creating threads is not free, so a web container's thread pool has an upper limit. The predictable problem is that under high load the thread pool gets fully occupied and subsequent requests can only wait; if you are unlucky, the client reports a wait timeout. Before AsyncContext appeared, the only remedy was to enlarge the web container's thread pool.

However, a problem remains. Consider the following scenario:

A web container has a thread pool of size 200, and a web app has two servlets: servlet-a takes 10 s to process a single request, servlet-b takes 1 s. Under high load, more than 200 requests arrive at servlet-a; a request to servlet-b now has to wait, because all HTTP threads are occupied by servlet-a. An engineer notices the problem and enlarges the thread pool to 400, but the load keeps rising; now 400 requests are stuck in servlet-a and servlet-b still cannot respond.

See the problem? Because HTTP threads and worker threads are coupled, a flood of requests to a time-consuming operation fills up the HTTP threads, and the whole web container stops responding.

With AsyncContext, however, we can hand the time-consuming operation over to another thread, so the HTTP thread is released and can handle other requests.

Note that this effect can only be achieved with AsyncContext. If you simply use new Thread() or something similar, the HTTP thread is not returned to the container.

Here is an official example:
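A minimal sketch of the pattern such examples demonstrate, using AsyncContext.start() to hand the slow work to another thread (the class name, URL pattern, and 10-second delay here are illustrative, not taken from the original listing):

```java
import java.io.IOException;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/async1", asyncSupported = true)
public class AsyncServlet1 extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // Switch the request into asynchronous mode. Once doGet() returns,
        // the HTTP thread that received the request goes back to the container.
        AsyncContext asyncContext = req.startAsync();

        // Hand the slow work to "another thread" via AsyncContext.start().
        asyncContext.start(() -> {
            // Log which thread actually runs the work (relevant below).
            System.out.println("worker thread: " + Thread.currentThread().getName());
            try {
                Thread.sleep(10_000L); // simulate a slow operation
                asyncContext.getResponse().getWriter().println("done");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } catch (IOException e) {
                // ignored in this sketch
            } finally {
                // Tell the container we are done with the response.
                asyncContext.complete();
            }
        });
    }
}
```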

Trap

In this official example, each HTTP thread starts another worker thread to process the request and is then returned to the web container. But look at the Javadoc of AsyncContext.start(): it only says that the container will dispatch a thread, possibly from a managed thread pool, to run the given Runnable.

In other words, nothing specifies where that worker thread comes from. Is it a thread pool separate from the HTTP thread pool, or the HTTP thread pool itself?

The article The Limited Usefulness of AsyncContext.start() points out that different web containers implement this differently, and that Tomcat in fact uses the HTTP thread pool to run the task passed to AsyncContext.start().

In other words, we set out to release the HTTP thread but did not really do so: an HTTP thread is still used as the worker thread, it just is not the same HTTP thread that received the request.

This conclusion is also visible in the JMeter benchmark of AsyncServlet1 versus SyncServlet: their throughput is roughly the same. To run it: start Main, then run the benchmark .jmx script with JMeter (Tomcat's default configuration, HTTP thread pool = 200).

Using ExecutorService

As we saw earlier, Tomcat does not maintain a separate worker thread pool, so we have to provide one ourselves. See AsyncServlet2, which uses an ExecutorService with its own thread pool to process the AsyncContext.
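A sketch of that approach, with a servlet-managed fixed pool (the pool size of 300 and the names here are illustrative, not necessarily what AsyncServlet2 actually uses):

```java
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/async2", asyncSupported = true)
public class AsyncServlet2 extends HttpServlet {

    // A worker pool that is separate from Tomcat's HTTP thread pool.
    private ExecutorService workerPool;

    @Override
    public void init() {
        workerPool = Executors.newFixedThreadPool(300);
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext asyncContext = req.startAsync();

        // The HTTP thread only calls startAsync() and submit(), then returns
        // to the container; the slow work runs on our own pool.
        workerPool.submit(() -> {
            try {
                Thread.sleep(10_000L); // simulate a slow operation
                asyncContext.getResponse().getWriter().println("done");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } catch (IOException e) {
                // ignored in this sketch
            } finally {
                asyncContext.complete();
            }
        });
    }

    @Override
    public void destroy() {
        workerPool.shutdown();
    }
}
```

In a real servlet you would normally also set a timeout with asyncContext.setTimeout() and register an AsyncListener; they are omitted to keep the sketch short.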

Other ways

So there is no single fixed way to use AsyncContext; handle it in whatever way fits your actual needs, which means you will need some knowledge of Java concurrent programming.

Misunderstandings about performance

The purpose of AsyncContext is not to improve performance, and it does not by itself provide a performance gain. It provides a mechanism for decoupling HTTP threads from worker threads, and thereby improves the responsiveness of the web container.

However, AsyncContext can improve performance in some cases, depending on how your code is written. For example, suppose the web container's HTTP thread pool has 200 threads and a servlet processes its AsyncContext with a worker pool of 300 threads. In synchronous mode, worker threads = HTTP threads = 200; now there are 300 worker threads, so some performance improvement is to be expected (after all, more hands are doing the work).

Conversely, if the number of worker threads is <= the number of HTTP threads, performance will not improve, because the bottleneck for processing requests is the worker threads. You can change the thread pool size in AsyncServlet2 and compare its benchmark results with SyncServlet to verify this.

Do not conclude from this that the worker thread pool must be larger than the HTTP thread pool, for the following reasons:

The two have different responsibilities: the HTTP pool is used by the web container to accept external requests, while the worker pool processes business logic.

Creating threads has a cost. If the HTTP thread pool is already large, an even larger worker thread pool causes excessive context switching and memory overhead.

The purpose of AsyncContext is to release the HTTP thread so that a long-running operation cannot occupy it and leave the web container unable to respond.

So in practice the worker thread pool is usually not large, and different worker thread pools are built for different kinds of business.

For example, with a web container thread pool of 200 and a worker thread pool of 10 for a slow servlet, no matter how many requests hit the slow operation, they can never fill up the HTTP threads and block other requests from being processed.
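As a sketch of that sizing (the WorkerPools holder class is hypothetical; the numbers are the ones from the paragraph above), the slow servlet would submit its AsyncContext work to a small dedicated pool exactly as AsyncServlet2 does:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical holder for per-business worker pools. Tomcat's HTTP pool
// stays at its default of 200; the slow servlet gets only 10 worker threads.
public class WorkerPools {

    // Deliberately small: at most 10 slow operations run at once, and the
    // rest wait in this pool's queue instead of tying up any HTTP threads.
    public static final ExecutorService SLOW_SERVLET_POOL =
            Executors.newFixedThreadPool(10);
}
```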
