Java – non-blocking IO vs blocking IO raw data throughput
There is a statement in the Apache HttpComponents documentation:
Really? Can anyone explain it in more detail? What is a typical use case?
Solution
When you can process requests asynchronously, you should use non-blocking IO: accept the request, hand the processing off to some other execution context (a different thread, an RPC call to another server, some other asynchronous mechanism), and release the web server's threads to accept more incoming requests. When processing completes, a callback is invoked and the response is sent back to the client.
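To make the hand-off pattern concrete, here is a minimal sketch in plain Java. The `ResponseChannel` interface and the `handleRequest`/`process` names are hypothetical stand-ins for a real framework's request handler and response callback; the point is only that the IO thread queues the work and returns immediately:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncHandOff {

    // Hypothetical worker pool, separate from the server's IO threads.
    private static final ExecutorService workers = Executors.newFixedThreadPool(8);

    // Hypothetical stand-in for a real framework's response callback.
    interface ResponseChannel {
        void send(String body);
    }

    static void handleRequest(String request, ResponseChannel channel) {
        // The IO thread only queues the work and returns at once,
        // so the server can go back to accepting connections.
        CompletableFuture
                .supplyAsync(() -> process(request), workers)
                .thenAccept(channel::send); // completion callback sends the response
    }

    static String process(String request) {
        // Placeholder for the real work: a DB query, an RPC to another server, etc.
        return "processed: " + request;
    }

    public static void main(String[] args) {
        handleRequest("GET /hello", body -> System.out.println(body));
        workers.shutdown();
    }
}
```

The response is written from whichever worker completes the future, which is exactly what frees the server threads to handle more incoming requests.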
I suggest reading the Netty documentation to better understand this concept.
As for raw throughput: when your server sends or receives large amounts of data, all those context switches and hand-offs of data between threads will seriously hurt overall performance. Think of it this way: you receive a big request (say, a PUT with a large file), and all you need to do is save it to disk and return 200 OK. Throwing the data around between threads would likely result in more memory-copy operations than you need if you just write it to disk on the same thread. Processing the operation asynchronously will not improve performance either: although you could release the request-processing thread back to the web server's thread pool to handle other requests, your main bottleneck is disk IO, and in that case trying to save more files at the same time will only slow things down.
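For comparison, here is a sketch of the blocking approach described above: stream the upload body straight to disk on the request thread, with no inter-thread hand-off. The `saveUpload` method and the temp-file destination are made up for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class BlockingUpload {

    // Hypothetical handler: stream the request body straight to disk on the
    // request thread -- no hand-off between threads, no extra buffer copies.
    static void saveUpload(InputStream body, Path target) throws IOException {
        Files.copy(body, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path target = Files.createTempFile("upload", ".bin");
        // Simulate a 1 MiB upload body with an in-memory stream.
        try (InputStream body = new ByteArrayInputStream(new byte[1024 * 1024])) {
            saveUpload(body, target);
        }
        System.out.println("saved " + Files.size(target) + " bytes to " + target);
    }
}
```

Since the disk is the bottleneck here, keeping the copy on one thread is as fast as it gets; adding asynchrony would only add scheduling and copying overhead.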
I hope that helps. If you need more explanation, feel free to ask in the comments.