Twisted throughput limit reached? (comparison with Java NIO)

I'm developing a program that simulates a network on a single machine. For this I use Twisted for asynchronous I/O, since one thread per 'connection' would be a bit much. (I also implemented a similar program in Java using its NIO.) However, as I grow the size of the simulated network, throughput under Twisted decreases. When compared with the Java implementation, the Java throughput keeps growing for the same network sizes; the rate of growth slows, but it is still growth.

I'm wondering if anyone has any suggestions as to why this might be happening?

The only reason I can think of is that in Java each 'peer' runs in its own thread (containing its own selector that monitors that peer's connections), whereas in the Python version everything is registered with the reactor (and therefore a single selector). As the Python version scales up, one selector may not be able to respond as quickly. However, that is just a guess; if anyone has anything more concrete, it would be appreciated.
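To make the contrast concrete, here is a minimal sketch of the Twisted side of that architecture (class names, ports, and peer count are made up for illustration, not taken from the actual simulator): every simulated peer just registers more connections with the one shared reactor, so all of them are serviced by a single select/epoll loop.

    # Hypothetical sketch, not the actual simulator code: every simulated peer
    # registers its connections with the one shared reactor, so all of them
    # are serviced by a single select/epoll loop.
    from twisted.internet import reactor
    from twisted.internet.protocol import Protocol, ClientFactory

    class PeerConnection(Protocol):
        def dataReceived(self, data):
            pass  # all peers' reads are dispatched from the same loop

    class PeerFactory(ClientFactory):
        protocol = PeerConnection

    def start_peer(host, port):
        # each 'peer' just adds more connections to the shared reactor
        reactor.connectTCP(host, port, PeerFactory())

    for n in range(300):
        start_peer("127.0.0.1", 9000 + n)  # ports are made up for the sketch

    reactor.run()

In the Java version, by contrast, each peer thread owns its own Selector, so the per-selector load stays roughly constant as peers are added.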

Edit: I ran some tests as suggested by Jean-Paul Calderone and posted the results on imgur for anyone who wants to see them. The tests reported the following average throughput (profiling was done with cProfile, and each test ran for 60 seconds):

Epoll reactor: 100 peers: 20.34 MB, 200 peers: 18.84 MB, 300 peers: 17.4 MB

Select reactor: 100 peers: 18.86 MB, 200 peers: 19.08 MB, 300 peers: 16.732 MB
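For reference, a 60-second profiled run like the ones above can be set up roughly as follows. This is only a sketch: it assumes the simulation has already been wired up elsewhere, and the profile file name is invented.

    # Sketch only: assumes the simulated peers are already registered with the
    # reactor; cProfile and pstats are standard library modules.
    import cProfile
    import pstats
    from twisted.internet import reactor

    reactor.callLater(60, reactor.stop)            # 60-second measurement window
    cProfile.run("reactor.run()", "run_60s.prof")  # profile the whole reactor run

    stats = pstats.Stats("run_60s.prof")
    stats.sort_stats("cumulative").print_stats(20)  # top 20 by cumulative time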

One thing that seems to rise and fall with the reported throughput is the time spent in main.py:48(send), but that correlation is not a surprise, since that is where the data gets sent.

For both reactors, the time spent in the socket's send function increases as throughput decreases, and the number of calls to that send function decreases as throughput decreases (i.e. more time is spent sending on the socket, and fewer sends are made on the socket). For example, under epoll, {method 'send' of '_socket.socket' objects} takes 2.5 seconds over 413600 calls at 100 peers, and 5.5 seconds over 354300 calls at 300 peers.

To try to answer the original question: does this data seem to point to the selector as the limiting factor? The time spent in the selector seems to decrease as the number of peers increases (if the selector were slowing everything down, wouldn't you expect the time spent inside it to increase?). What else could slow down the amount of data being sent? (Sending data is just a function that each peer registers with reactor.callLater over and over again; that's main.py:49(send).)
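For context, here is a hypothetical reconstruction of what such a callLater-driven send loop typically looks like; the class name, chunk size, and interval are assumptions, not the real main.py.

    # Hypothetical reconstruction of a callLater-driven send loop; the class
    # name, chunk size, and interval are assumptions, not the actual main.py.
    from twisted.internet import reactor

    class Peer(object):
        def __init__(self, transport, chunk=b"x" * 4096, interval=0.0):
            self.transport = transport
            self.chunk = chunk        # payload written per call (assumed size)
            self.interval = interval  # 0 means "as soon as the reactor is free"

        def send(self):
            # transport.write() only buffers; the actual socket.send() happens
            # inside the reactor, which is where the profile shows time going.
            self.transport.write(self.chunk)
            reactor.callLater(self.interval, self.send)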

Solution

Try profiling the application at different levels of concurrency and see what gets slower as you add more connections.

select is one likely candidate; if you find that more and more time is being spent there as you add connections, try using the poll or epoll reactor.
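For example, switching reactors is just a matter of installing the one you want before the default reactor gets imported anywhere in the program; a minimal sketch (note that epollreactor is Linux-only):

    # Minimal sketch: install the alternative reactor before anything imports
    # the default one (epollreactor is Linux-only; pollreactor is more portable).
    from twisted.internet import epollreactor  # or: from twisted.internet import pollreactor
    epollreactor.install()

    from twisted.internet import reactor  # now the epoll-based reactor
    reactor.run()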
