Java – high performance JMS messaging

I read the slides from this year's ÜberConf. One of the speakers argues that Spring JMS adds performance overhead to your message queuing system, but I don't see any evidence on the slide to support that claim. The speaker also claims that peer-to-peer messaging is faster than the traditional publish/subscribe model, because each message is sent only once rather than being broadcast to every consumer.

I would like to know if there are any experienced Java messaging experts who can weigh in here and clarify some technical points:

> Is there a performance overhead when using Spring JMS instead of plain JMS? If so, how and where is it introduced, and is there a way to avoid it?
> What practical evidence supports the claim that P2P is faster than the pub/sub model? If so, would you always want to use P2P instead of pub/sub (i.e. why is pub/sub slower)?

Solution

1) The main overhead of Spring JMS comes from using JmsTemplate to send messages without a caching mechanism underneath. Basically, JmsTemplate will do the following for each message you send:

- Create connection
- Create session
- Create producer
- Create message
- Send message
- Close session
- Close connection

Compare this to hand-written code where you reuse these resources:

- Create connection
- Create session
- Create producer
- Create message
- Send message
- Create message
- Send message
- Close session
- Close connection

Since creating connections, sessions, and producers requires round-trip communication between your client and the JMS provider, as well as resource allocation, this creates considerable overhead when sending many small messages.
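The reuse pattern above can be sketched in plain JMS. This is a minimal sketch, assuming an ActiveMQ broker at a hypothetical local URL; any `javax.jms.ConnectionFactory` works the same way:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class ReusingSender {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker URL -- adjust for your environment.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");

        Connection connection = factory.createConnection();  // created once
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("orders"));

            // Only the cheap steps (create message, send) are repeated per message.
            for (int i = 0; i < 100; i++) {
                TextMessage message = session.createTextMessage("order-" + i);
                producer.send(message);
            }
            session.close();
        } finally {
            connection.close();
        }
    }
}
```

The connection, session, and producer are each created once and reused for all 100 sends, which is exactly what JmsTemplate does not do by default.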

You can easily get around this by caching JMS resources, for example with Spring's CachingConnectionFactory or ActiveMQ's PooledConnectionFactory (if you tagged the question with ActiveMQ).
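Wiring a CachingConnectionFactory in front of the vendor factory might look like this; a sketch assuming ActiveMQ and a hypothetical "orders" queue:

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class CachedTemplateSender {
    public static void main(String[] args) {
        // Wrap the vendor factory so sessions and producers are cached
        // instead of being created and torn down for every send.
        CachingConnectionFactory cachingFactory =
                new CachingConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616"));
        cachingFactory.setSessionCacheSize(10);  // cache up to 10 sessions
        cachingFactory.setCacheProducers(true);  // the default, shown for clarity

        JmsTemplate template = new JmsTemplate(cachingFactory);
        template.convertAndSend("orders", "order-1");  // reuses cached resources
    }
}
```

JmsTemplate keeps its simple per-send programming model, but the expensive connection/session/producer setup now happens only once per cached slot.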

If you are running inside a full Java EE container, pooling/caching is usually built in and applied implicitly when you retrieve the connection factory from JNDI.

For receiving, Spring's default message listener container is typically used. Spring adds a thin layer here that may introduce a small amount of overhead, but the main point is that its performance can be tuned through its concurrency settings. This article explains it very well.
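Tuning concurrency on the listener container can be sketched as follows, again assuming ActiveMQ and a hypothetical "orders" queue:

```java
import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class TunedListener {
    public static void main(String[] args) {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(factory);
        container.setDestinationName("orders");
        // Scale between 5 and 10 concurrent consumers depending on load.
        container.setConcurrency("5-10");
        container.setMessageListener(
                (MessageListener) message -> System.out.println("received " + message));
        container.afterPropertiesSet();
        container.start();
    }
}
```

The `setConcurrency("5-10")` range is what lets throughput scale with load; with a single consumer the container's overhead matters far more.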

2)

Pub/sub is a usage model in which publishers do not need to know which subscribers exist, so you can't simply emulate it with P2P. Moreover, even without hard evidence at hand, I would expect that if you need to send the same message from one application to ten other applications, a pub/sub setup will be faster than sending the message ten times over queues.

On the other hand, if there is only one producer and one consumer, prefer the P2P model with queues, since it is easier to manage in some respects. P2P (queues) allows load balancing across consumers, which pub/sub does not (easily).

ActiveMQ also has a hybrid of the two, virtual destinations, which are essentially topics with load balancing across consumers.

The actual implementation differs between vendors, but topics and queues are not fundamentally different and should perform similarly. What you should check instead is:

- Persistent messages? (= slower)
- Message selectors? (= slower)
- Concurrency?
- Durable subscribers? (= slower)
- Request/reply "synchronously" with temporary queues? (= overhead = slower)
- Queue prefetch size (= affects performance in some scenarios)
- Caching
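A couple of these knobs can be toggled in code; a minimal sketch, assuming ActiveMQ (the prefetch setting shown is an ActiveMQ-specific broker-URL option, not part of the JMS spec):

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class TuningExample {
    public static void main(String[] args) {
        // ActiveMQ lets you set the queue prefetch size on the broker URL.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1000");

        JmsTemplate template = new JmsTemplate(factory);
        // Non-persistent delivery trades durability for speed.
        template.setExplicitQosEnabled(true);  // required for QoS overrides to apply
        template.setDeliveryPersistent(false);
        template.convertAndSend("orders", "fire-and-forget");
    }
}
```

Measure before and after changing any of these: whether a larger prefetch or non-persistent delivery helps depends heavily on message size, consumer speed, and durability requirements.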
