Java – concurrency design pattern for an optimistic cache
I have a web service running on a set of servers. The service performs some internal processing and then calls an external service that charges a fee per call.
I want to add a cache so that if I receive the same service request again (which is guaranteed to happen), I don't have to process it repeatedly. This saves processing time/power and avoids the cost of the external call.
However, I am running into the following constraints and am trying to figure out how to manage the cache:
> The service runs on multiple web servers for high availability and scalability.
> A request may take 5 seconds to respond, and in the meantime I may receive two or three other identical requests.
In a distributed environment, how do I hold the other identical service calls until the first response is available (in the cache)?
I have considered putting a proxy in front that builds a queue of identical requests, so that when the first one returns, the same response can also be returned to the others. Is this the right pattern, or is there a better concurrency pattern for this situation?
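For a single JVM, the queuing idea can be sketched as request coalescing: identical requests share one in-flight `CompletableFuture`, so the external service is called once and every waiter gets the same response. This is a minimal sketch under stated assumptions; `callExternalService` and the key format are hypothetical stand-ins, and a distributed setup would need a shared store instead of this in-process map.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class CoalescingProxy {
    // One in-flight future per request key; identical requests share it.
    private final Map<String, CompletableFuture<String>> inFlight = new ConcurrentHashMap<>();

    public CompletableFuture<String> handle(String requestKey) {
        return inFlight.computeIfAbsent(requestKey, key ->
                CompletableFuture.supplyAsync(() -> callExternalService(key))
                        // Drop the entry once done so later requests start fresh
                        // (or keep it if you also want it to act as a cache).
                        .whenComplete((result, error) -> inFlight.remove(key)));
    }

    // Hypothetical stand-in for the slow, paid external call.
    private String callExternalService(String key) {
        return "response-for-" + key;
    }
}
```

`computeIfAbsent` is atomic, so two servers' worth of threads on the same JVM cannot both start the external call for the same key.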
Solution
You can:
> 1. Compute a cryptographic hash of the request.
> 2. Check whether the result already exists in the database; if so, return it.
> 3. Store the hash in the database with a "result pending" status.
> 4. Call the web service and update the database row with the result.
In step 2, if the hash is already in the database with a "result pending" status, you can poll the database every x milliseconds and return the result once it finally arrives.
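The steps above can be sketched as follows. This is an illustrative sketch, not a finished implementation: a `ConcurrentHashMap` stands in for the shared database table, the `PENDING` sentinel and `callExternalService` are hypothetical, and steps 2 and 3 are fused into one atomic `putIfAbsent` (in a real database you would get the same effect with a unique constraint on the hash column).

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class HashCache {
    private static final String PENDING = "__PENDING__"; // sentinel for "result pending"

    // Stand-in for the shared database table keyed by request hash.
    private final ConcurrentMap<String, String> db = new ConcurrentHashMap<>();

    public String process(String request) throws InterruptedException {
        String hash = sha256(request);                    // step 1: hash the request
        String prev = db.putIfAbsent(hash, PENDING);      // steps 2 + 3, atomically
        if (prev == null) {                               // we are the first caller
            String result = callExternalService(request); // step 4: the expensive call
            db.put(hash, result);
            return result;
        }
        while (PENDING.equals(prev)) {                    // someone else is working on it
            Thread.sleep(50);                             // poll every x milliseconds
            prev = db.get(hash);
        }
        return prev;                                      // cached result
    }

    private static String sha256(String s) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(digest);
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Hypothetical stand-in for the paid external service.
    private String callExternalService(String request) {
        return "result-for-" + request;
    }
}
```

Because `putIfAbsent` both checks and claims the row in one step, only one caller per hash ever reaches the external service; everyone else polls until the result lands.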
The devil is in the details, of course, because you have to decide what to do in case of errors:
> Do you return an error for all subsequent identical requests?
> Do you have the waiting threads retry the web service call themselves?
> Do you return an error, but only for a while, and then try again?
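The third option (fail fast for a while, then allow a retry) can be sketched by recording the failure time on the row. The `Entry` record, cooldown length, and method names here are hypothetical illustrations, assuming the same database-as-map stand-in as above.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class FailureAwareCache {
    // A cached row: either a result, or the time the upstream call failed.
    record Entry(String result, Instant failedAt) {}

    private static final Duration RETRY_AFTER = Duration.ofSeconds(30);
    private final ConcurrentMap<String, Entry> db = new ConcurrentHashMap<>();

    /** Returns the cached result, null if the caller should do the call, or throws if failing fast. */
    public String lookup(String hash) {
        Entry e = db.get(hash);
        if (e == null) return null;                // no row: caller should make the call
        if (e.failedAt() != null) {
            if (Instant.now().isAfter(e.failedAt().plus(RETRY_AFTER))) {
                db.remove(hash, e);                // cooldown over: allow one retry
                return null;
            }
            throw new IllegalStateException("upstream call failed recently; try again later");
        }
        return e.result();
    }

    public void recordSuccess(String hash, String result) {
        db.put(hash, new Entry(result, null));
    }

    public void recordFailure(String hash) {
        db.put(hash, new Entry(null, Instant.now()));
    }
}
```

During the cooldown, identical requests fail immediately instead of piling up behind a broken upstream; after it elapses, one caller gets a `null` and retries on everyone's behalf.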