Java – why do I need synchronization?

I'm trying to fill some shameful gaps in my knowledge of Java threads, and I'm reading Java Concurrency in Practice by Brian Goetz et al. (which, by the way, is highly recommended). An early example in this book left me with a question. In the following code, I fully understand why you need to synchronize when updating the hits and cacheHits member variables, but why does the getHits method need to be synchronized when it only reads the hits variable?

Example code in Chapter 2:

public class CachedFactorizer extends GenericServlet implements Servlet {
    private BigInteger lastNumber;
    private BigInteger[] lastFactors;
    private long hits;
    private long cacheHits;

    public synchronized long getHits() {
        return hits;
    }

    public synchronized double getCacheHitRatio() {
        return (double) cacheHits / (double) hits;
    }

    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = null;
        synchronized (this) {
            ++hits;
            if (i.equals(lastNumber)) {
                ++cacheHits;
                factors = lastFactors.clone();
            }
        }
        if (factors == null) {
            factors = factor(i);
            synchronized (this) {
                lastNumber = i;
                lastFactors = factors.clone();
            }
        }
        encodeIntoResponse(resp, factors);
    }
    ...

I have a feeling this is related to atomicity, monitors, and locking, but I don't fully understand these concepts, so could someone please explain?

Thank you in advance

James

Solution

There are a number of potential problems here. Michael pointed out a big one (long stores are not atomic), but there is another: writes may be seen out of order between operations that are not related by a happens-before relationship (such as the one established between releasing a lock and subsequently acquiring it, e.g. around a synchronized block).
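To make the first point concrete, here is a minimal standalone sketch (mine, not from the book) of why an unsynchronized, non-volatile long read is unsafe: the Java Memory Model allows 64-bit reads and writes to be split into two 32-bit halves, so a reader could observe a "torn" value that was never written. Whether you ever actually see one depends on the JVM and hardware, since tearing is merely permitted, not guaranteed:

// Hypothetical demo: 64-bit writes without synchronization or volatile may be
// split into two 32-bit halves, so a reader can observe a value never written.
public class TornLongDemo {
    private static long value = 0L; // not volatile, not guarded by any lock

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                value = 0L;   // all-zero bit pattern
                value = -1L;  // all-one bit pattern
            }
        });
        writer.start();

        long torn = 0;
        for (long i = 0; i < 100_000_000L; i++) {
            long v = value;                    // unsynchronized read
            if (v != 0L && v != -1L) {         // half zeros, half ones: a torn value
                torn++;
            }
        }
        writer.interrupt();
        writer.join();
        // On most 64-bit JVMs this prints 0; the JMM merely *allows* tearing.
        System.out.println("Torn reads observed: " + torn);
    }
}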

Note that ++hits comes before ++cacheHits in service(). Without synchronization, the JVM is free to reorder these instructions in ways that can confuse other threads. For example, it could reorder ++cacheHits to run before ++hits, or it could make the incremented cacheHits value visible to other threads before the incremented hits value (in this case the distinction doesn't really matter, since the result can be the same). Imagine such a reordering, starting with a clean cache, resulting in the following interleaving:

Thread 1                  Thread 2
---------------           ----------------
++cacheHits (reordered)
  cacheHits=1,hits=0
                          read hits (as 0)
++hits
  cacheHits=1,hits=1
                          read cacheHits (as 1)

                          calculate 1 / 0 (= epic fail)

That is certainly not the result you expect.
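A side note on that last line: because the book's getCacheHitRatio() casts both counters to double before dividing, the 1/0 case fails silently in Java rather than throwing. A quick standalone check:

// Integer division by zero throws, but the book's code divides doubles:
public class DivisionDemo {
    public static void main(String[] args) {
        System.out.println((double) 1 / (double) 0); // prints Infinity
        System.out.println((double) 0 / (double) 0); // prints NaN
        // long x = 1 / 0;                           // this would throw ArithmeticException
    }
}

A silent Infinity or NaN is arguably worse than an exception, which leads into the next point.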

Note that the failure doesn't have to be this blatant, either. You could make 1,000 service() calls and then have the reading thread see cacheHits as 500 but hits as 1. A cache-hit ratio of 50,000% is not as obviously wrong, which makes it even more confusing to debug.

Synchronizing the reads establishes the happens-before relationship so that this cannot happen, and locking brings the other benefits mentioned in the other answers.
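As a minimal sketch of the idea (my own condensation, not the book's exact code): because the getters acquire the same intrinsic lock that the writer releases, every write made before the release is visible after the acquire, so the two counters always look mutually consistent to readers:

// Minimal sketch: both the writes and the reads go through the same intrinsic lock,
// so releasing it in recordCacheHit() happens-before acquiring it in the getters.
public class HitCounter {
    private long hits;
    private long cacheHits;

    public synchronized void recordHit() {
        ++hits;
    }

    public synchronized void recordCacheHit() {
        ++hits;
        ++cacheHits;
    }

    // Synchronized read: guaranteed to see up-to-date, mutually consistent counters.
    public synchronized long getHits() {
        return hits;
    }

    public synchronized double getCacheHitRatio() {
        // Guarding against hits == 0 is my addition for the sketch, not the book's;
        // the important part is that this method takes the same lock as the writers.
        return hits == 0 ? 0.0 : (double) cacheHits / (double) hits;
    }
}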
