Design and implementation of a cache (reprint)

This article is compiled from the following two blog posts: http://my.oschina.net/ScottYang/blog/298727 and http://my.oschina.net/u/866190/blog/188712

Cache introduction:

A cache is a concept encountered almost everywhere in computing. The cache inside a CPU greatly reduces the time needed to access data and instructions, so that the memory hierarchy (cache plus main memory) combines the speed of the cache with the capacity of main memory. The page cache used by the operating system keeps frequently read disk files from being swapped out of memory, improving access speed. Common cache replacement algorithms include FIFO (first in, first out), LRU (least recently used), and LFU (least frequently used).

LRU (least recently used): this algorithm evicts data based on how recently it was accessed, on the principle that data accessed recently is likely to be accessed again soon. The most common implementation keeps the cached items in a linked list. The specific algorithm is as follows: new data is inserted at the head of the list; whenever a cache hit occurs, the accessed item is moved to the head; when the list is full, the item at the tail is discarded.
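As a quick illustration (not from the original posts), Java's standard LinkedHashMap can already behave as an LRU cache when constructed in access order; a minimal sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache sketch built on LinkedHashMap's access-order mode.
// The class name and capacity handling are illustrative assumptions.
public class LruCacheSketch<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCacheSketch(int capacity) {
        // accessOrder = true reorders entries on every get(),
        // so the least recently used entry drifts toward the eldest position
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once capacity is exceeded
        return size() > capacity;
    }
}
```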

LFU (least frequently used): this algorithm evicts data based on its historical access frequency, on the principle that data accessed many times in the past is likely to be accessed again. Each cached block carries a reference count; all blocks are sorted by reference count, and blocks with the same count are sorted by access time. The specific algorithm is as follows: new data is inserted at the tail of the queue (with a reference count of 1); when an item is accessed, its count is incremented and the queue is re-sorted; when eviction is needed, the block at the end of the sorted queue is removed.
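The original code is not reproduced in this reprint, but here is a rough sketch of the idea, with an O(n) eviction scan standing in for the sorted queue and time-based tie-breaking omitted for brevity:

```java
import java.util.HashMap;
import java.util.Map;

// A minimal LFU cache sketch (illustrative, not the original posts' code).
// Eviction scans all entries for the smallest reference count, so it is O(n);
// real implementations keep frequency-ordered structures instead.
public class LfuCacheSketch<K, V> {
    private final int capacity;
    private final Map<K, V> values = new HashMap<>();
    private final Map<K, Integer> counts = new HashMap<>();

    public LfuCacheSketch(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        if (!values.containsKey(key)) return null;
        counts.merge(key, 1, Integer::sum);  // bump the reference count on a hit
        return values.get(key);
    }

    public void put(K key, V value) {
        if (capacity <= 0) return;
        if (!values.containsKey(key) && values.size() >= capacity) {
            evictLeastFrequent();            // make room before inserting new data
        }
        values.put(key, value);
        counts.merge(key, 1, Integer::sum);  // new entries start at count 1
    }

    private void evictLeastFrequent() {
        K victim = null;
        int min = Integer.MAX_VALUE;
        for (Map.Entry<K, Integer> e : counts.entrySet()) {
            if (e.getValue() < min) { min = e.getValue(); victim = e.getKey(); }
        }
        values.remove(victim);
        counts.remove(victim);
    }
}
```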

FIFO (first in, first out): this algorithm evicts data in the order it entered the cache and is the simplest to implement. The specific algorithm is as follows: newly accessed data is appended to the tail of a FIFO queue; when eviction is needed, the item at the head of the queue is removed.
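A minimal sketch of FIFO under the same assumptions (names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// A minimal FIFO cache sketch: eviction follows insertion order only.
public class FifoCacheSketch<K, V> {
    private final int capacity;
    private final Map<K, V> values = new HashMap<>();
    private final Deque<K> order = new ArrayDeque<>();  // keys in insertion order

    public FifoCacheSketch(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        return values.get(key);               // a hit does not change queue order
    }

    public void put(K key, V value) {
        if (capacity <= 0) return;
        if (values.containsKey(key)) {        // update in place, keep its position
            values.put(key, value);
            return;
        }
        if (values.size() >= capacity) {
            values.remove(order.pollFirst()); // evict the oldest inserted key
        }
        values.put(key, value);
        order.addLast(key);
    }
}
```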

There are two main criteria for evaluating a cache algorithm: the hit rate should be high, and the algorithm should be easy to implement. When the workload has hot data, LRU is very efficient, but occasional or periodic batch operations cause its hit rate to drop sharply and pollute the cache badly. LFU can outperform LRU here and avoids the hit-rate drop caused by periodic or incidental accesses, but it must record each item's access history, and once the access pattern changes it needs a long time to adapt; in other words, LFU has its own "cache pollution" effect, where historical data skews decisions about future data. FIFO is trivial to implement, but its hit rate is low, and it is rarely used in practice.

Interview question: designing a cache has appeared in interviews at Google and Baidu. What is a cache, and how would you design a simple one?

The storage space of a cache is usually limited. When all the blocks in the cache are in use and new data must be loaded, we need a good algorithm for choosing which block to replace. The idea of LRU rests on the rule that "data used recently is far more likely to be reused than data used long ago".

To delete the least recently accessed item and insert the newest item quickly, we link the items in the cache with a doubly linked list and keep the list ordered from most recently accessed to least recently accessed. Every time an item is queried, it is moved to the head of the list (an O(1) operation). After many lookups, recently used items gather at the head of the list and unused ones drift toward the tail. When replacement is needed, the item at the tail of the list is the least recently used and is evicted, while new items are always placed at the head.

Note: the doubly linked list is chosen for two reasons. First, hits on cached blocks can occur in any order, independent of the order in which they were loaded. Second, a doubly linked list supports fast insertion, deletion, and reordering of nodes, all in O(1) time.

Finding an element in a linked list, however, takes O(n) time, so every hit would cost O(n) just to locate the node. Without an additional data structure, this is the best we can do, and lookup becomes the bottleneck of the whole algorithm. How do we make lookup faster? With a hash table, whose lookup time is O(1).

Sorting out the idea: for each cache block, we design a data structure that stores the cached content and also serves as a node of the doubly linked list. The attributes next and prev are the two pointers of the doubly linked list, key stores the key of the object, and value stores the cached object itself.
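A minimal sketch of that node in Java (field names follow the description above; the class name follows the article's with conventional casing):

```java
// A doubly linked list node that also carries the cached key/value pair.
public class DoubleLinkedListNode<K, V> {
    DoubleLinkedListNode<K, V> prev;  // pointer to the previous node
    DoubleLinkedListNode<K, V> next;  // pointer to the next node
    K key;                            // the key of the cached object
    V value;                          // the cached object itself

    public DoubleLinkedListNode(K key, V value) {
        this.key = key;
        this.value = value;
    }
}
```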

Query: look up the node through the hash table; on a hit, move it to the head of the list and return its value.

Insert: place the new node at the head of the list; if the cache is full, evict the tail node first.

Update: overwrite the node's value and move it to the head of the list.

Delete: remove the node from both the linked list and the hash table.

Reference implementation: Apache JCS

1) First, we need a doubly linked list, so naturally we define its node and the list itself: DoubleLinkedListNode and DoubleLinkedList.

For DoubleLinkedList, we need to synchronize the various read and write operations on its nodes:
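A possible sketch of such a list, with each operation synchronized as described; the method names (addFirst, remove, removeLast, moveToFirst) are illustrative assumptions, not the original posts' code:

```java
// Doubly linked list ordered from most recently used (head) to least (tail).
public class DoubleLinkedList<K, V> {
    private DoubleLinkedListNode<K, V> head;  // most recently used end
    private DoubleLinkedListNode<K, V> tail;  // least recently used end

    // Insert a node at the head of the list.
    public synchronized void addFirst(DoubleLinkedListNode<K, V> node) {
        node.prev = null;
        node.next = head;
        if (head != null) head.prev = node;
        head = node;
        if (tail == null) tail = node;
    }

    // Unlink a node from wherever it sits in the list.
    public synchronized void remove(DoubleLinkedListNode<K, V> node) {
        if (node.prev != null) node.prev.next = node.next; else head = node.next;
        if (node.next != null) node.next.prev = node.prev; else tail = node.prev;
        node.prev = null;
        node.next = null;
    }

    // Remove and return the tail node (the LRU victim), or null if empty.
    public synchronized DoubleLinkedListNode<K, V> removeLast() {
        DoubleLinkedListNode<K, V> last = tail;
        if (last != null) remove(last);
        return last;
    }

    // Move an already-linked node to the head after a cache hit.
    public synchronized void moveToFirst(DoubleLinkedListNode<K, V> node) {
        remove(node);
        addFirst(node);
    }
}
```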

2) With the doubly linked list in place, we next implement a custom LRUMap that stores a DoubleLinkedListNode under each key, so that lookups can go through the map's hash table.

Before giving the implementation, recall the key-value pairs in the map: the key is supplied by the caller, and the value is the DoubleLinkedListNode. When putting, the cache capacity is enforced as needed (this is where the LRU algorithm applies).

If that seems natural, consider the opposite approach for a moment. A map is itself a container that could be used directly as a cache, and Map(Object key, Object value) looks simpler than Map(Object key, DoubleLinkedListNode node). But a problem appears when we need to delete an object, specifically the one that has gone unused the longest: a plain map cannot express that. Going one step further, you might add a timestamp to each object in the map and, on eviction, traverse every object to find and delete the one with the earliest timestamp; this works but is very inefficient, taking O(n) time. The point is that the doubly linked list exists precisely to delete the least recently used object quickly, which is why we need a node structure that ties the HashMap and the doubly linked list together.

Next, we implement the basic operations of the LRUMap, as follows:
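Since the original code blocks are not reproduced in this reprint, here is a sketch of what those basic operations could look like, combining a HashMap with the DoubleLinkedList above; method names and details are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// LRUMap sketch: a HashMap gives O(1) lookup of nodes, and the doubly linked
// list gives O(1) reordering and eviction.
public class LRUMap<K, V> {
    private final int capacity;
    private final Map<K, DoubleLinkedListNode<K, V>> map = new HashMap<>();
    private final DoubleLinkedList<K, V> list = new DoubleLinkedList<>();

    public LRUMap(int capacity) { this.capacity = capacity; }

    // Query: find the node via the hash table, move it to the head on a hit.
    public synchronized V get(K key) {
        DoubleLinkedListNode<K, V> node = map.get(key);
        if (node == null) return null;
        list.moveToFirst(node);
        return node.value;
    }

    // Insert / update: new entries go to the head; when full, evict the tail.
    public synchronized void put(K key, V value) {
        DoubleLinkedListNode<K, V> node = map.get(key);
        if (node != null) {                   // update: overwrite, move to head
            node.value = value;
            list.moveToFirst(node);
            return;
        }
        if (map.size() >= capacity) {         // evict the least recently used node
            DoubleLinkedListNode<K, V> victim = list.removeLast();
            if (victim != null) map.remove(victim.key);
        }
        node = new DoubleLinkedListNode<>(key, value);
        list.addFirst(node);
        map.put(key, node);
    }

    // Delete: unlink from both the hash table and the list.
    public synchronized void remove(K key) {
        DoubleLinkedListNode<K, V> node = map.remove(key);
        if (node != null) list.remove(node);
    }
}
```

A short usage example under the same assumptions:

```java
LRUMap<String, String> cache = new LRUMap<>(2);
cache.put("a", "1");
cache.put("b", "2");
cache.get("a");       // "a" becomes the most recently used entry
cache.put("c", "3");  // evicts "b", the least recently used entry
```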

The content of this article was collected from the web and is provided for learning and reference; copyright belongs to the original authors.