HashMap analysis

This is the back-end mini-class of the IT Academy. Each article gives an in-depth analysis of one back-end knowledge point or skill from eight aspects:

[Background] [Knowledge analysis] [Common problems] [Solutions] [Coding practice] [Extended thinking] [More discussion] [References]

This article shares:

[HashMap analysis]

Hello, I'm Yue Xiaopeng, a student of the 34th session of the Beijing branch of the IT Academy, and an honest, pure, and kind Java programmer. Today I'd like to share Java task 11 from the Academy's official website and the knowledge point behind its deep-thinking question: HashMap analysis.

(1) Background:

HashMap

Map is an abstract interface for key-value mappings. A map contains no duplicate keys: each key maps to at most one value. HashMap is an important member of the Java collections framework and one of the most commonly used Map implementations. In short, HashMap is a hash-table-based implementation of the Map interface. It stores data as key-value pairs, and the stored object is an Entry (containing both a key and a value). HashMap computes each key's storage location with a hash algorithm, which allows fast access.
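As a quick illustration of the "one key, one value" rule, here is a minimal usage sketch (the map contents are invented for the example):

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapBasics {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);
        ages.put("bob", 25);
        ages.put("alice", 31);                 // duplicate key: the value is replaced
        System.out.println(ages.size());       // 2 — one entry per distinct key
        System.out.println(ages.get("alice")); // 31
    }
}
```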

(2) Knowledge analysis:

The concept of hash

A hash function transforms an input of any length (also known as the pre-image) into a fixed-length output (usually an integer); the output is the hash value. This transformation is a compression mapping: the space of hash values is usually much smaller than the input space. Different inputs may hash to the same output, so it is impossible to uniquely recover the input from its hash value. In short, a hash function is a digest function that compresses a message of any length into a message of fixed length.
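Both properties can be seen with Java's standard `String.hashCode()`: inputs of any length compress to a fixed-size 32-bit int, and distinct inputs can collide ("Aa" and "BB" are a well-known colliding pair under String's hash algorithm):

```java
public class HashDemo {
    public static void main(String[] args) {
        // Inputs of any length compress to a fixed-size 32-bit int.
        System.out.println("a".hashCode());
        System.out.println("a much longer input string".hashCode());
        // Different inputs may share the same hash value:
        System.out.println("Aa".hashCode() == "BB".hashCode()); // true
    }
}
```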

Data structure of HashMap

As we know, the two most commonly used structures in Java are the array and the linked list, and almost all data structures can be built by combining these two. HashMap is a typical example of such a combination: it is, in effect, an array of linked lists. Its underlying implementation is still an array, but each slot of the array is the head of a chain.
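The "array of linked lists" layout can be sketched as follows. This is an illustrative model, not the JDK source: the `Node` fields mirror HashMap's internal node, and two colliding entries are chained into one bucket.

```java
public class BucketSketch {
    // Minimal illustrative node, modeled after HashMap's internal entry node.
    static class Node {
        final int hash;
        final String key;
        String value;
        Node next; // chains colliding entries within one bucket

        Node(int hash, String key, String value, Node next) {
            this.hash = hash; this.key = key; this.value = value; this.next = next;
        }
    }

    public static void main(String[] args) {
        Node[] table = new Node[4]; // the array; each slot heads a linked list
        int bucket = 1;
        table[bucket] = new Node(1, "a", "first", null);
        // A second key landing in the same bucket is chained in front of the first:
        table[bucket] = new Node(1, "b", "second", table[bucket]);

        int chainLength = 0;
        for (Node n = table[bucket]; n != null; n = n.next) chainLength++;
        System.out.println(chainLength); // 2: both entries share one bucket
    }
}
```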

Some concepts in HashMap class

Entry: contains both a key and a value; it is the storage object in HashMap.

transient int size: records the number of key-value pairs in the map.

Capacity: the capacity of the table; if not specified, the default is 16.

loadFactor: the load factor, which measures how full the HashMap is allowed to get; the default is 0.75f.

int threshold: the critical value; when the actual number of key-value pairs exceeds the threshold, the HashMap resizes. threshold = capacity * loadFactor.
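The relationship between these fields can be checked with a one-line calculation, using the default values named above:

```java
public class ThresholdDemo {
    public static void main(String[] args) {
        int capacity = 16;        // default capacity
        float loadFactor = 0.75f; // default load factor
        int threshold = (int) (capacity * loadFactor);
        System.out.println(threshold); // 12: storing a 13th pair triggers a resize
    }
}
```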

(3) Frequently asked questions:

How fast are HashMap reads and writes?

(4) Solution:

The bottom layer of HashMap is an array of linked lists, so when reading or writing data, the storage location of an Entry can be found directly from the hash of its key. Without hash conflicts, the time complexity is O(1).
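The O(1) lookup comes from turning the hash into an array index with constant-time bit operations. The sketch below mirrors the spirit of the JDK's internal hash-spreading helper (XOR-ing in the high bits) but is illustrative, not the actual source:

```java
public class IndexDemo {
    // Spread the hash (mix high bits into low bits), then mask with (capacity - 1)
    // to get a bucket index in one constant-time step.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int capacity = 16;
        int index = hash("example") & (capacity - 1);
        System.out.println(index >= 0 && index < capacity); // always a valid slot
    }
}
```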

(5) Coding practice:

This section sketches the implementation of several HashMap concepts and the flow of the put operation.
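Here is a simplified, self-contained sketch of the put flow: compute the hash, find the bucket, replace the value if the key already exists, otherwise chain a new node at the head of the bucket. This is a teaching model, not the JDK source; the real HashMap also resizes past the threshold and treeifies long chains, both omitted here.

```java
import java.util.Objects;

public class MiniHashMap<K, V> {
    static class Node<K, V> {
        final int hash; final K key; V value; Node<K, V> next;
        Node(int hash, K key, V value, Node<K, V> next) {
            this.hash = hash; this.key = key; this.value = value; this.next = next;
        }
    }

    @SuppressWarnings("unchecked")
    private final Node<K, V>[] table = (Node<K, V>[]) new Node[16];
    private int size;

    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public V put(K key, V value) {
        int h = hash(key);
        int i = h & (table.length - 1);          // bucket index via bit mask
        for (Node<K, V> n = table[i]; n != null; n = n.next) {
            if (n.hash == h && Objects.equals(n.key, key)) {
                V old = n.value;                 // existing key: replace the value
                n.value = value;
                return old;
            }
        }
        table[i] = new Node<>(h, key, value, table[i]); // new key: chain at head
        size++;
        return null;
    }

    public V get(K key) {
        int h = hash(key);
        for (Node<K, V> n = table[h & (table.length - 1)]; n != null; n = n.next) {
            if (n.hash == h && Objects.equals(n.key, key)) return n.value;
        }
        return null;
    }

    public int size() { return size; }
}
```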

(6) Extended thinking:

(7) References:

[1] CSDN blog: https://blog.csdn.net/zxt0601/article/details/77413921

[2] CSDN blog: https://blog.csdn.net/justloveyou_/article/details/62893086

(8) More discussion:

Q1: Why does HashMap ensure that its capacity is a power of 2?

A1: Keeping the capacity a power of 2 helps reduce hash conflicts. A hash conflict means multiple entries land in the same bucket, which slows down reads and writes. HashMap computes a bucket index by performing a bit operation between the key's hash and (capacity - 1). When the capacity is a power of 2, the binary digits of (capacity - 1) are all 1s, so the mask keeps every low bit of the hash, spreads entries as evenly as the hash allows, and minimizes conflicts.
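The bit operation above can be verified directly: for a power-of-two capacity and a non-negative hash, `hash & (capacity - 1)` equals `hash % capacity` but uses a single AND instruction. The hash constant here is an arbitrary value chosen for illustration.

```java
public class MaskDemo {
    public static void main(String[] args) {
        int capacity = 16;     // power of two: 16 - 1 = 0b1111, all low bits set
        int hash = 0x7ABCDE35; // arbitrary non-negative hash, for illustration
        // The mask keeps every low bit, so it agrees with the modulo operation:
        System.out.println((hash & (capacity - 1)) == (hash % capacity)); // true
    }
}
```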

Q2: Since the capacity of HashMap is a power of 2, what is the use of specifying a capacity at initialization, and what value will HashMap actually use?

A2: When initializing a HashMap, we can specify its capacity, but HashMap internally rounds the specified value up to the smallest power of 2 that can accommodate it. For example, if you specify 25, the actual capacity is 32; if you specify 33, the actual capacity is 64. This is tied to how HashMap later computes bucket indexes and minimizes hash conflicts.

Q3: What happens when a put would exceed the HashMap's capacity?

A3: The HashMap resizes, but the trigger is not the table being completely full. Recall that size is the number of entries already stored, capacity is the table's capacity, and loadFactor is the load factor, which expresses the resize condition and can also be set yourself. When size reaches capacity * loadFactor, the table automatically resizes to twice its previous capacity. The default load factor is 0.75, i.e. the table resizes when it is 75% full. A larger load factor uses space more fully but lowers lookup efficiency; a smaller load factor keeps the hash table sparser and wastes more space. The default of 0.75 is a compromise between space and time.
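The rounding in A2 can be sketched as follows. This version uses the standard `Integer.highestOneBit` for brevity; the JDK's own `tableSizeFor` achieves the same result with a different bit-twiddling sequence.

```java
public class CapacityDemo {
    // Round a requested capacity up to the next power of two, in the spirit of
    // HashMap's internal tableSizeFor (illustrative reimplementation).
    static int tableSizeFor(int cap) {
        if (cap <= 1) return 1;
        int highest = Integer.highestOneBit(cap); // largest power of two <= cap
        return (highest == cap) ? cap : highest << 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(25)); // 32
        System.out.println(tableSizeFor(33)); // 64
        System.out.println(tableSizeFor(16)); // 16: already a power of two
    }
}
```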

(9) Thanks:

(10) Conclusion:

That's all for today's sharing. You are welcome to like, forward, comment, and offer criticism~

