Quick sort (a sorting algorithm)

<h1 id="Introduction to algorithm">I. Introduction to the algorithm</h1>

Quick sort: the basic idea of quick sort is to partition the records to be sorted into two independent parts in one pass, such that the keys of one part are all smaller than the keys of the other. Each part is then quick-sorted in the same way, and the process continues until the whole sequence is ordered.

Take any element (for example, the first) as the pivot, move all elements smaller than it to the front and all elements larger than it to the back, forming left and right sub-tables. Then reselect a pivot for each sub-table and repartition it by the same rule, until every sub-table contains only one element.

① In each pass, the sub-tables are formed by scanning alternately from both ends toward the middle.

② Since the operation on each sub-table is the same in every pass, a recursive algorithm can be used.

Set two pointers i and j at the two ends of the sequence, and first save one element (usually the first) in temp. Scan from the right with j, comparing the element j points to with temp: while it is greater than or equal to temp, decrement j by 1; once it is less than temp, move it into the hole at position i. Then scan from the left with i: while the element i points to is less than or equal to temp, increment i; once it is greater than temp, move it into the hole at position j. Repeat until i and j meet, then place temp in that position.

Best case: after each partition, the left and right subsequences have the same length. Worst case: the input is already ordered (say, from small to large); the recursion tree degenerates into a skewed tree, each partition yields only one subsequence, smaller than the previous one by a single object, so all n objects must be located in n-1 partitions, and the i-th partition needs n-i key comparisons to place the i-th object. Average case: if all permutations of the input are equally likely, the average behaviour is much closer to the best case than to the worst. Time efficiency: O(n log2 n), since each pass fixes at least one element and, on average, the subproblem size halves. Space efficiency: O(log2 n), for the recursion stack. Stability: unstable, because swaps across the pivot can reorder records with equal keys.

As the description above shows, quick sort swaps elements back and forth around a pivot, so the number of passes depends on the initial sequence. Choosing the pivot well is therefore important, because it determines the efficiency of the sort.

First/last-element method: the first or last element of the sequence is used as the pivot. If the input sequence (the array above) is random, the running time is acceptable. If the array is already sorted, however, every partition is maximally unbalanced: each one shrinks the sequence to be sorted by only a single element. This is the worst case, with time complexity O(n^2). Since sorted or partially sorted input is very common, using the first element as the pivot is a poor choice.

Random pivot: this is a relatively safe strategy. Because the pivot position is random, the partitions are not consistently bad. (When all elements of the array are equal, it is still the worst case, with time complexity O(n^2).) For most inputs, randomized quick sort achieves the expected time complexity O(n log n).

Median-of-three method: each partition takes one element as the pivot value and uses it to divide the sequence into two parts. The median-of-three method takes the median of the leftmost, middle, and rightmost elements as the pivot. This eliminates the bad case of presorted input and reduces the number of comparisons in quick sort by about 14%.

1. Best case: each partition splits the array evenly. Sorting n keywords then gives a recursion tree of depth [log2 n] + 1 ([x] denotes the largest integer not greater than x); that is, only about log2 n levels of recursion are needed. Let the time required be T(n). The first partition scans the entire array once, making n comparisons, and the pivot obtained divides the array into two parts, each needing T(n/2) time (note that in the best case the split is exactly in half). Continuing the partitioning yields the recurrence T(n) <= 2T(n/2) + n, which unrolls to T(n) <= n T(1) + n log2 n. This shows that in the best case the time complexity of quick sort is O(n log n).

2. Worst case: the sequence to be sorted is already in ascending or descending order, and each partition produces one subsequence with a single record fewer than the previous one, the other subsequence being empty. Drawing the recursion tree, it is a skewed tree. Now n-1 recursive calls must be executed, and the i-th partition needs n-i key comparisons to find the i-th record, that is, the position of the pivot. The total number of comparisons is therefore n(n-1)/2, and the time complexity is O(n^2).

3. Average case: let T(n) be the expected time to sort an array of n elements; this expectation is exactly the average complexity. An empty table needs no sorting, so the initial condition is T(0) = 0. Quick sort takes one element, usually the first, puts everything less than or equal to it on its left and everything greater on its right. If, say, k elements end up at or to the left of the pivot, the remaining work is T(k-1) + T(n-k). Since k is not known in advance and each value from 1 to n occurs with equal probability, we average over k. The partition operation itself takes time P(n), which is linear, P(n) = cn, and must also be added, so the total is:
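Putting the pieces above together, the average-case recurrence can be written out (T(n) is the expected time, cn the linear partition cost; this is the standard derivation the text sketches):

```latex
T(0) = 0, \qquad
T(n) = cn + \frac{1}{n}\sum_{k=1}^{n}\bigl[T(k-1) + T(n-k)\bigr]
     = cn + \frac{2}{n}\sum_{k=0}^{n-1} T(k).
```

Multiplying by n, subtracting the corresponding equation for n-1, and dividing by n(n+1) gives a telescoping sum for T(n)/(n+1) that grows like the harmonic series, i.e. O(log n), so T(n) = O(n log n) on average.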

int partition(int A[], int left, int right) {
    int p = A[left];
    while (left < right) {
        while (left < right && A[right] >= p) right--;
        A[left] = A[right];
        while (left < right && A[left] <= p) left++;
        A[right] = A[left];
    }
    A[left] = p;
    return left;
}

void Quick(int A[], int left, int right) {
    if (left < right) {
        int pnode = partition(A, left, right);
        Quick(A, left, pnode - 1);
        Quick(A, pnode + 1, right);
    }
}

The content of this article was collected from the internet and is intended as a learning reference; the copyright belongs to the original author.