Java – comparing the efficiency of two O(n) algorithms
I'm studying linked lists, and the problem is: write a function that prints the middle item of a given linked list (assuming the LL has an odd number of nodes).
Method 1 – traverse the LL and count the nodes with a counter. Add 1 (to make it even) and divide the counter by 2 (ignoring the off-by-one details). Traverse the LL again, but this time only up to that counter, and return that node.
void GetMiddleTermMethod1(){
    // First pass: count the number of nodes
    int counter = 1;
    Node n = FirstNode;
    while (n.next != null){
        counter = counter + 1;
        n = n.next;
    }
    counter = (counter + 1) / 2;   // counter is now the 1-based index of the middle node
    // Second pass: walk to the counter-th node and print it
    Node temp = FirstNode;
    for(int i = 2; i <= counter; i++){
        temp = temp.next;
    }
    System.out.println(temp.data);
}
Method 2 – initialize two node references. Traverse the LL once: one reference advances two nodes at a time, the other advances one node at a time. When the fast reference reaches null (the end of the LL), the slow reference is at the middle node, which is returned.
void GetMiddleTermMethod2(){
    Node n = FirstNode;     // fast reference, moves two nodes per step
    Node mid = FirstNode;   // slow reference, moves one node per step
    while(n.next != null){  // safe because the list has an odd number of nodes
        n = n.next.next;
        mid = mid.next;
    }
    System.out.println(mid.data);
}
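Both snippets reference a Node type and a FirstNode field that are not shown. For context, here is a minimal sketch of a surrounding class they could live in; the class name, the Node constructor, and the add() helper are my own assumptions for illustration, not part of the original code:

import java.util.Objects;

public class SinglyLinkedList {

    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    Node FirstNode;
    private Node lastNode;  // tail reference so add() is O(1)

    // Append a value at the end of the list (helper for the demo).
    void add(int value) {
        Node node = new Node(value);
        if (FirstNode == null) {
            FirstNode = node;
        } else {
            Objects.requireNonNull(lastNode).next = node;
        }
        lastNode = node;
    }

    // GetMiddleTermMethod1() and GetMiddleTermMethod2() from above belong in this class.

    public static void main(String[] args) {
        SinglyLinkedList list = new SinglyLinkedList();
        for (int i = 1; i <= 7; i++) {   // 7 nodes (odd), so the middle value is 4
            list.add(i);
        }
        // With the two methods pasted into this class, both calls print 4:
        // list.GetMiddleTermMethod1();
        // list.GetMiddleTermMethod2();
    }
}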
I have three questions –
Q1 – if I'm asked this question in a job interview, how would I know which algorithm is more efficient? I mean, both functions traverse the LL one and a half times (the second one does it in a single loop instead of two, but it still goes over the LL one and a half times).
Q2 – since both algorithms are rated O(n), what parameters determine which one is more efficient?
Q3 – what is the general method for evaluating the efficiency of such algorithms? I would really appreciate it if you could point me to an appropriate tutorial.
thank you
Solution
Well, there is no really simple answer. The answer can differ depending on compiler optimizations, JIT optimizations, and the actual machine running the program (which, for whatever reason, may be better optimized for one of the algorithms).
The fact is, there is seldom a "clean, theoretical" way to determine that algorithm A is faster than algorithm B under conditions (1), (2), ..., (k), other than the theoretical big O notation, which only gives us the asymptotic behavior.
However, this doesn't mean you can't do anything: you can benchmark the code by creating various random data sets and measuring how long each algorithm takes. It is very important to do this more than once. How many times? Until you reach statistical significance, using some known and accepted statistical test such as the Wilcoxon signed-rank test.
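A rough sketch of what such a measurement could look like, assuming the SinglyLinkedList sketch above with the two methods pasted in. The list sizes, repetition count, and use of System.nanoTime() are illustrative choices, not part of this answer; a serious benchmark would also handle JIT warm-up (e.g. with a harness such as JMH) and would use method variants that return the middle node instead of printing it, so that console I/O does not dominate the timing.

import java.util.Random;

// Illustrative micro-benchmark: build random odd-length lists and collect
// paired timings of the two methods, to be fed into a statistical test
// such as the Wilcoxon signed-rank test.
public class MiddleNodeBenchmark {

    public static void main(String[] args) {
        Random random = new Random(42);
        int repetitions = 1000;
        long[] method1Nanos = new long[repetitions];
        long[] method2Nanos = new long[repetitions];

        for (int r = 0; r < repetitions; r++) {
            // Random odd size so the "middle node" is well defined.
            int size = 2 * (1 + random.nextInt(50_000)) + 1;
            SinglyLinkedList list = new SinglyLinkedList();
            for (int i = 0; i < size; i++) {
                list.add(random.nextInt());
            }

            long start = System.nanoTime();
            list.GetMiddleTermMethod1();
            method1Nanos[r] = System.nanoTime() - start;

            start = System.nanoTime();
            list.GetMiddleTermMethod2();
            method2Nanos[r] = System.nanoTime() - start;
        }

        // The paired samples (method1Nanos[i], method2Nanos[i]) are what a
        // Wilcoxon signed-rank test would compare for significance.
        System.out.println("Collected " + repetitions + " paired timings.");
    }
}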
In addition, in many cases the performance gain is simply not significant enough to be worth the time spent optimizing the code, and it is even worse if the optimization makes the code less readable and therefore harder to maintain.