Wednesday, February 29, 2012

Building Blocks, Analysis of Algorithms & Rates of Growth


I thought it would be prudent to split this tutorial into 2 parts, in order to avoid bombarding the reader with too much information.
The most important aspect of analyzing algorithms is to figure out which features of the algorithm ultimately define its efficiency, while ignoring the minute details. This is the only way we can describe the performance of algorithms across different platforms & implementations.

Not all computers have the same specifications or architecture; some computers are PCs, others are Apple Macintoshes, some are 32-bit and others are 64-bit, and we could go on forever listing differences, but you get the picture…

We will begin by introducing some building blocks, and then in the 2nd part of this tutorial I will present the reader with tools (notations) that help describe an algorithm regardless of the technologies used.

Let’s look at a simple while loop statement and describe it mathematically:
 
Figure 1: a simple while loop that prints 99 “Whats up” statements
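Since the figure is an image, here is a minimal Java sketch of the kind of loop we are analyzing (the class name, starting value, and exact bound are my assumptions, so the real figure may differ slightly):

// Minimal sketch of the Figure 1 loop (class name, starting value, and bound are
// assumed, since the original figure is an image).
public class WhileLoopDemo {
    public static void main(String[] args) {
        int counter = 0;                        // declaration + initialization: constant work
        while (counter < 100) {                 // comparison checked on every pass
            System.out.println("Whats up");     // the print statement
            counter = counter + 1;              // increment and assignment
        }
    }
}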
Ok, here is our complexity analysis of Figure 1, step by step:

1)      There is a variable declaration, int counter: 1 constant operation
2)      “int counter” is initialized to 0: 1 constant operation
3)      A logical comparison, “while (counter < 100)”, is performed 100 times: 100 operations
4)      System.out.println(“Whats up”) is executed 99 times: 99 operations
5)      “counter” is increased by 1, 100 times: 100 operations
6)      “counter” is assigned its new value on each iteration, 100 times: 100 operations

More or less, according to the steps above, the code in Figure 1 has complexity:
1 constant operation + 1 constant operation + 100 operations + 99 operations + 100 operations + 100 operations = 401 operations

Now, if we think of the expression “while (counter < 100)” as “while (counter < n)”, in order to make a more general case, then:

1 constant operation + 1 constant operation + n operations + (n-1) operations + n operations + n operations = 3n+(n-1)+2  operations
You see, the constants stay the same, and the while loop is what actually defines this algorithm’s cost.

Therefore the above algorithm’s function is f(n) = 3n + (n-1) + 2, which is a linear function.
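As a sketch of this generalized case (the class and method names are my own, not from the original figure), the loop with the bound 100 replaced by a parameter n looks like this:

public class GeneralizedLoop {
    // Hypothetical generalization of the Figure 1 loop: the bound 100 becomes a
    // parameter n (class and method names are assumptions, not from the original post).
    static void printWhatsUp(int n) {
        int counter = 0;                       // constant work: declaration + initialization
        while (counter < n) {                  // comparison runs on the order of n times
            System.out.println("Whats up");    // print runs once per pass through the loop
            counter = counter + 1;             // increment and assignment, once per pass
        }
    }

    public static void main(String[] args) {
        printWhatsUp(100); // behaves like Figure 1 when n = 100
    }
}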
We are going to simplify further by using the following simple rules (Dasgupta et al.):

1)      Multiplicative constants can be omitted,

Example: 50n^2 simplifies to n^2

2)      n^a dominates n^b if a > b

Example: n^3 dominates n^2, so n^3 + n^2 + n simplifies to n^3 (see the quick numerical check after these rules)

3)      Any exponential dominates any polynomial

Example: 3^n dominates n^5

Example: 2^n dominates 5n + 100

4)      Any polynomial dominates any logarithm

Example: n dominates (log n)^3
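Here is a quick numerical sketch (an illustration of mine, not from the original post) showing rule #2 in action: as n grows, n^3 + n^2 + n is increasingly well approximated by n^3 alone.

// Quick numerical check of rule #2 (illustration only, not from the original post):
// the ratio (n^3 + n^2 + n) / n^3 approaches 1, so n^3 dominates the other terms.
public class DominanceDemo {
    public static void main(String[] args) {
        for (long n = 10; n <= 1_000_000; n *= 10) {
            double full = Math.pow(n, 3) + Math.pow(n, 2) + n;
            double dominant = Math.pow(n, 3);
            System.out.println("n = " + n + "  ratio = " + (full / dominant));
        }
    }
}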
So if we apply rule #1, “Multiplicative constants can be omitted”, to our analysis of Figure 1, the function f(n) = 3n + (n-1) + 2 simplifies to f(n) = n, since the additive constants are dominated by the n term. This means the code in Figure 1 behaves linearly.

Questions & Answers
Look at the following functions; what is the dominating feature or term in each function?
(Hint: use the rules defined above)

1)      f(x) = x^2 + 4x + 30

Answer = x^2, using rules 1 & 2

2)      f(x) = x log x + 100x

Answer = x log x, since x log x grows faster than 100x (rule 1 lets us drop the constant 100)

3)      f(x) = x^5 + x^3 + log x + 30x

Answer = x^5, using rules 2 & 4

4)      f(x) = x^2 / 30x

Answer = x, since x^2 / 30x simplifies to x/30, and rule 1 drops the constant 1/30

5)      f(x) = x^(3/2) + x^(1/3) + x

Answer = x^(3/2), using rule 2
Now for 2 more examples, and Part 1 is finished.
 

Figure 2: a for loop that iterates 10 times and prints i’s value on each iteration
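Since this figure is also an image, here is a minimal Java sketch of what it shows (the exact bounds and class name are my assumptions):

// Minimal sketch of the Figure 2 loop (the original figure is an image; exact
// bounds and class name are assumed): a for loop that prints i on each of 10 passes.
public class ForLoopDemo {
    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            System.out.println(i);
        }
    }
}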
Now that we know a while loop’s number of operations can be described as linear, we can apply the same reasoning to any loop, such as a for loop. Figure 2 is f(n) = n, or a linear number of operations.
Figure 3: an inner loop nested in an outer loop, printing k’s value 10 times for each iteration of i, therefore 100 print statements in total.
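Again, the figure is an image, so here is a minimal Java sketch of the nested loops it shows (bounds and class name are my assumptions):

// Minimal sketch of the Figure 3 loops (the original figure is an image; exact
// bounds and class name are assumed): the inner loop runs fully for each pass of
// the outer loop, so the print executes 10 * 10 = 100 times.
public class NestedLoopDemo {
    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            for (int k = 0; k < 10; k++) {
                System.out.println(k);
            }
        }
    }
}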
Nesting loops is just like multiplying the loops’ operation counts; therefore one loop nested in another performs n*n, or n^2, operations, of course ignoring the minute constants.
Now let’s look at rates of growth before ending Part 1.
Rates of growth in order
log_2 n < n < n log_2 n < n^2 < n^3 < 2^n
The above chain of expressions summarizes, from least to greatest, the growth rates that most algorithms exhibit in terms of their runtime and the input size n.
Below is the graphical representation of the above expressions.
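As a rough numerical companion to the graph (this table is my own illustration, not from the original post), here is how the terms compare for a few small values of n:

// Rough numerical illustration (not from the original post) of the ordering
// log2 n < n < n log2 n < n^2 < n^3 < 2^n for a few small input sizes.
public class GrowthRates {
    public static void main(String[] args) {
        for (int n : new int[] {2, 8, 16, 32}) {
            double log2n = Math.log(n) / Math.log(2);
            System.out.printf("n=%2d  log2 n=%4.1f  n log2 n=%6.1f  n^2=%5d  n^3=%6d  2^n=%12.0f%n",
                    n, log2n, n * log2n, n * n, n * n * n, Math.pow(2, n));
        }
    }
}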
By now we have learned some very simple yet powerful simplification rules for analyzing an algorithm’s time complexity, but in order to be more precise we need some formal notations, which is where Part 2 picks up.
References
1.       2000 Solved Problems in Discrete Mathematics. 1st ed. New York: McGraw-Hill, 1992. 101-110. Print.
2.       Dasgupta, Sanjoy, Christos Papadimitriou, and Umesh Vazirani. Algorithms. 1st ed. McGraw-Hill, 2008. Print.





Comments

  1. I have studied the first two chapters of the CLRS algorithms book and felt I understood them, but was still hitting my head. Your tutorial is awesome; it fits the missing piece of the incomplete puzzle in my head. Thanks a lot!

     Reply: Thanks, CLRS is a great book. I use a lot of my knowledge from CLRS, but it requires several rounds of review. I'm going to upload a complete tutorial on recursion soon. If you have any questions, just post them :)
