Monday 3 August 2015

Big Oh notation

  • For run time complexity analysis we use big Oh notation extensively, so it is vital that you are familiar with its general concepts in order to determine which algorithm is best for you in a given scenario.
  • We have chosen to use big Oh notation for a few reasons, the most important of which is that it provides an abstract measurement by which we can judge the performance of algorithms without using mathematical proofs.
The following list explains some of the most common big Oh notations:



  • O(1) constant: the operation doesn't depend on the size of its input, e.g. adding a node to the tail of a linked list where we always maintain a pointer to the tail node.

  • O(n) linear: the run time complexity is proportional to the size of n, e.g. a linear search through an unsorted list.

  • O(log n) logarithmic: normally associated with algorithms that break the problem into smaller chunks per each invocation, e.g. searching a binary search tree.

  • O(n log n) just n log n: usually associated with an algorithm that breaks the problem into smaller chunks per each invocation, and then takes the results of these smaller chunks and stitches them back together, e.g. quick sort.
  • O(n^2) quadratic: e.g. bubble sort.
  • O(n^3) cubic: very rare.
  • O(2^n) exponential: incredibly rare.
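To make the first two non-constant classes concrete, here is a minimal sketch in Python (the class and function names are illustrative, not from the text): a singly linked list that maintains a tail pointer so appending is O(1), and a binary search whose halving of the range per iteration gives the O(log n) behaviour described above.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    """Singly linked list that keeps a pointer to its tail node."""
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, value):
        # O(1): no traversal is needed because we always track the tail.
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

def binary_search(sorted_items, target):
    """O(log n): each iteration halves the remaining search range,
    mirroring a search down a balanced binary search tree."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target not present
```

Appending a million items to this list costs a million constant-time steps, whereas appending to a tail-less list that must traverse from the head each time would cost O(n) per append.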



  • If you encounter either of the latter two items (cubic and exponential) this is really a signal for you to review the design of your algorithm. 
  • While prototyping algorithm designs you may just have the intention of solving the problem irrespective of how fast it works. We would strongly advise that you always review your algorithm design and optimize where possible, particularly loops and recursive calls, so that you can get the most efficient run times for your algorithms.
  • Taking a quantitative approach to many software development properties will make you a far superior programmer: measuring one's work is critical to success.
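As one small sketch of that quantitative approach (the function name and the sizes chosen are arbitrary, not from the text), the standard library's timeit module lets you measure rather than guess, here contrasting an O(n) membership scan over a list with the average-case O(1) lookup in a set:

```python
import timeit

def compare_membership(n=10_000, trials=1_000):
    """Time `target in list` vs `target in set` for the same data."""
    data_list = list(range(n))
    data_set = set(data_list)
    target = n - 1  # worst case for the linear scan over the list

    # Each call times `trials` repetitions of the membership test.
    list_time = timeit.timeit(lambda: target in data_list, number=trials)
    set_time = timeit.timeit(lambda: target in data_set, number=trials)
    return list_time, set_time
```

On any reasonable machine the set lookup wins by orders of magnitude at this size, which is exactly the kind of evidence a design review should rest on.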
