Complexity Analysis

Big-O Notation

Big-O Notation is a mathematical notation that describes the upper bound of an algorithm's running time. It provides an upper limit on the time taken based on the input size. Essentially, it answers the question: "In the worst-case scenario, how does the runtime grow as the size of the input grows?"

Why Big-O Notation?

Big-O helps in understanding the efficiency of algorithms. Two algorithms might solve the same problem, but one might do it faster and more efficiently than the other. With Big-O, we can measure and compare their efficiency without getting into minute details.

Basic Rules of Big-O

Constants Don't Matter: O(2n) is equivalent to O(n). O(500) is O(1).
Smaller Terms Don't Matter: O(n^2 + n) is equivalent to O(n^2).
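To see why constants don't matter, here is a minimal Python sketch (function names are illustrative): one function makes a single pass over the input, the other makes two passes. The second does roughly 2n operations instead of n, but both are O(n), because the constant factor is dropped.

```python
def sum_once(items):
    # One pass over the input: n operations -> O(n).
    total = 0
    for x in items:
        total += x
    return total

def sum_twice(items):
    # Two separate passes: roughly 2n operations, but constants
    # are dropped, so this is still O(n), not "O(2n)".
    total = 0
    for x in items:
        total += x
    for x in items:
        total += x
    return total
```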

Common Big-O Notations:

Table 1: Common Big-O notations

Notation              Name                Description
O(1)                  Constant time       The operation doesn't depend on the input size.
O(log n)              Logarithmic time    Common in algorithms that divide their inputs.
O(n)                  Linear time         The operation grows linearly with input size.
O(n log n)            Log-linear time     Common in efficient sorting algorithms.
O(n^2), O(n^3), ...   Polynomial time     Common in nested loops.
O(2^n)                Exponential time    Seen in problems with many branching possibilities.
O(n!)                 Factorial time      Common in problems that evaluate many different arrangements or sequences.
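The classes in the table can each be illustrated with a small Python function (a sketch; the function names are made up for this example):

```python
def constant_first(items):
    # O(1): indexing doesn't depend on len(items).
    return items[0]

def linear_sum(items):
    # O(n): one operation per element.
    total = 0
    for x in items:
        total += x
    return total

def quadratic_pairs(items):
    # O(n^2): nested loops over the same input produce
    # n * n pairs.
    pairs = []
    for a in items:
        for b in items:
            pairs.append((a, b))
    return pairs

def exponential_subsets(items):
    # O(2^n): every element is either in or out of a subset,
    # so the result doubles with each added element.
    if not items:
        return [[]]
    rest = exponential_subsets(items[1:])
    return rest + [[items[0]] + s for s in rest]
```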

[Figure: growth of the common complexity classes as input size increases]

Polynomials to Big-O Notation

The number of operations a program performs can be expressed as a polynomial, and the degree of that polynomial determines the program's Big-O notation.
Alright, let's dive into some examples of polynomial equations and how they translate into Big-O notation.
Equation: 3n^3 + 20n^2 + 15n + 40
Big-O Notation: Here, the term with the highest degree is 3n^3. The other terms become insignificant as n grows, so the equation simplifies to O(n^3).
Equation: 5n^4 + 2n^3 + 4n^2
Big-O Notation: The term with the highest degree is 5n^4, so the equation is represented as O(n^4).
Equation: 0.5n^2 + 20
Big-O Notation: The dominant term is 0.5n^2. Thus, the equation is O(n^2).
Equation: 7n + 50n^2 + 3n^3 + n^4
Big-O Notation: The term with the highest degree is n^4. The equation can be denoted as O(n^4).
Equation: 1000n
Big-O Notation: This equation is linear, so it translates to O(n).
Equation: 15n^2 + 3n + 5
Big-O Notation: The dominant term is 15n^2. Thus, the equation becomes O(n^2).
Remember: The purpose of Big-O notation is to represent the upper bound on the running time of operations in an algorithm. This is why we usually only consider the term with the highest degree (the dominant term) because it will have the most significant impact on the running time as the input size n grows. Constants and smaller order terms become insignificant for large values of n, so they are omitted in the Big-O representation.
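To see where such polynomials come from in practice, here is a hypothetical Python routine (`count_ops` is a made-up name) whose counted operations form the polynomial n^2 + n, which simplifies to O(n^2):

```python
def count_ops(n):
    # The nested pair of loops runs n * n times and the single
    # loop runs n times, so the operation count is n^2 + n,
    # which is O(n^2) once the lower-order term is dropped.
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1  # n^2 operations
    for i in range(n):
        ops += 1      # n operations
    return ops
```

For example, count_ops(10) performs 10^2 + 10 = 110 counted operations.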

Growth of Complexity Based on Input Size

Table 2: Growth of complexity based on input size (dashes mark values too large to be practical)

Input Size   log n   n        n log n   n^2      n^3      2^n      n!
10           3.3     10       33        100      1000     ~1000    —
100          6.6     100      660       10^4     10^6     ~10^30   —
1000         10      1000     10^4      10^6     10^9     —        —
10^4         13      10^4     10^5      10^8     10^12    —        —
10^5         17      10^5     10^6      10^10    —        —        —
10^6         20      10^6     10^7      10^12    —        —        —
10^7         23      10^7     10^8      —        —        —        —
10^8         27      10^8     10^9      —        —        —        —
10^9         30      10^9     10^10     —        —        —        —
10^10        33      10^10    10^11     —        —        —        —
The diagram shows that the time complexity of a problem can vary greatly depending on the algorithm used to solve it.
For example, the problem of finding a specific element in a sorted array can be solved in logarithmic time (O(log n)) using the binary search algorithm. By contrast, the problem of finding the shortest path between two nodes in a graph takes O((V + E) log V) time using Dijkstra's algorithm with a binary-heap priority queue, where V is the number of nodes and E the number of edges.
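As a concrete illustration of the O(log n) search mentioned above, here is a standard iterative binary search in Python (a sketch; the standard library's `bisect` module provides similar functionality):

```python
def binary_search(sorted_items, target):
    # Each iteration halves the search interval, so the loop
    # runs O(log n) times for an input of size n.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target not present
```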
The diagram also shows that the time complexity of a problem can grow very quickly as the input size increases. For example, sorting an array with the bubble sort algorithm has a worst-case time complexity of O(n²), meaning the time to sort grows quadratically with the size of the array: doubling the input roughly quadruples the work.
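The bubble sort mentioned above can be sketched in a few lines of Python; the nested loops over the input are what make it O(n²) in the worst case:

```python
def bubble_sort(items):
    # Repeatedly swap adjacent out-of-order elements. The nested
    # loops perform roughly n * n comparisons in the worst
    # case -> O(n^2).
    items = list(items)  # work on a copy
    n = len(items)
    for i in range(n):
        # After pass i, the last i elements are in place.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items
```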
In general, it is important to choose an algorithm with the lowest possible time complexity for the problem you are trying to solve. However, it is also important to consider other factors, such as space complexity and accuracy.
Here is a more detailed explanation of the different time complexity categories shown in the diagram:
Logarithmic time (O(log n)): Algorithms with logarithmic time complexity can solve problems very quickly, even for very large input sizes. This is because the number of operations required to solve the problem grows very slowly as the input size increases.
Linear time (O(n)): Algorithms with linear time complexity can solve problems quickly for small input sizes, but the time it takes to solve the problem increases linearly as the input size increases.
Quadratic time (O(n²)): Algorithms with quadratic time complexity can solve problems quickly for very small input sizes, but the time it takes to solve the problem grows quadratically as the input size increases.
Cubic time (O(n³)): Algorithms with cubic time complexity can solve problems quickly only for very small input sizes, since the time it takes to solve the problem grows cubically as the input size increases.
Exponential time (O(2^n)): Algorithms with exponential time complexity can only solve problems quickly for very, very, very small input sizes. For larger input sizes, the time it takes to solve the problem will become so large that it is impractical to do so.
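A classic illustration of exponential growth is the naive recursive Fibonacci function (shown here as a sketch, not a recommended implementation): each call branches into two further calls, so the size of the call tree roughly doubles at each level.

```python
def fib(n):
    # Naive recursion: each call spawns two more calls, giving
    # an exponential O(2^n) call tree. Memoization or an
    # iterative loop would reduce this to O(n).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```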
It is important to note that the time complexity of an algorithm is only a theoretical measure. The actual time it takes to run an algorithm will depend on a number of factors, such as the programming language used, the hardware the algorithm is running on, and the specific input data.