# Time & Space Complexity

You have probably noticed that any program can be written in different ways using different algorithms. For example, there are many routes from home to the office, but you usually choose only one to reach your destination, namely the shortest path. The same idea applies in programming: we need to understand how these approaches perform and pick the best solution. When analyzing algorithms, we mostly consider time complexity and space complexity.

The actual running time of a program depends on many factors like hardware, operating system, processor, etc. However, we don't consider any of these factors while analyzing an algorithm. We will only consider how the execution time of an algorithm grows with the input.

**Time complexity** of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the input.

**Space complexity** of an algorithm quantifies the amount of space or memory taken by an algorithm to run as a function of the length of the input.

Let's see how to calculate the time complexity of a program. Suppose an array **ar** is given and you have to find whether an element **x** exists in the array.

Assume each basic operation on a computer takes approximately constant time **c**. The number of operations executed actually depends on the value of **x**. When analyzing an algorithm, we usually consider the worst-case scenario, i.e., when **x** is not present in the array. In the worst case, the **if** condition runs **n** times, where **n** is the length of the array. So the total execution time is **(n\*c + c)**: **(n\*c)** for the **if** condition and **c** for the **return** statement. As we can see, the total time depends on the length of the array: if the length of the array increases, the execution time also increases. Thus, the execution time depends linearly on the length of the array.
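The linear search described above can be sketched as follows (the function name `contains` is illustrative; the document does not give code):

```python
def contains(ar, x):
    """Return True if x exists in ar, else False."""
    for element in ar:      # in the worst case this comparison runs n times
        if element == x:    # constant-time check, cost c per iteration
            return True
    return False            # one extra constant-time operation, cost c

# Worst case: x is absent, so all n comparisons run -> time (n*c + c).
print(contains([4, 8, 15, 16], 15))  # True
print(contains([4, 8, 15, 16], 23))  # False
```

Searching for a value that is not in the array exercises the worst case, since every element must be compared before returning.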

# Asymptotic Notation

**Asymptotic notations** are mathematical notations used to describe the running time of an algorithm when the input tends towards a particular or limiting value. For example, in bubble sort, when the input array is already sorted, the time taken by the algorithm is linear; this is the best case.
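A minimal sketch of the bubble sort example, assuming the common early-exit variant with a `swapped` flag (the plain variant without the flag is quadratic even on sorted input):

```python
def bubble_sort(ar):
    """Bubble sort with early exit; best case (sorted input) is linear."""
    n = len(ar)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if ar[j] > ar[j + 1]:
                ar[j], ar[j + 1] = ar[j + 1], ar[j]
                swapped = True
        if not swapped:   # no swaps in a full pass: array already sorted
            break         # best case: stops after one pass of n-1 comparisons
    return ar

print(bubble_sort([3, 1, 2]))   # [1, 2, 3]
print(bubble_sort([1, 2, 3]))   # already sorted: one linear pass, then stop
```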

We use different notations to describe the limiting behavior of a function.

- Big O Notation
- Ω Notation
- Θ Notation (theta)

**O-notation:**

To denote an asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) (pronounced "big-oh of g of n") the set of functions:

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c\*g(n) for all n ≥ n0 }
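A small numeric check of this definition, using the illustrative pair f(n) = 3n + 2 and g(n) = n (these functions are my example, not from the text): the witnesses c = 4 and n0 = 2 satisfy the inequality, so f(n) is O(n).

```python
# Verify 0 <= f(n) <= c*g(n) for all n >= n0 over a sample range.
def f(n):
    return 3 * n + 2   # example function to bound

def g(n):
    return n           # candidate upper-bound shape

c, n0 = 4, 2           # witness constants: 3n + 2 <= 4n whenever n >= 2
print(all(0 <= f(n) <= c * g(n) for n in range(n0, 1000)))  # True
```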

**Ω-notation:**

To denote an asymptotic lower bound, we use Ω-notation. For a given function g(n), we denote by Ω(g(n)) (pronounced "big-omega of g of n") the set of functions:

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c\*g(n) ≤ f(n) for all n ≥ n0 }

**Θ-notation:**

To denote an asymptotically tight bound, we use Θ-notation. For a given function g(n), we denote by Θ(g(n)) (pronounced "big-theta of g of n") the set of functions:

Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1\*g(n) ≤ f(n) ≤ c2\*g(n) for all n ≥ n0 }
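The tight bound can be checked numerically the same way. Using the illustrative pair f(n) = 3n + 2 and g(n) = n again (my example, not from the text), the witnesses c1 = 3, c2 = 4, n0 = 2 sandwich f between two multiples of g, so f(n) is Θ(n):

```python
# Verify 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 over a sample range.
def f(n):
    return 3 * n + 2   # example function to bound

def g(n):
    return n           # candidate tight-bound shape

c1, c2, n0 = 3, 4, 2   # 3n <= 3n + 2 always; 3n + 2 <= 4n whenever n >= 2
print(all(0 <= c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 1000)))  # True
```

Note that a Θ bound is both an O bound (via c2) and an Ω bound (via c1) at once.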