How to Determine the Time and Space Complexity of an Algorithm Using the Big-O Notation

Learn everything about the big O notation as the representation for the worst-case complexity of an algorithm.



In the previous lesson, we mentioned that the worst-case complexity of an algorithm is represented using the big O notation (O).

But what exactly is big O notation?

In this lesson, you will learn everything about the big O notation as the representation for the worst-case complexity of an algorithm.

You will also learn the characteristics of the big O notation and how to calculate the big O of a program in terms of time complexity and space complexity.

What is Big O Notation?

In very simple terms, big O notation describes the complexity of an algorithm using algebraic terms.

It has two important characteristics:

  1. It is expressed in terms of input.
  2. It places emphasis on the main points without considering additional details.

We will explain these two characteristics in the context of time complexity, and thereafter extend that knowledge to understand space complexity.

How to Determine the Time Complexity of an Algorithm Using Big O Notation

All right, let's calculate the worst-case time complexity of our first program, written in the Python programming language.

def total_sum(n: int):
	sum = 0
	for i in range(1, n + 1):
		sum += i

	return sum

The algorithm for the program above is to find the sum of all natural numbers from 1 to n.

We have an input n.

The function returns the sum of all the natural numbers from 1 to n.

For example, the program returns 10 when called with n=4.

As we already mentioned in the previous tutorial, we cannot calculate the absolute running time of an algorithm; hence, the return value cannot be the time complexity.

What we do instead is count the number of times a statement executes based on the input size.

Let's apply the same approach to our total_sum() program.

The program has three main statements to execute.

  • Line 2: sum = 0
  • Line 4: sum += i
  • Line 6: return sum

The for loop on line 3 simply repeats line 4. Take note of this, as it is a common point of confusion when calculating the complexity.

Given n=4, let's calculate the number of times each statement is executed.

Line 2 executes only once.
Line 4 executes four times, which is the same as the value for n.
Line 6 executes just once.

So, the total count is:
n + 1 + 1 = n + 2

Since n is the input to the function,
if n = 4,
total count is 4 + 2.

If n = 10,
total count is 10 + 2.
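
To convince yourself of this count, here is a minimal sketch that mirrors total_sum() but adds a purely illustrative counter (the counter is not part of the original program):

def total_sum_counted(n: int):
	count = 0  # illustrative only: tallies how many statements execute
	sum = 0
	count += 1  # sum = 0 runs once
	for i in range(1, n + 1):
		sum += i
		count += 1  # sum += i runs once per iteration, n times in total
	count += 1  # return sum runs once
	return sum, count

# total_sum_counted(4) returns (10, 6), i.e. n + 2 statement executions
# total_sum_counted(10) returns (55, 12), i.e. n + 2 statement executions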

Recall the first characteristic of Big O we mentioned above: our time complexity depends on the input size, and Big O is expressed in terms of the input.

The second characteristic is that big O places emphasis on the main points without considering additional details.

Let's plug in a few values to n and understand this point.

If n = 100,
n + 2 = 100 + 2

As the input grows to 100 million, it becomes 100 million plus 2. At that point, the plus 2 becomes insignificant.

As a result, we can drop it and keep only the highest degree of n. Hence, n + 2 can be approximated to just n.

This is because n contributes the most to the total count, not the additional 1, 5, or 50 extra additions.

We can ignore the other parts of the expression since they do not influence the performance as significantly as the highest degree of n.

So, the worst-case time complexity of our summation algorithm, often referred to simply as its time complexity, is:
O(n).

This is called linear time complexity.

Linear time complexity means that as the size of the input increases, the running time grows proportionally with it.

Hopefully, you now have a good idea of what the big O notation is.

However, you might be wondering if you always have to determine the time complexity of a program by counting the lines of code.

Well, you could do that, but there are also some safe shortcuts you can use.

Whenever you see a loop over the input in your algorithm, you can usually assume the time complexity is at least linear.

This is because a loop's worst case is when it iterates over the entire input, and hence the time complexity is linear.
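
For example, a linear search is just a single loop over the input: in the worst case, the target is at the very end or not present at all, so the loop touches every element. This sketch is only an illustration and not part of the lesson's program:

def contains(items: list, target) -> bool:
	# Worst case: the loop inspects all n elements, so the time complexity is O(n)
	for item in items:
		if item == target:
			return True
	return False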

Here is another algorithm for a program to find the sum of all natural numbers from 1 to n.

def total_get_sum(n: int):
	return (n * (n + 1)) // 2

The time complexity of this algorithm is:
O(1).
This is called constant time complexity.

This means that irrespective of what the value of n is, line 2 is only executed once.
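
As a quick sanity check (this snippet is purely illustrative), both functions produce the same result, but total_get_sum() does so with a single statement no matter how large n gets:

print(total_sum(4))      # 10, computed with n loop iterations -> O(n)
print(total_get_sum(4))  # 10, computed with one arithmetic statement -> O(1)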

Here are a few more examples:

If there are two nested loops, the time complexity is quadratic:

def calc(n: int):
	sum = 0
	for i in range(n + 1):
		for j in range(i + 1):
			sum += 1  # the innermost statement runs roughly n²/2 times
	return sum

When this happens, we only take the highest degree of n as the big O notation.

For instance, if you calculate the total count to be:

3n² + 5n + 1

the big O notation will be just:

O(n²)

The constant factor 3 and the lower-order terms 5n + 1 are dropped because Big O describes how the running time grows with n, not the exact statement count.

Now, if there are three nested loops, it is cubic:
O(n³)

If the input size reduces by half in every iteration, it is logarithmic:
O(log n)
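
The classic example of this halving pattern is binary search over a sorted list: every comparison discards half of the remaining elements, so the number of iterations grows with log n rather than n. The following is a minimal illustrative sketch:

def binary_search(items: list, target) -> int:
	# items must be sorted; each pass halves the search range -> O(log n) time
	low, high = 0, len(items) - 1
	while low <= high:
		mid = (low + high) // 2
		if items[mid] == target:
			return mid
		if items[mid] < target:
			low = mid + 1
		else:
			high = mid - 1
	return -1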

How to Determine the Space Complexity of an Algorithm Using Big O Notation

Determining the big O space complexity of an algorithm is similar to determining its time complexity.

If the algorithm does not need extra memory, or the memory it needs does not depend on the input size, the space complexity is constant: O(1).

An example of this would be in-place sorting algorithms, which sort within the original array without allocating additional arrays.
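
For instance, reversing a list in place needs only a couple of index variables regardless of how long the list is, so its space complexity is O(1). A minimal illustrative sketch:

def reverse_in_place(items: list) -> None:
	# Only two extra variables are used, no matter how large len(items) is -> O(1) space
	left, right = 0, len(items) - 1
	while left < right:
		items[left], items[right] = items[right], items[left]
		left += 1
		right -= 1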

With this in mind, you can also have algorithms with linear space complexity, where the extra space required grows as the input size grows.
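
A simple illustration of linear space is a function that builds a new list with one entry per input value; the extra memory grows in step with n (the function below is made up purely for demonstration):

def squares(n: int) -> list:
	# The result list holds n elements -> O(n) extra space
	result = []
	for i in range(1, n + 1):
		result.append(i * i)
	return result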

You can also have logarithmic space complexity, in which case the extra space required grows, but far more slowly than the input size.
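
A common way logarithmic space shows up is through recursion depth. In a recursive binary search, each call halves the range, so only about log n calls sit on the call stack at any moment. The sketch below is illustrative only:

def binary_search_recursive(items: list, target, low: int, high: int) -> int:
	# At most about log n recursive calls are active at once -> O(log n) extra space
	if low > high:
		return -1
	mid = (low + high) // 2
	if items[mid] == target:
		return mid
	if items[mid] < target:
		return binary_search_recursive(items, target, mid + 1, high)
	return binary_search_recursive(items, target, low, mid - 1)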

Typically, these are the three space complexities you will encounter most often.

Even though quadratic and cubic time complexities are fairly common, a quadratic space complexity should be avoided at all costs.

What Big O Complexities are Good and Bad?

Here's a table, adapted from the graphical chart on the Big-O Cheat Sheet, to use as a guide to understanding how the input size affects performance.

Notation      Quality
O(log n)      Good
O(1)          Good
O(n)          Fair
O(n log n)    Bad
O(n²)         Horrible
O(n³)         Horrible
O(2ⁿ)         Horrible
O(n!)         Horrible

As you can see from the table, O(log n) and O(1) are very good, whereas O(2ⁿ) and O(n!) are just awful time complexities and should be avoided when possible.

Quick Method to Determine the Big O Complexity

Here is a cheat sheet that can be used as a guide to quickly find the Big O complexity of an algorithm.

Even though this guide helps most of the time, you should not totally rely on it.

  • Any business logic not dependent on the input size or the number of inputs is constant: O(1).
  • Any business logic depending on a single loop over the input is linear: O(n).
  • Any business logic depending on nested loops over the input is quadratic: O(n²).
  • Any business logic depending on a loop where the input size is reduced by half on each iteration is logarithmic: O(log n). Each of these patterns is illustrated in the sketch below.
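
To make these rules concrete, here is a single illustrative sketch (the function and its bodies are invented for demonstration) that contains all four patterns:

def complexity_patterns(items: list) -> None:
	n = len(items)

	if items:
		first = items[0]  # constant work, independent of n -> O(1)

	for item in items:  # one loop over the input -> O(n)
		...

	for i in range(n):  # nested loops over the input -> O(n²)
		for j in range(n):
			...

	size = n
	while size > 1:  # the remaining size halves every pass -> O(log n)
		size //= 2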

Points to Keep in Mind for the Course

Let us go over a few more points that you should keep in mind as you continue the course.

The first point is that multiple algorithms exist for the same problem, and there is no one right solution.

Hence, different algorithms work well under different constraints. You should always understand the problem being solved before determining which is better.

The second point is that the same algorithm with the same programming language can be implemented in multiple ways.

For this reason, what we learn in this series might be slightly different from other solutions you might see.

It does not mean that either of the solutions is incorrect.

Finally, when you start writing programs for the production environment and in organizations, rather than writing clever code, write code that is simple to read and maintain. Optimize where you need to, not everywhere.

Wrap Off

Alright, that is pretty much what we wanted to cover in this lesson.

In the next lesson, we are going to take a look at objects and arrays in the context of big O notation before we start solving problems and writing code.

If you learned from this tutorial, or it helped you in any way, please consider sharing it and subscribing to our newsletter for more insightful posts on business, technology, engineering, history, and marketing.

Next Tutorial — What is Big O Time Complexities of Python List, Tuples, Dictionary, and Sets

Get the Complete Code of Algorithms: Baby-Steps to Complete Mastery on Github.

Connect with me.

Need an engineer on your team to grease an idea, build a great product, grow a business or just sip tea and share a laugh?