How to analyze JavaScript functions with Big O

Welcome back! To review, the previous article explained the basics of asymptotic notation: Big O for the upper bound (think worst case), Big Omega for the lower bound (think best case), and Big Theta for the tight bound, which holds when the upper and lower bounds match.

As we’ve mentioned before, the important things when evaluating algorithms are correctness and efficiency, and Big O Notation handles the efficiency part. To really understand how this works, we’re best served by looking at a few examples and breaking them down piece by piece. But first, onto the Daily Problem…

This article is part of the Data Structures and Algorithms Series following the course The Analysis of Algorithms. If you missed the previous article, check that out first, and grab a copy of The Algorithm Design Manual if you haven’t already.

Answers to the Daily Problem (2-22)

So for our question today, we’re trying to decide which of the asymptotic notations we’re working with when comparing two functions:

For each of the following pairs of functions f(n) and g(n), state whether f(n) = O(g(n)), f(n) = Ω(g(n)), f(n) = Θ(g(n)), or none of the above.

It’s just a fancy way of asking, “given a function f(n), does g(n) act as an upper bound, a lower bound, or a tight bound on its growth?” Let’s take a look:

(a) f(n) = n^2 + 3n + 4, g(n) = 6n + 7

I’m going to offer you the trick I used. As was mentioned previously, we can reduce g(n) down to just its highest-order term, dropping the constants. So that means I’m really just deciding whether n^2 + 3n + 4 is O(n), Ω(n), or Θ(n).

For f(n), the dominant term is n^2, which grows faster than the n in g(n), so g(n) can’t be an upper bound; O is out. We also know that if O can’t work, then neither can the tight-bound theta (since theta requires that both the upper and lower bounds hold). But f(n) certainly grows at least as fast as n, so g(n) works as a lower bound, and we can safely say that f(n) = Ω(g(n)) is the correct answer here.

(b) f(n) = n√n, g(n) = n^2 − n

This one is a bit trickier, but we can use a neat exponent trick to get both functions into comparable powers.

√n is really the same thing as n^0.5, and when we multiply that by n, we’re really just saying that f(n) = n^1.5. So now we’re just comparing n^1.5 to n^2, which means that f(n) clearly has the lower-order term compared to g(n), so f(n) = O(g(n)) is the correct answer.

(c) f(n) = 2^n − n^2, g(n) = n^4 + n^2

Applying the trick here again, we have an exponential function on the left and a polynomial function (its highest term is n^4) on the right. Polynomial functions are lower on the dominance relation than exponentials, so we can say that f(n) = Ω(g(n)) is the correct answer here as well.
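If you’d like to sanity-check these answers empirically, here’s a small sketch of my own (not from the book) that watches the ratio f(n)/g(n) as n grows: a ratio that blows up suggests Ω, one that shrinks toward zero suggests O, and one that settles on a constant suggests Θ.

const pairs = [
  { f: n => n ** 2 + 3 * n + 4, g: n => 6 * n + 7 },        // (a)
  { f: n => n * Math.sqrt(n),   g: n => n ** 2 - n },       // (b)
  { f: n => 2 ** n - n ** 2,    g: n => n ** 4 + n ** 2 },  // (c)
];

for (const { f, g } of pairs) {
  // Watch whether f(n)/g(n) explodes, vanishes, or levels off as n grows
  console.log([10, 100, 1000].map(n => f(n) / g(n)));
}

It’s not a proof, but it’s a handy gut check.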

Big O Analysis by example

Now that we’ve gone over some hypothetical exercises in counting computational operations, let’s actually apply this to real-world algorithms in JavaScript. The most common use of Big O Notation in a professional setting is being asked to evaluate the time complexity of the algorithm you’ve just written on the whiteboard or typed on your computer.

In the following examples, we’ll provide the JavaScript code for some algorithm, and then we’ll do a Big O analysis line-by-line to show you how we can provide an f(n) = O(g(n)).

Sorting items in an array with Selection Sort

Selection sort is an algorithm that sorts an array by repeatedly selecting the lowest remaining value and appending it to the sorted portion, which grows from the front of the array. It looks something like this:

function selectionSort(arr) {
  let min; // index of the smallest value found in the unsorted portion
  const arrLength = arr.length;

  for (let i = 0; i < arrLength; i++) {
    min = i;

    // Scan the unsorted portion for anything smaller
    for (let j = i + 1; j < arrLength; j++) {
      if (arr[j] < arr[min]) min = j;
    }

    // Swap the smallest value into position i
    [arr[i], arr[min]] = [arr[min], arr[i]];
  }

  return arr;
}

If the line with the bracketed swap seems weird to you, that’s how you can swap two variables using ES6 destructuring.
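Here’s that swap in isolation (a standalone snippet of my own, just for illustration):

let a = 1;
let b = 2;

// Destructuring assignment swaps the values without a temp variable
[a, b] = [b, a];

console.log(a, b); // 2 1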

Selection sort is also a great way to illustrate the appropriate uses of let and const. let makes sense for variables like min, i, and j because they are reassigned over and over again. const makes sense for arrLength because the length of the array never needs to change, so we can keep it constant.
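As a quick sanity check, here’s a hypothetical call (my own example, not from the original):

console.log(selectionSort([64, 25, 12, 22, 11])); // [11, 12, 22, 25, 64]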

So how do we know how much this will cost in terms of Big O time complexity?

Remember the 3 rules from the previous article:

  1. Add 1 for simple operations.

  2. Add n for looping over a set.

  3. Whatever your calculations were for rules 1 and 2, multiply them together whenever that work is repeated, as with nested loops or recursive calls.
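Before we start, here’s a tiny hypothetical snippet of mine (not from the course) with each rule annotated as a comment:

function rulesDemo(items) {
  let total = 0;                    // rule 1: +1 (simple operation)

  for (const item of items) {       // rule 2: +n (loop over the set)
    total += item;                  // rule 1, but it runs inside the loop

    for (const other of items) {    // rule 3: nested, so multiply: n * n
      if (other === item) total += 1;
    }
  }

  return total;                     // rule 1: +1
}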

So let’s go through this line-by-line, ignoring lines for whitespace and closing brackets because those are just syntax and are not evaluated:

The first line is the function declaration, and while that requires computation, we’re evaluating what is inside of the function, so we can skip this line.

The next two lines are variable declarations, which fall under rule 1 and take a constant amount of time (this includes reading the array’s length property). So let’s call our total 2 right now.

Now we’ve hit our first for loop. How many elements are we iterating over? From zero up to the length of the array, which we can mark as n since that’s been the variable we’ve stuck with for all of our Big O calculations thus far. This brings our total up to n + 2.

The next line brings another variable assignment, bringing our total up to n + 3.

Now we’ve hit another for loop. But this time it’s inside the other for loop. How would that work?

Imagine you have 5 cards in your hand, each numbered 1 through 5. Take the first card and add it up against every other card, including itself. How many additions have you made? It should be 5. Now do that for the second card, adding up the other four cards plus itself. The number of additions? Still 5. If you keep repeating this, each card has interacted with every other card, for a total of 25 operations. 25 = 5*5, so if we generalize this for n cards, the total is n*n = n^2. Therefore, our nested for loop is multiplying onto, not adding to, our outer for loop.
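If you want to see that multiplication concretely, here’s a little counter of my own that tallies how many times the inner body of a nested loop actually runs:

function countNestedOps(n) {
  let operations = 0;

  // Compare every card against every card, including itself
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      operations++;
    }
  }

  return operations;
}

console.log(countNestedOps(5)); // 25, i.e. 5 * 5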

Since our outer for loop thus far has a variable declaration and the loop itself, we’re looking at a total of (n-1)*(n+1) + 2. Why n-1? Because we declare j = i+1, so the inner loop will only ever see, at most, 1 less than the entire array.

Next, we have a bunch of things all packed onto one line: a conditional statement, some array accesses, and finally a variable assignment (if the condition holds).

This part gets a bit tricky. We know that each of these items adds a constant amount of computational cost. The catch is that the comparison runs once for every remaining item in the array, so across all iterations of the outer loop the number of evaluations is the sum (n-1) + (n-2) + ... + 2 + 1, which equals n(n-1)/2, bringing the total to (n-1)*(n+1) + n(n-1)/2 + 2.
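You can verify that closed form with a quick sketch (again, my own illustration): summing (n-1) + (n-2) + ... + 1 matches n(n-1)/2 exactly.

function sumOfComparisons(n) {
  let sum = 0;
  for (let remaining = n - 1; remaining >= 1; remaining--) {
    sum += remaining;
  }
  return sum;
}

console.log(sumOfComparisons(10));  // 45
console.log((10 * (10 - 1)) / 2);   // 45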

Next, we have our JavaScript destructuring which is acting as a sort of swap function. Since these are just assignments of two variables, we know this is just a constant addition to our operations total within the outer for loop, bringing us to (n-1)*(n+3) + n(n-1)/2 + 2.

Finally, we return the array to display its newly-sorted self. Referencing a variable is just as costly as assigning one in terms of computation, and now that we’re back in the base block of our function, we’ve completed our cost analysis with a grand total of (n-1)*(n+3) + n(n-1)/2 + 3. When we multiply everything out, we get n^2 + 2n - 3 + 0.5n^2 - 0.5n + 3, which simplifies to 1.5n^2 + 1.5n.

So coming back to the actual asymptotic analysis for O, if we say that f(n) = 1.5n^2 + 1.5n and we apply all of the same simplification techniques to find an equivalent O(g(n)), we’re left with the largest power, which is just O(n^2).

Matrix multiplication

Did that feel like a lot of work? It should! Because with this next example, we’re going to drastically simplify our analysis, even though the algorithm is more complicated.

Matrix multiplication means taking two 2D arrays and multiplying them together, where each cell of the result is the dot product of a row from the first array and a column from the second. If one 2D array has dimensions x,y and the other has dimensions y,z (the inner dimensions have to match, or you can’t actually perform the multiplication), we return an array of dimensions x,z using the following algorithm:

function multiplyMatrices(arr1, arr2) {
  const x = arr1.length;     // rows of arr1
  const y = arr1[0].length;  // columns of arr1, must match rows of arr2
  const z = arr2[0].length;  // columns of arr2

  if (y !== arr2.length) {
    throw new Error("Matrix cannot be multiplied");
  }

  // initialize the x-by-z result matrix
  const matrix = [];
  for (let a = 0; a < x; a++) {
    matrix[a] = [];
  }

  for (let i = 0; i < x; i++) {
    for (let j = 0; j < z; j++) {
      matrix[i][j] = 0;

      // each cell is the dot product of row i of arr1 and column j of arr2
      for (let k = 0; k < y; k++) {
        matrix[i][j] += arr1[i][k] * arr2[k][j];
      }
    }
  }

  return matrix;
}
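As a quick sanity check (a hypothetical example of mine), multiplying a 2×3 matrix by a 3×2 matrix:

const a = [
  [1, 2, 3],
  [4, 5, 6],
];
const b = [
  [7, 8],
  [9, 10],
  [11, 12],
];

console.log(multiplyMatrices(a, b)); // [[58, 64], [139, 154]]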

This algorithm is much more complex than selection sort, but with one trick, we can calculate its complexity very quickly: scan for nesting and multiply.

The first few lines are constant-time declarations (plus the error check), so the worst case so far is O(1). To keep the analysis simple, we’ll treat x, y, and z as all being roughly the same size, n.

Our first for loop runs x times, so now we know the worst case is O(n).

We have a 2nd for loop where the multiplication starts, but since it’s at the same nesting level as the first one, we’re still at a worst case of O(n).

The next for loop is inside of that one, and as we saw earlier, nested loops get multiplied, so now our worst case is O(n^2).

Finally, we have a 3rd for loop nested inside the other two, so multiplying again, we now have a worst case of O(n^3). We can skip the return statement: it’s a constant, and we’ve already determined the function is far worse than constant time.

Even with all of the initialization, error handling, and matrix math, by simply scanning the function for nesting, we were able to determine very quickly that the time complexity of this function is cubic.

A bit on why logarithms are special

Remember the example about continuously splitting a deck of cards in half until we reach our target card? That’s a logarithm in action. A logarithmic function is the inverse of an exponential function. That means it tells us “how many times do I need to double something to get to some number n?” Conversely, it also tells us “how many times do I need to cut something in half to get down to 1?”

If you’re dealing with a binary search (i.e. a choose-your-own-adventure where you either go left or right until you reach a dead end), or you’re climbing down the levels of a tree until you hit the leaves, then you know you’re dealing with a Big O of O(log n).
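Here’s a minimal binary search sketch (my own illustration, assuming a sorted array of numbers) where each iteration halves the remaining range, which is exactly where the O(log n) comes from:

function binarySearch(sortedArr, target) {
  let low = 0;
  let high = sortedArr.length - 1;

  while (low <= high) {
    // Each pass cuts the search range in half: at most ~log2(n) passes
    const mid = Math.floor((low + high) / 2);

    if (sortedArr[mid] === target) return mid;
    if (sortedArr[mid] < target) low = mid + 1;
    else high = mid - 1;
  }

  return -1; // not found
}

console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)); // 3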

The base of the log (base 2 if you have 2 options, as in a binary search; base 3 if you have 3 options, as in a ternary search; and so on) doesn’t matter, because log_a(n) = log_b(n) / log_b(a): changing the base only multiplies the result by a constant factor. This is just like how the constant coefficient of a quadratic function doesn’t matter in Big O analysis. All we care about is the fact that, in general, we’re dealing with a logarithmic growth rate.

The next Daily Problem (2-20)

You made it! Big O analysis isn’t so bad if you apply some quick scanning tricks. To practice today’s work, here’s the next Daily Problem:

Find two functions f(n) and g(n) that satisfy the following relationship. If no such f and g exist, write “None.”

(a) f(n) = o(g(n)) and f(n) != Θ(g(n))

(b) f(n) = Θ(g(n)) and f(n) = o(g(n))

(c) f(n) = Θ(g(n)) and f(n) != O(g(n))

(d) f(n) = Ω(g(n)) and f(n) != O(g(n))

More practice

If you’re looking for more deliberate practice with these articles, the Algorithms class we’re following includes homework sets, and the first one just came out today.

Since I’m auditing this course (and so are you, by definition, if you’re reading my articles instead of actually taking the course), I’ll include the homework problems that are covered thus far. Only once the assignment answer keys are out will I spend any time (if at all) on my answers to the homework problems.

The problems from the homework set that are covered from the first three articles include:

  1. 1-17
  2. 1-19
  3. 1-20
  4. 1-22
  5. 2-7
  6. 2-8
  7. 2-19
  8. 2-21
  9. 2-23
  10. 2-24

Happy practicing!

