Dictionaries and Binary Search Trees in JavaScript

Dictionaries are so important that we’ve allocated an entire article to them. Dictionaries, also called dynamic sets, are like special arrays. While arrays only let you look up values by their numeric index, dictionaries let you look up items by their content. This gives them tremendous power that we’ll explore today. But first, the answers to the previous article’s Daily Problem.

This article is part of the Data Structures and Algorithms Series following the course The Analysis of Algorithms. If you missed the previous article, check that out first as well as a copy of The Algorithm Design Manual.

Answers to the Daily Problem

This problem basically asked us to combine what we learned from Big O Notation and apply that to one of the fundamental data structures we saw in the last article.

For each of the four types of linked lists (singly unsorted, singly sorted, doubly unsorted, doubly sorted), what is the asymptotic worst-case running time for each dynamic-set operation listed?

If you’ve been following along with this series, this should be pretty easy to figure out.

a) Search(List, item)

Drawing on our playpen slide tube analogy, we know that we can only enter our LinkedList from either end of the slide. The worst case is that we enter the wrong end of the slide and our target is all the way at the other end. Since we may have to go through the whole list of n items, our worst-case runtime for search is O(n) across all 4 kinds of lists.
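To make that worst case concrete, here’s a minimal search sketch, assuming nodes shaped like { value, next } and a list object with a head pointer:

// Walk from the head until we find the target or run out of nodes.
// Worst case, the target sits at the far end (or is absent), so we
// visit all n nodes: O(n).
function search(list, target) {
  let curr = list.head;
  while (curr) {
    if (curr.value === target) return curr;
    curr = curr.next;
  }
  return null; // not found
}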

b) Insert(List, item)

One of the benefits we mentioned in the previous article was that insertions and deletions were simpler because we can just tack items onto the ends of the list. Since we always have a pointer to the head of any kind of linked list, inserting an item into any unsorted linked list takes only constant time (O(1)).

Why does sorting hurt us? Because once the list is sorted, we can no longer tack the item onto the beginning of the list. Now that the list has some logical ordering behind it, we have to maintain that ordering, which could mean scanning the entire list to find exactly the right place to insert our new item, making insertion a linear-time operation (O(n)) for sorted linked lists.
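Here’s a sketch of both insertion flavors, under the same { value, next } node assumption:

// Unsorted: tack the new node onto the head. A constant number of
// pointer updates, so O(1).
function insertUnsorted(list, node) {
  node.next = list.head;
  list.head = node;
}

// Sorted: scan for the first node whose value exceeds the new one,
// which in the worst case means walking all n nodes: O(n).
function insertSorted(list, node) {
  if (!list.head || node.value <= list.head.value) {
    node.next = list.head;
    list.head = node;
    return;
  }
  let curr = list.head;
  while (curr.next && curr.next.value < node.value) {
    curr = curr.next;
  }
  node.next = curr.next;
  curr.next = node;
}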

c) Delete(List, item)

To limit confusion, let’s say right off the bat that doubly-linked lists can implement delete in O(1) because each node has access to both its previous and next neighbors. Since those two pointers are all you need to connect the neighbors to each other, and thus delete the item by disconnecting it from the list, we’re done here.

However, for singly-linked lists, this one is interesting because it depends on the algorithm you want to implement. If you want to delete the item after a given item, the worst case for delete is a straightforward O(1), since you just connect item.next to the node after it (item.next.next).

But if you want to delete the item provided, you need to find the previous node, and the only way to do that is to start from the beginning and work your way down until you see that the current node’s next is equal to the item you specified. Only then can you connect the current node to the node after the item, meaning that in the worst case, deletes can take O(n) for singly-linked lists.
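A sketch of that scan, under the same assumptions as before:

// Deleting a given node from a singly-linked list: we first have to
// find its predecessor, which can mean scanning the whole list, O(n).
function deleteItem(list, item) {
  if (list.head === item) {
    list.head = item.next; // deleting the head is the easy O(1) case
    return;
  }
  let curr = list.head;
  while (curr && curr.next !== item) {
    curr = curr.next;
  }
  if (curr) curr.next = item.next; // splice the item out
}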

d) Successor(List, item)

This function just asks you to find the next logical item in the list. Logical in this case means that if you have a list that would be represented by the array [1,2,3], the successor to 2 is 3. Since logical ordering is based on sort order, this depends entirely on whether the list is sorted, and has nothing to do with being singly- or doubly-linked, since both implement a next property.

For sorted lists, this is just an access to next and runs in O(1), while unsorted lists may require viewing the whole list to find the logical next item, and thus run in O(n) in the worst case.
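Here’s a sketch of both cases, with hypothetical helper names:

// Sorted list: the successor is literally the next pointer, O(1).
function successorSorted(item) {
  return item.next;
}

// Unsorted list: scan everything for the smallest value greater than
// the item's value, O(n) in the worst case.
function successorUnsorted(list, item) {
  let succ = null;
  for (let curr = list.head; curr; curr = curr.next) {
    if (curr.value > item.value && (!succ || curr.value < succ.value)) {
      succ = curr;
    }
  }
  return succ;
}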

e) Predecessor(List, item)

Finally, we have something unique to tackle! This function is the first place we see fragmentation between singly- and doubly-linked lists. For sorted doubly-linked lists, this, too, is O(1), because the previous node is a property inherent to each node, and property access happens in constant time. This doesn’t apply to unsorted doubly-linked lists, because like our quandary in question D, it could take viewing the entire list to find our logical predecessor, so unsorted doubly-linked lists take O(n) to run the predecessor function.

For singly-linked lists, this presents a problem. If you only ever have access to next, how can you possibly know what the prev item is? One way to look at it is to flip the script: rather than looking for the predecessor as the previous node of the current item, we instead look for the node whose next node is the current one. That works because we know the previous node’s next node is the same as the current node (item) we’re passing in! But to do this we have to start from the very beginning of the list, and if our current item is all the way at the end, we’ll have to search the entire list of n items before we find the predecessor. Therefore singly-linked lists calculate the predecessor in O(n) time in the worst case, regardless of whether or not they are sorted.
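Here’s a sketch of that flipped-script scan for a sorted singly-linked list:

// Walk from the head until the current node's next pointer is the
// item we were handed; that node is the predecessor. O(n) worst case.
function predecessor(list, item) {
  let curr = list.head;
  while (curr && curr.next !== item) {
    curr = curr.next;
  }
  return curr; // null when item is the head and has no predecessor
}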

f) Minimum(List)

Here we just want to find the minimum item in the list, whether that’s the alphabetically lowest letter in a list of letters or the lowest number in a list of numbers. Now it should start to make sense why there were 4 list types, with one distinction centered around sorting. When the list is not sorted, you really have no idea where the minimum item could be, so in the worst-case scenario you have to search the entire list to find it, which is O(n). However, if the list is sorted, you know the minimum is at the very start of the list, which you already have access to in both singly- and doubly-linked lists, so the minimum value of a sorted linked list can be found in constant O(1) time.

g) Maximum(List)

Finding the maximum is identical to finding the minimum, but not exactly for the reasons you might think (again, think of the ends of a slide). The obvious part is that unsorted lists require searching the entire list, so the maximum is found in O(n) time. The other obvious connection should be that sorted doubly-linked lists have access to both the first and last items, so the maximum can be found in O(1) constant runtime. So why is the maximum also found in constant time (O(1)) for sorted singly-linked lists? When we add and delete items in a sorted singly-linked list, we can use that as an opportunity to update a reference to the last node in the list. By paying that small bookkeeping cost in those functions, we get instant access to the last node, which in a sorted list also happens to hold the maximum value.
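One way to sketch that bookkeeping, assuming a hypothetical tail reference on the list object:

// Sorted insert keeps a tail reference fresh, so maximum() becomes a
// constant-time property read instead of a full O(n) scan.
function insertSortedWithTail(list, node) {
  if (!list.head || node.value <= list.head.value) {
    node.next = list.head;
    list.head = node;
  } else {
    let curr = list.head;
    while (curr.next && curr.next.value < node.value) curr = curr.next;
    node.next = curr.next;
    curr.next = node;
  }
  if (!node.next) list.tail = node; // we landed at the end: update the tail
}

function maximum(list) {
  return list.tail ? list.tail.value : null; // O(1)
}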

Wow! That’s a big list of problems to solve! As a thought exercise, see if you can’t extend these results into a table that also includes runtimes for sorted and unsorted arrays.

Thinking about dictionaries

Thinking of the dictionary data structure as the eponym of the physical book dictionary is a good analogy. Now I’m going to give myself some credit here because I think this is pretty clever: I purposefully chose a fancy word like eponym because if you don’t know what it means, YOU’RE GOING TO LOOK IT UP IN A DICTIONARY.

And when you look up that word at dictionary.com or whatever your favorite word site is, how do you search for the word? By its name. The dictionary data structure operates exactly the same way as dictionary books do: searching for content happens by looking up the name of the content itself. Now you know how a search method would be implemented.

What about insert and delete? Well how is a word and its definition laid out in a dictionary that you read?

eponym (noun) ep·onym \ ˈe-pə-ˌnim \ : one for whom or which something is or is believed to be named

The name of the word is the key that you’ve looked up, and the definition of the word is the value. So whenever you want to add or remove a word in a dictionary, you check whether the key exists, and then add or remove the definition accordingly.

As we saw from the Daily Problem, there are some additional (optional) methods you could add to a dictionary as well, such as min/max for quickly moving to the beginning/end of the dictionary, and successor/predecessor for moving forward/backward between keys/words (not necessarily forward/backward in memory), very similar to the next method we implemented for a linked list. We went into these pretty in-depth at the beginning, so suffice it to say we’ve thoroughly covered the 7 major dictionary methods.

let dictionary = {};

// insert
dictionary["eponym"] = "one for whom or which something is or is believed to be named";

// search
dictionary["eponym"];
// => "one for whom or which something is or is believed to be named"

// delete
delete dictionary["eponym"];
// afterwards, dictionary["eponym"] is undefined, no different than dictionary["apple"]
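As an aside, modern JavaScript also ships a built-in Map that covers the same three operations, plus an explicit existence check:

// The same dictionary operations using the built-in Map:
let wordMap = new Map();

wordMap.set("eponym", "one for whom or which something is or is believed to be named"); // insert
wordMap.get("eponym"); // search
wordMap.delete("eponym"); // delete, returns true if the key existed
wordMap.has("eponym"); // => false, the key is truly gone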

Improving dictionary methods with Binary Search Trees

Binary Search Trees (BSTs) are a clever data structure that implements all 7 dictionary methods while capturing the best parts of the runtimes we saw above. The result is a data structure that efficiently supports all of these functions.

BSTs consist of a root or start node that trickles down into two child nodes: the left and the right child. The left child is always less (in value) than its parent node, and the right child is always greater (in value) than its parent node. We can also keep an optional parent reference which tracks in the opposite direction. This organization is what allows us to quickly and easily find any value in the tree.

The best way to think of BSTs is to imagine Lady Justice holding her balance scale with a heavier right-hand side.

Why the right side? Since the right node is the larger item, if you imagine the tree holding numeric values, and those values represent the mass of the objects on the scale, you know that the higher-valued item weighs more:

  6
 / \
4   \
     8

If that right there is her scale, then you can clearly see that 8 weighs more than 4, so the scale will tip to the right in favor of the item that weighs more. An elaborate example? Maybe. An example you will remember? Definitely.

class Node {
  constructor(value) {
    this.value = value;
    this.left = null;
    this.right = null;
    this.parent = null; // optional
  }
}

class BinarySearchTree {
  constructor() {
    this.root = null;
  }

  // Walk down from the root, comparing the target against the current
  // node to decide whether to go left or right. Returns null if absent.
  search(item) {
    let curr = this.root;

    while (curr && item.value !== curr.value) {
      if (item.value < curr.value) {
        curr = curr.left;
      } else {
        curr = curr.right;
      }
    }

    return curr;
  }

  // Walk down until an empty left/right slot is found for the new node.
  // Equal values are rejected so the loop can never spin forever.
  insert(item) {
    let curr = this.root;

    if (!curr) {
      this.root = item;
      return true;
    }

    while (curr) {
      if (item.value < curr.value) {
        if (curr.left) {
          curr = curr.left;
        } else {
          curr.left = item;
          item.parent = curr;
          break;
        }
      } else if (item.value > curr.value) {
        if (curr.right) {
          curr = curr.right;
        } else {
          curr.right = item;
          item.parent = curr;
          break;
        }
      } else {
        return false; // duplicate value: nothing to insert
      }
    }

    return true;
  }

  // The maximum always lives at the end of the right spine.
  maximum() {
    let max = this.root;

    if (!max) return null;

    while (max.right) {
      max = max.right;
    }

    return max.value;
  }

  // The minimum always lives at the end of the left spine.
  minimum() {
    let min = this.root;

    if (!min) return null;

    while (min.left) {
      min = min.left;
    }

    return min.value;
  }

  // Visit the node first, then its left and right subtrees.
  preorderTraverse(item) {
    if (item) {
      console.log(item.value);
      this.preorderTraverse(item.left);
      this.preorderTraverse(item.right);
    }
  }

  // Visit the left subtree, then the node, then the right subtree.
  // On a BST this prints the values in sorted order.
  inorderTraverse(item) {
    if (item) {
      this.inorderTraverse(item.left);
      console.log(item.value);
      this.inorderTraverse(item.right);
    }
  }

  // Visit both subtrees before the node itself.
  postorderTraverse(item) {
    if (item) {
      this.postorderTraverse(item.left);
      this.postorderTraverse(item.right);
      console.log(item.value);
    }
  }

  traverseTree() {
    this.inorderTraverse(this.root);
  }
}
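To tie it all together, here’s a quick usage sketch of the class above (the values are just an example):

let tree = new BinarySearchTree();
[6, 4, 8].forEach((value) => tree.insert(new Node(value)));

tree.minimum();           // => 4
tree.maximum();           // => 8
tree.search(new Node(8)); // => the Node holding 8 (null if absent)
tree.traverseTree();      // logs 4, 6, 8: in-order visits in sorted order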

Question: Given our practice with Big O Notation, can you figure out the runtime of the methods within this sample BinarySearchTree class?

Extra Credit: We never filled in the delete(item) method for BSTs. To be honest, they’re kind of tricky. How do you think you would implement one?

Would love to see your answers and sample gists on Twitter.

Balanced BSTs and the next Daily Problem

While BSTs are very good data structures most of the time, they can still be improved. Balanced BSTs automatically rebalance nodes whenever new items are inserted into or deleted from the tree. For example, if you keep adding the natural numbers (1, 2, 3, 4, 5…) to a plain BST, it becomes effectively the same thing as a linked list, with the height h of the unbalanced tree equal to the number of items n:

1
 \
  2
   \
    3
     \
      4
       \
        5

But if you balance the tree so that as many left and right slots as possible are used, the end result of a rebalance would look like this:

    3
   / \
  2   4
 /     \
1       5

This time, each step down the tree halves the number of nodes left to consider, and we know that if we are successively halving something, it’s happening in logarithmic fashion. Therefore the height h of a balanced BST is equal to log n.
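To see the difference concretely, here’s a hypothetical height helper run against the BinarySearchTree class from earlier. Our plain BST never rebalances itself, so the second example fakes a balanced tree by picking a friendly insertion order:

// Height counted in nodes along the longest root-to-leaf path.
function height(node) {
  if (!node) return 0;
  return 1 + Math.max(height(node.left), height(node.right));
}

let skewed = new BinarySearchTree();
[1, 2, 3, 4, 5].forEach((v) => skewed.insert(new Node(v)));
height(skewed.root); // => 5, h equals n: effectively a linked list

let friendly = new BinarySearchTree();
[3, 2, 4, 1, 5].forEach((v) => friendly.insert(new Node(v)));
height(friendly.root); // => 3, on the order of log n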

To get some more practice dealing with this concept for balanced BSTs, the Daily Problem focuses on them:

You are given the task of reading in n numbers and then printing them out in sorted order. Suppose you have access to a balanced dictionary data structure, which supports each of the operations search, insert, delete, minimum, maximum, successor, and predecessor in O(log n) time.

  1. Explain how you can use this dictionary to sort in O(n log n) time using only the following abstract operations: minimum, successor, insert, search.

  2. Explain how you can use this dictionary to sort in O(n log n) time using only the following abstract operations: minimum, insert, delete, search.

  3. Explain how you can use this dictionary to sort in O(n log n) time using only the following abstract operations: insert and in-order traversal.

Think you’ve got the answers? We’ll find out in two days with the next article! And for more practice, check out these problems from the book:

  1. 3-10
  2. 3-11
