Basic data structures in JavaScript

Have you wondered how basic data structures like Objects and Arrays work under the hood in JavaScript? In this article we’ll explore elementary data structures as they are constructed in the JavaScript language.

This article is part of the Data Structures and Algorithms Series following the course The Analysis of Algorithms. If you missed the previous article, check that out first as well as a copy of The Algorithm Design Manual.

Answers to the Daily Problem (2-20)

So this one was honestly a bit confusing to me. The question reads:

Find two functions f(n) and g(n) that satisfy the following relationship. If no such f and g exist, write “None.”

The problems are odd because we haven’t really dealt with Little O, which means “grows strictly slower than.” Little O is a stricter version of Big O. It isn’t something you’d usually have to worry about in the professional world, either as a working developer or in an interview setting, and most algorithms books only touch on it briefly. But let’s give it a try anyway.

(a) f(n) = o(g(n)) and f(n) != Θ(g(n))

So this is saying f(n) should grow strictly slower than g(n), and the two should not be tightly bound, meaning f(n) can’t satisfy both O(g(n)) and Ω(g(n)). Basically, we want an f(n) that grows slower than g(n), so a simple pair would be f(n) = n + 1, g(n) = n^2 + 1.
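As a quick numeric sanity check (not a proof), we can watch the ratio f(n)/g(n) shrink toward zero as n grows, which is exactly what f(n) = o(g(n)) promises:

```javascript
// f(n) = n + 1 and g(n) = n^2 + 1, the pair proposed above.
const f = (n) => n + 1;
const g = (n) => n * n + 1;

// If f(n) = o(g(n)), the ratio f(n) / g(n) should head toward 0 as n grows.
const ratios = [10, 1000, 100000].map((n) => f(n) / g(n));
console.log(ratios); // each ratio is smaller than the last
```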

(b) f(n) = Θ(g(n)) and f(n) = o(g(n))

This one should be none, and the reason why is that Θ is all about being tightly bound, whereas the little letters (Little O and Little Omega) are all about strict, loose bounds: f(n) = o(g(n)) says f grows strictly slower than g, which rules out f(n) = Ω(g(n)) and therefore rules out Θ(g(n)). You can’t have both, so the answer is none.

(c) f(n) = Θ(g(n)) and f(n) != O(g(n))

This is also none, because the definition of Θ(g(n)) requires that you satisfy both Big O and Big Omega.

(d) f(n) = Ω(g(n)) and f(n) != O(g(n))

This one should be fairly straightforward. We want something that satisfies Big Omega and not Big O, meaning we don’t want a Big Theta set of functions. f(n) = n^2 + 1, g(n) = n + 1 should satisfy this (essentially, the inverse of problem A).

Data structures as blocks of memory

One way to group data structures is by how they allocate memory. I will admit, this was not an intuitive way for me to think about elementary data structures, but it’s a good thing to keep in mind. There are two major forms of data structures by memory allocation:

1. Contiguous

Contiguous-memory data structures are those that are laid out as one block of memory. When a computer allocates memory in RAM for such a structure, it reserves a single block so it can be accessed as a unit. The computer doesn’t have to chase pointers to scattered addresses in order to make sense of all of the data for this one structure. There are lots of data structures of this type we’ll cover later, but the only fundamental contiguous data structure you need to be aware of is the array.


Arrays are the fundamental contiguous data structure that everyone knows. In JavaScript, typeof [] is 'object', but you can distinguish arrays with [] instanceof Array or, more reliably, Array.isArray([]).
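For example, here are a few ways to tell an array apart from a plain object:

```javascript
const arr = [1, 2, 3];

console.log(typeof arr);           // 'object': typeof alone can't tell
console.log(arr instanceof Array); // true
console.log(Array.isArray(arr));   // true, and the most reliable check
```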

Arrays are like cubbies at school. A cubby is one big piece of furniture with slots to store things like hats, coats, and backpacks.

Like arrays, cubbies store stuff, and each is one object. And just as cubbies can store anything that fits, JavaScript arrays aren’t constrained by the strict element types that dictate how arrays are allocated in statically-typed languages.

Arrays offer some significant benefits:

  • Constant-time access to elements. If you provide the index (the numeric address inside the array), we know exactly where that item’s value lives without any sort of transformation. That makes array access extremely fast.
  • Space efficiency. Taking up one block of memory is more efficient than taking up lots of discontiguous, fragmented chunks of memory.
  • Colocality. Building on the previous benefit, because it’s one block of memory, all of the values of the array are located next to each other, making access via traversal very easy too. This lets computers make use of their caches, since we can easily plop the whole array onto (and off of) the cache.

But they come with one major limitation:

  • Fixed size. The space efficiency comes at a cost: we can’t modify the size of the array later on. If we allocate for 5 items, that’s it.

There are two ways to mitigate this: buffer space and dynamic arrays. With buffer space, we leave extra empty room for the array to grow. Say you allocate an array for 5 items; we can buffer in 5 more for a total of 10 allowable slots. The obvious downside is wasted space, reserved in anticipation of growth that may never happen.

The other option is to use dynamic arrays. Dynamic arrays will allocate double the space of the current size of a filled array and copy its contents over to that newly-allocated block. It will then free up the old space that was allocated for the original array. This is like a smart version of buffer space that only adds in the buffer when the array is filled.

It still suffers from the same downside of potentially wasted space, plus the additional operational cost of allocation and copying. That total cost still works out to O(n) because the reallocation work forms a geometric series (each doubling copies twice as many elements, but happens half as often), so each insertion is O(1) amortized. What dynamic arrays can’t guarantee is that every individual insertion is constant-time: an insertion that lands on a boundary and triggers a doubling takes O(n).

const myArray = [1, 2, 3]; // JavaScript arrays handle sizing dynamically for you

2. Linked

Linked data structures, as you might guess, differ from continuous data structures because they are distinct blocks of memory linked together to form what looks like one cohesive structure. The linking mechanism is called a pointer.

A pointer acts as a reference to an address in memory. So while your house sits at 742 Evergreen Terrace in Springfield, it’s the address itself that serves as the pointer to your house. In programming, we make heavy use of pointers by passing their values around in languages like C and C++. In JavaScript, however, no such functionality exists.

Instead, JavaScript handles objects by reference. When we assign or pass an object, we copy the reference to the object, not the object itself, and that reference is how the runtime finds the object in memory.
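A small illustration of what copying a reference means in practice (variable names are made up for the example):

```javascript
// 'original' holds a reference to the object, not the object itself.
const original = { street: 'Evergreen Terrace', number: 742 };

// Assignment copies the reference, so both names point at the same object...
const alias = original;
alias.number = 743;
console.log(original.number); // 743: the mutation is visible through either name

// ...but reassigning a variable only swaps out its copy of the reference.
let other = original;
other = { street: 'Fake Street', number: 123 };
console.log(original.street); // 'Evergreen Terrace', untouched
```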

In this article we’ll cover several of these fundamental linked data structures, including linked lists, stacks, and queues.

Linked lists

I have memories/nightmares of my very early college days doing phone screens for my internships with questions like:

Tell me how to reverse a singly-linked list in place.

It’s funny; looking back now, it actually is a good question to ask a sophomore in college, who has no real professional experience to lean on and can’t reply with something snarky like:

Why would I know how to do that? My list library already has the optimal algorithm.

Anyway back to the task at hand.

I like to think of linked lists like those colorful play slides you see on playgrounds. There’s one way to enter: at the top (or the start of the list). You can’t see what’s inside the slide/list until you slide down/traverse the list. And each piece of the slide/list is a different color/value. And the only way to get from one section/item to the next is to slide down the slide/iterate to the next item.

And for doubly-linked lists, those are like the crazy kids who would get to the bottom of the slide and dare to climb back up inside to the top. Sure, they’re kind of crazy and mildly stupid (the climbers, not the lists), but the reward for the extra work (of climbing or implementing the extra functionality) is increased flexibility and exploration. Now obviously there is more of an implementation cost to doubly-link a list (an extra pointer per node and more bookkeeping), but that cost is usually small, so it’s often worth paying.

Linked lists come with their own benefits:

  • Memory is dynamically allocated. Each time we add an item to the list, we allocate some more memory: no more and no less than exactly what we need. This is an advantage over arrays because of that space efficiency.
  • Insertion and deletion are simpler. Since list manipulation operates on the ends of the list, which we have direct access to, these operations are extremely simple, as in constant-time (O(1)) simple.
  • Sorting/rearranging lists is simpler, too. Since the chain between list items is just a pointer to the next item, we don’t have to physically move anything in memory to rearrange the list. All we have to change is who points to whom. This is particularly beneficial for really large objects, because those objects never have to move; moving and finding really large blocks of memory is far more costly than moving pointers, which have a tiny fixed size.

And their own downsides:

  • More overall space. You need space not only to store your value but also to point to the other list items.
  • Worse colocality and caching performance. Since there’s no requirement that all of the list’s items live in the same memory block, the list is much harder to cache: its items could theoretically be scattered anywhere in RAM.
  • Random access is slower. You can’t just pluck an item out with an index. Remember, this is like a slide: the only way in or out is from the ends, and only from the ends can you traverse your way to the section of the list you want.
class Node {
  constructor(value) {
    this.value = value; = null;
  }
}

class LinkedList {
  constructor() {
    this.head = null;
  }

  delete(value) {
    let prev = null;
    let curr = this.head;
    while (curr !== null && curr.value !== value) {
      prev = curr;
      curr =;
    }
    if (curr === null) return;            // value not in the list
    if (prev === null) this.head =; // deleting the head
    else =;         // unlink curr
  }

  search(value) {
    let curr = this.head;
    while (curr !== null && curr.value !== value) {
      curr =;
    }
    return curr; // null if the value isn't in the list
  }

  insert(item) { = this.head;
    this.head = item;
  }
}
Stacks and queues

Once you’ve grasped linked lists, stacks and queues are pretty easy to understand. Stacks and queues are just specialized linked lists.

Technically, both can be implemented using either a linked list or an array. It’s arguably easier to make a queue with a linked list and a stack with an array, because of how the insertion and deletion functions are implemented for arrays and lists respectively.


A stack is exactly what you think it looks like. A stack of pancakes. A stack of cards. A stack of whatever.

When you throw pancakes onto a plate, what’s the pancake you see at the top? The last one you put on the plate. When you start eating the pancakes, what’s the first one you dig your fork into? Also the last pancake. This actually has a name: Last-In-First-Out, or LIFO.

In the jargon of computer science, really the only difference between a stack and a regular old linked list is that insert is now called push, and delete is now called pop. In fact, you may recognize those names because they are methods on Array.prototype.

That’s right, you could actually build a very simple Stack class by limiting the scope of an Array’s modification to just push() and pop().

Why would you need one over an array? Stacks are an ideal data structure for handling depth-first search. Just like you dig deeper into that stack of pancakes, so should you think about using stacks with DFS. We’ll get more into search algorithms in a later article.
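As a sketch, here’s what such a minimal Stack might look like, wrapping an array and exposing only the operations a stack needs (class and method names are illustrative):

```javascript
class Stack {
  #items = []; // private backing array: the rest of the Array API stays hidden

  push(item) {
    this.#items.push(item);
  }

  pop() {
    return this.#items.pop(); // undefined when the stack is empty
  }

  peek() {
    return this.#items[this.#items.length - 1];
  }

  get size() {
    return this.#items.length;
  }
}

const plates = new Stack();
plates.push('first pancake');
plates.push('second pancake');
console.log(plates.pop()); // 'second pancake': last in, first out
```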


A queue is also aptly named: it’s the thing you do when you wait in line at the deli or the DMV.

What’s that process like? You take a ticket, it has some number on it, and you wait until your number is called. When you are waiting in line, how do they process the orders? From the lowest ticket number to the highest; in other words, in the order people entered the queue. This, too, has a name: First-In-First-Out, or FIFO.

Again, this is just a specialized linked list with different names for insert and delete; now they are called enqueue and dequeue. These operations exist on Array.prototype as well, where they are called push and shift (or unshift and pop, depending on which end you treat as the front of the line).

And no, it’s not a typo that push shows up again; it does double duty, adding to a stack and enqueueing to a queue alike. With queues, think of it like the tickets: when a number is called at the deli, we take the oldest ticket with the lowest number, which sits at the front of the queue, the beginning of the list. When the list was empty, we added to the end of it, and as more people arrive, we keep adding them to the end of the queue. But when the person behind the deli counter calls our number, where are we in line? Right at the front, so we don’t remove from the end of the list; we remove from the beginning.

Why would you need one over a linked list? Queues are an ideal data structure for handling breadth-first search. Just like people shuffle forward in line at the deli counter, so should you think about using queues with BFS. Again, we’ll save search algorithms for later.
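A matching sketch for a queue built on an array (again, names are illustrative; note that shift is O(n) on arrays, which is one reason a linked list suits queues better):

```javascript
class Queue {
  #items = []; // private backing array

  enqueue(item) {
    this.#items.push(item); // join the back of the line
  }

  dequeue() {
    return this.#items.shift(); // serve the front; undefined when empty
  }

  get size() {
    return this.#items.length;
  }
}

const deli = new Queue();
deli.enqueue('ticket 41');
deli.enqueue('ticket 42');
console.log(deli.dequeue()); // 'ticket 41': first in, first out
```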

Okay, I think that’s enough for now. In the next article we’ll cover dictionaries, but right this moment we’ll move on to the Daily Problem.

The next Daily Problem

This one is not in the book, so I’ll just transcribe it here:

For each of the four types of linked lists (singly unsorted, singly sorted, doubly unsorted, doubly sorted), what is the asymptotic worst-case running time for each dynamic-set operation listed?

a) Search(List, item)

b) Insert(List, item)

c) Delete(List, item)

d) Successor(List, item)

e) Predecessor(List, item)

f) Minimum(List)

g) Maximum(List)

Pretty straightforward: for the functions that operate on a linked list, give the Big O time complexity. If you’ve followed along with this and the previous articles then this question should take very little time to answer. And for more practice, check out these problems from the book:

  1. 3-2
  2. 3-4
