Introduction to NP-Completeness and Intractable Problems

We’re rounding the final corner in our algorithms series to focus on seemingly impossible problems. In our dynamic programming segment, we took very hard problems with exponential running times and leveraged caching to bring them down to polynomial time. We also hinted that certain problems, such as the Traveling Salesman Problem, cannot benefit from this optimization. So how can you possibly solve the most complex problems when they cannot be solved in polynomial time?

This next chunk will focus on these intractable problems: rather than attacking them directly, we translate them into problems we already know how to solve and work from there. This translation from one problem to another is called a reduction, and we’ll spend the next few posts creating our own reductions. But first, onto the Daily Problem:

This article is part of the Data Structures and Algorithms Series following the course The Analysis of Algorithms. If you missed the previous article, check that out first, along with a copy of The Algorithm Design Manual.

Answers to the previous Daily Problem

Eggs break when dropped from a great enough height. Specifically, there must be a floor f in any sufficiently tall building such that an egg dropped from the fth floor breaks, but one dropped from the (f − 1)st floor will not. If the egg always breaks, then f = 1. If the egg never breaks, then f = n + 1. You seek to find the critical floor f using an n-story building. The only operation you can perform is to drop an egg off some floor and see what happens. You start out with k eggs, and seek to drop eggs as few times as possible. Broken eggs cannot be reused. Let E(k, n) be the minimum number of egg droppings that will always suffice. (a) Show that E(1, n) = n. (b) Show that E(k, n) = Θ(n^(1/k)). (c) Find a recurrence for E(k, n). What is the running time of the dynamic program to find E(k, n)?

Before we completely leave dynamic programming, let’s keep it fresh in our minds and think about how we can break this problem down.

Answer to part A

To solve part A, we can intuit this very simply: with one egg, the only safe strategy is to start at the bottom floor and, in the worst case, climb every single floor until the egg breaks (if it ever does). Skipping a floor risks breaking your only egg without learning whether the skipped floor was the critical one, so no strategy with one egg can guarantee fewer drops. Thus, 1 egg requires at most (and in the worst case exactly) n drops, so E(1, n) = n.

Answer to part B

To solve part B, generalize the idea of splitting the building into sections: with k eggs, divide the n floors into roughly n^(1/k) blocks of n^((k-1)/k) floors each. Drop the first egg at the block boundaries (at most about n^(1/k) drops) to find which block contains the critical floor, then recurse inside that block with the remaining k-1 eggs. Unrolling the recursion gives O(k * n^(1/k)) drops, which is O(n^(1/k)) for any fixed k. For the matching lower bound, a strategy making t drops with k eggs can only distinguish on the order of t^k different outcomes, so t must be on the order of n^(1/k), giving the Θ bound.
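
Here’s a minimal Python sketch of the upper-bound strategy; the drops helper name and the exact block sizing are my own framing, not the book’s:

```python
import math

def drops(k: int, n: int) -> int:
    """Worst-case drops for the block strategy with k eggs and n floors."""
    if n <= 1:
        return n
    if k == 1:
        return n  # one egg: forced floor-by-floor linear scan
    # Split the n floors into blocks of ~n^((k-1)/k) floors each. The first
    # egg walks the block boundaries; the other k-1 eggs recurse in one block.
    block = max(1, round(n ** ((k - 1) / k)))
    return math.ceil(n / block) + drops(k - 1, block)

print(drops(2, 100))   # 20, i.e. 2 * sqrt(100)
print(drops(3, 1000))  # 30, i.e. about 3 * 1000^(1/3)
```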

Answer to part C

Finally, to solve part C, let’s think about how we would solve this instinctively. If we have 100 floors and 5 eggs, it might initially make sense to break the building up into 5 equal sections of 20 floors each. You may not land exactly on the critical floor with such limited eggs, but you know you’ll be able to cover the whole building with the eggs at your disposal.

Diving in, let’s say that dropping from the top of section 1 (floors 1 through 20 in our example) doesn’t break the egg, but the top of section 2 (floors 21 through 40) does. We now have k-1 eggs to explore a subproblem of this greater problem, because now we know our bounds have been reduced from 1 to 100 down to 21 to 40. You repeat this process, continuing to subdivide the remaining range with your remaining eggs.

Generalizing this problem, we can say that, in the worst case, it will take us about n/s + s attempts, where s is the size of each section: n/s drops of the first egg to walk the section boundaries and find the right section, plus up to s more drops to walk through that section floor by floor. What section size minimizes the number of drops?

Because we said min (or max) in that last question, this should hark back to differential calculus. If f(s) = s + n/s, then our derivative is f'(s) = 1 - n/s^2. Setting it to zero gives 0 = 1 - n/s^2; rearranging, s^2 = n, and therefore our optimal section size is s = sqrt(n).

To check whether this critical point is a minimum or a maximum, we now take the second derivative, which is f''(s) = 2n/s^3. The second-derivative test states that if f''(s) > 0 at the critical point, we have a minimum. Since you can’t have a negative number of floors per section, s is positive, so f''(s) > 0 and we know we’ve found our minimum section size. If that doesn’t quite make sense, look up the second-derivative test for a deeper explanation of how it works.

Going back to our original equation, substituting s = sqrt(n) turns n/s + s into n/sqrt(n) + sqrt(n). Since n/sqrt(n) is just sqrt(n), the worst case is sqrt(n) + sqrt(n) = 2*sqrt(n) tries, which is exactly the n^(1/2) behavior part B predicts for two eggs.
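
If the calculus feels abstract, a tiny brute-force check (variable names are mine) confirms it: for n = 100, the integer section size minimizing n/s + s is s = 10, costing 20 = 2*sqrt(100) drops.

```python
import math

n = 100

def cost(s: int) -> int:
    # n/s drops of the first egg to find the section, plus up to s within it
    return math.ceil(n / s) + s

best = min(range(1, n + 1), key=cost)
print(best, cost(best))  # 10 20 -> s = sqrt(n), 2 * sqrt(n) total drops
```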

All of that said, the recurrence is E(k, n) = min over x in 1..n of ( max(E(k, n-x), E(k-1, x-1)) + 1 ), where x is the floor you drop from. If the egg doesn’t break, the critical floor must be above x and your egg is still intact: the first term inside the max says you solve the subproblem with the n-x remaining floors and the same k eggs. Conversely, if the egg breaks, the critical floor must be at or below x: the second term says you solve the subproblem with one less egg and only the x-1 floors below. We take the max over these two possibilities because we have to plan for the worst-case outcome of the drop. We add 1 because the drop itself has a constant cost. Finally, since we get to choose which floor x to try, we wrap the whole thing in a min over all possible x values.

To compute the runtime of this dynamic programming algorithm: there are k * n subproblems E(k, n), and each one runs a minimization over up to n choices of x, which reduces down to O(k * n^2).
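
Here’s a minimal memoized sketch of that recurrence in Python; the function name E mirrors the notation above, and everything else is my own framing:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def E(k: int, n: int) -> int:
    """Minimum number of drops that always suffices with k eggs, n floors."""
    if n == 0:
        return 0   # nothing left to test
    if k == 1:
        return n   # one egg: forced linear scan
    # Drop from floor x: either the egg survives (n - x floors above remain,
    # still k eggs) or it breaks (x - 1 floors below remain, k - 1 eggs).
    # Plan for the worst outcome (max), pay 1 for the drop, pick the best x (min).
    return 1 + min(max(E(k, n - x), E(k - 1, x - 1)) for x in range(1, n + 1))

print(E(1, 100))  # 100
print(E(2, 100))  # 14 -- k * n subproblems at O(n) work each: O(k * n^2) total
```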

Reductions: turning one problem into another

If I had to summarize this section, it would be that reductions are ways of expressing one problem in terms of another. If you can transform instances of problem B into instances of problem A, then a fast solution for problem A gives you, through that transformation, a fast solution for problem B.

Examples

Closest pair of integers

One simple example is finding the closest pair of numbers in an array. The naive implementation takes the difference across all pairs of numbers, which is O(n^2). The translation is to sort the array first: the closest pair must then be adjacent to each other, so we can run the difference comparison in a single linear pass, bringing the complexity down to O(n log n).
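
A short sketch of that reduction (the function name is mine):

```python
def closest_pair(nums: list[int]) -> tuple[int, int]:
    """Return the two numbers with the smallest difference between them."""
    s = sorted(nums)  # O(n log n): the reduction to a sorted instance
    # In sorted order the closest pair must be adjacent, so one scan suffices.
    return min(zip(s, s[1:]), key=lambda pair: pair[1] - pair[0])

print(closest_pair([40, 12, 100, 14, 3]))  # (12, 14)
```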

Longest common subsequence

Another example we can utilize from our dynamic programming section involves the Longest Common Subsequence. We started that section off by talking about Edit Distance (i.e. spell checking resolves to either adding/removing/replacing/keeping a letter in a word). In many ways, LCS is just a translation of Edit Distance: set the cost of insertion and deletion to 1, set the substitution cost to infinity (since we don’t allow substitutions in this problem), and compute the edit distance d between the two strings A and B. Every character outside the common subsequence must be either deleted from A or inserted into B, so d = (|A| - |LCS|) + (|B| - |LCS|), and rearranging gives |LCS| = (|A| + |B| - d) / 2. That’s where the two comes from: each mismatched position is paid for once on each side, in pairs. (The same trick extends to the longest increasing subsequence: sort a copy of the sequence, which by construction is ever increasing, and take the LCS of the original against the sorted copy.)
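
Here’s a sketch of the translation (names are mine): an edit distance where substitution is effectively forbidden, with the LCS length recovered from the identity above.

```python
def lcs_length(a: str, b: str) -> int:
    """LCS length via insert/delete-only edit distance."""
    INF = float("inf")  # substitution cost: never chosen
    # dp[i][j] = edit distance between a[:i] and b[:j]; base cases are i + j
    dp = [[i + j for j in range(len(b) + 1)] for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            match = dp[i - 1][j - 1] if a[i - 1] == b[j - 1] else INF
            dp[i][j] = min(match, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    d = dp[len(a)][len(b)]
    return (len(a) + len(b) - d) // 2  # |LCS| = (|a| + |b| - d) / 2

print(lcs_length("abcbdab", "bdcaba"))  # 4 (e.g. "bcba")
```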

Least common multiple & greatest common divisor

Finally, a really common pairing of problems deals with pairs of integers: finding their least common multiple (e.g. 24 and 36 have an LCM of 72) and their greatest common divisor (for those same numbers, the GCD is 12).

In fact, we can use one to derive the other: if we take two numbers, say x and y, multiply them together, and divide by their greatest common divisor, we get the least common multiple: lcm(x, y) = x * y / gcd(x, y). Just like our previous two examples, where sorting translated a hard problem into an easier one, finding the GCD translates the LCM problem from an expensive one into a very easy one. Computing an LCM directly would require the prime factorization of each number, and no efficient factoring algorithm is known. But by deriving the GCD with the Euclidean algorithm, which needs only O(log(min(x, y))) iterations, the whole computation becomes extremely fast!
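
And a sketch of this translation (names are mine):

```python
def gcd(x: int, y: int) -> int:
    """Euclid's algorithm: O(log(min(x, y))) iterations."""
    while y:
        x, y = y, x % y
    return x

def lcm(x: int, y: int) -> int:
    """Reduce LCM to GCD using x * y == gcd(x, y) * lcm(x, y)."""
    return x * y // gcd(x, y)

print(gcd(24, 36), lcm(24, 36))  # 12 72
```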

We’ll pursue harder examples in the following posts, but for now let’s move on to the next Daily Problem:

Onto the next Daily Problem

We wish to compute the laziest way to dial a given n-digit number on a standard push-button telephone using two fingers. We assume that the two fingers start out on the * and # keys, and that the effort required to move a finger from one button to another is proportional to the Euclidean distance between them. Design and analyze an algorithm that computes in time O(n) the method of dialing that involves moving your fingers the smallest amount of total distance.

More problems to practice on

The last homework set for the class also covers Chapter 9 on reductions, so here are your introductory problems in this space:

  1. Problem 9-1.
  2. Problem 9-2.

