For someone without a classical education in computer science, like me, dynamic programming interview problems can feel like a nightmare! I finally found my way to understanding these kinds of problems, and that is what I am going to talk about in this post.
During an interview process, I received an assignment asking for something like "the minimum number of times one needs to do an experiment to guarantee some value". I approached the problem like a mathematician (which I am, roughly speaking!), worked hard on it for a week, and got my rejection notice a couple of days later! I asked, "But why? What was wrong?" And the answer was "... the correct answer is X, but your answer is Y. One possible solution is using dynamic programming." Oh! That's it. I don't know about you, but this was the first time I had heard the term "dynamic programming". I wanted to know what this so-called dynamic programming was that had cost me the job I really liked. I tried different resources: the "Greedy Algorithms, Minimum Spanning Trees, and Dynamic Programming" course on Coursera, geeksforgeeks exercises, and HackerRank. They were great but not helpful for me; I still could not figure out what to do with a DP problem.
One night, out of the blue, I found "Dynamic Programming for Coding Interviews", and I immediately read the first six chapters before midnight! (Note: the book is not voluminous, around 130 pages.) Afterwards, I understood the DP approach much better than before; it was like a miracle for me! The reason might be that the book often approaches each example in three different (but related) ways:
- recursive approach: handle one step and delegate the rest back to the function
- memoization approach: the recursive approach + saving/caching the results to avoid computing them again in the future
- dynamic programming approach: keep the recursive soul but without recursion; build the solution from the bottom up
For example, assume that you are asked to compute the \(n\)-th term of the famous Fibonacci sequence \(1,1,2,3,5,8,\dotsc\) with the recursive rule \[\boxed{f_n = f_{n-1} + f_{n-2}}\] In the recursive approach, we define a function which simply breaks the main problem of computing `fib(n)` into two smaller problems, by calling itself to compute the \((n-1)\)-th and \((n-2)\)-th terms:

```python
def fib(n):
    if n == 1 or n == 2:
        return 1
    else:
        return fib(n-1) + fib(n-2)
```
This code calls the function `fib()` many times, which makes it slow, with exponential complexity, and many of these calls are repetitive. For example, if the goal is to compute `fib(5)`, we call `fib(4)` and `fib(3)`. The former calls `fib(3)` and `fib(2)`, while the latter calls `fib(2)` and `fib(1)`. (The book illustrates this with a picture of the recursion tree.) So, just in the second "step" of this recursion, `fib(3)` has been called twice. One can imagine that the number of these repetitions gets larger and larger for the later terms of the sequence. This is what makes the code impractical and inefficient. What if we saved the values we have already computed somewhere, so that we could just read them from memory rather than paying the "overhead" of recursion again? This is what is called "memoization". Here is how the code looks in this case:
```python
memo = [0] * 100  # cache for already-computed terms

def fib(n):
    if memo[n]:
        return memo[n]
    if n == 1 or n == 2:
        memo[n] = 1
    else:
        memo[n] = fib(n-1) + fib(n-2)
    return memo[n]
```
We still have the same recursion, but if `memo` is already filled in, we just read from memory. This improves the efficiency a lot, and it is usually easy to do. But we can still do better: the "dynamic programming approach". For DP, we adopt the bottom-up approach, that is, we start from the smaller problems in order to solve the larger ones. In this example, it means that we compute `fib(3)`, `fib(4)`, ... until we reach the term we actually want. We combine caching the already-computed results with the recursive rule into this very fast DP-based code:

```python
def fib(n):
    memo = [0] * 100
    memo[0] = 1
    memo[1] = 1
    for i in range(2, n):
        memo[i] = memo[i-1] + memo[i-2]
    return memo[n-1]
```
There is no recursive function call anymore; it is simply reading from memory! I do not want to present very rigorous benchmarking of the computation times here, but this is what I got on my machine for \(n=35\). You can see that the difference is huge!
| Approach | Execution time (ms) |
|---|---|
| Recursive | 11364 |
| Memoization | 0.08 |
| DP | 0.03 |
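For the curious, timings like these can be reproduced with `time.perf_counter`; here is a minimal sketch of how one might measure it (the exact numbers will of course vary by machine, and `fib_dp` here is just the bottom-up function from above with the list sized to `n`):

```python
import time

def fib_recursive(n):
    # Plain recursion: exponential number of calls.
    if n == 1 or n == 2:
        return 1
    return fib_recursive(n-1) + fib_recursive(n-2)

def fib_dp(n):
    # Bottom-up DP: fill the table from the smallest terms.
    memo = [0] * (n + 1)
    memo[0] = memo[1] = 1
    for i in range(2, n):
        memo[i] = memo[i-1] + memo[i-2]
    return memo[n-1]

def timed(f, n):
    start = time.perf_counter()
    result = f(n)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

for f in (fib_recursive, fib_dp):
    result, ms = timed(f, 30)
    print(f"{f.__name__}: {result} in {ms:.3f} ms")
```

Even at \(n=30\) the gap between the two is already dramatic.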
This example is certainly too simple; one could even think of the DP method directly. But, for better or worse, it is not always like this. There are cases where it is harder to see the structure of the DP solution right away. For example, what do you think about the tiling problem from the above-mentioned book? Hint: imagine what would happen if you placed the first tile vertically/horizontally, and start from the recursive solution.
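If I recall the problem correctly, it asks in how many ways a \(2\times n\) board can be tiled with \(2\times 1\) tiles (this is my paraphrase, so take the exact statement with a grain of salt). Following the hint, the recursion falls out of the choice for the first tile:

```python
def count_tilings(n):
    # A vertical first tile leaves a 2 x (n-1) board;
    # two stacked horizontal first tiles leave a 2 x (n-2) board.
    if n <= 2:
        return n  # 1 way for n=1, 2 ways for n=2
    return count_tilings(n-1) + count_tilings(n-2)
```

Once again this is the Fibonacci recursion in disguise, and it can be memoized or turned bottom-up exactly as above.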
The rope problem
Now, I can get back to the problem I was working on for my interview assignment, the rope problem:

You are given \(M\geq 1\) identical ropes, and you are asked to find out the strength of the rope by attaching weights between \(1\) kg and \(N\) kg. If a rope fails during a test of load \(w\), it means that its strength is less than \(w\). The strength is not bigger than \(N\). You need to determine the minimum number of tests that are necessary to find this strength.
The tricky point is that the number of ropes is limited. If we had infinitely many ropes, binary search would be the best approach. On the other hand, with only one rope, we cannot afford to break it before knowing the answer, so we have to do a "linear search": load the rope with weights \(1,2,3,4,\dotsc\) respectively, until it breaks.
At that time, the way I approached this problem was mathematical rather than algorithmic. I tried to find a clever choice of weights to be tested, to minimize the number of tests needed. I still believe that my answer was interesting, and could reduce the exponential complexity of a brute-force search to polynomial. However, it was not what they were looking for!
Let's start solving this problem with the recursive approach: given a rope with a certain strength bound (\(n\) in the beginning), we do a search over all weights. As we need to account for the worst case, we take the maximum over two outcomes:
- The rope fails under weight \(w\): we call the function again with \(w-1\) as the strength bound, and reduce the number of available ropes.
- The rope does not fail: we call the function again with \(n-w\) as the strength bound. Notice that, in fact, the strength would be between \(w+1\) and \(n\), but what really matters is the number of weights that remain to be tested, not the upper and lower bounds.
```python
def testRope(m, n):
    minTest = n
    if m == 1:
        return minTest
    if n <= 0:
        return 0
    if n == 1 or n == 2:
        return n
    for w in range(1, n+1):
        minTest = min(minTest, 1 + max(testRope(m-1, w-1), testRope(m, n-w)))
    return minTest
```
The code works, nice! But it does not give me any result for \(n>25\), even after minutes of waiting! So, let's think about the DP approach. We keep more or less the same recursion, but not as a function calling itself over and over. We adopt the same methodology as in the Fibonacci example; we save the results of the smaller problems in a matrix, considering that:
- If there remains only one rope, we need to do \(n\) tests (linear search), so `testRope(1,n) = n`.
- If the strength bound is 1, a single test settles it, so `testRope(m,1) = 1`.
- If the strength bound is 2, one can check that two tests are needed, so `testRope(m,2) = 2`.
These three simple rules are essentially the initialization steps at the top of the code below, where `memo` is our two-dimensional list saving the computed results. There is just a small tweak in the indices, due to the zero-based indexing in Python.
```python
def testRope(m, n):
    # memo[i][j]: minimum number of tests with i+1 ropes
    # and a strength bound of j+1
    memo = [[0 for i in range(n)] for j in range(m)]
    for i in range(m):
        for j in range(n):
            memo[0][j] = j + 1
        memo[i][0] = 1
        memo[i][1] = 2
    for i in range(1, m):
        for j in range(2, n):
            for w in range(j+1):
                if w == 0:
                    memo[i][j] = min(j+1, 1 + memo[i][j-1])
                elif w == j:
                    memo[i][j] = min(memo[i][j], 1 + memo[i-1][j-1])
                else:
                    memo[i][j] = min(memo[i][j],
                                     1 + max(memo[i-1][w-1], memo[i][j-w-1]))
    return memo[m-1][n-1]
```
The next step is to solve slightly more complicated problems, namely \(n=3\) with 2 ropes. What do we need for that? Exactly the smaller problems we just solved! We test a weight; if the rope fails, we are left with a linear search; if it does not fail, we are left with a search of length 1 or 2, whose solution is already known to us! This is exactly what the innermost update in the nested loops computes. So, we build the final solution from these sub-problems.
What about the timing here? Let's take a look at the case \(m=2\) and \(n=25\):
| Approach | Execution time (ms) |
|---|---|
| Recursive | 25700 |
| DP | 0.27 |
OK, I am done with this post! If you were not familiar with DP before, I hope it has given you a better idea of how to approach it. And don't forget to check out the book; it explains all of this much better than I did :)