Lecture from: 24.10.2024 · Video: Videos ETHZ · Official Script
Dynamic Programming
Dynamic programming is a powerful technique that empowers even mere mortal programmers to tackle complex problems. It builds upon the idea of using invariants, similar to what we explored in sorting algorithms.
Instead of directly solving a problem, dynamic programming breaks it down into smaller overlapping subproblems and recursively solves them. The key difference lies in storing the results of these subproblems to avoid redundant calculations, leading to significant efficiency gains.
Recursion: A Primer
Let’s illustrate this with the classic Fibonacci sequence example:
Pseudocode:
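The pseudocode block did not survive in these notes; a minimal Python sketch of the naive recursion (assuming the convention $F(1) = F(2) = 1$ used in the analysis below) could look like:

```python
def fibonacci(n: int) -> int:
    # Base cases: the first two Fibonacci numbers are both 1.
    if n <= 2:
        return 1
    # Each call spawns two further calls, which is what makes this exponential.
    return fibonacci(n - 1) + fibonacci(n - 2)
```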
This recursive approach defines fibonacci(n) in terms of two smaller Fibonacci values. However, repeated calculations of the same subproblems lead to exponential time complexity.
Runtime Analysis of Fibonacci
Let’s analyze the runtime complexity of our recursive Fibonacci implementation:
- Base Cases: $T(1), T(2) ≥ c$, where $c$ is some constant (representing the time for the base cases).
- Recursive Step: $T(n) ≥ T(n−1) + T(n−2) + d$, where $d$ is another constant (accounting for additional operations within the recursive step).
Notice how this recurrence relation resembles the Fibonacci sequence calculation itself! This similarity is crucial to understanding the runtime.
Using Induction: We can use induction to show that $T(n) ≥ c·F_n$, where $F_n$ represents the $n$-th Fibonacci number. (See exercise 3.4 for more info on how we deduced this…) Since $F_n ≥ \frac{1}{3} · 1.5^n$, we get
$T(n) ≥ c·F_n ≥ c·\frac{1}{3}·1.5^n = Ω(1.5^n)$
Therefore, the runtime complexity of this recursive Fibonacci implementation is exponential, specifically $Ω(1.5^n)$.
This begs the question: what can we do to combat this inefficiency?
Memoization (Solution 1)
Memoization provides a powerful technique to optimize our recursive Fibonacci implementation.
Instead of recalculating Fibonacci numbers repeatedly, we store the results of previously computed values in a data structure like an array or dictionary. This avoids redundant calculations, significantly improving performance.
Let’s illustrate this with Python and a memo array:
Pseudocode:
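A possible Python version of the memoized recursion (the list-of-None representation follows the explanation below; a dictionary would work just as well):

```python
def fibonacci_memo(n: int, memo=None) -> int:
    # memo[k] caches F(k); None marks "not yet computed".
    if memo is None:
        memo = [None] * (n + 1)
    if memo[n] is not None:
        return memo[n]  # reuse a previously computed value
    if n <= 2:
        memo[n] = 1  # base cases F(1) = F(2) = 1
    else:
        memo[n] = fibonacci_memo(n - 1, memo) + fibonacci_memo(n - 2, memo)
    return memo[n]
```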
Explanation:
- We initialize a list memo to store calculated Fibonacci numbers.
- Before each recursive call, we check whether the result for n is already in memo. If it is, we directly return the stored value. Otherwise, we calculate the Fibonacci number recursively and store it in memo before returning.
Runtime:
Memoization dramatically reduces the runtime complexity from exponential $Ω(1.5^n)$ to linear $Θ(n)$. This is because each Fibonacci number is calculated only once and subsequently retrieved from the memo array, resulting in a single pass through the problem space.
Iterative Solution (Solution 2)
While memoization optimizes recursion, an iterative approach can be even more efficient and often easier to understand. This “bottom-up” solution builds the Fibonacci sequence incrementally from smaller values up to the desired n.
Pseudocode:
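A bottom-up sketch in Python (variable names are my own choice):

```python
def fibonacci_iter(n: int) -> int:
    if n <= 2:
        return 1
    prev, curr = 1, 1  # F(1), F(2)
    for _ in range(3, n + 1):  # build upward from F(3) to F(n)
        prev, curr = curr, prev + curr
    return curr
```

Only the last two values are kept, so this also uses constant extra space, unlike the memo array.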
Runtime:
The iterative solution has a runtime complexity of $Θ(n)$. We perform a fixed number of iterations (from 2 to n), making the time proportional to the input size.
The Core Idea
Dynamic programming (DP) hinges on a powerful concept: breaking down complex problems into smaller, overlapping subproblems. These subproblems can often be solved recursively, but DP’s real magic lies in how we handle these solutions.
We aim to solve each subproblem only once and store its result. This eliminates redundant calculations, leading to significant efficiency gains. DP offers various strategies for tackling these subproblems:
- Top-Down: Start with the main problem and recursively break it down into smaller subproblems. Techniques like memoization (storing results in a table) are often used in top-down approaches.
- Bottom-Up: Solve the smallest subproblems first and progressively build up to the larger ones. This often involves iteratively computing solutions for increasing input sizes.
Max Subarray Problem Revisited
Let’s revisit the classic “Max Subarray Sum” problem from previous lectures (03 Max Subarray Sum): Given an array of numbers, find the contiguous subarray with the largest sum.
Subproblem Definition: The key insight is that to find the maximum sum subarray ending at index j, we can use the result of the maximum sum subarray ending at index j−1. Writing $R[j]$ for the maximum sum of a subarray ending at index j, this forms our recursive relationship: $R[j] = A[j] + max(R[j−1], 0)$.
By clearly defining the subproblem and establishing a recursive relationship, we pave the way for efficient dynamic programming solutions.
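This recurrence can be turned into a linear-time scan that keeps only the previous “best sum ending here” value instead of a full table (this is Kadane’s algorithm; the function name is my own choice, not from the lecture):

```python
def max_subarray_sum(a: list[int]) -> int:
    # best_ending = maximum sum of a subarray ending at the current index
    best_ending = a[0]
    best = a[0]
    for x in a[1:]:
        # Either extend the previous subarray or start fresh at x.
        best_ending = max(best_ending + x, x)
        best = max(best, best_ending)
    return best
```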
Jump Game
Problem: Given an array of positive (non-zero) numbers A[1..n] representing the maximum jump length from each position, determine the minimum number of jumps required to reach the last element, A[n], starting from position 1.
Input: An array of positive integers A[1..n], where n represents the total number of elements in the array.
Game Rules:
- Start at position 1 of the array.
- From any position i, you can jump to any position j where i + 1 ≤ j ≤ i + A[i].

Objective: Find the minimum number of jumps needed to reach the last element, A[n].
Let’s solve this.
Try 1: Breaking Down the Problem
Define a subproblem: $S[i] = \text{minimal number of jumps to reach index } i$
Consider an example array A = [1, 3, 5, 3, 2, 1, 2, 4, 4, 2, 9]. How do we solve for S[8]?
To reach index 8, consider the positions from which index 8 is reachable in one jump:
- Jump to position 8 from position 3 (since A[3] = 5 and 3 + 5 ≥ 8).
- Jump to position 8 from position 7 (since A[7] = 2 and 7 + 2 ≥ 8).

Thus, the solution for S[8] is:
$S[8]=min(1+S[3],1+S[7])$
This generalizes to our recursive relationship for any index i:
$S[i] = min\{S[j] + 1 ∣ 1 ≤ j < i \text{ and } j + A[j] ≥ i\}$ for $i > 1$.
In other words, find the minimum jump count from reachable positions one step earlier.
Bottom-Up Implementation
Let’s implement a bottomup solution to calculate the minimum jumps required.
Pseudocode:
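A bottom-up sketch in Python following the explanation below (positions are 1-indexed, so position j reads the 0-indexed list entry A[j − 1]):

```python
def min_jumps(A: list[int]) -> int:
    n = len(A)
    # S[i] = minimal number of jumps to reach position i; S[0] is unused padding.
    S = [float('inf')] * (n + 1)
    S[1] = 0  # zero jumps needed to reach the starting position
    for i in range(2, n + 1):
        for j in range(1, i):
            # A jump from position j reaches any position up to j + A[j].
            if j + A[j - 1] >= i:
                S[i] = min(S[i], S[j] + 1)
    return S[n]
```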
Explanation:
- Initialization: Create a list S of size n+1 filled with infinity (float('inf')), representing the minimum jumps needed to reach each index, initially assuming all indices are unreachable. Set S[1] to 0 because it takes zero jumps to reach the starting position.
- Iteration: For each index i from 2 to n, consider all possible jumping positions j from 1 to i−1. If a jump from position j is valid (i.e., j + A[j − 1] ≥ i), update S[i] to be the minimum of its current value and 1 + S[j]. This represents taking one jump from position j and adding the minimum jumps needed to reach j.
- Return: Finally, return S[n], which contains the minimum jumps required to reach the last index n.
Runtime Complexity
This bottom-up approach has a runtime complexity of $O(n^2)$.
- Outer Loop: The outer loop iterates n times (once for each index i).
- Inner Loop: For each i, the inner loop potentially iterates up to i − 1 times (considering all possible jumping positions j).

Mathematically, we can express this as: $O(∑_{i=2}^{n}(i−1)) = O(n^2)$. The nested loop structure results in a quadratic time complexity.
Try 2: Switching Perspectives. Geht es besser? (Can we do better?)
Yes, we can potentially solve this problem more efficiently. Instead of focusing on S[i] (the minimum number of jumps to reach index i), let’s consider the maximum reachable index given a certain number of jumps:
$M[k] = \text{maximal index reachable with } k \text{ jumps}$
Thus, we redefine the problem in terms of reachable positions for each jump count ($k$).
Note for readers: Since I probably wasn’t the only one slightly confused with what’s going on, I suggest watching this video explaining the efficient approach…
This change in perspective leads to a more efficient solution. Let’s illustrate with an example using A = [1, 3, 5, 3, 2, 1, 2, 4, 4, 2, 9].
Example: To find M[3] (the maximum index reachable with 3 jumps), build upon M[2]:
- With M[2], we can reach indices 0 through 4.
- For each i within that range (i ≤ M[2]), consider the position i + A[i] reachable with one more jump.
Recursive Formulation:
We start with $M[0]=1$. To compute $M[k]$ for $k≥1$ recursively, we consider all positions $i$ reachable with at most $k−1$ jumps—i.e., positions $i≤M[k−1]$. From each such position $i$, we can jump up to $i+A[i]$, so the maximum index reachable with $k$ jumps is:
$M[k] = max\{i + A[i] ∣ 1 ≤ i ≤ M[k−1]\}.$
This formulation builds the reachable indices iteratively by considering jumps from the previously reachable positions.
Pseudocode:
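A sketch of this jump-count perspective in Python (here each $M[k]$ is kept as a single reach variable rather than a full table; since all A[i] are positive, the reach grows every round, so the loop terminates):

```python
def min_jumps_reach(A: list[int]) -> int:
    n = len(A)
    if n == 1:
        return 0  # we already stand on the last position
    reach = 1   # M[0]: with 0 jumps we stand on position 1
    jumps = 0
    while reach < n:
        # M[k] = max over all positions i <= M[k-1] of i + A[i] (1-indexed).
        reach = max(i + A[i - 1] for i in range(1, reach + 1))
        jumps += 1
    return jumps
```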
Optimization of the Recurrence
Looking closer at the recurrence formula for $M[k]$, we see that to compute $M[k]$, it suffices to consider only the indices $i$ that require exactly $k−1$ jumps to reach, rather than all $i$ reachable with $k−1$ jumps. These indices are the ones that satisfy:
$M[k−2]<i≤M[k−1].$
This gives an improved recurrence relation:
$M[k] = max\{i + A[i] ∣ M[k−2] < i ≤ M[k−1]\}$ for $k ≥ 2$,
with the additional base case $M[1]=A[0]$.
Pseudocode:
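A sketch of the optimized recurrence, keeping only $M[k−1]$ and $M[k−2]$ as two scalars. Because each round scans only the positions strictly beyond the previous reach, every index is examined exactly once overall, giving $O(n)$ total work:

```python
def min_jumps_linear(A: list[int]) -> int:
    n = len(A)
    if n == 1:
        return 0
    prev_reach = 0  # M[k-2]
    reach = 1       # M[k-1]; with 0 jumps we stand on position 1
    jumps = 0
    while reach < n:
        # Only positions strictly beyond M[k-2] can improve the maximum.
        next_reach = max(i + A[i - 1] for i in range(prev_reach + 1, reach + 1))
        prev_reach, reach = reach, next_reach
        jumps += 1
    return jumps
```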
This optimized recurrence allows us to reduce the runtime of the algorithm to $O(n)$, as each index $i$ contributes to only one maximum calculation across the entire process.
Lower Bound
The time complexity of $O(n)$ is optimal for this problem. To demonstrate this, we can construct a worst-case input on which any algorithm must examine nearly every element.
Consider the array $A = (1, 1, …, 1)$. Here, each entry directly affects the reachability of subsequent indices, so any correct algorithm must read all but a constant number of the entries (at least $n − 2$ of them). This guarantees a worst-case lower bound of $Ω(n)$ for this problem.
Longest Common Subsequence
Leetcode: Longest Common Subsequence
Problem: Given two strings, A and B, find the length of their longest common (non-contiguous) subsequence.
Input: Two strings, A and B, consisting of lowercase English letters.
Output: The length of the longest common (non-contiguous) subsequence of A and B.
Example
Input:
A = "rabbbit"
B = "rabbit"
Output:
6 // "rabbit" is a common non-contiguous subsequence of length 6.
Input:
A = "tiger"
B = "ziege"
Output:
3 // "ige" is a common non-contiguous subsequence of length 3.
Understanding Non-Contiguous Subsequences
- A subsequence is a sequence formed by deleting some (or no) characters from another sequence without changing the order of the remaining characters. For example, “ten” and “tin” are subsequences of “kitten”.
- A non-contiguous subsequence allows gaps between the chosen characters. “rabbi” is a non-contiguous subsequence of “rabbbit”.
This problem type has wide applications in computer science, including:
- Bioinformatics: Analyzing similarities and differences between DNA sequences.
- Text Processing: Finding common patterns or motifs within large text corpora.
Attempt 1: Subproblem A[1..i] and B[1..i]
While thinking about subproblems (A[1..i], B[1..i]) is a good start, simply carrying over the LCS from the previous subproblem won’t always work. We need to consider all possible substrings and their contributions to the overall LCS.
The Flaw:
Imagine A = “tiger” and B = “ziege”. At i = 4, the LCS of A[1..4] (“tige”) and B[1..4] (“zieg”) is 2 (we find “ie”). However, at i = 5, the correct LCS becomes 3 (“ige”). We can’t directly derive this from the previous subproblem’s result because we need to account for the potential inclusion of ‘e’ from B.
The Challenge:
We’d have to analyze every possible substring combination up to i = 5 to find the true LCS. This becomes computationally expensive and inefficient.
Attempt 2: Subproblem Definition and Recursion – A Deeper Dive
Note to reader: Here too, I suggest watching this great video…
Let’s shift our focus towards a more structured approach using subproblems and recursion.
We aim to define a subproblem that represents the LCS (Longest Common Subsequence) of two prefixes, A[1..i] and B[1..j], denoted as L[i, j]. Our goal is to build a recursive relationship that allows us to calculate L[i, j] based on smaller subproblems.
Breaking Down the Problem:
We can express this subproblem recursively based on three key cases:

- Base Case (Empty Subsequences): If either sequence is empty (i = 0 or j = 0), there are no common elements, so the LCS is 0: $L[i, j] = 0$ if $i = 0$ or $j = 0$.
- Matching Characters: If the last characters of both prefixes (A[i] and B[j]) match, we can include this character in our LCS. The remaining LCS is then calculated by considering the prefixes without their last elements: $L[i, j] = 1 + L[i−1, j−1]$
- Non-Matching Characters: If A[i] and B[j] don’t match, we consider two possibilities:
  - Drop the character $a_i$ from A and calculate the LCS of A[1..i−1] and B[1..j]: $L[i−1, j]$
  - Drop the character $b_j$ from B and calculate the LCS of A[1..i] and B[1..j−1]: $L[i, j−1]$

  We choose the option that yields the longer LCS: $L[i, j] = max\{L[i, j−1], L[i−1, j]\}$
Combining the Cases:
Finally, we combine these three cases into a single recursive formula:
$L[i,j] = \begin{cases} 0 & \text{if } i = 0 \text{ or } j = 0 \\ 1 + L[i−1, j−1] & \text{if } A[i] = B[j] \\ \max\{L[i, j−1],\, L[i−1, j]\} & \text{otherwise} \end{cases}$

Bottom-Up Approach
The bottom-up approach offers an efficient way to compute the longest common subsequence (LCS) by systematically filling a 2D table, reflecting the subproblem solutions as we go. Think of this table as a grid where each cell represents the LCS length for specific prefixes of our input strings A and B.
Pseudocode:
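A Python sketch of this table-filling scheme, following the recursive formula above:

```python
def lcs_length(A: str, B: str) -> int:
    n, m = len(A), len(B)
    # L[i][j] = length of the LCS of the prefixes A[0..i-1] and B[0..j-1].
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if A[i - 1] == B[j - 1]:
                L[i][j] = 1 + L[i - 1][j - 1]       # matching characters
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])  # drop one character
    return L[n][m]
```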
Explanation:
- Table Initialization: We create a table L of size (n+1) × (m+1), initially filled with zeros. Each cell L[i][j] will store the length of the LCS of the prefixes A[0..i−1] and B[0..j−1]. The extra row and column handle the empty-prefix cases.
- Table Filling: We iterate through each character position i in string A and j in string B.
  - If A[i−1] matches B[j−1], the LCS length increases by 1 compared to the diagonal cell (L[i−1][j−1]).
  - Otherwise, we take the maximum LCS length from the cell above (L[i−1][j]) or to the left (L[i][j−1]).
- Result Extraction: The final result, the length of the LCS of A and B, is stored in L[n][m].
Note that after filling the table, we can backtrack from L[n][m] (the bottom-right corner) to reconstruct the actual LCS:
- If A[i−1] equals B[j−1], it’s part of the LCS, so we add it to the beginning of the lcs string and move diagonally up-left (i -= 1, j -= 1).
- Otherwise, we move either up (i -= 1) or left (j -= 1), depending on which cell has the larger value in the table.
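The backtracking step sketched above, combined with the table fill, in Python (returning one LCS; on ties we move up first, so other equally long subsequences may exist):

```python
def lcs_string(A: str, B: str) -> str:
    n, m = len(A), len(B)
    # Fill the DP table exactly as in the length-only version.
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if A[i - 1] == B[j - 1]:
                L[i][j] = 1 + L[i - 1][j - 1]
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # Backtrack from the bottom-right corner to recover one LCS.
    lcs = []
    i, j = n, m
    while i > 0 and j > 0:
        if A[i - 1] == B[j - 1]:
            lcs.append(A[i - 1])  # this character is part of the LCS
            i -= 1
            j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1  # move up
        else:
            j -= 1  # move left
    return ''.join(reversed(lcs))
```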
Time Complexity
The time complexity of this bottom-up approach is $O(n × m)$, which translates to $O(N^2)$ when both input strings, A and B, have length approximately N. While this is efficient for many cases, the question remained: geht es besser? (Can we do better?)
It turns out that under certain assumptions a slight improvement to $O(n^2 / \log n)$ is possible, although the implementation is quite complex. The real question was whether more substantial improvements were achievable: could we find an algorithm with a time complexity of $O(n^{2−ϵ})$ for some $ϵ > 0$? This remained open for decades.
Finally, in 2015, two mathematicians/computer scientists provided a definitive answer: no, such a substantial improvement is impossible (assuming the Strong Exponential Time Hypothesis).
Edit Distance
This problem tackles the classic concept of edit distance, which quantifies the minimum number of operations required to transform one string into another. Think of it as the “distance” between two strings in terms of insertions, deletions, or substitutions.
Given two strings A and B, return the minimum number of operations needed to convert A into B. The allowed operations are:
- Insertion: Add a character to A.
- Deletion: Remove a character from A.
- Substitution: Replace a character in A with a different character.

Input:
- Two strings, A and B, consisting of lowercase English letters.

Output:
- An integer representing the minimum edit distance between A and B.
Example:
Input: word1 = "horse", word2 = "ros"
Output: 3
Explanation:
- One way to transform "horse" into "ros" is:
  - Substitute 'r' for 'h': "rorse"
  - Delete 'r': "rose"
  - Delete 'e': "ros"
Let’s break down how to solve the Edit Distance problem using a recursive approach.
Understanding the Problem
The goal is to find the minimum number of edits (insertions, deletions, or substitutions) needed to transform one string (word1) into another (word2).
Note to the reader: Once again, I suggest watching this video…
Defining the Recursive Relation
Let ED(i, j) represent the edit distance between the first i characters of word1 and the first j characters of word2. We can express this recursively:
ED(i, j) = min(
1 + ED(i  1, j), // Deletion case: Remove A[i]
1 + ED(i, j  1), // Insertion case: Insert B[j]
ED(i  1, j  1) + (A[i] != B[j]) // Substitution case: Replace A[i] if needed
)
Explanation:
- Deletion Case (1 + ED(i − 1, j)): If the last character of word1 (A[i]) is not matched in word2, we need to delete it. The edit distance becomes 1 plus the distance between the remaining prefixes (ED(i − 1, j)).
- Insertion Case (1 + ED(i, j − 1)): If the last character of word2 (B[j]) is not matched in word1, we need to insert it. The edit distance becomes 1 plus the distance between word1 and the remaining prefix of word2 (ED(i, j − 1)).
- Substitution Case (ED(i − 1, j − 1) + (A[i] != B[j])): If the last characters of both strings match (A[i] == B[j]), no operation is needed. Otherwise, we need a substitution, adding 1 to the distance between the remaining prefixes (ED(i − 1, j − 1)).
Base Cases:
- ED(0, 0) = 0: The edit distance between two empty strings is 0.
- ED(i, 0) = i: To transform an i-character string into an empty one, we need i deletions.
- ED(0, j) = j: To transform an empty string into a j-character string, we need j insertions.
Pseudocode:
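The recursion above can equally be filled bottom-up into a 2D dp table; a Python sketch under that reading:

```python
def edit_distance(A: str, B: str) -> int:
    n, m = len(A), len(B)
    # dp[i][j] = ED(i, j): edit distance between A[:i] and B[:j].
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i  # i deletions to reach the empty string
    for j in range(m + 1):
        dp[0][j] = j  # j insertions starting from the empty string
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(
                1 + dp[i - 1][j],                           # delete A[i]
                1 + dp[i][j - 1],                           # insert B[j]
                dp[i - 1][j - 1] + (A[i - 1] != B[j - 1]),  # substitute if needed
            )
    return dp[n][m]
```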
Runtime
The runtime complexity of this solution is $O(n × m)$, where n and m are the lengths of the input strings. This is because each cell in the dp table is filled exactly once.