Lecture from: 31.10.2024 | Video: Videos ETHZ | Rui Zhangs Notes | Official Script

Subset Sum Problem

The Subset Sum problem presents a classic combinatorial challenge: given a set of non-negative integers and a target sum, determine if a subset of the given numbers adds up precisely to the target.

Formal Definition:

  • Input: A set (represented as an array) $A = [A[1], \dots, A[n]]$ of non-negative integers and a target sum $b \geq 0$, where $A[i] \geq 0$ for all $i$.
  • Output: A subset of indices $I \subseteq \{1, \dots, n\}$ such that $\sum_{i \in I} A[i] = b$. If no such subset exists, return an indication that no solution exists.

The Subset Sum problem finds applications in various real-world scenarios. Imagine an operating system needing to distribute tasks (each with a specific CPU time requirement) among multiple processors. The Subset Sum problem can help determine if a balanced workload distribution is possible, aiming to maximize processor utilization while minimizing idle time. For instance, if two processors are available, and the sum of all task CPU times is 22 units, we could check if a subset sums to 11 units to assign to each processor.

A straightforward, albeit computationally expensive, approach to solving the Subset Sum problem involves examining all possible subsets of the input set $A$. This effectively means traversing the power set of $A$ (the set of all subsets, including the empty set and $A$ itself).

For each subset, we calculate the sum of its elements and compare it to the target sum $b$. If a match is found, we return the corresponding subset as a solution. If no subset sums to $b$ after checking all possibilities, we conclude that no solution exists.
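A minimal brute-force sketch of this enumeration (my own illustration, not from the lecture; the bitmask encoding and example values are hypothetical):

def subset_sum_naive(A, b):
    # Check all 2^n subsets of A for one summing to b.
    # Returns a list of 1-based indices (as in the lecture) or None.
    n = len(A)
    for mask in range(2 ** n):  # each bitmask encodes one subset
        subset = [i for i in range(n) if mask & (1 << i)]
        if sum(A[i] for i in subset) == b:
            return [i + 1 for i in subset]  # convert to 1-based indices
    return None

print(subset_sum_naive([5, 3, 11], 14))  # [2, 3], since 3 + 11 = 14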

Runtime Analysis:

The power set of a set with $n$ elements contains $2^n$ subsets. Therefore, in the worst case, our naive algorithm would need to examine all $2^n$ subsets. Since summing one subset takes up to $n$ additions, this leads to an exponential runtime of $O(2^n \cdot n)$. The naive approach also suffers from redundant computations: many subsets share common sub-sums, which are repeatedly recalculated.

Dynamic Programming Approach

Note for readers: I suggest watching this…

Dynamic programming offers a significantly more efficient solution to the Subset Sum problem by avoiding redundant calculations inherent in the naive approach. This is achieved by breaking down the problem into smaller, overlapping subproblems and storing their solutions to reuse later.

Subproblem Definition

Instead of directly determining whether a subset sum exists for the entire array $A$, we consider the possible subset sums achievable using only the first $i$ elements, $A[1..i]$.

Formally, we define our subproblem as follows:

$$T(i, s) = \begin{cases} 1 & \text{if some subset of } A[1..i] \text{ sums to } s, \\ 0 & \text{otherwise.} \end{cases}$$

In essence, $T(i, s)$ is a boolean function indicating whether a subset sum of $s$ can be formed using elements from the subarray $A[1..i]$.

Recursive Relation

Now, let’s establish the recursive relationship that allows us to compute $T(i, s)$ based on the solutions to smaller subproblems. Given the values of $T(i-1, s')$ for all $s'$, how can we determine $T(i, s)$?

We have two options to consider:

  1. Exclude $A[i]$: If $s$ is already a subset sum using elements from $A[1..i-1]$, then $T(i, s) = 1$. This is captured by $T(i-1, s)$.

  2. Include $A[i]$: If $s$ is not a subset sum using only $A[1..i-1]$, but $s - A[i]$ is, then we can include $A[i]$ to achieve the target sum $s$. This means $T(i, s) = 1$ if $T(i-1, s - A[i]) = 1$.

Combining these two cases, we get the following recurrence relation:

$$T(i, s) = \max\{T(i-1, s),\; T(i-1, s - A[i])\}$$

Note that if $s - A[i] < 0$, we simply consider the term $T(i-1, s - A[i])$ to be 0, effectively excluding this impossible case.

The base cases for our dynamic programming table are used for initializing the recursion correctly:

  • $T(i, 0) = 1$ for all $i \geq 0$: This signifies that a subset sum of $0$ is always achievable, regardless of how many elements of $A$ we consider. We can simply take the empty subset (containing no elements), and its sum will be $0$.
  • $T(0, s) = 0$ for all $s > 0$: This states that if we are not allowed to use any elements from the array (i.e., $i = 0$), then it’s impossible to achieve any positive subset sum $s$. Since we’re dealing with non-negative integers, no combination of no elements can sum to a positive value.

Bottom Up Approach: With these base cases defined, we can now proceed to fill the entire dynamic programming table using the recurrence, ultimately determining whether the target sum $b$ is achievable using elements from $A$, i.e., whether $T(n, b) = 1$.
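A minimal bottom-up sketch of this table fill (my own illustration, using the notation above with 0-based Python lists):

def subset_sum_bottom_up(A, b):
    n = len(A)
    # T[i][s] == True iff some subset of A[0..i-1] sums to s
    T = [[False] * (b + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        T[i][0] = True  # base case: the empty subset sums to 0
    for i in range(1, n + 1):
        for s in range(1, b + 1):
            T[i][s] = T[i - 1][s]  # case 1: exclude A[i-1]
            if s - A[i - 1] >= 0:  # case 2: include A[i-1], if it fits
                T[i][s] = T[i][s] or T[i - 1][s - A[i - 1]]
    return T[n][b]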

While the bottom-up approach systematically fills the entire dynamic programming table $T$, it can lead to redundant computations, especially if the target sum $b$ is relatively small compared to the potential range of subset sums. A top-down (memoized) recursive approach can be more efficient in such cases by computing only the necessary table entries.

Top-Down (Memoized) Approach:

The top-down approach starts with the original problem ($T(n, b)$) and recursively breaks it down into smaller subproblems. Crucially, it uses memoization to store the results of these subproblems. Before computing any $T(i, s)$, we check if its value has already been stored in the memoization table (often implemented as a dictionary or a 2D array). If it has, we simply retrieve the stored value, avoiding recomputation. Otherwise, we compute the value using the recurrence and store it before returning.

Pseudocode (Python):

def subset_sum(A, b):
    n = len(A)
    memo = {}  # memoization table: (i, s) -> 0 or 1

    def T(i, s):
        if (i, s) in memo:
            return memo[(i, s)]

        # Base cases: sum 0 is always achievable; a positive sum is
        # not achievable with zero elements.
        if s == 0:
            return 1
        if i == 0:
            return 0

        # Case 1: exclude A[i] (1-based), i.e. A[i-1] in 0-based Python.
        res = T(i - 1, s)
        # Case 2: include A[i], if it does not overshoot the target.
        if s - A[i-1] >= 0:
            res = res or T(i - 1, s - A[i-1])

        memo[(i, s)] = res
        return res

    return T(n, b)
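For example (values of my own choosing), subset_sum([5, 3, 11], 14) returns 1, since $3 + 11 = 14$.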

Advantages of Top-Down:

Avoids calculating entries in $T$ that are never needed to determine $T(n, b)$. If you suspect a large portion of the DP table will not be needed, top-down can be more efficient.

Advantages of Bottom-Up:

Avoids recursion and dictionary overhead, so it can be slightly faster for problems where most subproblems need to be computed anyway. For such problems, bottom-up is often the simpler choice.

Runtime Analysis

The dynamic programming approach we’ve outlined has a time complexity of $O(n \cdot b)$, where $n$ is the number of elements in the input array $A$, and $b$ is the target sum. This comes from filling a dynamic programming table of size $(n+1) \times (b+1)$, spending constant time per entry.

Pseudo-Polynomial Runtime:

It’s crucial to understand that this runtime is pseudo-polynomial. While expressed as a product of $n$ and $b$, the parameter $b$ represents a number in the input, not the input size itself. The actual input size is determined by the number of bits required to represent all the input numbers.

Here’s why this distinction matters:

  • Representing the $n$ elements of array $A$ requires at least $n$ units of space. We cannot represent $n$ distinct elements in less than $n$ units of space.
  • The target sum $b$ requires roughly $\log_2 b$ bits to represent its value.

Therefore, the total input size can be approximated as $n + \log b$. If $b$ is relatively small compared to $n$ (e.g., $b \leq n^c$ for some constant $c$), the $\log b$ term is dominated by the $n$ term, and the dynamic programming algorithm, with runtime $O(n \cdot b) = O(n^{c+1})$, effectively behaves like a polynomial-time algorithm.

However, if $b$ is exponentially large compared to $n$ (e.g., $b = 2^n$), the runtime becomes $O(n \cdot 2^n)$, while the input size is only about $n + \log b = O(n)$. This runtime is now exponential in the length of the input. This is the characteristic of a pseudo-polynomial runtime: it’s polynomial in the magnitude of the input numbers but exponential in the length of their representation (number of bits).

Lower Bound:

A trivial lower bound for this problem is the time required to read the input, which is $\Omega(n + \log b)$. In the case where $b = 2^n$, this input-reading lower bound becomes $\Omega(n)$.

Our dynamic programming algorithm’s runtime significantly exceeds this lower bound. It’s not simply processing each input bit a fixed number of times; instead, the number of operations grows exponentially with the input size. This contrasts with true polynomial-time algorithms, where the number of operations is bounded by a polynomial function of the number of input bits.

Knapsack Problem

The Knapsack Problem is a classic optimization problem in which we aim to maximize profit while adhering to a weight constraint. Imagine you have a knapsack with a limited weight capacity, and you’re presented with various items, each having a specific weight and profit. The goal is to select a subset of items that maximizes the total profit within the knapsack’s weight restriction.

Input:

  • A set of $n$ items, each with a weight $w_i$ and a profit $p_i$, where $w_i, p_i \geq 0$ for $i = 1, \dots, n$.
  • A knapsack with a maximum weight capacity $W$.

Output:

  • A subset $I \subseteq \{1, \dots, n\}$ of items such that:
  • The total weight of the selected items does not exceed the knapsack capacity: $\sum_{i \in I} w_i \leq W$.
  • The total profit of the selected items is maximized: $\sum_{i \in I} p_i$ is maximal among all valid subsets.

Example:

Consider a knapsack with a capacity of $W = 9$ and the following items:

| Item | Weight $w_i$ | Profit $p_i$ |
|------|--------------|--------------|
| 1    | 5            | 6            |
| 2    | 4            | 3            |
| 3    | 6            | 5            |
| 4    | 3            | 2            |

A possible solution is selecting items 1 and 4 (total weight = 8, total profit = 8). A better solution, however, is selecting items 1 and 2 (total weight = 9, total profit = 9). The challenge lies in finding the optimal combination.

Failing Approaches

Several seemingly intuitive approaches fall short of optimally solving the Knapsack Problem:

  • Greedy Approach (Maximum Profit First): Selecting items with the highest profit first might seem reasonable, but it can lead to suboptimal solutions. High-profit items could quickly fill the knapsack, leaving no room for potentially more profitable combinations of smaller items.

  • Greedy Approach (Minimum Weight First): Prioritizing lightweight items allows fitting more items, but these might not necessarily yield the highest overall profit.

  • Greedy Approach (Maximum Profit/Weight Ratio): Choosing items with the highest profit-to-weight ratio can be better than the previous two greedy strategies but still doesn’t guarantee optimality. Because the Knapsack Problem is discrete, we cannot take fractional amounts of an item; the ratio-based choice can therefore leave capacity unused that a different combination of items would exploit more profitably.

These greedy approaches are efficient but often fail to find the absolute best solution because they don’t consider all possible combinations of items. Dynamic programming, however, provides a structured way to explore these combinations systematically.

Dynamic Programming Approach

Dynamic programming provides an optimal solution to the Knapsack Problem by breaking it down into smaller, overlapping subproblems and storing their solutions to avoid redundant computations. This will be a 2D DP approach.

Subproblem Definition

The core of our dynamic programming approach lies in defining the right subproblem. We aim to capture the essence of the Knapsack Problem but for a smaller subset of items and a potentially reduced weight capacity. This allows us to build up the solution incrementally.

Formally, we define our subproblem as:

$P(i, w) =$ the maximum profit achievable using a subset of items from $\{1, \dots, i\}$ with a total weight not exceeding $w$.

In simpler terms, $P(i, w)$ represents the best profit we can get if we only consider the first $i$ items and have a knapsack with a capacity of $w$.

Recursive Relation

Note for readers: Check this video for a great explanation…

This subproblem allows us to create a recursive relationship. Consider the $i$-th item: we either include it in the knapsack or not.

  • If we include item $i$: We gain a profit of $p_i$ but reduce our available weight capacity to $w - w_i$. The remaining profit comes from optimally selecting items from $\{1, \dots, i-1\}$ with the reduced weight capacity. This is represented by $P(i-1, w - w_i)$. The total profit in this case becomes $P(i-1, w - w_i) + p_i$.

  • If we don’t include item $i$: Our available weight capacity remains $w$, and the maximum profit we can obtain is the same as the maximum profit achievable using only the first $i-1$ items, which is $P(i-1, w)$.

We choose the better of these two options, leading to the recursive relationship:

$$P(i, w) = \max\{P(i-1, w),\; P(i-1, w - w_i) + p_i\}$$

It’s important to handle the case where $w_i > w$ (we can’t include item $i$ if its weight exceeds the current capacity). In such a case, the second term of the $\max$ is considered to be $-\infty$, effectively making $P(i-1, w)$ the chosen value.

Runtime Analysis

The dynamic programming approach to the Knapsack Problem involves filling a table of size $(n+1) \times (W+1)$, where $n$ is the number of items and $W$ is the knapsack’s weight capacity. This leads to a time complexity of $O(n \cdot W)$.

Pseudo-Polynomial Time:

Similar to the Subset Sum problem, the Knapsack Problem’s dynamic programming solution has a pseudo-polynomial runtime. The runtime is polynomial in the numerical value of $W$, but not necessarily in the size of the input (the number of bits needed to represent the input).

  • Representing the $n$ items requires $\Omega(n)$ space (for their weights and profits).
  • Representing the capacity $W$ takes about $\log_2 W$ bits.

The total input size is therefore roughly $n + \log W$. If $W$ is relatively small compared to $n$ (e.g., $W \leq n^c$ for some constant $c$), the dynamic programming algorithm behaves effectively like a polynomial-time algorithm.

However, if $W$ is exponentially large compared to $n$ (e.g., $W = 2^n$), the runtime becomes $O(n \cdot 2^n)$, while the input size is only $O(n)$. This runtime is exponential in the input size. This is the characteristic of a pseudo-polynomial runtime: polynomial in the magnitude of the input numbers ($W$), but potentially exponential in the length of their representation ($\log W$).

Implications:

The pseudo-polynomial nature of the runtime means that the dynamic programming approach is efficient for instances where $W$ is not excessively large. However, for very large values of $W$, the runtime can become prohibitive. This is a crucial consideration when applying dynamic programming to the Knapsack Problem. Alternative algorithms might be necessary for instances with exponentially large $W$.

Alternative Subproblem Definition

While the previous dynamic programming approach focused on maximizing profit for a given weight limit, we can formulate an alternative subproblem that focuses on minimizing weight for a given profit target. This can be particularly useful when we are primarily concerned with achieving a certain profit level and want to minimize the resources (weight) used.

Formally, let’s define the alternative subproblem as:

$G(i, p) =$ the minimum weight required to achieve a profit of at least $p$ using a subset of items from $\{1, \dots, i\}$.

In other words, $G(i, p)$ represents the smallest knapsack capacity needed to achieve a profit of $p$ or more, considering only the first $i$ items. If it’s impossible to reach a profit of $p$ using any combination of the first $i$ items, we set $G(i, p) = \infty$.

Recursive Relation

Similar to our previous approach, we can derive a recursive relationship for this subproblem:

$$G(i, p) = \min\{G(i-1, p),\; G(i-1, p - p_i) + w_i\}$$

Here’s the breakdown:

  • If we don’t include item $i$: The minimum weight required to achieve a profit of $p$ remains the same as using only the first $i-1$ items, which is $G(i-1, p)$.

  • If we include item $i$: We gain a profit of $p_i$. Therefore, we need to achieve a profit of at least $p - p_i$ using the remaining items. The weight required for this is $G(i-1, p - p_i)$. Adding the weight of item $i$ ($w_i$) gives us the total weight in this case: $G(i-1, p - p_i) + w_i$.

As before, we select the minimum of these two options. We must handle the case where $p - p_i \leq 0$ (item $i$ alone already covers the remaining profit target). In this situation, $G(i-1, p - p_i)$ is considered to be $0$, since a nonpositive profit target requires no weight at all.

Base Cases:

  • $G(0, 0) = 0$: If we have no items and want zero profit, zero weight is sufficient.
  • $G(0, p) = \infty$ for $p > 0$: We cannot achieve any positive profit with no items.
  • $G(i, 0) = 0$ for all $i$: We can achieve zero profit using zero weight, simply by not taking any items, regardless of the number of available items.

Using this approach:

After filling the table using the recursive relation and base cases, we find the optimal solution by determining the maximum $p$ such that $G(n, p) \leq W$. This represents the maximum profit we can achieve within the knapsack’s weight capacity.
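A bottom-up sketch of this formulation (my own illustration; the function name and example values are hypothetical):

import math

def knapsack_min_weight(w, p, W):
    n = len(w)
    P = sum(p)  # maximum conceivable total profit
    # G[i][q]: minimum weight to reach profit >= q using items 1..i
    G = [[math.inf] * (P + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        G[i][0] = 0  # profit 0 needs no weight
    for i in range(1, n + 1):
        for q in range(1, P + 1):
            G[i][q] = G[i - 1][q]  # exclude item i
            # include item i; a nonpositive remaining target costs weight 0
            rest = G[i - 1][q - p[i - 1]] if q - p[i - 1] > 0 else 0
            G[i][q] = min(G[i][q], rest + w[i - 1])
    # answer: largest q whose minimum weight still fits into the knapsack
    return max(q for q in range(P + 1) if G[n][q] <= W)

print(knapsack_min_weight([5, 4, 6, 3], [6, 3, 5, 2], 9))  # 9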

Runtime Analysis

This alternative dynamic programming approach involves filling a table of size $(n+1) \times (P+1)$, where $n$ is the number of items and $P$ is the maximum possible total profit achievable, summing over all the items’ profits. In other words: $P = \sum_{i=1}^{n} p_i$. This assumes that all items could potentially be included. This leads to a time complexity of $O(n \cdot P)$.

Pseudo-Polynomial Time:

Similar to the first dynamic programming approach, this alternative approach also has a pseudo-polynomial runtime. The runtime is polynomial in the numerical value of $P$, but not necessarily in the input size (number of bits to represent the input).

  • Representing the $n$ items takes $\Omega(n)$ space (for weights and profits).
  • Representing the total profit $P$ requires about $\log_2 P$ bits. Because $P$ is the sum of the $n$ individual profits, it can be as large as $n \cdot p_{\max}$; it will usually be smaller, since individual profit values are smaller than their sum. Nevertheless, $P$ can potentially be exponential in the input size.

The total input size is $n + \log P$. If the individual profits $p_i$ (and hence $P$) are polynomially bounded in $n$, the runtime can be considered polynomial in practice.

However, if $P$ grows exponentially with $n$, or if the individual profit values are significantly larger than $n$ and $W$, then the runtime $O(n \cdot P)$ is dominated by the profit values. The algorithm then behaves exponentially with respect to the input size, similar to the previous dynamic programming approach.

Comparison

The choice between the two dynamic programming approaches depends on the specific problem instance:

  • If the knapsack capacity $W$ is relatively small compared to the potential total profit $P$, the first approach ($O(n \cdot W)$) is generally more efficient.
  • If the maximum possible profit $P$ is relatively small compared to the capacity $W$, this alternative approach ($O(n \cdot P)$) can be more advantageous.

In situations where both and are large, other algorithms, or approximations, might be needed to tackle the problem effectively.

Can We Do Better? P vs. NP and the Knapsack Problem

We’ve seen two dynamic programming approaches to the Knapsack Problem, both with pseudo-polynomial runtimes. This means their efficiency depends on the magnitude of the input numbers (capacity $W$ or maximum profit $P$), not just the number of items $n$. A natural question arises: can we find an algorithm with a runtime truly polynomial in the input size, meaning polynomial in $n$ and $\log W$ (or $\log P$)?

The prevailing conjecture is no. This relates to the famous P vs. NP problem, one of the most fundamental unsolved questions in computer science and mathematics.

P (Polynomial Time): The class of decision problems solvable by a deterministic algorithm in polynomial time. This means the runtime is polynomial in the size of the input (number of bits).

NP (Nondeterministic Polynomial Time): The class of decision problems where a proposed solution can be verified in polynomial time. The Knapsack Problem, framed as a decision problem (“Is there a subset of items with total weight ≤ W and profit ≥ p?”), is in NP. Given a subset of items, we can quickly check if its weight is within the limit and calculate its profit.
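For illustration, such a verifier takes only a few lines (a sketch of my own; the function name is hypothetical):

def verify_knapsack_solution(w, p, W, target_profit, subset):
    # Polynomial-time check of a proposed solution (a set of 0-based indices).
    total_weight = sum(w[i] for i in subset)
    total_profit = sum(p[i] for i in subset)
    return total_weight <= W and total_profit >= target_profit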

NP-hard: A problem is NP-hard if every problem in NP can be reduced to it in polynomial time. If we could solve an NP-hard problem in polynomial time, we could solve every problem in NP in polynomial time.

NP-complete: A problem is NP-complete if it is both NP-hard and in NP. These problems are, in a sense, the “hardest” problems in NP.

The Knapsack Problem and NP-Completeness:

The Knapsack Problem is NP-complete. This means:

  1. It’s in NP (we can verify solutions efficiently).
  2. It’s NP-hard (if we could solve it in polynomial time, P would equal NP).

If either the Subset Sum or Knapsack problems could be solved in polynomial time, then $P = NP$. This is a profound statement because it implies that a fast algorithm for Knapsack would revolutionize computer science. We could solve thousands of other NP-complete problems efficiently, problems currently believed to require exponential time in the worst case.

Current State:

The P vs. NP question remains open. While most computer scientists believe $P \neq NP$, no proof exists. If it turns out that $P = NP$, then a truly polynomial-time algorithm for Knapsack (and all other NP-complete problems) would be possible. However, until then, the pseudo-polynomial dynamic programming approaches and approximation algorithms remain the most practical options for the Knapsack Problem.

Approximation Algorithms for the Knapsack Problem

Because finding an exact solution to the Knapsack Problem in polynomial time is highly unlikely (unless $P = NP$), we often use approximation algorithms. These algorithms aim for near-optimal solutions with guaranteed polynomial runtimes.

Note for readers: I’d suggest watching this video to get an idea of what’s happening…

Key Idea: Scaling and Rounding:

One common technique involves scaling and rounding profit values to simplify the problem and speed up dynamic programming.

  1. Scale and Round Profits: Choose a scaling factor $K$ (a positive integer). Calculate scaled, rounded profits: $\tilde{p}_i = \lfloor p_i / K \rfloor$.

  2. Solve Rounded Instance: Apply dynamic programming with the profit-indexed formulation using the rounded profits $\tilde{p}_i$. Recall that $G(i, p)$ gives the minimum weight needed to achieve a profit of at least $p$ using items from $\{1, \dots, i\}$; the maximum profit achievable with weight limit $W$ is then the largest $p$ with $G(n, p) \leq W$. The recursive relation with the rounded profits becomes: $G(i, p) = \min\{G(i-1, p),\; G(i-1, p - \tilde{p}_i) + w_i\}$.

  3. Return Solution: The solution obtained for this rounded instance serves as our approximation.

Also let us assume that all $w_i \leq W$, i.e., every item fits into the knapsack on its own. This will become important later on.

Illustrative Example ($K = 10$):

Suppose $K = 10$. An item’s profit is divided by $10$ and rounded down: $\tilde{p}_i = \lfloor p_i / 10 \rfloor$. A profit of, say, $47$ would scale to $4.7$ and round down to $4$.

How does this affect the dynamic programming table? We’re reducing the granularity of profit values. With rounded profits in multiples of 10, we only compute table entries for profit values that are also multiples of 10. We’re using a coarser grid, focusing on every tenth “column” (representing profit values) in the DP table. The rows (representing items) remain the same. $K = 1$ gives us the standard DP algorithm, while a larger $K$ increases speed but reduces accuracy.
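A tiny illustration of the rounding step (values hypothetical):

K = 10
profits = [47, 112, 9]
rounded = [p_i // K for p_i in profits]  # floor(p_i / K)
print(rounded)  # [4, 11, 0] -- note the loss of precision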

Let’s clarify the observation regarding profit values and their relationship to the optimal solution in the Knapsack Problem’s approximation algorithm.

Observation:

The key insight for analyzing the approximation guarantee is the following inequality, which holds for each item $i$:

$$p_i - K \;\leq\; K \cdot \tilde{p}_i \;\leq\; p_i$$

In addition, because every item fits into the knapsack on its own ($w_i \leq W$), we have $p_{\max} \leq \mathrm{OPT}$.

Here’s a reminder of what each term represents:

  • $p_i$: Original profit of item $i$.
  • $\tilde{p}_i = \lfloor p_i / K \rfloor$: Scaled and rounded profit of item $i$.
  • $K$: Scaling factor.
  • $p_{\max}$: Maximum profit among all items.
  • $\mathrm{OPT}$: The optimal achievable profit.

Runtime:

The dynamic programming algorithm runs in $O(n \cdot \tilde{P})$ time, where $\tilde{P} = \sum_i \tilde{p}_i$ is the sum of rounded profits, which is upper-bounded by $n \cdot p_{\max} / K$. This gives us a runtime of $O(n^2 \cdot p_{\max} / K)$.

Accuracy:

How good is this approximated solution? Let $I$ be the set of indices returned by our approximation algorithm. The user would calculate the profit using the original profit values: $\sum_{i \in I} p_i$. Let’s analyze this.

Since $I$ is optimal for the rounded profits ($\tilde{p}_i$), we know that for any subset $J$ of indices whose total weight fits in the knapsack, the following holds:

$$\sum_{i \in I} \tilde{p}_i \;\geq\; \sum_{i \in J} \tilde{p}_i$$

Consider the case where $J = I^*$, representing the true optimal subset (which we don’t know explicitly). The above inequality still holds:

$$\sum_{i \in I} \tilde{p}_i \;\geq\; \sum_{i \in I^*} \tilde{p}_i$$

Using our initial observation, we can derive:

$$\sum_{i \in I} p_i \;\geq\; K \sum_{i \in I} \tilde{p}_i \;\geq\; K \sum_{i \in I^*} \tilde{p}_i \;\geq\; \sum_{i \in I^*} (p_i - K) \;\geq\; \mathrm{OPT} - n \cdot K$$

Since $p_{\max} \leq \mathrm{OPT}$, we have:

$$\sum_{i \in I} p_i \;\geq\; \mathrm{OPT} - n \cdot K \;\geq\; \left(1 - \frac{n \cdot K}{p_{\max}}\right) \cdot \mathrm{OPT}$$

To control the accuracy, we introduce $\varepsilon > 0$ and set $K = \frac{\varepsilon \cdot p_{\max}}{n}$. This gives us:

$$\sum_{i \in I} p_i \;\geq\; (1 - \varepsilon) \cdot \mathrm{OPT}$$

This shows that our algorithm achieves a $(1 - \varepsilon)$-approximation.

Now, substituting $K = \varepsilon \cdot p_{\max} / n$ back into our runtime expression:

$$O\!\left(\frac{n^2 \cdot p_{\max}}{K}\right) = O\!\left(\frac{n^3}{\varepsilon}\right)$$

$\varepsilon$ (and with it $K$) controls the trade-off between solution quality and runtime: a smaller $\varepsilon$ (higher accuracy, smaller $K$) leads to a longer runtime.

Fully Polynomial Time Approximation Scheme (FPTAS):

As discussed, setting $K = \varepsilon \cdot p_{\max} / n$ yields a $(1 - \varepsilon)$-approximation in $O(n^3 / \varepsilon)$ time. This is a Fully Polynomial Time Approximation Scheme (FPTAS), providing arbitrarily good approximations with a runtime polynomial in both $n$ and $1/\varepsilon$.
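Putting the pieces together, here is a compact, self-contained sketch of the whole scheme (my own illustration, assuming all $w_i \leq W$ as above; names and example values are hypothetical):

import math

def knapsack_fptas(w, p, W, eps):
    # (1 - eps)-approximate Knapsack via profit scaling.
    n = len(w)
    K = max(1, int(eps * max(p) / n))   # scaling factor K = eps * p_max / n
    pr = [pi // K for pi in p]          # rounded profits
    P = sum(pr)
    # G[i][q]: minimum weight to reach rounded profit >= q using items 1..i
    G = [[math.inf] * (P + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        G[i][0] = 0
    for i in range(1, n + 1):
        for q in range(1, P + 1):
            G[i][q] = G[i - 1][q]       # exclude item i
            rest = G[i - 1][q - pr[i - 1]] if q - pr[i - 1] > 0 else 0
            G[i][q] = min(G[i][q], rest + w[i - 1])  # include item i
    q_best = max(q for q in range(P + 1) if G[n][q] <= W)
    # Walk the table backwards to recover which items were taken.
    subset, q = [], q_best
    for i in range(n, 0, -1):
        if G[i][q] != G[i - 1][q]:      # item i was needed
            subset.append(i)
            q = max(0, q - pr[i - 1])
    return sum(p[i - 1] for i in subset)  # report the *original* profit

print(knapsack_fptas([5, 4, 6, 3], [6, 3, 5, 2], 9, eps=0.5))  # 9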

Can We Do Better?

While the FPTAS we’ve described is powerful, its runtime can still be quite large for practical applications, especially as $n$ grows or a high accuracy (small $\varepsilon$) is required.

The natural question then arises: can we improve upon this? Yes, indeed, there are more sophisticated algorithms that achieve better time complexities.

A much more complex algorithm exists that runs in $\tilde{O}(n + 1/\varepsilon^2)$ time, ignoring logarithmic factors. This represents a significant improvement, particularly for large $n$. What’s even more remarkable is that this algorithm is essentially optimal; one can show (under standard fine-grained complexity conjectures) that significant further improvements are unlikely. This algorithm was developed as recently as April 2024, highlighting ongoing research in this area.

Longest Increasing Subsequence (LIS)

Leetcode: Longest Increasing Subsequence

The Longest Increasing Subsequence (LIS) problem involves finding the length of the longest subsequence within a given sequence such that all elements of the subsequence are in strictly increasing order. Note that the elements of the subsequence do not need to be contiguous within the original sequence.

Formal Definition:

  • Input: A sequence (often represented as an array) $A = [A[1], \dots, A[n]]$ of $n$ numbers. For simplicity, let’s assume all numbers in $A$ are unique.
  • Output: The length of the longest strictly increasing subsequence in $A$. A subsequence is a sequence that can be derived from $A$ by deleting some or no elements without changing the order of the remaining elements.

A first idea

Let’s explore a first attempt at defining a subproblem for the LIS problem and see why it falls short.

We might initially define the subproblem as follows:

$E(i) =$ the length of the longest increasing subsequence ending at index $i$. We would then return the maximum value of $E(i)$ over all $i$.

Let’s consider the example $A = [1, 8, 9, 5, 6, 7]$ and see how this subproblem definition works:

  • $E(1) = 1$ (the LIS ending at index 1 is just $[1]$)
  • $E(2) = 2$ (the LIS ending at index 2 is $[1, 8]$)
  • $E(3) = 3$ (the LIS ending at index 3 is $[1, 8, 9]$)
  • $E(4) = 2$ (the LIS ending at index 4 is $[1, 5]$)
  • $E(5) = 3$ (the LIS ending at index 5 is $[1, 5, 6]$)
  • $E(6) = 4$ (the LIS ending at index 6 is $[1, 5, 6, 7]$)

If you observe this, it’s trivial to see that $E(i)$ alone doesn’t hint at what $E(i+1)$ might be. For example, $E(2)$ and $E(4)$ both have the value $2$, yet the corresponding subsequences end in very different values ($8$ vs. $5$): the one ending in $5$ can still be extended by $6$ and $7$, while the one ending in $8$ cannot. The length alone loses this information.

This subproblem definition, while seemingly intuitive, is not sufficient for efficiently finding the overall LIS. We need a more robust subproblem definition that captures the global nature of the LIS.

Brute Force

Let’s refine our subproblem definition further. This approach won’t be the most efficient, but it brings us closer to the optimal solution and helps illustrate the thought process.

Define the subproblem as follows:

$L(i, l)$: Does there exist an increasing subsequence of length $l$ that ends at index $i$? (This is a boolean value, 1 if true and 0 if false.)

Recursive Relation:

  1. Base Case: $l = 1$: $L(i, 1) = 1$ for all $i$. Any single element constitutes an increasing subsequence of length 1.

  2. Recursive Step: $l > 1$: An increasing subsequence of length $l$ ending at index $i$ can be formed if there exists an increasing subsequence of length $l - 1$ ending at some index $j < i$ such that $A[j] < A[i]$. We can express this recursively as:

$$L(i, l) = \begin{cases} 1 & \text{if } \exists\, j < i \text{ with } A[j] < A[i] \text{ and } L(j, l-1) = 1, \\ 0 & \text{otherwise.} \end{cases}$$

Runtime Analysis:

The size of the table is $n \times n$. For each entry $L(i, l)$, we might need to check up to $n$ previous entries in the table. This leads to a time complexity of $O(n^2 \cdot n) = O(n^3)$ (the exists step is what makes each entry costly).
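A direct sketch of this boolean table (my own illustration; the example values are hypothetical):

def lis_brute_force(A):
    n = len(A)
    # L[i][l] == True iff an increasing subsequence of length l ends at index i
    L = [[False] * (n + 1) for _ in range(n)]
    for i in range(n):
        L[i][1] = True  # every element alone has length 1
    for l in range(2, n + 1):
        for i in range(n):
            # exists j < i with A[j] < A[i] and a length-(l-1) subsequence at j?
            L[i][l] = any(A[j] < A[i] and L[j][l - 1] for j in range(i))
    return max(l for i in range(n) for l in range(1, n + 1) if L[i][l])

print(lis_brute_force([1, 8, 9, 5, 6, 7]))  # 4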

Dynamic Programming: $O(n^2)$

This approach focuses on efficiently determining the smallest possible ending value for an increasing subsequence of a given length.

Subproblem Definition

Define the subproblem as follows:

$M(i, l) =$ the smallest ending value of an increasing subsequence of length $l$, considering only the elements $A[1..i]$. If no increasing subsequence of length $l$ exists using these elements, set $M(i, l) = \infty$.

Recursive Relation

The value of $M(i, l)$ can be determined by considering whether or not the element $A[i]$ can extend an existing increasing subsequence:

$$M(i, l) = \begin{cases} A[i] & \text{if } M(i-1, l-1) < A[i] < M(i-1, l), \\ M(i-1, l) & \text{otherwise.} \end{cases}$$

Let’s break this down:

  • Case 1: $M(i-1, l-1) < A[i] < M(i-1, l)$: If $A[i]$ is greater than the smallest ending value of a subsequence of length $l-1$ (meaning we can potentially extend a subsequence) and is smaller than the current smallest ending value for subsequences of length $l$, then including $A[i]$ creates an increasing subsequence of length $l$ with a smaller ending value. We update $M(i, l)$ to $A[i]$.
  • Case 2 (Otherwise): If the condition in Case 1 is not met, either we can’t extend any subsequence of length $l-1$ with $A[i]$, or including $A[i]$ doesn’t create a smaller ending value for a subsequence of length $l$. In this case, the smallest ending value remains the same as $M(i-1, l)$.

Base Cases:

  • $M(0, l) = \infty$ for all $l \geq 1$: using no elements, no increasing subsequence of positive length exists.
  • $M(i, 0) = -\infty$ for all $i \geq 0$: the empty subsequence has length 0; treating its ending value as $-\infty$ lets any element extend it.

This establishes the foundation for calculating $M(i, l)$ for larger $i$ and $l$.

Runtime Analysis

The table has dimensions $(n+1) \times (n+1)$, and filling each entry takes constant time ($O(1)$). The runtime of this dynamic programming approach is therefore $O(n^2)$.
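A sketch of this table fill (my own illustration, mirroring the recurrence above):

import math

def lis_quadratic(A):
    n = len(A)
    INF = math.inf
    # M[i][l]: smallest ending value of an increasing subsequence of
    # length l using only A[1..i] (1-based indices, as in the text)
    M = [[INF] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        M[i][0] = -INF  # empty subsequence: any element extends it
    for i in range(1, n + 1):
        x = A[i - 1]
        for l in range(1, n + 1):
            if M[i - 1][l - 1] < x < M[i - 1][l]:  # Case 1
                M[i][l] = x
            else:                                  # Case 2
                M[i][l] = M[i - 1][l]
    return max(l for l in range(n + 1) if M[n][l] < INF)

print(lis_quadratic([1, 8, 9, 5, 6, 7]))  # 4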

Example: For $A = [1, 8, 9, 5, 6, 7]$, the table $M$ (rows $i = 0, \dots, 6$, columns $l = 1, \dots, 6$) looks as follows:

| $i$ | $l=1$ | $l=2$ | $l=3$ | $l=4$ | $l=5$ | $l=6$ |
|-----|-------|-------|-------|-------|-------|-------|
| 0 | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ |
| 1 | 1 | $\infty$ | $\infty$ | $\infty$ | $\infty$ | $\infty$ |
| 2 | 1 | 8 | $\infty$ | $\infty$ | $\infty$ | $\infty$ |
| 3 | 1 | 8 | 9 | $\infty$ | $\infty$ | $\infty$ |
| 4 | 1 | 5 | 9 | $\infty$ | $\infty$ | $\infty$ |
| 5 | 1 | 5 | 6 | $\infty$ | $\infty$ | $\infty$ |
| 6 | 1 | 5 | 6 | 7 | $\infty$ | $\infty$ |

Observations:

  • All the rows are sorted (values increase from left to right).
  • We are only changing one value per row.
    • We can find that entry in $O(\log n)$ using binary search.
    • This means we don’t need a 2D DP table; instead, we can reuse the same length-$n$ array $n$ times.
    • This gives a runtime of $O(n \log n)$.

Dynamic Programming: $O(n \log n)$

This approach optimizes the previous DP approach by utilizing binary search, leading to a significant improvement in runtime.

Subproblem Definition

This is the same as in the previous $O(n^2)$ approach:

  • $M(i, l) =$ the smallest ending value of an increasing subsequence of length $l$, considering only the elements $A[1..i]$. If no increasing subsequence of length $l$ exists using these elements, set $M(i, l) = \infty$.

Key Insight: Instead of storing the entire table, we can maintain a single array, tails, of size $n + 1$. The entry tails[l] stores the smallest ending value of all increasing subsequences of length l found so far.

Note for readers: Good video explaining this specific $O(n \log n)$ algorithm.

Recursive Relation

We iterate through the input array A, processing each element A[i]. For each A[i], we perform a binary search in the tails array to find the largest index j where tails[j] < A[i]. This implies that we can extend an increasing subsequence of length j by adding A[i].

Algorithm

  1. Initialization: Initialize tails[1] = A[1] and set all other tails[i] to infinity.
  2. Iteration: For each element A[i] in the input array:
    • Perform binary search in tails array to find the largest index j where tails[j] < A[i].
    • Update: Set tails[j+1] = A[i]. We have found an increasing subsequence of length j + 1 whose smallest possible ending value is now A[i].
  3. Result: The length of the longest increasing subsequence is the largest index l in tails where tails[l] is not infinity.

Runtime Analysis

  • Binary Search: Each binary search operation takes $O(\log n)$ time.
  • Iteration: We perform one binary search for each of the $n$ elements in the input array.

Therefore, the total runtime of this approach is $O(n \log n)$.

Example

Consider the same example as before, A = [1, 8, 9, 5, 6, 7].

  1. Initialize tails = [inf, 1, inf, inf, inf, inf, inf] (index 0 is unused; tails[1] = A[1] = 1).
  2. A[2] = 8:
    • Binary search finds j = 1 (since tails[1] = 1 < 8, but tails[2] = inf is not < 8)
    • Update tails[2] = 8: tails = [inf, 1, 8, inf, inf, inf, inf]
  3. A[3] = 9:
    • Binary search finds j = 2 (since tails[2] = 8 < 9)
    • Update tails[3] = 9: tails = [inf, 1, 8, 9, inf, inf, inf]
  4. A[4] = 5:
    • Binary search finds j = 1 (since tails[1] = 1 < 5, but tails[2] = 8 is not < 5)
    • Update tails[2] = 5: tails = [inf, 1, 5, 9, inf, inf, inf]
  5. A[5] = 6:
    • Binary search finds j = 2 (since tails[2] = 5 < 6, but tails[3] = 9 is not < 6)
    • Update tails[3] = 6: tails = [inf, 1, 5, 6, inf, inf, inf]
  6. A[6] = 7:
    • Binary search finds j = 3 (since tails[3] = 6 < 7)
    • Update tails[4] = 7: tails = [inf, 1, 5, 6, 7, inf, inf]

The final tails array is [inf, 1, 5, 6, 7, inf, inf]. The length of the longest increasing subsequence is the largest index where tails is not infinity, which is 4 (corresponding to the subsequence [1, 5, 6, 7]).

Pseudocode

def longest_increasing_subsequence(arr):
  n = len(arr)
  # tails[l]: smallest ending value of an increasing subsequence of length l
  tails = [float('inf')] * (n + 1)  # index 0 is unused
  tails[1] = arr[0]                 # the first element starts a length-1 subsequence

  for i in range(1, n):
    # Binary search for the largest index j where tails[j] < arr[i]
    left, right = 1, n
    while left <= right:
      mid = (left + right) // 2
      if tails[mid] < arr[i]:
        left = mid + 1
      else:
        right = mid - 1

    # left == j + 1: arr[i] now ends the best subsequence of length j + 1
    tails[left] = arr[i]

  # The LIS length is the largest index j where tails[j] is not infinity
  for j in range(n, 0, -1):
    if tails[j] != float('inf'):
      return j

arr = [1, 8, 9, 5, 6, 7]
result = longest_increasing_subsequence(arr)
print(f"The length of the longest increasing subsequence is: {result}")  # 4

Continue here: 09 Graph Theory, Eulerian Cycle Algorithm