Lecture from: 03.10.2024 · Video: Videos ETHZ · Rui Zhang's Notes · Leetcode: 53. Maximum Subarray
Max Subarray Sum
Imagine you’re given a list of daily stock price changes. Your goal is to determine the maximum profit you could make by buying and selling the stock at most once. This problem, known as the Max Subarray Sum problem, involves finding the subarray with the largest sum, representing the period where your investment yields the highest return.
Algorithm 0: The Naive Approach ($Θ(n^{3})$)
A simple solution is to consider every possible subarray:
Pseudocode:
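The original pseudocode is missing from these notes; here is a minimal Python sketch of the triple-loop approach (function and variable names are my own):

```python
def max_subarray_naive(array):
    """Check every subarray by recomputing its sum from scratch: Θ(n^3)."""
    n = len(array)
    max_sum = float('-inf')
    for i in range(n):                  # starting point
        for j in range(i, n):           # ending point (inclusive)
            current_sum = 0
            for k in range(i, j + 1):   # recompute the sum of array[i..j]
                current_sum += array[k]
            max_sum = max(max_sum, current_sum)
    return max_sum
```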
Here, we use three nested loops. The outer loop iterates through starting points (i), the second loop through ending points (j), and the innermost loop calculates the sum for each subarray. While this approach is conceptually straightforward, it suffers from a time complexity of $Θ(n^{3})$ due to the triple nesting.
Algorithm 1: Reduce Duplicate Work ($Θ(n^{2})$)
When calculating the sum of a subarray from index i to j, we only need the current element (array[j]) plus the sum of the previous subarray (from index i to j-1). This means we can avoid recalculating the entire subarray sum for each iteration.
Pseudocode:
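The notes omit the pseudocode here as well; a possible Python rendering of the incremental approach (names match the explanation below, but this is a sketch, not the lecture's exact pseudocode):

```python
def max_subarray_quadratic(array):
    """Reuse the running sum of array[i..j-1] when extending to j: Θ(n^2)."""
    n = len(array)
    max_sum = float('-inf')
    for i in range(n):                 # starting point of the subarray
        current_sum = 0
        for j in range(i, n):          # extend the subarray one element at a time
            current_sum += array[j]    # sum of array[i..j], no recomputation
            if current_sum > max_sum:
                max_sum = current_sum
    return max_sum
```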
Explanation:
- Initialization: max_sum is initialized to negative infinity to ensure that any valid subarray sum will be greater.
- Outer Loop (i): This loop iterates through each element as a potential starting point for a subarray.
- Inner Loop (j): This loop extends the subarray from index i to the current position j. current_sum is updated by adding the value at array[j]. We compare current_sum with max_sum and update max_sum if current_sum is larger.
Time Complexity:
The nested loops result in $Θ(n^{2})$ time complexity because the inner loop iterates $n - i$ times for each iteration of the outer loop.
This algorithm still computes every possible subarray sum. To improve the time complexity, we need to find a way to avoid computing all the subarray sums, since we cannot reduce the number of operations further while still computing every sum.
Algorithm 3: Divide and Conquer ($Θ(n \log n)$)
This algorithm leverages the concept of divide and conquer to efficiently solve the Max Subarray Sum problem.
Steps
1. Split: Divide the input array (array) into two roughly equal halves: left_half and right_half.
2. Base Case: If either half contains only one element, its sum is the maximum subarray sum for that half.
3. Recurse: Find the maximum subarray sum (max_left) within left_half recursively, and the maximum subarray sum (max_right) within right_half recursively.
4. Combine: Calculate cross_sum, the maximum sum of a subarray spanning both halves, by considering all possible starting points in left_half and ending points in right_half. The overall maximum subarray sum (max_sum) is the largest among:
   - max_left (maximum subarray sum in the left half)
   - max_right (maximum subarray sum in the right half)
   - cross_sum (maximum subarray sum spanning both halves)
5. Return: Return max_sum.
Calculating cross_sum
The maximum-sum subarray spanning both halves (cross_sum) can be broken down into two parts:
$i \le \frac{n}{2} < j \Rightarrow \max(S_{i,j}) = \max(S_{i,\frac{n}{2}}) + \max(S_{\frac{n}{2}+1,j})$
To calculate this efficiently, we can adapt the idea from Algorithm 2. Instead of iterating through all possible pairs (i, j), we fix one pointer (either i or j) and incrementally move the other.
For each fixed position, we calculate the maximum subarray sum by:
- Starting from that fixed point.
- Iteratively moving the free pointer and adding its value to the running sum.
- Updating the maximum sum encountered during this process.
This gives us $\frac{n}{2} - 1$ additions for each half, which means that in total we need $2(\frac{n}{2} - 1) + 1 = n - 1$ additions to find the cross_sum.
Pseudocode:
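Since the pseudocode did not survive the export, here is a Python sketch of the divide-and-conquer steps above (the names max_left, max_right, and cross_sum follow the notes; the rest are my own):

```python
def max_subarray_dc(array):
    """Divide and conquer: Θ(n log n)."""
    def solve(lo, hi):
        if lo == hi:                       # base case: a single element
            return array[lo]
        mid = (lo + hi) // 2
        max_left = solve(lo, mid)          # best subarray entirely in the left half
        max_right = solve(mid + 1, hi)     # best subarray entirely in the right half
        # Best suffix ending at mid: fix j = mid, move i leftwards.
        best_suffix = current = array[mid]
        for k in range(mid - 1, lo - 1, -1):
            current += array[k]
            best_suffix = max(best_suffix, current)
        # Best prefix starting at mid + 1: fix i = mid + 1, move j rightwards.
        best_prefix = current = array[mid + 1]
        for k in range(mid + 2, hi + 1):
            current += array[k]
            best_prefix = max(best_prefix, current)
        cross_sum = best_suffix + best_prefix  # best subarray spanning both halves
        return max(max_left, max_right, cross_sum)
    return solve(0, len(array) - 1)
```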
Time Complexity $Θ(n \log n)$
We can write down the recurrence of additions in the following manner:
$A(n) = 2A(\frac{n}{2}) + n - 1$
We can see that the time complexity is $A(n) = Θ(n \log n)$. Buuut, can we do better???
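To see why, we can unroll the recurrence level by level (a sketch, assuming $n$ is a power of two): level $k$ has $2^k$ subproblems of size $\frac{n}{2^k}$, each costing $\frac{n}{2^k} - 1$ additions, so level $k$ costs $n - 2^k$ in total. Summing over the $\log_2 n$ levels:

$A(n) = \sum_{k=0}^{\log_2 n - 1} \left(n - 2^k\right) = n \log_2 n - (n - 1) = Θ(n \log n)$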
Algorithm 4: A Different Recursion (Kadane's Algorithm)
Currently, we've been using the divide and conquer concept to reduce the problem as $n → \frac{n}{2}$. But what if we are able to instead do $n → n - 1$?
Let us define $R_{j} := \max_{i \le j} S_{i,j}$, i.e. $R_{j}$ is the maximum sum of a subarray ending at index $j$. Now observe how to obtain $R_{j}$ from $R_{j-1}$: if $R_{j-1}$ is positive, we extend the previous subarray by adding the number at index $j$, giving $R_{j} = R_{j-1} + a_{j}$; if $R_{j-1}$ is negative, then $a_{j}$ alone is bigger than $R_{j-1} + a_{j}$, so we start a "new subarray" at index $j$ and set $R_{j} = a_{j}$. This algorithm is called Kadane's Algorithm.
Kadane's Algorithm provides an efficient solution to the maximum subarray sum problem. Instead of resorting to divide-and-conquer, it leverages dynamic programming: the core idea is to iteratively build up a solution by considering the contribution of each element to potential subarrays.
Let R[j] represent the maximum subarray sum ending at index j. We can express this recursively:
- Base Case: R[0] = a[0] (the maximum subarray sum ending at the first index is simply the value at that index).
- Recursive Step: R[j] = max(a[j], R[j-1] + a[j])
  - If starting a new subarray at index j yields a greater sum (a[j]), then we update R[j] accordingly.
  - Otherwise, we extend the previous subarray ending at index j - 1 by adding the current element (R[j-1] + a[j]), ensuring we consider both possibilities.
While recursion can be insightful, it often leads to redundant calculations. Kadane’s Algorithm is typically implemented iteratively for efficiency:
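The iterative implementation is again missing from the notes; a Python sketch that follows the recurrence R[j] = max(a[j], R[j-1] + a[j]) directly (function name is my own):

```python
def kadane(array):
    """Kadane's Algorithm: single pass, Θ(n)."""
    max_sum = current = array[0]           # R[0] = a[0]
    for x in array[1:]:
        current = max(x, current + x)      # R[j] = max(a[j], R[j-1] + a[j])
        max_sum = max(max_sum, current)    # track the best R[j] seen so far
    return max_sum
```

Note that only R[j-1] is needed to compute R[j], so a single variable (current) suffices and no array of R-values has to be stored.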
Time complexity
This algorithm runs in $Θ(n)$, because we iterate through the array only once to calculate the maximum subarray sum.
Buuut, can we do better??? → Not really, because we can't skip looking at any element and still compute the maximum subarray sum. Every correct algorithm for this problem must perform at least $n$ read operations and therefore needs $Ω(n)$ time, so Kadane's $Θ(n)$ is optimal.
Continue here: 04 Searching and Sorting