Lecture from: 18.10.2024  Video: Videos ETHZ
Solution Strategies
When tackling algorithmic problems, there are two primary approaches for breaking down the problem: Iterative and Recursive. Each approach has its own advantages and is suitable for different types of problems. Let’s explore both strategies, along with two common recursive techniques: Decrease and Conquer and Divide and Conquer.
1. Iterative Strategy
In the iterative approach, the problem is solved using loops (such as `for` or `while`), processing it in small steps until the solution is reached. This strategy repeatedly applies a set of instructions to the data until a condition is met.
- Direct computation: The problem is solved step by step using loops.
- No recursion: The solution does not call itself, making the iterative method memory-efficient.
- Better for performance-critical scenarios: Since recursion incurs function-call overhead, iteration can be more efficient for large datasets.
Example: Finding Factorial Iteratively
In this case, we use a `for` loop to directly compute the factorial by multiplying the numbers from `1` to `n`.
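The lecture's code is not reproduced above; a minimal sketch of the iterative version (class and method names are illustrative) might look like this:

```java
public class FactorialIterative {
    // Computes n! by multiplying 1 * 2 * ... * n in a loop.
    static long factorial(int n) {
        long result = 1;
        for (int i = 1; i <= n; i++) {
            result *= i;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // prints 120
    }
}
```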
2. Recursive Strategy
In the recursive approach, the problem is solved by reducing it into smaller instances of the same problem, and the function repeatedly calls itself. Recursion relies on two components:
 A base case that defines when to stop the recursion.
 A recursive case where the problem is reduced or split into subproblems.
Recursion is particularly useful for problems that have a natural hierarchical structure or where a direct solution is difficult to express iteratively.
- Self-referential function calls: The function solves the problem by delegating smaller subproblems to itself.
- Natural fit for certain problems: Recursion works well for problems like tree traversal, backtracking, and problems with a divide-and-conquer structure.
- Can lead to stack overflow: Excessive recursion depth may result in a stack overflow error if the recursion isn't terminated properly.
Recursive Techniques
a) Decrease and Conquer
In this strategy, we reduce the problem size step by step, typically by removing one or more elements, and solve the smaller problem recursively.
- Recurrence Form: $T(n) = T(n-1) + f(n)$, i.e., one recursive call on a problem of size $n - 1$ plus the cost $f(n)$ of the reduction step.
- Example: The factorial function is an example of decrease and conquer, where each recursive call solves a smaller version of the problem by decreasing `n`.
Example: Factorial with Decrease and Conquer
In each recursive call, the problem size is reduced by 1 (from `n` to `n - 1`) until the base case is reached.
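The slide's code is not shown above; a minimal sketch of the decrease-and-conquer version (names are illustrative) is:

```java
public class FactorialRecursive {
    // Decrease and conquer: reduce n! to n * (n - 1)!.
    static long factorial(int n) {
        if (n <= 1) {                    // base case: stops the recursion
            return 1;
        }
        return n * factorial(n - 1);     // recursive case: smaller problem
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // prints 120
    }
}
```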
b) Divide and Conquer
In this strategy, the problem is split into two or more smaller subproblems, each of which is solved recursively. After solving the subproblems, their results are combined to form the solution to the original problem.
- Recurrence Form: $T(n) = a\,T(n/b) + f(n)$, i.e., $a$ recursive calls on subproblems of size $n/b$ plus the cost $f(n)$ of splitting and combining (for Merge Sort: $T(n) = 2\,T(n/2) + \Theta(n)$).
 Example: Merge Sort, where the array is split into two halves, each half is sorted recursively, and then the two sorted halves are merged.
Example: Merge Sort with Divide and Conquer
In this example, the problem (sorting the array) is divided into two subproblems (sorting two halves of the array). Once the subproblems are solved, the results are merged.
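Since the lecture's Merge Sort code is not reproduced above, here is one possible sketch (class and method names are illustrative):

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Divide and conquer: split, sort both halves recursively, then merge.
    // Sorts the half-open range a[lo..hi).
    static void mergeSort(int[] a, int lo, int hi) {
        if (hi - lo <= 1) return;     // base case: 0 or 1 elements are sorted
        int mid = (lo + hi) / 2;
        mergeSort(a, lo, mid);        // sort left half
        mergeSort(a, mid, hi);        // sort right half
        merge(a, lo, mid, hi);        // combine the two sorted halves
    }

    // Merges the sorted ranges a[lo..mid) and a[mid..hi).
    static void merge(int[] a, int lo, int mid, int hi) {
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi) {
            tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        }
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi)  tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, lo, tmp.length);
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 5, 6};
        mergeSort(a, 0, a.length);
        System.out.println(Arrays.toString(a)); // prints [1, 2, 5, 5, 6, 9]
    }
}
```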
Comparison of Strategies
| Strategy | Characteristics | Use Cases |
| --- | --- | --- |
| Iterative | Direct step-by-step solution; more efficient for large datasets; no overhead from function calls | Problems that can be solved sequentially; no natural hierarchical structure; performance-critical applications |
| Recursive (Decrease and Conquer) | Reduces problem size in each recursive call; requires a base case; slower for large datasets due to overhead | Problems like factorial, Fibonacci; natural self-referencing problems; backtracking problems |
| Recursive (Divide and Conquer) | Splits problem into two or more subproblems; requires combining subproblem results; suitable for parallel computation | Sorting algorithms like Merge Sort, Quick Sort; tree traversal, graph algorithms; concurrency, parallelizable problems |
Logical Conclusions
Large programs are often developed by multiple people and are composed of smaller parts or modules, such as methods, classes, and libraries. To manage such complexity, it’s essential to clearly specify what each module is supposed to do. This helps both module developers and users understand its purpose and use.
Two Key Perspectives:
 For module developers: When is the module considered complete? What remains to be done?
 For module users: How should the module be used? What results can I expect?
Specifications
Examples of Specifications:

Square Root Function:
- Precondition (`PRE`): The input `x` must be greater than or equal to zero.
- Postcondition (`POST`): The square of the result equals `x`.

Sorting Function:
- Precondition (`PRE`): No specific condition (it is always true).
- Postcondition (`POST`): For all indices `i`, the element at position `i` is less than or equal to the element at position `i + 1`. This ensures the array is sorted in non-decreasing order.
Formal Specifications in Mathematical Language
These specifications are expressed using logic and concepts from discrete mathematics to ensure precision and provability. Each module can be described using preconditions and postconditions:
- Precondition (`PRE`): Defines what must be true before the module is executed.
- Postcondition (`POST`): Defines what will be true after the module has finished executing.
In the examples above, preconditions and postconditions are added as special Java comments in the program to define the contract for the module.
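The exact comment convention from the lecture is not reproduced here; one common sketch of such a contract in Java comments (the method and its integer-valued postcondition are illustrative) is:

```java
public class SqrtSpec {
    // PRE:  x >= 0
    // POST: returns the largest integer r with r * r <= x
    //       (for a real-valued square root, the POST would read: result * result == x)
    static int intSqrt(int x) {
        int r = 0;
        while ((r + 1) * (r + 1) <= x) {
            r++; // invariant maintained: r * r <= x
        }
        return r;
    }

    public static void main(String[] args) {
        System.out.println(intSqrt(16)); // prints 4
    }
}
```

A caller only needs the two comment lines: establish `PRE` before the call, and rely on `POST` afterwards, without reading the loop.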
The “Contract” Between Developers and Users
A specification consisting of a precondition and a postcondition establishes a contract between the module developer and the module user:
 Developer’s Responsibility: The developer can assume the preconditions are true when the module is called, allowing them to focus on fulfilling the postcondition.
 User’s Responsibility: The user doesn’t need to know the internal implementation of the module but can rely on the postconditions being met if the preconditions are satisfied.
Benefits of Well-Specified Software
Vision for Software Development:
 Every module has a formal specification.
 The overall program is created by combining correctly specified modules.
 Using mathematical proofs, developers can ensure that the entire program behaves as expected and delivers the desired results.
Practical Advantages:
 Thinking in Specifications: By specifying preconditions and postconditions, developers can guide their programming process more systematically and efficiently.
 Defining Interfaces: It helps define interfaces clearly (e.g., method signatures and contracts).
- Module Reusability: Well-specified modules can be reused with confidence in various contexts.
 Collaborative Development: Developers and users can work independently of each other, as long as they adhere to the established contract (preconditions and postconditions).
While these ideas are often applied informally in early programming education, the full formal theory behind specification and verification is covered in more advanced topics such as Formal Methods (usually introduced in later semesters).
The Why
Specifying modules with preconditions and postconditions enables efficient and correct software development by:
 Allowing developers to focus on fulfilling their part of the contract.
 Giving users confidence in the module’s behavior without needing to know its internal implementation.
 Establishing a framework for building complex systems out of smaller, independently developed components.
Ultimately, thinking in terms of specifications not only makes programming more systematic but also helps ensure the software behaves correctly and predictably.
Assertion (Logical Statement)
An assertion is a logical statement that is either true or false. The condition for a valid assertion is that it must make sense to ask whether the statement is true or false. Assertions help to formally document and verify the behavior of code, ensuring that certain conditions hold at specific points in the program.
Key Points About Assertions:
- Comments in code: Assertions are written as comments in the source code. Although these comments don't affect how the program runs in Java, they are significant for developers and for reasoning about the code.
- Common notation: In textbooks and formal methods, assertions are often enclosed in curly braces `{...}`. For simplicity, we often omit the `//` comment markers in such examples.
- Side-effect-free: Assertions must be side-effect-free, meaning expressions like `i++` (which modifies the variable) are not valid assertions.
- Mathematical integers: When reasoning with assertions, we assume mathematical integers ℤ rather than Java's `int`, meaning we ignore overflow issues, though integer division (e.g., `5 / 2 == 2`) still applies.
Forward Reasoning (Vorwärtsschliessen)
Forward reasoning involves starting with a known precondition (a logical statement assumed to be true before a block of code) and then determining the postconditions by stepwise reasoning through the code.
Procedure:
- Start: Choose or assume a meaningful precondition (also called `Assertion₀`).
- Step by step: Derive the next assertion (`Assertionᵢ`) by applying the effect of the next instruction (`Statementᵢ`).
- Goal: Derive a valid postcondition (also called `Assertionₙ`).
The process answers the question: Forward: “What guarantees (postcondition) does my code provide, given the chosen precondition?”
Example:
At each step, we derive the updated state based on the effect of each assignment; a mathematical implication from the last assertion then yields the postcondition `{z > 59}`.
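The slide's concrete statements are not reproduced above; the following sketch assumes two assignments adding 17 and 42, which is consistent with the final assertion `{z > 59}`:

```java
public class ForwardReasoning {
    static int compute(int u) {
        // {u > 0}       assumed precondition (Assertion₀)
        int x = u + 17;
        // {x > 17}      from u > 0, since x == u + 17
        int z = x + 42;
        // {z > 59}      from x > 17, since z == x + 42
        return z;
    }

    public static void main(String[] args) {
        System.out.println(compute(1)); // prints 60, and 60 > 59 holds
    }
}
```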
Backward Reasoning (Rückwärtsschliessen)
Backward reasoning starts with a known or desired postcondition and works backwards through the code to figure out what preconditions must hold to guarantee that the postcondition is met.
Procedure:
 Start: Choose or assume a meaningful postcondition.
- Step by step: Derive the previous assertion (`Assertionᵢ₋₁`) by reasoning about what must have been true before each instruction (`Statementᵢ`).
- Goal: Derive a necessary and sufficient precondition.
The process answers the question: Backward: “What precondition must hold for my code to ensure the chosen postcondition?”
Example:
By backward reasoning, we deduce that to guarantee the final condition `z > 0`, the precondition must be `u > -59` (since `u + 17 + 42 > 0` simplifies to `u > -59`).
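Using the same assumed pair of assignments (adding 17 and 42), the backward derivation can be sketched as:

```java
public class BackwardReasoning {
    static int compute(int u) {
        // {u > -59}     derived: needed so that u + 17 > -42
        int x = u + 17;
        // {x > -42}     derived: needed so that x + 42 > 0
        int z = x + 42;
        // {z > 0}       the chosen postcondition we work backwards from
        return z;
    }

    public static void main(String[] args) {
        System.out.println(compute(-58)); // prints 1, and 1 > 0 holds
    }
}
```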
Comparison of Forward and Backward Reasoning
Common Ground:
 Both methods establish valid pairs of preconditions and postconditions (contracts). If the precondition holds before execution, then the postcondition will hold after execution.
Differences:
 Forward reasoning:
 It follows the natural flow of the program (linearly).
 It tends to accumulate a lot of details, some of which may not be relevant to the final postcondition.
 Backward reasoning:
 Reading the code backwards requires practice, but it offers a deeper understanding of how each statement contributes to achieving the final goal.
 It is particularly useful for ensuring that the code meets specific guarantees or postconditions.
Assertions in Java at Runtime
In Java, assertions can be enforced at runtime using the `assert` statement.
Syntax: `assert expr;` where `expr` is a boolean expression. If the expression evaluates to `false`, the program throws an `AssertionError` and terminates.
Example:
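The lecture's example is not shown above; a minimal sketch (the `abs` method is illustrative) is:

```java
public class AssertDemo {
    static int abs(int x) {
        int result = (x < 0) ? -x : x;
        // Checked at runtime only when assertions are enabled (java -ea AssertDemo).
        assert result >= 0 : "absolute value must be non-negative";
        return result;
    }

    public static void main(String[] args) {
        System.out.println(abs(-7)); // prints 7
    }
}
```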
Semantics:
If the expression in an `assert` statement evaluates to `false`, the program halts execution, which is useful for checking assumptions or invariants during development.
Enabling Assertions:
- To enable assertions in Java (they are disabled by default), you need to provide the `-enableassertions` flag (or `-ea` for short) as a JVM argument when running the program. In Eclipse, this option can be enabled in the run configurations.
The Why
Assertions provide a formal way to specify and verify the behavior of code. Using forward and backward reasoning helps developers think systematically about how each part of the program contributes to its overall correctness. Assertions can also be used dynamically in Java to verify assumptions during execution, allowing developers to catch errors early in the development process.
By practicing both forward and backward reasoning, developers gain a better understanding of the code’s behavior and can ensure that their programs meet the desired specifications effectively.
Hoare Logic
Syntax:
A Hoare triple consists of two logical assertions and a segment of code, representing the formalized reasoning about a program’s behavior. The general form of a Hoare triple is:
$\{P\}\ S\ \{Q\}$

Components of a Hoare Triple:
- P is the precondition: A logical statement that must be true before the execution of the program segment (e.g., `u > 59`).
- S is the program segment: A statement (or block of statements) in the program.
- Q is the postcondition: A logical statement that will be true after the execution of the program segment (e.g., `z > 59`).
Meaning of a Hoare Triple:
A Hoare triple $\{P\}\ S\ \{Q\}$ is valid if and only if:
 For any state where the precondition P holds, the execution of statement S will always result in a state where the postcondition Q holds.
Informally:
 If P is true before the execution of S, then Q must be true afterward.
Otherwise, the Hoare triple is considered invalid.
Establishing (Proving) a Hoare Triple:
To prove that a Hoare triple $\{P\}\ S\ \{Q\}$ is valid, we use specific rules that relate the preconditions and postconditions for different types of program statements.
These rules include:
 Assignment rule: For statements that assign values to variables.
 Sequence rule: For multiple statements executed in sequence.
- Conditional rule: For branching statements, like `if`-`else`.
- Loop rule: For statements that involve loops.
 Other constructs: For more complex language features (such as method calls or exceptions).
Each rule exists in both forward and backward versions:
 Forward rules help derive postconditions based on known preconditions.
 Backward rules help determine the preconditions needed to ensure given postconditions.
For the purpose of this course, we focus on the backward reasoning rules as they are more relevant for reasoning about the correctness of programs.
Hoare Logic Rules
Assignment Rule
The goal is to establish a Hoare triple of the form $\{P\}\ X = E\ \{Q\}$, ensuring that this triple is valid.
 P and Q represent preconditions and postconditions.
 X is a placeholder for a variable.
 E is an expression.
Steps to Prove Validity:
- Substitute all occurrences of X in Q with E to get $Q'$.
- Prove that $P \implies Q'$ holds (i.e., if P is true, then $Q'$ is also true).
Explanation:
 Intuition: Why does this rule make sense?
 After assignment, Q should hold.
 The assignment updates X with E, while the rest of the state remains unchanged. So, the condition Q written in terms of E must have held before the assignment.
- If $P \implies Q'$, then P is a strong enough precondition to ensure that after the assignment, Q holds.
Examples:
Example 1:
- X is `a`, E is `b + 25`.
- Q is `a > 30`, so $Q' = (b + 25 > 30)$.
- P is `b + 25 > 30`, which trivially implies $Q'$, so the triple is valid.

Example 2:
- X is `a`, E is `b + 25`.
- Q is `a > 30`, so $Q' = (b + 25 > 30)$.
- P is `b > 5`, and $b > 5 \implies b + 25 > 30$, so the triple is valid (easily proved through arithmetic).

Example 3:
- X is `a`, E is `b + 25`.
- Q is `a > 30`, so $Q' = (b + 25 > 30)$.
- P is `b > 0`, and $b > 0$ does not imply $b + 25 > 30$ (counterexample: $b = 2$ gives $27 > 30$, which is false), so the triple is invalid.
Continue here: 10 Hoare Logic