Lecture from: 03.03.2023 | Video: YT

Combinational Logic Design (Continued)

This lecture continues our exploration of combinational logic design, covering a few remaining combinational building blocks before transitioning to sequential logic.

Logical (Functional) Completeness

Recall that Programmable Logic Arrays (PLAs) can implement any logic function.

  • PLAs are constructed exclusively from AND, OR, and NOT (inverter) gates.
  • The connections within the PLA are programmed based on the Sum-of-Products (SOP) form of the desired logic function.

This observation implies that the set of gates {AND, OR, NOT} is logically complete. A set of gates is logically complete if any Boolean function can be implemented using only gates from that set.

Logical Completeness of NAND and NOR:

Both NAND and NOR gates are individually logically complete. This means that any Boolean function can be implemented using only NAND gates, or only NOR gates. To prove this, we need to show how to construct AND, OR, and NOT gates using only NAND gates, and then repeat the process using only NOR gates.

Proof Sketch (NAND):

  • NOT from NAND: Connect both inputs of a NAND gate together. If the input is A, the output is A NAND A = NOT A.
  • AND from NAND: A AND B = NOT (A NAND B). So, use a NAND gate followed by a NOT gate (implemented with a NAND).
  • OR from NAND: A OR B = (NOT A) NAND (NOT B). Use NOT gates (implemented with NAND) on the inputs to a NAND gate.

Proof Sketch (NOR):

  • NOT from NOR: Connect both inputs of a NOR gate together. If the input is A, the output is A NOR A = NOT A.
  • OR from NOR: A OR B = NOT (A NOR B). Use a NOR gate followed by a NOT gate (implemented with a NOR).
  • AND from NOR: A AND B = (NOT A) NOR (NOT B). Use NOT gates (implemented with NOR) on the inputs to a NOR gate.
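
To make the two proof sketches concrete, here is a minimal Python check (names are illustrative) that builds NOT, AND, and OR from only NAND gates, and then from only NOR gates, and compares the results against the reference gates over all input combinations:

```python
from itertools import product

def nand(a, b): return 1 - (a & b)
def nor(a, b):  return 1 - (a | b)

# Constructions from the proof sketches above.
def not_from_nand(a):    return nand(a, a)
def and_from_nand(a, b): return not_from_nand(nand(a, b))
def or_from_nand(a, b):  return nand(not_from_nand(a), not_from_nand(b))

def not_from_nor(a):    return nor(a, a)
def or_from_nor(a, b):  return not_from_nor(nor(a, b))
def and_from_nor(a, b): return nor(not_from_nor(a), not_from_nor(b))

# Exhaustively compare against the reference gates.
for a, b in product((0, 1), repeat=2):
    assert and_from_nand(a, b) == (a & b) == and_from_nor(a, b)
    assert or_from_nand(a, b)  == (a | b) == or_from_nor(a, b)
    assert not_from_nand(a)    == (1 - a) == not_from_nor(a)
print("NAND-only and NOR-only constructions match AND, OR, NOT")
```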

Combinational Logic Blocks

Comparator (Equality Checker)

A comparator checks if two N-bit input values are identical. It’s built by comparing corresponding bits of the two inputs using XNOR gates (exclusive NOR). An XNOR gate outputs 1 if and only if both inputs are the same (both 0 or both 1). The outputs of the XNOR gates are then ANDed together. The final output is 1 if and only if all corresponding bits are equal.

Example: 4-bit comparator.
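
A rough behavioral sketch of this XNOR-then-AND construction (bit-list inputs and function names are illustrative, not the lecture's notation):

```python
def xnor(a, b):
    # 1 if and only if both bits are the same
    return 1 if a == b else 0

def equals_4bit(a_bits, b_bits):
    """AND together the bitwise XNORs of two 4-bit values (lists of 0/1)."""
    result = 1
    for a, b in zip(a_bits, b_bits):
        result &= xnor(a, b)
    return result

print(equals_4bit([1, 0, 1, 1], [1, 0, 1, 1]))  # 1: all bits equal
print(equals_4bit([1, 0, 1, 1], [1, 0, 0, 1]))  # 0: one bit differs
```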

ALU (Arithmetic Logic Unit)

An ALU combines multiple arithmetic and logic operations into a single unit, capable of executing one operation at a time. The specific operation to be performed is selected using control signals. The outputs of individual arithmetic/logical units are fed into a multiplexer, and control signals to that multiplexer determine which operation’s result appears at the ALU output.

ALU Symbol:

ALU Implementation (Conceptual):
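
A minimal behavioral sketch of the mux-based selection described above; the 2-bit control encoding and the set of operations are hypothetical choices for illustration, not a standard ALU interface:

```python
def alu(a, b, control):
    # Every unit computes its result; the control signal acts as the
    # multiplexer select and decides which result reaches the output.
    results = {
        0b00: a & b,   # AND
        0b01: a | b,   # OR
        0b10: a + b,   # ADD
        0b11: a - b,   # SUB
    }
    return results[control]

print(alu(6, 3, 0b10))  # 9  (ADD selected)
print(alu(6, 3, 0b00))  # 2  (AND selected)
```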

Tri-State Buffer

A tri-state buffer acts as a controllable switch that can either pass a signal through or disconnect it, effectively isolating the output.

  • Input: A
  • Output: Y
  • Enable Signal: E

Operation:

  • E = 0: Y = Z (High-Impedance/Floating). The output is disconnected from the input. “Z” represents a high-impedance state, meaning the output is neither driven high nor low; it’s effectively an open circuit. The output is susceptible to noise and can have any voltage.
  • E = 1: Y = A. The input signal A is passed directly to the output Y.

Tri-State Buffer vs. Transistor: A tri-state buffer is similar to a transistor acting as a switch, but it provides better signal integrity and isolation compared to a single transistor.

Example Uses:

  1. CPU-Memory Bus: A bus is a shared set of wires used for communication between multiple components. Tri-state buffers allow either the CPU or memory to drive the bus, but prevent them from doing so simultaneously (which would cause a conflict).

  2. Shared Bus Systems: Tri-state buffers are commonly used in systems with a shared bus where multiple devices (processor, video controller, Ethernet controller, memory, etc.) need to communicate over the same set of wires. Only one device is enabled to drive the bus at any given time, avoiding conflicts.

  3. Multiplexer Implementation: Tri-state buffers can be used to construct a multiplexer. The select signal determines which tri-state buffer is enabled, allowing its corresponding input to pass through to the output.
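
A small Python model of these uses, treating “Z” as a non-driving value and flagging a bus conflict when more than one driver is enabled (the names and the explicit conflict check are illustrative assumptions):

```python
Z = "Z"  # high-impedance: the buffer does not drive the line

def tri_state(a, e):
    """Tri-state buffer: pass A when enabled (E = 1), otherwise float (Z)."""
    return a if e == 1 else Z

def bus(drivers):
    """Resolve a shared bus; at most one driver may be enabled at a time."""
    driven = [v for v in drivers if v != Z]
    if len(driven) > 1:
        raise ValueError("bus conflict: more than one enabled driver")
    return driven[0] if driven else Z

cpu_out, mem_out = 1, 0
# Only the CPU is enabled to drive the bus in this cycle.
print(bus([tri_state(cpu_out, 1), tri_state(mem_out, 0)]))  # 1
```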

Logic Simplification using Boolean Algebra

Optimizing logic circuits is crucial for reducing cost, latency, and power consumption. Boolean algebra provides the tools to simplify logic expressions, leading to more efficient implementations. Automated design tools often start with a canonical form (like Sum-of-Products) and apply simplification rules.

The Uniting Theorem

A fundamental tool for simplification is the Uniting Theorem: X·A + X·A’ = X·(A + A’) = X.

Explanation: This theorem states that if two product terms differ only in one variable (where one term contains the variable and the other contains its complement), the variable can be eliminated.

Essence of Simplification: The core idea is to identify pairs of minterms (elements of the ON-set) where only one variable changes its value. The Uniting Theorem allows us to remove that variable, resulting in a simpler expression.
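
As a quick sanity check, here is an exhaustive single-bit verification of the theorem (X stands in for the shared product term and A for the variable being eliminated):

```python
# Uniting Theorem: X·A + X·A' = X, checked for all single-bit values.
for x in (0, 1):
    for a in (0, 1):
        assert (x & a) | (x & (1 - a)) == x
print("X·A + X·A' = X holds for every combination")
```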

Priority Circuit Example

Consider a 4-bit priority circuit. Given four input bits, the circuit outputs a 4-bit value representing the highest-priority input that is ‘1’. If no input is ‘1’, the output is ‘0000’. A direct Sum-of-Products (SOP) implementation from the full truth table would be complex.

Don’t Cares (X): A key simplification technique is to use “Don’t Care” (X) values in the truth table. These represent input combinations that are either impossible or irrelevant to the output. In the priority circuit, if a higher-priority input is ‘1’, the values of lower-priority inputs don’t affect the output. We can use this to our advantage.

The simplified truth table with Don’t Cares:

By incorporating Don’t Cares, we can form larger groups of adjacent minterms, leading to a more simplified SOP expression and a more efficient circuit implementation.
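
A behavioral sketch of the priority circuit, assuming a one-hot output encoding with A3 as the highest-priority input (the lecture's exact ordering may differ):

```python
def priority_4bit(a3, a2, a1, a0):
    """One-hot output marking the highest-priority input that is 1."""
    if a3: return (1, 0, 0, 0)   # A2, A1, A0 are don't cares
    if a2: return (0, 1, 0, 0)   # A1, A0 are don't cares
    if a1: return (0, 0, 1, 0)   # A0 is a don't care
    if a0: return (0, 0, 0, 1)
    return (0, 0, 0, 0)          # no input is 1

print(priority_4bit(0, 1, 1, 0))  # (0, 1, 0, 0): A2 wins, A1 is ignored
```

With this assumed encoding, the Don't Cares collapse the SOP expressions to Y3 = A3, Y2 = A3’·A2, Y1 = A3’·A2’·A1, and Y0 = A3’·A2’·A1’·A0.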

Karnaugh Maps (K-Maps)

Karnaugh Maps (K-Maps) are a graphical method for simplifying Boolean expressions, particularly useful for expressions with a small number of variables (typically up to 4 or 5). They provide a visual way to identify adjacent minterms that can be combined using the Uniting Theorem. Study this topic independently. While K-Maps are a helpful visualization tool, they have limited scalability for larger expressions. Automated tools rely on more sophisticated algorithmic approaches for larger circuits.

Sequential Logic: Circuits with Memory

Sequential logic circuits differ fundamentally from combinational circuits. A combinational circuit’s output depends solely on its current inputs. A sequential circuit’s output depends on both its current inputs and its past inputs (its internal state, which reflects past inputs). This introduces memory into the circuit.

This lecture will cover key elements and concepts in sequential logic design:

Circuits that can store information:

  • Cross-coupled Inverters
  • SR Latch
  • Gated D Latch
  • Registers
  • Memory (including SRAM overview)

Finite State Machines (FSMs):

  • State
  • Synchronous vs. Asynchronous Systems
  • Clock Signals

Introduction: Combinational vs. Sequential

Combinational circuits are like pure mathematical functions: output = f(current inputs). Sequential circuits have internal “memory,” so: output = f(current inputs, past inputs). The “past inputs” are captured in the circuit’s state.

The basic structure of a sequential circuit includes a combinational logic block and storage elements (memory) that feed the current state back into the combinational logic.

Big Picture: Types of Storage Elements

There’s a hierarchy of storage elements, trading off speed, cost, and volatility:

  • Latches and Flip-Flops: Very fast, parallel access. Very expensive (one bit costs tens of transistors). Used for registers and small, fast memory within the processor.
  • Static RAM (SRAM): Relatively fast. Expensive (one bit costs 6+ transistors). Used for caches (intermediate memory between the processor and main memory).
  • Dynamic RAM (DRAM): Slower; reading destroys the stored content, requiring periodic “refresh.” Cheap (one bit costs one transistor + one capacitor). Used for main memory (RAM). Requires specialized manufacturing processes.
  • Other Storage (Flash, Hard Disk, Tape): Much slower, but non-volatile (data persists without power). Very cheap. Used for long-term storage.

Basic Storage Elements

Cross-Coupled Inverters

The simplest memory element is a pair of inverters connected in a feedback loop:

Analysis:

  • Stable States: Two stable states exist:

    • Q = 1, Q’ = 0: If Q is 1, the top inverter outputs 0 (Q’). The bottom inverter receives 0 and outputs 1 (Q), reinforcing the initial state.
    • Q = 0, Q’ = 1: If Q is 0, the top inverter outputs 1 (Q’). The bottom inverter receives 1 and outputs 0 (Q), reinforcing the state.
  • Metastable State: A theoretical, unstable metastable state exists where both outputs sit at an intermediate voltage. In this state, tiny noise fluctuations can push the circuit into one of the stable states, so real-world circuits do not remain metastable for long: unavoidable thermal noise eventually resolves them to a stable state.

  • Lack of Control: The cross-coupled inverters can “remember” a state, but there’s no way to control or change it. We can’t reliably set the value of Q. This makes it unsuitable as a practical memory element.
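
A tiny Python model of the feedback loop, showing that both assignments of Q are fixed points (an ideal-gate sketch; it cannot capture metastability or noise):

```python
def inverter(x):
    return 1 - x

def settle(q):
    """Iterate the two cross-coupled inverters until the values stop changing."""
    q_bar = inverter(q)
    while True:
        new_q = inverter(q_bar)      # bottom inverter: Q  = NOT Q'
        new_q_bar = inverter(new_q)  # top inverter:    Q' = NOT Q
        if (new_q, new_q_bar) == (q, q_bar):
            return q, q_bar
        q, q_bar = new_q, new_q_bar

print(settle(1))  # (1, 0): the loop reinforces Q = 1
print(settle(0))  # (0, 1): the loop reinforces Q = 0
```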

SRAM Cell (Brief Overview)

Before discussing the SR latch, let’s briefly mention the Static RAM (SRAM) cell, which utilizes cross-coupled inverters.

An SRAM cell adds transistors to the cross-coupled inverters to control access (reading and writing). The “wordline” enables the cell for access, and the “bitlines” (BL and BL’) are used for both reading and writing. We’ll cover SRAM in more detail later, but the key is that it uses the cross-coupled inverter principle for storage.

SR (Set-Reset) Latch

The SR latch adds control to the basic storage concept. It is built with cross-coupled NAND gates (it can also be built with NOR gates, with slightly different behavior).

Circuit:

Operation:

  • Signals: S (Set) and R (Reset) are the inputs; Q and Q’ are the outputs (which should be complements and also feed back into the gates).
  • Idle State: S = 1, R = 1. The latch holds its current state. The NAND gate outputs depend on the previous values of Q and Q’.
  • Set (S = 0, R = 1): Forces Q to 1 and Q’ to 0.
  • Reset (S = 1, R = 0): Forces Q to 0 and Q’ to 1.
  • Forbidden State (S = 0, R = 0): This state violates the requirement that Q and Q’ be complements. Both outputs go to 1. When S and R return to 1, the latch’s state becomes unpredictable (it may enter a metastable state and then settle to either 0 or 1). This state must be avoided.

Truth Table:
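
A minimal simulation of the cross-coupled NAND latch, iterating the two gates until the outputs settle (a behavioral sketch, not a timing-accurate model):

```python
def nand(a, b):
    return 1 - (a & b)

def sr_latch(s, r, q, q_bar):
    """Cross-coupled NAND SR latch with active-low S and R inputs."""
    while True:
        new_q = nand(s, q_bar)
        new_q_bar = nand(r, new_q)
        if (new_q, new_q_bar) == (q, q_bar):
            return q, q_bar
        q, q_bar = new_q, new_q_bar

q, q_bar = 0, 1
q, q_bar = sr_latch(0, 1, q, q_bar)   # Set:   S = 0, R = 1 -> Q = 1
q, q_bar = sr_latch(1, 1, q, q_bar)   # Idle:  S = 1, R = 1 -> state is held
q, q_bar = sr_latch(1, 0, q, q_bar)   # Reset: S = 1, R = 0 -> Q = 0
print(q, q_bar)                       # 0 1
```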

Gated D Latch

The Gated D Latch solves the forbidden state problem of the SR latch. It adds circuitry to ensure that S and R are never both 0.

Circuit:

Operation:

  • Inputs: D (Data), WE (Write Enable).
  • WE = 0: The latch holds its current state (Q remains unchanged).
  • WE = 1: The latch becomes transparent; Q takes on the value of D (Q = D).

Truth Table:

The Gated D Latch is a significant improvement, but it is level-sensitive. The output Q can change whenever WE is 1 and D changes. This can be problematic in some circuits.
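
This level-sensitive behavior fits in a one-line behavioral model (a sketch; the actual circuit is the SR latch plus the input gating described above):

```python
def gated_d_latch(d, we, q):
    """Transparent while WE = 1 (Q follows D); holds the old Q while WE = 0."""
    return d if we == 1 else q

q = 0
q = gated_d_latch(1, 1, q)  # WE = 1: Q follows D           -> Q = 1
q = gated_d_latch(0, 1, q)  # still transparent: Q changes  -> Q = 0
q = gated_d_latch(1, 0, q)  # WE = 0: D changes, Q is held  -> Q = 0
print(q)
```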

Registers

Registers are collections of D latches used to store multiple bits of data. A single Write Enable (WE) signal typically controls all latches in the register, allowing simultaneous writing to all bits.

The image above shows a 4-bit register.

Memory

Memory consists of an array of storage locations, each with a unique address. Memory can be volatile (loses data when power is off, like RAM) or non-volatile (retains data without power, like flash memory or hard drives). Non-volatile memory typically does not use CMOS elements for storage, relying instead on other physical mechanisms (e.g., floating-gate transistors in flash memory).

An example of a small memory array (4 locations, each storing 8 bits):

  • Address: A unique identifier for each location in memory. A memory with 4 locations requires a 2-bit address.
  • Addressability: The number of bits stored at each location (8 bits in the example).
  • Address Space: The total set of unique locations in memory.

Addressing Memory (Example)

Let’s implement a simple memory array with 2 locations and 3 bits per location (3-bit addressability, 1-bit address).

We use Gated D latches for storage. Separate mechanisms are needed for reading and writing.

Reading from Memory

To read from memory, we:

  1. Decode the Address: Use a decoder to activate the correct wordline based on the input address. This “recognizes the address pattern.”

  2. Select the Output: Use a multiplexer (or a similar structure with AND and OR gates) to select the output from the enabled memory location.

Writing to Memory

To write to memory, we:

  1. Decode the Address: Use a decoder, along with a Write Enable (WE) signal, to activate the correct wordline and enable writing.

  2. Supply Data: Provide the data to be written on the data input lines. The WE signal, gated by the wordline, ensures that data is written only to the selected location.

Putting it Together: A 4x3 Memory Array

Here’s a combined read/write circuit for a 4-location x 3-bit memory:
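
A behavioral sketch of this decoder / wordline / multiplexer structure (the class and method names are illustrative, and the Gated D latches are modeled simply as stored bit-tuples):

```python
class Memory4x3:
    """4 locations x 3 bits each, addressed by a 2-bit address."""

    def __init__(self):
        self.cells = [(0, 0, 0)] * 4  # one 3-bit word per wordline

    def access(self, address, write_enable=0, data_in=(0, 0, 0)):
        # Decoder: exactly one wordline is active for the given address.
        wordlines = [1 if address == i else 0 for i in range(4)]
        if write_enable:
            # WE gated by the wordline: only the selected word is written.
            self.cells = [data_in if wl else cell
                          for wl, cell in zip(wordlines, self.cells)]
        # Read path: the multiplexer selects the enabled location's bits.
        return self.cells[address]

mem = Memory4x3()
mem.access(0b10, write_enable=1, data_in=(1, 0, 1))
print(mem.access(0b10))  # (1, 0, 1)
print(mem.access(0b00))  # (0, 0, 0)
```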

Logic Functions Using Memory (LUTs)

Memory arrays can implement Boolean logic functions. By storing the truth table of a function in memory and setting WE to 0, the address lines act as inputs, and the data output represents the function’s output. This is the principle behind Look-Up Tables (LUTs), commonly used in Field-Programmable Gate Arrays (FPGAs).
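
A small sketch of the LUT idea, using a 3-input majority function as an arbitrary example: the truth table is precomputed into a list (the “memory”), and evaluating the function is just a read at the address formed by the inputs.

```python
# Fill the 8-entry "memory" with the truth table of majority(a, b, c).
lut = [0] * 8
for addr in range(8):
    a, b, c = (addr >> 2) & 1, (addr >> 1) & 1, addr & 1
    lut[addr] = 1 if a + b + c >= 2 else 0

def majority(a, b, c):
    # With WE = 0, the inputs act as address lines; the data out is the result.
    return lut[(a << 2) | (b << 1) | c]

print(majority(1, 0, 1))  # 1
print(majority(0, 0, 1))  # 0
```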

State and State Diagrams

A sequential circuit’s behavior is described by its state. The state encapsulates the circuit’s memory of past inputs. A state diagram visually represents the states and transitions of a sequential circuit.

Example: Sequential Combination Lock

Consider a lock that opens only when the correct sequence of numbers is entered. This is a sequential circuit. Contrast this with a combinational lock (like a padlock) where only the current combination matters.

State Diagram:

This diagram shows the states (circles) and transitions (arrows) based on the input.

Asynchronous vs. Synchronous Systems

The sequential lock example is asynchronous: transitions occur whenever the inputs change. Most modern digital systems are synchronous: state transitions occur at specific times, controlled by a clock signal.

The Clock Signal

The clock is a periodic signal that synchronizes state changes across a sequential circuit.

  • Clock Cycle: The time between successive rising (or falling) edges of the clock.
  • Synchronization: Combinational logic evaluates during the clock cycle. State changes (triggered by flip-flops, discussed later) occur at the clock edge.
  • Clock Cycle Time: Must be long enough to accommodate the maximum delay of the combinational logic. This is critical for correct operation. We’ll explore this in more detail later.

Finite State Machines (FSMs)

What is an FSM?

A Finite State Machine (FSM) is a discrete-time model of a system with memory (a stateful system). It describes the system’s behavior in terms of a finite number of states, inputs, outputs, and transitions between those states. Each state represents a snapshot of the system at a particular point in time.

An FSM diagram visually depicts:

  1. All possible states: The set of all distinct configurations the system can be in.
  2. State transitions: How the system moves from one state to another based on inputs and current state.

FSMs can model a wide range of systems, including traffic lights, elevators, fan speed controllers, microprocessors, and many other digital systems.

An FSM is formally defined by five elements:

  1. A finite set of states: Each state represents a snapshot of all relevant elements of the system.
  2. A finite set of inputs: External signals that can influence the state transitions.
  3. A finite set of outputs: Signals generated by the FSM based on its current state and, sometimes, inputs.
  4. An explicit specification of all state transitions: Rules that define how the FSM moves from one state to another based on the current state and inputs.
  5. An explicit specification of what determines each output value.

FSM Circuit Structure

A typical FSM implementation consists of three main parts:

  • Next-State Logic: Combinational logic that determines the next state based on the current state and inputs.
  • State Register: A set of storage elements (typically flip-flops) that hold the current state of the FSM.
  • Output Logic: Combinational logic that generates the outputs based on the current state (and sometimes inputs).

Overview Circuit:
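
A behavioral sketch of this three-part structure, using a two-step version of the sequential lock as a Moore-style example (the code values 13 and 22 are hypothetical, not taken from the lecture):

```python
CODE = (13, 22)  # hypothetical two-step combination

def next_state_logic(state, digit):
    # Advance one step on the correct digit; otherwise fall back to the start.
    if state < len(CODE) and digit == CODE[state]:
        return state + 1
    return 0

def output_logic(state):
    # Moore machine: the output depends only on the current state.
    return "open" if state == len(CODE) else "locked"

state = 0  # state register, reset state
for digit in (13, 22):
    # Next-state logic evaluates during the cycle; the assignment models
    # the state register updating at the clock edge.
    state = next_state_logic(state, digit)
print(output_logic(state))  # open
```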

State Register Implementation: Requirements

The state register must fulfill two key requirements:

  1. Sample Input at the Clock Edge: The register should capture the next state value (determined by the next-state logic) at the beginning of each clock cycle (typically on the rising or falling edge).

  2. Hold State Constant: The stored state (output of the register) must remain constant throughout the entire clock cycle, providing a stable input to the next-state and output logic.

Problem with Latches for State Registers

Directly connecting the clock signal to the WE (Write Enable) input of a D latch is problematic for implementing state registers. A D latch is transparent when WE is high: the output Q continuously follows the input D. This violates the requirement to hold the state constant throughout the clock cycle.

The challenge is to design a storage element that:

  1. Samples the input (D) only at the beginning of the clock cycle (e.g., on the rising edge).
  2. Holds the output (Q) constant for the entire clock cycle, regardless of changes in D.

D Flip-Flop

The D Flip-Flop solves this problem. It’s constructed using two D latches, a “master” and a “slave,” in series. The clock signal is connected directly to the slave latch’s WE and inverted before connecting to the master latch’s WE.

Circuit:

Operation:

  • Clock Low (CLK = 0): The master latch is transparent (its Q follows its D), while the slave latch is holding its previous value. The input D propagates through the master latch to the input of the slave latch.
  • Clock High (CLK = 1): The master latch becomes opaque (holds its value), while the slave latch becomes transparent. The value that was present at the input of the slave latch (which was captured by the master latch before the clock went high) is now passed to the output Q.
  • Clock Transitions to Low (CLK = 0): The master latch becomes transparent again, allowing the new D input to propagate to the input of the slave. The slave holds, so Q continues to show the previously sampled value.

Result: The D flip-flop effectively samples the D input on the rising edge of the clock (when CLK transitions from 0 to 1) and holds that value at the output Q for the entire clock cycle. Changes in D after the rising edge do not affect Q until the next rising edge.
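
A compact master-slave model that reproduces this behavior (a sketch; the two latches are reduced to simple assignments):

```python
class DFlipFlop:
    """Two gated D latches in series: the master is transparent while CLK = 0,
    the slave while CLK = 1, so Q only changes on the rising clock edge."""

    def __init__(self):
        self.master = 0  # output of the master latch
        self.q = 0       # output of the slave latch (Q)

    def tick(self, d, clk):
        if clk == 0:
            self.master = d        # master transparent, slave holds Q
        else:
            self.q = self.master   # slave transparent: Q shows the sampled value
        return self.q

ff = DFlipFlop()
ff.tick(d=1, clk=0)         # clock low: master samples D = 1, Q still 0
print(ff.tick(d=0, clk=1))  # rising edge: Q becomes 1; the new D = 0 is ignored
```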

Terminology:

  • Edge-Triggered: A flip-flop is called edge-triggered because it captures data on the clock edge (rising or falling).
  • Level-Sensitive: A latch is level-sensitive because its output follows the input whenever the enable signal (WE) is at the active level (high or low, depending on the latch design).