CAAL Special

The document explains key concepts in computer architecture, focusing on instructions, the instruction cycle, and the role of registers. It details the instruction cycle stages, including fetch, decode, execute, and write-back, and distinguishes between general-purpose and special-purpose registers. Additionally, it covers addressing modes, specifically direct and indirect addressing, highlighting their advantages and disadvantages.

UNIT-1

Q1. What is an instruction?

Ans- In computer architecture, an instruction is a command given to a computer's CPU that specifies an
operation to perform on data. Instructions are fundamental building blocks of a program and define the tasks
a processor must execute. Each instruction typically includes:

1. Opcode (Operation Code): Specifies the operation, such as addition, subtraction, data transfer, or branching.
2. Operands: Provide the data or references (such as memory addresses or registers) that the instruction will
use to complete the operation.

Instruction Set Architecture (ISA)

The instruction set architecture (ISA) is a part of computer architecture that defines the set of instructions
that a CPU can execute. The ISA provides a blueprint for instructions, data types, registers, memory access,
and addressing modes.

Q2. Give the Instruction Cycle.

Ans- The instruction cycle (or fetch-decode-execute cycle) is the process by which a CPU retrieves,
interprets, and executes each instruction in a program. This cycle is fundamental to a CPU’s operation and
enables the execution of instructions in sequence.

Stages of the Instruction Cycle

1. Fetch:
o The CPU retrieves the next instruction from memory. The memory address of this instruction is held in the Program Counter (PC).
o The instruction is then loaded into the Instruction Register (IR).
o After fetching, the PC is incremented to point to the next instruction in memory.
2. Decode:
o The CPU decodes the fetched instruction to determine what operation it
needs to perform.
o During decoding, the CPU interprets the opcode (operation code) and identifies the operands (data
or addresses) required.
o The CPU might also check which registers or memory locations it needs to access for the operation.
3. Memory Access (if needed):
o Some instructions, such as those that involve loading data from or storing data into memory, require
an additional memory access stage.
o The CPU might read data from or write data to memory based on the instruction’s needs.
4. Execute:
o The CPU performs the operation specified by the instruction.
o This could involve arithmetic operations, data transfer, logical operations, or control changes like
jumps.
o The exact actions depend on the type of instruction.
5. Write-back:
o The result of the operation is written back to a register or memory location.
o For example, if the instruction adds two numbers, the sum is stored back in the designated register.
6. Update Program Counter:
o Finally, the CPU prepares for the next instruction cycle by ensuring the PC points to the next
instruction in sequence, unless altered by control instructions (like jumps or branches).

Pipelining in the Instruction Cycle

To improve efficiency, modern CPUs often use pipelining, where multiple instructions are processed
simultaneously at different stages of the instruction cycle. For instance:

 While one instruction is in the execute stage, the next can be in the decode stage, and another can be in the
fetch stage.

Summary of the Instruction Cycle

The cycle can be summarized as:

1. Fetch → 2. Decode → 3. (Optional) Memory Access → 4. Execute → 5. Write-back
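To tie these stages together, here is a minimal, hypothetical Python sketch of the fetch-decode-execute loop for a toy machine; the opcodes, register names, and memory layout are invented for illustration and do not correspond to any real ISA.

# Toy fetch-decode-execute loop (hypothetical instruction format: (opcode, operand)).
memory = [("LOAD", 5), ("ADD", 3), ("STORE", 9), ("HALT", 0), 0, 7, 0, 0, 0, 0]
registers = {"PC": 0, "IR": None, "ACC": 0}

while True:
    registers["IR"] = memory[registers["PC"]]   # Fetch: read the instruction at PC into IR
    registers["PC"] += 1                        # Update PC to point to the next instruction
    opcode, operand = registers["IR"]           # Decode: split opcode and operand
    if opcode == "LOAD":                        # Execute (plus memory access for LOAD/STORE)
        registers["ACC"] = memory[operand]
    elif opcode == "ADD":
        registers["ACC"] += operand
    elif opcode == "STORE":
        memory[operand] = registers["ACC"]      # Write-back of the result to memory
    elif opcode == "HALT":
        break

print(registers["ACC"], memory[9])              # prints: 10 10

Each pass through the loop is one instruction cycle; a pipelined CPU would overlap these passes rather than run them strictly one after another.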

Q3. What is the use of a register? Name any 5 registers.

Ans- In computer architecture, a register is a small, fast storage location within the CPU that
temporarily holds data, instructions, or addresses. Registers are essential for executing instructions, as they
provide the CPU with rapid access to frequently used data, reducing the need to fetch it from slower
memory locations.

Uses of Registers

Registers serve several important purposes:

1. Data Storage: Temporarily store data that is actively used in calculations or operations.
2. Instruction Storage: Hold the current instruction being executed.
3. Address Storage: Store memory addresses that the CPU needs to access for reading or writing.
4. Intermediate Results: Temporarily hold intermediate results during complex calculations.
5. Control: Hold status flags and control information for managing program execution.

Common Types of Registers

Here are five commonly used registers in most CPU architectures:

1. Program Counter (PC):


o Holds the memory address of the next instruction to be fetched and executed.
o It automatically increments after each instruction, directing the CPU to the next step in the program.
2. Accumulator (ACC):
o A general-purpose register used for storing intermediate results of arithmetic and logic operations.
o Often, the ACC is used as a primary operand in many operations.
3. Instruction Register (IR):
o Holds the current instruction that the CPU is executing.
o It’s loaded with an instruction from memory during the fetch stage of the instruction cycle.
4. Memory Address Register (MAR):
o Stores the address of a memory location to be accessed.
o Used during fetch or memory access stages to point to the location in memory for reading or
writing.
5. Memory Data Register (MDR) (also known as the Memory Buffer Register):
o Holds the actual data that is being transferred to or from the memory.
o Works alongside the MAR during data transfer operations.

Additional Registers (Optional)

Other common registers include:

 Stack Pointer (SP): Points to the top of the stack in memory.


 Status Register (or Flags Register): Holds flags that indicate the status of operations (e.g., zero, carry,
overflow flags).

Q4. Give brief short notes on:

i. Index Register
ii. Memory Interfacing
iii. Operation Code
iv. Instruction Set

Ans- i. Index Register

An Index Register is a special-purpose register in the CPU used to hold a value (known as
the index or offset) that modifies the address of memory locations. This register is primarily
used in addressing modes to access data stored in arrays, tables, or sequential memory
locations.

 Function: It helps in accessing elements of data structures like arrays or matrices where the same
operation needs to be performed on consecutive memory locations. By holding the offset value,
the index register adds or subtracts from a base address to point to a new memory location.
 Example: If a program needs to loop through an array, the base address of the array is stored in a
general-purpose register, and the index register holds the index or offset value to access each
element.
 Usage in Assembly: In assembly languages, an instruction might look like MOV AX, [BX + SI],
where BX holds the base address, and SI is the index register that holds the offset to access array
elements.
 Benefit: This allows the CPU to efficiently access and manipulate arrays, improving performance in
loops and reducing the need for complex address calculations during program execution.
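As a hedged illustration of this base-plus-index calculation (as in MOV AX, [BX + SI] above), the Python sketch below walks an array stored in a simulated flat memory; the base address and array contents are made up for the example.

# Base + index addressing over a simulated flat memory.
memory = [0] * 32
base = 16                        # plays the role of BX: base address of the array
for i, value in enumerate([10, 20, 30, 40]):
    memory[base + i] = value     # store the array starting at the base address

total = 0
for si in range(4):              # plays the role of SI: index stepping through elements
    effective_address = base + si
    total += memory[effective_address]
print(total)                     # prints: 100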

ii. Memory Interfacing

Memory Interfacing refers to the connection between the CPU and the system memory
(RAM), allowing data to be transferred between the two. It involves using a combination of
buses and control signals to ensure that the CPU can read from and write to memory
correctly and efficiently.
 Function: The CPU communicates with memory through buses—address bus, data bus, and control
bus—to send addresses, data, and commands to memory. The address bus carries the memory
address where the data is stored or will be written. The data bus carries the actual data, while the
control bus manages read/write operations.
 Process: When the CPU needs to fetch data, it sends the memory address over the address bus. The
memory then sends the requested data back to the CPU via the data bus. For writing data, the CPU
provides the data and memory address, and the control bus manages whether the operation is a
read or write.
 Types of Memory: Common memory interfacing involves RAM (Random Access Memory) for data
storage and ROM (Read-Only Memory) for storing the firmware or BIOS.
 Importance: Efficient memory interfacing ensures that the CPU can access data quickly, preventing
bottlenecks in data transfer that could slow down program execution. Faster memory interfacing
techniques, such as cache memory and Direct Memory Access (DMA), enhance performance by
reducing reliance on the CPU for data transfer.

iii. Operation Code (Opcode)

An Operation Code (Opcode) is a part of a machine-level instruction that specifies the operation to be performed by the CPU, such as addition, subtraction, or data transfer. The opcode is essential because it tells the processor what action to take on the operands provided by the instruction.

 Function: The opcode is the first part of a machine instruction and is decoded by the CPU to
determine what operation is required. The operands, which are usually registers or memory
addresses, specify the data the CPU will work on.
 Types of Operations: Common opcodes include arithmetic operations (e.g., ADD, SUB), logical
operations (e.g., AND, OR), data movement operations (e.g., MOV, LOAD, STORE), and control flow
instructions (e.g., JUMP, CALL).
 Example: In an assembly language instruction like ADD R1, R2, the ADD is the opcode, and R1 and
R2 are the operands. The CPU will add the values in registers R1 and R2 and store the result in R1.
 Importance: The opcode is critical because it determines how the CPU interacts with the operands
and what result is produced. The opcode essentially defines the task the CPU is performing at any
given time.

iv. Instruction Set

An Instruction Set is the complete collection of all the instructions that a CPU can execute.
It defines the operations the CPU can perform, and these instructions are the foundation for
software execution on a processor.

 Types of Instructions: The instruction set includes:


o Arithmetic instructions (e.g., ADD, SUB) for performing calculations.
o Data transfer instructions (e.g., LOAD, STORE) for moving data between memory and
registers.
o Control instructions (e.g., JUMP, CALL) for controlling the flow of a program.
o Logic instructions (e.g., AND, OR) for performing bitwise operations.
 Categories: The instruction set can be classified into two main types:
o CISC (Complex Instruction Set Computing): CPUs with CISC architectures, like Intel x86
processors, support a wide variety of complex instructions, enabling programs to be written
in fewer lines of code.
o RISC (Reduced Instruction Set Computing): RISC architectures, such as ARM processors, use
simpler instructions that can be executed faster and more efficiently, often resulting in
higher performance per clock cycle.
 Architecture-Specific: Every CPU has its own unique instruction set, tailored to its architecture. For
instance, Intel CPUs have the x86 instruction set, while ARM processors use a different set of
instructions.
 Importance: The instruction set defines the CPU’s capabilities, limiting or enabling certain
operations based on the instructions it can understand. It also affects software development, as
programmers write code based on the available instructions in the CPU’s instruction set.

Q5. Difference between general-purpose and special-purpose registers.

Ans- Difference Between General-Purpose and Special-Purpose Registers


Registers in a CPU are used to store data temporarily during processing. They can be classified into two
categories: General-Purpose Registers and Special-Purpose Registers. Here’s a detailed comparison:

1. General-Purpose Registers (GPRs)

 Definition: General-purpose registers are flexible registers that can be used by the CPU to store
data, intermediate results, addresses, or other information during execution.
 Function: These registers are used by the CPU to perform operations on data. For example, when
performing arithmetic calculations or moving data between locations, the general-purpose registers
temporarily hold operands and results.
 Number: The number of general-purpose registers varies depending on the CPU architecture.
Typically, modern CPUs have between 8 and 32 general-purpose registers.
 Examples: In the x86 architecture, registers like EAX, EBX, ECX, and EDX are general-purpose
registers. In ARM, registers R0 to R15 serve as general-purpose registers.
 Usage: The programmer or the compiler has the freedom to use these registers for any purpose,
such as storing variables, temporary results, or addresses during program execution.

2. Special-Purpose Registers

 Definition: Special-purpose registers are designed to perform specific control or monitoring tasks in
the CPU. They are used to store important information that controls the CPU’s operations, like the
status of the program or the location of instructions.
 Function: These registers have specific, predefined roles. For example, some are used to store
memory addresses, control data flow, or hold the status flags that indicate the result of the last
operation (e.g., zero, carry, overflow).
 Number: The number of special-purpose registers is typically fewer than general-purpose registers,
as they serve specialized functions.
 Examples:
o Program Counter (PC): Holds the address of the next instruction to be executed.
o Stack Pointer (SP): Points to the top of the stack in memory.
o Instruction Register (IR): Holds the current instruction being executed.
o Status Register (Flags Register): Stores status flags, such as carry, zero, and overflow flags.
o Memory Address Register (MAR): Holds the address of data in memory that needs to be
read or written.
 Usage: These registers are not typically used for storing general data or intermediate results.
Instead, they control the flow of instructions, store system status, or hold critical information for
the CPU's operations.

Key Differences

Feature              | General-Purpose Registers                      | Special-Purpose Registers
Purpose              | Store data, operands, intermediate results     | Control or monitor CPU operations
Flexibility          | Can be used for any purpose by the programmer  | Have predefined roles and functions
Examples             | R0, R1, EAX, EBX, ECX                          | Program Counter (PC), Stack Pointer (SP), Instruction Register (IR)
Control Function     | No specific control function                   | Responsible for controlling or monitoring CPU state
Number of Registers  | Varies, typically many (8-32)                  | Typically fewer in number
Access               | Programmer can freely access and use them      | CPU uses these registers for system-level tasks
Usage                | Used for general data manipulation             | Used for controlling instruction flow and managing execution state

Q6. Difference between direct and indirect address instructions.

Ans- What is Direct Addressing Mode?


In direct addressing mode, the address field in the instruction contains the effective address of the
operand and no intermediate memory access is required. Direct Addressing Mode is a way for a
computer to find data in memory using a specific memory address included in the instruction. When the
CPU gets an instruction, it can directly go to that address to retrieve or store the data.
For example, if an instruction says to get the value from memory address 1000, the CPU simply looks at
address 1000 and uses the data found there. This method is straightforward and quick because the CPU
doesn’t need to look anywhere else.

Example: Add the content of R1 and memory location 1001 and store the result back in R1:
Add R1, (1001)
Here 1001 is the address where the operand is stored.

Advantages of Direct Addressing Mode
 Simplicity: Direct addressing mode is easy to understand and use because the address of the data is given directly in the instruction.
 Speed: Accessing data is fast since the location is specified directly in the instruction, reducing the need for extra steps.
 Less Overhead: Fewer bits are needed in the instruction, as it directly points to the data location.
Disadvantages of Direct Addressing Mode
 Limited Addressing: It can only access a small range of memory addresses, which can be a
problem in large programs.
 Inflexibility: If the data location changes, the instruction must also change, which can make
updates harder.
 More Instructions Needed: For complex operations, you might need more instructions to
handle different data locations, leading to longer programs.

What is Indirect Addressing Mode?


In Indirect addressing mode, the address field in the instruction contains the memory location or register
where the effective address of the operand is present. It requires two memory accesses. It is further
classified into two categories: Register Indirect, and Memory Indirect. Indirect Addressing Mode is a
method for the computer to find data in memory using a pointer. Instead of giving the exact memory
address in the instruction, it provides an address that points to another location where the actual data is
stored.
For example, if an instruction says to get data from memory address 2000, but address 2000 holds the
address 1500, the CPU first checks 2000, finds 1500, and then goes to address 1500 to get the real data.
This mode is more flexible because it allows for easier data management and manipulation, but it requires an extra step compared to direct addressing.
Example:
LOAD R1, @1500
The above instruction loads into register R1 the content of the memory location whose address is stored at memory location 1500. In other words, the effective address is stored at memory location 1500.

Advantages of Indirect Addressing Modes


 Flexibility: It allows access to a wider range of memory addresses since the instruction points to a
location that contains the actual address of the data.
 Easier Data Management: You can easily change the data location without modifying the
instructions, making updates simpler.
 Supports Dynamic Data: Useful for handling data structures like arrays and linked lists, where
data locations can change during execution.
Disadvantages of Indirect Addressing Modes
 Complexity: It can be harder to understand because you have to look up an address to find the
actual data.
 Slower Access: Fetching data takes more time since you first need to retrieve the address before
accessing the data.
 More Overhead: Extra bits in the instruction are needed to hold the address of the pointer, which
can make instructions larger.
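To make the contrast concrete, the following is a small, hypothetical Python model of memory in which indirect addressing performs one extra lookup to obtain the effective address; the addresses 1000, 1500, and 2000 are purely illustrative.

memory = {1000: 42, 1500: 99, 2000: 1500}

# Direct addressing: the instruction carries the effective address itself.
def load_direct(address):
    return memory[address]                       # one memory access

# Indirect addressing: the instruction carries the address of the address.
def load_indirect(pointer_address):
    effective_address = memory[pointer_address]  # first access fetches the pointer
    return memory[effective_address]             # second access fetches the operand

print(load_direct(1000))     # prints: 42
print(load_indirect(2000))   # prints: 99 (2000 holds 1500, and 1500 holds the data)

The extra lookup in load_indirect is exactly the second memory access that makes indirect addressing slower but more flexible.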
Q7. Difference between micro instructions and micro programs.

Ans- Micro Instructions


 Definition: A micro instruction is the smallest unit of control that directs the CPU to perform a specific
operation. These instructions directly control individual hardware components of the CPU, such as registers,
the ALU (Arithmetic Logic Unit), or buses.
 Function: Each micro instruction typically triggers a single, basic operation, like moving data between
registers, performing arithmetic operations, or controlling data flow within the CPU.
 Example: A micro instruction might control the ALU to add two numbers, or it could activate a particular
register to store data.
 Execution: Micro instructions are executed one at a time to complete the low-level operations needed to
execute machine-level instructions.

Micro Programs

 Definition: A micro program is a series or sequence of micro instructions that together define how to
execute a machine-level instruction. Micro programs are used in microprogrammed control units, where
each machine instruction is broken down into multiple micro instructions.
 Function: The micro program specifies the steps (composed of micro instructions) needed to execute a
particular high-level machine instruction like ADD, SUB, or LOAD. Each step is controlled by a micro
instruction.
 Example: A micro program to execute an ADD instruction might include fetching the operands, performing
the addition, and storing the result.
 Execution: Micro programs are stored in the control memory and are fetched and executed in sequence to
carry out the machine-level instruction.
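As a rough sketch, a micro program can be pictured as an ordered list of micro instructions held in control memory; the Python model below is hypothetical, and the micro-operation names are invented for illustration.

# Control memory: each machine instruction maps to a sequence of micro instructions.
control_memory = {
    "ADD":  ["fetch_operand_1", "fetch_operand_2", "alu_add", "write_back"],
    "LOAD": ["compute_address", "read_memory", "write_back"],
}

def execute(machine_instruction):
    # Fetch the micro program for this instruction and run its micro-ops in order.
    for micro_instruction in control_memory[machine_instruction]:
        print("micro-op:", micro_instruction)   # each micro-op drives one basic hardware step

execute("ADD")   # prints the four micro-ops that make up the ADD micro program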

Q8. Define computer registers.

Ans- Computer Registers


A computer register is a small, fast storage location within the CPU (Central Processing Unit) used to
temporarily store data, instructions, or addresses that are needed quickly during program execution.
Registers are essential for the efficient operation of the CPU because they allow rapid access to frequently
used values.

Characteristics of Computer Registers:

1. Speed: Registers are much faster than memory (RAM), as they are located within the CPU.
2. Size: Registers are typically very small in size, usually 32 or 64 bits, depending on the architecture of the CPU.
3. Purpose: Registers store intermediate data, instructions, and addresses during the execution of programs.

Q9. Explain interrupt driven I/O in detail.

Ans- Interrupt-Driven I/O (Input/Output)


Interrupt-driven I/O is a method used by computers to manage input/output operations efficiently. Instead
of continuously polling devices (checking if they are ready), the CPU is interrupted by the I/O device when
it is ready to transfer data. This approach enhances system performance by allowing the CPU to perform
other tasks while waiting for I/O operations to complete.
How Interrupt-Driven I/O Works

1. Initiating the I/O Operation:


o The CPU sends a command to an I/O device (e.g., disk, keyboard, printer) to start an operation, such
as reading or writing data.
o The I/O device begins processing the request (e.g., reading data from a disk or accepting input from a keyboard).
2. CPU Continues Other Tasks:
o Instead of waiting for the I/O device to complete the operation, the CPU can continue executing other instructions. It does not need to constantly check the device's status (as in polling).
3. Interrupt Request (IRQ):
o When the I/O device is ready to transfer data (e.g., input from a keyboard or output to a printer), it sends an interrupt signal to the CPU. This signal indicates that the I/O operation is complete or that it requires attention.
4. Interrupt Handling:
o Upon receiving the interrupt, the CPU temporarily suspends its current execution, saves its state (so
it can resume where it left off), and jumps to an interrupt handler or Interrupt Service Routine (ISR).
o The ISR is a special piece of code designed to handle specific interrupts. In this case, it will process
the data from the I/O device or respond to the device's needs.
5. Processing the I/O:
o The interrupt handler retrieves or sends the data to/from the I/O device. For example, if the
interrupt was triggered by a keyboard, the ISR might read the key press and store it in memory.
6. Resuming Normal Operations:
o After the I/O operation is handled, the ISR returns control to the main program, and the CPU
continues from where it was interrupted.
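Steps 3-6 above can be sketched in Python with a dictionary of ISRs indexed by interrupt number, standing in for an interrupt vector table; the IRQ numbers and handler behaviour below are hypothetical.

# A dictionary standing in for an Interrupt Vector Table: IRQ number -> ISR.
interrupt_vector_table = {
    1: lambda data: print("keyboard ISR: got key", data),
    2: lambda data: print("printer ISR: print job finished", data),
}

def raise_interrupt(irq, data):
    # The CPU would save its state, look up the handler, run the ISR, then resume.
    isr = interrupt_vector_table[irq]
    isr(data)

# The main program keeps doing useful work; devices interrupt only when ready.
for step in range(3):
    print("CPU working on step", step)
raise_interrupt(1, "A")          # keyboard becomes ready and interrupts the CPU
raise_interrupt(2, "job-7")      # printer completes and interrupts the CPU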

Advantages of Interrupt-Driven I/O:

1. Efficient CPU Usage:


o The CPU is not wasting time checking devices continuously. It only reacts when necessary (when an
interrupt occurs), allowing it to perform other tasks in the meantime.
2. Faster Response Time:
o Since the CPU is immediately notified when the I/O device needs attention, the response time for
handling I/O operations is faster compared to polling.
3. Multiple Devices:
o The system can handle multiple I/O devices, each generating interrupts at different times. Interrupts
allow the CPU to prioritize and manage these devices effectively.
4. Lower CPU Load:
o Interrupt-driven I/O reduces the workload on the CPU because it eliminates the need for constant
polling, freeing up CPU resources for other tasks.
Disadvantages of Interrupt-Driven I/O:

1. Overhead:
o Every interrupt requires the CPU to save its current state, execute the ISR, and restore its state. If
interrupts are frequent, this can create significant overhead.
2. Complexity:
o Managing interrupts can be complex, especially when dealing with multiple devices and prioritizing
interrupts. Systems need to have proper interrupt handling mechanisms, such as Interrupt Priority
and Interrupt Vector Tables.
3. Interrupt Latency:
o There might be a delay (latency) in handling the interrupt, especially if the CPU is already processing
another interrupt or if the interrupt handling mechanism is not optimized.

Example of Interrupt-Driven I/O:

Consider a system with a keyboard and a printer.

1. Keyboard Input: The CPU sends a request to the keyboard to detect a key press. While the CPU is
doing other tasks, the keyboard monitors the keys. When a key is pressed, the keyboard sends an
interrupt signal to the CPU. The CPU then stops its current tasks, executes the appropriate interrupt
service routine, and stores the keypress in memory.
2. Printer Output: Similarly, if the CPU wants to print something, it sends data to the printer and
signals that it's ready. The CPU can continue processing other tasks. When the printer finishes
printing, it sends an interrupt to notify the CPU that it is done, prompting the CPU to handle any
further printing tasks.

Key Components in Interrupt-Driven I/O:

 Interrupt Request (IRQ): A signal sent from an I/O device to the CPU indicating that it needs attention.
 Interrupt Service Routine (ISR): A function or set of instructions that the CPU executes in response to an
interrupt.
 Interrupt Vector Table (IVT): A table that stores the memory address of the ISRs for each interrupt type.
 Interrupt Controller: A hardware component that manages interrupt requests and their priorities.

Q10. Define Interrupt. Explain its various types.

Ans- Interrupt
An interrupt is a mechanism that allows a CPU to temporarily halt its current execution and divert its
attention to a different task, usually in response to an external event or condition. When an interrupt occurs,
the CPU stops executing the current instructions, saves its state, and begins executing an Interrupt Service
Routine (ISR) to handle the interrupt. After the ISR finishes, the CPU resumes executing the program from
where it left off.

Interrupts are essential for efficient system operation because they allow the CPU to respond to events (like
input from a keyboard or arrival of data from a network) without constantly checking the status of devices (a
process known as polling).

1. Hardware Interrupts

Definition: Hardware interrupts are signals generated by external hardware devices or peripherals to get the
attention of the CPU. These interrupts are usually caused by external events, like user input or completion of
an I/O operation, and they help the CPU respond promptly to such events.
 Origin: Generated by physical devices such as keyboards, mice, printers, or network cards.
 Function: Hardware interrupts allow the CPU to temporarily stop executing its current task and give
attention to an external event (e.g., reading data from a keyboard or receiving data from a network).
 Types:
o Maskable Interrupts (IRQ): Interrupts that can be delayed or ignored by the CPU if a higher-priority
task is being executed. These are used for non-critical events.
 Example: A printer sending an interrupt to notify that it has finished printing.
o Non-Maskable Interrupts (NMI): Interrupts that cannot be ignored or delayed, as they signal critical
conditions requiring immediate attention, such as hardware failures.
 Example: A memory failure or power loss triggering an NMI to alert the CPU to stop and take
necessary actions.
 Process: When a hardware interrupt occurs, the CPU halts its current operation, saves its state, and
runs an Interrupt Service Routine (ISR) to process the interrupt. Afterward, it resumes its normal
operation.

2. Software Interrupts

Definition: Software interrupts are generated by programs or software applications running on the CPU.
These interrupts are usually used to request services from the operating system or to handle exceptional
conditions during program execution.

 Origin: Generated by instructions in the program, typically to request system-level services or to handle errors.
 Function: Software interrupts allow a program to communicate with the operating system or to deal
with special conditions like errors or system calls.
 Types:
o System Calls: Software interrupts used to request services from the operating system, such as file
operations, memory allocation, or process control.
 Example: A program might use a system call to read from a file or send data to a printer.
o Traps: These are special software interrupts triggered by specific instructions in the program (or by
errors like dividing by zero) that cause the CPU to jump to a specific error-handling routine.
 Example: A trap might be triggered by a program to request a division operation that causes
an exception (e.g., divide by zero).
o Breakpoints: Used by debuggers to pause program execution at specific points to inspect or modify
program state.
 Process: When a software interrupt occurs, the CPU stops the current program execution, saves its
state, and executes the corresponding interrupt service routine to handle the interrupt. After handling
the interrupt, the CPU resumes normal program execution.

Flowchart of Interrupt Handling Mechanism


The interrupt handling mechanism proceeds through the following steps:
Step 1: Whenever an interrupt is raised, it may be either an I/O interrupt or a system interrupt.
Step 2: The current state, comprising the registers and the program counter, is stored in order to preserve the state of the interrupted process.
Step 3: The interrupt and its handler are identified through the interrupt vector table in the processor.
Step 4: Control then shifts to the interrupt handler, which is a routine located in kernel space.
Step 5: The Interrupt Service Routine (ISR) performs the specific tasks needed to service the interrupt.
Step 6: The saved state is restored so that the interrupted process can resume from the point where it stopped.
Step 7: Control is then shifted back to the interrupted process, and normal execution continues.

Q11. Explain the role of register transfer in computer architecture.

Ans- Role of Register Transfer in Computer Architecture


Register Transfer is a fundamental concept in computer architecture. It refers to the process of moving data
between registers or between a register and memory during the execution of an instruction. Registers are
small, fast storage locations within the CPU that temporarily hold data and addresses that are being
processed. Register transfer plays a crucial role in the operation of the processor, as it facilitates the flow of
data during computation.

What is Register Transfer?

A register transfer involves copying or moving data between registers within the CPU or between the CPU
and memory. It is represented in the form of a register transfer operation, which is written as:

R1 ← R2

This means "the contents of register R2 are transferred to register R1."

In a more general form, a register transfer operation can involve several components, including:

 Registers: Temporary storage locations in the CPU.


 Memory: External storage that may involve reading or writing data.
 Arithmetic/Logic Unit (ALU): A part of the CPU that performs calculations and logic operations.
 Control Unit (CU): Directs the operation of the processor and generates the necessary control signals for
register transfer.

Role of Register Transfer in Computer Architecture

1. Data Movement:
o The primary role of register transfer is to facilitate data movement between various parts of the
CPU. Registers are faster than memory, so transferring data between them allows for rapid
computation.
o For example, transferring data from one register to another can occur as part of the execution of an
arithmetic operation or data manipulation.
2. Execution of Instructions:
o Instructions in computer systems are executed through a series of register transfers. Each
instruction typically involves transferring data between the registers and memory, and between
registers and the ALU.
o For example, the instruction "ADD R1, R2, R3" (which adds the contents of R2 and R3 and stores the
result in R1) would involve the following register transfers:
1. Load data from R2 and R3.
2. Perform the addition in the ALU.
3. Store the result in R1.
3. Efficient Data Processing:
o Registers hold data that is immediately needed by the processor. By moving data between registers
quickly, the processor can perform complex operations faster.
o Register transfer minimizes the need for slower memory accesses, improving system performance.
4. Control of the Processor:
o The Control Unit (CU) generates the necessary control signals to direct register transfers. These
signals define which registers should be used for the operation, and whether data should be read
from or written to memory or an I/O device.
o For example, when executing a load or store instruction, the CU controls the transfer of data from
memory to a register, or from a register to memory.
5. Support for Arithmetic and Logic Operations:
o Arithmetic operations (like addition, subtraction) and logical operations (like AND, OR) are typically
performed by the ALU, but before these operations can happen, the operands are loaded into
registers.
o After the operation is completed, the result is transferred to a register, ready to be used for the next
operation or stored in memory.
6. Pipelining and Parallelism:
o Pipelining allows multiple instructions to be in different stages of execution simultaneously. Register
transfer plays a key role here, as the outputs of one stage (e.g., the result of an ALU operation) must
be transferred to the next stage in the pipeline.
o Parallel execution of tasks also relies on the efficient transfer of data between multiple registers and
functional units.
7. Address Calculation:
o For many instructions, such as load and store, an address must be computed, often using an index
register or a base register.
o The register transfer operations involved in address calculation typically use the contents of one or
more registers, perform arithmetic operations (like addition), and transfer the result to another
register.
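For instance, the register transfers behind ADD R1, R2, R3 described above can be written out as explicit micro-operations; the Python sketch below models the register file as a dictionary and is only an illustration, not a hardware description.

registers = {"R1": 0, "R2": 5, "R3": 9}

# Each line mirrors one register transfer, e.g. A <- R2, B <- R3, R1 <- A + B.
bus_a = registers["R2"]          # transfer R2 onto bus A (first ALU input)
bus_b = registers["R3"]          # transfer R3 onto bus B (second ALU input)
alu_result = bus_a + bus_b       # the ALU performs the addition
registers["R1"] = alu_result     # write-back: R1 <- result of the ALU

print(registers["R1"])           # prints: 14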

Q12. What is cache memory? Describe its operations in brief.
Ans- Cache Memory
Cache memory is a small, high-speed memory located close to the CPU, designed to store frequently
accessed data and instructions temporarily. It acts as a buffer between the CPU and the main memory (RAM), speeding up data retrieval for the CPU by reducing the time needed to access data from slower main memory.

Key Operations of Cache Memory

1. Cache Hit: When the CPU needs to access data, it first checks the cache. If the data is found in the cache (a cache hit), it is accessed directly, which is much faster than accessing main memory.
2. Cache Miss: If the data is not found in the cache (a cache miss), the CPU fetches the data from the
main memory and also stores a copy of it in the cache. This way, the next time the CPU needs the
same data, it can be accessed directly from the cache.
3. Data Replacement (Cache Replacement Policy)

 When the cache is full, a replacement policy decides which data to replace to make room for new
data. Common replacement policies include:
o Least Recently Used (LRU): Replaces data that hasn’t been used for the longest time.
o First-In-First-Out (FIFO): Replaces data in the order it was added.
o Random Replacement: Randomly selects data to replace.

4. Write Policies

 Determines how data written to the cache is synchronized with main memory:
o Write-Through: Data is written to both the cache and main memory simultaneously,
ensuring consistency.
o Write-Back: Data is written only to the cache initially, with changes written to main memory
later (e.g., when that cache line is replaced).
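The hit/miss behaviour and LRU replacement described above can be modelled in a few lines of Python; the cache capacity and the address trace are arbitrary choices made for this sketch.

from collections import OrderedDict

main_memory = {addr: addr * 10 for addr in range(100)}   # pretend RAM
cache = OrderedDict()                                     # ordered by recency of use
CAPACITY = 2

def read(addr):
    if addr in cache:                       # cache hit: serve from the fast cache
        cache.move_to_end(addr)             # mark as most recently used
        return cache[addr], "hit"
    value = main_memory[addr]               # cache miss: go to slower main memory
    cache[addr] = value                     # copy into the cache for next time
    if len(cache) > CAPACITY:
        cache.popitem(last=False)           # LRU replacement: evict the oldest entry
    return value, "miss"

for addr in [3, 7, 3, 9, 7]:
    print(addr, read(addr))                 # 3 miss, 7 miss, 3 hit, 9 miss (evicts 7), 7 miss

The final access to address 7 misses because 7 was the least recently used entry when 9 was brought in.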
UNIT-2
Q1. Discuss the following in brief:

i. General Register Organization

ii. Arithmetic Pipelining

iii. Array Processing

iv. Parallel Processing

v. Vector Processing

Ans- i. General Register Organization


General Register Organization refers to a CPU design where there is a set of general-purpose registers that
can be used by the processor to store temporary data during execution. In this organization, these registers
are accessible to the Arithmetic Logic Unit (ALU) and are used for various operations, like arithmetic
calculations, data storage, and data manipulation.

Key Points:

 General-Purpose Registers: These registers can hold data temporarily and are not restricted to specific
functions. Examples include registers like AX, BX, CX, and DX in x86 architecture.
 Flexibility: Programs can use these registers for multiple purposes, allowing greater flexibility and efficiency
in data handling.
 Direct Access: The CPU can access data in these registers much faster than accessing data from main
memory.
 Register-to-Register Operations: Operations are performed directly between registers, improving processing
speed.

Advantages of General Register Organization

 High Speed: Since data can be moved and processed directly between registers, it increases the speed of execution compared to memory-based operations.
 Flexibility: Any register can be used as a source or destination, providing flexibility in programming and efficient use of the register set.
 Efficient ALU Utilization: The ALU can perform operations on data stored in registers without needing to access slower main memory.
Consider R1 ← R2 + R3; the following functions are implemented within the CPU −

 MUX A Selector (SELA) − It places R2 onto bus A.
 MUX B Selector (SELB) − It places R3 onto bus B.
 ALU Operation Selector (OPR) − It selects the arithmetic addition (ADD).
 Decoder Destination Selector (SELD) − It transfers the result into R1.

 Registers (R1 - R7):


o There are seven general-purpose registers (R1 to R7) in this organization.
o These registers temporarily hold data and are used by the ALU for operations.
o Each register can be selected for input or output based on the control signals.
 Clock:
o The clock signal is used to synchronize the operations within the CPU.
o Registers update their values with each clock pulse, ensuring that operations proceed in a
coordinated manner.
 3×8 Decoder:
o The decoder receives a 3-bit control input (SELD) and activates one of the seven registers for
writing data.
o This is part of the Load mechanism, which determines which register is selected for storing
output data from the ALU.
 Multiplexers (MUX):
o Two multiplexers are used to select the source registers for the ALU operations.
o SELA and SELB are 3-bit control signals that determine which registers are connected to the
A bus and B bus respectively.
o By selecting different registers on the A and B buses, data from different registers can be fed
to the ALU for processing.
 A Bus and B Bus:
o The A bus and B bus carry the data from the selected registers to the ALU.
o The A and B buses allow data to flow from the selected registers to the ALU inputs for an
operation to be performed.
 Arithmetic Logic Unit (ALU):
o The ALU performs arithmetic (addition, subtraction, etc.) and logical (AND, OR, NOT, etc.)
operations on the data received from the A and B buses.
o The operation performed by the ALU is determined by a 5-bit Operation (OPR) control
signal.
 Output:
o The result of the ALU operation is sent to the selected register, determined by the SELD
signal, through the Load mechanism.
o This result can also be sent to other parts of the system if needed.
 Control Word:
o The control word consists of:
 3 bits for SELA: Selects the register connected to the A bus.
 3 bits for SELB: Selects the register connected to the B bus.
 3 bits for SELD: Selects the register where the ALU output will be stored.
 5 bits for OPR: Specifies the ALU operation to perform (e.g., addition, subtraction).

Operation of General Register Organization

1. The control word determines which registers to use and what operation to perform.
2. SELA and SELB select the registers to be used as input to the ALU through the A and B buses.
3. The ALU performs the specified operation (based on the OPR signal) on the data from these selected
registers.
4. The result of the ALU operation is stored in the register specified by the SELD signal.
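The control word described above (3 + 3 + 3 + 5 = 14 bits) can be packed and unpacked with simple shifts and masks; in this hypothetical Python sketch the field order and the register/opcode encodings are assumptions made only for illustration.

# Assumed layout: SELA (3 bits) | SELB (3 bits) | SELD (3 bits) | OPR (5 bits).
def encode_control_word(sela, selb, seld, opr):
    return (sela << 11) | (selb << 8) | (seld << 5) | opr

def decode_control_word(word):
    return {"SELA": (word >> 11) & 0b111,
            "SELB": (word >> 8) & 0b111,
            "SELD": (word >> 5) & 0b111,
            "OPR":  word & 0b11111}

# R1 <- R2 + R3: SELA selects R2, SELB selects R3, SELD selects R1, OPR = ADD (say 0b00010).
word = encode_control_word(sela=2, selb=3, seld=1, opr=0b00010)
print(format(word, "014b"), decode_control_word(word))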
ii. Arithmetic Pipelining

Arithmetic Pipelining is a technique used in processors to enhance performance by breaking down arithmetic operations into smaller stages that can be processed concurrently. Each stage in the pipeline completes a part of the overall operation, and as one stage finishes its task, the next stage starts. This enables the CPU to work on multiple operations simultaneously, increasing throughput and efficiency.

Key Characteristics:

 The pipeline stages might include tasks such as fetching operands, performing arithmetic operations, and
storing results.
 Pipelining improves the efficiency of executing instructions that involve multiple arithmetic operations.
 Each stage processes a different arithmetic operation simultaneously, improving the overall processing
speed.

Example: For a multiplication operation, the stages might be:

1. Fetching operands.
2. Partial multiplication.
3. Final result computation and storing the result.
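These stages can be mimicked with a minimal, hypothetical Python sketch in which each simulated clock cycle advances every stage, so several multiplications are in flight at once (the stage split and the data are invented for illustration).

operands = [(2, 3), (4, 5), (6, 7)]          # independent multiplications to perform
fetch, multiply, done = [], [], []

# Each loop iteration models one clock cycle; the stages advance in lock-step.
for cycle in range(len(operands) + 2):
    if multiply:                              # stage 3: store the finished result
        done.append(multiply.pop(0))
    if fetch:                                 # stage 2: perform the multiplication
        a, b = fetch.pop(0)
        multiply.append(a * b)
    if cycle < len(operands):                 # stage 1: fetch the next pair of operands
        fetch.append(operands[cycle])

print(done)                                   # prints: [6, 20, 42]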

iii. Array Processing

What is an Array Processor?

A processor that is used to perform different computations on a huge array of data is called an array processor. Other terms used for this processor are vector processor or multiprocessor. This processor performs only a single instruction at a time on an array of data. These processors work with huge data sets to execute computations, so they are mainly used to enhance the performance of computers.

1. Attached Array Processor:

To improve the performance of the host computer in numerical computational tasks, an auxiliary processor is attached to it.

Array Processor Architecture

An attached array processor has two interfaces:

1. An input/output interface to a common processor.
2. An interface with a local memory.

Here, the local memory interconnects with main memory. The host computer is a general-purpose computer, and the attached processor is a back-end machine driven by the host computer. The array processor is connected through an I/O controller to the computer, and the computer treats it as an external interface.
2. SIMD Array Processor:
This is a computer with multiple processing units operating in parallel. Both types of array processors manipulate vectors, but their internal organization is different.

A SIMD machine is a computer with multiple processing units operating in parallel. The processing units are synchronized to perform the same operation under the control of a common control unit, thus providing a single instruction stream, multiple data stream (SIMD) organization. A SIMD array processor contains a set of identical processing elements (PEs), each having a local memory M.

Advantages
The advantages of an array processor include the following.

 Array processors improve the overall instruction processing speed.
 These processors run asynchronously from the host CPU, so the overall capacity of the system is improved.
 These processors include their own local memory, which provides extra memory to the system. This is an important consideration for systems with a limited address space or physical memory.
 These processors perform computations on huge arrays of data.
 These are extremely powerful tools that help in handling problems with a high amount of parallelism.
 This processor includes a number of ALUs that permit all the array elements to be processed simultaneously.
 Generally, the I/O devices of this processor-array system are very efficient in supplying the required data to memory directly.
 The main advantage of using this processor with a range of sensors is a smaller footprint.
Applications
The applications of array processors include the following.
 This processor is used in medical and astronomy applications.
 These are very helpful in speech improvement.
 These are used in sonar and radar systems.
 These are applicable in anti-jamming, seismic exploration & wireless communication.
 This processor is connected to a general-purpose computer to improve the computer’s
performance within arithmetic computational tasks. So it attains high performance through parallel
processing by several functional units.

iv. Parallel Processing

Parallel Processing refers to the simultaneous execution of multiple processes or threads to perform
computations more efficiently. In parallel processing, tasks are divided into smaller sub-tasks that are
executed concurrently, either on multiple processors or cores, to achieve faster results.

Key Characteristics:

 Tasks are broken down into smaller sub-tasks that can run concurrently.
 Used in multi-core processors and distributed computing systems.
 Increases the speed and efficiency of computation by utilizing multiple processing units.
 Can be categorized into data parallelism (same operation on multiple data items) and task parallelism
(different operations on different tasks).

Example: In a multi-core CPU, different cores might execute different parts of a large program
simultaneously. In a distributed system, different computers might each handle a part of the computation.

v. Vector Processing

Vector Processing is a type of processing that allows the CPU to handle vectors (arrays of data) in a single
instruction, processing multiple data elements simultaneously. This is especially effective for tasks like
scientific calculations, signal processing, and machine learning, where operations on large data sets are
common.

Key Characteristics:

 The CPU uses special vector registers that can hold multiple data elements (a vector) at once.
 Operations like addition, multiplication, and other arithmetic functions can be performed on the entire
vector at once, speeding up processing.
 Vector processing is often implemented in SIMD (Single Instruction, Multiple Data) architectures, where a
single instruction operates on multiple data elements in parallel.

Example: A vector processor can add two arrays of numbers, element by element, in a single instruction,
significantly speeding up tasks like matrix multiplication or filtering.
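The element-by-element addition described here follows the same pattern exposed by array libraries such as NumPy, whose vectorized operations are often backed by SIMD hardware; the short Python sketch below is only an analogy, not a description of vector-register hardware.

import numpy as np

a = np.array([1, 2, 3, 4], dtype=np.int32)
b = np.array([10, 20, 30, 40], dtype=np.int32)

c = a + b            # one vectorized operation adds every element pair at once
print(c)             # prints: [11 22 33 44]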

Q2. Difference between RISC and CISC.

Ans- Difference between RISC and CISC


RISC (Reduced Instruction Set Computer) and CISC (Complex Instruction Set Computer) are two
different CPU architectures that define how instructions are executed within a processor. These architectures
focus on simplifying or complicating the instruction set to optimize processing speed and efficiency. Here
are the main differences between them:

1. Instruction Set Complexity

 RISC: The RISC architecture uses a small, highly optimized instruction set. Each instruction is designed to
execute in a single clock cycle. It aims for simplicity, with instructions that perform very simple operations.
o Example: Load, store, add, and subtract are typical operations in RISC.
 CISC: CISC processors, on the other hand, have a large and more complex instruction set. Each instruction
can perform multiple tasks, such as loading data from memory, performing an operation, and storing results,
all in one instruction.
o Example: The x86 architecture is a classic CISC example, where an instruction can load data and
perform an operation on it in one step.
2. Instruction Length

 RISC: In RISC, instructions are typically of fixed length, making them easier to decode and execute.
This uniformity allows for faster execution and easier pipelining.
 CISC: CISC instructions are variable in length, meaning some instructions can be quite long
(because they perform multiple operations in a single instruction). This can lead to more complex
decoding.

3. Execution Time

 RISC: Each instruction in RISC is designed to complete in a single clock cycle, which makes the
execution time predictable and typically faster for simpler tasks. This allows for efficient pipelining
and parallel execution.
 CISC: CISC instructions can take multiple clock cycles to execute because of their complexity and
the number of operations they perform in one instruction. However, CISC architectures can be more
efficient for tasks that require complex operations, as they reduce the number of instructions needed.

4. Use of Registers

 RISC: RISC architectures emphasize the use of a large number of general-purpose registers. Most
operations in RISC involve manipulating data in these registers, reducing the need for frequent
memory access.
 CISC: CISC architectures use fewer registers and often perform operations directly with memory.
This means that more memory accesses are required during execution, which can slow down
performance for certain tasks.

5. Code Density

 RISC: RISC programs typically require more instructions to perform a task because each instruction
performs a simpler operation. This may result in larger code size.
 CISC: CISC architectures generally have higher code density because each instruction can perform
multiple operations, so fewer instructions are needed. This can make CISC programs more compact.

6. Number of Instructions

 RISC:
o RISC uses a smaller set of instructions. The idea is that fewer instructions, executed more efficiently,
can achieve the same results as more complex instructions.
o RISC processors tend to rely on software for complex tasks, using simpler instructions to implement
these operations.
 CISC:
o CISC uses a large number of instructions, including complex ones that can perform multiple
operations in a single instruction.
o This reduces the need for multiple instructions to perform a task, which can reduce program size.
7. Compiler Design

 RISC:
o Compilers for RISC architectures must work harder to optimize code because the processor uses
simpler instructions. The compiler will translate high-level operations into a sequence of simpler
instructions.
o As a result, RISC systems rely heavily on software optimization to maximize performance.
 CISC:
o The compiler for CISC architectures has to handle a more complex set of instructions, as the
instructions themselves are capable of doing more work in one go.
o The complexity of the CISC instruction set can sometimes lead to more efficient code generation
from the compiler.

8. Power Consumption

 RISC:
o RISC processors tend to consume less power because they perform simpler operations per cycle, and
the design allows for a more efficient pipeline.
o Fewer complex instructions mean less overall work and, typically, lower power consumption.
 CISC:
o CISC processors tend to consume more power due to the complexity of their instruction set and the
need for more clock cycles to execute some instructions.

9. Hardware Complexity

 RISC:
o RISC processors are generally simpler in terms of hardware design. Since instructions are simple and
uniform in length, the hardware doesn't need to be as complex.
o The simplicity of the RISC architecture allows for easier pipeline design and higher performance in
terms of speed.
 CISC:
o CISC processors have more complex hardware because of the larger and more varied set of
instructions.
o The CPU needs more sophisticated hardware to decode and execute variable-length instructions,
leading to increased design complexity.

10. Example Architectures

 RISC: Some well-known RISC processors are the ARM, MIPS, and SPARC architectures.
 CISC: The x86 architecture, used in most personal computers, is a prominent example of CISC.

Q3. What is Booth's algorithm? Explain it in detail. Multiply 24 and -7 using Booth's algorithm.

Ans- Booth's Algorithm:


Booth's Algorithm is a multiplication algorithm used to multiply binary numbers in two's complement
representation. It was developed by Andrew D. Booth in 1951 and is particularly efficient when multiplying
numbers with both positive and negative values. The algorithm reduces the number of operations required
compared to the traditional multiplication approach.
Key Concepts:

 Two's complement representation: It is the most common method for representing negative numbers in
binary.
 Booth's algorithm uses the concept of bit-pair recoding to simplify multiplication.
 The idea is to look at pairs of bits (the current bit and the previous bit) and decide what operation to
perform (adding, subtracting, or doing nothing).

Hardware Implementation of Booth's Algorithm – The hardware implementation of the Booth algorithm requires a register configuration with an accumulator A, the multiplier register Q, an extra bit Q-1, and the multiplicand register M.

Advantages of Booth Algorithm

1. Efficient for Signed Multiplication: Handles both positive and negative numbers directly.
2. Reduces Operations: Skips operations by encoding sequences of 1s, reducing the number of additions/subtractions.
3. Works with Two's Complement: Designed to work well with standard signed number representation.
4. Speeds Up Multiplication: Especially faster for multipliers with large blocks of 1s.
5. Simplifies Hardware: Reduces complexity in hardware circuits for multiplication.
6. Less Storage Needed: Requires fewer intermediate calculations, saving storage space.

Steps in Booth's Algorithm:

1. Initialize:
o The two numbers to be multiplied are represented in binary in two's complement form.
o Let M be the multiplicand (first number) and Q be the multiplier (second number).
o Initialize A (Accumulator) to 0 and Q-1 (a "Q-minus-one" bit, an extra bit for examining bit pairs) to 0.
o The product will be stored in A and Q together.

2. Booth's Recoding:
o Look at the pair of bits: the current bit of Q (Q0) and the previous bit (Q-1).
o Based on the pair, decide whether to:
 Add M to A (if Q0 and Q-1 are 01).
 Subtract M from A (if Q0 and Q-1 are 10).
 Do nothing (if Q0 and Q-1 are 00 or 11).

3. Shift:
o After each step, perform an arithmetic right shift on the combined A and Q registers.
o The number of shifts is equal to the number of bits in the multiplier (usually the same as the multiplicand).
4. Repeat:
o Repeat the process for the number of bits in the multiplier.

Example: Multiply 24 and -7 Using Booth's Algorithm


We will use 8-bit representation for simplicity. The multiplicand is 24 and the multiplier is -7.
Step 1: Represent the numbers in binary (two's complement)
 24 in binary (8-bit): 00011000
 -7 in binary (8-bit): First, represent 7 in binary: 00000111. Then, take the two's complement:
o Invert the bits: 11111000
o Add 1: 11111001 (This is -7 in two's complement).
Step 2: Initialize registers
 Let the multiplicand M = 24 = 00011000 (8-bit).
 Let the multiplier Q = -7 = 11111001 (8-bit).
 Initialize Accumulator (A) = 00000000 (8-bit).
 Set Q-1 = 0 (additional bit).
The initial setup is as follows:
A = 00000000
Q = 11111001
Q-1 = 0

Step 3: Apply Booth's Algorithm


We need to perform 8 steps since we are using 8-bit numbers.

Step 1: Examine the pair (Q0, Q-1) = (1, 0)

 Operation: Subtract M from A
 A = A - M → 00000000 - 00011000 → 11101000
 Arithmetic right shift A and Q:
A = 11110100, Q = 01111100, Q-1 = 1

Step 2: Examine the pair (Q0, Q-1) = (0, 1)

 Operation: Add M to A
 A = A + M → 11110100 + 00011000 → 00001100
 Arithmetic right shift A and Q:
A = 00000110, Q = 00111110, Q-1 = 0

Step 3: Examine the pair (Q0, Q-1) = (0, 0)

 Operation: Do nothing
 Arithmetic right shift A and Q:
A = 00000011, Q = 00011111, Q-1 = 0

Step 4: Examine the pair (Q0, Q-1) = (1, 0)

 Operation: Subtract M from A
 A = A - M → 00000011 - 00011000 → 11101011
 Arithmetic right shift A and Q:
A = 11110101, Q = 10001111, Q-1 = 1

Step 5: Examine the pair (Q0, Q-1) = (1, 1)

 Operation: Do nothing
 Arithmetic right shift A and Q:
A = 11111010, Q = 11000111, Q-1 = 1

Step 6: Examine the pair (Q0, Q-1) = (1, 1)

 Operation: Do nothing
 Arithmetic right shift A and Q:
A = 11111101, Q = 01100011, Q-1 = 1

Step 7: Examine the pair (Q0, Q-1) = (1, 1)

 Operation: Do nothing
 Arithmetic right shift A and Q:
A = 11111110, Q = 10110001, Q-1 = 1

Step 8: Examine the pair (Q0, Q-1) = (1, 1)

 Operation: Do nothing
 Arithmetic right shift A and Q:
A = 11111111, Q = 01011000, Q-1 = 1

Step 9: Final product

After completing the 8 steps, the product is stored in the combined A and Q registers.
 The final value of A and Q is 11111111 01011000, which as a 16-bit two's complement number is -168.
Thus, the result of multiplying 24 and -7 is -168.
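To see the whole procedure in one place, here is a minimal Python sketch (my illustration, not part of the
original notes) that simulates the register-level steps above and reproduces 24 × -7 = -168. The function
name booth_multiply and the bit-width parameter n are illustrative choices, not a standard API.

# Minimal simulation of Booth's algorithm on n-bit two's complement numbers.
def booth_multiply(m, q, n=8):
    mask = (1 << n) - 1                  # n-bit mask
    M, Q = m & mask, q & mask            # multiplicand and multiplier registers
    A, Q_1 = 0, 0                        # accumulator and the extra Q-1 bit
    for _ in range(n):
        pair = (Q & 1, Q_1)
        if pair == (1, 0):               # 10 -> A = A - M
            A = (A - M) & mask
        elif pair == (0, 1):             # 01 -> A = A + M
            A = (A + M) & mask
        # Arithmetic right shift of the combined A:Q (sign bit of A is kept).
        combined = (A << n) | Q
        sign = A >> (n - 1)
        Q_1 = combined & 1
        combined = (combined >> 1) | (sign << (2 * n - 1))
        A, Q = (combined >> n) & mask, combined & mask
    product = (A << n) | Q               # 2n-bit result in A:Q
    if product >> (2 * n - 1):           # interpret as two's complement
        product -= 1 << (2 * n)
    return product

print(booth_multiply(24, -7))            # prints -168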
Q4. What do you mean by addressing modes? Explain
various modes with the help of examples.

Ans- What is Addressing Mode?


Addressing modes are the techniques used in computer architecture to specify where the operand (data) for
an instruction is located. The operand could be in a register, in memory, or it could be specified directly in
the instruction itself. Addressing modes determine how the processor will access the data required for an
operation.

In simpler terms, addressing modes tell the CPU how to find the data it needs to execute an instruction.
There are different types of addressing modes, each providing different ways to locate operands.

1. Immediate Addressing Mode

 Definition: In this mode, the operand (the data to be used) is directly provided in the instruction itself. The
value is a constant, and the instruction does not reference any memory or register.
 Example: MOV A, #5
o This instruction moves the immediate value 5 directly into register A.
o Here, #5 is the operand (the data), which is immediately available in the instruction. This mode does
not require memory access because the data is part of the instruction.

Use Case: Useful for operations that require constants like 5, 10, etc.

2. Register Addressing Mode

 Definition: In register addressing mode, the operand is stored in a register. The instruction specifies the
register that holds the data to be used.
 Example: MOV A, B
o This means "move the contents of register B into register A".
o Here, A and B are registers, and the operand is found in register B.

Use Case: Efficient for operations on data already loaded into registers.

3. Direct Addressing Mode

 Definition: The operand’s memory address is given explicitly in the instruction. The instruction specifies the
exact memory location where the data resides.
 Example: MOV A, 1000H
o This means "move the contents from memory location 1000H into register A".
o 1000H is the direct memory address.

Use Case: Ideal when the operand is located in a specific memory location that is known in advance.

4. Indirect Addressing Mode

 Definition: In indirect addressing mode, the instruction specifies a register or memory location that contains
the address of the operand. The operand is located at the address specified by the register or memory
location.
 Example: MOV A, [BX]
o This means "move the contents from the memory location whose address is stored in register BX
into register A".
o The value in BX is treated as a pointer to the memory location that holds the actual data.

Use Case: Useful when the operand's address is not fixed and may change dynamically.

5. Indexed Addressing Mode

 Definition: The address of the operand is calculated by adding a constant (index or offset) to a base address,
which is typically stored in a register. This mode is useful when working with arrays or tables.
 Example: MOV A, [BX + 5]
o This means "move the contents from the memory location at address BX + 5 into register A".
o Here, BX is the base address stored in a register, and 5 is the offset added to the base to find the
actual operand.

Use Case: Commonly used when accessing elements of an array or data structure where the address of each
element is calculated by adding an offset.

6. Register Indirect Addressing Mode

 Definition: The address of the operand is stored in a register, and the operand is located at the memory
address specified by the contents of the register. It’s similar to indirect addressing, but here, the address
pointer is a register.
 Example: MOV A, [SP]
o This means "move the contents from the memory location whose address is in the stack pointer
register (SP) into register A".
o The operand is at the memory address pointed to by the SP register.
Use Case: Used for operations where a register points to a memory address, like when working with the
stack.

7. Relative Addressing Mode

 Definition: In this mode, the address of the operand is computed relative to the current position of the
program counter (PC). This is typically used in branch and jump instructions.
 Example: JMP [PC + 5]
o This means "jump to the instruction located 5 bytes ahead of the current instruction".
o The operand's address is calculated by adding an offset (+5) to the current value of the program
counter (PC).

Use Case: Commonly used in conditional branches and loops, where the next instruction to execute depends
on the current program counter.

8. Base Register Addressing Mode

 Definition: The address of the operand is calculated by adding a base register value (often used to reference
arrays or data structures) and an offset (immediate value or index). The base register typically holds a base
address, and the operand's address is calculated by adding an offset to this base address.
 Example: MOV A, [BX + SI]
o This means "move the contents from the memory location at address BX + SI into register A".
o Here, BX is the base register, and SI is the index register used to calculate the operand's memory
address.

Use Case: Useful when accessing data structures like arrays, where the base register points to the beginning
of the data, and the index register provides the offset.
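As a quick recap of how these modes differ, here is a small, hypothetical Python sketch (my addition, not
from the original notes) that resolves an operand for a few of the modes above, treating memory as a list
and the registers BX and SI as a dictionary; the mode names and values are illustrative only.

# Hypothetical operand resolution for a few addressing modes.
memory = [0] * 16
registers = {"BX": 4, "SI": 2}
memory[4] = 99          # value stored at the address held in BX
memory[9] = 77          # value stored at address BX + 5

def operand(mode, value=None):
    if mode == "immediate":              # the constant itself is the operand
        return value
    if mode == "register":               # the operand is in a register
        return registers[value]
    if mode == "direct":                 # value is an explicit memory address
        return memory[value]
    if mode == "register_indirect":      # the register holds the operand's address
        return memory[registers[value]]
    if mode == "indexed":                # base register + offset
        base, offset = value
        return memory[registers[base] + offset]
    raise ValueError("unknown mode")

print(operand("immediate", 5))               # 5
print(operand("register_indirect", "BX"))    # 99
print(operand("indexed", ("BX", 5)))         # 77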
UNIT-3
Q1. Write short note on Floating point representation.

Ans-. Floating Point representation

Floating point representation is a method of representing real numbers (including very large and very
small numbers) in a computer's memory. Unlike fixed-point representation, where the decimal point is fixed
in place, floating point allows the decimal point to "float," enabling the representation of a much wider range
of values.

In floating point, numbers are expressed in the form of:

Number = Sign × Mantissa × Base^Exponent

Where:

 Sign indicates whether the number is positive or negative.


 Mantissa (or significand) represents the significant digits of the number.
 Exponent represents the power of the base (usually 2 in binary systems) that scales the mantissa.

Components of Floating Point Representation:

1. Sign bit: Determines whether the number is positive (0) or negative (1).
2. Exponent: A binary number that represents the scale factor for the mantissa, typically with a bias to allow
both positive and negative exponents.
3. Mantissa (or Fraction): Represents the precision bits of the number, often normalized so that the leading
digit is assumed to be 1.

Standard Format:

 Single Precision (32-bit): 1 bit for the sign, 8 bits for the exponent, and 23 bits for the mantissa.
 Double Precision (64-bit): 1 bit for the sign, 11 bits for the exponent, and 52 bits for the mantissa.

Example:

The number −6.75 in binary floating point (using 32-bit representation) is stored as:

 Sign bit = 1 (negative)
 Exponent = 2 (with a bias of 127, the stored exponent is 2 + 127 = 129, or 10000001 in binary)
 Mantissa = 1.1011 (the normalized binary value of 6.75, since 6.75 = 110.11 in binary = 1.1011 × 2^2)
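As a quick check of this layout, here is a short Python sketch (my illustration, not part of the original notes)
that uses the standard struct module to show the actual 32-bit pattern stored for −6.75.

import struct

# Pack -6.75 as an IEEE 754 single-precision float and inspect its bit fields.
bits = struct.unpack(">I", struct.pack(">f", -6.75))[0]
sign     = bits >> 31              # 1 sign bit
exponent = (bits >> 23) & 0xFF     # 8 exponent bits (biased by 127)
mantissa = bits & 0x7FFFFF         # 23 fraction bits

print(f"{bits:032b}")              # 11000000110110000000000000000000
print(sign)                        # 1   -> negative
print(exponent)                    # 129 -> actual exponent 129 - 127 = 2
print(f"{mantissa:023b}")          # 10110000000000000000000 (fraction of 1.1011)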

Advantages:

 Wide Range: Can represent both very large and very small numbers (e.g., about 10^-308 to 10^308 in
double precision).
 Precision: Allows for high precision in calculations with real numbers.
Limitations:

 Precision Loss: Floating point numbers have limited precision, which may lead to rounding errors.
 Complexity: Floating point arithmetic is more computationally expensive than integer arithmetic.

Q2. Distinguish between Fixed point and Floating point


representation.

Ans- The differences between fixed point and floating point representation are as follows:
Fixed Point Representation:

1. Definition: Represents numbers with a fixed number of digits before and after the decimal point.
2. Precision: Limited precision, as the position of the decimal point is fixed.
3. Range: Limited range because it can only represent numbers within a specific interval.
4. Usage: Commonly used in applications where speed and simplicity are important, and the range of
values is known (e.g., embedded systems, financial calculations).
5. Arithmetic Operations: Faster and simpler since the position of the decimal point is fixed.

Floating Point Representation:

1. Definition: Represents numbers with a floating (variable) number of digits before and after the
decimal point using a format that includes a significand (or mantissa) and an exponent.
2. Precision: Higher precision, as it can represent a wider range of values with a varying number of
significant digits.
3. Range: Much larger range, capable of representing very small and very large numbers.
4. Usage: Used in scientific and engineering applications where a wide range of values and high
precision are required (e.g., simulations, graphics, computations).
5. Arithmetic Operations: More complex and slower due to the need for adjusting the exponent and
significand during calculations.
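To make the trade-off concrete, here is a short Python sketch (an illustration I have added, with an assumed
scale factor of 1/256 for the fixed-point format) contrasting a fixed-point encoding with the built-in floating
point type.

# Fixed point: store value * 256 as an integer (8 fractional bits, fixed scale).
SCALE = 256
fixed = round(3.14159 * SCALE)        # 804
print(fixed / SCALE)                  # 3.140625 -> limited, fixed precision

# Floating point: mantissa + exponent, so the "point" can float.
print(3.14159)                        # 3.14159 (about 15-16 significant digits in double precision)
print(3.14159e-300 * 1e10)            # very small and very large magnitudes remain representable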
UNIT-4
Q1.Write short note on:-

i. Synchronous data transfer

ii. Asynchronous data transfer

iii. Serial communication

Ans- i. Synchronous Data Transfer


Synchronous Data Transfer is a method of transferring data between two devices where both devices share
a common clock signal. In this system, data is sent in a continuous stream, with each bit being transferred at
a specific time interval synchronized to the clock. Both the sender and the receiver are aware of the timing,
ensuring that the data is read correctly.

Key Characteristics:

 Data transfer occurs at regular intervals determined by a shared clock signal.


 The sender and receiver are synchronized to the same clock, reducing the chances of errors.
 It is faster and more efficient than asynchronous transfer because the clock signal ensures the timing is
precise.

Example: In a synchronous serial communication (like I2C), the data and clock lines are used to transmit
information, and both the sender and receiver are synchronized to the clock signal.

ii. Asynchronous Data Transfer

Asynchronous Data Transfer is a type of data transmission where data is sent without a shared clock
signal between the sender and receiver. Instead, each byte of data is framed with start and stop bits to
indicate the beginning and end of the transmission. The sender and receiver operate independently, and
timing is determined by the start and stop bits.

Here's how asynchronous data transfer works:

1. Start Bit: When a device is ready to send a byte of data, it first sends a start bit to indicate the
beginning of the transmission. The start bit is usually a transition from a high voltage to a low voltage (0).
2. Data Bits: The actual data bits follow the start bit. These bits represent the byte of data being
transmitted. The number of data bits can vary (usually 7, 8, or 9 bits).
3. Parity Bit (Optional): An optional parity bit may be included for error-checking purposes. This bit
can be set to make the number of 1s in the byte either even or odd.
4. Stop Bit: After the data bits (and optional parity bit), a stop bit is sent to indicate the end of the
transmission. The stop bit is usually a transition from a low voltage to a high voltage (1). There can
be one or more stop bits.
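The framing described above can be sketched in a few lines of Python (an illustration I have added; real
UART hardware shifts these bits out one at a time at a fixed baud rate):

# Build an asynchronous frame for one byte: start bit, 8 data bits (LSB first),
# even parity bit, one stop bit. The idle line level is 1.
def make_frame(byte):
    data_bits = [(byte >> i) & 1 for i in range(8)]   # LSB transmitted first
    parity = sum(data_bits) % 2                       # even parity: total number of 1s becomes even
    return [0] + data_bits + [parity] + [1]           # start + data + parity + stop

print(make_frame(0x41))   # [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1] for ASCII 'A'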

Pros and Cons of Asynchronous Data Transfer:

 Pros:
o Flexibility: Devices do not need to share a clock, making asynchronous communication more flexible
and easier to implement in certain environments.
o Simple: Typically requires less complex hardware and can be used for communication over long
distances with minimal overhead.
o Efficient for low-speed communication: Ideal for low-to-moderate speed data transfers like
keyboard inputs, serial communication (e.g., RS-232), etc.
 Cons:
o Lower Data Transfer Rate: Asynchronous communication tends to be slower than synchronous
communication due to the overhead of start and stop bits.
o Potential for Timing Errors: If the baud rate is not properly synchronized between sender and
receiver, timing mismatches may cause data corruption.

Example: An example of asynchronous communication is the RS-232 standard used in serial ports (like
those used in older computer systems) where the data is transmitted one byte at a time, with a start bit and a
stop bit.

iii. Serial Communication


Serial Communication is a method of transmitting data one bit at a time over a single channel or wire. It’s
commonly used in applications where data needs to be transferred over long distances or between devices
with limited data lines, like microcontrollers, sensors, and computers.
In serial communication, data bits are transmitted one after another, following a specific order, starting
from the most significant or least significant bit, depending on the protocol. This type of communication is
often preferred in systems where the simplicity and reduced wiring requirements are more critical than the
speed of data transfer, as it is generally slower than parallel communication for short-distance data
transfer. However, for long distances, serial communication is more reliable and less prone to signal
crosstalk or interference.
Modes of Serial Communication Interface (SCI)
SCI can be classified into many types based on its implementation and their usage:
1. Simplex
2. Half Duplex
3. Full Duplex
1.Simplex SCI
Simplex communication involves one-way transmission of signals, with a clear sender and receiver
but no ability for the receiver to reply. It is like a one-lane road, allowing efficient and straightforward
communication. The diagram below shows how simplex works.
Components
1. Sender/Transmitter: The device or
module responsible for sending the data.
2. Receiver: The device or module that
receives and processes the data.
3. Communication Medium: The
physical or wireless medium through
which the data is transmitted, such as
cables, fiber optics, or radio waves.
Working of Simplex
In simplex communication only one direction of transmission is possible: from the sender to the receiver.
The receiver can only read the data; it cannot send anything back, so reverse communication is not
possible in this mode.
2.Half Duplex
This mode uses a single communication channel for both transmission and reception, but not
simultaneously. Devices must take turns to send and receive data. The diagram below shows how half
duplex works.

Components
1. Sender/Transmitter: The device or
module responsible for sending the
data.
2. Receiver: The device or module
that receives and processes the data.
3. Communication Medium: The
physical or wireless medium through
which the data is transmitted, such as
cables, fiber optics, or radio waves.
Working of Half Duplex
Half duplex allows communication in both directions, but not at the same time. If the sender transmits a
message to the receiver, the receiver can send a reply only after that message has been completely
transmitted. Likewise, if the receiver is transmitting, the sender must wait for the transmission to finish
before it can respond.
3.Full Duplex
In this mode, data transmission and reception occur simultaneously on separate channels, allowing
continuous bidirectional communication. The diagram below shows how full duplex works.

Components
1. Sender/Transmitter: The device
or module responsible for sending
the data.
2. Receiver: The device or module
that receives and processes the data.
3. Communication Medium: The
physical or wireless medium through
which the data is transmitted, such as
cables, fiber optics, or radio waves.
Working of Full Duplex
Full duplex, like half duplex, allows communication in both directions, but here the two directions can
operate simultaneously. While the sender is transmitting a message to the receiver, the receiver can send a
message back at the same time, and vice versa, because transmission and reception use separate
channels.
Q2. What is priority interrupt? Explain polling and chaining
priority.

Ans-Priority Interrupt
A priority interrupt is a type of interrupt where different interrupt requests (IRQs) are assigned a priority
level. When multiple devices request service at the same time, the system gives priority to the interrupt with
the highest priority. This ensures that critical or time-sensitive tasks are handled first, while less important
tasks wait their turn. The priority system helps in managing multiple interrupts without conflict and ensures
that important operations are not delayed.

Key Characteristics of Priority Interrupt:

 Priority Levels: Devices or events requesting interrupts are assigned priority levels (often represented by
numbers or priority vectors). The interrupt with the highest priority is serviced first.
 Interrupt Masking: Some systems allow interrupt masking, where lower-priority interrupts can be
temporarily disabled while higher-priority interrupts are being serviced.
 Interrupt Handling: If multiple interrupts occur simultaneously, the interrupt controller selects the highest-
priority interrupt to handle first, based on pre-set priority levels.

Polling Priority

Polling is a technique where the CPU actively checks (or "polls") each device in a sequence to see if it needs
attention. In the context of priority interrupts, polling can be used to determine which interrupt should be
handled based on priority. Here, the CPU polls devices in a pre-defined order, and the first device that
requires service is processed. However, polling does not have automatic priority assignment like the priority
interrupt method does.

Characteristics of Polling Priority:

 The CPU repeatedly checks each device's interrupt request line.


 The devices are polled one by one, and the CPU gives service to the first device that raises an interrupt.
 Polling can be slow and inefficient, especially when many devices are involved or when interrupt requests
are frequent.

Example: If a system has 3 devices and the CPU polls in the order of device 1 → device 2 → device 3, the
CPU will check device 1 first, then device 2, and then device 3. If device 1 has an interrupt, it will be
serviced first. If it does not, the CPU will move on to device 2, and so on.
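A polling scheme like the one in this example can be sketched in Python as follows (my illustration; the
device list and pending flags are assumed for demonstration, not a real API):

# Poll devices in a fixed priority order and service the first one that
# has raised an interrupt request.
devices = [
    {"name": "device 1", "pending": False},
    {"name": "device 2", "pending": True},
    {"name": "device 3", "pending": True},
]

def poll_and_service(devices):
    for dev in devices:                  # the order of the list fixes the priority
        if dev["pending"]:
            print("servicing", dev["name"])
            dev["pending"] = False
            return dev["name"]
    return None                          # no device needs attention

poll_and_service(devices)                # device 2 is serviced before device 3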

Chaining Priority

Chaining (also known as daisy-chaining) is a method used to handle priority interrupts in hardware. In
chaining, the devices are connected in series, and each device has a line connected to the next device,
allowing the interrupt requests to be passed along. The priority of the devices is determined by their physical
order in the chain. The device at the highest priority sends its interrupt request to the next device, and the
next device only sends its interrupt if it has a higher priority than the current device.

Characteristics of Chaining Priority:


 Devices are arranged in a sequence with an interrupt chain.
 When a device raises an interrupt, it sends a signal to the next device in the chain.
 The interrupt controller or CPU checks the devices in order, starting from the highest-priority device.
 If the first device in the chain does not request an interrupt, the request is passed along to the next device.
 The interrupt controller only acknowledges the interrupt when it reaches the device with the highest priority
that has requested attention.

Example: In a chained priority system with three devices (device 1, device 2, and device 3), device 1 has the
highest priority, device 2 has medium priority, and device 3 has the lowest priority. When an interrupt is
triggered:

 Device 1 (highest priority) raises an interrupt first, and the interrupt is passed to the CPU.
 If device 1 does not raise an interrupt, the signal moves to device 2 (medium priority), and if necessary, to
device 3 (lowest priority).
 The interrupt is serviced in the order of their priority.

Q3. What do you understand by Direct Memory Access


(DMA).Explain with the help of example.
Ans- Direct Memory Access (DMA)
Direct Memory Access (DMA) is a feature that allows peripheral devices to transfer data directly to and
from memory without continuous involvement from the CPU. This is particularly useful for high-speed
data transfer operations, such as those needed in disk drives, sound cards, and network cards, because it
frees up the CPU to perform other tasks while the data transfer is in progress.

Key Components of DMA:

 DMA Controller: The core


component that manages the entire DMA process.
 System Bus: The shared communication channel between the CPU, memory, and I/O devices.
 I/O Devices: Devices like disk drives, network cards, and sound cards that require efficient data
transfer.

Advantages of DMA:

 Improved Performance: Offloads data transfer from the CPU, allowing it to focus on other tasks.
 Reduced CPU Overhead: Minimizes CPU intervention in data transfer operations.
 Efficient Data Transfer: Enables high-speed data transfer between I/O devices and memory.

Disadvantages of DMA:

 Increased Hardware Complexity: Requires additional hardware components.


 Potential for Conflicts: May compete with the CPU for system bus access.

What is a DMA Controller?


Direct Memory Access (DMA) uses hardware for accessing the memory, that hardware is called a DMA Controller.
It has the work of transferring the data between Input Output devices and main memory with very less interaction
with the processor. The direct Memory Access Controller is a control unit, which has the work of transferring data.
DMA Controller in Computer Architecture
DMA Controller is a type of control unit that works as an interface for the data bus and the I/O Devices. As
mentioned, DMA Controller has the work of transferring the data without the intervention of the processors,
processors can control the data transfer. DMA Controller also contains an address unit, which generates the
address and selects an I/O device for the transfer of data. Here we are showing the block diagram of the DMA
Controller.

Block Diagram of DMA Controller

Types of Direct Memory Access (DMA)


There are four popular types of DMA.
 Single-Ended DMA
 Dual-Ended DMA
 Arbitrated-Ended DMA
 Interleaved DMA
Single-Ended DMA: Single-Ended DMA Controllers operate by reading and writing from a single memory address.
They are the simplest DMA.
Dual-Ended DMA: Dual-Ended DMA controllers can read and write from two memory addresses. Dual-ended DMA
is more advanced than single-ended DMA.
Arbitrated-Ended DMA: Arbitrated-Ended DMA works by reading and writing to several memory addresses. It is
more advanced than Dual-Ended DMA.
Interleaved DMA: Interleaved DMA are those DMA that read from one memory address and write from another
memory address.
Working of DMA Controller
The DMA controller registers have three registers as follows.
 Address register – It contains the address to specify the desired location in memory.
 Word count register – It contains the number of words to be transferred.
 Control register – It specifies the transfer mode.
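The role of these three registers can be illustrated with a tiny simulation (my own sketch, not a description
of any specific DMA chip): the CPU programs the registers once, and the controller then moves the whole
block without further CPU involvement.

# Toy DMA controller: the CPU programs address, word count and mode once,
# then the controller copies the block on its own and raises an interrupt.
memory = [0] * 32
device_data = [10, 20, 30, 40]

class DMAController:
    def program(self, address, count, mode):
        self.address, self.count, self.mode = address, count, mode

    def run(self, source):
        for i in range(self.count):               # transfer happens without the CPU
            memory[self.address + i] = source[i]
        return "interrupt: transfer complete"     # notify the CPU at the end

dma = DMAController()
dma.program(address=8, count=4, mode="burst")     # CPU sets up the three registers
print(dma.run(device_data))                       # controller performs the copy
print(memory[8:12])                               # [10, 20, 30, 40]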

Q4. Describe Direct Memory Access (DMA) . Explain its


functioning of DMA transfer with the help of diagram.
Ans- Direct Memory Access (DMA)

DS - Device Select

RS - Register Select

Direct Memory Access (DMA) is a method that allows peripheral devices (like disk drives, sound cards, or
network cards) to send or receive data directly from the main memory without requiring continuous CPU
intervention. With DMA, the CPU can initiate a data transfer and then continue with other tasks, freeing up
processing time.

DMA is essential for high-speed data transfers, as it bypasses the CPU to avoid bottlenecks and improve
overall system performance, especially in data-intensive tasks like streaming video or large file transfers.

DMA (Direct Memory Access) Transfer is a process in computer systems that allows certain hardware
subsystems to access the system memory independently of the central processing unit (CPU). DMA is
primarily used to speed up data transfers, especially between I/O devices (like disk drives, graphics cards,
network adapters, etc.) and system memory, without burdening the CPU with the entire data transfer
process.

Purpose of DMA:

 The main purpose of DMA is to offload data transfer tasks from the CPU, allowing the CPU to
perform other processing tasks.
 It increases data transfer efficiency, especially for high-speed devices, by allowing data transfer
directly between memory and peripherals.

Modes of Data Transfers in DMA

1. Burst Transfer Mode:


o In this mode, the DMA controller transfers a block of data all at once without interruptions.
o The CPU has no access to the bus until the burst transfer is complete, allowing high-speed data
movement.
2. Cycle Stealing Mode:
o Here, the DMA controller transfers data one byte at a time, allowing the CPU to regain control in
between each byte transfer.
o This mode minimizes CPU waiting time but is slower than burst mode.
3. Transparent Mode:
o In this mode, the DMA controller transfers data only when the CPU is not using the system bus.
o It does not affect the CPU’s performance but can lead to slower transfers since DMA waits for idle
CPU cycles.

Functioning of DMA Transfer


1. Components in the DMA System

 Processor (CPU): Initiates the DMA transfer by setting up the DMA controller and granting it access to the
bus when a transfer request is made.
 DMA Controller (DMAC): Manages the data transfer between the RAM and peripherals without involving
the CPU in every step.
 RAM: The main memory where data is either read from or written to during the transfer.
 Peripherals: External devices (such as hard drives, network cards, etc.) that require data transfer to or from
the memory.
 Address Select: A unit responsible for selecting the appropriate address for data transfer in memory.

2. Signals and Connections in the Diagram

 BR (Bus Request): The DMA controller sends a Bus Request signal to the CPU when it wants control of the
data bus to perform a data transfer.
 BG (Bus Grant): In response to the Bus Request, the CPU sends a Bus Grant signal to the DMA controller,
allowing it temporary control of the system buses.
 RD (Read) and WR (Write): Signals indicating whether data should be read from or written to the memory or
peripheral device.
 Interrupt: After the DMA transfer is complete, an interrupt signal is sent to the CPU to notify it that the
transfer is finished.
 DMA Request and DMA Acknowledge: The peripheral device sends a DMA Request signal to the DMA
controller when it needs data transfer, and the DMA controller responds with a DMA Acknowledge signal
once it is ready to handle the request.

3. DMA Transfer Process Based on the Diagram

Here’s a step-by-step breakdown of how DMA functions:

1. DMA Request:
o The peripheral device needing data transfer sends a DMA Request signal to the DMA Controller
(DMAC).
2. Bus Request (BR):
o The DMA controller requests control of the system bus by sending a Bus Request (BR) signal to the
CPU.
3. Bus Grant (BG):
o The CPU, upon receiving the Bus Request, completes its current tasks, pauses its operations, and
sends a Bus Grant (BG) signal back to the DMA controller.
o This Bus Grant allows the DMA controller to take control of the system buses (address, data, and
control buses) for data transfer.
4. Data Transfer:
o Read/Write Control: Depending on the direction of the data transfer, the DMA controller sets the
Read (RD) or Write (WR) signal.
o Address Bus and Data Bus: The DMA controller manages the Address Bus to specify memory
locations in RAM and uses the Data Bus to transfer data between RAM and the peripheral device.
o The Address Select component ensures that the correct memory address is selected for the data
transfer.
5. Completion and Interrupt:
o Once the data transfer is complete, the DMA controller releases the bus back to the CPU.
o The DMA controller sends an Interrupt signal to the CPU to indicate that the data transfer is finished
and that the CPU can resume its operations.
Q5. What do you mean by Input-Output Processor (IOP).
Explain with the help of block diagram.

Ans- Input-Output Processor (IOP)


An Input-Output Processor (IOP) is a specialized processor designed to manage the input and output
operations in a computer system. Its main function is to handle data transfer between the computer's main
processor (CPU) and peripheral devices (such as disks, printers, or keyboards), freeing the CPU from these
time-consuming tasks. By offloading the I/O operations to a dedicated processor, the IOP helps improve
system performance and efficiency.

Here's a breakdown of the diagram:

1. Memory Unit: This is where data and instructions are stored.

2. Memory Bus: This bus connects the memory unit to both the CPU and the IOP. It allows for data transfer
between these components.

3. CPU: The central processing unit handles the primary processing tasks. It can also initiate I/O operations
by sending commands to the IOP.

4. IOP: The Input-Output Processor is dedicated to handling I/O operations. It can directly access the
memory bus to transfer data between I/O devices and memory, freeing up the CPU for other tasks.

5. Peripheral Devices (PD1, PD2, PD3): These are various I/O devices like disk drives, printers,
keyboards, etc., connected to the system. They communicate with the IOP for data transfer.

Here are some key aspects of Input-Output Processors:


‣ Interface with Peripherals:

IOPs serve as intermediaries between the CPU and various peripheral devices such as
keyboards, mice, printers, storage devices, network interfaces, etc. They provide a
standardized interface for communication between these devices and the CPU.

‣ Data Transfer:

IOPs facilitate the transfer of data between the CPU and peripherals. They manage data
transfers efficiently, handling tasks like data buffering, error detection, and error
correction to ensure reliable communication.

‣ Interrupt Handling:

IOPs handle interrupts generated by peripheral devices to signal events such as data
arrival, completion of data transfer, or error conditions. Upon receiving an interrupt,
the IOP suspends the CPU’s current task, processes the interrupt, and performs the
necessary actions to handle the event.

‣ Device Control:

IOPs control the operation of peripheral devices by sending commands and receiving
status information. They manage device initialization, configuration, and operation
according to the requirements of the CPU and the applications running on it.

‣ I/O Addressing:

IOPs utilize I/O addressing mechanisms to access peripheral devices. They interpret
I/O instructions issued by the CPU, translate them into appropriate signals or
commands for the peripherals, and handle the data transfer between memory and
devices.

‣ DMA (Direct Memory Access):

Many modern IOPs support DMA, a feature that enables data transfer between memory
and peripherals without CPU intervention. DMA allows for faster data transfer rates
and relieves the CPU from the burden of managing data transfers, improving overall
system performance.

‣ Bus Interface:

IOPs connect to the system bus to communicate with the CPU and memory. They adhere
to bus protocols and standards to ensure compatibility with the CPU and other system
components.
‣ Parallelism and Pipelining:

Advanced IOPs may employ parallel processing and pipelining techniques to improve
throughput and efficiency. Parallelism allows simultaneous processing of multiple I/O
operations, while pipelining enables overlapping of different stages of I/O operations to
reduce latency.

CPU-IOP Communication

 There is a communication channel between the IOP and the CPU through which they coordinate their
tasks. The CPU does not execute the I/O instructions itself; it only initiates the operations, and the
instructions are then executed by the IOP. The CPU instructs the IOP to carry out an I/O transfer, and
the IOP requests the CPU's attention through an interrupt.

Explanation:

• Communication between the IOP and the CPU works as follows: whenever the CPU receives an interrupt
from the IOP requesting memory access, it sends a "test I/O path" instruction to the IOP. The IOP executes
it and reports its status; if the status returned to the CPU is OK, the CPU issues a "start I/O" instruction to
the IOP, hands over control of the transfer, and returns to another (or the same) program, after which the
IOP can access memory for its own program. The IOP then controls the I/O transfer using DMA and
prepares a status report. As soon as the transfer completes, the IOP again interrupts the CPU; the CPU
requests the IOP's status, the IOP reads the status word from its memory location and passes it to the
CPU, and the CPU checks that the status is correct and continues in the same way.

Q6. Differentiate between Hardwired control unit and Micro


programmed control unit.

Ans- Difference Between Hardwired Control Unit and Microprogrammed Control Unit
The Control Unit (CU) in a computer system is responsible for directing the operation of the processor by
sending control signals to other parts of the computer, such as the ALU (Arithmetic Logic Unit), registers,
and memory. The CU interprets instructions from memory and ensures the correct sequence of operations.
There are two main types of control units used in a CPU: Hardwired Control Unit (HWCU) and
Microprogrammed Control Unit (MCU).

Here’s a detailed comparison between the two:

1. Hardwired Control Unit (HWCU)

The Hardwired Control Unit is designed using fixed logic circuits (such as gates, flip-flops, and
combinational logic circuits) that are hardwired to generate control signals based on the inputs received. It is
the traditional approach to designing control units.

 Design: The Hardwired Control Unit is implemented using fixed logic circuits (combinational logic) to
directly produce control signals for the CPU. This involves using hardware components such as gates, flip-
flops, decoders, and other circuits.

 Speed: Hardwired control units are faster because control signals are generated by direct connections
through the logic circuits, meaning they can execute instructions rapidly.

 Flexibility: They are less flexible because any change in the control logic requires a redesign of the
hardware. Adding new instructions or modifying existing ones can be complicated and costly.

 Complexity: Designing a hardwired control unit can become very complex, especially for CPUs with a
large and intricate instruction set. The complexity increases significantly with the number of instructions.

 Cost: The design and implementation can be expensive due to the need for custom hardware, but it may
be justified for high-speed applications.

2. Micro-programmed Control Unit

A micro-programmed control unit can be described as a simple logic circuit that works in two ways: it
executes each instruction by generating the required control signals, and it sequences through
microinstructions. It generates the control signals with the help of a program. This approach was very
popular during the evolution of CISC architecture. The program used to create the control signals is known
as the "micro-program". The micro-program is stored in a fast memory on the processor chip; this memory
is also known as the control store or control memory.

 Design: This control unit uses a memory called the control memory to store microinstructions. These
microinstructions generate the control signals. The design involves writing a sequence of microinstructions
to perform each operation.

 Speed: Microprogrammed control units are generally slower because fetching microinstructions from
control memory adds an extra step. However, this extra step is a trade-off for flexibility.

 Flexibility: They are more flexible because changes and additions to the instruction set can be made by
updating the microprogram in the control memory, without changing the hardware.

 Complexity: Easier to design and modify, even for complex instruction sets, because changes are made in
the microprogram, not in the hardware. This reduces the risk of errors in the logic design.

 Cost: Generally less costly in terms of design and manufacturing as it uses standard memory components
instead of custom hardware.

3. Some Important Terms


1. Control Word: A control word is a word whose individual bits represent various control
signals.
2. Micro-routine: A sequence of control words corresponding to the control sequence of a
machine instruction constitutes the micro-routine for that instruction.
3. Micro-instruction: Individual control words in this micro-routine are referred to as
microinstructions.
4. Micro-program: A sequence of micro-instructions is called a micro-program, which is stored
in a ROM or RAM called a Control Memory (CM).
5. Control Store: the micro-routines for all instructions in the instruction set of a computer are
stored in a special memory called the Control Store.
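To connect these terms, here is a small illustrative Python sketch (assumed bit positions and signal names,
not the layout of any real processor) showing how individual bits of a control word act as control signals
and how a micro-routine is just a sequence of such words:

# Assumed control-word layout: each bit position is one control signal.
SIGNALS = ["PC_out", "MAR_in", "MEM_read", "IR_in", "ALU_add", "ACC_in"]

def decode(control_word):
    # Return the names of the control signals asserted (bit = 1) in this word.
    return [name for i, name in enumerate(SIGNALS) if (control_word >> i) & 1]

# A micro-routine: the sequence of control words for one machine instruction (fetch phase here).
fetch_microroutine = [
    0b000011,   # PC_out, MAR_in : send PC to the memory address register
    0b000100,   # MEM_read       : read the instruction word from memory
    0b001000,   # IR_in          : latch it into the instruction register
]

for word in fetch_microroutine:
    print(decode(word))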
Q7. What are various data transfer schemes. Briefly discuss each scheme.

Ans- Data can be transferred between the CPU/memory and I/O devices using three broad schemes, all of
which are discussed in detail in the answers above:
1. Programmed I/O: The CPU executes the I/O instructions itself and keeps checking (polling) the device
status, so it remains busy for the whole transfer.
2. Interrupt-driven I/O: The device interrupts the CPU only when it is ready, so the CPU can do other work
in between; priority schemes (polling or daisy chaining) decide which request is served first.
3. Direct Memory Access (DMA): A DMA controller transfers whole blocks of data between the device and
memory directly, involving the CPU only at the start and end of the transfer.
In addition, the actual transfer over the link can be synchronous (common clock) or asynchronous
(start/stop bits), and serial or parallel, as described in Q1 of this unit.
UNIT-5
Q1. Draw and explain the architecture of 8085
microprocessor along with all its registers and instruction
set.

Ans- Architecture of the 8085 Microprocessor


The 8085 microprocessor is an 8-bit microprocessor developed by Intel in the 1970s. It has a simple and
efficient architecture, making it popular for educational purposes and in small embedded systems. Here’s a
look at its architecture along with its key components, registers, and an overview of its instruction set.

Block Diagram of the 8085 Microprocessor

1. 8-Bit Internal Data Bus

 The central red bus shown in the diagram is the 8-bit internal data bus, which allows data transfer between
different components of the microprocessor.
 It connects components like the accumulator, ALU, general-purpose registers, and memory buffers, enabling
data to be exchanged within the CPU.

2. Accumulator (A)

 The accumulator is an 8-bit register, labeled as A.


 It is used to hold one of the operands for arithmetic and logic operations and store the results of those
operations.
 This register is critical for data manipulation and temporary storage during computations.

3. Temporary Register

 A temporary register is used to hold data temporarily during operations.


 It serves as a holding area for data that is being processed by the ALU, allowing efficient data manipulation
within the CPU.

4. Flag Register

 The flag register holds five condition flags (sign, zero, auxiliary carry, parity, and carry), which indicate the
outcome of an operation (e.g., if the result is zero, if there is a carry, etc.).
 These flags are used by conditional branch instructions to decide the next course of action in a program.

5. Arithmetic and Logic Unit (ALU)

 The ALU performs arithmetic operations (addition, subtraction) and logic operations (AND, OR, NOT, etc.).
 It receives input from the accumulator and other registers and sends the result back to the accumulator or
other storage locations.

6. Instruction Register and Instruction Decoder

 The instruction register holds the current instruction being executed.


 The instruction decoder interprets the instruction and directs the control unit to carry out the required
operation.
 Together, they determine the operation to perform and set up the necessary data pathways.

7. Timing and Control Unit

 This unit synchronizes all operations, generating timing signals and control signals.
 It coordinates data transfer, manages communication between the CPU and other parts of the system, and
handles execution of instructions.
 Control signals go to various components like the ALU, registers, and buses to manage data flow.

8. Interrupt Control

 Interrupt control allows external devices to interrupt the CPU, temporarily halting its current operations.
 When an interrupt is triggered, the CPU handles higher-priority tasks (e.g., I/O tasks) before returning to its
previous operation.
 This mechanism is useful for real-time processing, such as handling I/O devices.
9. Serial I/O Control

 The Serial I/O Control manages serial communication, allowing data to be transmitted or received serially
(one bit at a time).
 This feature is used for serial devices like keyboards or modems, facilitating communication with peripherals.

10. General Purpose Registers (W, Z, B, C, D, E, H, L)

 These 8-bit registers can store data temporarily during program execution.
 The registers can be used individually or as register pairs (BC, DE, HL) to store 16-bit data.
 The HL pair is often used to hold a memory address (memory pointer).

11. Stack Pointer (SP) and Program Counter (PC)

 Stack Pointer (SP): A 16-bit register that holds the address of the top of the stack in memory. The stack is
used for temporary storage, particularly during subroutine calls and interrupts.
 Program Counter (PC): A 16-bit register that holds the address of the next instruction to be executed. It
automatically increments after each instruction, ensuring sequential execution.

12. Multiplexer

 A multiplexer is used to select different inputs for specific operations.


 Here, it manages the selection between various registers (like W, Z, B, C, D, E, H, L) for data transfers.

13. Address and Data Address Buffers

 Address Buffer (A8 – A15): This 8-bit buffer holds the higher byte of the memory address during
memory access.
 Data/Address Buffer (AD7 – AD0): This 8-bit buffer serves a dual purpose. It carries both the lower
byte of the address and data, based on the timing signals.

Instruction Set of the 8085 Microprocessor

The 8085 instruction set is divided into several categories based on the types of operations they perform:

1. Data Transfer Instructions


o Move and transfer data between registers, memory, and I/O ports.
o Example: MOV, MVI, LDA, STA.
2. Arithmetic Instructions
o Perform arithmetic operations like addition, subtraction, increment, and decrement.
o Example: ADD, SUB, INR, DCR.
3. Logical Instructions
o Perform logic operations such as AND, OR, XOR, compare, and rotate.
o Example: ANA, ORA, XRA, CMP, RLC.
4. Branching Instructions
o Change the flow of execution by jumping to different parts of the program or calling subroutines.
o Example: JMP, CALL, RET, JC, JZ.
5. Control Instructions
o Control the operation of the microprocessor, handle interrupts, and manage other system tasks.
o Example: NOP, HLT, DI, EI.

Q2. Uses and Issues in 8085 Microprocessor.


Ans- Uses of 8085 microprocessor :
The 8085 microprocessor is a versatile 8-bit microprocessor that has been used in a wide variety of
applications, including:
1. Embedded Systems: The 8085 microprocessor is commonly used in embedded systems, such as
industrial control systems, automotive electronics, and medical equipment.
2. Computer Peripherals: The 8085 microprocessor has been used in a variety of computer
peripherals, such as printers, scanners, and disk drives.
3. Communication Systems: The 8085 microprocessor has been used in communication systems,
such as modems and network interface cards.
4. Instrumentation and Control Systems: The 8085 microprocessor is commonly used in
instrumentation and control systems, such as temperature and pressure controllers.
5. Home Appliances: The 8085 microprocessor is used in various home appliances, such as washing
machines, refrigerators, and microwave ovens.
6. Educational Purposes: The 8085 microprocessor is also used for educational purposes, as it is an
inexpensive and easily accessible microprocessor that is widely used in universities and technical
schools.
7. Research and development: The 8085 microprocessor is often used in research and development
projects, where it can be used to develop and test new digital electronics and computer systems.
Researchers and developers can use the microprocessor to prototype new systems and test their
performance.
8. Retro computing: The 8085 microprocessor is still used by enthusiasts today for retro computing
projects. Retro computing involves using older computer systems and technologies to explore the
history of computing and gain a deeper understanding of how modern computing systems have
evolved.

Issues in 8085 microprocessor :


Here are some common issues with the 8085 microprocessor:
1. Overheating: The 8085 microprocessor can overheat if it is used for extended periods or if it is
not cooled properly. Overheating can cause the microprocessor to malfunction or fail.
2. Power Supply Issues: The 8085 microprocessor requires a stable power supply for proper
operation. Power supply issues such as voltage fluctuations, spikes, or drops can cause the
microprocessor to malfunction.
3. Timing Issues: The 8085 microprocessor relies on accurate timing signals for proper operation.
Timing issues such as clock signal instability, noise, or interference can cause the microprocessor to
malfunction.
4. Memory Interface Issues: The 8085 microprocessor communicates with memory through its
address and data buses. Memory interface issues such as faulty memory chips, loose connections, or
address decoding errors can cause the microprocessor to malfunction.
5. Hardware Interface Issues: The 8085 microprocessor communicates with other devices through
its input/output ports. Hardware interface issues such as faulty devices, incorrect wiring, or improper
device selection can cause the microprocessor to malfunction.
6. Programming Issues: The 8085 microprocessor is programmed with machine language or
assembly language instructions. Programming issues such as syntax errors, logic errors, or incorrect
instruction sequences can cause the microprocessor to malfunction or produce incorrect results.

Q3. Write about flag register in 8085.


Ans- The flag register in the 8085 microprocessor is an 8-bit register that stores the outcome of various
operations performed by the Arithmetic and Logic Unit (ALU). It helps the microprocessor make
decisions based on the results of arithmetic and logical operations. Only five of the eight bits are used as
flags, while the other three bits are unused.

The five flags in the 8085 flag register are:

 Sign Flag (S):

 Bit Position: 7
 Description: This flag is set to 1 if the result of an operation is negative (when the most significant
bit (MSB) of the result is 1). It is cleared to 0 if the result is positive.

 Zero Flag (Z):

 Bit Position: 6
 Description: This flag is set to 1 if the result of an operation is zero. It is cleared to 0 if the result is
non-zero.

 Auxiliary Carry Flag (AC):

 Bit Position: 4
 Description: This flag is set to 1 if there is a carry-out from the lower nibble (the lower 4 bits) in a
binary addition. It is mainly used for Binary-Coded Decimal (BCD) arithmetic.

 Parity Flag (P):

 Bit Position: 2
 Description: This flag is set to 1 if the number of 1s in the result is even (even parity). It is cleared to
0 if the number of 1s is odd (odd parity).

 Carry Flag (CY):

 Bit Position: 0
 Description: This flag is set to 1 if there is a carry-out from the most significant bit (MSB) during an
addition, or if there is a borrow during a subtraction. It indicates an overflow in unsigned arithmetic
operations.
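The way these flags are derived from a result can be illustrated with a short Python sketch (my illustration
of the rules above, not 8085 code):

# Compute the 8085 flags for an 8-bit addition a + b.
def flags_after_add(a, b):
    result = (a + b) & 0xFF
    return {
        "S":  (result >> 7) & 1,                               # sign: MSB of the result
        "Z":  1 if result == 0 else 0,                         # zero
        "AC": 1 if ((a & 0x0F) + (b & 0x0F)) > 0x0F else 0,    # carry out of the lower nibble
        "P":  1 if bin(result).count("1") % 2 == 0 else 0,     # even parity
        "CY": 1 if (a + b) > 0xFF else 0,                      # carry out of bit 7
    }

print(flags_after_add(0x3A, 0x48))   # {'S': 1, 'Z': 0, 'AC': 1, 'P': 1, 'CY': 0}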
UNIT-6
Q1. Write a program to add or subtract two number in
assembly language.
Ans-
section .data
    result      db 0
    result_str  db "Result: ", 0

section .bss
    output      resb 1              ; To store ASCII representation of result

section .text
    global _start

_start:
    mov cl, 1
    mov bl, 1
    add cl, bl                      ; for subtraction write sub at the place of add

    ; Store result
    mov [result], cl

    ; Convert result to ASCII (add '0')
    add cl, '0'
    mov [output], cl

    ; Print the "Result: " message
    mov rax, 1                      ; Syscall: write
    mov rdi, 1                      ; File descriptor: stdout
    mov rsi, result_str             ; Message address
    mov rdx, 8                      ; Message length
    syscall

    ; Print the result
    mov rax, 1                      ; Syscall: write
    mov rdi, 1                      ; File descriptor: stdout
    mov rsi, output                 ; Address of result ASCII
    mov rdx, 1                      ; Length of result ASCII
    syscall

    ; Exit program
    mov rax, 60                     ; Syscall: exit
    xor rdi, rdi                    ; Status code 0
    syscall

Q2. Explain subroutine of assembly language.


Ans-What is a Subroutine?
A set of instructions that is used repeatedly in a program can be referred to as a subroutine. Only one copy of these
instructions is stored in memory. When the subroutine is required, it can be called many times during the execution of
a particular program. A call-subroutine instruction calls the subroutine. Care should be taken while returning from a
subroutine, as a subroutine can be called from different places in memory.
The content of the PC must be saved by the call-subroutine instruction to make a correct return to the calling program.

Process of a subroutine in a program

The subroutine linkage method is the way in which the computer calls and returns from a subroutine. The simplest
form of subroutine linkage is saving the return address in a specific location, such as a register, which is then called
the link register.

Advantages of Subroutines
 Code reuse: Subroutines can be reused in multiple parts of a program, which can save time and reduce the
amount of code that needs to be written.
 Modularity: Subroutines help to break complex programs into smaller, more manageable parts, making them
easier to understand, maintain, and modify.
 Encapsulation: Subroutines provide a way to encapsulate functionality, hiding the implementation details from
other parts of the program.

Disadvantages of Subroutines
 Overhead: Calling a subroutine can incur some overhead, such as the time and memory required to push and
pop data on the stack.
 Complexity: Subroutine nesting can make programs more complex and difficult to understand, particularly if
the nesting is deep or the control flow is complicated.
 Side Effects: Subroutines can have unintended side effects,
such as modifying global variables or changing the state of the
program, which can make debugging and testing more difficult.

What is Subroutine Nesting?


Subroutine nesting is a common programming practice in which one subroutine calls another subroutine.
A Subroutine calling another subroutine

From the above figure, assume that Subroutine 1 calls Subroutine 2; the return address of Subroutine 2
must be saved somewhere. If the link register already holds the return address of Subroutine 1, it would be
overwritten by the return address of Subroutine 2. Since the last subroutine called is the first one to return
(last-in, first-out order), a stack is the most efficient data structure for storing the return addresses of
nested subroutines.
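The last-in, first-out behaviour described above can be shown with a tiny Python sketch (an added
illustration, with made-up return addresses):

# Nested subroutine calls: return addresses are pushed on a stack and
# popped in reverse order (last in, first out).
stack = []

stack.append(0x2005)      # call Subroutine 1: save the return address in the main program
stack.append(0x30A2)      # Subroutine 1 calls Subroutine 2: save its return address too

print(hex(stack.pop()))   # 0x30a2 -> return to Subroutine 1 first
print(hex(stack.pop()))   # 0x2005 -> then return to the main program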
Q3. Discuss various logical instruction, machine control
instruction, and program control instruction in the
assembly language.

Ans- Assembly Language Instructions: A Breakdown


Assembly language provides a low-level interface to a computer's hardware, allowing programmers to
directly manipulate its registers and memory. These instructions are categorized into three primary types:

1. Logical Instructions:

Logical instructions perform bitwise operations on data, such as AND, OR, XOR, and NOT. These operations
are fundamental to many programming tasks, including bit manipulation, flag setting, and data masking.

 AND: Performs a bitwise AND operation on two operands, setting each bit of the result to 1 only if
both corresponding bits of the operands are 1.
 OR: Performs a bitwise OR operation on two operands, setting each bit of the result to 1 if at least
one of the corresponding bits of the operands is 1.
 XOR: Performs a bitwise XOR operation on two operands, setting each bit of the result to 1 only if
the corresponding bits of the operands are different.
 NOT: Performs a bitwise NOT operation on a single operand, inverting each bit of the operand.

; Example of logical instructions


mov al, 0b11001100 ; Load binary value into AL
and al, 0b10101010 ; AL = AL AND 0b10101010 (result: 0b10001000)
or al, 0b01010101 ; AL = AL OR 0b01010101 (result: 0b11011101)
xor al, 0b11111111 ; AL = AL XOR 0b11111111 (result: 0b00100010)
not al ; AL = NOT AL (result: 0b11011101)

2. Machine Control Instructions:

Machine control instructions control the operation of the CPU itself, such as halting the processor, setting
interrupt flags, and manipulating the program counter. These instructions are essential for managing the
flow of execution and interacting with the hardware.

 HALT: Stops the execution of the program.


 NO-OP (NOP): Does nothing, often used for timing delays or filling empty spaces in code.
 Interrupt Enable/Disable: Enables or disables interrupts, allowing or preventing the CPU from
responding to external signals.

 STC : Sets the carry flag (CF = 1).


 CLC : Clears the carry flag (CF = 0).
 CMC : Complements (toggles) the carry flag.
 LOCK : Prefix to ensure atomicity in multi-processor systems.
; Machine control example
nop ; Do nothing (placeholder or delay)
stc ; Set carry flag
clc ; Clear carry flag
cmc ; Complement carry flag
hlt ; Halt the processor

3. Program Control Instructions:

Program control instructions alter the normal flow of execution, such as branching, looping, and calling
procedures. These instructions are crucial for implementing complex algorithms and decision-making
processes.

 Jump (JMP): Unconditionally transfers control to a specified address.


 Conditional Jump (Jcc): Transfers control to a specified address only if a certain condition is met
(e.g., zero flag, carry flag, etc.).
 Call: Transfers control to a subroutine and saves the return address on the stack.
 Return: Returns control to the calling function by popping the return address from the stack.
 Loop: Repeats a block of code a specified number of times or until a certain condition is met.

; Unconditional jump
jmp label1 ; Jump to label1

; Conditional jump (if zero flag is set)


cmp ax, bx ; Compare AX and BX
je equal_label ; Jump to 'equal_label' if AX == BX

; Call and return from subroutine


call my_subroutine ; Jump to subroutine 'my_subroutine'
ret ; Return from subroutine
