COA Solved Question Paper
Compulsory. (10x1=10)
a. Keyboard b. Printer
c. Monitor d. Plotter
Ans. A
Ans. A
a. 1,000 b. 1,000,000,000,000
c. 1,000,000 d. 1,000,000,000
Ans. D
Q.4 RAM is
Ans. A
Ans. D
a. PROM b. EEPROM
c. EAROM d. MEPROM
Solved Question Paper Computer Engineering 4th Sem.
Computer Organization & Architecture
Ans. D
c. PROM d. MEPROM
Ans. A
a. MIFO b. SIFO
c. FIFO d. LIFO
Ans. C
Ans. D
c. BIOS d. RISC
Ans. A
Section B
Q.11 The ___ stores intermediate data used during the execution of the instructions.
Ans. Register
Ans. Storage
Ans. Yes, in parallel MIMD (Multiple Instruction, Multiple Data) systems, communication between
processors is essential for coordinating tasks and processing data efficiently.
Section C
Ans. One-address instructions are a type of machine language instruction format that uses a single
address field. This address can point to a memory location or a register where the operand is stored. The
typical structure of a one-address instruction includes an opcode, which specifies the operation to be
performed, and an address field, which specifies the location of the operand. An accumulator register
often acts as the implicit second operand and the destination for the result. The process involves
fetching the operand from memory, performing the specified operation with the accumulator, and then
storing the result back in the accumulator. This format simplifies the instruction set and reduces the
complexity of the hardware needed to decode instructions.
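The accumulator-based execution described above can be sketched as a toy simulator. The three-instruction set (LOAD, ADD, STORE) and the variable names are illustrative assumptions, not a real machine's ISA:

```python
# Minimal sketch of a one-address (accumulator) machine. Each
# instruction carries a single address field; the accumulator is the
# implicit second operand and the destination of the result.
memory = {"X": 7, "Y": 5, "Z": 0}
acc = 0  # accumulator register

program = [("LOAD", "X"), ("ADD", "Y"), ("STORE", "Z")]  # Z = X + Y

for opcode, address in program:
    if opcode == "LOAD":        # acc <- M[address]
        acc = memory[address]
    elif opcode == "ADD":       # acc <- acc + M[address]
        acc += memory[address]
    elif opcode == "STORE":     # M[address] <- acc
        memory[address] = acc

print(memory["Z"])  # 12
```

Note that every arithmetic instruction names only one operand explicitly; the other is always the accumulator, which is what keeps the instruction format short.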
1. Direct Address Mode: In direct address mode, the address field of the instruction specifies the
memory location where the operand is directly stored. This mode allows for fast access to
operands because the address is explicitly stated in the instruction, eliminating the need for
further memory reference. However, it limits the addressable memory space due to the fixed
length of the address field.
2. Indirect Address Mode: In indirect address mode, the address field of the instruction refers to a
memory location that contains the effective address of the operand. This mode allows for
accessing a larger memory space since the effective address can be stored in a memory location.
However, it requires additional memory fetches, which can slow down instruction execution
compared to direct addressing.
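The difference between the two modes can be sketched with a small memory modeled as a list (the addresses and values are made up for illustration):

```python
# Contrast of direct vs. indirect addressing on a toy memory.
memory = [0] * 16
memory[5] = 42    # the operand itself
memory[9] = 5     # a pointer: memory[9] holds the operand's address

def fetch_direct(addr):
    # Direct: the instruction's address field IS the operand's address.
    return memory[addr]

def fetch_indirect(addr):
    # Indirect: the address field names a cell holding the effective
    # address, which costs one extra memory reference.
    effective = memory[addr]
    return memory[effective]

print(fetch_direct(5))    # 42
print(fetch_indirect(9))  # 42
```

Both calls return the same operand, but the indirect fetch performs two memory reads where the direct fetch performs one, which is exactly the speed trade-off described above.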
Q.23 What are the steps followed by CPU when an interrupt occurs?
Ans. When an interrupt occurs, the CPU follows a specific sequence of steps to handle it:
1. Interrupt Signal Recognition: The CPU recognizes the interrupt signal sent by an I/O device or
software interrupt.
2. Save Current State: The CPU saves the current state of execution, including the program
counter, registers, and other status information, typically on the stack.
3. Interrupt Vector Lookup: The CPU uses the interrupt vector table to find the address of the
interrupt service routine (ISR).
4. ISR Execution: The CPU jumps to the ISR address and begins executing the code to handle the
interrupt.
5. Restore State: After the ISR has finished executing, the CPU restores the saved state from the
stack.
6. Resume Execution: The CPU resumes normal execution of the program from the point where it
was interrupted.
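The six steps above can be sketched as a toy model in software. The vector table, ISR, and saved-state structure here are simplified illustrations, not real hardware behavior:

```python
# Toy model of the interrupt sequence: save state, look up the ISR in
# a vector table, run it, then restore the saved state.
stack = []
state = {"pc": 100, "regs": [1, 2, 3]}
log = []

def keyboard_isr():
    log.append("handled keyboard interrupt")

vector_table = {1: keyboard_isr}       # interrupt number -> ISR

def handle_interrupt(irq):
    stack.append(dict(state))          # 2. save current state
    isr = vector_table[irq]            # 3. interrupt vector lookup
    isr()                              # 4. execute the ISR
    state.update(stack.pop())          # 5. restore the saved state
                                       # 6. resume at state["pc"]

handle_interrupt(1)
print(state["pc"], log)  # 100 ['handled keyboard interrupt']
```

After the handler returns, the program counter is exactly what it was before the interrupt, which is what allows the interrupted program to resume transparently.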
• Internal Interrupts: These are generated by internal conditions within the CPU, such as
arithmetic overflow, division by zero, or illegal instructions. They are often referred to as
exceptions or traps and are used to handle errors and exceptional conditions within the CPU
itself.
• External Interrupts: These are generated by external hardware devices, such as keyboards, mice, or network cards, requiring the CPU's attention. External interrupts are asynchronous
and can occur at any time, prompting the CPU to handle I/O operations or other hardware
events.
1. SRAM (Static RAM): SRAM is a type of semiconductor memory that uses bistable latching
circuitry to store each bit. It is called static because it retains data as long as power is supplied,
without the need for periodic refresh cycles. SRAM is faster and more reliable than DRAM but is
more expensive and consumes more power, making it suitable for cache memory and other
high-speed applications.
2. DRAM (Dynamic RAM): DRAM stores each bit of data in a separate capacitor within an
integrated circuit. Because capacitors leak charge, DRAM requires periodic refreshing to
maintain the stored data. It is slower than SRAM but is cheaper and has higher storage density,
making it ideal for main system memory where large capacity is required.
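The need for DRAM refresh can be illustrated with a toy leakage model. The decay rate, threshold, and refresh interval below are made-up numbers chosen only to show the mechanism:

```python
# Toy model of why DRAM needs refreshing: each "cell" is a capacitor
# whose charge decays every tick; a periodic refresh rewrites the bit
# before the charge drops below the read threshold.
def read_cell(charge, threshold=0.5):
    return 1 if charge >= threshold else 0

charge = 1.0                 # a stored '1'
for tick in range(10):
    charge *= 0.8            # capacitor leakage each tick
    if tick % 3 == 2:        # periodic refresh every 3 ticks
        charge = 1.0 if read_cell(charge) else 0.0

print(read_cell(charge))  # 1: the bit survives thanks to refresh
```

Without the refresh step the charge would have decayed to roughly 0.8^10 ≈ 0.11, well below the threshold, and the stored bit would read as 0. An SRAM cell, being a bistable latch, has no such decay and needs no refresh.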
Virtual memory is used in computer systems to extend the apparent amount of physical memory by
using a portion of the disk storage as additional RAM. This allows a system to run larger applications and
handle more processes simultaneously than the physical memory alone would permit. Virtual memory
improves system stability and multitasking capabilities by isolating the memory spaces of different
applications, thus preventing them from interfering with each other. It also provides a more efficient
and flexible use of physical memory through techniques like paging and segmentation, ensuring that
memory is allocated dynamically based on the needs of running applications.
Memory mapping is the process of translating logical addresses used by a program into physical
addresses used by the computer hardware. This translation is managed by the memory management
unit (MMU), which allows the CPU to access data and instructions stored in memory efficiently. Memory
mapping supports the implementation of virtual memory, enabling programs to use more memory than
physically available by mapping parts of their address space to disk storage. It also provides isolation
between processes, ensuring that each process runs in its own protected memory space, thereby
enhancing security and stability.
1. Translation Lookaside Buffer (TLB): A cache that stores recent translations of virtual addresses
to physical addresses to speed up memory access.
2. Page Table: A structure that holds the mapping between virtual addresses and physical
addresses.
3. Segment Table: Used in systems that implement segmentation, holding the base address and
limit for each segment.
4. Memory Protection Unit (MPU): Ensures that a program can only access its allocated memory
regions, preventing unauthorized access.
5. Address Translation Logic: The circuitry that performs the actual translation from virtual to
physical addresses.
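The translation path through the page table and TLB can be sketched as follows. The page size, page-table contents, and frame numbers are illustrative assumptions:

```python
# Sketch of virtual-to-physical address translation with a page table
# and a tiny TLB, assuming 4 KiB pages.
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 9}    # virtual page number -> frame number
tlb = {}                           # cache of recent translations

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:                 # TLB hit: skip the page-table walk
        frame = tlb[vpn]
    else:                          # TLB miss: consult the page table
        frame = page_table[vpn]
        tlb[vpn] = frame
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> 3*4096 + 4 = 12292
```

The offset within the page passes through unchanged; only the page number is translated, which is why the TLB needs to cache only page-to-frame pairs rather than every individual address.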
The primary differences between static RAM (SRAM) and dynamic RAM (DRAM) are:
• Storage Technique: SRAM uses bistable flip-flops to store each bit, whereas DRAM uses
capacitors to store each bit.
• Speed: SRAM is faster than DRAM because it does not require refreshing.
• Cost: SRAM is more expensive than DRAM due to its complex circuitry.
• Power Consumption: SRAM consumes more power than DRAM.
• Density: DRAM has higher storage density than SRAM, allowing for larger memory capacities.
The Basic Input/Output System (BIOS) performs several critical functions in a computer system:
1. Power-On Self-Test (POST): Checks the hardware components and ensures they are working
properly before booting the operating system.
2. Bootstrap Loader: Locates the operating system and passes control to it.
3. BIOS Setup Utility: Provides a user interface to configure hardware settings, such as system
clock, boot sequence, and hardware configurations.
4. Hardware Abstraction: Provides a layer of abstraction between the operating system and
hardware, enabling the OS to interact with hardware components without needing to know
their specifics.
An interrupt priority encoder is a hardware component that manages multiple interrupt requests by
assigning priority levels to each request. When several interrupts occur simultaneously, the priority
encoder determines which interrupt has the highest priority and should be serviced first. It generates a
corresponding priority code and sends it to the CPU, which then handles the interrupt accordingly. This
mechanism ensures that critical tasks are addressed promptly, maintaining system stability and
efficiency by preventing lower-priority interrupts from delaying the processing of higher-priority ones.
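The encoder's behavior can be sketched in a few lines. The convention that the lowest line number has the highest priority is an assumption for this example; real encoders fix their own ordering:

```python
# Sketch of an interrupt priority encoder: given the pending request
# lines, output the code of the highest-priority pending request.
def priority_encode(requests):
    """requests: list of booleans, index = interrupt line number."""
    for line, pending in enumerate(requests):
        if pending:
            return line      # highest-priority pending request
    return None              # no interrupt pending

print(priority_encode([False, True, True]))  # 1
```

Even though lines 1 and 2 are both asserted, only the code for line 1 is presented to the CPU; line 2 stays pending until it is serviced later.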
Parallel processing can be classified into several types based on the architecture and the way tasks are
executed:
1. Single Instruction, Multiple Data (SIMD): Executes the same instruction on multiple data points
simultaneously. Commonly used in vector processors and GPUs.
2. Multiple Instruction, Multiple Data (MIMD): Executes different instructions on different data
points concurrently. Found in most modern multiprocessor systems and distributed computing
environments.
3. Single Instruction, Single Data (SISD): Executes a single instruction on a single data point.
Represents traditional sequential processing.
4. Multiple Instruction, Single Data (MISD): Executes multiple instructions on a single data point.
Rarely used in practical systems but can be found in certain specialized applications like fault-
tolerant computing systems.
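The SISD/SIMD distinction can be illustrated conceptually in plain Python (this models the programming style, not actual hardware parallelism):

```python
# Conceptual sketch: SISD processes one data element per step, while
# SIMD expresses one instruction over a whole vector of data at once.
data = [1, 2, 3, 4]

# SISD-style: one instruction, one data point per step
sisd = []
for x in data:
    sisd.append(x * 2)

# SIMD-style: the same operation stated over the entire vector
simd = [x * 2 for x in data]

print(sisd == simd)  # True
```

Both produce the same result; the SIMD formulation is what vector units and GPUs exploit to apply the multiplication to all elements in parallel.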
Q.34 What is Reverse Polish Notation?
Reverse Polish Notation (RPN), also known as postfix notation, is a mathematical notation in which
operators follow their operands. For example, the expression "3 + 4" in conventional infix notation is
written as "3 4 +" in RPN. This notation eliminates the need for parentheses to indicate the order of
operations, as the position of the operator inherently determines the precedence. RPN is advantageous
for computer processing because it allows for efficient stack-based computation and simplifies the
parsing of mathematical expressions.
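The stack-based evaluation mentioned above can be sketched directly (only +, -, and * are handled here for brevity):

```python
# Stack-based evaluator for Reverse Polish Notation: operands are
# pushed, and each operator pops its two operands and pushes the
# result, so no parentheses are ever needed.
def eval_rpn(tokens):
    stack = []
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()      # right operand was pushed last
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))
    return stack.pop()

print(eval_rpn("3 4 +".split()))          # 7
print(eval_rpn("5 1 2 + 4 * -".split()))  # 5 - (1+2)*4 = -7
```

The second expression shows the parenthesis-free property: "5 1 2 + 4 * -" encodes 5 - (1 + 2) * 4 purely by the order of tokens.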
Multiprocessors have several key characteristics that enhance their performance and capabilities:
Reduced Instruction Set Computer (RISC) architecture is designed to simplify the instructions executed
by the CPU to increase performance. The key characteristics of RISC architecture include:
1. Simple Instructions: RISC processors use a small set of simple instructions, each of which can be
executed in a single clock cycle. This contrasts with Complex Instruction Set Computer (CISC)
architectures, which have a larger set of more complex instructions that may take multiple
cycles to execute.
2. Load/Store Architecture: RISC separates memory access operations from computational
operations. Data is loaded from memory into registers, and all computations are performed
using these registers. Results are then stored back in memory. This approach simplifies
instruction decoding and execution.
3. Pipelining: RISC architectures are designed to maximize instruction throughput using pipelining,
where multiple instruction stages are processed simultaneously. This allows for the overlapping
of instruction execution, significantly increasing CPU efficiency and performance.
4. Fixed Instruction Length: Instructions in RISC are typically of a fixed length, simplifying the
instruction fetch and decode stages of the CPU pipeline. This uniformity enhances predictability
and reduces the complexity of the control unit.
5. Few Addressing Modes: RISC architectures limit the number of addressing modes to simplify
instruction decoding and execution. Common addressing modes include register, immediate,
and displacement addressing.
6. Compiler Optimization: RISC architecture relies heavily on optimizing compilers to efficiently
translate high-level language code into machine code. The simplicity of RISC instructions allows
compilers to generate optimized code that maximizes performance.
7. High Register Count: RISC processors have a large number of general-purpose registers to
minimize memory access and improve performance. By using registers for most operations, RISC
reduces the number of slow memory accesses.
8. Delayed Branching: To mitigate the impact of branch instructions on the instruction pipeline,
RISC architectures often use delayed branching. This technique delays the execution of a branch
instruction by filling the pipeline with useful instructions, reducing pipeline stalls.
Overall, RISC architecture's focus on simplicity, efficiency, and high-speed instruction execution makes it
well-suited for applications requiring high performance and low power consumption.
1. Explain Various Types of Pipelining: Pipelining is a technique used in CPU design to increase
instruction throughput by overlapping the execution of multiple instructions. There are several
types of pipelining:
o Instruction Pipelining: This divides the instruction execution process into several stages,
such as fetch, decode, execute, and write-back. Each stage processes a different
instruction simultaneously, allowing for one instruction to be completed per clock cycle.
o Arithmetic Pipelining: Used in floating-point units and other arithmetic operations,
arithmetic pipelining breaks down complex arithmetic operations into simpler stages
that can be executed in parallel.
o Superpipelining: Increases the number of pipeline stages, allowing for even finer
granularity and higher instruction throughput. This type of pipelining requires careful
management of pipeline hazards to maintain efficiency.
o Superscalar Pipelining: Executes multiple instructions in parallel within the same clock
cycle by using multiple pipelines. This requires additional hardware complexity and
sophisticated scheduling mechanisms to handle instruction dependencies and hazards.
2. Discuss Characteristics of Computer Architecture: Computer architecture refers to the
conceptual design and fundamental operational structure of a computer system. Key
characteristics include:
o Instruction Set Architecture (ISA): Defines the set of instructions that a processor can
execute, including the instruction formats, addressing modes, and supported data types.
o Microarchitecture: Describes the implementation of the ISA within a processor,
including the design of the data path, control unit, and memory hierarchy. It determines
the efficiency and performance of instruction execution.
o Memory Hierarchy: Organizes memory into a hierarchy of levels, each with different
speeds and sizes, from fast, small cache memory to slower, larger main memory and
disk storage. This hierarchy optimizes the trade-off between speed and cost.
o Parallelism: Involves techniques such as pipelining, superscalar execution, and
multiprocessing to perform multiple operations concurrently, increasing overall system
performance.
o I/O Systems: Manages input and output operations, including the design of interfaces,
communication protocols, and data transfer methods between the processor, memory,
and peripheral devices.
o Power Efficiency: Focuses on reducing power consumption while maintaining
performance, an essential consideration in modern portable and embedded systems.
Techniques include dynamic voltage scaling, power gating, and energy-efficient
instruction execution.
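The speedup from the instruction pipelining described in point 1 above can be sketched with a simple cycle count. This is a minimal model that assumes an ideal 4-stage pipeline with no hazards or stalls:

```python
# Rough cycle count for an ideal k-stage instruction pipeline
# (e.g. fetch, decode, execute, write-back): the first instruction
# takes k cycles, and each later one completes one cycle after its
# predecessor.
STAGES = ["IF", "ID", "EX", "WB"]

def pipeline_cycles(num_instructions, num_stages=len(STAGES)):
    return num_stages + (num_instructions - 1)

print(pipeline_cycles(8))  # 11 cycles, vs 8 * 4 = 32 without pipelining
```

For 8 instructions the pipelined machine needs 11 cycles instead of 32, approaching one instruction per cycle as the instruction count grows; real pipelines fall short of this ideal because of the hazards and stalls mentioned above.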