BASIC CONCEPTS
• Computer Architecture (CA) is concerned with the structure and behaviour of the computer.
• CA includes the information formats, the instruction set and techniques for addressing memory.
• In general, CA covers 3 aspects of computer design, namely: 1) Computer Hardware, 2)
Instruction Set Architecture and 3) Computer Organization.
1. Computer Hardware
It consists of electronic circuits, displays, magnetic and optical storage media and communication
facilities.
2. Instruction Set Architecture
It is the programmer-visible machine interface, such as the instruction set, registers, memory organization
and exception handling.
Two main approaches are 1) CISC and 2) RISC.
(CISC = Complex Instruction Set Computer, RISC = Reduced Instruction Set Computer)
3. Computer Organization
It includes the high level aspects of a design, such as
→ memory-system
→ bus-structure &
→ design of the internal CPU.
It refers to the operational units and their interconnections that realize the architectural
specifications.
It describes the function and design of the various units of a digital computer that store and process
information.
FUNCTIONAL UNITS
• A computer consists of 5 functionally independent main parts:
1) Input
2) Memory
3) ALU
4) Output &
5) Control units.
The input device accepts coded information, such as a source program written in a high-level language. This is
either stored in the memory or immediately used by the processor to perform the desired operations. The program
stored in the memory determines the processing steps. Basically, the computer converts the source program
into an object program, i.e. into machine language.
Finally the results are sent to the outside world through output device. All of these actions are
coordinated by the control unit.
Input unit: -
The source program/high-level language program/coded information/simply data is fed to a computer
through input devices; the keyboard is the most common type. Whenever a key is pressed, the corresponding
letter or digit is translated into its equivalent binary code and sent over a cable to either the memory or the
processor.
Memory unit: -
Its function is to store programs and data. It is basically of two types:
1. Primary memory
2. Secondary memory
1. Primary memory: - Is the one exclusively associated with the processor and operates at electronic
speeds. Programs must be stored in this memory while they are being executed. The memory contains a
large number of semiconductor storage cells, each capable of storing one bit of information. These cells are
processed in groups of fixed size called words.
To provide easy access to a word in memory, a distinct address is associated with each word location.
Addresses are numbers that identify memory location.
Number of bits in each word is called word length of the computer. Programs must reside in the
memory during execution. Instructions and data can be written into the memory or read out under the
control of processor.
Memory in which any location can be reached in a short and fixed amount of time after specifying its
address is called random-access memory (RAM).
The time required to access one word is called the memory access time. Memory which can only be read by
the user and whose contents cannot be altered is called read only memory (ROM); it contains the operating
system.
Caches are small, fast RAM units which are coupled with the processor and are often contained on
the same IC chip to achieve high performance. Although primary storage is essential, it tends to be
expensive.
2. Secondary memory: - Is used where large amounts of data & programs have to be stored, particularly
information that is accessed infrequently.
Examples: - Magnetic disks & tapes, optical disks (i.e. CD-ROMs), floppies etc.
The control unit and the ALU are many times faster than other devices connected to a computer system. This
enables a single processor to control a number of external devices such as keyboards, displays, magnetic
and optical disks, sensors and other mechanical controllers.
Output unit:-
These are the counterparts of the input unit. Their basic function is to send the processed results to the
outside world.
Control unit:-
It effectively is the nerve center that sends signals to other units and senses their states. The actual
timing signals that govern the transfer of data between input unit, processor, memory and output unit are
generated by the control unit.
BUS STRUCTURE
• A bus is a group of lines that serves as a connecting path for several devices.
• A bus consists of a set of parallel lines or wires.
• The lines carry data, address or control signals.
• There are 2 types of Bus structures: 1) Single Bus Structure and 2) Multiple Bus Structure.
1) Single Bus Structure
Because the bus can be used for only one transfer at a time, only 2 units can actively use the bus at
any given time.
Bus control lines are used to arbitrate multiple requests for use of the bus.
Advantages:
1) Low cost &
2) Flexibility for attaching peripheral devices.
2) Multiple Bus Structure
Systems that contain multiple buses achieve more concurrency in operations.
Two or more transfers can be carried out at the same time.
Advantage: Better performance.
Disadvantage: Increased cost.
PERFORMANCE
• The most important measure of performance of a computer is how quickly it can execute
programs.
• The speed of a computer is affected by the design of
1) Instruction-set.
2) Hardware & the technology in which the hardware is implemented.
3) Software including the operating system.
• Because programs are usually written in a HLL, performance is also affected by the compiler
that translates programs into machine language. (HLL High Level Language).
• For best performance, it is necessary to design the compiler, machine instruction set and
hardware in a co-ordinated way.
Let us examine the flow of program instructions and data between the memory & the processor.
• At the start of execution, all program instructions are stored in the main-memory.
• As execution proceeds, instructions are fetched into the processor, and a copy is placed in the
cache.
• Later, if the same instruction is needed a second time, it is read directly from the cache.
• A program will be executed faster if the movement of instructions/data between the main-memory and
the processor is minimized, which is achieved by using the cache.
PROCESSOR CLOCK
• Processor circuits are controlled by a timing signal called a Clock.
• The clock defines regular time intervals called Clock Cycles.
• To execute a machine instruction, the processor divides the action to be performed into a
sequence of basic steps such that each step can be completed in one clock cycle.
• Let P = Length of one clock cycle and R = Clock rate. They are related by R = 1/P, where R is measured in cycles per second (Hz).
• Let T = processor time required to execute a program, N = actual number of machine instructions executed, and S = average number of basic steps (clock cycles) needed per instruction. Then
T = (N × S) / R ------(1)
• Equation 1 is referred to as the Basic Performance Equation.
To achieve high performance, the computer designer must reduce the value of T, which can be done by reducing N and S, and by increasing R.
The value of N is reduced if the source program is compiled into fewer machine instructions.
The value of S is reduced if instructions have a smaller number of basic steps to perform.
The value of R can be increased by using a higher-frequency clock.
• Care has to be taken while modifying values since changes in one parameter may affect the
other.
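As a quick numeric illustration of Equation 1, the following C sketch computes T for assumed values of N, S and R; the numbers are invented for the example and are not from the text.

#include <stdio.h>

int main(void) {
    double N = 1.0e6;       /* number of machine instructions executed (assumed) */
    double S = 4;           /* average basic steps (clock cycles) per instruction (assumed) */
    double R = 2.0e9;       /* clock rate in Hz, i.e. 2 GHz (assumed) */

    double T = (N * S) / R; /* basic performance equation: T = (N x S) / R */
    printf("Execution time T = %g seconds\n", T);   /* prints 0.002, i.e. 2 ms */
    return 0;
}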
Problem 1:
List the steps needed to execute the machine instruction:
Load R2, LOC
in terms of transfers between the components of processor and some simple control commands. Assume
that the address of the memory-location containing this instruction is initially in register PC. Solution:
1. Transfer the contents of register PC to register MAR.
2. Issue a Read command to memory.
And, then wait until it has transferred the requested word into register MDR.
3. Transfer the instruction from MDR into IR and decode it.
4. Transfer the address LOC from IR to MAR.
5. Issue a Read command and wait until MDR is loaded.
6. Transfer the contents of MDR into register R2.
7. Transfer contents of PC to the ALU.
8. Add 1 to the operand in the ALU and transfer the incremented address back to PC.
Problem 2:
List the steps needed to execute the machine instruction:
Add R4, R2, R3
in terms of transfers between the components of processor and some simple control commands. Assume
that the address of the memory-location containing this instruction is initially in register PC. Solution:
1. Transfer the contents of register PC to register MAR.
2. Issue a Read command to memory.
And, then wait until it has transferred the requested word into register MDR.
3. Transfer the instruction from MDR into IR and decode it.
4. Transfer contents of R2 and R3 to the ALU.
5. Perform addition of the two operands in the ALU and transfer the answer into R4.
6. Transfer contents of PC to ALU.
7. Add 1 to operand in ALU and transfer incremented address to PC.
Problem 3:
(a) Give a short sequence of machine instructions for the task “Add the contents of memory-location A
to those of location B, and place the answer in location C”. Instructions:
Load Ri, LOC and
Store Ri, LOC
are the only instructions available to transfer data between memory and the general purpose registers.
Add instructions are described in Section 1.3. Do not change contents of either location A or B.
(b) Suppose that Move and Add instructions are available with the formats:
Move Location1, Location2 and
Add Location1, Location2
These instructions move or add a copy of the operand at the second location to the first location,
overwriting the original operand at the first location. Either or both of the operands can be in the memory
or the general-purpose registers. Is it possible to use fewer instructions of these types to accomplish the
task in part (a)? If yes, give the sequence.
Solution:
(a)
Load A, R0
Load B, R1
Add R0, R1
Store R1, C
(b) Yes;
Move B, C
Add A, C
Problem 4:
A program contains 1000 instructions. Of these, 25% of the instructions require 4 clock cycles, 40%
require 5 clock cycles, and the remaining require 3 clock cycles for execution. Find the total time
required to execute the program on a 1 GHz machine.
Solution:
N = 1000
25% of N = 250 instructions require 4 clock cycles.
40% of N = 400 instructions require 5 clock cycles.
35% of N = 350 instructions require 3 clock cycles.
T = (250×4 + 400×5 + 350×3)/(1×10^9) = (1000 + 2000 + 1050)/10^9 = 4.05 μs.
Problem 5:
For the following processor, obtain the performance.
Clock rate = 800 MHz
No. of instructions executed = 1000
Average no of steps needed / machine instruction = 20
Solution:
Using the basic performance equation, T = (N × S)/R = (1000 × 20)/(800 × 10^6) = 25 μs.
Problem 6
(a) Suppose that execution time for a program is proportional to instruction fetch time. Assume that
fetching an instruction from the cache takes 1 time unit, but fetching it from the main-memory takes 10
time units. Also, assume that a requested instruction is found in the cache with probability 0.96. Finally,
assume that if an instruction is not found in the cache it must first be fetched from the main- memory into
the cache and then fetched from the cache to be executed. Compute the ratio of program execution time
without the cache to program execution time with the cache. This ratio is called the speedup resulting from
the presence of the cache.
(b) If the size of the cache is doubled, assume that the probability of not finding a requested
instruction there is cut in half. Repeat part (a) for a doubled cache size.
Solution:
(a) Let the cache access time be 1 time unit and the main-memory access time be 10 time units, as given. Every instruction that is executed
must be fetched from the cache, and an additional fetch from the main-memory must be performed for 4%
of these cache accesses.
Therefore,
Execution time without the cache ∝ 10
Execution time with the cache ∝ 0.96 × 1 + 0.04 × (10 + 1) = 1.4
Speedup = 10/1.4 ≈ 7.1
(b) With the doubled cache, the miss probability is halved to 0.02:
Execution time with the cache ∝ 0.98 × 1 + 0.02 × (10 + 1) = 1.2
Speedup = 10/1.2 ≈ 8.3
NUMBER REPRESENTATION:
We obviously need to represent both positive and negative numbers. Three systems are used
for representing such numbers :
•Sign-and-magnitude
•1's-complement
•2's-complement
In all three systems, the leftmost bit is 0 for positive numbers and 1 for negative numbers. Fig 2.1
illustrates all three representations using 4-bit numbers. Positive values have identical representations in all
systems, but negative values have different representations. In the sign-and-magnitude system, negative
values are represented by changing the most significant bit (b3 in Figure 2.1) from 0 to 1 in the B vector of
the corresponding positive value. For example, +5 is represented by 0101, and -5 is represented by 1101. In
1's-complement representation, negative values are obtained by complementing each bit of the
corresponding positive number. Thus, the representation for -3 is obtained by complementing each bit in
the vector 0011 to yield 1100. Clearly, the same operation, bit complementing, is done in converting a
negative number to the corresponding positive value. Converting either way is referred to as forming the
1's-complement of a given number. Finally, in the 2's-complement system, forming the 2's-complement of
an n-bit number is done by subtracting that number from 2^n.
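A small C sketch of the three representations for -5 in 4 bits, applying the rules just described; the bit-manipulation details (masking to 4 bits) are only for illustration.

#include <stdio.h>

/* Print the low 4 bits of v as a binary string. */
static void print4(const char *label, unsigned v) {
    printf("%-20s %u%u%u%u\n", label,
           (v >> 3) & 1, (v >> 2) & 1, (v >> 1) & 1, v & 1);
}

int main(void) {
    unsigned magnitude = 5;                        /* representing -5 in 4 bits */
    unsigned sign_mag  = 0x8 | magnitude;          /* sign bit = 1, magnitude unchanged */
    unsigned ones_comp = (~magnitude) & 0xF;       /* complement every bit of +5 */
    unsigned twos_comp = (ones_comp + 1) & 0xF;    /* equivalently, 2^4 - 5 */

    print4("sign-and-magnitude:", sign_mag);       /* 1101 */
    print4("1's-complement:",     ones_comp);      /* 1010 */
    print4("2's-complement:",     twos_comp);      /* 1011 */
    return 0;
}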
0 + 0 = 0     1 + 0 = 1     0 + 1 = 1     1 + 1 = 10 (carry-out of 1)
Figure 2.2 Addition of 1-bit numbers.
We introduced three systems for representing positive and negative numbers, or, simply, signed
numbers. These systems differ only in the way they represent negative values. Their relative merits from
the standpoint of ease of performing arithmetic operations can be summarized as follows. The sign-and-
magnitude system is the simplest representation, but it is also the most awkward for addition and
subtraction operations. The 1’s-complement method is somewhat better. The 2’s-complement system is the
most efficient method for performing addition and subtraction operations.
To understand 2’s-complement arithmetic, consider addition modulo N (abbreviated as mod N). A
helpful graphical device for the description of addition of unsigned integers mod N is a circle with the
values 0 through N − 1 marked along its perimeter. Consider the case N = 16, shown in part (b) of the figure.
The decimal values 0 through 15 are represented by their 4-bit binary values 0000 through 1111 around the
outside of the circle. In terms of decimal values, the operation (7 + 5) mod 16 yields the value 12. To
perform this operation graphically, locate 7 (0111) on the outside of the circle and then move 5 units in the
clockwise direction to arrive at the answer 12 (1100).
Similarly, (9 + 14) mod 16 = 7; this is modeled on the circle by locating 9 (1001) and
moving 14 units in the clockwise direction past the zero position to arrive at the answer
7 (0111). This graphical technique works for the computation of (a + b) mod 16 for any
unsigned integers a and b; that is, to perform addition, locate a and move b units in the
clockwise direction to arrive at (a + b) mod 16.
Now consider a different interpretation of the mod 16 circle. We will reinterpret the
binary vectors outside the circle to represent the signed integers from −8 through +7 in the
2’s-complement representation, as shown inside the circle.
Let us apply the mod 16 addition techniques to the example of adding +7 to -3.
The 2’s-complement representation for these numbers is 0111 and 1101, respectively. To add
these numbers, locate 0111 on the circle in Figure 1.5b. Then move 1101 (13) steps in the
clockwise direction to arrive at 0100, which yields the correct answer of +4. Note that the
2’s-complement representation of −3 is interpreted as an unsigned value (13) for the number of
steps to move.
If we perform this addition by adding bit pairs from right to left, we obtain
0111
+ 1101
1 0100
↑Carry-out
If we ignore the carry-out from the fourth bit position in this addition, we obtain the correct
answer. In fact, this is always the case. Ignoring this carry-out is a natural result of using
mod N arithmetic. As we move around the circle in Figure 1.5b, the value next to 1111
would normally be 10000. Instead, we go back to the value 0000.
The rules governing addition and subtraction of n-bit signed numbers using the 2’s-complement
representation system may be stated as follows:
• To add two numbers, add their n-bit representations, ignoring the carry-out bit from
the most significant bit (MSB) position. The sum will be the algebraically correct value in
2’s-complement representation if the actual result is in the range −2^(n−1) through +2^(n−1) − 1.
• To subtract two numbers X and Y, that is, to perform X − Y, form the 2’s-complement
of Y, then add it to X using the add rule. Again, the result will be the algebraically correct
value in 2’s-complement representation if the actual result is in the range −2^(n−1) through
+2^(n−1) − 1.
Using 2’s-complement representation, n bits can represent values in the range −2^(n−1) to +2^(n−1) − 1.
For example, the range of numbers that can be represented by 4 bits is −8 through +7. When the actual
result of an arithmetic operation is outside the representable range, an arithmetic overflow has occurred.
When adding unsigned numbers, a carry-out of 1 from the most significant bit position
indicates that an overflow has occurred. However, this is not always true when adding signed
numbers. For example, using 2’s-complement representation for 4-bit signed numbers, if
we add +7 and +4, the sum vector is 1011, which is the representation for −5, an incorrect
result. In this case, the carry-out bit from the MSB position is 0. If we add −4 and −6, we get
0110 = +6, also an incorrect result. In this case, the carry-out bit is 1.
Hence, the value of the carry-out bit from the sign-bit position is not an indicator of overflow.
Clearly, overflow may occur only if both summands have the same sign. The addition of
numbers with different signs cannot cause overflow because the result is always within the
representable range.
These observations lead to the following way to detect overflow when adding two
numbers in 2’s-complement representation. Examine the signs of the two summands and
the sign of the result. When both summands have the same sign, an overflow has occurred
when the sign of the sum is not the same as the signs of the summands.
When subtracting two numbers, the testing method needed for detecting overflow has
to be modified somewhat; but it is still quite straightforward.
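A minimal C sketch of the sign-based overflow test described above, applied to 4-bit 2's-complement addition; the 4-bit width and the example values are chosen only to match the text.

#include <stdio.h>

/* Add two 4-bit 2's-complement bit patterns and report overflow.
 * Overflow occurs only when both summands have the same sign
 * and the sign of the sum differs from that common sign. */
static int add4(int a, int b, int *overflow) {
    unsigned sum = ((unsigned)a + (unsigned)b) & 0xF;   /* keep 4 bits, drop carry-out */
    int sa = (a >> 3) & 1, sb = (b >> 3) & 1, ss = (sum >> 3) & 1;
    *overflow = (sa == sb) && (ss != sa);
    return (int)sum;
}

int main(void) {
    int ovf;
    int s = add4(0x7, 0x4, &ovf);    /* +7 + +4 = 1011: overflow */
    printf("0111 + 0100 = %X, overflow = %d\n", (unsigned)s, ovf);
    s = add4(0x7, 0xD, &ovf);        /* +7 + (-3) = 0100: no overflow */
    printf("0111 + 1101 = %X, overflow = %d\n", (unsigned)s, ovf);
    return 0;
}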
The descriptions provided here are based on the 2008 version of IEEE (Institute of Electrical and
Electronics Engineers) Standard 754, labeled 754-2008 [4].
A binary floating-point number can be represented by
A sign for the number
Some significant bits
A signed scale factor exponent for an implied base of 2
The basic IEEE format is a 32-bit representation, shown in Figure 9.26a. The leftmost bit represents the
sign, S, for the number. The next 8 bits, E′, represent the stored exponent of the scale factor (with an
implied base of 2), and the remaining 23 bits, M, are the fractional part of the significant bits. The full 24-
bit string, B, of significant bits, called the mantissa, always has a leading 1, with the binary point
immediately to its right. Therefore, the mantissa has the form 1.M, and the value represented is ±1.M × 2^(E′−127).
By convention, when the binary point is placed to the right of the first significant bit, the number is said
to be normalized. Note that the base, 2, of the scale factor and the leading 1 of the mantissa are both fixed.
They do not need to appear explicitly in the representation. Instead of the actual signed exponent, E, the
value stored in the exponent field is an unsigned integer E′ = E + 127. This is called the excess-127 format.
Thus, E′ is in the range 0 ≤ E′ ≤ 255. The end values of this range, 0 and 255, are used to represent special
values, as described later. Therefore, the range of E′ for normal values is 1 ≤ E′ ≤ 254. This means that the
actual exponent, E, is in the range −126 ≤ E ≤ 127. The use of the excess-127 representation for exponents
simplifies comparison of the relative sizes of two floating-point numbers. (See Problem 9.23.) The 32-bit
standard representation in Figure 9.26a is called a single-precision representation because it occupies a
single 32-bit word. The scale factor has a range of 2^−126 to 2^+127, which is approximately equal to 10^±38.
The 24-bit mantissa provides approximately the same precision as a 7-digit decimal value. An example of a
single-precision floating-point number is shown in Figure 9.26b. To provide more precision and range for
floating-point numbers, the IEEE standard also specifies a double-precision format, as shown in Figure
9.26c. The double-precision format has increased exponent and mantissa ranges. The 11-bit excess-1023
exponent E′ has the range 1 ≤ E′ ≤ 2046 for normal values, with 0 and 2047 used to indicate special values,
as before. Thus, the actual exponent E is in the range −1022 ≤ E ≤ 1023, providing scale factors of 2^−1022
to 2^1023 (approximately 10^±308). The 53-bit mantissa provides a precision equivalent to about 16 decimal
digits. A computer must provide at least single-precision representation to conform to the IEEE standard.
Double-precision representation is optional. The standard also specifies certain optional extended versions
of both of these formats. The extended versions provide increased precision and increased exponent range
for the representation of intermediate values in a sequence of calculations. The use of extended formats
helps to reduce the size of the accumulated round-off error in a sequence of calculations leading to a
desired result. For example, the dot product of two vectors of numbers involves accumulating a sum of
products. The input vector components are given in a standard precision, either single or double, and the
final answer (the dot product) is truncated to the same precision. All intermediate calculations should be
done using extended precision to limit accumulation of errors. Extended formats also enhance the accuracy
of evaluation of elementary functions such as sine, cosine, and so on. This is because they are usually
evaluated by adding up a number of terms in a series representation. In addition to requiring the four basic
arithmetic operations, the standard requires three additional operations to be provided: remainder, square
root, and conversion between binary and decimal representations.
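The field layout described above can be inspected directly in C by copying a float's bit pattern into an integer; this sketch assumes the machine's float type is an IEEE 754 single-precision value, which holds on most current systems.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float x = -6.5f;                 /* example value: -1.625 x 2^2 */
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);  /* reinterpret the float's bit pattern */

    uint32_t sign     = bits >> 31;          /* 1-bit sign S */
    uint32_t exp_e    = (bits >> 23) & 0xFF; /* 8-bit stored exponent E' = E + 127 */
    uint32_t mantissa = bits & 0x7FFFFF;     /* 23-bit fractional part M */

    printf("sign = %u, E' = %u (actual exponent E = %d), M = 0x%06X\n",
           (unsigned)sign, (unsigned)exp_e, (int)exp_e - 127, (unsigned)mantissa);
    return 0;
}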
We note two basic aspects of operating with floating-point numbers. First, if a number is not
normalized, it can be put in normalized form by shifting the binary point and adjusting the exponent.
Figure 9.27 shows an unnormalized value, 0.0010110... × 2^9, and its normalized version, 1.0110... × 2^6.
Since the scale factor is in the form 2^i, shifting the mantissa right or left by one bit position is compensated
by an increase or a decrease of 1 in the exponent, respectively. Second, as computations proceed, a number
that does not fall in the representable range of normal numbers might be generated. In single precision, this
means that its normalized representation requires an exponent less than −126 or greater than +127. In the
first case, we say that underflow has occurred, and in the second case, we say that overflow has occurred.
Special Values:
The end values 0 and 255 of the excess-127 exponent E′ are used to represent special values. When E′ = 0
and the mantissa fraction M is zero, the value 0 is represented. When E′ = 255 and M = 0, the value ∞ is
represented, where ∞ is the result of dividing a normal number by zero. The sign bit is still used in these
representations, so there are representations for ±0 and ±∞. When E′ = 0 and M ≠ 0, denormal numbers are
represented. Their value is ±0.M × 2^−126. Therefore, they are smaller than the smallest normal number.
There is no implied one to the left of the binary point, and M is any nonzero 23-bit fraction. The purpose of
introducing denormal numbers is to allow for gradual underflow, providing an extension of the range of
normal representable numbers. This is useful in dealing with very small numbers, which may be needed in
certain situations. When E′ = 255 and M ≠ 0, the value represented is called Not a Number (NaN). A NaN
represents the result of performing an invalid operation such as 0/0 or √−1.
Exceptions :
In conforming to the IEEE Standard, a processor must set exception flags if any of the following
conditions arise when performing operations: underflow, overflow, divide by zero, inexact, invalid. We
have already mentioned the first three. Inexact is the name for a result that requires rounding in order to be
represented in one of the normal formats. An invalid exception occurs if operations such as 0/0 or √−1 are
attempted. When an exception occurs, the result is set to one of the special values. If interrupts are enabled
for any of the exception flags, system or user-defined routines are entered when the associated exception
occurs. Alternatively, the application program can test for the occurrence of exceptions, as necessary, and
decide how to proceed.
BYTE-ADDRESSABILITY
• In byte-addressable memory, successive addresses refer to successive byte locations in the
memory.
• Byte locations have addresses 0, 1, 2, …
• If the word-length is 32 bits, successive words are located at addresses 0, 4, 8, …, with each word
having 4 bytes.
Consider a 32-bit integer (in hex): 0x12345678 which consists of 4 bytes: 12, 34, 56, and 78.
Hence this integer will occupy 4 bytes in memory.
Assume, we store it at memory address starting 1000.
On a little-endian machine, memory will look like:
Address Value
1000 78
1001 56
1002 34
1003 12
On a big-endian machine, memory will look like:
Address Value
1000 12
1001 34
1002 56
1003 78
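The two byte orderings can be observed directly in C by storing the integer 0x12345678 and examining its bytes in memory; which ordering is printed depends on the machine this sketch runs on.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t value = 0x12345678;
    unsigned char *p = (unsigned char *)&value;   /* view the same 4 bytes of memory */

    for (int i = 0; i < 4; i++)
        printf("address +%d : %02X\n", i, p[i]);
    /* Little-endian machines print 78 56 34 12; big-endian machines print 12 34 56 78. */
    return 0;
}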
WORD ALIGNMENT
• Words are said to be Aligned in memory if they begin at a byte-address that is a multiple of the
number of bytes in a word.
• For example,
If the word length is 16 (2 bytes), aligned words begin at byte-addresses 0, 2, 4, …
If the word length is 64 (8 bytes), aligned words begin at byte-addresses 0, 8, 16, …
• Words are said to have Unaligned Addresses, if they begin at an arbitrary byte-address.
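Alignment is simply a divisibility test on the byte-address; a minimal C sketch using the word sizes from the examples above.

#include <stdio.h>
#include <stdint.h>

/* A byte-address is aligned for a given word size if it is a multiple of that size. */
static int is_aligned(uintptr_t address, unsigned word_size) {
    return (address % word_size) == 0;
}

int main(void) {
    printf("%d\n", is_aligned(1000, 4));  /* 1: 1000 is a multiple of 4 */
    printf("%d\n", is_aligned(1002, 4));  /* 0: unaligned for 32-bit words */
    printf("%d\n", is_aligned(1002, 2));  /* 1: aligned for 16-bit words */
    return 0;
}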
MEMORY OPERATIONS
• Two memory operations are:
1) Load (Read/Fetch) &
2) Store (Write).
• The Load operation transfers a copy of the contents of a specific memory-location to the
processor. The memory contents remain unchanged.
• Steps for Load operation:
1) Processor sends the address of the desired location to the memory and requests a Read operation.
2) Memory reads the data stored at that address and sends them to the processor.
Access to data in the registers is much faster than to data stored in memory-locations.
• Let Ri represent a general-purpose register. The instructions:
Load A,Ri
Store Ri,A
Add A,Ri
are generalizations of the Load, Store and Add Instructions for the single-accumulator case, in which
register Ri performs the function of the accumulator.
• In processors where arithmetic operations are allowed only on operands that are in registers, the
task C ← [A] + [B] can be performed by the instruction sequence:
Move A,Ri
Move B,Rj
Add Ri,Rj
Move Rj,C
Program Explanation
• Consider the program for adding a list of n numbers (Figure 2.9).
• The Address of the memory-locations containing the n numbers are symbolically given as
NUM1, NUM2…..NUMn.
• Separate Add instruction is used to add each number to the contents of register R0.
• After all the numbers have been added, the result is placed in memory-location SUM.
BRANCHING
• Consider the task of adding a list of n numbers (Figure 2.10).
• The number of entries in the list, n, is stored in memory-location N.
• Register R1 is used as a counter to determine the number of times the loop is executed.
• The contents of memory-location N are loaded into register R1 at the beginning of the program.
• The Loop is a straight-line sequence of instructions executed as many times as needed. The loop
starts at location LOOP and ends at the instruction Branch>0.
• During each pass,
→ address of the next list entry is determined and
→ that entry is fetched and added to R0.
• The instruction Decrement R1 reduces the contents of R1 by 1 each time through the loop.
• Then Branch Instruction loads a new value into the program counter. As a result, the processor
fetches and executes the instruction at this new address called the Branch Target.
A Conditional Branch Instruction causes a branch only if a specified condition is satisfied. If the
condition is not satisfied, the PC is incremented in the normal way, and the next instruction in sequential
address order is fetched and executed.
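The counted loop described above corresponds to the following C sketch, in which sum plays the role of R0 and count plays the role of the counter R1; the data values are invented for the example.

#include <stdio.h>

int main(void) {
    int num[] = {3, 7, 2, 9, 4};              /* NUM1 .. NUMn (example data) */
    int n = 5;                                /* value loaded from memory-location N */
    int sum = 0;                              /* role of register R0, cleared to 0 */

    for (int count = n; count > 0; count--)   /* Decrement R1; Branch>0 LOOP */
        sum += num[n - count];                /* fetch the next list entry and add it */

    printf("SUM = %d\n", sum);                /* result stored in location SUM */
    return 0;
}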
MODULE 2:
The instruction set of a computer typically provides a number of such methods, called addressing
modes. While the details differ from one computer to another, the underlying concepts are the same.
ADDRESSING MODES:-
In general, a program operates on data that reside in the computer’s memory. These data can be
organized in a variety of ways. If we want to keep track of students’ names, we can write them in a list.
Programmers use organizations called data structures to represent the data used in computations. These
include lists, linked lists, arrays, queues, and so on.
Programs are normally written in a high-level language, which enables the programmer to use
constants, local and global variables, pointers, and arrays. The different ways in which the location of an
operand is specified in an instruction are referred to as addressing modes.
Register Mode
• The operand is the contents of a register.
• The name (or address) of the register is given in the instruction.
• Registers are used as temporary storage locations where the data in a register are accessed.
• For example, the instruction
Move R1, R2 ;Copy content of register R1 into register R2.
Immediate Mode
• The operand is given explicitly in the instruction.
• For example, the instruction
Move #200, R0 ;Place the value 200 in register R0.
• Clearly, the immediate mode is only used to specify the value of a source-operand.
Indirect Mode
• The EA of the operand is the contents of a register(or memory-location).
• The register (or memory-location) that contains the address of an operand is called a Pointer.
• We denote the indirection by placing, in parentheses, either
→ the name of the register or
→ the memory-address given in the instruction.
E.g: Add (R1),R0 ;The operand is in memory. Register R1 gives the effective-address (B) of the
operand. The data is read from location B and added to contents of register R0.
• To execute the Add instruction in fig 2.11 (a), the processor uses the value which is in register
R1, as the EA of the operand.
• It requests a read operation from the memory to read the contents of location B. The value read is
the desired operand, which the processor adds to the contents of register R0.
• Indirect addressing through a memory-location is also possible as shown in fig 2.11(b). In this
case, the processor first reads the contents of memory-location A, then requests a second read operation
using the value B as an address to obtain the operand
Program Explanation
• In above program, Register R2 is used as a pointer to the numbers in the list, and the operands
are accessed indirectly through R2.
• The initialization-section of the program loads the counter-value n from memory-location N into
R1 and uses the immediate addressing-mode to place the address value NUM1, which is the address of the
first number in the list, into R2. Then it clears R0 to 0.
• The first two instructions in the loop implement the unspecified instruction block starting at
LOOP.
• The first time through the loop, the instruction Add (R2), R0 fetches the operand at location
NUM1 and adds it to R0.
• The second Add instruction adds 4 to the contents of the pointer R2, so that it will contain the
address value NUM2 when the above instruction is executed in the second pass through the loop.
Index mode
• The operation is indicated as X(Ri)
where X=the constant value which defines an offset(also called a displacement).
Ri=the name of the index register which contains address of a new location.
• The effective-address of the operand is given by EA=X+[Ri]
• The contents of the index-register are not changed in the process of generating the effective-
address.
• The constant X may be given either
→ as an explicit number or
→ as a symbolic-name representing a numerical value.
• Fig(a) illustrates two ways of using the Index mode. In fig(a), the index register, R1, contains the
address of a memory-location, and the value X defines an offset(also called a displacement) from this
address to the location where the operand is found.
• To find the EA of the operand, e.g. Add 20(R1),R2 (with [R1] = 1000 as in the figure):
EA = 1000 + 20 = 1020
• An alternative use is illustrated in fig(b). Here, the constant X corresponds to a memory address,
and the contents of the index register define the offset to the operand. In either case, the effective-address is
the sum of two values; one is given explicitly in the instruction, and the other is stored in a register.
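In C terms, the Index-mode computation EA = X + [Ri] is ordinary base-plus-offset addressing; the sketch below reuses the numbers from the example (offset 20, base address 1000) against a toy memory array.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t memory[2048] = {0};     /* a toy memory array */
    uintptr_t R1 = 1000;            /* index register holds a base address, as in the example */
    int X = 20;                     /* constant offset given in the instruction */

    uintptr_t EA = X + R1;          /* effective address = X + [R1] = 1020 */
    memory[EA] = 42;                /* Add 20(R1),R2 would fetch its operand from here */

    printf("EA = %lu, operand = %u\n", (unsigned long)EA, (unsigned)memory[EA]);
    return 0;
}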
RELATIVE MODE
• This is similar to index-mode with one difference:
The effective-address is determined using the PC in place of the general purpose register Ri.
• The operation is indicated as X(PC).
• X(PC) denotes an effective-address of the operand which is X locations above or below the
current contents of PC.
• Since the addressed-location is identified "relative" to the PC, the name Relative mode is
associated with this type of addressing.
• This mode is used commonly in conditional branch instructions.
• An instruction such as
Branch > 0 LOOP ;Causes program execution to go to the branch target location identified by
name LOOP if branch condition is satisfied.
ASSEMBLY LANGUAGE
• We generally use symbolic-names to write a program.
• A complete set of symbolic-names and rules for their use constitute an Assembly Language.
• The set of rules for using the mnemonics in the specification of complete instructions and
programs is called the Syntax of the language.
• Programs written in an assembly language can be automatically translated into a sequence of
machine instructions by a program called an Assembler.
• The user program in its original alphanumeric text format is called a Source Program, and the
assembled machine-language program is called an Object Program.
For example:
MOVE R0,SUM ;The term MOVE represents OP code for operation performed by instruction.
ADD #5,R3 ;Adds number 5 to contents of register R3 & puts the result back into registerR3.
ASSEMBLER DIRECTIVES
• Directives are commands given to the assembler concerning the program being
assembled.
These commands are not translated into machine opcode in the object-program.
• EQU informs the assembler about the value of an identifier (Figure: 2.18).
Ex: SUM EQU 200 ;Informs assembler that the name SUM should be replaced by the value 200.
• ORIGIN tells the assembler about the starting-address of memory-area to place the data block.
Ex: ORIGIN 204 ;Instructs assembler to initiate data-block at memory-locations starting from 204.
• DATAWORD directive tells the assembler to load a value into the location.
Ex: N DATAWORD 100 ;Informs the assembler to load data 100 into the memory-location N(204).
An assembly-language statement typically has four fields: Label, Operation, Operands, and Comment.
1) Label is an optional name associated with the memory-address where the machine language
instruction produced from the statement will be loaded.
2) Operation Field contains the OP-code mnemonic of the desired instruction or assembler.
3) Operand Field contains addressing information for accessing one or more operands, depending
on the type of instruction.
4) Comment Field is used for documentation purposes to make program easier to understand.
• Assembler Program
→ replaces all symbols denoting operations & addressing-modes with binary-codes used in machine
instructions.
→ replaces all names and labels with their actual values.
→ assigns addresses to instructions & data blocks, starting at address given in ORIGIN directive
→ inserts constants that may be given in DATAWORD directives.
→ reserves memory-space as requested by RESERVE directives.
• Debugger Program is used to help the user find the programming errors.
• Debugger program enables the user
→ to stop execution of the object-program at some points of interest &
→ to examine the contents of various processor-registers and memory-location.
MEMORY-MAPPED I/O
• Some address values are used to refer to peripheral device buffer-registers such as DATAIN &
DATAOUT.
• No special instructions are needed to access the contents of the registers; data can be transferred
between these registers and the processor using instructions such as Move, Load or Store.
• For example, contents of the keyboard character buffer DATAIN can be transferred to register
R1 in the processor by the instruction
MoveByte DATAIN,R1
• The MoveByte operation code signifies that the operand size is a byte.
• The Testbit instruction tests the state of one bit in the destination, where the bit position to be
tested is indicated by the first operand.
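In C, memory-mapped device registers are normally accessed through volatile pointers to fixed addresses. The sketch below is a bare-metal fragment, not meant to run under a hosted operating system, and the addresses are hypothetical, not taken from the text's figures.

#include <stdint.h>

/* Hypothetical buffer-register addresses; real values come from the hardware manual. */
#define DATAIN_ADDR  0x4000u
#define DATAOUT_ADDR 0x4004u

/* volatile tells the compiler that every access really touches the device register. */
#define DATAIN  (*(volatile uint8_t *)DATAIN_ADDR)
#define DATAOUT (*(volatile uint8_t *)DATAOUT_ADDR)

/* Copy one character from the keyboard buffer to the display buffer.
 * This mirrors "MoveByte DATAIN,R1" followed by "MoveByte R1,DATAOUT". */
void echo_one_char(void) {
    uint8_t ch = DATAIN;   /* read keyboard buffer-register */
    DATAOUT = ch;          /* write display buffer-register */
}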
STACKS
• A stack is a special type of data structure where elements are inserted from one end and elements
are deleted from the same end. This end is called the top of the stack (Figure: 2.14).
• The various operations performed on stack:
1) Insert: An element is inserted from top end. Insertion operation is called push operation.
2) Delete: An element is deleted from top end. Deletion operation is called pop operation.
• A processor-register is used to keep track of the address of the element of the stack that is at the
top at any given time. This register is called the Stack Pointer (SP).
• If we assume a byte-addressable memory with a 32-bit word length,
1) The push operation can be implemented as
Subtract #4, SP
Move NEWITEM, (SP)
2) The pop operation can be implemented as
Move (SP), ITEM
Add #4, SP
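The same push and pop sequences can be mirrored in C with an explicit stack pointer; a minimal sketch assuming 4-byte items and a stack that grows toward lower addresses.

#include <stdio.h>
#include <stdint.h>

#define STACK_WORDS 64

static uint32_t memory[STACK_WORDS];              /* toy stack region */
static uint32_t *SP = memory + STACK_WORDS;       /* stack grows toward lower addresses */

static void push(uint32_t newitem) {
    SP -= 1;              /* Subtract #4, SP  (pointer arithmetic steps by one word) */
    *SP = newitem;        /* Move NEWITEM, (SP) */
}

static uint32_t pop(void) {
    uint32_t item = *SP;  /* Move (SP), ITEM */
    SP += 1;              /* Add #4, SP */
    return item;
}

int main(void) {
    push(10);
    push(20);
    printf("%u\n", pop());   /* 20: the last item pushed comes off first */
    printf("%u\n", pop());   /* 10 */
    return 0;
}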
QUEUE
• Data are stored in and retrieved from a queue on a FIFO basis.
• Difference between stack and queue?
1) One end of the stack is fixed while the other end rises and falls as data are pushed and popped.
2) In stack, a single pointer is needed to keep track of top of the stack at any given time.
In queue, two pointers are needed to keep track of both the front and end for removal and insertion
respectively.
3) Without further control, a queue would continuously move through the memory of a computer in
the direction of higher addresses. One way to limit the queue to a fixed region in memory is to use a
circular buffer.
SUBROUTINES
• A subtask consisting of a set of instructions which is executed many times is called a
Subroutine.
• A Call instruction causes a branch to the subroutine (Figure: 2.16).
• At the end of the subroutine, a return instruction is executed
• Program resumes execution at the instruction immediately following the subroutine call
• The way in which a computer makes it possible to call and return from subroutines is referred to
as its Subroutine Linkage method.
• The simplest subroutine linkage method is to save the return-address in a specific location, which
may be a register dedicated to this function. Such a register is called the Link Register.
• When the subroutine completes its task, the Return instruction returns to the calling-program by
branching indirectly through the link-register.
• The Call Instruction is a special branch instruction that performs the following operations:
→ Store the contents of PC into link-register.
→ Branch to the target-address specified by the instruction.
• The Return Instruction is a special branch instruction that performs the operation:
→ Branch to the address contained in the link-register.
PARAMETER PASSING
• The exchange of information between a calling-program and a subroutine is referred to as
Parameter Passing (Figure: 2.25).
• The parameters may be placed in registers or in memory-location, where they can be accessed by
the subroutine.
• Alternatively, parameters may be placed on the processor-stack used for saving the return-
address.
• Following is a program for adding a list of numbers using subroutine with the parameters passed
through registers.
ADDITIONAL INSTRUCTIONS:
LOGIC INSTRUCTIONS
• Logic operations such as AND, OR, and NOT are applied to individual bits.
• These are the basic building blocks of digital circuits.
• It is also useful to be able to perform logic operations in software, which is done using
instructions that apply these operations to all bits of a word or byte independently and in parallel.
• For example, the instruction
Not dst
complements all bits of the destination operand dst.
LOGICAL SHIFTS
• Two logical shift instructions are
1) Shifting left (LShiftL) &
2) Shifting right (LShiftR).
• These instructions shift an operand over a number of bit positions specified in a count operand
contained in the instruction.
ROTATE OPERATIONS
• In shift operations, the bits shifted out of the operand are lost, except for the last bit shifted out
which is retained in the Carry-flag C.
• To preserve all bits, a set of rotate instructions can be used.
• They move the bits that are shifted out of one end of the operand back into the other end.
• Two versions of both the left and right rotate instructions are usually provided. In one version,
the bits of the operand are simply rotated.
In the other version, the rotation includes the C flag.
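C provides logical shift operators but no rotate operator; a rotate can be sketched with two shifts and an OR. The sketch below uses 8-bit operands and assumes a shift count between 1 and 7.

#include <stdio.h>
#include <stdint.h>

static uint8_t lshiftl(uint8_t x, unsigned count) { return (uint8_t)(x << count); }
static uint8_t lshiftr(uint8_t x, unsigned count) { return (uint8_t)(x >> count); }

/* Rotate left: bits shifted out of the MSB end re-enter at the LSB end. */
static uint8_t rotl(uint8_t x, unsigned count) {
    return (uint8_t)((x << count) | (x >> (8 - count)));
}

int main(void) {
    uint8_t v = 0x96;                                  /* 1001 0110 */
    printf("LShiftL #2: %02X\n", lshiftl(v, 2));       /* 58: 0101 1000 */
    printf("LShiftR #2: %02X\n", lshiftr(v, 2));       /* 25: 0010 0101 */
    printf("RotateL #2: %02X\n", rotl(v, 2));          /* 5A: 0101 1010 */
    return 0;
}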
Problem 1:
Write a program that can evaluate the expression A*B+C*D in a single-accumulator processor. Assume
that the processor has Load, Store, Multiply, and Add instructions and that all values fit in the accumulator.
Solution:
A program for the expression is:
Load A
Multiply B
Store RESULT
Load C
Multiply D
Add RESULT
Store RESULT
Problem 2:
Registers R1 and R2 of a computer contain the decimal values 1200 and 4600. What is the effective-address
of the memory operand in each of the following instructions?
(a) Load 20(R1), R5
(b) Move #3000,R5
(c) Store R5,30(R1,R2)
(d) Add -(R2),R5
(e) Subtract (R1)+,R5
Solution:
(a) EA = [R1]+Offset=1200+20 = 1220
(b) EA = 3000
(c) EA = [R1]+[R2]+Offset = 1200+4600+30=5830
(d) EA = [R2]-1 = 4599
(e) EA = [R1] = 1200
Problem 3:
Registers R1 and R2 of a computer contain the decimal values 2900 and 3300. What is the effective-address
of the memory operand in each of the following instructions?
(a) Load R1,55(R2)
(b) Move #2000,R7
(c) Store 95(R1,R2),R5
(d) Add (R1)+,R5
(e) Subtract-(R2),R5
Solution:
a) Load R1,55(R2) This is indexed addressing mode. So EA = 55+R2=55+3300=3355.
b) Move #2000,R7 This is an immediate addressing mode. So, EA = 2000
c) Store 95(R1,R2),R5 This is a variation of the indexed addressing mode, in which the contents of 2
registers are added to the offset or index to generate the EA. So, EA = 95+[R1]+[R2] = 95+2900+3300 = 6295.
d) Add (R1)+,R5 This is Autoincrement mode. The contents of R1 give the EA, so 2900 is the EA.
e) Subtract -(R2),R5 This is Autodecrement mode. Here, R2 is decremented by 4 bytes (assuming a
32-bit processor) to generate the EA, so EA = 3300-4 = 3296.
Problem 4:
Given a binary pattern in some memory-location, is it possible to tell whether this pattern represents a
machine instruction or a number?
Solution:
No; any binary pattern can be interpreted as a number or as an instruction.
Problem 5:
Both of the following statements cause the value 300 to be stored in location 1000, but at different times.
ORIGIN 1000
DATAWORD 300
and
Move #300,1000
Explain the difference.
Solution:
The assembler directives ORIGIN and DATAWORD cause the object program memory image
constructed by the assembler to indicate that 300 is to be placed at memory word location 1000 at the time
the program is loaded into memory prior to execution.
The Move instruction places 300 into memory word location 1000 when the instruction is executed as
part of a program.
Problem 6:
Register R5 is used in a program to point to the top of a stack. Write a sequence of instructions using the
Index, Autoincrement, and Autodecrement addressing modes to perform each of the following tasks:
(a) Pop the top two items off the stack, add them, and then push the result onto the stack.
(b) Copy the fifth item from the top into register R3.
(c) Remove the top ten items from the stack.
Solution:
(a) Move (R5)+,R0
Add (R5)+,R0
Move R0,-(R5)
(b) Move 16(R5),R3
(c) Add #40,R5
Problem 7:
Consider the following possibilities for saving the return address of a subroutine:
(a) In the processor register.
(b) In a memory-location associated with the call, so that a different location is used when the
subroutine is called from different places
(c) On a stack.
Which of these possibilities supports subroutine nesting and which supports subroutine recursion(that is,
a subroutine that calls itself)?
Solution:
(a) Neither nesting nor recursion is supported.
(b) Nesting is supported, because different Call instructions will save the return address at different
memory-locations. Recursion is not supported.
(c) Both nesting and recursion are supported.
ACCESSING I/O-DEVICES
• A single bus-structure can be used for connecting I/O-devices to a computer (Figure 7.1).
• Each I/O device is assigned a unique set of addresses.
• Bus consists of 3 sets of lines to carry address, data & control signals.
• When processor places an address on address-lines, the intended-device responds to the command.
• The processor requests either a read or write-operation.
• The requested-data are transferred over the data-lines.
• There are 2 ways to deal with I/O-devices: 1) Memory-mapped I/O & 2) I/O-mapped I/O.
1) Memory-Mapped I/O
Memory and I/O-devices share a common address-space.
Any data-transfer instruction (like Move, Load) can be used to exchange information.
For example,
Move DATAIN, R0; This instruction sends the contents of location DATAIN to register R0.
Here, DATAIN is the address of the input-buffer of the keyboard.
2) I/O-Mapped I/O
Memory and I/0 address-spaces are different.
Special instructions named IN and OUT are used for data-transfer.
Advantage of separate I/O space: I/O-devices deal with fewer address-lines.
I/O Interface for an Input Device
1) Address Decoder: enables the device to recognize its address when this address
appears on the address-lines (Figure 7.2).
2) Status Register: contains information relevant to operation of I/O-device.
3) Data Register: holds data being transferred to or from processor. There are 2 types:
i) DATAIN Input-buffer associated with keyboard.
ii) DATAOUT Output data buffer of a display/printer.
HANDLING MULTIPLE DEVICES
• While handling multiple devices, the issues concerned are:
1) How can the processor recognize the device requesting an interrupt?
2) How can the processor obtain the starting address of the appropriate ISR?
3) Should a device be allowed to interrupt the processor while another interrupt is being
serviced?
4) How should 2 or more simultaneous interrupt-requests be handled?
POLLING
• Information needed to determine whether device is requesting interrupt is available in status-register
• Following condition-codes are used:
DIRQ Interrupt-request for display.
KIRQ Interrupt-request for keyboard.
KEN keyboard enable.
DEN Display Enable.
SIN, SOUT status flags.
• For an input device, the SIN status flag is used.
SIN = 1 when a character is entered at the keyboard.
SIN = 0 when the character is read by the processor.
IRQ = 1 when a device raises an interrupt-request (Figure 4.3).
• Simplest way to identify interrupting-device is to have ISR poll all devices connected to bus.
• The first device encountered with its IRQ bit set is serviced.
• After servicing first device, next requests may be serviced.
• Advantage: Simple & easy to implement.
Disadvantage: More time spent polling IRQ bits of all devices.
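Polling amounts to repeatedly reading a device status-register until its ready bit is set. The bare-metal C sketch below uses hypothetical register addresses and a hypothetical bit position for SIN; they are not taken from the text's figures.

#include <stdint.h>

/* Hypothetical memory-mapped registers and flag bit for a keyboard interface. */
#define KBD_STATUS (*(volatile uint8_t *)0x4008u)
#define KBD_DATAIN (*(volatile uint8_t *)0x4000u)
#define SIN_BIT    0x01u   /* SIN = 1 when a character has been entered */

/* Busy-wait until SIN is set, then read the character (the hardware clears SIN on read). */
uint8_t read_char_polled(void) {
    while ((KBD_STATUS & SIN_BIT) == 0)
        ;                       /* keep polling the status register */
    return KBD_DATAIN;
}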
VECTORED INTERRUPTS
• A device requesting an interrupt identifies itself by sending a special-code to processor over bus.
• Then, the processor starts executing the ISR.
• The special-code indicates starting-address of ISR.
• The special-code length ranges from 4 to 8 bits.
• The location pointed to by the interrupting-device is used to store the starting address of the ISR.
• The starting address of the ISR is called the interrupt vector.
• Processor
→ loads interrupt-vector into PC &
→ executes appropriate ISR.
• When processor is ready to receive interrupt-vector code, it activates INTA line.
• Then, I/O-device responds by sending its interrupt-vector code & turning off the INTR signal.
• The interrupt vector also includes a new value for the Processor Status Register.
INTERRUPT NESTING
• A multiple-priority scheme is implemented by using separate INTR & INTA lines for each device
• Each INTR line is assigned a different priority-level (Figure 4.7).
• Priority-level of processor is the priority of program that is currently being executed.
• Processor accepts interrupts only from devices that have higher-priority than its own.
• At the time of execution of ISR for some device, priority of processor is raised to that of the device.
• Thus, interrupts from devices at the same level of priority or lower are disabled.
Privileged Instruction
• Processor's priority is encoded in a few bits of PS word. (PS Processor-Status).
• Encoded-bits can be changed by Privileged Instructions that write into PS.
• Privileged-instructions can be executed only while processor is running in Supervisor Mode.
• Processor is in supervisor-mode only when executing operating-system routines.
Privileged Exception
• User program cannot
→ accidently or intentionally change the priority of the processor &
→ disrupt the system-operation.
• An attempt to execute a privileged-instruction while in user-mode leads to a Privileged Exception.
SIMULTANEOUS REQUESTS
• The processor must have some mechanisms to decide which request to service when simultaneous
requests arrive.
• INTR line is common to all devices (Figure 4.8a).
• INTA line is connected in a daisy-chain fashion.
• INTA signal propagates serially through devices.
• When several devices raise an interrupt-request, INTR line is activated.
• Processor responds by setting INTA line to 1. This signal is received by device 1.
• Device-1 passes signal on to device 2 only if it does not require any service.
• If device-1 has a pending-request for interrupt, the device-1
→ blocks INTA signal &
→ proceeds to put its identifying-code on data-lines.
• Device that is electrically closest to processor has highest priority.
• Advantage: It requires fewer wires than the individual connections.
Arrangement of Priority Groups
• Here, the devices are organized in groups & each group is connected at a different priority level.
• Within a group, devices are connected in a daisy chain. (Figure 4.8b).
DIRECT MEMORY ACCESS (DMA)
• The transfer of a block of data directly between an external device & the main-memory without continuous
involvement by the processor is called DMA.
• DMA controller
→ is a control circuit that performs DMA transfers (Figure 8.13).
→ is a part of the I/O device interface.
→ performs the functions that would normally be carried out by processor.
• While a DMA transfer is taking place, the processor can be used to execute another program.
BASIC CONCEPTS
• The maximum size of memory that can be used in any computer is determined by the addressing
scheme, i.e. the number of address bits.
• If the MAR is k bits long, then the memory may contain up to 2^k addressable locations.
• The Sense/Write circuits are connected to the data-input/output lines of the chip.
• Two control inputs are provided:
1) R/W’ (Read/Write’) specifies the required operation (Read or Write).
2) CS’ (Chip Select) selects a given chip in a multi-chip memory-system.
CMOS Cell
• Transistor pairs (T3, T5) and (T4, T6) form the inverters in the latch (Figure 8.5).
• In state 1, the voltage at point X is maintained high by having T3 and T6 ON, while T4 and T5 are OFF.
• Thus, if T1 and T2 are turned ON (closed), bit-lines b and b′ will have high and low signals,
respectively.
• Advantages:
1) It has low power consumption, because current flows in the cell only when the cell is
being accessed.
2) Static RAMs can be accessed quickly. Their access time is a few nanoseconds.
• Disadvantage: SRAMs are said to be volatile memories
because their contents are lost when power is interrupted.
ASYNCHRONOUS DRAM
• Less expensive RAMs can be implemented if simple cells are used.
• Such cells cannot retain their state indefinitely. Hence they are called Dynamic RAM
(DRAM).
• The information is stored in a dynamic memory-cell in the form of a charge on a capacitor.
• This charge can be maintained only for tens of milliseconds.
• The contents must be periodically refreshed by restoring this capacitor charge to its full value.
• In order to store information in the cell, the transistor T is turned ON (Figure 8.6).
• The appropriate voltage is applied to the bit-line, which charges the capacitor.
• After the transistor is turned off, the capacitor begins to discharge.
• Hence, the information stored in the cell can be retrieved correctly only if it is read before the charge
on the capacitor drops below the threshold value.
• During a read-operation,
→ the transistor is turned ON
→ a sense amplifier detects whether the charge on the capacitor is above the threshold
value.
If (charge on capacitor) > (threshold value) → the bit-line will have logic value ‘1’.
If (charge on capacitor) < (threshold value) → the bit-line will be set to logic value
‘0’.
ASYNCHRONOUS DRAM DESCRIPTION
• The 4096 cells in each row are divided into 512 groups of 8 (Figure 5.7).
• A 21-bit address is needed to access a byte in the memory. The 21 bits are divided as follows:
1) 12 address bits are needed to select a row,
i.e. A20-9 → specifies the row-address of a byte.
2) 9 bits are needed to specify a group of 8 bits in the selected row,
i.e. A8-0 → specifies the column-address of a byte.
• During Read/Write-operation,
→ row-address is applied first.
→ row-address is loaded into row-latch in response to a signal pulse on RAS’
input of chip.(RAS = Row-address Strobe CAS = Column-address Strobe)
• When a Read-operation is initiated, all cells on the selected row are read and refreshed.
• Shortly after the row-address is loaded, the column-address is
→ applied to the address pins &
→ loaded into the column-address latch under control of the CAS’ signal.
• The information in the latch is decoded.
• The appropriate group of 8 Sense/Write circuits is selected.
R/W’ = 1 (read-operation) → output values of the selected circuits are transferred to data-lines
D0-D7.
R/W’ = 0 (write-operation) → information on D0-D7 is transferred to the selected
circuits.
RAS’ & CAS’ are active-low, so they cause latching of the address when they change
from high to low.
• To ensure that the contents of DRAMs are maintained, each row of cells is accessed
periodically.
• A special memory-circuit provides the necessary control signals RAS‟ & CAS‟ that govern
the timing.
• The processor must take into account the delay in the response of the memory.
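The row/column address split described above (12-bit row address A20-9, 9-bit column address A8-0) is just a shift-and-mask operation; a small C sketch with an arbitrary example address.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t byte_address = 0x1ABCDE & 0x1FFFFF;   /* any 21-bit address (example value) */

    uint32_t row    = (byte_address >> 9) & 0xFFF; /* A20-9 : 12-bit row address  */
    uint32_t column = byte_address & 0x1FF;        /* A8-0  : 9-bit column address */

    printf("address = 0x%06X -> row = 0x%03X, column = 0x%03X\n",
           (unsigned)byte_address, (unsigned)row, (unsigned)column);
    return 0;
}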
Fast Page Mode
Transferring the bytes in sequential order is achieved by applying a consecutive
sequence of column-addresses under the control of successive CAS’ signals.
This scheme allows transferring a block of data at a faster rate.
This block-transfer capability is called fast page mode.
Both SRAM and DRAM chips are volatile, i.e. they lose the stored information if power is
turned off.
Many applications require non-volatile memory, which retains the stored information if
power is turned off.
For ex:
OS software has to be loaded from disk to memory, i.e. it requires non-volatile memory.
Since the normal operation involves only reading of stored data, a memory of this type is
called ROM.
At logic value ‘0’ → the transistor switch is closed & the voltage on the bit-line drops to nearly zero (Figure 8.11).
At logic value ‘1’ → the transistor switch is open and the bit-line remains at high voltage.
TYPES OF ROM
• Different types of non-volatile memory are
1) PROM
2) EPROM
3) EEPROM &
4) Flash Memory (Flash Cards & Flash Drives)
FLASH MEMORY
• In EEPROM, it is possible to read & write the contents of a single cell.
• In Flash device, it is possible to read contents of a single cell & write entire contents of a
block.
• Prior to writing, the previous contents of the block are erased.
Eg. In MP3 player, the flash memory stores the data that represents sound.
• Single flash chips cannot provide sufficient storage capacity for embedded-system.
• Advantages:
1) Flash drives have greater density which leads to higher capacity & low cost per bit.
2) It requires single power supply voltage & consumes less power.
• There are 2 methods for implementing larger memory: 1) Flash Cards & 2) Flash Drives
1) Flash Cards
One way of constructing a larger module is to mount flash-chips on a small card.
Such flash cards have a standard interface.
The card is simply plugged into a conveniently accessible slot.
Memory-size of the card can be 8, 32 or 64 MB.
Eg: A minute of music can be stored in about 1 MB of memory. Hence a 64 MB flash card can store about an hour of music.
2) Flash Drives
Larger flash memory modules, designed to replace hard disk-drives, are called flash drives.
The flash drives are designed to fully emulate hard disks.
The flash drives are solid-state electronic devices that have no movable parts.
Advantages:
1) They have shorter seek & access times, which results in a faster response.
2) They have low power consumption; therefore they are attractive for battery-driven applications.
3) They are insensitive to vibration.
Disadvantages:
1) The capacity of a flash drive (<1 GB) is smaller than that of a hard disk (>1 GB).
2) It leads to a higher cost per bit.
3) Flash memory will weaken after it has been written a number of times (typically at least 1 million times).
CACHE MEMORIES
• The effectiveness of the cache mechanism is based on the property of 'Locality of Reference'.
Locality of Reference
• Many instructions in localized areas of the program are executed repeatedly during some time period.
• The remainder of the program is accessed relatively infrequently (Figure 8.15).
• There are 2 types:
1) Temporal
The recently executed instructions are likely to be executed again very soon.
2) Spatial
Instructions in close proximity to recently executed instruction are also likely to be
executed soon.
• If active segment of program is placed in cache-memory, then total execution time can be
reduced.
• Block refers to the set of contiguous address locations of some size.
• The cache-line is used to refer to the cache-block.
Write-Back Protocol
This technique is to
→ update only the cache-location &
→ mark the cache-location with associated flag bit called Dirty/Modified Bit.
The word in memory will be updated later, when the marked-block is removed from
cache.
During Read-operation
• If the requested word does not currently exist in the cache, a read-miss occurs.
• To reduce the read-miss penalty, the Load-through/Early-restart protocol is used.
Load-Through Protocol
The block of words that contains the requested word is copied from the memory into the cache.
The requested word is forwarded to the processor as soon as it is read from main-memory, instead of waiting for the entire block to be loaded into the cache.
During Write-operation
• If the requested word does not exist in the cache, a write-miss occurs.
1) If Write Through Protocol is used, the information is written directly into main-
memory.
2) If Write Back Protocol is used,
→ then block containing the addressed word is first brought into the cache &
→ then the desired word in the cache is over-written with the new information.
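The role of the Dirty bit in the write-back protocol can be pictured with a toy model. A minimal Python sketch assuming a one-word cache-block (real blocks hold several words); the class and function names are illustrative only:

    # Minimal sketch of write-back behaviour for a single cache-block.
    class CacheBlock:
        def __init__(self):
            self.valid = False
            self.dirty = False                      # Dirty/Modified bit
            self.tag = None
            self.data = None

    def write_word(block, tag, word, memory):
        if not (block.valid and block.tag == tag):  # write-miss
            if block.valid and block.dirty:
                memory[block.tag] = block.data      # marked block copied back to memory
            block.tag = tag
            block.data = memory.get(tag)            # block first brought into the cache
            block.valid, block.dirty = True, False
        block.data = word                           # only the cache-location is updated ...
        block.dirty = True                          # ... and marked with the dirty bit

    memory = {0x10: 111}
    blk = CacheBlock()
    write_word(blk, 0x10, 222, memory)
    print(memory[0x10], blk.data, blk.dirty)        # 111 222 True (memory is updated later)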
VIRTUAL MEMORY
• It refers to a technique that automatically moves program/data blocks into the main-memory when they are required for execution (Figure 8.24).
• The address generated by the processor is referred to as a virtual/logical address.
• The virtual-address is translated into a physical-address by the MMU (Memory Management Unit).
• During every memory-cycle, the MMU determines whether the addressed word is in main-memory.
If the word is in memory, the word is accessed and execution proceeds.
Otherwise, a page containing the desired word is transferred from disk to memory.
• The transfer of data between disk and memory is performed using the DMA scheme.
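The translation step performed by the MMU on every memory-cycle can be sketched as below. The page size, the page-table layout and all names are assumptions made for illustration, not details from the text:

    # Rough sketch of the MMU's virtual-to-physical translation.
    PAGE_SIZE = 4096                                 # assumed page size

    class PageFault(Exception):
        """Raised when the page holding the desired word is not in main-memory."""

    def translate(virtual_addr, page_table):
        page, offset = divmod(virtual_addr, PAGE_SIZE)
        frame = page_table.get(page)                 # page_table maps virtual page -> memory frame
        if frame is None:
            raise PageFault(page)                    # OS brings the page from disk via DMA
        return frame * PAGE_SIZE + offset            # physical address used to access memory

    print(translate(0x1A2C, {1: 7}))                 # address in page 1 mapped to frame 7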
SECONDARY-STORAGE
• The semiconductor memories alone do not provide all the storage capacity that is needed.
• The secondary-storage devices meet the larger storage requirements.
• Some of the secondary-storage devices are:
1) Magnetic Disk
2) Optical Disk &
3) Magnetic Tapes
MAGNETIC DISK
• A magnetic disk system consists of one or more disks mounted on a common spindle.
• A thin magnetic film is deposited on each disk (Figure 8.27).
• Disk is placed in a rotary-drive so that magnetized surfaces move in close proximity to R/W
heads.
• Each R/W head consists of 1) Magnetic Yoke & 2) Magnetizing-Coil.
• Digital information is stored on magnetic film by applying current pulse to the magnetizing-
coil.
• Only changes in the magnetic field under the head can be sensed during the Read-operation.
• Therefore, if the binary states 0 & 1 are represented by two opposite states of magnetization,
then a voltage is induced in the head only at 0-to-1 and 1-to-0 transitions in the bit stream.
• The number of consecutive 0s or 1s is determined with the help of a clock.
• Manchester Encoding technique is used to combine the clocking information with data.
• R/W heads are maintained at small distance from disk-surfaces in order to achieve high bit
densities.
• When the disk rotates at its steady rate, air pressure develops between the disk-surface & the head. This air pressure forces the head away from the surface.
• A flexible spring connection between the head and its arm mounting permits the head to fly at the desired distance away from the surface.
Winchester Technology
• Disk units in which the Read/Write heads are placed in a sealed, air-filtered enclosure are said to use Winchester technology.
• The Read/Write heads can operate closer to the magnetic track surfaces because
the dust particles, which are a problem in unsealed assemblies, are absent.
Advantages
• It has a larger capacity for a given physical size.
• Data integrity is high because
the storage medium is not exposed to contaminating elements.
• The read/write heads of a disk system are movable.
• The disk system has 3 parts: 1) Disk Platter (Usually called Disk)
2) Disk-drive (spins the disk & moves Read/write heads)
3) Disk Controller (controls the operation of the system.)
Example (typical disk parameters):
Capacity of formatted disk = 20 × 15000 × 400 × 512 = 60 × 10^9 bytes = 60 GB
Seek time = 3 ms
Platter rotation = 10000 rev/min
Latency = 3 ms
Internal transfer rate = 34 MB/s
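Those figures are consistent with the usual formulas. A small Python check; reading the four factors as surfaces × tracks per surface × sectors per track × bytes per sector is an assumption made only for this illustration:

    surfaces, tracks, sectors, bytes_per_sector = 20, 15000, 400, 512
    capacity = surfaces * tracks * sectors * bytes_per_sector
    print(capacity)                          # 61,440,000,000 bytes, i.e. about 60 x 10^9 = 60 GB

    rpm = 10000
    avg_latency_ms = 0.5 * 60000 / rpm       # half a revolution, in milliseconds
    print(avg_latency_ms)                    # 3.0 ms, matching the quoted rotational latency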
DATA BUFFER/CACHE
• A disk-drive that incorporates the required SCSI circuit is referred as SCSI Drive.
• The SCSI can transfer data at higher rate than the disk tracks.
• A data buffer can be used to deal with the possible difference in transfer rate b/w disk and
SCSI bus
• The buffer is a semiconductor memory.
• The buffer can also provide cache mechanism for the disk.
i.e. when a read request arrives at the disk, the controller first checks whether the data is available in the cache/buffer.
If the data is available in the cache, it can be accessed & placed on the SCSI bus immediately.
Otherwise, the data is retrieved from the disk.
DISK CONTROLLER
• The disk controller acts as interface between disk-drive and system-bus (Figure 8.13).
• The disk controller uses DMA scheme to transfer data between disk and memory.
• When the OS initiates a transfer by issuing a Read/Write request, the controller's registers are loaded with the following information:
1) Memory Address: Address of the first memory-location of the block of words involved in the transfer.
2) Disk Address: Location of the sector containing the beginning of the desired block of
words.
3) Word Count: Number of words in the block to be transferred.
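The register contents listed above can be pictured as a simple record handed to the controller before the DMA transfer starts. The structure and the example values below are illustrative only:

    # Illustrative picture of the controller registers loaded for one transfer.
    class DiskControllerRegisters:
        def __init__(self, memory_address, disk_address, word_count):
            self.memory_address = memory_address   # first memory-location of the block
            self.disk_address = disk_address       # sector holding the start of the block
            self.word_count = word_count           # number of words to be transferred

    # e.g. transfer 256 words between sector 1024 and memory address 0x2000 (made-up values)
    regs = DiskControllerRegisters(memory_address=0x2000, disk_address=1024, word_count=256)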
Floppy Disks
The disks discussed above are known as hard or rigid disk units. Floppy disks are
smaller, simpler, and cheaper disk units that consist of a flexible, removable, plastic
diskette
coated with magnetic material. The diskette is enclosed in a plastic jacket, which has an
opening where the read/write head can be positioned. A hole in the center of the diskette
allows a spindle mechanism in the disk drive to position and rotate the diskette.
The main feature of floppy disks is their low cost and shipping convenience. However, they have much smaller storage capacities, longer access times, and higher failure rates than hard disks. In recent years, they have largely been replaced by CDs, DVDs, and flash memory devices.
RAID DISK ARRAYS
Processor speeds have increased dramatically. At the same time, access times to disk drives are still on the order of milliseconds, because of the limitations of the mechanical motion involved. One way to reduce access time is to use multiple disks operating in parallel. In 1988, researchers proposed such a storage system [5]. They called it RAID, for Redundant Array of Inexpensive Disks. (Since all disks are now inexpensive, the acronym was later reinterpreted as Redundant Array of Independent Disks.) Using multiple disks also makes it possible to improve the reliability of the overall system. Different configurations were proposed, and many more have been developed since.
The basic configuration, known as RAID 0, is simple. A single large file is stored in
several separate disk units by dividing the file into a number of smaller pieces and storing
these pieces on different disks. This is called data striping. When the file is accessed for
a Read operation, all disks access their portions of the data in parallel. As a result, the
rate at which the data can be transferred is equal to the data rate of individual disks times
the number of disks. However, access time, that is, the seek and rotational delay needed
to locate the beginning of the data on each disk, is not reduced. Since each disk operates
independently, access times vary. Individual pieces of the data are buffered, so that the
complete file can be reassembled and transferred to the memory as a single entity.
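Data striping in RAID 0 can be sketched as a simple round-robin placement of file pieces on the disks. The stripe size and all names below are assumptions for illustration:

    STRIPE = 64 * 1024                       # assumed stripe size in bytes

    def stripe_file(data, n_disks):
        # Divide the file into pieces and place them on the disks in round-robin order.
        disks = [[] for _ in range(n_disks)]
        for i in range(0, len(data), STRIPE):
            disks[(i // STRIPE) % n_disks].append(data[i:i + STRIPE])
        return disks

    # With all disks reading their pieces in parallel, the transfer rate is roughly the data
    # rate of one disk times the number of disks; seek and rotational delays are not reduced.
    pieces = stripe_file(b"x" * (300 * 1024), n_disks=4)
    print([len(p) for p in pieces])          # pieces held by each disk: [2, 1, 1, 1]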
Various RAID configurations form a hierarchy, with each level in the hierarchy providing additional features. For example, RAID 1 improves reliability by storing identical copies of the data on two disks rather than just one. The two disks are said to be mirrors of each other. If one disk drive fails, all Read and Write operations are directed to its mirror drive. Other levels of the hierarchy achieve increased reliability through various parity-checking schemes, without requiring a full duplication of disks. Some also improve the speed of access. The RAID concept has gained commercial acceptance, and RAID systems are available from many manufacturers.
PERFORMING AN ARITHMETIC OR LOGIC OPERATION
• The ALU performs arithmetic operations on the 2 operands applied to its A and B inputs.
• One of the operands is the output of the MUX;
And, the other operand is obtained directly from the processor-bus.
• The result (produced by the ALU) is stored temporarily in register Z.
• The sequence of operations for [R3] ← [R1] + [R2] is as follows:
1) R1out, Yin
2) R2out, SelectY, Add, Zin
3) Zout, R3in
• Instruction execution proceeds as follows:
Step 1 --> Contents of register R1 are loaded into register Y.
Step 2 --> Contents of Y and of register R2 are applied to the A and B inputs of the ALU;
Addition is performed &
Result is stored in the Z register.
Step 3 --> The contents of the Z register are stored in the R3 register.
• The signals are activated for the duration of the clock cycle corresponding to that step. All other
signals are inactive.
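The three control steps above can be mimicked with a toy register-transfer model. A minimal Python sketch of the single-bus datapath; the variable names stand in for the buses and registers and are not the figure's exact signal set:

    # Toy single-bus model of the sequence for [R3] <- [R1] + [R2].
    regs = {"R1": 5, "R2": 7, "R3": 0, "Y": 0, "Z": 0}

    bus = regs["R1"]; regs["Y"] = bus        # Step 1: R1out, Yin
    bus = regs["R2"]                         # Step 2: R2out, SelectY, Add, Zin
    regs["Z"] = regs["Y"] + bus              #         ALU adds Y (through the MUX) and the bus
    bus = regs["Z"]; regs["R3"] = bus        # Step 3: Zout, R3in

    print(regs["R3"])                        # 12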
CONTROL-SIGNALS OF MDR
• The MDR register has 4 control-signals (Figure 7.4):
1) MDRin & MDRout control the connection to the internal processor data bus &
2) MDRinE & MDRoutE control the connection to the memory Data bus.
• MAR register has 2 control-signals.
1) MARin controls the connection to the internal processor address bus &
2) MARout controls the connection to the memory address bus.
FETCHING A WORD FROM MEMORY
• To fetch instruction/data from memory, processor transfers required address to MAR.
At the same time, processor issues Read signal on control-lines of memory-bus.
• When requested-data are received from memory, they are stored in MDR. From MDR, they are
transferred to other registers.
• The response time of each memory access varies (based on cache miss, memory-mapped I/O). To
accommodate this, MFC is used. (MFC Memory Function Completed).
• MFC is a signal sent from addressed-device to the processor. MFC informs the processor that the
requested operation has been completed by addressed-device.
• Consider the instruction Move (R1),R2. The sequence of steps is (Figure 7.5):
1) R1out, MARin, Read ;desired address is loaded into MAR & Read command is issued.
2) MDRinE, WMFC ;load MDR from memory-bus & Wait for MFC response from memory.
3) MDRout, R2in ;load R2 from MDR.
where WMFC = control-signal that causes the processor's control circuitry to wait for the arrival of the MFC signal.
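The WMFC step can be pictured as a wait for the MFC handshake. The polling loop and the names below are an illustrative sketch, not the processor's actual control circuitry:

    # Illustrative picture of fetching a word with a wait for MFC.
    def fetch_word(address, memory, mfc_asserted):
        mar = address                        # R1out, MARin, Read
        while not mfc_asserted():            # WMFC: wait until Memory Function Completed
            pass
        mdr = memory[mar]                    # MDRinE: word captured from the memory bus
        return mdr                           # MDRout, R2in: forwarded over the internal bus

    memory = {0x100: 42}
    print(fetch_word(0x100, memory, mfc_asserted=lambda: True))   # 42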
EXECUTION OF A COMPLETE INSTRUCTION
• Consider the instruction Add (R3),R1 which adds the contents of a memory-location pointed by R3 to
register R1. Executing this instruction requires the following actions:
1) Fetch the instruction.
2) Fetch the first operand.
3) Perform the addition &
4) Load the result into R1.
BRANCHING INSTRUCTIONS
• Control sequence for an unconditional branch instruction is as follows:
MULTIPLE BUS ORGANIZATION
• Disadvantage of Single-bus organization: Only one data-word can be transferred over the bus in a clock cycle. This increases the number of steps required to complete the execution of an instruction.
Solution: To reduce the number of steps, most processors provide multiple internal-paths. Multiple
paths enable several transfers to take place in parallel.
• As shown in fig 7.8, three buses can be used to connect registers and the ALU of the processor.
• All general-purpose registers are grouped into a single block called the Register File.
• Register-file has 3 ports:
1) Two output-ports allow the contents of 2 different registers to be simultaneously placed on
buses A & B.
2) Third input-port allows data on bus C to be loaded into a third register during the same
clock-cycle.
• Buses A and B are used to transfer source-operands to A & B inputs of ALU.
• The result is transferred to destination over bus C.
• Incrementer Unit is used to increment PC by 4.
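As an illustration of the saving, with the three-bus datapath a register-to-register addition such as R6 ← [R4] + [R5] can be completed in a single control step, because the two source-operands and the destination travel over different buses (the register names here are arbitrary):
1) R4outA, R5outB, SelectA, Add, R6in, End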
COMPLETE PROCESSOR
• This has separate processing-units to deal with integer data and floating-point data.
Integer Unit → To process integer data (Figure 7.14).
Floating-Point Unit → To process floating-point data.
• A Data-Cache is inserted between these processing-units & main-memory.
The integer and floating-point units get their data from the data-cache.
• Instruction-Unit fetches instructions
→ from an instruction-cache or
→ from main-memory when the desired instructions are not already in the cache.
• Processor is connected to the system-bus &
hence to the rest of the computer by means of a Bus Interface.
• Using separate caches for instructions & data is common practice in many processors today.
• A processor may include several units of each type to increase the potential for concurrent operations.
• The 80486 processor has a single 8-Kbyte cache for both instructions and data.
Whereas the Pentium processor has two separate 8-Kbyte caches, one for instructions and one for data.
Note:
To execute instructions, the processor must have some means of generating the control-signals. There
are two approaches for this purpose:
1) Hardwired control and 2) Microprogrammed control.
HARDWIRED CONTROL
• Hardwired control is a method of control unit design (Figure 7.11).
• The control-signals are generated by using logic circuits such as gates, flip-flops, decoders etc.
• Decoder/Encoder Block is a combinational-circuit that generates required control-outputs
depending on state of all its inputs.
• Instruction Decoder
It decodes the instruction loaded in the IR.
If IR is an 8-bit register, then the instruction decoder generates 2^8 = 256 output lines, one for each instruction.
It consists of separate output-lines INS1 through INSm, one for each machine instruction.
According to code in the IR, one of the output-lines INS1 through INSm is set to 1, and all
other lines are set to 0.
• Step-Decoder provides a separate signal line for each step in the control sequence.
• Encoder
It gets the input from instruction decoder, step decoder, external inputs and condition codes.
It uses all these inputs to generate individual control-signals: Yin, PCout, Add, End and so on.
For example (Figure 7.12), Zin = T1 + T6·ADD + T4·BR.
This signal is asserted
→ during time-slot T1 for all instructions,
→ during T6 for an Add instruction &
→ during T4 for an unconditional branch instruction.
(A small sketch of this logic is given at the end of this section.)
• When RUN=1, counter is incremented by 1 at the end of every clock cycle.
When RUN=0, counter stops counting.
• After execution of each instruction, end signal is generated. End signal resets step counter.
• Sequence of operations carried out by this machine is determined by wiring of logic circuits, hence
the name “hardwired”.
• Advantage: Can operate at high speed.
• Disadvantages:
1) Since no. of instructions/control-lines is often in hundreds, the complexity of control unit is
very high.
2) It is costly and difficult to design.
3) The control unit is inflexible because it is difficult to change the design.
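The encoder equation Zin = T1 + T6·ADD + T4·BR mentioned above is ordinary combinational logic. A minimal Python sketch in which the step signals and decoded instruction lines are modelled as booleans; the names are illustrative:

    # Sketch of the encoder output Zin = T1 + T6.ADD + T4.BR.
    def zin(T, add_instr, br_instr):
        # T maps step-signal names (T1, T2, ...) to booleans; one is true per clock cycle.
        return T["T1"] or (T["T6"] and add_instr) or (T["T4"] and br_instr)

    steps = {f"T{i}": False for i in range(1, 9)}
    steps["T4"] = True                                   # time-slot T4
    print(zin(steps, add_instr=False, br_instr=True))    # True: Zin asserted for a branch in T4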
HARDWIRED CONTROL VS MICROPROGRAMMED CONTROL
1) Definition
Hardwired → control-signals are generated by using gates, flip-flops, decoders and other digital circuits.
Microprogrammed → control-signals are generated by using a memory called the control store (CS), which contains the control-signals.
2) Speed
Hardwired → fast. Microprogrammed → slow.
3) Control functions
Hardwired → implemented in hardware. Microprogrammed → implemented in software.
4) Flexibility
Hardwired → not flexible; accommodating new system specifications or new instructions requires a redesign.
Microprogrammed → more flexible; new system specifications or new instructions can be accommodated by changing the microprogram.
5) Ability to handle large or complex instruction sets
Hardwired → difficult. Microprogrammed → easier.
6) Ability to support operating systems & diagnostic features
Hardwired → very difficult. Microprogrammed → easy.
7) Design process
Hardwired → complicated. Microprogrammed → orderly and systematic.
8) Applications
Hardwired → mostly RISC microprocessors. Microprogrammed → mainframes, some microprocessors.
9) Instruction-set size
Hardwired → usually under 100 instructions. Microprogrammed → usually over 100 instructions.
10) ROM size
Hardwired → none. Microprogrammed → 2K to 10K of 20-400 bit microinstructions.
11) Chip area efficiency
Hardwired → uses the least area. Microprogrammed → uses more area.
MICROPROGRAMMED CONTROL
• Microprogramming is a method of control unit design (Figure 7.16).
• Control-signals are generated by a program similar to machine language programs.
• Control Word(CW) is a word whose individual bits represent various control-signals (like Add, PCin).
• Each of the control-steps in control sequence of an instruction defines a unique combination of 1s &
0s in CW.
• Individual control-words in microroutine are referred to as microinstructions (Figure 7.15).
• A sequence of CWs corresponding to control-sequence of a machine instruction constitutes the
microroutine.
• The microroutines for all instructions in the instruction-set of a computer are stored in a special
memory called the Control Store (CS).
• Control-unit generates control-signals for any instruction by sequentially reading CWs of
corresponding microroutine from CS.
• µPC is used to read CWs sequentially from CS. (µPC Microprogram Counter).
• Every time new instruction is loaded into IR, o/p of Starting Address Generator is loaded into µPC.
• Then, µPC is automatically incremented by clock;
causing successive microinstructions to be read from CS.
Hence, control-signals are delivered to various parts of processor in correct sequence.
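The sequential reading of control words from the control store can be pictured as below. The store contents, the step numbering and the helper function are invented purely for illustration:

    # Each control word (CW) is modelled as the set of control-signals asserted in that step.
    control_store = {
        0: {"PCout", "MARin", "Read", "Select4", "Add", "Zin"},
        1: {"Zout", "PCin", "Yin", "WMFC"},
        2: {"MDRout", "IRin"},
        # ... microroutines for the individual instructions would follow
    }

    def issue(cw):
        print("asserting:", sorted(cw))      # stand-in for driving the real control lines

    uPC = 0                                  # loaded from the starting-address generator
    for _ in range(3):
        issue(control_store[uPC])            # CW read from the control store
        uPC += 1                             # uPC incremented by the clock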
Advantages
• It simplifies the design of the control unit. Thus it is both cheaper and less error-prone to implement.
• Control functions are implemented in software rather than hardware.
• The design process is orderly and systematic.
• More flexible, can be changed to accommodate new system specifications or to correct the design
errors quickly and cheaply.
• Complex function such as floating point arithmetic can be realized efficiently.
Disadvantages
• A microprogrammed control unit is somewhat slower than a hardwired control unit, because time is required to access the microinstructions from the control store (CS).
• The flexibility is achieved at some extra hardware cost due to the control memory and its access
circuitry.
ORGANIZATION OF MICROPROGRAMMED CONTROL UNIT TO SUPPORT CONDITIONAL
BRANCHING
• Drawback of previous Microprogram control:
It cannot handle the situation when the control unit is required to check the status of the
condition codes or external inputs to choose between alternative courses of action.
Solution:
Use conditional branch microinstruction.
• In case of conditional branching, microinstructions specify which of the external inputs or condition-codes should be checked as a condition for the branch to take place.
• Starting and Branch Address Generator Block loads a new address into µPC when a
microinstruction instructs it to do so (Figure 7.18).
• To allow implementation of a conditional branch, inputs to this block consist of
→ external inputs and condition-codes &
→ contents of IR.
• µPC is incremented every time a new microinstruction is fetched from microprogram memory except
in following situations:
1) When a new instruction is loaded into IR, µPC is loaded with starting-address of microroutine
for that instruction.
2) When a Branch microinstruction is encountered and branch condition is satisfied, µPC is
loaded with branch-address.
3) When an End microinstruction is encountered, µPC is loaded with address of first CW in
microroutine for instruction fetch cycle.
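The three situations listed above amount to a small update rule for µPC. A minimal Python sketch; the microinstruction fields and function names are invented for illustration:

    from collections import namedtuple

    # Invented microinstruction format: kind is "normal", "branch" or "end".
    Micro = namedtuple("Micro", "kind branch_addr")

    def next_uPC(uPC, micro, condition_true, new_instruction, start_addr, fetch_start=0):
        if new_instruction:                        # 1) new instruction loaded into IR
            return start_addr                      #    starting-address of its microroutine
        if micro.kind == "branch" and condition_true:
            return micro.branch_addr               # 2) branch condition satisfied
        if micro.kind == "end":
            return fetch_start                     # 3) back to the instruction-fetch microroutine
        return uPC + 1                             # otherwise, next CW in sequence

    print(next_uPC(7, Micro("branch", 20), condition_true=True, new_instruction=False, start_addr=None))   # 20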